https://www.physicsforums.com/threads/parametric-description-of-a-plane.752727/
Parametric Description of a Plane
1. May 7, 2014
mill
I read the definition that a plane is a point and two vectors with the equation being plane sum = {OP + tv + sw} where v and w are vectors and t and s are real numbers. This is called the parametric description of the plane. I haven't seen the equation in this form before though.
Can someone explain what these values stand for/how to use this equation or direct me to a page that explains it? I just see the regular plane equation when I google this.
2. May 7, 2014
Simon Bridge
You are aware that a plane can be defined by three points that are not co-linear?
This form is just the same, using pairs of those points to form the two vectors.
Any two vectors $\vec v$ and $\vec w$ must lie in a common plane.
In fact, they can define a set of parallel planes.
The parametric equation is just the instructions to get to another point in the plane starting from where you are at.
i.e. If point $P$ is in the plane defined by the above vectors, then you can get from there to point $Q$, also in the plane, by starting out at $P$ and walking $t$ steps of size $v$ in the direction of $\vec v$, then turning to the direction of $\vec w$ and walking $s$ steps of size $w$ in that direction.
In maths that is: $Q=P+t\vec v + s\vec w$
Concrete example:$$\begin{pmatrix}x\\y\\z\end{pmatrix}=\begin{pmatrix}1\\-1\\2\end{pmatrix}+t\begin{pmatrix}3\\4\\0\end{pmatrix}+s\begin{pmatrix}-1\\0\\0\end{pmatrix}$$... tells you how to get to point (x,y,z) from point (1,-1,2) using (3,4,0) and (-1,0,0) as cardinal directions.
i.e. you want to get to (x,y)=(8,7), then t=2 and s=-1
so you must travel 10 units along the hypotenuse of the 3-4-5 triangle, then 1 unit parallel to the x-axis.
Notice that the plane in the example is parallel to the cartesian x-y plane, and I have expressed the vectors in cartesian coordinates. I don't have to.
The directions done this way basically translate the (t,s) coordinates for positions on the plane to (x,y,z) cartesian coordinates ...
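The recipe can be checked numerically. Here is a small sketch (mine, not from the thread) using the example's point and direction vectors:

```python
# Point in the plane and the two spanning vectors from the example above
P = (1, -1, 2)
v = (3, 4, 0)
w = (-1, 0, 0)

def point_on_plane(t, s):
    """Return Q = P + t*v + s*w, computed component by component."""
    return tuple(p + t * a + s * b for p, a, b in zip(P, v, w))

print(point_on_plane(2, -1))  # -> (8, 7, 2)
print(point_on_plane(0, 0))   # -> (1, -1, 2), i.e. the base point P
```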
Last edited: May 7, 2014
3. May 7, 2014
mill
Thanks. That cleared it up. I had never thought of Q like that, but it really helped.
4. May 8, 2014
Simon Bridge
No worries.
Similarly the parametric equation for a line is $Q=P+s\vec v$ and for a 3D volume you need three parameters.
It gets more fun when you use surfaces instead of planes - those are allowed to curve.
http://mathhelpforum.com/calculus/205564-show-verticle-turning-point-sign-d2x-dy2-sign-d2x-dytheta2.html
Math Help - Show that at verticle turning point sign(d2x/dy2)=sign(d2x/dytheta2)
1. Show that at verticle turning point sign(d2x/dy2)=sign(d2x/dytheta2)
The question is on an assignment for polar curves. I am sure that dx/dtheta=0 at the vertical turning point will be used. I'm not sure what sign means; could we just presume it's a coefficient? It's definitely not sin.
So far I have tried d2x/dy2=d/dy(dx/dy)=d/dy(dx/dtheta*dtheta/dy) but I don't know how to go from there.
Any help appreciated.
2. Re: Show that at verticle turning point sign(d2x/dy2)=sign(d2x/dytheta2)
Usually, sign(x) gives you whether it is positive or negative. For example, sign(3) = 1, sign(-2) = -1. The question seems to ask you to show that the two derivative are either both positive or both negative. I hope it makes sense.
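In code, the usual convention can be written as follows (a hypothetical sketch; here sign(0) is taken to be 0):

```python
def sign(x):
    """Return 1 if x is positive, -1 if negative, 0 if zero."""
    return (x > 0) - (x < 0)

print(sign(3), sign(-2), sign(0))  # -> 1 -1 0
```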
3. Re: Show that at verticle turning point sign(d2x/dy2)=sign(d2x/dytheta2)
Ah thanks. My tutor spent ages derping around with it and I wasn't paying attention. I remember he had some moment when he realized he was making it much harder than it needed to be. I think he had forgotten the sign part.
https://kimsereylam.com/python/2021/03/19/how-to-use-patch-in-python-unittest.html

# How To Use Patch In Python Unittest
Mar 19th, 2021 - written by Kimserey.
When writing unit tests, we sometimes need to mock functionality in our system. In Python, unittest.mock provides patch functionality to patch module and class attributes. In this post, we will look at examples of how to use patch to test our system in specific scenarios.
## Patch Classes
Patch can be used as a decorator or a context manager. In this post we'll use it as a context manager, which applies the patch within a with block.
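For comparison, here is the decorator form (a small sketch of mine; os.getcwd is patched purely for illustration):

```python
import os
from unittest.mock import patch

# The patch is only active while the decorated function runs;
# patch injects the created MagicMock as the first argument.
@patch("os.getcwd", return_value="/tmp/fake")
def fake_cwd(mock_getcwd):
    return os.getcwd()

print(fake_cwd())  # -> /tmp/fake
print(os.getcwd() == "/tmp/fake")  # -> False, the patch is undone afterwards
```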
We start by creating a dumb class:
```python
# in my_class.py
class MyClass:
    my_attribute = 1

    def __init__(self):
        self.value = 2

    def do_something(self):
        return "hello {}".format(self.value)

def get_value():
    o = MyClass()
    return o.value
```
We created a module my_class with a class MyClass with:
• an instance attribute value,
• a class attribute my_attribute,
• a method do_something.
We also added a method get_value which returns the instance attribute value.
In the following steps we will demonstrate how to patch the instance attribute, the class attribute, and the methods of MyClass.
### Patch Instance Attribute
Starting from the instance attribute being used in get_value, we can patch it by patching my_class.MyClass.
```python
from unittest.mock import patch
from my_class import MyClass, get_value

with patch("my_class.MyClass") as mock:
    mock.return_value.value = "hello"
    print(get_value())
```
The result of patch is a MagicMock which we can use to set the value attribute.
```python
mock.return_value.value = "hello"
```
return_value represents the instance itself (the result of MyClass()), and we mock its value attribute.
The result of print(get_value()) will then be hello rather than 2.
### Patch Class Attribute
For the class attribute, we can use patch.object, which makes things easier as we can directly pass the class reference.
```python
from unittest.mock import patch
from my_class import MyClass

with patch.object(MyClass, "my_attribute", "hello"):
    o = MyClass()
    print(MyClass.my_attribute)
    print(o.my_attribute)
```
The third argument of patch.object is the value of the attribute to be patched.
### Patch Method
Similarly, we can use patch.object to patch a class method.
```python
with patch.object(MyClass, "do_something", return_value="hello"):
    o = MyClass()
    print(o.do_something())
```
Here we use the two-argument signature and specify return_value as a keyword. The difference from the three-argument signature is that using return_value patches a method rather than a class attribute.
While patching methods, we can also access the call arguments using call_args from the patch result.
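As a sketch (the Greeter class here is hypothetical, not from the post):

```python
from unittest.mock import patch, call

# Hypothetical class used only to demonstrate call_args
class Greeter:
    def greet(self, name):
        return "hi {}".format(name)

with patch.object(Greeter, "greet", return_value="hello") as mock:
    g = Greeter()
    result = g.greet("world")

print(result)           # -> hello
print(mock.call_args)   # -> call('world')
print(mock.call_count)  # -> 1
```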
### Patch Chained Methods
Another common scenario is when we have chained methods, for example a call like MyClass().get_client().send():
```python
class Client:
    def send(self):
        return "Sent from client"

class MyClass:
    my_attribute = 20

    def get_client(self):
        return Client()

def send():
    return MyClass().get_client().send()
```
From what we've learnt, we can easily patch Client.send using patch.object:
```python
with patch.object(Client, "send", return_value="hello"):
    print(send())
```
but we can also patch MyClass.get_client and mock the whole chain:
```python
with patch.object(MyClass, "get_client") as mock:
    mock.return_value.send.return_value = "hello"
    print(send())
```
We start by mocking the get_client method; its return_value.send then gives us a mock of the send method, and setting that mock's return_value results in the whole Client.send chain being mocked.
## Patch Module Functions
Lastly we’ll see how we can mock a module function. In this example we have a second module lib which contains a function some_function:
```python
# in lib.py
def some_function():
    return "Test"
```
We import that function in my_class and call it from test:
```python
# in my_class.py
from lib import some_function

def test():
    return some_function()
```
If we want to patch some_function, we can do so with patch:
```python
with patch("my_class.some_function", return_value="hello"):
    print(test())
```
One important point to note is that we have to patch my_class.some_function rather than lib.some_function. This is because some_function is imported into my_class, hence it is that reference that needs to be mocked.
If we need to use arguments to construct the return value, we can also use a lambda:
```python
with patch("my_class.some_function", lambda x: "hello {}".format(x)):
    ...  # calls to some_function(x) now return "hello <x>"
```
And that concludes today’s post!
## Conclusion
In today’s post, we looked at unittest.mock patch functionality. We started by looking at how we could patch a class attribute, an instance attribute and a method. And we completed the post by looking at how we could patch a module. I hope you liked this post and I see you on the next one!
## External Sources
Designed, built and maintained by Kimserey Lam.
https://www2.physics.ox.ac.uk/contacts/people/hesjedal/publications?page=13

# Publications by Thorsten Hesjedal
## Proximity-induced odd-frequency superconductivity in a topological insulator
arxiv (0)
JA Krieger, A Pertsova, SR Giblin, M Döbeli, T Prokscha, CW Schneider, A Suter, T Hesjedal, AV Balatsky, Z Salman
At an interface between a topological insulator (TI) and a conventional superconductor (SC), superconductivity has been predicted to change dramatically and exhibit novel correlations. In particular, the induced superconductivity by an $s$-wave SC in a TI can develop an order parameter with a $p$-wave component. Here we present experimental evidence for an unexpected proximity-induced novel superconducting state in a thin layer of the prototypical TI, Bi$_2$Se$_3$, proximity-coupled to Nb. From depth-resolved magnetic field measurements below the superconducting transition temperature of Nb, we observe a local enhancement of the magnetic field in Bi$_2$Se$_3$ that exceeds the externally applied field, thus supporting the existence of an intrinsic paramagnetic Meissner effect arising from an odd-frequency superconducting state. Our experimental results are complemented by theoretical calculations supporting the appearance of an odd-frequency component at the interface which extends into the TI. This state is topologically distinct from the conventional Bardeen-Cooper-Schrieffer (BCS) state it originates from. To the best of our knowledge, these findings present a first observation of bulk odd-frequency superconductivity in a TI. We thus reaffirm the potential of the TI/SC interface as a versatile platform to produce novel superconducting states.
## Diameter-independent skyrmion Hall angle observed in chiral magnetic multilayers
Nature Communications Nature Research (part of Springer Nature) (0)
K Zeissler, S Finizio, C Barton, A Huxtable, J Massey, J Raabe, A Sadovnikov, S Nikitov, R Brearton, T Hesjedal, G van der Laan, M Rosamond, E Linfield, G Burnell, C Marrows
## Direct observation of the energy gain underpinning ferromagnetic superexchange in the electronic structure of CrGeTe$_3$
arxiv (0)
I Marković, F Mazzola, A Rajan, EA Morales, DM Burn, T Hesjedal, GVD Laan, S Mukherjee, TK Kim, C Bigi, I Vobornik, G Balakrishnan, MC Hatnean, PDC King
We investigate the temperature-dependent electronic structure of the van der Waals ferromagnet, CrGeTe$_3$. Using angle-resolved photoemission spectroscopy, we identify atomic- and orbital-specific band shifts upon cooling through ${T_\mathrm{C}}$. From these, together with x-ray absorption spectroscopy and x-ray magnetic circular dichroism measurements, we identify the states created by a covalent bond between the Te ${5p}$ and the Cr ${e_g}$ orbitals as the primary driver of the ferromagnetic ordering in this system, while it is the Cr ${t_{2g}}$ states that carry the majority of the spin moment. The ${t_{2g}}$ states furthermore exhibit a marked bandwidth increase and a remarkable lifetime enhancement upon entering the ordered phase, pointing to a delicate interplay between localized and itinerant states in this family of layered ferromagnets.
## The topological surface state of $\alpha$-Sn on InSb(001) as studied by photoemission
arxiv Museu de Ciències Naturals de Barcelona (0)
MR Scholz, VA Rogalev, L Dudy, F Reis, F Adler, J Aulbach, LJ Collins-McIntyre, LB Duffy, HF Yang, YL Chen, T Hesjedal, ZK Liu, M Hoesch, S Muff, JH Dil, J Schäfer, R Claessen
We report on the electronic structure of the elemental topological semimetal $\alpha$-Sn on InSb(001). High-resolution angle-resolved photoemission data allow to observe the topological surface state (TSS) that is degenerate with the bulk band structure and show that the former is unaffected by different surface reconstructions. An unintentional $p$-type doping of the as-grown films was compensated by deposition of potassium or tellurium after the growth, thereby shifting the Dirac point of the surface state below the Fermi level. We show that, while having the potential to break time-reversal symmetry, iron impurities with a coverage of up to 0.25 monolayers do not have any further impact on the surface state beyond that of K or Te. Furthermore, we have measured the spin-momentum locking of electrons from the TSS by means of spin-resolved photoemission. Our results show that the spin vector lies fully in-plane, but it also has a finite radial component. Finally, we analyze the decay of photoholes introduced in the photoemission process, and by this gain insight into the many-body interactions in the system. Surprisingly, we extract quasiparticle lifetimes comparable to other topological materials where the TSS is located within a bulk band gap. We argue that the main decay of photoholes is caused by intraband scattering, while scattering into bulk states is suppressed due to different orbital symmetries of bulk and surface states.
## Transverse field muon-spin rotation measurement of the topological anomaly in a thin film of MnSi
arXiv:1511.04972v1 (0)
T Lancaster, F Xiao, Z Salman, IO Thomas, SJ Blundell, FL Pratt, SJ Clark, T Prokscha, A Suter, SL Zhang, AA Baker, T Hesjedal
We present the results of transverse-field muon-spin rotation measurements on an epitaxially grown 40 nm-thick film of MnSi on Si(111) in the region of the field-temperature phase diagram where a skyrmion phase has been observed in the bulk. We identify changes in the quasistatic magnetic field distribution sampled by the muon, along with evidence for magnetic transitions around T ≈ 40 K and 30 K. Our results suggest that the cone phase is not the only magnetic texture realized in film samples for out-of-plane fields.
## Three-dimensional micromagnetic domain structure of MnAs films on GaAs(001): Experimental imaging and simulations
PHYSICAL REVIEW B AMERICAN PHYSICAL SOC 75 (0) 9
R Engel-Herbert, T Hesjedal, DM Schaadt
The micromagnetic domain structure of MnAs films on GaAs(001) has been systematically investigated by micromagnetic imaging and simulations. The magnetic force microscopy (MFM) contrast resulting from the stray field of the simulated three-dimensional domain patterns was calculated and found to be in excellent agreement with MFM experiments. By combining three-dimensional stray-field imaging by MFM with surface sensitive probing and micromagnetic simulations, we were able to derive a consistent picture of the micromagnetic structure of MnAs. For example, the origin of the comblike contrast observed through MFM was identified as a metastable domain configuration exhibiting a cross-tie wall.
## Epitaxial Heusler Alloys on III-V Semiconductors
John Wiley & Sons, Ltd (0)
T Hesjedal, KH Ploog
## Calculation and experimental verification of the acoustic stress at GHz frequencies in SAW resonators
Proc. 33rd European Microwave Conference (0)
F Kubat, W Ruile, T Hesjedal, J Stotz, U Roesler, L Reindl
https://www.aithercfd.com/2016/01/24/integrating-turbulence-part-1.html

## Part 1 - Explicit Formulation
This post will focus on integrating turbulence models into Aither’s explicit solvers. This includes calculation of the residual which is necessary for the implicit solvers. The next post in this series will cover integrating turbulence models into the LU-SGS implicit solver.
## Reynolds-Averaged Navier-Stokes Equations
To solve the Navier-Stokes equations with turbulence models we use the Favre and Reynolds-averaged equations as shown below. The mean flow equations are identical to the Navier-Stokes equations with the exception of the viscosity and thermal conductivity terms. The viscosity $\mu$ is replaced by its sum with the turbulent eddy viscosity $\mu_t$. The thermal conductivity term $\frac{\mu}{Pr(\gamma - 1)}$ is replaced by its sum with the turbulent thermal conductivity $\frac{\mu_t}{Pr_t(\gamma - 1)}$. The turbulence equations themselves only couple to the mean flow equations through the turbulent eddy viscosity.
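In symbols, the two replacements described above are:

$$\mu \rightarrow \mu + \mu_t, \qquad \frac{\mu}{Pr\,(\gamma - 1)} \rightarrow \frac{\mu}{Pr\,(\gamma - 1)} + \frac{\mu_t}{Pr_t\,(\gamma - 1)}$$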
In the above equations the turbulence model shown is a two-equation k-$\omega$ model. The turbulent source terms consist of production, destruction, and cross diffusion terms. These terms are model dependent and their values for the SST model can be seen here.
## Numerical Solution Strategy
We solve the turbulence equations separately from the mean flow equations instead of solving both sets of equations simultaneously. This is done because the coupling between the equation sets is relatively weak, and solving the equations simultaneously requires more work. Solving the equations simultaneously with an implicit method would require a new flux Jacobian to be calculated for each turbulence model in the code. Solving the equations separately allows the same flux Jacobian to be used for all turbulence models.
To fully implement the turbulence models into the Aither code, the inviscid and viscous flux calculations must be extended for the turbulence equations. The code must also calculate the source terms of the turbulence models. The source terms can cause numerical difficulties because they are characteristically stiff. These terms can severely limit the stable time increment for a given time integration scheme. For this reason it is usually only practical to solve these equations with an implicit method. Aither uses the Lower-Upper Symmetric Gauss-Seidel (LU-SGS) implicit solver, so this must be extended for the turbulence equations (covered in the next post).
## Flux Calculations With Turbulence Equations
Aither uses the Roe flux difference splitting method to calculate the inviscid fluxes, and a central difference method to calculate the viscous fluxes. These flux calculations must be extended for use with the turbulence equations.
### Roe Flux With Turbulence Equations
To calculate the inviscid fluxes, the primitive variables are reconstructed at the cell faces. A given face has two adjacent cells, so there will be two separate reconstructed states. These states may not be equal to each other and therefore form a Riemann problem. Roe's approximate Riemann solver is used to determine the inviscid flux at the cell face. The flux is calculated using the convective fluxes from the left and right reconstructed states as well as a dissipation matrix as shown below. In the equations below, variables marked with a ~ indicate Roe-averaged quantities, and the $\Delta$ refers to the right state minus the left state. The dissipation matrix ($D$) is calculated from the left eigenvectors of the Roe matrix ($T$), the eigenvalues of the Roe matrix ($\Lambda$), and the wave strengths ($\Delta C$).
As can be seen from the above equations there is no coupling from the turbulence equations to the mean flow equations in the inviscid flux calculation. However, the mean flow equations have some coupling to the turbulence equations. The above formulation is written in a way that is independent of the face tangent vectors. More information on this formulation and its derivation can be found here.
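A standard Roe-flux form consistent with this description is (a reconstruction, since the exact equation is not rendered here):

$$F_{face} = \frac{1}{2}\left(F_L + F_R\right) - \frac{1}{2}\,T\,|\Lambda|\,\Delta C$$

where $F_L$ and $F_R$ are the convective fluxes evaluated at the left and right reconstructed states.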
### Viscous Flux With Turbulence Equations
The viscous fluxes are calculated in the same way whether turbulence equations are present or not. A central difference is used to reconstruct the state at the cell face. The viscous flux itself is calculated as in $F_v$ above. In order to calculate the viscous flux for the turbulence equations gradients of $k$ and $\omega$ are needed. Therefore the existing gradient calculation methods in the code are extended to calculate these additional gradients. All gradients are calculated using the Green-Gauss method.
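For reference, the standard Green-Gauss gradient approximation for a quantity $\phi$ over a cell of volume $V$ is:

$$\nabla \phi \approx \frac{1}{V} \sum_{f} \phi_f\, \vec{A}_f$$

where the sum runs over the cell faces, $\phi_f$ is the face value, and $\vec{A}_f$ is the outward face area vector.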
## Residual Calculation
The residual calculation with turbulence equations is identical to that without them, with the exception of the source terms. There are no source terms in the mean flow equations, but they are present in the turbulence equations. The source terms are multiplied by the cell volume, not the cell face area. It is important to note that the source terms start out on the right hand side of the equation, opposite the inviscid and viscous fluxes. The equation below shows how the source terms contribute to the residual calculation.
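In symbols, the residual of a cell can be sketched as (reconstructed from the description, not the original rendering):

$$R = \sum_{f} \left( \vec{F}_i - \vec{F}_v \right) \cdot \vec{A}_f - S\,V$$

where the sum runs over the faces of the cell, $\vec{A}_f$ is the face area vector, $S$ is the source-term vector, and $V$ is the cell volume; the minus sign reflects the source terms starting out on the right-hand side.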
With the changes above and the addition of appropriate boundary conditions, the code has been extended to solve the turbulence equations in an explicit manner.
http://mathoverflow.net/questions/160468/constructing-an-interval-exchange-given-a-prescribed-trajectory

# Constructing an interval exchange given a prescribed trajectory
Given a prescribed trajectory, is it possible to construct an interval exchange having this trajectory?
For example, given a 3-letter word (like aaabbbccabcaaa), is it possible to construct a 3-interval exchange with a point having this word as the beginning of its trajectory?
What necessary conditions can be found for a given word to be the trajectory of an IET?
For the relation between coding and interval exchange, see e.g. : http://combinat.sagemath.org/doc/reference/combinat/sage/combinat/iet/tutorial.html#orbit-and-symbolic-coding
The complexity of an infinite sequence $x$ is a sequence $C(n)$, where $C(n)$ is the number of distinct blocks of length $n$ in $x$. For an interval exchange with $k$ symbols, it's not hard to show that $C(n)\le(k-1)n+1$. If your word has more complexity than this, it can never appear as the coding sequence of an IET.
If $E$ is the set of endpoints of intervals, then $|E|=k-1$. $E\cup T^{-1}E\cup \ldots\cup T^{-(n-1)}E$ has cardinality at most $n(k-1)$, and so the complement in [0,1] has at most $1+(k-1)n$ intervals. If two points are not separated by one of these endpoints, they have the same $n$-step coding. – Anthony Quas Mar 17 '14 at 21:47
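The bound above is easy to test mechanically. Here is a small sketch (mine, not from the thread) that checks a finite word against the necessary condition $C(n)\le(k-1)n+1$:

```python
def complexity(word, n):
    """Number of distinct length-n factors (subwords) of word."""
    return len({word[i:i + n] for i in range(len(word) - n + 1)})

def may_be_iet_coding(word, k):
    """Necessary condition for word to begin the coding of a k-IET:
    C(n) <= (k-1)*n + 1 for every n."""
    return all(complexity(word, n) <= (k - 1) * n + 1
               for n in range(1, len(word) + 1))

print(complexity("aaabbbccabcaaa", 2))         # -> 6, exceeding (3-1)*2+1 = 5
print(may_be_iet_coding("aaabbbccabcaaa", 3))  # -> False
print(may_be_iet_coding("abcabcabc", 3))       # -> True
```

Notably, the example word from the question already exceeds the bound at n = 2, so by this criterion it cannot begin the coding of a 3-interval exchange.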
https://math.libretexts.org/Courses/Monroe_Community_College/MTH_098_Elementary_Algebra/2%3A_Solving_Linear_Equations_and_Inequalities/2.1%3A_Solve_Equations_Using_the_Subtraction_and_Addition_Properties_of_Equality/2.1E%3A_Exercises

# 2.1E: Exercises
## Practice Makes Perfect
### Verify a Solution of an Equation
In the following exercises, determine whether the given value is a solution to the equation.
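Candidate solutions can be verified quickly with exact arithmetic; here is a small sketch (not part of the exercise set) using Python's fractions:

```python
from fractions import Fraction

# Exercise 1: is y = 5/3 a solution of 6y + 10 = 12y?
y = Fraction(5, 3)
print(6 * y + 10 == 12 * y)  # -> True, both sides equal 20

# Exercise 3: is u = -1/2 a solution of 8u - 1 = 6u?
u = Fraction(-1, 2)
print(8 * u - 1 == 6 * u)    # -> False, -5 versus -3
```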
##### Exercise $$\PageIndex{1}$$
Is $$y=\frac{5}{3}$$ a solution of
$$6 y+10=12 y ?$$
Yes
##### Exercise $$\PageIndex{2}$$
Is $$x=\frac{9}{4}$$ a solution of
$$4 x+9=8 x ?$$
##### Exercise $$\PageIndex{3}$$
Is $$u=-\frac{1}{2}$$ a solution of
$$8 u-1=6 u ?$$
No
##### Exercise $$\PageIndex{4}$$
Is $$v=-\frac{1}{3}$$ a solution of
$$9 v-2=3 v ?$$
### Solve Equations using the Subtraction and Addition Properties of Equality
In the following exercises, solve each equation using the Subtraction and Addition Properties of Equality.
##### Exercise $$\PageIndex{5}$$
$$x+24=35$$
x = 11
##### Exercise $$\PageIndex{6}$$
$$x+17=22$$
##### Exercise $$\PageIndex{7}$$
$$y+45=-66$$
y = -111
##### Exercise $$\PageIndex{8}$$
$$y+39=-83$$
##### Exercise $$\PageIndex{9}$$
$$b+\frac{1}{4}=\frac{3}{4}$$
$$b = \frac{1}{2}$$
##### Exercise $$\PageIndex{10}$$
$$a+\frac{2}{5}=\frac{4}{5}$$
##### Exercise $$\PageIndex{11}$$
$$p+2.4=-9.3$$
p = -11.7
##### Exercise $$\PageIndex{12}$$
$$m+7.9=11.6$$
##### Exercise $$\PageIndex{13}$$
$$a-45=76$$
a = 121
##### Exercise $$\PageIndex{14}$$
$$a-30=57$$
##### Exercise $$\PageIndex{15}$$
$$m-18=-200$$
m = -182
##### Exercise $$\PageIndex{16}$$
$$m-12=-12$$
##### Exercise $$\PageIndex{17}$$
$$x-\frac{1}{3}=2$$
$$x=\frac{7}{3}$$
##### Exercise $$\PageIndex{18}$$
$$x-\frac{1}{5}=4$$
##### Exercise $$\PageIndex{19}$$
$$y-3.8=10$$
y = 13.8
##### Exercise $$\PageIndex{20}$$
$$y-7.2=5$$
##### Exercise $$\PageIndex{21}$$
$$x-165=-420$$
$$x=-255$$
##### Exercise $$\PageIndex{22}$$
$$z-101=-314$$
##### Exercise $$\PageIndex{23}$$
$$z+0.52=-8.5$$
$$z=-9.02$$
##### Exercise $$\PageIndex{24}$$
$$x+0.93=-4.1$$
##### Exercise $$\PageIndex{25}$$
$$q+\frac{3}{4}=\frac{1}{2}$$
$$q = -\frac{1}{4}$$
##### Exercise $$\PageIndex{26}$$
$$p+\frac{1}{3}=\frac{5}{6}$$
##### Exercise $$\PageIndex{27}$$
$$p-\frac{2}{5}=\frac{2}{3}$$
$$p=\frac{16}{15}$$
##### Exercise $$\PageIndex{28}$$
$$y-\frac{3}{4}=\frac{3}{5}$$
Solve Equations that Require Simplification
In the following exercises, solve each equation.
##### Exercise $$\PageIndex{29}$$
$$c+31-10=46$$
c = 25
##### Exercise $$\PageIndex{30}$$
$$m+16-28=5$$
##### Exercise $$\PageIndex{31}$$
$$9 x+5-8 x+14=20$$
x = 1
##### Exercise $$\PageIndex{32}$$
$$6 x+8-5 x+16=32$$
##### Exercise $$\PageIndex{33}$$
$$-6 x-11+7 x-5=-16$$
x = 0
##### Exercise $$\PageIndex{34}$$
$$-8 n-17+9 n-4=-41$$
##### Exercise $$\PageIndex{35}$$
$$5(y-6)-4 y=-6$$
$$y=24$$
##### Exercise $$\PageIndex{36}$$
$$9(y-2)-8 y=-16$$
##### Exercise $$\PageIndex{37}$$
$$8(u+1.5)-7 u=4.9$$
$$u=-7.1$$
##### Exercise $$\PageIndex{38}$$
$$5(w+2.2)-4 w=9.3$$
##### Exercise $$\PageIndex{39}$$
$$6 a-5(a-2)+9=-11$$
$$a=-30$$
##### Exercise $$\PageIndex{40}$$
$$8 c-7(c-3)+4=-16$$
##### Exercise $$\PageIndex{41}$$
$$6(y-2)-5y=4(y+3) -4(y-1)$$
y =28
##### Exercise $$\PageIndex{42}$$
$$9(x-1)-8 x=-3(x+5)+3(x-5)$$
##### Exercise $$\PageIndex{43}$$
$$3(5 n-1)-14 n+9=10(n-4)-6n-4(n+1)$$
n = -50
##### Exercise $$\PageIndex{44}$$
$$2(8m+3)-15m-4=9(m+6)-2(m-1)-7m$$
##### Exercise $$\PageIndex{45}$$
$$-(j+2)+2 j-1=5$$
j = 8
##### Exercise $$\PageIndex{46}$$
$$-(k+7)+2 k+8=7$$
##### Exercise $$\PageIndex{47}$$
$$-\left(\frac{1}{4} a-\frac{3}{4}\right)+\frac{5}{4} a=-2$$
$$a=-\frac{11}{4}$$
##### Exercise $$\PageIndex{48}$$
$$-\left(\frac{2}{3} d-\frac{1}{3}\right)+\frac{5}{3} d=-4$$
##### Exercise $$\PageIndex{49}$$
$$\begin{array}{l}{8(4 x+5)-5(6 x)-x} \\ {=53-6(x+1)+3(2 x+2)}\end{array}$$
x=13
##### Exercise $$\PageIndex{50}$$
$$\begin{array}{l}{6(9 y-1)-10(5 y)-3 y} \\ {=22-4(2 y-12)+8(y-6)}\end{array}$$
Translate to an Equation and Solve
In the following exercises, translate to an equation and then solve it.
##### Exercise $$\PageIndex{51}$$
Nine more than $$x$$ is equal to $$52 .$$
$$x+9=52 ; x=43$$
##### Exercise $$\PageIndex{52}$$
The sum of $$x$$ and $$-15$$ is 23.
##### Exercise $$\PageIndex{53}$$
Ten less than $$m$$ is $$-14$$.
$$m-10=-14 ; m=-4$$
##### Exercise $$\PageIndex{54}$$
Three less than $$y$$ is $$-19$$.
##### Exercise $$\PageIndex{55}$$
The sum of $$y$$ and $$-30$$ is $$40 .$$
$$y+(-30)=40 ; y=70$$
##### Exercise $$\PageIndex{56}$$
Twelve more than $$p$$ is equal to $$67 .$$
##### Exercise $$\PageIndex{57}$$
The difference of 9$$x$$ and 8$$x$$ is 107.
$$9 x-8 x=107 ; 107$$
##### Exercise $$\PageIndex{58}$$
The difference of 5$$c$$ and 4$$c$$ is $$602 .$$
##### Exercise $$\PageIndex{59}$$
The difference of $$n$$ and $$\frac{1}{6}$$ is $$\frac{1}{2}$$.
$$n-\frac{1}{6}=\frac{1}{2} ; \frac{2}{3}$$
##### Exercise $$\PageIndex{60}$$
The difference of $$f$$ and $$\frac{1}{3}$$ is $$\frac{1}{12}$$.
##### Exercise $$\PageIndex{61}$$
The sum of $$-4 n$$ and 5$$n$$ is $$-82$$
$$-4 n+5 n=-82 ;-82$$
##### Exercise $$\PageIndex{62}$$
The sum of $$-9 m$$ and 10$$m$$ is $$-95$$
Translate and Solve Applications
In the following exercises, translate into an equation and solve.
##### Exercise $$\PageIndex{63}$$
Distance Avril rode her bike a total of 18 miles, from home to the library and then to the beach. The distance from Avril’s house to the library is 7 miles. What is the distance from the library to the beach?
11 miles
##### Exercise $$\PageIndex{64}$$
Reading Jeff read a total of 54 pages in his History and Sociology textbooks. He read 41 pages in his History textbook. How many pages did he read in his Sociology textbook?
##### Exercise $$\PageIndex{65}$$
Age Eva’s daughter is 15 years younger than her son. Eva’s son is 22 years old. How old is her daughter?
7 years old
##### Exercise $$\PageIndex{66}$$
Age Pablo’s father is 3 years older than his mother. Pablo’s mother is 42 years old. How old is his father?
##### Exercise $$\PageIndex{67}$$
Groceries For a family birthday dinner, Celeste bought a turkey that weighed 5 pounds less than the one she bought for Thanksgiving. The birthday turkey weighed 16 pounds. How much did the Thanksgiving turkey weigh?
21 pounds
##### Exercise $$\PageIndex{68}$$
Weight Allie weighs 8 pounds less than her twin sister Lorrie. Allie weighs 124 pounds. How much does Lorrie weigh?
##### Exercise $$\PageIndex{69}$$
Health Connor’s temperature was 0.7 degrees higher this morning than it had been last night. His temperature this morning was 101.2 degrees. What was his temperature last night?
100.5 degrees
##### Exercise $$\PageIndex{70}$$
Health The nurse reported that Tricia’s daughter had gained 4.2 pounds since her last checkup and now weighs 31.6 pounds. How much did Tricia’s daughter weigh at her last checkup?
##### Exercise $$\PageIndex{71}$$
Salary Ron’s paycheck this week was $17.43 less than his paycheck last week. His paycheck this week was $103.76. How much was Ron’s paycheck last week?
## Everyday Math
##### Exercise $$\PageIndex{73}$$
Construction Miguel wants to drill a hole for a $$\frac{5}{8}$$ inch screw. The hole should be $$\frac{1}{12}$$ inch smaller than the screw. Let $$d$$ equal the size of the hole he should drill. Solve the equation $$d=\frac{5}{8}-\frac{1}{12}$$ to see what size the hole should be.
$$d=\frac{13}{24}$$ inch
##### Exercise $$\PageIndex{74}$$
Baking Kelsey needs $$\frac{2}{3}$$ cup of sugar for the cookie recipe she wants to make. She only has $$\frac{3}{8}$$ cup of sugar and will borrow the rest from her neighbor. Let $$s$$ equal the amount of sugar she will borrow. Solve the equation $$\frac{3}{8}+s=\frac{2}{3}$$ to find the amount of sugar she should ask to borrow.
## Writing Exercises
##### Exercise $$\PageIndex{75}$$
Is $$-8$$ a solution to the equation $$3 x=16-5 x ?$$ How do you know?
No. Justifications will vary.
##### Exercise $$\PageIndex{76}$$
What is the first step in your solution to the equation $$10 x+2=4 x+26 ?$$
## Self Check
ⓐ After completing the exercises, use this checklist to evaluate your mastery of the objectives of this section.
ⓑ If most of your checks were:
…confidently. Congratulations! You have achieved your goals in this section! Reflect on the study skills you used so that you can continue to use them. What did you do to become confident of your ability to do these things? Be specific!
…with some help. This must be addressed quickly as topics you do not master become potholes in your road to success. Math is sequential - every topic builds upon previous work. It is important to make sure you have a strong foundation before you move on. Who can you ask for help? Your fellow classmates and instructor are good resources. Is there a place on campus where math tutors are available? Can your study skills be improved?
…no - I don’t get it! This is critical and you must not ignore it. You need to get help immediately or you will quickly be overwhelmed. See your instructor as soon as possible to discuss your situation. Together you can come up with a plan to get you the help you need.
This page titled 2.1E: Exercises is shared under a not declared license and was authored, remixed, and/or curated by OpenStax.
http://cs.stackexchange.com/questions/6874/show-that-a-language-belongs-to-the-polynomial-hierarchy | # Show that a language belongs to the polynomial hierarchy
I think the following exercise is meant as a "warm up", but nevertheless it's quite difficult for me:
Let $k \in \mathbb{N}$ and let $L \in \Sigma_k$. Show that also $L^{*} \in \Sigma_k$.
The following details from my lecture notes seem to be useful:
Notation. Let $n \in \mathbb{N}$.
We write $\exists_n y. \varphi(y)$ for $\exists y \in \Sigma^{*}.|y| \le n \wedge \varphi(y)$.
We write $\forall_n y. \varphi(y)$ for $\forall y \in \Sigma^{*}.|y| \le n \Rightarrow \varphi(y)$.
Theorem.
$L \in \Sigma^P_i \Leftrightarrow$ there is a language $A \in P$ and a polynomial $p$ such that: $x \in L \Leftrightarrow \exists_{p(|x|)}y_1\, \forall_{p(|x|)}y_2\, \exists_{p(|x|)}y_3 \cdots Q_{p(|x|)}y_i\; (x,y_1,y_2,\dots,y_i) \in A$, where the quantifiers alternate and the last quantifier $Q$ is $\exists$ or $\forall$ depending on whether $i$ is odd or even.
Unfortunately I don't see the solution of the "puzzle". Can somebody please help me a little bit (despite the fact that it's weekend)?
Let me assume your are familiar with the oracle-version of the polynomial hierarchy. Thus, if $L\in \Sigma_k$, then there exists a (non-deterministic polytime) Turing machine $M$ with oracle $\Sigma_{k-1}$.
To show that $L^*$ is also in $\Sigma_k$ we explain how one could build a new non-deterministic polytime Turing machine $M^*$ with oracle $\Sigma_{k-1}$ that accepts $L^*$. The key idea is, that we use the (simulation) of $M$ as a sub-module for $M^*$.
The machine $M^*$ works as follows. It guesses a partition of the input $w$ into words $u_1u_2\cdots u_m$. Then it runs the simulation of $M$ on every $u_i$. If the simulation verifies that all $u_i\in L$, then $M^*$ accepts; otherwise it rejects. Clearly $M^*$ accepts $L^*$ and uses only the oracle $\Sigma_{k-1}$. What is left to check is whether $M^*$ runs in polytime. The running time of $M^*$ is dominated by $$t_M(|u_1|)+t_M(|u_2|)+\cdots+ t_M(|u_m|)\le t_M(n)+t_M(n)+\cdots+ t_M(n)\le n \cdot t_M(n),$$ for $n=|w|$. Since $t_M(n)$ is a polynomial, $n\cdot t_M(n)$ is a polynomial as well.
Thank you very much for your answer, Professor Schulz. I unfortunately fail to understand why the oracle is $Σ_{k−1}$. Can you (or somebody else) please explain why the index has to be k-1? – Uriel Nov 24 '12 at 19:45
@Uriel: There are different (equivalent) ways how to define the classes in the polytime hierarchy. Please read the wikipedia article. One way to define $\Sigma_k$ is as the class NP with oracle $\Sigma_{k-1}$, where $\Sigma_0=P$. This definition was imho better suited for presenting a solution to your question. – A.Schulz Nov 25 '12 at 7:46
Thank you, now this is clear. I have a last question, which concerns the running time of $M^*$. The part $t_M(|u_1|)+t_M(|u_2|)+\cdots+ t_M(|u_k|)$ is clear. But why are there multiple $t_M(n)$? – Uriel Nov 25 '12 at 12:42
@Uriel: We have to show that the new machine runs in polytime. So we use as a very rough estimation $t_M(|u_i|)\le t_M(n)$ for each of the $t_M(|u_i|)$ terms. Also there are at most $n$ of such terms. – A.Schulz Nov 26 '12 at 20:11
https://cs.stackexchange.com/questions/131977/proving-that-dconn-is-nl-complete | # Proving that DCONN is NL-Complete
I am having trouble with some homework regarding proving that DCONN is NL-Complete. As part of the exercise, the fact that RCH is NL-Complete can be assumed.
Problem definitions:
RCH: Given a directed graph G and nodes x, y , is there a path from x to y?
DCONN: given a directed graph G, is it connected?
To my understanding we have to prove two things:
1. $$DCONN \leq RCH$$
2. $$RCH \leq DCONN$$
Reduction $$\leq$$ in this case is defined as: $$L$$ is logspace-reducible to $$L'$$ ($$L \leq_{\log} L'$$) iff there is a LOGSPACE-computable function $$f$$ such that: $$x \in L$$ iff $$f(x) \in L'$$.
To be honest I have no idea where to even start. The following is my naive attempt to tackle the first issue: I thought that asking $$n^2$$ RCH questions, to see whether node $$y$$ is reachable from node $$x$$ for all $$x,y \in G, x \neq y$$, would be sufficient to decide whether $$G$$ is connected. But I am not sure if that is sufficient to be considered a reduction.
When it comes to the second part I have no idea where to even start. Any help or pointers would be appreciated.
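For the second part, one standard construction (offered here as a pointer, and assuming "connected" means strongly connected for a directed graph) maps an RCH instance $$(G, x, y)$$ to a graph that adds an edge from every node to $$x$$ and from $$y$$ to every node; the result is strongly connected iff $$y$$ is reachable from $$x$$ in $$G$$. A sketch, with graphs given as edge sets over nodes $$0..n-1$$:

```python
def reduce_rch_to_dconn(n, edges, x, y):
    """Map an RCH instance (G, x, y) to a DCONN (strong connectivity) instance.

    G has nodes 0..n-1; `edges` is an iterable of (u, v) pairs.
    The new graph adds v -> x and y -> v for every node v, so it is
    strongly connected iff y is reachable from x in G.
    """
    new_edges = set(edges)
    for v in range(n):
        new_edges.add((v, x))  # every node can reach x directly
        new_edges.add((y, v))  # y can reach every node directly
    return new_edges
```

Each output edge depends only on a single input edge or a single node index, which is why the map can be computed in logarithmic space.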
• I am not sure if that is sufficient to be considered a reduction. If you don't know the notion of reductions in this case, you cannot expect to be able to prove that a reduction exists. Before attempting the exercise, make sure that you know the formal definition of reduction which is relevant here. – Yuval Filmus Nov 5 '20 at 22:38
• @YuvalFilmus I've updated the question to contain the correct definition of reduction relative to this context. – Walker Panuccio Nov 5 '20 at 23:11
https://okayama.pure.elsevier.com/en/publications/chemical-treatment-of-copper-and-aluminum-derived-from-waste-crys | # Chemical Treatment of Copper and Aluminum Derived from Waste Crystalline Silicon Solar Cell Modules by Mixed Acids of HNO3 and HCl
Teruaki Matsubara, Md Azhar Uddin, Yoshiei Kato, Takanori Kawanishi, Yoshiaki Hayashi
Research output: Contribution to journalArticlepeer-review
7 Citations (Scopus)
## Abstract
In this study, copper (Cu) and aluminum (Al) particles derived from waste crystalline silicon solar cell modules were etched with mixed acid containing HNO3 and HCl, and the optimal mixing conditions were examined for the purpose of recovering silicon with high yield. The crushed particles of waste silicon solar cells were used after sieving between 450 and 600 μm particle size. The Cu etching rate decreased with the increasing HCl concentration in the region of HNO3/HCl ≧ 3.36, whereas it increased at HNO3/HCl < 3.36. The Al etching rate increased when HCl was added, although it was almost independent of the amount of HNO3. 99.6% silicon purity was achieved at the treatment time of 30 min. The rate-determining step of Cu and Al etchings was represented by the volume reaction model instead of the surface reaction model. The CuCl coating was observed on the residuals of Cu. The increasing HCl blocked the Cu etching, but the excess Cl promoted the dissolution of CuCl due to complex formation, corresponding to the regions of HNO3/HCl ≧ 3.36 and HNO3/HCl < 3.36, respectively. In the region of HNO3/HCl < 3.36, the spontaneous complete etching time of Cu and Al was achieved with higher HNO3 concentration of 8.5–10 mol/L.
Original language: English
Pages (from–to): 378–387
Number of pages: 10
Journal: Journal of Sustainable Metallurgy
Volume: 4
Issue number: 3
DOI: https://doi.org/10.1007/s40831-018-0184-2
Publication status: Published - Sep 1 2018
## Keywords
• Chemical etching
• Mixed acid
• Silicon
• Waste solar cell module
## ASJC Scopus subject areas
• Environmental Science (miscellaneous)
• Mechanics of Materials
• Metals and Alloys
## Fingerprint
Dive into the research topics of 'Chemical Treatment of Copper and Aluminum Derived from Waste Crystalline Silicon Solar Cell Modules by Mixed Acids of HNO3 and HCl'. Together they form a unique fingerprint.
http://math.stackexchange.com/questions/168976/trivial-restriction-of-line-bundles | # Trivial Restriction of Line Bundles
Say I have some projective space $\mathbb{P}^n$ and some line bundle $L=\mathcal{O}(-k)$. Now, I want to have a subvariety $Y$ in $\mathbb{P}^n$ such that $L\vert_Y$ is trivial.
When is this the case? I can only think of trivial solutions, like when $Y$ is just a point, and I can't seem to find a standard treatment of this in the literature.
Let's exclude the other trivial solution: $k=0$ and any $Y$.
Suppose now that $k\ne 0$. As $O(-k)|_Y$ trivial is equivalent to $O(k)|_Y$ (isomorphic to the dual of $O(-k)|_Y$) trivial, we can restrict to the case $k<0$. Then $L$, hence $L|_Y$, are ample. If $L|_Y$ is moreover trivial, then $O_Y$ is ample, which implies that $Y$ is affine. So necessarily $Y$ is affine. If $Y$ is a closed subvariety, this forces $Y$ to be a finite set.
Conclusion: if $Y$ is a closed subvariety such that $L|_Y$ is trivial with $k\ne 0$, then $Y$ is a finite subset.
http://math.stackexchange.com/questions/88392/how-do-you-find-the-distance-from-a-point-to-a-plane/88394 | How do you find the distance from a point to a plane?
I am having trouble with this:
Find the distance from the point $(1,1,1)$ to the plane $2x+2y+z=0$.
Any ideas? Thanks.
Try this, it is the answer to your question. – user12205 Dec 4 '11 at 23:52
@Jeroen Vaelen: I'd like to remark that formula $\frac{|ax_0+by_0+cz_0+d|}{\sqrt{a^2+b^2+c^2}}$ for the distance from the point $(x_0,y_0,z_0)$ to the plane $ax+by+cz+d=0$ holds also in a more general setting: in fact one can prove an analogous formula, known as Ascoli's formula (e.g., see: matematicamente.it/forum/…), in linear normed vector spaces of any dimension. – Pacciu Dec 5 '11 at 1:51
The family of planes, indexed by $\alpha$ $$f(x,y,z)=2x+2y+z=\alpha$$ are all parallel, with normal vectors parallel to $\nabla f=(2,2,1)$.
Moving a distance $d$ along the normal means moving $d\frac{(2,2,1)}{|(2,2,1)|}$. This movement changes $\alpha$ by $d\frac{2\cdot2+2\cdot2+1\cdot1}{|(2,2,1)|}=d|(2,2,1)|$. Thus, the distance between two of these planes is $\frac{|\Delta\alpha|}{|(2,2,1)|}=\frac{|\Delta\alpha|}{3}$.
Since $\alpha=0$ for $2x+2y+z=0$ and $\alpha=2x+2y+z=5$ for the plane that contains $(1,1,1)$, we get the distance from $2x+2y+z=0$ to $(1,1,1)$ to be $\frac{5}{3}$.
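Both this computation and the general formula quoted in the comments, $\frac{|ax_0+by_0+cz_0+d|}{\sqrt{a^2+b^2+c^2}}$, are easy to check numerically; a minimal Python sketch:

```python
import math

def point_plane_distance(point, plane):
    """Distance from (x0, y0, z0) to the plane a*x + b*y + c*z + d = 0."""
    x0, y0, z0 = point
    a, b, c, d = plane
    return abs(a*x0 + b*y0 + c*z0 + d) / math.sqrt(a*a + b*b + c*c)

print(point_plane_distance((1, 1, 1), (2, 2, 1, 0)))  # 1.666... = 5/3
```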
Thanks that helps a lot! – jmendegan Dec 5 '11 at 1:25
The shortest distance will be achieved along a line that is perpendicular to the plane.
The normal vector to the plane can be read off the equation: since the plane is $2x+2y+z=0$, the normal vector of the plane is $(2,2,1)$.
That means that the shortest path from $(1,1,1)$ to the plane will be along a line parallel to $(2,2,1)$. That is, you are looking a value of $t$ such that $$(1,1,1) + t(2,2,1)$$ lies in the plane. That will be the point in the plane closest to $(1,1,1)$. And once you know the point of the plane closest to $(1,1,1)$, you can compute the distance by simply using the formula for distance between two points.
I always use 3D homogeneous coordinates for points and planes with the following constructs:
1. Point $P=\left| \begin{matrix} \vec{p} & \delta\end{matrix} \right|=\left| \begin{matrix} (p_x,p_y,p_z) & \delta \end{matrix} \right| = \left| \begin{matrix} (1,1,1) & 1\end{matrix} \right|$
2. Plane $W=\left| \begin{matrix} \vec{w} & \epsilon \end{matrix} \right| = \left| \begin{matrix} (a,b,c) & \epsilon\end{matrix} \right| = \left| \begin{matrix} (2,2,1) & 0 \end{matrix} \right|$
3. Point Plane Distance $h=\dfrac{\vec{p}\cdot\vec{w}+\delta\,\epsilon}{\delta\,|\vec{w}|} = \dfrac{(1,1,1)\cdot(2,2,1)+0}{1\,\sqrt{2^2+2^2+1^2}} = \dfrac{5+0}{1*3}=\frac{5}{3}$
Note that the equation for the plane is $P\cdot W = 0$: $$P\cdot W=\left| \begin{matrix} (x,y,z) & 1 \end{matrix} \right|\cdot \left| \begin{matrix} (a,b,c) & \epsilon\end{matrix} \right| = 0$$ $$ax+by+cz+\epsilon =0$$
http://math.stackexchange.com/questions/48795/how-do-you-calculate-the-odds-getting-a-single-pair-in-texas-hold-em | How do you calculate the odds getting a single pair in Texas Hold 'Em?
Given that I am dealt one card, what are the odds that I will then make a pair either from the next card dealt to me or from the river of 5 cards played out?
I'm thinking something like: given I have one card already, I figure I have a 3/51 chance in getting its pair (ignoring cards being dealt to other players). But I come unstuck when trying to then figure out the next 5 cards in the river.
Would they be cumulative - so 3/51 + (3/50 + 3/49 + 3/48 + 3/47 + 3/46)?
As ncmathsadist says, it is easier to calculate the probability of not getting a pair, then subtract from $1$. It depends upon whether you want to calculate the chance of pairing the first card, or the chance of getting a pair when dealt $7$ cards (your two hole cards plus the five of the board). To not pair the first card, the chance on the second is $\frac{48}{51}$ as you have to avoid $3$ cards of what is left. Assuming you missed on the second, the chance on the third is $\frac{47}{50}$, so the chance of pairing the first in two tries is $1-\frac{48\cdot 47}{51\cdot 50}$. If you are calculating the chance of any pair, missing on the second card is again $\frac{48}{51}$, but missing on the third is $\frac{44}{50}$ as there are now $6$ cards that can pair you. So getting any pair in three cards is $1-\frac{48\cdot 44}{51\cdot 50}$. The pattern should be clear enough.
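The pattern for pairing the first card extends directly to any number of further cards. A minimal Python sketch using exact fractions (the function name is our own); note this counts only pairs of the first card — the "any pair" case changes the number of cards to avoid at each step, as the answer explains:

```python
from fractions import Fraction

def p_pair_first_card(extra_cards):
    """P(the first card gets paired) when `extra_cards` more cards are dealt."""
    miss = Fraction(1)
    for i in range(extra_cards):
        # at each step, 3 of the remaining cards would pair the first card
        miss *= Fraction(48 - i, 51 - i)
    return 1 - miss

print(p_pair_first_card(2))         # 1 - (48*47)/(51*50), as in the answer
print(float(p_pair_first_card(6)))  # ~0.319 over all six remaining cards
```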
Hint. It is easier to compute the probability of failing to get a pair.
How so? Please assume that I'm not an expert on probability. – HorusKol Jul 1 '11 at 2:26
If you want to make sure you do not get a pair, throw out all the cards of the same value as your first card. Then select the next four cards from this last batch. Then the number of ways of getting a pair equals the total ways of selecting 5 cards minus the number of ways of not getting a pair. Can you take it from there? – gary Jul 1 '11 at 2:47
You are given the card, say with value $1$ for definiteness. Now, we use the fact that:
$P(\text{getting a pair})+P(\text{not getting a pair})=1$, since the two events are complementary.
Then let's look at the two cases:
i)Not getting a pair with the second card:
Then the second card is not a $1$. So the second card can be chosen out of the 48 cards that are not $1$'s in 48 ways. But there is a total of 51 ways of choosing the second card.
ii)Not getting a pair in the 5 cards you are given:
Then you can choose the 4 remaining cards out of the 48 cards that are different from 1. So choose any 4 out of 48. The total number of choices you can make out of 4 cards is just the number of choices of 4 cards out of 51.
http://www.ck12.org/book/CK-12-Trigonometry---Second-Edition/r6/section/6.5/ | <img src="https://d5nxst8fruw4z.cloudfront.net/atrk.gif?account=iA1Pi1a8Dy00ym" style="display:none" height="1" width="1" alt="" />
# 6.5: The Trigonometric Form of Complex Numbers
Difficulty Level: At Grade Created by: CK-12
## Learning Objectives
• Understand the relationship between the rectangular form of complex numbers and their corresponding polar form.
• Convert complex numbers from standard form to polar form and vice versa.
A number in the form $a + bi$, where $a$ and $b$ are real numbers and $i$ is the imaginary unit, or $\sqrt{-1}$, is called a complex number. Despite their names, complex numbers and imaginary numbers have very real and significant applications in both mathematics and in the real world. Complex numbers are useful in pure mathematics, providing a more consistent and flexible number system that helps solve algebra and calculus problems. We will see some of these applications in the examples throughout this lesson.
## The Trigonometric or Polar Form of a Complex Number
The following diagram will introduce you to the relationship between complex numbers and polar coordinates.
In the figure above, the point that represents the number $x + yi$ was plotted and a vector was drawn from the origin to this point. As a result, an angle in standard position, $\theta$, has been formed. In addition to this, the point that represents $x + yi$ is $r$ units from the origin. Therefore, any point in the complex plane can be found if the angle $\theta$ and the $r$-value are known. The following equations relate $x, y, r$ and $\theta$.

$$x=r \cos \theta \qquad y=r \sin \theta \qquad r^2=x^2+y^2 \qquad \tan \theta=\frac{y}{x}$$

If we apply the first two equations to the point $x + yi$ the result would be:

$$x + yi = r \cos \theta + r i \sin \theta \rightarrow r (\cos \theta + i \sin \theta)$$

The right side of this equation, $r(\cos \theta + i \sin \theta)$, is called the polar or trigonometric form of a complex number. A shortened version of this polar form is written as $r \ cis \ \theta$. The length $r$ is called the absolute value or the modulus, and the angle $\theta$ is called the argument of the complex number. Therefore, the following equations define the polar form of a complex number:

$$r^2=x^2+y^2 \qquad \tan \theta =\frac{y}{x} \qquad x+yi=r(\cos \theta + i \sin \theta)$$
It is now time to use these equations to convert complex numbers from standard form to polar form.
Example 1: Represent the complex number $5 + 7i$ graphically and express it in its polar form.

Solution: As discussed in the Prerequisite Chapter, here is the graph of $5 + 7i$.

Converting from rectangular to polar, $x = 5$ and $y = 7$:

$$r = \sqrt{5^2 + 7^2} \approx 8.6 \qquad \tan \theta = \frac{7}{5} \;\Rightarrow\; \theta = \tan^{-1} \frac{7}{5} \approx 54.5^\circ$$

So the polar form is $8.6(\cos 54.5^\circ + i \sin 54.5^\circ)$.
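As a quick numerical check of this conversion (an illustrative sketch, not part of the lesson; the helper name `to_polar` is my own), Python's standard `math` module reproduces Example 1:

```python
import math

def to_polar(x, y):
    """Return (r, theta_in_degrees) for the complex number x + yi."""
    r = math.hypot(x, y)                    # r = sqrt(x^2 + y^2)
    theta = math.degrees(math.atan2(y, x))  # atan2 picks the correct quadrant
    return r, theta

r, theta = to_polar(5, 7)
print(round(r, 1), round(theta, 1))  # 8.6 54.5
```

Note that `math.atan2(y, x)` returns the angle in the correct quadrant automatically, which matters in later examples where $x$ or $y$ is negative.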
Another widely used notation for the polar form of a complex number is $r \angle \theta = r(\cos \theta + i \sin \theta)$. There are now three ways to write the polar form of a complex number:

$$x + yi = r(\cos \theta + i \sin \theta) \qquad x + yi = r \operatorname{cis} \theta \qquad x + yi = r \angle \theta$$
Example 2: Express the polar form of each complex number using the shorthand representations.

a) $4.92(\cos 214.6^\circ + i \sin 214.6^\circ)$

b) $15.6(\cos 37^\circ + i \sin 37^\circ)$

Solution:

a) $4.92 \angle 214.6^\circ$ or $4.92 \operatorname{cis} 214.6^\circ$

b) $15.6 \angle 37^\circ$ or $15.6 \operatorname{cis} 37^\circ$
Example 3: Represent the complex number $-3.12 - 4.64i$ graphically and give two notations of its polar form.

Solution: From the rectangular form of $-3.12 - 4.64i$, $x = -3.12$ and $y = -4.64$.

$$r = \sqrt{x^2 + y^2} = \sqrt{(-3.12)^2 + (-4.64)^2} \approx 5.59$$

$$\tan \theta = \frac{y}{x} = \frac{-4.64}{-3.12} \;\Rightarrow\; \theta \approx 56.1^\circ$$

This is the reference angle, so we must determine the measure of the angle in the third quadrant: $56.1^\circ + 180^\circ = 236.1^\circ$.

One polar notation of the point $-3.12 - 4.64i$ is $5.59(\cos 236.1^\circ + i \sin 236.1^\circ)$. Another polar notation of the point is $5.59 \angle 236.1^\circ$.
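The reference-angle correction above can be verified against `math.atan2`, which applies the quadrant adjustment automatically (a sketch, assuming angles are normalized to $[0^\circ, 360^\circ)$):

```python
import math

x, y = -3.12, -4.64
ref = math.degrees(math.atan(y / x))          # reference angle: about 56.1
theta = ref + 180                             # third-quadrant correction
auto = math.degrees(math.atan2(y, x)) % 360   # atan2 needs no manual fix-up
assert abs(theta - auto) < 1e-9
print(round(theta, 1))  # 236.1
```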
So far we have expressed all values of theta in degrees. The polar form of a complex number can also have theta expressed in radian measure, which is useful when plotting complex numbers in the polar plane.

The answer to the above example, $-3.12 - 4.64i$, with theta expressed in radians would be:
$$\tan \theta = \frac{-4.64}{-3.12} \approx 1.487 \;\Rightarrow\; \theta_{\text{ref}} = \tan^{-1}(1.487) \approx 0.9788 \ \text{rad (reference angle)} \qquad 0.9788 + \pi \approx 4.12 \ \text{rad}$$

$$5.59(\cos 4.12 + i \sin 4.12)$$
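In radians, the same conversion is a one-liner with `cmath.polar`, whose angle lies in $(-\pi, \pi]$ and so needs normalizing to match the 4.12 rad above (again just an illustrative sketch):

```python
import cmath
import math

r, theta = cmath.polar(-3.12 - 4.64j)  # theta is returned in (-pi, pi]
theta %= 2 * math.pi                   # normalize to [0, 2*pi)
print(round(r, 2), round(theta, 2))    # 5.59 4.12
```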
Now that we have explored the polar form of complex numbers and the steps for performing these conversions, we will look at an example from circuit analysis that requires a complex number given in polar form to be expressed in standard form.

Example 4: The impedance $Z$, in ohms, in an alternating-current circuit is given by $Z = 4650 \angle -35.2^\circ$. Express the value for $Z$ in standard form. (In electricity, negative angles are often used.)

Solution: The value for $Z$ is given in polar form. From this notation, we know that $r = 4650$ and $\theta = -35.2^\circ$. Using these values, we can write:

$$Z = 4650(\cos(-35.2^\circ) + i \sin(-35.2^\circ))$$

$$x = 4650 \cos(-35.2^\circ) \approx 3800 \qquad y = 4650 \sin(-35.2^\circ) \approx -2680$$

Therefore the standard form is $Z = 3800 - 2680i$ ohms.
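The reverse conversion in Example 4 can be sketched with `cmath.rect`, which implements exactly $x = r\cos\theta$, $y = r\sin\theta$:

```python
import cmath
import math

# impedance given in polar form: 4650 ohms at -35.2 degrees
Z = cmath.rect(4650, math.radians(-35.2))
print(round(Z.real), round(Z.imag))  # 3800 -2680
```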
## Points to Consider
• Is it possible to perform basic operations on complex numbers in polar form?
• If operations can be performed, do the processes change for polar form or remain the same as for standard form?
## Review Questions
1. Express the following polar forms of complex numbers in the two other possible ways.
   1. $5 \operatorname{cis} \frac{\pi}{6}$
   2. $3 \angle 135^\circ$
   3. $2 \left(\cos \frac{2\pi}{3} + i \sin \frac{2\pi}{3}\right)$
2. Express the complex number $6 - 8i$ graphically and write it in its polar form.
3. Express the following complex numbers in their polar form.
   1. $4 + 3i$
   2. $-2 + 9i$
   3. $7 - i$
   4. $-5 - 2i$
4. Graph the complex number $3\left(\cos \frac{\pi}{4} + i \sin \frac{\pi}{4}\right)$ and express it in standard form.
5. Find the standard form of each of the complex numbers below.
   1. $2 \operatorname{cis} \frac{\pi}{2}$
   2. $4 \angle \frac{5\pi}{6}$
   3. $8 \left( \cos \left(-\frac{\pi}{3}\right) + i \sin \left(-\frac{\pi}{3}\right)\right)$
## Review Answers

1. 
   1. $5 \operatorname{cis} \frac{\pi}{6} = 5 \angle \frac{\pi}{6} = 5\left(\cos \frac{\pi}{6} + i \sin \frac{\pi}{6}\right)$
   2. $3 \angle 135^\circ = 3 \operatorname{cis} 135^\circ = 3(\cos 135^\circ + i \sin 135^\circ)$
   3. $2\left(\cos \frac{2\pi}{3} + i \sin \frac{2\pi}{3}\right) = 2 \operatorname{cis} \frac{2\pi}{3} = 2 \angle \frac{2\pi}{3}$
2. For $6 - 8i$: $x = 6$ and $y = -8$, so $r = \sqrt{6^2 + (-8)^2} = 10$ and $\tan \theta = \frac{-8}{6} \Rightarrow \theta = -53.1^\circ$. Since $\theta$ is in the fourth quadrant, $\theta = -53.1^\circ + 360^\circ = 306.9^\circ$. Expressed in polar form, $6 - 8i$ is $10(\cos 306.9^\circ + i \sin 306.9^\circ)$ or $10 \angle 306.9^\circ$.
3. 
   1. $4 + 3i \rightarrow x = 4, y = 3$; $r = \sqrt{4^2 + 3^2} = 5$, $\tan \theta = \frac{3}{4} \Rightarrow \theta = 36.87^\circ \Rightarrow 5(\cos 36.87^\circ + i \sin 36.87^\circ)$
   2. $-2 + 9i \rightarrow x = -2, y = 9$; $r = \sqrt{(-2)^2 + 9^2} = \sqrt{85} \approx 9.22$, $\tan \theta = -\frac{9}{2} \Rightarrow \theta = 102.53^\circ \Rightarrow 9.22(\cos 102.53^\circ + i \sin 102.53^\circ)$
   3. $7 - i \rightarrow x = 7, y = -1$; $r = \sqrt{7^2 + 1^2} = \sqrt{50} \approx 7.07$, $\tan \theta = -\frac{1}{7} \Rightarrow \theta = 351.87^\circ \Rightarrow 7.07(\cos 351.87^\circ + i \sin 351.87^\circ)$
   4. $-5 - 2i \rightarrow x = -5, y = -2$; $r = \sqrt{(-5)^2 + (-2)^2} = \sqrt{29} \approx 5.39$, $\tan \theta = \frac{2}{5} \Rightarrow \theta = 201.8^\circ \Rightarrow 5.39(\cos 201.8^\circ + i \sin 201.8^\circ)$
Note: The range of a graphing calculator's $\tan^{-1}$ function is limited to Quadrants I and IV, so for points located in the other quadrants, such as $-2 + 9i$, you must add $180^\circ$ to get the correct angle $\theta$ for numbers given in polar form.
4. For $3\left(\cos \frac{\pi}{4} + i \sin \frac{\pi}{4}\right)$: $r = 3$, and $\cos \frac{\pi}{4} = \sin \frac{\pi}{4} = \frac{\sqrt{2}}{2}$, so $x = y = \frac{3\sqrt{2}}{2}$. The standard form of the polar complex number $3\left(\cos \frac{\pi}{4} + i \sin \frac{\pi}{4}\right)$ is $\frac{3\sqrt{2}}{2} + \frac{3\sqrt{2}}{2}i$.
5. 
   1. $2 \operatorname{cis} \frac{\pi}{2} \rightarrow \cos \frac{\pi}{2} = 0, \sin \frac{\pi}{2} = 1 \rightarrow 2(0) + 2(1)i = 2i$
   2. $4 \angle \frac{5\pi}{6} \rightarrow \cos \frac{5\pi}{6} = -\frac{\sqrt{3}}{2}, \sin \frac{5\pi}{6} = \frac{1}{2} \rightarrow 4\left(-\frac{\sqrt{3}}{2}\right) + 4\left(\frac{1}{2}\right)i = -2\sqrt{3} + 2i$
   3. $8\left(\cos\left(-\frac{\pi}{3}\right) + i \sin\left(-\frac{\pi}{3}\right)\right) \rightarrow \cos\left(-\frac{\pi}{3}\right) = \frac{1}{2}, \sin\left(-\frac{\pi}{3}\right) = -\frac{\sqrt{3}}{2} \rightarrow 8\left(\frac{1}{2}\right) + 8\left(-\frac{\sqrt{3}}{2}\right)i = 4 - 4\sqrt{3}i$
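The exact-value answers for question 5 can be spot-checked numerically (an illustrative sketch, not part of the answer key):

```python
import cmath
import math

# 5(b): 4 at angle 5*pi/6 should equal -2*sqrt(3) + 2i
zb = cmath.rect(4, 5 * math.pi / 6)
assert abs(zb - complex(-2 * math.sqrt(3), 2)) < 1e-12

# 5(c): 8(cos(-pi/3) + i sin(-pi/3)) should equal 4 - 4*sqrt(3)i
zc = cmath.rect(8, -math.pi / 3)
assert abs(zc - complex(4, -4 * math.sqrt(3))) < 1e-12
```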
## Wednesday, October 28, 2009
A trader is like a rodeo rider. The market gets violent and shakes everyone but the most determined and convicted trader off its back.
A trader is like a surfer. He analyses wind conditions and tide levels and catches the waves just as they are about to form.
A trader is like a midnight clubber. The booze is on, the music is playing, there are hundreds of people dancing, but everybody has his eye on the exit door.
A trader is like a hunter. He waits in stealth, locks in on his target, goes for the kill, and gets out fast. He lives by the motto "one shot, one kill".
A trader is like a coin-picker. The coins are littered all over the road. They seem easy pickings but a bulldozer is parked right there.
A trader is like a poker player. The market always acts like it has a hand. He either plays along with it or calls a bluff. And he has a trump card--stay out.
A trader is like a trench soldier. 90% sheer boredom, and 10% sheer terror.
A trader is like a daredevil. He makes his judgment of the braking distance, and stands in front of the locomotive train. Get it right, and he lives, but only if he gets it right.
A trader is like a doctor. He monitors the pulse of the market with the EEG, and when the market goes into cardiac arrest, he performs elaborate maneuvers to rescue the health of his portfolio--calmly.
A trader is like an alchemist. He transmutes what is essentially trading noise into the most precious resource of all--gold.
## Monday, October 26, 2009
### Katahdin (Part 1)
Mount Katahdin is the northernmost peak of the Appalachian mountains, which stretch from as far south as Georgia to Maine and, some say, to Canada too, depending on how pedantic you are in mapping the mountain peaks. It has inspired hikes, climbs, poetry, paintings, a piano sonata and, most notably, the writings of Henry Thoreau, who wrote of Katahdin:
"The tops of mountains are among the unfinished parts of the globe, whither it is a slight insult to the gods to climb and pry into their secrets, and try their effect on our humanity. Only daring and insolent men, perchance, go there. Simple races, such as savages, do not climb mountains -- their tops are sacred and mysterious tracts never visited by them. Pomola is always angry with those who climb to the summit of Ktaadn".
Katahdin actually means "the Greatest Mountain" in the native Indian language. The Indians were obviously not well-travelled. Katahdin is by no means the greatest mountain in the world; its height (1600m, slightly taller than Cameron Highlands) would barely cause a ripple amid the sheer enormity that is the Himalayas. But there must be something about this particular Maine mountain to have inspired such dramatic prose. So it was not mere coincidence that I decided to embark on this pilgrimage to Katahdin in the summer of 2009, having been acquainted with both the Appalachian mountains and Henry Thoreau before.
I took off on a 330-mile drive via Interstate 95 from Boston to Millinocket, the nearest town to Mount Katahdin. Car rental is costly, especially if you are travelling alone, so you can be sure that I had overturned every timetable of every single bus company (Greyhound, Vermont, Concord) that plies the Maine roads before deciding to go rental. I kept telling myself, how much would I pay to see Katahdin, and the practicalities of financial matters paled into insignificance.
Interstate 95
The mountain ranges loom far ahead, up among the clouds.
Welcome to Baxter State Park
Katahdin lies inside Baxter State Park. The story goes that Governor Percival Baxter was so spellbound by Katahdin that, in order to prevent loggers from mining the surrounding area, he bought over the entire piece of land around the mountain and entrusted it to the care of the state of Maine. That was how it became a state park. For the record, 204,733 acres is slightly bigger than the island of Singapore.
The infrastructure of Baxter Park is laid out this way: there is only one road leading into the park, via a single entrance. The nearest town, Millinocket, is probably 20 miles away. The base camps scattered around the main mountain ranges are located about 5 miles from the entrance. You can elect to drive your vehicle to some of the base camps (like Roaring Brooks, Katahdin Stream and Abol) and pay \$24 per day for vehicle and occupant, or you can park your car at the entrance and hike your way into the base camps for \$11 a night. At no time are you allowed to spend the night anywhere else in the park, so basically every night spent in Baxter State Park costs at least \$11 per head.
I parked my car beside a lake, which was near the entrance. Seemingly tranquil and serene, but who knows what lurks beneath.
Since i would be away for a few days at least, thought it would be prudent to have the number plate recorded just in case the car gets stolen. But it was remarked to me (later of course) "nobody would come here to steal cars one lor." True.
Recording the numbers for security, not for 4D.
Spread out my barang-barang. From left to right:
Insect repellent (25% deet), Crumpler camera bag with D70, 17-70mm auto and 70-200 manual lens, a dozen toblerones and snickers, peanut butter, guide book with map of Baxter state Park, note book, Paul Theroux reading material and pencil, a pack of organic carrots, torch light, bread, Campbell soup tin can, 2 toggle ropes, rain coat, groundsheet, and an Adidas backpack.
Having never hiked overnight before in my life and lacking the necessary experience, packing was a woeful hit-and-miss affair in hindsight. Why in the world would I want to carry reading materials up there? I realised my folly halfway up the mountain, with the weight of the books digging into my flesh. And what's with the 70-200mm lens? I had thought about it, and decided I would never forgive myself if I came face to face with a bear and did not have a good zoom lens to shoot it with. Incredibly naive, because the first thing I should do is make as much noise as possible to drive the bear away, and then run in the opposite direction, for dear life. On the other hand, the toggle ropes proved very useful later when the hikes turned to climbs. Finally, I can never overstate the importance of that humble groundsheet, without which, hmm, I could not contemplate going on.
After packing my stuff, remembering specifically to lock my car, and paying my dues to the rangers on duty at the entrance, I began to hike my way to Roaring Brooks camp with a spring in my step. Loved every minute of it, but a very friendly ranger driving by insisted on picking me up along the way. I learnt from the ranger that Baxter State Park is a very well-policed park, with over 40 rangers on duty at any one time, unlike his last posting, Denali National Park in Alaska, which, while 10 times larger in area, had only 4 rangers working in it. I guess he must have had a back-breaking time in Alaska. But I was getting excited too, because Denali (McKinley) was also where Christopher McCandless perished, and he must surely have heard of him, but I was careful to keep mum. I didn't want him to think of me as another silly college boy trying to tempt fate just because he watched "Into the Wild" on a lazy Sunday afternoon. Instead I joked about his workload being cut by 40 times, which would otherwise never happen in the corporate world, and he beamed, "It certainly is!".
At this point in time doubts began to creep in. I had wanted this trip to be wild, but not so wild that I would lose my life, nor so mild to be like a walk in the park either. And with over 40 rangers policing every aspect of life in Baxter, it certainly sounded like a trip to Central Park indeed.
Spent a night inside one of the huts at Roaring Brooks camp. It is primitive, with wooden planks for a bed and candles for light. "I am the noble savage, living in the primitive age of the world." It's always cool to be able to quote Thoreau and actually mean it. When darkness descends upon the land, the woods come alive with fireflies dancing in the trees and the river sparkling with moonlight. These are enchanting moments that will remain in the deep recesses of my soul for long to come.
The morning after. Washing up beside Roaring Brooks, the icy-cold water stings me awake and hydrates me for what is going to be a gruelling day.
I was carrying a few niggling fears with me at this point. I had forgotten to buy iodine pills in Boston, and was obsessed with the fear of drinking from the streams, until a fellow hiker said, "Just drink it up, let's worry about the ringworms later." Also, I had read that summertime was black fly season, and had heard stories from a Canadian traveller earlier whose face got stung so badly that it swelled for a few hours. So there, my 2 obsessions at the start of the hike: fear of black flies and fear of drinking poisoned water.
To get to the mountain proper, I had to cross a few miles of thick forest, but rest assured, paths have already been cleared for us. There is no need to trailblaze through. And, temperate forests, with their sparse undergrowth of soft lichen and moss, are a joy to walk in.
Into the wild...
Started the trail around 5am, with the sky already quite bright. I had elected to do the Helon Taylor trail, which is a hike with only a few climbs, after which it joins the infamous Knife Edge before reaching Baxter Peak, the tallest peak of Mount Katahdin.
This is the Helon Taylor trail, which involves jumping along these boulders.
Oh yea, and one more fear, the fear of getting my boots wet. So this stream was a considerable challenge in keeping my boots dry. My Timberland Gore-Tex held up nicely, and passed the test with flying colours. Of course I replenished my water supply here too. River streams don't come by so often in the wilderness.
A 2-m tall boulder, one of the few climbing challenges along the trail, facing me.
Easily done--looking down.
This is getting fun. At this point, I still thought of Baxter State Park as somewhat like a more rugged Sunday climb at the gym. I recalled the joke in the Peep Show, where Jeremy declared that "the world is his gym, the mountains, the rivers", whereupon Mark concurred, "The world is my gym too, well, just that little bit where it is actually a gym." That's the polarity between country and city life.
Wildlife--I mustn't forget to photograph the wildlife I encountered along the way.
Slowly the treeline becomes more exposed. I think I am halfway up the mountain already.
The scenery gets more breathtaking as I go higher up.
More wildlife.
I am soon up among the clouds. I expended approximately 5 hours of non-stop hiking to get to this far. Everything goes to plan. This is still a stroll in Central Park.
Steep climb
Uh-oh. The steepest climb yet. I think it was a 2.5-m climb here. There was no other way but to somehow haul myself up. Only after much difficulty, including throwing my 2 bags over the top, could I actually overcome the boulders here.
After doing a few more 2-m haul-ups, I soon realised that it's not so easy after all. Looking down, I was thinking, oh my gosh, I am actually CLIMBING now! Quelling my fears, I kept telling myself, "C'mon, you've done all this before at the Kallang gym."
...Don't look down.
And a new fear supplanted the old ones--the fear of falling. This particular fear of falling is quite unlike that encountered in roller-coaster rides. It is as if the sheer intensity of a roller-coaster ride gets diffused across time, resulting in a less acute but no less palpable throbbing of the heart. It doesn't matter how high you go, because by the time you climb to a certain height, it doesn't make a difference to your brittle sack of flesh anymore. I was thinking, the Helon Taylor "Central Park" trail must have ended, and I must be on this so-called Knife Edge already. If so, then I must be near the peak already.
Is over yonder the peak? No it isn't.
Sometimes you couldn't see over yonder, and you thought that what you saw was the peak. You hastily scramble up, only to see yet another such mound, and yet another, and yet another. It's beginning to take a toll on my physique.
Taking a break. I'm not alone in getting tired from all these humps.
Spiders.
Wildlife shots indicate my generally high state of morale, for I still have it in me to find the mood, not to mention the energy, to observe the wildlife (mostly insects, unfortunately) around me. For a while, I was worried about snakes lurking beneath the undergrowth. But bah...none whatsoever.
This is getting a bit hardcore now. Not unlike one of those fearsome obstacles you have to overcome in those Nintendo games in order to progress to the next stage. I was thinking, hmm, should I just give up and turn back? At this moment, the choice still lies with me, because I had hiked over what is not too difficult to backtrack--a gentle slope punctuated by some large boulder climbs.
It was really tough getting up that wall, but I kept telling myself, this must be the Knife Edge, and I must be nearing the end of my journey. I was elated to see a signpost upon scaling that final rockface, only to realise it's not the Knife Edge. It was only the Helon Taylor Trail that I had completed, and it's already 11am. I had taken 6 hours to trek just 3.2 miles? That must be terribly slow by anybody's standards. And in order to get to the real peak, Baxter Peak, I have to trek across a 1.5-mile-long ridge called the Knife Edge.
Signpost that says Pamola Peak (not Katahdin), and gently points Katahdin-bound hikers to what lies to their left...
...the Knife Edge.
to be continued...soon...
## Saturday, October 24, 2009
### BNP Nick Griffin on BBC Question Time
I don't know anything about British politics, had never heard of the British National Party (BNP), much less of their leader Nick Griffin and his extremist view that Britain should remain fundamentally white. While white supremacy is nothing new, what is refreshing is that the BBC has given him an opportunity to air his views in public. Giving white supremacy any sort of attention, much less on prime-time television, is a very dangerous affair, and the controversy had been brewing for some time in the Financial Times, so I decided to check out what the whole deal was about.
And what transpired from the video I watched was, in my opinion, a triumph of free speech and democracy, where ideologies and arguments are allowed to stand or fall on their own merits. Against a panel of admittedly very illustrious opponents, a more oratorically gifted Nick Griffin (an Obama with that Hitler moustache?) could have grasped control of the stage and turned the tables against the incumbents. Instead he hemmed and hawed, backtracked many times and was reduced to nervous laughter, which drew swift and sharp rebuttals ("Why are you smiling? It's not a particularly funny matter."). The straw men he built over the course of his political career, denying the Holocaust for example, cosying up to the Ku Klux Klan for example, were admittedly his major liabilities. His rambling ways betrayed a complete lack of clarity of thought.
But of course, the BBC must have known the outcome in advance. They strategised right down to the last detail--why else invite an American black woman onto the panel, who would be both an academic and moral authority to speak on the Ku Klux Klan--to milk maximum humiliation for all Nick Griffin was worth. The trojan horse was delivered, and the bait was taken. The only person there to defray the heat was hapless Jack Straw, UK Home Affairs Minister, who was being blamed for giving birth to the BNP through 12 years of lax immigration laws. So we have a curious case of unwilling father and bastard son sitting uncomfortably side by side. The 2 women panelists came off with their reputations enhanced. You wouldn't want Sayeeda Warsi sitting opposite you in any debate competition. Eloquent and displaying a sort of economic rationale that is difficult to refute--"this is no longer a race issue, but a resources issue"--she is one daunting opponent. Bonnie Greer, disarmingly humourous and chummy with her snide comments, is just danger.
Add to the mix an engaging and at times emotional audience, and a sprinkling of beautiful people, this is as fun as politics can ever be.
Last note: if the programme had set out to humiliate Nick Griffin, it comfortably met its objectives. But I don't think anybody from either side of the ideological divide--liberals and supremacists alike--would be convinced to switch camp on the sole basis of a TV programme. Thoughts are entrenched in people over the course of a lifetime. The brain entertains a million thoughts a day, but most of them are just repetitions in various guises, and only reinforce the structure of the brain, compelling the next thought that comes along to travel along well-worn synapses. It is less a philosophical problem than a biological one. It takes enormous commitment and intellectual honesty to come clean with oneself and reorganise one's own house of thoughts. Far easier to let the cobwebs manifest themselves in their own ways, rightly or wrongly, and allow ourselves to be forever entangled in our own convoluted web of thoughts.
## Thursday, October 22, 2009
### On Mr Market
I used to think Mr Market was some kind of omnipotent master of the universe, and we were mere slaves to his whims and fancies. But as long as we feared him accordingly and gave him due respect, we would be shown mercy. The streets are littered with the bodies--hanged, drawn and quartered no less--of those who have been victims of his occasional but unspeakable wrath. They serve as stark warnings to the survivors.
But Mr Market is an elusive one. Nobody knows who he is, or has even looked him in the eye before. Some claim to be able to communicate with him through tongues. We call these people chartists. Those who are unable to comprehend these strange languages resort to vague ideas of superstition. So superstitious was I about Mr Market that I worship him in my mind, and refused to even mutter anything that would be construed as disrespect to Mr Market, much like how people do not speak ill of the dead, or of deities. A book I read warned just that, that we shouldn't speak of "fighting the market", for it will hit back, and hit hard. You should think of Mr Market in more benevolent terms, as a figure who will conspire to fulfill your wishes so long as you go with the flow. It's more Zen than biblical.
But now I know better. Mr Market is just indifferent. You can say anything you want, you can do anything you want. It doesn't matter. You can bet against the Black Swans all your life and retire rich. Others blow up even before they start. Go ahead, be so mighty impudent once in a while and remove your stop-losses just before it hits. Don't worry. Nobody is going to come up to you with some sort of a probability bill to pay afterwards, and certainly not Mr Market. He is just a psychological construct. He is just like God.
## Saturday, October 17, 2009
### Fighting the world is tough
A great story on the struggles of molecular scientist Ted Steele, who seems to suffer setback after setback in his quest to convince the world of his ideas.
# Euler's laws of motion
In classical mechanics, Euler's laws of motion are equations of motion which extend Newton's laws of motion for a point particle to rigid body motion.[1] They were formulated by Leonhard Euler about 50 years after Isaac Newton formulated his laws.
## Overview
### Euler's first law
Euler's first law states that the linear momentum of a body, p (also denoted G) is equal to the product of the mass of the body m and the velocity of its center of mass vcm: [1][2][3]
$\mathbf p = m \mathbf v_{\rm cm}$.
Internal forces between the particles that make up a body do not contribute to changing the total momentum of the body.[4] The law is also stated as:[4]
$\mathbf F = m \mathbf a_{\rm cm}$.
where acm = dvcm/dt is the acceleration of the centre of mass and F = dp/dt is the total applied force on the body. This is just the time derivative of the previous equation (m is a constant).
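Euler's first law is easy to illustrate with a discrete particle system: the total momentum $\sum_i m_i \mathbf v_i$ equals the total mass times the centre-of-mass velocity by construction. A minimal sketch with my own example values (not from the article):

```python
# Total momentum of a particle system vs. (total mass) * v_cm.
masses = [1.0, 2.0, 3.0]
velocities = [(1.0, 0.0), (0.0, 2.0), (-1.0, 1.0)]

px = sum(m * vx for m, (vx, _) in zip(masses, velocities))
py = sum(m * vy for m, (_, vy) in zip(masses, velocities))

M = sum(masses)
vcm = (px / M, py / M)  # centre-of-mass velocity, by definition

# p = m * v_cm -- Euler's first law for this system
assert abs(px - M * vcm[0]) < 1e-12
assert abs(py - M * vcm[1]) < 1e-12
```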
### Euler's second law
Euler's second law states that the rate of change of angular momentum L (also denoted H) about a point that is fixed in an inertial reference frame, or is the mass center of the body, is equal to the sum of the external moments of force (torques) M (also denoted τ or Γ) about that point:[1][2][3]
$\mathbf M = {d\mathbf L \over dt}$.
For rigid bodies translating and rotating in only two dimensions, this can be expressed as:[5]

$\mathbf M = m\, \mathbf r_{\rm cm} \times \mathbf a_{\rm cm} + I \boldsymbol{\alpha}$,
where rcm is the position vector of the center of mass with respect to the point about which moments are summed, α is the angular acceleration of the body, and I is the moment of inertia. See also Euler's equations (rigid body dynamics).
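As a consistency check (my own example, not from the article), the 2-D equation above reproduces the familiar fixed-axis result $M = I_{\rm pivot}\,\alpha$ for a uniform rod swung about one end:

```python
# Uniform rod, mass m, length L, pivoted at one end and lying along x.
# Check that m*(r_cm x a_cm) + I_cm*alpha equals I_pivot*alpha.
m, L = 2.0, 3.0
alpha, omega = 1.5, 0.7                   # angular acceleration and velocity

r = (L / 2, 0.0)                          # centre of mass, from the pivot
a = (-omega**2 * r[0], alpha * r[0])      # centripetal + tangential parts
cross_z = r[0] * a[1] - r[1] * a[0]       # z-component of r_cm x a_cm

I_cm = m * L**2 / 12                      # moment of inertia about the c.m.
M_pivot = m * cross_z + I_cm * alpha

I_pivot = m * L**2 / 3                    # parallel-axis theorem
assert abs(M_pivot - I_pivot * alpha) < 1e-12
```

The centripetal part of the acceleration drops out of the cross product because it is parallel to $\mathbf r_{\rm cm}$.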
## Explanation and derivation
The density of internal forces at every point in a deformable body is not necessarily equal, i.e. there is a distribution of stresses throughout the body. This variation of internal forces throughout the body is governed by Newton's laws of conservation of linear momentum and angular momentum, which normally are applied to a mass particle but are extended in continuum mechanics to a body of continuously distributed mass. For continuous bodies these laws are called Euler's laws of motion. If a body is represented as an assemblage of discrete particles, each governed by Newton's laws of motion, then Euler's equations can be derived from Newton's laws. Euler's equations can, however, be taken as axioms describing the laws of motion for extended bodies, independently of any particle structure.[6]
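The claim that internal forces drop out can be illustrated for a single Newton third-law pair of central forces, whose net force and net torque both vanish (an illustrative sketch with arbitrary numbers):

```python
# A third-law pair of central forces between two particles:
# net force and net torque (about the origin) are both zero.
r1, r2 = (1.0, 2.0), (4.0, 6.0)
d = (r2[0] - r1[0], r2[1] - r1[1])   # from particle 1 towards particle 2
k = 0.7                              # arbitrary force magnitude factor
f12 = (k * d[0], k * d[1])           # force on particle 1 (attractive)
f21 = (-f12[0], -f12[1])             # equal and opposite reaction

net_force = (f12[0] + f21[0], f12[1] + f21[1])
net_torque = (r1[0] * f12[1] - r1[1] * f12[0]) \
           + (r2[0] * f21[1] - r2[1] * f21[0])

assert net_force == (0.0, 0.0)
assert abs(net_torque) < 1e-12
```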
The total body force applied to a continuous body with mass m, mass density ρ, and volume V, is the volume integral of the body force density over the volume of the body:
$\mathbf F_B=\int_V\mathbf b\,dm = \int_V\mathbf b\rho\,dV$
where b is the force acting on the body per unit mass (dimensions of acceleration, misleadingly called the "body force"), and dm = ρdV is an infinitesimal mass element of the body.
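When ρ and b are uniform, this integral reduces to F_B = m b. A discretized numerical check (the density, field strength, and cube size below are assumed illustrative values):

```python
# Discretize a uniform cube into small volume elements and sum b*rho*dV.
# Assumed values: rho = 1000 kg/m^3, b = -9.81 m/s^2 (gravity), side 0.1 m.
rho, b, L = 1000.0, -9.81, 0.1
n = 20                                # elements per side
dV = (L / n) ** 3                     # volume of each element
F_B = sum(b * rho * dV for _ in range(n ** 3))   # volume integral of b*rho dV

m = rho * L ** 3                      # total mass = 1 kg
assert abs(F_B - b * m) < 1e-9        # matches F_B = m*b for uniform fields
```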
Body forces and contact forces acting on the body lead to corresponding moments of force (torques) relative to a given point. Thus, the total applied torque M about the origin is given by
$\mathbf M= \mathbf M_B + \mathbf M_C$
where MB and MC respectively indicate the moments caused by the body and contact forces.
Thus, the sum of all applied forces and torques (with respect to the origin of the coordinate system) in the body can be given as the sum of a volume and surface integral:
$\mathbf F = \int_V \mathbf a\,dm = \int_V \mathbf a\rho\,dV = \int_S \mathbf{t} dS + \int_V \mathbf b\rho\,dV$
$\mathbf M = \int_S \mathbf r \times \mathbf t dS + \int_V \mathbf r \times \mathbf b\rho\,dV.$
where t = t(n) is called the surface traction, integrated over the surface of the body, and n denotes the unit normal vector to the surface S, directed outwards.
Let the coordinate system (x1, x2, x3) be an inertial frame of reference, r be the position vector of a point particle in the continuous body with respect to the origin of the coordinate system, and v = dr/dt be the velocity vector of that point.
Euler’s first axiom or law (law of balance of linear momentum or balance of forces) states that in an inertial frame the time rate of change of linear momentum p of an arbitrary portion of a continuous body is equal to the total applied force F acting on the considered portion, and it is expressed as
\begin{align} \frac{d\mathbf p}{dt} &= \mathbf F \\ \frac{d}{dt}\int_V \rho\mathbf v\,dV&=\int_S \mathbf t dS + \int_V \mathbf b\rho \,dV. \\ \end{align}
Euler’s second axiom or law (law of balance of angular momentum or balance of torques) states that in an inertial frame the time rate of change of angular momentum L of an arbitrary portion of a continuous body is equal to the total applied torque M acting on the considered portion, and it is expressed as
\begin{align} \frac{d\mathbf L}{dt} &= \mathbf M \\ \frac{d}{dt}\int_V \mathbf r\times\rho\mathbf v\,dV&=\int_S \mathbf r \times \mathbf t dS + \int_V \mathbf r \times \mathbf b\rho\,dV. \\\end{align}
The derivatives of p and L are material derivatives. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 10, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9951801896095276, "perplexity": 356.87394372775293}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010855566/warc/CC-MAIN-20140305091415-00071-ip-10-183-142-35.ec2.internal.warc.gz"} |
https://xb.sicau.edu.cn/CN/10.16036/j.issn.1000-2650.2016.01.004 |
### Effect of Nitrogen Fertilizer on the Quality of Kongyu 131 in the Seed Setting Stage
1. College of Agronomy, Heilongjiang Bayi Agricultural University / Key Laboratory of Crop Cultivation Technology in Cold Regions for Heilongjiang Provincial Colleges and Universities, Daqing 163319, Heilongjiang, China
• Received: 2015-09-28; Published: 2016-02-29
• Corresponding author: QIAN Yong-de, Ph.D., professor, whose main research concerns the theory and physiological basis of high rice yield; E-mail: qyd1973@126.com
• About the author: WANG Long, master's degree candidate.
• Funding:
Major Science and Technology Bidding Project of Heilongjiang Province (GA14B102-03); Doctoral Start-up Fund of Heilongjiang Bayi Agricultural University (research on key techniques for rice seedling raising in cold regions); Science and Technology Research Project of the Heilongjiang Land Reclamation Administration (HNK125B-08-21A)
### Effect of Nitrogen Fertilizer on the Quality of the Kongyu 131 in Seed Setting Stage
WANG Long, ZHONG Wei-jun, ZHANG Li-wei, HUANG Cheng-liang, PAN Shi-ju, JIANG Yu-wei, ZHAO Ting-ting, SONG Ze, ZHOU Jian, QIAN Yong-de
1. Key Laboratory of Crop Cultivation in Cold Regions for Heilongjiang Provincial Colleges and Universities, Heilongjiang Bayi Agricultural University, Daqing 163319, Heilongjiang, China
• Received:2015-09-28 Published:2016-02-29
Abstract: [Objective] The aim of this study was to explore the effects of different nitrogen fertilizer rates on the quality of Kongyu 131 during the grain filling stage and to provide a theoretical basis for reasonable application of nitrogen fertilizer. [Method] The effects of 5 different nitrogen fertilizer rates on the quality of Kongyu 131 were investigated using a randomized block design during the grain filling stage. [Results] The milling quality of Kongyu 131 plotted against nitrogen rate in the seed setting period showed a single-peak curve, with an optimum at a nitrogen rate of 22.5 kg/hm2. The chalky rice rate and chalkiness degree both increased with an increase in nitrogen rate. The protein content showed a single-peak curve and the difference was significant; however, there were no significant differences in amylose and fatty acid contents between treatments. The effect of low nitrogen on rice quality was not obvious, but the comprehensive taste of Kongyu 131 showed a decreasing trend when the nitrogen fertilizer rate was 90 kg/hm2. [Conclusion] In cold regions, Kongyu 131 quality is best when the amount of nitrogen fertilizer is 22.5 kg/hm2.
• S511 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19685757160186768, "perplexity": 17770.047002177984}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00227.warc.gz"} |
https://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-connecting-concepts-through-application/chapter-3-exponents-polynomials-and-functions-chapter-review-exercises-page-288/56 | # Chapter 3 - Exponents, Polynomials and Functions - Chapter Review Exercises - Page 288: 56
Cannot be factored
#### Work Step by Step
$25x^2+81$ These terms do not have any common factors, and there is no formula for factoring a sum of squares over the real numbers, so the expression is completely factored as it stands.
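One way to see this: the quadratic 25x² + 81 has a negative discriminant, so it has no real roots and therefore no real linear factors (a quick sketch):

```python
# Discriminant test for 25x^2 + 81 (coefficients a = 25, b = 0, c = 81).
a, b, c = 25, 0, 81
disc = b * b - 4 * a * c
assert disc < 0          # no real roots => no real linear factors

# Direct check: the expression is strictly positive at every sampled real x.
assert all(25 * x * x + 81 > 0 for x in range(-100, 101))
```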
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34108826518058777, "perplexity": 868.7623627115963}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267158450.41/warc/CC-MAIN-20180922142831-20180922163231-00198.warc.gz"} |
https://www.originlab.com/doc/Tutorials/fill-partial-area-between-curves | # 6.5.2 Fill Partial Area between Function Curves
## Summary
This tutorial shows how to plot two functions and customize the graph by partially filling an area between the two function curves.
Minimum Origin Version Required: 2017 SR0
## What you will learn
• Generate function data using Set Values tool.
• Fill areas between two lines with different colors.
## Steps
This tutorial is associated with the Fill Partial Area between Function Plots folder in the project <Origin EXE Folder>\Samples\Tutorial Data.opj.
### Filling Area Between Parts of Two Curves
To apply different fill colors to two or more portions of the curve, you need to plot curves in segments. In this tutorial, you will learn how to fill an area between curves defined by X <= 1.
1. Open the Tutorial Data.opj and browse to the Fill Partial Area between Function Plots folder. Book2L contains two function curves (Note: To see how to generate a dataset from a function, see the last section of this Tutorial).
2. Select rows 1~36 (-2.5 <= X <= 1) of all three columns in Sheet1 of Book2L and on the menu, click Plot> Basic 2D: Line to plot two lines. The two datasets (lines) are automatically grouped.
3. Now, go back to the worksheet, select rows 36~51 (1 <= X <= 2.5) of all three columns, and hover over the edge of the highlighted area until the cursor changes to a drag cursor. Drag and drop the selected range onto the graph you just created. If prompted to rescale the axes and show all data, choose Yes.
4. Select and delete the legend and the axis title.
5. Double-click on one of the line plots, to open the Plot Details dialog box. Select the 1st plot under the Layer1 node on left panel.
6. Go to the Line tab and below Fill Area Under Curve, select Enable. Set Fill to data plot - Above Below Colors, Data Plot = Next Plot, Fill to = Common X Area, then click Apply. Notice that this action adds a Pattern_Above and a Pattern_Below tab to the dialog box.
7. Go to Group tab, click the line color list in the Details column to select the increment list Candy as below:
8. Go to the Pattern_Above tab and set the fill color of the black line to LT Magenta with a 50% transparency.
9. Go to the Pattern_Below tab and set the fill color below the black line to LT Cyan. (Note that transparency controls for Pattern_Below are dimmed and set to Auto, meaning that the fill will use the same transparency settings as Pattern_Above.)
10. Select the third plot in the left panel, go to Group tab, click the line color list in the Details column to select the increment list Candy as below:
11. Click OK to close the dialog box. The area in between the curves where X<=1 is now filled.
As an alternative to the Plot Details Line tab controls, note that you can select any two plots in the graph layer using the Ctrl key, then apply fills between the selected curves using Mini Toolbar buttons.
### Changing the Axis Range
1. We want to change the display range of X and Y axis. To do this, click on the X axis, and in the popup mini toolbar, click the Axis Scale button to open the Axis Scale dialog as below. Set the display range from -2.5 to 2.5 with Tick Thickness = 2.
Do the same for Y axis to set the Y axis display range from -10.5 to 4 with Tick Thickness = 4.
2. To configure the X and Y axes so that they intersect at (0,0), double-click on the X axis to open the Axis dialog, go to the Line and Ticks tab, and select both the Bottom and Left icons in the left panel of the Axis dialog. Set Axis Position to At Position = 0.
3. Click OK to close the dialog. Delete the axis titles, then select each of the two line groups and set its Width to 2 using the Style toolbar. The graph will then look like this:
### Adding Special Points with Labels to Annotate the Intersections
In the graph above, there are three intersections of two function curves. We want to mark two of them, at X=-2 and X=2 respectively.
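These intersection points can be double-checked outside Origin. The two functions used in this tutorial are y = -x^2 + 3x and y = 2x^3 - x^2 - 5x, and their difference 2x^3 - 8x = 2x(x - 2)(x + 2) vanishes exactly at x = -2, 0, 2 (a quick Python sketch, independent of Origin):

```python
f1 = lambda x: -x**2 + 3*x           # first curve in the tutorial
f2 = lambda x: 2*x**3 - x**2 - 5*x   # second curve in the tutorial

# The difference f2 - f1 = 2x^3 - 8x = 2x(x - 2)(x + 2):
for x in (-2.0, 0.0, 2.0):
    assert abs(f1(x) - f2(x)) < 1e-12   # the curves meet at these x values

# A cubic difference has at most three real roots, so these are all of them.
```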
1. With the Ctrl key pressed, click on the intersection at X=-2 to select this individual point, then right-click on it and select Edit Point to open the Plot Details dialog. You can learn more about how to show and customize an individual point on a graph.
2. In the dialog that opens, you will see a special point showing its row index has been added and selected under the second plot.
• Go to the Symbol tab, customize its style as below:
• Go to the Drop Line tab, enable the vertical drop line and set its style as shown below:
• Go to the Label tab, check the Enable check box to set Label Form to (X,Y) and Font Size to 22
3. Click OK to close this dialog. Repeat step 1 to add another intersection at X=2 (row index = 46).
Then apply the same styles to it. You should end up with a graph like the one shown below:
### Adding Function Formulas and Axis Arrows
1. To hide the axis tick labels at (0,0), open the Axis dialog again, go to the Special Ticks tab, and apply the settings below for the Bottom icon; do the same for the Left icon.
2. To add arrows to the ends of the axes, go to the Line and Ticks tab of the Axis dialog. Select both the Bottom and Left icons in the left panel. Expand the Arrow node, check the Arrow at End checkbox, and set Width to 5.
3. Double-click on X axis to open the Axis dialog. Go to Reference Lines tab, enter 1 in the Reference Lines at Value text box, then click any place in the list table to add a reference line at X=1. Do the settings as below:
Click the Details button to set the line style.
Click OK to close this dialog, and then click OK button to apply the settings and close the Axis dialog too.
You can also use the Add Straight Line tool (opened by selecting the Insert: Straight Line menu) to add such a vertical straight line at X=1.
4. To add the two curves' formulas to the graph, right-click on a blank area and choose Add Text.... Type any character to create a text object, then right-click on it and select Properties from the context menu to open the Object Properties dialog. Enter the first formula in the Text tab.
y=-x\+(2)+3x
y=2x\+(3)-x\+(2)-5x
5. Click the OK button to close the dialog. Add one more text object and open the Text Properties dialog again to enter the second formula above in the Text tab. Click OK again; both formulas are now added to the graph window. Reposition them as needed.
6. Select the Curved Arrow Tool and add two curved arrows to connect formula labels to line plots.
7. Your final graph should look something like this:
### Generating Function Data using Set Values tool
1. Open a new workbook. Choose Add New Columns so that the worksheet has 3 columns.
2. Right-click Col(A) and select Fill Column with: A set of Numbers...
3. In the patternN dialog box, set up the following parameters:
4. Double-click on the F(x) label row of Col(B) to enter in-place edit mode, then enter -1*A^2+3*A as shown below.
Note: Since Origin 2017, a new control, Spreadsheet Cell Notation, has been added and is turned on by default. Origin supports using the column Short Name in column formulas, such as A+1 instead of col(A)+1. For earlier versions, enter -1*col(A)^2+3*col(A) instead.
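For reference, the same three columns can be generated in plain Python (a sketch independent of Origin; the second function is taken from the formulas shown earlier in this tutorial):

```python
# Column A: x from -2.5 to 2.5 in steps of 0.1 (51 rows, as in the worksheet).
colA = [round(-2.5 + 0.1 * i, 10) for i in range(51)]
colB = [-x**2 + 3*x for x in colA]           # same formula as -1*A^2+3*A
colC = [2*x**3 - x**2 - 5*x for x in colA]   # second function curve

assert colA[35] == 1.0   # row 36 of the worksheet is x = 1
assert colA[45] == 2.0   # row 46, the labelled intersection point
```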
5. Highlight col(C) and right-click on it to select Set Column Values from context menu to open Set Values dialog. In this dialog, enter formula and the range definition as below: | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24538572132587433, "perplexity": 2450.0168691370964}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945218.30/warc/CC-MAIN-20230323225049-20230324015049-00704.warc.gz"} |
http://www.computer.org/csdl/trans/tc/2009/07/ttc2009070994-abs.html | Issue No.07 - July (2009 vol.58)
pp: 994-1000
Stef Graillat , Université Pierre et Marie Curie, Paris
ABSTRACT
Several different techniques and software packages aim to improve the accuracy of results computed in a fixed finite precision. Here, we focus on a method to improve the accuracy of the product of floating-point numbers. We show that the computed result is as accurate as if computed in twice the working precision. The algorithm is simple since it only requires addition, subtraction, and multiplication of floating-point numbers in the same working precision as the given data. Such an algorithm can be useful for example to compute the determinant of a triangular matrix and to evaluate a polynomial when represented by the root product form. It can also be used to compute the integer power of a floating-point number.
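The kind of building block behind such algorithms is an error-free transformation of the product: the rounded product x plus a correction term y together represent a·b exactly. Here is a sketch in Python using Dekker/Veltkamp splitting — this illustrates the general technique, not the paper's exact algorithm:

```python
from fractions import Fraction

def split(a):
    """Veltkamp splitting of an IEEE-754 double into two 26-bit halves."""
    c = 134217729.0 * a          # splitting factor 2**27 + 1 for binary64
    x = c - (c - a)
    return x, a - x

def two_prod(a, b):
    """Error-free transformation: a*b == x + y exactly (barring over/underflow)."""
    x = a * b
    ah, al = split(a)
    bh, bl = split(b)
    y = al * bl - (((x - ah * bh) - al * bh) - ah * bl)
    return x, y

# Verify exactness with rational arithmetic:
a, b = 1.0 / 3.0, 0.1
x, y = two_prod(a, b)
assert Fraction(x) + Fraction(y) == Fraction(a) * Fraction(b)
```

Chaining this transformation through a running product, and accumulating the error terms separately, is what makes a compensated product as accurate as if computed in doubled working precision.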
INDEX TERMS
Accurate product, exponentiation, finite precision, floating-point arithmetic, faithful rounding, error-free transformations.
CITATION
Stef Graillat, "Accurate Floating-Point Product and Exponentiation", IEEE Transactions on Computers, vol.58, no. 7, pp. 994-1000, July 2009, doi:10.1109/TC.2008.215
REFERENCES
[1] T. Ogita, S.M. Rump, and S. Oishi, “Accurate Sum and Dot Product,” SIAM J. Scientific Computing, vol. 26, no. 6, pp. 1955-1988, 2005.
[2] S.M. Rump, T. Ogita, and S. Oishi, “Accurate Floating-Point Summation. Part I: Faithful Rounding,” SIAM J. Scientific Computing, vol. 31, no. 1, Oct. 2008.
[3] S. Graillat, N. Louvet, and P. Langlois, “Compensated Horner Scheme,” Research Report 04, Équipe de recherche DALI, Laboratoire LP2A, Université de Perpignan Via Domitia, France, July 2005.
[4] P. Langlois and N. Louvet, “How to Ensure a Faithful Polynomial Evaluation with the Compensated Horner Algorithm,” Proc. 18th IEEE Symp. Computer Arithmetic (ARITH '07), pp. 141-149, 2007.
[5] P. Kornerup, V. Lefevre, and J.-M. Muller, Computing Integer Powers in Floating-Point Arithmetic, arXiv:0705.4369v1 [cs.NA], 2007.
[6] P.H. Sterbenz, Floating-Point Computation. Prentice-Hall, 1974.
[7] IEEE Standard for Binary Floating-Point Arithmetic, ANSI/IEEE Standard 754-1985, New York, IEEE, 1985; reprinted in SIGPLAN Notices, vol. 22, no. 2, pp. 9-25, 1987.
[8] N.J. Higham, Accuracy and Stability of Numerical Algorithms, second ed. SIAM, 2002.
[9] T.J. Dekker, “A Floating-Point Technique for Extending the Available Precision,” Numerische Math., vol. 18, pp. 224-242, 1971.
[10] D.E. Knuth, The Art of Computer Programming, Volume 2, Seminumerical Algorithms, third ed. Addison-Wesley, 1998.
[11] Y. Nievergelt, “Scalar Fused Multiply-Add Instructions Produce Floating-Point Matrix Arithmetic Provably Accurate to the Penultimate Digit,” ACM Trans. Math. Software, vol. 29, no. 1, pp. 27-48, 2003.
[12] C. Jacobi, H.-J. Oh, K.D. Tran, S.R. Cottier, B.W. Michael, H. Nishikawa, Y. Totsuka, T. Namatame, and N. Yano, “The Vector Floating-Point Unit in a Synergistic Processor Element of a Cell Processor,” Proc. 17th IEEE Symp. Computer Arithmetic (ARITH '05), pp. 59-67, 2005.
[13] T. Ogita, S.M. Rump, and S.
Oishi, “Verified Solution of Linear Systems without Directed Rounding,” Technical Report 2005-04, Advanced Research Inst. of Science and Eng., Waseda Univ., 2005. [14] D.H. Bailey, A Fortran-90 Double-Double Library, http://crd.lbl.gov/dhbailey/mpdistindex.html , 2001. [15] C.Q. Lauter, Basic Building Blocks for a Triple-Double Intermediate Format, Research Report RR-5702, INRIA, Sept. 2005. [16] P. Langlois and N. Louvet, More Instruction Level Parallelism Explains the Actual Efficiency of Compensated Algorithms, hal-00165020, version 1, 2007. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8513530492782593, "perplexity": 4209.701189588518}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375096208.17/warc/CC-MAIN-20150627031816-00047-ip-10-179-60-89.ec2.internal.warc.gz"} |
http://mathhelpforum.com/algebra/222268-logarithmic-equation-different-bases.html | # Math Help - Logarithmic equation with different bases
1. ## Logarithmic equation with different bases
Would appreciate some guidance on the problem below:
Solve Log[base 25](6x + 25) + Log[base 5](x + 25) = 5, x a Natural number
To solve I tried converting to a single base using Log[base A]x = Log[base B]x / Log[base B]A which gives:
Log[base 25](6x + 25) = Log[base 5](6x + 25) / Log[base 5]25 = Log[base 5](6x+25) / 2
So the equation becomes:
Log[base 5](6x + 25) / 2 + Log[base 5](x + 25) = 5
Log[base 5](6x + 25) + 2Log[base 5](x + 25) = 10
Log[base 5](6x + 25) + Log[base 5](x + 25)^2 = 10
Log[base 5](6x + 25)(x + 25)^2 = 10
5^10 = (6x +25)(x +25)^2
This seems like a nasty polynomial to solve - is there a simpler approach to the problem?
Corbomite1
2. ## Re: Logarithmic equation with different bases
No, that is exactly what you have to do.
OK, thanks!
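For the record, since x is restricted to the natural numbers, a direct search settles it (a quick sketch; note that 6·100 + 25 = 5^4 and (100 + 25)^2 = 5^6, so the root is exact):

```python
# Search natural numbers for (6x + 25)(x + 25)^2 == 5**10.
target = 5 ** 10            # 9765625
roots = [x for x in range(1, 1000) if (6 * x + 25) * (x + 25) ** 2 == target]
assert roots == [100]       # 625 * 125**2 = 5**4 * 5**6 = 5**10
```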
4. ## Re: Logarithmic equation with different bases
Originally Posted by corbomite1
Would appreciate some guidance on the problem below:
Solve Log[base 25](6x + 25) + Log[base 5](x + 25) = 5, x a Natural number
Here is some advice on notation.
You can learn to post symbols. It really is so easy. And it makes most of us more willing to find out how to help.
This subforum will help you with the code. Once you begin, you quickly learn the code.
[TEX]\log_{25}(6x+25)+\log_{5}(x+25)=5 [/TEX] gives $\log_{25}(6x+25)+\log_5(x+25)=5$
If you click on the “go advanced tab” you should see $\boxed{\Sigma}$ on the tool-bar. That gives the [TEX]..[/TEX] wrap. Your LaTeX code goes between them.
5. ## Re: Logarithmic equation with different bases
Understood, will do, thanks!
So the problem would read $\log_{25}{(6x+25)} + log_5{(x+25)}=5$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8094940185546875, "perplexity": 5632.622424046976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645257890.57/warc/CC-MAIN-20150827031417-00134-ip-10-171-96-226.ec2.internal.warc.gz"} |
https://www.emathzone.com/tutorials/general-topology/continuity-in-topological-spaces.html | # Continuity in Topological Spaces
Let $f$ be a function defined from topological space $X$ to topological space $Y$, then $f$ is said to be continuous at a point $x \in X$ if for every neighborhood $V$ of $f\left( x \right)$, there exists a neighborhood $U$ of $x$, such that $f\left( U \right) \subseteq V$.
In other words, let $X,Y$ be two topological spaces. A function $f:X \to Y$ is said to be continuous at a point $x \in X$ if and only if for every open set $V$, which contains $f\left( x \right) \in Y$, there exists an open set $U$, such that $x \in U \subseteq f^{-1}\left( V \right)$; $f^{-1}\left( V \right)$ is the inverse image of $V$.
It can also be defined as: let $\left( {X,{\tau _X}} \right)$ and $\left( {Y,{\tau _Y}} \right)$ be topological spaces. A function $f:X \to Y$ is said to be a continuous function at a point ${x_o}$ of $X$ if for any neighborhood ${N_Y}$ of $f\left( {{x_o}} \right)$ in $Y$, there is a neighborhood ${N_X}$ of ${x_o}$ in $X$ such that $f\left( {{N_X}} \right) \subseteq {N_Y}$. The function $f:X \to Y$ is said to be a continuous function on $X$ if it is continuous at each point of $X$.
Note: It may be noted that a function $f$ from topological space $X$ to topological space $Y$ is said to be continuous on $X$ if $f$ is continuous at each point of $X$.
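For finite spaces the definition can be checked mechanically via the open-preimage criterion (preimages of open sets are open, which is equivalent to the neighborhood definition above). A small Python sketch with illustrative example spaces, not taken from the text:

```python
# Continuity checker for finite topological spaces; sets are frozensets.
X = {1, 2, 3}
tau_X = {frozenset(), frozenset({1}), frozenset({1, 2}), frozenset(X)}
Y = {'a', 'b'}
tau_Y = {frozenset(), frozenset({'a'}), frozenset(Y)}

def preimage(f, V):
    """Inverse image f^{-1}(V) of a map given as a dict."""
    return frozenset(x for x in f if f[x] in V)

def is_continuous(f, tau_X, tau_Y):
    """f is continuous iff the preimage of every open set is open."""
    return all(preimage(f, V) in tau_X for V in tau_Y)

f = {1: 'a', 2: 'a', 3: 'b'}   # preimage of {'a'} is {1, 2}: open
g = {1: 'b', 2: 'a', 3: 'a'}   # preimage of {'a'} is {2, 3}: not open
assert is_continuous(f, tau_X, tau_Y)
assert not is_continuous(g, tau_X, tau_Y)
```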
Theorems
• A function $f$ from one topological space $X$ into another topological space $Y$ is continuous if and only if for every open set $V$ in $Y$, $f^{-1}\left( V \right)$ is open in $X$.
• A function $f$ from one topological space $X$ into another topological space $Y$ is continuous if and only if for every closed set $C$ in $Y$, $f^{-1}\left( C \right)$ is closed in $X$.
• If $X$ and $Y$ are topological spaces, then a function $f:X \to Y$ is continuous on $X$ if and only if for any subset $A$ of $X$, $f\left( {\overline A } \right) \subseteq \overline {f\left( A \right)}$.
• If $X$ and $Y$ are topological spaces, then a function $f:X \to Y$ is continuous on $X$ if and only if for any subset $A$ of $X$, $f^{-1}\left( {{A^o}} \right) \subseteq {\left[ {f^{-1}\left( A \right)} \right]^o}$.
• If $X$ is an arbitrary topological space and $Y$ is an indiscrete topological space, then every function $f:X \to Y$ is a continuous function on $X$.
• Let $X,Y$ and $Z$ be topological spaces. If $f:X \to Y$ and $g:Y \to Z$ are continuous mappings, then $g \circ f:X \to Z$ is continuous. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9891059994697571, "perplexity": 19.38492574891204}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540536855.78/warc/CC-MAIN-20191212023648-20191212051648-00220.warc.gz"} |
http://mathhelpforum.com/advanced-algebra/145191-isomorphism-question-print.html | # Isomorphism Question
• May 17th 2010, 04:21 PM
wutang
Isomorphism Question
Let n be an even integer. Prove that Dn/Z(Dn) is isomorphic to
D(n/2).
• May 17th 2010, 08:18 PM
tonio
Quote:
Originally Posted by wutang
Let n be an even integer. Prove that Dn/Z(Dn) is isomorphic to D(n/2).
Hints: if $D_{2n}=\left\{a,b\;;\;a^2=b^n=1\,,\,aba=b^{-1}=b^{n-1}\right\}$ , then:
1) $Z\left(D_{2n}\right)=\{1,b^{n/2}\}$
2) $D_{2n}/Z\left(D_{2n}\right)=\left\{\overline{a}\,,\,\overline{b}\;;\;\overline{a}^2=\overline{b}^{n/2}=\overline{1}\,,\,\overline{a}\overline{b}\overline{a}=\overline{b}^{-1}\right\}$ , with $\overline{x}:=xZ\left(D_{2n}\right)\in D_{2n}/Z\left(D_{2n}\right)\,,\,x\in D_{2n}$
Tonio
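These hints can be sanity-checked by brute force for a small case. The sketch below (illustrative code, representing elements as b^k a^f and writing D_n for the dihedral group of order 2n, as in the original question) verifies that D_6/Z(D_6) is isomorphic to D_3:

```python
from itertools import permutations

def dihedral(n):
    """Elements (k, f) = b^k a^f of D_n (order 2n), with a b a = b^-1."""
    return [(k, f) for k in range(n) for f in range(2)]

def mul(n):
    def m(x, y):
        (k1, f1), (k2, f2) = x, y
        # b^k1 a^f1 * b^k2 a^f2 = b^(k1 +/- k2) a^(f1+f2), since a b = b^-1 a
        return ((k1 + (k2 if f1 == 0 else -k2)) % n, (f1 + f2) % 2)
    return m

def center(els, m):
    return [z for z in els if all(m(z, g) == m(g, z) for g in els)]

def quotient_table(els, m, Z):
    """Multiplication table of the cosets of the (central, hence normal) Z."""
    cosets = sorted({frozenset(m(g, z) for z in Z) for g in els}, key=sorted)
    idx = {g: i for i, C in enumerate(cosets) for g in C}
    return [[idx[m(next(iter(cosets[i])), next(iter(cosets[j])))]
             for j in range(len(cosets))] for i in range(len(cosets))], cosets

def isomorphic(t1, t2):
    """Brute-force search for a bijection that is a homomorphism."""
    k = len(t1)
    if len(t2) != k:
        return False
    return any(all(p[t1[i][j]] == t2[p[i]][p[j]]
                   for i in range(k) for j in range(k))
               for p in permutations(range(k)))

n = 6
els, m = dihedral(n), mul(n)
Z = center(els, m)
assert sorted(Z) == [(0, 0), (3, 0)]    # Z(D_6) = {1, b^3}, as in hint 1)
q_table, cosets = quotient_table(els, m, Z)

els2, m2 = dihedral(n // 2), mul(n // 2)
idx2 = {g: i for i, g in enumerate(els2)}
t2 = [[idx2[m2(g, h)] for h in els2] for g in els2]
assert isomorphic(q_table, t2)          # D_6 / Z(D_6) is isomorphic to D_3
```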
• May 19th 2010, 07:27 PM
wutang
It should say that Dn/Z(Dn) is isomorphic to D(n/2). I understand your definition for Dn/Z(Dn), but I don't get how to set up the isomorphism. I think I should use the fact that any group generated by a pair of elements of order 2 is dihedral to get the isomorphism from Dn/Z(Dn) to D(n/2) ?
• May 19th 2010, 07:41 PM
wutang
never mind, I think I just have to play with the elements in Dn/Z(Dn) until I get it to look that D(n/2). | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9348580241203308, "perplexity": 1204.3714264602986}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400380355.69/warc/CC-MAIN-20141119123300-00005-ip-10-235-23-156.ec2.internal.warc.gz"} |
https://www.surajx.in/tags/automated-ml/ | Bayesian Optimization - Part 1: Stochastic Processes
A mathematical primer to automated Hyperparameter tuning using Bayesian Optimization.
Suraj Narayanan Sasikumar
Reinforcement Learning • Machine Learning • AI Safety
Research | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8577475547790527, "perplexity": 27756.276557446814}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027322160.92/warc/CC-MAIN-20190825000550-20190825022550-00367.warc.gz"} |
http://eprints.iisc.ernet.in/7610/ | High room-temperature hole mobility in $Ge_{0.7}Si_{0.3}/Ge/Ge_{0.7}Si_{0.3}$ modulation-doped heterostructures
Madhavi, S and Venkataraman, V (2001) High room-temperature hole mobility in $Ge_{0.7 Si_0.3}/Ge/Ge_{0.7}Si_{0.3}$ modulation-doped heterostructures. In: Journal of Applied Physics, 89 (4). pp. 2497-2499.
Abstract
Modulation-doped two-dimensional hole gas structures consisting of a strained germanium channel on relaxed $Ge_{0.7}Si_{0.3}$ buffer layers were grown by molecular-beam epitaxy. Sample processing was optimized to substantially reduce the contribution from the parasitic conducting layers. Very high Hall mobilities of $1700\ cm^2/V\,s$ for holes were observed at 295 K, which are the highest reported to date for any kind of p-type silicon-based heterostructure. Hall measurements were carried out from 13 to 300 K to determine the temperature dependence of the mobility and carrier concentration. The carrier concentration at room temperature was $7.9\times10^{11}\ cm^{-2}$ and decreased by only 26% at 13 K, indicating very little parallel conduction. The high-temperature mobility obeys a $T^{-\alpha}$ behavior with $\alpha \approx 2$, which can be attributed to intraband optical phonon scattering.
Item Type: Journal Article Copyright of this article belongs to American Institute of Physics. Division of Physical & Mathematical Sciences > Physics 19 Jan 2007 19 Sep 2010 04:29 http://eprints.iisc.ernet.in/id/eprint/7610
View Item | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6422773599624634, "perplexity": 4796.227936804732}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510266894.52/warc/CC-MAIN-20140728011746-00144-ip-10-146-231-18.ec2.internal.warc.gz"} |
https://blogs.ams.org/mathgradblog/tag/algebraic-geometry/ | # Tag Archives: Algebraic geometry
## What is an Infinitesimal?
A guest post from Reginald Anderson at Kansas State University. First-time learners of calculus often struggle with the notion of an infinitesimal, and considering $\frac{dy}{dx}$ literally as a fraction can lead students astray in Calculus III and differential equations, when …
## A Pretty Lemma About Prime Ideals and Products of Ideals
I was trying to prove a theorem in algebraic geometry which basically held if and only if this lemma held. Here’s the lemma: Lemma: Given any ring $A$, a prime ideal $\mathfrak{p} \subset A$, and a finite collection of ideals …
Posted in Algebra, Algebraic Geometry, Math | Tagged , , | 3 Comments | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3741951882839203, "perplexity": 1841.3654078321556}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573667.83/warc/CC-MAIN-20220819100644-20220819130644-00645.warc.gz"} |
https://www.physicsforums.com/threads/uncertainty-principle-and-the-size-of-an-atom.324554/ | # Uncertainty Principle and the size of an atom
1. Jul 11, 2009
### nobahar
Hey,
Sorry, but I have a question on the uncertainty principle to join the many others.
Just reading a book on physics, and it says that, as a result of the Heisenberg uncertainty principle, if the proton and electron were confined to the same volume of space, the electron would be travelling about 2,000 times faster, as the proton is about 2,000 times more massive. How is this a consequence of the uncertainty principle? It must entail the momentum and the position, but I don't see how.
2. Jul 11, 2009
### Naty1
$m\,\Delta v\,\Delta x$ greater than $h$, says it all...
3. Jul 11, 2009
### gabbagabbahey
First, it is incorrect to say that the electron will be travelling 2000 times faster; the correct statement would be that the uncertainty in the electron's speed is about 2000 times greater than the uncertainty in the proton's speed.
By confining the proton and the electron to the same volume, you are essentially saying that the uncertainty in position is the same for both particles (i.e. $\Delta x_{\text{electron}}=\Delta x_{\text{proton}}$ )
So, if you assume that $\Delta x_e\Delta p_e=\frac{\hbar}{2}$ and $\Delta x_p\Delta p_p=\frac{\hbar}{2}$ then you have:
$$\Delta x_e\Delta p_e=\Delta x_p\Delta p_p$$
$$\implies \Delta p_{\text{electron}}=\Delta p_{\text{proton}}$$
Then you simply use the fact that $\Delta p_e=m_e\Delta v_e$ and that $\Delta p_p=m_p\Delta v_p$ (since you presumably know the masses of the electron and proton exactly, the uncertainty in their momenta is due entirely to the uncertainty in their speeds) and you get
$$\Delta v_e =\frac{m_p}{m_e} \Delta v_p \approx 2000\Delta v_p$$
Last edited: Jul 11, 2009
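To put numbers to the result above: with equal position uncertainties, the momentum uncertainties are equal, so the speed-uncertainty ratio is just the mass ratio $m_p/m_e \approx 1836$ (the book's "2,000" is a round figure). A quick numerical sketch in Python, using rounded CODATA masses:

```python
# Equal confinement volume => equal position uncertainty => equal momentum
# uncertainty (Delta x * Delta p ~ hbar/2 for both particles), so the
# speed-uncertainty ratio is simply the mass ratio.

M_PROTON = 1.672622e-27    # kg (rounded CODATA value)
M_ELECTRON = 9.109384e-31  # kg (rounded CODATA value)

def speed_uncertainty_ratio():
    """Delta v_e / Delta v_p for equal Delta x (hence equal Delta p)."""
    return M_PROTON / M_ELECTRON

ratio = speed_uncertainty_ratio()
print(f"Delta v_e / Delta v_p = {ratio:.0f}")  # ~1836; the book rounds to 2000
```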
4. Jul 11, 2009
### dave_baksh
gabba> you mean v subscript p in your last line of working
5. Jul 11, 2009
### gabbagabbahey
Yes, thank you. I've edited my post.
6. Jul 13, 2009
### nobahar
Thanks Gabba,
That's extremely clear and awesome.
Thanks! | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9150141477584839, "perplexity": 568.1928889317378}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948542031.37/warc/CC-MAIN-20171214074533-20171214094533-00151.warc.gz"} |
http://www.thefullwiki.org/Current_division | # Current division: Wikis
# Encyclopedia
(Redirected to Current divider article)
Figure 1: Schematic of an electrical circuit illustrating current division. The notation RT refers to the total resistance of the circuit to the right of resistor RX.
In electronics, a current divider is a simple linear circuit that produces an output current (IX) that is a fraction of its input current (IT). Current division refers to the splitting of current between the branches of the divider. The currents in the various branches of such a circuit will always divide in such a way as to minimize the total energy expended. This can be shown by calculus.
The formula describing a current divider is similar in form to that for the voltage divider. However, the ratio describing current division places the impedance of the unconsidered branches in the numerator, unlike voltage division, where the considered impedance is in the numerator. This is because in current dividers the total energy expended is minimized, so current favors the paths of least impedance, hence the inverse relationship with impedance. The voltage divider, on the other hand, satisfies Kirchhoff's voltage law: the voltages around a loop must sum to zero, so the voltage drops divide in direct proportion to the impedances.
To be specific, if two or more impedances are in parallel, the current that enters the combination will be split between them in inverse proportion to their impedances (according to Ohm's law). It also follows that if the impedances have the same value the current is split equally.
## Resistive divider
A general formula for the current IX in a resistor RX that is in parallel with a combination of other resistors of total resistance RT is (see Figure 1):
$I_X = \frac{R_T}{R_X + R_T}I_T \$
where IT is the total current entering the combined network of RX in parallel with RT. Notice that when RT is composed of a parallel combination of resistors, say R1, R2, ... etc., then the reciprocal of each resistor must be added to find the total resistance RT:
$\frac {1}{R_T} = \frac {1} {R_1} + \frac {1} {R_2} + \frac {1}{R_3} + ... \ .$
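A short numerical sketch of the two formulas above, with illustrative values (not from the article): IT = 3 A entering RX = 2 Ω in parallel with R1 = 3 Ω and R2 = 6 Ω gives RT = 2 Ω and IX = 1.5 A.

```python
def parallel(*resistances):
    """Total resistance of resistors in parallel (reciprocal sum)."""
    return 1.0 / sum(1.0 / r for r in resistances)

def current_through(r_x, r_t, i_t):
    """Current divider: I_X = R_T / (R_X + R_T) * I_T."""
    return r_t / (r_x + r_t) * i_t

# Example: I_T = 3 A splits between R_X = 2 ohm and R_T = (3 ohm || 6 ohm) = 2 ohm
r_t = parallel(3.0, 6.0)             # 2.0 ohm
i_x = current_through(2.0, r_t, 3.0)
print(i_x)  # 1.5 A; the remaining 1.5 A flows through the R_T branch
```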
## General case
Although the resistive divider is most common, the current divider may be made of frequency dependent impedances. In the general case the current IX is given by:
$I_X = \frac{Z_T} {Z_X + Z_T}I_T \ ,$
Instead of using impedances, the current divider rule can be applied just like the voltage divider rule if admittance (the inverse of impedance) is used.
$I_X = \frac{Y_X} {Y_{Total}}I_T$
Take care to note that YTotal is a straightforward addition, not the sum of the inverses inverted (as you would do for a standard parallel resistive network). For Figure 1, the current IX would be
$I_X = \frac{Y_X} {Y_{Total}}I_T = \frac{\frac{1}{R_X}} {\frac{1}{R_X} + \frac{1}{R_1} + \frac{1}{R_2} + \frac{1}{R_3}}I_T$
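The admittance form can be checked against the impedance-ratio form; the sketch below uses made-up values (RX = 2 Ω in parallel with 3 Ω and 6 Ω, IT = 3 A) and confirms the two rules agree:

```python
def divider_by_admittance(r_x, others, i_t):
    """I_X = Y_X / Y_total * I_T, where Y_total is a plain sum of admittances."""
    y_x = 1.0 / r_x
    y_total = y_x + sum(1.0 / r for r in others)
    return y_x / y_total * i_t

def divider_by_impedance(r_x, others, i_t):
    """I_X = R_T / (R_X + R_T) * I_T, with R_T the parallel combination of the others."""
    r_t = 1.0 / sum(1.0 / r for r in others)
    return r_t / (r_x + r_t) * i_t

i1 = divider_by_admittance(2.0, [3.0, 6.0], 3.0)
i2 = divider_by_impedance(2.0, [3.0, 6.0], 3.0)
print(i1, i2)  # both 1.5 A: the two rules are equivalent
```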
### Example: RC combination
Figure 2: A low pass RC current divider
Figure 2 shows a simple current divider made up of a capacitor and a resistor. Using the formula above, the current in the resistor is given by:
$I_R = \frac {\frac{1}{j \omega C}} {R + \frac{1}{j \omega C} }I_T$
$= \frac {1} {1+j \omega CR} I_T \ ,$
where ZC = 1/(jωC) is the impedance of the capacitor.
The product τ = CR is known as the time constant of the circuit, and the frequency for which ωCR = 1 is called the corner frequency of the circuit. Because the capacitor has zero impedance at high frequencies and infinite impedance at low frequencies, the current in the resistor remains at its DC value IT for frequencies up to the corner frequency, whereupon it drops toward zero for higher frequencies as the capacitor effectively short-circuits the resistor. In other words, the current divider is a low pass filter for current in the resistor.
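The low-pass behaviour is easy to see with complex arithmetic; the component values below are illustrative (R = 1 kΩ, C = 1 µF puts the corner at ωCR = 1, i.e. ω = 1000 rad/s):

```python
# Fraction of the input current flowing in the resistor of Figure 2:
# I_R / I_T = 1 / (1 + j*omega*C*R)

def resistor_current_fraction(omega, r, c):
    return 1.0 / (1.0 + 1j * omega * c * r)

R, C = 1e3, 1e-6  # 1 kohm, 1 uF  ->  corner at omega = 1/(C*R) = 1000 rad/s

low = abs(resistor_current_fraction(10.0, R, C))      # well below corner: ~1
corner = abs(resistor_current_fraction(1e3, R, C))    # at corner: 1/sqrt(2)
high = abs(resistor_current_fraction(1e5, R, C))      # well above corner: ~0.01
print(low, corner, high)
```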
Figure 3: A current amplifier (gray box) driven by a Norton source (iS, RS) and with a resistor load RL. Current divider in blue box at input (RS,Rin) reduces the current gain, as does the current divider in green box at the output (Rout,RL)
The gain of an amplifier generally depends on its source and load terminations. Current amplifiers and transconductance amplifiers are characterized by a short-circuit output condition, and current amplifiers and transresistance amplifiers are characterized using ideal infinite impedance current sources. When an amplifier is terminated by a finite, non-zero termination, and/or driven by a non-ideal source, the effective gain is reduced due to the loading effect at the output and/or the input, which can be understood in terms of current division.
Figure 3 shows a current amplifier example. The amplifier (gray box) has input resistance Rin and output resistance Rout and an ideal current gain Ai. With an ideal current driver (infinite Norton resistance) all the source current iS becomes input current to the amplifier. However, for a Norton driver a current divider is formed at the input that reduces the input current to
$i_{i} = \frac {R_S} {R_S+R_{in}} i_S \ ,$
which clearly is less than iS. Likewise, for a short circuit at the output, the amplifier delivers an output current io = Ai ii to the short-circuit. However, when the load is a non-zero resistor RL, the current delivered to the load is reduced by current division to the value:
$i_L = \frac {R_{out}} {R_{out}+R_{L}} A_i i_{i} \ .$
Combining these results, the ideal current gain Ai realized with an ideal driver and a short-circuit load is reduced to the loaded gain Aloaded:
$A_{loaded} =\frac {i_L} {i_S} = \frac {R_S} {R_S+R_{in}} \frac {R_{out}} {R_{out}+R_{L}} A_i \ .$
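Plugging illustrative numbers into the loaded-gain expression (all component values assumed, not from the article):

```python
def loaded_current_gain(a_i, r_s, r_in, r_out, r_l):
    """A_loaded = [R_S/(R_S+R_in)] * [R_out/(R_out+R_L)] * A_i (Figure 3)."""
    return (r_s / (r_s + r_in)) * (r_out / (r_out + r_l)) * a_i

# Ideal gain 100 A/A, degraded by current division at both input and output
a_loaded = loaded_current_gain(a_i=100.0, r_s=10e3, r_in=1e3, r_out=10e3, r_l=1e3)
print(round(a_loaded, 2))  # 82.64: each divider costs a factor 10/11
```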
### Unilateral versus bilateral amplifiers
Figure 4: Current amplifier as a bilateral two-port network; feedback through dependent voltage source of gain β V/V
Figure 3 and the associated discussion refers to a unilateral amplifier. In a more general case where the amplifier is represented by a two port, the input resistance of the amplifier depends on its load, and the output resistance on the source impedance. The loading factors in these cases must employ the true amplifier impedances including these bilateral effects. For example, taking the unilateral current amplifier of Figure 3, the corresponding bilateral two-port network is shown in Figure 4 based upon h-parameters.[1] Carrying out the analysis for this circuit, the current gain with feedback Afb is found to be
$A_{fb} = \frac {i_L}{i_S} = \frac {A_{loaded}} {1+ {\beta}(R_L/R_S) A_{loaded}} \ .$
That is, the ideal current gain Ai is reduced not only by the loading factors, but due to the bilateral nature of the two-port by an additional factor[2] ( 1 + β (RL / RS ) Aloaded ), which is typical of negative feedback amplifier circuits. The factor β (RL / RS ) is the current feedback provided by the voltage feedback source of voltage gain β V/V. For instance, for an ideal current source with RS = ∞ Ω, the voltage feedback has no influence, and for RL = 0 Ω, there is zero load voltage, again disabling the feedback.
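The bilateral correction can be sketched the same way: the loaded gain is divided by 1 + β(RL/RS)Aloaded, and the two limiting cases mentioned above (RS → ∞ and RL = 0) both make that factor 1. The β and resistor values below are invented for illustration:

```python
def feedback_current_gain(a_loaded, beta, r_l, r_s):
    """A_fb = A_loaded / (1 + beta*(R_L/R_S)*A_loaded) (Figure 4)."""
    return a_loaded / (1.0 + beta * (r_l / r_s) * a_loaded)

a_loaded = 82.6446  # loaded gain from the Figure 3 example
a_fb = feedback_current_gain(a_loaded, beta=0.05, r_l=1e3, r_s=10e3)
print(round(a_fb, 2))  # ~58.48: feedback desensitizes the gain

# Limiting case R_L = 0: zero load voltage disables the feedback entirely
assert abs(feedback_current_gain(a_loaded, 0.05, r_l=0.0, r_s=10e3) - a_loaded) < 1e-9
```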
## References and notes
1. ^ The h-parameter two port is the only two-port among the four standard choices that has a current-controlled current source on the output side.
2. ^ Often called the improvement factor or the desensitivity factor. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 12, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9031056761741638, "perplexity": 1203.4461020581234}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891811830.17/warc/CC-MAIN-20180218100444-20180218120444-00228.warc.gz"} |
http://www.jcyoon.com/phpBB/viewtopic.php?t=24&postdays=0&postorder=asc&start=30 | jcYoon's Physics
Newsgroup and Email Discussion
No Subject Goto page Previous 1, 2, 3
Author Message
jcyoon
Joined: 08 Aug 2006
Posts: 213
Posted: Thu Nov 15, 2007 3:17 am Post subject: RE: normalization factor Thurs 2007-11-01 4:53 AM Dear Professor Zhi-qiang Shi, Thanks for your email. I think I was not clear with the normalization factor. Please let me try with explicit mathematical expressions. The Lorentz invariance of the Lagrangian can be substantiated either by proving the Lagrangian is a Lorentz scalar or by comparing the values of the Lagrangian evaluated in reference frames related by Lorentz transformations, such as the rest frame and a boosted frame of the particle. For the weak interaction Lagrangian $\overline{\psi}\gamma^{\mu}(1 + \gamma^{5})\psi$, when the Lagrangian is evaluated in the rest frame with $u_{rest} = \sqrt{m} \left( \begin{array}{c} \xi \\ \xi \end{array} \right)$, it gives $2m\, \xi^{\dagger} \sigma^{\mu} \xi$, while in the boosted frame with $u_{boosted} = \left( \begin{array}{c} \sqrt{p \cdot \sigma}\,\xi \\ \sqrt{p \cdot \overline{\sigma}}\,\xi \end{array} \right)$, it gives $2 (p \cdot \overline{\sigma})\, \xi^{\dagger} \sigma^{\mu} \xi$. Therefore, the normalization factors for the rest frame and the boosted frame are different, as the normalization factor $(p \cdot \overline{\sigma})$ is not Lorentz invariant. We may overcome this issue by ignoring its Lorentz variance, but we are using this Lorentz-variant factor, which vanishes when we make the relativistic approximation to neglect the left-handed helicity: $u_{boosted} = \left( \begin{array}{c} \sqrt{p \cdot \sigma}\,\xi \\ \sqrt{p \cdot \overline{\sigma}}\,\xi \end{array} \right) \rightarrow \sqrt{E} \left( \begin{array}{c} \xi \\ 0 \end{array} \right)$. Note that if the normalization factor had been Lorentz invariant, then there would not have been such a relativistic approximation. Sincerely yours, J.C. Yoon
Zhi-qiang Shi
Joined: 15 Nov 2007
Posts: 19
Posted: Thu Nov 15, 2007 3:18 am Post subject: Lorentz invariance Thurs 2007-11-01 7:14 PM Dear Dr. J. C. Yoon, Thank you for your elaborate reply. 1. As for my statement in the mail dated October 11, the plane wave solutions of the Dirac equation should reduce to the eigenfunction of $\Sigma_z$ in its rest frame, like Eq. (3.2) in Relativistic Quantum Mechanics by Bjorken and Drell. However, the solution given by you does not satisfy this condition. A correct solution should be Eq. (2) or (3) in my paper (calculation on the lifetime of polarized muons in flight). 2. Neither vectors nor axial vectors (pseudovectors) are Lorentz scalars, see Eq. (2.38) in Relativistic Quantum Mechanics by Bjorken and Drell. However, the Lagrangian is the product of a vector (or axial vector) and an axial vector (or vector), and so the Lagrangian is Lorentz invariant. For example, the Lagrangian of a purely leptonic process is given by $L(x) \sim [\overline{\psi}_a\gamma_\mu(1+\gamma_5)\psi_b][\overline{\psi}_c\gamma_\mu(1+\gamma_5) \psi_d]$. Under a proper Lorentz transformation, we have $L'(x') \sim a_\mu^\nu a_\sigma^\mu [\overline{\psi}_a\gamma_\nu(1+\gamma_5)\psi_b] [\overline{\psi}_c\gamma_\sigma(1+\gamma_5) \psi_d]$. Because $a_\mu^\nu a_\sigma^\mu=\delta^\nu_\sigma$ (see Eq. (2.3) in Relativistic Quantum Mechanics by Bjorken and Drell), we obtain $L'(x')=L(x)$, that is to say, the Lagrangian is Lorentz invariant. Sincerely yours, Zhi-qiang Shi
jcyoon
Joined: 08 Aug 2006
Posts: 213
Posted: Thu Nov 15, 2007 3:19 am Post subject: RE: Lorentz invariance Fri 2007-11-02 6:16 AM Dear Professor Zhi-qiang Shi, Thanks for your patient and elaborate reply, which was quite helpful for me to understand your point. 1. The Dirac solution in the rest frame that I have used is from the standard textbook of Peskin and Schroeder, Eq. (3.47) on page 45, in terms of the chiral representation. The $\Sigma_{z}$ for the chiral representation can be given by $\Sigma_{z}= \left( \begin{array}{cc} \sigma_{z} & 0 \\ 0 & \sigma_{z} \end{array} \right)$. And for the Dirac solution in the rest frame with spin up along the z direction, $\xi = \left( \begin{array}{c} 1 \\ 0 \end{array} \right)$, we have the eigenvalue of +1 for $\Sigma_{z}$: $\Sigma_{z}u_{rest} = +1\, u_{rest}$. 2. I agree that the Lorentz invariance of the weak interactions can be shown from your argument. What I have pointed out is that the practical Dirac solution is not properly normalized to retain the Lorentz invariance that is supposed to be guaranteed from the proof using infinitesimal Lorentz transformations. Let us make my mistake clear. My claim of Lorentz violation of the weak interaction Lagrangian (not the Standard Model) is incorrect, in the sense that its Lorentz invariance can be proved using infinitesimal Lorentz transformations. The correct statement should be that the weak interaction Lagrangian is proven to be Lorentz invariant, but the Dirac solutions we use fail to retain the Lorentz invariance, as their normalization factor is Lorentz variant, which is later exploited by the relativistic approximation. I will be thinking about it more, but I am glad that you brought up a critical point to me, which I dearly appreciate. Sincerely yours, J.C. Yoon
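Yoon's eigenvalue claim in point 1 is easy to verify numerically: in the chiral (Weyl) representation, Σz is block-diagonal with σz in each 2×2 block, and the rest-frame spinor stacks ξ twice. A quick sketch in pure Python (m set to 1 for simplicity):

```python
# Chiral representation: Sigma_z = diag(sigma_z, sigma_z), sigma_z = diag(1, -1).
SIGMA_Z = [
    [1, 0, 0, 0],
    [0, -1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, -1],
]

def matvec(m, v):
    """Plain matrix-vector product."""
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

# Rest-frame Dirac spinor with spin up along z: u = sqrt(m) * (xi, xi), xi = (1, 0).
xi = [1.0, 0.0]
u_rest = xi + xi  # taking m = 1

print(matvec(SIGMA_Z, u_rest))  # equals u_rest, i.e. eigenvalue +1
```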
Zhi-qiang Shi
Joined: 15 Nov 2007
Posts: 19
Posted: Thu Nov 15, 2007 3:21 am Post subject: normalization factor Fri 2007-11-02 6:45 PM Dear Dr. J. C. Yoon, Thank you for your email. 1. According to the Dirac solution that you have used, in the boosted frame the solution is given by $u_{boosted}=\sqrt{E} \left( \begin{array}{c}\sqrt{p\cdot \sigma}\,\xi \\ \sqrt{p\cdot\overline{\sigma}}\,\xi \end{array} \right)$. In the rest frame, $p=0$, we should obtain $u_{rest} =\sqrt{m} \left( \begin{array}{c}0 \\ 0 \end{array} \right)$. It is not the way you have given, using $u_{rest} =\sqrt{m} \left( \begin{array}{c} \xi \\ \xi \end{array} \right)$. Please consider it again. 2. You said: “the practical Dirac solution is not properly normalized to retain the Lorentz invariance that is supposed to be guaranteed from the proof using infinitesimal Lorentz transformations.” “the Dirac solutions we use fail to retain the Lorentz invariance as its normalization factor is Lorentz variant.” What does it mean? I think that the Dirac solutions are assuredly not Lorentz invariant. The symmetry theory only requires that the Dirac equation and the Lagrangian are Lorentz invariant. The normalization factor is decided by using the normalization condition, not Lorentz invariance. Sincerely yours, Zhi-qiang Shi
jcyoon
Joined: 08 Aug 2006
Posts: 213
Posted: Thu Nov 15, 2007 3:22 am Post subject: RE: normalization factor Sat 2007-11-03 5:26 AM Dear Professor Zhi-qiang Shi, Thanks for your prompt and kind reply. 1. In the notation from my statement following Peskin and Schroeder, $p$ is $p^{\mu}$, and thus in the rest frame we have $p^{\mu} = (m,0)$, which gives the same solution in the rest frame, $u_{rest} =\sqrt{m} \left( \begin{array}{c} \xi \\ \xi \end{array} \right)$. Note that $\sqrt{E}$ should be dropped in the solution in the boosted frame. 2. In my statement 2 I was equivocally talking about two different Lorentz invariances: of the Lagrangian and of the matrix element. What I meant to say was that though the Lagrangian is Lorentz invariant under an infinitesimal Lorentz transformation, the matrix element is not necessarily guaranteed to be Lorentz invariant, since the matrix elements for the rest frame and the boosted frame are different due to the normalization factors. Sincerely yours, J.C. Yoon
Zhi-qiang Shi
Joined: 15 Nov 2007
Posts: 19
Posted: Thu Nov 15, 2007 3:23 am Post subject: OK Mon 2007-11-05 6:35 PM Dear Dr. J. C. Yoon, Thanks for your patient and elaborate reply. I agree that though the Lagrangian is Lorentz invariant under an infinitesimal Lorentz transformation, the matrix element is not necessarily guaranteed to be Lorentz invariant. Our discussion is successful. Now, your perspectives have completely accorded with mine. It is a nice opportunity to discuss Lorentz invariance with you. Both of us can learn a great deal from this constructive discussion. I am very gratified. Sincerely yours, Zhi-qiang Shi
jcyoon
Joined: 08 Aug 2006
Posts: 213
Posted: Thu Nov 15, 2007 3:24 am Post subject: RE: OK Wed 2007-11-07 5:39 AM Dear Zhi-qiang Shi, I am glad to receive your reply and I sincerely appreciate the great opportunity for me to learn from you through our discussion. If you don't mind, I would like to post our helpful discussion on my web site, for which I will be very grateful. But if it is not a good idea, please feel free to let me know so. Sincerely yours, J.C. Yoon
Zhi-qiang Shi
Joined: 15 Nov 2007
Posts: 19
Posted: Thu Nov 15, 2007 3:26 am Post subject: post our discussion Wed 2007-11-07 6:04 PM Dear Dr. J. C. Yoon, I am pleased to post our discussion on your web site. I hope that more physicists can share our productive discussion. Sincerely yours, Zhi-qiang Shi
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9794694781303406, "perplexity": 1915.9374465520903}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592861.86/warc/CC-MAIN-20180721223206-20180722003206-00517.warc.gz"}
https://statkat.com/stattest.php?t=9&t2=11&t3=33&t4=4&t5=6&t6=44 | Two sample t test - equal variances not assumed - overview
This page offers structured overviews of one or more selected methods.
Two sample $t$ test - equal variances not assumed
One way ANOVA
Friedman test
Chi-squared test for the relationship between two categorical variables
One sample $t$ test for the mean
Binomial test for a single proportion
Independent/grouping variable / Independent/grouping variable / Independent/grouping variable / Independent/column variable / Independent variable / Independent variable
One categorical with 2 independent groups / One categorical with $I$ independent groups ($I \geqslant 2$) / One within subject factor ($\geq 2$ related groups) / One categorical with $I$ independent groups ($I \geqslant 2$) / None / None
Dependent variable / Dependent variable / Dependent variable / Dependent/row variable / Dependent variable / Dependent variable
One quantitative of interval or ratio level / One quantitative of interval or ratio level / One of ordinal level / One categorical with $J$ independent groups ($J \geqslant 2$) / One quantitative of interval or ratio level / One categorical with 2 independent groups
Null hypothesis
H0: $\mu_1 = \mu_2$
Here $\mu_1$ is the population mean for group 1, and $\mu_2$ is the population mean for group 2.
ANOVA $F$ test:
• H0: $\mu_1 = \mu_2 = \ldots = \mu_I$
$\mu_1$ is the population mean for group 1; $\mu_2$ is the population mean for group 2; $\mu_I$ is the population mean for group $I$
$t$ Test for contrast:
• H0: $\Psi = 0$
$\Psi$ is the population contrast, defined as $\Psi = \sum a_i\mu_i$. Here $\mu_i$ is the population mean for group $i$ and $a_i$ is the coefficient for $\mu_i$. The coefficients $a_i$ sum to 0.
$t$ Test multiple comparisons:
• H0: $\mu_g = \mu_h$
$\mu_g$ is the population mean for group $g$; $\mu_h$ is the population mean for group $h$
H0: the population scores in any of the related groups are not systematically higher or lower than the population scores in any of the other related groups
Usually the related groups are the different measurement points. Several different formulations of the null hypothesis can be found in the literature, and we do not agree with all of them. Make sure you (also) learn the one that is given in your text book or by your teacher.
H0: there is no association between the row and column variable
More precisely, if there are $I$ independent random samples of size $n_i$ from each of $I$ populations, defined by the independent variable:
• H0: the distribution of the dependent variable is the same in each of the $I$ populations
If there is one random sample of size $N$ from the total population:
• H0: the row and column variables are independent
H0: $\mu = \mu_0$
Here $\mu$ is the population mean, and $\mu_0$ is the population mean according to the null hypothesis.
H0: $\pi = \pi_0$
Here $\pi$ is the population proportion of 'successes', and $\pi_0$ is the population proportion of successes according to the null hypothesis.
Alternative hypothesis
H1 two sided: $\mu_1 \neq \mu_2$
H1 right sided: $\mu_1 > \mu_2$
H1 left sided: $\mu_1 < \mu_2$
ANOVA $F$ test:
• H1: not all population means are equal
$t$ Test for contrast:
• H1 two sided: $\Psi \neq 0$
• H1 right sided: $\Psi > 0$
• H1 left sided: $\Psi < 0$
$t$ Test multiple comparisons:
• H1 - usually two sided: $\mu_g \neq \mu_h$
H1: the population scores in some of the related groups are systematically higher or lower than the population scores in other related groups
H1: there is an association between the row and column variable
More precisely, if there are $I$ independent random samples of size $n_i$ from each of $I$ populations, defined by the independent variable:
• H1: the distribution of the dependent variable is not the same in all of the $I$ populations
If there is one random sample of size $N$ from the total population:
• H1: the row and column variables are dependent
H1 two sided: $\mu \neq \mu_0$
H1 right sided: $\mu > \mu_0$
H1 left sided: $\mu < \mu_0$
H1 two sided: $\pi \neq \pi_0$
H1 right sided: $\pi > \pi_0$
H1 left sided: $\pi < \pi_0$
Assumptions
• Within each population, the scores on the dependent variable are normally distributed
• Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2. That is, within and between groups, observations are independent of one another
• Within each population, the scores on the dependent variable are normally distributed
• The standard deviation of the scores on the dependent variable is the same in each of the populations: $\sigma_1 = \sigma_2 = \ldots = \sigma_I$
• Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2, $\ldots$, group $I$ sample is an independent SRS from population $I$. That is, within and between groups, observations are independent of one another
• Sample of 'blocks' (usually the subjects) is a simple random sample from the population. That is, blocks are independent of one another
• Sample size is large enough for $X^2$ to be approximately chi-squared distributed under the null hypothesis. Rule of thumb:
• 2 $\times$ 2 table: all four expected cell counts are 5 or more
• Larger than 2 $\times$ 2 tables: average of the expected cell counts is 5 or more, smallest expected cell count is 1 or more
• There are $I$ independent simple random samples from each of $I$ populations defined by the independent variable, or there is one simple random sample from the total population
• Scores are normally distributed in the population
• Sample is a simple random sample from the population. That is, observations are independent of one another
• Sample is a simple random sample from the population. That is, observations are independent of one another
Test statistic
$t = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{\sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{\sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}}$
Here $\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2, $s^2_1$ is the sample variance in group 1, $s^2_2$ is the sample variance in group 2, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of group 2. The 0 represents the difference in population means according to the null hypothesis.
The denominator $\sqrt{\frac{s^2_1}{n_1} + \frac{s^2_2}{n_2}}$ is the standard error of the sampling distribution of $\bar{y}_1 - \bar{y}_2$. The $t$ value indicates how many standard errors $\bar{y}_1 - \bar{y}_2$ is removed from 0.
Note: we could just as well compute $\bar{y}_2 - \bar{y}_1$ in the numerator, but then the left sided alternative becomes $\mu_2 < \mu_1$, and the right sided alternative becomes $\mu_2 > \mu_1$.
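To make the formula concrete, here is a minimal pure-Python sketch of the two sample $t$ statistic; the data are invented purely for illustration:

```python
import math

def welch_t(y1, y2):
    """Two sample t statistic with the unpooled standard error
    sqrt(s1^2/n1 + s2^2/n2)."""
    n1, n2 = len(y1), len(y2)
    m1, m2 = sum(y1) / n1, sum(y2) / n2
    # sample variances, denominator n - 1
    v1 = sum((y - m1) ** 2 for y in y1) / (n1 - 1)
    v2 = sum((y - m2) ** 2 for y in y2) / (n2 - 1)
    se = math.sqrt(v1 / n1 + v2 / n2)
    return (m1 - m2) / se

print(welch_t([10, 12, 11, 13, 14], [8, 9, 10, 7, 11]))  # 3.0
```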
ANOVA $F$ test:
• \begin{aligned}[t] F &= \dfrac{\sum\nolimits_{subjects} (\mbox{subject's group mean} - \mbox{overall mean})^2 / (I - 1)}{\sum\nolimits_{subjects} (\mbox{subject's score} - \mbox{its group mean})^2 / (N - I)}\\ &= \dfrac{\mbox{sum of squares between} / \mbox{degrees of freedom between}}{\mbox{sum of squares error} / \mbox{degrees of freedom error}}\\ &= \dfrac{\mbox{mean square between}}{\mbox{mean square error}} \end{aligned}
where $N$ is the total sample size, and $I$ is the number of groups.
Note: mean square between is also known as mean square model, and mean square error is also known as mean square residual or mean square within.
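The sum-of-squares decomposition above can be sketched directly in Python; the three groups below are invented for illustration:

```python
def anova_f(groups):
    """One way ANOVA F = mean square between / mean square error."""
    N = sum(len(g) for g in groups)
    I = len(groups)
    grand = sum(sum(g) for g in groups) / N
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_error = sum((y - m) ** 2 for g, m in zip(groups, means) for y in g)
    return (ss_between / (I - 1)) / (ss_error / (N - I))

print(round(anova_f([[1, 2, 3], [2, 3, 4], [4, 5, 6]]), 3))  # 7.0
```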
$t$ Test for contrast:
• $t = \dfrac{c}{s_p\sqrt{\sum \dfrac{a^2_i}{n_i}}}$
Here $c$ is the sample estimate of the population contrast $\Psi$: $c = \sum a_i\bar{y}_i$, with $\bar{y}_i$ the sample mean in group $i$. $s_p$ is the pooled standard deviation based on all the $I$ groups in the ANOVA, $a_i$ is the contrast coefficient for group $i$, and $n_i$ is the sample size of group $i$.
Note that if the contrast compares only two group means with each other, this $t$ statistic is very similar to the two sample $t$ statistic (assuming equal population standard deviations). In that case the only difference is that we now base the pooled standard deviation on all the $I$ groups, which affects the $t$ value if $I \geqslant 3$. It also affects the corresponding degrees of freedom.
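A minimal sketch of the contrast $t$ statistic, with $s_p$ passed in as a given (the groups and coefficients are invented for illustration):

```python
import math

def contrast_t(groups, a, s_p):
    """t = c / (s_p * sqrt(sum a_i^2 / n_i)), with c = sum a_i * ybar_i.
    s_p is the pooled standard deviation based on all I groups."""
    means = [sum(g) / len(g) for g in groups]
    c = sum(ai * m for ai, m in zip(a, means))
    return c / (s_p * math.sqrt(sum(ai ** 2 / len(g) for ai, g in zip(a, groups))))

# contrast comparing group 1 with group 3 (coefficients 1, 0, -1)
print(round(contrast_t([[1, 2, 3], [2, 3, 4], [4, 5, 6]], [1, 0, -1], 1.0), 3))  # -3.674
```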
$t$ Test multiple comparisons:
• $t = \dfrac{\bar{y}_g - \bar{y}_h}{s_p\sqrt{\dfrac{1}{n_g} + \dfrac{1}{n_h}}}$
$\bar{y}_g$ is the sample mean in group $g$, $\bar{y}_h$ is the sample mean in group $h$, $s_p$ is the pooled standard deviation based on all the $I$ groups in the ANOVA, $n_g$ is the sample size of group $g$, and $n_h$ is the sample size of group $h$.
Note that this $t$ statistic is very similar to the two sample $t$ statistic (assuming equal population standard deviations). The only difference is that we now base the pooled standard deviation on all the $I$ groups, which affects the $t$ value if $I \geqslant 3$. It also affects the corresponding degrees of freedom.
$Q = \dfrac{12}{N \times k(k + 1)} \sum R^2_i - 3 \times N(k + 1)$
Here $N$ is the number of 'blocks' (usually the subjects - so if you have 4 repeated measurements for 60 subjects, $N$ equals 60), $k$ is the number of related groups (usually the number of repeated measurements), and $R_i$ is the sum of ranks in group $i$.
Remember that multiplication precedes addition, so first compute $\frac{12}{N \times k(k + 1)} \times \sum R^2_i$ and then subtract $3 \times N(k + 1)$.
Note: if ties are present in the data, the formula for $Q$ is more complicated.
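The no-ties version of the formula can be sketched as follows (ranks are assigned within each block; the scores are invented for illustration):

```python
def friedman_q(scores):
    """Friedman's Q for an N x k table (rows = blocks, columns = related
    groups). Assumes no ties within a block."""
    N, k = len(scores), len(scores[0])
    rank_sums = [0] * k
    for row in scores:
        # rank the k scores within this block, smallest = rank 1
        for rank, j in enumerate(sorted(range(k), key=lambda j: row[j]), start=1):
            rank_sums[j] += rank
    return 12 / (N * k * (k + 1)) * sum(r ** 2 for r in rank_sums) - 3 * N * (k + 1)

# three blocks that all rank the three groups in the same order
print(round(friedman_q([[2, 5, 8], [1, 4, 9], [3, 6, 7]]), 3))  # 6.0
```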
$X^2 = \sum{\frac{(\mbox{observed cell count} - \mbox{expected cell count})^2}{\mbox{expected cell count}}}$
Here for each cell, the expected cell count = $\dfrac{\mbox{row total} \times \mbox{column total}}{\mbox{total sample size}}$, the observed cell count is the observed sample count in that same cell, and the sum is over all $I \times J$ cells.
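The expected-count and summation steps can be checked with a small sketch; the 2 × 2 table below is invented for illustration:

```python
def chi_square(table):
    """Pearson X^2 for an I x J table of observed counts."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    total = sum(row_tot)
    x2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / total  # row total * column total / N
            x2 += (obs - exp) ** 2 / exp
    return x2

print(round(chi_square([[10, 20], [20, 10]]), 3))  # 6.667
```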
$t = \dfrac{\bar{y} - \mu_0}{s / \sqrt{N}}$
Here $\bar{y}$ is the sample mean, $\mu_0$ is the population mean according to the null hypothesis, $s$ is the sample standard deviation, and $N$ is the sample size.
The denominator $s / \sqrt{N}$ is the standard error of the sampling distribution of $\bar{y}$. The $t$ value indicates how many standard errors $\bar{y}$ is removed from $\mu_0$.
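A minimal sketch of the one sample $t$ statistic (the data and $\mu_0$ are invented for illustration):

```python
import math

def one_sample_t(y, mu0):
    """t = (ybar - mu0) / (s / sqrt(N))."""
    n = len(y)
    m = sum(y) / n
    # sample standard deviation, denominator n - 1
    s = math.sqrt(sum((v - m) ** 2 for v in y) / (n - 1))
    return (m - mu0) / (s / math.sqrt(n))

print(round(one_sample_t([48, 52, 50, 54, 46], 48), 3))  # 1.414
```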
$X$ = number of successes in the sample
Pooled standard deviation
\begin{aligned} s_p &= \sqrt{\dfrac{(n_1 - 1) \times s^2_1 + (n_2 - 1) \times s^2_2 + \ldots + (n_I - 1) \times s^2_I}{N - I}}\\ &= \sqrt{\dfrac{\sum\nolimits_{subjects} (\mbox{subject's score} - \mbox{its group mean})^2}{N - I}}\\ &= \sqrt{\dfrac{\mbox{sum of squares error}}{\mbox{degrees of freedom error}}}\\ &= \sqrt{\mbox{mean square error}} \end{aligned}
Here $s^2_i$ is the variance in group $i.$
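The sum-of-squares form of $s_p$ is easy to sketch; the groups below are invented for illustration:

```python
import math

def pooled_sd(groups):
    """s_p = sqrt(sum of squares error / degrees of freedom error)."""
    N = sum(len(g) for g in groups)
    I = len(groups)
    ss_error = 0.0
    for g in groups:
        m = sum(g) / len(g)
        ss_error += sum((y - m) ** 2 for y in g)
    return math.sqrt(ss_error / (N - I))

print(pooled_sd([[1, 2, 3], [2, 3, 4], [4, 5, 6]]))  # 1.0
```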
Sampling distribution of the test statistic if H0 were true
Approximately the $t$ distribution with $k$ degrees of freedom, with $k$ equal to
$k = \dfrac{\Bigg(\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}\Bigg)^2}{\dfrac{1}{n_1 - 1} \Bigg(\dfrac{s^2_1}{n_1}\Bigg)^2 + \dfrac{1}{n_2 - 1} \Bigg(\dfrac{s^2_2}{n_2}\Bigg)^2}$
or
$k$ = the smaller of $n_1 - 1$ and $n_2 - 1$
The first definition of $k$ is used by computer programs; the second is often used for hand calculations.
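The first (Welch–Satterthwaite) definition of $k$ can be sketched directly; with equal variances and equal group sizes it reduces to $n_1 + n_2 - 2$:

```python
def welch_df(v1, n1, v2, n2):
    """Welch-Satterthwaite degrees of freedom; v1, v2 are sample variances."""
    a, b = v1 / n1, v2 / n2
    return (a + b) ** 2 / (a ** 2 / (n1 - 1) + b ** 2 / (n2 - 1))

# equal variances and sizes: df = 10 + 10 - 2 = 18
print(round(welch_df(4, 10, 4, 10), 3))  # 18.0
```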
Sampling distribution of $F$:
• $F$ distribution with $I - 1$ (df between, numerator) and $N - I$ (df error, denominator) degrees of freedom
Sampling distribution of $t$:
• $t$ distribution with $N - I$ degrees of freedom
If the number of blocks $N$ is large, approximately the chi-squared distribution with $k - 1$ degrees of freedom.
For small samples, the exact distribution of $Q$ should be used.
Chi-squared test: approximately the chi-squared distribution with $(I - 1) \times (J - 1)$ degrees of freedom
One sample $t$ test: $t$ distribution with $N - 1$ degrees of freedom
Binomial test: Binomial($n$, $P$) distribution.
Here $n = N$ (total sample size), and $P = \pi_0$ (population proportion according to the null hypothesis).
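An exact right-sided binomial $p$ value can be computed by summing the Binomial($n$, $\pi_0$) probabilities; the numbers below are invented for illustration:

```python
from math import comb

def right_sided_p(x_obs, n, p0):
    """P(X >= x_obs) for X ~ Binomial(n, p0)."""
    return sum(comb(n, x) * p0 ** x * (1 - p0) ** (n - x)
               for x in range(x_obs, n + 1))

# 8 or more successes out of 10 trials when pi_0 = 0.5
print(right_sided_p(8, 10, 0.5))  # 0.0546875
```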
Significant?
Two sided:
Right sided:
Left sided:
$F$ test:
• Check if $F$ observed in sample is equal to or larger than critical value $F^*$ or
• Find $p$ value corresponding to observed $F$ and check if it is equal to or smaller than $\alpha$ (e.g. .01 < $p$ < .025 when $F$ = 3.91, df between = 4, and df error = 20)
$t$ Test for contrast two sided:
$t$ Test for contrast right sided:
$t$ Test for contrast left sided:
$t$ Test multiple comparisons two sided:
• Check if $t$ observed in sample is at least as extreme as critical value $t^{**}$. Adapt $t^{**}$ according to a multiple comparison procedure (e.g., Bonferroni) or
• Find two sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$. Adapt the $p$ value or $\alpha$ according to a multiple comparison procedure
$t$ Test multiple comparisons right sided:
• Check if $t$ observed in sample is equal to or larger than critical value $t^{**}$. Adapt $t^{**}$ according to a multiple comparison procedure (e.g., Bonferroni) or
• Find right sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$. Adapt the $p$ value or $\alpha$ according to a multiple comparison procedure
$t$ Test multiple comparisons left sided:
• Check if $t$ observed in sample is equal to or smaller than critical value $t^{**}$. Adapt $t^{**}$ according to a multiple comparison procedure (e.g., Bonferroni) or
• Find left sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$. Adapt the $p$ value or $\alpha$ according to a multiple comparison procedure
If the number of blocks $N$ is large, the table with critical $X^2$ values can be used. If we denote $X^2 = Q$:
• Check if $X^2$ observed in sample is equal to or larger than critical value $X^{2*}$ or
• Find $p$ value corresponding to observed $X^2$ and check if it is equal to or smaller than $\alpha$
• Check if $X^2$ observed in sample is equal to or larger than critical value $X^{2*}$ or
• Find $p$ value corresponding to observed $X^2$ and check if it is equal to or smaller than $\alpha$
Two sided:
Right sided:
Left sided:
Two sided:
• Check if $X$ observed in sample is in the rejection region or
• Find two sided $p$ value corresponding to observed $X$ and check if it is equal to or smaller than $\alpha$
Right sided:
• Check if $X$ observed in sample is in the rejection region or
• Find right sided $p$ value corresponding to observed $X$ and check if it is equal to or smaller than $\alpha$
Left sided:
• Check if $X$ observed in sample is in the rejection region or
• Find left sided $p$ value corresponding to observed $X$ and check if it is equal to or smaller than $\alpha$
Confidence intervals

Approximate $C\%$ confidence interval for $\mu_1 - \mu_2$:
• $(\bar{y}_1 - \bar{y}_2) \pm t^* \times \sqrt{\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}}$
where the critical value $t^*$ is the value under the $t_{k}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^* = 2.086$ for a 95% confidence interval when df = 20). The confidence interval for $\mu_1 - \mu_2$ can also be used as a significance test.

$C\%$ confidence interval for $\Psi$ (contrast):
• $c \pm t^* \times s_p\sqrt{\sum \dfrac{a^2_i}{n_i}}$
where the critical value $t^*$ is the value under the $t_{N - I}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^* = 2.086$ for a 95% confidence interval when df = 20). Note that $n_i$ is the sample size of group $i$, and $N$ is the total sample size, based on all the $I$ groups.

$C\%$ confidence interval for $\mu_g - \mu_h$ (multiple comparisons):
• $(\bar{y}_g - \bar{y}_h) \pm t^{**} \times s_p\sqrt{\dfrac{1}{n_g} + \dfrac{1}{n_h}}$
where $t^{**}$ depends upon $C$, the degrees of freedom ($N - I$), and the multiple comparison procedure. If you do not want to apply a multiple comparison procedure, $t^{**} = t^*$ = the value under the $t_{N - I}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$. Note that $n_g$ is the sample size of group $g$, $n_h$ is the sample size of group $h$, and $N$ is the total sample size, based on all the $I$ groups.

$C\%$ confidence interval for a single population mean $\mu_i$:
• $\bar{y}_i \pm t^* \times \dfrac{s_p}{\sqrt{n_i}}$
where $\bar{y}_i$ is the sample mean in group $i$, $n_i$ is the sample size of group $i$, and the critical value $t^*$ is the value under the $t_{N - I}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^* = 2.086$ for a 95% confidence interval when df = 20). $N$ is the total sample size, based on all the $I$ groups.
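The interval for $\mu_1 - \mu_2$ can be sketched with summary statistics; the numbers are invented, and the critical value $t^* = 2.086$ (df = 20) is taken as given rather than computed:

```python
import math

def ci_mean_diff(m1, v1, n1, m2, v2, n2, t_star):
    """(ybar1 - ybar2) +/- t* * sqrt(s1^2/n1 + s2^2/n2)."""
    se = math.sqrt(v1 / n1 + v2 / n2)
    d = m1 - m2
    return d - t_star * se, d + t_star * se

# invented summary statistics; with equal variances and n = 11 per group
# the Welch df works out to exactly 20, so t* = 2.086 applies
lo, hi = ci_mean_diff(12.0, 2.5, 11, 9.0, 2.5, 11, t_star=2.086)
print(round(lo, 3), round(hi, 3))  # 1.594 4.406
```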
$C\%$ confidence interval for $\mu$:
• $\bar{y} \pm t^* \times \dfrac{s}{\sqrt{N}}$
where the critical value $t^*$ is the value under the $t_{N-1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^* = 2.086$ for a 95% confidence interval when df = 20). The confidence interval for $\mu$ can also be used as a significance test.

Effect size

ANOVA:
• Proportion variance explained $\eta^2$ and $R^2$: proportion variance of the dependent variable $y$ explained by the independent variable:
$$\eta^2 = R^2 = \dfrac{\mbox{sum of squares between}}{\mbox{sum of squares total}}$$
Only in one way ANOVA is $\eta^2 = R^2$. $\eta^2$ (and $R^2$) is the proportion variance explained in the sample. It is a positively biased estimate of the proportion variance explained in the population.
• Proportion variance explained $\omega^2$: corrects for the positive bias in $\eta^2$ and is equal to:
$$\omega^2 = \frac{\mbox{sum of squares between} - \mbox{df between} \times \mbox{mean square error}}{\mbox{sum of squares total} + \mbox{mean square error}}$$
$\omega^2$ is a better estimate of the explained variance in the population than $\eta^2$.
• Cohen's $d$: standardized difference between the mean in group $g$ and in group $h$:
$$d_{g,h} = \frac{\bar{y}_g - \bar{y}_h}{s_p}$$
Cohen's $d$ indicates how many standard deviations $s_p$ two sample means are removed from each other.

One sample $t$ test:
• Cohen's $d$: standardized difference between the sample mean and $\mu_0$:
$$d = \frac{\bar{y} - \mu_0}{s}$$
Cohen's $d$ indicates how many standard deviations $s$ the sample mean $\bar{y}$ is removed from $\mu_0$.

ANOVA table
Click the link for a step by step explanation of how to compute the sums of squares.
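The effect-size formulas for ANOVA can be checked numerically from a sum-of-squares decomposition; the numbers below are invented for illustration:

```python
def eta_squared(ss_between, ss_total):
    """Proportion of variance explained in the sample."""
    return ss_between / ss_total

def omega_squared(ss_between, df_between, ms_error, ss_total):
    """Bias-corrected estimate of the proportion of variance explained."""
    return (ss_between - df_between * ms_error) / (ss_total + ms_error)

# invented one way ANOVA decomposition: SS between = 14, SS error = 6,
# so SS total = 20, df between = 2, mean square error = 6 / 6 = 1.0
print(eta_squared(14, 20))                       # 0.7
print(round(omega_squared(14, 2, 1.0, 20), 3))   # 0.571
```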
Equivalent to

ANOVA is equivalent to OLS regression with one categorical independent variable transformed into $I - 1$ code variables:
• $F$ test ANOVA is equivalent to $F$ test regression model
• $t$ test for contrast $i$ is equivalent to $t$ test for regression coefficient $\beta_i$ (specific contrast tested depends on how the code variables are defined)

Example context
• Two sample $t$ test: Is the average mental health score different between men and women?
• ANOVA: Is the average mental health score different between people from a low, moderate, and high economic class?
• Friedman test: Is there a difference in depression level between measurement point 1 (pre-intervention), measurement point 2 (1 week post-intervention), and measurement point 3 (6 weeks post-intervention)?
• Chi-squared test: Is there an association between economic class and gender? Is the distribution of economic class different between men and women?
• One sample $t$ test: Is the average mental health score of office workers different from $\mu_0 = 50$?
• Binomial test: Is the proportion of smokers amongst office workers different from $\pi_0 = 0.2$?

SPSS
Two sample $t$ test: Analyze > Compare Means > Independent-Samples T Test...
• Put your dependent (quantitative) variable in the box below Test Variable(s) and your independent (grouping) variable in the box below Grouping Variable
• Click on the Define Groups... button. If you can't click on it, first click on the grouping variable so its background turns yellow
• Fill in the value you have used to indicate your first group in the box next to Group 1, and the value you have used to indicate your second group in the box next to Group 2
• Continue and click OK
ANOVA: Analyze > Compare Means > One-Way ANOVA...
• Put your dependent (quantitative) variable in the box below Dependent List and your independent (grouping) variable in the box below Factor
or Analyze > General Linear Model > Univariate...
• Put your dependent (quantitative) variable in the box below Dependent Variable and your independent (grouping) variable in the box below Fixed Factor(s)
Friedman test: Analyze > Nonparametric Tests > Legacy Dialogs > K Related Samples...
• Put the $k$ variables containing the scores for the $k$ related groups in the white box below Test Variables
• Under Test Type, select the Friedman test
Chi-squared test: Analyze > Descriptive Statistics > Crosstabs...
• Put one of your two categorical variables in the box below Row(s), and the other categorical variable in the box below Column(s)
• Click the Statistics... button, and click on the square in front of Chi-square
• Continue and click OK
One sample $t$ test: Analyze > Compare Means > One-Sample T Test...
• Put your variable in the box below Test Variable(s)
• Fill in the value for $\mu_0$ in the box next to Test Value
Binomial test: Analyze > Nonparametric Tests > Legacy Dialogs > Binomial...
• Put your dichotomous variable in the box below Test Variable List
• Fill in the value for $\pi_0$ in the box next to Test Proportion

Jamovi
Two sample $t$ test: T-Tests > Independent Samples T-Test
• Put your dependent (quantitative) variable in the box below Dependent Variables and your independent (grouping) variable in the box below Grouping Variable
• Under Tests, select Welch's
• Under Hypothesis, select your alternative hypothesis
ANOVA: ANOVA > ANOVA
• Put your dependent (quantitative) variable in the box below Dependent Variable and your independent (grouping) variable in the box below Fixed Factors
Friedman test: ANOVA > Repeated Measures ANOVA - Friedman
• Put the $k$ variables containing the scores for the $k$ related groups in the box below Measures
Chi-squared test: Frequencies > Independent Samples - $\chi^2$ test of association
• Put one of your two categorical variables in the box below Rows, and the other categorical variable in the box below Columns
One sample $t$ test: T-Tests > One Sample T-Test
• Put your variable in the box below Dependent Variables
• Under Hypothesis, fill in the value for $\mu_0$ in the box next to Test Value, and select your alternative hypothesis
Binomial test: Frequencies > 2 Outcomes - Binomial test
• Put your dichotomous variable in the white box at the right
• Fill in the value for $\pi_0$ in the box next to Test value
• Under Hypothesis, select your alternative hypothesis
Practice questions
Why was PACER abandoned?
The PACER project is described in this question: How much of the energy from 1 megaton H Bomb explosion could we capture to do useful work?
Why was it abandoned? It seems that it is the only readily economical and engineeringwise useful path to fusion power, and it seems that its breeder possibilities can easily let it pay for itself for generating fissile elements and helium (which is getting to be rare too nowadays!)
Was it political or technical limitations that killed it? Is there hope for a renewed interest in this in todays energy conscious politics?
-
I think it would have been a lovely trigger for the California seismic fault line. The earth is full of fault lines waiting for the fullness of time to be triggered printable-maps.blogspot.com/2009/04/…. And we only know the ones that have been active recently, within human history. – anna v Feb 26 '12 at 6:19
@anna v: Here is a technology which actually produces energy too-cheap-to-meter (running costs at least 1/10 of current technology, probably closer to 1/100 or 1/1000), it is carbon neutral, it produces renewable fission and tritium resources and uses unlimited deuterium as the major fuel. It is available today, no R&D required, but is not implementable because of some vague fears? By this thinking, as cavepeople we would have rejected fire because somebody might get burned. I am a little outraged that this technology is quietly buried. – Ron Maimon Feb 27 '12 at 2:37
Have a look at "earthquakes" in hydraulic fracking en.wikipedia.org/wiki/Hydraulic_fracturing. PACER would be delivering megatons of energy to the crust over and over and over again, and there is conservation of momentum too. We have an old Greek proverb: "how many times will the water pot go to the spring?" It means that a breakable pot will break after a number of uses. I do not think that caution in this case is different from the caution of not lighting cigarettes next to the car when filling up. – anna v Feb 27 '12 at 5:32
It seems the main reason is politics. The movement towards prohibition of nuclear tests had just started. A facility of this kind is an ideal proving ground for nuclear tests. A few hundred explosions per year plus mass production would automatically result in weapons a few orders of magnitude cheaper and more effective. At that time it was not a good idea to boost the development of nuclear weapons that much. It was still too complicated for average countries, and everyone wanted to postpone the time when these average countries got access to nuclear weapons.
-
I think so too, but perhaps there are more technical answers forthcoming. – Ron Maimon Feb 27 '12 at 2:33
Accept this answer, mostly because you had a lot of interesting comments, and I do believe this is the main reason. – Ron Maimon Feb 27 '12 at 17:13
I don't have any specific knowledge about the project, but based on what you've linked to there are quite a few potential issues that could have contributed to it:
1. Cost. Not only the mega-engineering needed to build the chamber and ensure the stability of the surrounding geology, but also the need to instigate continuous production of nuclear bombs in large numbers. Although it would eventually pay off, it might simply be that the initial investment was unaffordable.
2. Safety. You'd have to be pretty certain that neither the containment chamber nor the surrounding rock can crack under the force of those explosions, or because of geological movement. There are also safety issues involved in the manufacturing of the bombs and the running of the plant.
3. Proliferation. You'd be manufacturing massive numbers of bombs that could easily be made into incredibly destructive weapons. You'd have to be pretty certain that none of them could ever find their way into the wrong hands. And if another country decided to copy the project then they'd have loads of bombs too.
4. Environmental impact. That chamber isn't going to remain in an operational state forever, because it will be absorbing neutrons, which will eventually weaken it. When it reaches the end of its lifespan the only thing you can really do is leave all that accumulated radioactive material inside the chamber for ever, and hope it never leaks out. So you'd have to make sure the surrounding geology was stable over very long time scales and that the chamber was completely resistant to any form of corrosion.
My guess is that when all these factors were added together it simply didn't look like a good investment.
-
Factor 4 doesn't seem right: the chamber is carved out of salt because the weakened parts will dissolve into the water, and any cracks self-repair. It is placed deep underground so that any leaching water will not contaminate groundwater. The radioactive materials are part of the economic output; they are to be chemically removed from the water, and form the breeder program. You can also use a completely different liquid, not water, if you are worried about contamination. I think the whole thing can be environmentally relatively OK, although it makes a lot of radioactive stuff. – Ron Maimon Feb 26 '12 at 18:52
Factor 1 was analyzed in estimate in the document, and it seems to be competitive with other power sources. The chamber itself is a side-effect of certain mining operations, and the explosion effect on the chamber was already known. The issues with mass-producing hydrogen bombs was already known in 1974, and it can only be cheaper today. I agree with point 3, but it seems a shame to base rejection on this. Point 2 is also dubious, because the cracks in salt can be repaired, and the salt formations are much larger than the cavity, so that the leaks should be contained for a long time. – Ron Maimon Feb 26 '12 at 18:55
@RonMaimon I wouldn't be so quick to dismiss the environmental issues with a subterranean cavity considering the economics of nuclear geologic repositories. Isolating the radioactive material isn't the kind of thing that you show in principle how it works and then proceed. It would be subject to endless scrutiny and failure mechanisms do exist. – Alan Rominger Feb 26 '12 at 20:20
@Zassounotsukushi: The point is that the cavity is constant use--- you are extracting the water for removing the breeder materials, and reusing the radioactive components. You would get buildup of radioactive materials, sure, but they are all in one spot, and they are chemically separated at the plant, and can be bred and rebred by putting on the bomb casings until they are either safe or until they are fuel. If you have short half-life isotopes, you let them decay, for intermediate or long, you put them on the bomb-case for one more run-through. It's a continuous recycling program. – Ron Maimon Feb 27 '12 at 2:18
I was partially going by PACER's Wikipedia page, which describes a much more highly engineered solution in later versions of the proposal, including building a metal-lined chamber, reinforcing the surrounding rock, and using molten salts rather than water as a coolant. It doesn't say why it was changed, but I assumed it was because there were engineering/safety-related reasons why a simple hollowed rock dome wouldn't be suitable. If this is the case, then the added complexity might be what killed the project. It would be good to know. – Nathaniel Feb 27 '12 at 13:56
Sorry for answering my own question, but I thought of a tentative answer--- there is an uncontrollable problem, which is the unknown chemistry you will generate in the water tank. As the thing operates, you have a constant neutron and fissile material flux which will produce a mix of plutonium, uranium, fission products, pusher products (isotopes near lead), various breeder elements, and various neutron absorption products on the salt, on the water, on the plutonium, on the lead, which will eventually produce every element under the sun in some proportion.
The chemistry of all these elements in solution is completely unknown. For all we know, they will form some plutonium compound that will produce a chemical plutonium polymer muck at the bottom of the chamber. Worse yet, this sludge could flow from one part to another, producing a critical fissioning mass which could sit there, making a meltdown which could wreck the containment.
The thing will also produce hydrogen gas. It could find a way to make polymers from hydrogen and trans-uranics, and these sludges would be highly radioactive, and they could clog the pipes with impossible to clean gunk of unknown chemistry, or it could just make a standard chemical explosion with hydrogen. The unknown compounds could be chemically explosive in much worse ways, even underwater, or otherwise chemically annoying.
I don't know any way to test this other than a trial run. It might not be a problem. But if residues collect in minuscule amounts, the radioactive chemical explosions might not begin until after a few years of running. The moment you have to close a plant, the disposal problem becomes a nightmare of radioactivity. Although I suppose you could just leave it where it is.
Whether such a thing should kill the project is a matter of judgement. One could try to figure out all the chemistry (this would be an enormous R&D project), or just experiment with one power plant for 10 years in the middle of Antarctica. I still think the promise is greater than the danger.
-
This is hardly the main reason. Salt is good also because Na is well known to have only short-lived isotopes (it is used in breeder reactors as a heat exchanger due to this fact), and Cl is also light. Only an admixture of heavy elements might result in isotopes able to produce fissioning products with small critical mass. To get plutonium you should start at least from lead. Though, I'd also prefer to experiment with such a plant at least on the Moon. – Misha Feb 27 '12 at 4:15
@Misha: But you necessarily have Pu and heavy elements from the bombs, and all the fission products--- the light stuff isn't going to be radioactive, but it might chemically bind Pu into polymers which can then precipitate out and start a Pu reaction. You are using 2 bombs a day, each with several kg of Pu and many tons of heavy pusher, which is lead, or thorium, or uranium, or breeder stuff, or something else that's necessarily heavy. – Ron Maimon Feb 27 '12 at 4:23
I've heard that almost all the plutonium in an H-bomb's detonator burns out. First, fission in the detonator is quite efficient; second, the high neutron flow in the middle of the explosion burns the rest. I would count on a few [tens of] grams from each explosion. The pusher is a problem. However, there has been significant progress in this direction. And who knows what would come out in mass production, where people could experiment a lot with light materials like beryllium. – Misha Feb 27 '12 at 5:14
@Misha: interesting! But with two bombs a day, even a few grams will build up over a few years to a critical mass, given the appropriate chemistry. As for the pusher, it has to be heavy, because it has to stay in place during the ablation cycle: it has to withstand its own ablation pressure by inertia long enough to spark the secondary. So you are really limited in material choice. Further, the accumulation of possibly chemically explosive fission products will be a headache in the pipes. – Ron Maimon Feb 27 '12 at 14:57
I am not sure that the pusher has to be heavy. Under these conditions anything behaves more like a gas. It is heavy due to the fact that a warhead is supposed to be installed on a rocket or something. Probably 10 times more iron would do, but that is not suitable for nowadays use due to the size and weight. Even if it has to be heavy, there is some choice: one of bismuth, mercury, or another is most likely able to produce isotopes with "green" products only. – Misha Feb 27 '12 at 17:13
The day gasoline hits $10 a gallon, the PACER project will be back on track. It's only a matter of time. A one megaton H-bomb is equivalent to one megaton of TNT, which is more or less equivalent to one megaton of gasoline. At ten bucks a gallon that's worth $2000/ton, or 2 billion dollars. I think if we can extract useful energy with an efficiency of 10%, that will be a totally adequate return.
And am I mistaken or did they make bombs as big as 50 megatons??? That's a lot of gasoline.
-
There is no limit to H-bomb power, but here the goal is to keep the cavity structurally intact, so 1 megaton is an extreme upper limit, realistically 200 KT. A megaton is a metric billion kilograms, and a US gallon is 2.7 kg, so you have about 380,000,000 gallons, and you should compare with the raw unrefined fuel cost of coal, which gives you about 300,000,000 (300 million) dollars. The cost of a 1 megaton warhead is about 300,000 dollars, so even with 100 KT warheads it's at least 100 times cheaper today (no research). There are capital costs in setting up the plant, etc., but the running cost is much lower. – Ron Maimon Feb 27 '12 at 2:15
You miscalculated in your answer: using your parameters, and your misinterpretation of a megaton as 2000 US-gallon tons, the cost equivalent of a warhead in gasoline is 20 billion dollars, not 2 billion. But the true cost is closer to 100,000,000 USD as in the previous comment, still dwarfing the cost of a warhead at 300,000 USD. – Ron Maimon Feb 27 '12 at 2:35
You're neglecting the fact that I live in Canada, and our gallon is different. – Marty Green Feb 27 '12 at 7:17
So is our dollar, for that matter. – Marty Green Feb 27 '12 at 7:17
Come on, the difference is 30% at most. – Ron Maimon Feb 27 '12 at 17:10
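The back-of-envelope arithmetic traded above can be reproduced in a few lines. This is a rough sketch only, and every figure in it (10^9 kg of fuel per megaton-equivalent, 2.7 kg per US gallon, the hypothetical $10/gallon price, the ~$300,000 warhead cost) is a commenter's assumption, not vetted data:

```python
# All figures below are the commenters' assumptions, not vetted data.
MEGATON_KG = 1e9          # "a megaton is a metric billion kilograms"
KG_PER_US_GALLON = 2.7    # gasoline mass per US gallon, per the thread
PRICE_PER_GALLON = 10.0   # the hypothetical $10/gallon scenario
WARHEAD_COST = 3e5        # ~$300,000 per warhead, per the thread

gallons = MEGATON_KG / KG_PER_US_GALLON            # roughly 3.7e8 gallons
fuel_value = gallons * PRICE_PER_GALLON            # roughly $3.7e9
print(f"{gallons:.2e} gallons, ~${fuel_value:.2e} at $10/gal")
print(f"fuel value / warhead cost: {fuel_value / WARHEAD_COST:.0f}x")
```

Even with these crude inputs, the gasoline-equivalent value is orders of magnitude above the quoted warhead cost, which is the whole thrust of the thread.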
http://mathhelpforum.com/statistics/50931-permutation-combination-print.html

# Permutation and Combination
• Sep 28th 2008, 07:03 AM
magentarita
Permutation and Combination
Can someone explain the basic difference between permutation and combination?
Are there a formula(s) to know?
• Sep 28th 2008, 07:34 AM
Plato
Quote:
Originally Posted by magentarita
Can someone explain the basic difference between permutation and combination?
Permutations are order driven: the number of ways to form a queue.
Combinations are content driven: the number of ways to form a collection.
Quote:
Originally Posted by magentarita
Are there a formula(s) to know?
$P\left( {N,k} \right) = \frac{{N!}}{{\left( {N - k} \right)!}} = N(N - 1)(N - 2) \cdots (N - k + 1)$
$C(N,k) = {{N} \choose {k}}= \frac{{N!}}{{k!\left( {N - k} \right)!}}$
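A quick sketch of both counts in Python (assuming Python 3.8+, where `math.perm` and `math.comb` implement exactly these formulas):

```python
import math

N, k = 10, 3

# P(N, k) = N! / (N - k)!  -- ordered selections ("queues")
p = math.perm(N, k)       # 10 * 9 * 8 = 720

# C(N, k) = N! / (k! (N - k)!)  -- unordered selections ("collections")
c = math.comb(N, k)       # 720 / 3! = 120

assert p == math.factorial(N) // math.factorial(N - k)
assert c == p // math.factorial(k)
print(p, c)  # 720 120
```

Note that the two differ only by the factor k!, which divides out the orderings of each chosen collection.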
• Sep 28th 2008, 09:37 PM
magentarita
great notes
Quote:
Originally Posted by Plato
Permutations are order driven: the number of ways to form a queue.
Combinations are content driven: the number of ways to form a collection.
$P\left( {N,k} \right) = \frac{{N!}}{{\left( {N - k} \right)!}} = N(N - 1)(N - 2) \cdots (N - k + 1)$
$C(N,k) = {{N} \choose {k}}= \frac{{N!}}{{k!\left( {N - k} \right)!}}$
Thank you for the great formulas and definitions.
https://eprints.utas.edu.au/4664/

# The environmental effects of energy competition in the Asia-Pacific
Kellow, AJ 2007, 'The environmental effects of energy competition in the Asia-Pacific', in M Wesley (ed.), Energy Security in Asia, Routledge, Oxon, pp. 195-208.
Full text not available from this repository.
Item Type: Book Section. Author: Kellow, AJ. Publisher: Routledge.
https://socialsci.libretexts.org/Courses/Sacramento_City_College/FCS_324_PSYC_370%3A_Human_Development_-_A_Life_Span_(Lorenz)/01%3A_Lifespan_Psychology/1.01%3A_Introduction_to_Life_Span_Growth_and_Development

# 1.1: Introduction to Life Span, Growth and Development
##### Learning Objectives
• Explain the study of human development.
• Define physical, cognitive, and psychosocial development.
• Differentiate periods of human development.
• Analyze your own location in the life span.
• Judge the most and least preferable age groups with which to work.
• Contrast social classes with respect to life chances.
• Explain the meaning of social cohort.
• Critique stage theory models of human development.
• Define culture and ethnocentrism and describe ways that culture impacts development.
• Explain the reasons scientific methods are more objective than personal knowledge.
• Contrast qualitative and quantitative approaches to research.
• Compare research methods noting the advantages and disadvantages of each.
• Differentiate between independent and dependent variables.
Welcome to life span, growth and development. This is the study of how and why people change or remain the same over time.
This course is commonly referred to as the “womb to tomb” course because it is the story of our journeys from conception to death. Human development is the study of how we change over time. Although this course is often offered in psychology, this is a very interdisciplinary course. Psychologists, nutritionists, sociologists, anthropologists, educators, and health care professionals all contribute to our knowledge of the life span.
We will look at how we change physically over time from early development through aging and death. We examine cognitive change, or how our ability to think and remember changes over time. We look at how our concerns and psychological state are influenced by age and, finally, how our social relationships change throughout life.
There are several goals of those involved in this discipline:
1. Describing change - many of the studies we will examine simply involve the first step in investigation, which is description; Arnold Gesell's study of infant motor skills, for example.
2. Explaining changes is another goal. Theories provide explanations for why we change over time. For example, Erikson offers an explanation about why our two-year-old is temperamental.
Think about how you were 5, 10, or even 15 years ago. In what ways have you changed? In what ways have you remained the same? You have probably changed physically; perhaps you’ve grown taller and become heavier. But you may have also experienced changes in the way you think and solve problems. Cognitive change is noticeable when we compare how 6-year-olds, 16-year-olds, and 46-year-olds think and reason, for example. Their thoughts about others and the world are probably quite different. Consider friendship, for instance. The 6-year-old may think that a friend is someone with whom you can play and have fun. A 16-year-old may seek friends who can help them gain status or popularity. And the 46-year-old may have acquaintances, but rely more on family members to do things with and confide in. You may have also experienced psychosocial change. This refers to emotions and psychological concerns as well as social relationships. Psychologist Erik Erikson suggests that we struggle with issues of independence, trust, and intimacy at various points in our lives. (We will explore this thoroughly throughout the course.)
Our journeys through life are more than biological; they are shaped by culture, history, economic and political realities as much as they are influenced by physical change. This is a very interesting and practical course because it is about us and those with whom we live and work. One of the best ways to gain perspective on our own lives is to compare our experiences with that of others. By periodically making cross-cultural and historical comparisons and by presenting a variety of views on issues such as healthcare, aging, education, gender and family roles, I hope to give you many eyes with which to see your own development. This occurs frequently in the classroom as students from a variety of cultural backgrounds discuss their interpretations of developmental tasks and concerns. I hope to recreate this rich experience as much as possible in this text. So, for example, we will discuss current concerns about the nutrition of children in the United States (for a middle-class boy of 11 years who is 130 pounds overweight and suffering with Pediatric Type II diabetes) as well as malnutrition experienced by children in Ethiopia as a result of drought. Being self-conscious can enhance our ability to think critically about the systems we live in and open our eyes to new courses of action to benefit the quality of life. And knowing about other people and their circumstances can help us live and work with them more effectively. An appreciation of diversity enhances the social skills needed in nursing, education, or any other field.
## New Assumptions and Understandings
I was also introduced to the theories of Freud, Erikson, and Piaget, the classic stage theorists whose models depict development as occurring in a series of predictable stages. Stage theories had a certain appeal to an American culture experiencing dramatic change in the early part of the 20th century. But that sense of security was not without its costs; those who did not develop in predictable ways were often thought of as delayed or abnormal. And Freudian interpretations of problems in childhood development, such as autism, held that such difficulties were in response to poor parenting. Imagine the despair experienced by mothers accused of causing their child’s autism by being cold and unloving. It was not until the 1960s that more medical explanations of autism began to replace Freudian assumptions.
Freud and Piaget present a series of stages that essentially end during adolescence. For Freud, we enter the genital stage in which much of our motivation is focused on sex and reproduction and this stage continues through adulthood. Piaget’s fourth stage, formal operational thought, begins in adolescence and continues through adulthood. Again, neither of these theories highlights developmental changes during adulthood. Erikson, however, presents eight developmental stages that encompass the entire lifespan. For that reason, Erikson is known as the “father” of developmental psychology and his psychosocial theory will form the foundation for much of our discussion of psychosocial development.
Today we are more aware of the variations in development and the impact that culture and the environment have on shaping our lives. We no longer assume that those who develop in predictable ways are normal and those who do not are abnormal. And the assumption that early childhood experiences dictate our future is also being called into question. Rather, we have come to appreciate that growth and change continues throughout life and experience continues to have an impact on who we are and how we relate to others. And we recognize that adulthood is a dynamic period of life marked by continued cognitive, social, and psychological development.
## Who Studies Human Development?
Many academic disciplines contribute to the study of life span and this course is offered in some schools as psychology; in other schools it is taught under sociology or human development. This multidisciplinary course is made up of contributions from researchers in the areas of health care, anthropology, nutrition, child development, biology, gerontology, psychology, and sociology among others. Consequently, the stories provided are rich and well-rounded and the theories and findings can be part of a collaborative effort to understand human lives.
## Many Contexts
People are best understood in context. What is meant by the word “context”? It means that we are influenced by when and where we live and our actions, beliefs, and values are a response to circumstances surrounding us. Sternberg describes a type of intelligence known as “contextual” intelligence as the ability to understand what is called for in a situation (Sternberg, 1996). The key here is to understand that behaviors, motivations, emotions, and choices are all part of a bigger picture. Our concerns are such because of who we are socially, where we live, and when we live; they are part of a social climate and set of realities that surround us. Our social locations include cohort, social class, gender, race, ethnicity, and age. Let’s explore two of these: cohort and social class.
## REFERENCES
Aries, P. (1962). Centuries of childhood. A social history of family life. New York: Vintage.
Davis, N. (1999). Youth crisis: Growing up in the high risk society. Westport, CN: Praeger.
Debt juggling. The new middle class addiction. (2005, March/April). The Sunday Times Review. Retrieved from www.timesonline.co.uk/article/o..2092-1551813.00.html
DeNavas-Walt, C., & Cleveland, R. W. (2002). Money income in the United States: 2001. Current population reports. (P60-218) (United States, U. S. Census Bureau). U. S. Government Printing Office.
Gilbert, D. (2003). The American class structure in an age of growing inequality. (6th ed.). Belmont, CA: Wadsworth.
Gilbert, D., & Kahl, J. A. (1998). The American class structure. (5th ed.). Belmont, CA: Wadsworth.
Glazer, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. New York: Aldine.
Kohn, M. L. (1977). Class and conformity: A study in values. (2nd ed.). Homewood, IL: Dorsey.
Mawathe, A. (2006, March/April). Period misery for Kenya schoolgirls. BBC News. Retrieved August 10, 2006, from http://news.bbc.co.uk/hi/africa/4816558.stm
Seccombe, K., & Warner, R. L. (2004). Marriages and families: Relationships in social context. Belmont, CA: Wadsworth.
Sternberg, R. J. (1996). Successful intelligence. New York: Simon and Schuster.
The secret life of the credit card. (2004). PBS: Public Broadcasting Service. Retrieved May 02, 2011, from http://www.pbs.org/cgi-registry/generic/trivia.cgi
Thornton, S. (2005, June/July). Karl Popper (Stanford Encyclopedia of Philosophy/Summer 2005 Edition). Stanford Encyclopedia of Philosophy. Retrieved May 02, 2011, from http://plato.stanford.edu/archives/s...entries/popper
United States, U. S. Census Bureau, Housing and Household Economics Statistics Division. (2005). Poverty Thresholds 2005. Retrieved August 10, 2006, from http://www.census.gov/hhes/www/pover.../thresh05.html
Weitz, R. (2007). The sociology of health, illness, and health care: A critical approach, (4th ed.). Belmont, CA: Thomson.
1.1: Introduction to Life Span, Growth and Development is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Laura Overstreet via source content that was edited to conform to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
https://rbspgway.jhuapl.edu/biblio?aname=Fathy

# Bibliography
## Found 1 entry in the Bibliography.
2018. "Characteristics of Sudden Commencements Observed by Van Allen Probes in the Inner Magnetosphere." Fathy, A.; Kim, K.-H.; Park, J.-S.; Jin, H.; Kletzing, C.; Wygant, J.; Ghamry, E. DOI: 10.1002/2017JA024770. Abstract: We have statistically studied sudden commencement (SC) by using the data acquired from Van Allen Probes (VAP) in the inner magnetosphere (L = 3.0–6.5) and GOES spacecraft at geosynchronous orbit (L ≈ 6.7) from October 2012 to September 2017. During the time period, we identified 85 SCs in the inner magnetosphere and 90 SCs at geosynchronous orbit. Statistical results of the SC events reveal the following characteristics. (1) There is strong seasonal dependence of the geosynchronous SC amplitude in the radial BV c ...
https://mathshistory.st-andrews.ac.uk/Biographies/Sintsov/

# Dmitrii Matveevich Sintsov
### Quick Info
Born
21 November 1867
Viatka (now Kirov), Russia
Died
28 January 1946
Kharkov, Ukraine
Summary
Dmitrii Matveevich Sintsov was a Russian mathematician who worked on the theory of conics and the theory of nonholonomic differential geometry.
### Biography
Dmitrii Matveevich Sintsov was born in Viatka (sometimes written as Vyatka) which was a large city in western Russia and the administrative centre of the Kirov Province. The change of name of this city to Kirov did not happen until 1934 when it was renamed after the Soviet official Sergey M Kirov. He attended the Third Kazan High School, graduating with the Gold Medal in 1886. Later in the year in which he graduated from the High School, he began his studies at Kazan University, graduating in 1890. This University, the result of one of the many reforms of the Emperor Alexander I, was founded in 1805, and was famed in mathematics by having Lobachevsky as its rector from 1827 to 1846. By the time Sintsov began his university studies he was already convinced that mathematics was the topic for him to concentrate on, and he became a member of the mathematics section of the Physics and Mathematics Faculty of the university. His lecturers in mathematics were A V Vasil'ev, F M Suvorov, V V Preobrazhenskii and P S Nazimov. He also took courses in astronomy with D I Dubyago.
Sintsov's first research was on Bernoulli functions of fractional order and he carried this out while taking his fourth year undergraduate courses. His paper on the topic was published in the Notices of the Kazan Physics and Mathematics Society in 1890. This was a remarkable piece of work for a student at this stage in his undergraduate studies and it earned him a Gold Medal. Although Sintsov's interests moved away from the areas of his first scientific investigations, nevertheless he did undertake further research into Bernoulli functions and published further papers on this topic near the beginning of his career. Having made such an excellent start to his research, his "esteemed teacher" Aleksandr Vasil'evich Vasil'ev (1853-1929) recommended that he continue his studies at the University of Kazan with the aim of qualifying as a High School teacher. He spent three years, from the beginning of February 1891 to the beginning of February 1894, taking the necessary courses to obtain his teaching qualification. During this period he was being advised on research topics by Vasil'ev and, following his advice, he wrote his Master's Thesis The Theory of Connexes in Space in Connection with the Theory of First Order Partial Differential Equations. I A Naumov explains in [7]:-
The German mathematician A Clebsch was the first to investigate the theory of connexes in the period 1870-1872. He considered plane connexes i.e., plane geometrical objects, where the point-straight line combination was chosen as the basic element of the plane. Such connexes are termed ternary. Clebsch constructed the geometry of a ternary connex and applied it to the theory of ordinary differential equations.
Sintsov was appointed to the staff of Kazan University and taught there from 1894 to 1899. After leaving Kazan, Sintsov taught at the Odessa Higher Mining School, then, in 1903, he was appointed to Kharkov University where he taught until his death in 1946. He took a leading role in the development of mathematics at Kharkov University and, for many years, he was President of the Kharkov Mathematical Society. This Society is one of the early mathematics societies, being founded in 1879. Following Vladimir Andreevich Steklov's presidency from 1902 to 1906, Sintsov took over as President, and held the position until his death forty years later [8]:-
Through Sintsov's initiative, the Kharkov Mathematical Society was deeply involved in the improvement of mathematical education in the schools of the Kharkov region. Sintsov also put considerable effort into maintaining the Kharkov Mathematical Society mathematical library which is still one of the most complete mathematical libraries in the Ukraine.
Sintsov had an outstanding research record, and published 267 works during his long and productive scientific and teaching career. Of course through his many years of research his interests varied but the main areas on which he worked were the theory of conics and applications of this geometrical theory to the solution of differential equations and, perhaps most important of all, the theory of nonholonomic differential geometry. I A Naumov writes [7]:-
His classical work on the theory of connexes, of which he was one of the founders, and on nonholonomic differential geometry are well known far beyond the frontiers of our country.
The book in which the articles [2] (written by Ja P Blank who was a student of Sintsov) and [5] appear, contains a selection of the Sintsov's major works on nonholonomic geometry. These were first published during the years 1927-1940 and include: A generalization of the Enneper-Beltrami formula to systems of integral curves of the Pfaffian equation Pdx + Qdy + Rdz = 0 (1927); Properties of a system of integral curves of Pfaff's equation, Extension of Gauss's theorem to the system of integral curves of the Pfaffian equation Pdx + Qdy + Rdz = 0 (1927); Gaussian curvature, and lines of curvature of the second kind (1928); The geometry of Mongian equations (1929); Curvature of the asymptotic lines (curves with principal tangents) for surfaces that are systems of integral curves of Pfaffian and Mongian equations and complexes (1929); On a property of the geodesic lines of the system of integral curves of Pfaff's equation (1936); Studies in the theory of Pfaffian manifolds (special manifolds of the first and second kind) (1940) and Studies in the theory of Pfaffian manifolds (1940).
At Kharkov University, Sintsov created a school of geometry which became the leading school in this field in the Ukraine and has continued to flourish through the years still today being a leading centre. There he studied the geometry of Monge equations and he introduced the important ideas of asymptotic line curvature of the first and second kind. In 1903 he published two papers on the functional equation $f (x, y) + f (y, z) = f (x, z)$, now called the 'Sintsov equation,' which are discussed by Detlef Gronau in [4]. He writes:-
Sintsov gave in 1903 an elegant proof of its general real solution, which has the form $f (x, y) = q(x) - q(y)$, where q is an arbitrary function in one variable. ... [Sintsov] was the first who gave (in two papers ... in 1903) elementary simple proofs of its general real solutions. But before, it was Moritz Cantor who proposed these equations (there are two equations). In his journal 'Zeitschrift fur Mathematik und Physik,' ... he published [a note on them] in 1896. Cantor quotes these equations as examples of equations in three variables which can be solved by the method of differential calculus due to Niels Henrik Abel. ... The proof of Sintsov is much simpler and elegant.
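The form of the general solution is easy to check: for any single-variable function q, f(x, y) = q(x) - q(y) satisfies Sintsov's equation, because the q(y) terms cancel. A small numerical sketch (the particular q below is an arbitrary choice for illustration):

```python
import math

def q(t):
    # any function of one variable works here
    return math.sin(t) + t ** 2

def f(x, y):
    # the general real solution of Sintsov's equation
    return q(x) - q(y)

# f(x, y) + f(y, z) = (q(x) - q(y)) + (q(y) - q(z)) = q(x) - q(z) = f(x, z)
for (x, y, z) in [(0.3, 1.7, -2.5), (5.0, 0.1, 4.4)]:
    assert abs(f(x, y) + f(y, z) - f(x, z)) < 1e-12
print("Sintsov's equation holds for f(x, y) = q(x) - q(y)")
```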
Sintsov also took an interest in the history of mathematics and one of the major projects which he undertook in this area was the detailed study of the work of previous mathematicians at Kharkov University. This work provides a fascinating account of the development of mathematics there from the founding of the university in 1805.
The Ukrainian Academy of Sciences honoured Sintsov by electing him to membership on 22 February 1939.
### References (show)
1. I A Naumov, Dmitrii Matveevich Sintsov (his life and scientific and pedagogical work) (Kharkov University Press, 1955).
2. Ja P Blank, D M Sintsov (1867-1946), in Ja P Blank, D Z Gordevskii, A S Leibin and M A Nikolaenko (eds.), D M Sintsov, Papers on nonholonomic geometry (Kiev, 1972), 4-8.
3. Dmitrii Syntsov, Encyclopedia of Ukraine (Toronto-Buffalo-London, 1993).
4. D Gronau, A remark on Sincov's functional equation, Notices of the South African Mathematical Society 31 (1) (2000), 1-8.
5. List of the scientific works of D M Sintsov, in Ja P Blank, D Z Gordevskii, A S Leibin and M A Nikolaenko (eds.), D M Sintsov, Papers on nonholonomic geometry (Kiev, 1972), 286-293.
6. I A Naumov, Dmitrii Matveevich Sintsov on the 100th anniversary of his birth (Ukrainian), Ukrainskii Matematicheskii Zhurnal 20 (2) (1968), 232-237.
7. I A Naumov, Dmitrii Matveevich Sintsov on the 100th anniversary of his birth, Ukrainian Mathematical Journal 20 (2) (1968), 208-212.
8. I V Ostrovskii, Kharkov Mathematical Society, European Mathematical Society Newsletter 34 (December, 1999), 26-27.
http://math.stackexchange.com/questions/250459/whats-the-intuition-behind-non-integer-exponents-powers

# What's the intuition behind non-integer exponents/powers
Consider some $a \in \mathbb{R}$ and $x \in \mathbb{R}\backslash \mathbb{N}$.
Is there some intuition to be had for the number $a^x$?
For example the intuition of $a^2$ is obvious; it's $a*a$ which I can think about with real world objects such as apples (when $a \in \mathbb{N}$). What about $a^{1.9}$?
-
Having defined positive integer exponents, if you want the property $$a^m \cdot a^n= a^{m+n}$$ to continue to hold, then you must define $a^0=1$ and $a^{-n} = 1/a^n$ for integer $n$. This takes care of all integers. Then, if you want the property $$a^{mn} = (a^m)^n$$ to continue to hold, you must define $a^{p/q} = \sqrt[q]{a^p}$ (for positive real $a$, and integers $p$ and $q$, $q \ne 0$). This takes care of all rational numbers. And then, if you want the function $a^x$ to be continuous from $\mathbb{R} \to \mathbb{R}$, there is only one such extension.
-
If we fix $a>0$, $f(x)=a^x$ is continuous on $\mathbb R$. The intuition behind rational exponents is pretty clear, and one extends from the rationals to all reals in this manner.
-
This helps! – Jase Dec 4 '12 at 4:20
Nicely said, +1 (and >8k) – amWhy Dec 4 '12 at 4:24
Yet another intuition follows from the following formula: $$x^{\,y} = e^{\;y\log x}$$ (with appropriate restrictions of course).
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9055899381637573, "perplexity": 238.78539876035512}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257827077.13/warc/CC-MAIN-20160723071027-00013-ip-10-185-27-174.ec2.internal.warc.gz"} |
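A numerical sketch tying the answers together (assuming a > 0): the exp/log identity, the rational-exponent definition, and the built-in power operator all agree for a value like a^1.9.

```python
import math

a, x = 2.0, 1.9

# via the exp/log identity: a^x = e^(x log a)
v1 = math.exp(x * math.log(a))

# via the rational-exponent definition: 1.9 = 19/10, so a^(19/10) = (a^19)^(1/10)
v2 = (a ** 19) ** (1 / 10)

assert abs(v1 - v2) < 1e-12
assert abs(v1 - a ** x) < 1e-12
print(v1)  # about 3.732
```

This agreement on the rationals, plus continuity, is exactly what forces the unique extension to all real exponents described above.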
http://fdtd.kintechlab.com/en/fdtd |
Here one can find an introductory description of the Finite-Difference Time-Domain (FDTD) method.
# FDTD (Finite-Difference Time-Domain)
FDTD is one of the most popular numerical methods in computational electrodynamics. Since its introduction in the 1970s, the method has gained popularity due to several advantages:
• simplicity of explicit numerical scheme
• high parallel efficiency
• easiness of complex geometry generation
• ability to handle dispersive and nonlinear media
• natural description of impulsive regimes
FDTD includes various numerical techniques and options, such as algorithms for modeling dispersive and nonlinear media, different mesh types, postprocessing of simulation results, etc.
Real optical applications often require extensive parallel FDTD calculations. One can use existing commercial solutions for this purpose, but they do not provide open code that can be modified. To cover this gap we developed the Electromagnetic Template Library (EMTL).
## FDTD numerical experiment
A typical FDTD numerical experiment includes the following steps:
• The user specifies the computational volume and mesh resolution, the optical properties and geometry of the structure, the boundary conditions (typically periodic or absorbing), the wave source, and a set of points where field values should be recorded (we call them detectors).
• The source generates a pulse of finite temporal width impinging on the structure. Its propagation and scattering are recorded by the detectors and possibly transformed to the frequency domain. The total exit of the radiation through the absorbing boundaries determines the simulation time.
• The recorded field values are processed (for example, by integrating the energy flux through a chosen surface) to obtain optical characteristics of the structure.
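The explicit time-stepping at the heart of such an experiment can be sketched in a few lines. The following 1D vacuum example (illustrative Python/NumPy, not EMTL code; grid size, source position and Courant number are arbitrary choices) updates E and H on a staggered Yee grid with a soft Gaussian source:

```python
import numpy as np

def fdtd_1d(nx=200, nt=400, courant=0.5):
    """Minimal 1D FDTD (Yee) loop in normalized units with PEC ends."""
    ez = np.zeros(nx)        # E field at integer grid points
    hy = np.zeros(nx - 1)    # H field at half-integer grid points
    for n in range(nt):
        hy += courant * np.diff(ez)                       # update H from curl E
        ez[1:-1] += courant * np.diff(hy)                 # update E from curl H
        ez[nx // 4] += np.exp(-((n - 30) / 10.0) ** 2)    # soft Gaussian source
    return ez

ez = fdtd_1d()
assert np.all(np.isfinite(ez))   # the scheme is stable for courant <= 1
```

A real run would additionally record the fields at the detector points on every step and Fourier-transform them afterwards.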
# Using FDTD
FDTD can be used for various types of simulations: light scattering from arbitrarily shaped objects, modeling of source radiation in a specified electromagnetic environment, optical properties of resonators and waveguides. In this section we consider these and other possible examples in detail.
## Preliminary notes
The solution of Maxwell's equations for a field $\vec F$ (where $\vec F$ is $\vec E$ or $\vec H$), in the absence of free charges, current sources and any nonlinearities, can be represented as a superposition of harmonic fields:
$$\vec F(\vec r, t) = \sum_\omega \vec F_\omega(\vec r) \cos(\omega t + \phi_\omega).$$
It is convenient to regard each harmonic field as the real part of a complex vector $\vec F_c$, where $\vec F_c = \vec F_\omega e^{-i(\omega t + \phi_\omega)}$:
$$\vec F_\omega \cos(\omega t + \phi_\omega) = \operatorname{Re} \vec F_c.$$
The complex time dependence $e^{-i\omega t}$ is introduced for convenience purposes only and does not have any physical meaning.
The Poynting vector $\vec S = \vec E \times \vec H$ specifies the magnitude and direction of the rate of electromagnetic energy transfer. The instantaneous Poynting vector is a rapidly varying function of time for the frequencies that are usually of interest. Most instruments are not capable of following the rapid oscillations of the instantaneous Poynting vector, but respond to some time average $\langle \vec S \rangle$:
$$\langle \vec S \rangle = \frac{1}{T} \int_t^{t+T} \vec S(t') \, dt',$$
where $T$ is a time interval long compared with the period $2\pi/\omega$.
It can be shown that for a harmonic field
$$\langle \vec S \rangle = \tfrac12 \operatorname{Re}\left( \vec E_c \times \vec H_c^{*} \right).$$
Light intensity is the absolute value of $\langle \vec S \rangle$.
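This relation is easy to verify numerically for scalar amplitudes: take $E(t)=E_0\cos\omega t$ and $H(t)=H_0\cos(\omega t-\delta)$, average the product over one period, and compare with $\frac12\operatorname{Re}(E_c H_c^*)$ (illustrative Python; the amplitudes are arbitrary):

```python
import numpy as np

E0, H0, delta, omega = 2.0, 0.5, 0.7, 2 * np.pi

# sample exactly one period T = 1 on a uniform grid (endpoint excluded)
t = np.linspace(0.0, 1.0, 200000, endpoint=False)
S_inst = (E0 * np.cos(omega * t)) * (H0 * np.cos(omega * t - delta))
S_avg = S_inst.mean()                          # (1/T) * integral over one period

Ec = E0 + 0j                                   # complex amplitudes, e^{-i w t} dropped
Hc = H0 * np.exp(1j * delta)
S_formula = 0.5 * np.real(Ec * np.conj(Hc))    # = (1/2) E0 H0 cos(delta)

assert abs(S_avg - S_formula) < 1e-8
```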
## Absorption and scattering by arbitrary object
Consider an arbitrary object illuminated by a harmonic incident wave. The field in the medium surrounding the object can be represented as a superposition of incident and scattered fields:
$$\vec E = \vec E_i + \vec E_s, \qquad \vec H = \vec H_i + \vec H_s.$$
Here we consider how to estimate the energy scattered and absorbed by the object. We construct an imaginary closed surface $A$ around the object; the net rate $W_a$ at which electromagnetic energy crosses this surface inward is
$$W_a = -\oint_A \langle \vec S \rangle \cdot \hat n \, dA,$$
where $\hat n$ is the outward normal to the surface.
If $W_a > 0$, energy is absorbed within the volume confined by the surface. If the object is embedded in a nonabsorbing environment, $W_a$ is the rate at which energy is absorbed by the object.
The time-averaged Poynting vector can be represented as (we omit here the index $c$ for the complex vectors $\vec E_c$ and $\vec H_c$)
$$\langle \vec S \rangle = \langle \vec S_i \rangle + \langle \vec S_s \rangle + \langle \vec S_{ext} \rangle,$$
where
$$\langle \vec S_i \rangle = \tfrac12 \operatorname{Re}\left( \vec E_i \times \vec H_i^* \right), \qquad \langle \vec S_s \rangle = \tfrac12 \operatorname{Re}\left( \vec E_s \times \vec H_s^* \right), \qquad \langle \vec S_{ext} \rangle = \tfrac12 \operatorname{Re}\left( \vec E_i \times \vec H_s^* + \vec E_s \times \vec H_i^* \right).$$
The last term is a consequence of the interference between the incident and scattered fields.
After integrating over the surface $A$ we have
$$W_a = W_i - W_s + W_{ext},$$
$$W_i = -\oint_A \langle \vec S_i \rangle \cdot \hat n \, dA, \qquad W_s = \oint_A \langle \vec S_s \rangle \cdot \hat n \, dA, \qquad W_{ext} = -\oint_A \langle \vec S_{ext} \rangle \cdot \hat n \, dA.$$
For a nonabsorbing environment $W_i = 0$, and
$$W_{ext} = W_a + W_s.$$
These energy flow rates depend linearly on the incident wave intensity $I_i$. Their normalized values
$$C_{ext} = \frac{W_{ext}}{I_i}, \qquad C_{abs} = \frac{W_a}{I_i}, \qquad C_{sca} = \frac{W_s}{I_i}$$
are the extinction, absorption and scattering cross sections, with dimensions of area.
We may define efficiencies for extinction, scattering and absorption
$$Q_{ext} = \frac{C_{ext}}{G}, \qquad Q_{sca} = \frac{C_{sca}}{G}, \qquad Q_{abs} = \frac{C_{abs}}{G},$$
where $G$ is the object's cross-sectional area projected onto a plane perpendicular to the incident beam (e.g. $G = \pi a^2$ for a sphere of radius $a$). Particles can scatter and absorb more light than is geometrically incident upon them (the corresponding efficiencies are greater than unity) if their sizes are comparable to or smaller than the incident wavelength.
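A small worked example of these definitions (all numbers are hypothetical, chosen only to exercise the formulas):

```python
import math

I_i = 100.0                     # incident intensity, W/m^2 (hypothetical)
W_a, W_s = 3.0e-12, 5.0e-12     # absorbed / scattered power, W (hypothetical)
a = 1.0e-7                      # sphere radius, m

W_ext = W_a + W_s               # nonabsorbing environment: W_ext = W_a + W_s
C_abs, C_sca, C_ext = W_a / I_i, W_s / I_i, W_ext / I_i
G = math.pi * a ** 2            # geometric cross section of the sphere
Q_abs, Q_sca, Q_ext = C_abs / G, C_sca / G, C_ext / G

assert math.isclose(C_ext, C_abs + C_sca)
assert Q_ext > 1    # efficiencies can exceed unity for wavelength-scale particles
```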
## The amplitude scattering matrix
This matrix is used to characterize angular distribution of scattered light.
Consider an object illuminated by a harmonic wave.
The direction of propagation of the incident light defines the $z$ axis, the forward direction. Any point in the object may be chosen as the origin of a rectangular coordinate system, where the $x$ and $y$ axes are orthogonal to the $z$ axis and to each other but otherwise arbitrary. The orthonormal basis vectors $\hat e_x$, $\hat e_y$, $\hat e_z$ are in the directions of the positive $x$, $y$ and $z$ axes.
The scattering direction $\hat e_r$ and the forward direction $\hat e_z$ define a scattering plane. This plane is uniquely determined by the azimuthal angle $\phi$, except when $\hat e_r$ is parallel to the $z$ axis. In these two instances ($\theta = 0, \pi$), any plane containing the $z$ axis is a suitable scattering plane.
It is convenient to resolve the incident electric field $\vec E_i$, which lies in the $xy$ plane, into components parallel and perpendicular to the scattering plane:
$$\vec E_i = E_{\parallel i}\, \hat e_{\parallel i} + E_{\perp i}\, \hat e_{\perp i}.$$
The orthonormal basis vectors
$$\hat e_{\parallel i} = \cos\phi\, \hat e_x + \sin\phi\, \hat e_y, \qquad \hat e_{\perp i} = \sin\phi\, \hat e_x - \cos\phi\, \hat e_y$$
form a right-handed triad with the propagation direction $\hat e_z$:
$$\hat e_{\perp i} \times \hat e_{\parallel i} = \hat e_z.$$
We also have
$$\hat e_{\parallel i} = \sin\theta\, \hat e_r + \cos\theta\, \hat e_\theta, \qquad \hat e_{\perp i} = -\hat e_\phi,$$
where $\hat e_r$, $\hat e_\theta$, $\hat e_\phi$ are the orthonormal basis vectors associated with the spherical polar coordinate system $(r, \theta, \phi)$.
If the $x$ and $y$ components of the incident field are denoted by $E_x$ and $E_y$, then
$$E_{\parallel i} = \cos\phi\, E_x + \sin\phi\, E_y,$$
$$E_{\perp i} = \sin\phi\, E_x - \cos\phi\, E_y.$$
At sufficient distances from the origin ($kr \gg 1$), in the far-field region, the scattered field $\vec E_s$ is approximately transverse ($\hat e_r \cdot \vec E_s \simeq 0$) and has the asymptotic form
$$\vec E_s \sim \frac{e^{ikr}}{-ikr}\, \vec A, \qquad \hat e_r \cdot \vec A = 0,$$
where
$$\vec E_s = E_{\parallel s}\, \hat e_{\parallel s} + E_{\perp s}\, \hat e_{\perp s}.$$
The basis vector $\hat e_{\parallel s} = \hat e_\theta$ is parallel and $\hat e_{\perp s} = -\hat e_\phi$ is perpendicular to the scattering plane. Note, however, that $\vec E_i$ and $\vec E_s$ are specified relative to different sets of basis vectors. Because of the linearity of Maxwell's equations, the relation between them can be written in matrix form
$$\begin{pmatrix} E_{\parallel s} \\ E_{\perp s} \end{pmatrix} = \frac{e^{ik(r-z)}}{-ikr} \begin{pmatrix} S_2 & S_3 \\ S_4 & S_1 \end{pmatrix} \begin{pmatrix} E_{\parallel i} \\ E_{\perp i} \end{pmatrix},$$
where the $S_j$ ($j = 1, 2, 3, 4$) are the elements of the amplitude scattering matrix; they depend in general on the scattering angle $\theta$ and the azimuthal angle $\phi$.
Experimental measurement of the elements $S_j$ is difficult. However, the amplitude scattering matrix is related to the elements of the so-called scattering matrix, the measurement of which poses considerably fewer experimental problems. The scattering matrix is a real $4 \times 4$ matrix with 7 independent elements, which can be expressed using the absolute values of the $S_j$ and the phase differences between them. One can find more detailed information in chapter 3.3 of the book 1).
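As an illustration of how the matrix relation is applied (the numerical values below are hypothetical; taking $S_3 = S_4 = 0$ with $S_2 = S_1\cos\theta$ mimics the small-particle limit):

```python
import numpy as np

k, r, z, theta = 2 * np.pi / 500e-9, 1.0, 0.0, np.pi / 3   # far field: k*r >> 1
S1 = 1e-3 + 0j                        # hypothetical amplitude
S2, S3, S4 = S1 * np.cos(theta), 0j, 0j

S = np.array([[S2, S3],
              [S4, S1]])
E_inc = np.array([1.0, 0.5])          # (E_parallel, E_perp) of the incident wave

prefactor = np.exp(1j * k * (r - z)) / (-1j * k * r)
E_sca = prefactor * (S @ E_inc)       # (E_parallel_s, E_perp_s)

# scattered amplitude falls off as 1/(k r)
assert np.isclose(abs(E_sca[1]), abs(S1) * 0.5 / (k * r))
```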
# Applications
EMTL has been applied to a wide range of optical applications. Here is a selected list of publications:
• A. Deinega, I. Valuev, B. Potapkin and Yu. Lozovik, “Minimizing light reflection from dielectric textured surfaces,” JOSA A 28, 770 (2011)
• S. Zalyubovskiy et al., “Theoretical limit of localized surface plasmon resonance sensitivity to local refractive index change and its comparison to conventional surface plasmon resonance sensor”, JOSA A 29, 994 (2012)
• A. Deinega, S. John, “Solar power conversion efficiency in modulated silicon nanowire photonic crystals”, J. Appl. Phys. 112, 074327 (2012)
• S. Belousov et al., “Using metallic photonic crystals as visible light sources”, Phys. Rev. B 86, 174201 (2012)
• A. Deinega, S. Eyderman, S. John, “Coupled optical and electrical modeling of solar cell based on conical pore silicon photonic crystals”, J. Appl. Phys. 113, 224501 (2013)
1) C. F. Bohren and D. R. Huffman: Absorption and Scattering of Light by Small Particles, Wiley-Interscience, New York (1983)
/home/kintechlab/fdtd.kintechlab.com/docs/data/pages/en/fdtd.txt · Last modified: 2013/08/08 21:25 by deinega | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8509334325790405, "perplexity": 1295.12053995543}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655900614.47/warc/CC-MAIN-20200709162634-20200709192634-00366.warc.gz"} |
https://hal-univ-artois.archives-ouvertes.fr/hal-03517923 | A Compilation of Succinctness Results for Arithmetic Circuits - Archive ouverte HAL
Conference Papers
## A Compilation of Succinctness Results for Arithmetic Circuits
Alexis de Colnet
Stefan Mengel
#### Abstract
Arithmetic circuits (AC) are circuits over the real numbers with 0/1-valued input variables whose gates compute the sum or the product of their inputs. Positive AC -- that is, AC representing non-negative functions -- subsume many interesting probabilistic models such as probabilistic sentential decision diagram (PSDD) or sum-product network (SPN) on indicator variables. Efficient algorithms for many operations useful in probabilistic reasoning on these models critically depend on imposing structural restrictions to the underlying AC. Generally, adding structural restrictions yields new tractable operations but increases the size of the AC. In this paper we study the relative succinctness of classes of AC with different combinations of common restrictions. Building on existing results for Boolean circuits, we derive an unconditional succinctness map for classes of monotone AC -- that is, AC whose constant labels are non-negative reals -- respecting relevant combinations of the restrictions we consider. We extend a small part of the map to classes of positive AC. Those are known to generally be exponentially more succinct than their monotone counterparts, but we observe here that for so-called deterministic circuits there is no difference between the monotone and the positive setting which allows us to lift some of our results. We end the paper with some insights on the relative succinctness of positive AC by showing exponential lower bounds on the representations of certain functions in positive AC respecting structured decomposability.
#### Domains
Computer Science [cs]
### Dates and versions
hal-03517923 , version 1 (08-01-2022)
### Identifiers
• HAL Id : hal-03517923 , version 1
### Cite
Alexis de Colnet, Stefan Mengel. A Compilation of Succinctness Results for Arithmetic Circuits. 18th International Conference on Principles of Knowledge Representation and Reasoning, KR 2021, Nov 2021, Online Event, Vietnam. ⟨hal-03517923⟩
https://www.physicsforums.com/threads/reexamination-of-fundamental-mathematical-concepts.25913/ | # Reexamination of fundamental mathematical concepts
1. May 16, 2004
### WWW
Hi,
In the attached address ( http://www.geocities.com/complementarytheory/M_E.pdf ) you can find my reexamination of fundamental mathematical concepts.
Thank you,
WWW
2. May 16, 2004
### Janitor
Where is Matt Grime these days?
3. May 16, 2004
### pallidin
My thoughts... your theory fails by virtue of its foundation.
To say that "x" defines "something" but also that "x" defines "nothing" renders further mathematical analysis under those conditions impossible.
It's much like my saying that "x" equals "1" but also equals "0" , so any equations using that standard can be multi-interpreted, thus rendering those equations invalid, pointless and of no use.
Last edited: May 16, 2004
4. May 16, 2004
### WWW
Hi pallidin,
Can you see beyond the 0 XOR 1 excluded-middle reasoning?
In this pdf I use an included-middle reasoning where x is a GENERAL notation for any concept.
If you try to force the excluded-middle reasoning on what is written in my pdf, then you don't give yourself any chance to be able to understand it.
So, please give yourself the chance: put aside your excluded-middle reasoning and try to read it again with an open mind until the end, before you air your view about it.
Thank you,
WWW
Last edited: May 16, 2004
5. May 16, 2004
### Russell E. Rierson
What is the stratification of relations in your system?
6. May 16, 2004
### WWW
Last edited by a moderator: May 1, 2017 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9125174880027771, "perplexity": 5844.09186901725}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591497.58/warc/CC-MAIN-20180720041611-20180720061611-00084.warc.gz"} |
https://www.physicsforums.com/threads/how-can-one-find-the-area-between-the-curves.90727/ | How can one find the area between the curves
1. Sep 25, 2005
mathelord
When three curves intersect (I mean like the intersection of three straight lines to give a triangle), how can one find the area between the curves?
2. Sep 25, 2005
mathmike
area of a triangle is given by 1/2 * b * h
3. Sep 25, 2005
Poncho
I think mathelord means any three curves. You can use a double integral. Do you know calculus?
4. Sep 25, 2005
Werg22
If you don't know calculus, you calculate the height. The product of the slopes of two perpendicular lines is -1.
5. Sep 26, 2005
HallsofIvy
Staff Emeritus
The man said curves! Assuming he is asking about the area of the region formed by three general curves, he will need to use calculus.
Exactly how that is done depends on the curves themselves. In the very common situation, a sort of "curvy" triangle, where you have one curve under the other two (between the points where the other two intersect it), then you don't need a double integral. You will need to break the integral into two parts. I'm going to call the curve on the bottom C1, the graph of y= f1(x), and the other two C2 and C3, graphs of y= f2(x), y= f3(x) respectively. Let's say that C2 intersects C1 at x= a, C3 intersects C1 at x= c, and that C2 is below C3 until they intersect at x= b, after which C3 is below C2.
Then the area is given by two separate integrals:
$$\int_a^b(f2(x)-f1(x))dx+ \int_b^c(f3(x)-f1(x))dx$$
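To make this concrete, take the three lines f1(x)=0, f2(x)=x, f3(x)=2-x, which bound a triangle with vertices (0,0), (1,1), (2,0), so a=0, b=1, c=2. A quick numerical check (Python with SciPy; the thread itself shows no code, this is only an illustration):

```python
from scipy.integrate import quad

f1 = lambda x: 0.0        # bottom curve C1
f2 = lambda x: x          # C2, below C3 until x = b
f3 = lambda x: 2.0 - x    # C3

a, b, c = 0.0, 1.0, 2.0   # intersection abscissae

area, _ = quad(lambda x: f2(x) - f1(x), a, b)
part2, _ = quad(lambda x: f3(x) - f1(x), b, c)
area += part2

# agrees with the triangle formula (1/2) * base * height = (1/2) * 2 * 1
assert abs(area - 1.0) < 1e-10
```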
https://eng.libretexts.org/Bookshelves/Computer_Science/Programming_Languages/Book%3A_Embedded_Controllers_Using_C_and_Arduino_(Fiore)/5%3A_C_Storage_Types_and_Scope/5.1%3A_Types | # 5.1: Types
C has several ways of storing or referencing variables. These affect the way variables behave. Some of the more common ones are: auto, register, and static.
Auto variables are variables declared within functions that are not static or register types; that is, the auto keyword is the default. Normally, auto variables are created on the application's stack, although C doesn't require this. The stack is basically a chunk of memory that is allocated for the application's use when it is first run. It is a place for temporary storage, with values pushed onto and popped off of the stack in last-in, first-out order (like a stack of plates). Unless you initialize an auto variable, you have no idea what its value is when you first use it. Its value happens to be whatever was in that memory location the previous time it was used. It is important to understand that this includes subsequent calls to a function (i.e., its prior value is not "remembered" the next time you call the function). This is because any subsequent call to a function does not have to produce the same memory locations for these variables, any more than you always wind up with the same plate every time you go to the cafeteria.
Register variables are similar to auto types in behavior, but instead of using the usual stack method, a CPU register is used (if available). The exact implementation is CPU and compiler specific. In some cases the register keyword is ignored and a simple auto type is used. CPU registers offer faster access than normal memory, so register variables are used to create faster execution of critical code. Typically this includes counters or pointers that are incremented inside of loops. A declaration would look something like this:
register int x;
Static variables are used when you need a variable that maintains its value between function calls. So, if we need a variable that will “appear the way we left it” from the last call, we might use something like this:
static char y;
There is one important difference between auto and static types concerning initialization. If an auto variable is initialized in a function as so:
char a=1;
Then a is set to 1 each time the function is entered. If you do the same initialization with a static, as in:
static char b=1;
Then b is set to 1 only on the first call. Subsequent entries into the function would not incur the initialization. If it did reinitialize, what would be the sense of having a static type? This is explained by the fact that a static does not usually use the stack method of storage, but rather is placed at a fixed memory location. Again, C does not require the use of a stack, rather, it is a typical implementation.
Two useful but not very common modifiers are volatile and const. A volatile variable is one that can be accessed or modified by another process or task. This has some very special uses (typically, to prevent an optimizing compiler from being too aggressive with optimizations; more on this later). The const modifier is used for declaring constants, that is, variables that should not change value. In some instances this is preferred over using #define as type checking is now available (but you can't use the two interchangeably).
https://mathematica.stackexchange.com/questions/79069/compute-expectation-numerically | # Compute expectation numerically
I want to compute this expectation numerically with Mathematica. I could not figure out how to solve it numerically. Could you please help me?
$$E(x|p)=\sum _{i=0}^m \sum _{j=0}^m \binom{m}{i}\binom{m}{j}\frac{(j+1)}{(m+2)}p^i(1-p)^{m-i}\frac{Beta(i+j+1,2m-i-j+1)}{Beta(i+1,m-i+1)}$$
Here m is an integer and p is a probability. I want to compute this expectation numerically. Analytically, I obtained the solution.
The Mathematica format of the expression is below
Sum[
(Binomial[m, i])*(Binomial[m, j])*(Beta[i + j + 1, m + m - i - j + 1])*
((Beta[i + 1, m - i + 1])^-1)*(p^i)*((1 - p)^(m - i))*(j + 1)/(m + 2),
{i, 0, m}, {j, 0, m}]
• Welcome to Mathematica.SE! I suggest that: 1) You take the introductory Tour now! 2) When you see good questions and answers, vote them up by clicking the gray triangles, because the credibility of the system is based on the reputation gained by users sharing their knowledge. Also, please remember to accept the answer, if any, that solves your problem, by clicking the checkmark sign! 3) As you receive help, try to give it too, by answering questions in your area of expertise. – bbgodfrey Apr 4 '15 at 23:25
• Please include this expression in Mathematica format, so that readers can work with it more easily. – bbgodfrey Apr 4 '15 at 23:27
I believe the symbolic evaluation is not correct, but the correct simplified expression can be found. I await insight from others as to the reasons. Here, the Beta function with integer arguments has just been rewritten in terms of factorials.
func[a_, b_] := Factorial[a - 1] Factorial[b - 1]/Factorial[a + b - 1];
f[m_] := FullSimplify@
  Sum[Binomial[m, i] Binomial[m, j] (j + 1) p^i (1 - p)^(m - i) *
    func[i + j + 1, 2 m - i - j + 1]/((m + 2) func[i + 1, m - i + 1]),
   {i, 0, m}, {j, 0, m}]
FindSequenceFunction[(f[#] /. p -> u) & /@ Range[10], x]
Using the found function:
sf[x_, u_] := (2 + 2 x + u x^2)/(2 + x)^2
Note: sf[14,6/10] yields 369/640 (0.576563).
Testing (not proof):
Row[Grid[#, Frame -> All] & /@
Partition[Table[{j, f[j], Simplify[sf[j, p]]}, {j, 1, 30}], 10]]
and for fun:
Manipulate[
Plot[{Evaluate[sf[n, p]], p}, {p, 0, 1}, Frame -> True,
FrameLabel -> {"p", "E[x|p]"}, PlotLegends -> "Expressions"], {n, 2,
100, 1}]
• Thanks a lot for your help, it's really great work! – Jimmy Dur Apr 5 '15 at 7:09
• Nice work as always, plus to you. – ciao Apr 5 '15 at 8:38
This?
FullSimplify[Sum[
Binomial[m, i] Binomial[m, j] (j + 1)/(m + 2) p^i (1 - p)^(m - i)
Beta[i+j+1, 2m-i-j+1]/Beta[i+1,m-i+1], {i, 0, m}, {j, 0, m}]]
(* gives (2 (1+m)(1-p)^m)/(2+m)^2 *)
BUT your latex doesn't match your Mathematica and I don't know which to trust.
• Thanks for the correction, I made a typo. And what you found is the analytical solution; I need to solve it numerically. – Jimmy Dur Apr 5 '15 at 0:16
• Your analytical solution is correct, I made a mistake. I have already done some computation as you said for fixed m and p, but I am not sure it's the numerical solution of this expectation. I was thinking I could use some numerical methods and get a result after some iterations. – Jimmy Dur Apr 5 '15 at 0:40
• another thing is that when we do numerical solution as you wrote down, for p=0.6; m=14; Sum[Binomial[m, i] Binomial[m, j] (j+1)/(m+2) p^i (1-p)^(m-i) Beta[i+j+1, 2m-i-j+1]/Beta[i+1, m-i+1], {i, 0, m}, {j, 0, m}]] which gives us 0.576563 .But if I plug m=14 and p=0.6 in the analytical solution (2 (1+m)(1-p)^m)/(2+m)^2 , it gives 3.14573*10^-7. I dont know why but these two solutions do not agree.Thats kind of weird. – Jimmy Dur Apr 5 '15 at 0:51
• That is odd. My first guess is round-off error. So change 0.6 to 6/10, restart and try again. Still not the same. You might dig into this and see if you can figure this out. You might learn something from doing that. You might have found an "unexpected behavior" in MMA, but see if you can find an explanation before jumping to any conclusion. – Bill Apr 5 '15 at 1:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3303624093532562, "perplexity": 2807.148480328467}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574039.24/warc/CC-MAIN-20190920134548-20190920160548-00163.warc.gz"} |
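For what it's worth, the closed form found above can be cross-checked outside Mathematica. Here is a direct numerical evaluation of the double sum (a Python sketch; `beta_int` is the Beta function restricted to positive integer arguments, written via factorials as in `func` above):

```python
from math import comb, factorial

def beta_int(a, b):
    """Beta(a, b) for positive integer arguments: (a-1)! (b-1)! / (a+b-1)!."""
    return factorial(a - 1) * factorial(b - 1) / factorial(a + b - 1)

def expectation(m, p):
    """Direct evaluation of the double sum E(x|p)."""
    return sum(
        comb(m, i) * comb(m, j) * (j + 1) / (m + 2)
        * p**i * (1 - p)**(m - i)
        * beta_int(i + j + 1, 2 * m - i - j + 1) / beta_int(i + 1, m - i + 1)
        for i in range(m + 1) for j in range(m + 1)
    )

def closed_form(m, p):
    """Simplified expression found via FindSequenceFunction above."""
    return (2 + 2 * m + p * m**2) / (2 + m)**2

# matches the value 369/640 reported in the thread for m = 14, p = 0.6
assert abs(expectation(14, 0.6) - 369 / 640) < 1e-9
assert all(abs(expectation(m, 0.3) - closed_form(m, 0.3)) < 1e-9
           for m in range(1, 12))
```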
http://math.stackexchange.com/questions/167090/maximal-ideal-space-of-a-quotient-or-banach-subalgebra/169892 | # Maximal ideal space of a quotient or Banach subalgebra
Let $\mathcal{A}$ be a commutative unital Banach algebra, $\mathcal{B} \subset \mathcal{A}$ a closed unital subalgebra, $\mathcal{I} \subset \mathcal{B}$ a closed ideal.
Is there in general a way to "identify" (soft question, perhaps) the maximal ideal spaces $\Sigma(\mathcal{B})$ and $\Sigma(\mathcal{A}/\mathcal{I})$, in terms of $\Sigma(\mathcal{A})$ and possibly some sort of other data? A couple of examples of the sort of thing I have in mind:
• If $\mathfrak{X}$ is a Banach space and $M \subset \mathfrak{X}$ a closed subspace, then $M^* \simeq \mathfrak{X}^*/M^\perp$ and $(\mathfrak{X}/M)^* \simeq M^\perp$, where $$M^\perp = \{f \in \mathfrak{X}^* \mid \forall m \in M: \, f(m) = 0\}.$$
• In the special case where $\mathcal{A}$ is a $C^*$-algebra, the contravariant equivalence of categories with (compact Hausdorff spaces, continuous maps) implies that $C^*$-subalgebras of $\mathcal{A}$ correspond to quotients of $\Sigma(\mathcal{A})$, and quotients of $\mathcal{A}$ correspond to closed subspaces of $\Sigma(\mathcal{A})$.
Are there some sort of analogous relationships with commutative Banach algebras? An example: Viewing the disc algebra $A(\mathbb{D})$ as a closed subalgebra of $C(\mathbb{T})$, the above analogies might lead us to expect that $\Sigma(A(\mathbb{D}))$ is a quotient of $\Sigma(C(\mathbb{T}))$. But $\Sigma(A(\mathbb{D})) \simeq \overline{\mathbb{D}}$ while $\Sigma(C(\mathbb{T})) \simeq \mathbb{T}$, so it looks like in this case the relationship is a subspace rather than a quotient (and going the other direction).
-
There's some relevant stuff in Rickart's General theory of Banach algebras (the U of Iowa library has a copy). E.g.: "THEOREM (3.1.17). Let $\tau$ be a homomorphism of $\mathfrak B$ onto $\mathfrak A$ and let $\mathfrak K$ be the kernel of $\tau$. Then the dual mapping of $\Phi_{\mathfrak A^\infty}$ into $\Phi_{\mathfrak B^\infty}$ takes $\Phi_{\mathfrak A}$ homeomorphically onto $\mathcal h(\mathfrak K)$, the hull in $\Phi_{\mathfrak B}$ of the ideal $\mathfrak K$." – Jonas Meyer Jul 5 '12 at 16:11
Placeholder until I can come back with a more thought-out answer.
Take B to be the Jacobson radical of A to see that the natural map from max ideal space of A to that of B need not be injective.
Take B to be the disc algebra and A to be C(T), as you did, to see that said map need not be surjective.
As Jonas has mentioned in his comment, the natural map from max ideal space of A/I to that of A will be injective with closed range.
In the non-unital setting, note that one can have commutative Banach algebras with trivial Jacobson radical which quotient onto radical Banach algebras. The standard example is the Volterra algebra arising as a quotient of the convolution algebra L^1(R_+).
-
I'm not really sure what you're asking. $\Sigma$ is still a contravariant functor from commutative Banach algebras to compact Hausdorff spaces (it just isn't an equivalence), so from the sequence of morphisms $$B \to A \to A/I$$
you get a sequence of morphisms in the other direction $$\Sigma(A/I) \to \Sigma(A) \to \Sigma(B)$$
but I don't think there's much you can say anything in general about the corresponding morphism $\Sigma(A/I) \to \Sigma(B)$ without more information.
-
I voted up because this answer points out that you still get maps the other way, which was not pointed out in the question. However, I do think that more can be said, even if it is (admittedly) a soft question. – Jonas Meyer Jul 5 '12 at 16:20
+1 for noting that the Gelfand rep is a functor from CBAs to Spaces^op, and not just some theorem about the category of C^*-algebras, contrary to the impression one might get from some posts on MO... ;-) – user16299 Jul 5 '12 at 23:11
Contra @JonasMeyer, my gut feeling is that for general CBAs this is about as much as can be said, but I don't have immediate examples to hand. One major difference is that in any C^*-algebra (not just the commutative ones) all ideals have bounded approximate identities. Another is that for commutative C^*-algebras the Gelfand topology coincides with the hull-kernel topology. – user16299 Jul 5 '12 at 23:17
Thanks everyone. Here's what I understand so far:
• Regarding the maximal ideal space of a quotient, one has (in the unital case) the identification $$\Sigma(\mathcal{A}/I) \simeq \text{hull}(I) = \{\omega \in \Sigma(\mathcal{A}) \mid \omega = 0 \text{ on } I\}.$$
• Regarding the maximal ideal space of a subalgebra, things aren't as clean. Denote by $E(\mathcal{B}) \subseteq \Sigma(\mathcal{B})$ the subspace of homomorphisms which are extendible to $\mathcal{A}$, and by $\sim$ the equivalence relation on $\Sigma(\mathcal{A})$ induced by restriction to $\mathcal{B}$. Then $$E(\mathcal{B}) \simeq \Sigma(\mathcal{A})/\sim.$$ Some examples:
(1) $\mathcal{A} = C(\mathbb{T})$, $\mathcal{B} = A(\mathbb{D})$ shows that $E(\mathcal{B})$ can be a proper subspace of $\Sigma(\mathcal{B})$. The multiplicative linear functionals on $\mathcal{B}$ correspond to evaluation at points in the closed disc $\overline{\mathbb{D}}$, but only those corresponding to points in $\mathbb{T}$ are extendible to $\mathcal{A}$.
(2) $\mathcal{B}$ could be the Jacobson radical of $\mathcal{A}$, showing that $\sim$ can be nontrivial.
(3) Another (unital) example where $\sim$ is nontrivial is $\mathcal{A} = C(K)$ and $\mathcal{B} = C(K/\approx)$ where $\approx$ is a (nontrivial) equivalence relation on the compact Hausdorff space $K$. Then $\sim$ is the same as $\approx$, modulo the identification of $K$ with $\Sigma(C(K))$ and $K/\approx$ with $\Sigma(C(K/\approx))$. Forgive my sense of humor, but I can't pass up the opportunity to write $\sim \simeq \approx$ and have it almost mean something.
(4) Let $\mathcal{B} \subseteq A(\mathbb{D})$ be the subalgebra generated by $z^2$, i.e. the functions whose odd Taylor coefficients are all zero. Then $\sim$ is the antipodal equivalence on $\Sigma(\mathcal{A}) \simeq \mathbb{T}$, and $\Sigma(\mathcal{B})$ is the quotient of the disc under the antipodal map. In this example we have both that $E(\mathcal{B})$ is properly contained in $\Sigma(\mathcal{B})$, and that $\sim$ is nontrivial.
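The key point in example (4) — elements of $\mathcal{B}$ only depend on $z^2$, so evaluation characters at antipodal points restrict to the same character on $\mathcal{B}$ — can be illustrated numerically. A minimal sketch (the polynomial below is an arbitrary element of $\mathcal{B}$, chosen for illustration):

```python
def f(z):
    # Any element of B is a power series in z^2, so it only "sees" z through z^2.
    w = z * z
    return 1 + 2 * w + 3 * w * w

z = complex(0.6, 0.8)   # a point on the unit circle T
# Characters at z and -z agree on every element of B, so they
# induce the same point of Sigma(B) under restriction.
assert f(z) == f(-z)
```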
Rather elementary considerations, but I hadn't really thought through them before. Thanks for your patience.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9225945472717285, "perplexity": 228.20246119308604}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246655589.82/warc/CC-MAIN-20150417045735-00080-ip-10-235-10-82.ec2.internal.warc.gz"} |
https://indico.cern.ch/event/341292/contributions/1739269/ | # DIS 2015 - XXIII. International Workshop on Deep-Inelastic Scattering and Related Subjects
27 April 2015 to 1 May 2015
US/Central timezone
## A New Method for Indirect Mass Measurements using the Integral Charge Asymmetry at the LHC
29 Apr 2015, 15:20
20m
### THEATER
WG5 Heavy Flavours
### Speaker
Steve Guy Muanza (CPPM, Aix-Marseille Université, CNRS/IN2P3 (FR))
### Description
We propose a novel method for an indirect measurement of the mass of final states produced through charged current processes at the LHC. This method is based upon the process's integral charge asymmetry. First, the theoretical prediction of the integral charge asymmetry and its related uncertainties are studied through parton-level cross-section calculations. Then, the experimental extraction of the integral charge asymmetry of a given signal, in the presence of some background, is performed using particle-level simulations. Process-dependent templates make it possible to convert the measured integral charge asymmetry into an estimated mass of the charged final state. Finally, a combination of the experimental and the theoretical uncertainties determines the full uncertainty of the indirect mass measurement. This new method applies to all charged current processes at the LHC. In this study, we demonstrate its effectiveness at extracting the mass of the W boson, as a first step, and the sum of the masses of a chargino and a neutralino in case these supersymmetric particles are produced in pairs, as a second step. Note that, contrary to most of the usual mass reconstruction techniques that are based upon the kinematics of the event's final state, this method depends on the event's initial state and mainly reflects the charge asymmetry of the colliding protons.
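The abstract does not spell out the definition of the integral charge asymmetry; the conventional definition (an assumption here, not taken from the abstract) is $A = (\sigma^+ - \sigma^-)/(\sigma^+ + \sigma^-)$, computed from the positively and negatively charged cross sections or event counts:

```python
def integral_charge_asymmetry(n_plus, n_minus):
    """Conventional charge asymmetry A = (N+ - N-) / (N+ + N-)."""
    return (n_plus - n_minus) / (n_plus + n_minus)

# Illustrative numbers only: W+ production exceeds W- production in pp
# collisions because of the proton's valence-quark content (uud).
print(integral_charge_asymmetry(3.0, 1.0))  # 0.5
```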
### Primary author
Steve Guy Muanza (CPPM, Aix-Marseille Université, CNRS/IN2P3 (FR))
### Co-author
Thomas Clement Serre (Centre National de la Recherche Scientifique (FR))
Slides | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9830893278121948, "perplexity": 1862.7478314940365}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987829458.93/warc/CC-MAIN-20191023043257-20191023070757-00543.warc.gz"} |
http://math.stackexchange.com/questions/233453/minimal-conditions-for-convergence-of-measurable-functions | # Minimal conditions for convergence of measurable functions
If $X_i$ is a sequence of $\mathbb{R}^m$-valued random variables that converges either in probability or almost surely to $X$ and if $f$ is some measurable function from $\mathbb{R}^m$ into $\mathbb{R}$, does it follow that $f(X_i)$ converges to $f(X)$ as well?
I know about the continuous mapping theorem. Is there something similar for arbitrary measurable functions?
New Question: As the answer below suggested, let $\mathcal{C}$ be the set of functions $f$ that satisfy $f(X_i)$ converges in probability to $f(X)$ when $X_i$ converges in probability to $X$. What are the minimal properties of such a set?
-
How about $X_i=1/i$ with probability one and $f(x)=1_{\{x\neq 0\}}$. Then $X_i\to X=0$ almost surely, but $f(X_i)=1 \nrightarrow 0=f(X)$. – Stefan Hansen Nov 9 '12 at 8:58
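Stefan Hansen's counterexample is easy to check by hand: with $X_i = 1/i$ surely and $f = 1_{\{x\neq 0\}}$, every $f(X_i)$ equals $1$ while $f(X) = f(0) = 0$. A small sketch:

```python
def f(x):
    # Indicator of {x != 0}: measurable, but not continuous at 0.
    return 1 if x != 0 else 0

xs = [1 / i for i in range(1, 11)]  # X_i -> 0 (deterministic, so a.s.)
assert all(f(x) == 1 for x in xs)   # f(X_i) = 1 for every i
assert f(0) == 0                    # but f(X) = 0: f(X_i) does not converge to f(X)
```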
Note that you cannot avoid talking about continuity: If your statement holds for all random variables (esp. for the constant ones), then $f$ is continuous. – martini Nov 9 '12 at 9:17
Let $\cal C_1$ be the collection of measurable functions from $\Bbb R^m$ to $\Bbb R$ such that whenever $\{X_n\}$ is a sequence of random variables with values in $\Bbb R^m$ converging to $X$ almost surely, then $f(X_n)\to f(X)$ almost surely.
We define $\cal C_2$ in the same way, replacing "almost surely" by "in probability".
Fix $(x_1,\dots,x_m)\in\Bbb R^n$, and $\{(x_1^{(k)},\dots,x_m^{(k)})\}_k$ an arbitrary sequence converging to $(x_1,\dots,x_m)$. Let $X=(x_1,\dots,x_m)$ and $X_k=(x_1^{(k)},\dots,x_m^{(k)})$ (constant random variables). Then $X_k\to X$ in probability and almost surely. If $f\in\cal C_i$, then $f$ is continuous at $(x_1,\dots,x_m)$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9901649951934814, "perplexity": 91.27989735501077}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422118973352.69/warc/CC-MAIN-20150124170253-00044-ip-10-180-212-252.ec2.internal.warc.gz"} |
https://competitive-exam.in/questions/discuss/in-cournot-model-each-firm-expects-a-reaction | In cournot model, each firm expects a reaction from his rival but the expected reaction is not:
important
materialized
accepted
rejected
Please do not use chat terms. Example: avoid using "grt" instead of "great". | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8817656636238098, "perplexity": 9053.682417076998}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665726.39/warc/CC-MAIN-20191112175604-20191112203604-00384.warc.gz"} |
https://www.interest-rates-data.com/risk-metric-ofer-abarbanel-online-library-2/ | # Risk metric (Ofer Abarbanel online library)
In the context of risk measurement, a risk metric is the concept quantified by a risk measure. When choosing a risk metric, an agent is picking an aspect of perceived risk to investigate, such as volatility or probability of default.[1]
Risk measure and risk metric
In a general sense, a measure is a procedure for quantifying something. A metric is that which is being quantified.[2] In other words, the method or formula to calculate a risk metric is called a risk measure.
For example, in finance, the volatility of a stock might be calculated in any one of the three following ways:
• Calculate the sample standard deviation of the stock’s returns over the past 30 trading days.
• Calculate the sample standard deviation of the stock’s returns over the past 100 trading days.
• Calculate the implied volatility of the stock from some specified call option on the stock.
These are three distinct risk measures. Each could be used to measure the single risk metric volatility.
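The first two measures in the list can be sketched directly: both are the sample standard deviation of returns, differing only in window length (the return series below is made up for illustration):

```python
import statistics

def realized_volatility(returns, window):
    """Sample standard deviation of the most recent `window` returns."""
    recent = returns[-window:]
    return statistics.stdev(recent)

returns = [0.01, -0.02, 0.005, 0.015, -0.01, 0.02, -0.005, 0.0]
vol_short = realized_volatility(returns, 4)  # shorter lookback window
vol_long = realized_volatility(returns, 8)   # longer lookback window
print(vol_short, vol_long)  # two distinct measures of the same metric
```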
References
1. Holton, Glyn A. (2004). "Defining risk" (pdf). Financial Analysts Journal. 60 (6): 19–25. doi:10.2469/faj.v60.n6.2669. Retrieved March 11, 2012.
2. Holton, Glyn A. (2002). "Risk Measure and Risk Metric". Retrieved March 11, 2012.
Ofer Abarbanel – Executive Profile
Ofer Abarbanel online library
Ofer Abarbanel online library | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9628539085388184, "perplexity": 2333.069999774258}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402131777.95/warc/CC-MAIN-20201001143636-20201001173636-00594.warc.gz"} |
https://www.gradesaver.com/textbooks/math/prealgebra/prealgebra-7th-edition/chapter-7-section-7-1-percents-decimals-and-fractions-practice-page-473/21 | # Chapter 7 - Section 7.1 - Percents, Decimals, and Fractions - Practice - Page 473: 21
27.5% is 0.275 as a decimal and $\frac{11}{40}$ as a fraction.
#### Work Step by Step
27.5% = 27.5(0.01) = 0.275 as a decimal.
27.5% = 27.5 $\times$ $\frac{1}{100}$ = $\frac{27.5}{100}$ = $\frac{27.5}{100} \times \frac{10}{10}$ = $\frac{275}{1000}$ = $\frac{11}{40}$ as a fraction.
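The same conversion can be double-checked with Python's exact rational arithmetic:

```python
from fractions import Fraction

percent = Fraction("27.5")          # 27.5%
decimal = percent / 100             # percent means "per hundred"
assert float(decimal) == 0.275      # 0.275 as a decimal
assert decimal == Fraction(11, 40)  # 11/40 as a fraction (automatically reduced)
print(decimal)  # 11/40
```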
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6288773417472839, "perplexity": 4241.763225583959}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742569.45/warc/CC-MAIN-20181115075207-20181115101207-00087.warc.gz"} |
https://christianjaques.ch/blog/ | Select Page
## It’s doctor, actually…
I am happy and proud to announce that I passed my PhD defense on the 27th of April of 2020. The title of my PhD is "Active illumination and computational methods for temporal and spectral super-resolution microscopy". The members of my jury were Michaël Unser, BIG...
## Watch my talk about our patent
[Sorry, it's in French] A presentation I gave at Idiap during the innovation day, on 28 August 2019. In it I talk about our patent (European patent EP19154253), filed by M. Liebling and myself. The video is available via the link:...
## Our paper won the best paper award (1st place) at IEEE ISBI 2019 !
We have a paper accepted at the IEEE ISBI2019 conference in Venice, Italy, called "Multi-spectral widefield microscopy of the beating heart through post-acquisition synchronization and unmixing" (available on ieeexplore, 10.1109/ISBI.2019.8759472). This paper has been...
## Valaisan fishes under a Valaisan microscope
The build of our microscope was kept very much "Valaisan": we've developed it at Idiap (based on the OpenSpim project) and the parts were machined in Sion, by the Base Aérienne. The first biological sample we've imaged was the wing of a fly. This was a nice sample,...
## The L1-magic, a take on the data term
Recent advances in Compressed Sensing (CS) have focused a great deal of attention onto $\ell_1$ norm minimization. This is due to the fact proven by D.L. Donoho (see "Compressed sensing", Donoho 2004) that minimizing an $\ell_1$ norm gets the same...
## PyBind is great!
I recently came across (thanks to Nicolas D.) a great library called PyBind that originally was a condensed part of Boost dealing with Python interfacing. It allows to bind C++ and Python in many ways, relies heavily on meta-programmation (it is headers-only). With... | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6515958905220032, "perplexity": 11550.832636687806}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487621519.32/warc/CC-MAIN-20210615180356-20210615210356-00159.warc.gz"} |
http://mathhelpforum.com/advanced-algebra/150084-help-finding-determinants-matrices-print.html | # Help finding determinants of matrices
• July 4th 2010, 03:28 PM
seanP
Help finding determinants of matrices
suppose:
|a b c|
|A| = |d e f | = 6
|g h i |
find the determinants of these matrices:
a)
|a c b |
B = |d f e |
|g i h |
b)2A
c) A^-1 (inverse of A)
Is there a way to find a number value for the determinants of these matrices only knowing that |A| = 6 ?
I know how to find a determinant, but this question is confusing to me since there are no values
• July 4th 2010, 03:37 PM
Also sprach Zarathustra
Hint for a:
recall the theorem which states that if you replace one column with another, the det. isn't changed. (wrong!!!)
for b.
det(2A)=2det(A)
for c.
det(A)=det(A^-1)
• July 4th 2010, 03:38 PM
pickslides
$|A|= a(ei-fh)-b(di-fg)+c(dh-eg) =aei-afh-bdi+bfg+cdh-ceg= 6$
$|B| =a(fh-ei)-c(dh-eg)+b(di-fg) =-aei+afh+bdi-bfg-cdh+ceg=$ $-(aei-afh-bdi+bfg+cdh-ceg)=-|A| = -6$
• July 4th 2010, 03:51 PM
Chris11
• July 4th 2010, 03:53 PM
seanP
thanks to both of you. just to clarify, zarathustra is right for b and c ?
• July 4th 2010, 05:58 PM
Soroban
Hello, seanP!
Quote:
$\text{Suppose: }\;A \:=\:\left|\begin{array}{ccc} a&b&c \\ d&e&f \\ g&h&i\end{array}\right| \;=\;6$
Find the determinants of these matrices:
$a)\;B \;=\;\left|\begin{array}{ccc}a&c&b \\ d&f&e \\ g&i&h \end{array}\right|$
Since two columns are interchanged, the sign of the determinant is changed.
Therefore: $B \:=\:-6$
Quote:
$b)\;2A$
Too easy!
$2A \;=\;2(6) \;=\;12$
Quote:
$c)\;A^{-1}$
$\text{Since }A\cdot A^{-1} \:=\:I\,\text{ and }\,|I| \:=\:1$
$\text{we have: }\;6\cdot A^{-1} \:=\:1 \quad\Rightarrow\quad A^{-1} \:=\:\dfrac{1}{6}$
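The determinant rules in play here can be sanity-checked numerically. A small sketch (the matrix below is an arbitrary example with determinant 6; note that for an $n \times n$ matrix $\det(2A) = 2^n \det(A)$, and $\det(A^{-1}) = 1/\det(A)$ since $\det(A)\det(A^{-1}) = \det(I) = 1$):

```python
def det3(m):
    # Cofactor expansion of a 3x3 determinant along the first row.
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[1, 2, 3], [0, 1, 4], [0, 0, 6]]     # upper triangular: det = 1*1*6 = 6
B = [[1, 3, 2], [0, 4, 1], [0, 6, 0]]     # A with columns b and c swapped
A2 = [[2 * x for x in row] for row in A]  # the matrix 2A

assert det3(A) == 6
assert det3(B) == -6               # swapping two columns flips the sign
assert det3(A2) == 2**3 * det3(A)  # det(2A) = 2^3 * 6 = 48 for a 3x3 matrix
```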
• July 4th 2010, 06:17 PM
pickslides
Hi Soroban, the OP has stated (not so clearly!) that
$|A| \:=\:\left|\begin{array}{ccc} a&b&c \\ d&e&f \\ g&h&i\end{array}\right| \;=\;6$
so
$A \:=\:\left[\begin{array}{ccc} a&b&c \\ d&e&f \\ g&h&i\end{array}\right] \implies 2A \:=\:2\left[\begin{array}{ccc} a&b&c \\ d&e&f \\ g&h&i\end{array}\right]$
Your solution is for $2|A|\neq 2A$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8904682397842407, "perplexity": 4534.871862834921}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115863063.84/warc/CC-MAIN-20150124161103-00204-ip-10-180-212-252.ec2.internal.warc.gz"} |
http://martinat.tandu.it/you-are-odz/5124c0-finite-field-operations | finite field operations
... A finite field must be a finite dimensional vector space, so all finite fields have degrees. 0000003503 00000 n If you have previously obtained access with your personal account, please log in. Introduction to finite fields 2 2. Other classical applications of finite fields are error correcting codes and residue number systems. However, the set S is closed under the field operations, so S is itself a field. In the case of Zm, an exponentiation algorithm based on the Montgomery multiplication concept is also described. $\endgroup$ – MickG Jun 18 '14 at 12:37 The recursive direct inversion method presented for OTFs has significantly lower complexity than the known best method for inversion in optimal extension fields (OEFs), i.e., Itoh-Tsujii's inversion technique. Plus, Times, D — operators overloaded by the Finite Fields Package. and you may need to create a new Wiley Online Library account. The number of elements in a finite field is the order of that field. The basic arithmetic operations used in PKC are the addition, subtraction and multiplication operations in finite … 0000025774 00000 n Yes; No; Profile; Class/Job; Minions; Mounts; Achievements; Friends; Follow; Field Operations. In AES, all operations are performed on 8-bit bytes. 0000026239 00000 n A field is a special type of ring. 0000051088 00000 n Return the globally unique finite field of given order with generator labeled by the given name and possibly with given modulus. On the other hand, efficient finite field and ring arithmetic leads to efficient public-key cryptography. United States Patent 7142668 . 0000026465 00000 n Need a library in python that implements finite field operations like multiplication and inverse in Galois Field ( GF(2^n) ) Finite Fields. To create a prime field you can use the createPrimeField function. 0000012710 00000 n 0000004653 00000 n *,./,inv) for finite field. 
0000010936 00000 n 280 0 obj << /Linearized 1 /O 282 /H [ 1487 1782 ] /L 375051 /E 62351 /N 49 /T 369332 >> endobj xref 280 53 0000000016 00000 n Compute The Multiplication Between 01101011 And 00001011. Clear Castrum Lacus Litore 50 times. It is also possible for the user to specify their own irreducible polynomial generating a finite field. Many questions about the integers or the rational numbers can be translated into questions about the arithmetic in finite fields, which tends to be more tractable. Finite fields are constructed using the FlintFiniteField function. 0000020345 00000 n 2. The theory of finite fields is a key part of number theory, abstract algebra, arithmetic algebraic geometry, and cryptography, among others. Perhaps the most familiar finite field is the Boolean field where the elements are 0 and 1, addition (and subtraction) correspond to XOR, and multiplication (and division) work as normal for 0 and 1. Given two elements, (a n-1…a 1a 0) and (b n-1…b 1b 0), these operations are defined as follows. We call $$\ZZ _2$$ a field (specifically, the finite field of order $$2$$) since the operations of addition, multiplication, subtraction, and division all work as we would expect. 0000042688 00000 n 0000061307 00000 n The definition consists of the following elements. 5570. 0000011042 00000 n 0000008540 00000 n simple operations over finite fields; hence, the most important arithmetic operation for RSA based cryptographic systems is multiplication. NOTES ON FINITE FIELDS 3 2. It is also possible for the user to specify their own irreducible polynomial generating a finite field. 0000014064 00000 n 0000001411 00000 n name – string, optional. PyniteFields is meant to be fairly intuitive and easy to use. 0000006678 00000 n 0000008562 00000 n DOI: 10.2991/ICCST-15.2015.25 Corpus ID: 55623620. 
0000005985 00000 n In mathematics, a finite field or Galois field (so-named in honor of Évariste Galois) is a field that contains a finite number of elements.As with any field, a finite field is a set on which the operations of multiplication, addition, subtraction and division are defined and satisfy certain basic rules. 0000017809 00000 n A field is a set F with two binary operations + and × such that: 1) (F, +) is a commutative group with identity element 0. Galois Fields GF(p) • GF(p) is the set of integers {0,1, … , p-1} with arithmetic operations modulo prime p • these form a finite field –since have multiplicative inverses • hence arithmetic is “well-behaved” and can do addition, subtraction, multiplication, and division without leaving the field GF(p) 0000026831 00000 n FINITE FIELD ARITHMETIC. Classical examples are ciphering deciphering, authentication and digital signature protocols based on RSA‐type or elliptic curve algorithms. 0000013494 00000 n As far as I could tell: if $+$ and $\times$ are the only field operations then $\{1\}$ can only generate $\mathbb N = \{1,2,3,\ldots\}$, which isn't even a field! Similarly, division of field elements is defined in terms of multiplication: for a,b ∈F If you do not receive an email within 10 minutes, your email address may not be registered, 0000019528 00000 n The definition of a field 3 2.2. trailer << /Size 333 /Info 269 0 R /Root 281 0 R /Prev 369321 /ID[<3257d5715d6018337c3a90d6847a5b85>] >> startxref 0 %%EOF 281 0 obj << /Type /Catalog /Pages 268 0 R /Metadata 270 0 R >> endobj 331 0 obj << /S 2129 /T 2283 /Filter /FlateDecode /Length 332 0 R >> stream These operations include addition, subtraction, multiplication, and inversion. 0000014499 00000 n You could perhaps also look at the "finite" part of the term "finite field cryptography", but I am not aware of any practical cryptographic schemes that use an infinite field (such as unbounded rational numbers). 
Hardware Implementation of Finite-Field Arithmetic describes algorithms and circuits for executing finite-field operations, including addition, subtraction, multiplication, squaring, exponentiation, and division. Implementation of Finite Field Arithmetic Operations for Polynomial and Normal Basis Representations @inproceedings{Maulana2015ImplementationOF, title={Implementation of Finite Field Arithmetic Operations for Polynomial and Normal Basis Representations}, author={M. Maulana and Wenny … This is an interdisciplinary research area, involving mathematics, computer science, and electrical engineering. An isomorphism of the field K 1 onto the field K 2 is a one-to-one onto map that preserves both field operations, i.e., (+ ) = + (), () = () for all , in K 1. PyniteFields is implemented in Python 3. * Notifications for standings updates are shared across all Worlds. SetFieldFormat — set the output form of elements in a field. Finite Field Arithmetic Field operations AfieldF is equipped with two operations, addition and multiplication. I am working on a project that involves Koblitz curve for cryptographic purposes. This allows construction of finite fields of any characteristic and degree for which there are Conway polynomials. Currently, only prime fields are supported. Finite Fields DOUGLAS H. WIEDEMANN, MEMBER, IEEE Ahstruct-A “coordinate recurrence” method for solving sparse systems of linear equations over finite fields is described. The function has the following signature: Creates a prime field for the specified modulus. PyniteFields is implemented in Python 3. Given two elements, (a n-1…a 1a 0) and (b n-1…b 1b 0), these operations are defined as follows. Section 4.7 discusses such operations in some detail. United States Patent 6349318 . Use the link below to share a full-text version of this article with your friends and colleagues. However multiplication is more complicated operation and in terms of time and implementation area is more costly. 
The definition of a field. The formal properties of a finite field are: (a) There are two defined operations, namely addition and multiplication. If p is prime and f(x) an irreducible polynomial then Zp, Zp[x]/f(x), GF(p) and GF(pn) are finite fields for which inversion algorithms are proposed. 2.2 Finite Field Arithmetic Operat ions The efficiency of EC algorithms heavily depends on the performance of the underlying field arithmetic operations. golang arithmetic finite-fields bignumber finite-field-arithmetic bignum-library Updated Dec 22, 2020 Finite fields are provided in Nemo by Flint. Here is a quick overview of the provided functionality: 0000025796 00000 n Maps of fields 7 3.2. elliptic curves - elliptic curves with pre-defined parameters, including the underlying finite field. Definition and constructions of fields 3 2.1. INPUT: order – a prime power. A class library for operations on finite fields (a.k.a. In AES, all operations are performed on 8-bit bytes. This makes sense, because a finite field means that every value can be encoded in a constant amount of space (such as 256 bits), which is very convenient for practical implementations. 0000005363 00000 n This toolbox can handle simple operations (+,-,*,/,. To perform operations in a finite field, you'll first need to create a FiniteField object. name – string, optional. In particular, we disprove a conjecture from . Return the globally unique finite field of given order with generator labeled by the given name and possibly with given modulus. 0000011368 00000 n Hardware Implementation of Finite-Field Arithmetic, 1st Edition by Jean-Pierre Deschamps (9780071545815) Preview the textbook, purchase or get a FREE instructor-only desk copy. A quick intro to field theory 7 3.1. This chapter proposes algorithms allowing the execution of the main arithmetic operations (addition, subtraction, multiplication) in finite rings Zm and polynomial rings Zp[x]/f(x). 
Please check your email for instructions on resetting your password. A “finite field” is a field where the number of elements is finite. This thesis introduces a new tower field representation, optimal tower fields (OTFs), that facilitates efficient finite field operations. %PDF-1.4 %���� 0000033471 00000 n Many cryptographic protocols are based on the difficulty of factoring large composite integers or a related problem. 0000003751 00000 n 0000001487 00000 n The structure of a finite field is a bit complex. After defining fields, if we have one field K, we give a way to construct many fields from K by adjoining elements. The existence of these inverses implicitly defines the operations of subtraction and division. This is a toolbox providing simple operations (+,-,*,/,. Unlimited viewing of the article/chapter PDF and any associated supplements and figures. 0000007259 00000 n goff (go finite field) is a unix-like tool that generates fast field arithmetic in Go. 0000006656 00000 n 0000018469 00000 n This allows construction of finite fields of any characteristic and degree for which there are Conway polynomials. NOTES ON FINITE FIELDS AARON LANDESMAN CONTENTS 1. 0000008041 00000 n This invention relates to a method of accelerating operations in a finite field, and in particular, to operations performed in a field F 2 m such as used in encryption systems. So instead of introducing finite fields directly, we first have a look at another algebraic structure: groups. DEFINITION AND CONSTRUCTIONS OF FIELDS Before understanding finite fields, we first need to understand what a field is in general. 0000013226 00000 n Arithmetic processor for finite field and module integer arithmetic operations . However, finite fields play a crucial role in many cryptographic algorithms. In 1985, Victor S. Miller (Miller 1985) and Neal Koblitz (Koblitz 1987) proposed Elliptic Curve Cryptography (ECC), independently. ... 
A finite field must be a finite dimensional vector space, so all finite fields have degrees. (b) The result of adding or multiplying two elements from the field is always an element in the field. Finite Field Arithmetic (Galois field) Introduction: A finite field is also often known as a Galois field, after the French mathematician Pierre Galois. GAP supports finite fields of size at most 2^{16}. Characteristic — prime characteristic of a field. Implement Finite-Field Arithmetic in Specific Hardware (FPGA and ASIC) Master cutting-edge electronic circuit synthesis and design with help from this detailed guide. We consider implementations of multiplication with one fixed element in a binary finite field. The finite field arithmetic operations: addition, subtraction, division, multiplication and multiplicative inverse, need to be implemented for the development and research of stream ciphers, public key cryptosystems and cryptographic schemes over elliptic curves. 2.2 Finite Field Arithmetic Operat ions The efficiency of EC algorithms heavily depends on the performance of the underlying field arithmetic operations. With the appropriate definition of arithmetic operations, each such set S is a finite field. Working off-campus? FunctionOfCode FunctionOfCoefficients. The Wings of Time. Constructing field extensions by adjoining elements 4 3. Galois fields) which I find useful in my line of work. Finite fields are provided in Nemo by Flint. Apparatus and method for generating expression data for finite field operation . 0000050405 00000 n 0000062079 00000 n (c) One element of the field is the element zero, such that a + 0 = a for any element a in the field. The first section in this chapter describes how you can enter elements of finite fields and how GAP prints them (see Finite Field Elements). An automorphism of K is an isomorphism of K onto itself. 
The following Matlab project contains the source code and Matlab examples used for a toolbox for simple finite field operations. In order to obtain an efficient elliptic curve with 128-bit security and a prime order, we explore the use of finite fields GF(p^n), with p a small modulus (less … Characteristic of a field. Subtraction of field elements is defined in terms of addition: for a, b ∈ F, a − b = a + (−b), where −b is the unique element in F such that b + (−b) = 0 (−b is called the negative of b). To @MartinBrandenburg, who marked this as duplicate: I don't think so, for two reasons: 1) I'm asking about the whole group, not finite subgroups, and 2) I'm asking about a finite field, whereas the question this question has been marked as a possible duplicate of asks about the subgroups of a generic field's multiplicative group. Finite field operations are used as computation primitives for executing numerous cryptographic algorithms, especially those related with the use of public keys (asymmetric cryptography). Fast Multiplication in Finite Fields GF(2^N): The standard way to work with GF(2^N) is to write its elements as polynomials in GF(2)[X] modulo some irreducible polynomial (X) of degree N. Operations are performed modulo the polynomial (X), that is, using division by (X) with remainder. This division is time-consuming, and much work has … ( … denotes the remainder after multiplying/adding two elements). The formal properties of a finite field are: (a) There are two defined operations, namely addition and multiplication. The source code and files included in this project are listed in the project files section; please check whether the listed source code meets your needs. Finite Fields, also known as Galois Fields, are cornerstones for understanding any cryptography. We implement the finite field arithmetic …
AES uses operations performed over the finite field GF(2^8) with the irreducible polynomial x^8 + x^4 + x^3 + x + 1. Finite Field. Finite Fields, Sophie Huczynska (with changes by Max Neunhöffer), Semester 2, Academic Year 2012/13. Closed: any operation p… GF: represent a Galois field using its characteristic and irreducible polynomial coefficients. FINITE FIELD ARITHMETIC. Finite field operations are used as computation primitives for executing numerous cryptographic algorithms, especially those related with the use of public keys (asymmetric cryptography). The number of elements in a finite field is the order of that field. Am I right to assume that $-$ and $\div$ are field operations? Galois Field GF(2^m) Calculator. A field is a set F with two binary operations + and × such that: 1) (F, +) is a commutative group with identity element 0. Efficient Elliptic Curve Operations On Microcontrollers With Finite Field Extensions, Thomas Pornin, NCC Group, thomas.pornin@nccgroup.com, 3 January 2020, Abstract. Section 4.7 discusses such operations in some detail. The value of a − c is a + (−c), where −c is the additive inverse of c. ... 1.1 Finite fields: Well known fields having an infinite number of elements include the real numbers, R, the complex numbers C, and the rational numbers Q. You can find complete API definitions in galois.d.ts.
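The GF(2^8) arithmetic used by AES can be sketched in a few lines. The shift-and-reduce multiply below is one common textbook approach, not code from any of the sources quoted here:

```python
# Sketch of GF(2^8) arithmetic with the AES reduction polynomial
# x^8 + x^4 + x^3 + x + 1 (bit pattern 0x11B).
AES_POLY = 0x11B

def gf256_add(a, b):
    # addition (and subtraction) in GF(2^8) is bitwise XOR
    return a ^ b

def gf256_mul(a, b):
    # shift-and-add multiply, reducing whenever a overflows 8 bits
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= AES_POLY
    return p
```

With these definitions, gf256_add(0x57, 0x83) gives 0xD4 and gf256_mul(0x57, 0x83) gives 0xC1, matching the worked example in the FIPS-197 specification.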
Abstract: The present disclosure provides an arithmetic processor having an arithmetic logic unit with a plurality of arithmetic circuits, each for performing a group of associated arithmetic operations, such as finite field operations or modular integer operations.

6.2 Arithmetic Operations on Polynomials
6.3 Dividing One Polynomial by Another Using Long Division
6.4 Arithmetic Operations on Polynomials Whose Coefficients Belong to a Finite Field
6.5 Dividing Polynomials Defined over a Finite Field
6.6 Let’s Now Consider Polynomials Defined over GF(2)
6.7 Arithmetic Operations on Polynomials

See addition and multiplication tables. … finite fields are simple operations, which are usually performed in a single clock cycle. A finite field (also called a Galois field) is a field that has finitely many elements. The number of elements in a finite field is sometimes called the order of the field. FINITE FIELDS OF THE FORM GF(p): In Section 4.4, we defined a field as a set that obeys all of the axioms of Figure 4.2 and gave some examples of infinite fields. Classical examples are ciphering/deciphering, authentication and digital signature protocols based on RSA-type or elliptic curve algorithms. Finite fields are constructed using the FlintFiniteField function. Since splitting fields are minimal by definition, the containment S ⊂ F means that S = F.
The finite field arithmetic operations need to be implemented for the development and research of stream ciphers, block ciphers, public key cryptosystems and cryptographic schemes over elliptic curves. The performance of EC functionality directly depends on the efficiency of the implementation of operations with finite field elements such as addition, multiplication, and squaring. With the advances in computational power, RSA is becoming more and more vulnerable. INPUT: order – a prime power. Synthesis of Arithmetic Circuits: FPGA, ASIC, and Embedded Systems. Binary values expressed as polynomials in GF(2^m) can readily be manipulated using the definition of this finite field. Finite Fields Package. We prove some new results about two different XOR-metrics that have been used in the past. XOR-metrics measure the efficiency of certain arithmetic operations in binary finite fields. Multiplication is defined modulo P(x), where P(x) is a primitive polynomial of degree m. To this end, we first define fields. We claim that the splitting field F of this polynomial is a finite field of size p^n. The field F certainly contains the set S of roots of f(X). ... under the usual operations on power series (the integer m may be positive, … BACKGROUND OF THE INVENTION. Finite fields are eminently useful for the design of algorithms for generating pseudorandom numbers and quasirandom points and in the analysis of the output of such algorithms. Arithmetic follows the ordinary rules of polynomial arithmetic using the basic rules of algebra, with the following two refinements. These operations include addition, subtraction, multiplication, and inversion. A Galois field in which the elements can take q different values is referred to as GF(q). It is so named in honour of Évariste Galois, a French mathematician.
In particular, the arithmetic operations of addition, multiplication, and division are performed over the finite field GF(2^8). A group is a non-empty set (finite or infinite) G with a binary operator • such that the following four properties (Cain) are satisfied. The next sections describe the operations applicable to finite field elements (see Operations for Finite Field Elements). This implies that in most cases when the two conventions have to be used simultaneously, input bit strings have to be reflected first before finite field operations are applied, and the result reflected back, to comply with the standard (one can find an analysis of such a choice by Rogaway in , Remark 12.4.4, p. 130). | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.603271484375, "perplexity": 1023.1353971444103}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323584567.81/warc/CC-MAIN-20211016105157-20211016135157-00693.warc.gz"}
https://mooseframework.inl.gov/source/kernels/PorousFlowMassRadioactiveDecay.html | ## Information and Tools
Radioactive decay of a fluid component
This Kernel implements the weak form of the radioactive-decay mass-loss term for a fluid component, where all parameters are defined in the nomenclature.
## Input Parameters
• variable: The name of the variable that this Kernel operates on
C++ Type:NonlinearVariableName
Options:
Description:The name of the variable that this Kernel operates on
• decay_rate: The decay rate (units 1/time) for the fluid component
C++ Type:double
Options:
Description:The decay rate (units 1/time) for the fluid component
• PorousFlowDictator: The UserObject that holds the list of PorousFlow variable names.
C++ Type:UserObjectName
Options:
Description:The UserObject that holds the list of PorousFlow variable names.
### Required Parameters
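For reference, a sketch of how this Kernel might appear in an input file; the block name, variable name, dictator name and decay-rate value below are illustrative, not taken from this page:

```
[Kernels]
  [./radioactive_decay]
    type = PorousFlowMassRadioactiveDecay
    variable = tracer_mass_frac        # the variable this Kernel operates on
    fluid_component = 0                # index of the decaying fluid component
    decay_rate = 1E-5                  # units 1/time
    PorousFlowDictator = dictator      # UserObject listing PorousFlow variables
  [../]
[]
```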
• strain_at_nearest_qp (default: False): When calculating nodal porosity that depends on strain, use the strain at the nearest quadpoint. This adds a small extra computational burden, and is not necessary for simulations involving only linear lagrange elements. If you set this to true, you will also want to set the same parameter to true for related Kernels and Materials
Default:False
C++ Type:bool
Options:
Description:When calculating nodal porosity that depends on strain, use the strain at the nearest quadpoint. This adds a small extra computational burden, and is not necessary for simulations involving only linear lagrange elements. If you set this to true, you will also want to set the same parameter to true for related Kernels and Materials
• block: The list of block ids (SubdomainID) that this object will be applied to
C++ Type:std::vector
Options:
Description:The list of block ids (SubdomainID) that this object will be applied to
• fluid_component (default: 0): The index corresponding to the fluid component for this kernel
Default:0
C++ Type:unsigned int
Options:
Description:The index corresponding to the fluid component for this kernel
### Optional Parameters
• enable (default: True): Set the enabled status of the MooseObject.
Default:True
C++ Type:bool
Options:
Description:Set the enabled status of the MooseObject.
• save_in: The name of auxiliary variables to save this Kernel's residual contributions to. Everything about that variable must match everything about this variable (the type, what blocks it's on, etc.)
C++ Type:std::vector
Options:
Description:The name of auxiliary variables to save this Kernel's residual contributions to. Everything about that variable must match everything about this variable (the type, what blocks it's on, etc.)
• use_displaced_mesh (default: False): Whether or not this object should use the displaced mesh for computation. Note that in the case this is true but no displacements are provided in the Mesh block the undisplaced mesh will still be used.
Default:False
C++ Type:bool
Options:
Description:Whether or not this object should use the displaced mesh for computation. Note that in the case this is true but no displacements are provided in the Mesh block the undisplaced mesh will still be used.
• control_tags: Adds user-defined labels for accessing object parameters via control logic.
C++ Type:std::vector
Options:
Description:Adds user-defined labels for accessing object parameters via control logic.
• seed (default: 0): The seed for the master random number generator
Default:0
C++ Type:unsigned int
Options:
Description:The seed for the master random number generator
• diag_save_in: The name of auxiliary variables to save this Kernel's diagonal Jacobian contributions to. Everything about that variable must match everything about this variable (the type, what blocks it's on, etc.)
C++ Type:std::vector
Options:
Description:The name of auxiliary variables to save this Kernel's diagonal Jacobian contributions to. Everything about that variable must match everything about this variable (the type, what blocks it's on, etc.)
• implicit (default: True): Determines whether this object is calculated using an implicit or explicit form
Default:True
C++ Type:bool
Options:
Description:Determines whether this object is calculated using an implicit or explicit form
• vector_tags (default: time): The tag for the vectors this Kernel should fill
Default:time
C++ Type:MultiMooseEnum
Options:nontime time
Description:The tag for the vectors this Kernel should fill
• extra_vector_tags: The extra tags for the vectors this Kernel should fill
C++ Type:std::vector
Options:
Description:The extra tags for the vectors this Kernel should fill
• matrix_tags (default: system time): The tag for the matrices this Kernel should fill
Default:system time
C++ Type:MultiMooseEnum
Options:nontime system time
Description:The tag for the matrices this Kernel should fill
• extra_matrix_tags: The extra tags for the matrices this Kernel should fill
C++ Type:std::vector
Options:
Description:The extra tags for the matrices this Kernel should fill | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.15511389076709747, "perplexity": 5057.9400711691405}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376827137.61/warc/CC-MAIN-20181215222234-20181216004234-00350.warc.gz"} |
https://cs.stackexchange.com/questions/93524/probabilisitc-timed-automaton/93539 | # Probabilistic timed automaton
I am kind of new to the timed automaton domain. I am trying to understand in which way they differ from Markov Decision Processes. First, I know their objective is to resolve the non-determinism of an MDP. However, is there anything else? Are they still memoryless? I am not able to find any information about this topic.
The reason that you do not find any results is that, from a scientific point of view, the well-posed questions for PTAs and MDPs are quite different:
• MDPs typically have a reward function assigned, while PTAs do not necessarily have one.
• It is a bit imprecise to state that the "objectives is to solve the non-determinism of a MDP" -- it depends on what you want to do. PTAs are foremost models that you can analyse. Resolving the non-determinism means computing a policy, which makes sense if you have some kind of optimization criterion that you want to follow. This could be a temporal logic formula.
• In MDPs, the classical question is to find a policy that maximizes the limsup average payoff or the discounted payoff. In the latter case, there exist optimal memoryless policies (which is the result that you refer to). In PTAs, asking the same question is kind-of odd as the number of states in PTAs is infinite and there is (by default) nothing to optimize. A state consists of a location and a valuation to the clocks. This is already the case for timed automata, from which PTAs inherit many of their properties (and hardness results).
• The questions typically asked for PTAs are whether there exist policies that raise the probability of some temporal property holding along a trace over some given limit. A paper by Norman et al. (http://www.prismmodelchecker.org/papers/fmsd-ptas.pdf) contains details. They also define reward structures for PTAs, but they do not need to be concerned with discounted payoff optimization, as is common in MDP research.
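To make the MDP side of the contrast concrete, here is a minimal value-iteration sketch for the discounted criterion, for which an optimal memoryless (stationary) policy exists; the two-state model and all numbers are invented for illustration, not taken from the thread:

```python
# Toy discounted MDP (illustrative numbers):
# P[a][s][t] = probability of moving s -> t under action a, R[s][a] = reward.
P = [[[1.0, 0.0], [1.0, 0.0]],   # action 0
     [[0.0, 1.0], [0.0, 1.0]]]   # action 1
R = [[1.0, 0.0],                 # state 0
     [0.0, 2.0]]                 # state 1

def value_iteration(P, R, gamma=0.9, tol=1e-12):
    S, A = len(R), len(P)
    V = [0.0] * S
    while True:
        # one Bellman backup: Q[s][a] = R[s][a] + gamma * E[V(next state)]
        Q = [[R[s][a] + gamma * sum(P[a][s][t] * V[t] for t in range(S))
              for a in range(A)] for s in range(S)]
        V_new = [max(q) for q in Q]
        if max(abs(v1 - v0) for v1, v0 in zip(V_new, V)) < tol:
            # the greedy policy is stationary, i.e. memoryless
            policy = [max(range(A), key=lambda a: Q[s][a]) for s in range(S)]
            return V_new, policy
        V = V_new
```

For this toy model the optimal policy picks action 1 in both states (collect the reward of 2 in state 1 forever), giving values 18 and 20 with gamma = 0.9.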
• Thanks for your answer. So you say that there is by default nothing to optimize using PTAs? But you could still use a finite number of states in a PTA, right? I mean, I could model a system using a PTA and limit the number of states to the number of machines in my system, saying that a machine corresponds to a state, for example. Then I will have a finite number of states. I am interested in doing optimization on time metrics; that's why PTAs looked OK in my case. – Ecterion Jun 26 '18 at 14:58
• @Ecterion Do you mean "States" or "Locations"? The only way to have a finite number of states in a PTA is by not having any clocks. But then, you could also just use a non-timed automaton instead. You could do optimization in the sense that you try to find out what the minimal time duration $t$ is for which it is ensured that some specific state is reached at least once with probability >= 0.5 until $t$ time units have passed. – DCTLib Jun 26 '18 at 20:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.736695408821106, "perplexity": 475.4733427710102}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703527224.75/warc/CC-MAIN-20210121163356-20210121193356-00609.warc.gz"}
https://bitbucket.org/dmj111/uwhpsc/commits/3144d50ee333de1724ba48f91b9ed044b8c7583e | # Commits
committed 3144d50
• Participants
• Parent commits 8bd1c31
• Branches master
# File codes/homework5/Makefile
+
+OBJECTS = functions.o quadrature.o test.o
+FFLAGS = -fopenmp
+
+.PHONY: test clean
+
+test: test.exe
+ ./test.exe
+
+test.exe: $(OBJECTS)
+	gfortran $(FFLAGS) $(OBJECTS) -o test.exe
+
+%.o : %.f90
+	gfortran $(FFLAGS) -c $<
+
+clean:
+	rm -f *.o *.exe *.mod

# File codes/homework5/README.txt

+
+Sample code to start with for Homework 5.
+
+For convenience a Makefile is provided:
+
+    $ make test
+
+to compile and run.
+
+A notebook describing Simpson's rule is in the notebook subdirectory, in
+various formats.
+
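As a quick plain-Python reference for the composite Simpson's rule that the notebook derives (same n-point convention with cell midpoints; not one of the committed files, names illustrative):

```python
# Composite Simpson's rule with n sample points (n-1 cells): cell
# endpoints xj plus cell midpoints xc, weighted 1-4-2-4-...-4-1.
def simpson(f, a, b, n):
    h = (b - a) / float(n - 1)
    xj = [a + j * h for j in range(n)]               # cell endpoints
    xc = [a + h / 2 + j * h for j in range(n - 1)]   # cell midpoints
    fj = [f(x) for x in xj]
    fc = [f(x) for x in xc]
    return (h / 6.) * (2. * sum(fj) - (fj[0] + fj[-1]) + 4. * sum(fc))
```

Since Simpson's rule integrates cubics exactly, simpson(lambda x: 1. + x**3, 0., 2., 5) should return 6 up to rounding.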
# File codes/homework5/functions.f90
+
+module functions
+
+ use omp_lib
+ implicit none
+ integer :: fevals(0:7)
+ real(kind=8) :: k
+ save
+
+contains
+
+ real(kind=8) function f(x)
+ implicit none
+ real(kind=8), intent(in) :: x
+ integer thread_num
+
+ ! keep track of number of function evaluations by
+ ! each thread:
+ thread_num = 0 ! serial mode
+ !$ thread_num = omp_get_thread_num()
+ fevals(thread_num) = fevals(thread_num) + 1
+
+ f = 1.d0 + x**3 + sin(k*x)
+
+ end function f
+
+end module functions

# File codes/homework5/notebook/quadrature2.ipynb

# Numerical Quadrature

Numerical quadrature refers to approximating a definite integral numerically,
$$\int_a^b f(x)\, dx.$$
Many numerical analysis textbooks describe a variety of quadrature methods or "rules".

First define a simple function for which we know the exact answer:

```python
def f1(x):
    return 1. + x**3

a1 = 0.
b1 = 2.
int_true1 = (b1-a1) + (b1**4 - a1**4) / 4.
print "true integral: %22.14e" % int_true1
```

## The Trapezoid Rule

We will first look at the Trapezoid method. This method is implemented by evaluating the function at $n$ points and then computing the areas of the trapezoids defined by a piecewise linear approximation to the original function defined by these points. In the figure below, we are approximating the integral of the blue curve by the sum of the areas of the red trapezoids.

```python
def plot_trap(f,a,b,n):
    x = linspace(a-0.2, b+0.2, 10000)   # points for smooth plot
    plot(x,f(x),'b-')
    xj = linspace(a,b,n)
    plot(xj,f(xj),'ro-')
    for xi in xj:
        plot([xi,xi], [0,f(xi)], 'r')
    plot([a,b], [0,0], 'r')   # along x-axis

plot_trap(f1,a1,b1,5)
```

### The Trapezoid rule formula

The area of a single trapezoid is the width of the base times the average height, so between points $x_j$ and $x_{j+1}$ this gives:
$$\frac{h}{2} \left(f(x_j) + f(x_{j+1})\right).$$

Summing this up over all the trapezoids gives:
$$h\left(\frac 1 2 f(x_0) + f(x_1) + f(x_2) + \cdots + f(x_{n-2}) + \frac 1 2 f(x_{n-1})\right) = h\sum_{j=0}^{n-1} f(x_j) - \frac h 2 \left(f(x_0) + f(x_{n-1})\right) = h\sum_{j=0}^{n-1} f(x_j) - \frac h 2 \left(f(a) + f(b)\right).$$

This can be implemented as follows (note that in Python fj[-1] refers to the last element of fj, and similarly fj[-2] would be the next to last element).

```python
def trapezoid(f,a,b,n):
    h = (b-a)/(n-1)
    xj = linspace(a,b,n)
    fj = f(xj)
    int_trapezoid = h*sum(fj) - 0.5*h*(fj[0] + fj[-1])
    return int_trapezoid
```

We can test it out for the points used in the figure above:

```python
n = 5
int_trap = trapezoid(f1,a1,b1,n)
error = abs(int_trap - int_true1)
print "trapezoid rule approximation: %22.14e, error: %10.3e" % (int_trap, error)
```

Using more points will give a better approximation, try changing it in the cell above.

### Convergence tests

If we increase n, the number of points used, and hence decrease h, the spacing between points, we expect the error to converge to zero for reasonable functions $f(x)$.

The trapezoid rule is "second order accurate", meaning that the error goes to zero like $O(h^2)$ for a function that is sufficiently smooth (for example if its second derivative is continuous). For small $h$, the error is expected to behave like $Ch^2 + O(h^3)$ as $h$ goes to zero, where $C$ is some constant that depends on how smooth $f$ is.

If we double n (and halve h) then we expect the error to go down by a factor of 4 roughly (from $Ch^2$ to $C(h/2)^2$).

We can check this by trying several values of n and making a table of the errors and the ratio from one n to the next:

```python
def error_table(f,a,b,nvals,int_true,method=trapezoid):
    """
    An improved version that takes the function defining the method as an
    input argument.
    """
    print "      n         approximation            error       ratio"
    last_error = 0.   # need something for first ratio
    for n in nvals:
        int_approx = method(f,a,b,n)
        error = abs(int_approx - int_true)
        ratio = last_error / error
        last_error = error   # for next n
        print "%8i  %22.14e  %10.3e  %10.3e" % (n, int_approx, error, ratio)

nvals = array([5, 10, 20, 40, 80, 160, 320])
error_table(f1,a1,b1,nvals,int_true1,trapezoid)
```

(Note that the first ratio reported is meaningless.)

Convergence might be easier to see in a plot. If a method is p'th order accurate then we expect the error to behave like $E \approx Ch^p$ for some constant $C$, for small $h$. This is hard to visualize. It is much easier to see what order accuracy we are achieving if we produce a log-log plot instead, since $E = Ch^p$ means that $\log E = \log C + p\log h$.

In other words $\log E$ is a linear function of $\log h$.

```python
def error_plot(f,a,b,nvals,int_true,method=trapezoid):
    errvals = zeros(nvals.shape)   # initialize to right shape
    for i in range(len(nvals)):
        n = nvals[i]
        int_approx = method(f,a,b,n)
        error = abs(int_approx - int_true)
        errvals[i] = error
    hvals = (b - a) / (nvals - 1)   # vector of h values for each n
    loglog(hvals,errvals, 'o-')
    xlabel('spacing h')
    ylabel('error')

error_plot(f1,a1,b1,nvals,int_true1,trapezoid)
```

### An oscillatory function

If the function $f(x)$ is not as smooth (has larger second derivative at various places) then the accuracy with a small number of points will not be nearly as good. For example, consider the function $f_2(x) = 1 + x^3 + \sin(kx)$ where $k$ is a parameter. For large $k$ this function is very oscillatory. In order to experiment with different values of $k$, we can define a "function factory" that creates this function for any given $k$, and also returns the true integral over a given interval:

```python
def f2_factory(k, a, b):
    def f2(x):
        return 1 + x**3 + sin(k*x)
    int_true = (b-a) + (b**4 - a**4) / 4. - (1./k) * (cos(k*b) - cos(k*a))
    return f2, int_true
```

First create a version of $f_2$ with $k=50$:

```python
k = 50.
a2 = 0.
b2 = 2.
f2, int_true2 = f2_factory(k, a2, b2)
print "true integral: %22.14e" % int_true2
```

For this function with k=50, using n=10 points is not going to give a very good approximation:

```python
plot_trap(f2,a2,b2,10)
```

This doesn't look very good, but for larger values of $n$ we still see the expected convergence rate:

```python
error_plot(f2,a2,b2,nvals,int_true2)
```

Now make the function much more oscillatory with a larger value of $k$...

```python
k = 1000.
f2, int_true2 = f2_factory(k,a2,b2)
print "true integral: %22.14e" % int_true2
```

For the previous choice of nvals the method does not seem to be doing well:

```python
nvals = array([5, 10, 20, 40, 80, 160, 320])
print "nvals = ",nvals
error_plot(f2,a2,b2,nvals,int_true2, trapezoid)
```

In this case the $O(h^2)$ behavior does not become apparent unless we use much smaller $h$ values so that we are resolving the oscillations:

```python
nvals = array([5 * 2**i for i in range(12)])
print "nvals = ",nvals
error_plot(f2,a2,b2,nvals,int_true2,trapezoid)
```

Eventually we see second order convergence and ratios that approach 4:

```python
error_table(f2,a2,b2,nvals,int_true2,trapezoid)
```

## Simpson's Rule

There are much better methods than the Trapezoidal rule that are not much harder to implement but get much smaller errors with the same number of function evaluations. One such method is Simpson's rule, which approximates the integral over a single interval from $x_i$ to $x_{i+1}$ by
$$\int_{x_i}^{x_{i+1}} f(x)\, dx \approx \frac h 6 (f(x_i) + 4f(x_{i+1/2}) + f(x_{i+1})),$$
where $x_{i+1/2} = \frac 1 2 (x_i + x_{i+1}) = x_i + h/2.$

Derivation: The trapezoid method is derived by approximating the function on each interval by a linear function interpolating at the two endpoints of each interval and then integrating this linear function. Simpson's method is derived by approximating the function by a quadratic function interpolating at the endpoints and the center of the interval and integrating this quadratic function.

Adding this up over $n-1$ intervals gives the approximation
$$\frac{h}{6}[f(x_0) + 4f(x_{1/2}) + 2f(x_1) + 4f(x_{3/2}) + 2f(x_2) + \cdots + 2f(x_{n-2}) + 4f(x_{n-3/2}) + f(x_{n-1})].$$
In Python this can be implemented by the following code:

```python
def simpson(f,a,b,n):
    h = (b-a)/(n-1)
    xj = linspace(a,b,n)
    fj = f(xj)
    xc = linspace(a+h/2,b-h/2,n-1)   # midpoints of cells
    fc = f(xc)
    int_simpson = (h/6.) * (2.*sum(fj) - (fj[0] + fj[-1]) + 4.*sum(fc))
    return int_simpson
```

This method is 4th order accurate, which means that on fine enough grids the error is proportional to $\Delta x^4$. Hence increasing n by a factor of 2 should decrease the error by a factor of $2^4 = 16$. Let's try it on the last function we were experimenting with:

```python
k = 1000.
f2, int_true2 = f2_factory(k,a2,b2)
print "true integral: %22.14e" % int_true2

error_table(f2,a2,b2,nvals,int_true2,simpson)
```

Note that the errors get smaller much faster and the ratio approaches 16. The improvement over the trapezoid method is seen more clearly if we plot the errors together:

```python
error_plot(f2,a2,b2,nvals,int_true2,trapezoid)
error_plot(f2,a2,b2,nvals,int_true2,simpson)
```

You might want to experiment with changing $k$ in the two cells above.

#### Simpson's method integrates cubic functions exactly

Even though Simpson's method is derived by integrating a quadratic approximation of the function, rather than linear as with the Trapezoid Rule, in fact it also integrates a cubic exactly, as seen if we try it out with the function f1 defined at the top of this notebook. (This is because the error between the cubic and the quadratic approximation on each interval is not zero but does have integral equal to zero, since it turns out to be an odd function about the midpoint.) For this reason Simpson's Rule is fourth order accurate in general rather than only third order, as one might expect when going from a linear to a quadratic approximation.

Note the error ratios are whacky as a result.

```python
error_table(f1,a1,b1,nvals,int_true1,simpson)
```

# File codes/homework5/notebook/quadrature2.pdf

Binary file added.

# File codes/homework5/notebook/quadrature2.py

```python
# Saved from notebook and then edited a bit by hand, e.g. added
#     from pylab import *

# As is, this will do all the plots on top of one another, so you might want
# to add figure() commands before each plot to create a new figure, and
# savefig(filename) afterwards if you wanted to save them.

#------------------------------------

# -*- coding: utf-8 -*-
# <nbformat>3.0</nbformat>

# <headingcell level=1>

# Numerical Quadrature

# <markdowncell>

# Numerical quadrature refers to approximating a definite integral numerically,
# $$~~ \int_a^b f(x) dx.$$
# Many numerical analysis textbooks describe a variety of quadrature methods or "rules".

# <markdowncell>

# First define a simple function for which we know the exact answer:

# <codecell>

from pylab import *   # added by hand

def f1(x):
    return 1. + x**3

a1 = 0.
b1 = 2.
int_true1 = (b1-a1) + (b1**4 - a1**4) / 4.
print "true integral: %22.14e" % int_true1

# <headingcell level=2>

# The Trapezoid Rule

# <markdowncell>

# We will first look at the Trapezoid method. This method is implemented by
# evaluating the function at $n$ points and then computing the areas of the
# trapezoids defined by a piecewise linear approximation to the original
# function defined by these points. In the figure below, we are approximating
# the integral of the blue curve by the sum of the areas of the red trapezoids.
```
+
+# <codecell>
+
+def plot_trap(f,a,b,n):
+    x = linspace(a-0.2, b+0.2, 10000)   # points for smooth plot
+    plot(x,f(x),'b-')
+    xj = linspace(a,b,n)
+    plot(xj,f(xj),'ro-')
+    for xi in xj:
+        plot([xi,xi], [0,f(xi)], 'r')
+    plot([a,b], [0,0], 'r')   # along x-axis
+
+plot_trap(f1,a1,b1,5)
+
+# <headingcell level=3>
+
+# The Trapezoid rule formula
+
+# <markdowncell>
+
+# The area of a single trapezoid is the width of the base times the average height, so between points$x_j$and$x_{j+1}$this gives:
+# $$\frac{h}{2} (f(x_j) + f(x_{j+1})).$$
+#
+# Summing this up over all the trapezoids gives:
+# $$h\left(\frac 1 2 f(x_0) + f(x_1) + f(x_2) + \cdots + f(x_{n-2}) + \frac 1 2 f(x_{n-1})\right) = h\sum_{j=0}^{n-1} f(x_j) - \frac h 2 \left(f(x_0) + f(x_{n-1})\right) = h\sum_{j=0}^{n-1} f(x_j) - \frac h 2 \left(f(a) + f(b)\right).$$
+#
+# This can be implemented as follows (note that in Python fj[-1] refers to the last element of fj, and similarly fj[-2] would be the next to last element).
+
+# <codecell>
+
+def trapezoid(f,a,b,n):
+    h = (b-a)/(n-1)
+    xj = linspace(a,b,n)
+    fj = f(xj)
+    int_trapezoid = h*sum(fj) - 0.5*h*(fj[0] + fj[-1])
+    return int_trapezoid
+
+# <markdowncell>
+
+# We can test it out for the points used in the figure above:
+
+# <codecell>
+
+n = 5
+int_trap = trapezoid(f1,a1,b1,n)
+error = abs(int_trap - int_true1)
+print "trapezoid rule approximation: %22.14e, error: %10.3e" % (int_trap, error)
+
+# <markdowncell>
+
+# Using more points will give a better approximation, try changing it in the cell above.
+
+# <headingcell level=3>
+
+# Convergence tests
+
+# <markdowncell>
+
+# If we increase n, the number of points used, and hence decrease h, the spacing between points, we expect the error to converge to zero for reasonable functions$f(x)$.
+#
+# The trapezoid rule is "second order accurate", meaning that the error goes to zero like$O(h^2)$for a function that is sufficiently smooth (for example if its second derivative is continuous).
For small$h$, the error is expected to behave like$Ch^2 + O(h^3)~$as$h$goes to zero, where$C$is some constant that depends on how smooth$f$is.
+#
+# If we double n (and halve h) then we expect the error to go down by a factor of 4 roughly (from$Ch^2$to$C(h/2)^2~$).
+#
+# We can check this by trying several values of n and making a table of the errors and the ratio from one n to the next:
+
+# <codecell>
+
+def error_table(f,a,b,nvals,int_true,method=trapezoid):
+    """
+    An improved version that takes the function defining the method as an
+    input argument.
+    """
+    print "       n         approximation        error       ratio"
+    last_error = 0.   # need something for first ratio
+    for n in nvals:
+        int_approx = method(f,a,b,n)
+        error = abs(int_approx - int_true)
+        ratio = last_error / error
+        last_error = error   # for next n
+        print "%8i  %22.14e  %10.3e  %10.3e" % (n,int_approx, error, ratio)
+
+nvals = array([5, 10, 20, 40, 80, 160, 320])
+error_table(f1,a1,b1,nvals,int_true1,trapezoid)
+
+# <markdowncell>
+
+# (Note that the first ratio reported is meaningless.)
+#
+# Convergence might be easier to see in a plot. If a method is p'th order accurate then we expect the error to behave like$E\approx Ch^p$for some constant$C$, for small$h$. This is hard to visualize. It is much easier to see what order accuracy we are achieving if we produce a log-log plot instead, since$E = Ch^p~$means that$\log E = \log C + p\log h$.
+#
+# In other words$\log E~$is a linear function of$\log h~$.
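Since $E \approx Ch^p$ gives $\log E = \log C + p\log h$, the order $p$ can also be estimated from just two resolutions as $p \approx \log(E_1/E_2)/\log(h_1/h_2)$. A self-contained sketch of this check (written for Python 3, unlike the notebook's Python 2 print statements):

```python
import math

def trapezoid(f, a, b, n):
    # composite trapezoid rule with n points (n-1 intervals)
    h = (b - a) / (n - 1)
    fj = [f(a + j * h) for j in range(n)]
    return h * sum(fj) - 0.5 * h * (fj[0] + fj[-1])

f = lambda x: 1.0 + x**3
int_true = 6.0                  # integral of 1 + x^3 over [0, 2]

e1 = abs(trapezoid(f, 0.0, 2.0, 101) - int_true)   # h1 = 0.02
e2 = abs(trapezoid(f, 0.0, 2.0, 201) - int_true)   # h2 = 0.01
p = math.log(e1 / e2) / math.log(2.0)              # h1/h2 = 2
print("estimated order p = %.2f" % p)              # should be close to 2
```

Applying the same two-grid estimate to the Simpson implementation further down should give a value near 4 on functions it does not integrate exactly.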
+ +# <codecell> + +def error_plot(f,a,b,nvals,int_true,method=trapezoid): + errvals = zeros(nvals.shape) # initialize to right shape + for i in range(len(nvals)): + n = nvals[i] + int_approx = method(f,a,b,n) + error = abs(int_approx - int_true) + errvals[i] = error + hvals = (b - a) / (nvals - 1) # vector of h values for each n + loglog(hvals,errvals, 'o-') + xlabel('spacing h') + ylabel('error') + +error_plot(f1,a1,b1,nvals,int_true1,trapezoid) + +# <headingcell level=3> + +# An oscillatory function + +# <markdowncell> + +# If the function$f(x)$is not as smooth (has larger second derivative at various places) then the accuracy with a small number of points will not be nearly as good. For example, consider the function$f_2(x) = 1 + x^3 + \sin(kx)~~~$where$k$is a parameter. For large$k$this function is very oscillatory. In order to experiment with different values of$k$, we can define a "function factory" that creates this function for any given$k$, and also returns the true integral over a given interval: + +# <codecell> + +def f2_factory(k, a, b): + def f2(x): + return 1 + x**3 + sin(k*x) + int_true = (b-a) + (b**4 - a**4) / 4. - (1./k) * (cos(k*b) - cos(k*a)) + return f2, int_true + + +# <markdowncell> + +# First create a version of$f_2$with$k=50$: + +# <codecell> + +k = 50. +a2 = 0. +b2 = 2. +f2, int_true2 = f2_factory(k, a2, b2) +print "true integral: %22.14e" % int_true2 + +# <markdowncell> + +# For this function with k=50, using n=10 points is not going to give a very good approximation: + +# <codecell> + +plot_trap(f2,a2,b2,10) + +# <markdowncell> + +# This doesn't look very good, but for larger values of$n$we still see the expected convergence rate: + +# <codecell> + +error_plot(f2,a2,b2,nvals,int_true2) + +# <markdowncell> + +# Now make the function much more oscillatory with a larger value of$k$... + +# <codecell> + +k = 1000. 
+f2, int_true2 = f2_factory(k,a2,b2) +print "true integral: %22.14e" % int_true2 + +# <markdowncell> + +# For the previous choice of nvals the method does not seem to be doing well: + +# <codecell> + +nvals = array([5, 10, 20, 40, 80, 160, 320]) +print "nvals = ",nvals +error_plot(f2,a2,b2,nvals,int_true2, trapezoid) + +# <markdowncell> + +# In this case the$O(h^2)~$behavior does not become apparent unless we use much smaller$h$values so that we are resolving the oscillations: + +# <codecell> + +nvals = array([5 * 2**i for i in range(12)]) +print "nvals = ",nvals +error_plot(f2,a2,b2,nvals,int_true2,trapezoid) + +# <markdowncell> + +# Eventually we see second order convergence and ratios that approach 4: + +# <codecell> + +error_table(f2,a2,b2,nvals,int_true2,trapezoid) + +# <headingcell level=2> + +# Simpson's Rule + +# <markdowncell> + +# There are much better methods than the Trapezoidal rule that are not much harder to implement but get much smaller errors with the same number of function evaluations. One such method is Simpson’s rule, which approximates the integral over a single interval from$x_i$to$x_{i+1}$by +# $$\int_{x_i}^{x_{i+1}} f(x)\, dx \approx \frac h 6 (f(x_i) + 4f(x_{i+1/2}) + f(x_{i+1})),$$ +# where$x_{i+1/2} = \frac 1 2 (x_i + x_{i+1}) = x_i + h/2.$ +# +# Derivation: The trapezoid method is derived by approximating the function on each interval by a linear function interpolating at the two endpoints of each interval and then integrating this linear function. Simpson's method is derived by approximating the function by a quadratic function interpolating at the endpoints and the center of the interval and integrating this quadratic function. 
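A quick numerical sanity check of this formula: on a single interval it reproduces the integral of a quadratic exactly and, as discussed later in the notebook, even a cubic. A minimal sketch:

```python
def simpson_single(f, xi, xip1):
    # Simpson's rule on the single interval [xi, xip1]
    h = xip1 - xi
    xm = 0.5 * (xi + xip1)
    return (h / 6.0) * (f(xi) + 4.0 * f(xm) + f(xip1))

# exact for a quadratic: integral of x^2 over [0, 1] is 1/3
print(simpson_single(lambda x: x**2, 0.0, 1.0))
# exact up to rounding for a cubic: integral of x^3 over [0, 2] is 4
print(simpson_single(lambda x: x**3, 0.0, 2.0))
```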
+
+# <markdowncell>
+
+# Adding this up over$n-1$intervals gives the approximation
+# $$\frac{h}{6}[f(x_0) + 4f(x_{1/2}) + 2f(x_1) + 4f(x_{3/2}) + 2f(x_2) + \cdots + 2f(x_{n-2}) + 4f(x_{n-3/2}) + f(x_{n-1})].$$
+# In Python this can be implemented by the following code:
+
+# <codecell>
+
+def simpson(f,a,b,n):
+    h = (b-a)/(n-1)
+    xj = linspace(a,b,n)
+    fj = f(xj)
+    xc = linspace(a+h/2,b-h/2,n-1)   # midpoints of cells
+    fc = f(xc)
+    int_simpson = (h/6.) * (2.*sum(fj) - (fj[0] + fj[-1]) + 4.*sum(fc))
+    return int_simpson
+
+# <markdowncell>
+
+# This method is 4th order accurate, which means that on fine enough grids the error is proportional to$\Delta x^4$. Hence increasing n by a factor of 2 should decrease the error by a factor of$2^4 = 16$. Let's try it on the last function we were experimenting with:
+
+# <codecell>
+
+k = 1000.
+f2, int_true2 = f2_factory(k,a2,b2)
+print "true integral: %22.14e" % int_true2
+
+error_table(f2,a2,b2,nvals,int_true2,simpson)
+
+# <markdowncell>
+
+# Note that the errors get smaller much faster and the ratio approaches 16. The improvement over the trapezoid method is seen more clearly if we plot the errors together:
+
+# <codecell>
+
+error_plot(f2,a2,b2,nvals,int_true2,trapezoid)
+error_plot(f2,a2,b2,nvals,int_true2,simpson)
+
+# <markdowncell>
+
+# You might want to experiment with changing$k$in the two cells above.
+
+# <headingcell level=4>
+
+# Simpson's method integrates cubic functions exactly
+
+# <markdowncell>
+
+# Even though Simpson's method is derived by integrating a quadratic approximation of the function, rather than linear as with the Trapezoid Rule, in fact it also integrates a cubic exactly, as seen if we try it out with the function f1 defined at the top of this notebook. (This is because the error between the cubic and the quadratic approximation on each interval is not zero but does have integral equal to zero since it turns out to be an odd function about the midpoint.)
For this reason Simpson's Rule is fourth order accurate in general rather than only third order, as one might expect when going from a linear to quadratic approximation. +# +# Note the error ratios are whacky as a result. + +# <codecell> + +error_table(f1,a1,b1,nvals,int_true1,simpson) + +# <codecell> + + # File codes/homework5/quadrature.f90 + +module quadrature + + use omp_lib + +contains + +real(kind=8) function trapezoid(f, a, b, n) + + ! Estimate the integral of f(x) from a to b using the + ! Trapezoid Rule with n points. + + ! Input: + ! f: the function to integrate + ! a: left endpoint + ! b: right endpoint + ! n: number of points to use + ! Returns: + ! the estimate of the integral + + implicit none + real(kind=8), intent(in) :: a,b + real(kind=8), external :: f + integer, intent(in) :: n + + ! Local variables: + integer :: j + real(kind=8) :: h, trap_sum, xj + + h = (b-a)/(n-1) + trap_sum = 0.5d0*(f(a) + f(b)) ! endpoint contributions + + !$omp parallel do private(xj) reduction(+ : trap_sum)
+ do j=2,n-1
+ xj = a + (j-1)*h
+ trap_sum = trap_sum + f(xj)
+ enddo
+
+ trapezoid = h * trap_sum
+
+end function trapezoid
+
+
+subroutine error_table(f,a,b,nvals,int_true,method)
+
+ ! Compute and print out a table of errors when the quadrature
+ ! rule specified by the input function method is applied for
+ ! each value of n in the array nvals.
+
+ implicit none
+ real(kind=8), intent(in) :: a,b, int_true
+ real(kind=8), external :: f, method
+ integer, dimension(:), intent(in) :: nvals
+
+ ! Local variables:
+ integer :: j, n
+ real(kind=8) :: ratio, last_error, error, int_approx
+
+ print *, " n approximation error ratio"
+ last_error = 0.d0
+ do j=1,size(nvals)
+ n = nvals(j)
+ int_approx = method(f,a,b,n)
+ error = abs(int_approx - int_true)
+ ratio = last_error / error
+ last_error = error ! for next n
+
+ print 11, n, int_approx, error, ratio
+ 11 format(i8, es22.14, es13.3, es13.3)
+ enddo
+
+end subroutine error_table
+
+
+end module quadrature
+
# File codes/homework5/test.f90
+
+program test
+
+ use omp_lib
+
+ use quadrature, only: trapezoid, error_table
+ use functions, only: f, fevals, k
+
+ implicit none
+ real(kind=8) :: a,b,int_true
+ integer :: nvals(12)
+ integer :: i, nthreads
+
+ real(kind=8) :: t1, t2, elapsed_time
+ integer(kind=8) :: tclock1, tclock2, clock_rate
+
+ nthreads = 1 ! for serial mode
+    !$ nthreads = 4              ! for openmp
+    !$ call omp_set_num_threads(nthreads)
+ print 100, nthreads
+100 format("Using ",i2," threads")
+
+ fevals = 0
+
+ k = 1.d3 ! functions module variable for function f2
+ a = 0.d0
+ b = 2.d0
+ int_true = (b-a) + (b**4 - a**4) / 4.d0 - (1.d0/k) * (cos(k*b) - cos(k*a))
+
+ print 10, int_true
+ 10 format("true integral: ", es22.14)
+ print *, " " ! blank line
+
+ ! values of n to test: (larger values than before)
+ do i=1,12
+ nvals(i) = 50 * 2**(i-1)
+ enddo
+
+ ! time the call to error_table:
+ call system_clock(tclock1)
+ call cpu_time(t1)
+ call error_table(f, a, b, nvals, int_true, trapezoid)
+ call cpu_time(t2)
+ call system_clock(tclock2, clock_rate)
+
+ elapsed_time = float(tclock2 - tclock1) / float(clock_rate)
+ print *, " "
+ print 11, elapsed_time
+ 11 format("Elapsed time = ",f12.8, " seconds")
+
+ print 12, t2-t1
+ 12 format("CPU time = ",f12.8, " seconds")
+
+
+ ! print the number of function evaluations by each thread:
+ do i=0,nthreads-1
+ print 101, i, fevals(i)
+101 format("fevals by thread ",i2,": ",i13)
+ enddo
+
+ print 102, sum(fevals)
+102 format("Total number of fevals: ",i10)
+
+end program test
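test.f90 reports both wall-clock time (via system_clock) and CPU time (via cpu_time); under OpenMP the CPU time is summed over threads, so it can exceed the elapsed time. For reference, a rough Python analogue of this pair is time.perf_counter and time.process_time:

```python
import time

t0_wall = time.perf_counter()   # wall-clock timer, like system_clock
t0_cpu = time.process_time()    # process CPU timer, like cpu_time

total = sum(i * i for i in range(200000))   # some work to time

wall = time.perf_counter() - t0_wall
cpu = time.process_time() - t0_cpu
print("Elapsed time = %12.8f seconds" % wall)
print("CPU time     = %12.8f seconds" % cpu)
```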
# File notes/homework5.rst
+
+.. _homework5:
+
+==========================================
+Homework 5
+==========================================
+
+
+Due Wednesday, May 22, 2013, by 11:00pm PDT.
+
+The goals of this homework are to:
+
+* Get more experience with Fortran and OpenMP.
+* Experiment with coarse grain parallelism.
+* Learn a bit more about quadrature.
+
+#. I wrote an article recently urging applied mathematicians to share
+ research code, which appeared in SIAM News.
+ Some colleagues at other universities have told me it's required
+ reading for their students, so of course I have to assign it too!
+ It's pretty light reading.
+ A short quiz will be available on the Canvas page.
+
+   * `<http://www.siam.org/news/news.php?id=2064>`_
+   * `some related links <http://faculty.washington.edu/rjl/pubs/topten/index.html>`_
+
+
+#. The IPython notebook ``$UWHPSC/codes/homework5/notebook/quadrature2.ipynb``
+   is an improved version of the notebook from the last homework.  Some of
+   the functions have been made more general and a discussion has
+   been added of Simpson's Rule, a more accurate formula than Trapezoid.
+
+   This notebook is best viewed live so that you can experiment with
+   changing things in order to explore these examples.  If you have a
+   sufficiently recent version of the notebook installed (see
+   :ref:`ipython_notebook`) then you should be able to do::
+
+       $ cd $UWHPSC/codes/homework5/notebook
+       $ ipython notebook --pylab inline
+
+ and then click on the quadrature2 notebook.
+
+ I will also try to post it on wakari, but that seems to be down
+ currently.
+
+ .. comment ::
+ You can also view it at wakari <?>_
+ and if you have created a Wakari account, download it to your account to
+ try it out.
+
+ You can also view a static version of the notebook, which is in
+   ``$UWHPSC/codes/homework5/notebook/quadrature2.pdf``.
+
+   There is also a Python script version of the code in the notebook at
+   ``$UWHPSC/codes/homework5/notebook/quadrature2.py`` if you
+ find that easier to experiment with.
+ (But see the comments at the top of that file.)
+
+ Experiment with the notebook or the module to make sure you understand
+ the material presented.
+
+ Study this code to make sure you understand it.
+
+#. The directory ``$UWHPSC/codes/homework5/`` also contains Fortran code
+   that implements the last part of homework 4, with some added
+   enhancements.  In particular:
+
+   * timing has been added
+   * a counter has been added to the ``f2`` function to count how many times it
+     is called, and this is printed at the end.
+   * ``fevals`` is a module variable (shared between threads) to store the
+     number of function calls by each thread when OpenMP is used.
+   * the ``error_table`` subroutine has a new input parameter ``method`` to
+     pass in the function that approximates the integral.  When this is
+     called from the main program ``test.f90`` the function ``trapezoid`` is
+     passed in.  In this homework you will also test Simpson's rule.
+   * The function ``f2`` has been moved to a module and the parameter ``k``
+     is a module variable.
+
+   Study this code and experiment with it.
+
+   With 4 threads it might produce something like the following (timings
+   will depend on how many cores you have)::
+
+       Using  4 threads
+       true integral:   6.00136745954910E+00
+
+              n         approximation        error        ratio
+            50   6.00200615142458E+00    6.387E-04    0.000E+00
+           100   6.01762134207395E+00    1.625E-02    3.929E-02
+           200   5.99787907396672E+00    3.488E-03    4.659E+00
+           400   5.99537682567465E+00    5.991E-03    5.823E-01
+           800   6.00057196798962E+00    7.955E-04    7.531E+00
+          1600   6.00118591794817E+00    1.815E-04    4.382E+00
+          3200   6.00132301603504E+00    4.444E-05    4.085E+00
+          6400   6.00135640717690E+00    1.105E-05    4.021E+00
+         12800   6.00136470029559E+00    2.759E-06    4.006E+00
+         25600   6.00136677000209E+00    6.895E-07    4.002E+00
+         51200   6.00136728718235E+00    1.724E-07    4.000E+00
+        102400   6.00136741645906E+00    4.309E-08    4.000E+00
+
+       Elapsed time =   0.00554200 seconds
+       CPU time =   0.01890300 seconds
+       fevals by thread  0:         51211
+       fevals by thread  1:         51187
+       fevals by thread  2:         51187
+       fevals by thread  3:         51165
+       Total number of fevals:     204750
+
+   You do not need to submit anything for this part.
+
+#. Create a new subdirectory ``$MYHPSC/homework5`` for the code you write
+ below. You can use the code provided as a starting point.
+
+ Create a new module quadrature2.f90 by starting with quadrature.f90
+ and adding a new function simpson that
+ implements Simpson's rule. It should have the same input arguments as
+ trapezoid.
+
+ Write a new main program test2.f90 to test this.
+ Check that it is 4th order accurate on the function f2
+ provided with various values of k. Check it also with some other
+ functions if you want, since we will test it with something other than
+ the provided function f2.
+
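One way to script the fourth-order check (a Python sketch of the idea, not the Fortran you are asked to write): estimate the observed order from two resolutions via p ≈ log(E1/E2)/log 2 and verify that it comes out near 4 for a function Simpson's rule does not integrate exactly:

```python
import math

def simpson(f, a, b, n):
    # composite Simpson's rule with n points (n-1 cells),
    # mirroring the routine this item asks for
    h = (b - a) / (n - 1)
    xj = [a + j * h for j in range(n)]
    fj = [f(x) for x in xj]
    fc = [f(x + h / 2.0) for x in xj[:-1]]    # cell midpoints
    return (h / 6.0) * (2.0 * sum(fj) - (fj[0] + fj[-1]) + 4.0 * sum(fc))

int_true = 1.0 - math.cos(2.0)      # integral of sin(x) over [0, 2]
e1 = abs(simpson(math.sin, 0.0, 2.0, 11) - int_true)
e2 = abs(simpson(math.sin, 0.0, 2.0, 21) - int_true)
p = math.log(e1 / e2) / math.log(2.0)
print("observed order p = %.2f" % p)    # should be close to 4
```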
+ **Note:** this module should also be called quadrature2, not just the
+ file name, i.e. it should start with the line::
+
+ module quadrature2
+
+ and test2.f90 should::
+
+ use quadrature2, only: ...
+
+ This is important for grading purposes since we might have a different
+ main program that will use your module!
+
+#. Your simpson routine should include an omp parallel do loop similar
+ to trapezoid. Make sure it gives the same results in the error table
+ for both with and without the -fopenmp during compilation, and for
+ different choices of the number of threads.
+
+ Remember that you can run with more threads than your computer has cores
+ and it should still work, but will probably make it run slower rather
+ than faster. We will not be checking timings although you might want to
+ pay attention to this to see if your computer behaves as expected.
+
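The thread-count invariance being asked for can be sketched in Python as well (the worker counts and panel splitting below are illustrative, not part of the assignment): each worker integrates a contiguous block of cells and the partial results are summed, mimicking the OpenMP reduction.

```python
from concurrent.futures import ThreadPoolExecutor

def simpson(f, a, b, n):
    # composite Simpson's rule with n points (n-1 cells)
    h = (b - a) / (n - 1)
    xj = [a + j * h for j in range(n)]
    fj = [f(x) for x in xj]
    fc = [f(x + h / 2.0) for x in xj[:-1]]
    return (h / 6.0) * (2.0 * sum(fj) - (fj[0] + fj[-1]) + 4.0 * sum(fc))

def simpson_parallel(f, a, b, n, nworkers):
    # split the n-1 cells into contiguous panels, one per worker;
    # assumes nworkers divides n-1 so every panel keeps the same h
    cells = n - 1
    per = cells // nworkers
    h = (b - a) / cells
    panels = [(a + k * per * h, a + (k + 1) * per * h) for k in range(nworkers)]
    with ThreadPoolExecutor(max_workers=nworkers) as pool:
        parts = pool.map(lambda p: simpson(f, p[0], p[1], per + 1), panels)
    return sum(parts)

f = lambda x: 1.0 + x**3
serial = simpson(f, 0.0, 2.0, 401)
for w in (1, 2, 4, 8):
    par = simpson_parallel(f, 0.0, 2.0, 401, w)
    print(w, abs(par - serial))   # differences at rounding level only
```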
+#. Create a new version of the quadrature module named quadrature3 that
+ has no parallel loops in trapezoid and instead has a parallel do loop
+ in the error_table routine when it loops over the different values of
+ n to test from the nvals array.
+
+ In this loop make last_error a *firstprivate* variable and think about
+ what other variables need to be *private*. More about this below.
+
+ Test this version with a new test program test3.f90 that calls
+ error_table with method = trapezoid.
+
+ Note the following:
+
+ * If you run this with more than one thread, the different lines of the
+ error table probably will not print out in the same order as on a
+ single thread.
+ * The values of ratio in the table will be wrong relative to the single
+ thread code for various n. Make sure you understand why.
+ (The values of the error should still agree with the single-thread
+ code, however.)
+ * This is not a very good way to try to parallelize this code because
+ it does not have good *load balancing*. If you run with 2 threads, for
+ example, one of them will do many more function evaluations than the
+ other thread, if you allow OpenMP to split up the values of n between
+ threads in the default manner. Think about why this is so and make
+ sure you understand what's going on.
+
+#. Because of the load-balancing issue just mentioned, it is useful to
+ include another clause in the omp parallel do loop directive in error
+ table::
+
+       !$omp parallel do ... &     ! whatever you needed before
+       !$omp schedule(dynamic)
+ do j=1,size(nvals)
+
+ This instructs the compiler to split up the values of j from 1 to
+ size(nvals) dynamically rather than deciding in advance that the first
+ half of the values will go to Thread 0 and the second half to Thread 1,
+ for example. Instead the two threads would start working on j=1 and
+ j=2 and whichever finishes first would start on j=3. This should
+ give a somewhat better balance between threads.
+
+ Note that it can't do a perfect job for this example since computing the
+ error for the last value of j (the largest value of n)
+   takes more function evaluations than all the others put together!
+
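The claim about the last value of n is easy to verify: the trapezoid routine evaluates f at n points per call, so with nvals(i) = 50*2**(i-1) for i = 1, ..., 12 the largest n alone costs more evaluations than all the others combined, and the counts also reproduce the total of 204750 fevals shown earlier. A quick check:

```python
nvals = [50 * 2**i for i in range(12)]     # 50, 100, ..., 102400
largest = nvals[-1]
others = sum(nvals[:-1])
print(largest, others, largest + others)   # 102400 102350 204750
```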
+#. In order to improve load balancing, reorder the parallel loop so that
+ n is decreasing rather than increasing via::
+
+ do j=size(nvals),1,-1
+
+   Put this in a new version of the quadrature2 module named quadrature3,
+ and provide a main program test3.f90 to test it.
+ (The same as test2.f90 but using the new module.)
+ Think about why this is better.
+
+ In this case you might get results like this::
+
+ Using 4 threads
+ true integral: 6.00136745954910E+00
+
+ n approximation error ratio
+ 12800 6.00136470029559E+00 2.759E-06 0.000E+00
+ 6400 6.00135640717688E+00 1.105E-05 2.497E-01
+ 25600 6.00136677000212E+00 6.895E-07 0.000E+00
+ 1600 6.00118591794817E+00 1.815E-04 3.798E-03
+ 3200 6.00132301603504E+00 4.444E-05 2.487E-01
+ 800 6.00057196798962E+00 7.955E-04 2.282E-01
+ 400 5.99537682567465E+00 5.991E-03 7.419E-03
+ 200 5.99787907396672E+00 3.488E-03 2.280E-01
+ 100 6.01762134207395E+00 1.625E-02 3.686E-01
+ 50 6.00200615142457E+00 6.387E-04 5.462E+00
+ 51200 6.00136728718236E+00 1.724E-07 0.000E+00
+ 102400 6.00136741645906E+00 4.309E-08 0.000E+00
+
+ Elapsed time = 0.00621600 seconds
+ CPU time = 0.01550900 seconds
+ fevals by thread 0: 51200
+ fevals by thread 1: 102400
+ fevals by thread 2: 22600
+ fevals by thread 3: 28550
+ Total number of fevals: 204750
+
+ (Can you guess from this which thread got which values of n?)
+ Notice that the table is very much out of order in this case, since lines
+ were printed as threads finished their work.
+
+ One could clean up the table by keeping the approximation and error
+ values for each n in a short array and then printing at the end in
+ the proper order, along with the correct ratios. But you don't need
+ to do this for the assignment.
+
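The clean-up suggested above (not required for the assignment) amounts to storing each approximation and error indexed by j during the parallel loop and doing the printing, with the ratios, in a serial pass afterwards. A Python sketch of the idea, with a hypothetical stand-in for the real error computation:

```python
import random

nvals = [50 * 2**i for i in range(12)]
errors = [0.0] * len(nvals)

def fake_error(n):
    # hypothetical stand-in for abs(method(f,a,b,n) - int_true)
    return 1.0 / n**2

# the parallel loop may finish the j's in any order...
order = list(range(len(nvals)))
random.shuffle(order)
for j in order:
    errors[j] = fake_error(nvals[j])

# ...but a serial pass afterwards prints the table in order,
# with ratios computed from correctly adjacent entries
last_error = 0.0
for j, n in enumerate(nvals):
    ratio = last_error / errors[j]
    print("%8i  %10.3e  %10.3e" % (n, errors[j], ratio))
    last_error = errors[j]
```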
+.. warning :: An additional problem for 583 students is still to appear.
+
+To submit
+---------
+
+Your homework5 directory should contain:
+
+* functions.f90 (unchanged)
+* quadrature2.f90
+* test2.f90
+* quadrature3.f90
+* test3.f90
+* Makefile (optional if you find it useful to enhance what's provided)
+
+* Some files for 583...
+
+As usual, commit your results, push to bitbucket, and see the Canvas
+course page for the link to submit the SHA-1 hash code. These should be
+submitted by the due date/time to receive full credit.
+
+
# File notes/homeworks.rst
* :ref:`homework2`: Wednesday of Week 3, April 17
* :ref:`homework3`: Wednesday of Week 5, May 1
* :ref:`homework4`: Wednesday of Week 6, May 8
- * Homework 5: Wednesday of Week 8, May 22
+ * :ref:`homework5`: Wednesday of Week 8, May 22
* Homework 6: Wednesday of Week 9, May 29
There will be a "final project" tentatively due on Wednesday, June 12.
https://www.nature.com/articles/s41535-018-0118-z | # Microscopic origin of Cooper pairing in the iron-based superconductor Ba1−xKxFe2As2
## Abstract
Resolving the microscopic pairing mechanism and its experimental identification in unconventional superconductors is among the most vexing problems of contemporary condensed matter physics. We show that Raman spectroscopy provides an avenue towards this aim by probing the structure of the pairing interaction at play in an unconventional superconductor. As we study the spectra of the prototypical Fe-based superconductor Ba1−xKxFe2As2 for 0.22 ≤ x ≤ 0.70 in all symmetry channels, Raman spectroscopy allows us to distill the leading s-wave state. In addition, the spectra collected in the B1g symmetry channel reveal the existence of two collective modes which are indicative of the presence of two competing, yet sub-dominant, pairing tendencies of $d_{x^2 - y^2}$ symmetry type. A comprehensive functional Renormalization Group and random-phase approximation study on this compound confirms the presence of the two sub-leading channels, and consistently matches the experimental doping dependence of the related modes. The consistency between the experimental observations and the theoretical modeling suggests that spin fluctuations play a significant role in superconducting pairing.
## Introduction
In superconductors such as the cuprates, ferro-pnictides, ruthenates, or heavy-fermion systems, the pairing mechanism is believed to be unconventional and related to direct electronic interactions rather than conventional electron–phonon mediated couplings. Yet, the precise microscopic mechanism, the “glue” that binds electrons into Cooper pairs, remains elusive. Measurements of the superconducting ground state alone are insufficient to unambiguously determine whether a superconductor has a conventional or unconventional pairing mechanism. Raman spectroscopy provides the avenue for gathering the missing information in both dominant and sub-dominant pairing channels.
In comparison to other techniques, Raman spectroscopy (which involves inelastic scattering of light) is rather unique as it provides access to both the energy gaps of a superconductor and to bound states inside the gaps1 that serve as signposts marking the strength of a given pairing interaction.
These bound states were predicted a long time ago by Bardasis and Schrieffer (BS)2 and are collective excitations that correspond to the phase oscillations of the ground state order parameter triggered by the sub-dominant (d-wave) interactions. The BS modes or particle-particle excitons couple to the Raman probe, but there is no consensus yet about their observation in conventional superconductors.3,4 Fe-based superconductors (FeSCs), however, presented a more favorable scenario to search for this physics as many of them are believed to exhibit s± pairing (with an order parameter that may change sign between Fermi surface pockets5,6,7,8,9) and also a sub-leading d-wave pairing interaction that can be strongly competitive. Theoretical calculations based on spin fluctuations have even argued that d-wave could become the ground state for sufficiently strong hole-doping.10,11
For these reasons, Scalapino and Devereaux12 performed a “bare-bones” calculation for a typical FeSC electronic structure with s± symmetry of the ground state and anisotropic gaps, showing that the mode frequency should depend on 1/λd − 1/λs, where λd and λs are the respective coupling strengths of the electrons to the glue that binds the Cooper pair in the d-wave and the s-wave channel. Recent measurements on Ba1−xKxFe2As2,4,13,14 NaFe1−xCoxAs,15 Ba(Fe1−xCox)2As216,17 found peaks in the B1g spectrum which were consistent with a collective mode, but its direct association with a BS mode was unclear.
In this work, we confirm the presence of two sub-dominant pairing interactions, as predicted theoretically, by providing an identification of multiple BS modes in the B1g spectrum of the prototypical ferro-pnictide Ba1−xKxFe2As2 (BKFA). Each sub-dominant pairing interaction results in a BS mode.18 This perspective underlies our identification of the two new peaks in the Raman spectrum with B1g BS modes. The analysis of our experimental peak energies also supports this scenario and even allows us to empirically extract the relative coupling strengths, λd(1)/λs and λd(2)/λs, of the two distinct B1g $(d_{x^2 - y^2})$ pairing channels competing with the s± ground state. We could reproduce the presence of all three pairing channels by performing a functional Renormalization Group (fRG) as well as a Random Phase Approximation (RPA) study. Since the fRG calculation includes the leading fluctuations (magnetic, superconducting, charge density wave etc.) whereas the RPA is distinctly based on magnetically driven (i.e., spin-fluctuation-induced) pairing, the agreement of both approaches with each other and the experiment strongly points to a spin-fluctuation scenario in BKFA. Since a direct observation of spin fluctuations below Tc is not achievable by Raman scattering (the relevant scattering states are gapped out) we study the BS modes which remain as the fingerprints of the microscopic pairing interactions.
## Results
### Experiments
To this end we measured eight samples of BKFA in the wide doping range 0.22 ≤ x ≤ 0.70 as indicated in Fig. 1a and described in detail in Sec. II of the Supplementary Information. BKFA forms high quality single crystals19,20,21 and fairly clean and isotropic gaps.22,23 In the samples with x = 0.22 and x = 0.25 superconductivity and the spin density wave (SDW) state coexist. The samples with x = 0.62 and x = 0.70 are above the doping level of x = 0.6, where EF reaches the bottom of the inner electron band and the topology of the Fermi surface changes qualitatively.24 To present the case for the physics of sub-dominant pairing interactions, we wish to stay away from special effects arising from magnetism or disappearance of pockets and focus on the samples with x = 0.35, 0.40, 0.43, 0.48. In this range, the Raman spectra in the B1g symmetry channel (1 Fe unit cell) change continuously as shown in Fig. 2a–d. Spectra of the other symmetries and outside the range 0.35 ≤ x ≤ 0.48 are compiled in Sec. IV of the Supplementary Information.
The spectra above the superconducting transition temperature Tc are dominated by the electron-hole continua. Below Tc additional (symmetry-dependent) structures appear in the energy range up to ~300 cm−1, and the spectral weight is redistributed from below twice the superconducting gap 2Δ to energies above. New features arise from pair breaking, excitations across the gap, and exciton-like bound states.1,4,18 With increasing doping and a concomitant reduction of Tc, the peaks move to lower energies.
To illustrate why BKFA is a model superconductor for investigating BS modes we highlight the changes in the electronic spectra below Tc. For this purpose we subtract the normal state response from the superconducting spectra. This procedure eliminates temperature-independent components of the spectra like phonons in A1g and B2g symmetry (see Sec. IV of the Supplementary Information). By plotting the difference $${\mathrm{\Delta }}R\chi ^{\prime\prime} ({\tilde{\mathrm \Omega }})$$ ≡ $$R\chi ^{\prime\prime} ({\tilde{\mathrm \Omega }},T \leq 10{\kern 1pt} {\mathrm{K}})$$ − $$R\chi ^{\prime\prime} ({\tilde{\mathrm \Omega }},T \geq T_c)$$ in Fig. 2e with $${\tilde{\mathrm \Omega }}$$ = ħΩ/kBTc we extract superconductivity-induced features of pure B1g symmetry. Due to the full gap, the difference spectra become negative at low energies and three pronounced peaks are observed. The differences between normal and superconducting spectra disappear (ΔRχ″ → 0) close to $${\tilde{\mathrm \Omega }} = 8$$. The highest peak (purple arrows in Fig. 2e) at ~6.2, which we identify with the maximal gap, depends weakly on doping. The value 2Δ/kBTc ≈ 6.2 is in qualitative agreement with the results from other methods.22,23,25 There are two additional narrow lines in the ranges 1.5–3 (green arrows) and 4–5.5 (orange arrows) displaying a strong monotonic downshift with increasing K content. At optimal doping (x = 0.40), evidence was furnished that the narrow line at $${\tilde{\mathrm \Omega }} = 5.3$$ (140 cm−1 in Fig. 2b) results from a bound state of two electrons of a broken Cooper pair.4
Along with the line at $${\tilde{\mathrm \Omega }} = 5.3$$, we find another narrow line in B1g symmetry at $${\tilde{\mathrm \Omega }} = 2.8$$ (75 cm−1 in Fig. 2b), which is difficult to properly assign on the basis of just one doping level. In ref.4 it was suggested that this peak originates in pair-breaking. However, upon studying several doping levels and all symmetries (Secs. IV and V of the Supplementary Information) we find the following systematics in favor of two BS modes: (i) The two in-gap modes appear only in B1g symmetry. (ii) As opposed to the pair-breaking maxima at ~6kBTc, there are no other observed gap energies to which the two sharp modes could correspond. (iii) The spectral weights of both modes depend on their binding energies as predicted by theory (see Sec. VI of the Supplementary Information). (iv) Upon doping K for Ba the in-gap modes increasingly split off of the pair-breaking maximum. The nearly identical doping dependences of the two modes and the absence of pair-breaking features in other symmetries suggest that both modes are linked to the maximal gap. The unique appearance of narrow BS modes in B1g symmetry for 0.35 ≤ x ≤ 0.48 indicates that there are sub-dominant interactions with d-wave symmetry. We label the corresponding sub-leading B1g channels as d(1) and d(2) for the lower- and the higher-energy line, respectively.
In Fig. 3a we compile experimental peak energies derived from Fig. 2. The difference between 2Δ (purple) and the BS modes in the range 1.5–5.5kBTc (green and orange) corresponds to the binding energies Eb(i) = 2Δ − ΩBS(i) with i = 1, 2 of the bound states. The ratios of the relative coupling strengths λd(i)/λs are estimated from Eb(i)/2Δ using the results of refs.3,4,12 and λs = 0.7 from refs.26,27. Note that we used a doping-independent value of 0.7 for this estimate as the ratios λd(i)/λs are weakly sensitive to small changes of λs (see Sec. VI of the Supplementary Information). This analysis enables us to check the validity of the RPA and fRG approaches in a system with intermediate coupling strength.
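As a rough numerical illustration of the relation Eb(i) = 2Δ − ΩBS(i), the short sketch below converts peak positions into binding energies. The inputs are approximate readings of the optimally doped (x = 0.40) spectra quoted in the text (2Δ ≈ 6.2 kBTc, BS modes at ≈2.8 and ≈5.3 kBTc), not fitted values.

```python
# Binding energies of the two Bardasis-Schrieffer modes,
# Eb(i) = 2*Delta - Omega_BS(i). Energies are in units of k_B*Tc;
# the inputs are approximate readings from Fig. 2e at x = 0.40.

two_delta = 6.2           # pair-breaking peak, ~2*Delta/(k_B*Tc)
omega_bs = [2.8, 5.3]     # BS modes d(1) and d(2)

binding_energies = [round(two_delta - w, 2) for w in omega_bs]          # Eb(i)
relative_binding = [round(e / two_delta, 3) for e in binding_energies]  # Eb(i)/2*Delta

print(binding_energies)   # [3.4, 0.9]
print(relative_binding)   # [0.548, 0.145]
```

The ratios Eb(i)/2Δ are the quantities from which the relative coupling strengths λd(i)/λs are then estimated via the theory of refs.3,4,12.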
### Theory
According to ref.18, the presence of two BS modes in the same symmetry channel must imply the presence of two pairing interactions with different form factors competing with the ground state. Thus in addition to the ratios λd(i)/λs derived from experiment, we show in Fig. 3b, c the results of two microscopic studies using fRG and RPA schemes that precisely identify these pairing channels and also provide an estimate for λd(i)/λs.
In order to determine the hierarchy of pairing interactions from the effective pairing vertex V obtained from either fRG or RPA, we decompose the pairing vertex into eigenmodes, which is tantamount to solving an eigenvalue problem of the form
$${\int}_{{\mathrm{FS}}} {\kern 1pt} dqV(k,q)g_\alpha (q) = \lambda _\alpha g_\alpha (k),$$
(1)
where k comprises momentum, band, and spin degrees of freedom, and α is the index consecutively numbering the different eigenvalues. We assume α to be ordered according to the magnitude of eigenvalues λα. gα(k) is the pairing eigenvector along the Fermi surfaces specifying the symmetry of the pairing. More details can be found in Sec. I of the Supplementary Information.
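Equation (1) can be made concrete with a toy discretization: for a separable model vertex on a single circular Fermi surface, the integral over q becomes a matrix eigenvalue problem whose spectrum returns the input couplings and form factors. The form factors and coupling strengths below are illustrative assumptions, not the fRG/RPA vertices of this work.

```python
import numpy as np

# Discretized version of Eq. (1) for a separable toy vertex
# V(k, q) = sum_alpha lambda_alpha g_alpha(k) g_alpha(q)
# on a circular Fermi surface parametrized by the angle theta.
n = 720
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dtheta = 2.0 * np.pi / n

# Orthonormal form factors (A1g s-wave and two B1g d-wave harmonics)
g_s  = np.ones(n) / np.sqrt(2.0 * np.pi)        # alpha = 1
g_d1 = np.cos(2.0 * theta) / np.sqrt(np.pi)     # alpha = 2
g_d2 = np.cos(6.0 * theta) / np.sqrt(np.pi)     # alpha = 3

# Assumed coupling strengths (illustration only)
lam = {"s": 0.70, "d1": 0.40, "d2": 0.25}
V = (lam["s"]  * np.outer(g_s,  g_s) +
     lam["d1"] * np.outer(g_d1, g_d1) +
     lam["d2"] * np.outer(g_d2, g_d2))

# The integral over q becomes a matrix-vector product; diagonalizing
# V * dtheta recovers the hierarchy lambda_s > lambda_d(1) > lambda_d(2).
eigvals = np.sort(np.linalg.eigvalsh(V * dtheta))[::-1]
print(eigvals[:3])   # ~ [0.70, 0.40, 0.25]
```

The eigenvectors of the discretized kernel likewise reproduce the input form factors gα(θ), mirroring how the leading and sub-leading pairing channels are read off from Eq. (1).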
From both fRG and RPA, we find λs, gs(k) (α = 1) to be the dominant superconducting pairing of A1g (s±) type and λd(1,2), gd(1,2)(k) (α = 2, 3) the sub-leading B1g type couplings. Schematic eigenvectors gα(k) for α = 1, 2, 3 are shown as insets in Fig. 3a. These results apply to both $$V \equiv V_{{\mathrm{fRG}}}^{\mathrm{\Lambda }}$$ and V ≡ VRPA when used in Eq. (1), where Λ is the low-energy cutoff in the fRG flow that serves as an upper bound for the transition temperature28,29 (see also Sec. I of the Supplementary Information). The leading eigenvalue λs ≡ λ1 in Eq. (1), which is a function of Λ in the case of fRG, then determines the leading Fermi surface instability. The ratios of the eigenvalues λd(1,2)/λs ≡ λ2,3/λ1 determine the peak positions of the BS modes and are shown along with the experiments in Fig. 3b, c. Note that λ2 ≡ λd1 fits the extended d-wave harmonic form predicted in ref.10.
## Discussion
Arguably the most critical and presumably controversial part of this research is the identification of the in-gap modes observed in B1g symmetry. There are essentially four proposals for the explanation of narrow modes close to or below the gap edge 2Δ, where we assume that the gap on a given band is nearly isotropic in BKFA in accordance with experiment:25 (i) Josephson-like number-phase oscillations of Cooper pairs between the electronic bands are expected for a multi-band system (Leggett mode).30 In the ferro-pnictides they appear in A1g symmetry close to 2Δ for the dominating interband pairing and are strongly damped.31 Experimentally they cannot be distinguished from the pair-breaking peak since the relative intensity of the two effects is not obvious. (ii) For an s± gap an exciton-like narrow mode is predicted to appear in A1g symmetry below the pair-breaking peak.32 Since the materials are very clean with the elastic scattering rate ħ/τ much smaller than Δ it should not be overdamped and be as clearly visible as the B1g collective modes. We did not find indications thereof even upon using various laser lines (see Sec. III of the Supplementary Information). (iii) In the presence of nematic fluctuations the intensity close to the gap edge is predicted to be enhanced in the related B1g channel at a putative quantum critical point.17 In Ba(Fe1−xCox)2As2 the intensity of the B1g response is indeed enhanced close to optimal doping. However, the A1g intensity follows the B1g intensity33 in contrast to the expectation. In NaFe1−xCoxAs a very strong mode close to the gap edge was observed below Tc. The mode appears only along with the response of nematic fluctuations above Tc.15 Yet, the variation with doping of both intensity and energy of this mode is distinctly different from that in BKFA. In addition, the response from fluctuations in BKFA is already very weak for x = 0.22 (ref.33) and can safely be excluded to exist for 0.35 ≤ x ≤ 0.7.
Therefore, the modes in NaFe1−xCoxAs have an origin different from that in BKFA. (iv) Phase oscillations of the order parameter first described by Bardasis and Schrieffer2 entail δ-like in-gap modes in the case of a clean gap, appearing in symmetry channels orthogonal to that of the ground state. For the Fermi surface structure of the ferro-pnictides they are expected in B1g symmetry, as observed experimentally here. We now provide additional arguments in favor of this interpretation, thus extending the detailed quantitative discussion of ref.4 to all doping levels relevant here.
Bound states are generally expected in the presence of competing interactions.2,12 They complete the excitation spectrum of a superconductor and are similar to excitons in a semiconductor. The identification of BS modes and their differentiation from other collective excitations is possible through various characteristic properties. These include the BCS-like temperature dependence of a resolution-limited line in materials having a clean gap. In contrast, the pair-breaking maximum is broad and does not normally follow the BCS prediction4 since the peak energy depends on the gap, the concentration of impurities,34 and on interactions.35 In addition, the BS mode drains spectral weight from the pair-breaking maximum in agreement with theoretical predictions3,12 (see Fig. S10a1–d3 of the Supplementary Information). The transfer of spectral weight and the fitting of the two BS modes is only qualitatively captured by the phenomenology proposed earlier (see Fig. S10e of the Supplementary Information) and may eventually be improved by future 3D calculations. Finally, the spectral weight of BS modes does not increase monotonically with increasing coupling strength of the bound state but, rather, has a maximum for intermediate coupling (see Fig. S9 of the Supplementary Information). Obviously all criteria could be observed experimentally and we feel on safe ground for comparing the doping dependences of the observed modes with model calculations based on fRG and RPA schemes.
The comparison of the two independent theoretical approaches allows us to pin down the origin of the leading pairing channel since the fRG includes all interactions28,29, whereas the RPA focuses on the spin sector as spelled out in detail in Sec. I of the Supplementary Information. Another difference becomes apparent in the procedure used to determine the effective interaction potential. The fRG analysis is designed to start its unbiased Renormalization Group flow already at energies above the bandwidth while the effective model scale entering the RPA resummation has to be chosen at comparably lower energies (see Sec. I of the Supplementary Information). As it turns out, however, in spite of these differing initializations, transcending further down to energies at which superconductivity occurs yields similar findings for both methods.
From the plethora of theories intended to describe the iron-based superconductors, the comparison with the experiment now enables us, as a first step, to verify the validity of fRG and RPA for the intermediately coupled electronic system of BKFA. We find in accordance with our experiments that both approaches predict an s-wave ground state and the two strongest sub-leading channels to be of d-wave symmetry. Furthermore, the theoretical predictions for the relative coupling parameters as shown in Fig. 3 are in good agreement with the experiment. The fRG results are in quantitative agreement, the RPA values systematically underestimate the relative coupling strength but are still close to the experiment. Hence we conclude that fRG and RPA are suitable to describe the experiment around optimal doping, 0.35 ≤ x ≤ 0.48, where the two collective BS modes can be identified. Besides the agreement with the experiment the fRG interaction eigenvectors gα(k) match very well with those obtained from the spin-fluctuation-based RPA analysis in all three channels (α = 1, 2, 3). These agreements indicate that spin fluctuations are an important if not the leading interaction in the system under consideration.
The results presented here put narrow constraints on the description of the Raman data and render differing interpretations15,17 rather unlikely to be applicable to BKFA. Hence, the observation of two collective modes inside the gap of a superconductor establishes a novelty in terms of experimental analysis which promises to have an impact on the general understanding of unconventional superconductivity. Along with the magnitude of the gap, the modes reveal the hierarchy of pairing states in a prototypical material, in full agreement with microscopic predictions. As a result, our experiment demonstrates the unique possibilities of using light scattering as a probe for observing unconventional pairing fingerprints.
## Methods
In this joint experimental and theoretical study we compare results of electronic Raman spectroscopy with predictions of two independent simulations, a fRG analysis and spin-fluctuation theory in the RPA.
### Light scattering
The experiments were performed with calibrated light scattering equipment.1 For excitation a solid state laser (Coherent, Genesis MX SLM) was used emitting at 575 nm. A few experiments at optimal doping (x = 0.40) were performed with additional laser lines at 532 (Coherent, Sapphire 532 SF), 514 and 458 nm (Coherent, Innova 304C) in order to scrutinize the resonance behavior as described in Sec. III of the Supplementary Information. The samples were mounted on the cold finger of a He-flow cryostat in a cryogenically pumped vacuum. The laser-induced heating was determined experimentally to be close to 1 K per mW absorbed laser power (see ref.36). Spectra were measured in the four polarization configurations xy, xy′, RR, and RL where x and y are along the Fe-Fe bonds, $$x^\prime = 1{\mathrm{/}}\sqrt 2 \left( {x + y} \right)$$, $$y^\prime = 1{\mathrm{/}}\sqrt 2 \left( {y - x} \right)$$, and $$R{\mathrm{/}}L = 1{\mathrm{/}}\sqrt 2 \left( {x \pm iy} \right)$$. All symmetry components (A1g, A2g, B1g, and B2g for tetragonal Ba1−xKxFe2As2) can be extracted using linear combinations of the experimental spectra. For the symmetry assignment we use the 1 Fe per unit cell (cf. Fig. 1b for the corresponding BZ).16,37 The spectra we show within this work represent the response Rχ″(Ω, T), where R is an experimental constant, obtained by dividing the cross section by the Bose–Einstein factor {1 + n(T, Ω)} = [1 − exp(−ħΩ/kBT)]−1. In some cases we isolate superconductivity-induced contributions by subtracting the response measured at T ≥ Tc from the spectra taken at $$T \ll T_c$$ and label the difference spectra ΔRχ″(Ω, T).
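The division by the Bose–Einstein factor can be sketched in a few lines. The Raman shifts, temperature, and intensity values below are placeholders, not measured data.

```python
import numpy as np

# Convert a measured spectrum into the response R*chi''(Omega, T) by
# dividing out the thermal factor {1 + n(T, Omega)} = [1 - exp(-hbar*Omega/kB*T)]^-1.
# Raman shifts are in cm^-1.

KB_CM1_PER_K = 0.695  # Boltzmann constant in cm^-1 per kelvin

def bose_factor(omega_cm1, t_kelvin):
    """Thermal factor {1 + n(T, Omega)} for a Stokes spectrum."""
    return 1.0 / (1.0 - np.exp(-omega_cm1 / (KB_CM1_PER_K * t_kelvin)))

omega = np.array([50.0, 140.0, 300.0])        # Raman shift (cm^-1)
cross_section = np.array([1.2, 2.0, 1.1])     # measured intensity (arb. units)

chi_double_prime = cross_section / bose_factor(omega, 35.0)
print(np.round(chi_double_prime, 3))
```

At shifts large compared to kBT the factor approaches 1 and the correction becomes negligible, while at small shifts it strongly suppresses the thermal enhancement of the raw counts.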
### Theory
For analyzing the Cooper pairing in the ferro-pnictides we studied two microscopic models which allow us to disentangle the various contributions to the interaction potential $$V_{{\bf{k}},{\bf{k}}^\prime }$$. This disentanglement becomes possible since the scheme of the fRG analysis28,29 includes all possible interactions a priori in an unbiased fashion whereas the RPA scheme focusses on spin fluctuations. We are aware that both models are valid only in the weak coupling limit but we believe that the essential physics is captured correctly. Either approach leads to an eigenvalue equation (see Eq. (1)) which yields a hierarchy of eigenvalues and the related eigenvectors. (For technical details see Sec. I of the Supplementary Information.) Upon comparing the results the relative influence of the various pairing tendencies can be estimated.
## Data availability
The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
## References
1. Devereaux, T. P. & Hackl, R. Inelastic light scattering from correlated electrons. Rev. Mod. Phys. 79, 175 (2007).
2. Bardasis, A. & Schrieffer, J. R. Excitons and plasmons in superconductors. Phys. Rev. 121, 1050–1062 (1961).
3. Monien, H. & Zawadowski, A. Theory of Raman scattering with final-state interaction in high-Tc BCS superconductors: collective modes. Phys. Rev. B 41, 8798–8810 (1990).
4. Böhm, T. et al. Balancing act: evidence for a strong subdominant d-wave pairing channel in Ba0.6K0.4Fe2As2. Phys. Rev. X 4, 041046 (2014).
5. Mazin, I. I., Singh, D. J., Johannes, M. D. & Du, M. H. Unconventional superconductivity with a sign reversal in the order parameter of LaFeAsO1−xFx. Phys. Rev. Lett. 101, 057003 (2008).
6. Hosono, H. & Kuroki, K. Iron-based superconductors: current status of materials and pairing mechanism. Phys. C 514, 399–422 (2015).
7. Chubukov, A. V., Fernandes, R. M. & Schmalian, J. Origin of nematic order in FeSe. Phys. Rev. B 91, 201105 (2015).
8. Hirschfeld, P. J. Using gap symmetry and structure to reveal the pairing mechanism in Fe-based superconductors. C. R. Phys. 17, 197–231 (2016).
9. Si, Q., Yu, R. & Abrahams, E. High-temperature superconductivity in iron pnictides and chalcogenides. Nat. Rev. Mater. 1, 16017 (2016).
10. Thomale, R., Platt, C., Hu, J., Honerkamp, C. & Bernevig, B. A. Functional renormalization-group study of the doping dependence of pairing symmetry in the iron pnictide superconductors. Phys. Rev. B 80, 180505 (2009).
11. Thomale, R., Platt, C., Hanke, W., Hu, J. & Bernevig, B. A. Exotic d-wave superconducting state of strongly hole-doped KxBa1−xFe2As2. Phys. Rev. Lett. 107, 117001 (2011).
12. Scalapino, D. J. & Devereaux, T. P. Collective d-wave exciton modes in the calculated Raman spectrum of Fe-based superconductors. Phys. Rev. B 80, 140512 (2009).
13. Kretzschmar, F. et al. Raman-scattering detection of nearly degenerate s-wave and d-wave pairing channels in iron-based Ba0.6K0.4Fe2As2 and Rb0.8Fe1.6As2 superconductors. Phys. Rev. Lett. 110, 187002 (2013).
14. Wu, S.-F. et al. Superconductivity and electronic fluctuations in Ba1−xKxFe2As2 studied by Raman scattering. Phys. Rev. B 95, 085125 (2017).
15. Thorsmølle, V. K. et al. Critical quadrupole fluctuations and collective modes in iron pnictide superconductors. Phys. Rev. B 93, 054515 (2016).
16. Muschler, B. et al. Band- and momentum-dependent electron dynamics in superconducting Ba(Fe1−xCox)2As2 as seen via electronic Raman scattering. Phys. Rev. B 80, 180510 (2009).
17. Gallais, Y., Paul, I., Chauvière, L. & Schmalian, J. Nematic resonance in the Raman response of iron-based superconductors. Phys. Rev. Lett. 116, 017001 (2016).
18. Maiti, S., Maier, T. A., Böhm, T., Hackl, R. & Hirschfeld, P. J. Probing the pairing interaction and multiple Bardasis-Schrieffer modes using Raman spectroscopy. Phys. Rev. Lett. 117, 257001 (2016).
19. Rotter, M., Tegel, M. & Johrendt, D. Superconductivity at 38 K in the iron arsenide (Ba1−xKx)Fe2As2. Phys. Rev. Lett. 101, 107006 (2008).
20. Shen, B. et al. Transport properties and asymmetric scattering in Ba1−xKxFe2As2 single crystals compared to the electron doped counterparts Ba(Fe1−xCox)2As2. Phys. Rev. B 84, 184512 (2011).
21. Karkin, A. E., Wolf, T. & Goshchitskii, B. N. Superconducting properties of (Ba−K)Fe2As2 single crystals disordered with fast neutron irradiation. J. Phys. Condens. Matter 26, 275702 (2014).
22. Evtushinsky, D. V. et al. Momentum dependence of the superconducting gap in Ba1−xKxFe2As2. Phys. Rev. B 79, 054517 (2009).
23. Nakayama, K. et al. Universality of superconducting gaps in overdoped Ba0.3K0.7Fe2As2 observed by angle-resolved photoemission spectroscopy. Phys. Rev. B 83, 020501 (2011).
24. Xu, N. et al. Possible nodal superconducting gap and Lifshitz transition in heavily hole-doped Ba0.1K0.9Fe2As2. Phys. Rev. B 88, 220508 (2013).
25. Hardy, F. et al. Strong correlations, strong coupling, and s-wave superconductivity in hole-doped BaFe2As2 single crystals. Phys. Rev. B 94, 205113 (2016).
26. Ikeda, H., Arita, R. & Kuneš, J. Phase diagram and gap anisotropy in iron-pnictide superconductors. Phys. Rev. B 81, 054502 (2010).
27. Kuroki, K., Usui, H., Onari, S., Arita, R. & Aoki, H. Pnictogen height as a possible switch between high-Tc nodeless and low-Tc nodal pairings in the iron-based superconductors. Phys. Rev. B 79, 224511 (2009).
28. Metzner, W., Salmhofer, M., Honerkamp, C., Meden, V. & Schönhammer, K. Functional renormalization group approach to correlated fermion systems. Rev. Mod. Phys. 84, 299–352 (2012).
29. Platt, C., Hanke, W. & Thomale, R. Functional renormalization group for multi-orbital Fermi surface instabilities. Adv. Phys. 62, 453–562 (2014).
30. Leggett, A. J. Number-phase fluctuations in two-band superconductors. Prog. Theor. Phys. 36, 901 (1966).
31. Cea, T. & Benfatto, L. Signature of the Leggett mode in the A1g Raman response: from MgB2 to iron-based superconductors. Phys. Rev. B 94, 064512 (2016).
32. Chubukov, A. V., Eremin, I. & Korshunov, M. M. Theory of Raman response of a superconductor with extended s-wave symmetry: application to the iron pnictides. Phys. Rev. B 79, 220501 (2009).
33. Böhm, T. et al. Superconductivity and fluctuations in Ba1−pKpFe2As2 and Ba(Fe1−nCon)2As2. Phys. Status Solidi (b) 254, 1600308 (2017).
34. Devereaux, T. P. Theory of electronic Raman scattering in disordered unconventional superconductors. Phys. Rev. Lett. 74, 4313 (1995).
35. Manske, D. Theory of Unconventional Superconductors. Springer Tracts in Modern Physics 202 (Springer, Berlin, 2004).
36. Kretzschmar, F. et al. Critical spin fluctuations and the origin of nematic order in Ba(Fe1−xCox)2As2. Nat. Phys. 12, 560–563 (2016).
37. Mazin, I. I. et al. Pinpointing gap minima in Ba(Fe0.94Co0.06)2As2 via band-structure calculations and electronic Raman scattering. Phys. Rev. B 82, 180502 (2010).
38. Böhmer, A. E. et al. Nematic susceptibility of hole-doped and electron-doped BaFe2As2 iron-based superconductors from shear modulus measurements. Phys. Rev. Lett. 112, 047001 (2014).
## Acknowledgements
We acknowledge useful discussions with L. Benfatto, A. Eberlein, D. Einzel, S. A. Kivelson, C. Meingast, and I. Tüttő. W.H. gratefully acknowledges the hospitality of the Institute for Theoretical Physics at the University of California Santa Barbara. Financial support for the work came from the Deutsche Forschungsgemeinschaft (DFG) via the Priority Program SPP 1458 (T.B., A.B., R.H., C.P. and W.H., project nos. HA 2071/7-2 and HA 1537/24-2), the Collaborative Research Centers SFB 1170 (W.H., C.P., and R.T.), and TRR 80 (F.K. and R.H.), the Bavaria California Technology Center BaCaTeC (T.B. and R.H., project no. A5 [2012-2]), the European Research Council (ERC) through ERC-StG-Thomale-TOPOLECTRICS (R.T.), and from the U.S. Department of Energy (DOE), Office of Basic Energy Sciences, Division of Materials Sciences and Engineering, under Contract Nos. DE-AC02-76SF00515 (B.M. and T.P.D.) and DE-FG02-05ER46236 (P.J.H. and S.M.). The RPA calculations were conducted at the Center for Nanophase Materials Sciences, which is a DOE Office of Science User Facility. The work in China (H.-H.W.) was supported by the National Key Research and Development Program of China (2016YFA0300401), and the National Natural Science Foundation of China (NSFC) via projects A0402/11534005 and A0402/11374144.
## Author information
T.B. and R.H. conceived the experiments. R.T., T.A.M., W.H., T.P.D., D.J.S., S.M. and P.J.H. developed the theoretical concept. P.A., T.W. and H.-H.W. prepared the samples. T.B., F.K., A.B., M.R., D.J. and R.H.A. performed the experiments. T.B. developed the phenomenology and fitted the data. C.P., T.A.M., B.M. and S.M. performed the numerical work. T.B., R.T., W.H., B.M., T.P.D., D.J.S., S.M., P.J.H., and R.H. analyzed the results and wrote the manuscript.
Correspondence to Rudi Hackl.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
DOI: https://doi.org/10.1038/s41535-018-0118-z
https://proofwiki.org/wiki/Definition:Product_(Category_Theory)/Binary_Product

# Definition:Product (Category Theory)/Binary Product
## Definition
Let $\mathbf C$ be a metacategory.
Let $A$ and $B$ be objects of $\mathbf C$.
A (binary) product diagram for $A$ and $B$ comprises an object $P$ and morphisms $p_1: P \to A$, $p_2: P \to B$:
$\begin{xy}\xymatrix{ A & P \ar[l]_*+{p_1} \ar[r]^*+{p_2} & B }\end{xy}$
subject to the following universal mapping property:
For any object $X$ and morphisms $x_1, x_2$ like so:
$\begin{xy}\xymatrix{ A & X \ar[l]_*+{x_1} \ar[r]^*+{x_2} & B }\end{xy}$
there is a unique morphism $u: X \to P$ such that:
$\begin{xy}\xymatrix{ & X \ar[ld]_*+{x_1} \ar@{-->}[d]^*+{u} \ar[rd]^*+{x_2} \\ A & P \ar[l]^*+{p_1} \ar[r]_*+{p_2} & B }\end{xy}$
is a commutative diagram, i.e., $x_1 = p_1 \circ u$ and $x_2 = p_2 \circ u$.
In this situation, $P$ is called a (binary) product of $A$ and $B$ and may be denoted $A \times B$.
Generally, one writes $\left\langle{x_1, x_2}\right\rangle$ for the unique morphism $u$ determined by the above diagram.
The morphisms $p_1$ and $p_2$ are often taken to be implicit.
They are called projections; if necessary, $p_1$ can be called the first projection and $p_2$ the second projection.
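As a sanity check of the universal mapping property, the sketch below instantiates it in the category of sets, where the cartesian product with the coordinate projections is a binary product. The particular sets and the morphisms $x_1, x_2$ are arbitrary examples.

```python
# Universal property of the binary product, checked concretely in the
# category of sets: P = A x B with the coordinate projections.

A = {"a1", "a2"}
B = {0, 1, 2}
P = {(a, b) for a in A for b in B}   # candidate product object A x B

def p1(pair):   # first projection P -> A
    return pair[0]

def p2(pair):   # second projection P -> B
    return pair[1]

# A test object X with morphisms x1: X -> A and x2: X -> B
X = {"u", "v", "w"}
x1 = {"u": "a1", "v": "a2", "w": "a1"}
x2 = {"u": 2, "v": 0, "w": 1}

# The mediating morphism <x1, x2>: X -> P determined by the diagram
u = {x: (x1[x], x2[x]) for x in X}

# Commutativity: x1 = p1 . u and x2 = p2 . u
assert all(p1(u[x]) == x1[x] and p2(u[x]) == x2[x] for x in X)

# Uniqueness holds because an element of P is determined by its two
# coordinates: any v: X -> P with the same composites must equal u.
print("universal property verified for this example")
```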
https://www.jstage.jst.go.jp/article/zisin/73/0/73_2020-3/_article/-char/ja/

Online ISSN : 1883-9029
Print ISSN : 0037-1114
ISSN-L : 0037-1114
Seismic Intensity Distribution in Kagoshima City during the 1914 Sakurajima Earthquake
Journal article (authentication required)
2021, Vol. 73, pp. 225–249
A large earthquake (M7.1) occurred during the 1914 eruption of Sakurajima volcano in Kagoshima prefecture, Japan. I estimated seismic intensities on the present Japan Meteorological Agency (JMA) scale for the 1914 Sakurajima earthquake in Kagoshima city. Previous studies on seismic intensities in Kagoshima city used their own methods to estimate seismic intensities from data on damaged houses and stone walls. I used a method of estimating seismic intensities from data on damaged houses, which has been commonly used. Previous studies using this method have assumed that each household owns one house, but the numbers of damaged houses in some towns are significantly larger than those of households in the case of the 1914 Sakurajima earthquake. I therefore made two assumptions, A and B. Assumption A is that each household owns one house. Assumption B is that the ratio of houses to households in each town equals the maximum ratio of total damaged houses to households. As a result, the maximum seismic intensities under assumptions A and B are 6 upper and 6 lower, respectively. The areas with seismic intensities higher than 6 lower are consistent with the previous studies on the 1914 Sakurajima earthquake, but the areas with seismic intensities lower than 5 upper are inconsistent because I did not take stone wall damage into consideration, which the previous studies did. I compared my results with distributions of seismic intensities predicted by an attenuation relation between intensity and distance with two different sets of source parameters. One set of source parameters showed seismic intensities of 6 lower and 6 upper in Kagoshima city, and the other set showed those of 5 lower and 5 upper.
The observed seismic intensities are on average lower than the predicted ones, probably suggesting that the area of large coseismic slip is located farther away than the epicenter, that Kagoshima city lies in the direction opposite to the rupture propagation so that the directivity effect reduces the amplitudes, or that the amplitudes at the frequencies which affect seismic intensities are smaller than those expected from the magnitude. The observed seismic intensity distributions are also rougher than the predicted ones and could be affected by local soil conditions.
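One plausible reading of the two bookkeeping assumptions can be sketched numerically; under assumption A a town's damage ratio can exceed 1, which assumption B repairs by scaling the house count. The town names and counts below are invented for illustration and are not data from the study.

```python
# Toy illustration of assumptions A and B for the house-damage ratio.
households = {"town1": 100, "town2": 80, "town3": 50}
damaged    = {"town1": 130, "town2": 40, "town3": 60}

# Assumption A: every household owns exactly one house, so the damage
# ratio is damaged/households and can exceed 1 (the stated problem).
ratio_a = {t: damaged[t] / households[t] for t in households}

# Assumption B (one reading): houses = r * households in every town,
# with r the maximum town-wise damaged/households ratio, so that no
# damage ratio exceeds 1.
r_max = max(damaged[t] / households[t] for t in households)
ratio_b = {t: damaged[t] / (r_max * households[t]) for t in households}

print(ratio_a)   # town1 exceeds 1 under assumption A
print(ratio_b)   # all ratios <= 1 under assumption B
```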
https://en.wikipedia.org/wiki/Mariner_1 | # Mariner 1
Mission type: Venus flyby. Operator: NASA / JPL. Mission duration: 4 minutes, 53 seconds. Outcome: Failed to orbit. Spacecraft type: Mariner[citation needed], based on Ranger Block I. Manufacturer: Jet Propulsion Laboratory. Launch mass: 202.8 kilograms (447 lb). Power: 220 watts (at Venus encounter). Launch date: July 22, 1962, 09:21:23 UTC. Launch vehicle: Atlas LV-3 Agena-B. Launch site: Cape Canaveral LC-12
Mariner 1 was the first spacecraft of the American Mariner program, designed for a planetary flyby of Venus. It cost $18.5 million in 1962. It was launched aboard an Atlas-Agena rocket on July 22, 1962. Shortly after takeoff, the rocket responded improperly to commands from the guidance systems on the ground, setting the stage for an apparent software-related guidance system failure.[1] With the craft effectively uncontrolled, a range safety officer ordered its destructive abort 294.5 seconds after launch.[2]
According to NASA's current account for the public:
The booster had performed satisfactorily until an unscheduled yaw-lift (northeast) maneuver was detected by the range safety officer. Faulty application of the guidance commands made steering impossible and were directing the spacecraft towards a crash, possibly in the North Atlantic shipping lanes or in an inhabited area. The destruct command was sent 6 seconds before separation, after which the launch vehicle could not have been destroyed. The radio transponder continued to transmit signals for 64 seconds after the destruct command had been sent.[1]
The role of software error in the launch failure remains somewhat mysterious in nature, shrouded in the ambiguities and conflicts among (and in some accounts, even within) the various accounts, official and otherwise. The probe's mission was accomplished by Mariner 2 which launched 5 weeks later.
## Spacecraft and subsystems
The Mariner 1 spacecraft was identical to Mariner 2, launched 27 August 1962. Mariner 1 consisted of a hexagonal base, 1.04 m (3.41 ft) across and 0.36 m (1.2 ft) thick, which contained six magnesium chassis housing the electronics for the science experiments, communications, data encoding, computing, timing, and attitude control, as well as the power control, battery, and battery charger, the attitude control gas bottles, and the rocket engine. On top of the base was a tall pyramid-shaped mast on which the science experiments were mounted, bringing the total height of the spacecraft to 3.66 m (12.0 ft). Attached to either side of the base were rectangular solar panel wings with a total span of 5.05 meters and width of 0.76 meters (16.6 ft × 2.5 ft). Attached by an arm to one side of the base, and extending below the spacecraft, was a large directional dish antenna.
The Mariner 1 power system consisted of the two solar cell wings, one 183 × 76 cm (72 × 30 in) and the other, 152 × 76 cm (60 × 30 in), with a 31 cm (12 in) dacron extension (a solar sail) to balance the solar pressure on the panels. Those panels powered the craft directly or recharged a 1,000-watt-hour sealed silver-zinc cell battery, which was to be used before the panels were deployed, when the panels were not illuminated by the Sun, and when loads were heavy. A power-switching and booster regulator device controlled the power flow. Communications consisted of a 3-watt transmitter capable of continuous telemetry operation, the large high gain directional dish antenna, a cylindrical omnidirectional antenna at the top of the instrument mast, and two command antennas, one on the end of either solar panel, which received instructions for midcourse maneuvers and other functions.
Propulsion for midcourse maneuvers was supplied by a monopropellant (anhydrous hydrazine) 225 N retro-rocket. The hydrazine was ignited using nitrogen tetroxide and aluminium oxide pellets, and thrust direction was controlled by four jet vanes situated below the thrust chamber. Attitude control with a 1 degree pointing error was maintained by a system of nitrogen gas jets. The Sun and Earth were used as references for attitude stabilization. Overall timing and control was performed by a digital Central Computer and Sequencer. Thermal control was achieved through the use of passive reflecting and absorbing surfaces, thermal shields, and movable louvers.
The scientific experiments were mounted on the instrument mast and base. A magnetometer was attached to the top of the mast below the omnidirectional antenna. Particle detectors were mounted halfway up the mast, along with the cosmic ray detector. A cosmic-dust detector and solar plasma spectrometer/detector were attached to the top edges of the spacecraft base. A microwave radiometer and an infrared radiometer and the radiometer reference horns were rigidly mounted to a 48 cm (18.9 in) diameter parabolic radiometer antenna mounted near the bottom of the mast.
In addition, a small 91 × 150 cm (3-by-5-foot) United States flag was folded and stowed onboard Mariner 1 (and Mariner 2), before it was mated to the Agena.
## Launch failure
The launch was aborted due to a combination of two failures, an antenna hardware failure and an onboard guidance system software failure.
First, "the guidance antenna on the Atlas performed poorly, below specifications. When the signal received by the rocket became weak and noisy, the rocket lost its lock on the ground guidance signal that supplied steering commands."[3]
As a result, the rocket had to rely on its onboard guidance system, which had a bug in it. There are differing accounts of the details of this error.
### Overbar transcription error
The most detailed and consistent account was that the error was in hand-transcription of a mathematical symbol in the program specification for the guidance system, in particular a missing overbar.
The error had occurred when a symbol was being transcribed by hand in the specification for the guidance program. The writer missed the superscript bar (or overline) in
$\bar{\dot{R}}_n$
by which was meant "the nth smoothed value of the time derivative of a radius R". Since the smoothing function indicated by the bar was left out of the specification for the program, the implementation treated normal minor variations of velocity as if they were serious, causing spurious corrections that sent the rocket off course.[4][5][6] It was then destroyed by the Range Safety Officer.[7]
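The effect the overbar encodes can be illustrated with a toy simulation. Everything below is invented for illustration (noise level, threshold, moving-average window); the actual guidance code and its smoothing function are not public in this level of detail. A controller reacting to raw, noisy derivative readings issues many spurious corrections, while one reacting to smoothed values does not:

```python
import random

random.seed(0)

# Hypothetical raw time-derivative readings with sensor noise around a
# true value of 1.0 (all numbers here are invented).
raw = [1.0 + random.gauss(0, 0.2) for _ in range(200)]

def smoothed(vals, n, window=20):
    """The n-th smoothed value: a moving average standing in for the
    (unspecified) smoothing that the overbar denoted."""
    lo = max(0, n - window + 1)
    return sum(vals[lo:n + 1]) / (n - lo + 1)

# A toy controller that issues a course correction whenever the reading
# deviates from the expected value by more than a threshold.
threshold = 0.3
corrections_raw = sum(1 for v in raw if abs(v - 1.0) > threshold)
corrections_smooth = sum(1 for n in range(len(raw))
                         if abs(smoothed(raw, n) - 1.0) > threshold)

print(corrections_raw, corrections_smooth)
```

With the smoothing dropped from the specification, normal minor variations cross the threshold and trigger corrections; with it, they almost never do.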
### Alternate guidance system failure explanations
The cryptic nature of the problems that led to the decision to abort Mariner 1, as well as the confusion in various reports on the incident, led to other explanations in the popular press.
#### "The most expensive hyphen in history"
Many accounts note a missing "hyphen" ('-') rather than the overbar, in either the equations, the computer instructions or the data. For example, Arthur C. Clarke wrote several years later that Mariner 1 was "wrecked by the most expensive hyphen in history".[8]
Several factors contributed to the "missing hyphen" narrative and its longevity, even in official accounts from technical cognoscenti at JPL and NASA. Among the factors cited (or obvious enough):
• The overbar's resemblance to a hyphen ('‾' versus '-').
• The difficulty of explaining the real error to the American public and its elected representatives.
• External political pressures and internal schedule pressures, as the mission was
• an expensive failure of a three-way collaboration (JPL, NASA, USAF),
• legitimized within the narrative of the US-USSR space race,
• very high profile, as America's first planetary mission,
• on a very tight schedule, as it was planned with a narrow launch window (45 days), leaving little time for inquiries, investigations or recriminations before the launch of Mariner 2. The official accounts (which included mentions of a missing hyphen) were the results of an inquiry conducted in less than a week.
Regardless of whatever may have given rise to initial reports of a "missing hyphen", the simplest and most consistent-sounding explanation that the public and Congress would accept would probably have been preferable to those who simply wanted to get on with the job of a Venus fly-by mission. The stories had contradictions, perhaps, but they were so technical that nobody who could have interfered with Mariner-program progress was likely to care about them or even notice. (After all, even in one later NASA account, the supposed "hyphen" is reported as missing from instructions at one point in the text, and from equations at another[3]).
#### Ambiguity of error location
The New York Times, reporting on the results of a review board, said that the error stemmed from "the omission of a hyphen in some mathematical data".[9] The same report also said the hyphen was
a symbol that should have been fed into a computer, along with a mass of other coded mathematical instructions.
This sort of inconsistency or ambiguity was seen in many subsequent variations on the story, official and otherwise. "Missing hyphen" versions of the story gained official support before the month was out. NASA official Richard B. Morrison testified before Congress that the supposed hyphen
... gives a cue for the spacecraft to ignore the data the computer feeds it until radar contact is once again restored. When that hyphen is left out, false information is fed into the spacecraft control systems. In this case, the computer fed the rocket in hard left, nose down and the vehicle obeyed and crashed.[10]
(Note that Morrison says the spacecraft "crashed", not that it was intentionally destroyed). In a NASA account submitted to Congress in 1963, the hyphen is described as missing in two different ways:
NASA-JPL-USAF Mariner R-1 Post-Flight Review Board determined that the omission of a hyphen in coded computer instructions transmitted incorrect guidance signals to Mariner spacecraft boosted by two-stage Atlas-Agena from Cape Canaveral on July 21. Omission of hyphen in data editing caused computer to swing automatically into a series of unnecessary course correction signals which threw spacecraft off course so that it had to be destroyed.[11]
In the same 1963 report to Congress, Morrison's testimony from the previous year is recounted differently:
In testimony before House Science and Astronautics Committee, Richard B. Morrison, NASA's Launch Vehicles Director, testified that an error in computer equations for Venus probe launch of Mariner R-1 space-craft on July 21 led to its destruction when it veered off course.[12]
JPL's Mariner Venus Final Project Report in 1965 noted that, at 4 minutes and 25 seconds into the flight, there was an "[U]nscheduled yaw-lift maneuver":
...steering commands were being supplied, but faulty application of the guidance equations was taking the vehicle far off course.[13]
In a NASA report published in 1985, Oran Nicks offered another slightly differing account, but with the software-related error still identified as a missing "hyphen":
The guidance antenna on the Atlas performed poorly, below specifications. When the signal received by the rocket became weak and noisy, the rocket lost its lock on the ground guidance signal that supplied steering commands. The possibility had been foreseen; in the event that radio guidance was lost the internal guidance computer was supposed to reject the spurious signals from the faulty antenna and proceed on its stored program, which would probably have resulted in a successful launch. At this point a second fault took effect. Somehow a hyphen had been dropped from the guidance program loaded aboard the computer, allowing the flawed signals to command the rocket to veer left and nose down. The hyphen had been missing on previous successful flights of the Atlas, but that portion of the equation had not been needed since there was no radio guidance failure.[3]
NASA's website now says the problem was:
... apparently caused by a combination of two factors. Improper operation of the Atlas airborne beacon equipment resulted in a loss of the rate signal from the vehicle for a prolonged period. The airborne beacon used for obtaining rate data was inoperative for four periods ranging from 1.5 to 61 seconds in duration. Additionally, the Mariner 1 Post Flight Review Board determined that the omission of a hyphen in coded computer instructions in the data-editing program allowed transmission of incorrect guidance signals to the spacecraft. During the periods the airborne beacon was inoperative the omission of the hyphen in the data-editing program caused the computer to incorrectly accept the sweep frequency of the ground receiver as it sought the vehicle beacon signal and combined this data with the tracking data sent to the remaining guidance computation. This caused the computer to swing automatically into a series of unnecessary course corrections with erroneous steering commands which finally threw the spacecraft off course.[14]
#### Other punctuation
In other accounts, the bug consisted of:
• a period typed in place of a comma, causing a FORTRAN DO loop statement to be misinterpreted (although there is no evidence that FORTRAN was used in the mission): "DO 5 K=1. 3" is parsed as the assignment "DO5K = 1.3" rather than as a loop.[15] There are anecdotal reports that there was in fact such a bug in a NASA orbit computation program at about this time, but it was a program for Project Mercury, not Mariner, and the claim was that the bug was noticed and fixed before there could be any serious consequences.[16]
• a missing comma [17]
• an extraneous semicolon [18]
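The period-for-comma confusion is possible because FORTRAN ignored blanks inside statements, so "DO 5 K = 1. 3" and "DO5K = 1.3" are the same token stream. A toy sketch of the ambiguity (the regex is only a crude stand-in for a real FORTRAN parser, not the actual Atlas or Mercury code):

```python
import re

def classify(stmt: str) -> str:
    """Crude illustration only: FORTRAN ignored blanks, so the compiler
    saw one undifferentiated character stream after blank removal."""
    s = stmt.replace(" ", "").upper()
    # 'DO', a statement label, a variable, '=', and a comma-separated range
    if re.fullmatch(r"DO\d+[A-Z][A-Z0-9]*=[^,]+,[^,]+", s):
        return "DO loop"
    return "assignment"

print(classify("DO 5 K = 1, 3"))   # with the comma: a loop header
print(classify("DO 5 K = 1. 3"))   # with a period: assignment to 'DO5K'
```

The single-character difference flips the statement from a loop over K into a silent assignment to a fresh variable named DO5K, which is why the story is at least mechanically plausible.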
## References
1. ^ a b "Mariner 1". 4.0.8. NASA. 2008-08-05. Retrieved 2009-02-14.
2. ^ "Venus Shot Fails as Rocket Strays" (fee required). New York Times. 1962-07-23. Retrieved 2009-02-14.
3. ^ a b c NASA publication SP-480, Far Travelers -- The Exploring Machines, Oran W. Nicks, 1985
4. ^ Peter Neumann (1989-05-27). "Mariner I -- no holds BARred". The Risks Digest Volume 8: Issue 75. Retrieved 2014-10-31.
5. ^ Ceruzzi, Paul E. Ceruzzi (1989). Beyond the Limits: Flight Enters the Computer Age. ISBN 978-0262530828.
6. ^ "Space FAQ 08/13 - Planetary Probe History". Retrieved 9 September 2016.
7. ^ Beyond the Limits: Flight Enters the Computer Age, Paul E. Ceruzzi, p.203. In one of the notes for this book (p. 250), the author writes "The same flawed program had been used in earlier Ranger launches with no ill effects."
8. ^ The Promise of Space, Arthur C. Clarke, 1968, p. 225.
9. ^ "For Want of Hyphen Venus Rocket Is Lost", New York Times, July 27, 1962, as quoted in RISKS Digest, Vol 5, Issue #66.
10. ^ House Science and Astronautics Committee, July 31, 1962, also quoted here
11. ^ "Astronautical and Aeronautical Events of 1962," report to the House Committee on Science and Astronautics, June 12, 1963 p.131.
12. ^ "Astronautical and Aeronautical Events of 1962," report to the House Committee on Science and Astronautics, June 12, 1963 p.333
13. ^
14. ^ "Mariner 1", Version 4.0.7, 2 April 2008.
15. ^ Beyond the Limits: Flight Enters the Computer Age, Paul E. Ceruzzi, p.250, footnote 13 for Chapter 9, where Ceruzzi writes that "[S]ince the Atlas Guidance Computer did not have a Fortran compiler ....", and in footnote 14, "The Atlas Launch computer did not even use Fortran. How the story has become embellished in this way is a mystery."
16. ^ RISKS Digest, v. 9, issue 54, "Mariner I [once more]", Mark Brader, 12 December 1989.
17. ^ Famous bugs
18. ^ JPL 101 page 22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45598873496055603, "perplexity": 4766.98591041632}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172077.66/warc/CC-MAIN-20170219104612-00066-ip-10-171-10-108.ec2.internal.warc.gz"} |
http://www.computer.org/csdl/trans/tc/1987/05/01676940-abs.html |
Issue No.05 - May (1987 vol.36)
pp: 554-561
P.J.B. King , Department of Computer Science, Heriot-Watt University
ABSTRACT
Models for local area networks of the slotted ring style of architecture are developed and evaluated. The hardware protocol is modeled using a BCMP network. The Basic Block protocol of the Cambridge ring is modeled using an approximate solution method of the fixed-point type. A limited comparison between the Cambridge Ring and another ring architecture, the token ring, is carried out.
INDEX TERMS
ring network, Cambridge ring, fixed-point approximations, local area networks, performance evaluation
CITATION
P.J.B. King, I. Mitrani, "Modeling a Slotted Ring Local Area Network", IEEE Transactions on Computers, vol.36, no. 5, pp. 554-561, May 1987, doi:10.1109/TC.1987.1676940 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.878079354763031, "perplexity": 8740.444159224224}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042992201.62/warc/CC-MAIN-20150728002312-00253-ip-10-236-191-2.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/293631/dont-understand-a-bound-on-dirichlet-l-function-for-principal-character/293767 | # Don't understand a bound on Dirichlet L function for principal character
$s= \sigma + it$ is any complex number with real part $> 0$.
This came up because $L(s,\chi) = \zeta(s)\prod_{p | q} (1-p^{-s})$ and I have a bound for zeta I want to change to a bound for $L$ where $\chi$ is the principal character mod $q$.
Let $q$ be squarefree; then $$\prod_{p | q} (1-p^{-s}) < d(q)$$ for all $s > 0$.
$d$ is the number of divisors of $q$.
I tried taking the reciprocal of both sides and then expanding the product into the sum $\sum_{n\text{ is a product of primes in }q} n^{-s} > 1/d(q)$, but this makes the inequality look trivial because the LHS is $> 1$. I must be missing or misunderstanding something. Thank you for any suggestion.
I'm trying to get this theorem: $$\left| L(s,\chi_0) - \frac{s}{s-1}\prod_{p|q}(1-p^{-s})\right| \le d(q)\frac{|s|}{\Re(s)}$$ from the bound on the zeta function $$\left| \zeta(s) - \frac{s}{s-1} \right| \le \frac{|s|}{\Re(s)}$$ if that helps make this clearer.
As I said, I already showed $L(s,\chi) = \zeta(s)\prod_{p | q} (1-p^{-s})$, so multiply both sides of the zeta inequality by $\prod_{p | q} (1-p^{-s})$ and we get $$\left| L(s,\chi_0) - \frac{s}{s-1}\prod_{p|q}(1-p^{-s})\right| \le \left[ \prod_{p | q} (1-p^{-s}) \right]\frac{|s|}{\Re(s)},$$ therefore if I can just show $$\prod_{p | q} (1-p^{-s}) \le d(q)$$ I get the theorem.
-
@anon, I added a third edit showing how my original question is connected to that. – user58512 Feb 3 '13 at 16:19
@anon, yeah! that's what I'm confused about! I wonder why my notes have d(q) in it, and I couldn't find this theorem in my book. – user58512 Feb 3 '13 at 16:23
Actually, the $s$ is no longer real, so $|1-p^{-s}|<1$ may not hold in general even assuming $\mathrm{Re}(s)>0$. When you withheld the background information originally, your question made it seem like you were only interested in real $s$, which was misleading. – anon Feb 3 '13 at 16:25
If $\Re(s)>0$ then $\lvert p^{-s}\rvert<1$, so $\lvert 1-p^{-s}\rvert < 2$.
Note that $d(n) \ge 2^{\omega(n)}$ where $\omega(n)$ is the number of unique prime factors of $n$. They are actually equal when $n$ is squarefree, but this is not a necessary assumption for the inequality you seek.
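The two facts in the answer — each factor satisfies $|1-p^{-s}|<2$, and $d(q)=2^{\omega(q)}$ for squarefree $q$ — can be sanity-checked numerically. A small sketch with ad-hoc helper functions (the particular $q$ and $s$ are arbitrary choices):

```python
def prime_factors(n):
    """Distinct prime factors of n by trial division."""
    factors, p = set(), 2
    while p * p <= n:
        while n % p == 0:
            factors.add(p)
            n //= p
        p += 1
    if n > 1:
        factors.add(n)
    return factors

def num_divisors(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0)

q = 2 * 3 * 5 * 7                 # a squarefree modulus
s = complex(0.5, 14.1)            # any s with Re(s) > 0
prod = 1
for p in prime_factors(q):
    prod *= 1 - p ** (-s)

omega = len(prime_factors(q))
assert num_divisors(q) == 2 ** omega   # d(q) = 2^omega(q), q squarefree
assert abs(prod) <= 2 ** omega         # each factor has |1 - p^{-s}| < 2
```

Chaining the two assertions gives exactly the desired bound $\left|\prod_{p|q}(1-p^{-s})\right| \le d(q)$.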
By the way, you misstated the inequality. What you want is $$\left| \prod_{p | q} (1-p^{-s}) \right| \le d(q),$$
since the quantity inside the absolute value is not necessarily real (neither is $s$ so "$s>0$" makes little sense). | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9604520201683044, "perplexity": 183.55847344137666}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860121561.0/warc/CC-MAIN-20160428161521-00029-ip-10-239-7-51.ec2.internal.warc.gz"} |
https://en.wikibooks.org/wiki/Geometry/Differential_Geometry/Basic_Curves | # Geometry/Differential Geometry/Basic Curves
The differential geometry of curves is the usual starting point for students in the field of differential geometry, which is concerned with studying curves, surfaces, etc. using the concept of derivatives from calculus. Thus, implicit in the discussion, we assume that the defining functions are sufficiently differentiable, i.e., they have no corners or cusps, etc. Curves are usually studied as subsets of an ambient space with a notion of equivalence. For example, one may study curves in the plane, in the usual three-dimensional space, on a sphere, etc. The most common notion of equivalence is that of rigid or Euclidean motion, where two curves may be brought into alignment by a rotation and a translation. There are other interesting notions, however. In particular, in the affine differential geometry of curves, two curves are equivalent if they may be brought into alignment through a translation and an invertible linear transformation. Special affine differential geometry considers two curves equivalent if they may be brought into alignment with a translation and a linear transformation of determinant one. All ellipses in the plane are equivalent in affine geometry, and are equivalent in special affine geometry if their interiors have the same area. We will concentrate on equivalence under Euclidean motions. In all these notions of equivalence, the ambient space is equipped with some additional structure. In the case of Euclidean motion, the additional structure is the inner or dot product of vectors.
Plane curves: Curves may be defined parametrically, say $(x(t),y(t)) = (\cos(t),\sin(t))$, or as the level set of a function $f(x,y)=c$, e.g., $\{ (x,y)|x^2+y^2=1\}$. These, of course, both define the circle of radius one. The third method of defining curves is that of a graph, $(x,y=f(x))$. We will find the parametric formulation usually easier to work with. Any graph-type curve has the parametrization $(x,y)=(t,f(t))$. For the most part, we will not concern ourselves with the "speed" of a curve, i.e., its actual parametrization. For example, the reparametrization $t\to 2t$ defines a curve that traverses the same path, only twice as quickly.
Given a plane curve $\bold{x}(t)=(x(t),y(t))$, we may consider its velocity, which is simply the component-wise derivative, $\bold{x}'(t)=(x'(t),y'(t))$. If one considers a rather simple reparametrization such as $t\to e^{\tan(t)}$, one can quickly obtain derivatives which become quite ugly and unwieldy, even starting with "easy" curves such as the circle. Thus it might not be obvious from these derivatives that one is dealing with a circle.
Given a curve with a specific parametrization, there is a special reparametrization (almost unique) that eliminates the freedom that causes our headaches. Again, let $\bold{x}(t)=(x(t),y(t))$ be our curve. We want a reparametrization $t=t(s)$ so that the new curve has speed one, i.e., the magnitude of the velocity vector satisfies $\sqrt{x'(s)^2+y'(s)^2}=1$. One may determine the function $t(s)$ with the chain rule. In order for the curve $\bold{x}(s)=(x(t(s)),y(t(s)))$ to have unit speed, by the chain rule, we need
$(\frac{dx}{dt}\frac{dt}{ds})^2+ (\frac{dy}{dt}\frac{dt}{ds})^2=1$ or
$\frac{dt}{ds}=\frac{1}{\sqrt{(\frac{dx}{dt})^2+ (\frac{dy}{dt})^2}}$.
The latter is a differential equation which, for all intents and purposes, cannot be solved explicitly except in very special circumstances. It will, by the standard theory of differential equations, have a solution away from points where the velocity vanishes. We will develop the theory assuming that this equation has been solved, but then show how to work in other, non-unit-speed, parametrizations.
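Since the ODE for $t(s)$ rarely has a closed form, in practice one approximates arc length numerically. A minimal sketch (the chord-sum approximation, step count, and the circle used as a test case are arbitrary choices for illustration):

```python
import math

def arc_length(x, y, t0, t1, n=100000):
    """Approximate arc length by summing chord lengths over a fine grid."""
    total = 0.0
    dt = (t1 - t0) / n
    px, py = x(t0), y(t0)
    for i in range(1, n + 1):
        t = t0 + i * dt
        cx, cy = x(t), y(t)
        total += math.hypot(cx - px, cy - py)
        px, py = cx, cy
    return total

# Sanity check on a circle of radius r: one full turn should measure 2*pi*r.
r = 2.0
length = arc_length(lambda t: r * math.cos(t), lambda t: r * math.sin(t),
                    0.0, 2 * math.pi)
assert abs(length - 2 * math.pi * r) < 1e-6
```

Tabulating the cumulative chord sums and inverting by interpolation yields a numerical unit-speed reparametrization even when the integral has no closed form.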
Thus, we assume that $\bold{x}=\bold{x}(s)=(x(s),y(s))$. With this parametrization, $\bold{x}'(s)=(x'(s),y'(s))$ is a unit vector, the unit tangent vector. Let us call this vector $\bold{e}_1$, i.e., $\bold{e}_1(s)=(x'(s),y'(s))$. We define the unit normal vector $\bold{e}_2$ to be the unit vector obtained by rotating $\bold{e}_1$ by 90° counterclockwise:
$\bold{e}_2(s)=(-y'(s),x'(s))$.
Because we are in the plane, all vectors that are perpendicular to the vector $\bold{e}_1$ are necessarily some scalar multiple of $\bold{e}_2$. We use this observation as follows: The scalar function of $s$ given by $\bold{e}_1(s)\cdot \bold{e}_1(s)$ is constant (equal to one). Thus, its derivative vanishes:
$\frac{d}{ds}(\bold{e}_1(s)\cdot \bold{e}_1(s))=2\,\bold{e}_1(s)\cdot \frac{d}{ds}\bold{e}_1(s)=0$.
Thus, the vector valued function $\frac{d}{ds}\bold{e}_1(s)=(x''(s),y''(s))$ is perpendicular to $\bold{e}_1(s)$, and thus by the above observation, we may write:
$\frac{d}{ds}\bold{e}_1(s)=\kappa(s)\, \bold{e}_2(s)$
for some function $\kappa(s)$. The function $\kappa(s)$ is intrinsic and may be understood as the rate at which the unit tangent vector swings into the unit normal direction.
An example: We consider a circle of radius $r$ centered at the origin. This may be parametrized by $(r\, \cos(t), r\,\sin(t))$. One can solve the differential equation defining $s$ in this case, and the unit-speed parametrization is given by $(r\, \cos(s/r), r\,\sin(s/r))$. (One may have guessed this as well.) The unit tangent and normal are given by $\bold{e}_1=(-\sin(s/r), \cos(s/r))$, $\bold{e}_2=(-\cos(s/r), -\sin(s/r))$. One then has:
$\frac{d}{ds}\bold{e}_1(s)=(1/r)(-\cos(s/r), -\sin(s/r))=\kappa(s)\, \bold{e}_2(s)$
with $\kappa(s)$ being the constant function $1/r$. This gives another interpretation of the function $\kappa(s)$: it is the reciprocal of the radius of the best-fitting, or osculating, circle.
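The $\kappa = 1/r$ conclusion can be checked numerically without any unit-speed parametrization, using the general formula $\kappa = |x'y'' - y'x''|/(x'^2+y'^2)^{3/2}$ valid for an arbitrary parametrization (stated here without derivation) and finite differences; the step size and test point are ad-hoc choices:

```python
import math

def curvature(x, y, t, h=1e-4):
    """Finite-difference estimate of the curvature of (x(t), y(t))."""
    xp = (x(t + h) - x(t - h)) / (2 * h)          # first derivatives
    yp = (y(t + h) - y(t - h)) / (2 * h)
    xpp = (x(t + h) - 2 * x(t) + x(t - h)) / h ** 2   # second derivatives
    ypp = (y(t + h) - 2 * y(t) + y(t - h)) / h ** 2
    return abs(xp * ypp - yp * xpp) / (xp * xp + yp * yp) ** 1.5

# For a circle of radius r the curvature should be 1/r at every point.
r = 3.0
k = curvature(lambda t: r * math.cos(t), lambda t: r * math.sin(t), 0.7)
assert abs(k - 1 / r) < 1e-5
```

Note the formula is parametrization-independent, which is exactly why one can avoid solving the arc-length ODE in practice.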
The function $\kappa(s)$ characterizes curves up to Euclidean motion, via the following result:
Theorem: If two curves $\bold{x}_1(s)=(x_1(s),y_1(s))$ and $\bold{x}_2(s)=(x_2(s),y_2(s))$ have the same curvature function $\kappa(s)=\kappa_1(s)=\kappa_2(s)$, then necessarily there exists a rigid motion involving a rotation and a translation taking the curve $\bold{x}_1$ to $\bold{x}_2$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 49, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9688228964805603, "perplexity": 177.02346336307437}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042986646.29/warc/CC-MAIN-20150728002306-00006-ip-10-236-191-2.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/98757/schur-functors-generalization-to-jack-hall-littlewood-macdonald-functors?sort=oldest | # Schur functors generalization to “Jack”, “Hall-Littlewood”, “Macdonald” functors ?
Schur functors are functors from the category of vector spaces to itself. If we take an operator $M: V \to V$, apply a Schur functor to it, and then calculate the trace $Tr(M^{\Lambda})$, we will get the Schur polynomial in the eigenvalues of $M$.
Question Can one generalize (deform) Schur functors, such that $Tr(M^{\Lambda})$ will give polynomials which generalize (deform) Schur polynomials e.g. Hall-Littlewood polynomial, or Jack polynomial and most generally Macdonald polynomials ?
-
Have you seen arxiv.org/abs/q-alg/9503012 ? – Gjergji Zaimi Jun 4 '12 at 9:05
@Gjergji Thank you, I know; maybe I am forgetting something now, but it seems to me it is not the answer. Why do I need an intertwiner? Is it natural? It does not seem so to me. Moreover, you will need to take a very specific representation to obtain the Calogero model (which corresponds to Jack polynomials, and in the q-case to Macdonald polynomials)... – Alexander Chervov Jun 4 '12 at 14:14
It seems to me that this is answered, perhaps in a boring way, by Haiman's work on the $n!$-conjecture (now a theorem due to Haiman). For any partition $\lambda$, Haiman constructs a finite dimensional graded module $C_{\lambda}$ for $\mathbb{C}[x,y][S_n]$. (The group elements commute with $x$ and $y$, and $x$ and $y$ commute with each other.) The doubly graded Frobenius character of $C_{\lambda}$ is the $\lambda$-Macdonald polynomial.
Now just use Schur-Weyl duality: Define the functor $F_{\lambda}$ from vector spaces to vector spaces by $$V \mapsto V^{\otimes |\lambda|} \otimes_{\mathbb{C}[S_n]} C_{\lambda}.$$ The result is a doubly graded $\mathbb{C}[x,y]$ module which is the sum of Schur functors corresponding to the Macdonald polynomial.
The "Frobenius character" is the standard map which sends an $S_n$-representation to a symmetric polynomial. The funny thing is that it is NOT a character -- in particular, the Frobenius character of a tensor product is not the product of the Frobenius characters. The relationship goes through Schur-Weyl duality: If $M$ is an $S_n$-rep with Frob. character $f$, then $V^{\otimes n} \otimes_{k[S_n]} M$, as a $GL(V)$-rep, has character $f$. – David Speyer Jun 4 '12 at 15:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9687650799751282, "perplexity": 639.6486608959159}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246660628.16/warc/CC-MAIN-20150417045740-00015-ip-10-235-10-82.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/refraction-defraction-reflection.401403/ | # Refraction Defraction Reflection
1. May 6, 2010
### j3wfrobklyn
A laser beam traveling in air strikes the midpoint of one end of a slab of material with index of refraction 1.45, as shown in the figure below.
Dimensions of the block: 46 mm (L) and 3.4 mm (W). The initial ray of light enters the slab from the left side at an angle of incidence of 39 degrees.
Find the number of internal reflections of the laser beam before it finally emerges from the opposite end of the slab.
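The geometry can be worked out numerically (a sketch, not part of the original thread; the dimensions and angle are taken from the problem statement above). Snell's law gives the angle inside the slab, and the refracted ray then zig-zags between the two long walls:

```python
import math

def slab_reflections(n_slab, incidence_deg, length_mm, width_mm):
    """Count internal reflections of a ray entering at the midpoint of
    one end of a slab, using Snell's law and straight-line geometry."""
    # Snell's law at the entrance face: sin(i) = n * sin(r)
    r = math.asin(math.sin(math.radians(incidence_deg)) / n_slab)
    slope = math.tan(r)                  # transverse drift per mm of length
    x_first = (width_mm / 2) / slope     # first wall hit (ray starts at midpoint)
    x_step = width_mm / slope            # spacing between subsequent wall hits
    count = 0
    x = x_first
    while x < length_mm:                 # count hits strictly inside the slab
        count += 1
        x += x_step
    return math.degrees(r), count

angle, bounces = slab_reflections(1.45, 39.0, 46.0, 3.4)
```

With these numbers the refracted angle comes out near 25.7 degrees and the ray reflects 7 times before exiting; treat this as a sanity check on the geometry, not as the textbook's official answer (the last hit lands very close to the exit face).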
2. May 6, 2010
### cesiumfrog
Re: Refraction/Defraction/Reflection
You phrase that as if it were your homework question.
3. May 6, 2010
### j3wfrobklyn
Re: Refraction/Defraction/Reflection
lol it is a homework question
4. May 7, 2010
### Born2bwire
Re: Refraction/Defraction/Reflection
Well, we have a forum for that.
http://www.zora.uzh.ch/id/eprint/61829/ | # Flat-Panel CT Arthrography: Feasibility Study and Comparison to Multidetector CT Arthrography
Guggenberger, Roman; Fischer, Michael Alexander; Hodler, Juerg; Pfammatter, Thomas; Andreisek, Gustav (2012). Flat-Panel CT Arthrography: Feasibility Study and Comparison to Multidetector CT Arthrography. Investigative Radiology, 47(5):312-318.
## Abstract
OBJECTIVES: To show the feasibility of flat-panel computed tomography (FPCT) arthrography and quantitatively and qualitatively compare different FPCT protocols with standard multidetector computed tomography (MDCT).
MATERIALS AND METHODS: First, a phantom simulating joint space with increasing iodine concentrations was scanned using a standard MDCT and 3 different FPCT protocols. Quantitative analyses were performed by measuring CT numbers of iodine dilutions, radiation dose, and image noise, as well as signal-to-noise ratio and contrast-to-noise ratio. Second, FPCT arthrographies of 4 animal joint specimens were performed and analyzed qualitatively by 2 independent readers who evaluated image artifacts, image noise, overall image quality, and anatomic depiction of bone, cartilage, and soft tissue. Kappa values were calculated for inter-reader agreement. Pearson's correlation coefficient (r) and Wilcoxon signed-ranks test with Bonferroni corrections for multiple comparisons were used to compare MDCT and FPCT.
RESULTS: In phantoms, all CT scans showed a linear correlation between increasing iodine concentrations and mean HU values of contrast media and radiation dose, respectively (r = 0.98-0.99, P < 0.01). Dose-length product remained constant for MDCT scans. Signal-to-noise ratio for phantom water linearly decreased in all FPCT scans with increasing iodine concentrations. Contrast-to-noise ratio curves showed reduced slope at iodine concentrations higher than 75 mg/mL. FPCT arthrography after intra-articular administration of 5 to 6 mL of a 25% dilution of iopromide (Ultravist 300 mg/mL, Bayer HealthCare, Berlin, Germany) was successfully performed in all 4 animal joint specimens. Kappa values for inter-reader agreement of qualitative image analyses were 0.62 to 0.91. Image and depiction quality of 20-s FPCT scans were similar or superior compared with standard MDCT (P < 0.005).
CONCLUSION: FPCT arthrography is feasible and may allow similar image quality compared with standard MDCT arthrography.
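The inter-reader agreement statistic quoted above (Cohen's kappa) is straightforward to compute. A minimal sketch with made-up reader scores — the actual study data are not reproduced here:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters scoring the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    # Chance agreement from the raters' marginal category frequencies
    chance = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - chance) / (1 - chance)

# Hypothetical image-quality scores from two independent readers
reader1 = [1, 1, 2, 2, 2]
reader2 = [1, 2, 2, 2, 2]
kappa = cohens_kappa(reader1, reader2)
```

Here observed agreement is 0.8 and chance agreement 0.56, giving kappa ≈ 0.545 — in the "moderate agreement" band, below the 0.62–0.91 range reported in the abstract.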
Item Type: Journal Article, refereed, original work. Faculty: 04 Faculty of Medicine > University Hospital Zurich > Clinic for Diagnostic and Interventional Radiology. Subject: 610 Medicine & health. Language: English. Year: 2012. Deposited: 20 Apr 2012 08:20. Last modified: 05 Apr 2016 15:47. Publisher: Lippincott Williams & Wilkins. ISSN: 0020-9996. DOI: https://doi.org/10.1097/RLI.0b013e3182419082. PMID: 22488509.
https://math.stackexchange.com/questions/3328306/what-are-the-final-objects-of-the-arrowset-category | # What are the final objects of the $Arrow(Set)$ category?
By $Arrow(Set)$ I mean the category whose objects are the arrows of the $\mathbf{Set}$ category and whose arrows from the object $f : A \rightarrow B$ to the object $f' : A' \rightarrow B'$ are the pairs of functions $(a,b)$ such that $f' \circ a = b \circ f$.
• Consider what the final objects in $\mathbf{Set}$ are; this should give you a clue as to what the final objects in this category should be. – Clive Newstead Aug 19 '19 at 20:27
• If we write $\to$ for the category consisting of two objects and one arrow joining them (and the identities), then the arrow category of $\mathcal C$ is $\mathcal C^{\to}$, i.e. the category of functors from $\to$ to $\mathcal C$. Now you can apply generic theorems about limits in functor categories, in particular that they are computed point-wise when the target category is complete. You are probably more at a point where you should just try to directly prove the statement though... – Derek Elkins left SE Aug 19 '19 at 21:00
To find a terminal (final) object we want a map $t: S\to T$ such that for all maps $f: A\to B$ there exist unique maps $a : A \to S$ and $b: B\to T$ such that $ta=bf$. For convenience let's write such a morphism as $(a,b):f\to t$.

Suppose such a $t$ exists. Now for any two maps $u : W\to S$ and $v : W\to S$ we obtain morphisms $(u,tu): 1_W \to t$ and $(v,tv) : 1_W \to t$. What can we conclude about $(u,tu)$ and $(v,tv)$, and hence about $u$ and $v$? On the other hand, for any set $C$ there must be a morphism from $1_C$ to $t$ and hence a map from $C$ to $S$. What does this and the previous part tell us about $S$?

After that, let $(x,y) : 1_T\to t$ be the unique morphism and note that $(xt,y)$ is a morphism from $t$ to $t$ and hence must be $(1_S,1_T)$, meaning that $xt=1_S$ and $y=1_T$. This means that $txt=t$ and hence $(1_S,tx)$ is a morphism from $t$ to $t$. Why does this mean that $tx=1_T$? What can you conclude?
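Working the hints to their conclusion (my summary, not part of the original answer): the argument forces $S$ and $T$ to be one-point sets, so the terminal object is the identity on a singleton,

```latex
% Terminal object of Arrow(Set): limits in a functor category are
% computed pointwise, so the terminal arrow is id on the terminal set.
t \;=\; \bigl(1_{\{\ast\}} : \{\ast\} \to \{\ast\}\bigr),
\qquad
(\,!_A,\; !_B\,) : (f : A \to B) \longrightarrow t,
```

where $!_A$ denotes the unique map from $A$ to the singleton. This matches the pointwise-limit remark in the second comment above.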
https://infoscience.epfl.ch/record/154813 | Infoscience
Journal article
# Pump angle and laser energy dependence of stimulated scattering in microcavities
We show that stimulated polariton scattering in a semiconductor microcavity is observable over a wide range of laser pump angles. The ratio of idler to signal beams is found to vary strongly with angle, consistent with the expected variation of the photon fraction of the idler states, and the increasing probability of scattering of idler polaritons to the exciton reservoir with increasing wave-vector. The observed variation of threshold with angle is in good agreement with a semi-classical treatment of the parametric scattering process.
https://docs.exabyte.io/security/current-state/ | Last updated: Aug 1, 2018
# Introduction
We are often asked about the security of cloud-based solutions for storing mission-critical information since cloud computing is initially perceived as a threat. This document is a brief attempt to explain our position, demonstrate that at present cloud computing is more than capable of facilitating secure access to data and computation even for most critical applications, and direct the reader to the policies we deploy to assert security for the users of Exabyte platform.
How secure is cloud computing?
"Cloud computing is often far more secure than traditional computing, because companies like Google and Amazon can attract and retain cyber-security personnel of a higher quality than many governmental agencies."
Vivek Kundra, former federal CIO of the United States
# Concerns
Cloud computing revolutionized the way we store and interact with data in our daily work. We now take it as a must to have simultaneous access to information from many different devices scattered around the globe. Cloud service providers make it possible and seamless. The data stored by the cloud providers, however, is often of sensitive nature to businesses and research institutions. Moreover, such data rapidly increases in quantity. New security issues of practical importance appear as a result of this digital transformation [1-8]. These issues are related to the confidentiality, integrity, accessibility, and resilience to online piracy and malware attacks.
# Examples
## More secure?
There is a growing consensus among the global IT community that cloud computing has now become inherently more reliable and secure than traditional privately-owned data-centers in the same way that money is safer when mixed up with other people's money in a bank vault than sitting alone in one’s dresser drawer [9, 10]. The largest cloud service providers such as AWS, Microsoft Azure and the Google cloud platform (our references of choice at Exabyte.io) have the means and resources to invest billions in maintaining and improving top-notch IT security protocols and infrastructure. They do it in a way that would be cost-prohibitive even for the Global 500 companies and completely unaffordable for most small-to-medium sized enterprises and/or research labs.
## How is it possible?
For example, Microsoft has invested $1 billion and doubled its number of security executives over the course of the year 2015 alone. They also announced the launch of a new managed security services group and a new cyber defense operations center [11]. Similar policies are regularly being enacted by all other major cloud providers. Furthermore, providers have servers hosted in a variety of locations regionally and globally, which naturally preserves data better and more reliably than keeping it on-premises in a single location. Other advantages of clouds compared to traditional computing can be found in their superior performance and their lower operational costs, as was pointed out in a 2017 NASDAQ summary [12]. No wonder even the CIA now trusts Amazon Web Services to store some of its classified information! [13]
## OK, but not for our data, is it?
Chemical and Energy sectors have so far been reluctant to adopt cloud computing because of the associated security concerns, however, such companies are increasingly eager to explore the field as Accenture reported [14, 15]. We believe that in the future the situation will change, and we base our conclusions on the modern day state of the computer-aided design industry, where 4 out of the 5 largest companies now use cloud for their Research and Development efforts [16]. Pharmaceutical industrial sector, first reluctant to adopt cloud computing too, is now rapidly changing its stance [17]. Lastly, some of the recent updates from Microsoft Azure partnering with British Petroleum and Chevron in storing mission-critical data point to similar conclusions [18, 19].
# Our approach
We at Exabyte.io consider a confidential and secure experience for our customers our top priority. We use industry-standard strict security protocols, and the degree of data privacy and integrity protection we deliver has so far received universal appreciation by our users [20]. We are convinced that together we can extend the forefront of the cloud computing transformation currently unfolding [21], and apply it to accelerate the materials and chemical R&D.
We further discuss the analysis of cloud-computing related threats and our ways to mitigate them here.
1. "The Treacherous Twelve" Cloud Computing Top Threats in 2016; report by the Cloud Security Alliance (CSA); link
2. A. Backe and H. Lindén: "Cloud Computing Security: A Systematic Literature Review", Uppsala University (2015); link
3. N. Dahiya and S. Rani: "Cloud Computing Security: a Review", IJEDR | Volume 5, Issue 3 | ISSN: 2321-9939 (2017); link
4. P.S. Naidu and B. Bhagat: "Emphasis on Cloud Optimization and Security Gaps: A Literature Review"; Cybernetics and Information Technologies, Volume 17, No 3 (2017); link
5. M.F. Mushtaq et al.: "Cloud Computing Environment and Security Challenges: A Review"; (IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 8, No. 10 (2017); link
6. P.R. Kumar ,P.H. Raj and P. Jelciana: "Exploring Security Issues and Solutions in Cloud Computing Services – A Survey"; Cybernetics and Information Technologies, Volume 17, No 4 (2017); link
7. G. Ramachandra, M. Iftikhar and F.A. Khan: "A Comprehensive Survey on Security in Cloud Computing"; Procedia Computer Science 110 (2017) 465–472; link
8. Online repository of review papers on Cloud Security issues and solutions; link
9. D. Linthicum: "Clouds are more secure than traditional IT systems -- and here's why" (2014); link
10. "Cloud security: Why clouds are more secure than your own datacenter" (2016); link
11. K.J. Higgins: "Microsoft Invests $1 Billion In 'Holistic' Security Strategy" (2015); [link](https://www.darkreading.com/endpoint/microsoft-invests-$1-billion-in-holistic-security-strategy/d/d-id/1323170)
12. G. Pendse: "Cloud Computing: Industry Report & Investment Case", Nasdaq (2017); link
13. A. McLean: "CIA to continue cloud push in the name of national security" (2017); link
14. Accenture: "Chemical Companies’ Cloud Strategies: Current Adoption and Future Plans" (2014); link
15. Accenture: "A New Era for Energy Companies: Cloud computing changes the game" (2012); link
16. R. Maguire: "Cloud Computing: Shaping the Future of CAD" (2017); link
17. R. Mullin: "How the pharmaceutical research sector learned to stop worrying and love the cloud"; c&en, Volume 94, Issue 42 | pp. 26-30 (2016); link
18. V. Ho: "Chevron fuels digital transformation with new Microsoft partnership" (2017); link
19. Microsoft reporter: "BP selects Microsoft Azure for company-wide platform as part of its modernisation programme" (2017); link
20. Security section, Exabyte.io homepage; link
21. B. Darrow: "How These Fortune 500 Companies Are Moving to the Cloud" (2016); link
https://sourceforge.net/p/kile/mailman/attachment/7e39ff421002031454p476cfd2ax5607361486ace11a@mail.gmail.com/1/ | Hello All
I have been using Kile 2.03 on Ubuntu 9.10, since this is the latest stable version,
but I have problems with toolbar icons.
Whenever I want to add actions to the toolbar (math, extra or any type), the actions will be added without any icon.
For example, adding the \left( \right) delimiter action to the toolbar adds it without any icon.
If I try to add an icon from the KDialog box, it doesn't contain relevant icons.
This toolbar icons problem is not there in Kile 2.1x.
Can somebody hint me as to how to solve this problem?
Thanks
--
Amogh Rajanna
https://docs.nvidia.com/deeplearning/dali/user-guide/docs/operations/nvidia.dali.fn.crop.html | # nvidia.dali.fn.crop¶
nvidia.dali.fn.crop(*inputs, **kwargs)
Crops the images with the specified window dimensions and window position (upper left corner).
This operator allows sequence inputs and supports volumetric data.
Supported backends
• ‘cpu’
• ‘gpu’
Parameters
input (TensorList ('HWC', 'CHW', 'DHWC', 'CDHW', 'FHWC', 'FCHW', 'CFHW', 'FDHWC', 'FCDHW', 'CFDHW')) – Input to the operator.
Keyword Arguments
• bytes_per_sample_hint (int or list of int, optional, default = [0]) –
Output size hint, in bytes per sample.
If specified, the operator’s outputs residing in GPU or page-locked host memory will be preallocated to accommodate a batch of samples of this size.
• crop (float or list of float or TensorList of float, optional) –
Shape of the cropped image, specified as a list of values (for example, (crop_H, crop_W) for the 2D crop and (crop_D, crop_H, crop_W) for the volumetric crop).
Providing crop argument is incompatible with providing separate arguments such as crop_d, crop_h, and crop_w.
• crop_d (float or TensorList of float, optional, default = 0.0) –
Applies only to volumetric inputs; cropping window depth (in voxels).
crop_w, crop_h, and crop_d must be specified together. Providing values for crop_w, crop_h, and crop_d is incompatible with providing the fixed crop window dimensions (argument crop).
• crop_h (float or TensorList of float, optional, default = 0.0) –
Cropping the window height (in pixels).
Providing values for crop_w and crop_h is incompatible with providing fixed crop window dimensions (argument crop).
• crop_pos_x (float or TensorList of float, optional, default = 0.5) –
Normalized (0.0 - 1.0) horizontal position of the cropping window (upper left corner).
The actual position is calculated as crop_x = crop_x_norm * (W - crop_W), where crop_x_norm is the normalized position, W is the width of the image, and crop_W is the width of the cropping window.
See rounding argument for more details on how crop_x is converted to an integral value.
• crop_pos_y (float or TensorList of float, optional, default = 0.5) –
Normalized (0.0 - 1.0) vertical position of the start of the cropping window (typically, the upper left corner).
The actual position is calculated as crop_y = crop_y_norm * (H - crop_H), where crop_y_norm is the normalized position, H is the height of the image, and crop_H is the height of the cropping window.
See rounding argument for more details on how crop_y is converted to an integral value.
• crop_pos_z (float or TensorList of float, optional, default = 0.5) –
Applies only to volumetric inputs.
Normalized (0.0 - 1.0) normal position of the cropping window (front plane). The actual position is calculated as crop_z = crop_z_norm * (D - crop_D), where crop_z_norm is the normalized position, D is the depth of the image and crop_D is the depth of the cropping window.
See rounding argument for more details on how crop_z is converted to an integral value.
• crop_w (float or TensorList of float, optional, default = 0.0) –
Cropping window width (in pixels).
Providing values for crop_w and crop_h is incompatible with providing fixed crop window dimensions (argument crop).
• dtype (nvidia.dali.types.DALIDataType, optional) –
Output data type.
Supported types: FLOAT, FLOAT16, and UINT8.
If not set, the input type is used.
• fill_values (float or list of float, optional, default = [0.0]) –
Determines padding values and is only relevant if out_of_bounds_policy is set to “pad”.
If a scalar value is provided, it will be used for all the channels. If multiple values are provided, the number of values and channels must be identical (extent of dimension C in the layout) in the output slice.
• image_type (nvidia.dali.types.DALIImageType) –
Warning
The argument image_type is no longer used and will be removed in a future release.
• out_of_bounds_policy (str, optional, default = ‘error’) –
Determines the policy when slicing the out of bounds area of the input.
Here is a list of the supported values:
• "error" (default): Attempting to slice outside of the bounds of the input will produce an error.
• "pad": The input will be padded as needed with zeros or any other value that is specified with the fill_values argument.
• "trim_to_shape": The slice window will be cut to the bounds of the input.
• preserve (bool, optional, default = False) – Prevents the operator from being removed from the graph even if its outputs are not used.
• rounding (str, optional, default = ‘round’) –
Determines the rounding function used to convert the starting coordinate of the window to an integral value (see crop_pos_x, crop_pos_y, crop_pos_z).
Possible values are:
• "round" - Rounds to the nearest integer value, with halfway cases rounded away from zero.
• "truncate" - Discards the fractional part of the number (truncates towards zero).
• seed (int, optional, default = -1) –
Random seed.
If not provided, it will be populated based on the global seed of the pipeline.
• output_dtype (nvidia.dali.types.DALIDataType) –
Warning
The argument output_dtype is a deprecated alias for dtype. Use dtype instead.
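The window-position arithmetic described under crop_pos_x / crop_pos_y above can be mimicked outside DALI. A plain-Python sketch of the documented formula, illustrative only — the helper names are mine, and real DALI crops happen inside a pipeline on TensorLists:

```python
import math

def crop_anchor(extent, crop_extent, pos_norm, rounding="round"):
    """Start coordinate of the crop window along one axis:
    anchor = pos_norm * (extent - crop_extent), then made integral
    per the documented `rounding` argument."""
    anchor = pos_norm * (extent - crop_extent)
    if rounding == "round":
        # nearest integer, halfway cases rounded away from zero
        if anchor >= 0:
            return int(math.floor(anchor + 0.5))
        return -int(math.floor(-anchor + 0.5))
    elif rounding == "truncate":
        return int(anchor)  # discard the fractional part (toward zero)
    raise ValueError(rounding)

def crop_2d(image, crop_h, crop_w, pos_y=0.5, pos_x=0.5):
    """Crop a 2-D list-of-lists 'image' the way fn.crop positions its window."""
    h, w = len(image), len(image[0])
    y0 = crop_anchor(h, crop_h, pos_y)
    x0 = crop_anchor(w, crop_w, pos_x)
    return [row[x0:x0 + crop_w] for row in image[y0:y0 + crop_h]]

img = [[r * 10 + c for c in range(6)] for r in range(4)]  # 4x6 test "image"
window = crop_2d(img, crop_h=2, crop_w=2)  # default pos 0.5 -> centered window
```

With the default crop_pos of 0.5 the window is centered: for the 4×6 input above, y0 = round(0.5·(4−2)) = 1 and x0 = round(0.5·(6−2)) = 2, matching the formulas in the parameter descriptions.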
https://gmachine1729.com/2017/10/09/composition-series/ | Composition series
My friend, after some time in industry, is back in school, currently taking graduate algebra. Today I was looking at one of his homework sets and, in particular, thought about and worked out one of the problems: proving the uniqueness part of the Jordan-Hölder theorem. Formally, if $G$ is a finite group and
$1 = N_0 \trianglelefteq N_1 \trianglelefteq \cdots \trianglelefteq N_r = G$ and $1 = N_0' \trianglelefteq N_1' \trianglelefteq \cdots \trianglelefteq N_s' = G$
are composition series of $G$, then $r = s$ and there exists $\sigma \in S_r$ with isomorphisms $N_{i+1} / N_i \cong N'_{\sigma(i)+1} / N'_{\sigma(i)}$ for all $i$.
Suppose WLOG that $s \geq r$ and as a base case $s = 2$. Then clearly $s = r$, and if $N_1 \neq N_1'$, then $N_1 \cap N_1' = 1$ (it is a proper normal subgroup of the simple group $N_1$). $N_1 N_1' = G$ must hold as it is normal in $G$ and properly contains the maximal normal subgroup $N_1$. Now, remember there is a theorem which states that if $H, K$ are normal subgroups of $G = HK$ with $H \cap K = 1$, then $G \cong H \times K$. (This follows from $(hkh^{-1})k^{-1} = h(kh^{-1}k^{-1})$, which shows the commutator to be the identity.) Since $G \cong N_1 \times N_1'$, we get $G / N_1 \cong N_1'$ and $G / N_1' \cong N_1$, so the two series have the same factors up to order.
For the inductive step, take $H = N_{r-1} \cap N_{s-1}'$. If $N_{r-1} = N_{s-1}'$ the inductive hypothesis applies directly, so assume otherwise; then $N_{r-1} N_{s-1}' = G$ and, by the second isomorphism theorem, $N_{r-1} / H \cong G / N_{s-1}'$. Take any composition series for $H$ to construct another for $G$ via $N_{r-1}$. This shows, on application of the inductive hypothesis, that $r = s$. One can do the same for $N_{s-1}'$. With both our composition series linked to two intermediary ones that differ only between $G$ and the common $H$, with factors swapped in between those two, our induction proof completes.
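A concrete illustration (mine, not from the post): $\mathbb{Z}/12$ has several composition series, but the Jordan–Hölder factors agree up to permutation, exactly as the theorem predicts:

```latex
0 \;\trianglelefteq\; \langle 6 \rangle \;\trianglelefteq\; \langle 3 \rangle
  \;\trianglelefteq\; \mathbb{Z}/12
\qquad \text{factors } C_2,\; C_2,\; C_3,
\\[4pt]
0 \;\trianglelefteq\; \langle 4 \rangle \;\trianglelefteq\; \langle 2 \rangle
  \;\trianglelefteq\; \mathbb{Z}/12
\qquad \text{factors } C_3,\; C_2,\; C_2.
```

Both series have factor multiset $\{C_2, C_2, C_3\}$, even though the factors appear in different orders.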
http://link.springer.com/article/10.1007%2FBF02535991 | Lipids
Volume 28, Issue 8, pp 709–713
# Bile acid and very low density lipoprotein production by cultured hepatocytes from hypo- or hyperresponsive rabbits fed cholesterol
• Evgeniy A. Podrez
• Yuri V. Lakeev
• Evgeniy I. Kosenkov
• Elvira T. Mambetisaeva
• Tatu A. Miettinen
Article
DOI: 10.1007/BF02535991
Podrez, E.A., Kosykh, V.A., Lakeev, Y.V. et al. Lipids (1993) 28: 709. doi:10.1007/BF02535991
## Abstract
Two groups of rabbits, either hyperresponsive or hyporesponsive to dietary cholesterol, were selected after ten weeks of cholesterol feeding (0.2 g cholesterol/kg body weight per day). Bile acid and very low density lipoprotein (VLDL) production were determined in primary hepatocyte cultures from control, hyper- and hyporesponsive rabbits. Free cholesterol and cholesteryl ester contents in hepatocytes of the hyperresponsive rabbits were significantly increased. In contrast, lipid composition in hepatocytes of the hyporesponders was similar to that of control cells. Cholic acid was the predominant bile acid in the culture medium of hepatocytes, together with small amounts of chenodeoxycholic and deoxycholic acids. The rate of cholic acid production by hepatocytes in the hyporesponsive group was two times higher than that in the hyperresponsive group. Bile acid production by control hepatocytes was slightly higher than in the hyperresponsive group. In contrast, secretion of VLDL cholesteryl ester was significantly increased in hepatocytes of the hyperresponsive rabbits. Similar differences in bile acid production were found between hypo- and hyperresponsive rabbits selected after five days of cholesterol feeding and subsequent maintenance on a low cholesterol diet for a period of one month. The results suggest that the increased rate of bile acid production could contribute to the apparent resistance of hyporesponders to the atherogenic diet.
### Abbreviations
B/E
apoprotein B and apoprotein E
FBS
fetal bovine serum
GLC
gas-liquid chromatography
HPTLC
high-performance thin-layer chromatography
MEM
Eagle's minimum essential medium
VLDL
very low density lipoprotein
© American Oil Chemists’ Society 1993
https://wiki.math.wisc.edu/index.php?title=Past_Probability_Seminars_Spring_2020&oldid=11109 | # Spring 2015
Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted.
## Thursday, January 28, Leonid Petrov, University of Virginia
Title: The quantum integrable particle system on the line
I will discuss the higher spin six vertex model - an interacting particle system on the discrete 1d line in the Kardar--Parisi--Zhang universality class. Observables of this system admit explicit contour integral expressions which degenerate to many known formulas of such type for other integrable systems on the line in the KPZ class, including the stochastic six vertex model, ASEP, various $q$-TASEPs, and associated zero range processes. The structure of the higher spin six vertex model (leading to contour integral formulas for observables) is based on Cauchy summation identities for certain symmetric rational functions, which in turn can be traced back to the sl2 Yang--Baxter equation. This framework allows one to also include space and spin inhomogeneities into the picture, which leads to new particle systems with unusual phase transitions.
## Thursday, February 4, Irina Nenciu, UIC, Joint Probability and Analysis Seminar
Title: On some concrete criteria for quantum and stochastic confinement
Abstract: In this talk we will present several recent results on criteria ensuring the confinement of a quantum or a stochastic particle to a bounded domain in $\mathbb{R}^n$. These criteria are given in terms of explicit growth and/or decay rates for the diffusion matrix and the drift potential close to the boundary of the domain. As an application of the general method, we will discuss several cases, including some where the background Riemannian manifold (induced by the diffusion matrix) is geodesically incomplete. These results are part of an ongoing joint project with G. Nenciu (IMAR, Bucharest, Romania).
## Friday, February 5, Daniele Cappelletti speaks in the Applied Math Seminar, 2:25pm in Room 901
Note: Daniele Cappelletti is speaking in the Applied Math Seminar, but his research on stochastic reaction networks uses probability theory and is related to work of our own David Anderson.
https://crunchingnumbers.live/2019/10/11/write-tests-like-a-mathematician-part-3/ | # Write Tests Like a Mathematician: Part 3
In the last two blog posts, you learned that Ember treats testing as a first-class citizen. Out of the box, Ember provides 3 types of tests so that you can fine-tune test coverage and performance. It also supports a variety of addons and debugging tools to improve your developer experience in testing.
Today, we address an important question: How should you write tests? By the end of this post, you will learn 5 simple rules that I like to follow. The rules aren’t do-this-or-do-that’s (cold hard facts). Instead, they carry nuance and interesting side stories. To keep your learning experience fun, I will transcribe my talk at EmberFest 2019 (rather than summarizing it) to engage in a dialogue with you.
# 0. Introduction
Hello. How is everyone doing? Show thumbs-up if you’re doing awesome, thumbs-down if you’re jet lagged and disoriented, and thumbs-middle if you’re just fine.
That’s what my yoga instructor likes to do before class: thumbs-up, thumbs-down, or so-so. It’s a good exercise for your wrists.
## a. Who Am I
My name is Isaac. I have been an Ember developer for two years. I’m still very new to web development.
Show of hands. How many of you are also new to Ember—two years or less? Please, give them a round of applause for taking time to attend EmberFest and hone their skills. Once again, show of hands please. How many of you are mentors to new developers? This time, I thank you from the bottom of my heart. I believe it’s all of you—the mentors and the new developers—who will keep Ember strong and innovative as we move to the year 2020. That’s just in two months. Can you believe that?
## b. Overview
Today, I’ll talk about two things that I love with passion: mathematics and writing tests. It’s kinda funny. As an applied mathematician, I studied fracture mechanics. It’s the study of how things break. Now, as a web developer, I’m still studying how things break—especially on IE 11 and Edge, the banes of our society.
Joking aside, I want to talk about writing tests especially for developers who are new to Ember like me, and for the mentors who are guiding them. I believe Ember gives new developers the power to feel productive—the power to feel good about themselves after work—from day one.
As a new developer, one of the things that you can do is to poke around the app and describe what you saw. That’s exactly what writing tests is. The only question that remains is, how should you write tests? Are there best practices that you can follow?
I’m here to tell you, the answer is yes. Every one of you has the power to write tests like a mathematician. By the end of this talk, you will learn 5 simple rules that I like to follow when I write tests:
1. If, if, if
2. Use common, everyday words
3. Write less with theorems and new terms
4. All your basis are belong to us (Hmm, what does that mean?)
5. 1 picture = 1000 words (“A picture is worth a thousand words.”)
## c. Motivation
I came up with these rules based on my experience of writing proofs in college and graduate school. See, when I was in college, I wrote proofs that were horrendous—an abomination. I made plain-wrong assumptions; the logic didn’t follow; and oftentimes, my proof worked for some cases but not all like it’s supposed to. (Does that sound like you and your tests?) In college, I actually hated writing proofs, and sometimes I hated myself, because I wasn’t good at it like all the other mathematicians.
It wasn’t until I entered graduate school, when I had to study countless proofs day and night, and had to write many in return, that I began to see patterns—best practices, if you will—for writing proofs. Once I saw these patterns, each month, I got better at writing proofs, and soon, I was even able to teach others how to write theirs.
When I look back upon this experience, I wish I had known sooner in college how to write proofs because I’m sure I would have been fantastic at it. I wish I had had someone—a mentor—to show me how.
That’s why I give this talk today to you. If you are a new developer, who perhaps feels inadequate when writing tests, I want you to know that you are not alone in how you feel—I was like you—that you can get better with practice like I did. If you are a mentor, I want you to think about what are some best practices that you can teach and how your mentees might feel when they write their tests.
# 1. If, If, If
With that said, let’s get the show started! Rule number one—If, If, If—is about making good assumptions.
## a. Motivation
When mathematicians write proofs—you will be surprised—they are actually telling you a story. They rely on conditional statements, also known as if-then statements, to describe what happens to a character in the beginning and what happens to them at the end. If-then statements allow mathematicians to connect different scenes to tell one cohesive story.
I think it’s a powerful thought, that writing tests is nothing more than telling someone a story. When the user visited the signup page, they saw a form that was majestic. The user filled out the form with ease and the submit button was…enabled. The user clicked on submit and they were redirected to—whoa!—dashboard. They are now on level 2, hoping to find the princess in yet another castle. (Wouldn’t it be awesome if your tests spoke to you like that?)
## b. What to Watch Out for
Here’s where things go wrong. To illustrate, I will share a personal fact. Yesterday was my birthday. I turned 31 and I’m this close to a mid-life crisis.
To cheer me up, I was hoping that we could do something special. We have about 200 people here. Mathematics tells us, there should be about 16 other people born in October like me. Raise your hands, October people.
Now, the rest of you, will you please sing Happy Birthday for me and these wonderful friends of yours whom you haven’t met until yesterday? I promise you, afterwards, you will feel good about yourself. Are you ready? On the count of three…
In the beginning, I mentioned that rule #1 is about making good assumptions. On the surface, this test worked perfectly. I got personal, you sang Happy Birthday, and now you are happy (I hope).
Well, I lied. Yesterday wasn’t my birthday. (Of course not. That’s a 1 out of 365 chance.) I am 31, though, and I may or may not be close to a mid-life crisis. As you can see, not all assumptions were correct, yet you were able to sing Happy Birthday and feel happy.
This example, although it’s contrived, illustrates a problem that you will often face. You have to write assertions based on assumptions, but in reality—when your tests are running—not all of your assumptions may have happened. To make matters worse, the reason for that may not be obvious, especially if you are new to Ember.
## c. Making Wrong Assumptions
In my experience, there are 5 things that cause me to make wrong assumptions. I’ll explain what they are and how Ember can help us solve these problems.
Keep in mind that these reasons are not at all obvious and deal with advanced topics such as state transitions. Don’t feel overwhelmed if any of these are new. What I want you to see is that, when your tests fail, don’t be discouraged. They are an opportunity to pause and ask yourself: Are my assumptions correct?
First, there are observers and computed properties. As your app grows, it becomes hard to ensure that these are triggered at the right time and all of your dependencies are listed and up-to-date. Things are better in Octane, thanks to async observers and tracked properties. Still, managing state is not an easy task and requires thoughtful design.
Second is data. The form builder that I use relies on observers and two-way bindings. As a result, the data that I expect to have and what actually exists in Ember Data store can be out of sync if I am not careful. In general, when you’re designing components, please use Data Down, Actions Up.
Third is unsettled state. Ember’s tests run synchronously, one assertion after another, but Ember’s test helpers aren’t aware of all async operations that happen in your app. If you animate elements or read files—things that take time—you will want to make sure that your app has settled before making assertions. You can use Ember’s waitFor and waitUntil helpers to do so.
Fourth is leaky tests. They are a headache. Ideally, each of your tests runs in an isolated environment. Unfortunately, the assumptions made in one test can sneak into another if you’re not careful with asyncs, global variables, mocks, and stubs. As a result, whether one test passes starts to depend on another. To my knowledge, the only solution here is prevention. Ember QUnit can find async leakage, while Ember Sinon can prevent stub leakage. You can also use Ember Exam to randomize test order to identify leaks early.
Last but not least is permissions. As developers, we are used to writing code as an admin because we get to see all parts of the application. As a result, we may end up biasing our tests thinking that the user does also. Please write tests for non-admins too. You can use nested modules to easily set up and group tests for admins and tests for non-admins.
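To make the "unsettled state" point concrete, here is a stripped-down model of what a polling wait helper does. The real `waitUntil` lives in `@ember/test-helpers`; this plain-JavaScript sketch only illustrates the idea of polling a condition until the app settles, with option names chosen to mirror that helper.

```javascript
// Simplified model of a `waitUntil`-style helper: poll a condition until it
// returns a truthy value, or reject after a timeout. Not Ember's actual
// implementation -- just the polling idea.
function waitUntil(condition, { timeout = 1000, interval = 10 } = {}) {
  return new Promise((resolve, reject) => {
    const start = Date.now();
    const tick = () => {
      const value = condition();
      if (value) {
        resolve(value);                       // the app has settled
      } else if (Date.now() - start >= timeout) {
        reject(new Error('waitUntil timed out'));
      } else {
        setTimeout(tick, interval);           // check again shortly
      }
    };
    tick();
  });
}
```

In a test you would `await` such a helper before asserting, so the assertion runs only after the async work has finished.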
# 2. Use Common, Everyday Words
Whew, that was a lot. Let’s all take a deep breath.
I’ve been doing yoga since last December. In yoga, people practice ujjayi breath. You close your mouth and make a guttural sound in your throat like this. I swear, you will find it soothing every time except if you’re on a video call, forget to turn off your mic, and everyone hears you, heaving. That happened to me in a meeting with the Ember Learning Team and the person who had to point out was Tom Dale. Talk about first impressions. (He was nice about it.)
## a. Motivation
Speaking of video calls, rule number two is about how to communicate with others. Remember that, by writing tests, you are telling someone a story. You want to make sure that the other person understands you easily.
You can do so by using common, everyday words. Here, common means convention; it’s something many people agreed to. Everyday means simple; something that many are familiar with.
## b. Everyday
When mathematicians write proofs, they use everyday words, such as continuous and zero, to tell their story. You will be surprised that proofs aren’t always about numbers and complex equations. There are words. There are pictures. There’s always a story.
Just like mathematicians, you can write tests with words that you use in your daily life. Thanks to addons like Ember Test Selectors, Ember CLI Page Object, QUnit DOM, and Chai DOM, your tests can be easily understood by other developers.
## c. Common
Mathematicians also follow a common language. For example, the letter $f$ often stands for a function, while brackets and parentheses indicate the domain of $f$—the input of the function. Thanks to standard notations, mathematicians easily understand each other, no matter where they come from.
When writing tests, I recommend creating a standard that is both easy to remember and easy to type. For example, if you use Ember Test Selectors, always refer to links by data-test-link and buttons by data-test-button. To check data, you can use the name data-test-field in both edit and view modes.
You may have noticed that I’ve been using the label or ARIA label to identify data-test tags. This practice of identifying elements by what’s visible on the screen nicely complements accessibility-driven testing. This is something that addons like Semantic Test Helpers are trying to solve.
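One cheap way to keep such a convention consistent is a tiny, hypothetical helper that builds the selector string from the element kind and its visible label (the `data-test-*` names here just follow the convention sketched above):

```javascript
// Build a selector for the team convention "data-test-<kind> = visible label".
// `kind` might be "link", "button", or "field"; `label` is the on-screen text.
function testSelector(kind, label) {
  return `[data-test-${kind}="${label}"]`;
}

// A QUnit DOM assertion would then read, for example:
//   assert.dom(testSelector('button', 'Save')).isEnabled();
```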
# 3. Write Less with Theorems and New Terms
## a. Motivation
Mathematicians are very much like developers. They love writing proofs that are short and DRY (don’t repeat yourself).
## b. Theorems
Remember this statement? Mathematicians can prove this with a single line, “Use the Intermediate Value Theorem.” If they are really ambitious, they will just write “IVT” and call it a day!
It’s absurd but a powerful idea. You can write short tests if you know that a part of your test is proven already. Short tests mean less code that you have to maintain later. That’s a good thing. Mathematicians call what’s already proven a theorem. We developers call it a test helper.
Thanks to Ember’s new testing API, writing a test helper is simple. You write an ordinary function—async, in this case—to describe what the user does, then you call the function in your test. Think of writing test helpers as extracting a part of your test code for reusability.
Another test helper that I use every day is called authenticate. It assigns the user the right permissions in my test. Given these two examples, can you now think of ways to refactor your test code?
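As a sketch of what such a helper looks like, here is a hypothetical `signIn` helper. In an Ember app the steps would be `visit`, `fillIn`, and `click` from `@ember/test-helpers`; they are injected as an argument here so the sketch stays framework-free and easy to unit-test.

```javascript
// Reusable test helper: describe what the user does, once, then call it
// from any test. The `actions` object stands in for Ember's test helpers.
async function signIn(actions, { username, password }) {
  await actions.visit('/login');
  await actions.fillIn('[data-test-field="Username"]', username);
  await actions.fillIn('[data-test-field="Password"]', password);
  await actions.click('[data-test-button="Sign in"]');
}
```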
## c. New Terms
There is another way for mathematicians to write less. It’s to create a new term to describe an idea that happens over and over, in different contexts.
For example, you will see continuous functions everywhere—in linear algebra, calculus, topology, optimization, and machine learning. To avoid having to write continuous each time, mathematicians introduced the C-notation. Now, they can write less yet precisely tell their stories.
When you write tests, you will find assertions that happen over and over, but QUnit DOM or Chai DOM doesn’t provide the API out of the box. It simply can’t satisfy all of your business logic. But not to worry. Just like mathematicians who create new terms, you can create custom assertions.
A custom assertion, like test helpers, is just a function. You can define it however you’d like. Here, the assertion isEnabled hides the complexity of checking a button’s state. It’s a simple example. You can definitely get more ambitious by checking complex data structures such as tables and D3 visualizations—arrays and trees.
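Here is a minimal stand-alone model of such a custom assertion, with hypothetical names. In a real suite you would attach it to `QUnit.assert` and read the DOM; the object passed to `pushResult` follows QUnit's documented shape (`result`, `actual`, `expected`, `message`).

```javascript
// isEnabled: report whether an element is enabled, with a default message.
// `assert` stands in for QUnit's assert object; `element` for a DOM node
// (only its `disabled` property is consulted in this sketch).
function isEnabled(assert, element, message) {
  const enabled = !element.disabled;
  assert.pushResult({
    result: enabled,
    actual: enabled,
    expected: true,
    message: message || 'element is enabled',
  });
}
```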
By creating good abstractions, you get to enjoy two benefits. One, you can write short, meaningful tests. (Again, short means maintainable.) Two, others can write similar tests without having to know the technical details on day one. They get to be productive thanks to you.
# 4. All Your Basis Are Belong to Us
## a. Motivation
When mathematicians write proofs, they have to cover all possible cases. Often, that’s an infinite amount. With tests, however, you can’t cover infinite cases. Your tests would run forever!
Instead, you can find a basis to guarantee that your app works almost always, in the fewest number of tests possible.
## b. Basis
In math, a basis means independent vectors that span the entire space. The key words are independent and span. Span means “to cover” (like test coverage). You probably haven’t heard of the term, basis, but I bet that you’re already familiar with the idea.
Latitude and longitude are an example of a basis. These coordinates are independent and, for every combination, you can find a unique location in space, which is infinite. For example, $(55.676^{\circ}, 12.568^{\circ})$ takes you to Copenhagen and $(30.267^{\circ}, -97.743^{\circ})$ to Austin, where I come from. Just two numbers—three, counting elevation—to describe the infinite space.
RGB scheme is another example. There are infinitely many colors but each is a unique combination of red, green, and blue—three numbers.
For us, a basis means finding a small number of tests that can be considered building blocks. If we prove that our building blocks work, then mathematics tells us, our app works no matter what the user does.
## c. Examples
Let’s make this idea of a basis more concrete by looking at 3 examples.
Suppose you’re designing this page where the user sees data and can take 5 different actions: Create, Edit, Delete, Clone, and Import. You want to prove that the table will always show the right data.
The problem is, there are infinitely many sequences of actions that the user can take. For example, they can create a record, then delete another. They can do import, edit, then clone. These are just two examples out of infinitely many. There’s no way you can write a test for each and every possible sequence.
What if, instead, you guarantee that each action returns the user to the right state? When the user visits the page, everything works and they are happy. When the user takes an action, you make sure that they return to the happy state where everything works. That way, when the user exits the page, everything is correct.
The beauty of this approach is that, now, you only have to consider 5 test cases—one for each action. You can be confident that your app is in the right state when the user visits and exits the page.
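The idea can be sketched as a loop over the action basis: run each action once from the happy state and check that a page invariant still holds. The actions and the invariant below are deliberately toy stand-ins for a real page's behavior.

```javascript
// Toy model: each action maps the table's rows to new rows.
const actions = {
  create: (rows) => [...rows, { id: rows.length + 1 }],
  clone:  (rows) => [...rows, { ...rows[0] }],
  remove: (rows) => rows.slice(1),
};

// Hypothetical invariant for the "happy state": the table is always an
// array of row objects (a real app would assert something richer).
function invariantHolds(rows) {
  return Array.isArray(rows) && rows.every((r) => typeof r === 'object');
}

// One test per action instead of infinitely many sequences of actions.
function checkBasis(initialRows) {
  return Object.values(actions).every((act) => invariantHolds(act(initialRows)));
}
```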
Example two. Here is a component that I created to visualize LDAP filters. You can enter a filter to understand its logical structure and edit the expression with confidence. I can also find errors and help you fix them.
There are infinitely many expressions that an LDAP filter can take, but you can always break it down into these four basic types and three operations. 7 vectors to check.
The last example is a D3 component for visualizing a DSL (domain-specific language). You can interact with this graph to write your code in reverse.
Again, there are infinitely many programs that you can write, but the graph always boils down to a few basic types. It’s how I know that my D3 visualization is correct.
# 5. 1 Picture = 1000 Words
Up till now, you learned several strategies to level up your testing. You have to make good assumptions, choose the right words, write short tests, and break down large problems into small ones.
I have to ask: Are you as overwhelmed as I? I simply had to read my script; you actually had to listen and learn, which is infinitely worse!
The final rule, in yoga terms, is shavasana. It’s a moment to wind down, reflect on your journey, and wake up re-energized.
## a. Motivation
One tragic thing about proofs and tests is that they ask us to be perfect. The things that we build are ambitious and complex, but if we make one mistake, a proof gets rejected by the community while an app faces vocal customers. It’s a terrible feeling to have, to constantly worry whether our tests and our app are good enough.
The last rule that I want to share asks you to let go of your worries and be okay with not being perfect. What you go for instead is incremental change. Each day—each sprint, even—you work on writing better tests. The most important thing above all is to have fun—to enjoy what you do.
## b. Start out Simple
Suppose you are designing a component that is complex and you’re not sure how to write tests for it. Start out with the most simple test: assert.ok(true). Even this one line offers valuable information—namely, that your component won’t blow up just by existing in your app.
Similarly, if you are working on a complex page, you don’t have to write thorough application tests right away. You can use Percy or Backstop to take visual snapshots while you gradually introduce written tests.
It feels counterintuitive, right? That visual snapshots are as good as written tests?
## c. Proof-by-Pictures
Let’s take a look what mathematicians do one last time. I bet that, before, this meant very little to you because you didn’t understand the words. Am I right?
Here’s the same statement in drawing. From the picture, it’s clear that, if my function is continuous and is positive on one end and negative on the other, it must equal to zero somewhere in-between. That’s all the statement said!
Furthermore, from this picture, we can imagine a powerful technique called bisection method. It’s how Git bisect works!
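The bisection method reads straight off the picture: keep the half-interval that still contains the sign change. A minimal sketch (parameter names are mine):

```javascript
// Find a zero of a continuous f that changes sign on [a, b] by repeatedly
// halving the interval that still contains the sign change.
function bisect(f, a, b, tolerance = 1e-10) {
  if (f(a) * f(b) > 0) throw new Error('f must change sign on [a, b]');
  while (b - a > tolerance) {
    const mid = (a + b) / 2;
    if (f(a) * f(mid) <= 0) {
      b = mid;  // the sign change is in the left half
    } else {
      a = mid;  // the sign change is in the right half
    }
  }
  return (a + b) / 2;
}
```

`git bisect` runs the same loop over commits, with "the test passes/fails" playing the role of the sign of f.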
Let me show you one more example to illustrate that pictures are powerful. I will prove to you that there are as many numbers between 0 and 1 (this small little piece) as there are between $-\infty$ and $\infty$ (all numbers). It’s a mind-blowing fact about infinity, that a part can have the same size as the whole.
This green line represents all numbers between 0 and 1. Each number is a point on the line.
From the center, I’m going to draw a semicircle.
These dotted lines show that there are as many points on the green line as there are on the blue line. Do you agree with that?
From the center to each point on the circle, I can draw a straight line and stop at the bottom.
Just like before, these dotted lines show that there are as many points on the blue line as there are on the bottom green line. Notice that the bottom green line goes to infinity in both directions.
Therefore, you can find as many numbers between 0 and 1 as you can between $-\infty$ and $\infty$. How’s that for a slice of fried gold?
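The geometric argument corresponds to an explicit formula. One standard bijection between $(0, 1)$ and all the real numbers uses the tangent function (the semicircle construction above yields a different but equally valid pair of maps):

```javascript
// x in (0, 1)  <->  y in (-infinity, infinity), via the tangent function.
const toReals = (x) => Math.tan(Math.PI * (x - 0.5));
const toUnit  = (y) => Math.atan(y) / Math.PI + 0.5;
// Round-tripping recovers the original point, which is exactly what a
// bijection promises: each number in (0, 1) pairs with one real number.
```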
# 6. Conclusion
Let’s go over the rules once more.
1. Writing tests is nothing more than telling someone a story. You describe what happens to a character in the beginning and what happens to them at the end. Please check whether your assumptions are met in your tests.
2. To tell stories effectively, you will want to use words that others can easily understand. Come up with a convention that you and your team would love to use every day.
3. Thanks to Ember’s new testing API, you can create test helpers and custom assertions to write less code. You can write short, meaningful tests and help others be productive.
4. When you face a large problem, see if you can break it down into smaller ones. In particular, if you find independent vectors that span the entire problem, you can guarantee that your app works almost always.
5. Finally, it’s okay not to be perfect. Be happy with who you are today and whom you can be tomorrow. The most important thing above all is to have fun—to enjoy writing tests.
It’s time for me to start running and say goodbye for a little while, and I know you’re going to miss me so I’ll leave you with this. No, not an MCR song. A quote that I love most dearly.
To me, this quote is about continuous learning and not giving up when things are hard. It goes like this: “We shall not cease from exploration / and the end of all our exploring / will be to arrive where we started / and know the place for the first time.”
I thank you very much for listening. If you’d like to know more about me, please reach out on LinkedIn and Discord.
# Notes
“All Your Basis Are Belong to Us” is a reference to a video game. Basis is singular, just like base.
I said that a basis guarantees correctness almost always (theory suggests always) because how we write tests for the building blocks still matters. If we write bad ones, this rule won’t help much.
You can find my presentation slides here (Keynote, PDF).
http://www.perimeterinstitute.ca/people/research-area/mathematical-physics | # Mathematical Physics
Mathematical physics is a research area where novel mathematical techniques are invented to tackle problems in physics, and where novel mathematical ideas find an elegant physical realization.
### Freddy Cachazo
Gluskin Sheff Freeman Dyson Chair in Theoretical Physics at Perimeter Institute
### Kevin Costello
Krembil William Rowan Hamilton Chair in Theoretical Physics at Perimeter Institute
### Davide Gaiotto
Krembil Galileo Galilei Chair in Theoretical Physics at Perimeter Institute
### Pedro Vieira
The Clay Riddell Paul Dirac Chair in Theoretical Physics at Perimeter Institute
### Ben Webster
Representation Theory, Low-dimensional Topology
### Michael Jarret
Spectral Graph Theory
### Kostiantyn Tolmachov
Representation Theory
### Victor Py
https://proofwiki.org/wiki/User:Barto | # User:Barto
Random caveat: Any perceived negative feelings in my communication are misinterpreted enthusiasm, partly due to poor word choice from a limited vocabulary. I try not to let my mind trick me into making the same mistake! :)
My mission at ProofWiki: Introducing rigor where it's missing, choosing better page names, looking for ways to expand the community, refactoring, connecting pages with relevant links in the also see section, or by organizing pages using transclusion.
My philosophy:
• Transparency. Every proof of more than ~ 20 lines should probably be preceded by an Outline of Proof. The level of detail at ProofWiki is otherwise way too high to quickly understand what's going on. Just a very simple overview may make it 10 times as easy to read the proof, no exaggeration. $\mathsf{Pr} \infty \mathsf{fWiki}$ can be a very qualitative and even more unique source with both outlines and fully detailed proofs.
• Coherence. When writing a proof, look if part of it has already been proved elsewhere. If there are two proofs doing the same reasoning to arrive at an intermediate result; consider placing that result in a separate page (either as Theorem or Lemma). Long proofs are difficult to read, especially in the style they are presented at ProofWiki. Linking to other articles allows, if this makes the proof shorter, for a global view and quicker understanding.
If a proof is too long, it needs more lemmas.
• Connecting pages. Aside from the usual links, I try to add at least one link in the Also See section. This allows us to guide visitors to related or more interesting results. -- Just don't go too mad. Try to make it so that "Also see" contains only items that are immediately relevant. We already have a "What links here" option, and at least one Category. The danger is that this section can soon become bloated and distracting. -- Yes, true. Of course if I don't find any relevant pages, I don't add any. Thanks for the feedback.
Ideas for the ProofWiki community
• Make proofwiki popular and more known using some kind of blog (e.g. put the facebook page to use)
I tried to set up a blog but I was told I wasn't allowed to. So I stopped.
Ideas to improve structure and appearance
• A namespace for algorithms
• (bad idea) An idea to make pages more neat and appealing: Add a templates to include hidden notes containing material such as:
• Links to definitions (instead of cluttering the page with things like "Where $\cup$ denotes union" The "clutter", as you call it, is essential. Leaving the concepts undefined is not acceptable. So until someone designs some "hidden notes" template, we must include the links to the appropriate definitions. Until that time, the fact that this may be unappealing to you is regrettable but unavoidable.
• Links to more trivial results being used (to avoid cluttering the page of a more advanced proof with links to elementary results that are still worth mentioning, but not worth taking space).
Those hidden notes would show upon clicking and disappear when clicking again. Or when hovering, and disappear when moving the mouse away.
• (Improvement of the above idea) When you ask something to WolframAlpha, such as solve sin(x)=2 it will show a note on the right for any new symbol appearing in the solution. This made me think we can do something similar: placing notation explanations in a column somewhere on the right. This could be achieved using some sort of {{Notation}} template, as in {{Notation| $\sin(x)$ is the [[Definition:Sine Function|]]}}, instead of adding a line where $\sin(x)$ is the [[Definition:Sine Function|]]. It may be argued that one of those makes the page look more neat. It's hard to say which, as I don't know what the new suggestion looks like. I'll try an example someday.
2 immediate reasons why I'd be wary about doing this:
a) It would be a lot of work to go through the existing page and implement this style -- in the meantime the site would look inconsistent
b) If we add a column on the right, we lose considerable real estate on a page with such a right-hand column. Some of the pages are already quite wide, and splitting them down so as to make room for a bit of stylistic flam may compromise it. Then again, if we are then to hide that column and make it available on a click, you lose the flow of the argument of what you are reading.
So I remain to be convinced, me, but feel free to try something out if you think it could work. I am still a big fan of a linear flow down the page of a train of thought. If you have to keep referring to boxes elsewhere on the page, you end up with something like those infuriatingly irritating books which put boxes of text on the page in the middle of the flow of text, which is the worst ever design mistake which, I suppose, is supposed to try and make a work more accessible for submorons who have an attention span of a brain-damaged butterfly. --prime mover (talk) 04:46, 27 August 2017 (EDT)
An extra reason why I think it's important to be cautious with this: around 30% (and growing) of our traffic consists of mobile users. Admittedly these users would probably also profit from changes in other parts, but relying on left-right split is probably not a good idea... But maybe a separate setting for mobile and desktop is feasible. — Lord_Farin (talk) 05:59, 27 August 2017 (EDT)
Mobile users are certainly something to take into account. I also agree that it should not break the argument, which is why I think it should only be used for defining notations, as does W/A. For example, in proofs about more advanced topics, users may be bothered by having to read definitions for every symbol such as $\gcd$, divisibility, $\Vert\cdot\Vert$, Euler $\phi$, ... It makes proofs look longer than they are and, ironically, can be argued to break the flow of the argument as well. Either way, it's still unclear what it would look like (small/large font, ugliness) and how much space it will take up. I will leave the idea here so it can be revisited someday, and try some examples in my sandbox. --barto (talk) 06:40, 27 August 2017 (EDT)
Notations should usually be defined upfront, as it would be usual to encounter them in the statement of the theorem. There may be exceptions, where a notation is encountered in the middle of a proof, but they will be rare. and anyway, defining 5 things as they are encountered adds 5 lines to a proof, which does not bulk it out that much. I wonder if you may be trying to solve a problem that isn't there -- unless there is a specific example which you can point to which needs to be improved. --prime mover (talk) 06:48, 27 August 2017 (EDT)
## Projects in Analysis
### Asymptotic notations
See the project page User:Barto/Asymptotic Notation
### Infinite Products
• Convergence and analyticity of analytic products $\checkmark$
• Logarithmic derivatives and analyticity $\checkmark$
• Goal: A treatment of Factorization of Analytic functions, including full versions of Weierstrass and Hadamard factorization. Apply this to e.g. the Gamma Function and deduce Stirling's Formula for complex arguments.
• Difficulty: this needs a proper set of definitions of asymptotic notations (see corresponding project)
## Clean-up Projects in Abstract Algebra
### Direct Product and Direct Sum
• Make sure that the definition pages for direct products include a definition for arbitrary families. Ideally, there should be:
• A definition for two components
• A definition for a finite number of components
• A definition for general families
• (Optionally: a definition for countable families)
and the naming of those subdefinitions should be consistent: Finite Case, General Case or something like that.
• For direct sums, the definition for general families should suffice.
• Be consistent in naming pages:
• product vs. sum: product is the primary notion, sum is derived from it. It is thus impossible to define the direct sum without defining the direct product; except when they coincide and allow for ambiguous naming.
• External vs. internal: If nothing is specified, external has to be understood. IMO external can be removed from titles; it's unnecessarily pedantic; though I could live with it if it stays.
• Word order: (less important) Direct Product of X's vs. X Direct Product? I'm in favor of Direct Product of X's, and accordingly for sum, internal or not internal.
• The following table shows the jolly inconsistencies (no, I didn't misplace the entries):
General Group Ring Module Vector Space
External Product Definition:External Direct Product Definition:Group Direct Product Definition:Ring Direct Product Definition:Module Direct Product
Definition:Module of All Mappings
Definition:Module on Cartesian Product
Definition:Direct Product of Vector Spaces
Definition:Vector Space of All Mappings
Definition:Vector Space on Cartesian Product
Sum Finite Submodule of Function Space Definition:Module Direct Sum
Internal Product=Sum Definition:Internal Direct Product Definition:Internal Group Direct Product Definition:Ring Direct Sum
Please, feel free to add missing entries. These are the only definitions I found so far (not including transclusions). Some definitions are still to be extracted from a theorem that's being refactored.
## Actual Projects in Abstract Algebra
### Separability
#### Multiple roots in a ring
Let $R$ be a commutative ring with unity and let $P \in R[X]$. TFAE:
#### Separable Polynomial
TFAE:
etc..
Interesting selection:
to be continued
## Long Term Projects
• Disambiguate between left and right modules. Does this mean that every occurrence of "module" should be amended? Very probably. The coverage of this is inadequate, as will be seen when we come to tensors.
## Enhancing the Help Section
One of the things I like to do is expand the Help Section. This is to avoid questions being brought up over and over, and discussions being lost in the talk pages.
Do I think every single minor discussion point should be addressed in the Help Section? Yes. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6975091099739075, "perplexity": 1134.9218411309275}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315174.57/warc/CC-MAIN-20190820003509-20190820025509-00121.warc.gz"} |
https://stats.stackexchange.com/questions/399714/conditional-probability-involving-mixed-variable-types | # conditional probability involving mixed variable types
I'm trying to answer the following question
A defective coin minting machine produces coins whose probability of heads is a random variable $$T$$ with PDF $$f_{T}(p) = 1+\mathrm{sin}(2\pi p)$$ if $$p \in [0,1]$$ and $$f_{T}(p)=0$$ otherwise.
In essence a specific coin produced by this machine will have a fixed probability $$T=p$$ of giving heads, but you do not know initially what that probability is. A coin produced by this machine is selected and tossed repeatedly, with successive tosses assumed independent.
a. Find the probability that the first coin toss results in heads.
b. Given that the first coin toss resulted in heads, find the conditional PDF of $$T$$.
c. Given that the first coin toss resulted in heads, find the conditional probability of heads on the second toss.
Now I've worked out the solutions to the three questions. I'm going through the actual solutions and I'm confused as to how they arrived at a particular point in the solution for part c). As they did theirs differently from me, I really wanted to understand their approach too.
The solution is here
How did they arrive at the part $$P(B|A) = \int^{1}_{0} P(B|T=p,A) f_{T|A}(p)dp$$? (Note I've switched $$P$$ with $$T$$ to not confuse it with the probability $$P$$.) For some reason I'm convinced it should be
$$P(B|A) = \int^{1}_{0} P(B|T=p,A) f_{T}(p)dp$$ and my reasoning is: if we let $$C = (B|A)$$ then $$P(C) = \int^{1}_{0} P(C|T=p) f_{T}(p) dp$$ from the continuous version of the law of total probability. Can someone please explain the answer, perhaps with a proof, and why my reasoning is invalid? Note this is not for any homework assignment; I'm just keen to sharpen my skills in probability theory and I want to understand their approach step by step.
Note my solution was
$$P(B|A) = \frac{P(A,B)}{P(A)} = \frac{\int^{1}_{0} P(A,B|T=p) f_{T}(p) dp}{P(A)}$$ which is easier to work with. Regarding their solution
My thinking is they may have got it somewhere like the following.
$$P(B|A) = \frac{\int^{1}_{0} P(A,B,T=p) dp}{P(A)} = \frac{\int^{1}_{0} P(B|A,T=p)P(A,T=p) dp}{P(A)}$$
Now my guess would be they derived the conditional density from the last part? It looks similar, but I realise there are subtleties involved with mixing probabilities and densities. If this is the case I'd be grateful to see the proof, thanks!
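Not part of the thread, but a quick Monte Carlo sanity check of part (c) under the stated model (my own sketch): sample $$T$$ from $$f_T(p) = 1+\sin(2\pi p)$$, toss the coin twice, and compare the conditional frequency with the closed-form values $$P(A) = \tfrac12 - \tfrac1{2\pi} \approx 0.341$$ and $$P(B\mid A) \approx 0.511$$.

```python
import math
import random

random.seed(1)

def sample_T():
    # rejection sampling from f_T(p) = 1 + sin(2*pi*p) on [0, 1] (density bounded by 2)
    while True:
        p = random.random()
        if 2.0 * random.random() <= 1.0 + math.sin(2.0 * math.pi * p):
            return p

N = 200_000
first = both = 0
for _ in range(N):
    p = sample_T()
    h1 = random.random() < p          # event A: first toss is heads
    h2 = random.random() < p          # event B: second toss is heads
    first += h1
    both += h1 and h2

print(first / N)       # ~ 1/2 - 1/(2*pi) ~ 0.341
print(both / first)    # ~ (1/3 - 1/(2*pi)) / (1/2 - 1/(2*pi)) ~ 0.511
```

The conditional frequency agrees with the integral $$\int^{1}_{0} P(B|T=p,A) f_{T|A}(p)dp$$, not with the one using the unconditional density.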
How did they arrive at the part $$P(B|A) = \int^{1}_{0} P(B|T=p,A) f_{T|A}(p)dp$$?
Conditioning on A has to be respected in both terms under the integral, hence the correct use of $$f_{T|A}(p)$$.
Replacing $$A|B$$ with $$C$$ is not propagated correctly in your expressions and hides the conditioning. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 19, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9308000206947327, "perplexity": 252.92006938833813}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668682.16/warc/CC-MAIN-20191115144109-20191115172109-00050.warc.gz"} |
https://pure.au.dk/portal/en/persons/lene-juul-pedersen(225fd3b7-711b-4e90-a0c5-137193d3b7bb)/publications/the-use-of-rooting-material-by-growing-pigs-in-relation-to-time-of-feeding(eae9d910-a6f3-11dc-b18d-000ea68e967b)/export.html | # Lene Juul Pedersen
## The use of rooting material by growing pigs in relation to time of feeding
Research output: Contribution to book/anthology/report/proceedingConference abstract in proceedingsResearch
### Standard
Proceedings of the 41st international congress of the ISAE. ed. / Francisco Galindo; Lorenzo Alvarez. 2007. p. 144-144.
Research output: Contribution to book/anthology/report/proceedingConference abstract in proceedingsResearch
### Harvard
Jensen, MB & Pedersen, LJ 2007, The use of rooting material by growing pigs in relation to time of feeding. in F Galindo & L Alvarez (eds), Proceedings of the 41st international congress of the ISAE. pp. 144-144, 41st International Congress of the ISAE, Merida, Mexico, 30/07/2007.
### APA
Jensen, M. B., & Pedersen, L. J. (2007). The use of rooting material by growing pigs in relation to time of feeding. In F. Galindo, & L. Alvarez (Eds.), Proceedings of the 41st international congress of the ISAE (pp. 144-144)
### CBE
Jensen MB, Pedersen LJ. 2007. The use of rooting material by growing pigs in relation to time of feeding. Galindo F, Alvarez L, editors. In Proceedings of the 41st international congress of the ISAE. pp. 144-144.
### MLA
Jensen, Margit Bak and Lene Juul Pedersen "The use of rooting material by growing pigs in relation to time of feeding". and Galindo, Francisco Alvarez, Lorenzo (editors). Proceedings of the 41st international congress of the ISAE. 2007, 144-144.
### Vancouver
Jensen MB, Pedersen LJ. The use of rooting material by growing pigs in relation to time of feeding. In Galindo F, Alvarez L, editors, Proceedings of the 41st international congress of the ISAE. 2007. p. 144-144
### Author
Jensen, Margit Bak ; Pedersen, Lene Juul. / The use of rooting material by growing pigs in relation to time of feeding. Proceedings of the 41st international congress of the ISAE. editor / Francisco Galindo ; Lorenzo Alvarez. 2007. pp. 144-144
### Bibtex
@inbook{eae9d910a6f311dcb18d000ea68e967b,
title = "The use of rooting material by growing pigs in relation to time of feeding",
author = "Jensen, {Margit Bak} and Pedersen, {Lene Juul}",
year = "2007",
language = "English",
pages = "144--144",
editor = "Francisco Galindo and Lorenzo Alvarez",
booktitle = "Proceedings of the 41st international congress of the ISAE",
note = "null ; Conference date: 30-07-2007 Through 03-08-2007",
}
### RIS
TY - ABST
T1 - The use of rooting material by growing pigs in relation to time of feeding
AU - Jensen, Margit Bak
AU - Pedersen, Lene Juul
PY - 2007
Y1 - 2007
M3 - Conference abstract in proceedings
SP - 144
EP - 144
BT - Proceedings of the 41st international congress of the ISAE
A2 - Galindo, Francisco
A2 - Alvarez, Lorenzo
Y2 - 30 July 2007 through 3 August 2007
ER - | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8252200484275818, "perplexity": 15274.17831170664}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655886516.43/warc/CC-MAIN-20200704170556-20200704200556-00477.warc.gz"} |
http://astronomy.stackexchange.com/help/badges/31/commentator?userid=8 | # Help Center > Badges > Commentator
Leave 10 comments.
Awarded 205 times
Awarded Sep 21 '15 at 12:49 to | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8019358515739441, "perplexity": 2173.4078867397225}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860121423.81/warc/CC-MAIN-20160428161521-00183-ip-10-239-7-51.ec2.internal.warc.gz"} |
http://mathoverflow.net/questions/124111/approximation-by-polynomials | # Approximation by polynomials
The following is a well-known theorem (see e.g. The Chebyshev Polynomial by Rivlin):
If $p(x) = x^n + a_{n-1} x^{n-1} + \ldots + a_0$, then $\max_{-1\leq x \leq 1} |p(x)| \geq 2^{1-n}$ for $n \geq 1$ with equality being attained only if $p$ is the $n$th Chebyshev polynomial normalized so that the leading coefficient is $1$.
In my work I came across a question that can be seen as a refined version of the question answered by the above theorem:
Let $p$ be a polynomial as above and let $0 \leq \epsilon \leq 2^{1-n}$. Suppose that $x$ is a uniformly distributed random variable on $[-1, 1]$. What is $\sup_p \Pr[|p(x)|\leq \epsilon]$ as a function of $\epsilon$ and $n$ (sup is over all possible choices of the deg-$n$ monic polynomial $p$)? A good upper bound on this quantity will also be useful. Note that for $\epsilon=2^{1-n}$ this quantity is equal to $1$ by the theorem above.
I do not understand your very last sentence: How does it follow by the theorem above that the probability is 1 ? – Alexandre Eremenko Mar 9 '13 at 23:45
@Alexandre: I realize that I forgot to state the theorem with $p$ under the absolute values. I have modified the statement accordingly. Now I think my last statement is immediate:: for $\epsilon=2^{1-n}$ the normalized degree-$n$ Chebyshev polynomial is always in $[-2^{1-n}, 2^{1-n}]$ by the theorem. Does this answer your question? – Navin Goyal Mar 10 '13 at 5:09
So for small $\epsilon$ the supremum should be attained by $p(x):=x^n$, giving $|\{|p|\le\epsilon\}|=2\epsilon^{1/n}$. – Pietro Majer Mar 10 '13 at 13:20
Actually $p(x)=x^n$ can't be optimal (once $n>1$), though it probably comes within a constant factor. For example, if $n$ is even then $x^n-\epsilon$ already does better by a factor $2^{1/n}$. Likewise for $n\geq 3$ odd you can subtract some multiple of $x^{n-2}$ to extend the interval $|p(x)| \leq \epsilon$ beyond $|x| \leq \epsilon^{1/n}$. Better yet: if $p_0$ is the Chebyshev polynomial that works for $2^{1-n}$, and $\delta = 2^{n-1} \epsilon$, then $p(x) = \delta p_0(\delta^{-1/n} x)$ yields an interval of length $2\delta^{1/n} = 2^{2-1/n} \epsilon^{1/n}$. – Noam D. Elkies Mar 10 '13 at 15:54
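A quick numerical check of the scaled-Chebyshev construction in the previous comment (my own sketch, not from the thread): for $p(x) = \delta p_0(\delta^{-1/n} x)$ with $\delta = 2^{n-1}\epsilon$ and $n=4$, the set $\lbrace x \in [-1,1] : |p(x)| \leq \epsilon \rbrace$ should have length $2^{2-1/n}\epsilon^{1/n}$.

```python
n, eps = 4, 1e-3
delta = 2 ** (n - 1) * eps
scale = delta ** (1.0 / n)

def p0(u):
    # monic Chebyshev polynomial of degree 4: T_4(u) / 8
    return u ** 4 - u ** 2 + 0.125

# count grid points of [-1, 1] where |delta * p0(x / scale)| <= eps
N = 200_000
inside = sum(1 for i in range(N + 1)
             if abs(delta * p0((-1.0 + 2.0 * i / N) / scale)) <= eps)
measured = 2.0 * inside / (N + 1)            # length of {x : |p(x)| <= eps}
predicted = 2 ** (2 - 1.0 / n) * eps ** (1.0 / n)
assert abs(measured - predicted) < 1e-3
```

Dividing the measured length by 2 gives the probability for a uniform variable on $[-1,1]$, matching $\min(2, 2^{2-1/n}\epsilon^{1/n})/2$ for small $\epsilon$. – (editorial note)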
Come to think of it, it might be natural to conjecture that the Chebyshev polynomial $p_0$ of each degree $n$ maximizes the size of $\lbrace x \in {\bf R} : |p(x)| \leq 1 \rbrace$ among all polynomials $p$ of the same degree and leading coefficient (is this a known theorem?), which would imply that the desired sup is $\min(2, 2^{2-1/n} \epsilon^{1/n})$ for all $\epsilon$, attained by $p_0(x)$ or $\delta p_0(\delta^{-1/n} x)$. – Noam D. Elkies Mar 10 '13 at 18:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9226216077804565, "perplexity": 152.68977071617462}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644062327.4/warc/CC-MAIN-20150827025422-00104-ip-10-171-96-226.ec2.internal.warc.gz"} |
http://mathhelpforum.com/calculus/19692-application-derivatives.html | Math Help - Application of derivatives
1. Application of derivatives
A light shines from the top of a pole 50 ft high. A ball is dropped from the same height from a point 30 ft away from the light. How fast is the shadow of the ball moving along the ground 1/2 sec later? (Assume the ball falls s=16t^2 ft in t sec.)
2. hello
why nobody help?
3. Originally Posted by Joyce
A light shines from the top of a pole 50 ft high. A ball is dropped from the same height from a point 30 ft away from the light. How fast is the shadow of the ball moving along the ground 1/2 sec later? (Assume the ball falls s=16t^2 ft in t sec.)
hello,
I've attached a drawing of the situation.
You are dealing with 2 similar triangles: the small one at the light, formed by the ball's drop ($s$ vertical over $30$ horizontal), and the large one formed by the full height of the pole ($50$ vertical over the horizontal distance $x$ from the light to the shadow). You can set up the proportion:
$\frac s{30} = \frac{50}x$. Solve for x because that's the distance of the shadow from the foot of the pole:
$x=\frac{1500}s$ . You are told that $s = 16t^2$ . Substitute the variable s and you'll get the equation:
$x(t) = \frac{1500}{16t^2}$
You know that speed is the first derivative wrt t of the distance:
$x'(t) = -\frac{1500}{8t^3}$
The speed at $t = \frac12$ is:
$\rm{speed} = x'\left(\frac12 \right) = -\frac{1500}{8 \cdot \frac18} = -1500$, i.e. the shadow is moving toward the pole at 1500 ft/s.
Attached Thumbnails
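To double-check the arithmetic numerically (my own re-derivation, not from the thread): with the light at $(0,50)$ and the ball at $(30, 50-16t^2)$, similar triangles put the shadow at $x(t)=1500/(16t^2)$, and a central difference recovers the speed at $t=\frac12$.

```python
def shadow_x(t):
    # distance of the shadow from the foot of the pole: x = 50*30 / s with s = 16 t^2
    return 1500.0 / (16.0 * t * t)

h = 1e-6
speed = (shadow_x(0.5 + h) - shadow_x(0.5 - h)) / (2.0 * h)  # central difference
print(round(speed))    # -1500, i.e. the shadow moves toward the pole at 1500 ft/s
```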
4. Originally Posted by Joyce
why nobody help?
I'm an infirm old man and need some time to type the solution and to make a nice drawing. I'm now really breathless and exhausted | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.73578280210495, "perplexity": 2256.7702102040594}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469258943369.84/warc/CC-MAIN-20160723072903-00260-ip-10-185-27-174.ec2.internal.warc.gz"} |
http://dubz-by-dan.co.uk/works/study-of-behaviors-as-observed-by-students-and-teachers-6186 | # Study of Behaviors as Observed by Students and Teachers
55 PAGES (2876 WORDS) Seminar 236 Views
A study of the behaviors exhibited by new teachers and students.
APA
Joshua, M (2018). Study of Behaviors as Observed by Students and Teachers. Afribary.com: Retrieved April 26, 2018, from http://dubz-by-dan.co.uk/works/study-of-behaviors-as-observed-by-students-and-teachers-6186
MLA 8th
Manning, Joshua. "Study of Behaviors as Observed by Students and Teachers" Afribary.com. Afribary.com, 29 Jan. 2018, http://dubz-by-dan.co.uk/works/study-of-behaviors-as-observed-by-students-and-teachers-6186 . Accessed 26 Apr. 2018.
MLA7
Manning, Joshua. "Study of Behaviors as Observed by Students and Teachers". Afribary.com, Afribary.com, 29 Jan. 2018. Web. 26 Apr. 2018. < http://dubz-by-dan.co.uk/works/study-of-behaviors-as-observed-by-students-and-teachers-6186 >.
Chicago
Manning, Joshua. "Study of Behaviors as Observed by Students and Teachers" Afribary.com (2018). Accessed April 26, 2018. http://dubz-by-dan.co.uk/works/study-of-behaviors-as-observed-by-students-and-teachers-6186 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9984374046325684, "perplexity": 20784.691072286496}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948464.28/warc/CC-MAIN-20180426183626-20180426203626-00604.warc.gz"} |
https://www.cnblogs.com/yyf0309/p/7235531.html | # Codeforces Round #425 (Div. 2) Problem A Sasha and Sticks (Codeforces 832A)
It's one more school day now. Sasha doesn't like classes and is always bored at them. So, each day he invents some game and plays in it alone or with friends.
Today he invented one simple game to play with Lena, with whom he shares a desk. The rules are simple. Sasha draws n sticks in a row. After that the players take turns crossing out exactly k sticks from left or right in each turn. Sasha moves first, because he is the inventor of the game. If there are less than k sticks on the paper before some turn, the game ends. Sasha wins if he makes strictly more moves than Lena. Sasha wants to know the result of the game before playing, you are to help him.
Input
The first line contains two integers n and k (1 ≤ n, k ≤ 10^18, k ≤ n) — the number of sticks drawn by Sasha and the number k — the number of sticks to be crossed out on each turn.
Output
If Sasha wins, print "YES" (without quotes), otherwise print "NO" (without quotes).
You can print each letter in arbitrary case (upper or lower).
Examples
input
1 1
output
YES
input
10 4
output
NO
Note
In the first example Sasha crosses out 1 stick, and then there are no sticks. So Lena can't make a move, and Sasha wins.
In the second example Sasha crosses out 4 sticks, then Lena crosses out 4 sticks, and after that there are only 2 sticks left. Sasha can't make a move. The players make equal number of moves, so Sasha doesn't win.
Problem summary: There are n sticks on the table; each turn a player takes k of them, the two players alternating. The player who, on their turn, finds fewer than k sticks left loses. Determine whether the first player can win.
A total of ⌊n/k⌋ moves can be made, so it suffices to check whether that count is odd.
I got this problem accepted 3 minutes into the contest, which I'm fairly happy with.
### Code
```cpp
/**
 * Codeforces
 * Problem#832A
 * Accepted
 * Time:15ms
 * Memory:2000k
 */
#include <cstdio>
#include <cctype>
using namespace std;
typedef bool boolean;

// Fast reader for signed integers. Uses int (not char) for the
// character variable so that EOF is detected reliably.
template<typename T>
inline boolean readInteger(T& u) {
    int x;
    int aFlag = 1;
    while (!isdigit(x = getchar()) && x != '-' && x != EOF);
    if (x == EOF)
        return false;
    if (x == '-') {
        x = getchar();
        aFlag = -1;
    }
    for (u = x - '0'; isdigit(x = getchar()); u = (u << 1) + (u << 3) + x - '0');
    ungetc(x, stdin);
    u *= aFlag;
    return true;
}

long long n, k;

// The game lasts exactly n / k moves in total; the first player
// wins if and only if that count is odd.
inline void init() {
    readInteger(n);
    readInteger(k);
    long long c = n / k;
    puts((c & 1) ? "YES" : "NO");
}

int main() {
    init();
    return 0;
}
```
posted @ 2017-07-25 17:45 阿波罗2003
# Linear differential equation

Source: https://en.wikipedia.org/wiki/Linear_differential_equation
In mathematics, linear differential equations are differential equations having solutions which can be added together in particular linear combinations to form further solutions. They can be ordinary (ODEs) or partial (PDEs). The solutions to (homogeneous) linear differential equations form a vector space (unlike non-linear differential equations).
## Introduction
Linear differential equations are of the form
$Ly = f$
where the differential operator L is a linear operator, y is the unknown function (such as a function of time y(t)), and the right hand side f is a given function of the same nature as y (called the source term). For a function dependent on time we may write the equation more expressly as
$L y(t) = f(t)$
and, even more precisely by bracketing
$L [y](t) = f(t)$
The linear operator L may be considered to be of the form[1]
$L_n(y) \equiv \frac{d^n y}{dt^n} + A_1(t)\frac{d^{n-1}y}{dt^{n-1}} + \cdots + A_{n-1}(t)\frac{dy}{dt} + A_n(t)y$
The linearity condition on L rules out operations such as taking the square of the derivative of y; but permits, for example, taking the second derivative of y. It is convenient to rewrite this equation in an operator form
$L_n(y) \equiv \left[\,D^n + A_{1}(t)D^{n-1} + \cdots + A_{n-1}(t) D + A_n(t)\right] y$
where D is the differential operator d/dt (i.e. $Dy = y'$, $D^2 y = y''$, ...), and the An are given functions.
Such an equation is said to have order n, the index of the highest derivative of y that is involved.
A typical simple example is the linear differential equation used to model radioactive decay.[2] Let N(t) denote the number of radioactive atoms in some sample of material [3] at time t. Then for some constant k > 0, the number of radioactive atoms which decay can be modelled by
$\frac{dN}{dt} = -k N$
If y is assumed to be a function of only one variable, one speaks about an ordinary differential equation, else the derivatives and their coefficients must be understood as (contracted) vectors, matrices or tensors of higher rank, and we have a (linear) partial differential equation.
The case where f = 0 is called a homogeneous equation and its solutions are called complementary functions. It is particularly important to the solution of the general case, since any complementary function can be added to a solution of the inhomogeneous equation to give another solution (by a method traditionally called particular integral and complementary function). When the Ai are numbers, the equation is said to have constant coefficients.
## Homogeneous equations with constant coefficients
The first method of solving linear homogeneous ordinary differential equations with constant coefficients is due to Euler, who realized that solutions have the form ezx, for possibly-complex values of z. The exponential function is one of the few functions to keep its shape after differentiation, allowing the sum of its multiple derivatives to cancel out to zero, as required by the equation. Thus, for constant values A1,..., An, to solve:
$y^{(n)} + A_{1}y^{(n-1)} + \cdots + A_{n}y = 0\,,$
we set y = ezx, leading to
$z^n e^{zx} + A_1 z^{n-1} e^{zx} + \cdots + A_n e^{zx} = 0.$
Division by ezx gives the nth-order polynomial:
$F(z) = z^{n} + A_{1}z^{n-1} + \cdots + A_n = 0.\,$
This algebraic equation F(z) = 0 is the characteristic equation considered later by Gaspard Monge and Augustin-Louis Cauchy.
Formally, the terms
$y^{(k)}\quad\quad(k = 1, 2, \dots, n).$
of the original differential equation are replaced by zk. Solving the polynomial gives n values of z, z1, ..., zn. Substitution of any of those values for z into ezx gives a solution ezix. Since homogeneous linear differential equations obey the superposition principle, any linear combination of these functions also satisfies the differential equation.
When these roots are all distinct, we have n distinct solutions to the differential equation. It can be shown that these are linearly independent, by applying the Vandermonde determinant, and together they form a basis of the space of all solutions of the differential equation.
Examples
$y''''-2y'''+2y''-2y'+y=0$
has the characteristic equation
$z^4-2z^3+2z^2-2z+1=0.$
This has zeroes i, −i, and 1 (with multiplicity 2). The solution basis is then
$e^{ix} ,\, e^{-ix} ,\, e^x ,\, xe^x.$
This corresponds to the real-valued solution basis
$\cos x ,\, \sin x ,\, e^x ,\, xe^x \,.$
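As a quick sanity check (a sketch added here, not part of the original article), the claimed zeroes can be verified directly with Python's built-in complex arithmetic; the double root at 1 shows up as a common zero of F and F′:

```python
# Characteristic polynomial of y'''' - 2y''' + 2y'' - 2y' + y = 0:
#   F(z) = z^4 - 2 z^3 + 2 z^2 - 2 z + 1
def F(z):
    return z**4 - 2*z**3 + 2*z**2 - 2*z + 1

def dF(z):
    # F'(z) = 4 z^3 - 6 z^2 + 4 z - 2
    return 4*z**3 - 6*z**2 + 4*z - 2

# i and -i are simple roots:
assert abs(F(1j)) < 1e-12 and abs(F(-1j)) < 1e-12
# 1 is a zero of both F and F', so it has multiplicity at least 2:
assert F(1) == 0 and dF(1) == 0
```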
The preceding gave a solution for the case when all zeros are distinct, that is, each has multiplicity 1. For the general case, if z is a (possibly complex) zero (or root) of F(z) having multiplicity m, then, for $k\in\{0,1,\dots,m-1\} \,$, $y=x^ke^{zx}$ is a solution of the ODE. Applying this to all roots gives a collection of n distinct and linearly independent functions, where n is the degree of F(z). As before, these functions make up a basis of the solution space.
If the coefficients Ai of the differential equation are real, then real-valued solutions are generally preferable. Since non-real roots z then come in conjugate pairs, so do their corresponding basis functions xkezx, and the desired result is obtained by replacing each pair with their real-valued linear combinations Re(y) and Im(y), where y is one of the pair.
A case that involves complex roots can be solved with the aid of Euler's formula.
### Examples
Given $y''-4y'+5y=0$. The characteristic equation is $z^2-4z+5=0$, which has roots 2 ± i. Thus the solution basis $\{y_1,y_2\}$ is $\{e^{(2+i)x},e^{(2-i)x}\}$. Now y is a solution if and only if $y=c_1y_1+c_2y_2$ for $c_1,c_2\in\mathbf{C}$.
Because the coefficients are real,
• we are likely not interested in the complex solutions
• our basis elements are mutual conjugates
The linear combinations
$u_1=\mbox{Re}(y_1)=\tfrac{1}{2} (y_1+y_2) =e^{2x}\cos(x),$
$u_2=\mbox{Im}(y_1)=\tfrac{1}{2i} (y_1-y_2) =e^{2x}\sin(x),$
will give us a real basis in $\{u_1,u_2\}$.
#### Simple harmonic oscillator
The second order differential equation
$D^2 y = -k^2 y,$
which represents a simple harmonic oscillator, can be restated as
$(D^2 + k^2) y = 0.$
The expression in parenthesis can be factored out, yielding
$(D + i k) (D - i k) y = 0,$
which has a pair of linearly independent solutions:
$(D - i k) y = 0$
$(D + i k) y = 0.$
The solutions are, respectively,
$y_0 = A_0 e^{i k x}$
and
$y_1 = A_1 e^{-i k x}.$
These solutions provide a basis for the two-dimensional solution space of the second order differential equation: meaning that linear combinations of these solutions will also be solutions. In particular, the following solutions can be constructed
$y_{0'} = {C_0 e^{i k x} + C_0 e^{-i k x} \over 2} = C_0 \cos (k x)$
and
$y_{1'} = {C_1 e^{i k x} - C_1 e^{-i k x} \over 2 i} = C_1 \sin (k x).$
These last two trigonometric solutions are linearly independent, so they can serve as another basis for the solution space, yielding the following general solution:
$y_H = C_0 \cos (k x) + C_1 \sin (k x).$
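A numerical spot check (a sketch with arbitrarily chosen constants k, C0, C1, not taken from the text) confirms that this linear combination satisfies $D^2 y = -k^2 y$:

```python
import math

# Verify numerically that y(x) = C0*cos(kx) + C1*sin(kx) solves
# y'' = -k^2 y.  k, C0, C1 are arbitrary test values.
k, C0, C1 = 2.0, 0.7, -1.3

def y(x):
    return C0*math.cos(k*x) + C1*math.sin(k*x)

h, x = 1e-4, 0.9
ypp = (y(x + h) - 2*y(x) + y(x - h)) / h**2   # central-difference y''
assert abs(ypp + k**2 * y(x)) < 1e-5
```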
#### Damped harmonic oscillator
Given the equation for the damped harmonic oscillator:
$\left(D^2 + \frac{b}{m} D + \omega_0^2\right) y = 0,$
the expression in parentheses can be factored out: first obtain the characteristic equation by replacing D with λ. This equation must be satisfied for all y, thus:
$\lambda^2 + \frac{b}{m} \lambda + \omega_0^2 = 0.$
Solve using the quadratic formula:
$\lambda = \tfrac{1}{2} \left (-\frac{b}{m} \pm \sqrt{\frac{b^2}{m^2} - 4 \omega_0^2} \right ).$
Use these data to factor out the original differential equation:
$\left(D + \frac{b}{2m} - \sqrt{\frac{b^2}{4 m^2} - \omega_0^2} \right) \left(D + \frac{b}{2m} + \sqrt{\frac{b^2}{4 m^2} - \omega_0^2}\right) y = 0.$
This implies a pair of solutions, one corresponding to
$\left(D + \frac{b}{2m} - \sqrt{\frac{b^2}{4 m^2} - \omega_0^2} \right) y = 0$
$\left(D + \frac{b}{2m} + \sqrt{\frac{b^2}{4 m^2} - \omega_0^2}\right) y = 0$
The solutions are, respectively,
$y_0 = A_0 e^{-\omega x + \sqrt{\omega^2 - \omega_0^2} x} = A_0 e^{-\omega x} e^{\sqrt{\omega^2 - \omega_0^2} x}$
$y_1 = A_1 e^{-\omega x - \sqrt{\omega^2 - \omega_0^2} x} = A_1 e^{-\omega x} e^{-\sqrt{\omega^2 - \omega_0^2} x}$
where ω = b/2m. From this linearly independent pair of solutions can be constructed another linearly independent pair which thus serve as a basis for the two-dimensional solution space:
$y_H (A_0, A_1) (x) = \left(A_0 \sinh \left (\sqrt{\omega^2 - \omega_0^2} x \right ) + A_1 \cosh \left ( \sqrt{\omega^2 - \omega_0^2} x \right ) \right) e^{-\omega x}.$
However, if |ω| < |ω0| then it is preferable to get rid of the consequential imaginaries, expressing the general solution as
$y_H (A_0, A_1) (x) = \left(A_0 \sin \left(\sqrt{\omega_0^2 - \omega^2} x \right ) + A_1 \cos \left (\sqrt{\omega_0^2 - \omega^2} x \right ) \right) e^{-\omega x}.$
This latter solution corresponds to the underdamped case, whereas the former one corresponds to the overdamped case: the solutions for the underdamped case oscillate whereas the solutions for the overdamped case do not.
## Nonhomogeneous equation with constant coefficients
To obtain the solution to the nonhomogeneous equation (sometimes called inhomogeneous equation), find a particular integral yP(x) by either the method of undetermined coefficients or the method of variation of parameters; the general solution to the linear differential equation is the sum of the general solution of the related homogeneous equation and the particular integral. Or, when the initial conditions are set, use Laplace transform to obtain the particular solution directly.
Suppose we face
$\frac {d^{n}y(x)} {dx^{n}} + A_{1}\frac {d^{n-1}y(x)} {dx^{n-1}} + \cdots + A_{n}y(x) = f(x).$
For later convenience, define the characteristic polynomial
$P(v)=v^n+A_1v^{n-1}+\cdots+A_n.$
We find a solution basis $\{y_1(x),y_2(x),\ldots,y_n(x)\}$ for the homogeneous (f(x) = 0) case. We now seek a particular integral yp(x) by the variation of parameters method. Let the coefficients of the linear combination be functions of x:
$y_p(x) = u_1(x) y_1(x) + u_2(x) y_2(x) + \cdots + u_n(x) y_n(x).$
For ease of notation we will drop the dependency on x (i.e. the various (x)). Using the operator notation D = d/dx, the ODE in question is P(D)y = f; so
$f=P(D)y_p=P(D)(u_1y_1)+P(D)(u_2y_2)+\cdots+P(D)(u_ny_n).$
With the constraints
$0=u'_1y_1+u'_2y_2+\cdots+u'_ny_n$
$0=u'_1y'_1+u'_2y'_2+\cdots+u'_ny'_n$
$\cdots$
$0=u'_1y^{(n-2)}_1+u'_2y^{(n-2)}_2+\cdots+u'_ny^{(n-2)}_n$
the parameters commute out,
$f=u_1P(D)y_1+u_2P(D)y_2+\cdots+u_nP(D)y_n+u'_1y^{(n-1)}_1+u'_2y^{(n-1)}_2+\cdots+u'_ny^{(n-1)}_n.$
But P(D)yj = 0, therefore
$f=u'_1y^{(n-1)}_1+u'_2y^{(n-1)}_2+\cdots+u'_ny^{(n-1)}_n.$
This, with the constraints, gives a linear system in the u′j. This much can always be solved; in fact, combining Cramer's rule with the Wronskian,
$u'_j=(-1)^{n+j}\frac{W(y_1,\ldots,y_{j-1},y_{j+1}\ldots,y_n)_{0 \choose f}}{W(y_1,y_2,\ldots,y_n)}.$
In the very non-standard notation used above, one should take the i,n-minor of W and multiply it by f. That's why we get a minus-sign. Alternatively, forget about the minus sign and just compute the determinant of the matrix obtained by substituting the j-th W column with (0, 0, ..., f).
The rest is a matter of integrating u′j.
The particular integral is not unique; $y_p+c_1y_1+\cdots+c_ny_n$ also satisfies the ODE for any set of constants cj.
### Example
Suppose $y''-4y'+5y=\sin(kx)$. We take the solution basis found above $\{e^{(2+i)x}=y_1(x),e^{(2-i)x}=y_2(x)\}$.
\begin{align} W &= \begin{vmatrix}e^{(2+i)x}&e^{(2-i)x} \\ (2+i)e^{(2+i)x}&(2-i)e^{(2-i)x} \end{vmatrix} = e^{4x}\begin{vmatrix}1&1\\ 2+i&2-i\end{vmatrix} =-2ie^{4x}\\ u'_1 &=\frac{1}{W}\begin{vmatrix}0&e^{(2-i)x}\\ \sin(kx)&(2-i)e^{(2-i)x}\end{vmatrix} = -\tfrac{i}{2} \sin(kx)e^{(-2-i)x}\\ u'_2 &=\frac{1}{W}\begin{vmatrix}e^{(2+i)x}&0\\ (2+i)e^{(2+i)x}&\sin(kx)\end{vmatrix} =\tfrac{i}{2} \sin(kx)e^{(-2+i)x}. \end{align}
$u_1=-\tfrac{i}{2}\int\sin(kx)e^{(-2-i)x}\,dx =\frac{ie^{(-2-i)x}}{2(3+4i+k^2)}\left((2+i)\sin(kx)+k\cos(kx)\right)$
$u_2=\tfrac{i}{2}\int\sin(kx)e^{(-2+i)x}\,dx=\frac{ie^{(i-2)x}}{2(3-4i+k^2)}\left((i-2)\sin(kx)-k\cos(kx)\right).$
And so
\begin{align} y_p &= u_1(x) y_1(x) + u_2(x) y_2(x) = \frac{i}{2(3+4i+k^2)}\left((2+i)\sin(kx)+k\cos(kx)\right) +\frac{i}{2(3-4i+k^2)}\left((i-2)\sin(kx)-k\cos(kx)\right) \\ &=\frac{(5-k^2)\sin(kx)+4k\cos(kx)}{(3+k^2)^2+16}. \end{align}
(Notice that u1 and u2 had factors that canceled y1 and y2; that is typical.)
For interest's sake, this ODE has a physical interpretation as a driven damped harmonic oscillator; yp represents the steady state, and $c_1y_1+c_2y_2$ is the transient.
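The final expression for the particular integral can be spot-checked numerically; the sketch below picks the arbitrary values k = 2 and x = 0.7 and approximates the derivatives by central differences:

```python
import math

# Spot-check the particular integral
#   y_p(x) = ((5 - k^2) sin(kx) + 4k cos(kx)) / ((3 + k^2)^2 + 16)
# against y'' - 4y' + 5y = sin(kx), for the test values k = 2, x = 0.7.
k = 2.0
den = (3 + k**2)**2 + 16

def yp(x):
    return ((5 - k**2)*math.sin(k*x) + 4*k*math.cos(k*x)) / den

h, x = 1e-4, 0.7
d1 = (yp(x + h) - yp(x - h)) / (2*h)            # y_p'
d2 = (yp(x + h) - 2*yp(x) + yp(x - h)) / h**2   # y_p''
assert abs(d2 - 4*d1 + 5*yp(x) - math.sin(k*x)) < 1e-5
```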
## Equation with variable coefficients
A linear ODE of order n with variable coefficients has the general form
$p_{n}(x)y^{(n)}(x) + p_{n-1}(x) y^{(n-1)}(x) + \cdots + p_0(x) y(x) = r(x).$
### Examples
A simple example is the Cauchy–Euler equation often used in engineering
$x^n y^{(n)}(x) + a_{n-1} x^{n-1} y^{(n-1)}(x) + \cdots + a_0 y(x) = 0.$
## First-order equation with variable coefficients
Examples
Solve the equation
$y'(x)+3y(x)=2$
with the initial condition
$y(0)=2.$
Using the general solution method:
$y=e^{-3x}\left(\int 2 e^{3x}\, dx + \kappa\right). \,$
The indefinite integral is solved to give:
$y=e^{-3x}\left(\left(2/3\right) e^{3x} + \kappa\right). \,$
Then we can reduce to:
$y=2/3 + \kappa e^{-3x}. \,$
where κ = 4/3 from the initial condition.
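This worked example is easy to verify in a few lines (a sketch; the tolerance just absorbs the finite-difference error):

```python
import math

# The solution of y' + 3y = 2 with y(0) = 2 is y = 2/3 + (4/3) e^{-3x}.
def y(x):
    return 2/3 + (4/3)*math.exp(-3*x)

assert abs(y(0) - 2) < 1e-12        # initial condition holds
h, x = 1e-5, 0.4
d1 = (y(x + h) - y(x - h)) / (2*h)  # numerical y'
assert abs(d1 + 3*y(x) - 2) < 1e-6  # y' + 3y = 2
```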
A linear ODE of order 1 with variable coefficients has the general form
$Dy(x) + f(x) y(x) = g(x).$
Where D is the differential operator. Equations of this form can be solved by multiplying the integrating factor
$e^{\int f(x)\,dx}$
throughout to obtain
$Dy(x)e^{\int f(x)\,dx}+f(x)y(x)e^{\int f(x)\,dx}=g(x)e^{\int f(x) \, dx},$
which simplifies due to the product rule (applied backwards) to
$D\left (y(x)e^{\int f(x)\,dx} \right )=g(x)e^{\int f(x)\,dx}$
which, on integrating both sides and solving for y(x) gives:
$y(x) = e^{-\int f(x)\,dx} \left(\int g(x)e^{\int f(x)\,dx} \,dx+\kappa\right).$
In other words: The solution of a first-order linear ODE
$y'(x) + f(x) y(x) = g(x),$
with coefficients that may or may not vary with x, is:
$y=e^{-a(x)}\left(\int g(x) e^{a(x)}\, dx + \kappa\right)$
where κ is the constant of integration, and
$a(x)=\int{f(x)\,dx}.$
A compact form of the general solution based on a Green's function is (see J. Math. Chem. 48 (2010) 175):
$y(x) = \int_a^x \! {[y(a) \delta(t-a)+g(t)] e^{-\int_t^x \!f(u)du}\, dt}\,.$
where δ(x) is the generalized Dirac delta function.
### Examples
Consider a first order differential equation with constant coefficients:
$\frac{dy}{dx} + b y = 1.$
This equation is particularly relevant to first order systems such as RC circuits and mass-damper systems.
In this case, f(x) = b, g(x) = 1.
Hence its solution is
$y(x) = e^{-bx} \left( \frac{e^{bx}}{b}+ C \right) = \frac{1}{b} + C e^{-bx} .$
## Systems of linear differential equations
An arbitrary linear ordinary differential equation or even a system of such equations can be converted into a first order system of linear differential equations by adding variables for all but the highest order derivatives. A linear system can be viewed as a single equation with a vector-valued variable. The general treatment is analogous to the treatment above of ordinary first order linear differential equations, but with complications stemming from noncommutativity of matrix multiplication.
To solve
$\left\{\begin{array}{rl}\mathbf{y}'(x) &= A(x)\mathbf{y}(x)+\mathbf{b}(x)\\\mathbf y(x_0)&=\mathbf y_0\end{array}\right.$
(here $\mathbf{y} (x)$ is a vector or matrix, and $A( x )$ is a matrix), let $U( x )$ be the solution of $\mathbf y'(x) = A(x)\mathbf y(x)$ with $U(x_0) = I$ (the identity matrix). $U$ is a fundamental matrix for the equation — the columns of $U$ form a complete linearly independent set of solutions for the homogeneous equation. After substituting $\mathbf y(x) = U(x)\mathbf z(x)$, the equation $\mathbf y'(x) = A(x)\mathbf y(x)+\mathbf b(x)$ simplifies to $U(x)\mathbf z'(x) = \mathbf b(x).$ Thus,
$\mathbf{y}(x) = U(x)\mathbf{y_0} + U(x)\int_{x_0}^x U^{-1}(t)\mathbf{b}(t)\,dt$
If $A(x_1)$ commutes with $A(x_2)$ for all $x_1$ and $x_2$, then
$U(x) = e^{\int_{x_0}^x A(x)\,dx}$
and thus
$U^{-1}(x) = e^{-\int_{x_0}^x A(x)\,dx},$
but in the general case there is no closed form solution, and an approximation method such as Magnus expansion may have to be used. Note that the exponentials are matrix exponentials.
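For the commuting case with constant A, the fundamental matrix is an ordinary matrix exponential, which can be illustrated on the 2×2 rotation generator (a pure-Python sketch using a truncated Taylor series; a real implementation would use a library routine such as scipy.linalg.expm):

```python
import math

# exp(tA) for A = [[0, 1], [-1, 0]] equals the rotation matrix
#   [[cos t, sin t], [-sin t, cos t]],
# since A^2 = -I.  A truncated Taylor series suffices for small |t|.

def mat_mul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(A, t, terms=30):
    # sum over n of (tA)^n / n!
    result = [[1.0, 0.0], [0.0, 1.0]]   # identity
    term = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        term = mat_mul(term, [[t*a for a in row] for row in A])
        term = [[a / n for a in row] for row in term]
        result = [[result[i][j] + term[i][j] for j in range(2)]
                  for i in range(2)]
    return result

A = [[0.0, 1.0], [-1.0, 0.0]]
t = 1.0
U = expm(A, t)
assert abs(U[0][0] - math.cos(t)) < 1e-9 and abs(U[0][1] - math.sin(t)) < 1e-9
assert abs(U[1][0] + math.sin(t)) < 1e-9 and abs(U[1][1] - math.cos(t)) < 1e-9
```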
Source: http://www.rationalresponders.com/forum/19740?page=1

Truden
Posts: 170
Joined: 2008-01-25
Offline
Some of you people might not believe it from the mouth of a theist, but I highly respect the opponents in this forum.
So, I would like to discuss Time with you guys.
---
The definition of time which we use is:
“Non spatial continuum in which the events occur.”
My definition is:
“Time is the relation between two events in the continuum of events”
Time is relevant and limited to the events.
Each event has its place in the chain of events and does not depend on anything out of its cause.
In philosophical discussions I always introduce the idea about the hierarchy in the mind concepts.
Every mind concept appears in certain hierarchical order and by changing the hierarchy we end up with fallacy.
In this particular argument the time is placed before the events.
The definition of time IMPLIES that the events appear IN time, but it is actually the other way round – time is created as concept from the relation between two or more events.
My arguments:
1) Universe without events is Universe without time.
Some people will argue that there will be time although it will be impossible to measure it.
That would be fallacy.
We measure time with time, which is actually event with event (a circle around the Sun counted in spins around the Earth's axis).
The logical conclusion is that we cannot apply time to a motionless universe.
What is to MEASURE time? – it is to relate one event to another event.
I think that this is quite clear.
2) We need two or more events to have time as existing concept.
One event is insufficient for time creation.
To have “motion” we need universe with at least two objects.
To have “time” we need universe with at least two events.
If there is universe with one only object, that object can not show motion and cannot exist in time.
It can only exist as motionless in space.
The definition of time does not apply to such Universe.
If the Universe is created from two objects which are moving away from each other, then according to the definition of time we should have time; but how can we explain, and how can we measure, time in such a universe?
In this case we can only claim that an event occurs in space, but not in time.
3) When you argue the above, do not refer to the already built mind concept of time.
- Have in mind, that you already have the time concept from at least two events in your life.
Note that your thinking is an event too.
- Do not use “speed” for proving “time”.
Speed (if we can talk about it in this case) is related to motion in such Universe.
If we have only two moving away from each other objects, speed has no use for time.
- A relation between two events is, for example, "the number of Earth spins in one circle around the Sun".
- Every time-measuring tool is “event”
- All events appear in space except the thought (the thinking).
Well, this is my idea about “time”.
Most probably I missed something, but that is why I put it on discussion
Truden
Posts: 170
Joined: 2008-01-25
Offline
BobSpence1 wrote:
You are simply mistaken here. Einstein 's theory is NOT dependant on a conscious observer. Exactly the same phenomena would be recorded by instruments. The time does indeed run differently for everything in the two locations. Clocks will tick at different rates, radioactive atoms will decay at different rates. The theory is applied in the design of GPS systems to correct for the effect of the orbital motion and different level of gravity experienced by GPS satellites relative to the receivers on Earth. It is NOT just our perception - the difference in rate of time passing is as real as anything else. It shows up in the received frequency from the satellites being different from what we know they are set to transmit, which needs to be adjusted for in the actual circuitry and programming in the GPS sets to get accurate position readings. Not in our brains, note, in the physical devices.
Time really is passing differently for us and the satellites, not just our perception of it - of course the difference is way too small for us to actually perceive it, but it is real and physically measureable.
The difference in the time measuring tools on the GPS satellites and Earth comes from the difference in the gravity.
The experiment with the two aircraft, traveling in opposite directions around the Earth, should measure not only the time but the gravity as well, because it is very logical to assume that the gravity on them will be different.
It is easy to predict that the gravity on these two airplanes will change proportionally with the difference between the time measuring tools.
Before such experiment is done, we cannot claim that simultaneous events can be seen different due to relative speed.
(The relative speed can change the perception of a length and it can even be recorded, but that is the same as when blue color changes under yellow light (it can also be recorded).
The fact is that the object does not change its color. It is the perception that changes.)
And by the way, if you take in account my idea about the absolute and perceptive universal values, you'll see that Speed is based on the perceptive value of Time, and cannot have absolute value, like Einstein's claim about the speed of light.
Quote:
Are you really saying that not having any physical way of estimating the passage of time in a universe with only two non-colliding objects would have some actual significance??
I cannot think of greater significance than the fact that we cannot measure Time (since we use Time to measure Time). Are you saying that this is not significant at all !?
Quote:
Your time idea is an empty triviality based on a total misunderstanding of the science involved.
I hope you may come to really understand the nature of time, but I am not going to hold my breath waiting, since you do seem to get stubbornly attached to your odd little theories.
At least you seem to have changed your description of sound from the way you had it in the first response I read, so as to better match my description. Thank you for correcting that error at least. Maybe there is hope for you to achieve scientific 'enlightenment' yet.
Definition of Sound: Vibrations transmitted through an elastic solid or a liquid or gas, with frequencies in the approximate range of 20 to 20,000 hertz, capable of being detected by human organs of hearing.
I agreed with your definition because it doesn't change the subject of my argument - sound is vibrations or "pressure wave" (if you like it better) with certain frequencies.
Which supports my idea that SOUND has perceptive value.
---
I understand the embarrassment of a woodworker showing your logical fallacy, but it would be even more embarrassing one day to find out that the woodworker was right and you were calling him a fool.
Intelligent people know that "ad hominem" argument can easily turn against you.
And just to remind you that you still haven't commented on the points in my article.
Why do you think that Time is not the relation between two or more events?
Why do you think that we cannot measure Time in my thought experiments, even in the one in which we have an event?
cj
Posts: 3330
Joined: 2007-01-05
Offline
Thanks, Bob.
BobSpence
Posts: 5785
Joined: 2006-02-14
Online
Truden wrote:
BobSpence1 wrote:
You are simply mistaken here. Einstein 's theory is NOT dependant on a conscious observer. Exactly the same phenomena would be recorded by instruments. The time does indeed run differently for everything in the two locations. Clocks will tick at different rates, radioactive atoms will decay at different rates. The theory is applied in the design of GPS systems to correct for the effect of the orbital motion and different level of gravity experienced by GPS satellites relative to the receivers on Earth. It is NOT just our perception - the difference in rate of time passing is as real as anything else. It shows up in the received frequency from the satellites being different from what we know they are set to transmit, which needs to be adjusted for in the actual circuitry and programming in the GPS sets to get accurate position readings. Not in our brains, note, in the physical devices.
Time really is passing differently for us and the satellites, not just our perception of it - of course the difference is way too small for us to actually perceive it, but it is real and physically measureable.
The difference in the time measuring tools on the GPS satellites and Earth comes from the difference in the gravity.
Not just the gravity difference:
Wikipedia wrote:
Special relativity predicts that the frequency of the atomic clocks moving at GPS orbital speeds will tick more slowly than stationary ground clocks by a factor of $\frac{v^{2}}{2c^{2}}\approx 10 ^{-10}$, or result in a delay of about 7 μs/day, where the orbital velocity is v = 4 km/s, and c = the speed of light. The time dilation effect has been measured and verified using the GPS system.
The effect of gravitational frequency shift on the GPS system due to general relativity is that a clock closer to a massive object will be slower than a clock farther away. Applied to the GPS system, the receivers are much closer to Earth than the satellites, causing the GPS clocks to be faster by a factor of 5×10^(-10), or about 45.9 μs/day. This gravitational frequency shift is also a noticeable effect
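Both figures quoted in this passage can be reproduced with a few lines of Python (an order-of-magnitude sketch; the orbital radius, speed, and GM value below are rounded reference numbers supplied here, not taken from the thread):

```python
# Reproduce the two GPS clock-rate numbers quoted above.
c = 2.998e8          # speed of light, m/s
v = 3.874e3          # GPS orbital speed, m/s (~4 km/s)
day = 86400.0        # seconds per day

# Special relativity: a moving clock runs slow by ~ v^2 / (2 c^2).
sr_shift = v**2 / (2 * c**2)
sr_us_per_day = sr_shift * day * 1e6
assert 6.0 < sr_us_per_day < 9.0         # ~7 microseconds/day

# General relativity: a clock higher in the gravity well runs fast by
# ~ GM/c^2 * (1/R_earth - 1/r_orbit).
GM = 3.986e14        # Earth's gravitational parameter, m^3/s^2
R_earth = 6.371e6    # mean Earth radius, m
r_orbit = 2.6561e7   # GPS orbital radius, m
gr_shift = GM / c**2 * (1/R_earth - 1/r_orbit)
gr_us_per_day = gr_shift * day * 1e6
assert 40.0 < gr_us_per_day < 50.0       # ~46 microseconds/day
```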
Quote:
The experiment with the two aircraft, traveling in opposite directions around the Earth, should measure not only the time but the gravity as well, because it is very logical to assume that the gravity on them will be different.
It is easy to predict that the gravity on these two airplanes will change proportionally with the difference between the time measuring tools.
Before such experiment is done, we cannot claim that simultaneous events can be seen different due to relative speed.
The gravity on two aircraft travelling at the same height and latitude will logically be almost exactly the same.
In practice, there will be differences due mainly to any differences in average altitude. If they fly at different latitudes, there will be further differences, due to the Earth not being perfectly spherical.
When they fly in opposite directions, the effective speed will be different due to the rotation of the Earth.
The experiment has been done, several times.
And of course both the velocity effect (time dilation due to special relativity) and the gravitational effect (general relativity) are always taken into account:
http://hyperphysics.phy-astr.gsu.edu/HBASE/Relativ/airtim.html#c4 wrote:
In October 1971, Hafele and Keating flew cesium beam atomic clocks around the world twice on regularly scheduled commercial airline flights, once to the East and once to the West. In this experiment, both gravitational time dilation and kinematic time dilation are significant - and are in fact of comparable magnitude.
Do you want to give us any more examples of your mis-reading and mis-understanding of this subject?
Quote:
(The relative speed can change the perception of a length and it can even be recorded, but that is the same as when blue color changes under yellow light (it can also be recorded).
The fact is that the object does not change its color. It is the perception that changes.)
The change in frequency is recorded, and is not anything to do with different apparent color perceived under different lights. If the color is a pure color, of a single wavelength, it will NOT change apparent color under different lighting colors. That only occurs for objects reflecting a range of wavelengths, because it changes the relative amount of light perceived at different wavelengths.
Quote:
And by the way, if you take in account my idea about the absolute and perceptive universal values, you'll see that Speed is based on the perceptive value of Time, and cannot have absolute value, like Einstein's claim about the speed of light.
Quote:
Are you really saying that not having any physical way of estimating the passage of time in a universe with only two non-colliding objects would have some actual significance??
I cannot think of greater significance than the fact that we cannot measure Time (since we use Time to measure Time). Are you saying that this is not significant at all!?
We do not "use Time to measure Time".
We use all sorts of oscillating physical mechanisms which we have no reason to believe, from both theory and observation, should vary in frequency, from pendulums to atoms.
The fact that the results from all these sources all agree as closely as we could expect from the precision of their construction, confirms that they are useful for measuring time duration.
Whatever they are actually measuring, the results we get in all kinds of experiments all point to their measuring a version of time that produces consistent results across a vast number of observations and experiments.
Quote:
Your time idea is an empty triviality based on a total misunderstanding of the science involved.
I hope you may come to really understand the nature of time, but I am not going to hold my breath waiting, since you do seem to get stubbornly attached to your odd little theories.
At least you seem to have changed your description of sound from the way you had it in the first response I read, so as to better match my description. Thank you for correcting that error at least. Maybe there is hope for you to achieve scientific 'enlightenment' yet.
Definition of Sound: Vibrations transmitted through an elastic solid or a liquid or gas, with frequencies in the approximate range of 20 to 20,000 hertz, capable of being detected by human organs of hearing.
I agreed with your definition because it doesn't change the subject of my argument - sound is vibrations, or a "pressure wave" (if you like that better), with certain frequencies.
Which supports my idea that SOUND has perceptive value.
---
Who has ever denied sound waves can be perceived? Are you seriously claiming you have something new or original there???
"Transmitted through" - in the case of liquids and gases, that transmission is via longitudinal pressure waves.
If you are still not accepting the FACT that "pressure waves" is the most accurate and widely-used description of the mechanism of transmission of sound through liquids and gases, you are being silly and stubborn.
Quote:
I understand the embarrassment of a woodworker exposing your logical fallacy, but it would be even more embarrassing one day to find out that the woodworker was right while you were calling him a fool.
Intelligent people know that an "ad hominem" argument can easily turn against you.
And just to remind you, you still haven't commented on the points in my article.
Why do you think that Time is not the relation between two or more events?
Why do you think that we cannot measure Time in my thought experiments, even in the one in which we have an event?
You have still not acknowledged the many factual and logical errors I have pointed to in your posts. The nearest you have come is to rephrase the statement into something closer to the facts and then pretend it is just an alternative way of saying the same thing, and you just did it to make it easier for dumb old me to understand.
You have not shown any errors in my posts, you have simply refused to accept any errors on your part.
I have commented on the substance of your errors in the OP, which I assume you mean when you say "article", although it hardly justifies that description in either content or length. When you say "article" I expect to see a link to a longer description, but I can see none.
Time is the background against which all changes in observed reality, whether change of state of an object in the same place, or by movement, are measured and perceived.
Your distinction between ABSOLUTE and PERCEIVED is not useful, since there is no actual absolute time, and perception of time is an aspect of psychology, neuroscience and cognitive studies; it is clearly separate from any issues with time in physical measurements and observation.
Perception of time would be affected by relativistic effects if sufficiently large, but no-one has experienced velocities or gravitational fields large enough to be noticeable to our direct perception.
Your posts are just a collection of errors.
Favorite oxymorons: Gospel Truth, Rational Supernaturalist, Business Ethics, Christian Morality
"Theology is now little more than a branch of human ignorance. Indeed, it is ignorance with wings." - Sam Harris
The path to Truth lies via careful study of reality, not the dreams of our fallible minds - me
From the sublime to the ridiculous: Science -> Philosophy -> Theology
Truden
Posts: 170
Joined: 2008-01-25
Offline
BobSpence1 wrote:
We do not "use Time to measure Time".
Bob, I don't know whether you are a scientist, but if you are, that only shows how childishly illogical science can be.
You are actually right in the above statement, but that is according to my explanation of Time.
Yes we don't use Time to measure Time, because there is no such thing as Time.
We use an event to "measure" another event.
But then, if we say that we don't use Time to measure Time, the differences between the measuring tools do not prove a different Time.
They only prove that the measuring tool is affected by motion, gravity or something else.
Do you get my point, Bob?
Scientists like to refer to the GPS usage of the theory of relativity, but note that a 10% error does not give us much of a proof. (All measurements are arguable.)
Adding to that, since science does not measure Time with Time, we logically arrive at the conclusion that the apparent difference is not in Time but in the measurement.
(In regard to the time-delay measurement for the GPS system, I'd like to say that I'm not a mathematician or physicist and therefore cannot argue with the way science calculates the GPS time delay, but there are people who seem to know mathematics and they claim that there is another way to calculate it.
Anyway, I'm not using this as a supporting argument. It is only an interesting, entertaining fact.)
Bob wrote:
Your distinction between ABSOLUTE and PERCEIVED is not useful, since there is no actual absolute time, and perception of time is an aspect of psychology and neuroscience and cognitive studies, it is clearly separate from any issues with time in physical measurements and observation.
Bob, you cannot even comprehend my idea, and yet you are trying to argue against it.
"PERCEPTIVE value" does not refer to the way we perceive things, but to their conceptual values.
Let me make it clear one more time for you.
It doesn't matter how we perceive "sweet" (very sweet or a little sweet). "Sweet" has perceptive value because it is created in the relation between the mind and the object.
On the other hand, the object which is in interaction with the mind (the interaction in which "sweet" was created) has absolute value, because it exists regardless of the interaction. The cause of its existence is not in our mind, therefore - ABSOLUTE value.
Bob wrote:
Who has ever denied sound waves can be perceived? Are you seriously claiming you have something new or original there???
Not new, just a misinterpretation on your side.
A pressure wave and sound are not one and the same thing.
A "pressure wave" has absolute value, because its cause is not in the mind.
"Sound" has perceptive value, because it is created in the interaction of a certain frequency of the pressure wave with the mind.
BobSpence
Posts: 5785
Joined: 2006-02-14
Online
Your points and arguments are not worth 'getting'.
Your stubborn ignorance, coupled with your condescending attitude to my remarks, are really tedious.
It seems we are not going to really make contact here.
I see you as deeply and arrogantly deluded and confident in your delusion and misunderstanding.
You obviously see me as deeply mistaken - I see no way to break through your stubborn ignorance.
Goodbye.
Truden
Posts: 170
Joined: 2008-01-25
Offline
Point of exit
BobSpence1 wrote:
Your points and arguments are not worth 'getting'.
Your stubborn ignorance, coupled with your condescending attitude to my remarks, are really tedious.
It seems we are not going to really make contact here.
I see you as deeply and arrogantly deluded and confident in your delusion and misunderstanding.
You obviously see me as deeply mistaken - I see no way to break through your stubborn ignorance.
Goodbye.
I'm deeply sorry, Bob, and I apologize for my little game with you.
I thought it would be fun for you.
What I'm going to say now I could have said at the beginning, but I wanted to hear all the arguments against my idea before making my point clear.
I actually came for your help, to check with you the consistency of my idea (and to have fun, of course).
I had to disagree with you on some points where I actually agree, just to make sure that my understanding is scientifically correct.
Now, let's get to the point of exit.
---
I said that:
The relation between two or more events is what we call Time.
Every measuring tool uses an event to "measure" Time.
We both agree on that.
We don't use Time to measure Time, we use events.
In that respect my definition is correct - we relate one event to another event in order to "measure" Time.
What you don't see in the picture is that by putting the measuring tool in relative motion, we create an event which is different from the event where the measuring tool is at relative rest.
We cannot expect these two events to have equal values.
The tool (which is actually an event) is identical as object and process, but it takes part in two different events.
All events in the chain of events depend on each other.
If we create two chains of events, the dependencies in them will be different and the results will be different.
Not having the above in mind, we arrive at the absurdity of using two different events (chains of events) to measure an imaginary value which we call Time.
And because the results differ, we falsely conclude that the Time is different.
I know that some opponents will argue about the word "imaginary", but since we don't have an empirically presented subject, I advise you not to take it on faith.
I'm not against science.
I respect it and I'm amazed by its achievements, but there are some flaws in it, and one of them is the interpretation of Time.
Once again, Bob, I apologize for bringing up in you some unpleasant feelings and emotions.
That wasn't my intent.
BobSpence
Posts: 5785
Joined: 2006-02-14
Online
Truden wrote:
BobSpence1 wrote:
Your points and arguments are not worth 'getting'.
Your stubborn ignorance, coupled with your condescending attitude to my remarks, are really tedious.
It seems we are not going to really make contact here.
I see you as deeply and arrogantly deluded and confident in your delusion and misunderstanding.
You obviously see me as deeply mistaken - I see no way to break through your stubborn ignorance.
Goodbye.
I'm deeply sorry, Bob, and I apologize for my little game with you.
I thought it would be fun for you.
What I'm going to say now I could have said at the beginning, but I wanted to hear all the arguments against my idea before making my point clear.
I actually came for your help, to check with you the consistency of my idea (and to have fun, of course).
I had to disagree with you on some points where I actually agree, just to make sure that my understanding is scientifically correct.
Now, let's get to the point of exit.
---
I said that:
The relation between two or more events is what we call Time.
Every measuring tool uses an event to "measure" Time.
We both agree on that.
We don't use Time to measure Time, we use events.
In that respect my definition is correct - we relate one event to another event in order to "measure" Time.
What you don't see in the picture is that by putting the measuring tool in relative motion, we create an event which is different from the event where the measuring tool is at relative rest.
We cannot expect these two events to have equal values.
The tool (which is actually an event) is identical as object and process, but it takes part in two different events.
All events in the chain of events depend on each other.
If we create two chains of events, the dependencies in them will be different and the results will be different.
Not having the above in mind, we arrive at the absurdity of using two different events (chains of events) to measure an imaginary value which we call Time.
And because the results differ, we falsely conclude that the Time is different.
I know that some opponents will argue about the word "imaginary", but since we don't have an empirically presented subject, I advise you not to take it on faith.
I'm not against science.
I respect it and I'm amazed by its achievements, but there are some flaws in it, and one of them is the interpretation of Time.
Once again, Bob, I apologize for bringing up in you some unpleasant feelings and emotions.
That wasn't my intent.
OK, I'll give you another chance.
Please show a little less arrogance and be prepared to admit where you may have actually got some things wrong.
You still misunderstand Einstein's theory. Of course he took the normal problems of measuring time intervals between events in two differently moving frames of reference into account - these are mainly due to the effects of the finite speed of light, which is how we observe the apparent passage of time as indicated on clocks in each frame.
There is no actual physical measuring tool involved to measure velocities, only optical measurements, imagined to be based on things like surveyors instruments.
In the Special Theory he was working through the implications of the Michelson-Morley experiments which showed that light appears to travel at a constant speed regardless of the motion of the frame of reference - in their case, the orbital motion of the Earth at different times of the year.
The formula for the apparent spatial contraction and time dilation of a frame of reference moving at a constant speed relative to the observer was formulated so as to get the result that, when we observed someone measuring the speed of light in the other frame, it would always come out to the same value as we would get measuring the same beam of light, thus matching the results of the MM experiment.
The results of this calculation are that even after correcting for the normal direct effects of the speed of light on the time we see something happening in other frame, the time interval we measure between two events, one in our frame, one in the one moving relative to us, is going to depend on the relative velocity of the two frames, so there is no absolute time which is independent of the motion of the observer.
Light is the basic measuring 'tool' for speed and distance in all this. The precise measure of the local passage of time is usually done by counting the cycles of a signal at the natural resonance frequency of caesium atoms in an atomic clock. The time interval between two events is measured by counting how many oscillations occur between when we observe one event and the other. If they are in frames of reference in relative motion with respect to each other, we measure the relative motion with optical instruments and our clock, and apply the Einsteinian corrections.
The General Theory extended the Special Theory to allow for the effects of gravitational fields, and varying relative velocity (in magnitude and direction). Part of this included the equivalence principle, which states that it is not possible for observers to distinguish between being in a box accelerating at a constant rate, and in a box suspended in a uniform gravitational field strong enough to cause the same acceleration on freely falling objects.
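To make the kinematic part of the above concrete, here is a minimal sketch of the Lorentz factor that governs the time dilation described (the sample speeds are purely illustrative):

```python
import math

def gamma(v, c=299_792_458.0):
    """Lorentz factor for relative speed v in m/s."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

def dilated_interval(proper_interval, v, c=299_792_458.0):
    """Interval an observer measures for a clock moving at speed v,
    given the interval the clock itself records (its proper time)."""
    return proper_interval * gamma(v, c)

c = 299_792_458.0
# The effect is tiny at everyday speeds and grows without bound near c:
for frac in (0.1, 0.5, 0.9):
    print(f"v = {frac:.1f}c  ->  gamma = {gamma(frac * c):.4f}")
```

At 0.5c a moving clock's ticks are measured as about 15% longer; at 0.9c more than twice as long, which is the "time interval depends on the relative velocity" result above.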
Now that is an outline of the current thoroughly tested aspects of Einstein's Theories.
You cannot just dismiss any of that without a far more carefully thought-through hypothesis, with the appropriate mathematics, than I have seen from you so far. You have not argued remotely adequately to show flaws in the current theory, as I just outlined it. You have mainly demonstrated the flaws in your understanding of proven science.
Care to have another go, and specifically point out what you see as the flaw(s) in the description I just gave? Remember, time is measured by precisely measuring the natural oscillation frequency of atoms in an atomic clock, speed and distance using optical instruments.
If you are doing your reasoning and calculations carefully, you are doing science, and any flaws you feel you have uncovered are not flaws in science; they are (if demonstrated) flaws in the current interpretation of Time by most scientists. There are scientists exploring far more elaborate and strange interpretations of Time and Space than you are here, such as in String Theory, so don't rubbish Science - it's the only tool we have for coming to grips with reality with our individual prejudices, biases and sensory limitations filtered out.
Scientists welcome fresh theories, but they have to be carefully defined so that other scientists can understand what you are trying to say.
Truden
Posts: 170
Joined: 2008-01-25
Offline
BobSpence1 wrote:
OK, I'll give you another chance.
Thank you, Bob.
I appreciate it
---
I'm afraid you didn't understand my previous comment, or most likely I wasn't clear enough.
I don't doubt the results of Einstein's theory.
What I'm saying is that he wrongly interpreted Time.
As I said, I agree that we measure time with events.
Not necessarily with tools (as I pointed out, all time-measuring tools are events).
Yes, we use light to measure time, but when it is used from two different frames of reference, the event of the travelling light becomes part of two different chains of events (complex events) and of course returns different values.
That is why I said in my previous comment that all events in the chain of events are dependent on each other.
If we change one of the events in the chain, we get different results for our measurements.
And this is exactly what we do by changing the frame of reference.
BobSpence
Posts: 5785
Joined: 2006-02-14
Online
Truden wrote:
BobSpence1 wrote:
OK, I'll give you another chance.
Thank you, Bob.
I appreciate it
---
I'm afraid you didn't understand my previous comment, or most likely I wasn't clear enough.
I don't doubt the results of Einstein's theory.
What I'm saying is that he wrongly interpreted Time.
As I said, I agree that we measure time with events.
Not necessarily with tools (as I pointed out, all time-measuring tools are events).
Yes, we use light to measure time, but when it is used from two different frames of reference, the event of the travelling light becomes part of two different chains of events (complex events) and of course returns different values.
That is why I said in my previous comment that all events in the chain of events are dependent on each other.
If we change one of the events in the chain, we get different results for our measurements.
And this is exactly what we do by changing the frame of reference.
He did not wrongly interpret time, he applied an interpretation that works well in the context of current science, and so far has not been found to have any discrepancies when compared against observations. That is all we can ask of any theory.
Now if you want to try a different way to study time, and can demonstrate that it provides some useful new insights, then fine, go ahead.
Actually, I was a bit sloppy in that post. We don't use light to measure time; we use light to measure the distances between different objects and their directions. Their speed can then be measured by observing them at two times close together.
We measure time by counting the number of ticks of a mechanical clock, or the vibrations of a quartz crystal, or the oscillations of the atoms in an atomic clock, between one event and another. We use light to observe the events we are timing.
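For example, since the SI second is defined as exactly 9,192,631,770 cycles of the caesium-133 hyperfine transition, "measuring" an interval with an atomic clock really is just counting cycles between the two observed events - a trivial sketch:

```python
# The SI second is defined as exactly this many cycles of the
# caesium-133 hyperfine transition frequency.
CS_HZ = 9_192_631_770

def interval_seconds(cycle_count):
    """Duration, in seconds, spanned by a given number of caesium cycles
    counted between observing one event and observing the other."""
    return cycle_count / CS_HZ

# One full second's worth of cycles:
print(interval_seconds(9_192_631_770))  # -> 1.0
# A single cycle, i.e. the finest resolution of the raw count (~0.1 ns):
print(interval_seconds(1))
```

Nothing here presumes what time "really is"; the clock just converts an event count into a duration.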
The events in a sequence are not necessarily dependent on each other - I don't quite see what you mean by that. All we need for this discussion is two separate events, plus the moving dial or counters of the clock, plus our surveyor scope to measure the position of each event.
If two events we are comparing or timing are in differently moving frames of reference, we have to at least correct for the time it will take light to reach us from each event.
If a third observer compares the times of events in two other frames of reference which are moving at different velocities, he will get different results, even after correcting for the time it will take for light to reach him from each event, depending on his velocity, observing exactly the same two events. This is the consequence of Einstein's Special Relativity. Do you understand and accept this?
Truden
Posts: 170
Joined: 2008-01-25
Offline
BobSpence1 wrote:
If two events we are comparing or timing are in differently moving frames of reference, we have to at least correct for the time it will take light to reach us from each event.
If a third observer compares the times of events in two other frames of reference which are moving at different velocities, he will get different results, even after correcting for the time it will take for light to reach him from each event, depending on his velocity, observing exactly the same two events. This is the consequence of Einstein's Special Relativity. Do you understand and accept this?
I understand that the event is actually one and is observed from two different frames of reference.
Am I correct?
When we "measure time" we relate the measuring event to the measured event.
The misinterpretation comes from the fact that we treat the measuring event as an unchangeable value, because of its precision.
The precision doesn't matter here.
What matters is that for us Time is the relation between the measuring event and the measured event.
In our case the measuring event is performed from two different frames of reference, which changes its relation to the measured event.
We have two different sets of events here:
1) the measuring event in motion (first frame of reference) in relation to event A (the measured event)
2) the measuring event at rest (second frame of reference) in relation to event A (the measured event)
These two are not the same type of event set, and we cannot compare them in order to extract a "difference in Time".
The first one includes relative motion, while the second one includes relative rest, which will result in different relations, and that is where the difference between the results comes from.
Truden
Posts: 170
Joined: 2008-01-25
Offline
BobSpence1 wrote:
The events in a sequence are not necessarily dependent on each other - I don't quite see what you mean by that
Sorry, I missed that in my previous comment.
The events in a sequence are cause and effect (result).
I don't see how they "are not necessarily dependent".
The measuring event is in motion, and the motion becomes part of the cause of the returned result.
We can check this by putting two clocks on two airplanes.
BobSpence
Posts: 5785
Joined: 2006-02-14
Online
Truden wrote:
BobSpence1 wrote:
If two events we are comparing or timing are in differently moving frames of reference, we have to at least correct for the time it will take light to reach us from each event.
If a third observer compares the times of events in two other frames of reference which are moving at different velocities, he will get different results, even after correcting for the time it will take for light to reach him from each event, depending on his velocity, observing exactly the same two events. This is the consequence of Einstein's Special Relativity. Do you understand and accept this?
I understand that the event is actually one and is observed from two different frames of reference.
Am I correct?
Of course not.
A single event is still seen as a single event from every frame of reference, and of course you need at least two events to mark a time interval to be measured.
What did you think we were going to measure with one event?
Quote:
When we "measure time" we relate the measuring event to the measured event.
No.
We count the number of standard timing events (such as clock ticks) that occur between the observations of the two observed events.
Quote:
The misinterpretation comes from the fact that we treat the measuring event as an unchangeable value, because of its precision.
The precision doesn't matter here.
What matters is that for us Time is the relation between the measuring event and the measured event.
You can't measure anything with a single measuring event.
The measuring system has to be something generating a regular sequence of (measuring) events.
To measure precisely, there are two requirements:
1. The process determining the time interval between one 'tick' and the next should be as simple and as unaffected by external conditions as possible;
2. That interval should be as short as possible, since it directly defines the smallest difference in timing that can be measured.
Quote:
In our case the measuring event is performed from two different frames of reference, which changes its relation to the measured event.
We have two different sets of events here:
1) the measuring event in motion (first frame of reference) in relation to event A (the measured event)
2) the measuring event at rest (second frame of reference) in relation to event A (the measured event)
These two are not the same type of event set, and we cannot compare them in order to extract a "difference in Time".
The first one includes relative motion, while the second one includes relative rest, which will result in different relations, and that is where the difference between the results comes from.
That doesn't seem to match how we actually would test Einstein's theory.
The standard test depends on having two clocks running at the same rate.
This can be checked by simply letting them run side-by-side over a longish time and seeing if their tick-counters keep registering the same total count.
Then you put one clock in something moving, and compare its counter reading with the one that stayed back with the observer.
There is no actual measured event as such. Just two sequences of measuring events (clock ticks).
You could send the two clocks out, each in its own frame of reference, and compare the two counters as observed from the one point of observation, which is theoretically more valid, in that the only difference between the two clocks is their motion.
You can't measure any time with a single 'measuring' event - all you can say with one event is whether another event occurs before, at the same time as, or after it.
You need a clock of some sort, ie a regular sequence of measuring events.
So, actually you are correct, you cannot do a valid test with what you describe. So that is not how we actually test this.
You don't have single measured and measuring events, you have two clocks, which each generate a rapid sequence of events very close together in time. We have to be confident that the two clocks will always tick at exactly the same rate, which can be tested to a useful degree by comparing them against each other and other clocks or natural sources of oscillation such as vibrating atoms.
This is the key point, which means we cannot strictly compare the timing of two events directly and absolutely, we have to rely on constructing clocks which will keep in step very precisely under varying external conditions, such as temperature and gravity and acceleration forces. But this can be tested to increase our confidence, and we can run the experiment repeatedly to see if we get consistent results, which is exactly what has been done.
So it seems the problem you have identified has already been identified - it is actually pretty obvious when you think about it: you have to rely on accurate clocks.
It doesn't even rely on the clocks accurately measuring 'real' time, it just relies on the two clocks keeping in step, at least when in the same frame of reference and gravity.
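A toy simulation of that tick-counter comparison (the speed, duration and tick rate are illustrative assumptions, and the factor 1/gamma simply stands in for the relativistic slowing of the moving clock as the observer measures it):

```python
import math

def tick_count(duration_s, tick_hz, rate_factor=1.0):
    """Number of ticks a clock's counter registers over an
    observer-frame duration. rate_factor < 1 models a clock whose
    ticks the observer measures as slowed (e.g. 1/gamma for a clock
    in relative motion)."""
    return round(duration_s * tick_hz * rate_factor)

c = 299_792_458.0
v = 0.6 * c                         # illustrative relative speed
rate = math.sqrt(1 - (v / c) ** 2)  # 1/gamma = 0.8 at 0.6c

stay_home = tick_count(100.0, 1000)        # reference clock's counter
traveller = tick_count(100.0, 1000, rate)  # moving clock, as observed

print(stay_home, traveller)  # the counters disagree: 100000 vs 80000
```

The comparison needs no notion of "real" time at all: identical clocks, one set in motion, end up with different counts, exactly as in the paragraph above.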
BobSpence
Posts: 5785
Joined: 2006-02-14
Online
Truden wrote:
BobSpence1 wrote:
The events in a sequence are not necessarily dependent on each other - I don't quite see what you mean by that
Sorry, I missed that in my previous comment.
The events in a sequence are cause and effect (result).
I don't see how they "are not necessarily dependent".
The measuring event is in motion, and the motion becomes part of the cause of the returned result.
We can check this by putting two clocks on two airplanes.
Ok, I may have misinterpreted what you had in mind.
But still, the result of the moving clock is not dependent on what happens to the reference clock.
The possibility that the motion will affect the moving clock is what we are looking for.
And it has been done with clocks on two different planes. All of these experiments have confirmed Einstein's formulas.
As per the post I just put up, the actual experiment is done by comparing clocks, which I see may be what you had in mind.
So what is your problem with it? Are you saying we can't be sure the relative motion is what is causing the difference in the tick rate of the clocks?
Or are you suggesting the clocks are not measuring 'real' Time?
Truden
Posts: 170
Joined: 2008-01-25
Offline
BobSpence1 wrote:
Ok, I may have misinterpreted what you had in mind.
But still, the result of the moving clock is not dependent on what happens to the reference clock.
The possibility that the motion will affect the moving clock is what we are looking for.
And it has been done with clocks on two different planes. All of these experiments have confirmed Einstein's formulas.
As per the post I just put up, the actual experiment is done by comparing clocks, which I see may be what you had in mind.
So what is your problem with it? Are you saying we can't be sure the relative motion is what is causing the difference in the tick rate of the clocks?
Or are you suggesting the clocks are not measuring 'real' Time?
I'll skip your previous comment and I'll make an effort on this one.
I'm not saying that we can't be sure the relative motion is what is causing the difference in the tick rate of the clocks.
On the contrary - I'm VERY SURE that the relative motion affects the clocks.
Yes, Einstein's formulas are right.
I said:
Truden wrote:
What you don't see in the picture is that by putting the measuring tool in relative motion, we create an event which is different from the event in which the measuring tool is at relative rest.
We cannot expect these two events to have equal values.
The tool (which is actually an event) is identical as an object and a process, but it takes part in two different events.
All events in the chain of events depend on each other.
If we create two chains of events, the dependency in them will be different and the result will be different.
You said:
Bob wrote:
The events in a sequence are not necessarily dependent on each other - I don't quite see what you mean by that.
Truden wrote:
The events in sequence are cause and effect (result)
I don't see how they "are not necessarily dependent".
The measuring event is in motion, and the motion becomes part of the cause for the returned result.
We can check this by putting two clocks on two airplanes
So, to compact it with the same answers:
By observing the relation between two events from two different frames of reference, we create two completely different lines of events.
In each line, the relations between the events are different from those in the other line, and the events depend on each other within their line.
The measuring event (it is an event, because the ticking of the clock is an event) depends on the relative speed, and obviously gives a different value.
We cannot compare any value in the two lines to prove different Time. It is like comparing apples with pears.
By comparing the values we can only prove that the clocks are running differently because they are affected by the speed and are therefore in a different relation with the measured events.
To put it in one sentence: The measuring event is affected, not the measured one.
liberatedatheist
Posts: 137
Joined: 2009-12-08
Offline
If I can jump in real quick,
If I can jump in real quick, I think Truden is using some really ambiguous language that might make it seem like he is saying something different from what he means.
truden wrote:
By observing the relation between two events from two different frames of reference, we create two completely different lines of events.
This is true according to Einstein's perception of time as well. Two events that appear simultaneous to one observer in his rest frame will appear to take place at two different times to a second observer who has a nonzero velocity relative to the first observer.
truden wrote:
In each line the relations between the events are different from the other line and they depend on each other in their line.The measuring event (it is event, because the ticking of the clock is event) depends on the relative speed, and obviously gives different value.
I'm going to try to restate this in a way that makes sense. "In each [frame of reference] the [order of events] is different from the [order of events in the other frame of reference] and [the events] depend on each other in their [frame of reference]. The measuring event (I think you mean the standard of measure used to give a specific time value, as in one tick of a clock would be the measuring event) depends on [its] relative speed, and obviously [will record] different [time] values."
The two events do not depend on each other; that implies some sort of force or interaction. The order of events as perceived by an observer will change if the observer changes his frame of reference. We can make the events observers in themselves, so the order of events as observed by one event will be different from the order of events as perceived by the other event if the events are moving relative to one another. If we have two identical clocks that are both at rest relative to one another, they will tick exactly in sync. As soon as one clock is put into motion while the other stays at rest, both clocks will immediately start ticking out of sync. How far off they will be depends on how fast one is moving relative to the other. Both clocks will perceive that the other clock changed while they stayed the same. The order of events depends on the reference frame that you choose to be in, but not on the events themselves, which is hopefully what you were trying to say.
truden wrote:
We cannot compare any value in the two lines to prove different Time. It is like to compare apples with pears. By comparing the values we can only prove that the clocks are running differently because they are affected by the speed and therefore they are in different relation with the measured events.
The clocks will register different time values if they are moving relative to one another as long as they measured the same time value when they were at rest relative to one another. This difference proves that time "moves" faster or slower in one frame of reference that is moving relative to another frame of reference. Each frame of reference will be equally correct and neither will be preferred by the laws of physics.
truden wrote:
To put it in one sentence: The measuring event is affected, not the measured one.
Each event will observe the other event to have changed (slowed down or sped up; which one depends on the frame of reference you are in), which is what I think you are trying to say.
To bring up the twin paradox: put my twin on a rocket. While the rocket is at rest we are both aging at the same rate. Now accelerate his rocket close to the speed of light. My twin will not age any differently relative to his normal perception. For him, in his frame of reference, time will not have changed. The same thing applies to me; I will not perceive any difference. But if I observe my twin years later, after he has slowed down, he will appear to be and actually be younger than me, because he "passed through" less time than me. Relative to my frame of reference, time slowed down for him. Relative to his frame of reference, time has sped up for me and I will appear older. How much older depends on how fast he was going relative to me.
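The twin scenario above can be put in numbers with the special-relativistic time-dilation relation: the traveller ages t/γ for every t of Earth time, with γ = 1/sqrt(1 - v²/c²). A minimal Python sketch, using made-up illustrative values (0.8c, a 30-year Earth-frame trip) and ignoring the acceleration phases:

```python
import math

def lorentz_gamma(v_frac):
    """Lorentz factor for a speed given as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - v_frac ** 2)

# Illustrative values only: twin travels at 0.8c for 30 years of Earth time.
earth_years = 30.0
v = 0.8

gamma = lorentz_gamma(v)

# Proper time elapsed for the travelling twin (acceleration phases ignored).
traveller_years = earth_years / gamma

print(gamma)            # about 1.667
print(traveller_years)  # about 18 years aged by the travelling twin
```

At 0.8c the factor works out to 5/3, so the traveller comes home having aged 18 years against the stay-at-home twin's 30, consistent with the description above.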
I Am My God
The absence of evidence IS evidence of absence
Truden
Posts: 170
Joined: 2008-01-25
Offline
to liberated atheist
@liberated atheist
Thank you for trying to help me, but in this case you need help to understand me.
Read EXACTLY what I've written and try to understand it.
Ask what I mean when I say something. Don't restate it.
If I say that the relations in both event lines are different I mean exactly that.
The order of the events doesn't matter anymore because the events are not the same.
In one of the frames we have motion at a different speed, and that is not the same event as in the other frame.
Quote:
the two events do not depend on each other, that implies some sort of force or interaction
Every event is created by a certain force, and every next event in the line depends on that force.
To say it simply, so that everyone can understand it, I'd say:
The more kinetic energy, the more fucked up the clock is.
We can say this because the kinetic energy is a result of an event, and if we follow the event line we can see the following:
different speed -> different kinetic energy for the clocks -> different measurement.
So, not different TIME, because the Time "is" in the measured event, not in the clock.
BobSpence
Posts: 5785
Joined: 2006-02-14
Online
You still don't quite get
You still don't quite get it.
In the case of an observer watching two different clocks moving at different velocities relative to his frame, he will see the clocks running at different speeds, the clock in the faster-moving frame appearing to run slower. There is no 'measured' and 'measuring' event; there are just two clocks, with the count of ticks on one frame appearing to fall behind the clock on the other.
If we have an observer on each of two frames of reference moving with respect to each other, each observer will see the other clock running slower than the one beside them. There is no real difference between the two frames from the viewpoint of a third observer. If the other two frames are moving at the same speed relative to the third observer, their clocks will appear to be ticking at the same rate, and slower than one beside the third observer. This is at the same time as each observer on the original moving frames will see the clock on the other running slower than the one on their own frame.
Do you understand? Each clock on the original pair of frames is experiencing the same velocity with respect to the other. There is no meaning to saying one is moving and one is not. There is no absolute velocity. It seems to be a paradox - it is impossible for each clock in a pair to really run slower than the other.
General relativity resolves this paradox. The problem in Special Relativity is that you cannot start with two synchronised clocks, which are then put on two different aircraft, flown at different speeds on different tracks, and then brought back together in a common frame of reference to compare them, without subjecting them to many changes of velocity, which means we cannot use special relativity alone to calculate what time (tick count) they will show at the end of the process. Unless you can bring the two clocks back together, there is no actual paradox.
Every change in velocity means they are subjected to acceleration, which is not relative, and 'really' slows down the clock. If we do the calculations taking acceleration into account, the paradox disappears: the clock subject to the most acceleration will have fallen behind the other.
So actually you are partially correct, but it is not 'kinetic energy', which is a function of velocity squared, that causes the final time discrepancy; it is acceleration/deceleration. IOW, change in velocity, not velocity itself. The one subject to the most velocity change will be the one that has slipped behind when you bring them back together.
It is the cumulative count of successive 'events' in the form of ticks which is being compared - the number of such events is the measure of effective time duration.
The only causality which is ultimately relevant here is the effect of acceleration ('g-forces') on the rate at which time appears to pass, as measured by some regular physical process such as a spring and balance wheel, or vibrating atoms. All theory and experiment confirm that the effect applies equally to every object experiencing that acceleration.
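The point above, that the clock subjected to the most velocity change falls behind, can be illustrated by numerically accumulating proper time, dτ = dt·sqrt(1 - v²/c²), along each clock's velocity history as seen from a single inertial frame. The velocity profile below is invented purely for illustration:

```python
import math

def proper_time(velocity_profile, dt):
    """Accumulate proper time d(tau) = dt * sqrt(1 - (v/c)^2) along a
    coordinate-time velocity history (v given as a fraction of c)."""
    return sum(dt * math.sqrt(1.0 - v ** 2) for v in velocity_profile)

dt = 0.01       # coordinate-time step, arbitrary units
steps = 10000   # total coordinate time of 100 units

# Clock A stays at rest in the observer's frame throughout.
at_rest = [0.0] * steps

# Clock B ramps up to 0.6c, cruises, then ramps back down to rest
# (a made-up out-and-back style profile, purely for illustration).
ramp = steps // 10
cruise = steps - 2 * ramp
moving = ([0.6 * i / ramp for i in range(ramp)]
          + [0.6] * cruise
          + [0.6 * (ramp - i) / ramp for i in range(ramp)])

tau_rest = proper_time(at_rest, dt)
tau_moving = proper_time(moving, dt)

print(tau_rest)    # 100 units: at rest, proper time matches coordinate time
print(tau_moving)  # noticeably less: the clock that changed velocity fell behind
```

Integrating dτ along an accelerated worldline in one inertial frame is valid special relativity; the sketch just makes concrete that the clock with the velocity changes accumulates fewer ticks by the time both histories end at rest.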
Truden
Posts: 170
Joined: 2008-01-25
Offline
BobSpence1 wrote:You still
BobSpence1 wrote:
You still don't quite get it.
In the case of an observer watching two different clocks moving at different velocities relative to his frame, he will see the clocks running at different speeds, the clock in the faster-moving frame appearing to run slower. There is no 'measured' and 'measuring' event; there are just two clocks, with the count of ticks on one frame appearing to fall behind the clock on the other.
Bob, I am not that stupid, but with no offence I have to say that you are disappointing me.
How can you say that the tick of the clock is not an event!?
One tick is one event.
Many ticks are one reappearing event, or many events with the same cause, which (events) appear at the same rate, if you like that better.
We measure Time by relating one reappearing event to the measured event.
We can measure the event motion (which is a single event) by counting one reappearing event during our observation of the motion, and we could say that the motion appeared to us for the time (number) of 100 ticks of the clock.
These 100 ticks come as a result of relating one reappearing event to the event "motion".
Now is it clear to you that we MEASURED the event "motion" with the event "tick"? The result we call Time and use it in expressions like "for the time of 100 seconds", which must tell you that the second is not Time.
The relation between the reappearing second and the other event is what we call Time.
An observer does the same as the measuring devices do - he relates one event to another.
An observer can see that one reappearing event does not match another reappearing event, which will only prove that they are running at different rates.
Why they run at different rates is another question.
I don't know what you mean by saying "the observer will SEE", when explaining the different observing results in different frames of reference.
Is this a conclusion taken out of a theory, or was such an experiment conducted and all observers asked what they saw?
I think that we need measuring here. Don't you think so?
But when we measure, we come to the point of different kinetic energy in the different frames of reference, which fucks up the clocks
BobSpence
Posts: 5785
Joined: 2006-02-14
Online
Truden wrote:BobSpence1
Truden wrote:
BobSpence1 wrote:
You still don't quite get it.
In the case of an observer watching two different clocks moving at different velocities relative to his frame, he will see the clocks running at different speeds, the clock in the faster-moving frame appearing to run slower. There is no 'measured' and 'measuring' event; there are just two clocks, with the count of ticks on one frame appearing to fall behind the clock on the other.
Bob, I am not that stupid, but with no offence I have to say that you are disappointing me.
How can you say that the tick of the clock is not an event!?
One tick is one event.
Many ticks are one reappearing event, or many events with the same cause, which (events) appear at the same rate, if you like that better.
We measure Time by relating one reappearing event to the measured event.
We can measure the event motion (which is a single event) by counting one reappearing event during our observation of the motion, and we could say that the motion appeared to us for the time (number) of 100 ticks of the clock.
These 100 ticks come as a result of relating one reappearing event to the event "motion".
Now is it clear to you that we MEASURED the event "motion" with the event "tick"? The result we call Time and use it in expressions like "for the time of 100 seconds", which must tell you that the second is not Time.
The relation between the reappearing second and the other event is what we call Time.
An observer does the same as the measuring devices do - he relates one event to another.
An observer can see that one reappearing event does not match another reappearing event, which will only prove that they are running at different rates.
Why they run at different rates is another question.
I don't know what you mean by saying "the observer will SEE", when explaining the different observing results in different frames of reference.
Is this a conclusion taken out of a theory, or was such an experiment conducted and all observers asked what they saw?
I think that we need measuring here. Don't you think so?
But when we measure, we come to the point of different kinetic energy in the different frames of reference, which fucks up the clocks
I did not say a clock tick is not an event. But to concentrate on the individual events and their 'relationship' in the way you seem to is to not see the forest for the trees.
It is the continually changing count of ticks that is the measure of elapsed time, not the 'motion' of any event, which doesn't quite make sense, because you are using 'event' in an everyday sense, not the way it is used in Physics. I hope you see this is one reason why a scientist will find your account confusing.
The only 'motion' involved is the changing counter or dial of the clock.
One second is a standard measure of the dimension of Time, just as one meter is a standard measure of spatial dimension.
Clocks are designed so that the rate of ticking is as consistent as possible, and as high as possible relative to how accurately we wish to measure Time. They can only measure time in terms of the number of discrete ticks ( 'tick events' if you like ). Beyond establishing that the rate at which ticks occur is as consistent as possible, the detail of how we go from one tick to the next does not give us any insight into Time.
Motion is NOT an EVENT, in the language of physics and relativity. That is totally wrong.
From Wikipedia:
Quote:
In physics, and in particular relativity, an event indicates a physical situation or occurrence, located at a specific point in space and time. For example, a glass breaking on the floor is an event; it occurs at a unique place and a unique time, in a given frame of reference. Strictly speaking, the notion of an event is an idealization, in the sense that it specifies a definite time and place, whereas any actual event is bound to have a finite extent, both in time and in space.
So the elapsed time is a measure of the 'distance' along the Time dimension between two events, not the duration of one event. This misuse of the word 'event' in this context confuses the issue.
The number of ticks that are counted starting from the occurrence of the first event until the second event occurs is a measure of the separation in the Time dimension between the two events, in the frame of reference that the clock is in.
We do not compare the apparent passage of time in two frames of reference tick by tick, we compare the change in the tick count on one timer with the change in count on the other for some reasonable period of time. If we want an estimate of apparent elapsed time within 1% accuracy, we need the slowest clock to have ticked at least 100 times.
The difference is only proof that they appear to be running at different rates as viewed from the reference frame of the observer.
The observer 'sees' the counts at the start and at the finish of the timing period. "Sees" refers to whatever means is actually used to transfer the reading on the tick counters, or the position of the dial, etc of each clock back to the reference frame of that observer. This could hypothetically be with a telescope, but in practical experiments it will be as data transmitted via radio or possibly a laser beam. The important thing is to get the counts while the clocks are actually moving.
The 'kinetic energy' is not relevant; it is the relative velocity, plus any acceleration. So we need a matching record of the position, tick by tick, to allow us to compare the velocity and acceleration over the timing period with the rate at which the clock is ticking. They also need to record the 'g' force experienced by the clock, since this also affects the rate at which time passes in that frame of reference. This would be measured directly by a precision accelerometer.
There are two distinct aspects of the experiments, one concerned with the predictions of Special Relativity, which are concerned only with observations from frames of reference moving at a constant velocity in a straight line in the absence of gravity, and General Relativity, where you take the effects of gravity and acceleration into account.
You really seem to have completely misunderstood the effects of Special Relativity.
If two spacecraft or airplanes are moving at constant velocity away from each other, each one will observe that the clock in the other craft appears to be running slower than the one in his craft. Now part of this is simply due to the fact that the radio or laser signal which is carrying the information on the clock reading in the other craft is going to be taking longer to get back to him, as the other craft is continually getting further away. But even after allowing for this, there will still be a discrepancy.
Each one will see the other's clock apparently running slow when compared to his.
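The split described here, between the part of the apparent slowdown explained by the growing signal travel time and the residual discrepancy, can be sketched numerically with the relativistic Doppler factor. This is a minimal illustration, not anything from the thread; the recession speed is a made-up value:

```python
import math

beta = 0.5  # recession speed as a fraction of c (illustrative value)

gamma = 1.0 / math.sqrt(1.0 - beta ** 2)

# Raw observed tick-rate ratio of a receding clock (relativistic Doppler factor):
doppler = math.sqrt((1.0 - beta) / (1.0 + beta))

# Portion explained purely by the growing signal travel time (classical delay):
classical = 1.0 / (1.0 + beta)

# What remains after allowing for the signal delay is the time-dilation factor:
residual = doppler / classical  # equals 1/gamma

print(doppler)                  # below 1: the receding clock appears slow
print(residual, 1.0 / gamma)    # the two agree: the leftover slowdown is 1/gamma
```

This is just the algebraic identity sqrt((1-β)/(1+β)) = (1/(1+β))·(1/γ): after the classical delay is accounted for, a factor of 1/γ remains, which is the discrepancy the post refers to.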
There are no 'moving' and 'motionless' frames of reference in an absolute sense. In this example, each observer sees, or measures with instruments, the other craft moving away from his at the same speed. This idea really does allow many apparent paradoxes, like the famous Twins Paradox. But most, if not all, of those are not describable purely within the 'constant speed in a straight line' condition that Special Relativity applies to. In the Twins Paradox, one twin travels at high speed away from the Earth and back, and has therefore been subject to acceleration: getting up to speed, slowing down and accelerating back in the direction of home, then slowing back to a stop relative to Earth. The one who stayed at home has not been, which explains why the traveller will have aged less relative to his twin.
Your idea is roughly consistent with the effects of gravity and acceleration, ie, General Relativity. NOT kinetic energy, though. We observe that light emitted by atoms from very dense stars appears to be 'red-shifted', ie, slowed in rate of time passing relative to an observer (us) viewing or measuring it from a lower gravity environment. It isn't just that clocks run slow, everything runs slower in a high gravity environment, which is exactly equivalent to saying that Time is running slower in that environment. And note, there is no general movement involved there, so no "kinetic energy". Just the thermal vibrations of the atoms, which is itself slowed in the strong gravitational field.
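The gravitational slowing described above can be put in rough numbers with the Schwarzschild clock-rate factor, sqrt(1 - 2GM/(rc²)), which gives the rate of a static clock relative to a distant one. A minimal sketch, using illustrative white-dwarf-like figures (roughly one solar mass packed into an Earth-sized radius):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8    # speed of light, m/s

def clock_rate(mass_kg, radius_m):
    """Rate of a static clock at radius r relative to one far away
    (Schwarzschild time-dilation factor)."""
    return math.sqrt(1.0 - 2.0 * G * mass_kg / (radius_m * c ** 2))

# Illustrative figures: about one solar mass in an Earth-sized radius.
M_star = 1.989e30   # kg
R_star = 6.371e6    # m

rate = clock_rate(M_star, R_star)
print(rate)  # slightly below 1: everything there, clocks and atomic
             # vibrations alike, runs slower as seen from far away
```

The factor comes out a few parts in ten thousand below 1 for these numbers, which is the same effect seen as the gravitational redshift of light from dense stars.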
I have read your account, and I have tried to make sense of it, and pointed out problems like using words in different ways to how they are used in scientific descriptions, where it is vitally important that we all understand exactly what all our words refer to.
Can you possibly admit that you may have got something at least slightly wrong? I am basing my explanations on established science. You are trying to explain what you see as an error in the science, so you need to justify carefully why you disagree, and be prepared to adjust your 'theory' when errors are pointed out.
Your stubborn insistence that I haven't understood what you are saying whenever I don't agree with it is still annoying.
BobSpence
Posts: 5785
Joined: 2006-02-14
Online
With respect to your
With respect to your definitions, you said at the start of your OP:
Quote:
The definition of time which we use is: “Non-spatial continuum in which the events occur.”
According to my dictionary, the definition which seems to apply in general use is:
"the indefinite continued progress of existence and events in the past, present, and future regarded as a whole : travel through space and time | one of the greatest wits of all time."
From Wikipedia:
"Time is part of the measuring system used to sequence events, to compare the durations of events and the intervals between them, and to quantify the motions of objects."
"An operational definition of time, wherein one says that observing a certain number of repetitions of one or another standard cyclical event (such as the passage of a free-swinging pendulum) constitutes one standard unit such as the second, is highly useful in the conduct of both advanced experiments and everyday affairs of life. The operational definition leaves aside the question whether there is something called time, apart from the counting activity just mentioned, that flows and that can be measured."
From another site:
"Time is an observed phenomenon, by means of which human beings sense and record changes in the environment and in the universe. A literal definition is elusive. Time has been called an illusion, a dimension, a smooth-flowing continuum, and an expression of separation among events that occur in the same physical location."
From a very interesting article here:
NewScientist wrote:
Scientists have long worried about the nature of time. At the beginning of the 18th century, Isaac Newton and Gottfried Leibniz argued over whether time was truly fundamental to the universe. Then Einstein came along and created more problems: his general theory of relativity is responsible for our most counter-intuitive notions of time.
General relativity knits together space, time and gravity. Confounding all common sense, how time passes in Einstein's universe depends on what you are doing and where you are. Clocks run faster when the pull of gravity is weaker, so if you live up a skyscraper you age ever so slightly faster than you would if you lived on the ground floor, where Earth's gravitational tug is stronger. "General relativity completely changed our understanding of time," says Carlo Rovelli, a theoretical physicist at the University of the Mediterranean in Marseille, France.
I have to ask, who is this "we" you refer to there? Because it does not quite seem to be a definition in general use.
Truden
Posts: 170
Joined: 2008-01-25
Offline
BobSpence1 wrote:With
BobSpence1 wrote:
With respect to your definitions, you said at the start of your OP:
Quote:
The definition of time which we use is: “Non-spatial continuum in which the events occur.”
According to my dictionary, the definition which seems to apply in general use is:
Quote:
"the indefinite continued progress of existence and events in the past, present, and future regarded as a whole : travel through space and time | one of the greatest wits of all time."
[...]
I have to ask, who is this "we" you refer to there? Because it does not quite seem to be a definition in general use.
I assume that you have nothing to say (argue) about my last comment on Time and you finally decided to look into the definitions.
Well, I can only say thank you, Bob.
My last comment reflects my definition, therefore it is the right one, and science must adopt it for use.
All definitions that differ from the one proposed by me are wrong.
The above statement is based on basic logic.
This comment of mine is the end of my "point of exit".
I have nothing more to say on the discussed matter.
---
I thank you all for helping me to take into account some of the arguments that might be critical for the right explanation of Time.
I think that Rational Responders must be proud that it is one of the first places where the issue of Time was discussed and cleared of the delusional understanding.
I am a man who works as a carpenter, but I might as well be a man who did not find the same satisfaction in science as he found in the woodwork.
The man doesn't matter.
I don't matter.
What matters is the truth which sometimes stays beneath our understanding, not as foundation but as covered treasure.
Thank you All.
Truden
Posts: 170
Joined: 2008-01-25
Offline
Truden wrote:BobSpence1
Truden wrote:
BobSpence1 wrote:
With respect to your definitions, you said at the start of your OP:
Quote:
The definition of time which we use is: “Non-spatial continuum in which the events occur.”
According to my dictionary, the definition which seems to apply in general use is:
Quote:
"the indefinite continued progress of existence and events in the past, present, and future regarded as a whole : travel through space and time | one of the greatest wits of all time."
[...]
I have to ask, who is this "we" you refer to there? Because it does not quite seem to be a definition in general use.
I assume that you have nothing to say (argue) about my last comment on Time and you finally decided to look into the definitions.
Well, I can only say thank you, Bob.
My last comment reflects my definition, therefore it is the right one, and science must adopt it for use.
All definitions that differ from the one proposed by me are wrong.
The above statement is based on basic logic.
This comment of mine is the end of my "point of exit".
I have nothing more to say on the discussed matter.
---
I thank you all for helping me to take into account some of the arguments that might be critical for the right explanation of Time.
I think that Rational Responders must be proud that it is one of the first places where the issue of Time was discussed and cleared of the delusional understanding.
I am a man who works as a carpenter, but I might as well be a man who did not find the same satisfaction in science as he found in the woodwork.
The man doesn't matter.
I don't matter.
What matters is the truth which sometimes stays beneath our understanding, not as foundation but as covered treasure.
Thank you All.
Oh, I did not see the comment before your last one, Bob.
But still, I have nothing more to say.
BobSpence
Posts: 5785
Joined: 2006-02-14
Online
Truden wrote:BobSpence1
Truden wrote:
BobSpence1 wrote:
With respect to your definitions, you said at the start of your OP:
Quote:
The definition of time which we use is: “Non-spatial continuum in which the events occur.”
According to my dictionary, the definition which seems to apply in general use is:
Quote:
"the indefinite continued progress of existence and events in the past, present, and future regarded as a whole : travel through space and time | one of the greatest wits of all time."
[...]
I have to ask, who is this "we" you refer to there? Because it does not quite seem to be a definition in general use.
I assume that you have nothing to say (argue) about my last comment on Time and you finally decided to look into the definitions.
Well, I can only say thank you, Bob.
Umm, so you have no comment on my long post just before that one, where I went into some detail about various aspects of your ideas and how they related to Einstein's theories??
Quote:
My last comment reflects my definition, therefore it is the right one, and science must adopt it for use.
All definitions that differ from the one proposed by me are wrong.
The above statement is based on basic logic.
This is a sarcastic comment, I have to assume, because it is totally devoid of logic. It is correct because you thought it up? WTF?
Or did you really read my previous post and this is your reaction to my pointing out a few more of your basic errors and referring to your stubborn refusal to re-examine your theory in the light of my comments?
Quote:
This comment of mine is the end of my "point of exit".
I have nothing more to say on the discussed matter.
---
I thank you all for helping me to take into account some of the arguments that might be critical for the right explanation of Time.
I think that Rational Responders must be proud that it is one of the first places where the issue of Time was discussed and cleared of the delusional understanding.
I am a man who works as a carpenter, but I might as well be a man who did not find the same satisfaction in science as he found in the woodwork.
The man doesn't matter.
I don't matter.
What matters is the truth which sometimes stays beneath our understanding, not as foundation but as covered treasure.
Thank you All.
Pissed-off because no-one took your confused ideas very seriously - oh well, none so blind as those who refuse to see...
Favorite oxymorons: Gospel Truth, Rational Supernaturalist, Business Ethics, Christian Morality
"Theology is now little more than a branch of human ignorance. Indeed, it is ignorance with wings." - Sam Harris
The path to Truth lies via careful study of reality, not the dreams of our fallible minds - me
From the sublime to the ridiculous: Science -> Philosophy -> Theology
Truden
Bob, don't take my last comments as disrespect.
I respect you more than you can imagine
I really said everything I could say in this discussion.
You have to try to understand me, otherwise you will run in circles, making me repeat myself over and over again.
I admit that there are flaws in my explanation, but they don't affect my idea.
I'll have to work out some differences between the common and the scientific use of the terms and I'll come back to you
For now, I urge you to take an "out of the box" look at my idea and try to help me next time we meet.
Don't be afraid to think that something as big as changing the interpretation about Time can happen.
Einstein is not the best that humanity can give birth to
I'll be back soon.
BobSpence
Look, I fully understand that there can and will be progress beyond Einstein, and scientists are exploring new ideas about 'Time' all the time, of course.
But when you do not seem to quite understand Einstein's ideas, and make such basic errors, as I kept pointing out, and which you refused to acknowledge, it is hard to take you seriously.
But I honestly cannot see that you have anything that really amounts to a useful new concept here.
I do think I get the core idea of what you are trying to say, but it really is not that different from what some others have already proposed, and they have expressed it more clearly and related it much better to what has already been established about the subject.
1. In a 'Universe' containing just one object, we cannot meaningfully talk about time, OK. We cannot even talk about motion, so even saying it is 'motionless' is meaningless.
2. It is not so much 'events' we need to have time, it is change, even continuous change, which is not really well described by talking about events. This is where you start to go 'off the rails'.
3. If two objects are moving away from each other, that is all we need to be able to talk about time. The movement would be observed and measured with light, or are you assuming there is not even light in this hypothetical universe?
4. If the increase in distance between the two objects is observable, we can meaningfully refer to their relative velocity (that is the more precise term to use here, since it includes the direction as well as the magnitude).
I am not using velocity ('speed') to prove time. It is not time that requires there to be 'speed' to 'prove' it; it is the existence of motion that requires there to be a dimension of Time to make it even possible. I think you have it backwards.
The absence of any discrete 'events' is not relevant, just change. Change is what needs Time. This is your basic error.
To measure Time consistently we do use repeated events which we have reason to believe are consistent and therefore assumed to occur at equal intervals of time.
Now here is another thought - imagine that one object is a long rod with regular marks on it, and the other object is moving past close to it along its length. Then we can measure time by how far the other object has moved along the length of the first object. That is all we need!!
If you insist on 'events', then consider: as the other object passes each mark on the first object, that is sufficient to define an event!
All events occur in space and time, by definition.
Two objects moving past each other is indeed sufficient to define events and so mark the passage of time.
If you had defined the objects as two ideal geometric points, then 'space' would be a problem, since you would have no reference for distance. But if at least one object has a finite physical size, then we have some reference to express the distance between them, and that then allows change to be defined, ie changes in the ratio between separation and size of at least one object. Change is the minimum requirement for Time to be meaningful, or alternatively, 'change' only can occur if there is a dimension of Time.
If you want to express it in terms of events, then each event can be defined to be when the distance between the objects reaches a whole number times the diameter of one of the objects.
There, I have analysed your idea, as presented in your first post. It is ultimately trivial, I am sorry.
Most of your ideas are not new, the rest are based on misunderstandings, such as what is necessary to define an 'event'.
darth_josh
This is testing my patience.
http://flashforward.web.cern.ch/flashforward/excerpt2/
Atheist Books, purchases on Amazon support the Rational Response Squad server, which houses Celebrity Atheists.
Eloise
Truden wrote:
Eloise wrote:
The 'time' used by science is really no different to your second definition, modern science essentially considers time as synonymous with 'change'.
You may be right, Eloise.
Sometimes I have the feeling that Einstein made fun of the science by creating his theory of relativity to prove that everything is related to our conscious perception.
His theory doesn't make sense without intelligent conscious observation,
That's not so, Truden. Relativity makes perfect sense without conscious observation. You've probably been confused by stories of Einstein developing the theory from thoughts about observation in a moving reference frame. Those things are not required for relativity to work; they are just ways of coming to the conclusion that space and time are not constant, in contrast to light velocity.
Quote:
Yes, modern science may use the notion "change" (or "step" ) as Time, but Time still stays in science as property of the Universe.
It's quite more involved than that, Truden. Time cannot be ignored since it is an inextricable part of how we structure our experience, however, models of physical reality that do not have time as a property of the universe in them are studied in science - they are called time-independent.
Theist badge qualifier : Gnostic/Philosophical Panentheist
www.mathematicianspictures.com
Marquis
Truden wrote:
how childishly illogical science can be
If you think that is bad, you need to consider how childishly illogical nature can be.
Anyway, while you guys were having fun I read through it all once again, and I fail to see how anything in Mr. Truden's proposition is inconsistent with my stating that time = gravity (non-spatial relation between events and yada yada). Consequently, instead of stating for instance that gravity warps spacetime, we can observe how an increase in local gravity moves towards and finally reaches a "boiling point" from whence it approaches a shift in existential conditions (an event horizon) before collapsing out of measurable spacetime coordinates altogether.
Time in an isolated context can be both the temporal difference between events A and B and the duration of event C (which can even include the set A,B). There is no "continuum" of time that doesn't include variables in the gravitational field.
"The idea of God is the sole wrong for which I cannot forgive mankind." (Alphonse Donatien De Sade) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6332855820655823, "perplexity": 686.9585543461783}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163049570/warc/CC-MAIN-20131204131729-00014-ip-10-33-133-15.ec2.internal.warc.gz"} |
https://www.gradesaver.com/textbooks/math/other-math/thinking-mathematically-6th-edition/chapter-9-measurement-9-1-measuring-length-the-metric-system-exercise-set-9-1-page-584/68 | # Chapter 9 - Measurement - 9.1 Measuring Length; The Metric System - Exercise Set 9.1 - Page 584: 68
c. 250 mm
#### Work Step by Step
The average length of a page is around 220 - 260 mm, therefore the answer is: c. 250 mm
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8229212164878845, "perplexity": 2025.7809583434616}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584203540.82/warc/CC-MAIN-20190123064911-20190123090911-00592.warc.gz"} |
https://hal-supelec.archives-ouvertes.fr/hal-01092802 | Robust covariance estimation and linear shrinkage in the large dimensional regime
Abstract : The article studies two regularized robust estimators of scatter matrices proposed in parallel in [1] and [2], based on Tyler's robust M-estimator [3] and on Ledoit and Wolf's shrinkage covariance matrix estimator [4]. These hybrid estimators convey robustness to outliers or impulsive samples and small sample size adequacy to the classical sample covariance matrix estimator. We consider here the case of i.i.d. elliptical zero mean samples in the regime where both sample and population sizes are large. We prove that the above estimators behave similar to well-understood random matrix models, which allows us to derive optimal shrinkage strategies to estimate the population scatter matrix, largely improving existing methods.
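As a rough numeric illustration of the linear shrinkage idea the abstract builds on: the sketch below uses a generic Ledoit-Wolf-style form sigma = (1 - rho) * S + rho * (tr(S)/p) * I with a hand-picked rho. This is not the paper's estimator (the paper's contribution is choosing the shrinkage optimally, and combining it with Tyler's robust M-estimator); it only shows why shrinkage helps when the dimension exceeds the sample size.

```python
import random

random.seed(0)
p, n = 3, 2          # dimension exceeds sample size, so S below is singular

# n samples of a p-dimensional zero-mean vector
samples = [[random.gauss(0.0, 1.0) for _ in range(p)] for _ in range(n)]

# Sample covariance S = (1/n) * sum_k x_k x_k^T  (the mean is known to be zero)
s = [[sum(x[i] * x[j] for x in samples) / n for j in range(p)] for i in range(p)]

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# Linear shrinkage toward a scaled identity, in the spirit of Ledoit-Wolf:
# sigma = (1 - rho) * S + rho * (tr(S)/p) * I, with an illustrative fixed rho.
rho = 0.3
trace = sum(s[i][i] for i in range(p))
sigma = [[(1 - rho) * s[i][j] + (rho * trace / p if i == j else 0.0)
          for j in range(p)] for i in range(p)]

print(abs(det3(s)) < 1e-12)   # True: rank(S) <= n < p, so S is singular
print(det3(sigma) > 0)        # True: the shrunk estimate is invertible
```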
Document type:
Conference paper
MLSP'14, Sep 2014, Reims, France. Proceedings of the 2014 IEEE International Workshop on Machine Learning for Signal Processing, pp. 1-6, 〈10.1109/MLSP.2014.6958867〉
Domain:
https://hal-supelec.archives-ouvertes.fr/hal-01092802
Contributor: Catherine Magnet
Submitted on: Tuesday, 9 December 2014 - 15:05:17
Last modified: Thursday, 29 March 2018 - 11:06:05
Citation
Romain Couillet, M. Mckay. Robust covariance estimation and linear shrinkage in the large dimensional regime. MLSP'14, Sep 2014, Reims, France. Proceedings of the 2014 IEEE International Workshop on Machine Learning for Signal Processing, pp.1-6, 〈10.1109/MLSP.2014.6958867〉. 〈hal-01092802〉
Consultations de la notice | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8869112133979797, "perplexity": 2926.9552000113804}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946453.89/warc/CC-MAIN-20180424022317-20180424042317-00253.warc.gz"} |
https://forum.azimuthproject.org/plugin/ViewComment/20403 | I literally tried copying and pasting that code in and it still didn't work! Sticking in a space seems to do the trick tho: \cup _{A^\text{op}} = \$$\cup _{A^\text{op}}\$$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8186822533607483, "perplexity": 1586.874019776931}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570730.59/warc/CC-MAIN-20220807211157-20220808001157-00023.warc.gz"} |
http://math.stackexchange.com/questions/194604/a-simple-question-about-angles-on-a-circumference | # A simple question about angles on a circumference
Given two points on a circumference of radius $R$, $P_0$ and $P_1$, subtending an angle $\theta$ at the center of the circumference, what is the angle at which a generic point $P_m$ inside the circle 'sees' the two points $P_0$ and $P_1$?
-
It depends. If $P_m$ is on the circle, the angle is $\frac12\theta$ on one arc and (with correct orientation) $\pi+\frac12\theta$ on the other arc. For an interior point, any value inbetween is possible. – Hagen von Eitzen Sep 12 '12 at 9:32
@Hagen von Eitzen: $P_m$ can be any point inside the circle of radius $R$; that means: $P_m=P_m(x,y)$ with $x^2+y^2 \leq R^2$ – Riccardo.Alestra Sep 12 '12 at 9:39
@Riccardo.Alestra , I think Hagen did understand that and thus his answer: for a point $\,P_m\,$ on the circle, the angle is either $\,\theta/2\,\,or\,\,\theta/2+\pi\,$ , depending on the relative position of the three points. For a point inside the disk enclosed by the circle, any value in between occurs. – DonAntonio Sep 12 '12 at 11:58
@DonAntonio: the problem is: given a point $P_m(x,y)$ and two points on the circunference $P_0$ and $P_1$, what is the formula linking the angle $\theta$ to the angle 'seen' by $P_m(x,y)$? $\theta \leq \frac{\pi}{2}$ – Riccardo.Alestra Sep 12 '12 at 12:45
@Riccardo.Alestra . I think we all understood that from the beginning, and $\,P_m\,$ is inside the circle, as you wrote. – DonAntonio Sep 12 '12 at 17:09
With $P_0, P_1$ as given and $O$ as center of the circle, extend the line $P_0P_m$ until it meets the circle in $Q$. Let us assume that $P_m$ is on the same side of $P_0P_1$ as $O$. Then by the inscribed angle theorem we have $\theta = \angle P_0OP_1 = 2\angle P_0QP_1$. In triangle $P_1QP_m$, we have $\angle QP_1P_m + \angle P_mQP_1+\angle P_1P_mQ=\pi$ and we also have $\angle P_0P_mP_1+\angle P_1P_mQ=\angle P_0P_mQ=\pi$, hence $\angle P_0P_mP_1=\angle QP_1P_m + \angle P_mQP_1> \angle P_mQP_1= \angle P_0QP_1 = \frac12 \theta$.
There is a slight subtlety involved when $P_m$ is on the other side of $P_0P_1$ and also beware of orientation. All in all you will find that the range $\frac12\theta<\angle P_0P_mP_1<\frac12\theta+\pi$ is possible.
Here's a proof variant with coordinate calculations: Without loss of generality, $R=1$, the center is $(0,0)$, $P_0=(c, -s)$, $P_1=(c,s)$ with $0\le c=\cos\frac\theta2<1$, $0<s=\sin\frac\theta2\le 1$ and $P_m=(x,y)$ with $r^2:=x^2+y^2<1$. We can find the sine of $\alpha:=\angle P_0P_mP_1$ by calculating the $z$ coordinate of the cross product $\overrightarrow{P_mP_0}\times \overrightarrow{P_mP_1}$ and dividing by the lengths of the factors. The $z$ coordinate of the cross product is $(c-x)(s-y)-(-s-y)(c-x)=2s(c-x)$. Hence $$\sin\alpha = \frac{2s(c-x)}{|P_mP_0||P_mP_1|}.$$ We can also find the cosine of $\alpha$ via the scalar product: $$\cos\alpha=\frac{\overrightarrow{P_mP_0}\cdot \overrightarrow{P_mP_1}}{|P_mP_0||P_mP_1|}= \frac{(c-x)^2+y^2-s^2}{|P_mP_0||P_mP_1|}.$$ The lengths in the denominators are a bit clumsy, but we can ignore them if we only need to test for the sign of $\sin\alpha$ and $\cos\alpha$ and compute the tangent of $\alpha$: $$\tan\alpha = \frac{\sin\alpha}{\cos\alpha}=\frac{2s(c-x)}{(c-x)^2+y^2-s^2}.$$ First note that $\sin\theta=2cs$ and $\cos\theta=c^2-s^2$ by the double angle formulas, hence $\tan\theta=\frac{2cs}{c^2-s^2}$. I claim that one of the following cases holds:
• $\cos\alpha>0, \sin\alpha>0, \tan\alpha>\tan\theta$
• $\cos\alpha\le 0, \sin\alpha\ge 0$
• $\cos\alpha<0, \sin\alpha<0, \tan\alpha<\tan\theta$
For each of these cases you can manipulate the expressions given for the trigonometric functions to verify the inequality in the given situation. It is a bit lengthy, and I'm a bit tired, though
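The claimed range $\frac12\theta<\angle P_0P_mP_1<\frac12\theta+\pi$ can also be checked numerically. A small sketch (the sampling constants and the choice $\theta=\pi/2$ are arbitrary), using the same cross-product/dot-product setup as above on the unit circle:

```python
import math, random

random.seed(1)
theta = math.pi / 2                           # central angle subtended by P0 and P1
c, s = math.cos(theta / 2), math.sin(theta / 2)
p0, p1 = (c, -s), (c, s)                      # chord on the unit circle, symmetric about the x-axis

def viewing_angle(x, y):
    """Oriented angle at (x, y) between the rays toward P0 and P1, normalized to [0, 2*pi)."""
    ax, ay = p0[0] - x, p0[1] - y
    bx, by = p1[0] - x, p1[1] - y
    # z component of the cross product and the dot product, as in the answer above
    return math.atan2(ax * by - ay * bx, ax * bx + ay * by) % (2 * math.pi)

# The center must see the chord at exactly the central angle theta.
print(abs(viewing_angle(0.0, 0.0) - theta) < 1e-9)    # True

# Every sampled point strictly inside the disk sees it at an angle in (theta/2, theta/2 + pi).
ok = all(
    theta / 2 < viewing_angle(x, y) < theta / 2 + math.pi
    for x, y in ((random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(5000))
    if x * x + y * y < 0.998
)
print(ok)                                              # True
```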
-
I suppose you have written $P_1$ and $P_2$ rather than $P_0$ and $P_1$ – Riccardo.Alestra Sep 12 '12 at 14:38
In your proof there is something I don't understand. Suppose $P_m$ very close to the circumference, that means $\sqrt{P_{mx}^2+P_{my}^2}=R-\epsilon$ with $\epsilon$ very small. Suppose $\theta$ less than $\frac{\pi}{2}$. Following your reasoning, the angle $\angle P_0 Q P_1$ should be less than $\frac{\pi}{4}$ when it's close to $\pi$ – Riccardo.Alestra Sep 12 '12 at 14:48
@Riccardo.Alestra: $\angle P_0QP_1$ is always either $\frac12\theta$ or $\pi+\frac12\theta$, depending on the arc $Q$ is on. And if $P_m$ is very close to the circumference then $\angle P_0P_mP_1$ is close to one of these two values, but definitely in between. – Hagen von Eitzen Sep 12 '12 at 16:26
@von Eitzen: I don't see any dependence of the angle $\angle P_0 P_m P_1$ on the coordinates $x,y$ of the point $P_m$ – Riccardo.Alestra Sep 13 '12 at 6:14
I argued in classic geometry. I'll add an answer purely using cartesian coordinates. – Hagen von Eitzen Sep 13 '12 at 10:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.930469810962677, "perplexity": 145.29051632096846}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00529-ip-10-147-4-33.ec2.internal.warc.gz"} |
http://www.varsitytutors.com/algebra_ii-help/natural-log | # Algebra II : Natural Log
## Example Questions
### Example Question #1 : Natural Log
Solve for .
Explanation:
The first thing we notice about this problem is that is an exponent. This should be an immediate reminder: use logs!
The question is, which base should we choose for the log? We should use the natural log (log base e) because the right-hand side of the equation already has e as a base of an exponent. As you will see, things cancel out more nicely this way.
Take the natural log of both sides:
Rewrite the right-hand side of the equation using the product rule for logs:
Now rewrite the whole equation after bringing down those exponents.
ln(e) is the same thing as log base e of e, which equals 1.
Now we just divide by on both sides to isolate .
### Example Question #2 : Natural Log
Rewrite as a single logarithmic expression:
Explanation:
Using the properties of logarithms
and ,
we simplify as follows:
### Example Question #3 : Natural Log
Which of the following expressions is equal to the expression ?
None of the other responses is correct.
Explanation:
By the reverse-FOIL method, we factor the polynomial as follows:
Therefore, we can use the property
as follows:
### Example Question #4 : Natural Log
Solve . Round to the nearest thousandth.
Explanation:
The original equation is:
Subtract from both sides:
Divide both sides by :
Take the natural logarithm of both sides:
Divide both sides by and use a calculator to get:
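The equation images from the original page did not survive extraction, so as a sketch, here is the same four-step procedure applied to a made-up equation of the same shape, 5e^(2x) + 3 = 28 (the numbers are illustrative, not the site's):

```python
import math

# Hypothetical stand-in equation: 5 * e^(2x) + 3 = 28.
# The steps mirror the walkthrough above: subtract, divide, take ln, divide.
constant, coeff, rate, rhs = 3, 5, 2, 28

step1 = rhs - constant            # subtract 3 from both sides:  5 * e^(2x) = 25
step2 = step1 / coeff             # divide both sides by 5:      e^(2x) = 5
step3 = math.log(step2)           # take ln of both sides:       2x = ln(5)
x = step3 / rate                  # divide both sides by 2:      x = ln(5) / 2

print(round(x, 3))                                               # 0.805
print(abs(coeff * math.exp(rate * x) + constant - rhs) < 1e-9)   # True: checks out
```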
### Example Question #5 : Natural Log
What are the domain and the range of the function ?
Domain = all real numbers
Range = all real numbers
Domain = all positive numbers
Range = all non-negative numbers
Domain = all non-negative numbers
Range = all positive numbers
Domain = all positive real numbers
Range = all real numbers
Domain = all positive numbers
Range = all positive numbers
Domain = all positive real numbers
Range = all real numbers
Explanation:
Remember that ln(x) is still a logarithm of a positive number, x.
It's not possible to raise e to ANY power and obtain a negative number. Because even a negative power, say e^(-2), is just 1/e^2, which is a ratio of two positive numbers, and therefore positive.
More than that, it's also not possible to obtain 0 by raising e to any power. Think: "To what power can I exponentiate e and obtain 0?"
So the domain is strictly positive. It excludes negative numbers and 0.
What about the range? To what possible values are we allowed to exponentiate e?
Well, we just saw that e^x is defined for negative x (this fact is true for ALL bases, not just e).
And we can obviously raise it to positive powers. So the range is all real numbers. It includes negative numbers, 0, and positive numbers.
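The domain and range facts above can be checked directly with Python's `math.log` (the specific inputs below are arbitrary illustrations):

```python
import math

# Domain: ln is defined only for strictly positive inputs.
for bad in (0.0, -1.0):
    try:
        math.log(bad)
        print("no error for", bad)
    except ValueError:
        print("ValueError for", bad)     # raised for both 0.0 and -1.0

# Range: ln takes arbitrarily large negative and positive values.
print(math.log(1e-9) < -20)   # True: tiny positive inputs give very negative outputs
print(math.log(1e9) > 20)     # True
```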
### Example Question #6 : Natural Log
Solve for :
.
If necessary, round to the nearest tenth.
No solution
Explanation:
Give both sides the same base, using e:
.
Because e and ln cancel each other out, .
Solve for x and round to the nearest tenth:
### Example Question #7 : Natural Log
Solve for x:
Explanation:
To solve for x, keep in mind that the natural logarithm and the exponential cancel each other out (property of any logarithm with a base that is being taken of that same base with an exponent attached). When they cancel, we are just left with the exponents:
### Example Question #8 : Natural Log
Determine the value of:
Explanation:
The natural log has a base of e. This means that the term will simplify to whatever is the power of e. Some examples are:
This means that
Multiply this quantity with three.
### Example Question #7 : Logarithms
Determine the value of:
Explanation:
In order to simplify this expression, use the following natural log rule.
The natural log has a default base of e. This means that:
### Example Question #9 : Natural Log
Simplify:
Explanation:
According to log properties, the coefficient in front of the natural log can be rewritten as the exponent of the quantity inside the log.
Notice that the natural log has a base of e. This means that raising the log by base e will eliminate both the e and the natural log.
The terms become:
Simplify the power. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8094123005867004, "perplexity": 1507.583936633196}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542687.37/warc/CC-MAIN-20161202170902-00099-ip-10-31-129-80.ec2.internal.warc.gz"} |
https://www.studypug.com/asvab-test-prep/circles/arcs-of-a-circle | # Arcs of a circle
#### Lessons
• 1.
Find the length of the arc in red.
a)
b)
c)
• 2.
In the following circle, AD is a diameter, BC $\bot$ EF, and the radius is 7. Find the measures of:
a)
$arc AB$
b)
$arc ABE$
c)
$arc AEC$
d)
$\angle BFD$
• 3.
On circle D, arcAB= 105°, arcBC= 140° and arcABC= 245°. Point E is on the circle D too and arcCE= 145°. Explain what E must be on ArcAB. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 5, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9641799330711365, "perplexity": 6368.262670365888}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812247.50/warc/CC-MAIN-20180218173208-20180218193208-00733.warc.gz"} |
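The exercise figures did not survive extraction, so as a sketch, here is the central-angle arc-length relation these problems rely on, s = (degrees/360) * 2 * pi * r, with illustrative numbers borrowed from the stated measures (radius 7 from problem 2, arcs of 105° and 140° from problem 3):

```python
import math

def arc_length(radius, degrees):
    """Arc length for a central angle given in degrees: s = (degrees / 360) * 2 * pi * r."""
    return degrees / 360 * 2 * math.pi * radius

# Radius 7 with a 105-degree arc (illustrative pairing of the quoted numbers):
print(round(arc_length(7, 105), 2))     # 12.83

# Arc measures of adjacent arcs add, and all arcs around a circle sum to 360 degrees,
# which is the kind of bookkeeping problem 3 turns on:
arc_ab, arc_bc = 105, 140
print(arc_ab + arc_bc)                  # 245, matching arcABC in the problem
print(360 - (arc_ab + arc_bc))          # 115, the remaining arc from C back to A
```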
http://math.stackexchange.com/questions/40053/how-is-the-uniform-boundedness-principle-compatible-with-this-seemingly-weak-con?answertab=votes | # How is the uniform boundedness principle compatible with this seemingly weak convergent sequence?
In showing that $(x_i\rightharpoonup x)\not\Rightarrow(x_i\to x)$ or similar non-corollaries, one frequently uses the counterexample $$(u_i)_{i\in\mathbb{N}}\in \ell^2\colon \quad u_i = (\underbrace{0,\ldots,0}_{i-1},1,0,\ldots)$$ This, if I can trust my professor, is a weakly convergent sequence in $\ell^2$, because it is bounded (I apparently missed that bit; this was the cause of the confusion) and for each $\varphi_j=(0,\ldots,1,0,\ldots)$, there exists an $N\in\mathbb{N}$, namely $N=j$, such that for every $i>N$, $$\langle u_i, \varphi_j \rangle_{\ell^2} = 0 + \ldots + 0\cdot1 + 0 + \ldots + 1\cdot 0 + 0\ldots = 0 < \varepsilon$$ for all $\varepsilon>0$. Because the $\varphi_j$ form a complete orthonormal system in $\ell^2$, this is sufficient for $(u_i)_i$ to be weakly convergent.
The problem is now: consider the sequence $$(v_i)_{i\in\mathbb{N}}\in \ell^2\colon \quad v_i = (\underbrace{0,\ldots,0}_{i-1},i,0,\ldots).$$ You could show that this is weakly convergent in exactly the same way as I just did for $(u_i)_i$, but on the other hand, $(v_i)_i$ is obviously an unbounded sequence, and according to the principle of uniform boundedness (which I don't quite understand) every weakly convergent sequence is bounded.
So what's wrong here?
Well, as the answers pointed out, it was the requirement that $(u_i)$ be bounded which was necessary for using just the $\varphi_j$ to show weak convergence.
I must say that I don't like the way you phrase your professor's proof at all, as there is a crucial ingredient missing (either this was a major mistake on your professor's side, or you forgot something): namely, boundedness of the sequence $(u_i)$.
First recall Bessel's inequality: For an orthonormal system $(u_{i})$ and all $x \in H$ the inequality $$\sum_{i} |\langle x, u_{i} \rangle|^2 \leq \Vert x \Vert^2$$ holds. Since the series converges, its terms tend to zero, so $\langle x, u_i \rangle \to 0$ for all $x$, and hence an orthonormal system converges weakly to zero (that is my preferred way of showing it). Note also that an orthonormal system is bounded, as $\|u_{i}\| = 1$.
Now there is the following result (which I guess was what your professor was referring to):
A sequence $(u_{i})$ converges weakly to zero if and only if it is bounded and there exists an orthonormal basis $(\phi_{n})$ such that $\langle u_{i}, \phi_{n} \rangle \to 0$ as $i \to \infty$ for all $n$.
Indeed, if $u_{i}$ converges weakly to zero then the condition is clearly fulfilled for any orthonormal basis by Bessel's inequality and the sequence is bounded by the uniform boundedness principle.
Conversely, assume that $\Vert u_{i} \Vert \lt C$ for all $i$ and that there exists an orthonormal basis such that $\langle u_{i}, \phi_{n} \rangle \to 0$ as $i \to \infty$ for all $n$. Fix $\varepsilon \gt 0$. Bessel's inequality tells us that for every $x \in H$ there exists $N$ such that $\sum_{n \geq N} |\langle x, \phi_{n} \rangle|^2 \leq \varepsilon^2$. Choose $i$ so large that $|\langle \phi_{k}, u_{i} \rangle| \leq \varepsilon/N$ for all $k \lt N$. Then we can estimate $$|\langle u_{i}, x \rangle| = \left\vert \Big\langle u_{i}, \sum_{n} \langle x, \phi_{n} \rangle \phi_{n} \Big\rangle \right\vert \leq \sum_{k \lt N} \underbrace{|\langle u_{i}, \phi_{k}\rangle|}_{\leq \varepsilon/N}\, \underbrace{|\langle x, \phi_{k} \rangle|}_{\leq \|x\|} + \underbrace{\Vert u_{i} \Vert}_{\lt C} \Big(\sum_{n \geq N} |\langle x, \phi_{n} \rangle|^2\Big)^{1/2}$$ where the tail is estimated by the Cauchy–Schwarz inequality, so that $|\langle u_{i}, x \rangle| \lt (\|x\| + C)\varepsilon$ for all large enough $i$. As $\varepsilon \gt 0$ was arbitrary this means that $|\langle u_{i}, x \rangle| \to 0$ for all $x$, and hence $u_{i}$ converges weakly to zero.
As Luboš pointed out, your sequence $(v_i)$ does not converge weakly to zero. The above criterion is not applicable, as your sequence is not bounded. Indeed, it's the canonical example showing that assuming boundedness is indeed necessary in that criterion.
Since you said that the uniform boundedness principle is still a bit of a mystery to you, I can't do better than recommend Alan Sokal's recent article A really simple elementary proof of the uniform boundedness theorem in which he gives a proof that gets away without using any Baire-trickery.
Thank you! The professor probably did mention the necessity of $u_i$ being bounded, but as the whole counterexample was more of a sidenote anyway I missed it. – leftaroundabout May 19 '11 at 12:58
@leftaroundabout: You're welcome. Do have a look at Sokal's paper, it is really cool! – t.b. May 19 '11 at 13:01
Dear leftaroundabout, your $v_i$ sequence is not weakly convergent to zero. Take its inner product with $$x = (1/1, 1/2, 1/3, 1/4, 1/5, \dots )$$ to see that this inner product of $x$ with $v_i$ is actually equal to one in the infinite $i$ limit (and not zero as required by the weak convergence to zero). Note that $x$ is $l^2$ summable, because $\sum 1/n^2 = \pi^2/6$ is convergent, i.e. that it is a vector in the Hilbert space.
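Spelled out for this particular $x$: only the $i$-th entry of $v_i$ is nonzero, so $$\langle v_i, x \rangle = i \cdot \frac{1}{i} = 1 \quad \text{for every } i,$$ and the inner products converge to $1 \neq 0 = \langle 0, x \rangle$, so $(v_i)$ cannot converge weakly to zero.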
Right... but that means it's actually not sufficient to show convergence of $\langle \varphi_j,v_i\rangle$ for all $\varphi_j$. How can I then prove any not-strongly-convergent sequence at all to be weakly convergent? – leftaroundabout May 19 '11 at 10:59
If I understand you well: on the contrary: for the weak convergence it is enough to show the convergence of the inner product with any $\phi_i$ because that's how the weak convergence is defined. What I tried to argue is that you did not prove the convergence of the inner products to the "right" inner product and you could not because this convergence of inner products is not true. – Luboš Motl May 19 '11 at 11:07
This last comment doesn't make any sense to me. What is this "right" inner product you're talking about? – t.b. May 19 '11 at 11:12
@Theo: I believe he is referring to the inner product with $0$. That is, if the sequence converges weakly to zero, then the sequence of inner products with a vector $v$ should converge to $\langle 0,v\rangle=0$, so $0$ would be the "right" inner product to which the sequence of inner products should converge, and this is not the case for the example given above. (I know that the mathematics is clear to you, but hopefully that clears up Luboš's meaning for you. Apologies, Luboš, if I have misinterpreted.) – Jonas Meyer May 19 '11 at 11:30
@Jonas: Ah, okay, that is one viable way to make sense out of that comment. Thanks! I was seriously puzzled. – t.b. May 19 '11 at 11:38
The argument you attribute to your professor is incorrect.
Suppose we wish to show a sequence $(f_i)$ of vectors in $\ell^2$ (or any Hilbert space) is weakly convergent to some $f$. It is not sufficient to show that $$\langle f_i, \phi \rangle \to \langle f , \phi \rangle \quad \text{ as } i \to \infty \quad (*)$$ for all $\phi$ in some complete orthonormal set. Indeed, your counterexample $v_i$ shows that this is not sufficient. Nor is it sufficient to show that (*) holds for all $\phi$ in some dense set.
However, if one knows a priori that the set $\{f_i\}$ is bounded in $\ell^2$ (i.e. there exists $M$ with $||f_i|| \le M$ for all $i$), then it is sufficient to show that (*) holds for all $\phi$ in some dense set, or even for all $\phi$ in some set whose linear span is dense. Such as, for instance, a complete orthonormal set. Proving this is a good exercise.
It can be shown, using an appropriate version of the uniform boundedness principle, that even without knowing $\{f_i\}$ is bounded, it is sufficient to verify that (*) holds for all $\phi$ in some nonmeager set. However, nonmeager sets are not so easy to come by. (In particular, the linear span of a countable set is always meager.)
So to give a better proof that the sequence $\{u_i\}$ converges weakly (to 0), one could first note that the sequence is bounded in $\ell^2$ norm, and then check that (*) holds for all the $\varphi_j$ you describe (and aren't the $u_i$ and $\varphi_j$ actually the same here?). Actually, my preferred proof is to note that $u_i$ is itself an orthonormal set, and so for any $\phi \in \ell^2$ we have $$\sum_i |\langle u_i , \phi \rangle|^2 \le ||\phi||^2 < \infty$$ by Bessel's inequality. Since the sum converges we must have $\langle u_i , \phi \rangle \to 0$.
Thank you! I guess it's not likely that I will ever encounter a sequence which is weakly convergent, but cannot easily be shown to be bounded? Because these nonmeager sets seem indeed rather scary to me. – leftaroundabout May 19 '11 at 13:12
@leftaroundabout: Of course, you can always try to show directly that $\langle f_i, \phi \rangle \to \langle f, \phi \rangle$ for all $\phi \in \ell^2$. My point about nonmeager sets is that although it is technically possible to verify it on a smaller set, it may not be practical, so it is not worth spending a lot of time trying to find a "good" smaller set to use. – Nate Eldredge May 19 '11 at 14:55
I've since learned that the comment about nonmeager sets is really not very useful at all. The set of $\phi$ for which (*) holds is obviously a vector space, hence showing (*) holds for all $\phi$ in some set $E$ is tantamount to showing it for the span of $E$. So we might as well assume $E$ is a vector space. But nonmeager proper subspaces of a Hilbert space are very weird. One can show that such a space cannot have the Baire property; in particular it is neither Borel, analytic, nor coanalytic. – Nate Eldredge Aug 9 '12 at 3:01
See here for a proof that $E$ does not have the Baire property. We would have to use the axiom of choice in an essential way to even show that sets without the BP exist (it is consistent with ZF+DC that they do not). – Nate Eldredge Aug 9 '12 at 3:05
https://www.luogu.com.cn/problem/P5063 | # [Ynoi2014]置身天上之森
3 3
1 2 3 9
2 1 2 1
2 1 3 1
1
1
## Notes
Idea: zcysky; Solution: nzhtl1477 ($O( m\sqrt{n\log n})$ solution), ccz181078 ($O( m\sqrt{n})$ solution); Code: nzhtl1477 ($O( m\sqrt{n} \log n)$ code), ccz181078 ($O( m\sqrt{n\log n})$ code); Data: nzhtl1477

For $100\%$ of the data: $1\leq n,m\leq 10^5$, $1\leq l\leq r\leq n$, $1\leq op\leq 2$, $-10^5\leq a\leq 10^5$.
https://codereview.stackexchange.com/questions/162744/bathroom-stalls-in-c-google-code-jam-2017 | # Bathroom stalls in C (Google Code Jam 2017)
Problem
A certain bathroom has N + 2 stalls in a single row; the stalls on the left and right ends are permanently occupied by the bathroom guards. The other N stalls are for users.
Whenever someone enters the bathroom, they try to choose a stall that is as far from other people as possible. To avoid confusion, they follow deterministic rules: For each empty stall S, they compute two values LS and RS, each of which is the number of empty stalls between S and the closest occupied stall to the left or right, respectively. Then they consider the set of stalls with the farthest closest neighbor, that is, those S for which min(LS, RS) is maximal. If there is only one such stall, they choose it; otherwise, they choose the one among those where max(LS, RS) is maximal. If there are still multiple tied stalls, they choose the leftmost stall among those.
K people are about to enter the bathroom; each one will choose their stall before the next arrives. Nobody will ever leave.
When the last person chooses their stall S, what will the values of max(LS, RS) and min(LS, RS) be?
Input
The first line of the input gives the number of test cases, T. T lines follow. Each line describes a test case with two integers N and K, as described above.
Output
For each test case, output one line containing Case #x: y z, where x is the test case number (starting from 1), y is max(LS, RS), and z is min(LS, RS) as calculated by the last person to enter the bathroom for their chosen stall S.
Limits
1 ≤ T ≤ 100. 1 ≤ K ≤ N. 1 ≤ N ≤ 10^18.
Input

5
4 2
5 2
6 2
1000 1000
1000 1

Output

Case #1: 1 0
Case #2: 1 0
Case #3: 1 1
Case #4: 0 0
Case #5: 500 499
Description of my algorithm
I attached a little graphic to show how the algorithm is supposed to work. That's maybe quicker to understand than reading through the algorithm.
/*
Definitions:
Group = A number of consecutive free stalls, in the beginning there is only one group.
Layer = A new layer is started when all groups of the previous layer are split up.
Each layer holds therefore 2^(layer - 1) customers
e.g. 1st layer: 1 customer
2nd layer: 2 customers
3rd layer: 4 customers
.
nth layer: 2^(n-1) customers
With the above definitions it can be calculated how many layers are necessary.
(Eq. 1) lastLayer = ceil(log(numberCustomers)/log(2))
To calculate the size of the group, the last customer will be assigned to, the size of the
groups in the last layer must be calculated. To do so, we need the following:
- number of stalls in the last layer
(Eq. 2) custPrevLayers = 2^(lastLayer - 1) - 1
(Eq. 3) stallsLastLayer = totalStalls - custPrevLayers
- the number of groups
(Eq. 4) nbrGroupsLastLayer = 2^(lastLayer - 1)
- and the number of customers in the last layer
(Eq. 5) custLastLayer = totalCustomers - custPrevLayers
Thus, the average group size is
(Eq. 6) avgGroupSizeLastLayer = stallsLastLayer / nbrGroupsLastLayer
Since avgGroupSizeLastLayer will in most cases not be an integer, we end up with two different
group sizes:
(Eq. 7) largeGroupSize = ceil(avgGroupSizeLastLayer)
(Eq. 8) smallGroupSize = largeGroupSize - 1
The last step is to figure out how many large groups are present in the last layer. This can
be calculated by means of (Eq. 9) and (Eq. 10):
(Eq. 9) stallsLastLayer = largeGroupSize * nbrLargeGroups + smallGroupSize * nbrSmallGroups
(Eq. 10) nbrGroupsLastLayer = nbrLargeGroups + nbrSmallGroups
Solving (Eq. 9) for nbrSmallGroups and inserting it into (Eq. 10) leaves us with
(Eq. 11) nbrLargeGroups = (stallsLastLayer - (nbrGroupsLastLayer * smallGroupSize))/(largeGroupSize - smallGroupSize);
If custLastLayer is not larger than nbrLargeGroups, min and max are to be calculated
with the largeGroupSize; otherwise with the smallGroupSize.*/
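As a sanity check on Eqs. 1–11, here is a compact stand-alone sketch of the whole calculation. The helper name `lastStall` is made up for this illustration, and Eq. 11 is folded down using the fact that largeGroupSize − smallGroupSize = 1:

```c
#include <stdint.h>

/* Hypothetical helper implementing Eqs. 1-11 directly.
   n = stalls, k = customers; writes max(LS,RS) and min(LS,RS). */
static void lastStall(uint64_t n, uint64_t k, uint64_t *mx, uint64_t *mn)
{
    uint64_t layer = 1, capacity = 1;                   /* Eq. 1 via iteration */
    while (k > capacity) { layer++; capacity = 2 * capacity + 1; }

    uint64_t custPrev = ((uint64_t)1 << (layer - 1)) - 1;   /* Eq. 2 */
    uint64_t stalls   = n - custPrev;                       /* Eq. 3 */
    uint64_t groups   = (uint64_t)1 << (layer - 1);         /* Eq. 4 */
    uint64_t custLast = k - custPrev;                       /* Eq. 5 */

    uint64_t large  = stalls / groups + (stalls % groups != 0); /* Eq. 7 */
    uint64_t nLarge = stalls - groups * (large - 1);        /* Eq. 11, since
                                                               large - small == 1 */
    uint64_t size = (custLast <= nLarge) ? large : large - 1;

    *mx = size / 2;           /* splitting the chosen group, as in the code below */
    *mn = (size - 1) / 2;
}
```

Against the five sample cases this reproduces 1 0, 1 0, 1 1, 0 0 and 500 499.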
Implementation:
/****************************************************************************************
Includes
*****************************************************************************************/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
#include <stddef.h>
/****************************************************************************************
Defines
*****************************************************************************************/
#define MIN(X,Y) ((X) < (Y) ? (X) : (Y))
#define MAX(X,Y) ((X) > (Y) ? (X) : (Y))
#define MAX_LINE_LENGTH (80)
/****************************************************************************************
Structs and typedefs
*****************************************************************************************/
typedef unsigned long long uint64;
typedef struct TestCase_s
{
uint64 stalls;
uint64 customers;
uint64 max;
uint64 min;
} TestCase_t;
/****************************************************************************************
Forward declarations
*****************************************************************************************/
static void getTestCases(FILE *fp, TestCase_t testCases[], int nbrOfTestcases);
static int getNumberOfTestCases(FILE *fp);
static uint64 getNumberOfStalls(char line[]);
static uint64 getNumberOfCustomers(char line[]);
static int getLine(char line[], size_t buflen, FILE *stream);
static uint64 getNumberOfLargeGroups(uint64 sizeLargeGroup, uint64 sizeSmallGroup, uint64 stallsFree, uint64 totalGroups);
static uint64 getLayers(uint64 customers);
static void calcStallsLeftRight(uint64 stalls, uint64 *stallsLeft, uint64 *stallsRight);
static void outputToFile(TestCase_t testCases[], int entries, char *outputFile);
/****************************************************************************************
Public functions
*****************************************************************************************/
int main (void)
{
FILE *fp;
int nbrTestCases = 0;
char input[] = "Debug/C-large-practice.in";
char output[] = "Debug/output.txt";
fp = fopen(input, "r");
if (fp == NULL)
{
perror("Error while opening the file.\n");
exit(EXIT_FAILURE);
}
nbrTestCases = getNumberOfTestCases(fp);
TestCase_t *testCases = calloc(nbrTestCases, sizeof(*testCases));
getTestCases(fp, testCases, nbrTestCases);
fclose(fp);
int currentCase = 0;
while (currentCase < nbrTestCases)
{
uint64 stallsToLeft;
uint64 stallsToRight;
TestCase_t *curTC = &testCases[currentCase];
uint64 lastLayer = getLayers(curTC->customers);
uint64 custPrevLayers = pow(2, lastLayer - 1) - 1;
uint64 stallsLastLayer = curTC->stalls - custPrevLayers;
uint64 nbrGroupsLastLayer = pow(2, lastLayer - 1);
uint64 sizeLargeGroup = stallsLastLayer/nbrGroupsLastLayer + (stallsLastLayer % nbrGroupsLastLayer != 0); // ceil
if (sizeLargeGroup == 1)
{
stallsToLeft = 0;
stallsToRight = 0;
}
else
{
uint64 customersLastLayer = curTC->customers - custPrevLayers;
uint64 sizeSmallGroup = sizeLargeGroup - 1; // size of small group is always one less than number of big group
uint64 largeGroups = getNumberOfLargeGroups(sizeLargeGroup, sizeSmallGroup, stallsLastLayer, nbrGroupsLastLayer);
if (largeGroups >= customersLastLayer)
{
calcStallsLeftRight(sizeLargeGroup, &stallsToLeft, &stallsToRight);
}
else
{
calcStallsLeftRight(sizeSmallGroup, &stallsToLeft, &stallsToRight);
}
}
curTC->max = MAX(stallsToLeft, stallsToRight);
curTC->min = MIN(stallsToLeft, stallsToRight);
currentCase++;
}
outputToFile(testCases, nbrTestCases, output);
printf("done");
return 0;
}
/****************************************************************************************
Private functions
*****************************************************************************************/
/**
* Calculates the number of layers needed for a number of customers.
*/
static uint64 getLayers(uint64 customers)
{
uint64 layers = 1;
uint64 possibleCustomers = 1;
while (customers > possibleCustomers)
{
layers++;
possibleCustomers = possibleCustomers << 1;
possibleCustomers = possibleCustomers | 1;
}
return layers;
}
/**
* Calculate the number of large groups
*/
static uint64 getNumberOfLargeGroups(uint64 sizeLargeGroup, uint64 sizeSmallGroup, uint64 stallsFree, uint64 totalGroups)
{
return (stallsFree - (totalGroups * sizeSmallGroup))/(sizeLargeGroup - sizeSmallGroup);
}
/**
* Calculates the max number of stalls to the left and right of the stall in the middle.
* If stalls is an even number the stall to the left is chosen to base the calculation on.
*/
static void calcStallsLeftRight(uint64 stalls, uint64 *stallsLeft, uint64 *stallsRight)
{
*stallsLeft = (stalls - 1)/2;
*stallsRight = stalls/2;
}
/**
* Reads test cases from file and fills them into the preallocated memory
*/
static void getTestCases(FILE *fp, TestCase_t testCases[], int nbrOfTestcases)
{
char *line = malloc(MAX_LINE_LENGTH);
int i = 0;
while((getLine(line, MAX_LINE_LENGTH, fp) != -1) && (i < nbrOfTestcases))
{
testCases[i].stalls = getNumberOfStalls(line);
testCases[i].customers = getNumberOfCustomers(line);
i++;
line = malloc(MAX_LINE_LENGTH);
}
free(line);
}
/**
* Number of stalls is the first number in a line
*/
static uint64 getNumberOfStalls(char line[])
{
return atoll(line);
}
/**
* Number of customers is the second number in a line
*/
static uint64 getNumberOfCustomers(char line[])
{
int i = 0;
while(line[i] != ' ')
{
i++;
}
i++;
return atoll(&line[i]);
}
/**
* Number of test cases is the first line in a file.
*/
static int getNumberOfTestCases(FILE *fp)
{
char *line = malloc(MAX_LINE_LENGTH);
getLine(line, MAX_LINE_LENGTH, fp);
return atoi(line);
}
static int getLine(char line[], size_t buflen, FILE *stream)
{
int i = 0;
char c = getc(stream);
if (c == EOF)
{
return EOF;
}
while(c != EOF && c != '\n' && i < buflen)
{
line[i++] = c;
c = getc(stream);
}
line[i] = '\0';
return 0;
}
/**
* Writes the test results to an output file
*/
static void outputToFile(TestCase_t testCases[], int entries, char *outputFile)
{
FILE *fp;
fp = fopen(outputFile,"wb"); // write mode
if (fp == NULL)
{
perror("Error while opening the file.\n");
exit(EXIT_FAILURE);
}
for (int i = 0; i < entries; i++)
{
fprintf(fp, "Case #%d: %I64d %I64d\n", i + 1, testCases[i].max, testCases[i].min);
}
fclose(fp);
}
• What's your C "level"? Maybe you want to add that to your question, so that answers are appropriate. – Zeta May 7 '17 at 11:36
• I gained most of my experience in the development of embedded C (bare metal) software, therefore I am not very familiar with the operating system interface. The main focus of the review should be on the algorithm but any kind of feedback is welcome! – Frode Akselsen May 7 '17 at 12:48
• Ha I also tried this problem ! – Yk Cheese May 7 '17 at 20:51
• "Nobody will ever leave" W-why not? – Nic Hartley May 8 '17 at 0:09
• @QPaysTaxes Its Hotel California or superglue. – Martin York May 8 '17 at 8:26
A very nice question.
The question doesn't specify what exactly is desired from the code review, so here are just a few suggestions. First, I'm not sure the algorithm takes the guards in the 2 extra stalls into account: the graphic doesn't show them, but they do affect the algorithm.
Memory Leaks
The following code allocates memory for the variable line N + 1 times where N is the number of test cases, but only frees the allocated memory once.
/**
* Reads test cases from file and fills them into the preallocated memory
*/
void getTestCases(FILE *fp, TestCase_t testCases[], int nbrOfTestcases)
{
char *line = malloc(MAX_LINE_LENGTH);
int i = 0;
while((getLine(line, MAX_LINE_LENGTH, fp) != -1) && (i < nbrOfTestcases))
{
testCases[i].stalls = getNumberOfStalls(line);
testCases[i].customers = getNumberOfCustomers(line);
i++;
line = malloc(MAX_LINE_LENGTH);
}
free(line);
}
The testCases variable is never freed.
It May be Better to Use Standard C Library Functions for Input
The fgets(char *buffer, int bufferLength, FILE *stream) Standard C Library function has almost the same signature as int getLine(char line[], size_t buflen, FILE *stream) and performs almost exactly the same function. The difference is that fgets() returns a pointer to the filled string buffer. A null pointer is returned where getLine() returns EOF.
The function fgets() may perform better than getLine() because it uses buffered input.
The Single Responsibility Principle states that every module or class should have responsibility over a single part of the functionality provided by the software, and that responsibility should be entirely encapsulated by the class. All its services should be narrowly aligned with that responsibility.
Robert C. Martin expresses the principle as follows:
A class should have only one reason to change.
While this is primarily targeted at classes in object oriented languages it applies to functions and subroutines in procedural languages like C as well.
The main function could be broken up into at least 2 more functions:
int getInput(int* nbrTestCases, TestCase_t* testCases[]) // return EXIT_SUCCESS or EXIT_FAILURE
int executeAlgorithm(int nbrTestCases, TestCase_t testCases[]) // return EXIT_SUCCESS or EXIT_FAILURE
Inconsistent Use of System Constants
The main() function may exit using EXIT_FAILURE, but it doesn't return EXIT_SUCCESS at the end. It might be more readable if it used both constants.
Test for Input Errors
The while loop in the function getTestCases() tests the return value of getLine(), but the function getNumberOfTestCases() does not test the return value of getLine() before using the value of line which may be of length zero.
The length of line is not tested prior to use in getTestCases(), getNumberOfStalls() or getNumberOfCustomers().
There is no test to ensure that the contents of line are numeric before calling atoll() or atoi().
• thanks for the thorough review, the guards do play a role, but they are rather there to simplify the problem. If the guards were not there, the first two customer have to make a different decision in choosing a stall. (they would probably choose the left and right most stalls to keep distance maximum and from there on it would be the same algorithm) – Frode Akselsen May 8 '17 at 0:11
Your getLine is broken:
If your buffer is too small, it will discard the last-read byte and write a NUL behind the buffer!
Also, it mixes int and size_t with potentially fatal consequences.
80 bytes is a fairly small amount of memory. Getting it from the stack is no trouble, and far more efficient than asking malloc.
• char c is incorrect. The reason is that it is implementation defined whether char is signed or unsigned. On a platform with char being unsigned, it will never compare equal to EOF. It is strongly recommended to declare it int c.
• Combining test case data and results in the same structure is another SRP violation. I recommend to separate them into
struct TestCase {
uint64 stalls;
uint64 customers;
}
and
struct Result {
uint64 max;
uint64 min;
}
• Reading in all test cases in advance leads to unnecessary complications. I recommend to restructure the code as
TestCase_t testCase;
Result_t result;
for (int i = 0; i < nbrTestCases; i++) {
getTestCase(&testCase);
solveTestCase(&testCase, &result);
printResults(i + 1, &result);
}
• atoll returns a long long, while your values are unsigned long long. Mixing signed and unsigned is generally unsafe. Even though in this case you are OK, I recommend using strtoull instead. Another reason to use strtoull is that it tells you where the token ended, so there's no need to manually search for it:
char * end;
testCase.stalls = strtoull(line, &end, 10);
testCase.customers = strtoull(end, NULL, 10);
https://orbiter-forum.com/threads/vulcancentaur.40142/#post-587067 | # ProjectVulcanCentaur
#### francisdrake
The VulcanCentaur is a two-stage-to-orbit, heavy-lift launch vehicle under development by the United Launch Alliance (ULA). Attached is an early development add-on, with limited capability.
For a launcher already in production, very little technical data can be found, aside from promotional stuff.
Specifically on the new Centaur V data are scarce. I would be thankful if somebody could point me to things like wet and dry mass of each stage, or a flight profile!
The idea is to include more functions and better meshes over time, but please be patient with me.
#### Attachments
• VulcanCentaur-05.zip
2.6 MB · Views: 12
• Tenacity Dream Chaser-00.zip
2.2 MB · Views: 12
#### Gargantua2024
##### The Desktop Orbinaut
For a launcher already in production, very little technical data can be found, aside from promotional stuff.
Specifically on the new Centaur V data are scarce. I would be thankful if somebody could point me to things like wet and dry mass of each stage, or a flight profile!
The new Vulcan Centaur addon looks promising already! In the version me and gattispilot made for the Artemis landers thread and later posted on OHM, I used the specifications of the rocket stages posted from the Space Launch Report site, while also taking guide from the official website of ULA itself:
The grey column is the specs for the Centaur stage used on Atlas V. I think the current version and Centaur V will have almost identical performance, though of course Centaur V will carry a lot more propellant than the Atlas V version.
#### Gargantua2024
##### The Desktop Orbinaut
Here's an SCN with Vulcan-Centaur positioned at Pad 41:
Code:
BEGIN_DESC
This is a test flight of the VulcanCentaur.
Launch the Peregrine Lunar Lander into an eastward trajectory. Pull backwards.
When in orbit, wait until the dashed node line, then restart the Centaur and perform the Trans Lunar Insertion (TLI) burn .
END_DESC
BEGIN_ENVIRONMENT
System Sol
Date MJD 61943.6945518604
END_ENVIRONMENT
BEGIN_FOCUS
Ship VulcanCentaur
END_FOCUS
BEGIN_CAMERA
TARGET VulcanCentaur
MODE Extern
POS 3.970000 16.180000 -42.570000
TRACKMODE Ground Earth
GROUNDLOCATION -80.58099 28.58409 46.26
FOV 40.00
END_CAMERA
BEGIN_HUD
TYPE Surface
END_HUD
BEGIN_MFD Left
TYPE Surface
SPDMODE 1
END_MFD
BEGIN_MFD Right
TYPE Orbit
PROJ Ship
FRAME Equator
ALT
REF Earth
END_MFD
BEGIN_SHIPS
VulcanCentaur:VulcanCentaur\VulcanCentaur
STATUS Landed Earth
POS -80.5828310 28.5834560
ALT 40.287
AROT 151.086 -8.299 94.537
AFCMODE 7
PRPLEVEL 0:1.0 1:1.0
NAVFREQ 0 0
MODE 0
FAIRING 1
ICPS
END
Peregrine:VulcanCentaur\Peregrine
STATUS Landed Earth
POS -80.5828310 28.5834560
ATTACHED 0:0,VulcanCentaur
PRPLEVEL 0:1
END
LC41_crewtower:Vessels/B_SLC41/b_slc41_crewtower
STATUS Landed Earth
POS -80.5829310 28.5835960
END
LC41:Vessels/B_SLC41/b_slc41
STATUS Landed Earth
POS -80.5828310 28.5834560
PRPLEVEL 0:1.000
THLEVEL 0:1.000 3:1.000
NAVFREQ 0 0
UMB 0 0.0000
END
END_SHIPS
#### francisdrake
In the version me and gattispilot made for the Artemis landers thread and later posted on OHM, I used the specifications of the rocket stages posted from the Space Launch Report site
Thank you for the data, these are really helpful! The values in your table are close to my ballpark-estimate. For example, for the Centaur I estimated a propellant mass of 52000 kg, your spreadsheet says 54000 kg. For the core stage I overestimated the propellant somehow, will correct this.
A question on the SLC 41 pad: Which one are you using? I tried installing the MRO addon, but got only an empty pad (no meshes).
A more general question:
What is the correct way to attach a 'spacecraft'-vessel to a dll-launcher?
I would love to fly the X-37B on this launcher, but it seems I miss out some lines in the cfg-file.
#### Gargantua2024
##### The Desktop Orbinaut
> A question on the SLC 41 pad: Which one are you using? I tried installing the MRO addon, but got only an empty pad (no meshes).
> A more general question:
> What is the correct way to attach a 'spacecraft'-vessel to a dll-launcher?
> I would love to fly the X-37B on this launcher, but it seems I miss out some lines in the cfg-file.
I'm no expert at this, but as far as I can tell, the payload should define at least one child attachment of its own to allow itself to be attached to the DLL-coded launcher.
#### francisdrake
I tried the Starliner addon for the SLC 41 scenery, but had this strange 90° upwards effect, where the meshes are all canted sideways. Will give it another try later on.
The child_attach for the spacecraft vessel was an excellent tip!
I added a child attachment point to the X-37B.ini file to the rear of the vessel, and now it works.
For those who want to try:
I installed the Boeing_X-37B_110405.
Then modified the X-37B.ini file by adding
Code:
[CHILD_ATTACH_0]
POS=(0,0,-4.3)
DIR=(0,0,-1)
ROT=(0,-1,0)
TOPARENT=1
LOOSE=0
ID="XS"
#### Gargantua2024
##### The Desktop Orbinaut
> I tried the Starliner addon for the SLC 41 scenery, but had this strange 90° upwards effect, where the meshes are all canted sideways. Will give it another try later on.
Strange, I thought I got the working SLC-41 pad from there. Have you tried installing from any of Abdullah Radwan's recompiles of MRO, MAVEN, GPS-2F4, or SDO? One of those mods should have the working SLC-41
#### Mr. Residuals
> where the meshes are all canted sideways
#### Gargantua2024
##### The Desktop Orbinaut
This is the DLL of the SLC-41 pad that works for me, though I don't remember which mod I got this from. The crew tower is definitely from the Starliner addon, however.
If the pad is still 90 degrees off from where it should be, here's the state as saved from the (Current state).scn
Code:
LC41_crewtower:Vessels/B_SLC41/b_slc41_crewtower
STATUS Landed Earth
POS -80.5829310 28.5835960
ALT 2.000
AROT 60.301 -5.406 8.298
AFCMODE 7
NAVFREQ 0 0
END
LC41:Vessels/B_SLC41/b_slc41
STATUS Landed Earth
POS -80.5828310 28.5834560
ALT -0.183
AROT 151.079 -8.260 -175.462
AFCMODE 7
PRPLEVEL 0:1.000000
NAVFREQ 0 0
UMB 1 1.0000
END
#### Attachments
• SLC-41.zip (1 MB)
#### francisdrake
Thanks, this SLC41 is upright! Tried it, everything looks fine, but the rocket gets a little 'side push' on launch. Maybe a side effect of the touchdown points. Will see if I can find the reason.
Focus at the moment is implementing the number of SRBs and fairing size (standard or large), controlled by a version designation in the scenario file. VC6L = VulcanCentaur, 6 SRBs, Long fairing.
#### francisdrake
Update 01: The zip-file in the first post was updated.
Made some progress on the launcher configurations. It can now have 0, 2, 4 or 6 SRBs, and either a standard or a long fairing. This is controlled by the configuration key, like "VC2S" for VulcanCentaur 2 SRBs, standard fairing.
Still have to hunt down several bugs in mass allocation.
Materials and textures made lighter, to improve the look while sitting on the launch pad. Still launching from pad 39A, as I did not look into the SLC41 scenarios yet.
Note that the launcher handles heavier now. In the previous version I forgot to add the Centaur propellant mass to the first stage dead load.
#### francisdrake
A short update: I found the payload bug! Took me two evenings ...
The reason was that it is necessary to call 'clbkPostCreation' to update the vessel after initialization.
Why did I not get the idea earlier?
In the API reference there is the chapter
8.2 The frame update loop and vessel module callback functions
8.2.1 Frame update diagram
... but there is no diagram!
I would appreciate it if this diagram could be included in the next Orbiter release, or if someone could point me to a place where it can be found.
#### francisdrake
New version 02 in the first post of this thread.
Switched to BrianJ's fantastic SLC41 launchpad!
Caught several bugs, like mass allocation, touchdown points, crash on payload separation, etc.
Scenarios were renamed. Now an ISS-resupply scenario with a DreamChaser boilerplate vessel is included.
For those who want to fly the 'real' DreamChaser, an Xperimental scenario is included.
General flight advice: After launch, pull back slightly to achieve a heads-down attitude. Aim for a 120-150 km apoapsis with the first stage. When the first-stage fuel is down to 70 tons, pitch up to 40°. This is to gain upward momentum. The Centaur has a very low thrust-to-weight ratio and will drop back into the atmosphere if not boosted upwards.
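As a rough back-of-the-envelope check on that last point, here is a quick thrust-to-weight estimate for the Centaur at staging. All numbers are ballpark assumptions chosen for illustration (two RL10-class engines at roughly 99 kN each, the ~54 t propellant load discussed earlier in this thread, plus guessed dry and payload masses); none of this is read from the addon:

```python
# Thrust-to-weight ratio: values below 1 mean the stage cannot climb on
# thrust alone and relies on the upward momentum given to it by the
# first stage.
G0 = 9.81  # standard gravity, m/s^2

def thrust_to_weight(thrust_n: float, mass_kg: float) -> float:
    return thrust_n / (mass_kg * G0)

# Assumed: 2 engines x 99 kN; 54 t propellant + 5 t dry + 10 t payload.
twr = thrust_to_weight(2 * 99_000, 54_000 + 5_000 + 10_000)
print(f"Centaur TWR at ignition ~ {twr:.2f}")
```

With these assumptions the ratio comes out around 0.3, well below 1, which is why a lofted trajectory (pitching up before staging) is needed.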
#### francisdrake
New version 03 on the front page!
Updated textures, flames and exhausts. Added support for the CST-100 StarLiner. The scenario is in 'Xperimental' and requires the Starliner addon CST-100-06.
#### Jeremyxxx
##### Active member
> New version 03 on the front page!
> Updated textures, flames and exhausts. Added support for the CST-100 StarLiner. The scenario is in 'Xperimental' and requires the Starliner addon CST-100-06.
That is what the launch of the Starliner would look like following the Atlas V's retirement
#### francisdrake
Yes, with all the delays of the StarLiner development, they may run out of Atlas V's before StarLiner flies.
#### DaveS
##### Space Shuttle Ultra Project co-developer
> Yes, with all the delays of the StarLiner development, they may run out of Atlas V's before StarLiner flies.
You might want to make the adapter between the Centaur V and Starliner match that of the Starliner SM to ensure a smooth transition. The way you have it right now, it creates parasitic drag by creating a low-pressure zone between the Centaur and the Starliner SM. This not only reduces performance but also creates extra heating and structural loading.
#### Gargantua2024
##### The Desktop Orbinaut
> Yes, with all the delays of the StarLiner development, they may run out of Atlas V's before StarLiner flies.
Out of the 28 missions left of the Atlas V, only 8 were reserved for Starliner flights (OFT-2, CFT, Starliner 1 to 6). So yeah, it could switch to Vulcan Centaur most likely starting with Boeing Starliner 7
#### francisdrake
On the adapter between the StarLiner and the Centaur:
The top of the adapter has the same outer diameter as the SM.
This dark line may cause an optical illusion when viewed at an oblique angle.
The StarLiner has a finer mesh than the Centaur, which causes some irregularities along the circumference.
This I have to improve to look better.
Still unsolved is how the vessel is supported internally on the PAF. On the Atlas V a cradle is used, which is 1.6 m high.
If the same is used here, the length of the adapter would nearly double, with a steeper cone angle.
I will probably make this as an alternative design and then compare the two, to find out which one looks more realistic.
#### MaxBuzz
##### Well-known member
> Yes, with all the delays of the StarLiner development, they may run out of Atlas V's before StarLiner flies.
A couple of weeks ago there was news that Roskosmos sent its engineers to fix the errors of the Raptor engine.
http://iaaras.ru/en/about/issues/emp/intro/
# Ephemerides of Minor Planets: Introduction
As in the previous year, the Ephemerides of Minor Planets for 2021 are prepared by the IAA RAS only in electronic form. Distribution of the EMP-2021 is accomplished via the Internet or by sending compact disks (CD) by mail.
Compared with the period before 2012, substantial changes have been introduced into the process of preparing the EMP. Orbital elements of all numbered planets have been redetermined at the IAA on the basis of the observations available in the catalog of the Minor Planet Center. Some changes have been introduced in the procedure for taking perturbations into account. More detailed information on the orbital elements used in the present volume of the EMP and their accuracy is given in the section "Information on new orbital elements".
The data on the CD for 2021 comprise information on elements and ephemerides for 545135 numbered objects as of April 30, 2020. The form of representation of these data is traditional. Orbital data and ephemerides for several dwarf planets defined by the resolution of the 26th General Assembly of the IAU are also included in the volume. In addition, the volume contains a Table of elements of all numbered near-Earth asteroids (NEAs). Two further Tables contain orbital elements of Centaurs and transneptunian objects and those of some unusual asteroids. Ephemerides of NEAs and unusual asteroids covering near-opposition intervals and intervals of close approaches to the Earth are also given.
The data are presented as PDF files. They can be displayed/printed by a variety of applications, in particular by Acrobat Reader. Apart from the full set of Tables of the EMP for 2021, the CD also contains the integrated software package AMPLE (Adaptable Minor Planet Ephemerides), which is intended for computing various ephemeris data for 2021 and for solving a number of problems associated with their use (see information on AMPLE at the end of the Introduction).
The volume of the EMP for 2021 (the seventy-fifth year of publication) contains:
• information on new orbital elements;
• orbital elements of 545135 minor planets numbered as of April 30, 2020 and dates of their oppositions in 2021;
• Table of orbital elements of all near-Earth asteroids (NEAs) numbered as of April 30, 2020 and dates of their oppositions in 2021;
• Table of orbital elements of Centaurs and transneptunian objects and dates of their oppositions in 2021;
• Table of orbital elements of some unusual minor planets (Jupiter-crossers, Jupiter-approachers and some Mars-crossers) and dates of their oppositions in 2021;
• osculating elements of perturbing planets;
• minor planet lightcurve parameters;
• ambiguous periods of lightcurves;
• binary asteroid lightcurves;
• binary asteroid parameters;
• non-principal axis rotation (tumbling) asteroids;
• asteroid spin axes;
• list of all minor planets in order of opposition dates in 2021;
• ephemerides of NEAs and some unusual minor planets;
• status of minor planet observations as of April 30, 2020;
• positions of the antisun and the Moon;
• information on the AMPLE package;
• information on asteroid families as obtained in IAA RAS by T. A. Vinogradova.
In conformity with the resolution of IAU Commission 20 (New Delhi, November 1985, the Minor Planet Circulars 10193 — 10194), the so-called H, G magnitude system for minor planets is used in the EMP. A formula for the prediction of the apparent magnitude is
$$V = 5 \log{(r \Delta)} + H - 2.5 \log{[(1 - G) \Phi_1 + G \Phi_2]} \tag{1}$$
where r and Δ are the heliocentric and geocentric distances, respectively, H is the absolute magnitude (in the V band unless otherwise specified) at solar phase angle β = 0°, G is termed the slope parameter, and Φ1 and Φ2 are two phase functions given by the expressions:
$$\Phi_i = \exp{\left[ -A_i \left( \tan{\frac{\beta}{2}} \right)^{B_i} \right]}, \ \ i = 1, 2,$$ $$A_1 = 3.33, A_2 = 1.87, B_1 = 0.63, B_2 = 1.22$$
The formula (1) predicts the observed opposition surge and the non-linear drop-off in brightness at large phase angles, and is valid for 0° < β < 120°. H and G are fundamental photometric parameters for each minor planet. They are related to the previously used absolute magnitude B(1,0) and the phase coefficient. In particular, conversion from the B band of the old system to the V band of the new system is conveniently carried out using the approximate relationship H = B(1,0) – 1 mag. Further details have been given by E. Bowell, B. Hapke, D. Domingue, K. Lumme, J. Peltoniemi, and A. W. Harris in "Asteroids II" (eds. R. P. Binzel, T. Gehrels and M. S. Matthews), p. 524 – 556, Univ. of Arizona Press, Tucson, 1989.
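For readers who want to evaluate the H, G system numerically, here is a minimal sketch of formula (1) and the two phase functions in plain Python (the function and variable names are my own; angles in degrees, distances in a.u.):

```python
import math

A = (3.33, 1.87)   # A_1, A_2
B = (0.63, 1.22)   # B_1, B_2

def apparent_magnitude(H, G, r, delta, beta_deg):
    """Predicted apparent magnitude V in the H, G system.

    H        -- absolute magnitude
    G        -- slope parameter
    r        -- heliocentric distance (a.u.)
    delta    -- geocentric distance (a.u.)
    beta_deg -- solar phase angle in degrees (valid for 0 <= beta < 120)
    """
    t = math.tan(math.radians(beta_deg) / 2.0)
    phi1 = math.exp(-A[0] * t ** B[0])
    phi2 = math.exp(-A[1] * t ** B[1])
    return 5.0 * math.log10(r * delta) + H \
        - 2.5 * math.log10((1.0 - G) * phi1 + G * phi2)

# At beta = 0 and r = delta = 1 a.u., both phase functions equal 1,
# so V reduces to H:
print(apparent_magnitude(15.0, 0.15, 1.0, 1.0, 0.0))  # 15.0
```

Note the opposition surge: increasing the phase angle from 0° makes the predicted magnitude fainter (numerically larger), as the formula requires.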
The magnitudes in the present volume of the EMP are based on those values of H and G which were published in MPC 28103 – 28116 and in subsequent issues. The parameters for the first 3200 minor planets were determined mainly by E. F. Tedesco (Jet Propulsion Laboratory, U.S.A.). For almost all remaining minor planets the H values were determined by G. V. Williams (Harvard-Smithsonian Center for Astrophysics, U.S.A.).
## Information on New Orbital Elements
Since last year's EMP, the number of numbered minor planets has increased by 21311. Elements of all numbered minor planets have been determined by the IAA. The sets of elements are given for the standard epoch July 5, 2021. They were determined as a result of orbit improvement on the basis of all kinds of optical observations available in the catalog of observations of the Minor Planet Center on April 30, 2020 (radar observations have not been used at this stage). Brief information on the accuracy of each set of elements is given in the Table on p. 10–1145. It contains the following data:
1. Number of planet.
2. The number of oppositions with observations used for determination of the orbit.
3. The time interval covered by the observations used for determination of the orbit.
4. The root-mean-square error of the observations, expressed in arcsec, with respect to orbits fitted by least squares.
When assessing these data it is necessary to take the following into consideration.
1. Corrections to the initial sets of elements were determined by a least-squares fit of weighted conditional equations. In so doing, observations made before 1901 were assigned a weight of 1/16, observations made from 1901 to 1950 a weight of 1/9, observations made from 1951 to 1995 a weight of 1/4, and finally observations from 1996 onward were given unit weight.
2. Observations of only standard accuracy were used for orbit improvement.
3. Observations in right ascension and in declination were considered independent, so that the conditional equation in right ascension, say, could be excluded from the solution by "the three sigma criterion" while the equation in declination was used, or vice versa.
4. An observation is considered as used if both of its conditional equations in right ascension and declination, or at least one of them, are used in finding the solution.
5. The root-mean-square error of the observations, given in column 4 of the Table, was computed by the formula
$$\sigma = \sqrt\frac{\sum_{i=1}^l[(\alpha_O - \alpha_C)\cos\delta]_i^2 + \sum_{j=1}^{m}[\delta_O - \delta_C]_j^2}{l+m-6},$$
where l is the number of equations in right ascension and m is that in declination. Note that the weights appearing in the conditional equations are not used in the above formula.
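As an illustration, the unweighted RMS defined by the formula above can be computed as follows (a short sketch; the residual values in the example are invented, and the names are my own):

```python
import math

def rms_of_fit(ra_res, dec_res, n_params=6):
    """Root-mean-square error of an orbit fit, in arcsec.

    ra_res   -- residuals (alpha_O - alpha_C) * cos(delta), arcsec
    dec_res  -- residuals (delta_O - delta_C), arcsec
    n_params -- number of fitted parameters (6 orbital elements)
    """
    l, m = len(ra_res), len(dec_res)
    ss = sum(x * x for x in ra_res) + sum(y * y for y in dec_res)
    return math.sqrt(ss / (l + m - n_params))

# Invented residuals for 5 observations (5 RA + 5 Dec equations):
sigma = rms_of_fit([0.2, -0.1, 0.3, 0.0, -0.2], [0.1, 0.2, -0.3, 0.1, 0.0])
print(f"sigma = {sigma:.3f} arcsec")
```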
In the Table below, the brief information on the accuracy of the orbital elements published in EMP-2021 is compared with the corresponding information on the accuracy of the orbital elements included in the minor planet orbit file maintained by the Minor Planet Center. For different intervals of the σ-value, the Table gives the number of sets of elements determined with mean errors that fall within the indicated limits.
| Interval of σ | EMP orbits: number of sets | EMP orbits: percent of total | MPC orbits: number of sets | MPC orbits: percent of total |
| --- | --- | --- | --- | --- |
| σ ≥ 1.0″ | 14 | 0.003 | 21 | 0.004 |
| 0.9″ ≤ σ < 1.0″ | 28 | 0.005 | 15 | 0.003 |
| 0.8″ ≤ σ < 0.9″ | 57 | 0.010 | 30 | 0.006 |
| 0.7″ ≤ σ < 0.8″ | 162 | 0.030 | 163 | 0.030 |
| 0.6″ ≤ σ < 0.7″ | 498 | 0.091 | 540 | 0.099 |
| 0.5″ ≤ σ < 0.6″ | 3383 | 0.621 | 6462 | 1.185 |
| 0.4″ ≤ σ < 0.5″ | 74097 | 13.592 | 224513 | 41.185 |
| 0.3″ ≤ σ < 0.4″ | 210421 | 38.600 | 230539 | 42.290 |
| 0.2″ ≤ σ < 0.3″ | 220376 | 40.426 | 80655 | 14.795 |
| 0.1″ ≤ σ < 0.2″ | 35804 | 6.568 | 2172 | 0.398 |
| σ < 0.1″ | 295 | 0.054 | 25 | 0.005 |
These data show that the quality of the new sets of elements is in general good: at least 99.24 % of the EMP orbits have σ < 0.5″, compared with 98.67 % of the MPC orbits.
## Elements and Opposition Dates in 2021
The elements of the numbered minor planets are given with respect to the ecliptic and equinox J2000.0. The computation of osculating elements for the new standard epoch JDT 2459400.5 = 2021 July 5.0 TT was carried out by numerical integration of relativistic equations of motion in rectangular coordinates taking into account the perturbations from Mercury to Neptune and from Pluto, Ceres, Pallas, and Vesta. Coordinates and masses of perturbing planets were taken from DE 405. The perturbations from the Earth and the Moon were considered separately.
Osculating elements of minor planets are given on p. 1146–7960. The first column of the Table gives the number and name or principal provisional designation of each minor planet. The second one gives the absolute magnitude, H, that is, the brightness averaged over rotation for minor planets having known lightcurves and reduced to unit heliocentric and geocentric distances and to zero phase angle. The third column gives the slope parameter, G.
As noted above, the values of H and G are given this year in accordance with the listing published in MPC 28103–28116 and in subsequent issues. In general the parameters given for the first 3200 minor planets are precisely those of the previous listing, prepared mainly by E. F. Tedesco. When H and G were determined sufficiently precisely from photoelectric observations (frequently by least-squares fitting, although in some instances G was simply adopted), H values are given to 0.01 mag. Actual solutions for G (ranging from –0.12 to +0.60) have been made in only 111 cases. In the Table these values are given to 0.01. For by far the majority of minor planets the default value G = 0.15 is used, denoted in the Table by X. Magnitudes based solely on photographic observations, or for photoelectrically observed asteroids with large lightcurve and/or aspect variations (cf. the lightcurve parameter Table, p. 8069–8271), are given to 0.1 mag. For almost all minor planets with numbers greater than 3200 the values of H were found by G. V. Williams (also to 0.1 mag and using G = 0.15) from magnitudes given on the astrometric records collected by the Minor Planet Center. For the photoelectrically observed NEAs (3361), (3551), (3554), (3757) and (4015) the more precise H values from the previous listing are preserved. For (4179) the H and G values suggested by D. J. Tholen (Institute for Astronomy, University of Hawaii, U.S.A.) are adopted. The new values of H determined by G. V. Williams are also adopted for 98 minor planets with numbers less than 3200. In three other discordant cases improved H values were suggested by A. W. Harris (Jet Propulsion Laboratory, U.S.A.).
The orbital elements (mean anomaly, M, argument of perihelion, ω, longitude of ascending node, Ω, inclination, i, eccentricity, e, mean motion, μ, and semi-major axis, a ) are given in the columns from 4 to 10.
The eleventh column of the Table, headed TE, contains the last two digits of the title year of the EMP in which the orbit was first introduced and where information on its precision can be found.
The 12th and the 13th columns of the Table contain opposition date of minor planet in 2021 and its apparent magnitude for the fourth ephemeris date. A dash indicates that there is no opposition in 2021.
## Elements and Opposition Dates of NEAs
The Table of elements and opposition dates of the near-Earth asteroids (NEAs) (see p. 7961 – 7997) contains osculating elements of the numbered minor planets with perihelion distances q less than or equal to 1.3 a.u., and the dates of their oppositions in 2021. It is patterned after the corresponding Table of elements of minor planets. The minor differences are as follows: the semi-major axis is given with fewer decimal places, and instead of the column headed TE, columns with approximate values of the perihelion distance q and the aphelion distance Q are given. The column headed T (Type) indicates the minor planet type, where Am stands for the Amor type (a > 1 a.u., 1.017 a.u. < q ≤ 1.3 a.u.), Ap for the Apollo type (a > 1 a.u., q ≤ 1.017 a.u.), At for the Aten type (a < 1 a.u., Q ≥ 0.983 a.u.), and Ar for the Atira type (Q < 0.983 a.u.). (163693) Atira is the first numbered asteroid of the new type (Q < 0.983 a.u.), for which one can also find other names in the literature (Apohele type, Arjuna type and so on).
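The type thresholds above translate directly into code. A small sketch (function name is mine; distances in a.u.; the sample orbits in the comments are rounded illustrative values, not catalog entries):

```python
def nea_type(a, q, Q):
    """Classify a near-Earth asteroid by the EMP type thresholds.

    a -- semi-major axis, q -- perihelion distance,
    Q -- aphelion distance (all in a.u.).
    Returns "Am", "Ap", "At", "Ar", or None if q > 1.3 a.u. (not an NEA).
    """
    if a > 1.0:
        if q <= 1.017:
            return "Ap"                    # Apollo
        return "Am" if q <= 1.3 else None  # Amor, else not an NEA
    # a < 1 a.u.:
    return "Ar" if Q < 0.983 else "At"     # Atira vs. Aten

print(nea_type(1.46, 1.13, 1.78))  # Am (an Eros-like orbit)
print(nea_type(0.92, 0.75, 1.10))  # At (an Aten-like orbit)
```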
## Elements and Opposition Dates of Centaurs and Transneptunian Objects
The Table of elements and opposition dates of Centaurs and transneptunian objects (see p. 7998 – 8007) is constructed like that of the NEAs. The column headed T (Type) indicates the object's type, where Ct stands for Centaurs (a < 30 a.u., q > 5.45 a.u.), Tn for transneptunian objects (cubewanos, like the classical 1992 QB1, or Plutinos), and Sd for scattered-disk objects.
## Elements and Opposition Dates of Some Unusual Minor Planets
The Table (p. 8008 – 8067) contains data for Jupiter-crossers, Jupiter-approachers as well as for some Mars-crossers, mainly with unusual orbits. Minor planets whose stability is protected by special mechanisms (like Trojans, Hildas and so on) are not included in the Table except in a few special cases.
The Table is constructed like two preceding ones. The header T (Type) indicates type of minor planet where Jc stands for Jupiter-crosser, Ja stands for Jupiter-approacher, Mc stands for Mars-crosser, and MT stands for Mars Trojan.
It should be noted that all objects included in the Tables of NEAs, of Centaurs and transneptunian objects, and of some unusual planets can also be found in the Table of elements and opposition dates of all minor planets.
## Osculating Elements and Inverse Masses of Perturbing Planets
The osculating elements of perturbing planets and their inverse masses used in DE 405 are given in the Table on p. 8068. They may be applied in continued numerical integration. For Ceres, Pallas, and Vesta the osculating elements correspond to those in the Table on p. 1146.
On the same page the values of some constants used in preparing the ephemeris volume are given. Julian Day Numbers for some dates in 2020 – 2022 are also given there.
## Minor Planet Lightcurve Parameters
The lightcurve parameters for some minor planets are given on p. 8069 – 8271. The data of the Table are based on the data published by C.-I. Lagerkvist, A. W. Harris and V. Zappalá (See: "Asteroids II machine-readable data base: March 1988 floppy disk version", National Space Science Data Center, Greenbelt MD). The Table was updated for publication in the volume for 2020 by B. Warner (Palmer Divide Observatory, Colorado Springs, U.S.A.), A. W. Harris (Space Science Institute, U.S.A.), and P. Pravec (Astronomical Institute, Academy of Sciences of Czech Republic).
The Table contains the number and name or principal provisional designation of minor planet, the period of rotational lightcurve expressed in hours, the amplitude of variation or range of amplitude observed, and a reliability code. More detailed explanations are given in the notes on p. 8270 – 8271.
## Ambiguous Periods of Lightcurves
The Table on p. 8272 – 8297 includes alternative values of period and amplitude for minor planets marked in the Table "Minor planet lightcurve parameters" as having ambiguous value of period. The data for publication in the Table were supplied by B. Warner (Palmer Divide Observatory, Colorado Springs, U.S.A.), A. W. Harris (Space Science Institute, U.S.A.) and P. Pravec (Astronomical Institute, Academy of Sciences of Czech Republic).
## Binary Asteroid Lightcurves
The Table on p. 8298 – 8317 contains additional data for the lightcurves of the objects marked in the Table "Minor planet lightcurve parameters" as known or suspected binaries. The data are meant to provide a quick overview of a primary period and amplitude of the lightcurves as well as a secondary period and, if available, amplitude. The data for publication in the Table were supplied by B. Warner (Palmer Divide Observatory, Colorado Springs, U.S.A.), A. W. Harris (Space Science Institute, U.S.A.) and P. Pravec (Astronomical Institute, Academy of Sciences of Czech Republic).
## Binary Asteroid Parameters
The data in the Table on p. 8318 – 8321 comprise estimated parameters for 210 binary systems in near-Earth, Mars-crossing, main-belt and Trojan orbits (more than 90 % of the corresponding objects are marked in the Table "Minor planet lightcurve parameters" as undoubtedly binary, and the others as suspected binary). The Table contains such data as the estimated diameters of the primary and secondary components, their rotation periods, the semi-major axis of the orbit of the secondary component, etc. (see the more detailed explanation in the Table footnote). The data were presented by P. Pravec (Astronomical Institute, Academy of Sciences of Czech Republic) (see Pravec, P., Harris, A. W. Binary Asteroid Population. 1. Angular Momentum Content. Icarus 190 (2007) 250 – 259). The July 2011 update of the data was made by P. Pravec and 41 colleagues for the paper "Binary Asteroid Population. 2. Anisotropic distribution of orbit poles of small, inner main-belt binaries", Icarus 218 (2012) 125 – 143. The September 2015 update of the data was made by P. Pravec et al. for the paper "Binary asteroid population. 3. Secondary rotations and elongations", Icarus 267 (2016) 267 – 295. The January 2019 update of the data was made by P. Pravec et al. for the paper "Asteroid pairs: a complex picture", Icarus 333 (2019) 429 – 463.
## Non-principal Axis Rotation (Tumbling) Asteroids
The Table on p. 8322 – 8337 includes additional data for the asteroids for which non-principal axis rotational motion (tumbling) has been reported. The data for publication in the Table were prepared by B. Warner (Palmer Divide Observatory, Colorado Springs, U.S.A.), A. W. Harris (Space Science Institute, U.S.A.) and P. Pravec (Astronomical Institute, Academy of Sciences of Czech Republic).
## Asteroid Spin Axes
The Table on p. 8338 – 8446 includes information about spin axis properties (ecliptic coordinates of the spin vector pole and the sidereal period) for every numbered minor planet for which corresponding data have been reported. The data for publication in the Table were prepared by B. Warner (Palmer Divide Observatory, Colorado Springs, U.S.A.), A. W. Harris (Space Science Institute, U.S.A.) and P. Pravec (Astronomical Institute, Academy of Sciences of Czech Republic).
## List of All Minor Planets in Order of Opposition Dates in 2021
The list of the numbered minor planets as of April 30, 2020, arranged according to their opposition dates in 2021, is given in the Table on p. 8447 – 9325. To save space the data are given in six columns on a page. For each minor planet the following data are given: number, date of opposition in 2021, and apparent V magnitude for the fourth ephemeris date (as a rule, this date coincides with the nearest standard Julian day number preceding the opposition). Since for most planets on a page the month in which the oppositions take place is the same, it is placed in the header, so that only the day of opposition is indicated in the line with the number and magnitude.
## Ephemerides of NEAs and Some Unusual Planets
Extended ephemerides of NEAs and some unusual minor planets listed in the Table on p. 8008 – 8067 (Jupiter-crossers, Jupiter-approachers, some unusual Mars-crossers) are presented on p. 9326 – 10658. At the end of the Table a special index indicates the page numbers where the ephemerides of the different planets are given.
Each ephemeris contains astrometric positions for 0h TT referred to the equator and equinox J2000.0, the geocentric and heliocentric distances Δ and r, respectively, the phase angle β, the apparent magnitude V, the solar elongation ψ, and the maximum altitude during darkness at latitudes +45° and –26°. In cases where there is no upper culmination during nighttime, the maximum altitude above the horizon during civil twilight is given; such altitudes are denoted by an asterisk.
## Status of Minor Planet Observations
The so-called critical list, containing minor planets not observed since 2015 or observed at fewer than four oppositions, is given on p. 10659 – 10664.
## Antisun and Moon
Positions of the antisun and the Moon are given on p. 10665 – 10666. For 0h TT of each day of 2021, the Table contains geocentric equatorial coordinates of the antisun (the point diametrically opposite the Sun) and those of the Moon. The solar elongation of the Moon is also given.
## Information on Ample Package
Information on the AMPLE integrated software package is given on p. 10667 – 10668.
## Information on Asteroid Families
Information on the Asteroid Families as obtained in IAA RAS is given on p. 10669.
Data for the present volume of the EMP have been prepared by Yu. A. Chernetenko, G. R. Kastel', O. M. Kochetova, V. B. Kuznetsov, V. A. Shor, T. A. Vinogradova. Typesetting and page makeup have been done by D. A. Ryzhkova using the system SVITA and TEX.
Special system for search of necessary information on CD was compiled by N. I. Alehina and V. B. Kuznetsov.
Editor-in-Chief
Yu. A. Chernetenko
Please address orders to:
Institute of Applied Astronomy of RAS
naberezhnaya Kutuzova, 10, St.Petersburg, 191187, Russia
Fax: +7-812-275-11-19
https://www.lessonplanet.com/teachers/practice-quiz-equilibrium | # Practice Quiz - Equilibrium
In this equilibrium instructional activity, students solve eight problems, including determining the effects of changes to a system as they relate to Le Chatelier's principle, calculating equilibrium constants, and finding the boiling points of solutions.
http://www.gtagaming.com/gtagaming/news/archive.php?p=201203&viewpollresults=1 | News Archive
Grand Theft Auto V - General News | @ 10:21 AM CST | By victim
Given Rockstar's recent love affair with delivering A-class titles in the April-May period, many fanboys were predicting GTA V would hit the shelves in one of those months. This speculation came mostly from the fact that Rockstar had no games announced for release in that window. This, however, is no longer true.
Rockstar bumped back the release of Max Payne 3, another extremely popular franchise, from its original March release to a new May one. Max Payne 3 is now scheduled to release on Xbox 360 and PS3 on May 15th, with a PC version also coming later that month.
Strauss Zelnick, Take 2 CEO
"With Max Payne 3 now slated for May, our robust lineup of upcoming releases for fiscal 2013 is even stronger, including BioShock Infinite, Borderlands 2, Spec Ops: The Line, XCOM, XCOM: Enemy Unknown, and other titles yet to be announced for release that year."
Whether this has affected GTA V at all is quite doubtful, though it certainly silences all of the hopeful speculation that the game may be in our hands early. When GTA V's release date is official, you'll hear it from us first.
Grand Theft Auto IV - Modifications | @ 03:32 PM CST | By rappo
After you've completed every available mission in Grand Theft Auto IV (two or three times), what's left besides wreaking havoc throughout the city? Honestly, not a whole lot, but that's why the GTA community never stops modding. If you're running GTA IV on the latest patch (v1.0.7.0), there's a script mod available that can add a whole new dimension to the game: LCPD First Response.
LCPDFR is a police simulation mod for GTA IV and EFLC. It transforms the game into a law enforcement sim where you arrest the crooks, rather than working for them. LCPDFR 0.95 RC2 adds the ability to use a slick new Taser which acts like a native weapon, stop and search multiple suspects, search the trunks of vehicles, issue parking citations, and engage in high speed pursuits.
This mod, featured in the February 2012 issue of PCGamer, is available for download now at GTA4-Mods.com.
SOPA and PIPA (January 18)
GTAGaming.com - General News | @ 01:58 PM CST | By Zidane
SOPA and PIPA are two bills that are currently going through United States congress. If they pass, the internet as we know it would change forever, and the freedom, and possibly existence of fansites, such as GTAGaming and their forum communities, would be threatened.
Learn more about these bills, and what you can do to help prevent them from being passed.
Grand Theft Auto IV - Modifications | @ 06:05 PM CST | By rappo
Community modder Casull16 took on the daunting task of updating the textures for every single pedestrian in GTA IV. That's right... all 335 pedestrians, coming in at over 8,000 textures and 712Mb. The mod page shows a comparison between the original and updated textures, which also shows the infinite patience that the author must have.
Poll
Where will you pre-order GTAV for PC?
Rockstar Warehouse (28%)
Steam (28%)
Not pre-ordering (19%)
Amazon (8%)
Other (7%)
https://fliptomato.wordpress.com/2008/02/05/inverse-symbolic-calculator-20/ | ### Inverse Symbolic Calculator 2.0
05Feb08
For all you numerologists out there trying to explain the hierarchy of Yukawa couplings, you might enjoy playing with the Inverse Symbolic Calculator 2.0 (via Reddit some time ago).
The Inverse Symbolic Calculator (ISC) uses a combination of lookup tables and integer relation algorithms in order to associate with a user-defined, truncated decimal expansion (represented as a floating point expression) a closed form representation for the real number.
In other words, you give the decimal and the ISC gives you an expression (involving things like $e$ and $\pi$) that evaluates to that number to the same precision.
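The lookup-table half of the idea is easy to sketch: tabulate a handful of closed forms, evaluate them numerically, and return whichever one agrees with the input within tolerance. This is only a toy illustration — the real ISC combines enormous tables with integer-relation algorithms, and every name and tolerance below is our own choice:

```python
import math

# A tiny table of closed forms and their numerical values.
CLOSED_FORMS = {
    "pi": math.pi,
    "e": math.e,
    "sqrt(2)": math.sqrt(2),
    "1/sqrt(2)": 1 / math.sqrt(2),
    "log(2)/log(10)": math.log(2) / math.log(10),
    "exp(-6/5)": math.exp(-6 / 5),
}

def inverse_lookup(x, tol=5e-4):
    """Return the closed form whose value is nearest to x,
    provided it matches within the tolerance; otherwise None."""
    name, value = min(CLOSED_FORMS.items(), key=lambda kv: abs(kv[1] - x))
    return name if abs(value - x) < tol else None

print(inverse_lookup(0.3010))  # log(2)/log(10)
print(inverse_lookup(0.7071))  # 1/sqrt(2)
```

Note that with a "nearest match" rule, 0.3010 correctly resolves to log(2)/log(10) rather than the nearby exp(-6/5) — the very confusion one of the commenters below ran into with the real ISC.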
The real geeks out there will test out the ISC with a real problem: what is the inverse symbolic expression for 42?
The Answer to Life, the Universe, and Everything.
That’s the honest-to-Witten verbatim output of the program. I think it passes the test with flying colors.
#### 3 Responses to “Inverse Symbolic Calculator 2.0”
1. 1 andy.s
That’s really impressive. I put in the value for the solution of cos(x) = x and it reproduced the expression (well pretty close).
I wonder – if you put in the masses of all particles, would it reproduce the Standard Model for you?
2. 2 robert
Is there a challenge here, to trip the inverse symbolic calculator up? The response to 42 was indeed impressive. Sadly, it failed to recognise
3.141592653589793238462643383279726619348
as
Log[262537412640768744]/Sqrt[163]
and came up with Pi (which to the same number of decimal places is
3.141592653589793238462643383279502884197)
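Robert's near-miss here is the famous Ramanujan constant: exp(pi*sqrt(163)) falls within about 7.5e-13 of the integer 262537412640768744, so the logarithm formula reproduces pi to roughly 30 decimal places. This is easy to verify with the standard-library `decimal` module (the 50-digit precision choice is ours):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # work with 50 significant digits

# Log[262537412640768744] / Sqrt[163], computed to high precision
approx = Decimal(262537412640768744).ln() / Decimal(163).sqrt()

pi_ref = Decimal("3.14159265358979323846264338327950288419716939937510")
diff = abs(approx - pi_ref)
print(diff)  # about 2.2e-31: agreement to roughly 30 decimal places
```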
3. 3 robert
I know that this is really sad. Perhaps the ISC doesn’t know about logs (see previous example). Putting in .3010 elicits exp(-6/5) (0.3012). So where was log[2]/log[10]? (3010 was also the telephone number of Marianne Faithful’s next door neighbour when she was a girl – could the internet, or any computer, tell you that?) And, even worse, 0.707 could only be approximated by 5/(4 Sqrt[Pi]) (circa 0.677) rather than the sine of Pi/4, i.e. 1/Sqrt[2]. No need for modular equations and theta functions to kick its ass, then – trig 101 is all it takes. 42 still does the business, though.
https://motls.blogspot.com/2009/04/michael-atiyah-80th-birthday.html | Tuesday, April 21, 2009
Michael Atiyah: 80th birthday
Sir Michael Atiyah, one of the most influential 20th century mathematicians, is celebrating his 80th birthday tomorrow. Congratulations!
See The Herald to learn about the Atiyah Fest...
Although he is a mathematician, his results - especially those in algebraic topology - have become essential in modern theoretical physics in general and string theory in particular.
The most well-known result is the Atiyah-Singer String Index Theorem, proved in 1963. It is often called simply the "Atiyah-Singer Index Theorem" although this short name doesn't quite reflect the depth of the result - i.e. its proximity to the most fundamental mathematical features of our string-theoretical Universe.
Sir Michael has also co-authored a famous paper with Ed Witten on M-theory on manifolds of G_2 holonomy, one of the cute 100-page papers that I couldn't resist reading, and a related, shorter, and comparably famous paper with Maldacena and Vafa about the M-theoretical counterpart of Calabi-Yau flop transitions.
His work on K-theory turned out to be important in the physics of D-branes. The ADHM construction, of which he is the alphabetically first co-author, was able to understand how D0-branes dissolve within D4-branes and how to describe the space of instanton solutions in an alternative way - long before this trick was needed or understood by the physicists. Consequently, many of his findings may be viewed as prophesies that became physics long after Atiyah saw their importance.
The index theorem and string theory
Because a lot of misinformation and vitriolic denial is being promoted by the stinky sweepings of the blogosphere that a rapidly dwindling (see top 3 string theory blogs) but unfortunately still nonzero number of people keeps on reading, let me recall why the Atiyah-Singer String Index Theorem is so closely linked to string theory.
In general, the theorem allows us to calculate certain analytic data - the indices i.e. sums and differences of the numbers of independent solutions to differential equations of various kinds - in terms of topological data about the base manifold, e.g. the Betti numbers or the number of holes in the manifolds.
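The simplest instance of this analytic-topological match is the de Rham complex: the alternating sum of the numbers of harmonic forms (an analytic count of solutions) equals the alternating sum of Betti numbers, i.e. the Euler characteristic (a topological count). A trivial sketch for a closed orientable genus-g surface, where b_0 = 1, b_1 = 2g, b_2 = 1:

```python
def euler_characteristic(genus):
    """chi = b0 - b1 + b2 for a closed orientable genus-g surface."""
    b0, b1, b2 = 1, 2 * genus, 1
    return b0 - b1 + b2

for g in range(5):
    # sphere: chi = 2, torus: chi = 0, genus 2: chi = -2, ...
    assert euler_characteristic(g) == 2 - 2 * g
```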
This topological interpretation and derivation of analytical or physical data (or the opposite procedure) is one of the main points of perturbative string theory (and, arguably, not only in the perturbative one). Also, string-theoretical applications of the theorem form a dense subset of the space of all applications of this theorem. Like all theorems in mathematics, it is more general than its main physics application. But in some sense, the overlap is so substantial that it makes sense to identify the theorem with its explanation and interpretation within string theory.
The most important special example of the theorem is the classic Riemann-Roch theorem. If you open Green-Schwarz-Witten's textbook on Superstring Theory, volume 1, page 162, you will see that on a real two-dimensional Riemann surface, the number of independent moduli B_g and the dimension of the group of conformal isomorphisms C_g (both of which can be written as the number of zero modes for various tensors, namely the conformal ghosts and antighosts, as can be easily seen) satisfy
Delta_g = C_g - B_g = 3(1-g).
The right hand side is proportional to the Euler character (it's equal to 3/2 times chi) which can be calculated from the integral of scalar curvature. This result is crucial for the consistency and non-vanishing-ness of loop diagrams in perturbative string theory - and its validity may be naturally proven by string-theoretical methods, too.
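As a sanity check on this count, one can tabulate the standard (complex) dimensions: the sphere (g = 0) has a 3-dimensional group of conformal isomorphisms and no moduli, the torus (g = 1) has one of each, and for g >= 2 there are no conformal isometries and 3g - 3 moduli. The table below simply encodes these known values; the code only verifies them against the index formula:

```python
# (genus, C_g, B_g): complex dimensions of the conformal group
# and of the moduli space of genus-g Riemann surfaces.
KNOWN = [
    (0, 3, 0),  # sphere: PSL(2, C), no moduli
    (1, 1, 1),  # torus: translations, one modular parameter tau
    (2, 0, 3),  # g >= 2: no conformal isometries, 3g - 3 moduli
    (3, 0, 6),
]

for g, C, B in KNOWN:
    assert C - B == 3 * (1 - g), f"index formula fails at genus {g}"
print("C_g - B_g = 3(1 - g) holds for g = 0..3")
```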
In volume 2, page 372, of the same GSW textbook, you find out that the index of the Dirac operator in 4k+2 dimensions can be calculated from the theorem, too. It's no coincidence that the second simplest example after "k=0" deals with "k=1", i.e. with six-dimensional manifolds. While the "k=0" case was essential for the string world sheet, the index of the "k=1" i.e. "d=6" Dirac operator is important because it determines the number of generations of leptons and quarks in the conventional compactifications with 6 internal dimensions (6+4 = 10). They are given in terms of the Euler character - or its generalizations in the presence of additional stringy physics.
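A concrete example of the d = 6 count, in the classic heterotic standard-embedding setup: the net number of generations on a Calabi-Yau threefold is |chi|/2, with chi = 2(h^{1,1} - h^{2,1}). For the quintic hypersurface in CP^4, the Hodge numbers are h^{1,1} = 1 and h^{2,1} = 101:

```python
def net_generations(h11, h21):
    """Net number of chiral generations on a Calabi-Yau threefold
    (standard embedding): |chi| / 2, where chi = 2 * (h11 - h21)."""
    chi = 2 * (h11 - h21)
    return abs(chi) // 2

# Quintic in CP^4: h^{1,1} = 1, h^{2,1} = 101, so chi = -200
print(net_generations(1, 101))  # 100
```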
Other generalizations involve signature indices and/or separate holomorphic/antiholomorphic exterior derivatives. A lot of work extending results of Atiyah and others has been done by Witten. They can be embedded into string theory. In many cases, the truncation to topological string theory is enough to see the whole structure.
Mathematics as prophecy in physics
Once the true physical meaning of these mathematical results - whose depth was always manifest to the big thinkers - is appreciated, one can use the other knowledge of the physical application (string theory) to search for the natural guesses how to generalize the results and estimate their importance. The interactions between mathematics and physics are very subtle, however.
So mathematicians often find generalizations whose physical meaning seems obscure or non-existent at the moment of the mathematicians' brainstorm.
However, if the mathematicians are deep, it often happens that the result is viewed as an extremely physical one a few decades later. It was surely also the case of the Atiyah-Singer String Index Theorem that appeared in front of their eyes five years before string theory was officially born in physics - and almost 20 years before the relevant mathematical methods became common in physics calculations (especially the modern BRST quantization of string theory; and derivations of low-energy physics of string theory).
Mathematicians are often ahead of their time. But those 20 years are close to the upper limit at which the mathematician can still feel genuinely happy about his or her work. If he or she is a century ahead of the physical world, I am afraid that he or she may remain misunderstood.
Congratulations to Sir Michael once again.