http://mathematica.stackexchange.com/questions/1542/exporting-graphics-to-pdf-huge-file/1551 | # Exporting graphics to PDF - huge file
I want to draw some basic surfaces, export them to PDF and include them in a LaTeX file. I create a simple 3D graphics object, for instance with
ParametricPlot3D[{r Cos[\[Theta]], r Sin[\[Theta]], r^2}, {r, 0, 1}, {\[Theta], 0, 2 \[Pi]}]
then context-click on the graphics and select Save Graphic As ..., and save it as PDF.
The resulting PDF is 5MB large! When I include it in my LaTeX file it takes a long time to compile and to render in a PDF viewer.
I know that I can save it in another format such as PNG to get the file size down (in this case all the way to 33KB). But I'd prefer a vector format because my final product will be PDF. Are there some options to the Export[] command which might reduce the number of colors and make the exported file smaller?
-
This is one downfall of the new (since V6) graphics engine. The exported vector graphics (EPS or PDF) were very useful up to version 5. Would be very interested in a fix, too. Had to change a lot of my export stuff to sufficiently large rasterized images (for which the pdf export works pretty well, esp. with PDFLaTeX). – Yves Klett Feb 9 '12 at 14:03
One way is to rasterize the surface but not the axes. See here. – Alexey Popkov Feb 9 '12 at 14:21
Also see here. – Eli Lansey Feb 9 '12 at 14:38
Are Wolfram aware of the problems with vector export? ...does anyone know ...@Brett?? – Mike Honeychurch Feb 9 '12 at 22:24
Check also here: stackoverflow.com/questions/7953955/… – Piotr Migdal Feb 13 '12 at 18:09
My preferred method to export graphics to pdf is to do something like
Export["figure.pdf", plot, "AllowRasterization" -> True,
ImageSize -> 360, ImageResolution -> 600]
This uses vectors for simple graphics, but produces a high resolution rasterized image if the plot becomes too complicated.
-
the documentation for AllowRasterization says "whether to rasterize a graphic that requires advanced versions of PDF". Do you know what is meant by advanced versions? Is that more recent versions than when Mma was written? I've had import problems with recent PDF formats and had to re-save them as "older" PDFs to get them to import. – Mike Honeychurch Feb 9 '12 at 22:08
I think this has the best chance of being close to foolproof, so I'm upvoting it. – Jens Feb 10 '12 at 3:10
Very nice, upvoting. This also works on a whole notebook. So if bignb has head NotebookObject one can just do, e.g.: Export["tentimessmallerfile.pdf", bignb, "AllowRasterization" -> True, ImageResolution -> 300] – Rolf Mertig Feb 16 '12 at 9:28
Making use of this now, thanks. – rcollyer Nov 30 '12 at 15:49
If I want to export graphics to .eps instead of .pdf what should be changed to the Export[]; it's not working with .eps files, the size remains the same. – Vaggelis_Z Aug 11 '13 at 12:52
Before you step into the same traps I once stepped into, let me point out some key points. First of all, two things: 1. although I have spent some time digging into the subject, my knowledge is far from complete, so keep this in mind; 2. like everyone else around here, I would really like to have a fast, good-looking export of vector graphics too. Unfortunately, we currently have neither property. Here are some issues I spent some time investigating:
1. The exported vector graphics from Mathematica are more or less a projection of the (OpenGL) polygon scene that is shown in the front end. This has several disadvantages. First of all, a huge number of polygons is exported, and not only the visible ones: even when you don't use Opacity, all polygons are exported. This not only results in very big file sizes; it is not uncommon for such PDF files to need 10 minutes or longer to render in a PDF viewer. This is simply impossible to use.
2. Although the polygons share the same vertex points, their edges are not completely opaque when rendered in a PDF viewer. This probably results from the alpha blending that is applied wherever a line (or a polygon border) does not lie exactly on a pixel position (which is almost always). This produces artifacts that let you see the polygon structure. The problem is often discussed, since it happens in density plots too. If the background is white and the polygons are in front of this background, it looks like the image below. The situation gets worse when you have a surface behind visible polygons: then you see the 3D surface through the front polygons, as if they were not completely opaque. I discussed this issue some time ago with one of the developers of Inkscape, who (along with others) was kind enough to explain what happens. You can find this discussion here
3. One particular thing that really bothers me, even if I ignore the first two points, is the mesh lines. In the good old days, mesh lines really represented the underlying sampling. This is no longer the case, but since they are just so cute and everyone likes them, they are added to the 3D model. Unfortunately, this is not done correctly, and it leads to serious display errors. Even in the above image the mesh lines do not look equally thick, and taking a closer look reveals the chaos:
## Idea for a solution
The main idea is that, while I see the requirement for included text to be in vector format, for the surface with its smooth lighting and color gradients a high-resolution raster image would be enough. So maybe we can extract the surface, transform it, and put everything back together. But how? As everywhere in science, you can simply use the results of smart people who were kind enough to share their knowledge, so please look at the references.
So what you can do is plot your graphics as usual
size = 800;
g = ParametricPlot3D[{r Cos[\[Theta]], r Sin[\[Theta]], r^2}, {r, 0,
1}, {\[Theta], 0, 2 \[Pi]}, ImageSize -> size];
The next step is to take the Graphics3D and use only the surface, without the axes and the box. Since we are rasterizing this graphics anyway, we can in this step smooth out hard edges and suppress aliasing effects. A very simple and nice approach was given by @Szabolcs:
antialias[g_] :=
ImageResize[
Rasterize[g, "Image", ImageResolution -> 72*2, Background -> None],
Scaled[1/2]]
img = antialias[Show[g, Axes -> False, Boxed -> False]];
After having the surface as an image, you can use Inset to put it back into the graphics. Please note that Background -> None in the antialiasing function makes the white background transparent, which is why it works so nicely with the axes.
final = Graphics3D[
Inset[img, ImageScaled[{1/2, 1/2, 0}], {Center, Center}, size],
AbsoluteOptions[g], ImageSize -> size];
Export["tmp/gr3d.pdf", final]
What you should note now is that the axes are vector graphics while the rest is a raster graphic.
## Open Questions
• If the ImageSize option is not used, the placement of the surface with Inset works fine. Using a larger ImageSize requires an adjustment of the surface size when it is inset. It remains to be proved that this works reliably and that the surface is placed correctly.
• The PDF export seems to use JPEG encoding for the raster image. This looks ugly. Maybe we can prevent it from doing that during the export.
• Note how nicely the box lines end up sometimes over and sometimes under the surface, always correctly. Inset seems to place the raster really in 3D. Does this always work?
## References
-
@AlexeyPopkov I missed that. Did you, just for curiosity, export your 3d-plot to a pdf? It's 1.1 GB here ;-) – halirutan Feb 11 '12 at 15:44
No, I export rasterized version with high resolution. But it is also not so easy as one can expect... – Alexey Popkov Feb 11 '12 at 17:46
" It's open to be proved..." I have an example that shows this placement method doesn't always work: raw.github.com/peeterjoot/mathematica/master/phy487/… – Peeter Joot Dec 14 '13 at 1:41
@halirutan When I copy the code above and execute it, the final figure I get (the one that in your code follows the Export), I get an axis box that seems to be twice as big as the image (That is, the vertex appears to be at the origin, but the top circle at $z=0.5$). Did you use exactly the code in your answer to get the output you show? (MMA 9, if that matters) – rogerl Apr 7 '14 at 22:29
For 3D graphics, I truly don't think it's worth the effort to attempt exporting as vector graphics. The valiant attempts to keep at least the axes and labels as vector graphics are in my opinion not something an everyday user would consider.
With PDF for 3D graphics, you're fighting two problems: not just the file size but also the slow rendering when your PDF reader has to execute a huge computation whenever the page containing the graphic needs to be displayed. The argument for vector graphics is typically that it creates smaller files because the graphic is essentially a program that runs at the time of rendering. But if that program (the PDF) just stupidly enumerates zillions of points or polygons to be drawn one by one, you get the worst of both worlds: an inefficient representation with lots of data.
So I would just say: retreat to bitmaps, as Szabolcs was saying. This isn't necessarily a bad thing. Consider the example plot
a = Show[ParametricPlot3D[{16 Sin[t/3], 15 Cos[t] + 7 Sin[2 t],
8 Cos[3 t]}, {t, 0, 8 \[Pi]}, PlotStyle -> Tube[.2],
AxesStyle -> Directive[Black, Thickness[.004]]],
TextStyle -> {FontFamily -> "Helvetica", FontSize -> 12},
DefaultBoxStyle -> {Gray, Thick}]
This takes up 200 MB when exported straight to PDF. If instead I export this with
Export["wiggle.png", Magnify[a, 4]]
the file size is a reasonable 170 KB. The pixelated axes can of course be discerned if you look closely -- but so can the imperfections in the 3D plot itself that will always be there due to limited number of polygons.
The actual question was how to export to PDF, so I guess I'll answer it this way:
Export["wiggles.pdf", Rasterize[Magnify[a, 4], "Image"]]
Edit:
Unfortunately, this isn't foolproof because Magnify stops magnifying when the size exceeds the width of the notebook window! If the window isn't big enough to accommodate the desired magnification, the relative scaling of fonts and graphics will be messed up.
Edit 2:
As is discussed in this related question, Magnify will work reliably provided that you specify an explicit value for the ImageSize option of your 3D graphics.
Edit 3:
The remaining problem with Magnify is that it doesn't scale up tick marks properly. So I asked myself how to make @Heike's method of rasterization work automatically for Graphics3D without having to think about the resolution and image size every time.
Of course one could write a custom export function, but in some situations it would be convenient if one could modify the standard export behavior for the entire notebook. To do this, one only has to make sure that all Graphics3D automatically contain some part that requires an advanced version of PDF. In particular, this is the case for polygons with vertex colors.
So to achieve rasterization by default, one could initialize the notebook with a statement like this:
Map[SetOptions[#,
Prolog -> {{EdgeForm[], Texture[{{{0, 0, 0, 0}}}],
Polygon[#, VertexTextureCoordinates -> #] &[{{0, 0}, {1,
0}, {1, 1}}]}}] &, {Graphics3D, ContourPlot3D,
ListContourPlot3D, ListPlot3D, Plot3D, ListSurfacePlot3D,
ListVectorPlot3D, ParametricPlot3D, RegionPlot3D, RevolutionPlot3D,
SphericalPlot3D, VectorPlot3D}];
This adds an invisible 2D polygon as a Prolog to every Graphics3D that is created in the notebook (edit: I had to explicitly do this for various wrapper functions that create Graphics3D, such as ParametricPlot3D). My rationale is that Prolog isn't likely to be needed for anything else in my 3D plots under normal circumstances. Now when I try the above plot a in a simple export command such as
Export["a.pdf",a]
I get a high-resolution image that's ready for printing.
-
One workaround I've discovered is to export as SVG, open in Inkscape, and save as PDF. The default parameters there (which I have not explored thoroughly) result in a PDF file size of 324KB.
-
That works for relatively benign SVG - I was even thinking I could write a script for that. But then I realized that even for moderately complex 3D files, the intermediate SVG that you'd have to export can be so large that it will crash Inkscape. The example in my answer below gives a 90 MB SVG file, for example. That's very unwieldy to work with. – Jens Feb 9 '12 at 18:48
It doesn't preserve the figure labels, especially if they are complicated! – Seyhmus Güngören Feb 1 at 14:31
I suggest exporting to a raster format with high resolution. The ImageResolution option is very useful for controlling the resolution. I wrote a little tutorial on how to export images for LaTeX in this answer (since I would just repeat the same thing here, I am linking to it instead).
Note: High resolution raster images will take up a lot of space, but they are fast to render, and with a high enough resolution they give the same quality in printed documents as vector images. The size of included figures should not influence TeX compilation time significantly.
-
Seconded, but especially for fine line graphics containing e.g. a lot of text as well (e.g. technical drawings) working with the vector graphics was a pleasure and the file sizes and compilation speeds in LaTeX were a joy. – Yves Klett Feb 9 '12 at 14:13
@Yves I agree with you, I also don't like how 3D graphics export to a vector format. I don't believe all those tiny polygons are necessary because PDF does support "triangle gradients". There's this little OpenGL to PostScript/PDF library that produces much much better results, so it is definitely technically possible to export 3D graphics to a small fast to render good quality PDF. – Szabolcs Feb 9 '12 at 14:27
– Szabolcs Feb 9 '12 at 14:28
Just don´t hold your breath ;-) – Yves Klett Feb 9 '12 at 14:34
... and to be fair, I would not want to go back to Version5Graphics, got used to pretty things like Opacity or Texture far too quickly. – Yves Klett Feb 9 '12 at 14:37
Another convenient workaround is to export to PDF, reimport the PDF file, and export it again. For my files this reduced sizes from 2 MB down to 200 KB.
Regards Patrick
Edit: Here is an example of a 700 KB export. If reimported and re-exported, the file size is down to 28 KB. The quality seems the same to me. Special characters, however, like ä or ö from German, are not converted correctly. Does anybody have an idea how to deal with this?
Export["C:\\Users\\Desktop\\test.pdf", "abc äüö"];
reimport = Import["C:\\Users\\Desktop\\test.pdf"];
Export["C:\\Users\\Desktop\\test2.pdf", reimport];
-
To illustrate your solution, you could add example code and images (is there any quality difference...?). – Yves Klett Sep 6 '13 at 12:30
That might just be due to the font embedding issue: mathematica.stackexchange.com/questions/15929/… – Peeter Joot Dec 13 '13 at 15:40
https://socratic.org/questions/why-does-electron-capture-produce-a-neutrino | Chemistry
# Why does electron capture produce a neutrino?
Feb 10, 2014
Electron capture produces an electron neutrino in order to conserve lepton numbers.
Electron capture is a process in which one of the inner electrons of an atom is captured by a proton in the nucleus, forming a neutron and emitting an electron neutrino.
p + e⁻ → n + ν_e
You are probably familiar with conservation laws like the Law of Conservation of Energy and the Law of Conservation of Momentum. One of the rules that must be followed in nuclear reactions is the Law of Conservation of Lepton Number.
A lepton is an elementary particle with spin ½ that does not experience the strong nuclear force. Electrons and neutrinos have spin ½ and do not feel the strong force, so they are leptons. The lepton numbers of electrons and of electron neutrinos are +1.
Protons and neutrons are held together in the nucleus by the strong nuclear force. They have lepton numbers of zero.
If the process were simply p + e⁻ → n, the sum of the lepton numbers would be +1 on the left and 0 on the right. Lepton numbers would not be conserved.
If an electron neutrino is emitted, as in p + e⁻ → n + ν_e, the sums of the lepton numbers are +1 on the left and +1 on the right. Lepton numbers are conserved.
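The bookkeeping above can be sketched in a few lines of Python. This is a toy illustration only; the particle symbols and the lepton-number table are choices made for this example.

```python
# Toy bookkeeping for the reaction p + e- -> n + nu_e. The particle symbols
# and the lepton-number table below are choices made for this illustration.
LEPTON_NUMBER = {"p": 0, "n": 0, "e-": +1, "nu_e": +1}

def lepton_number(particles):
    """Total lepton number of a list of particle symbols."""
    return sum(LEPTON_NUMBER[p] for p in particles)

def conserves_lepton_number(reactants, products):
    return lepton_number(reactants) == lepton_number(products)

# Without the neutrino the lepton numbers do not balance (+1 vs 0):
print(conserves_lepton_number(["p", "e-"], ["n"]))          # False
# Emitting an electron neutrino restores the balance (+1 vs +1):
print(conserves_lepton_number(["p", "e-"], ["n", "nu_e"]))  # True
```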
Good presentation, very nice pictures on this website: http://hyperphysics.phy-astr.gsu.edu/hbase/nuclear/radact2.html
https://scholars.bgu.ac.il/display/n1890424 | # Significance of the hyperfine interactions in the phase diagram of $\rm LiHo_xY_{1-x}F_4$ (Academic Article)
### abstract
• We consider the quantum magnet $\rm LiHo_xY_{1-x}F_4$ at $x = 0.167$. Experimentally the spin glass to paramagnet transition in this system was studied as a function of the transverse magnetic field and temperature, showing peculiar features: for example (i) the spin glass order is destroyed much faster by thermal fluctuations than by the transverse field; and (ii) the cusp in the nonlinear susceptibility signaling the glass state {\it decreases} in size at lower temperature. Here we show that the hyperfine interactions of the Ho atom must dominate in this system, and that along with the transverse inter-Ho dipolar interactions they dictate the structure of the phase diagram. The experimental observations are shown to be natural consequences of this.
### publication date
• January 1, 2005
https://www.scribd.com/doc/111733291/80/INCOMPLETE-RANKINGS | # Nonparametric Statistical Inference, Fourth Edition: Incomplete Rankings
As an extension of the sampling situation of Section 12.4, suppose that
we have n objects to be ranked and a fixed number of observers to rank
them but each observer ranks only some subset of the n objects. This
situation could arise for reasons of economy or practicality. In the case
of human observers particularly, the ability to rank objects effectively
and reliably may be a function of the number of comparative judgments to be made. For example, after 10 different brands of bourbon
have been tasted, the discriminatory powers of the observers may
legitimately be questioned.
The experiment can be designed such that the rankings are incomplete in the same symmetrical way as
in the balanced incomplete-blocks design which is used effectively in
agricultural field experiments. In terms of our situation, this means
that:
1. Each observer will rank the same number m of objects, for some m < n.
2. Every object will be ranked exactly the same total number k of times.
3. Each pair of objects will be presented together to some observer a total of exactly λ times, λ ≥ 1, a constant for all pairs.
These specifications then ensure that all comparisons are made with
the same frequency.
476
CHAPTER 12
In order to visualize the design, imagine a two-way layout of p rows and n columns, where the entry $d_{ij}$ in the (i, j) cell equals 1 if object j is presented to observer i and 0 otherwise. The design specifications then can be written symbolically as

1. $\sum_{j=1}^{n} d_{ij} = m$ for $i = 1, 2, \dots, p$
2. $\sum_{i=1}^{p} d_{ij} = k$ for $j = 1, 2, \dots, n$
3. $\sum_{i=1}^{p} d_{ij} d_{ir} = \lambda$ for all $r \neq j$; $j = 1, 2, \dots, n$
Summing on the other subscript in specifications 1 and 2, we obtain

$$\sum_{i=1}^{p} \sum_{j=1}^{n} d_{ij} = mp = kn$$

which implies that the number of observers is fixed by the design to be $p = kn/m$. Now using specification 3, we have

$$\sum_{i=1}^{p} \left( \sum_{j=1}^{n} d_{ij} \right)^2 = \sum_{i=1}^{p} \sum_{j=1}^{n} d_{ij}^2 + \sum_{j=1}^{n} \sum_{\substack{r=1 \\ r \neq j}}^{n} \sum_{i=1}^{p} d_{ij} d_{ir} = mp + \lambda n(n-1)$$

and from specification 1, this same sum equals $pm^2$. This requires the relation

$$\lambda = \frac{pm(m-1)}{n(n-1)} = \frac{k(m-1)}{n-1}$$

Since p and λ must both be positive integers, m must be a factor of kn and n − 1 must be a factor of k(m − 1). Designs of this type are called Youden squares or incomplete Latin squares. Such plans have been tabulated (for example, in Cochran and Cox, 1957, pp. 520–544). An example of this design for n = 7, λ = 1, m = k = 3, where the objects are designated by A, B, C, D, E, F, and G is:
| Observer | Objects presented for ranking |
|---|---|
| 1 | A, B, D |
| 2 | B, C, E |
| 3 | C, D, F |
| 4 | D, E, G |
| 5 | E, F, A |
| 6 | F, G, B |
| 7 | G, A, C |
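As a sketch (not from the text), the three design specifications and the counting identities $p = kn/m$ and $\lambda = k(m-1)/(n-1)$ can be checked in Python for the n = 7, m = k = 3 design above:

```python
from itertools import combinations

# Each string is the subset of objects presented to one observer in the
# n = 7, m = k = 3 Youden square from the text.
blocks = [set(s) for s in ["ABD", "BCE", "CDF", "DEG", "EFA", "FGB", "GAC"]]
n, m, k = 7, 3, 3

# Specification 1: each observer ranks exactly m objects.
assert all(len(b) == m for b in blocks)

# Specification 2: each object is ranked exactly k times in total.
counts = {obj: sum(obj in b for b in blocks) for obj in "ABCDEFG"}
assert set(counts.values()) == {k}

# Specification 3: each pair of objects appears together exactly lambda times.
pair_counts = {pair: sum(set(pair) <= b for b in blocks)
               for pair in combinations("ABCDEFG", 2)}
assert set(pair_counts.values()) == {1}

p, lam = len(blocks), 1
assert p == k * n // m                 # p = kn/m = 7
assert lam == k * (m - 1) // (n - 1)   # lambda = k(m-1)/(n-1) = 1
print("p =", p, "lambda =", lam)
```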
We are interested in determining a single measure of the overall concordance or agreement between the kn/m observers in their relative comparisons of the objects. For simplicity, suppose there is some natural ordering of all n objects and the objects are labeled accordingly. In other words, object number r would receive rank r from every observer if each observer were presented with all n objects and the observers agreed perfectly in their evaluation of the objects. For perfect agreement in a balanced incomplete ranking then, where each observer assigns ranks 1, 2, …, m to the subset presented to him, object 1 will receive rank 1 whenever it is presented; object 2 will receive rank 2 whenever it is presented along with object 1, and rank 1 otherwise; object 3 will receive rank 3 when presented along with both objects 1 and 2, rank 2 when with either object 1 or 2 but not both, and rank 1 otherwise; etc. In general, then, the rank of object j when presented to observer i is one more than the number of objects presented to that observer from the subset {1, 2, …, j − 1}, for all 2 ≤ j ≤ n. Symbolically, using the d notation from before, the rank of object j when presented to observer i is 1 for j = 1 and

$$1 + \sum_{r=1}^{j-1} d_{ir} \quad \text{for all } 2 \le j \le n$$
The sum of the ranks assigned to object j by all p observers in the case of perfect agreement then is

$$\sum_{i=1}^{p} \left( 1 + \sum_{r=1}^{j-1} d_{ir} \right) d_{ij} = \sum_{i=1}^{p} d_{ij} + \sum_{r=1}^{j-1} \sum_{i=1}^{p} d_{ir} d_{ij} = k + \lambda(j-1) \quad \text{for } j = 1, 2, \dots, n$$

as a result of the design specifications 2 and 3.
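A short Python sketch (assuming the natural preference order A < B < … < G and the design from the example above) confirms that under perfect agreement the rank sum of the j-th best object comes out to k + λ(j − 1):

```python
# Observers' subsets from the n = 7, m = k = 3 design; A is the best object.
blocks = ["ABD", "BCE", "CDF", "DEG", "EFA", "FGB", "GAC"]
objects = "ABCDEFG"
k, lam = 3, 1

rank_sum = {obj: 0 for obj in objects}
for b in blocks:
    for obj in b:
        # rank = 1 + number of better objects shown to the same observer
        better = objects[:objects.index(obj)]
        rank_sum[obj] += 1 + sum(o in b for o in better)

expected = {obj: k + lam * j for j, obj in enumerate(objects)}
print(rank_sum == expected)  # True: the sums are 3, 4, 5, 6, 7, 8, 9
```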
Since each object is ranked a fixed number, k, of times, the observed data for an experiment of this type can easily be presented in a two-way layout of k rows and n columns, where the jth column contains the collection of ranks assigned to object j by those observers to whom object j was presented. The rows no longer have any significance, but the column sums can be used to measure concordance. The sum of all ranks in the table is [m(m+1)/2][kn/m] = kn(m+1)/2, and thus the average column sum is k(m+1)/2. In the case of perfect concordance, the column sums are some permutation of the numbers

$$k, \; k+\lambda, \; k+2\lambda, \; \dots, \; k+(n-1)\lambda$$
and the sum of squares of the deviations of the column sums around their mean is

$$\sum_{j=0}^{n-1} \left[ (k + j\lambda) - \frac{k(m+1)}{2} \right]^2 = \frac{\lambda^2 n(n^2-1)}{12}$$

Let $R_j$ denote the actual sum of ranks in the jth column. A relative measure of concordance between observers may be defined here as

$$W = \frac{12 \sum_{j=1}^{n} \left[ R_j - k(m+1)/2 \right]^2}{\lambda^2 n(n^2-1)} \tag{5.1}$$

If m = n and λ = k, so that each observer ranks all n objects, (5.1) is equivalent to (4.4), as it should be.
This coefficient of concordance also varies between 0 and 1 with
larger values reflecting greater agreement between observers. If there
is no agreement, the column sums would all tend to be equal to the
average column sum and W would be zero.
TESTS OF SIGNIFICANCE BASED ON W
For testing the null hypothesis that the ranks are allotted randomly by
each observer to the subset of objects presented to him so that there is
no concordance, the appropriate rejection region is large values of W.
This test is frequently called the Durbin (1951) test.
The exact sampling distribution of W could be determined only
by an extensive enumeration process. Exact tables for 15 different
designs are given in van der Laan and Prakken (1972). For k large
an approximation to the null distribution may be employed for tests
of significance. We shall first determine the exact null mean and
variance of W using an approach analogous to the steps leading to
(2.7). Let $R_{ij}$, $i = 1, 2, \dots, k$, denote the collection of ranks allotted to object number j by the k observers to whom it was presented. From (11.3.2), (11.3.3), and (11.3.10), in the null case then for all i, j, and $q \neq j$,

$$E(R_{ij}) = \frac{m+1}{2} \qquad \operatorname{var}(R_{ij}) = \frac{m^2-1}{12} \qquad \operatorname{cov}(R_{ij}, R_{iq}) = -\frac{m+1}{12}$$

and $R_{ij}$ and $R_{hj}$ are independent for all j where $i \neq h$. Denoting $(m+1)/2$ by $\mu$, the numerator of W in (5.1) may be written as
$$12 \sum_{j=1}^{n} \left[ \sum_{i=1}^{k} R_{ij} - k\mu \right]^2 = 12 \sum_{j=1}^{n} \left[ \sum_{i=1}^{k} (R_{ij} - \mu) \right]^2 = 12 \sum_{j=1}^{n} \sum_{i=1}^{k} (R_{ij} - \mu)^2 + 24 \sum_{j=1}^{n} \sum_{1 \le i < h \le k} (R_{ij} - \mu)(R_{hj} - \mu) = pm(m^2-1) + 24U = \lambda^2 n(n^2-1) W \tag{5.2}$$
Since $\operatorname{cov}(R_{ij}, R_{hj}) = 0$ for all $i < h$, $E(U) = 0$. Squaring the sum represented by U and taking expectations, the only terms with nonzero expectation are the squared terms and those cross products in which the same pair of observers ranks both objects, so that

$$E(U^2) = \sum_{j=1}^{n} \sum_{1 \le i < h \le k} \operatorname{var}(R_{ij}) \operatorname{var}(R_{hj}) + 2 \sum_{1 \le j < q \le n} \binom{\lambda}{2} \operatorname{cov}(R_{ij}, R_{iq}) \operatorname{cov}(R_{hj}, R_{hq})$$

since objects j and q are presented together to the same pair of observers i and h a total of $\binom{\lambda}{2}$ times in the experiment. Substituting the respective variances and covariances, we obtain

$$\operatorname{var}(U) = E(U^2) = n \binom{k}{2} \left( \frac{m^2-1}{12} \right)^2 + 2 \binom{n}{2} \binom{\lambda}{2} \frac{(m+1)^2}{144} = \frac{nk(m+1)^2 (m-1) \left[ (m-1)(k-1) + (\lambda-1) \right]}{288}$$
From (5.2), the moments of W are

$$E(W) = \frac{m+1}{\lambda(n+1)} \qquad \operatorname{var}(W) = \frac{2(m+1)^2 \left[ (m-1)(k-1) + (\lambda-1) \right]}{nk\lambda^2 (m-1)(n+1)^2}$$
As in the case of complete rankings, a linear function of W has moments approximately equal to the corresponding moments of the chi-square distribution with n − 1 degrees of freedom if k is large. This function is

$$Q = \frac{\lambda(n^2-1) W}{m+1}$$

and its exact mean and variance are

$$E(Q) = n-1 \qquad \operatorname{var}(Q) = 2(n-1) \left[ 1 - \frac{m(n-1)}{nk(m-1)} \right] \approx 2(n-1) \left( 1 - \frac{1}{k} \right)$$

The rejection region for large k and significance level α then is

$$Q \in R \quad \text{for } Q \ge \chi^2_{n-1,\alpha}$$
TIED OBSERVATIONS
Unlike the case of complete rankings, no simple correction factor can
be introduced to account for the reduction in total sum of squares of
deviations of column totals around their mean when the midrank
method is used to handle ties. If there are only a few ties, the null
distribution of W should not be seriously altered, and thus the statistic
can be computed as usual with midranks assigned. Alternatively, any
of the other methods of handling ties discussed in Section 5.6 (except
omission of tied observations) may be adopted.
APPLICATIONS
This analysis-of-variance test based on ranks for balanced incomplete
rankings is usually called the Durbin test. The test statistic here,
where l is the number of times each pair of treatments is ranked and
m is the number of treatments in each block, is most easily computed
as
$$Q = \frac{12 \sum_{j=1}^{n} R_j^2}{\lambda n(m+1)} - \frac{3k^2(m+1)}{\lambda} \tag{5.3}$$
which is asymptotically chi-square distributed with n − 1 degrees of
freedom. The null hypothesis of equal treatment effects is rejected for
Q large.
Kendall's coefficient of concordance, as a descriptive measure for k incomplete sets of n rankings, where m is the number of objects presented for ranking and λ is the number of times each pair of objects is ranked together, is given in (5.1), which is equivalent to

$$W = \frac{12 \sum_{j=1}^{n} R_j^2 - 3k^2 n(m+1)^2}{\lambda^2 n(n^2-1)} \tag{5.4}$$

and $Q = \lambda(n^2-1)W/(m+1)$ is the chi-square test statistic with n − 1 degrees of freedom for the null hypothesis of no agreement between rankings.
If the null hypothesis of equal treatment effects is rejected, we
can use a multiple comparisons procedure to determine which pairs of
treatments have significantly different effects. Treatments i and j are
declared to be significantly different if
$$|R_i - R_j| \ge z^* \sqrt{\frac{km(m^2-1)}{6(n-1)}} \tag{5.5}$$

where $z^*$ is the negative of the $[\alpha/n(n-1)]$th quantile of the standard normal distribution.
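As an illustrative sketch (not from the text), the cutoff in (5.5) can be evaluated with the standard library's NormalDist; the values n = 7, m = k = 3, α = 0.05 are assumptions matching the wine-tasting example that follows.

```python
from statistics import NormalDist

n, m, k, alpha = 7, 3, 3, 0.05

# z* is the negative of the [alpha / (n(n-1))]-th standard normal quantile.
z_star = -NormalDist().inv_cdf(alpha / (n * (n - 1)))

# Critical difference from (5.5): |R_i - R_j| must reach this value
# for a significant pairwise difference between treatments i and j.
cutoff = z_star * (k * m * (m ** 2 - 1) / (6 * (n - 1))) ** 0.5

print(round(z_star, 3), round(cutoff, 3))
```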
Example 5.1 A taste-test experiment to compare seven different kinds of wine is to be designed such that no taster will be asked to rank more than three different kinds, so we have n = 7 and m = 3. If each pair of wines is to be compared only once, so that λ = 1, the required number of tasters is p = λn(n−1)/[m(m−1)] = 7. A balanced design was used and the rankings given are shown below. Calculate Kendall's coefficient of concordance as a measure of agreement between rankings and test the null hypothesis of no agreement.
Ranks assigned to wines A–G by the seven tasters (an empty cell means the wine was not presented to that taster):

| Taster | A | B | C | D | E | F | G |
|---|---|---|---|---|---|---|---|
| 1 | 1 | 2 | | 3 | | | |
| 2 | | 1 | 3 | | 2 | | |
| 3 | | | 3 | 2 | | 1 | |
| 4 | | | | 2 | 3 | | 1 |
| 5 | 1 | | | | 3 | 2 | |
| 6 | | 2 | | | | 1 | 3 |
| 7 | 1 | | 3 | | | | 2 |
| Total | 3 | 5 | 9 | 7 | 8 | 4 | 6 |

Solution Each wine is ranked three times, so that k = 3. We calculate $\sum R_j^2 = 280$ and substitute into (5.4) to get W = 1, which describes perfect agreement. The test statistic from (5.3) is Q = 12 with 6 degrees of freedom. The P value from Table B of the Appendix is 0.05 < P < 0.10 for the test of no agreement between rankings. At the time of this writing, neither STATXACT nor SAS has an option for the Durbin test.
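The arithmetic of Example 5.1 is easy to verify with a short Python sketch of formulas (5.3) and (5.4):

```python
# Verify the numbers in Example 5.1 via formulas (5.3) and (5.4).
n, m, k, lam = 7, 3, 3, 1           # 7 wines, blocks of 3, each wine ranked 3 times
R = [3, 5, 9, 7, 8, 4, 6]           # column (wine) rank totals from the table

S = sum(r * r for r in R)
W = (12 * S - 3 * k ** 2 * n * (m + 1) ** 2) / (lam ** 2 * n * (n ** 2 - 1))  # (5.4)
Q = 12 * S / (lam * n * (m + 1)) - 3 * k ** 2 * (m + 1) / lam                 # (5.3)

print(S, W, Q)  # 280 1.0 12.0
```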
https://www.rdocumentation.org/packages/geoR/versions/1.8-1/topics/boxcox.geodata

# boxcox.geodata
##### Box-Cox transformation for geodata objects
Method for Box-Cox transformation for objects of the class geodata assuming the data are independent. Computes and optionally plots profile log-likelihoods for the parameter of the Box-Cox simple power transformation $$y^\lambda$$.
Keywords
models, hplot, regression
##### Usage
# S3 method for geodata
boxcox(object, trend = "cte", ...)
##### Arguments
object
an object of the class geodata. See as.geodata.
trend
specifies the mean part of the model. See trend.spatial for further details. Defaults to "cte".
…

arguments to be passed to the function boxcox.
##### Details
This is just a wrapper for the function boxcox facilitating its usage with geodata objects.
Notice this assume independent observations which is typically not the case for geodata objects.
##### Value
A list of the lambda vector and the computed profile log-likelihood vector, invisibly if the result is plotted.
##### See Also

boxcox for parameter estimation results for independent data and likfit for parameter estimation within the geostatistical model.
##### Aliases
• boxcox.geodata
##### Examples
# NOT RUN {
if(require(MASS)){
boxcox(wolfcamp)
data(ca20)
boxcox(ca20, trend = ~altitude)
}
# }
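For readers outside R, here is a rough Python sketch (my own, not part of geoR) of the same profile log-likelihood computation for an i.i.d. sample; the geodata specifics are omitted.

```python
import math
import random

def boxcox_llf(lmb, y):
    """Profile log-likelihood of the Box-Cox parameter for i.i.d. data."""
    n = len(y)
    z = [math.log(v) if lmb == 0 else (v ** lmb - 1) / lmb for v in y]
    mean = sum(z) / n
    var = sum((v - mean) ** 2 for v in z) / n  # MLE of the variance
    return (lmb - 1) * sum(math.log(v) for v in y) - n / 2 * math.log(var)

random.seed(0)
y = [math.exp(random.gauss(1.0, 0.5)) for _ in range(400)]  # lognormal data

lambdas = [i / 20 for i in range(-40, 41)]  # grid from -2.0 to 2.0
best = max(lambdas, key=lambda l: boxcox_llf(l, y))
print(best)  # close to 0: a log transform suits lognormal data
```

Plotting llf against lambdas would reproduce the profile-likelihood curve that the R function draws.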
Documentation reproduced from package geoR, version 1.8-1, License: GPL (>= 2)
### Community examples
Looks like there are no examples yet.
https://brilliant.org/discussions/thread/what-is-wrong-with-this-proof/?sort=new

# What is wrong with this proof?
I didn't get the mistake in this proof that 2+2=5. Not my proof; found it on Facebook.
Note by Priyankar Kumar
5 years, 5 months ago
$$\sqrt{(4 - \frac{9}{2}) ^{2}}$$ is not equal to $$4 - \frac{9}{2}$$
- 5 years, 5 months ago
thanks
- 5 years, 5 months ago
$$\sqrt{x^2} = |x|$$ not $$x$$.
- 5 years, 5 months ago
You can't make a negative number positive by just squaring it and taking the root. I'm talking about the 2nd step, (4-(9/2)).
- 5 years, 5 months ago
thanks
- 5 years, 5 months ago
$$(4-\frac{9}{2})=-\frac{1}{2}$$
but,
$$(5-\frac{9}{2})=\frac{1}{2}$$
in this proof we wrote--
$$2+2=4-\frac{9}{2}+\frac{9}{2}...........step (i)$$
$$=\sqrt{(5-\frac{9}{2})^2}+\frac{9}{2}..........step(viii)$$
$$=5-\frac{9}{2}+\frac{9}{2}..........step(ix)$$
so, what we are actually doing is
$$-\frac{1}{2}+\frac{9}{2}=\frac{1}{2}+\frac{9}{2}$$..............consider step (i) and (ix)
this is where we made mistake.
So in this case at the time of removing square root of $$\sqrt{(5-\frac{9}{2})^2}$$
we need to consider $$\sqrt{(5-\frac{9}{2})^2}=-(5-\frac{9}{2})$$
- 5 years, 5 months ago
$$\sqrt {(5-\frac{9}{2})^{2}}$$ is not equal to $$-(5-\frac{9}{2})$$
- 5 years, 5 months ago
ok..i understood thanks.
- 5 years, 5 months ago
Careful! Gypsy's last statement is incorrect.
$$\sqrt{(5-\frac{9}{2})^2}$$ is not equal to $$-(5-\frac{9}{2})$$.
However, $$\sqrt{(4-\frac{9}{2})^2}=-(4-\frac{9}{2})$$.
- 5 years, 5 months ago
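The point several commenters make here, that √(x²) equals |x| rather than x, is easy to check numerically; a small sketch (not from the thread):

```python
import math

a = 4 - 9/2            # -0.5, negative
s = math.sqrt(a ** 2)
print(a, s)            # -0.5 0.5: sqrt(x^2) equals |x|, not x

b = 5 - 9/2            # +0.5, positive: here sqrt(b^2) really is b
assert s == abs(a) and s != a
assert math.sqrt(b ** 2) == b
```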
I think so that the square roots can't be split...
- 5 years, 5 months ago
It is wrong because $$\sqrt{x^2}$$ is not always equal to $$x$$. It has two values, that are $$x$$ and $$-x$$
- 5 years, 5 months ago
sqrt( x^2 ) is always | x |
- 5 years, 5 months ago
You should be careful about what you write. $$\sqrt{x^2}$$ [the principal square-root of $$x^2$$] does not have two values.
Take this for example: what is $$\sqrt{(1-\sqrt{3})^2}$$. Is it $$(1-\sqrt{3})$$? Or is it $$-(1-\sqrt{3})$$? Or is it both?
According to you, both of these should be correct. Are they?
Also see the solutions for this problem where almost all of them make this same mistake.
- 5 years, 5 months ago
Mmm, that got me.
- 5 years, 5 months ago
exactly..but in the subsequent steps, (a-b)^2 disguises this.
- 5 years, 5 months ago
There's a subtle mistake in there.
Here's a cryptic hint that might help:
Is $$\sqrt{x^2}$$ always equal to $$x$$?
- 5 years, 5 months ago
exactly..but the usage of (a-b)^2 somehow disguises this.
- 5 years, 5 months ago
You can't write (4-9/2) as square root of (4-9/2)^2. That is the mistake.
- 5 years, 5 months ago
Why not?
- 5 years, 5 months ago
Since $$4-9/2$$ is negative, while $$\sqrt{(4-9/2)^2}$$ is positive.
- 5 years, 5 months ago
ok..thanks..i get the point. Thanks to Mursalin too.
- 5 years, 5 months ago
You're welcome!
- 5 years, 5 months ago
Look at the last step. Assuming the mistake that it is $$-\sqrt{(5-\frac{9}{2})^{2}}$$ we will get $$-5+\frac{9}{2}+\frac{9}{2}=4$$. So, it is proved that $$2+2=4$$ and not $$5$$
- 5 years, 5 months ago
Ram Prakash is right. $$4-\frac{9}{2}$$ is not equal to $$\sqrt{(4-\frac{9}{2})^2}$$. This is what I hinted at in my initial comment.
- 5 years, 5 months ago
Did you see the term: 2 x 4 x 9/2? if we cancel 2 then the answer will be 39 but when we multiply the numerators, it is equal to 29. So maybe it is the mistake
- 5 years, 5 months ago
No, the calculation is $$2 \times 4 \times \frac{9}{2}$$. If we cancel the 2's, then it becomes $$4 \times 9 =36$$ and if we multiply the numerators it is $$\frac{72}{2} = 36$$. So this is not the mistake.
- 5 years, 5 months ago
No sir, i don't think so.
- 5 years, 5 months ago
- 5 years, 5 months ago
http://mathhelpforum.com/number-theory/183984-unusual-integer-equation.html

Math Help - unusual integer equation
1. unusual integer equation
Find all the positive integers $x, \, y$ such that $x^y=(y+1)^4$.
I could show that no solution is there if $5 \leq y+1$.
But the remaining cases still bug me.
any help.
2. Re: unusual integer equation
Seems like there is quite a collection for y = 0.
3. Re: unusual integer equation
Originally Posted by TKHunny
Seems like there is quite a collection for y = 0.
there is. but we are concerned with only positive integers.
4. Re: unusual integer equation
Originally Posted by abhishekkgp
Find all the positive integers $x, \, y$ such that $x^y=(y+1)^4$.
I could show that no solution is there if $5 \leq y+1$.
But the remaining cases still bug me.
any help.
y=4
x=y+1
(5,4) is a solution. I think it is the only one.(for x,y>0)
5. Re: unusual integer equation
Originally Posted by abhishekkgp
there is. but we are concerned with only positive integers.
Always with the missing ONE word!
6. Re: unusual integer equation
Originally Posted by Also sprach Zarathustra
y=4
x=y+1
(5,4) is a solution. I think it is the only one.(for x,y>0)
just found another one (x,y)=(3,8)
7. Re: unusual integer equation
Originally Posted by abhishekkgp
Find all the positive integers $x, \, y$ such that $x^y=(y+1)^4$.
I could show that no solution is there if $5 \leq y+1$.
But the remaining cases still bug me.
any help.
Write (a) $x^{y/4}=y+1$. Clearly, $x\geq2$ and so $y+1=x^{y/4}\geq2^{y/4}$. Note that since $x^y$ is a fourth power, then $y/4$ must be an integer.*
It can be shown# that $2^n>4n+1$ for any integer $n\geq5$, so $2^n\leq4n+1$ implies $n<5$. Now, taking $n=y/4$ we have $y/4<5$.
The only possible values for $y$ are $4, 8, 12,$ and $16$. The corresponding equations for (a) are $x=5$, $x^2=9$, $x^3=13$, and $x^4=17$. Only the first two are solvable. Therefore, the only solutions to the original equation are those that Also sprach Zarathustra and abhishekkgp gave.
*Not necessary for the solution, only to reduce the number cases.
#If $n=5$, then $2^5>4\cdot5+1$. Adding $4$ gives $4+2^n<2^n+2^n=2^{n+1}$ and $4n+1+4=4(n+1)+1$. Using $2^n>4n+1$ we have $2^{n+1}>4(n+1)+1$.
8. Re: unusual integer equation
Originally Posted by melese
Write (a) $x^{y/4}=y+1$. Clearly, $x\geq2$ and so $y+1=x^{y/4}\geq2^{y/4}$. Note that since $x^y$ is a fourth power, then $y/4$ must be an integer.*
i don't think this is correct for if we take $x=4, y=2$ then $x^y=2^4$ is still a fourth power but $y/4$ is not an integer.
9. Re: unusual integer equation
Originally Posted by abhishekkgp
i don't think this is correct for if we take $x=4, y=2$ then $x^y=2^4$ is still a fourth power but $y/4$ is not an integer.
I cannot be sure I'm not deceived again.
Since $y>0$, $x\geq2$. So, $2^y\leq x^y$ and by the Binomial Theorem, $(1+y)^4<1+y^4$. Therefore, $2^y\leq y^4$.
But $2^y>y^4$ for $y>16$ (by induction or otherwise). It follows that $y\leq16$.
The possible equations are:
$x^1=(1+1)^4; x^2=(2+1)^4; x^3=(3+1)^4; x^4=(4+1)^4;...; x^{15}=(15+1)^4; x^{16}=(16+1)^4$. Then the only solutions I've found are $(16,1), (9,2), (5,4), (3,8)$.
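A brute-force check (not from the thread) over the range $y \leq 16$ established above confirms this list of solutions:

```python
# For each y <= 16, test the integer candidates near the real root of
# x^y = (y+1)^4, guarding against floating-point rounding.
solutions = []
for y in range(1, 17):
    target = (y + 1) ** 4
    x = round(target ** (1 / y))        # candidate integer root
    for cand in (x - 1, x, x + 1):
        if cand >= 1 and cand ** y == target:
            solutions.append((cand, y))
print(solutions)  # [(16, 1), (9, 2), (5, 4), (3, 8)]
```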
10. Re: unusual integer equation
Could you try y=4z and x=(4z+1)^{1/z}? I don't think that there are any more positive integer solutions. I wrote a program and as y goes to infinity x approaches 1. Any ideas?
11. Re: unusual integer equation
Originally Posted by melese
I cannot be sure I'm not deceived again.
Since $y>0$, $x\geq2$. So, $2^y\leq x^y$ and by the Binomial Theorem, $(1+y)^4<1+y^4$. Therefore, $2^y\leq y^4$.
But $2^y>y^4$ for $y>16$ (by induction or otherwise). It follows that $y\leq16$.
The possible equations are:
$x^1=(1+1)^4; x^2=(2+1)^4; x^3=(3+1)^4; x^4=(4+1)^4;...; x^{15}=(15+1)^4; x^{16}=(16+1)^4$. Then the only solutions I've found are $(16,1), (9,2), (5,4), (3,8)$.
I had known that there are only four solutions to that equation.. I have not yet reviewed your solution but am sure you are right this time!
12. Re: unusual integer equation
Originally Posted by melese
$(1+y)^4<1+y^4$. .
but it's the other way around. Even if this is a typo I am not getting how you got y<16..
13. Re: unusual integer equation
Originally Posted by abhishekkgp
but its the other way around. Even if this is a typo i am not getting how you got y<16..
That was not a typo but a serious mistake. We have $2^y\leq x^y=(1+y)^4$.
For $y>16$, the inequality $2^y>(1+y)^4$ can be shown by induction on $y$. We can check that $2^{17}>(1+17)^4$. Assuming that $2^y>(1+y)^4$ multiply both sides by $2$. Then $2^{y+1}>(1+y)^4\cdot2$. Now it's left to show that $(1+y)^4\cdot2>(2+y)^4$. Note that $\displaystyle\frac{(2+y)^4}{(1+y)^4}=\left(\frac{2+y}{1+y}\right)^4=\left(1+\frac{1}{1+y}\right)^4$.
Since $(1+\frac{1}{1+y})^4$ gets smaller as $y>16$ gets larger it follows that $(1+\frac{1}{1+y})^4<(1+\frac{1}{1+16})^4<2$ and therefore $\frac{(2+y)^4}{(1+y)^4}<2$ - this completes the inductive part.
Since we want $2^y\leq(1+y)^4$, we must take $y\leq16$. Otherwise $y>16$ and the inequality $2^y>(1+y)^4$ holds.
14. Re: unusual integer equation
Originally Posted by melese
That was not a typo but a serious mistake. We have $2^y\leq x^y=(1+y)^4$.
For $y>16$, the inequality $2^y>(1+y)^4$ can be shown by induction on $y$. We can check that $2^{17}>(1+17)^4$. Assuming that $2^y>(1+y)^4$ multiply both sides by $2$. Then $2^{y+1}>(1+y)^4\cdot2$. Now it's left to show that $(1+y)^4\cdot2>(2+y)^4$. Note that $\displaystyle\frac{(2+y)^4}{(1+y)^4}=\left(\frac{2+y}{1+y}\right)^4=\left(1+\frac{1}{1+y}\right)^4$.
Since $(1+\frac{1}{1+y})^4$ gets smaller as $y>16$ gets larger it follows that $(1+\frac{1}{1+y})^4<(1+\frac{1}{1+16})^4<2$ and therefore $\frac{(2+y)^4}{(1+y)^4}<2$ - this completes the inductive part.
Since we want $2^y\leq(1+y)^4$, we must take $y\leq16$. Otherwise $y>16$ and the inequality $2^y>(1+y)^4$ holds.
that's great!!
https://laurentlessard.com/bookproofs/beer-pong/

# Beer pong
This interesting twist on the game of Beer Pong appeared on the Riddler blog. Here it goes:
The balls are numbered 1 through N. There is also a group of N cups, labeled 1 through N, each of which can hold an unlimited number of ping-pong balls. The game is played in rounds. A round is composed of two phases: throwing and pruning.
During the throwing phase, the player takes balls randomly, one at a time, from the infinite supply and tosses them at the cups. The throwing phase is over when every cup contains at least one ping-pong ball. Next comes the pruning phase. During this phase the player goes through all the balls in each cup and removes any ball whose number does not match the containing cup. Every ball drawn has a uniformly random number, every ball lands in a uniformly random cup, and every throw lands in some cup. The game is over when, after a round is completed, there are no empty cups.
How many rounds would you expect to need to play to finish this game? How many balls would you expect to need to draw and throw to finish this game?
Here is my solution:
[Show Solution]
## 3 thoughts on “Beer pong”
1. Jim Crimmins says:
Hi Laurent:
I did this a bit differently, calculating the expected number of throws in each turn to fill cups using N H_n, with n remaining cups in that turn, then the expected cups exhausted in each turn, reducing the number of cups, and proceeding that way, forcing total cups exhausted to be N and total throws to be N^2 H_N. These were my results; curious how close they are to yours:
[1.0, 2.5, 4.166666666666667, 5.916666666666667, 8.3000000000000007, 10.333333333333334, 12.357142857142858, 14.825000000000001, 16.505555555555556, 18.992857142857144]
1. My results were similar, but different. Here are the numerical results I found:
[1.0, 2.4, 4.17135, 6.09981, 8.12472, 10.2184, 12.3648, 14.5536, 16.7777, 19.0318].
I actually computed exact fractions as well:
$\{1, \tfrac{12}{5}, \tfrac{1485}{356}, \tfrac{317440}{52041}, \tfrac{7182252125}{883999816}, \tfrac{3139918735968}{307281361145}, \tfrac{2953154676073308334607}{238835679474876862230},\tfrac{1936604564184224758145986176}{133066780411872510357957803},\dots\}$ Didn't include this in the write-up because it wasn't particularly instructive, but it might be useful to compare actual rational answers. The first major difference is with the 2.4 vs 2.5; that would be a good place to start.
1. Jim Crimmins says:
Tks – it’s not expected to be exact, but it looks pretty close……
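A quick Monte Carlo sketch (my own, not from the post) reproduces the N = 2 value of 12/5 = 2.4 expected rounds discussed above:

```python
import random

def play(n, rng):
    """Simulate one game; return (rounds, balls thrown)."""
    matched = [False] * n        # cup i permanently holds a matching ball
    rounds = balls = 0
    while not all(matched):
        rounds += 1
        filled = list(matched)   # cups that are non-empty this round
        while not all(filled):   # throwing phase
            balls += 1
            number = rng.randrange(n)   # uniformly random ball number
            cup = rng.randrange(n)      # uniformly random landing cup
            filled[cup] = True
            if number == cup:
                matched[cup] = True
        # pruning phase: mismatched balls vanish, leaving `matched` as is
    return rounds, balls

rng = random.Random(1)
results = [play(2, rng) for _ in range(10000)]
mean_rounds = sum(r for r, _ in results) / len(results)
print(mean_rounds)  # should land near 12/5 = 2.4 for N = 2
```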
https://codeforces.com/blog/entry/107662

By MainuCodingBadiPasandAe, history, 2 months ago,
I was solving CSES — Towers and came up with an alternate problem which I think can be solved using DP somehow:
You are given n cubes in a certain order, and your task is to build towers using them. Whenever two cubes are one on top of the other, the upper cube must be smaller than the lower cube.
You must process the cubes in the given order. You can always either place the cube on top of an existing tower, begin a new tower or discard the current cube. You may discard at most K cubes. What is the minimum possible number of towers?
Constraints:
(Don't worry about the constraints, if you can think of any polynomial-time solution, please let me know)
1 <= N <= 1e5,
1 <= cube sizes <= 1e9
0 <= K <= N
eg. N = 4; cubes = [3, 8, 2, 1]; K = 1
Output: 1
==> Discard cube with size 3 or 8 and use the rest to build 1 tower
Any ideas to solve this efficiently?
» 2 months ago, # | 0 This is not solvable.
» 2 months ago, # | ← Rev. 3 → 0 I think this problem can be reduced to a knapsack-style problem. Let dp[i][j] = the minimum number of towers that can be formed from the first i cubes while ignoring at most j of them. I'm not sure, but I think that thinking this way will help us; can you help me to develop this idea? NOTE: I think this solution won't satisfy the problem's constraints.
• » » 2 months ago, # ^ | 0 I thought of this before writing the blog. You are not storing sufficient information in your DP states. How would you make transitions? Even if you store a pair of ints (minimum number of towers, largest cube that is on top of any tower), you won't be able to make transitions.
• » » » 2 months ago, # ^ | ← Rev. 6 → 0 My transition will be one of three: 1- Add this cube [the ith cube] (if I can) and update the top value of my tower. 2- Ignore this cube and increase j by one. 3- Make a new tower, then update the answer and set the top value to infinity. Forget about the constraints for now and think about how to move between these states.
» 2 months ago, # | 0 It is possible to think of a greedy way. If we have only one tower and the height of the top cube is equal to h, then the optimal way is to put the nearest cube (from the right) whose height is smaller than h. If we don't find any suitable cube, then we start another tower with the first cube that has not been taken yet. You can use a multiset to do it; the multiset will store the height of the top cube of each tower. Iterate over all cubes: the current cube is put above the smallest cube in the multiset which is greater than the current cube, and the current cube becomes the top of this tower, so you should erase the older top. If you can't find this greater cube, you should insert the current cube as a new tower.
• » » 2 months ago, # ^ | 0 You didn't read the problem statement :)
• » » » 2 months ago, # ^ | 0 Oh, sorry. OK, I will try.
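The greedy described above handles only the K = 0 case (no discards, as the replies note). It can be sketched with a sorted list standing in for the multiset:

```python
import bisect

def min_towers(cubes):
    """Minimum number of towers when no cube may be discarded (K = 0)."""
    tops = []                             # sorted top cubes of all towers
    for c in cubes:
        i = bisect.bisect_right(tops, c)  # leftmost top strictly greater than c
        if i < len(tops):
            tops.pop(i)                   # place c on that tower
        bisect.insort(tops, c)            # c is now a tower top
    return len(tops)

print(min_towers([3, 8, 2, 1]))  # 2: without discards, one tower is not enough
```

With the K = 1 discard from the example in the post, the answer drops to 1; extending this greedy to handle discards is exactly the open question.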
https://mathoverflow.net/questions/250079/approximate-an-exponential-martingale-through-its-kernel/250204

# Approximate an exponential martingale through its kernel
Given a deterministic function $h\in L^2([0,T]; \mathbb{R})$, we can define the associated exponential martingale \begin{align} M_t = \exp\left[\int_{0}^{t} h_s \,dB_s - \frac{1}{2}\int_{0}^{t} h_s^2\,ds\right], \quad t\in [0,T] \end{align} with the "kernel" $h$. Here $B$ is a standard Brownian motion. By Ito's formula, we obtain $dM_t=M_t h_t dB_t$.
I was wondering, if I choose a sequence of functions $h^n\in L^2([0,T]; \mathbb{R})$, such that $h^n\to h$ in $L^2([0,T]; \mathbb{R})$, with some additional assumptions for $h^n$ or $h$, is it possible to get the following convergence result in $L^2(\Omega; \mathbb{R})$ $$M_T^n\to M_T,\quad n\to \infty$$ where $M_T^n$ is the exponential martingale associated to $h_n$.
I get this question from the Ito representation theorem, which basically results from the density of exponential martingales associated to piecewise constant functions. However, piecewise constant functions on $[0,T]$ is uncountable, hence I try to use a countably dense subset of $L^2([0,T]; \mathbb{R})$ (say some good polynomials) to approximate such functions.
I asked the same question on MSE two weeks ago but haven't received any answer till now.
## 1 Answer
The OP seems to be an $\epsilon$ or so away from answering the question. Let me try to plug the gap, by introducing the approximating SDEs that $M_t^n$ satisfies: $$d M_t^n = h_t^n M_t^n d B_t \;, \qquad M_0^n = 1 \;, \tag{a}$$ which we will compare to $$d M_t = h_t M_t d B_t \;, \qquad M_0 = 1 \;. \tag{b}$$ As long as $h$ and $\{ h_n \}$ are continuous and bounded on $[0,T]$, then the coefficients of these SDEs fulfill standard global Lipschitz and linear growth conditions. Thus, these SDEs have unique strong solutions whose higher moments are nicely bounded on finite time intervals; to read more about this see, e.g., Chapter 5 of Lawrence C. Evans' AMS book entitled An Introduction to Stochastic Differential Equations.
Moreover, by Itô isometry \begin{align*} E | M_t - M_t^n |^2 &= E \left( \int_0^t (M_s h_s - M_s^n h_s^n) \, dB_s \right)^2 \\ &= E \int_0^t |M_s h_s - M_s^n h_s^n|^2 \, ds \\ &= E \int_0^t |M_s h_s - M_s h_s^n + M_s h_s^n - M_s^n h_s^n|^2 \, ds \\ &\le 2 \int_0^t |h_s-h_s^n |^2 E M_s^2 \, ds + 2 \int_0^t |h_s^n|^2 E |M_s - M_s^n |^2 \, ds \\ &\le C(t) \exp\left( 2\int_0^t |h_s^n|^2 \, ds \right) \int_0^t |h_s - h_s^n|^2 \, ds \end{align*} where in the last step we used Gronwall's inequality and a bound on the second moment of $M_s$ over $[0,t]$, which we lumped into a positive constant $C(t)>0$.
Note that this last inequality allows one to use convergence of the sequence of (deterministic) functions $\{h_n\}$ in $L^2([0,t]; \mathbb{R})$ to obtain mean-squared convergence of $M_t^n$ to $M_t$, as requested by the OP.
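A numerical sketch (my own) of this convergence: discretize both stochastic integrals on a common Brownian path and watch the mean-squared error shrink as the piecewise-constant kernel h^n approaches h(s) = s in L².

```python
import math
import random

def mse(n_pieces, n_paths=1000, steps=256, T=1.0, seed=0):
    """Monte Carlo estimate of E|M_T - M_T^n|^2 for h(s) = s and its
    piecewise-constant approximation h^n on a shared Brownian path."""
    rng = random.Random(seed)
    dt = T / steps
    total = 0.0
    for _ in range(n_paths):
        I = In = 0.0          # the stochastic integrals of h and h^n
        Q = Qn = 0.0          # the quadratic-variation terms
        for k in range(steps):
            s = k * dt
            dB = rng.gauss(0.0, math.sqrt(dt))
            h = s
            hn = math.floor(s * n_pieces) / n_pieces  # step approximation
            I += h * dB
            In += hn * dB
            Q += h * h * dt
            Qn += hn * hn * dt
        total += (math.exp(I - Q / 2) - math.exp(In - Qn / 2)) ** 2
    return total / n_paths

e2, e8, e32 = mse(2), mse(8), mse(32)
print(e2, e8, e32)  # errors shrink as the kernel approximation improves
```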
http://download.nexusformat.org/doc/html/classes/base_classes/NXdisk_chopper.html

# 3.3.1.16. NXdisk_chopper
Status:
base class, extends NXobject
Description:
A device blocking the beam in a temporal periodic pattern.
A disk which blocks the beam but has one or more slits to periodically let neutrons through as the disk rotates. Often used in pairs, one NXdisk_chopper should be defined for each disk.
The rotation of the disk is commonly monitored by recording a timestamp for each full rotation of disk, by having a sensor in the stationary disk housing sensing when it is aligned with a feature (such as a magnet) on the disk. We refer to this below as the “top-dead-center signal”.
Angles and positive rotation speeds are measured in an anticlockwise direction when facing away from the source.
Symbols:
This symbol will be used below to coordinate datasets with the same shape.
n: Number of slits in the disk
Groups cited:
NXgeometry
Structure:
type: (optional) NX_CHAR
Type of the disk-chopper: only one from the enumerated list (match text exactly)
Any of these values:
• single
• contra_rotating_pair
• synchro_pair
rotation_speed: (optional) NX_FLOAT {units=NX_FREQUENCY}
Chopper rotation speed. Positive for anticlockwise rotation when facing away from the source, negative otherwise.
slits: (optional) NX_INT
Number of slits
slit_angle: (optional) NX_FLOAT {units=NX_ANGLE}
Angular opening
pair_separation: (optional) NX_FLOAT {units=NX_LENGTH}
Disk spacing in direction of beam
slit_edges[2n]: (optional) NX_FLOAT {units=NX_ANGLE}
Angle of each edge of every slit from the position of the top-dead-center timestamp sensor, anticlockwise when facing away from the source. The first edge must be the opening edge of a slit, thus the last edge may have an angle greater than 360 degrees.
top_dead_center: (optional) NX_NUMBER {units=NX_TIME}

Timestamps of the top-dead-center signal. The times are relative to the “start” attribute and in the units specified in the “units” attribute. Please note that absolute timestamps under unix are relative to 1970-01-01T00:00:00.
@start: (optional) NX_DATE_TIME
beam_position: (optional) NX_FLOAT {units=NX_ANGLE}
Angular separation of the center of the beam and the top-dead-center timestamp sensor, anticlockwise when facing away from the source.
slit_height: (optional) NX_FLOAT {units=NX_LENGTH}
Total slit height
phase: (optional) NX_FLOAT {units=NX_ANGLE}
Chopper phase angle
delay: (optional) NX_NUMBER {units=NX_TIME}
Time difference between timing system t0 and chopper driving clock signal
ratio: (optional) NX_INT
Pulse reduction factor of this chopper in relation to other choppers/fastest pulse in the instrument
distance: (optional) NX_FLOAT {units=NX_LENGTH}
Effective distance to the origin
wavelength_range[2]: (optional) NX_FLOAT {units=NX_WAVELENGTH}
Low and high values of wavelength range transmitted
(geometry): (optional) NXgeometry
NXDL Source:
https://github.com/nexusformat/definitions/blob/master/base_classes/NXdisk_chopper.nxdl.xml
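A minimal sketch (mine, not from the NeXus documentation) of writing such a group into an HDF5 file with h5py; the field values are invented for illustration, and only a few of the optional fields from the table above are shown:

```python
import h5py

with h5py.File("instrument.nxs", "w") as f:
    ch = f.create_group("entry/instrument/chopper_1")
    ch.attrs["NX_class"] = "NXdisk_chopper"     # marks the base class
    ch["type"] = "single"
    ch["rotation_speed"] = 70.0                 # anticlockwise => positive
    ch["rotation_speed"].attrs["units"] = "Hz"
    ch["slits"] = 2
    # slit_edges: opening/closing edge pairs, anticlockwise degrees (2n values)
    ch["slit_edges"] = [10.0, 35.0, 190.0, 215.0]
    ch["slit_edges"].attrs["units"] = "deg"
```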
https://economics.stackexchange.com/questions/28972/is-there-a-standard-methodology-for-calculating-real-prices

# Is there a standard methodology for calculating real prices?
Is there an established professional standard used by economic / financial institutions for including or excluding i) the starting year/period, and ii) the end year/period when calculating real prices (i.e. when multiplying nominal values by a multiplier)?
Theoretically, there are four (mutually exclusive) possibilities
1. Include start period inflation, include end year inflation
2. Include start period inflation, exclude end year inflation
3. Exclude start period inflation, include end year inflation
4. Exclude start period inflation, exclude end year inflation
### Example
Converting $100 from 2008 into 2010 dollars. Suppose inflation was 2% in 2008, 3% in 2009, and 4% in 2010. Applying the four methods gives four answers, like so:

Method 1

Multiplying by the inflation in all of 2008, 2009, and 2010 (i.e. including start year and end year)

$100 * (1 + 0.02) * (1 + 0.03) * (1 + 0.04) = 109.26
Method 2
Multiplying by the inflation in all of 2008 and 2009 (not including end year)
$100 * (1 + 0.02) * (1 + 0.03) = 105.06

Method 3

Multiplying the 2008 price by the inflation in 2009 and 2010 (not including start year):

$100 * (1 + 0.03) * (1 + 0.04) = 107.12
Method 4
Multiplying by only the years in between (i.e. not including the start year nor end year), which here is just 2009:

$100 * (1 + 0.03) = 103.00
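For concreteness, the four methods can be sketched in a few lines of Python (the function and variable names here are mine, purely for illustration):

```python
def real_price(nominal, inflation_rates):
    """Scale a nominal value by compounding the given annual inflation rates."""
    factor = 1.0
    for rate in inflation_rates:
        factor *= 1.0 + rate
    return nominal * factor

# Inflation: 2% in 2008, 3% in 2009, 4% in 2010
i2008, i2009, i2010 = 0.02, 0.03, 0.04

method1 = real_price(100, [i2008, i2009, i2010])  # include start and end year
method2 = real_price(100, [i2008, i2009])         # exclude end year
method3 = real_price(100, [i2009, i2010])         # exclude start year
method4 = real_price(100, [i2009])                # only the year in between
```

Rounding to cents reproduces the four answers above: 109.26, 105.06, 107.12 and 103.00.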
http://christopherdanielson.wordpress.com/category/language-2/ | # Category Archives: Language
## More on the language of place value
182,356
Now mentally answer this question: What is the value of the 8 in this number?
I see two correct ways of stating this:
Eighty thousand, and
Eight ten-thousands
And I’m trying to decide whether I care about the difference between these. I’m not sure that I do.
So now I do what I always do to test ideas. I ask, What if? Specifically, What if we looked to the right of the decimal point? What would this question look like there?
So consider the number 0.0008.
If you said Zero-point-zero-zero-zero-eight, then I’ve got a lot more work to do with you.
Place value language matters in math classrooms.
And the sentence in the picture below is confusing.
No, I’m guessing we would all agree that this is eight ten-thousandths, which is not at all the same as eighty-thousandths (although it is the same as eighty hundred-thousandths).
So now I see that my What if? question has muddied the waters, rather than clarified them.
What can we conclude?
I suppose that the major conclusion is this:
We need to stop pretending that decimal place value (i.e. to the right of the decimal point) behaves exactly like whole-number place value (i.e. to the left of the decimal point).
In the abstract, this is certainly true. But composing units is not conceptually (or linguistically) equivalent to partitioning them.
Like I was saying here.
## “and”
How many dalmatians were in that movie again?
It was one-hundred-and-one, right?
Have I angered you yet? If you teach math, I probably have.
See, we math teachers are precise people. We like things to be just right. Part of being just right is using language correctly. In English number language, and is reserved for separating the whole number part of a number from the fractional part.
One and one-half
One and seven tenths
That sort of thing.
So when we hear one hundred and one, we freak out.
But separating the whole number part from the fractional part? That’s just one perspective. Sure, it’s the grammatically correct one.
But here’s the thing about grammar rules…they are arbitrary. Totally and completely arbitrary. Name a grammar rule and there’s a language somewhere that violates it completely.
Yet we English-speaking math teachers act as though the use of and were a signifier of mathematical understanding or competence. Which it is not.
Here’s another interpretation of and in English number language. Maybe and signifies a change of unit. In one and seven tenths, one is counting the original unit, seven is counting tenths. The and helps the listener to follow along; it signifies this shift.
In that case, one hundred and one is the same way. The first one counts a unit: hundreds. The second one (following the and) counts a different unit (which, awkwardly, we call either ones or units).
If I’m right about this, then you will have heard native speakers of English say something such as,
Three hundred and four thousand and twelve.
Three hundred and four counts thousands. Twelve counts ones. Then within the three hundred and four part, there are two different units also. Hence the and.
If I’m right, then you will probably not have heard native speakers of English say something such as,
Thirty and four.
Both of those words count the same units.
So I say let’s give up on this little obsession we have about and. Let’s not let it get in the way of effective, efficient communication in mathematics classrooms.
Let’s save our wrath for this:
Three point twelve.
## What is ten?
Consider the seemingly simple question What is ten?
Quantity. This refers to how many things there are. If ten is a quantity, then it refers to this many things: ***** *****
Numeration. This refers to how we write how many things there are. If ten is a set of symbols, then it refers to this: 10.
Number language. This refers to how we say how many things there are. If ten is a word, then it refers to this word: ten.
To illustrate the difference, ask a French person to read this number: 10. Then ask a fifth grader what this Roman numeral stands for: X. Finally ask a computer programmer what number this refers to in binary: 10.
In order, the French person’s dix illustrates that we can use different number language for the same numeration and quantity. The fifth grader’s ten illustrates that we can use different numeration for the same number language and quantity. And the programmer’s two illustrates that we can use the same symbols to represent different quantities.
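The programmer's reading is easy to reproduce for the numeration systems a computer already knows about — the same symbols "10" name different quantities depending on the base used to interpret them (a quick sketch):

```python
# The same numeration, "10", interpreted under different bases:
print(int("10", 10))  # decimal: ten
print(int("10", 2))   # binary: two
print(int("10", 16))  # hexadecimal: sixteen
```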
If I weren’t so lazy, I’d link to a Karen Fuson reference for the research details. Maybe I’ll get to that sometime. But she’s the go-to person on this.
## Ratcheting back the rhetoric on Common Core
Bill McCallum writes in the comments here at OMT:
I don’t think that effort [to define terms such "ratio" and "rate" that CCSS leaves undefined] deserves quite the ridicule it is receiving here, but never mind, the criticism will be taken into consideration nonetheless and inform the final draft. I’ll only say that if I had a dollar for every time someone told me the answers to all these questions were obvious, I’d be a rich man. Of course, the “obvious” answers are mutually self-contradictory. This seems to be an area where it is very difficult indeed to find common language, and where emotions run high.
Fair enough. I’m happy to tone things down a bit.
I do need to observe that no one here at OMT (least of all me) has suggested that the answers to the questions at hand are “obvious“. I agree that they are not obvious at all, and I agree that it is very difficult to find common language with respect to these ideas. (Although this last bit is tricky; if we all use the same words but mean different things by them, are we speaking a common language?)
No, my critique is not at all that Common Core has failed to state the obvious definitions.
My critique is that I see no evidence that Common Core-either the Standards or the Progression on rational number-take into account research on how children learn this content, nor do they seem to coincide with everyday uses of these terms. In the era of No Child Left Behind and “evidence-based practice”, I find it troublesome that results of important research work such as that in the Rational Number Project or Cognitively Guided Instruction don’t seem to form a basis for either document.
I find it surprising that there are no research references in the Progression.
In my work with Connected Mathematics, I have many times had teachers ask for definitions of rate, ratio, fraction and rational number. As writers, we have hashed out these ideas many times as well. Answers are not obvious and reasonable people can disagree.
But we sort of agree that answers to these questions ought to be consistent with, and explain relationships to, uses of these terms in mathematics and the world. I don’t understand why this isn’t the starting place for the Progression.
A ratio is a multiplicative comparison of two quantities (usually both are non-zero). Conventionally, we use the term “ratio” to apply to part-part comparisons, but this need not be the case.
We can express ratios in several forms. If there are 5 girls for every 3 boys in a certain class, we say that (1) the ratio of girls to boys is 5 to 3, (2) the ratio of girls to boys is 5:3, (3) the ratio of girls to boys is $\frac{5}{3}$, (4) there are 5 girls for every 3 boys.
The fraction notation $\frac{5}{3}$ is problematic in early ratio instruction because children may confuse it to mean that $\frac{5}{3}$ of the students are girls. Children are accustomed to fraction notation being reserved for part-whole relationships; for this reason the notation should be saved for later instruction.
The term rate suggests change. We tend to talk about a “ratio” in static situations where the values remain constant but a “rate” in a situation where the quantities are changing. In the girls and boys situation above, it would be correct to say that there is a rate of 5 girls for every 3 boys, but this feels awkward. If students were enrolling in a school and there were 5 girls enrolling for every 3 boys, the term “rate” is a more natural fit.
A unit rate is a rate where one of the quantities being compared is 1 unit. If we enroll five girls for every three boys, this is not a unit rate. We could say that there are $\frac{5}{3}$ girls per boy enrolling at the school, or $\frac{5}{3}$ girls for every boy. For every (non-zero) unit rate, there is a reciprocal unit rate. So we can also say that there are $\frac{3}{5}$ boys per girl.
What counts as a unit varies. When computing a “unit rate” for buying pop, we could compute the cost per ounce, the cost per can, the cost per six-pack or the cost per case of 24. Which of these is considered a unit rate depends on our choice of unit.
To summarize the discussion, ratios and rates are different mainly in connotation. Each expresses a multiplicative relationship between two numbers (as opposed to an additive relationship, for which we use the term “difference”). Unit rates are important forms of rates because of their intimate connections to algebraic and calculus ideas such as slope and rate of change.
It’s just a first stab at discussing these terms in this context, but is consistent with common usage and focuses the discussion on the main idea that is important at this level-rates and ratios are about multiplication relationships, which are the heart of proportional reasoning.
Going back to the lawn example that has been the focus of discussion here as well as over at the Common Core Tools website, this would suggest that “7 lawns in 4 hours” is a rate (there is change involved, and it’s not a part-to-part relationship), and that there are two unit rates: $\frac{7}{4}$ lawns per hour and $\frac{4}{7}$ hours per lawn.
Again, I am not claiming that these relationships are obvious. But if a couple of important goals for the Progressions work are (1) clarity and (2) usefulness for teachers, professional development and curriculum development, I think my proposal above is an improvement over the present document.
## What is a rate? Common Core revisited
A commenter (not me) asks over on the CCSS Progressions blog:
Are rate and unit rate interchangeable? Or should a teacher define them for middle school students as…

Rate: a quantity derived from the ratio of two quantities that describes how many units of the first quantity correspond to one unit of the second quantity.

Unit rate: the numerical part of a rate (e.g. for the rate 8 feet per second, the unit rate is 8).

If these are correct, I would then ask for clarity on the phrase “at that rate” in this example from 6.RP.3b: “For example, if it took 7 hours to mow 4 lawns, then at that rate, how many lawns could be mowed in 35 hours? At what rate were lawns being mowed?” Does “at that rate” here really mean “at the rate implied by the ratio of 7 hours to 4 lawns”? You aren’t suggesting that “7 hours to mow 4 lawns” is a rate? The rate, which you ask for in the last question, is “7/4 hours per lawn”?
The answer to this last question is going to be “yes”.
Whether it matches the meaning of these terms in real life or not, the answer will be “yes”.
Whether it matches the grammatical structure of the English language, in which unit would be seen to modify rate, the answer will be “yes”.
A unit rate, in the Looking Glass world of Common Core, is not a kind of rate; it's a different thing altogether. A rate is a numerical/linguistic construction. A unit rate is a number. Each is associated with a ratio.
But why?
The best sense I can make of this is that CCSS wants these terms to be precisely enough defined to admit a sort of mathematical clarity. No such definitions previously existed. So CCSS made them up.
## Words to avoid in the middle school classroom (continued)
I have this to add to the collection so far:
Long and hard.
In trying to put the Common Core mathematical practices into kid-friendly language, a colleague transformed this:
1. Make sense of problems and persevere in solving them.
Into this:
1. Think long and hard to solve problems.
## Words and images to avoid (addendum)
We had a little fun back in April with words and images to avoid in the middle school classroom.
Those who do not learn history are doomed to repeat it, no?
Consider the following from a recent draft of a project that will remain nameless, but which is intended for sixth grade:
Two boys who live near a golf course search for lost golf balls and package them for resale.
How many packs of 12 golf balls can be made
from a supply of 6,324 balls?
or
If a supply of 6,324 golf balls is packed in 12 boxes,
how many balls will be in each box?
http://www.coranac.com/2008/09/ | # Little programming game
I found this little game yesterday: Light-bot. You control a bot with a few commands to light up every blue tile in the level; kinda like LOGO or Lego Mindstorms. At 12 levels it's a nice little activity.
# NDS register overview
libnds has fixed the datatypes of pretty much all registers and has moved to the GBATek nomenclature for the BG-related registers. The list has been updated to match libnds v1.3.1.
The state of register names for NDS homebrew is a bit of a mess. First, there are the GBATek names. Since GBATek is considered the source of GBA/NDS information, it would make sense to adhere to those names pretty closely. But, of course, that's not how it actually is in the de facto library for NDS homebrew, libnds.
libnds has two sets of names. This probably is a result of serving different masters in its early days. One set uses Mappy's nomenclature. That's the one without the REG_ in front of it, and it uses things like _CR and _SR. This is the one you're most likely to see in the current NDS tutorials. The second set uses GBATek's names (mostly) plus a REG_ prefix. If you've done GBA programming, these should feel quite familiar.
# hurray for bookies
I think I mentioned this before, but we have this Book fair thing over here. These are generally wonderful in that the admission is free, things are usually pretty damn cheap compared to regular stores and even teh internets, and (very unlike most stores in this country *grumble*) there's a large variety of computer and science books as well. Even good ones.
Every month there's one in a different location; and this weekend it was Utrecht. I wasn't planning on going at first because I know I can't keep my hands off the things and I still have a considerable backlog from the last few times I went, but I had to go in that direction anyway, so I figured why not. And, as always, I went in with the idea that I didn't really need anything anymore, but came out with a bag full regardless. Books included:
• “It Must Be Beautiful: Great Equations of Modern Science”, exploring the story behind some of the most important equations in physics today.
• “Quantum Field Theory: A Modern Introduction” by Michio Kaku. Yes that Kaku. I didn't do much with QFT at university because it's fucking scary, but perhaps this time I can have better luck. If I ever get round to reading it.
• “Cross-Platform Game Programming”, dealing with memory and resource management for multiple systems, creating debugging facilities and more. I think this would have come in handy if I'd found it a few years ago. Oh well. Particularly nice feature: it was only €4; nearly a tenth of the regular price.
So yeah, another good batch. Now I just have to find the time to read them all.
# To C or not to C
Tonclib is coded mostly in C. The reason for this was twofold. First, I still have it in my head that C is lower level than C++, and that the former would compile to faster code; and faster is good. Second, it's easier for C++ to call C than the other way around so, for maximum compatibility, it made sense to code it in C. But these arguments always felt a little weak and now that I'm trying to port tonclib's functions to the DS, the question pops up again.
On many occasions, I just hated not going for C++. Not so much for its higher-level functionality like classes, inheritance and other OOPy goodness (or badness, some might say), but more because I would really, really like to make use of things like function overloading, default parameters and perhaps templates too.
For example, say you have a blit routine. You can implement this in multiple ways: with full parameters (srcX/Y, dstX/Y, width/height), using Point and Rect structs (srcRect, dstPoint) or perhaps just a destination point, using the full source-bitmap. In other words:
void blit(Surface *dst, int dstX, int dstY, int srcW, int srcH, Surface *src, int srcX, int srcY);
void blit(Surface *dst, Point *dstPoint, Surface *src, Rect *srcRect);
void blit(Surface *dst, Point *dstPoint, Surface *src);
In C++, this would be no problem. You just declare and define the functions and the compiler mangles the names internally to avoid naming conflicts. You can even make some of the functions inline facades that morphs the arguments for the One True Implementation. In C, however, this won't work. You have to do the name mangling yourself, like blit, blit2, blit3, or blitEx or blitRect, and so on and so forth. Eeghh, that is just ugly.
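To make the facade idea concrete, here's a sketch (the Surface/Point/Rect types are minimal stand-ins, and the "real" implementation just records its arguments so the sketch has something observable):

```cpp
#include <cassert>

// Minimal stand-ins for the types in the declarations above.
struct Surface { int w, h; };
struct Point   { int x, y; };
struct Rect    { int x, y, w, h; };

// Record of the last call, so this sketch has observable behavior.
struct BlitCall { int dstX, dstY, srcW, srcH, srcX, srcY; };
static BlitCall last_blit;

// The One True Implementation: the full parameter list.
void blit(Surface *dst, int dstX, int dstY, int srcW, int srcH,
          Surface *src, int srcX, int srcY)
{
    (void)dst; (void)src;  // the real copy loop is omitted here
    last_blit = BlitCall{ dstX, dstY, srcW, srcH, srcX, srcY };
}

// Facades with the same name: the compiler mangles the names,
// so no blit2/blitEx-style renaming is needed.
inline void blit(Surface *dst, Point *dstPos, Surface *src, Rect *srcRect)
{
    blit(dst, dstPos->x, dstPos->y, srcRect->w, srcRect->h,
         src, srcRect->x, srcRect->y);
}

inline void blit(Surface *dst, Point *dstPos, Surface *src)
{
    // No source rect given: blit the whole source bitmap.
    blit(dst, dstPos->x, dstPos->y, src->w, src->h, src, 0, 0);
}
```

Once inlined, the facades cost nothing at runtime; only the full version carries the actual implementation.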
Speaking of points and rectangles, that's another thing. Structs for points and rects are quite useful, so you make one using int members (you should always start with ints). But sometimes it's better to have smaller versions, like shorts. Or maybe unsigned variations. And so you end up with:
struct point8_t { s8 x, y; }; // Point as signed char
struct point16_t { s16 x, y; }; // Point as signed short
struct point32_t { s32 x, y; }; // Point as signed int
struct upoint8_t { u8 x, y; }; // Point as unsigned char
struct upoint16_t { u16 x, y; }; // Point as unsigned short
struct upoint32_t { u32 x, y; }; // Point as unsigned int
And then that for rects too. And perhaps 3D vectors. And maybe add floats to the mix as well. This all requires that you make structs which are identical except for the primary datatype. That just sounds kinda dumb to me.
But wait, it gets even better! You might like to have some functions to go with these structs, so now you have to create different sets (yes, sets) of functions that differ only by their parameter types too! AAAARGGGGHHHHH, for the love of IPU, NOOOOOOOOOOOOOO!!! Neen, neen; driewerf neen! (No, no; thrice no!) >_<
That's how it would be in C. In C++, you can just use a template like so:
template<class T>
struct point_t { T x, y; }; // Point via templates
typedef point_t<u8> point8_t; // And if you really want, you can
// typedef for specific types.
and be done with it. And then you can make a single template function (or was it function template, I always forget) that works for all the datatypes and let the compiler work it out. Letting the computer do the work for you, hah! What will they think of next.
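For instance, one function template serves every instantiation of point_t — a sketch using fixed-width typedefs in the style above:

```cpp
#include <cassert>
#include <cstdint>

typedef uint8_t u8;
typedef int32_t s32;

template<class T>
struct point_t { T x, y; };

// One definition serves every point type; the compiler instantiates
// a version per T as it is used.
template<class T>
point_t<T> point_add(const point_t<T> &a, const point_t<T> &b)
{
    point_t<T> c = { static_cast<T>(a.x + b.x), static_cast<T>(a.y + b.y) };
    return c;
}
```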
Oh, and there's namespaces too! Yes! In C, you always have to worry about if some other library has something with the same name as you're thinking of using. This is where all those silly prefixes come from (oh hai, FreeImage!). With C++, there's a clean way out of that: you can encapsulate them in a namespace and when a conflict arises you can use mynamespace::foo to get out of it. And if there's no conflicts, use using namespace mynamespace; and type just plain foo. None of that FreeImage_foo causing you to have more prefix than genuine function identifier.
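A minimal sketch of that escape hatch (the namespace and function names here are made up for illustration):

```cpp
#include <cassert>

// Two libraries that both want the name "clamp".
namespace tonc {
    int clamp(int x, int lo, int hi) { return x < lo ? lo : (x > hi ? hi : x); }
}

namespace freeimage {
    int clamp(int x, int /*lo*/, int /*hi*/) { return x; }  // different semantics
}

// No conflict as long as you qualify; and where there is no ambiguity,
// a using-directive buys back the short name.
using namespace tonc;
```

When a conflict does arise, the fully qualified name always disambiguates.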
And *then* there's C++ benefits like classes and everything that goes with it. Yes, classes can become fiendishly difficult if pushed too far(1), but inheritance and polymorphism are nice when you have any kind of hierarchy in your program. All Actors have positions, velocities and states. But a PlayerActor also needs input; and an NpcActor has AI. And each kind of NPC has different methods for behaviour and capabilities, and different Items have different effects and so on. It's possible to do this in just C (hint: unioned-structs and function-tables and of course state engines), but whether you'd want to is another matter. And there's constructors for easier memory management, STL and references. And, yes, streams, exceptions and RTTI too if you want to kill your poor CPU (regarding GBA/DS I mean), but nobody's forcing you to use those.
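For the curious, the function-table hint looks roughly like this in C-style code (written so it also compiles as C++; all names are illustrative):

```cpp
#include <cassert>

// C-style polymorphism: a hand-rolled function table ("vtable").
typedef struct Actor Actor;

typedef struct {
    void (*update)(Actor *self);   // per-type behavior lives in the table
} ActorVtbl;

struct Actor {
    const ActorVtbl *vtbl;
    int x, vx;
};

static void player_update(Actor *self) { self->x += self->vx; }     // input-driven
static void npc_update(Actor *self)    { self->x += 2 * self->vx; } // AI-driven

static const ActorVtbl player_vtbl = { player_update };
static const ActorVtbl npc_vtbl    = { npc_update };

// Dispatch by hand: what a C++ virtual call does for free.
static void actor_update(Actor *a) { a->vtbl->update(a); }
```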
So why the hell am I staying with C again? Oh right, performance!
Performance, really? I think I heard this was a valid point a long time ago, but is it still true now? To test this, I turned all tonclib's C files into C++ files, compiled again and compared the two. This is the result:
Difference in function size between C++ and C in bytes.
That graph shows the difference in the compiled function size. Positive means C++ uses more instructions. In nearly 300 functions, the only differences are minor variations in irq_set(), some of the rendering routines and TTE parsers, and neither language is the clear winner. Overall, C++ seems to do a little bit better, but the difference is marginal.
I've also run a diff between the generated assembly. There are a handful of functions where the order of instructions are different, or different registers are used, or a value is placed in a register instead of on the stack. That's about it. In other words, there is no significant difference between pure C code and its C++ equivalent. Things will probably be a little different when OOP features and exceptions enter the fray, but that's to be expected. But if you stay close to C-like C++, the only place you'll notice anything is in the name-mangling. Which you as a programmer won't notice anyway because it all happens behind the scenes.
So that strikes performance off my list, leaving only wider compatibility. I suppose that still has some merit, but considering you can turn C-code into valid C++ by changing the extension(2), this sounds more and more like an excuse instead of a reason.
##### Notes:
1. As the saying goes: C++ makes it harder to shoot yourself in the foot, but when you do, you blow off your whole leg.
2. and clean up the type issues that C allows but C++ doesn't, like void* arithmetic and implicit pointer casts from void*. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22718730568885803, "perplexity": 2523.7646527154366}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943704.21/warc/CC-MAIN-20230321162614-20230321192614-00780.warc.gz"} |
https://stats.stackexchange.com/tags/poisson-distribution/info | Tag Info
A discrete distribution defined on the non-negative integers that has the property that the mean is equal to the variance.
Overview
A discrete random variable $X$ has a Poisson distribution indexed by a parameter $\lambda > 0$ if it has probability mass function

$$P(X = x) = \frac{ \lambda^x e^{-\lambda} }{x!} \quad \text{for } x = 0, 1, 2, \ldots$$
One property of the Poisson distribution is that $\mathrm{E}(X) = \mathrm{Var}(X) = \lambda$.
The Poisson distribution is used to model situations where there is a rate of occurrence associated with an event. For example, it is used prominently in physics to model "counting experiments" like the number of photons arriving at a telescope, or the number of radioactive counts recorded by a Geiger counter.
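The mean-equals-variance property is easy to verify numerically from the mass function (a quick sketch):

```python
from math import exp, factorial

def poisson_pmf(x, lam):
    """P(X = x) for a Poisson(lam) random variable, x = 0, 1, 2, ..."""
    return lam ** x * exp(-lam) / factorial(x)

def poisson_moments(lam, terms=100):
    """Mean and variance by direct (truncated) summation over the pmf."""
    mean = sum(x * poisson_pmf(x, lam) for x in range(terms))
    second_moment = sum(x * x * poisson_pmf(x, lam) for x in range(terms))
    return mean, second_moment - mean ** 2

mean, var = poisson_moments(3.5)
print(mean, var)  # both approximately 3.5
```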
https://platonicrealms.com/minitexts/Coping-With-Math-Anxiety | # Coping with Math Anxiety
Multiplication is vexation,
Division is as bad;
The Rule of Three perplexes me,
And Practice drives me mad.
—Old Rhyme
### What Is Math Anxiety?
A famous stage actress was once asked if she had ever suffered from stage-fright, and if so how she had gotten over it. She laughed at the interviewer’s naive assumption that, since she was an accomplished actress now, she must not feel that kind of anxiety. She assured him that she had always had stage fright, and that she had never gotten over it. Instead, she had learned to walk on stage and perform—in spite of it.
Like stage fright, math anxiety can be a disabling condition, causing humiliation, resentment, and even panic. Consider these testimonials from a questionnaire we have given to students in the past several years:
• When I look at a math problem, my mind goes completely blank. I feel stupid, and I can’t remember how to do even the simplest things.
• I've hated math ever since I was nine years old, when my father grounded me for a week because I couldn’t learn my multiplication tables.
• In math there’s always one right answer, and if you can’t find it you've failed. That makes me crazy.
• Math exams terrify me. My palms get sweaty, I breathe too fast, and often I can't even make my eyes focus on the paper. It’s worse if I look around, because I’d see everybody else working, and know that I’m the only one who can’t do it.
• I've never been successful in any math class I've ever taken. I never understand what the teacher is saying, so my mind just wanders.
• Some people can do math—not me!
What all of these students are expressing is math anxiety, a feeling of intense frustration or helplessness about one's ability to do math. What they did not realize is that their feelings about math are common to all of us to some degree. Even the best mathematicians, like the actress mentioned above, are prone to anxiety—even about the very thing they do best and love most.
In this essay we will take a constructive look at math anxiety, its causes, its effects, and at how you as a student can learn to manage this anxiety so that it no longer hinders your study of mathematics. Lastly, we will examine special strategies for studying mathematics, doing homework, and taking exams.
Let us begin by examining some social attitudes towards mathematics that are especially relevant.
### Social and Educational Roots
Imagine that you are at a dinner party, seated with many people at a large table. In the course of conversation the person sitting across from you laughingly remarks, “of course, I’m illiterate…!” What would you say? Would you laugh along with him or her and confess that you never really learned to read either? Would you expect other people at the table to do so?
Now imagine the same scene, only this time the guest across from you says, “of course, I’ve never been any good at math…!” What happens this time? Naturally, you can expect other people at the table to chime in cheerfully with their own claims to having “never been good at math”—the implicit message being that no ordinary person ever is.
Poor teaching leads to the inevitable idea that the subject (mathematics) is only adapted to peculiar minds, when it is the one universal science, and the one whose ground rules are taught us almost in infancy and reappear in the motions of the universe.
—H.J.S. Smith
The fact is that mathematics has a tarnished reputation in our society. It is commonly accepted that math is difficult, obscure, and of interest only to “certain people,” i.e., nerds and geeks—not a flattering characterization. The consequence in many English-speaking countries, and especially in the United States, is that the study of math carries with it a stigma, and people who are talented at math or profess enjoyment of it are often treated as though they are not quite normal. Alarmingly, many school teachers—even those whose job it is to teach mathematics—communicate this attitude to their students directly or indirectly, so that young people are invariably exposed to an anti-math bias at an impressionable age.
It comes as a surprise to many people to learn that this attitude is not shared by other societies. In Russian or German culture, for example, mathematics is viewed as an essential part of literacy, and an educated person would be chagrined to confess ignorance of basic mathematics. (It is no accident that both of these countries enjoy a centuries-long tradition of leadership in mathematics.)
Students must learn that mathematics is the most human of endeavors. Flesh and blood representatives of their own species engaged in a centuries long creative struggle to uncover and to erect this magnificent edifice. And the struggle goes on today. On the very campuses where mathematics is presented and received as an inhuman discipline, cold and dead, new mathematics is created. As sure as the tides.
—J.D. Phillips
Our jaundiced attitude towards mathematics has been greatly exacerbated by the way in which it has been taught since early in the twentieth century. For nearly seventy years, teaching methods have relied on a behaviorist model of learning, a paradigm which emphasizes learning-by-rote; that is, memorization and repetition. In mathematics, this meant that a particular type of problem was presented, together with a technique of solution, and these were practiced until sufficiently mastered. The student was then hustled along to the next type of problem, with its technique of solution, and so on. The ideas and concepts which lay behind these techniques were treated as a sideshow, or most often omitted altogether. Someone once described this method of teaching mathematics as inviting students to the most wonderful restaurant in the world—and then forcing them to eat the menu! Little wonder that the learning of mathematics seems to most people a dull and unrewarding enterprise, when the very meat of the subject is boiled down to the gristle before it is served.
The mind is not a vessel to be filled. It is a fire to be kindled.
—Plutarch
This horror story of mathematics education may yet have a happy ending. Reform efforts in the teaching of mathematics have been under way for several years, and many—if not all—teachers of mathematics have conscientiously set about replacing the behaviorist paradigm with methods based on constructivist or other progressive models of learning. As yet, however, there remains no widely accepted teaching methodology for implementing these reform efforts, and it may well be that another generation will pass before all students in the primary and secondary grades are empowered to discover the range and beauty of mathematical ideas, free of the stigmas engendered by social and educational bias.
Finally, young women continue to face an additional barrier to success in mathematics. Remarkably, even at the start of the 21st century, school-age girls are still discouraged by parents, peers, and teachers with the admonition that mathematics “just isn't something girls do.” Before we became teachers, we would have assumed that such attitudes died out a generation ago, but now we know better. Countless of our female students have told us how friends, family members, and even their junior and senior high school instructors impressed upon them the undesirability of pursuing the study of mathematics. My own wife (a mathematician) recalls approaching her junior high school geometry teacher after class with a question about what the class was studying. He actually patted her on the head, and explained that she “didn’t need to know about that stuff.” (And, needless to say, he didn’t answer her question.) Rank sexism such as this is only part of the problem. For all adolescents, but especially for girls, there is concern about how one is viewed by members of the opposite sex—and being a “geek” is not seen as the best strategy. Peer pressure is the mortar in that wall. And parents, often even without knowing it, can facilitate this anxiety and help to discourage their daughters from maintaining an open mind and a natural curiosity towards the study of science and math.
Together these social and educational factors lay the groundwork for many widely believed myths and misconceptions about the study of mathematics. To an examination of these we now turn.
### Math Myths
A host of common but erroneous ideas about mathematics are available to the student who suffers math anxiety. These have the effect of justifying or rationalizing the fear and frustration he or she feels, and when these myths are challenged a student may feel defensive. This is quite natural. However, it must be recognized that loathing of mathematics is an emotional response, and the first step in overcoming it is to appraise one’s opinions about math in a spirit of detachment. Consider the five most prevalent math myths, and see what you make of them:
#### Myth #1: Aptitude for math is inborn.
This belief is the most natural in the world. After all, some people just are more talented at some things (music and athletics come to mind) and to some degree it seems that these talents must be inborn. Indeed, as in any other field of human endeavor, mathematics has had its share of prodigies. Carl Friedrich Gauss helped his father with bookkeeping as a small child, and the Indian mathematician Ramanujan discovered deep results in mathematics with little formal training. It is easy for students to believe that doing math requires a “math brain”—one which they, in particular, have not got.
But consider: to generalize from “three spoons, three rocks, three flowers”—to the number “three”—is an extraordinary feat of abstraction, yet every one of us accomplished this when we were mere toddlers! Mathematics is indeed inborn, but it is inborn in all of us. It is a human trait, shared by the entire race. Reasoning with abstract ideas is the province of every child, every woman, every man. Having a special genetic make-up is no more necessary for doing mathematics than it is for carrying a tune.
Ask your math teacher or professor if he or she became a mathematician in consequence of having a special brain. (Be sure to keep a straight face when you do this.) Almost certainly, after the laughter has subsided, it will turn out that a parent or teacher was responsible for helping your instructor discover the beauty in mathematics, and the rewards it holds for the student—and decidedly not a special brain. (If you ask my wife, on the other hand, she will tell you it was orneriness; she got sick of being told she couldn’t do it.)
#### Myth #2: To be good at math you have to be good at calculating.
Some people count on their fingers. Invariably, they feel somewhat ashamed about it, and try to do it furtively. But this is ridiculous. Why shouldn't you count on your fingers? What else is a Chinese abacus, but a sophisticated version of counting on your fingers? Yet people accomplished at using the abacus can out-perform anyone who calculates figures mentally.
Modern mathematics is a science of ideas, not an exercise in calculation. It is a standing joke that mathematicians can’t do arithmetic reliably, and I often admonish my students to check my calculations on the chalkboard because I'm sure to get them wrong if they don’t. There is a serious message in this: being a whiz at figures is not the mark of success in mathematics.
This bears emphasis: a pocket calculator has no knowledge, no insight, no understanding—yet it is better at addition and subtraction than any human will ever be. And who would prefer being a pocket calculator to being human?
This myth is largely due to the methods of teaching discussed above, which emphasize finding solutions by rote. Indeed, many people suppose that a professional mathematician’s research involves something like doing long division to more and more decimal places, an image that makes mathematicians smile sadly. New mathematical ideas—the object of research—are precisely that. Ideas. And ideas are something we can all relate to. That’s what makes us people to begin with.
#### Myth #3: Math requires logic, not creativity.
The grain of truth in this myth is that, of course, math does require logic. But what does this mean? It means that we want things to make sense. We don't want our equations to assert that 1 is equal to 2.
Logic is the anatomy of thought.
—John Locke
This is no different from any other field of human endeavor, in which we want our results and propositions to be meaningful—and they can’t be meaningful if they do not jibe with the principles of logic that are common to all mankind. Mathematics is somewhat unique in that it has elevated ordinary logic almost to the level of an artform, but this is because logic itself is a kind of structure—an idea—and mathematics is concerned with precisely that sort of thing.
The moving power of mathematics is not reasoning but imagination.
—Augustus De Morgan
But it is simply a mistake to suppose that logic is what mathematics is about, or that being a mathematician means being uncreative or unintuitive, for exactly the opposite is the case. The great mathematicians, indeed, are poets in their soul.
How can we best illustrate this? Consider the ancient Greeks, such as Pythagoras, who first brought mathematics to the level of an abstract study of ideas. They noticed something truly astounding: that the musical tones most pleasing to the ear are those achieved by dividing a plucked string into ratios of integers. For instance, the musical interval of a “fifth” is achieved by plucking a taut string whilst pressing the finger against it at a distance exactly two-thirds along its total length. From such insights, the Pythagoreans developed an elaborate and beautiful theory of the nature of physical reality, one based on number. And to them we owe an immense debt, for to whom does not music bring joy? Yet no one could argue that music is a cold, unfeeling enterprise of mere logic and calculation.
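As a modern aside (not part of the original passage), the Pythagorean observation can be stated in one proportionality: for a string at fixed tension, the pitch it sounds is inversely proportional to its vibrating length, so the consonant intervals correspond to simple whole-number ratios:

```latex
f \;\propto\; \frac{1}{L}
\qquad\Longrightarrow\qquad
\underbrace{\tfrac{1}{2}L \,\mapsto\, 2f}_{\text{octave}},\qquad
\underbrace{\tfrac{2}{3}L \,\mapsto\, \tfrac{3}{2}f}_{\text{fifth}},\qquad
\underbrace{\tfrac{3}{4}L \,\mapsto\, \tfrac{4}{3}f}_{\text{fourth}}
```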
If you remain unconvinced, take a stroll through the Mathematical Art of M.C. Escher. Here is the creative legacy of an artist with no advanced training in math, but whose works consciously celebrate mathematical ideas, in a way that slips them across the transom of our self-conscious anxiety, presenting them afresh to our wondering eyes.
#### Myth #4: In math, what counts is getting the right answer.
If you are building a bridge, getting the right answer counts for a lot, no doubt. Nobody wants a bridge that tumbles down during rush hour because someone forgot to carry the 2 in the 10’s place! But are you building bridges, or studying mathematics? Even if you are studying math so that you can build bridges, what matters right now is understanding the concepts that allow bridges to hang magically in the air—not whether you always remember to carry the 2.
That you be methodical and complete in your work is important to your math instructor, and it should be important to you as well. This is just a matter of doing what you are doing as well as you can do it—good mental and moral hygiene for any activity. But if any instructor has given you the notion that “the right answer” is what counts most, put it out of your head at once. Nobody overly fussy about how his or her bootlace is tied will ever stroll at ease through Platonic Realms.
#### Myth #5: Men are better than women at math.
If there is even a ghost of a remnant of a suspicion in your mind about gender making a whit’s difference in students’ mathematics aptitude, slay the beast at once. Special vigilance is required when it comes to this myth, because it can find insidious ways to affect one’s attitude without ever drawing attention to itself. For instance, I’ve had female students confide to me that—although of course they do not believe in a gender gap when it comes to ability—still it seems to them a little unfeminine to be good at math. There is no basis for such a belief, and in fact a sociological study several years ago found that female mathematicians are, on average, slightly more feminine than their non-mathematician counterparts.
Sadly, the legacy of generations of gender bias, like our legacy of racial bias, continues to shade many people’s outlooks, often without their even being aware of it. It is every student’s, parent’s, and educator’s duty to be on the lookout for this error of thought, and to combat it with reason and understanding wherever and however it may surface.
Across the centuries, from Hypatia to Amalie Noether to thousands of contemporary women in school and university math departments around the globe, female mathematicians have been and remain full partners in creating the rich tapestry of mathematics. A web search for "women in mathematics" will turn up many outstanding sites with information about historical and contemporary women in mathematics. You may also like to check out Platonic Realms' own inspirational poster on Great Women of Mathematics in the Math Store.
### Taking Possession of Math Anxiety
Even though all of us suffer from math anxiety to some degree—just as anyone feels at least a little nervous when speaking to an audience—for some of us it is a serious problem, a burden that interferes with our lives, preventing us from achieving our goals. The first step, and the one without which no further progress is possible, is to recognize that math anxiety is an emotional response. (In fact, severe math anxiety is a learned emotional response.) As with any strong emotional reaction, there are constructive and unconstructive ways to manage math anxiety. Unconstructive (and even damaging) ways include rationalization, suppression, and denial.
By “rationalization,” we mean finding reasons why it is okay and perhaps even inevitable – and therefore justified – for you to have this reaction. The myths discussed above are examples of rationalizations, and while they may make you feel better (or at least less bad) about having math anxiety, they will do nothing to lessen it or to help you get it under control. Therefore, rationalization is unconstructive.
By “suppression” is meant having awareness of the anxiety – but trying very, very hard not to feel it. I have found that this is very commonly attempted by students, and it is usually accompanied by some pretty severe self-criticism. Students feel that they shouldn’t feel this anxiety, that it’s a weakness which they should overcome, by brute force if necessary. When this effort doesn’t succeed (as invariably it doesn’t) the self-criticism becomes ever harsher, leading to a deep sense of frustration and often a severe loss of self-esteem – particularly if the stakes for a student are high, as when his or her career or personal goals are riding on a successful outcome in a math class, or when parental disapproval is a factor. Consequently, suppression of math anxiety is not only unconstructive, but can actually be damaging.
Finally, there is denial. People using this approach probably aren’t likely to see this essay, much less read it, for they carefully construct their lives so as to avoid all mathematics as much as possible. They choose college majors, and later careers, that don’t require any math, and let the bank or their spouse balance the checkbook. This approach has the advantage that feelings of frustration and anxiety about math are mostly avoided. However, their lives are drastically constrained, for in our society fewer than 25% of all careers are, so-to-speak, “math-free,” and thus their choices of personal and professional goals are severely limited. (Most of these math-free jobs, incidentally, are low-status and low-pay.)
The Universe is a grand book which cannot be read until one first learns to comprehend the language and become familiar with the characters in which it is composed. It is written in the language of mathematics.
—Galileo
People in denial about mathematics miss out on something else too, for the student of mathematics learns to see aspects of the structure and beauty of our world that can be seen in no other way, and to which the “innumerate” necessarily remain forever blind. It would be a lot like never hearing music, or never seeing colors. (Of course some people have these disabilities, but innumeracy is something we can do something about.)
Okay, so what is the constructive way to manage math anxiety? I call it “taking possession.” It involves making as conscious as possible the sources of math anxiety in one’s own life, accepting those feelings without self-criticism, and then learning strategies for disarming math anxiety's influence on one’s future study of mathematics. (These strategies are explored in depth in the next section.)
Begin by understanding that your feelings of math anxiety are not uncommon, and that they definitely do not indicate that there is anything wrong with you or inferior about your ability to learn math. For some this can be hard to accept, but it is worth trying to accept—since after all it happens to be true. This can be made easier by exploring your own “math-history.” Think back across your career as a math student, and identify those experiences which have contributed most to your feelings of frustration about math. For some this will be a memory of a humiliating experience in school, such as being made to stand at the blackboard and embarrassed in front of one’s peers. For others it may involve interaction with a parent. Whatever the principal episodes are, recall them as vividly as you are able to. Then, write them down. This is important. After you have written the episode on a sheet(s) of paper, write down your reaction to the episode, both at the time and how it makes you feel to recall it now. (Do this for each episode if there is more than one.)
After you have completed this exercise, take a fresh sheet of paper and try to sum up in a few words what your feelings about math are at this point in your life, together with the reason or reasons you wish to succeed at math. This too is important. Not until after we lay out for ourselves in a conscious and deliberate way what our feelings and desires are towards mathematics, will it become possible to take possession of our feelings of math anxiety and become free to implement strategies for coping with those feelings.
At this point it can be enormously helpful to share your memories, feelings, and goals with others. In a math class I teach for arts majors, I hand out a questionnaire early in the semester asking students to do exactly what is described above. After they have spent about twenty minutes writing down their recollections and goals, I lead them in a classroom discussion on math anxiety. This process of dialogue and sharing—though it may seem just a bit on the goopy side—invariably brings out of each student his or her own barriers to math, often helping these students become completely conscious of these barriers for the first time. Just as important, it helps all my students understand that the negative experiences they have had, and their reactions to them, are shared one way or another by almost everyone else in the room.
If you do not have the opportunity to engage in a group discussion in a classroom setting, find friends or relatives whom you trust to respect your feelings, and induce them to talk about their own experiences of math anxiety and to listen to yours.
Once you have taken possession of your math anxiety in this way, you will be ready to implement the strategies outlined in the next section.
### Strategies for Success
Mathematics, as a field of study, has features that set it apart from almost any other scholastic discipline. On the one hand, correctly manipulating the notation to calculate solutions is a skill, and as with any skill mastery is achieved through practice. On the other hand, such skills are really only the surface of mathematics, for they are only marginally useful without an understanding of the concepts which underlie them. Consequently, the contemplation and comprehension of mathematical ideas must be our ultimate goal. Ideally, these two aspects of studying mathematics should be woven together at every point, complementing and enhancing one another, and in this respect studying mathematics is much more like studying, say, music or painting than it is like studying history or biology.
The value of a problem is not so much coming up with the answer as in the ideas and attempted ideas it forces on the would be solver.
—I.N. Herstein
In view of mathematics’ unique character, the successful student must devise a special set of strategies for accomplishing his or her goals, including strategies for taking lectures, homework, and exams. We will examine each of these in turn. Keep in mind that these strategies are suggestions, not laws handed down from the mountain. Each student must find for him or herself the best way to implement these ideas, fitting them to his or her own unique learning styles. As the Greeks said, know thyself!
#### Taking Lectures
Math teachers are a mixed bag, no question, and it’s easy to criticize, especially when the criticism is justified. If your own math teacher really connects with you, really helps you understand, terrific—and be sure to let him or her know. But if not, there are a couple of things you will want to keep in mind.
To begin with, think what the teacher’s job entails. First, a textbook must be chosen, a syllabus prepared, and the material being taught (which your teacher may or may not have worked with in some time) completely mastered. This is before you ever step into class on that first day. Second, for every lecture the teacher gives, there is at least an hour’s preparation, writing down lecture notes, thinking about how best to present the material, and so on. This is on top of the time spent grading student work—which itself can be done only after the instructor works the exercises for him or herself. Finally, think about the anxiety you feel about speaking to an audience, and about your own math anxiety, and then imagine what a math teacher must do: manage both kinds of anxiety simultaneously. It would be wonderful if every instructor were a brilliant lecturer. But even the least brilliant deserves consideration for the difficulty of the job.
The second thing to keep in mind is that getting the most out of a lecture is your job. Many students suppose that writing furiously to get down everything the instructor puts on the board is the best they can do. Unfortunately, you cannot both write the details and focus on the ideas at the same time. Consequently, you will have to find a balance. Particularly if the instructor is lecturing from a set text, it may be that almost everything he or she puts on the board is in the text, so in effect it’s written down for you already. In this case, make some note of the instructor’s ideas and commentary and methods, but make understanding the lecture your primary focus. One of the best things you can do to enhance the value of a lecture is to review the relevant parts of the textbook before the lecture. Then your notes, instead of becoming yet another copy of information you paid for when you bought the book, can be an adjunct set of insights and commentary that will help you when it comes time to study on your own.
Finally, remember that your success is your instructor’s success too. He or she wants you to achieve your goals. So develop a rapport with the instructor, letting him or her know when you are feeling lost and requesting help. Don’t wait until after the lecture—raise your hand or your voice the minute the instructor begins to discuss an idea or procedure that you are unable to follow. Use any help labs or office hours that are available. If you are determined to succeed and your instructor knows it, then he or she will be just as determined to help you.
#### Self-Study and Homework
There you are, just you and the textbook and maybe some lecture notes, alone in the glare of your desk lamp. It’s a tense moment. Like most students, you turn to the exercises and see what happens. Pretty soon you are slogging away, turning frequently to the solutions in the back of the book to check whether you have a clue. If you’re lucky, it goes mostly smoothly, and you mark the problems that won’t come right so that you can ask about them in class. If you’re not so lucky, you get bogged down, stuck on this problem or that, while the hours slide by like agonized glaciers, and you miss your favorite TV show, and you think of all the homework for your other classes that you haven’t got to yet, and you begin to visualize burning your textbook…except that the stupid thing cost you 80 bucks….
Let’s start over.
Many instructors (but not all) encourage their students to work together on homework problems. Modern learning theories emphasize the value of doing this, and I find that students who collaborate can develop a synergy among themselves which supports their learning, helping them to learn more, more quickly, and more lastingly. Find out how your instructor feels about this, and if it is permitted find others in class who are interested in studying together. You will still want to put in plenty of time for self-study, but a couple of hours a week spent studying with others may be very valuable to you.
#### Working Problems
Most problem sets are designed so that the first few problems are rote, and look just like the examples in the book. Gradually, they begin to stretch you a bit, testing your comprehension and your ability to synthesize ideas. Take them one at a time. If you get completely stuck on one, skip it for now. But come back to it. Give yourself time, for your subconscious mind will gradually formulate ideas about how to work the exercise, and it will present these notions to your conscious mind when it is ready.
As an experienced math instructor, it is my sad duty to report that about a third of the students in any given class, on any given assignment, will look the exercises over, and conclude that they don’t know how to do them. They then tell themselves, “I can’t do something I don’t understand,” and close the book. Consequence: no homework gets done.
About another third will look the exercises over, decide that they pretty much get it, and tell themselves, “I don’t need to do the homework, because I already understand it,” and close the book. Consequence: no homework gets done.
I keep the subject constantly before me and wait till the first dawnings open little by little into the full light.
—Isaac Newton
Don’t let this be you. If you’ve pretty much already got it, great. Now turn to the hard exercises (whether they were assigned or not), and test how thorough your understanding really is. If you are unable to do them with ease, then you need to go back to the more routine exercises and work on your skills. On the other hand, if you feel you cannot do the homework because you don’t understand it, then go back in the textbook to where you do understand, and work forward from there. Pick the easiest exercises, and work at them. Compare them to the examples. Work through the examples. Try doing the exercises the same way the examples were done. In short, work at it. You will learn mathematics this way—and in no other way.
#### Story Problems
Everybody complains about story problems, sometimes even the instructor. One is tempted to feel that math is hard enough without some sadist turning it into wordy, dense, hard-to-understand story problems. But again, ask yourself: “Why am I studying math? Is it so that I'll always know how to factor a quadratic equation?” Hardly. The study of math is meant to give you power over the real world. And the real world doesn’t present you with textbook equations, it presents you with story problems. Your boss doesn’t tell you to solve for x, he tells you, “We need a new supplier for flapdoodles. Bob’s Flapdoodle Emporium wholesales them at $129 per gross, but charges $1.25 per ton per mile for shipping. Sally’s Flapdoodle Express wholesales them at $143 per gross, but ships at a flat rate of $85 per ton. Figure out how each of these will impact our marginal cost, and report to me this afternoon.”
The real world. Personally, I love story problems—because if you can work a story problem, you know you really understand the math. It helps to have a strategy, so you might want to check out the Solving Story Problems article in the Platonic Realms Encyclopedia sometime soon.
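To make the point concrete, here is a small sketch (not from the original article) of how one might translate the flapdoodle problem into mathematics. Note that the problem as stated gives neither the shipping distance nor the weight of a gross of flapdoodles, so those are left as parameters with purely hypothetical values:

```python
# A hedged sketch of the "flapdoodle" story problem above.
# The wholesale prices and shipping rates come from the text; the
# shipping distance and weight per gross are NOT given in the problem,
# so they appear here as parameters with purely hypothetical values.

def cost_per_gross_bob(miles, tons_per_gross):
    """Bob's: $129 per gross wholesale, plus $1.25 per ton per mile shipping."""
    return 129 + 1.25 * tons_per_gross * miles

def cost_per_gross_sally(tons_per_gross):
    """Sally's: $143 per gross wholesale, plus a flat $85 per ton shipping."""
    return 143 + 85 * tons_per_gross

# With hypothetical figures -- half a ton per gross, shipped 100 miles:
bob = cost_per_gross_bob(100, 0.5)    # 129 + 1.25 * 0.5 * 100 = 191.50
sally = cost_per_gross_sally(0.5)     # 143 + 85 * 0.5 = 185.50
# Under these assumptions Sally's is cheaper; for short enough hauls,
# Bob's lower wholesale price wins instead.
```

The whole exercise of a story problem is in those two function bodies: turning sentences about suppliers and shipping into cost expressions you can actually compare.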
#### Exams
For many students, this is the very crucible of math anxiety. Math exams represent a do-or-die challenge that can inflame all one’s doubts and frustrations. It is frankly not possible to eliminate all the anxiety you may feel about exams, but here are some techniques and strategies that will dramatically improve your test-taking experience.
Don’t cram. The brain is in many ways just like a muscle. It must be exercised regularly to be strong, and if you place too much stress on it then it won’t function at its peak until it has had time to rest and recover. You wouldn’t prepare for a big race by staying up and running all night. Instead, you would probably do a light work-out, permit yourself some recreation such as seeing a movie or reading a book, and turn in early. The same principle applies here. If you have been studying regularly, you already know what you need to know, and if you have put off studying until now it is too late to do much about it. There is nothing you will gain in the few hours before the exam, desperately trying to absorb the material, that will make up for not being fresh and alert at exam time.
On exam day, have breakfast. The brain consumes a surprisingly large number of calories, and if you haven’t made available the nutrients it needs it will not work at full capacity. Get up early enough so that you can eat a proper meal (but not a huge one) at least two hours before the exam. This will ensure that your stomach has finished with the meal before your brain makes a demand on the blood supply.
When you get the exam, look it over thoroughly. Read each question, noting whether it has several parts and its overall weight in the exam. Begin working only after you have read every question. This way you will always have a sense of the exam as a whole. (Remember to look on the backs of pages.) If there are some questions that you feel you know immediately how to do, then do these first. (Some students have told me they save the easiest ones for last because they are sure they can do them. This is a mistake. Save the hardest ones for last.)
It is extremely common to get the exam, look at the questions, and feel that you can’t work a single problem. Panic sets in. You see everyone else working, and become certain you are doomed. Some students will sit for an hour in this condition, ashamed to turn in a blank exam and leave early, but unable to calm down and begin thinking about the questions. This initial panic is so common (believe it or not, most of the other students taking the exam are having the same experience), that it’s just as well to assume ahead of time that this is what is going to happen. This gives you the same advantage as when the dentist alerts you that “this may hurt a little.” Since you've been warned, there's far less tendency to have an uncontrollable panic reaction when it happens.
So say to yourself, “Well, I may as well relax because I expected this.” Take a deep breath, let it out slowly. Do this a couple of times. Look for the question on the exam that most resembles what you know how to do, and begin poking it and prodding it and thinking about it to see what it is made of. Don’t bother about the other students in the room—they’ve got their own problems. Before long your brain (remember, it’s a muscle) will begin to unclench a bit, and some things will occur to you. You’re on your way.
Math exams are usually timed—but remember, it’s not a race! You don’t want to dally, but don’t rush yourself either. Work efficiently, being methodical and complete in your solutions. Box, circle, or underline your answers where appropriate. If you don’t take time to make your work neat and ordered, then not only will the grader have trouble understanding what you’ve done, but you can actually confuse yourself—with disastrous results. If you get stuck on a problem, don’t entangle yourself with it to the detriment of your overall score. After a few minutes, move on to the rest of the exam and come back to this one if you have time. And regardless of whether you have answered every question, give yourself at least two or three minutes at the end of the exam period to review your answers. The “oops” mistakes you find this way will surprise you, and fixing them is worth more to your score than trying to bang out something for that last, troublesome question.
In math, having the right answer is nice—but it doesn’t pay the bills. SHOW YOUR WORK.
Finally, place things in perspective. Fear of the exam will make it seem like a much bigger deal than it really is, so remind yourself what it does not represent. It is not a test of your overall intelligence, of your worth as a person, or of your prospects for success in life. Your future happiness will not be determined by it. It is only a math test—it tests nothing about you except whether you understand certain concepts and possess the skills to implement them. You can’t demonstrate your understanding and skills to their best advantage if you panic by making more of it than it is.
When you get the exam back, don’t bury it or burn it or treat it like it doesn’t exist—use it. Discover your mistakes and understand them thoroughly. After all, if you don’t learn from your mistakes, you are likely to make them again.
#### * * * * *
Math anxiety affects all of us at one time or another, but for all of us it is a barrier we can overcome. In this article we have examined the social and educational roots of math anxiety, some common math myths associated with it, and several techniques and strategies for managing it. Other things could be said, and other strategies are available which may help you with your own struggle with math. Talk to your instructor and to other students. With determination and a positive outlook—and a little help—you will accomplish things you once thought impossible.
The harmony of the world is made manifest
in Form and Number, and the heart and soul
and all the poetry of Natural Philosophy are
embodied in the concept of mathematical beauty.
—D’Arcy Wentworth Thompson
Contributors
• Wendy Hageman Smith, author
• B. Sidney Smith, author
Citation Info
• [MLA] Hageman Smith, Wendy, B. Sidney Smith. "Coping With Math Anxiety." Platonic Realms Interactive Mathematics Encyclopedia. Platonic Realms, 14 Feb 2014. Web. 14 Feb 2014. <http://platonicrealms.com/>
• [APA] Hageman Smith, Wendy, B. Sidney Smith (14 Feb 2014). Coping With Math Anxiety. Retrieved 14 Feb 2014 from Platonic Realms Minitexts: http://platonicrealms.com/minitexts/Coping-With-Math-Anxiety/
https://mathematica.stackexchange.com/questions/135302/how-to-plot-the-stable-and-unstable-manifolds-of-a-hyperbolic-fixed-point-of-a-n | # How to plot the stable and unstable manifolds of a hyperbolic fixed point of a nonlinear system of differential equations?
Suppose we have the following simplified system of two ordinary differential equations:
$$\dot{x}(t)=x(t)^2+2y(t)\\ \dot{y}(t)=3x(t)$$
The system has a hyperbolic fixed point at the origin. Hence there exists a stable and an unstable manifold going through the origin.
I would like to plot these exact sets. I am able to plot basins of attraction for regions in phase space but I fail to do so once the attracting region is a one-dimensional line (as is the case here with the stable manifold).
Edit: With MMM's help I was able to roughly plot the unstable manifold. However, what is more challenging is to plot the stable one.
Eq1 = x'[t] == x[t]^2 + 2*y[t];
Eq2 = y'[t] == 3*x[t];
splot = StreamPlot[{x^2 + 2*y, 3*x}, {x, -6, 6}, {y, -6, 6}];
Show[splot,
ParametricPlot[
Evaluate[First[{x[t], y[t]} /.
NDSolve[{Eq1, Eq2, Thread[{x[0], y[0]} == {-0.01, 0}]}, {x,
y}, {t, 0, 3}]]], {t, 0, 3}, PlotStyle -> Red],
ParametricPlot[
Evaluate[First[{x[t], y[t]} /.
NDSolve[{Eq1, Eq2, Thread[{x[0], y[0]} == {0.01, 0}]}, {x,
y}, {t, 0, 2.4}]]], {t, 0, 2.4}, PlotStyle -> Red]]
• Share your try in Mathematica. – zhk Jan 13 '17 at 10:15
The trick to getting the stable manifold is to run backwards in time. Here I added two more solutions starting near the equilibrium, but with negative t:
Eq1 = x'[t] == x[t]^2 + 2*y[t];
Eq2 = y'[t] == 3*x[t];
splot = StreamPlot[{x^2 + 2*y, 3*x}, {x, -6, 6}, {y, -6, 6}];
Show[splot,
ParametricPlot[Evaluate[First[{x[t], y[t]} /.
NDSolve[{Eq1, Eq2, Thread[{x[0], y[0]} == {-0.01, 0}]}, {x, y}, {t, 0, 4}]]],
{t, 0, 4}, PlotStyle -> Red, PlotRange -> {{-6, 6}, {-6, 6}}],
ParametricPlot[Evaluate[First[{x[t], y[t]} /.
NDSolve[{Eq1, Eq2, Thread[{x[0], y[0]} == {0.01, 0}]}, {x, y}, {t, 0, 2.6}]]],
{t, 0, 2.6}, PlotStyle -> Red, PlotRange -> {{-6, 6}, {-6, 6}}],
ParametricPlot[Evaluate[First[{x[t], y[t]} /.
NDSolve[{Eq1, Eq2, Thread[{x[0], y[0]} == {-0.01, 0}]}, {x, y}, {t, 0, -2.6}]]],
{t, 0, -2.6}, PlotStyle -> Green, PlotRange -> {{-6, 6}, {-6, 6}}],
ParametricPlot[Evaluate[First[{x[t], y[t]} /.
NDSolve[{Eq1, Eq2, Thread[{x[0], y[0]} == {0.01, 0}]}, {x, y}, {t, 0, -4}]]],
{t, 0, -4}, PlotStyle -> Green, PlotRange -> {{-6, 6}, {-6, 6}}]
]
• Thank you, this is exactly what I wanted. – ParetoWilli Jan 13 '17 at 13:26
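For readers working outside Mathematica, the same recipe can be cross-checked in Python: seed just off the saddle along the eigenvectors of the Jacobian, then integrate forward in time for the unstable manifold and backward in time for the stable one. This is only a sketch (the step size, seed offset, and integration length are arbitrary choices):

```python
import numpy as np

def f(s):
    # Vector field of x' = x^2 + 2y, y' = 3x
    x, y = s
    return np.array([x**2 + 2.0 * y, 3.0 * x])

def rk4(s, h, n):
    # Fixed-step RK4; a negative step h integrates backwards in time
    out = [s]
    for _ in range(n):
        k1 = f(s); k2 = f(s + h/2 * k1); k3 = f(s + h/2 * k2); k4 = f(s + h * k3)
        s = s + h/6 * (k1 + 2*k2 + 2*k3 + k4)
        out.append(s)
    return np.array(out)

# Jacobian at the saddle (0,0) is [[0, 2], [3, 0]]: eigenvalues ±sqrt(6)
evals, evecs = np.linalg.eig(np.array([[0.0, 2.0], [3.0, 0.0]]))

branches = []
for lam, v in zip(evals, evecs.T):
    for side in (1.0, -1.0):
        h = 0.01 * np.sign(lam)   # forward for unstable, backward for stable
        branches.append(rk4(1e-4 * side * v, h, 300))
# Each of the four branches traces half of one manifold away from the origin.
```

Plotting the four branches (e.g. with matplotlib) reproduces the red/green separatrices from the answers above.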
Here's one way to get StreamPlot to show the desired manifolds and fill in the rest of the plot around them.
We can use the eigenvectors of the Jacobian at the equilibrium to approximate a point on each manifold. From such a point, another point further away from the equilibrium along the manifold may be constructed with NDSolve. Using these for StreamPoints allows StreamPlot to fill in the rest of the phase portrait.
(* some useful quantities/expression for playing *)
sys = {x'[t] == x[t]^2 + 2*y[t], y'[t] == 3*x[t]};
vars = {x, y};
dvars = D[Through[vars[t]], t]; (* {x'[t], y'[t]} *)
vel = dvars /. First@Solve[sys, dvars]; (* expressions for {x'[t], y'[t]} *)
equil = First@Solve[sys /. Thread[dvars -> 0]]; (* equilibrium *)
phasefield = vel /. var_[t] :> var; (* strip [t] from expressions *)
linearized = D[vel, {Through[vars[t]]}] /. equil; (* its Eigensystem[] describes equil. *)
(* compute stream points for the separatrix manifolds *)
sp = Function[{λ, (* eigenvalue *)
v, (* eigenvector *)
side}, (* ±1 = which side of the equilibrium: ± v *)
Module[{tfinal},
{NDSolveValue[{sys, Through[vars[0]] == 10^-4 v*side,
WhenEvent[Norm[#] == 1, tfinal = t; "StopIntegration"] &@
Through[vars[t]]},
Through[vars[tfinal]], {t, 0, 100 Sign@λ}],
If[λ < 0, Green, Red]}
]
] @@@ Append @@@ Tuples[{Thread@Eigensystem@linearized, {-1, 1}}]
(*
{{{ 0.599966, -0.800025}, RGBColor[0, 1, 0]},
{{-0.664941, 0.746896}, RGBColor[0, 1, 0]},
{{-0.599966, -0.800025}, RGBColor[1, 0, 0]},
{{ 0.664941, 0.746896}, RGBColor[1, 0, 0]}}
*)
splot = StreamPlot[phasefield, {x, -6, 6}, {y, -6, 6},
StreamPoints -> {Append[sp, Automatic]}]
• Nice! I think there may be a typo or two in the code, though...it's not working for me. – Simon Rochester Jan 15 '17 at 6:01
• @SimonRochester Thanks for letting me know. I had tried to simplify a couple of things and forgot to change all references to old variables. – Michael E2 Jan 15 '17 at 13:29
Your question is not clear, but here is a starting point for you.
Eq1 = x'[t] == x[t]^2 + 2*y[t];
Eq2 = y'[t] == 3*x[t];
splot = StreamPlot[{x^2 + 2*y, 3*x}, {x, -6, 6}, {y, -6, 6},
StreamColorFunction -> "Rainbow"]
Manipulate[
Show[splot,
ParametricPlot[
Evaluate[
First[{x[t], y[t]} /.
NDSolve[{Eq1, Eq2, Thread[{x[0], y[0]} == point]}, {x, y}, {t,
0, T}]]], {t, 0, T}, PlotStyle -> Red]], {{T, 1}, 0.1,
1}, {{point, {0.01, 0.0}}, Locator}, SaveDefinitions -> True]
• more specifically: I want to plot the two (unique) trajectories that go through the saddle at the origin. One of them is the stable manifold, the other one is unstable. By studying the vector field we can get a good idea of what they look like, but I want to plot them. – ParetoWilli Jan 13 '17 at 11:27
• @ParetoWilli I think you want something exactly like this. mathematica.stackexchange.com/questions/80284/… – zhk Jan 13 '17 at 12:43
http://mathhelpforum.com/algebra/124361-solved-use-graphical-methods-solve-linear-programming-problem.html | # Math Help - [SOLVED] Use graphical methods to solve a linear programming problem
1. ## [SOLVED] Use graphical methods to solve a linear programming problem
NVM
2. Originally Posted by majax79
A math camp wants to hire counselors and aides to fill its staffing needs at minimum cost. The average monthly salary of a counselor is $2400 and the average monthly salary of an aide is $1100. The camp can accommodate up
to 45 staff members and needs at least 30 to run properly. They must have at least 10 aides, and may have up to 3 aides for every 2 counselors. How many counselors and how many aides should the camp hire to minimize
cost?
a. Define variables.
b. Write a system of linear inequalities to model this problem.
To a):
s := monthly salary
c := number of counselors
a := number of aides
To b):

$\left|\begin{array}{l}a+c \ge 30 \\
a+c \le 45 \\
a \ge 10 \\
2a \le 3c\end{array} \right.
$

("Up to 3 aides for every 2 counselors" means $a \le \frac{3}{2}c$, i.e. $2a \le 3c$.)

Solve these inequalities for c and draw the graphs. Determine the feasible polygon. Use the vertex which yields the lowest parallel of the line describing the salary.

and:

$s = 2400 \cdot c + 1100 \cdot a$

The minimum is at the vertex where $a+c=30$ meets $2a=3c$: 12 counselors and 18 aides give the lowest monthly cost, $48,600.
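The graphical method can be cross-checked numerically by enumerating the vertices of the feasible polygon. Here is a sketch in Python/NumPy; note that it encodes "up to 3 aides for every 2 counselors" as $a \le \frac{3}{2}c$:

```python
import itertools
import numpy as np

# Variables x = (counselors, aides); constraints written as A @ x <= b
A = np.array([[-1.0, -1.0],   # c + a >= 30  (at least 30 staff)
              [ 1.0,  1.0],   # c + a <= 45  (at most 45 staff)
              [ 0.0, -1.0],   # a >= 10      (at least 10 aides)
              [-3.0,  2.0],   # 2a <= 3c     (up to 3 aides per 2 counselors)
              [-1.0,  0.0]])  # c >= 0
b = np.array([-30.0, 45.0, -10.0, 0.0, 0.0])
cost = np.array([2400.0, 1100.0])   # monthly salaries

best = None
for i, j in itertools.combinations(range(len(A)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-9:
        continue                        # parallel boundary lines: no vertex
    x = np.linalg.solve(M, b[[i, j]])
    if np.all(A @ x <= b + 1e-9):       # keep only feasible vertices
        val = cost @ x
        if best is None or val < best[0]:
            best = (val, x)

print(best)   # minimum cost vertex: 12 counselors, 18 aides
```

Evaluating the cost at every feasible vertex is exactly what the "lowest parallel of the salary line" argument does geometrically.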
Attached Thumbnails
3. Originally Posted by majax79
Anyone?
I'm just curious: What have you done during the last 3 hours? (besides typing "Anyone?")
4. I've been working on this problem...oh and I masturbated too.
5. Originally Posted by majax79
oh and I masturbated too.
https://zenodo.org/record/5596261/export/csl | Journal article Open Access
# Assessment of Thermal Performance of Non-Conventional Grooved Stepped Shoe Ribs by CFD Technique
Sameer Y. Bhosale; G. R. Selokar
### Citation Style Language JSON Export
{
"DOI": "10.35940/ijeat.B3419.129219",
"container_title": "International Journal of Engineering and Advanced Technology (IJEAT)",
"language": "eng",
"title": "Assessment of Thermal Performance of Non-Conventional Grooved Stepped Shoe Ribs by CFD Technique",
"issued": {
"date-parts": [
[
2019,
12,
30
]
]
},
"abstract": "<p>In improvement of the thermal performance there is necessity of the heat transfer augmentation. Heat transfer enhancement can be achieved with enlarged or extended surface, impeded boundary level, augmentation in the turbulence etc. It is desired to keep the size of heat exchanger compact for better working conditions. In the proposed work, we made the Computational Fluid Dynamics (CFD) analysis of the non-conventional type of ribs. In this work the non-conventional Stepped grooved shoe shaped ribs were studied by changing its geometry parameters like rib height (15, 20,22mm), thickness of the rib (4, 5,10 mm), and the ratio between these entities. The numerical analysis was done to study change in rate of heat transfer and pressure drop. The effects of variation in staggered arrangements and truncation gap on thermal performance were also studied. It was observed that providing staggered arrangement with truncation gap of 20 mm gives the optimum value of thermal enhancement factor of 1.33</p>",
"author": [
{
"family": "Sameer Y. Bhosale"
},
{
"family": "G. R. Selokar"
}
],
"page": "75-82",
"volume": "9",
"type": "article-journal",
"issue": "2",
"id": "5596261"
}
https://latex.org/forum/viewtopic.php?f=51&t=7006&p=27316 | ## LaTeX forum ⇒ MakeIndex, Nomenclature, Glossaries and Acronyms ⇒ index pages aren't added to the total page number...
Information and discussion about MakeIndex - the tool to generate subject indices for LaTeX documents.
vivelafete
Posts: 11
Joined: Fri Nov 27, 2009 12:38 pm
### index pages aren't added to the total page number...
my .tex file has the following structure:
\documentclass[a4paper, twoside]{scrreprt}
\usepackage{makeidx}
...
\usepackage{lastpage}
\label{LastPage}
\makeindex
...
\begin{document}
...
\chapter{Main Page}
\index{Main Page}
...
\printindex
\end{document}
In the .sty I defined the header to print "page X of <total page number>":

\thepage \hspace{0.5mm} of \hspace{0.2mm} \pageref{LastPage}

Now I get "page 23 of 22" on the last page, which doesn't make a lot of sense (23 of 23 would be better).

If I comment out \index{Main Page}, no index appears and the page numbers are correct.

So my question: can I manage the last page myself?

Why aren't the index pages added to the total page number?

What the hell am I doing wrong?

I would very much appreciate any help or hints.
Thanks
localghost
Site Moderator
Posts: 9204
Joined: Fri Feb 02, 2007 12:06 pm
That's quite normal if you set the label for the last page manually. The lastpage package places that label automatically at the very end of the document, so omit the corresponding line from your preamble.
Best regards
Thorsten
LaTeX Community Moderator
¹ System: openSUSE 42.2 (Linux 4.4.52), TeX Live 2016 (vanilla), TeXworks 0.6.1
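For reference, a minimal working setup along these lines might look as follows. This is only a sketch: it uses fancyhdr for the header, whereas the original defines the header in a custom .sty; the key point is that no manual \label{LastPage} appears anywhere.

```latex
\documentclass[a4paper, twoside]{scrreprt}
\usepackage{makeidx}
\usepackage{lastpage}  % defines the LastPage label automatically at end of document
\usepackage{fancyhdr}
\pagestyle{fancy}
\fancyhead[R]{\thepage\ of \pageref{LastPage}}
\makeindex
\begin{document}
\chapter{Main Page}\index{Main Page}
Some text.
\printindex
\end{document}
```

Run latex, then makeindex, then latex twice more, so the index pages are counted into \pageref{LastPage}.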
vivelafete
Posts: 11
Joined: Fri Nov 27, 2009 12:38 pm
Thanks!
I didn't realize that \label{LastPage} itself sets the mark for the last page.
I had defined it before \begin{document}.
It's a little bit strange that it worked until I used the index, isn't it?
I searched for a long time in the wrong places, and thanks to you I solved it in 5 minutes.
So thank you very much!
christian
http://www.embeddedrelated.com/showarticle/540.php | Search tips
# Embedded Systems Blogs > Jason Sachs > March is Oscilloscope Month — and at Tim Scale!
Jason Sachs (contact)
Jason has 17 years of experience in signal conditioning (both analog + digital) in motion control + medical applications. He likes making things spin.
# March is Oscilloscope Month — and at Tim Scale!
Posted by Jason Sachs on Mar 6 2014 under Test Equipment | Measurement
I got my oscilloscope today.
Maybe that was a bit of an understatement; I'll have to resort to gratuitous typography:
### I GOT MY OSCILLOSCOPETODAY!!!!
Those of you who are reading this blog may remember I made a post about two years ago about searching for the right oscilloscope for me. Since then, I changed jobs and have been getting situated in the world of applications engineering, working on motor control projects. I've been gradually working to fill in gaps in the infrastructure available to me and my coworkers, and one of those has been an oscilloscope I can use in my office. Our group's budget recently allowed us to fill in some of those gaps. So:
I got my oscilloscope today! And it's even the one I asked for!
No, wait — it's better than the one I asked for! Here's why:
The oscilloscope I asked for was an MSOX3024A. When I got a quote from a distributor, they said there was a promotion from Agilent, and they could offer me a lower price. Agilent has a deal ("Supercharge Your Bandwidth!") that runs until March 31. Most oscilloscope models come in series, with different bandwidth options. You want a less expensive scope? Pick the lowest bandwidth. You want something with a high bandwidth? It'll cost you. So the Agilent deal offers you the scope you want at the price of the next lower bandwidth. I would get a MSOX3024A (200MHz) at the normal price of an MSOX3014A (100MHz). Net savings: about $350. Yay! Quote obtained, forwarded on to manager, no problem, back to work.

And then I got this funny nagging hunch in the back of my mind. MSOX3024A at the normal price of the MSOX3014A... hmmm... hmmm... what about the MSOX3034A at the normal price of the MSOX3024A? I checked Agilent's list prices for the MSOX3000 series:

| Model     | Bandwidth | List price |
|-----------|-----------|------------|
| MSOX3014A | 100MHz    | $5199      |
| MSOX3024A | 200MHz    | $5556      |
| MSOX3034A | 350MHz    | $9169      |
| MSOX3054A | 500MHz    | $11993     |
| MSOX3104A | 1GHz      | $15819     |
Goodness, I could save $3500! I called the distributor back and asked if the promotion applied to the MSOX3034A. Yep. Some quick discussions and pleading with my manager and we got the order in for the MSOX3034A. Woot!
So I got my oscilloscope today. (Did I mention that yet?) First action of the day: put the probes on the scope along with the little color coded bands to help you keep the channels from getting mixed up, and compensate the scope probes.
What? You've never compensated an oscilloscope probe before? It's easy. You connect the probe to the calibration signal post on the front panel. It's a precision square wave. Then you zoom up the vertical scale so you can see the transients.

Here's the deal: The scope comes with BNC connectors, and most scopes these days allow you to configure the input impedance with one of two choices. If you are doing high frequency measurements, they're typically 50Ω characteristic impedance to avoid reflections. If you are doing lower frequency measurements on high-impedance nodes of a circuit, you don't want the oscilloscope loading down the input, so you pick the other option, which is usually 1MΩ input impedance. But that's not good enough for most applications, and usually you want a probe with a voltage divider so you can get the input impedance higher, and have a higher acceptable input voltage range. Typical scope probes are 10x probes, which means they have a 10:1 voltage divider inside, and are therefore 10MΩ input impedance.
Now, both the probe tip and the oscilloscope itself have parasitic input capacitance. If these aren't matched with the resistor divider, the probe will give you a transfer function that is frequency-dependent, and the waveforms you see on the scope will be distorted. So oscilloscope probe manufacturers usually put a variable capacitor in the probe, either near the probe tip or near the BNC end of the probe, so they can match the input network and make it close to frequency independent. If you connect the probe to a square wave, you can tune the variable capacitor so that the square wave has edges that are square, and not rounded off or with an overshoot. The variable capacitor has a slotted adjustment screw, and usually you want to use a nonconducting nonferrous screwdriver to prevent the signal from being altered while you're turning the screw. Scopes usually come with a little plastic mini-screwdriver you can use to adjust the capacitor screw.
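The matching condition can be checked numerically. In the sketch below (hypothetical component values: a 9 MΩ probe resistance against a 1 MΩ scope input with 15 pF of scope-plus-cable capacitance), the divider's gain is flat across frequency exactly when R1·C1 = R2·C2, which is what turning the trimmer capacitor accomplishes:

```python
import numpy as np

def divider_gain(f, R1, C1, R2, C2):
    # |H| of the compensated 10x divider: Z2 / (Z1 + Z2),
    # where each Zi is a resistor Ri in parallel with a capacitor Ci
    s = 2j * np.pi * f
    Z1 = R1 / (1 + s * R1 * C1)
    Z2 = R2 / (1 + s * R2 * C2)
    return np.abs(Z2 / (Z1 + Z2))

R1, R2, C2 = 9e6, 1e6, 15e-12
f = np.logspace(2, 7, 5)                           # 100 Hz .. 10 MHz

tuned = divider_gain(f, R1, C2 * R2 / R1, R2, C2)  # R1*C1 == R2*C2
# tuned is 0.1 (an exact 10:1 division) at every frequency

undercomp = divider_gain(f, R1, 1e-12, R2, C2)
# with too little probe capacitance, the gain rolls off at high frequency
```

The undercompensated case is exactly the rounded-off square-wave edge you see before adjusting the trimmer.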
## Waveform data transfer
I talked a little bit about waveform data transfer in my 2012 article. Gone are the days of DB9 RS232 connections (get out your null modems and gender changers!) and floppy disk drives. Now everything's either thumb drive or USB or Ethernet. The MSOX3000 series offers all three, but only the USB host (for thumb drive) and USB device connectors are built-in to the oscilloscope; Ethernet connector ("LAN port") costs extra.
But at least the basic software is free. Agilent offers free I/O libraries which include device drivers for USB and Virtual Instrument Software Architecture (VISA) libraries.
As far as application software: yeah, you can buy some software from Agilent, but I learned long ago that most software from oscilloscope manufacturers isn't very good. (WaveStar ring a bell for anyone out there with a Tektronix scope?) And I'm a big fan of Python, if you haven't figured that out already from previous columns. So of course I looked around for a Python program to download the Agilent waveforms.
I quickly found an IPython notebook on interfacing to Agilent oscilloscopes using their VISA libraries and the pyvisa Python library. Cool!
It took me a while to get things running, but I was able to connect to my oscilloscope via USB and interact with it in IPython. A couple of stumbling blocks:
• pyvisa requires a .pyvisarc file to point to the visa32.dll file you install with Agilent's IO. The pyvisa version 1.4 needs you to do this manually. Version 1.5 (still in development) is supposed to locate this file automatically.
• pyvisa is a low-level library which only facilitates the communication with a scope. You still have to use the required ASCII command/response protocol given in Agilent's programming guide, which looks like this:
scope = instrument("TCPIP0::130.30.240.155::inst0::INSTR")  # Connect to the scope using the VISA address (see below)
scope.ask("*IDN?")                                          # Query the IDN properties of the scope
sa_rate = float(scope.ask(":ACQ:SRAT:ANAL?"))               # Get the scope's sample rate
This just screams for someone to write a higher-level library to interface with this series of oscilloscopes. I'll probably end up writing one that is mediocre and not something I can share outside my company. It's really something that Agilent should provide: they don't seem to realize that there is a whole scientific community out there which likes using Python rather than Visual Basic or C#.
In any case, I haven't finished downloading waveforms yet, but one cool thing is that you can set the time scale of the oscilloscope with the :TIMe:SCALE command:
scope.write(":TIM:SCALE 10e-6")
Shazaam! Your scope is now set to 10 μs/division. But that's not all! Let's say you want exactly 3.21 μs/division. You can do that with these scopes, but it's a pain to do: you need to fiddle with the coarse and fine time adjustment knobs. Or:
scope.write(":TIM:SCALE 3.21e-6")
Pow! It just works!
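As a taste of what such a higher-level wrapper could look like, here is a hypothetical sketch (this is not Agilent's API; `session` is any object with the `ask`/`write` methods that pyvisa's `instrument()` returns):

```python
class Scope(object):
    """Pythonic facade over a raw VISA session (hypothetical sketch)."""

    def __init__(self, session):
        self._session = session

    @property
    def idn(self):
        return self._session.ask("*IDN?")

    @property
    def timebase(self):
        """Horizontal scale in seconds/division."""
        return float(self._session.ask(":TIM:SCALE?"))

    @timebase.setter
    def timebase(self, seconds_per_div):
        self._session.write(":TIM:SCALE %g" % seconds_per_div)
```

With something like this in place, `scope.timebase = 3.21e-6` replaces the raw SCPI string shown above, and the SCPI vocabulary stays in one file instead of being scattered through your scripts.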
Anyway, that's what I've been up to so far. If I run into something else cool, I'll share it in a future article.
Thanks to Microlease for giving us a good price on these scopes. If you're buying stuff from Agilent, give them a holler — don't just settle for the Agilent list prices.
Update on Mar 6:
I stand corrected: Agilent does include some sample Python code to interface with the MSOX3000 oscilloscopes. It's on p. 1221 of the Programming Guide, and uses pyvisa — well, an earlier version of it, at least. (By the way, those weird :TIM:SCALE commands are called SCPI)
But it's not very "pythonic"; it looks like a Visual Basic programmer was told "You! Translate this into Python! We need Python sample code!" There are global variables and redundant function calls and then there's a sys.exit() call in the middle of a function. Sigh. (Someone should teach this guy about the raise keyword.) But at least they tell you how to decode waveform data:
# Download waveform data.
# --------------------------------------------------------
# Set the waveform points mode.
do_command(":WAVeform:POINts:MODE RAW")
qresult = do_query_string(":WAVeform:POINts:MODE?")
print "Waveform points mode: %s" % qresult
# Get the number of waveform points available.
do_command(":WAVeform:POINts 10240")
qresult = do_query_string(":WAVeform:POINts?")
print "Waveform points available: %s" % qresult
# Set the waveform source.
do_command(":WAVeform:SOURce CHANnel1")
qresult = do_query_string(":WAVeform:SOURce?")
print "Waveform source: %s" % qresult
# Choose the format of the data returned:
do_command(":WAVeform:FORMat BYTE")
print "Waveform format: %s" % do_query_string(":WAVeform:FORMat?")
# Display the waveform settings from preamble:
wav_form_dict = {
0 : "BYTE",
1 : "WORD",
4 : "ASCii",
}
acq_type_dict = {
0 : "NORMal",
1 : "PEAK",
2 : "AVERage",
3 : "HRESolution",
}
preamble_string = do_query_string(":WAVeform:PREamble?")
(
wav_form, acq_type, wfmpts, avgcnt, x_increment, x_origin,
x_reference, y_increment, y_origin, y_reference
) = string.split(preamble_string, ",")
print "Waveform format: %s" % wav_form_dict[int(wav_form)]
print "Acquire type: %s" % acq_type_dict[int(acq_type)]
print "Waveform points desired: %s" % wfmpts
print "Waveform average count: %s" % avgcnt
print "Waveform X increment: %s" % x_increment
print "Waveform X origin: %s" % x_origin
print "Waveform X reference: %s" % x_reference # Always 0.
print "Waveform Y increment: %s" % y_increment
print "Waveform Y origin: %s" % y_origin
print "Waveform Y reference: %s" % y_reference
# Get numeric values for later calculations.
x_increment = do_query_values(":WAVeform:XINCrement?")[0]
x_origin = do_query_values(":WAVeform:XORigin?")[0]
y_increment = do_query_values(":WAVeform:YINCrement?")[0]
y_origin = do_query_values(":WAVeform:YORigin?")[0]
y_reference = do_query_values(":WAVeform:YREFerence?")[0]
# Get the waveform data.
sData = do_query_string(":WAVeform:DATA?")
sData = get_definite_length_block_data(sData)
# Unpack unsigned byte data.
values = struct.unpack("%dB" % len(sData), sData)
print "Number of data values: %d" % len(values)
# Save waveform data values to CSV file.
f = open("waveform_data.csv", "w")
for i in xrange(0, len(values) - 1):
    time_val = x_origin + (i * x_increment)
    voltage = ((values[i] - y_reference) * y_increment) + y_origin
    f.write("%E, %f\n" % (time_val, voltage))
f.close()
print "Waveform format BYTE data written to waveform_data.csv."
# =========================================================
# Returns data from definite-length block.
# =========================================================
def get_definite_length_block_data(sBlock):
    # First character should be "#".
    pound = sBlock[0:1]
    if pound != "#":
        print "PROBLEM: Invalid binary block format, pound char is '%s'." % pound
        print "Exited because of problem."
        sys.exit(1)
    # Second character is number of following digits for length value.
    digits = sBlock[1:2]
    # Get the data out of the block and return it.
    sData = sBlock[int(digits) + 2:]
    return sData
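The CSV loop above is a natural candidate for NumPy. A vectorized equivalent of the same preamble scaling (a sketch using the same variable names as Agilent's sample):

```python
import numpy as np

def scale_waveform(raw, x_origin, x_increment, y_origin, y_increment, y_reference):
    # Same arithmetic as the loop above, applied to the whole record at once
    raw = np.asarray(raw, dtype=float)
    time_vals = x_origin + np.arange(raw.size) * x_increment
    volts = (raw - y_reference) * y_increment + y_origin
    return time_vals, volts
```

`np.savetxt` can then dump both columns in one call instead of a Python-level loop over every sample.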
https://dsp.stackexchange.com/questions/77034/how-to-use-deconvolution-technique-to-find-out-impulse-response/77046#77046 | # How to use deconvolution technique to find out impulse response?
I have been working to find the impulse response of a room. I am using a logarithmic sine sweep as input, say $$x(n)$$, and my recorded signal is $$y(n)$$. I know the room impulse response theoretically satisfies: $$x(n) * h(n) = y(n)$$ where $$*$$ is the convolution operator.
I have read a research paper where it was pointed out that using the deconvolution technique we can get the room impulse response. I tried using scipy.signal.deconvolve. Here you can view the documentation.
Now when I perform this process, I do not get the impulse response I expect. I think it should work as: $${\tt deconvolve}((x(n)*h(n)),x(n)) = h(n)$$ where $$x(n) * h(n) = y(n)$$.
If I am correct theoretically, why am I not getting the required result? Am I making a mistake? I am posting the files and also the code with a plot.
## Output Graph
• Khubaivb, the first link (for $x(n)$) doesn't seem to work?
– Peter K.
Aug 31 '21 at 14:40
• Hi. This is not how sweep-sine IR measurement should be done. There's no need to deconvolve the sweep from the recording. All you need is to create the inverse filter (which is time-reversed and amplitude modulated version of the original sweep) and convolve it with the recording. Here's how exactly: dsp.stackexchange.com/a/41700/8202.
– jojek
Aug 31 '21 at 15:01
• @PeterK. Apology for the inconvenience, I have updated the link. Sep 1 '21 at 6:11
• @jojek That's great. It means that I just need to create a mirror image of my sweep signal and then perform convolution of this mirror and amplitude modulated signal with my recording and I'll get the room impulse response? Right? Sep 1 '21 at 6:18
• That’s correct. Keep in mind that the inverse filter is closely tied to the playback sweep. You might have to regenerate it with a known parameters.
– jojek
Sep 1 '21 at 6:20
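The idea in the thread can be sketched numerically. Below is a minimal sketch using regularized frequency-domain deconvolution, a common alternative route to the time-reversed inverse-filter method jojek describes; the function name and the regularization constant are my own choices, not from the thread:

```python
import numpy as np

def estimate_ir(x, y, n_fft=None, reg=1e-6):
    """Estimate h from the sweep x and the recording y = x * h by
    regularized frequency-domain deconvolution: H = Y·conj(X)/(|X|^2 + eps)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = n_fft or (len(x) + len(y))          # zero-pad to avoid circular wrap-around
    X = np.fft.rfft(x, n)
    Y = np.fft.rfft(y, n)
    eps = reg * np.max(np.abs(X)) ** 2      # damps bands the sweep never excites
    H = Y * np.conj(X) / (np.abs(X) ** 2 + eps)
    return np.fft.irfft(H, n)
```

With a synthetic logarithmic sweep and a known two-tap "room" (a direct path plus one echo), the estimate peaks at the direct path and shows the echo at the right lag, within the sweep's frequency band.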
http://export.arxiv.org/list/cs/pastweek?skip=1313&show=25 | # Computer Science
## Authors and titles for recent submissions, skipping first 1313
[ total of 1534 entries: 1-25 | ... | 1239-1263 | 1264-1288 | 1289-1313 | 1314-1338 | 1339-1363 | 1364-1388 | 1389-1413 | ... | 1514-1534 ]
[ showing 25 entries per page: fewer | more | all ]
### Thu, 23 Jun 2022 (continued, showing 25 of 294 entries)
[1314]
Title: Answer Fast: Accelerating BERT on the Tensor Streaming Processor
Subjects: Machine Learning (cs.LG); Computation and Language (cs.CL)
[1315]
Title: An Ontological Approach to Analysing Social Service Provisioning
Subjects: Databases (cs.DB); Artificial Intelligence (cs.AI); Logic in Computer Science (cs.LO)
[1316]
Title: Transformer Neural Networks Attending to Both Sequence and Structure for Protein Prediction Tasks
Comments: 8 pages, 4 figures, 3 tables
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Quantitative Methods (q-bio.QM)
[1317]
Title: Generational Differences in Automobility: Comparing America's Millennials and Gen Xers Using Gradient Boosting Decision Trees
Authors: Kailai Wang (University of Houston), Xize Wang (National University of Singapore)
Journal-ref: Cities, 114, 103204 (2021)
Subjects: Machine Learning (cs.LG); General Economics (econ.GN); Applications (stat.AP)
[1318]
Title: S2RL: Do We Really Need to Perceive All States in Deep Multi-Agent Reinforcement Learning?
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI)
[1319]
Title: Surgical-VQA: Visual Question Answering in Surgical Scenes using Transformer
Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Robotics (cs.RO); Image and Video Processing (eess.IV)
[1320]
Title: Dynamic Restrained Uncertainty Weighting Loss for Multitask Learning of Vocal Expression
Subjects: Sound (cs.SD); Machine Learning (cs.LG); Audio and Speech Processing (eess.AS)
[1321]
Title: Artificial optoelectronic spiking neuron based on a resonant tunnelling diode coupled to a vertical cavity surface emitting laser
Subjects: Emerging Technologies (cs.ET); Neural and Evolutionary Computing (cs.NE); Applied Physics (physics.app-ph); Optics (physics.optics)
[1322]
Title: Supermodular f-divergences and bounds on lossy compression and generalization error with mutual f-information
Subjects: Information Theory (cs.IT); Machine Learning (cs.LG)
[1323]
Title: World of Bugs: A Platform for Automated Bug Detection in 3D Video Games
Subjects: Software Engineering (cs.SE); Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Multiagent Systems (cs.MA)
[1324]
Title: When It's Not Worth the Paper It's Written On: A Provocation on the Certification of Skills in the Alexa and Google Assistant Ecosystems
Comments: To appear in the Proceedings of the 4th Conference on Conversational User Interfaces (CUI 2022)
Subjects: Human-Computer Interaction (cs.HC)
[1325]
Title: KeyCLD: Learning Constrained Lagrangian Dynamics in Keypoint Coordinates from Images
Subjects: Machine Learning (cs.LG); Systems and Control (eess.SY)
[1326]
Title: Can you meaningfully consent in eight seconds? Identifying Ethical Issues with Verbal Consent for Voice Assistants
Comments: To appear in the Proceedings of the 4th Conference on Conversational User Interfaces (CUI 2022). arXiv admin note: substantial text overlap with arXiv:2204.10058
Subjects: Human-Computer Interaction (cs.HC)
[1327]
Title: Test Case Prioritization Using Partial Attention
Subjects: Software Engineering (cs.SE)
[1328]
Title: On three types of $L$-fuzzy $β$-covering-based rough sets
Subjects: Artificial Intelligence (cs.AI)
[1329]
Title: ROSE: A RObust and SEcure DNN Watermarking
Subjects: Cryptography and Security (cs.CR); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
[1330]
Title: Heterogeneous Graph Neural Networks for Software Effort Estimation
Comments: Accepted in the Technical Papers Track of the 16th International Symposium on Empirical Software Engineering and Measurement, 2022 (ESEM 2022)
Subjects: Software Engineering (cs.SE); Artificial Intelligence (cs.AI)
[1331]
Title: Connecting a French Dictionary from the Beginning of the 20th Century to Wikidata
Authors: Pierre Nugues
Journal-ref: Proceedings of the 13th Language Resources and Evaluation Conference (LREC), Marseille, France pp. 2548-2555 (2022)
Subjects: Computation and Language (cs.CL); Digital Libraries (cs.DL)
[1332]
Title: Monte Carlo Methods for Industry 4.0 Applications
Subjects: Information Theory (cs.IT)
[1333]
Title: Event-triggered and distributed model predictive control for guaranteed collision avoidance in UAV swarms
Comments: Accepted for publication at the IFAC Conference on Networked Systems (NecSys) 2022
Subjects: Systems and Control (eess.SY)
[1334]
Title: Object Type Clustering using Markov Directly-Follow Multigraph in Object-Centric Process Mining
Authors: Amin Jalali
Subjects: Artificial Intelligence (cs.AI)
[1335]
Title: Weakly-supervised Action Localization via Hierarchical Mining
Subjects: Computer Vision and Pattern Recognition (cs.CV)
[1336]
Title: Agent-based Graph Neural Networks
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Machine Learning (stat.ML)
[1337]
https://math.stackexchange.com/questions/2371949/what-is-the-largest-3-digit-number-which-has-all-3-digits-different-and-is-equal | # What is the largest 3-digit number which has all 3 digits different and is equal to 37 times the sum of its digits?
What is the largest 3-digit number which has all 3 digits different and is equal to 37 times the sum of its digits?
-Question 28, Junior Division, AMC 2016
I found the solution here, page 51, however I don't understand it from the very beginning. How does $100a+10b+c$ equal $37a+37b+37c$? Can somebody explain to me what's happened in this solution step by step please? Or can somebody give an alternate solution, suitable for Year 7 and 8? Thanks!
P.S. If anybody knows a more accurate tag for this question, please feel free to edit my question or comment so I can edit it if you can't.
• If $a,b,c$ are the digits of the number, then the number is $100a+10b+c$, the sum of the number's digits is $a+b+c$, and $37$ times the sum of its digits is $37(a+b+c)=37a+37b+37c$. – arctic tern Jul 26 '17 at 6:13
• If the three digits are in order (from left to right) $a$, $b$ and $c$, $37a+37b+37c$ is 37 times the sum of the digits, and $100a+10b+c$ is the number. If that's where your problems start, my guess is that AMC is not for you. – Henrik Jul 26 '17 at 6:14
• Um... That link is to a file on your computer. We can't actually read it. – fleablood Jul 26 '17 at 7:52
• @fleablood - Sorry ¯\_(ツ)_/¯. Problem fixed! – bio Jul 26 '17 at 9:36
You have $\overline{abc}=100a +10b+ c=37 (a+b+c)$ or
$$63a-27b-36c=0 \ \ \Longrightarrow 7a=3b+4c$$
Now $a \le \frac{3b+4c}{7}\le \frac{3\times 8+4 \times 9}{7}<9$.
Let $a=8$. Then $3b+4c=56$. We have $3b=4(14-c)$, so $b$ must be a multiple of $4$. But $b=0$ gives $c=14$ and $b=4$ gives $c=11$ (neither is a digit), while $b=8$ gives $c=8$, repeating the digit $a=8$. So $a=8$ is impossible!
Let $a=7$. Then $3b+4c=49$ and $b=\frac{49-4c}{3}=16+\frac{1-4c}{3}$, which requires $c\equiv 1 \pmod 3$. But $c=1$ gives $b=15$ and $c=4$ gives $b=11$ (neither is a digit), while $c=7$ gives $b=7$, repeating the digit $a=7$. So $a=7$ is impossible again.
Let $a=6$. Then $3b+4c=42$ and $4c=3(14-b)$, so $c$ must be a multiple of $3$; only $c=6$ and $c=9$ give a digit for $b$. The possible pairs for $(b, c)$ are $(6, 6)$ and $(2, 9)$, and the first repeats the digit $6$, leaving $(b,c)=(2,9)$.
Thus largest is $629$.
"What is the largest 3-digit number...."
Let $N = abc$ have three digits. $N = 100a + 10b + c$. That's what the writing numbers of digits mean. $593 = 500 + 90 +3$ and $abc = 100a + 10b + c$.
" which has all 3 digits different"
So $a \ne b; a\ne c; b \ne c$.
" and is equal to 37 times the sum of its digits?"
So $N = 37*(a+b+c) = 37a + 37b + 37c$
And $N = 100a + 10b + c$.
So $100a + 10b + c = 37a + 37b + 37c$
So $(100a - 37a)= (37 - 10)b + (37-1)c$
So $63a = 27b + 36c$
So $\frac {63}9 a = \frac {27}9 b + \frac {36}9 c$
So $7a = 3b + 4c$.
Obviously $a = 1, b= 1, c=1$ is a solution but 1) It's probably not the largest and 2) The digits aren't all different.
$b \ne c$ so $c = b \pm k$ for some $k \ne 0$.
So $7a = 3b + 4c = 3b + 4(b \pm k) = 7b \pm 4k$
So $a = b \pm \frac 47k$ so $7$ must divide $k$. But $b$ and $c$ are single digits and $c = b \pm k$ so $k = 7$ and $a = b \pm 4$
The possible answers are $b =0; c = 0+7=7; a = 0+4=4$
$b=1; c = 1+7= 8; a = 1+4 =5$
$b=2; c = 2+7 =9; a= 2+4 =6$
$b = 7; c = 7-7=0 ; a=7-4 = 3$
$b = 8; c = 8-7 = 1; a = 8-4 = 4$
$b = 9; c = 9 -7 =2; a = 9-4=5$.
Of those $a=6;b=2; c= 9$ and $N = 629 = 37*17 = 37(6+2+9)$ is the largest such answer.
Since $3\cdot 37=111$ and all three-digit multiples of that have three equal digits, our number better not be a multiple of $3$.
The digit sum is at most $9+9+9=27$, so (cf. first paragraph) our number is at most $26\cdot 37=962$. However, $9+6+2\ne 26$. Any number $<962$ has digit sum $\le 9+5+9=23$, so our number is at most $23\cdot 37=851$. Again, $8+5+1\ne 23$, so we must go lower again. Actually, this is just a finite problem - why not simply check the few multiples of $37$ up to that limit?
The notation in which all our numbers are written is called the place value notation. You may have had exercises in classes 5 and 6 concerning decimal expansions of a number. For example, $678 = 6 * 100 + 7 * 10 + 8 * 1$, and $1089 = 1 * 1000 + 0 * 100 + 8 * 10 + 9 * 1$.
Therefore, using elementary algebra, we can deduce that given a three digit number represented by $abc$ is the number $100a+10b+c$. Here, I say represented by because in ordinary algebra $abc$ is the product of the three quantities a,b, and c, but here we are treating it as the representation of a three digit number e.g. $678$.
From the representation $abc$, we deduce that $a,b,c$ stand for the digits of the number. For example, when we write $678$, then the digits of the number are $6,7,8$.
Now, we are in a position to actually understand the problem: it asks, which is the largest three-digit number that is $37$ times the sum of its digits?
Let this number be $abc$. Note that this is a representation. The actual number, of course, is $100a+10b+c$, and this is equal to $37$ times the sum of digits which are $a,b,c$, hence giving the equation $100a+10b+c = 37a+37b+37c$.
When it comes to solving this question, you should realize that our three digit number is at least a multiple of $37$, the quotient being $a+b+c$. So it's enough to check multiples of $37$. Furthermore, since we have to find the largest such number, we may as well start from the largest three digit number possible, which is $999$. This in fact satisfies all the requirements, but doesn't have distinct digits.
So we come further down by subtracting $37$. The next number is $962$, doesn't work. The next is $925$, doesn't work. The next is $888$, doesn't have distinct digits, ... (once you understand why we are doing all this, I can tell you ways of reducing the number of cases to check in this step, because it's time intensive).
Work your way down, until you find the right answer, which I think is $37*17 = 629$. The sum of the digits is $17$, indeed, and the digits are distinct.
let $$100a +10b+ c=37 (a+b+c)$$
on solving you get $$63a-27b-36c=0$$ consider the largest possible case that is let $a=b=9$ then on substituting the values in the equation you get $c=9$ ...bingo!!
i don't think there is any other specific way. hope it helps.
• note that all $3$ digits have to be different. – Siong Thye Goh Jul 26 '17 at 7:15
• oh sorry had not seen it – user453135 Jul 26 '17 at 7:25
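Since the search space is tiny, the answer can also be confirmed by brute force; a quick sketch in Python (variable names are mine):

```python
# Search all 3-digit numbers for: n == 37 * digit_sum(n) with all digits distinct
candidates = [
    n for n in range(100, 1000)
    if n == 37 * sum(int(d) for d in str(n)) and len(set(str(n))) == 3
]
print(max(candidates))  # 629
```

This agrees with the answers above: $407, 518, 629, 370, 481, 592$ all qualify, and $629$ is the largest.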
https://www.ias.ac.in/describe/article/jess/129/0192 | • Future changes in rice yield over Kerala using climate change scenario from high resolution global climate model projection
• Fulltext
https://www.ias.ac.in/article/fulltext/jess/129/0192
• Keywords
Global Climate Model; climate change; cropping system model; rice yield; adaptation measures.
• Abstract
The impact of climate change on agricultural yield is one amongst the major concerns the world is witnessing. Our study focusses on rice yield prediction for an agricultural research station in Kerala with the help of climate change scenario input from the Meteorological Research Institute (MRI) Global Climate Model (GCM) projection under Representative Concentration Pathway 8.5 (RCP8.5). We have used the Cropping System Model (CSM) Crop Estimation through Resource and Environment Synthesis (CERES) Rice within the Decision Support System for Agrotechnology Transfer (DSSAT) package for predicting the yield. Our study has the novelty of using very high-resolution climate data from a model which is highly skilful in capturing the present-day climate features and climatic trends over India (in particular, over the Western Ghats), as input for simulating the future crop yield. From this study, we find that the rice yield decreases due to rise in temperature and reduction in rainfall, thereby reducing the crop's maturity time in the future. Based on our results, the adaptation measures suggested to achieve better yield under future warming conditions are: (i) opting for alternative rice varieties which have tolerance to high temperatures and consume less water, and (ii) shifting the planting date to the most appropriate window.
Highlights
• Impact study of future climate change on rice yield is carried out using the CERES Rice Cropping System Model after systematic validation.
• Highly reliable climate change information from the projection by a 20-km resolution global climate model of MRI, which is remarkably skilful in simulating the present-day Indian climate, is used as input for the crop model.
• Rice yield is found to decrease in future due to rise in temperature and reduction in rainfall, thereby reducing the crop's maturity time.
• Adaptive measures (opting for temperature-tolerant, high-yielding rice varieties which consume less water, and shifting the planting date to an appropriate window) are suggested to achieve better yield.
• Author Affiliations
1. Multi-Scale Modelling Programme, CSIR Fourth Paradigm Institute, Bangalore 560 037, India.
2. Academy of Scientific and Innovative Research (AcSIR), Ghaziabad 201 002, India.
3. College of Horticulture, Kerala Agricultural University, Vellanikkara, India.
4. Japan Meteorological Business Support Center, Tsukuba, Japan.
• Journal of Earth System Science
Volume 131, 2022
http://mathwaycalculus.com/how-to-solve-9-%E2%88%92-2-5-5%E2%88%924-x-521-%C3%B7-2-of-4/ | # How to solve 9 − 2 + 5 (5−4) × [(5+2)+1] ÷ 2 of 4
Welcome to my article How to solve 9 − 2 + 5 (5−4) × [(5+2)+1] ÷ 2 of 4. This question is taken from the simplification lesson.
The solution to this question is explained in a very simple way, step by step, using addition, subtraction, and division.
For complete information on how to solve this question How to solve 9 − 2 + 5 (5−4) × [(5+2)+1] ÷ 2 of 4, read and understand it carefully till the end.
Let us know how to solve this question How to solve 9 − 2 + 5 (5−4) × [(5+2)+1] ÷ 2 of 4.
First write the question on the page of the notebook.
## How to solve 9 − 2 + 5 (5−4) × [(5+2)+1] ÷ 2 of 4
Write this question in this way and solve it in simple way,
\displaystyle 9 - 2 + 5(5-4) \times [(5+2)+1] \div 2 \text{ of } 4

\displaystyle = 9 - 2 + 5(5-4) \times [7+1] \div 2 \text{ of } 4

\displaystyle = 9 - 2 + 5(5-4) \times 8 \div 2 \text{ of } 4

\displaystyle = 9 - 2 + 5(1) \times 8 \div (2 \times 4)

\displaystyle = 9 - 2 + 5 \times 8 \div 8

\displaystyle = 9 - 2 + 5 \times 1

\displaystyle = 9 - 2 + 5

\displaystyle = 12

So the answer is 12.
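The same chain of steps can be checked in Python; here "of" is written explicitly as the multiplication that binds before the division:

```python
# "2 of 4" is evaluated first, so the expression divides by (2 * 4) = 8
result = 9 - 2 + 5 * (5 - 4) * ((5 + 2) + 1) / (2 * 4)
print(result)  # 12.0
```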
https://answers.gazebosim.org/answers/12590/revisions/ | One way to do it is to publish a visual message to the ~/visual topic. Here's an example changing transparency.
https://www.bmj.com/content/313/7059/744.1?ijkey=9ac108ad12099b082fcabcccc4139061e57e044d&keytype2=tf_ipsecsha | Intended for healthcare professionals
Education And Debate
# Statistics Notes: Measurement error
BMJ 1996; 313 (Published 21 September 1996) Cite this as: BMJ 1996;313:744
1. J Martin Bland, professor of medical statisticsa,
1. a Department of Public Health Sciences, St George's Hospital Medical School, London SW17 0RE,
2. b IRCF Medical Statistics Group, Centre for Statistics in Medicine, Institute of Health Sciences, PO Box 777, Oxford OX3 7LF
1. Correspondence to: Professor Bland.
Several measurements of the same quantity on the same subject will not in general be the same. This may be because of natural variation in the subject, variation in the measurement process, or both. For example, table 1 shows four measurements of lung function in each of 20 schoolchildren (taken from a larger study1). The first child shows typical variation, having peak expiratory flow rates of 190, 220, 200, and 200 l/min.
Table 1
Repeated peak expiratory flow rate (PEFR) measurements for 20 schoolchildren
Let us suppose that the child has a “true” average value over all possible measurements, which is what we really want to know when we make a measurement. Repeated measurements on the same subject will vary around the true value because of measurement error. The standard deviation of repeated measurements on the same subject enables us to measure the size of the measurement error. We shall assume that this standard deviation is the same for all subjects, as otherwise there would be no point in estimating it. The main exception is when the measurement error depends on the size of the measurement, usually with measurements becoming more variable as the magnitude of the measurement increases. We deal with this case in a subsequent statistics note. The common standard deviation of repeated measurements is known as the within-subject standard deviation, which we shall denote by s_w.
To estimate the within-subject standard deviation, we need several subjects with at least two measurements for each. In addition to the data, table 1 also shows the mean and standard deviation of the four readings for each child. To get the common within-subject standard deviation we actually average the variances, the squares of the standard deviations. The mean within-subject variance is 460.52, so the estimated within-subject standard deviation is s_w = √460.52 = 21.5 l/min. The calculation is easier using a program that performs one way analysis of variance2 (table 2). The value called the residual mean square is the within-subject variance. The analysis of variance method is the better approach in practice, as it deals automatically with the case of subjects having different numbers of observations. We should check the assumption that the standard deviation is unrelated to the magnitude of the measurement. This can be done graphically, by plotting the individual subject's standard deviations against their means (see fig 1). Any important relation should be fairly obvious, but we can check analytically by calculating a rank correlation coefficient. For the figure there does not appear to be a relation (Kendall's τ = 0.16, P = 0.3).
Table 2
One way analysis of variance for the data of table 1
Fig 1
Individual subjects' standard deviations plotted against their means
A common design is to take only two measurements per subject. In this case the method can be simplified because the variance of two observations is half the square of their difference. So, if the difference between the two observations for subject i is d_i, the within-subject standard deviation s_w is given by s_w² = (1/(2n)) Σ d_i², where n is the number of subjects. We can check for a relation between standard deviation and mean by plotting for each subject the absolute value of the difference—that is, ignoring any sign—against the mean.
The measurement error can be quoted as s_w. The difference between a subject's measurement and the true value would be expected to be less than 1.96 s_w for 95% of observations. Another useful way of presenting measurement error is sometimes called the repeatability, which is √2 × 1.96 s_w, or 2.77 s_w. The difference between two measurements for the same subject is expected to be less than 2.77 s_w for 95% of pairs of observations. For the data in table 1 the repeatability is 2.77 × 21.5 ≈ 60 l/min. The large variability in peak expiratory flow rate is well known, so individual readings of peak expiratory flow are seldom used. The variable used for analysis in the study from which table 1 was taken was the mean of the last three readings.1
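As an illustration, the calculations above can be reproduced in a few lines of Python (the function names are mine; the first function assumes equal numbers of observations per subject, as in table 1):

```python
import numpy as np

def within_subject_sd(measurements):
    """s_w: square root of the mean of the per-subject variances (ddof=1)."""
    variances = [np.var(subject, ddof=1) for subject in measurements]
    return float(np.sqrt(np.mean(variances)))

def within_subject_sd_pairs(differences):
    """Two measurements per subject: s_w^2 = sum(d_i^2) / (2n)."""
    d = np.asarray(differences, dtype=float)
    return float(np.sqrt(np.sum(d ** 2) / (2 * len(d))))

def repeatability(s_w):
    """95% of pairs of measurements on the same subject differ by less than this."""
    return 2.77 * s_w
```

For the first child's readings (190, 220, 200, 200) alone this gives roughly 12.6 l/min; pooling the variances of all 20 children, as in table 2, gives the 21.5 l/min quoted above.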
Other ways of describing the repeatability of measurements will be considered in subsequent statistics notes.
https://chadrick-kwag.net/ | ## Posts
#### “ValueError: numpy.ndarray size changed, may indicate binary incompatibility.” error fix
After installing packages with python and running a torch training script, I encountered the following error. This error occurred in pycocotools package which was used by detectron2 package. My solution was to reinstall pycocotools package Read more…
#### visual code debug configuration variables
from the official docs: https://code.visualstudio.com/docs/editor/variables-reference Predefined variables The following predefined variables are supported: ${workspaceFolder} – the path of the folder opened in VS Code; ${workspaceFolderBasename} – the name of the folder opened in VS Code without any Read more…
#### paper summary: “VarifocalNet: An IoU-aware Dense Object Detector”(VFNet)
arxiv: https://arxiv.org/abs/2008.13367 key points: another anchor-free, point-based object detection network; introduces a new loss, varifocal loss, which is forked from focal loss with changes that further compensate the positive/negative imbalance. Read more…
https://www.slitherintopython.com/book/chapter_8/chapter_8.html

# Chapter 8 - Lists
Important Note:
Just a heads up: there's a lot to cover in this chapter and it can get quite technical in places (especially section 8.4), so pay particular attention to that section; it's really important!
## 8.1 - List literals
Lists are, on the surface, the simplest data structure Python has to offer. They are also one of the most important and are used to store collections of elements.
A list is a sequence of elements in which the elements can be of any type (integers, floats, strings, even other lists).
Lists are of type list
The most basic kind of list is the empty list and is represented as []. The empty list is falsy, meaning it evaluates to False in a boolean context:
>>> empty_list = []
>>> bool(empty_list)
False
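Because the empty list is falsy, a common idiom (not from this chapter, but standard Python) is to test for emptiness directly in a condition:

```python
# Checking for an empty list using its truth value.
items = []

if not items:
    print("The list is empty")
else:
    print("The list has", len(items), "elements")
```

This reads more naturally than comparing against [] or checking len(items) == 0.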
In a list, elements are separated by commas.
my_list = [1, 2, 3, 654, 7]
This list contains 5 elements.
We can mix the types within the list, for example:
my_list = [True, 88, 3.14, "Hello", ['Another list', 45938]]
Notice how the last element is another list
In Python, a list is a collection type meaning it can contain a number of objects (strings, ints, etc..) yet still be treated as a single object.
Lists are also a sequence type meaning each object in the collection occupies a specific numbered location within it (which means elements are ordered).
Lists are also iterable so we can loop through their elements using indices.
They sound a lot like strings, don't they?
However, they differ in two significant ways:
• A list can contain objects of different types while a string can only contain characters
• A list is a mutable type (We'll get to this later).
## 8.2 - List methods and functions
We can get the length of a list in the same way we do with strings, using the len() function.
>>> my_list = [23, 543, "hi"]
>>> len(my_list)
3
We also have three other quite important functions, these are sum(), min() and max() and they work as you'd expect.
>>> my_list = [1, 2, 3, 4, 5, 6]
>>> sum(my_list)
21
min() and max() also work on strings, since characters have a lexicographical ordering (for lowercase letters, 'a' is the smallest and 'z' the largest):
>>> my_list = [6, 324, 456, 2, 6574, -452]
>>> max(my_list)
6574
>>> min(my_list)
-452
>>> min("string")
'g'
>>> max("string")
't'
We can also sort a list using the sorted() function. The sorted() function works by converting a collection to a list and returning a new sorted list.
>>> student_grades = [88, 23, 56, 75, 34, 23, 84, 63, 52, 77, 96]
>>> sorted(student_grades)
[23, 23, 34, 52, 56, 63, 75, 77, 84, 88, 96]
In computer science, searching and sorting are two big problems and are really important topics.
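As an aside (not covered in this chapter): besides the sorted() function, lists have a sort() method that sorts the list in place and returns None. A sketch of the difference:

```python
# sorted() returns a new list; list.sort() modifies the list in place.
grades = [88, 23, 56]

new_list = sorted(grades)   # grades itself is unchanged here
grades.sort()               # grades itself is now sorted

print(new_list)  # [23, 56, 88]
print(grades)    # [23, 56, 88]
```

We can do this because lists are mutable, a topic section 8.4 covers in detail.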
We can append and pop elements to and from the end of a list. Append is add and pop is remove. We can do this because lists are mutable.
>>> my_list = [1, 5, 8]
>>> my_list.append(99)
>>> my_list
[1, 5, 8, 99]
>>> my_list.pop()
99
>>> my_list
[1, 5, 8]
>>> my_list.pop(1)
5
>>> my_list
[1, 8]
Notice that when we pop we are returned the number we popped. By default, if no arguments are passed to pop() then the last element is popped. We can also pass an index to pop() and it will pop the element at that index and return it to us.
We can store the value that is returned after calling pop() in another variable
>>> my_list = [1, 5, 9]
>>> x = my_list.pop()
>>> x
9
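Since append() adds to the end and pop() removes from the end, a list can act as a simple stack (last in, first out). This usage isn't covered in the chapter, but it follows directly from the two methods:

```python
# Using a list as a stack: push with append(), pop with pop().
stack = []
stack.append("first")
stack.append("second")
stack.append("third")

top = stack.pop()   # removes and returns the most recently added element
print(top)          # third
print(stack)        # ['first', 'second']
```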
If we have a list of strings or characters, we can join them together using the join() string method:
>>> my_list = ["O", "r", "a", "n", "g", "e", "s"]
>>> "".join(my_list)
'Oranges'
The string before the join function is called the joining string and is what will be between each element of the list when joining them together. In this case we want nothing (the empty string).
However if we had words, we might want to join them together with spaces:
>>> my_list = ["Hello", "World.", "This", "is", "a", "string."]
>>> " ".join(my_list)
'Hello World. This is a string.'
Side Note:
We can break a string into a list using the split() function:
>>> s = "This is my string"
>>> my_list = s.split()
>>> my_list
['This', 'is', 'my', 'string']
The split() function takes an optional argument: the separator string we want to split on. If no argument is passed, it defaults to splitting on whitespace.
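For example (my own illustration, assuming a comma-separated string), passing a separator to split():

```python
# Splitting on a specific separator instead of whitespace.
csv_line = "apples,oranges,pears"
fruits = csv_line.split(",")
print(fruits)  # ['apples', 'oranges', 'pears']

# Round trip: join puts the separator back between elements.
print(",".join(fruits))  # apples,oranges,pears
```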
## 8.3 - List indexing
As we could with strings, we can also index into lists the same way.
>>> my_list = [3, 5.64, "Hello", True]
>>> my_list[2]
'Hello'
>>> my_list[0]
3
I've talked about how we can have other lists as elements inside lists. In Python and many other languages, lists inside lists are useful for representing many types of data (matrices, images, spreadsheets and even higher dimensional spaces). These are typically referred to as multidimensional arrays or multidimensional lists.
We can index into these inner lists too!
>>> my_list = [[1, 2, 3], [7, 8, 9]]
>>> my_list[0][1]
2
This works as follows: we first select the embedded list we want, then we select the element from that list. In the above example, we want the list at index 0 ([1, 2, 3]), then we select the element at index 1 (2).
We can 'burrow' down into embedded lists this way.
>>> my_list = [[[1, 2, 3], [4, 5, 6]], [7, 8, 9]]
>>> my_list[0][1][1]
5
We can extend this to strings inside lists
>>> my_list = ["My string", "Another string"]
>>> my_list[0][1]
'y'
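Before moving on, here's a sketch (my own, using a for loop from earlier chapters) that combines nested lists with iteration, treating a list of lists as the rows of a small matrix:

```python
# Summing each row of a 2D list (a list of lists).
matrix = [[1, 2, 3],
          [4, 5, 6],
          [7, 8, 9]]

for row in matrix:
    print(sum(row))
```

Each row is itself a list, so sum() works on it directly.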
## 8.4 - References & Mutability
We looked at immutable types earlier on. Now we're going to look at mutable types.
A mutable type (or mutable sequence) is one that can be changed after it is created. A list is a mutable sequence and can therefore be changed in place.
Let's refresh our memory on how strings couldn't be changed after they were created
>>> s = "my string"
>>> s[0] = "b"
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'str' object does not support item assignment
We get an error. Take a look back at the chapter on strings (Chapter 6) and revise the diagram that exists in that chapter.
I'm going to introduce the is keyword now. The is keyword compares object IDs (identities). This is in contrast to ==, which checks whether the objects referred to by the variables have the same content (equality).
To give an analogy, let's say a clothing company is mass producing a certain type of shirt. Let's take two shirts as they are coming off the line.
We would say shirt_1 == shirt_2 evaluates to True (They are both the same color, size and have the same graphic printed on them)
We would say shirt_1 is shirt_2 evaluates to False (Even though they look similar, they are not the same shirt).
Identity versus equality in Python is quite like that.
Let's look at some more Python examples:
>>> s = "Hello"
>>> t = s
>>> t is s
True
>>> t == s
True
These variables, s and t, are referencing the same object (the string "Hello"). What if we try to update s?
>>> s = "Hello"
>>> t = s
>>> t is s
True
>>> s = "World"
>>> t
'Hello'
The variable s is now referencing a different object and t still references "Hello". As strings are immutable we must create a new instance. We cannot change it in place.
With lists however, things operate a little differently.
>>> a = [1, 3, 7]
>>> b = a
>>> b is a
True
>>> a.append(9)
>>> a
[1, 3, 7, 9]
>>> b
[1, 3, 7, 9]
>>> a is b
True
The behavior has changed here because the object pointed to by a and b is mutable. Modifying it does not create a new object, and no reference is overwritten to point at a new object. We do not overwrite the reference in the variable a; instead we write through it to modify the mutable object it points to.
There are however some little tricks thrown in with this behavior. Consider the following code:
>>> a = [1, 3, 7]
>>> b = a
>>> a = a + [9]
>>> a
[1, 3, 7, 9]
>>> b
[1, 3, 7]
What's going on here? That goes against what we just talked about, doesn't it? Well, obviously not; the Python developers wouldn't leave a bug that big in the language. What's happening is that we executed the code a = a + [9]. This doesn't write through a to modify the list it references. Instead, a new list is created from the concatenation of the list a and the list [9].
A reference to this new list overwrites the original reference in a. The list referenced by b is hence unchanged.
That's not all the little tricks thrown in. There's one more subtlety. Consider the following code:
>>> a = [1, 3, 7]
>>> b = a
>>> a += [9]
>>> a
[1, 3, 7, 9]
>>> b
[1, 3, 7, 9]
It turns out x += y isn't always shorthand for x = x + y. Although with immutable types they do the same thing, with lists they behave very differently. The += operator, when applied to lists, modifies the list in place. This means we write through the reference in a and extend the list with the elements of [9].
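One way to see the difference for yourself (my own check, using the built-in id() function, which returns an object's identity) is to compare identities before and after each operation:

```python
# a = a + [9] rebinds a to a NEW list; a += [9] mutates the existing list.
a = [1, 3, 7]
old_id = id(a)
a = a + [9]                   # concatenation builds a new list object
rebound = id(a) != old_id
print(rebound)                # True

b = [1, 3, 7]
old_id = id(b)
b += [9]                      # in-place extension keeps the same object
same_object = id(b) == old_id
print(same_object)            # True
```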
Writing through references is illustrated below.
Firstly consider the code, then the diagram:
>>> a = [1, 3]
>>> b = a
>>> a[1] = 9
** BEFORE UPDATING A **
NAME LIST LIST OBJECT
=========== ===============
|-----| _____
_____ | | | 1 |
a | # |------------------>| # |--------->|---|
|---| | | _____
_____ | # |--------->| 3 |
b | # |------------------>|_____| |---|
|---|
** AFTER UPDATING A **
NAME LIST LIST OBJECT
=========== ===============
|-----| _____
_____ | | | 1 |
a | # | ----------------->| # |---------> |---|
|---| | | _____
_____ | # |-----| | 3 |
b | # | ----------------->|_____| | |---|
|---| |
| |---|
|---> | 9 |
|---|
So what's happening in the second part of this diagram? The #'s represent references.
a is a reference to a list object which contains references to integers.
b references the same list object. But when we "change" a, we're clearly not rebinding the variable a itself; we're updating a reference within the list object it points to. b still points to that same list, so b's own reference is unaffected, yet b sees the new contents.
We can still see the 3 floating around in the second part of the diagram. This will be picked up by something called the garbage collector (A garbage collector keeps memory clean and stops it from becoming flooded with stuff like the unreferenced 3).
It's important to keep a mental image of this.
## 8.5 - List operations
We've seen two operations in the previous section (now knowing they're different): the + and += operators. I'll briefly run over them again.
>>> a = [1, 2, 3]
>>> b = [4, 5, 6]
>>> a + b
[1, 2, 3, 4, 5, 6]
>>> a += b
>>> a
[1, 2, 3, 4, 5, 6]
We also have the * operator, which does as expected:
>>> a = [1, 2, 3]
>>> a * 3
[1, 2, 3, 1, 2, 3, 1, 2, 3]
We also have a new operator. This is the in operator. The in operator is a membership test operator and returns True if the specified value is an element of the sequence:
>>> a = [3, 7, 22]
>>> 4 in a
False
>>> 3 in a
True
The in operator can be applied to any iterable type (lists, strings, etc).
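To round this out (my own examples): applied to a string, in performs a substring test, and Python also provides a companion not in operator:

```python
# Membership tests on different iterable types.
print(3 in [3, 7, 22])        # True
print("ring" in "string")     # True  (substring test for strings)
print(4 not in [3, 7, 22])    # True
```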
## 8.6 - List slicing
List slicing works exactly how it does with strings. This makes sense, as both strings and lists are sequence types.
>>> a = [1, 2, 3, "Hello", True]
>>> a[3:]
["Hello", True]
>>> a[-4: -2]
[2, 3]
Extended slicing also works exactly the same as it does with strings.
>>> a = [1, 2, 3, "Hello", True]
>>> a[::-1]
[True, 'Hello', 3, 2, 1]
>>> a[::2]
[1, 3, True]
Important:
Remember in section 8.4 we wrote something like:
>>> a = [1, 2, 3]
>>> b = a
If we updated a, then b would also be affected. What if we wanted to make a copy of a and store it in b? We can use slicing to do that, as slicing returns a new object:
>>> a = [1, 2, 3]
>>> b = a[:]
>>> b is a
False
We can see now that they do not reference the same object, so updating one will not affect the other.
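One caveat worth knowing (beyond what this chapter covers): a slice makes a shallow copy, so if the list contains other lists, the inner lists are still shared. The standard library's copy.deepcopy can copy the nested structure too:

```python
import copy

# A slice copies the outer list, but inner lists are still shared.
a = [[1, 2], [3, 4]]
b = a[:]
b[0][0] = 99
print(a[0][0])  # 99 -- the inner list was shared!

# deepcopy copies the nested structure as well.
c = copy.deepcopy(a)
c[0][0] = 7
print(a[0][0])  # still 99, a is untouched
```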
## 8.7 - Exercises
Important Note: We've learned a lot up to this point. Your solutions should be combining (where necessary) everything we've learned so far. If you are unsure of something, don't be afraid to go back to previous chapters. It's completely normal not to remember everything this early on. The more practice you have the more of this stuff will stick!
### Question 1
Write a program that takes a string as input from the user, followed by a number and output each word which has a length greater than or equal to the number.
The string will be a series of words such as "apple orange pear grape melon lemon"
You are to use the split() function to split the string
# EXAMPLE INPUT
"elephant cat dog mouse bear wolf lion horse"
5
# EXAMPLE OUTPUT
elephant
mouse
horse
You may use the following code at the beginning of your program
my_words = input().split()
num = int(input())
# REMEMBER: my_words is a list
### Question 2
Write a program that takes a string as input from the user, followed by another string (the suffix) and output each word that ends in the suffix.
The first string will be a series of words such as "apples oranges pears grapes lemons melons"
You are to use the split() function to split the string
# EXAMPLE INPUT
"apples oranges pears grapes lemons melons"
"es"
# EXAMPLE OUTPUT
apples
oranges
grapes
Hint: Your solution should be generalized for any suffix. Don't assume the length of the suffix will be 2
You may use the following code at the beginning of your program
words = input().split()
suffix = input()
# REMEMBER: words is a list
### Question 3
Write a program that builds a list of integers from user input. You should stop when the user enters 0.
Your program should print out the list.
# EXAMPLE INPUT
3
5
6
7
0
# EXAMPLE OUTPUT
[3, 5, 6, 7]
### Question 4
Building on your solution to Question 3: after building a list from user input, ask the user for two more pieces of input. Integers this time.
The integers represent indices. You are to swap the elements at the specified indices with each other.
# EXAMPLE INPUT
6
3
9
0
*********************************
* LIST SHOULD NOW BE: [6, 3, 9] *
*********************************
1
2
# EXAMPLE OUTPUT
[6, 9, 3]
You may assume that the indices will be within the index range of the list.
### Question 5
Write a program that builds a list of integers from user input. Your program should then find the smallest of those integers and swap it with the element at the first index (0) of the list. Input ends when the user inputs 0.
# EXAMPLE INPUT
10
87
11
5
65
342
12
0
# LIST SHOULD NOW BE: [10, 87, 11, 5, 65, 342, 12]
# EXAMPLE OUTPUT
[5, 87, 11, 10, 65, 342, 12]
### Question 6
** THIS IS A HARD PROBLEM **
Write a program that takes two sorted lists as input from the user and merge them together. The resulting list should also be sorted. Both of the input lists will be in increasing numerical order, the resulting list should be also.
# EXAMPLE INPUT
2
6
34
90
0 # END OF FIRST INPUT
5
34
34
77
98
0 # END OF SECOND INPUT
# EXAMPLE OUTPUT
[2, 5, 6, 34, 34, 34, 77, 90, 98]
Don't assume both lists will be the same length!
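If you get stuck, here is one possible approach as a sketch. This is my own solution outline, not the book's official answer, so try the problem yourself first. It walks both lists with two indices, repeatedly taking the smaller front element:

```python
# Merging two sorted lists with two indices (the classic "merge" step).
def merge(xs, ys):
    result = []
    i = j = 0
    while i < len(xs) and j < len(ys):
        if xs[i] <= ys[j]:
            result.append(xs[i])
            i += 1
        else:
            result.append(ys[j])
            j += 1
    # One of the lists is exhausted; append whatever remains of the other.
    result += xs[i:]
    result += ys[j:]
    return result

print(merge([2, 6, 34, 90], [5, 34, 34, 77, 98]))
# [2, 5, 6, 34, 34, 34, 77, 90, 98]
```

Reading the two lists from input (stopping at 0, as in Question 3) is left to you.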
https://g.vovososo.com/search?q=modified%20Bessel%20function

# Search results
modified Bessel function
## Featured snippet from the web
The modified Bessel function of the first kind is implemented in the Wolfram Language as BesselI[nu, z]. where the contour encloses the origin and is traversed in a counterclockwise direction (Arfken 1985, p. 416). (Abramowitz and Stegun 1972, p.
https://mathworld.wolfram.com › M...
### Modified Bessel Function of the First Kind - Wolfram MathWorld
2.1 Bessel functions of the first kind: J 2.1.1 Bessel's integrals · 2.2 Bessel functions of the second kind: Y · 2.3 Hankel functions: Hα, H · 2.4 Modified ...
## People also ask
Chapter 10 Bessel Functions · Notation. 10.1 Special Notation · Bessel and Hankel Functions. 10.2 Definitions · 10.3 Graphics · 10.4 Connection Formulas · 10.5 ...
These functions are solutions of the frequently encountered modified Bessel equation, which arises in a variety of physically important problems,. •. Kν(x) will ...
to as a modified Bessel function of the first kind. b) Second Kind: Kν(x) in the solution to the modified Bessel's equation is re- ferred to as a modified ...
Modified Bessel Function of the First Kind · $I_n(x) \equiv i^{-n}\cdots$ · $I_\nu(z) = \cdots$
Modified Bessel equation. It can be reduced to the Bessel equation by means of the substitution $x = i\bar{x}$, where $i^2 = -1$. Solution: $y = C_1 I_\nu(x) + C_2 K_\nu(x)$.
http://math.stackexchange.com/users/48243/mostafiz

# mostafiz
Website: linkedin.com/in/mostafiz93 · Location: Dhaka, Bangladesh · Age: 21 · Member for: 2 years, 1 month · Last seen: Dec 17 at 9:47 · Profile views: 24
Hi, this is Mostafiz. Currently I'm reading Computer Science and Engineering in Bangladesh University of Engineering and Technology(BUET).
# 4 Questions
- (6 votes) How can I find the value of $a^n+b^n$, given the value of $a+b$, $ab$, and $n$?
- (2 votes) How to calculate $1^k+2^k+3^k+\cdots+N^k$ with given values of $N$ and $k$? [duplicate]
- (1 vote) How to solve two recurrences dependent on each other
- (0 votes) Does the line connecting the mid-points of two opposite sides of a quadrilateral divide it equally?
# 162 Reputation
- +10 Initial value of Newton Raphson Method
- +10 How to calculate $1^k+2^k+3^k+\cdots+N^k$ with given values of $N$ and $k$?
- +30 How can I find the value of $a^n+b^n$, given the value of $a+b$, $ab$, and $n$?
- +5 How to solve two recurrences dependent on each other
1 Initial value of Newton Raphson Method
# 12 Tags
- 1 discrete-mathematics
- 0 recreational-mathematics × 2
- 1 numerical-methods
- 0 elementary-number-theory
- 1 mathematical-physics
- 0 number-theory
- 0 algorithms × 3
- 0 sequences-and-series
- 0 recurrence-relations × 2
- 0 euclidean-geometry
# 18 Accounts
Stack Overflow 429 rep 615 Mathematics 162 rep 8 Ask Ubuntu 110 rep 8 English Language & Usage 106 rep 3 WordPress Development 103 rep 2
http://math.stackexchange.com/questions/49417/diverging-random-walk

# Diverging random walk
I have a process $X_{n+1} = X_n\xi_n$ where $\xi_n\sim\mathcal N(1,1)$ and $\xi_n$ is independent of $X_n$. I need to prove that if $X_0\neq0$ then $$\mathsf P\{|X_n|>1\text{ for some }n\geq0\} = 1.$$ From this I construct a random walk: $Y_n = \log|X_n|$ so $$Y_{n+1} = Y_n+\eta_n$$ where $\eta_n = \log|\xi_n|$. I guess that from here I should apply the Law of Large Numbers, but I'm stuck with it. Could you help me? So now I should prove that $Y_n$ will eventually be positive a.s. starting from any point.
On the other hand, $X_n$ is a martingale which maybe also useful for deriving the desired result. If it helps, one can take $\xi_n\sim\mathcal N(m,1)$ for some $m\geq1$.
Are you sure that the normal is $\mathcal N(1,1)$ (mean 1)? – leonbloy Jul 4 '11 at 14:54
Yes, it's not a misprint. – Ilya Jul 4 '11 at 14:54
The random variable $\log|\xi|$ is integrable. – Did Jul 4 '11 at 15:09
So your question reduces to determining whether $E\log|\xi|$ is $\ge0$ (you win) or $<0$ (you lose). – Did Jul 4 '11 at 15:26
WolframAlpha? Yes you lose for $m=1$, but you win for $m\ge1.3$ or something, though. – Did Jul 4 '11 at 15:46
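As a side note (my own numerical check, not part of the original thread): the quantity in Did's comments, $E\log|\xi|$ for $\xi\sim\mathcal N(m,1)$, can be estimated by simulation, and it is indeed negative at $m=1$:

```python
import math
import random

# Monte Carlo estimate of E[log|xi|] for xi ~ N(m, 1).
def estimate_mean_log_abs(m, n=200_000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(rng.gauss(m, 1)))
    return total / n

est = estimate_mean_log_abs(1.0)
print(est)  # negative: the walk Y_n drifts down, so "you lose" at m = 1
```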
Didier Piau perfectly showed an equivalence of this problem and unboundness of a random walk and also gave a solution for the latter problem in this question: When random walk is upper unbounded
What is the relationship between the $\xi_n$ sequence and the $X_n$ sequence? The statement is clearly false when $\xi_n = 1/X_n$.
I assume $\xi_n$ is independent of $x_n$ – leonbloy Jul 4 '11 at 15:26
Yes, they are. Edited. – Ilya Jul 4 '11 at 15:39
Is the r.v. $\xi_n$ independent of r.v. $X_n$ or the sequence $\{\xi_n\}$ independent of the sequence $\{X_n\}$? These are different conditions. Also, are the $\xi_n$ assumed to be i.i.d? Again, the statement can be false if the $\xi_n$ have some sort of relationship among themselves. – user765195 Jul 4 '11 at 20:11
$\xi_n\perp \mathcal F_n = \sigma\{X_k,k\leq n\}$ and $\xi_n$ are iid r.v. – Ilya Jul 5 '11 at 6:48
Hint:
$E(x_{n} | x_{n-1}) = x_{n-1} E(\xi_{n}) = x_{n-1}$
$E(x_n) = E( E(x_{n} | x_{n-1})) = E(x_{n-1})$
Hence $E(x_n)=x_0$
Doing the same for $E(x_n^2)$ we can show that the variance tends to infinity with $n$. Then, you are done (we are not done, we need more than this).
Do you mean I should apply Chebyshev inequality? – Ilya Jul 4 '11 at 15:06
MMmm no, come to think of it, we need something stronger than that. For example, a Cauchy variable has infinite variance but the probability that its modulus is above 1 is certainly not 1. So my "you're done" sentence is false. – leonbloy Jul 4 '11 at 15:18
https://moodle.org/plugins/view.php?plugin=format_onetopic&moodle_version=1

## Course formats: Onetopic format
format_onetopic
Maintained by David Herney Bernal
A course format that shows each topic in a tab, keeping the current tab between calls to resources, so that when you return from a module such as the blog or the glossary you come back to the tab you started from. This format is based on the standard Moodle "Topics" format. It supports editing via AJAX.
3445 sites
36 fans
Developed by: David Herney Bernal García - davidherney at gmail dot com
### Contributors
David Herney Bernal (Lead maintainer)
Please login to view contributors details and/or to contact them
• Thu, 29 Dec 2016, 2:51 PM
Hello,
Installed it on my newly created 3.2 site hoping for the best while using the new Boost theme, but it didn't quite work right. The content of the tabs is being rendered as raw HTML source rather than as formatted HTML. Happy to send screenshots if that would be helpful.
Hoping for a new release soon
Thanks!
• Thu, 12 Jan 2017, 6:52 AM
Hello!!!
When are you planning to release this format for Moodle 3.2?
Thanks!
• Thu, 19 Jan 2017, 6:36 PM
Hi,
We use your format very frequently so thanks for all the work! At the moment I am experiencing the same issue as Shalimar. The topics are showing as HTML code when using Moodle 3.2 with the Boost them. Would be very helpful if a new version with a fix could be added. Thanks again! Peter
• Fri, 20 Jan 2017, 12:51 AM
Hello,
I have found a small problem with this version of Onetopic on Moodle 3.1.4+.
Description of the problem:
Context: starting from a backup of a course made in Moodle 2.7.3 and restoring it directly into a Moodle 3.1.4 instance, the course settings are changed so that the course format becomes the tabs (Onetopic) format, which completes successfully.
At that point the tabs are all visible and active in the course view.
The problem shows up after editing any section (tab) of the course and selecting "Hide topic". Hiding works correctly: the whole section, with its activities and resources, is hidden. BUT when you want to re-activate the section, the interface and the "Edit" link are blocked, and the option to show the topic again cannot be used.
I hope someone can test my case in the same way: start from a backup of a course made in one Moodle version, restore it into another, then edit the course settings to change the format to tabs. You will then be able to reproduce my finding when a section is hidden and you try to show it again.
Thank you.
• Thu, 9 Feb 2017, 3:03 AM
Hi, we made a tweak for a customer to let it work on Moodle 3.2
Sourcecode:
https://github.com/luukverhoeven/format_onetopic
We rebuild the tabs raw html to html with some extra javascript. Note: this is a workaround.
A better solution would be to use html_writer:: instead of tabobject().
The tabobject doesn't support raw html input in moodle 3.2+ anymore.
I hope this helps.
• Fri, 10 Feb 2017, 5:03 AM
Hi Luuk...
It was fixed in Moodle directly by Stephen Bourget. You can check it in: https://tracker.moodle.org/browse/MDL-57728
Regards
• Fri, 10 Feb 2017, 6:08 AM
Nice, that they fixed it in the core of moodle. Then there is no need to change the tabobject anymore ;)
Greets Luuk
• Tue, 14 Feb 2017, 1:08 AM
Hello, I have installed moodle 3.2.1+ and I need help to how to install the fix on this version of moodle.
Greets Jorge.
• Fri, 17 Feb 2017, 12:50 AM
Hi. Thanks for developing this very useful format. There is no News Forum by default, right? Was this intentional?
- David
• Fri, 17 Feb 2017, 1:26 AM
Hi David...
Yes, it is intentional. The Onetopic format doesn't add the news block, and it is that block that builds the News forum.
Regards
• Fri, 17 Feb 2017, 4:07 AM
Got it. I prefer this default setting, with the ability to add a News Forum via the "Latest News" block.
Thanks, David, from David
• Mon, 27 Feb 2017, 11:09 AM
Hi David,
We love this format and have set it as our default for all new courses. Thanks heaps for making it.
We're getting the same issue as Iñigo Zendegi Urzelai above (Section 0 is hidden with no means of changing it), and our server was updated to 3.1.3+ in December (from 2.8). The course that is encountering the issue (we have only had one course with the problem so far) was originally in Topics format (if that helps at all).
Any idea what might be happening?
Thanks again,
Tim
• Wed, 22 Mar 2017, 5:38 AM
OneTopic Format and "Coding error detected, it must be fixed by a programmer" error?
I wonder if anyone in this community is also randomly receiving this error, when using oneTopic Format?
https://www.dropbox.com/s/kzlczoyxm1yrrrp/2017-03-08_10-23-30.png?dl=0
Our Moodle host is hinting that it might be related to the OneTopic format:
"..... At this point, they're thinking the most likely candidate is the OneTopic format as they can see evidence of it using session objects and heavily using the Page object, but can't conclusively demonstrate a link. ....."
Thanks in advance, Greg
• Fri, 24 Mar 2017, 7:39 AM
Hi David, thanks for this excellent plugin! I just wanted to enquire about this line in the styles.css file:
.format-onetopic .tab_content.marker {font-style: italic;}
I don't really understand the purpose of italicising the font on one of the tabs, and from a design perspective, we prefer no italics on our webpages unless it's indicating a citation. So every time we upgrade, we have to comment out this line of code. So I guess I'm just wondering if it is essential and if there is any chance that it could be removed from future versions?
• Fri, 24 Mar 2017, 7:43 AM
Also, I noticed a minor error in the line: .format-onetopic ul.nav-tabs li a .tab_content.dimmed { color: #999; opactity: 0.5;}. It should be 'opacity', not 'opactity'.
http://mathoverflow.net/questions/170352/visibility-of-vertices-in-polyhedra

# Visibility of vertices in polyhedra
Suppose $P$ is a closed polyhedron in space (i.e. a union of polygons which is homeomorphic to $S^2$) and $X$ is an interior point of $P$. Is it true that $X$ can see at least one vertex of $P$? More precisely, does the entire open segment between $X$ and some other vertex lie in the interior of $P$?
## 3 Answers
There are many points in the interior of this polyhedron, constructed (independently) by Raimund Seidel and Bill Thurston, that see no vertices. Interior regions are cubical spaces with "beams" from the indentations passing above and below, left and right, fore and aft. Standing in one of these cubical cells, you are surrounded by these beams and can see little else.
The indentations visible are not holes, in that they do not go all the way through, but rather stop just short of penetrating to the other side. So the three back faces of the surrounding cube, obscured in this view, are in fact squares. Thus $P$ is indeed homeomorphic to a sphere.
Figure from: Discrete and Computational Geometry (book link).
To follow Tony Huynh's point: This polyhedron $P$ cannot be tetrahedralized, i.e., cannot be partitioned into tetrahedra all of whose corners are vertices of $P$.
This is an interesting example but does not exactly answer the question, as the polyhedron is assumed to be homeomorphic to the sphere. – Benoît Kloeckner Jun 8 at 14:38
@BenoîtKloeckner: That polyhedron is homeomorphic to a sphere. Those indentations do not go all the way through, but rather stop just short of penetrating to the other side. Admittedly, that is not obvious from the figure. – Joseph O'Rourke Jun 8 at 14:40
Tried to clarify this point... – Joseph O'Rourke Jun 8 at 14:46
I get it, nice! – Benoît Kloeckner Jun 8 at 14:49
Note that the answer is yes in 2 dimensions, since any polygon can be triangulated (without adding additional vertices). Thus, every point in the interior sees at least 3 vertices of $P$.
One can attempt to do the same thing in 3 dimensions, but somewhat surprisingly there exist polyhedra that cannot be decomposed into tetrahedra (without adding additional vertices). See here, where they show that the problem of deciding whether a 3-dimensional polyhedron can be decomposed into tetrahedra is NP-complete. The references in that paper might be helpful.
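The 2-dimensional claim is easy to check computationally. Below is a small sketch (not from the thread) that finds which vertices of a simple polygon an interior point can see, by testing whether the sight line properly crosses any non-incident edge. It assumes the query point is interior and that the configuration is in general position (the sight line never grazes a vertex or runs along an edge).

```python
def cross(o, a, b):
    """Signed area of the triangle o-a-b (positive if counter-clockwise)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def properly_cross(p, q, a, b):
    """True if the open segments p-q and a-b cross at a single interior point."""
    d1, d2 = cross(p, q, a), cross(p, q, b)
    d3, d4 = cross(a, b, p), cross(a, b, q)
    opposite = lambda x, y: (x > 0 and y < 0) or (x < 0 and y > 0)
    return opposite(d1, d2) and opposite(d3, d4)

def visible_vertices(polygon, p):
    """Vertices of a simple polygon seen from the interior point p.

    Assumes general position: degenerate tangencies are not handled.
    """
    n = len(polygon)
    edges = [(polygon[i], polygon[(i + 1) % n]) for i in range(n)]
    return [v for v in polygon
            if not any(properly_cross(p, v, a, b)
                       for a, b in edges if v != a and v != b)]

# L-shaped (non-convex) test polygon, vertices counter-clockwise
L_SHAPE = [(0, 0), (4, 0), (4, 1), (1, 1), (1, 4), (0, 4)]
```

On this L-shape, a point deep in the horizontal arm such as (3.5, 0.5) loses sight of the two far vertices of the vertical arm, while a point near the inner corner such as (0.5, 0.5) sees every vertex — but, as the answers above show, no such guarantee survives in 3 dimensions.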
So, by taking a slice through the point, every point in the interior sees points in the $n-2$-skeleton. – Douglas Zare Jun 8 at 16:33
I have a simpler example and I see that its idea is similar to the above one.
Cut the vertices of a cube to form 8 small triangles and suppose the triangles are rigid but the faces are not. Then rotate the triangles, 4 of them clockwise and the rest counter-clockwise, in an alternating manner.
The images are drawn using Geogebra 3D.
Nice animation! – Joseph O'Rourke Jun 12 at 12:00
https://www.bas.ac.uk/data/our-data/publication/a-minimal-substorm-model-that-explains-the-observed-statistical-distribution/ | # A minimal substorm model that explains the observed statistical distribution of times between substorms
A minimal model for the evolution of the global dynamical state of the magnetotail during the substorm has been developed, involving only three simple rules and one free parameter D, the period between substorms under constant solar wind driving. The model is driven with a power input derived from solar wind observations from the Wind spacecraft between 1995 and 1998, to derive a sequence of simulated substorm onsets. For values of D between 2.6 h and 2.9 h, the probability distribution of waiting times between successive simulated substorm onsets is not significantly different to an empirical distribution derived from energetic particle observations at geostationary orbit in 1982-3. Similar results are obtained using solar wind data from the ACE spacecraft between 1998 and 2002. Thus, we argue that the minimal substorm model provides a useful statistical and physical description of the timing of substorm onsets and possibly other substorm properties.
### Details
Publication status:
Published
Author(s):
Authors: Freeman, M.P., Morley, S.K.
On this site: Mervyn Freeman, Simon Morley
Date:
1 January, 2004
Journal/Source:
Geophysical Research Letters / 31
Page(s):
4pp
Digital Object Identifier (DOI):
https://doi.org/10.1029/2004GL019989
http://mathhelpforum.com/advanced-statistics/92504-point-estimates-endpoints-confidence-interval-print.html | # point estimates and endpoints of confidence interval
• Jun 10th 2009, 05:45 PM
chrissy72
point estimates and endpoints of confidence interval
distribution of x is N(mu,4) n=10:
55.95
56.54
57.58
55.13
57.48
56.06
59.93
58.30
52.57
58.46
point estimate?
endpoints for 95% confidence interval for mu?
probability when one is selected at random that it is less than 52?
i'm lost and would loooove some help..... thankyou
• Jun 10th 2009, 06:15 PM
Random Variable
$\bar{x} \pm z_{0.025} \ \Big( \frac {\sigma}{\sqrt{n}}\Big)$
where $\bar{x}$ is the sample mean (which you have to calculate), $\sigma$ is the standard deviation, and $n$ is the sample size
• Jun 10th 2009, 06:30 PM
chrissy72
i got a mean of 56.83, a variance of 3.814 and a standard deviation of 1.95295 but don't know what to do with them. where do you get the z0.025 from? thankyou for helping me!
• Jun 10th 2009, 07:07 PM
Random Variable
$\sigma$ is the standard deviation of the population, not the sample. So in this problem $\sigma = \sqrt{4} = 2$.
You get $z_{0.025}$ from a standard normal distribution table. There is one in just about every statistics textbook. It's the value of z such that P(Z>z) = 0.025. The value is 1.96.
Have you gone over this stuff in class?
• Jun 10th 2009, 08:33 PM
matheagle
Chrissy, plug in 1.96 into Free One Tailed Area Under the Standard Normal Curve Calculator
and see that the area to the right of 1.96 is .0249978.
• Jun 13th 2009, 01:15 PM
chrissy72
All I have is a textbook. No class. Would the average of the 10 values be considered a point estimate of $\mu$? Also, if the area to the right of 1.96 is 0.249978, would the endpoints of the 95% confidence interval be 1.9849978 on the right and 1.96 or 1.9350022 on the left?
Thanks. I really don't know what I am doing and my text is no help. I need an actual example to learn from but it is very hard to find. I don't have enough calculus OR statistic background to solve these problems without a better book OR an actual teacher. Any tutors out there? =)
• Jun 13th 2009, 01:45 PM
Random Variable
From the data you calculated the sample mean ( $\bar{x}$) to be 56.83.
So a 95% confidence inteveral for the population mean ( $\mu$) is
$\Big[56.83 - 1.96* \frac{2}{\sqrt{10}}, \ 56.83 + 1.96*\frac{2}{\sqrt{10}}\Big]$
[55.59, 58.07]
Now either this interval contains $\mu$ or it doesn't. But if you constructed many such intervals by taking different samples of the same size, 95% of them would contain $\mu$.
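For anyone following along, the whole calculation can be checked in a few lines (a sketch using σ = 2 and z₀.₀₂₅ = 1.96; note the mean of the ten listed values comes out to 56.80, a touch below the 56.83 used above, so the endpoints land a few hundredths lower than [55.59, 58.07]):

```python
import math

data = [55.95, 56.54, 57.58, 55.13, 57.48, 56.06,
        59.93, 58.30, 52.57, 58.46]
sigma = 2.0   # population sd, since the distribution is N(mu, 4)
z = 1.96      # z_{0.025} from a standard normal table

xbar = sum(data) / len(data)               # point estimate of mu: 56.80
margin = z * sigma / math.sqrt(len(data))  # about 1.24
ci = (xbar - margin, xbar + margin)        # about (55.56, 58.04)
```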
• Jun 13th 2009, 01:59 PM
chrissy72
you are wonderful! Thankyou. So would that in fact be considered a point estimate (56.83)? I'm not clear on exactly what that is. My book only mentions that it is a point rather than an interval, but I don't know which point. Also, would the probability of randomly choosing a variable less than 52 be simply 0? Thanks once again! you have made this so much easier to understand.
• Jun 13th 2009, 02:32 PM
Random Variable
EDIT:
If $X$ is $N(\mu, \sigma^{2})$, then $\frac {X-\mu}{\sigma}$ is $N(0,1)$ (or just $Z$) .
So $P(X<52) = P\Big(\frac{X -\mu}{\sigma} < \frac{52 - \mu}{2} \Big) = P\Big(Z < \frac{52 - \mu}{2}\Big)$
But that can't be answered unless you know $\mu$
• Jun 13th 2009, 02:40 PM
chrissy72
I feel so silly, I think I can just do the old (52-56.83)/2=-2.415, but 2.415 is not on my table so 2.42 becomes 0.0078. I'm not sure how to write that out. Is it $\Phi(-2.42)=0.0078$?
• Jun 13th 2009, 09:19 PM
matheagle
you can plug 2.415 into....
Free One Tailed Area Under the Standard Normal Curve Calculator
and get .0078676
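The same number can be reproduced without a web calculator via the error function (a sketch, plugging in the thread's sample mean of 56.83 and σ = 2):

```python
import math

def phi(z):
    """Standard normal CDF, written in terms of the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

z = (52 - 56.83) / 2.0   # = -2.415
p = phi(z)               # about 0.00787, matching the calculator
```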
• Jun 13th 2009, 09:37 PM
Random Variable
But I'm fairly certain (which probably means I'm wrong) that $\frac {X-\bar{x}}{\sigma}$ is not N(0,1).
• Jun 13th 2009, 09:55 PM
matheagle
It has mean zero and it is a normal.
I doubt the variance is 1.
Is this X part of the sample, where $\bar X$ was derived from or a different observaton?
If it's part of the sample, then ....
$V\biggl({X_1-\bar X\over \sigma}\biggr)={V\biggl(X_1-\bar X\biggr) \over \sigma^2}$
$={V\biggl(X_1\bigl(1-{1\over n}\bigr)-{\sum_{k=2}^n X_k\over n}\biggr) \over \sigma^2}$
$={\bigl(1-{1\over n}\bigr)^2\sigma^2 +{n-1\over n^2}\sigma^2 \over \sigma^2}$
$=1-{1\over n}$
Otherwise ...
$V\biggl({X_{n+1}-\bar X\over \sigma}\biggr)={\sigma^2+{\sigma^2\over n} \over \sigma^2}=1+{1\over n}$
which is exactly what you get in a prediction interval, inside the square root of course.
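The algebra above can be sanity-checked with a quick Monte Carlo run (a sketch; it draws samples with σ = 2 and n = 10, where the variance of $(X_1-\bar X)/\sigma$ should come out near 1 - 1/n = 0.9):

```python
import random
import statistics

random.seed(1)
n, sigma, trials = 10, 2.0, 50_000

ratios = []
for _ in range(trials):
    sample = [random.gauss(0.0, sigma) for _ in range(n)]
    xbar = sum(sample) / n
    ratios.append((sample[0] - xbar) / sigma)   # (X1 - Xbar) / sigma

v = statistics.pvariance(ratios)   # should be close to 1 - 1/n = 0.9
```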
https://blogs.exeter.ac.uk/iaus335/scientific-program/poster-program/ | # Poster Program
Note: The poster reference will be the one matching a poster board number. Details on poster viewing, author attendance and preparation guidelines can be found here.
[table colwidth=”5|35|110″ colalign=”left|left|left”]
Ref., Presenting Author, Poster Title
Session 1. Solar drivers and activity levels[attr colspan=”3″]
P1-01,Mahender Aroori,Quiet sun radiation during solar cycle 23 and 24
P1-02,Allan Sacha Brun,The Solar Dynamo and its Many Variabilities
P1-03,Jack Carlyle,Weighing Silhouettes: The Mass of Solar Filaments
P1-04,Nai-Hwa Chen,Temperature of source regions of 3He-rich impulsive solar energetic particles events
P1-05,Bernhard Fleck,First Results from the 2016-2017 MOTH-II South Pole Campaign
P1-06,Tadhg Garton,Multi-Thermal Segmentation And Identification of Coronal Holes
P1-07,Gareth Hawkes,Magnetic Helicity Flux As A Predictor Of The Solar Cycle
P1-08,Andrew Hillier,Observations of MHD Turbulence in Solar Prominences
P1-09,Petra Kohutova,Simulating The Dynamics Of Coronal Plasma Condensations
P1-10,Konstantina Loumou,Solar flare association with the Hale Sector Boundary
P1-11,Helen Mason,Spectroscopic Diagnostics of small flares and jets
P1-12,Daniel Miller,Alignment as an indicator of changes to modal structure within the Roberts flow
P1-13,Karin Muglach,Photospheric Magnetic Field Evolution And Flow Field Of Coronal Hole Jets
P1-14,Irina Myagkova,Hard X-Ray Emission of Solar Flares Measured by Lomonosov Space Mission
P1-15,Aleksandra Osipova,The Waldmeier Effect For Two Populations Of Sunspots
P1-16,Vemareddy Panditi,Research on solar drivers of space-weather: sun-earth connection of Magnetic Flux Ropes
P1-17,Nandita Srivastava,On the Dynamics of the Largest Active Region of the Solar Cycle 24
P1-18,Jianfei Tang,Propagation and Absorption of Electron Cyclotron Maser Emission Driven by Power-law Electrons
P1-19,Erwin Verwichte,Excitation and evolution of transverse loop oscillations by coronal rain
P1-20,Nicole Vilmer,Reliability of Photospheric Eruptive Proxies Using Parametric Flux Emergence Simulations
P1-21,Maria Weber,Simulations of Magnetic Flux Emergence in Cool Stars
P1-22,Matthew West,Further Exploration Of Post-Flare Giant Arches
P1-23,Yan Yan,Comparative study on the statistical characteristics between solar-type star flares and solar flares
Session 2. Solar wind and heliosphere[attr colspan=”3″]
P2-01,Tanja Amerstorfer,Arrival Prediction of a Coronal Mass Ejection as observed from Heliospheric Imagers at L1
P2-02,Luke Barnard,Testing The Current Paradigm Of Space Weather Prediction With The Heliospheric Imagers
P2-03,Mario Bisi,The Worldwide Interplanetary Scintillation (IPS) Stations (WIPSS) Network October 2016 Campaign: LOFAR IPS Data Analyses
P2-04,Sergio Dasso,Superposed Epoch Study of Magnetic Clouds And Their Driven Shocks/Sheaths Near Earth
P2-05,Andrzej Fludra,Testing Models Of The Fast Solar Wind Using Spectroscopic And Heliospheric In Situ Observations
P2-06,Pavel Gritsyk,Electron acceleration in collapsing magnetic traps during the solar flare on July 19- 2012: observations and models
P2-07,Karine Issautier,Measuring the Solar Wind Electron Temperature Anisotropy using the Quasi-thermal Noise Spectroscopy method on WIND
P2-08,Daniel Johnson,Heliospheric Magnetic Field And Solar Wind Behaviour During Solar Cycle 23-24
P2-09,Olga Khabarova,Conic Current Sheets As Sources Of Energetic Particles In The High-Latitude Heliosphere In Solar Minima
P2-10,Olga Malandraki,Compositional Analysis Within The ‘HESPERIA’ HORIZON 2020 Project to Diagnose Large Solar Energetic Particle Events During Solar Cycle 23
P2-11,Pradiphat Muangha,Ground Level Enhancements in Solar Energetic Particles Observed by IceTop during 2011 to 2016
P2-12,Milton Munroe,A Behavioural Model Of The Solar Magnetic Cycle
P2-13,Nariaki Nitta,Earth-Affecting Coronal Mass Ejections Without Obvious Low Coronal Signatures
P2-14,Barbara Perri,Quasi-Static And Dynamical Simulations Of The Solar Wind Over An 11-Year Cycle
P2-15,Alexei Struminsky,Cosmic Rays near Proxima Centauri b
P2-16,Aline Vidotto,The Solar Wind Through Time
P2-17,David Webb,Understanding Problem Forecasts of ISEST Campaign Flare-CME Events
Session 3. Impact of solar wind structures and radiation on magnetospheres[attr colspan=”3″]
P3-01,Franklin Aldás,Analysis Of Variations Of Earth´S Magnetic Field Produced By Equatorial Electro-Jets In Sudamerica
P3-02,Thamer Alrefay,Testing the Earth’S Bow Shock Models
P3-03,Sergio Dasso,Statistical Analysis of Extreme Electron Fluxes in the Radiation Belts: Observations from ICARE-NG/CARMEN-1- SAC-D
P3-04,Yongqiang Hao,Detection Of Plasmaspheric Compression By Interplanetary Shock Using GPS TEC Technique
P3-05,Rungployphan Kieokaew,Magnetic Curvature and Vorticity Four-Spacecraft Analyses on Kelvin-Helmholtz Waves: a MHD Simulation Study
P3-06,Galina Kotova,Physics-based modeling of the density distribution in the whole plasmasphere using measurements along a single pass of an orbiter
P3-07,Stefania Lepidi,Ground And Space Observations To Determine The Location Of Locally Vertical Geomagnetic Field
P3-08,Stefania Lepidi,Determining The Polar Cusp Longitudinal Location From Pc5 Geomagnetic Field Measurements At A Pair Of High Latitude Stations
P3-09,Nigel Meredith,Extreme Relativistic Electron Fluxes in the Earth’s Outer Radiation Belt: Analysis of INTEGRAL IREM Data
P3-10,Gabrielle Provan,Planetary Period Oscillations In Saturn’s Magnetosphere: Examining The Relationship Between Changes In Behavior And The Solar Wind.
P3-11,Pat Reiff,MHD Modeling of MMS Reconnection Sites
P3-12,Davide Rozza,GeoMagSphere Model Applied During Solar Events: A Study Of Cosmic Rays Detector From The International Space Station
Session 4. Impact of solar wind structures and radiation on ionospheres and atmospheres[attr colspan=”3″]
P4-01,Roshni Atulkar,Magnetic storm effects on the variation of TEC over low- mid and high latitude station
P4-02,Binod Bhattarai,Effect Of Geomagnetic Super Substorm At Low Latitude Stations
P4-03,Aziza Bounhir,Climatology of thermospheric neutral temperatures over Oukaïmeden Observatory in Morocco
P4-04,Rimpei Chiba,Sputtering Of Wollastonite By Solar Wind Ions
P4-05,Yongqiang Hao,Changes Of Solar Extreme Ultraviolet Irradiance In Solar Cycle 23 and 24
P4-06,Nadia Imtiaz,Particle-in-cell Modeling of CubeSat and Ionospheric Plasma Interaction
P4-07,Jung Hee Kim,Possible Influence Of The Solar Eclipse On The Global Geomagnetic Field
P4-08,Mai Mai Lam,The temperature signature of an IMF-driven change to the global atmospheric electric circuit (GEC) in the Antarctic troposphere
P4-09,Ayomide Olabode,An Investigation of Total Electron Content at Low and Mid Latitude Stations
P4-10,Ayomide Olabode,Geomagnetic Storm Main Phase Effect on the Equatorial Ionosphere over Ile-Ife as measured from GPS Observations
P4-11,Ayomide Olabode,Solar Activity effect on Ionospheric Total Electron Content (TEC) during Different Geomagnetic Activity in Low-Latitudes
P4-12,Pramod Kumar Purohit,Solar cycle variation and its impact on Critical Frequency of F2 layer
P4-14,Olga Sheiner,Effect Of Solar Coronal Mass Ejections On The Ionosphere
P4-15,Dadaso Shetti,Equatorial Plasma Bubbles observations during quit and disturb period over low Latitude region
P4-16,Manuela Temmer,Statistical analysis on how CME and SIR/CIR events effect the geomagnetic activity and the Earth’s thermosphere
P4-17,Donghe Zhang,The Variability Of The Solar EUV Irradiance And Its Possible Contribution To The Ionospheric Variability During Solar Flare
Session 5. Long-term trends and predictions for space weather [attr colspan=”3″]
P5-01,Melinda Dósa,Long-term longitudinal recurrences of the open magnetic flux density in the heliosphere
P5-02,Heather Elliott,Kp – Solar Wind Speed Relationship: Implications for Long-Term Forecasts
P5-03,Elijah Falayi,Study of geomagnetic induced current at high latitude during the storm- time variation
P5-04,Frederick Gent,Interpreting a millennium solar – like dynamo with the test – field method
P5-05,Romaric Gravet,Observed UV contrast of magnetic features and implications for solar irradiance modelling
P5-06,Norbert Gyenge,On Active Longitudes and their Relation to Loci of Coronal Mass Ejections
P5-07,Ching Pui Hung,Reconstructing the Solar Meridional Circulation from 1976 up to Now
P5-08,Mike Lockwood,Effects of Solar Variability on Global and Regional Climates
P5-09,Sushant Mahajan,Using Torsional Oscillations to Forecast Solar Activity
P5-10,Victor U. J. Nwankwo,Analysis of Long-term Trend of Space Weather-Induced Enhancement of Atmospheric Drag on LEO Satellites
P5-11,Vaibhav Pant,Kinematics of fast and slow CMEs in solar cycle 23 and 24
P5-12,Chris Russell,Long-term Observations of Solar Wind Using STEREO Data
P5-13,Mikhail Vokhmyanin,Sunspots areas and heliographic positions on the drawings made by Galileo Galilei in 1612
P5-14,Mikhail Vokhmyanin,Regularities of the IMF sector structure in the last 170 years
P5-15,Valentina Zharkova,Reinforcement of the double dynamo model of solar magnetic activity on a millennium timescale
Session 7. Forecasting models [attr colspan=”3″]
P7-01,Jordan Guerra Aguilera,Modeling Ensemble Forecasts of Solar Flares
P7-02,Roxane Barnabé,Prediction Of Solar Flares Using Data Assimilation In A Sandpile Model
P7-03,Zouhair Benkhaldoun,The Space Weather through a multidisciplinary scientific approach.
P7-04,Mitsue Den,Physics-Based Modeling Activity From The Solar Surface To Atmosphere Including Magnetosphere And Ionosphere At NICT
P7-05,Mark Dierckxsens,Assessing Space Weather Applications and Understanding: SEP Working Team and SEP Scoreboard
P7-06,Mark Dierckxsens,The SEP Forecast Tool Within The COMESEP Alert System
P7-07,Sean Elvidge,International Community-Wide Ionosphere Model Validation Study: foF2/hmF2/TEC prediction
P7-08,Sarah Glauert,Validating the BAS Radiation Belt Model Forecasts of the Electron Flux at Medium Earth Orbit
P7-09,Daniel Griffin,Numerical Effects Of Vertical Wave Propagation In Atmospheric Models
P7-10,Richard Horne,Forecasting Risk Indicators for Satellites By Integrating The BAS Radiation Belt Model and Radiation Effects Models
P7-11,Xin Huang,A deep learning based solar flare forecasting model
P7-12,Irina Knyazeva,Comparison Of Predictive Efficiency of LOS Magnetograms Topological Descriptors and SHARP Parameters in the Solar Flares Forecasting Task
P7-13,Marianna Korsos,On the evolution of pre-flare patterns in 3-dimensional real and simulated Active Regions
P7-14,Timo Laitinen,Forecasting Solar Energetic Particle Fluence with Multi-Spacecraft Observations
P7-15,KD Leka,Predicting the Where and the How Big of Solar Flares
P7-16,Olga Malandraki,The real-time SEP prediction tools within the framework of the ‘HESPERIA’ HORIZON 2020 project
P7-17,Olga Malandraki,Prediction of GLE events
P7-18,Aoife Mccloskey,Flare Forecasting and Sunspot Group Evolution
P7-19,Gianluca Napoletano,A Probabilistic Approach To ICME Propagation
P7-20,Ljubomir Nikolic,PFSS-based Solar Wind Forecast and the Radius of the Source-Surface
P7-21,Naoto Nishizuka,Solar Flare Prediction with Vector Magnetogram and Chromospheric Brightening using Machine-learning
P7-22,Tomoya Ogawa,AMR-MHD Simulation of CME Propagation in Solar Wind generated on Split Dodecahedron Grid
P7-23,Rui Pinto,SWiFT-FORECAST: A physics-based realtime solar wind forecast pipeline
P7-24,Camilla Scolini,Study of the September 4- 2010 Coronal Mass Ejection: Comparison of the EUHFORIA and ENLIL Predictive Capabilities
P7-25,Olga Sheiner,Ground-based Observations of Powerful Solar Flares Precursors
P7-26,Olga Sheiner,Solar Radio Emission As A Prediction Technique For Coronal Mass Ejections’ Registration
P7-27,Bill Swalwell,Solar Energetic Particle Event Forecasting Algorithms And Associated False Alarms
P7-28,Baolin Tan,Very Long-period Pulsations as a precursor of Solar Flares
P7-29,Yurdanur Tulunay,METU Data Driven Forecast Models: From the Window of Space Weather IAU Symposium 335
P7-30,Christine Verbeke,Assessing Space Weather Applications and Understanding: CME Arrival Time and Impact
Session 8. Space weather monitoring, instrumentation, data and services[attr colspan=”3″]
P8-01,Ciaran Beggan,SWIGS: a new research consortium to study Space Weather Impacts on Ground-based Systems
P8-02,Francesco Berrilli,SWERTO: a regional Space Weather service
P8-03,Francesco Berrilli,The Ionosphere Prediction Service
P8-04,Norma B. Crosby,ESA SSA Space Radiation Expert Service Centre: Human Space Flight
P8-05,Erwin De Donder,End User Requirements For Space Weather Services.
P8-06,Victor De La Luz,The Early Warning Mexican Space Weather System
P8-07,Richard Harrison,European-led Visible-light Coronal And Heliospheric Imaging Endeavours For An Operational Space Weather Mission
P8-08,Neil Hurlburt,Corona and the solar magnetic field observations for space weather forecasting
P8-09,Karine Issautier,CIRCUS CubSa
P8-10,David Jackson,The Met Office Space Weather Operations Centre (MOSWOC)
P8-11,Sophie Murray,Verification of Flare Forecasts at the Met Office Space Weather Operations Centre
P8-12,Danislav Sapundjiev,Advanced observatory for space-weather research and forecast at the Geophysical Center in Dourbes – Belgium
P8-13,Mike Thompson,COSMO: the Coronal Solar Magnetism Observatory
P8-14,Andrei Tlatov,Modeling and forecast of parameters of space weather based on ground observations of solar activity
P8-15,Vincenzo Vitale,The High-Energy Particle Detector on board of the CSES mission
P8-16,Yihua Yan,On Mingantu Spectral Radioheliograph for Space Weather Observations
[/table]
https://www.gamedev.net/articles/programming/general-and-gameplay-programming/how-to-write-a-2d-ufo-game-using-the-orx-portable-game-engine-part-4-r4857/ | • # Learning How to write a 2D UFO game using the Orx Portable Game Engine - Part 4
General and Gameplay Programming
# Creating Pickup Objects
This is part 4 of a series on creating a game with the Orx Portable Game Engine. Part 1 is here, and part 3 is here.
In our game, the player will be required to collect objects scattered around the playfield with the ufo.
When the ufo collides with one, the object will disappear, giving the impression that it has been picked up.
Begin by creating a config section for the graphic, and then the pickup object:
[PickupGraphic]
Texture = pickup.png
Pivot = center
[PickupObject]
Graphic = PickupGraphic
The graphic will use the image pickup.png which is located in the project's data/object folder.
It will also be pivoted in the center which will be handy for a rotation effect later.
Finally, the pickup object uses the pickup graphic. Nice and easy.
Our game will have eight pickup objects. We need a simple way to have eight of these objects in various places.
We will employ a nice trick to handle this. We will make an empty object, called PickupObjects which will hold eight copies of the pickup object as child objects.
That way, wherever the parent is moved, the children move with it.
[PickupObjects]
ChildList = PickupObject1 # PickupObject2 # PickupObject3 # PickupObject4 # PickupObject5 # PickupObject6 # PickupObject7 # PickupObject8
Position = (-400, -300, -0.1)
This object will have no graphic. That's ok. It can still act like any other object.
Notice the position. It is being positioned in the top left hand corner of the screen. All of the child objects PickupObject1 to PickupObject8 will be positioned relative to the parent in the top left corner.
Now to create the actual children. We'll use the inheritance trick again, and just use PickupObject as a template:
[PickupObject1@PickupObject]
Position = (370, 70, -0.1)
[PickupObject2@PickupObject]
Position = (210, 140, -0.1)
[PickupObject3@PickupObject]
Position = (115, 295, -0.1)
[PickupObject4@PickupObject]
Position = (215, 445, -0.1)
[PickupObject5@PickupObject]
Position = (400, 510, -0.1)
[PickupObject6@PickupObject]
Position = (550, 420, -0.1)
[PickupObject7@PickupObject]
Position = (660, 290, -0.1)
[PickupObject8@PickupObject]
Position = (550, 150, -0.1)
Each of the PickupObject* objects uses the properties defined in PickupObject. And the only difference between them are their Position properties.
The last thing to do is to create an instance of PickupObjects in code in the Init() function:
orxObject_CreateFromConfig("PickupObjects");
Compile and Run.
Eight pickup objects should appear on screen. Looking good.
It would look good if the pickups rotated slowly on screen, just to make them more interesting. This is very easy to achieve in Orx using FX.
FX can also be defined in config.
FX allows you to affect an object's position, colour, rotation, scaling and more; even sound can be affected by FX.
Change the PickupObject by adding a FXList property:
[PickupObject]
Graphic = PickupGraphic
FXList = SlowRotateFX
Clearly being an FXList you can have many types of FX placed on an object at the same time. We will only have one.
An FX is a collection of FX Slots. FX Slots are the actual effects themselves. Confused? Let's work through it. First, the FX:
[SlowRotateFX]
SlotList = SlowRotateFXSlot
Loop = true
This simply means, use some effect called SlowRotateFXSlot, and when it is done, do it again in a loop.
Next the slot (or effect):
[SlowRotateFXSlot]
Type = rotation
StartTime = 0
EndTime = 10
Curve = linear
StartValue = 0
EndValue = 360
That's a few properties. First, the Type, which is a rotation FX.
The total time for the FX is 10 seconds, which comes from the StartTime and EndTime properties.
The Curve type is linear so that the values changes are done so in a strict and even manner.
And the values which the curve uses over the 10 second period starts from 0 and climbs to 360.
Re-run and notice the pickups now turning slowly for 10 seconds and then repeating.
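The linear curve simply interpolates the value evenly across the slot's duration, starting over each time the FX loops. A quick sketch of the idea (plain Python for illustration, not Orx code):

```python
def rotation(t, start=0.0, end=360.0, duration=10.0):
    """Angle (degrees) at time t for a looping linear rotation FX."""
    return start + (end - start) * ((t % duration) / duration)

# halfway through the 10-second slot, the pickup has turned half a revolution
assert rotation(5.0) == 180.0
```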
# Picking up the collectable objects
Time to make the ufo collide with the pickups. In order for this to work (just like for the walls) the pickups need a body.
And the body needs to be set to collide with a ufo and vice versa.
First a body for the pickup template:
[PickupObject]
Graphic = PickupGraphic
FXList = SlowRotateFX
Body = PickupBody
Then the body section itself:
[PickupBody]
Dynamic = false
PartList = PickupPart
Just like the wall, the pickups are not dynamic. We don't want them bouncing and traveling around as a result of being hit by the ufo. They are static and need to stay in place if they are hit.
Next to define the PickupPart:
[PickupPart]
Type = sphere
Solid = false
SelfFlags = pickup
CheckMask = ufo
The pickup is sort of roundish, so we're going with a spherical type.
It is not solid. We want the ufo to be able to pass through it when it collides. It should not influence the ufo's travel at all.
The pickup is given a label of pickup and will only collide with an object with a label of ufo.
The ufo must reciprocate this arrangement (just like a good date) by adding pickup to its list of bodypart check masks:
[UfoBodyPart]
Type = sphere
Solid = true
SelfFlags = ufo
Friction = 1.2
CheckMask = wall # pickup
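Conceptually, two body parts collide only when each one's CheckMask contains the other's SelfFlags. A tiny sketch of that filtering rule using bit flags (illustrative only, not Orx's internal implementation):

```python
UFO, WALL, PICKUP = 1, 2, 4   # bit flags standing in for the config labels

def should_collide(self_a, check_a, self_b, check_b):
    """Two parts interact only if each one is 'looking for' the other's label."""
    return bool(check_a & self_b) and bool(check_b & self_a)

# the ufo (checks wall and pickup) registers contact with a pickup (checks ufo)
assert should_collide(UFO, WALL | PICKUP, PICKUP, UFO)
# a wall and a pickup ignore each other: neither checks the other's label
assert not should_collide(WALL, UFO, PICKUP, UFO)
```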
The bodies are now configured so that a collision will register whenever the ufo overlaps a pickup. But it's a little difficult to test this right now. However, you can turn the physics debug back on to check the body parts:
[Physics]
Gravity = (0, 0, 0)
ShowDebug = true
Re-run to see the body parts.
Switch off again:
[Physics]
Gravity = (0, 0, 0)
ShowDebug = false
To cause a code event to occur when the ufo hits a pickup, we need something new: a physics handler. The handler will run a function of our choosing whenever two objects collide.
We can test for these two objects to see if they are the ones we are interested in, and run some code if they are.
First, add the physics handler to the end of the Init() function:
orxClock_Register(orxClock_FindFirst(orx2F(-1.0f), orxCLOCK_TYPE_CORE), Update,
orxNULL, orxMODULE_ID_MAIN, orxCLOCK_PRIORITY_NORMAL);
orxEvent_AddHandler(orxEVENT_TYPE_PHYSICS, PhysicsEventHandler);
This will create a physics handler, and should any physics event occur (like two objects colliding), a function called PhysicsEventHandler will be executed.
Our new function will start as:
orxSTATUS orxFASTCALL PhysicsEventHandler(const orxEVENT *_pstEvent)
{
  if (_pstEvent->eID == orxPHYSICS_EVENT_CONTACT_ADD)
  {
    orxOBJECT *pstRecipientObject, *pstSenderObject;
    /* Gets colliding objects */
    pstRecipientObject = orxOBJECT(_pstEvent->hRecipient);
    pstSenderObject = orxOBJECT(_pstEvent->hSender);
    const orxSTRING recipientName = orxObject_GetName(pstRecipientObject);
    const orxSTRING senderName = orxObject_GetName(pstSenderObject);
    orxLOG("Object %s has collided with %s", senderName, recipientName);
  }
  return orxSTATUS_SUCCESS;
}
Every handler function passes an orxEVENT object in. This structure contains a lot of information about the event.
The eID is tested to ensure that the type of physics event that has occurred is an orxPHYSICS_EVENT_CONTACT_ADD, which indicates that two objects have collided.
If true, then two orxOBJECT variables are declared, then set from the orxEVENT structure. They are passed in as the hSender and hRecipient objects.
Next, two orxSTRINGs are declared and are set by getting the names of the objects using the orxObject_GetName function. The name that is returned is the section name from the config.
Potential candidates are: UfoObject, BackgroundObject, and PickupObject1 to PickupObject8.
The names are then sent to the console.
Finally, the function returns orxSTATUS_SUCCESS which is required by an event function.
Compile and run.
If you drive the ufo into a pickup or the edge of the playfield, a message will display on the console. So we know that all is working.
Next is to add code that removes a pickup from the playfield when the ufo collides with it. Normally we would compare the object's name to a single known name and act on a match.
In this case, however, the pickups are named different things: PickupObject1, PickupObject2, PickupObject3… up to PickupObject8.
So instead we'll check whether the name merely contains "PickupObject", which matches any of them.
In fact, we don't need to test for the “other” object in the pair of colliding objects. Ufo is a dynamic object and everything else on screen is static. So if anything collides with PickupObject*, it has to be the ufo. Therefore, we won't need to test for that.
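orxString_SearchString behaves like the standard C strstr: it returns a pointer to the first occurrence of the needle, or NULL when there is no match. The matching rule we're relying on can be sketched in plain C (is_pickup is an illustrative helper, not part of the tutorial code):

```c
#include <stdbool.h>
#include <string.h>

/* True when an object name contains "PickupObject" anywhere,
 * which covers PickupObject1 through PickupObject8. */
static bool is_pickup(const char *objectName)
{
    return strstr(objectName, "PickupObject") != NULL;
}
```

Names like "UfoObject" and "BackgroundObject" don't match, so those objects pass through the handler untouched.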
First, remove the orxLOG line. We don't need that anymore.
Change the function to become:
orxSTATUS orxFASTCALL PhysicsEventHandler(const orxEVENT *_pstEvent)
{
  if (_pstEvent->eID == orxPHYSICS_EVENT_CONTACT_ADD)
  {
    orxOBJECT *pstRecipientObject, *pstSenderObject;
    /* Gets colliding objects */
    pstRecipientObject = orxOBJECT(_pstEvent->hRecipient);
    pstSenderObject = orxOBJECT(_pstEvent->hSender);
    const orxSTRING recipientName = orxObject_GetName(pstRecipientObject);
    const orxSTRING senderName = orxObject_GetName(pstSenderObject);
    if (orxString_SearchString(recipientName, "PickupObject") != orxNULL) {
      /* Kill the pickup by setting its lifetime to 0 */
      orxObject_SetLifeTime(pstRecipientObject, orx2F(0));
    }
    if (orxString_SearchString(senderName, "PickupObject") != orxNULL) {
      /* Kill the pickup by setting its lifetime to 0 */
      orxObject_SetLifeTime(pstSenderObject, orx2F(0));
    }
  }
  return orxSTATUS_SUCCESS;
}
You can see the new code additions after the object names.
If an object name contains the word “PickupObject”, then the ufo must have collided with it. Therefore, we need to kill it off. The safest way to do this is by setting the object's lifetime to 0.
This will ensure the object is removed instantly and deleted by Orx in a safe manner.
Notice that the test is performed twice. Once, if the pickup object is the sender, and again if the object is the recipient.
Therefore we need to check and handle both.
Compile and run.
Move the ufo over the pickups and they should disappear nicely.
We'll leave it there for the moment. In the final installment, Part 5, we'll cover adding sounds, a score, and winning the game.
http://www.php.net/manual/fa/function.odbc-connect.php
# odbc_connect
(PHP 4, PHP 5)
odbc_connect - Connect to a datasource
### Description
resource odbc_connect ( string $dsn , string $user , string $password [, int $cursor_type ] )
The connection id returned by this function is needed by other ODBC functions. You can have multiple connections open at once as long as they use either different databases or different credentials.
With some ODBC drivers, executing a complex stored procedure may fail with an error similar to: "Cannot open a cursor on a stored procedure that has anything other than a single select statement in it". Using SQL_CUR_USE_ODBC may avoid that error. Also, some drivers don't support the optional row_number parameter in odbc_fetch_row(). SQL_CUR_USE_ODBC might help in that case, too.
### Parameters
dsn
The database source name for the connection. Alternatively, a DSN-less connection string can be used.
user
The username for the connection.
password
The password for the connection.
cursor_type
This sets the type of cursor to be used for this connection. This parameter is not normally needed, but can be useful for working around problems with some ODBC drivers.
The following constants are defined for cursor_type:
• SQL_CUR_USE_IF_NEEDED
• SQL_CUR_USE_ODBC
• SQL_CUR_USE_DRIVER
### Return Values
Returns an ODBC connection id or 0 (FALSE) on error.
### Examples
Example #1 DSN-less connections
<?php
// Microsoft SQL Server using the SQL Native Client 10.0 ODBC Driver - allows connection to SQL 7, 2000, 2005 and 2008
$connection = odbc_connect("Driver={SQL Server Native Client 10.0};Server=$server;Database=$database;", $user, $password);

// Microsoft Access
$connection = odbc_connect("Driver={Microsoft Access Driver (*.mdb)};Dbq=$mdbFilename", $user, $password);

// Microsoft Excel
$excelFile = realpath('C:/ExcelData.xls');
$excelDir = dirname($excelFile);
$connection = odbc_connect("Driver={Microsoft Excel Driver (*.xls)};DriverId=790;Dbq=$excelFile;DefaultDir=$excelDir", '', '');
?>

### See Also

• For persistent connections: odbc_pconnect() - Open a persistent database connection

### User Contributed Notes

65 notes

simonr at no2sp at m dot cogapp dot com
12 years ago
To make a DSN-less connection using ODBC to MS-SQL:

<?php
$connection_string = 'DRIVER={SQL Server};SERVER=<servername>;DATABASE=<databasename>';
$user = 'username';
$pass = 'password';
$connection = odbc_connect($connection_string, $user, $pass);
?>

servername is the name of the database server; databasename is the name of the database. Note, I've only tried this from a Windows box using the Microsoft ODBC drivers.
lffranco at dco.pemex.com
12 years ago
As always Microsoft is clueless... I've been trying to connect to an Access database on a W2K server on the network (not a local file, but mapped on the V: drive) via ODBC. All I got is this message:

Warning: SQL error: [Microsoft][ODBC Microsoft Access Driver] '(unknown)' is not a valid path. Make sure that the path name is spelled correctly and that you are connected to the server on which the file resides., SQL state S1009 in SQLConnect in d:\apache\cm\creaart.php on line 13

So... I started looking all around, and it looks like the ODBC driver has some severe problems:
1. It cannot access an Access database via a mapped drive. And this is for ANY application, name it PHP, ColdFusion, whatever.
2. You cannot make a system DSN with a UNC (\\Server\resource), so you must map the drive.
Cute, isn't it?
So... I quit on ODBC and went via ADO. This is the code that works:

=== CODE ===
$db = '\\\\server\\resource\\db.mdb';
$conn = new COM('ADODB.Connection');
$conn->Open("DRIVER={Driver do Microsoft Access (*.mdb)}; DBQ=$db");
// Driver do Microsoft Access (*.mdb)
// must be the name in your odbc drivers, the one you get
// from the Data Sources (ODBC).
// In this case, I'm in Mexico but the driver name is in Portuguese, thanks Microsoft.
$sql = 'SELECT username FROM tblUsuarios';
$res = $conn->Execute($sql);
while (!$res->EOF) {
    print $res->Fields['username']->Value . "<br>";
    $res->MoveNext();
}
$res->Close();
$conn->Close();
$res = null;
$conn = null;
=== /CODE ===
"You may need to include the <computer name> of the machine where the ODBC is, to the <local group> of the machine where the *.mdb is stored. And make sure that the <local group> has enough permission to access the *.mdb.hope this make somebody more happy!more power to opensource. harald dot angel at gmail dot com 9 years ago - Windows - OS- Apache- ODBC-Connction to MS-Access DB on a:- Network ShareAfter many hours searching here´s how it works:- Map the Network Drive where the mdb is located- Setup System DSN in Control Panel with mapped Drive- Open Registry at:HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC_INI- Edit the for example "M:\" to "\\server\..."- Close Regedit- The Apache-Service must run with a Domain (network)-User!!- After that you can connect using:$conn = odbc_connect('your-dsn-name','',''); hope that makes someone happy :)bye
aamaral at 0kbps dot net
11 years ago
Two additional notes regarding ODBC connections to a Network Sybase SQL Anywhere 8 Server..I wrote a script using the PHP5 CLI binary that monitors a directory for changes, then updates a Network Server SQL Anywhere 8 database when a change was detected. Idealy, my program would run indefinately, and issue odbc_connect()/odbc_close() when appropriate. However, it seems that once connected, your odbc session is limited to 30 seconds of active time, after which, the connection becomes stale, and no further queries can be executed. Instead, it returns a generic "Authentication violation" error from the odbc driver.Here's an example:<?php $conn=odbc_connect($connect_string,'',''); $result=odbc_exec($qry,$conn); //returns data sleep(31);$result=odbc_exec($qry,$conn); //"Authentication Violation"?>Additionally, it seems that odbc_close() doesn't truely close the connection (at least not using Network SQL Anywhere 8). The resource is no longer usable after the odbc_close() is issued, but as far as the server is concerned, there is still a connection present. The connection doesn't truely close until after the php script has ended, which is unfortunate, because a subsequent odbc_connect() commands appear to reuse the existing stale connection, which was supposedly closed.My workaround was to design my script exit entirely after a the database update had completed. I then called my script whithin a batch file and put it inside an endless loop.I'm not sure if this is a bug with PHP or what, but I thought I'd share in case someone else is pulling their hair out trying to figure this one out...
atze at o2o dot com dot au
12 years ago
To connect to a SQL DB on a unix server via ODBC, one can use one of the following solutions.

1. Having an odbc.ini (~/.odbc.ini):

[PostgreSQL]
Description = PostgreSQL template1
Driver = PostgreSQL
Trace = Yes
TraceFile = /tmp/odbc.log
Database = PerfTest
Servername = localhost
UserName = boss
Password = BigBoss
Port = 5432
Protocol = 6.4
ReadOnly = Yes
ConnSettings =
[Default]
Driver = /local/lib/libodbc.so

2. Specifying a DSN:

function DBCALL($SQL)
{
    $U = "boss";
    $DB = "PerfTest";
    $P = "BigBoss";
    $Srv = "localhost";
    $DSN = "Driver=PostgreSQL;Server=$Srv;Database=$DB";
    echo "Trying to connect to $DSN\n";
    if ($CID = odbc_connect("$DSN", "$U", "$P", SQL_CUR_USE_ODBC)) {
        echo "still trying CID = $CID\n";
        if ($RES = odbc_exec($CID, $SQL)) {
            echo "RES = $RES\n";
            print_r($RES);
            echo "\n";
            $NR = odbc_num_rows($RES);
<snip>

Hope this helps.

Grisu
12 years ago
Connect to an MS-Access Database on the Network via ODBC. Apache 2.0.47 with PHP 4.3.4 running on Windows XP Pro.
If you encounter the error "[Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified, SQL state IM002 in SQLConnect", you should make sure to have the following done:
The ODBC link must be a System DSN and not a User DSN. Configure your ODBC link and then modify your configuration with regedt32. Go to HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC_INI and open your ODBC link. The field DBQ contains the path to your database. This path must be without drive names (e.g. "M:"), so change it to "\\Server\folder\database.mdb". This setting is reset each time you modify your ODBC configuration using the Windows tool, so make sure you redo this afterwards.
Then go to the Services section in your system management. Select the properties of your Apache module. In the login section, make sure you log in with a valid user account for your network server.
Please note that this way you still have no permission to access linked tables within the linked database.
Funny enough, all this is not necessary on Win98.
TheFrenZ
12 years ago
If you encounter the error "[Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified, SQL state IM002 in SQLConnect":
On Windows with PHP running under IIS/PWS, PHP runs under the anonymous user INET_USR_<server> (where <server> is your server name). This user has no read access to the ODBC System DSN tree in the registry. With regedt32, open HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC_INI and give read access to every ODBC entry you want to use with PHP.
Beware: with every change to an ODBC System DSN, the rights on that DSN are gone again, and you have to change the rights again manually.
Under Apache, PHP runs under the System account and you won't have this problem.

eric dot ramirez at iberoonline dot com
13 years ago
Connecting with SQL Server as an ODBC source, two ways. One is if your SQL Server is running on your machine:

$ser = "LOCALMACHINE"; # the name of the SQL Server
$db = "mydatabase";    # the name of the database
$user = "myusername";  # a valid username
$pass = "my pass";     # a password for the username
$conn = odbc_connect("Driver={SQL Server};Server=" . $ser . ";Database=" . $db, $user, $pass);

The second way is if the SQL Server is running on another machine in the same network:

$ser = "LOCALMACHINE"; # the name of the SQL Server
$db = "mydatabase";    # the name of the database
$user = "myusername";  # a valid username
$pass = "my pass";     # a password for the username
$conn = odbc_connect("DRIVER=SQL Server;SERVER=" . $ser . ";UID=" . $user . ";PWD=" . $pass . ";DATABASE=" . $db . ";Address=" . $ser . ",1433", "", "");
richard at lordrich dot com
13 years ago
Because the dsn needs to be system-wide, you will also need write access to the registry to set it up.
Yvan Ecarri
13 years ago
I fought with the "Data source name not found and no default driver specified, SQL state IM002 in SQLConnect" error for a while trying to connect via ODBC to a SQL Server 2000. Finally I found this workaround:

$cn = odbc_connect("Driver={SQL Server};Server=MyServer;Database=MyDatabase", "MyUser", "MyPassword");

Change "MyServer", "MyDatabase", "MyUser" and "MyPassword" to the right values. I guess that adding "Integrated Security=YES" will work too.
Regards,
Yvan Ecarri, MCDBA, MCSD

jeremy at austin.ibm.com
13 years ago
Here's a quick note about using PHP and DB2 that cost me a couple of hours and several recompiles trying to figure out why it didn't work. Put the line below in any script:

putenv("DB2INSTANCE=db2inst1");

Or set that in your webserver environment somehow.

sambou at everyonesports dot com
13 years ago
If you encounter the error "[Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified, SQL state IM002 in SQLConnect", make sure you have the correct permissions on your database file (e.g. if using Win2k, you might want to set the "Everyone" group to "Full Control"). For Windows, I find that I sometimes have to use the registry editor (e.g. RegEdt32.exe) to set the database file's permission, because for some unknown reason setting the permission from the file's "Properties" option does not work.

oottavi at netcourrier dot com
13 years ago
If you have problems connecting to Sybase with an ODBC driver, try setting your SYBASE environment variable to the correct directory. ([ODBC SQL Server driver] Allocation of a Sybase Open Client Context failed)
Example, a connection to a DSN:

putenv("SYBASE=c:\sybase");
$conn = odbc_connect("DSN1", "USER", "PASSWORD");
echo "conn: $conn";
if ($conn <= 0) {
    echo "Error in connection";
    exit;
} else {
    echo "<P>Connection successful\n";
}
osiris at rich-howard dot co dot uk
13 years ago
Thought I'd add a note here on this. I'm using Apache 2.0.39 on Windows XP and PHP 4.2.2.It helps a lot if you don't use capital letters in your dsn string.Thought I also comment on the posts about using system dsns over file dsns. There are lots of posts saying use systems not files, but none (that I have seen) which explain why.Essentially: File DSNs are specific to the current user, therefore the Internet Guest User Account doesn't have rights to them. Systems are available to everyone.RegardsOsiris :)
d-m at eudoramail dot com
14 years ago
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/odbc_connect ERRO at DB2_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/To Solve the problem with DB2 + PHP folow this steps!INSTALL THE PROGRAM LIKE THIS!-- DB2 --Install the DataBankInstall Application Tools-- END DB2 ----- APACHE ---cd ../mod_ssl-2.8.5-1.3.22/./configure --enable-module=so --with-apache=../apache_1.3.22/ --with-ssl=../openssl-0.9.6c/cd ../apache_1.3.22/makemake certificate TYPE=custommake install--- END APACHE ----- PHP --cd ../php-4.1.1/./configure --with-apxs=/usr/local/apache/bin/apxs --with-pgsql --with-mysql --with-ibm-db2=/usr/IBMdb2/V7.1makemake install-- END PHP ----- LIB --vim /etc/ld.so.confadd line: /usr/IBMdb2/V7.1/libexecute: ldconfig-- END LIB --To Solve the error ODBC_CONNECT exec the db2profile at the apachectl!!! Like this!-- APACHE EDIT TO RUN DB2 --vim /usr/local/apache/bin/apachectladd line: . /usr/home/db2inst1/sqllib/db2profile-- END APACHE EDIT TO RUN DB2 --NOW run /usr/local/apache/bin/apachectl startsslDONE !!!!!You have a DB2 + APACHE + SSL + PHP + MYSQL + POSTGRES .Enjoy []´sHelio Ferenhof
14 years ago
If you (still) get that annoying error like and you're using Access:MSaccess DSN(Microsoft Jet engine couldn't open the database 'Unknow'.Another user is using it exclusively, or you dont have permission to useit).Make sure your access *.mdb file is not on a network drive. Put it on C: or D: disable all security first so you can test the connection. Once you can verify that you can connect add appropriate passwords, group access, etc.-=WH=-
bill at ergoitsolutions dot com
14 years ago
odbc connect to Oracle 8.0.xxx / NT4 / IIS4 / php.exe (4.1.0) had a lot of trouble connecting kept receiving the 12154 TNS error. Found a really useful hint in a mail msg on phpbuilder. http://www.phpbuilder.com/mail/php-db/2001051/0192.php Had to strip the <cr>'s out of both sqlnet.ora and tnsnames.ora to get a connection established. Also had trouble in php.ini need to fully qualify extension_dir on NT if you leave the last \ on the dir name it is replaced with a /
Mahmoud at iastate dot edu
14 years ago
WINNT 4 Workstation, PHP4 odbc_connect() kept giving me weird errors when trying to connect to a MSaccess DSN(Microsoft Jet engine couldn't open the database 'Unknow'. Another user is using it exclusively, or you dont have permission to use it). After going nuts for a while, I realized that my database name had a space in it (course surveys.mdb), I shortened the name to eliminate the space .. and everything worked fine.
lomaky at yahoo dot com
14 years ago
// simple connection
$cnx = odbc_connect('cliente', 'Administrador', '');
// query
$SQL_Exec_String = "select * from Clientes";
// execute query
$cur = odbc_exec($cnx, $SQL_Exec_String);
echo "<table border=1><tr><th>Dni</th><th>Nombre</th>" .
     "<th>codigo</th><th>ciudad</th></tr>\n";
while (odbc_fetch_row($cur)) {
    $Dni = odbc_result($cur, 1);
    $Nombre = odbc_result($cur, 2);
    $codigo = odbc_result($cur, 3);
    $ciudad = odbc_result($cur, 4);
    echo "<tr><td>$Dni</td><td>$Nombre</td>" .
         "<td>$codigo</td><td>$ciudad</td></tr>\n";
}
echo "</table>";
cs at coolspot dot de
14 years ago
We've tried hard to connect from php to our IBM DB2 RS/6000 Server. It worked after we compiled with --ibm-db2= option, but it was unbelievable slow. No, just testing some options, we found out that it went from very slow (getting 100 records lasts 1 till 10 seconds) to fast access (almost same speed as with using JDBC from Servlets) to 0.2 till 0.3 seconds. We simply added the optional parameter Cursortype to odbc_connect, and with the cursortype SQL_CUR_USE_ODBC it changed in that way! Hope this helps anybody who must connect to db2 ;)
fc99 at smm dot de
15 years ago
If you don't want to specify your login credentials on your web server, you can leave the login fields blank to use the integrated windows security like here: odbc_connect("DSN=DataSource","",""); Make sure you have switched your system dsn to integrated security, too ! (works on windows machines only, of course) flo.
SilencerX at optidynamic dot com
15 years ago
If like me you are using openlink from unix to access an MS Access database on an NT/Win2k machine and find out that your INSERT queries don't do anything and don't report any errors, use odbc_pconnect(). I couldn't understand what was going on and after a bit of research I found out that with MySQL they recommended using mysql_pconnect() for INSERT queries. I tried the same thing with odbc and it worked.
garretg at otable dot com
15 years ago
If you're connecting to a SQL server database through ODBC, you must set the default database of the ODBC DSN to the database you want to use. There is no way to specify the database name in odbc_connect or odbc_pconnect, just the DSN name, username, and password.
15 years ago
If using Openlink to connect to a Microsoft Access database, you will most likely fine tha odbd_connect() works fine, but discover that ANY query will produce odd results; with SELECT queries failing with "[OpenLink][ODBC][Driver]Driver not capable, SQL state S1C00 in SQLExecDirect in xxxx.php on line xx" and INSERT / DELETE queries warning "No tuples available at this result index". In this case, use the SQL_CUR_USE_ODBC cursor! This had me stumped for quite some time; because it was the odbc_exec() which was seemingly at fault... :) Siggy
16 years ago
Alot of people share the same kind of problems getting this setup on linux. I was assigned this problem 2 days ago and I was successful. My combination was PHP4 RC2, Easysoft OOB, and unixODBC. These three products work very well together and are real easy to install. More info http://www.easysoft.com/products/oob/main.phtml. ps also works good with Perl's DBI.
cjbauer2 at hotmail dot com
1 year ago
/* Connecting to Microsoft Access with PHP */
$conn = odbc_connect("Driver={Microsoft Access Driver (*.mdb)}; Dbq=$mdb_Filename", $user, $password);

This does NOT work on Windows servers. Instead, it needs:

$conn->Open("Provider=Microsoft.Jet.OLEDB.4.0; Data Source=......

Point the variable $mdb_Filename to... Data Source=C:\inetpub\wwwroot\php\mdb_Filename or wherever the virtual directory points.
Andrew Wippler
4 years ago
http://www.microsoft.com/en-us/download/details.aspx?id=13255 for the latest odbc driver for .accdb files.
Anonymous
7 years ago
This might be obvious to some, but here is a quick tidbit that might save you some time if you're using FreeTDS in Linux:Be sure that you have these two lines in freetds.conf:dump file = /tmp/freetds.logdump file append = yesso you can tail -f it in the background of debugging the problem. This helped me find my issue on on CentOS Linux: 1) tsql test works2) isql test works3) odbc connection in php also works WHEN RUN FROM THE SHELL4) running PHP through apache does NOT work.my /tmp/freetds.log file told me: net.c:168:Connecting to MYDBSERVER port MYDBPORTnet.c:237:tds_open_socket: MYDBSERVER:MYDBPORT: Permission deniedand the answer was my firewall/SELinux was denying the Apache processes access to connect to the remote MSSQL DB port, but my shell accounts were fine.
Anonymous
8 years ago
I have used mdbtools to access .mdb file on my ubuntu box, as ODBC driver (and PHP)It has very few features, and practically unusable.
Ceeclipse
8 years ago
"Returns an ODBC connection id or 0 (FALSE) on error."Keep in mind that the following code in PHP5 will not work properly:<?phpif( odbc_connect("test", "test", "test") === false ) { // Your error reporting/handling here..}?>odbc_connect() returns an integer, and not a PHP5 boolean!
Luis [from] redcodestudio [dot] com
1 year ago
Following simonr at no2sp at m dot cogapp dot com's contribution (thank you), I tried to connect to a local MS SQL Server 2014 Express database by creating a DSN-less connection using ODBC. It worked; here's the code.

<?php
// Replace the value of these variables with your own data
$user = 'username';
$pass = 'password';
$server = 'serverName\instanceName';
$database = 'database';
// No changes needed from now on
$connection_string = "DRIVER={SQL Server};SERVER=$server;DATABASE=$database";
$conn = odbc_connect($connection_string, $user, $pass);
if ($conn) {
    echo "Connection established.";
} else {
    die("Connection could not be established.");
}
?>
sven dot delmeiren at cac dot be
9 years ago
Hi,Instructions on how to connect to a Progress database on Linux using the Merant ODBC driver can be found at http://www.progteg.com/english/documents.html. Kind regards,Sven DelmeirenComputers & Communications NV
Kalle Sommer Nielsen
10 years ago
To use MySQL via ODBC:

<?php
$db_host = "server.mynetwork";
$db_user = "dbuser";
$db_pass = "dbpass";
$dsn = "DRIVER={MySQL ODBC 3.51 Driver};" .
       "CommLinks=tcpip(Host=$db_host);" .
       "DatabaseName=$db_name;" .
       "uid=$db_user; pwd=$db_pass";
odbc_connect($dsn, $db_user, $db_pass);
?>
-1
aurelien marchand
11 years ago
For three days I fought to be able to connect our Linux intranet server to our AS400 database through ODBC and PHP on Mandrake. I installed everything I thought would work, but I still got: odbc_connect(): SQL error: Missing server name, port, or database name in call to CC_connect., SQL state IM002 in SQLConnect. Note that isql was working great but PHP was failing to connect. The solution: I located a PHP module called php-unixODBC (as opposed to php-odbc). Once installed (even though it wasn't for the right version of PHP), I realised it didn't place the ini file properly: it was in /etc/php/ instead of /etc/php.d/. So I moved it there and renamed the old /etc/php.d/36_odbc.ini to /etc/php.d/36_odbc.ini.sav, so that I now had /etc/php.d/36_unixodbc.ini. I restarted the httpd server and was then able to access the AS400. If you have questions, email _artaxerxes2_at_iname_dot_com (sans the underscores).
-1
emin dot eralp at bg dot com dot tr
11 years ago
A VERY IMPORTANT NOTE OF CAUTION FOR WINDOWS USERS DEVELOPING ON NON-NETWORKED SYSTEMS: If, like me, you are developing on a stand-alone system (Windows XP Professional running IIS), make sure that the folder your database resides in is shared. Otherwise you will get the following type of message: "Current Recordset does not support updating. This may be a limitation of the provider, or of the selected locktype." and you will spend two days (as I did) looking for the right combination of settings to write the record properly.
-1
d-m at eudoramail dot com
12 years ago
Compiling Apache + PHP + DB2 8.1 without the database installed on the web server machine!

Go to the machine that has the DB2 8.1 database installed and COPY the include files to the PHP source directory. Sample:

[root@DB2_SERVER /usr/opt/db2_08_01/include]# scp * user@webserver:/usr/src/php-4.3.7/ext/odbc/.

INSTALLING THE WEB SERVER: enter the web server as root and...

-- DB2 --
Install the Administration Client.
Catalog the databases.
-- END DB2 --

-- LIB --
vi /etc/ld.so.conf
add line: /opt/IBM/db2/V8.1/lib
execute: ldconfig
-- END LIB --

Download PHP + MODSSL + OPENSSL + APACHE!

--- APACHE ---
cd ../mod_ssl-2.8.18-1.3.31
./configure --enable-module=so --with-apache=../apache_1.3.31 --with-ssl=../openssl-0.9.7d
cd ../apache_1.3.31/
make
make certificate TYPE=custom
make install
--- END APACHE ---
Note: encrypt the ca.key; DO NOT encrypt server.key.

-- PHP --
cd ../php-4.3.7
./configure --with-apxs=/usr/local/apache/bin/apxs --with-mysql --with-pgsql --with-zip=/usr/local/lib --with-ibm-db2=/opt/IBM/db2/V8.1
make
make install
-- END PHP --

-- APACHE EDIT TO RUN DB2 --
vim /usr/local/apache/bin/apachectl
add line: . /home/db2inst1/sqllib/db2profile
-- END APACHE EDIT TO RUN DB2 --

NOW run /usr/local/apache/bin/apachectl startssl

DONE! You have a web server with PHP with DB2, MySQL, Postgres and SSL support. Enjoy it!

[]'s
Helio Ferenhof
-1
mortoray at ecircle-ag dot com
13 years ago
If you have switched to a new Version of PHP (from 4.1 to 4.3) and at the same time have upgraded your Apache server (from 1.x to 2.x) and suddenly get the error:"[Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified, SQL state IM002 in SQLConnect"It may be because you have your ODBC connections listed (Control Panel | ODBC) as User DSN rather than System DSN. They need to be System DSN in order for the PHP in the Apache service to access to them.
-1
root at mediamonks dot net
14 years ago
Due to multiple requests, more for DSN-less connections:

<?php
$db_connection = new COM("ADODB.Connection");
$db_connstr = "DRIVER={Microsoft Access Driver (*.mdb)}; DBQ=" . realpath("../databases/database.mdb") . ";DefaultDir=" . realpath("../databases");
$db_connection->open($db_connstr);
$rs = $db_connection->execute("SELECT * FROM Table");
$rs_fld0 = $rs->Fields(0);
$rs_fld1 = $rs->Fields(1);
while (!$rs->EOF) {
    print "$rs_fld0->value $rs_fld1->value\n";
    $rs->MoveNext(); /* updates fields! */
}
$rs->Close();
$db_connection->Close();
?>

(Prints the first two columns for each row.)
-1
cpoirier at shelluser dot net
15 years ago
After much testing, and I think supported by a comment I found in the code, I have come to a disturbing conclusion: odbc_connect() in PHP 4.04pl1 is really an odbc_pconnect(), with all the implications for transaction scoping. Specifically, each time you call odbc_connect("X", "", ""), you will get the same physical ODBC connection, and odbc_commit() and odbc_rollback() will affect all copies. The only solution I could find was to use several different DSNs to access the database.
-1
drew at pixelburn dot net
6 years ago
[Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified

If you keep running into this on the 64-bit versions of Windows (e.g. Server 2008) and none of the other solutions helped: on a 64-bit Windows server operating system there are TWO ODBC managers. When you pull up the usual menu for the ODBC/DSN system, it is the 64-bit ODBC manager, and 32-bit applications (VB 6.0, PHP 5) will not work using those DSNs. The 32-bit ODBC manager is here: C:\Windows\SysWOW64\odbcad32.exe
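The rule in this note can be written down as a small decision function. A sketch (Python for illustration; the function name is mine, and the System32 path for the 64-bit manager follows the usual Windows layout rather than anything stated in the note):

```python
def odbc_admin_path(os_is_64bit, app_is_64bit):
    """Return the ODBC administrator that manages DSNs visible to an app.

    On 64-bit Windows the SysWOW64 copy of odbcad32.exe is the 32-bit
    manager (per the note above), so 32-bit apps need DSNs created there;
    everything else uses the System32 copy.
    """
    if os_is_64bit and not app_is_64bit:
        return r"C:\Windows\SysWOW64\odbcad32.exe"
    return r"C:\Windows\System32\odbcad32.exe"
```

So a 32-bit PHP build on 64-bit Windows needs its DSNs created in the SysWOW64 manager, not the one Control Panel opens by default.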
-1
manuel dot vecino at steria dot es
11 years ago
Hi Mario, I changed your script a bit to suit my configuration, and also included lines to close the connection. Thanks for the script. I am running Apache 2.0.52 and PHP 5.0.2 on Windows 2000 with SQL Server 2000.

<head>
<title>Untitled Document</title>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
</head>
<body bgcolor="#FFFFFF">
<table width="75%" border="1" cellspacing="1" cellpadding="1" bgcolor="#FFFFFF">
<tr bgcolor="#CCFFFF">
<td height="22"><b>Tipo</b></td>
<td height="22"><b>Marca</b></td>
<td height="22"><b>Modelo</b></td>
</tr>
<?php
$cx = odbc_pconnect("RAMPANT", "Invent", "pass");
$cur = odbc_exec($cx, "select tipo,marca,modelo from inv_equipos");
while (odbc_fetch_row($cur)) {
    // collect results
    $tipo = odbc_result($cur, 1);
    $marca = odbc_result($cur, 2);
    $modelo = odbc_result($cur, 3);
    // format and display results
    print("<tr>");
    print("<td>$tipo</td>");
    print("<td>$marca</td>");
    print("<td>$modelo</td>");
    print("</tr>");
}
// disconnect from database
odbc_close($cx);
?>
</table>
</body>
</html>
-1
Matt
9 months ago
In newer versions you may need: odbc_connect("Driver={Microsoft Access Driver (*.mdb, *.accdb)};Dbq=C:/Folder/database.mdb", "", "");
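Matt's newer driver string differs from the older one only in the driver name. A sketch of picking between the two based on the file extension (Python for illustration; the two driver names are taken from the notes here, the helper name and paths are placeholders):

```python
import os

# Driver names as they appear in the notes above.
OLD_DRIVER = "Microsoft Access Driver (*.mdb)"
NEW_DRIVER = "Microsoft Access Driver (*.mdb, *.accdb)"  # needed for .accdb files

def access_conn_str(db_path):
    """Build a DSN-less connection string for an Access database file."""
    ext = os.path.splitext(db_path)[1].lower()
    driver = NEW_DRIVER if ext == ".accdb" else OLD_DRIVER
    return f"Driver={{{driver}}};Dbq={db_path}"

print(access_conn_str("C:/Folder/database.accdb"))
```

Note that, per the notes above, the older driver cannot open .accdb files at all, so getting this choice wrong fails at connect time.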
-2
Ray.Paseur sometimes uses GMail
9 years ago
To connect and show tables in a Microsoft Access database (created in *.asp pages):

<?php
$dbq = str_replace("/", "\\", $_SERVER["DOCUMENT_ROOT"]) . "\\path\\to\\database.mdb";
if (!file_exists($dbq)) {
    echo "Crap!<br />No such file as $dbq";
}
$db_connection = odbc_connect("DRIVER={Microsoft Access Driver (*.mdb)}; DBQ=$dbq", "ADODB.Connection", "password", SQL_CUR_USE_ODBC);
$result = odbc_tables($db_connection);
while (odbc_fetch_row($result)) {
    if (odbc_result($result, "TABLE_TYPE") == "TABLE") {
        echo "<br />" . odbc_result($result, "TABLE_NAME");
    }
}
?>

-2
ramsosa at yahoo dot com
11 years ago
Connecting to ADS (Advantage Database Server) using Windows: when you set up the data source in the ODBC Manager (PHP_SERVER), don't use a mapped drive in the Database or Data Dictionary path, or you cannot connect. Let's suppose you share C:\ADS_SERVER\ADS as ADS, mapped to drive X: on PHP_SERVER. Instead of X:\APP\DATA\APP.ADD, use the UNC path \\ADS_SERVER\ADS\APP\DATA\APP.ADD. If the ADS ODBC dialog doesn't let you browse a network drive, type it manually.

-1
Abhinav
4 years ago
Please ensure that the MS Access database format is ".mdb". If it is ".accdb" it will not work!

-1
Shyam Hazari
6 years ago
Here is my successful odbc_connect with MySQL on Ubuntu. It took me a while to figure this out.
Installed the following packages using apt-get: apache2, apache2-mpm-prefork, apache2-utils, apache2.2-common, libapache2-mod-php5, libdbd-mysql-perl, libmyodbc, libmysqlclient15off, mysql-client-5.0, mysql-common, mysql-server-5.0, mysql-server-core-5.0, odbcinst1debian1, php5, php5-cli, php5-common, php5-odbc, unixodbc.

/etc/odbc.ini
------------
myodbc3 = MySQL ODBC 3.51 Driver
[myodbc3]
Driver = /usr/lib/odbc/libmyodbc.so
Description = MySQL ODBC 3.51 Driver
Server = localhost
Port = 3306
User = shyam
Password = mypass
Database = mysql
Option = 3
Socket = /var/run/mysqld/mysqld.sock

/etc/odbcinst.ini
----------------
[MySQL ODBC 3.51 Driver]
Description = MySQL driver
Driver = /usr/lib/odbc/libmyodbc.so
Setup = /usr/lib/odbc/libodbcmyS.so
CPTimeout =
CPReuse =
UsageCount = 1

my php script
------------
<html><body>
<?php
$conn = odbc_connect("DRIVER={MySQL ODBC 3.51 Driver};Server=localhost;Database=mysql", "shyam", "mypass");
$sql = "SELECT user from user";
$rs = odbc_exec($conn, $sql);
echo "<table><tr>";
echo "<th>User Name</th></tr>";
while (odbc_fetch_row($rs)) {
    $user = odbc_result($rs, "user");
    echo "<tr><td>$user</td></tr>";
}
odbc_close($conn);
echo "</table>";
?>
</body></html>

-2
Anonymous
2 years ago
I was having trouble connecting to an MSSQL database using FreeTDS and a DSN string, with the error message: "Warning: odbc_connect(): SQL error: [unixODBC][Driver Manager]Data source name not found, and no default driver specified, SQL state IM002 in SQLConnect". The problem was with specifying the driver.
Many examples show the driver name surrounded by { }. The solution was to remove the braces. Example:

<?php
$user = "username";
$pass = "password";
// Some examples show "Driver={FreeTDS};" but this will not work
$dsn = "Driver=FreeTDS;Server=some.server.com;Port=1433;Database=mydatabase;";
$cx = odbc_connect($dsn, $user, $pass);
// Get the error message
if ($cx === false) {
    throw new ErrorException(odbc_errormsg());
}
?>

-1
ewilde aht bsmdevelopment dawt com
6 years ago
Once you've set up a unixODBC connection to Informix (as described elsewhere, for example at http://www.unixodbc.org/), the following PHP code will access a database via its DSN:

<?php
// We must set these environment variables for Informix to work. Either
// do it here or in php.ini.
putenv("INFORMIXDIR=/usr/share/informix");
putenv("ODBCINI=/usr/local/unixODBC/etc/odbc.ini");

// Open up a connection to the database.
if (!($con = odbc_connect("CollectOh", "", ""))) {
    echo "<p>Connection to CollectOh failed.</p>\n";
} else {
    // Let's try enumerating all of the tables in the database (there ain't
    // no "show tables" here).
    if (($res = odbc_exec($con, "select * from SYSTABLES"))) {
        echo "<p>\n";
        odbc_result_all($res);
        echo "</p>\n";
    }

    // Close up shop, like good dobies.
    odbc_close($con);
}
?>
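The FreeTDS fix above boils down to one transformation on the Driver attribute of the connection string. A tiny sketch of that normalization (Python for illustration; the function name is mine):

```python
def strip_driver_braces(dsn):
    """Rewrite 'Driver={Name}' as 'Driver=Name' in a DSN-less string.

    Some setups (e.g. unixODBC with FreeTDS, per the note above) fail to
    match the driver when its name is wrapped in braces.
    """
    parts = []
    for attr in dsn.split(";"):
        key, sep, value = attr.partition("=")
        if sep and key.strip().lower() == "driver":
            value = value.strip()
            if value.startswith("{") and value.endswith("}"):
                value = value[1:-1]
            attr = f"{key}={value}"
        parts.append(attr)
    return ";".join(parts)
```

All other attributes pass through unchanged, so the helper is safe to apply to any of the DSN strings in these notes.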
-1
aamaral at 0kbps dot net
11 years ago
To connect to Sybase SQL Server Anywhere 8.0 on Windows, use the following:

<?php
//================================================================
// Configure connection parameters
$db_host = "server.mynetwork";
$db_server_name = "Dev_Server";
$db_name = "Dev_Data";
$db_file = 'c:\dbstorage\dev.db';
$db_conn_name = "php_script";
$db_user = "dbuser";
$db_pass = "dbpass";
//================================================================
$connect_string = "Driver={Adaptive Server Anywhere 8.0};" .
                  "CommLinks=tcpip(Host=$db_host);" .
                  "ServerName=$db_server_name;" .
                  "DatabaseName=$db_name;" .
                  "DatabaseFile=$db_file;" .
                  "ConnectionName=$db_conn_name;" .
                  "uid=$db_user;pwd=$db_pass";

// Connect to DB
$conn = odbc_connect($connect_string, '', '');

// Query
$qry = "SELECT * FROM my_table";

// Get result
$result = odbc_exec($conn, $qry);

// Get data from result
while ($data[] = odbc_fetch_array($result));

// Free result
odbc_free_result($result);

// Close connection
odbc_close($conn);

// Show data
print_r($data);
//================================================================
?>
-2
jpditri at gmail dot com
5 years ago
I found that on Windows I am able to connect to a FoxPro free-table directory on a password-protected mapped network drive without having a domain controller (as posted in the solution below from harald). Trying to connect to the mapped drive returned the error "tablename does not exist." The same code worked correctly if the ODBC resource was located locally on my machine. As a workaround, I specified the data source explicitly in the connection, but pointed the source at a shortcut to the same mapped directory:

<?php
$dsn = "Driver={Microsoft Visual FoxPro Driver};SourceType=DBF;SourceDB=c:\\shortcut;Exclusive=NO;collate=Machine;NULL=NO;DELETED=NO;BACKGROUNDFETCH=NO;";
$conn = odbc_connect($dsn, "", "");
?>

and the connection completed. My guess is that the credentials needed to access the drive weren't accessible to Apache (I tried changing the user Apache ran as, to no avail), and the shortcut put that responsibility on Windows.

-1
Anonymous
9 months ago
How to solve error SQL state S1009 when connecting to an Access database on a network with ODBC:

1. If you don't have Access installed on your computer, you will need the Access driver for ODBC connections: install with mdactypex (just for 32 bits), or AccessDatabaseEngine_X64.exe for 64 bits.
2. Give the needed rights to a Windows user profile to run as a service: Control Panel, Administrative Tools; in the left treeview panel select Local Security Policies, User Rights Assignment; in the right panel find and select "Logon as a service", right-click, Properties, then search for and add the Windows user you want Apache to log in as at service start.
3. Go to the Windows Services admin, select Apache, edit Properties, select the Log On tab, and set the previous user (enter their Windows login password).
4. Use PHP code like this:

$user = "your_db_user";
$password = "your_db_password";
// Access connection with ODBC
// Important: 4 initial slashes! \\\\network_path\dir_name\...
$mdbFilename = '\\\\servidor\data\AccessBD.mdb';
$conn_access = odbc_connect("Driver={Microsoft Access Driver (*.mdb)};Dbq=$mdbFilename", $user, $password);
-1
cjbauer2 at hotmail dot com
1 year ago
<?php
// create an instance of the ADO connection object
$conn = new COM("ADODB.Connection") or die("Cannot start ADO");

// define connection string, specify database driver
$connStr = "PROVIDER=Microsoft.Jet.OLEDB.4.0; Data Source=c:\inetpub\wwwroot\db\examples.mdb";
$conn->open($connStr); // open the connection to the database

// declare the SQL statement that will query the database
$query = "SELECT * FROM cars";

// execute the SQL statement and return records
$rs = $conn->execute($query);

$num_columns = $rs->Fields->Count();
echo $num_columns . "<br />";

for ($i = 0; $i < $num_columns; $i++) {
    $fld[$i] = $rs->Fields($i);
}

echo "<table>";
while (!$rs->EOF) { // carry on looping through while there are records
    echo "<tr>";
    for ($i = 0; $i < $num_columns; $i++) {
        echo "<td>" . $fld[$i]->value . "</td>";
    }
    echo "</tr>";
    $rs->MoveNext(); // move on to the next record
}
echo "</table>";

// close the connection and recordset objects, freeing up resources
$rs->Close();
$conn->Close();
$rs = null;
$conn = null;
?>

-1
MArs
1 year ago
Found a fresh list of ODBC drivers, which are mostly free now: http://www.devart.com/odbc/

-1
stefanov at uk dot ibm dot com
9 years ago
Short tutorial on how to connect to the IBM Tivoli Netcool ObjectServer (based on Sybase): http://nuqm.micromuse.com/wiki/index.php/ObjectServer_PHP
Hopefully this will save some people precious time. Cheers, Pimmy

-1
f dot chehimi at lancaster dot ac dot uk (Fadi)
11 years ago
This is the typical solution, with the steps to follow, if someone wants to connect MS Access to PHP. It actually took me a couple of hours till I reached it. I just wanted to ease the hassle for my colleagues, so they don't waste their time as I did; this is the duty of every programmer towards his/her peers :p Here you are, the CAKE :)

<?php
// To have this working:
// 1- You first have to create your Access database using MS Access
//    (I assume you know how to do this). The database I used in my
//    example is called "Questionaire.mdb"; the table in it is called
//    "Results".
//
// 2- Then you have to add this database to ODBC in the Control Panel.
//
// 3- The addition happens by adding "MS Access Driver" to the
//    "System DSN" tab in the ODBC Data Source Administrator. If you
//    have that "MS Access Driver" in the "User DSN" tab, you have to
//    delete it there.
//
// 4- Click on Add in the "System DSN" tab.
//
// 5- Choose "MS Access Driver" from the "Create New Data Source"
//    window and click Finish.
//
// 6- The "ODBC MS Access Setup" window will then pop up.
//
// 7- Give the driver the name you want to use in your PHP scripting.
//    I used "MSAccessDriver" here.
//
// 8- After this, choose the "Select" button in "ODBC MS Access Setup"
//    to set the path of your Access database.
//
// 9- Then you are done!

// This odbc_connect does the connection to the driver I created in
// the ODBC Administrator.
$con = odbc_connect('MSAccessDriver', '', '');

if ($con) {
    echo "odbc connected<br>";
    $sql = "select * from Results";
    // This function will execute the SQL statement against the table
    // in the database.
    $exc = odbc_exec($con, $sql);
} else {
    echo "odbc not connected<br>";
}

if ($exc) {
    echo "selection completed<br>";
    while ($row = odbc_fetch_object($exc)) {
        echo $row->id . "<br>";
    }
} else {
    echo "selection failed<br>";
}
?>

-1
sa_kelkar at yahoo dot com
12 years ago
While installing PHP + Apache + DB2, odbc_connect() sometimes gives problems. To solve this, make sure that you have made the following entries in /etc/ld.so.conf:

/usr/IBMdb2/V7.1/lib
/usr/IBMdb2/V7.1/include
/usr/lib

Please also add the following line at line 25 of /etc/rc.d/init.d/httpd:

. /home/db2inst1/sqllib/db2profile
(i.e. /home/(instance name)/sqllib/db2profile)

For more detail please visit the IBM site. It will work.

-1
ckelly at powerup dot com dot au
15 years ago
To connect to a PROGRESS database using ODBC, you must have SQL_CUR_USE_ODBC as the 4th parameter, e.g. odbc_connect(DSN, uname, password, SQL_CUR_USE_ODBC), otherwise you can pass queries but no results are ever returned.
-2
jared (who is at) hatwhite * com
8 years ago
It is indeed possible to read the data stored in an MS Access .mdb file while remaining entirely within Linux, thanks to the mdbtools project, which created an MDB driver for Unix. It's up to you to get PHP to move this data around. Here are the links to get you started:

Documentation: ftp://kalamazoolinux.org/pub/pdf/dbaccess.pdf
ODBC on Linux: http://www.unixodbc.org/
MDB driver for the ODBC system created above: http://mdbtools.sourceforge.net/

Note that on most modern Linux boxen, you can bypass the (documented) command-line configuration and installation of these projects; simply use apt-get or some similar package installer. The documentation link covers more than the MDB driver and is a good place to start for all data access in Linux. Much thanks for the hard work which went into these three projects.
-Jared

-4
eisalen at yahoo dot com
7 years ago
I've been having trouble displaying Russian characters from an Access database using ODBC. You might try this solution, though it uses an ADODB.Connection COM connection with an added charset setting:

<?php
$db_connection = new COM("ADODB.Connection", NULL, 1251);
$db_connstr = "DRIVER={Microsoft Access Driver (*.mdb)}; DBQ=C:\DataDir\Employee.mdb;DefaultDir=C:\DataDir";
$db_connection->open($db_connstr);
$rs = $db_connection->execute("SELECT EmpNameLocal, EmpPosLocal FROM tbl_Employee WHERE ID='$IDNo'");
$rs_fld0 = $rs->Fields(0);
$rs_fld1 = $rs->Fields(1);
while (!$rs->EOF) {
    $empNameLoc = $rs_fld0->value;
    $empWPPos = $rs_fld1->value;
    $rs->MoveNext();
}
$rs->Close();
$db_connection->Close();
?>
-4
Mika
6 years ago
To connect to a SQLite database on Linux, I used the following function call:

<?php
$db = odbc_connect('Driver=SQLite3;Database=' . Database::DATABASE, '', '') or die('could not open database!');
?>

On Debian or Ubuntu the package libsqliteodbc is required. Check /etc/odbc.ini and /etc/odbcinst.ini.

-3
owen at silicon-dream dot com
10 years ago
Connecting to an Access database was as simple as the following line for me:

odbc_connect("DRIVER={Microsoft Access Driver (*.mdb)}; DBQ=" . str_replace("/", "\\", $_SERVER["DOCUMENT_ROOT"]) . "\\_database\\dbname.mdb", "", "")
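Several of the Linux notes above hinge on entries in /etc/odbc.ini and /etc/odbcinst.ini. These are plain INI files, so a broken DSN definition can be sanity-checked with any INI parser; a sketch using Python's stdlib (the section and key names below mirror Shyam's example above, not any required layout):

```python
import configparser

# Fragment shaped like the /etc/odbc.ini example above.
ODBC_INI = """
[myodbc3]
Driver = /usr/lib/odbc/libmyodbc.so
Description = MySQL ODBC 3.51 Driver
Server = localhost
Port = 3306
Database = mysql
"""

cfg = configparser.ConfigParser()
cfg.read_string(ODBC_INI)

# A DSN is usable only if its Driver line points at a real .so; here we
# just confirm the keys the driver manager will look for are present.
dsn = cfg["myodbc3"]
print(dsn["Driver"], dsn["Server"], dsn["Port"])
```

The same check applies to odbcinst.ini, where the section name must exactly match the DRIVER={...} string used in odbc_connect().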
http://www.snowboardingforum.com/political-wilderness/52283-gun-control-debate-thread-6.html | Gun Control Debate thread - Page 6 - Snowboarding Forum - Snowboard Enthusiast Forums
Old 11-25-2010, 02:39 AM Thread Starter
Senior Member
Join Date: Jan 2010
Posts: 156
Mentioned: 0 Post(s)
Quoted: 0 Post(s)
Quote:
Originally Posted by Snowolf View Post
Being an atheist does not have to mean belief in nothing and Clubmyke`s silly post "defining" atheism comes nowhere in defining atheism or all atheists for that matter and is actually insulting.
Wow, I am actually surprised this got to you and you are insulted.
May I suggest that the paint brush that I used to paint my viewpoint on atheism is just as wide regarding your viewpoint on God.
Touch'e
Winter Summer
09 Burton Custom 05 Nautique 211
09 Cartel EST Walzer Wakesurf Board (custom)
Van's Cirro 09 Liquid Force Watson
clubmyke is offline
Old 11-25-2010, 03:08 AM
Veteran Member
Join Date: Jun 2010
Location: Mt. Hood, Oregon
Posts: 1,283
Quote:
Originally Posted by clubmyke View Post
2nd point - while you say I require a extra assumption. I would like to point out that I agree with with yours explanations in terms of the mathematics, cosmology, and physics. But it is flawed on the basis it missing one critical piece - the artist/director/designer.
Everyone knows that Xenu brought people to earth, in his DC-8, 75 million years ago.
Qball is offline
Old 11-25-2010, 09:27 AM
Drunk with power...er beer.
Join Date: May 2010
Location: Vancouver, BC, Canada
Posts: 5,113
Blog Entries: 248
Quote:
Originally Posted by clubmyke View Post
Wow, I am actually surprised this got to you and you are insulted.
May I suggest that the paint brush that I used to paint my viewpoint on atheism is just as wide regarding your viewpoint on God.
Touch'e
You're kidding, right? You're going to use "I know you are, but what am I" and you think you've delivererd a cutting rebuttal? Myke, most of us are insulted because you think your juvenile prattle qualifies as rational discussion. I've been trying to give you the benefit of the doubt, but Paleo is right -- you are basically just a bag of air. And a troll. Despite repeated attempts on our part to get you to provide evidence -- any evidence -- any slightest trace of proof -- of the existance of your deity, you have consistently dodged and/or ignored the request in favour of a bunch of puerile insults and schoolyard rejoinders (see above). You seem to be completely unaware of the difference between an argument and a statement so all you do is provide statements of your own opinion, unsupported by anything but other statements of your own opinion, defended by statements of your own opinion (with the occasional insult thrown in) and you think you have equal credibility. I've already put way more calories into trying to kick-start your intelligence than you're worth. So, you want to believe that an invisible sky fairy created the universe, created the earth in 6 days by sprinkling pixie dust, then fine. Feel free. I'm going to take consolation in the fact that you and those like you are doomed to a life of irrelevance and increasing marginalization, while the rest of the world gets on with reality.
Have a good existence.
Illegitimi non carborundum.
Donutz is offline
Old 11-25-2010, 11:22 AM
Senior Member
Join Date: Feb 2010
Location: St. Louis
Posts: 476
God is in our hearts, mannnnn.. you can't prove HIM.
November needs to hurry up and come
Muki is offline
Old 11-25-2010, 02:32 PM
Senior Member
Join Date: May 2010
Location: Portland, OR
Posts: 2,316
Quote:
Originally Posted by Muki View Post
God is in our hearts, mannnnn.. you can't prove HIM.
If he is in your heart, then theoretically I should be able to cut you open and find him right?
Who wants to volunteer!
PowderHound and TreeNinja
HoboMaster is offline
Old 11-25-2010, 03:23 PM
Senior Member
Join Date: Feb 2010
Location: St. Louis
Posts: 476
You're under the impression that God is physical. IT is spiritual..can't be seen, heard, smelled, touched,..thus, can't be proven. IT only exists if you believe it exists.
Lol no, don't do that. I don't give a crap about religion..just trying to stir shit up.
November needs to hurry up and come
Muki is offline
Old 11-25-2010, 03:47 PM
Stay Strapped
Join Date: Jan 2009
Location: Toronto
Posts: 1,123
well that's what happens when you guys feed the troll.... should've known better
Did she say Strap-in or Strap-on?
InfiniteEclipse is offline
Old 11-25-2010, 04:28 PM
Senior Member
Join Date: Sep 2010
Location: nanimo B.C
Posts: 194
trololo
nihilist ftw
labowsky is offline
Old 11-26-2010, 04:40 AM Thread Starter
Senior Member
Join Date: Jan 2010
Posts: 156
Donutz,
Your reply speaks for itself -You never addressed this ?
"2nd point - while you say I require a extra assumption. I would like to point out that I agree with with yours explanations in terms of the mathematics, cosmology, and physics. But it is flawed on the basis it missing one critical piece - the artist/director/designer."
Winter Summer
09 Burton Custom 05 Nautique 211
09 Cartel EST Walzer Wakesurf Board (custom)
Van's Cirro 09 Liquid Force Watson
clubmyke is offline
Old 11-26-2010, 09:23 AM
Drunk with power...er beer.
Join Date: May 2010
Location: Vancouver, BC, Canada
Posts: 5,113
Quote:
Originally Posted by clubmyke View Post
Donutz,
Your reply speaks for itself -You never addressed this ?
"2nd point - while you say I require a extra assumption. I would like to point out that I agree with with yours explanations in terms of the mathematics, cosmology, and physics. But it is flawed on the basis it missing one critical piece - the artist/director/designer."
CM, I and others have addressed that again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and 
again and again and again and personally I'm getting tired of it, because you just ignore the response. Remember when I said "you make statements and think they're arguments"? Well the sentence that begins "But it is flawed" is a STATEMENT, GOD DAMN IT! I will reply one last time: No it isn't flawed, no it isn't missing anything. There is no artist/director/designer because there is no evidence of one, and no need for one. Present evidence that would tend to indicate the existence of a deity, or fuck off.
Illegitimi non carborundum.
Donutz is offline
https://arxiv-export-lb.library.cornell.edu/abs/1708.06081 | cs.IT
# Title: Block Markov Superposition Transmission of BCH Codes with Iterative Erasures-and-Errors Decoders
Abstract: In this paper, we present the block Markov superposition transmission of BCH (BMST-BCH) codes, which can be constructed to obtain a very low error floor. To reduce the implementation complexity, we design a low complexity iterative sliding-window decoding algorithm, in which only binary and/or erasure messages are processed and exchanged between processing units. The error floor can be predicted by a genie-aided lower bound, while the waterfall performance can be analyzed by the density evolution method. To evaluate the error floor of the constructed BMST-BCH codes at a very low bit error rate (BER) region, we propose a fast simulation approach. Numerical results show that, at a target BER of $10^{-15}$, the hard-decision decoding of the BMST-BCH codes with overhead $25\%$ can achieve a net coding gain (NCG) of $10.55$ dB. Furthermore, the soft-decision decoding can yield an NCG of $10.74$ dB. The construction of BMST-BCH codes is flexible to trade off latency against performance at all overheads of interest and may find applications in optical transport networks as an attractive candidate.
Comments: submitted to IEEE Transactions on Communications
Subjects: Information Theory (cs.IT)
Cite as: arXiv:1708.06081 [cs.IT] (or arXiv:1708.06081v1 [cs.IT] for this version)
## Submission history
From: Suihua Cai [view email]
[v1] Mon, 21 Aug 2017 05:43:41 GMT (637kb)
https://byjus.com/question-answer/which-of-the-following-shows-the-correct-relation-between-speed-time-and-distance-speed-distance/ | Question
# Which of the following shows the correct relation between speed, time and distance?
- Distance = Speed × Time
- None of these
- Speed = Distance × Time
- Distance = Speed / Time
Solution
## The correct option is A: Distance = Speed × Time. Distance and speed are related as Speed = Distance / Time ⇒ Distance = Speed × Time.
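As a quick numeric check of the relation, here is a minimal Python sketch (the function name and units are just for illustration):

```python
# Distance = Speed × Time: a car travelling at 60 km/h for 2.5 hours
# covers 60 * 2.5 = 150 km.
def distance(speed_kmh, time_h):
    """Return distance in km from speed in km/h and time in hours."""
    return speed_kmh * time_h

print(distance(60.0, 2.5))  # 150.0
```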
https://www.nature.com/articles/sdata201852 | ## Background & Summary
Quantifying streamflow is critical to a variety of socio-economic and ecological analyses and applications [1-3]. Examples include the study of freshwater biodiversity patterns [4-7], assessments of global water resources [8,9], for example irrigation supply, hydropower or water footprinting [10-12], analyses of the fate of pollutants [13] and quantification of sediment fluxes [14,15]. Most of the stream reaches in the world are poorly monitored or not monitored at all [16,17], due to the inaccessibility of most headwaters and a lack of financial and human resources [18], highlighted by a substantial decline in monitoring since the mid-1980s [17-19]. Streamflow is commonly quantified with process-driven global hydrological models (GHMs) and land surface models (LSMs) [20-24]. GHMs/LSMs are typically run at coarse spatial resolutions (~10 to 50 km), due to computational constraints, and consequently are unable to provide reasonable streamflow estimates for small rivers (defined here by Strahler stream order < 5), which comprise 94.6% of the total stream length and riparian interface on the planet [25]. Streamflow data at higher spatial resolution would be highly beneficial for ecological applications and water resources assessment, for example understanding/modelling freshwater species distributions or modelling the fate and effects of pollutants in the aquatic environment [13,26-29].
Compared to process-based models, data-driven models like regression equations and neural networks are more suited for generating high-resolution streamflow data with large spatial extent, thanks to their computational efficiency and relatively quick parameterization [30]. Data-driven models typically quantify streamflow based on upstream catchment characteristics related to topography, climate, land cover, and soils [30-33]. Data-driven approaches have been mostly employed at a local scale [34]. Recent studies demonstrated, however, the feasibility of applying a data-driven approach at a global scale, resulting in streamflow estimates that may have greater accuracy than the output of GHMs/LSMs [31,32]. Despite these encouraging results, consistent high-resolution global streamflow maps are not yet available.
Here we present FLO1K: a consistent dataset of global annual streamflow maps at 1 km resolution for each year in the period 1960-2015. Annual flow (AF) metrics include mean annual flow as well as minimum and maximum monthly flow for a given year. We produced the maps with feed-forward Artificial Neural Networks (ANNs) trained on yearly AF metric values from 6600 monitoring stations worldwide, using catchment-averaged covariates representing topography and climate. We delineated the upstream catchments based on the 1-km HydroSHEDS (www.hydrosheds.org) hydrography [35], extended with Hydro1k (https://lta.cr.usgs.gov/HYDRO1K) for latitudes above 60°N not covered by HydroSHEDS, thereby achieving a global coverage (excluding Antarctica). For the training of the ANNs, we used 10 yearly values of mean, minimum and maximum AF per monitoring station and climate covariates for the corresponding years. We then constructed the AF metric maps by first computing for each year and each 30 arc seconds grid cell the upstream catchment-averaged covariates (which varied from year to year for climate), and then applying the trained ANNs. The streamflow is calculated for each terrain grid cell, i.e., it represents the potential in-channel discharge that would occur in the presence of a natural watercourse. The flow maps have a resolution 10 to 50 times higher than those typically produced using state-of-the-art GHMs/LSMs [36,37] and global data-driven approaches [32]. For each of the three AF metrics, 56 yearly layers (1960-2015) are available, packed in the CF-compliant NetCDF-4 format. In addition, we provide the FLO1K layers upscaled to 5 and 30 arc minutes resolutions for coarser-grain applications, including comparisons with GHMs/LSMs outputs. The FLO1K database can be downloaded from http://geoservice.pbl.nl/download/opendata/FLO1K and figshare (Data Citation 1).
## Methods
### General approach and streamflow network
The procedure to generate the maps consisted of (i) model fitting, including observed streamflow data preparation, extraction of covariates, and training of the ANNs, and (ii) application of the ANNs to generate the global AF maps. Figure 1 provides a general outline of the procedure. We used the 30 arc seconds (~1 km) version of HydroSHEDS [35] extended with Hydro1k for latitudes above 60°N to retrieve the drainage direction network and delineate the upstream catchment of each grid cell [38,39]. The HydroSHEDS hydrography is based on the National Aeronautics and Space Administration (NASA) Shuttle Radar Topography Mission (SRTM) digital elevation model (DEM) [40], which covers the entire terrestrial land surface from latitudes 56°S to 60°N. To achieve a global spatial coverage, we extended HydroSHEDS with Hydro1k [38,39], the latter being a United States Geological Survey (USGS) product derived from the GTOPO30 DEM (https://lta.cr.usgs.gov/GTOPO30). The resulting drainage direction network is available at http://files.ntsg.umt.edu/data/DRT/.
### Streamflow observations
We derived mean, maximum and minimum AF values from flow records in the Global Runoff Data Centre (GRDC) database (www.bafg.de/GRDC) [41]. The GRDC comprises daily and monthly streamflow records from 9252 monitoring stations worldwide. The GRDC monitoring stations are not directly referenced on the hydrography employed in this study. This means that mismatched monitoring stations might encompass the wrong upstream catchment basin, which in turn may lead to errors when training the ANNs. As the GRDC dataset includes the estimated catchment area upstream of each monitoring station, we geo-referenced each station in order to match the most similar upstream area on the 30 arc seconds stream network, following the procedure previously used to allocate GRDC stations on the HydroSHEDS 15 arc seconds hydrography [42]. For each station, a new location is selected that minimizes discrepancies in catchment area and distance from the original location, within a 5 grid cells (~5 km) search radius. Out of the original 9252 monitoring stations, 285 were excluded as they did not report coordinates. Of the remaining 8967, 746 (~8%) were excluded because there was no matching catchment area within the search radius (based on a threshold of maximum 50% difference [42]). Out of the remaining 8221, 65% reported an area difference smaller than 5%, 15% had an area difference between 5% and 10%, and 20% had an area difference between 10% and 50%.
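The station-snapping rule just described (5-cell search radius, 50% maximum area difference) can be sketched as follows. This is a Python illustration, not the authors' code (their pipeline was written in R); the scoring that prefers a smaller area error first and then the closer cell is an assumption:

```python
import math

def snap_station(area_grid, row, col, reported_area, radius=5, max_rel_diff=0.5):
    """Snap a gauge to the nearby cell whose upstream drainage area best
    matches the reported catchment area. area_grid is a list of lists of
    upstream drainage areas; returns (row, col) of the match, or None if
    no cell within the radius passes the area-difference cutoff."""
    rows, cols = len(area_grid), len(area_grid[0])
    best_score, best_cell = None, None
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            r, c = row + dr, col + dc
            if not (0 <= r < rows and 0 <= c < cols):
                continue
            rel = abs(area_grid[r][c] - reported_area) / reported_area
            if rel > max_rel_diff:
                continue  # area mismatch too large at this cell
            score = (rel, math.hypot(dr, dc))  # small area error, then proximity
            if best_score is None or score < best_score:
                best_score, best_cell = score, (r, c)
    return best_cell
```

With this rule, a station reporting 95 km² over a grid whose best-matching cell drains 100 km² would be moved to that cell, while a station whose reported area differs from every nearby cell by more than 50% would be discarded.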
We used the monthly records provided by the GRDC to calculate AF metrics for the period 1960-2015. We computed the mean AF for each year by averaging the 12 monthly values, and retrieved maximum and minimum AF by selecting the highest and lowest monthly values for each year, respectively. We considered only those years with a complete 12 months record and selected monitoring stations with at least 10 years of data from 1960 through 2015. The remaining set of stations totaled 6600 and were globally distributed as shown in Figure 2.
### Catchment-specific covariates
As covariates of the flow metrics we used topography and climate, which we retrieved from publicly available spatially explicit sources and then aggregated to the upstream catchment of each grid cell. The choice of the covariates set and source data was based on previous studies [30-34,43,44], expert knowledge and data availability. A list of the covariates and related source databases is provided in Table 1.
We calculated the area of the upstream catchment of each cell by summing the areas of the upstream grid cells. We derived the upstream catchment-averaged elevation from the SRTM DEM [40] resampled at 30 arc seconds as provided by HydroSHEDS [35], supplemented with the GTOPO30 DEM for areas lacking SRTM coverage, i.e., latitudes above 60°N. We transformed the elevation values by adding a constant value of 500 m to avoid negative values, the lowest being represented by the shores of the Dead Sea at 430 m below sea level. We employed the USGS slope map developed for the Prompt Assessment of Global Earthquakes for Response (PAGER) system [45] to calculate upstream catchment-averaged surface slope values. This map is based on the same SRTM+GTOPO30 DEM and has been corrected for the discrepancy between ground units (arc degrees) and elevation units (meters) [45] (Figure 3).
We derived the upstream catchment-averaged values for annual mean, maximum and minimum air temperature (Tair) and precipitation (P), as well as potential evapotranspiration (PET), aridity index (AI) and seasonality index for P and PET, for every year over the period 1960-2015. For air temperature, we employed the Climate Research Unit (CRU) Time Series (TS) dataset [46] (version 3.24.01; monthly temporal and 0.5° spatial resolution). For precipitation, we used the Multi-Source Weighted-Ensemble Precipitation (MSWEP) dataset [47] (version 1.2; 3-hourly temporal and 0.25° spatial resolution; 1979-2015) supplemented with the Global Precipitation Climatology Centre (GPCC) Full Data Reanalysis [48] (version 7; monthly temporal and 0.5° spatial resolution) prior to 1979. MSWEP merges a wide range of gauge, satellite, and reanalysis datasets to achieve precipitation estimates with greater accuracy than any other global dataset [47]. To combine the GPCC and MSWEP datasets, we rescaled the GPCC estimates such that the 1979-2013 mean of GPCC matched that of MSWEP. For each year and grid cell, we retrieved the mean annual value of Tair and P as the mean over the 12 monthly layers, and the minimum and maximum as the lowest and highest monthly values, respectively. We computed mean annual potential evapotranspiration from monthly Tair values following the temperature-based approach of Hargreaves et al. [49] and employing the same CRU TS v. 3.24.01 source data for temperature. Similarly, we calculated seasonality index layers for P and PET as $X_{si} = X_{yr}^{-1}\sum_{m}|X_{m} - X_{yr}/12|$, where si, yr and m stand for seasonality index, yearly and monthly values, respectively [50]. We downscaled the raster layers for the climate-related covariates to match the 30 arc seconds resolution of the hydrography using nearest-neighbour resampling. In addition, we calculated the aridity index for each year as PET/P, using mean annual P and PET.
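The two derived indices above are simple to compute from 12 monthly values. A Python sketch for illustration (the original pipeline was in R); taking $X_{yr}$ as the annual total of the monthly values is an assumption consistent with the usual form of this seasonality index:

```python
def seasonality_index(monthly):
    """Seasonality index X_si = (1/X_yr) * sum_m |X_m - X_yr/12|,
    where X_yr is the annual total of the 12 monthly values.
    Returns 0 for a perfectly uniform regime, larger values for
    regimes concentrated in a few months."""
    annual = sum(monthly)
    return sum(abs(m - annual / 12.0) for m in monthly) / annual

def aridity_index(pet_annual, p_annual):
    """Aridity index for a given year, PET/P as defined in the text."""
    return pet_annual / p_annual
```

For example, uniform monthly precipitation gives a seasonality index of 0, while precipitation falling entirely in one month gives 2·11/12 ≈ 1.83.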
To calculate the upstream catchment-averaged values of the covariates, we employed the TauDEM software (Terrain Analysis Using Digital Elevation Models, http://hydrology.usu.edu/taudem). TauDEM is an open-source C++ software explicitly designed to implement the flow algebra for large datasets, employing a Message Passing Interface (MPI, http://mpi-forum.org) to implement highly parallelized processing algorithms [51-53]. We extracted the covariates for the upstream catchment of each cell of the global hydrological network via the so-called flow accumulation technique (‘AreaD8’ in TauDEM). This technique considers each grid cell as a pour point and subsequently calculates the number of upstream grid cells or the sum of the attribute values of these upstream grid cells, using the flow direction map to delineate the watershed boundaries of the upstream catchment. To derive continuous upstream catchment-averaged values for the predictor variables, we divided the sum of the upstream covariate values by the total number of upstream grid cells at each pour point. To speed up the calculations, we split the global flow direction layer into six continents (North America, South and Central America, Europe, Africa, Asia, Oceania). Adjacent continents (e.g., Europe and Asia) were separated along watershed boundaries.
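The flow-accumulation averaging described above can be sketched in a few lines. This is a serial Python toy, not TauDEM (which performs the same accumulation in parallel over continental grids); the D8 direction codes and the Kahn-style headwaters-to-outlet traversal are illustration choices:

```python
# D8 direction codes (assumed for illustration): 1=E, 2=SE, 3=S, 4=SW,
# 5=W, 6=NW, 7=N, 8=NE; any other value means "no downstream cell".
D8 = {1: (0, 1), 2: (1, 1), 3: (1, 0), 4: (1, -1),
      5: (0, -1), 6: (-1, -1), 7: (-1, 0), 8: (-1, 1)}

def catchment_average(flowdir, values):
    """Upstream catchment-averaged covariate per cell: accumulate the
    covariate sum and the cell count downstream, then divide (each cell
    counts itself as part of its own catchment)."""
    rows, cols = len(flowdir), len(flowdir[0])
    acc_val = [[float(values[r][c]) for c in range(cols)] for r in range(rows)]
    acc_cnt = [[1.0] * cols for _ in range(rows)]
    indeg = [[0] * cols for _ in range(rows)]

    def downstream(r, c):
        d = flowdir[r][c]
        if d in D8:
            rr, cc = r + D8[d][0], c + D8[d][1]
            if 0 <= rr < rows and 0 <= cc < cols:
                return rr, cc
        return None

    for r in range(rows):
        for c in range(cols):
            ds = downstream(r, c)
            if ds is not None:
                indeg[ds[0]][ds[1]] += 1
    # process cells from headwaters (in-degree 0) toward the outlets
    stack = [(r, c) for r in range(rows) for c in range(cols) if indeg[r][c] == 0]
    while stack:
        r, c = stack.pop()
        ds = downstream(r, c)
        if ds is None:
            continue
        rr, cc = ds
        acc_val[rr][cc] += acc_val[r][c]
        acc_cnt[rr][cc] += acc_cnt[r][c]
        indeg[rr][cc] -= 1
        if indeg[rr][cc] == 0:
            stack.append((rr, cc))
    return [[acc_val[r][c] / acc_cnt[r][c] for c in range(cols)] for r in range(rows)]
```

On a 1×3 strip flowing east with covariate values 3, 6, 9, the catchment averages are 3, 4.5 and 6: each downstream cell averages over itself and everything upstream.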
### Training of Artificial Neural Networks
We quantified the relationships between the flow metrics and the covariates using artificial neural networks (ANNs), which have been widely used for hydrological modelling from local [54] to global [32,33] scales. We employed the feed-forward ANN algorithm based on the multi-layer perceptron structure with one hidden layer [55,56] (Figure 1). We trained the ANNs based on year-specific values of mean, minimum and maximum AF, using the upstream-catchment topography and year-specific climate as covariates (Table 1). We applied a Box-Cox transformation to normalize the distributions of each variable (response and covariates) [57]. In addition, we standardized each distribution to zero mean and unit standard deviation, as required for the ANNs [56]. To avoid possible bias due to differences in monitoring intensity among the stations, we randomly picked 10 yearly values from those stations monitored at least 10 years across the 1960–2015 period. We then iterated the ANNs training 20 times, sampling different years from those stations having a record longer than 10 years. Prior to the training, we tuned the number of neurons of the hidden layer of the ANNs and the weights decay value to regularize the ANNs cost function, and therefore control for overfitting. To this end, we used 10-fold cross-validation (CV) whose folds were based on excluded monitoring stations, and identified the number of neurons and weights decay value that maximized the median coefficient of determination (R²) and minimized the median Root Mean Square Error (RMSE) of the testing set. As a result, we employed 20 neurons for the ANNs hidden layer and a weights decay value of 0.01.
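The pre-processing step (Box-Cox transform followed by standardization) is the part that is easiest to make concrete. A minimal Python sketch for illustration only (the authors worked in R, and the fitted Box-Cox lambda per variable is not reproduced here):

```python
import math

def boxcox(x, lam):
    """Box-Cox transform used to normalise each variable before ANN
    training: (x^lam - 1)/lam for lam != 0, log(x) for lam == 0.
    x must be positive."""
    return math.log(x) if lam == 0 else (x ** lam - 1.0) / lam

def standardise(xs):
    """Scale a sample to zero mean and unit standard deviation, as the
    ANNs require (population standard deviation; a sketch)."""
    mean = sum(xs) / len(xs)
    sd = (sum((v - mean) ** 2 for v in xs) / len(xs)) ** 0.5
    return [(v - mean) / sd for v in xs]
```

In the full pipeline each covariate and each AF metric would get its own transform parameters, which must be stored so that model outputs can be back-transformed to m³·s⁻¹ (the authors ship these parameters as CSV alongside the trained networks).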
### Generating mean, maximum and minimum AF global maps
We applied the ANNs model to produce 30 arc seconds maps with mean, maximum and minimum annual flow from 1960 through 2015 (Data Citation 1). For each grid cell, we computed the AF metrics as the median across the outputs of 20 trained ANNs and back-transformed the values to m³·s⁻¹.
We upscaled the 30 arc seconds layers to 5 and 30 arc minutes resolutions, in order to serve potential coarser-grain applications. We based the upscaled output on the 5 and 30 arc minutes flow direction grids produced by applying the dominant river tracing (DRT) algorithm to the same 30 arc seconds flow direction layer used in this study [38,39]. The 5 and 30 arc minutes flow direction grids are freely available for download at http://files.ntsg.umt.edu/data/DRT/. We upscaled the 30 arc seconds streamflow values by choosing the value of the cell that minimized the differences in upstream-drainage area between the native 30 arc seconds and the coarser resolution grid cell. For the 5 arc minutes grids it was necessary to employ a one-cell search radius to avoid losing connectivity.
### Code availability
The code used to generate the covariate data, geo-reference the monitoring stations, train the ANNs and generate the flow maps (Data Citation 1) was written and run in R version 3.3.2. TauDEM tools [52] were used to produce the catchment-specific covariate layers and GDAL library [58] functions were employed to handle the analyses on large raster data. The scripts are available on request.
The ensemble of trained ANNs are available as R objects (.rds) and as Portable Model Markup Language (PMML) objects for cross-platform compatibility (.pmml, http://dmg.org). The parameters used for the Box-Cox transformation and standardization of the variables employed by the ANNs are also available in CSV format.
## Data Records
The FLO1K dataset is a set of gridded layers packed as NetCDF-4 files freely available for download (Data Citation 1). For each of the three AF metrics, 56 yearly layers are available from 1960 through 2015, yielding a total of 168 layers. Each non-null cell represents the potential streamflow in m³·s⁻¹, stored as 32-bit floating point. Layers are in the WGS84 coordinate system with a cell size of 30 arc seconds (~1 km) and a global extent, including all continents except for Antarctica (90°N to 90°S latitude and 180°W to 180°E longitude). In addition, upscaled data are available at 5 and 30 arc minutes.
## Technical Validation
To evaluate the quality of the FLO1K maps, we ran a 10-fold cross-validation for each of the 20 ANN runs, such that each observation was included in the test set once, splitting the folds by stations. We assessed the overall map quality with R² and RMSE calculated on log-transformed values to evaluate the performance across the full spectrum of streamflow values (10⁻³ to 10⁵ m³·s⁻¹). Cross-validation results showed high agreement between training (90%) and independent testing (10%) data, with negligible variation among the replicates (Table 2).
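Evaluating on log-transformed flows weights a factor-of-two error the same at 0.01 m³·s⁻¹ as at 10,000 m³·s⁻¹. A Python sketch of the two scores (illustrative; zero flows would need masking or a small offset in practice):

```python
import math

def log_r2_rmse(obs, pred):
    """R^2 and RMSE computed on log10-transformed flow values."""
    lo = [math.log10(v) for v in obs]
    lp = [math.log10(v) for v in pred]
    n = len(lo)
    rmse = math.sqrt(sum((a - b) ** 2 for a, b in zip(lo, lp)) / n)
    mean_lo = sum(lo) / n
    ss_res = sum((a - b) ** 2 for a, b in zip(lo, lp))
    ss_tot = sum((a - mean_lo) ** 2 for a in lo)
    return 1.0 - ss_res / ss_tot, rmse
```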
We assessed the uncertainty per grid cell resulting from the sub-sampling of the monitoring stations, by computing the coefficient of variation (CoV) over the 20 replicates. Uncertainty was very low (CoV < 0.5) for the main river stems globally and smaller reaches in wet regions (Fig. 4). We found higher uncertainty (higher CoV values) for low streamflow values in dry areas, e.g., the upper basin of the Nile (central inset of Fig. 4). These higher CoV values likely reflect the lower number of streamflow observations available for calibrating the ANNs in these areas. The highest CoV values (> 3.5) were found in grid cells with a low number of upstream grid cells (typically <5) in dry areas. In these grid cells, most of the ANN replicates yielded zero-flow values whereas one or few replicates yielded close-to-zero values, resulting in a low mean yet large CoV across the 20 replicates.
We checked for potential bias in streamflow estimates in the northern hemisphere due to snowmelt delays, e.g., the contributing effect of snowfall in November-December of the previous year on the streamflow in May-June. To this end, we generated streamflow maps based on the US water year (November-October) for stations north of 40°N and compared their performance to the original (calendar year-based) FLO1K maps. We tuned the ANNs ensemble and computed the streamflow fields adopting the US water year for both the streamflow data and the climate input variables. Differences in R² between models based on calendar versus US water year were smaller than 0.01 and therefore considered negligible (Table 3).
## Usage notes
The FLO1K dataset reports the potential streamflow in m³·s⁻¹ in each grid cell, i.e., the discharge that would occur if there were a natural watercourse. To avoid confusion, we emphasize that the estimates represent volumetric streamflow rather than specific runoff. As such, the estimates cannot directly be compared with outputs from climate or land surface models without a streamflow routing component.
We refrained from filtering the output to the actual stream network because there are multiple methods for stream network delineation [59-66], which users of FLO1K may want to select or refine according to their needs. For global-scale analyses one might adopt an arbitrary upstream catchment area threshold in order to delineate the network (e.g., 25 upstream grid cells as in HydroSHEDS [35]), as to our knowledge more refined methods have not yet been developed/tested.
The estimated maximum and minimum flow values for a given year reflect the highest and the lowest monthly values of that year. This does not give an indication about which months of the year belong to the maximum or minimum flow. The corresponding months might change from year to year based on the yearly distribution of the precipitation.
Users of the upscaled streamflow grids should keep in mind that these are contingent on the respective DRT flow direction layers [38,39]. Further, the accuracy of the upscaled grids has not been evaluated.
https://www.mendeley.com/research/potential-contribution-open-lead-particle-emissions-central-arctic-aerosol-concentration/ | # On the potential contribution of open lead particle emissions to the central Arctic aerosol concentration
Atmospheric Chemistry and Physics ()
#### Abstract
We present direct eddy covariance measurements of aerosol number fluxes, dominated by sub-50 nm particles, at the edge of an ice floe drifting in the central Arctic Ocean. The measurements were made during the ice-breaker borne ASCOS (Arctic Summer Cloud Ocean Study) expedition in August 2008 between 2°-10°W longitude and 87°-87.5°N latitude. The median aerosol transfer velocities over different surface types (open water leads, ice ridges, snow and ice surfaces) ranged from 0.27 to 0.68 mm s⁻¹ during deposition-dominated episodes. Emission periods were observed more frequently over the open lead, while the snow behaved primarily as a deposition surface. Directly measured aerosol fluxes were compared with particle deposition parameterizations in order to estimate the emission flux from the observed net aerosol flux. Finally, the contribution of the open lead particle source to atmospheric variations in particle number concentration was evaluated and compared with the observed temporal evolution of particle number. The direct emission of aerosol particles from the open lead can explain only 5-10% of the observed particle number variation in the mixing layer close to the surface.
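The transfer velocities quoted here come from the standard eddy-covariance relation: the turbulent particle flux is the covariance of vertical-wind and concentration fluctuations, and dividing by the mean concentration gives a velocity. A minimal Python sketch of that relation (illustrative only, assuming already detrended, synchronised series; the sign convention taking positive values as deposition is an assumption):

```python
def transfer_velocity(w, n):
    """Aerosol transfer velocity v = -<w'n'> / <n> from paired series of
    vertical wind speed w (m/s) and particle number concentration n.
    Positive result = net deposition (downward flux)."""
    size = len(w)
    wm = sum(w) / size
    nm = sum(n) / size
    flux = sum((wi - wm) * (ni - nm) for wi, ni in zip(w, n)) / size  # <w'n'>
    return -flux / nm
```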
https://portlandpress.com/biochemj/article-abstract/198/2/385/17986/The-nuclear-oestrogen-receptor-in-the-female-rat?redirectedFrom=fulltext | Oestradiol administration to immature or ovariectomized rats has been reported to increase the uterine content of long-term nuclear oestrogen receptors. However, in the intact adult female rat, oestradiol administration did not increase the concentration of long-term nuclear oestrogen receptors at all phases of the oestrous cycle. Progesterone administration to rats in late dioestrus did not affect the concentration of uterine nuclear oestrogen receptors 24 h later, although it did prevent the normal cyclic increase at pro-oestrus in the concentration of hypothalamic nuclear oestrogen receptors. Our results therefore show that in the intact adult rat, factors other than the concentration of progesterone or oestradiol determine the nuclear concentration of oestrogen receptors in the uterus. They also demonstrate differences between neural and non-neural tissues in the regulation of oestrogen-receptor interactions.
https://www.physicsforums.com/threads/paul-dirac.412210/ | # Paul Dirac
1. Jun 24, 2010
### James Reason8
Good book: The Strangest Man by Graham Farmelo, a biography of Paul Dirac, the man behind antimatter. It covers not just physics but also how things generally were in the 1900s, including the wars and politics.
2. Jun 24, 2010
### Dickfore
Paul Dirac was born in 1902, so he was at most in 2nd grade by the end of 1900's.
3. Jun 24, 2010
### James Reason8
Hey, besides that, the book tells about his father Charles, and the first thought of antimatter was in the late 1800s, not by him.
4. Jun 24, 2010
### Dickfore
True, but those were associated with ideas about matter with negative gravity and aether theories as sinks in the flow of aether as opposed to sources (squirts) which were interpreted as ordinary matter.
Both of these ideas were discarded by the time Dirac proposed his own theory. The most important aspect of matter and antimatter is that there is a gap between the two states equal to 2mc², which can be considered a consequence of Special Relativity - a theory which made aether obsolete. Also, antimatter has the same gravitational properties as ordinary matter.
5. Jul 8, 2010
### Theorem.
The book is one of my favorites. It is very engaging and not technical. I think when the topic creator referred to the 1900s he was referring to the 20th century in general, which is entirely correct. There are few books I have read (and I read a lot) that rival this one.
6. Aug 1, 2010
### Ken Natton
Following George Jones’s recommendation, I have now obtained a copy of this book and I am still in the early chapters, but already I can quite understand the enthusiasm expressed by the other posters on this thread. Of course, I want to read the whole thing before I have too much to say about the book itself, but there was one particular passage I have already read that was so striking and actually amusing for reasons not particularly related to the subject of this book itself, that I thought it worth an early mention.
I have seen how many of the more established contributors to these forums like to have signatures to their posts that often contain some pithy aphorism, often one that has something to say about their view of less experienced contributors, and their sometimes over hastiness to post. In the early chapters of the book, Farmelo is dealing with all of the disparate influences on Dirac in his formative years and makes mention of one individual called Charlie Broad who was Professor of Philosophy at the University of Bristol. If you feel your defences rising at that mention of Philosophy then relax, Farmelo nicely explains how Broad was one of the more reliable lecturers on Relativity Theory in its early days and as such someone who had a rare ability to command Dirac’s serious attention, even though Broad himself later referred to Dirac as ‘…one whose shoelaces I was not worthy to unloose…’ There are a couple of direct quotes from Broad that Farmelo offers that could stand very well as signature aphorisms. Ones that also nicely demonstrate how little has changed in the ninety something years since Relativity Theory burst onto the scene. (Yes I know the first paper was 1905, but late 1919 is when Relativity Theory first started to receive wide attention). I apologise if I’m stating the obvious, but just trying to anticipate and prevent any objections, let me just mention that one does need to catch the dry tone of the first one in particular to properly understand it. In any case, just consider these:
‘A philosopher who regards ignorance of a scientific theory as insufficient reason for not writing about it cannot be accused of complete lack of originality.’
‘popular expositions of the Theory are either definitely wrong, or so loosely expressed as to be dangerously misleading; and all pamphlets against it – even when issued by eminent Oxford tutors – are based on elementary misunderstandings.’
I used to engage with another forum on Darwin & Evolution which was, inevitably, plagued by anti-evolutionists. Oh how well that second quote could be applied to them! I like this Charlie Broad character. I like him a lot.
7. Aug 1, 2010
### yossell
Nobody who doesn't write with insufficiently few double negatives cannot be regarded without a certain amount of lack of disrespect.
8. Aug 1, 2010
### Ken Natton
Ah yes yossell, an absolute masterpiece. I trust that I can expect to see your own version as your signature aphorism then? You have succeeded in making me read Broad’s quote a little differently, but I still think it was underpinned by a similar dry, pretension bursting intention as your own. My impression, from Farmelo’s account of him, is that Broad’s outlook was of the same school of thought as your own.
9. Aug 1, 2010
### yossell
I agree - I actually felt guilty and cheap after posting that comment; in those days, for a certain kind of wry, independently-schooled Oxbridge academic, it wasn't unreasonable (sic) to speak in such a way.
Once I've worked out what my own aphorism actually says, if I find I agree with it, I may well include it as my signature.
10. Aug 1, 2010
### Ken Natton
Oh no, not at all yossell. I'm loath to take this discussion too far from the book about Dirac, but I do intend to post more about that when I've read it. I've already progressed a few chapters since this morning and it is deeply compelling.
But meantime, let me make it clear. While Broad's comment gave me a wry inward smile, yours caused me to laugh out loud such that my wife wanted to know what I was laughing at. I then had to explain it to her knowing that there was no way she would have the faintest idea why I found it so funny. You've got a lot to answer for yossell.
Last edited: Aug 1, 2010
11. Aug 1, 2010
### epenguin
I made some comments on Dirac earlier to say that he made one contribution that seems to me quite unique: predicting from fundamental (though new) principles, and even aesthetic criteria, not just the behaviour of simple constituents of the world, but their existence.
https://www.physicsforums.com/showpost.php?p=2459580&postcount=146
The book suggests he and his time were right for each other and for this discovery, that later as well as earlier his style would not have (and did not) yield such dramatic results. (This could be said for Einstein too).
From the book I now know another thing I would have had no way of guessing. He went to the same school and at the same time as Cary Grant.
Not a lot of people know that.
Nor did they.
Last edited: Aug 2, 2010
12. Aug 2, 2010
### Ken Natton
The idea that scientists tend to make their greatest discoveries at a relatively young age is something that has been much discussed and is a near universal truth. There are some examples of significant discoveries made by scientists at an older age, but that their greatest work was already completed by the time they were thirty is nothing unique to Dirac and Einstein. Equally, while we do sometimes hear of a particular discovery or an individual scientist being described as ahead of its or their time, the fact that such an idea is worthy of mention demonstrates that it is far more common for scientific discoveries and for scientists themselves to be considered of their time.
Farmelo gives a vivid account of how ripe many of the concepts of Quantum Physics were for discovery around the time that Dirac was producing his greatest work. I don’t have the text with me at the moment, but only last night I read Farmelo’s analogy of the (paraphrasing) bags of gemstones split open by Heisenberg and Schrödinger and the race that was on to find the diamonds. Several times Dirac thought he had made a significant advance only to find that someone else, sometimes more than one other, had reached a similar conclusion and published before him. It was cold comfort to Dirac to be credited as having stated the concept more concisely and elegantly than his rival. We all know the concept of the undefinability of a particle’s position and momentum as Heisenberg’s Uncertainty Principle. But until I read about it yesterday, I for one was completely unaware of how very nearly it was Dirac’s Uncertainty Principle.
And at best epenguin, your assessment of Dirac as a ‘miserable git’ is unsympathetic. As I was reading about it last night, I was thinking about how the people at Niels Bohr’s Copenhagen Institute responded to Dirac’s unusual character. Of course they would have found him off-putting, but if they had understood his inability to make an emotional connection with his own parents, they might have been more understanding of his inability to make any kind of connection, other than a purely scientific one, with them. I am not so far into the book yet, so I don’t know, but I am assuming Farmelo himself will address this issue at some point. But from the book’s cover I know that there is some question about whether or not Dirac was actually autistic. The notion that he might have been stems entirely from the accounts of just how severely he was emotionally crippled. Clearly his behaviour goes way beyond what would be considered ‘normal’. But it is perhaps also significant to recognise that he was no egotist. He made no demands whatever on anyone else. The only austerity he imposed was entirely upon himself. People’s only objection to him was because of the demands they placed on him to connect with them. And if they disliked the result, that was only because they didn’t understand that he simply wasn’t capable of it. Fortunately, some of the prominent figures around him did understand.
13. Aug 8, 2010
### Ken Natton
It doesn’t seem terribly likely that I am going to generate much discussion about this book. And I make no assumptions that anyone cares much about what I think of it. I might even have been concerned that posting about this book might be frowned on, given just how strongly Physics Forums seeks to defend its position as a purely scientific and not in any way a philosophical group of forums. Except that, of course, I didn’t start this thread, and more particularly, I was led to this thread, and indeed to the book this thread is about, by one of the forum mentors. In any case, having read it, I have some points that I think are worthwhile making, so what the hell. If this post echoes around an empty chamber then, so be it.
For a biography written with quite such acute judgement and unerring professionalism as this one, it is quite surprising to have Farmelo afford real insight into his own personal response to Dirac, into his researches for this book, and even a little insight into his own career in the penultimate chapter. The first person pronoun suddenly appears unexpectedly when you are nearly through what is a deep and penetrating third person account that does demand quite a bit of effort from its reader. It's not that it is unwelcome or intrusive, it just takes you by surprise a little bit. It is a real change of mood just as you are reaching the end.
From that chapter, it emerges that Farmelo is himself a student of theoretical physics. I suppose that it would seem that he would have to be to write any kind of an insightful biography of a man who possessed one of the twentieth century’s most exceptional scientific minds. But it is clear enough that Farmelo is a bona fide writer, and while he does undoubtedly deal with the science very skilfully, I would suggest that anyone coming to this book looking for a deep technical understanding of Dirac’s scientific work is likely to be disappointed. It does do a very comprehensive job of laying out the precise context for each of Dirac’s important scientific achievements. Personally, I find that fascinating and believe in the value and importance of that insight. But I can understand that many would not share that view. Dirac, for example. It is clear that Einstein was a genuine scientific hero for Dirac, but I have the strongest feeling that he never read a biography of Einstein. In any case, it is clear enough to me that the exercise of writing this book was, and the value and benefit in reading it is entirely philosophical.
This goes to the heart of my own personal drive to understand. When I hear some of the more seemingly outlandish, counter-intuitive, sometimes scepticism provoking scientific ideas, I don’t just want to have the technical details explained to me, I always feel a compulsion to understand what generated the idea, what led serious, dispassionate, rational scientists to offer such a challenging idea as a scientific explanation. That is why I found such a resonance in the account of Dirac’s efforts to find out more about relativity theory when few courses were available and very little literature about it was reliable. Farmelo tells us that
‘[Dirac wanted to] find an accessible technical account of the [relativity] theory that would explain, step by step, how Einstein had developed his ideas.'
And yet, and yet, when Dirac himself later became an educator, both as an author of text books and as a lecturer, his reputation was built on a straight to the heart of the matter approach that was most appreciated by the best students. Farmelo gives a telling account of the occasion when Niels Bohr first received a copy of Dirac’s textbook The Principles of Quantum Mechanics.
‘Even if the author’s name were not on the cover, his identity would have been obvious to Bohr from a quick flick through: the unadorned presentation, the logical construction of the subject from first principles, and the complete absence of historical perspective, philosophical niceties and illustrative calculations.’
A little like the music of Bach, the most telling evidence of just how good his work was is the response it drew from those best placed to know.
‘Dirac’s peers marvelled at its elegance and at the deceptively plain language, which somehow seemed to reveal new insights on each reading, like a great poem.’
And for me, the reality of the impossibility of penetrating Dirac’s great contribution to scientific literature comes in Farmelo’s next comment:
‘The book had been written with no regard for his readers’ intellectual shortcomings…’
But the book is not just a comprehensive account of Dirac’s own life, it also paints a very vivid picture of many of Dirac’s contemporaries. Heisenberg and Schrödinger, Bohr and Born, Kapitza and Ehrenfest, Pauli and Fermi, Oppenheimer and Feynman. It is interesting to think that biographies of nearly every one of these individuals could serve as a different perspective on virtually the same basic story – that of the early development of Quantum Physics. Many other prominent figures make their appearances, like Edward Teller and Eugene Wigner – the latter brother to Dirac’s wife.
And also in that same penultimate chapter, Farmelo does, finally, give his own consideration of whether or not Dirac was truly autistic. He considers the key criteria for a modern diagnosis of autism and notes that Dirac seems to fit every one. Except that autism is a condition from which its sufferers do not escape, and the story of Dirac’s life that the book has just finished telling does reflect the reality that Dirac did mellow in later life. Not just because of a marriage to someone that it was his great and unlikely good fortune to find, but because of the realities of being a professor at Cambridge that necessitated the learning of effective communication skills, and because of his membership of a scientific community that also made its contribution to drawing him out of himself. Ultimately I suppose, Dirac’s upbringing was just a prime example of the kind of thing that Larkin’s infamous poem refers to and his extreme introspection in his younger days was caused by demons that he never totally exorcised, but ones he eventually learned to live with and to control. 
# Disability Registration State of Children With Cerebral Palsy in Korea
## Article information
Ann Rehabil Med. 2018;42(5):730-736
Publication date (electronic) : 2018 October 31
doi : https://doi.org/10.5535/arm.2018.42.5.730
1Department of Physical Medicine and Rehabilitation, National Health Insurance Service Ilsan Hospital, Ilsan, Korea
2Research Institute, National Health Insurance Service Ilsan Hospital, Ilsan, Korea
3Department of Statistics, Korea University, Seoul, Korea
4Department of Physical Medicine and Rehabilitation, Inje University Ilsan Paik Hospital, Ilsan, Korea
Corresponding author: Jiyong Kim Department of Physical Medicine and Rehabilitation, Inje University Ilsan Paik Hospital, 170 Juhwa-ro, Ilsanseo-gu, Goyang 10380, Korea. Tel: +82-31-910-7885, Fax: +82-31-910-7786, E-mail: halwayskim@gmail.com
Received 2017 July 14; Accepted 2017 November 24.
## Abstract
### Objective
To investigate the disability registration state of children with cerebral palsy (CP) in Korea.
### Methods
Based on the National Health Information Database, the disability registration state was examined for brain lesion disability and other possible complicated disabilities accompanying brain disorder in children diagnosed with CP aged up to 5 years old who were born between 2002 and 2008.
### Results
Of children diagnosed with CP, 73.1% were registered as having brain lesion disability for the first time before they turned 2 years old. The younger the children at first registration, the more likely they were to have 1st or 2nd degree disability; this likelihood decreased as registration age increased. The percentage of children registered as having overlapping disabilities was 7%–20%.
### Conclusion
It is important to establish a more accurate standard to rate disability and provide national support systems for children with CP with various severities and multiple disabilities. By reorganizing the current disability registration system for pediatric brain lesions, the system could serve as a classification standard to provide medical and social welfare services.
## INTRODUCTION
In 1990, the Americans with Disabilities Act (ADA) was enacted to protect the rights of the disabled. Currently, approximately 49 million Americans (15.4% of the population) are protected by the ADA. The term ‘disability’ refers to a condition in which a person’s daily life is limited due to a physical or mental problem. A person who merely has a history of such a problem is not considered to currently have a disability, because that person is not categorized as having any specific disorder or illness [1].
In Korea, the Welfare Law for Mentally and Physically Disabled People was legislated for the first time in 1981. It was revised in 1989 and renamed the Act on Welfare of Persons with Disability. After several revisions, the act reached its current form [2]. At the time of the enactment of the Welfare Law for Mentally and Physically Disabled People, only 5 types of disorders and illnesses were legally identified as disabilities (legal disabilities): physical disability, visual disability, hearing disability, speech and language disability, and intellectual disability. In the revision of 2000, the following 5 types of disability were added: brain lesion disability, developmental disorder, psychiatric disorder, kidney disease, and heart disease. Finally, the number of legal disability types was increased to 15 after 5 more categories (respiratory disorders, liver disease, facial disfigurement, intestinal and urinary stoma, and epilepsy) were added. Developmental disorder disability was renamed autism. According to the Disability Registration report published by the Ministry of Health and Welfare at the end of 2014, a total of 2,494,460 disabled people were registered in Korea, accounting for 4.8% of the total population [3].
In 1981, children with cerebral palsy (CP) were categorized as physically handicapped according to the Welfare Law for the Mentally and Physically Disabled People. CP was re-categorized in 1991 as a physical disability after revision of the Act on Welfare of Persons with Disabilities. The disability of brain lesion was established as a new category in the amendment of 2000, in which CP was recategorized [3,4]. Currently, the degree of brain lesion disability is rated from 1 to 6, with 6 being the mildest. This rating system is based on the same standard as for adults. Thus, such a rating may not fully reflect the disability characteristics of children. On the other hand, when a person has more than one type of disability, the condition is referred to as overlapping disabilities and the disability rating is determined according to the standard of combined disabilities (Ministry of Health and Welfare Notice No. 2015-188). In Korea, the Ministry of Health and Welfare regularly publishes statistics of the registration state of the disabled. It is difficult to collect specific information about the registry of children with CP since the registration statistics include not only CP but also other adult-onset encephalopathies. Therefore, the aim of this study was to analyze the disability registration state of children with CP based on medical claims made to the National Health Insurance Service (NHIS).
## MATERIALS AND METHODS
The NHIS gathers data for the National Health Information Database (NHID) for the entire population of Korea. The NHID consists of data of healthcare utilization, health screening, socio-demographic factors, and mortality. This study utilized data related to medical claims made to the NHIS between 2003 and 2013. Of infants born between January 1, 2002 and December 31, 2008, children with a history of at least two medical claims under CP diagnostic codes (Korean Standard Classification of Diseases [KCD]) at 2 years after birth were classified as CP cases. These cases were selected as subjects of this study. CP diagnostic codes are shown in Table 1. Only codes for KCD-6 are listed. However, in the actual study, codes from different time periods were considered. The type and degree of disability of children with CP were identified based on information provided by insurance qualifications. In this study, disability types related to CP including physical, brain lesion, visual, hearing, speech and language, intellectual, autism, and epilepsy were considered. The information on insurance qualifications refers to NHIS-registered information such as joining qualification of NHIS, address, qualification status, date of disqualification, and the type and degree of disability. This study reviewed the disability registration status of children up to 5 years old who were diagnosed with CP and born between 2002 and 2008.
CP diagnostic codes
## RESULTS
A total of 13,591 children born between 2002 and 2008 were diagnosed with CP. Among these, 61.3% were registered as having a disability by the age of 5 years. The most common type of disability for registration was brain lesion, accounting for 70.4% of children with CP. This percentage did not change by year during the investigation period. An average of 18.1% of subjects were registered as having intellectual disability which was the 2nd most common type. This percentage tended to decrease over time. Originally, children with CP were categorized as having a physical disability. After the introduction of brain lesion disability as a new disability type in 2000, children with CP were registered as having a brain lesion disability. The proportion of children registered as having a physical disability decreased from 4.1% of those born in 2002 to 2.3% of those born in 2008. Although some children with CP are still registered as having a physical disability, the percentage is gradually decreasing (Table 2).
Disability registration by year
We analyzed the percentage of children with CP who were registered as having a brain lesion disability according to their age. The proportion of children registered before 2 years of age continued to decrease up to 2007. It then seemed to increase in 2008. For children between ages of 2 and 4, this proportion showed an opposite trend (Fig. 1). In the first-time disability registration for brain lesion, 73.1% of children were registered by the age of 2 (Table 3). However, 67.6% of disabilities were rated as 1st degree disability regardless of first-time disability registration age. When first-time registration age increased, percentages of 1st and 2nd degree disabilities tended to decrease while those of 3rd–6th degree disabilities tended to increase (Fig. 2). To verify data of this study, we compared our results with those of the annual disability registration report published by the Ministry of Health and Welfare. Numbers of children registered as having a brain lesion disability before the age of 5 years were similar between the report (3,166) and our data (3,481) at the end of 2007.
Percentage of cerebral palsy children registered as having brain lesion disability by age.
First-time disability registration of brain lesion disability
Brain lesion disability registration state according to the degree of disability and first-time registration age. Degree of brain lesion disability is rated from 1st to 6th, with 6th being the mildest. Age 0 (≥0 and <12 mo), age 1 (≥12 and <24 mo), age 2 (≥24 and <36 mo), age 3 (≥36 and <48 mo), age 4 (≥48 and <60 mo), and age 5 (≥60 and <72 mo).
CP can accompany not only a physical disability, but also other disabilities such as intellectual, visual, hearing, speech, and language. To acquire data for these accompanying disabilities, registrations of multiple overlapping disabilities were analyzed. Cases of visual, hearing, speech and language, intellectual, autistic, and epilepsy disabilities registered as accompanying a brain lesion disability were examined. In the case of only one disability accompanying brain lesion disability, intellectual disability was the most common (Table 4). The percentage of children registered as having overlapping disabilities of brain lesion and intellectual disability increased from 12.2% for children born in 2002 to 17.3% for children born in 2004. Since 2004, this percentage gradually decreased to 5.6% for children born in 2008. Only 0.1% of children born in 2002 showed speech and language disability accompanying brain lesion disability. This percentage gradually increased to 0.9% for those born in 2008. Other disabilities did not show any clear tendency. Some children registered two overlapping disabilities along with brain lesion disability, including cases of intellectual disability with autistic disability or intellectual disability with speech and language disability. However, according to the current rating guidelines for the degree of disability, the above two cases were excluded from the overlapping disability category. Thus, these cases were considered as registry errors.
Overlapping disability registration of children with brain lesion disability
## DISCUSSION
CP is a neurodevelopmental disorder caused by damage to the developing brain. It impacts patients throughout their life [5]. It is widely known that children with CP can survive through adulthood [6,7]. As their survival rate improves, concerns regarding their social costs also increase. In a previous study, Eunson [8] reported that the direct and indirect cost for a CP patient is approximately US $900,000. Kruse et al. [9] classified the lifetime cost of a CP patient into three categories: healthcare cost, productivity cost, and social cost. The calculated total cost was reported to be US $990,000 for a male CP patient and US $921,000 for a female CP patient [9]. Since a large social cost is incurred throughout the life of a CP patient, precise CP registry data are very important for planning health and welfare related social policy [9]. In Korea, the Ministry of Health and Welfare regularly publishes statistics of the disability registration state. However, it is difficult to collect specific information about the registry of children with CP since the registration statistics include not only CP but also other adult-onset encephalopathies. Therefore, considerable effort was made in this study to gather detailed information about CP in children based on the government statistics. A significant result was obtained in this study based on medical claim data of the entire national population from the NHIS despite the lack of pre-existing data on the disability registration state for pediatric CP.
Results of this study showed that more than 70% (73.1%) of children with CP were registered for the first time by 2 years of age. A similar trend was found in the reported statistics from the Ministry of Health and Welfare. In the overall disability registration state, the number of children with physical, visual, hearing, speech and language, and intellectual disabilities increased during the early elementary school period. On the other hand, the number of children registered with brain lesion disability increased from the age of 1 [10]. When the disability rating criteria were revised in 2009, children and adolescents could be registered with brain lesion disability from the age of 1. However, prior to the 2009 revision, the age at which brain lesion disability was assessed was not specified. As time passed, the proportion of registrations among children aged 24–48 months increased compared with earlier registration data. Since this study was conducted before the revision of the disability rating criteria, such an increase might be less related to the revision. However, since the proportion of severe CP has decreased over time, the need for early disability registration might be decreasing.
The majority of patients (67.6%) were rated as 1st degree of disability at the time of first registration. As the first-time registration age increased, 3rd–6th degree rating tended to increase. However, the 1st degree is the most common group. Thus, the degree of disability does not properly reflect the severity of CP disability. Today, the degree of CP is rated according to the level of strength in upper and lower extremities and the level of independence and performance in walking and daily activities using the Modified Barthel Index (MBI). However, these criteria are based on adult brain lesion disability. They do not reflect characteristics of pediatric brain lesion disability. As seen in results of this study, most children are registered for the first time before the age of 2, at which stage even normally developing children need considerable support from their caregivers in their daily activities. Therefore, the severity of disability cannot be determined according to independent performance of daily activities. The degree of disability can also alter as a child grows. If these characteristics specific to neurodevelopment disorders are not considered, it is difficult to correctly distinguish the degree of disability in children. To address this limitation, the rating guidelines of the degree of disability were partially revised in 2013 (Ministry of Health and Welfare Notice No. 2013-56) where the Gross Motor Function Classification System, Gross Motor Function Measure, and Bayley Scales of Infant Development were used for brain lesion disability rating in children aged 1 to 7. Although there are no specific standards for determining the degree of disability by integrating all evaluations, it is critical to provide a systematic basis with differentiated rating standards for children that are different from the adult disability rating system.
Children with CP can suffer various medical problems associated with movement and posture along with intellectual disability and sensory disabilities such as hearing and visual disabilities [5]. A study by Kirby et al. [11] on 8-year-old children with CP showed that 35% of children with CP also had epilepsy of various severities. According to a report on the status of disabled people published by the Korea Institute for Health and Social Affairs in 2014 [12], 7.1% of children with CP had epilepsy. This number significantly differed from the result of the present study which showed 0.0%–0.6% of children were registered as having overlapping epilepsy disability each year. Also, according to that report [12], of all children registered with CP who also had an overlapping disability, 49.8% had speech and language disability, 23.4% had intellectual disability, 15.7% had visual disability, and 11.7% had hearing disability. These results differed from those of the present study in which the status of overlapping disabilities was analyzed for children up to 5 years old. This implies that accompanying disabilities such as speech and language or intelligence might become more severe as children age. Since only one type of disability is usually registered in the current system, it is difficult to determine the exact duplication of the overlapping disability status. Voorman et al. [13] have reported that social function and communication of CP patients are limited when they have additional seizures or speech and language disabilities. Tan et al. [14] have studied CP patients aged from 1 to 24 years and found that, for patients who have the same motor function, CP patients with overlapping intellectual disability show less social participation. Similarly, Fauconnier et al. [15] have reported that patients who suffer accompanying pain, visual impairment, and eating disorders show decreased social participation. 
As seen in these studies, when a CP patient has accompanying disabilities other than physical impairment, even if the disability is mild, the patient shows restricted daily activities and social participation. Even if the severity of physical impairment from CP is mild, when the number of accompanying disabilities increases, the severity of the degree of overall disability also increases. However, the current disability rating regime lacks a system to reflect these overlapping disability issues appropriately. Thus, it is impossible to distinguish between CP children with complicated multiple problems and CP children without overlapping problems through current disability registry data. It is impossible to determine those children who need more help from a social welfare point of view either. For this reason, a more sophisticated disability registration system for CP is needed.
Our study has several limitations. First, some CP children might be excluded because subjects of the present study were determined based on medical claims under CP diagnostic codes. These data are thus limited to data of the NHID. However, at present, it is impossible to investigate the status of disability registration except for the use of observational studies using the national big data. A prospective study using a Korean CP cohort is needed in the future. Second, our data were restricted to children born between 2002 and 2008 based on medical claims from 2003 to 2013. Third, analysis of the later data collected after the tracking period was limited since the observation period of the data was restricted to the age of 5. Under the current Welfare of Persons with Disability Act, patients diagnosed with brain lesion disability before the age of 6 are required to be re-evaluated at an age between 6 and 12. As the degree of brain lesion disability may change when children age, the progress of disabilities needs to be analyzed according to increase of tracking period.
In conclusion, this study analyzed the disability registration status of children with CP during their infant and pre-elementary school years based on the NHID. Our results showed that the current disability registration system neither accurately reflects the severity of CP nor assesses its complex associated problems. A reorganized disability registration system for pediatric brain lesion disability could serve as a classification standard for providing medical and social welfare services to CP patients.
## Acknowledgements
This study was supported by a grant (No. NHIS-2017-1-225) from the National Health Insurance Ilsan Hospital. This study used NHIS-NSC data (No. NHIS-2017-1-225) collected by the National Health Insurance Service.
## References
1. Gostin LO. The Americans with Disabilities Act at 25: the highest expression of American values. JAMA 2015;313:2231–5.
2. Kim S. The disabled welfare act revision bill and its significance. Health Welf Policy Forum 2007;127:34–40.
3. Hwang SK. Understanding the new international classification of disability and introduction of the concept of functional disability. Q J Labor Policy 2004;4:128–49.
4. Kim M, Lee I. The effect of functional status and personal assistance services uses on perceived independence among people with physical disabilities. J Korean Social Welf Adm 2007;20:53–83.
5. Bax M, Goldstein M, Rosenbaum P, Leviton A, Paneth N, Dan B, et al. Proposed definition and classification of cerebral palsy, April 2005. Dev Med Child Neurol 2005;47:571–6.
6. Bottos M, Feliciangeli A, Sciuto L, Gericke C, Vianello A. Functional status of adults with cerebral palsy and implications for treatment of children. Dev Med Child Neurol 2001;43:516–28.
7. Hutton JL, Pharoah PO. Effects of cognitive, motor, and sensory disabilities on survival in cerebral palsy. Arch Dis Child 2002;86:84–9.
8. Eunson P. The long-term health, social, and financial burden of hypoxic-ischaemic encephalopathy. Dev Med Child Neurol 2015;57 Suppl 3:48–50.
9. Kruse M, Michelsen SI, Flachs EM, Bronnum-Hansen H, Madsen M, Uldall P. Lifetime costs of cerebral palsy. Dev Med Child Neurol 2009;51:622–8.
10. Ministry of Health and Welfare. Disabled Enrollment. Sejong: Ministry of Health and Welfare; 2014.
11. Kirby RS, Wingate MS, Van Naarden Braun K, Doernberg NS, Arneson CL, Benedict RE, et al. Prevalence and functioning of children with cerebral palsy in four areas of the United States in 2006: a report from the Autism and Developmental Disabilities Monitoring Network. Res Dev Disabil 2011;32:462–9.
12. Kim S, Lee Y, Hwang J, Oh M, Lee M, Lee N, et al. Survey of disabled persons 2014. Sejong: Ministry of Health and Welfare; 2014.
13. Voorman JM, Dallmeijer AJ, Van Eck M, Schuengel C, Becher JG. Social functioning and communication in children with cerebral palsy: association with disease characteristics and personal and environmental factors. Dev Med Child Neurol 2010;52:441–7.
14. Tan SS, Wiegerink DJ, Vos RC, Smits DW, Voorman JM, Twisk JW, et al. Developmental trajectories of social participation in individuals with cerebral palsy: a multicentre longitudinal study. Dev Med Child Neurol 2014;56:370–7.
15. Fauconnier J, Dickinson HO, Beckung E, Marcelli M, McManus V, Michelsen SI, et al. Participation in life situations of 8-12 year old children with cerebral palsy: cross sectional European study. BMJ 2009;338:b1458.
## Article information
### Fig. 1.
Percentage of cerebral palsy children registered as having brain lesion disability by age.
### Fig. 2.
Brain lesion disability registration state according to the degree of disability and first-time registration age. Degree of brain lesion disability is rated from 1st to 6th, with 6th being the mildest. Age 0 (≥0 and <12 mo), age 1 (≥12 and <24 mo), age 2 (≥24 and <36 mo), age 3 (≥36 and <48 mo), age 4 (≥48 and <60 mo), and age 5 (≥60 and <72 mo).
### Table 1.
CP diagnostic codes
Classification KCD-6
Spastic CP G8000, 8001, 9002, 8008, 8009, 8100-8103, 8109-8113, 8119, 8190-8193, 8199, 830-833
Dyskinetic CP G803
Ataxic CP G804
Unspecified G808
CP, cerebral palsy; KCD-6, Korean Standard Classification of Diseases 6th edition.
### Table 2.
Disability registration by year
Birth year Children diagnosed with CP Children registered with disability Brain lesion disabilitya) Physical disabilitya) Intellectual disabilitya) Autistic disabilitya)
2002 2,035 1,495 (73.5) 961 (64.3) 62 (4.1) 348 (17.1) 59 (2.9)
2003 1,997 1,364 (68.3) 894 (65.5) 66 (4.8) 293 (14.7) 45 (2.3)
2004 1,865 1,254 (67.2) 839 (66.9) 43 (3.4) 269 (14.4) 58 (3.1)
2005 1,944 1,171 (60.2) 863 (73.7) 51 (4.4) 161 (8.3) 39 (2.0)
2006 1,894 1,064 (56.2) 807 (75.8) 27 (2.5) 150 (7.9) 23 (1.2)
2007 2,039 1,114 (54.6) 811 (72.8) 34 (3.1) 181 (8.9) 27 (1.3)
2008 1,817 874 (48.1) 693 (79.3) 20 (2.3) 109 (6.0) 12 (0.7)
Values are presented as number (%).
CP, cerebral palsy.
a) These refer to children registered for each type of disability, including cases of overlapping disability.
### Table 3.
First-time disability registration of brain lesion disability
Age (mo) Number of children
≥0 and <12 1,240
≥12 and <24 1,601
≥24 and <36 1,449
≥36 and <48 896
≥48 and <60 542
≥60 and <72 140
### Table 4.
Overlapping disability registration of children with brain lesion disability
Birth year Intellectual Autistic Visual Hearing Speech and language Epilepsy
2002 117 (12.2) 6 (0.6) 2 (0.2) 4 (0.4) 1 (0.1) 3 (0.3)
2003 120 (13.4) 2 (0.2) 3 (0.3) 4 (0.4) 1 (0.1) 5 (0.6)
2004 145 (17.3) 7 (0.8) 6 (0.7) 4 (0.5) 0 (0.0) 0 (0.0)
2005 113 (13.1) 4 (0.5) 7 (0.8) 5 (0.6) 1 (0.1) 0 (0.0)
2006 86 (10.7) 4 (0.5) 2 (0.2) 1 (0.1) 4 (0.5) 0 (0.0)
2007 79 (9.7) 3 (0.4) 5 (0.6) 2 (0.2) 0 (0.0) 3 (0.4)
2008 39 (5.6) 1 (0.1) 1 (0.1) 3 (0.4) 6 (0.9) 0 (0.0)
Values are presented as number (%).
https://www.machinegurning.com/rstats/nearest_neighbour_classification/

### Matt Upson
Yo no soy marinero
# Nearest neighbour methods
### Overview
In my last post, I started working through some examples given by Hastie et al. in The Elements of Statistical Learning. I looked at using a linear model for classification on a randomly generated training set. In this post I'll use nearest neighbour methods to create a non-linear decision boundary over the same data.
## Nearest neighbour algorithm
There are much more learned folk than I who give good explanations of the maths behind nearest neighbours, so I won’t spend too long on the theory. Hastie et al define the nearest neighbour approach as:
$\hat{Y}(x) = \frac{1}{k}\sum_{x_i \in N_k(x)} y_i$

Here $k$ is the user-defined number of neighbours. Our prediction $\hat{Y}(x)$ is the mean of the responses of $N_k(x)$, the $k$ training examples closest to $x$.
How do we define this closeness? Hastie et al. simply use Euclidean distance:

$d(x_i, x) = \lVert x_i - x \rVert = \sqrt{\sum_j (x_{ij} - x_j)^2}$
So, simply put, all we need to do is look at the neighbours of a particular training example and take the average of their responses to create our prediction for that point.
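The recipe above (compute pairwise distances, keep the $k$ smallest, average the neighbours' labels, threshold at 0.5) is easy to sketch in any language. Here is a minimal NumPy version on synthetic two-cluster data; the data, the names, and Python itself are my own illustration, not code from this post:

```python
import numpy as np

def knn_predict(X, y, k=15):
    """Predict each training point's class as the mean of the 0/1 labels
    of its k nearest neighbours (Euclidean distance), thresholded at 0.5.
    As in the training-set predictions here, a point counts among its own
    neighbours (its distance to itself is zero)."""
    diffs = X[:, None, :] - X[None, :, :]      # pairwise coordinate differences
    D = np.sqrt((diffs ** 2).sum(axis=-1))     # m x m distance matrix
    idx = np.argsort(D, axis=1)[:, :k]         # k closest points per example
    y_hat = y[idx].mean(axis=1)                # neighbourhood average
    return (y_hat > 0.5).astype(int)

# Two well-separated Gaussian clusters, labelled 0 and 1.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
y = np.repeat([0, 1], 50)
G = knn_predict(X, y, k=15)
print((G == y).mean())  # training accuracy; effectively 1.0 on this easy data
```

The broadcasting trick in the first two lines builds the same distance matrix that `dist()` produces in R; `argsort` plays the role of `order()`.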
## R Walkthrough
As ever, the code to produce this post is available on github, here.
Using the data I generated in my previous post I’ll walk through the process of producing a nearest neighbour prediction.
Just to recap: this is a dataset with 300 training examples, two features ($x_1$ and $x_2$), and a binary coded response variable ($X\in\mathbb{R}^{300 \times 2}$, $y\in\{0,1\}$):
### Calculating Euclidean distance
The first thing I need to do is calculate the Euclidean distance from every training example to every other training example, i.e. create a distance matrix. Fortunately R does this very simply with the dist() command. This gives an $m \times m$ matrix.
I’m interested in the 15 nearest neighbour average, like Hastie et al., so I just need need to extract the 15 shortest distances from each of these columns. It helps at this point to break the matrix into a list using split(), with a vector element where each column was. This will allow me to use purrr::map() which has an easier syntax than other loop handlers like apply() and its cousins.
Now I need a small helper function to return the closest $k$ points, so that I can take an average. For this I use order().
This should return a vector element in the list containing the indices of $D$ which correspond to the $k$ closest training examples.
So far so good, the function returns us 15 indices with which we can subset $y$ to get our 15 nearest neighbour majority vote. The values of $y$ are then averaged…
…and converted into a binary classification, such that $G\in\{0,1\}$: where $\hat{Y}>0.5$, $G=1$, otherwise $G=0$.
### Intuition
Before looking at the predictions, now is a good point for a quick recap on what the model is actually doing.
For the training examples $(10, 47, 120)$ I have run the code above, and plotted out the 15 nearest neighbours whose $y$ values are averaged to get our prediction $G$.
For the right-hand point you can see that all of the 15 nearest neighbours have $y=1$, hence for our binary prediction $G=1$. The opposite can be said for the left-hand point: again there is a unanimous vote, and so $G=0$. For the middle point most of the neighbours have $y=1$, so although there is not unanimity, our prediction would be $G=1$.
You can imagine from this plot that, whilst varying $k$ would have little effect on the points that are firmly within their respective classes, points close to the decision boundary are likely to be affected by small changes in $k$. Set $k$ too low, and we invite variance; set $k$ too high, and we are likely to increase bias. I'll come back to this.
### Predictions
So how do the predictions made by nearest neighbours ($k=15$) match up with the actual values of $y$ in this training set?
In general: pretty good, and marginally better than the linear classifier I used in the previous post. In just $3$% of cases does our classifier get it wrong.
### Decision boundary
For this next plot, I use the class::knn() function to replace the long-winded code I produced earlier. This function allows us to train our classifier on a training set and then apply it to a test set, all in one simple function call.
In this case I produce a test set which is just a grid of points. By applying the model to this data, I can produce a decision boundary which can be plotted.
### Varying k
I mentioned before the impact that varying $k$ might have. Here I have run knn() on the same data for multiple values of $k$. For $k=1$ we get a perfect fit, with multiple polygons separating all points in each class. As $k$ increases, the more peripheral polygons start to break down, until at $k=15$ there is largely a single decision boundary which weaves its way between the two classes. At $k=99$, the decision boundary is much more linear.
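The $k=1$ "perfect fit" has a mechanical explanation: when the training points are scored against themselves, each point is its own nearest neighbour at distance zero, so the training error is exactly zero, even for pure-noise labels. A quick sketch (again NumPy and synthetic data of my own, not this post's R):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 2))            # 60 random points in the plane
y = rng.integers(0, 2, size=60)         # labels that are pure noise

# Pairwise distance matrix; the diagonal (self-distance) is exactly zero.
D = np.sqrt(((X[:, None] - X[None, :]) ** 2).sum(-1))
nearest = np.argsort(D, axis=1)[:, 0]   # k = 1: each point's closest point

print((nearest == np.arange(60)).all())  # True: every point picks itself
print((y[nearest] == y).mean())          # 1.0: a "perfect" fit to noise
```

This is exactly why training error alone cannot be used to choose $k$.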
In my next post I will address this problem of setting $k$ again, and try to quantify when the model is suffering from variance or bias.
https://www.khanacademy.org/math/algebra-basics/basic-alg-foundations/alg-basics-roots/e/square_roots

# Square roots of perfect squares
Practice finding the square root of a perfect square positive integer.
### Problem
$\sqrt{100} = \, ?$
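The answer is $\sqrt{100} = 10$. As an aside (mine, not part of the exercise), perfect squares are easy to check programmatically with Python's integer square root:

```python
from math import isqrt

print(isqrt(100))  # 10

# n is a perfect square exactly when squaring isqrt(n) gives n back.
perfect_squares = [n for n in range(1, 101) if isqrt(n) ** 2 == n]
print(perfect_squares)  # [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
```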
https://usa.cheenta.com/equation-of-x-and-y-aime-i-1993-question-13/
# Equation of X and Y | AIME I, 1993 | Question 13
Try this beautiful problem from the American Invitational Mathematics Examination I, AIME I, 1993 based on Equation of X and Y.
## Equation of X and Y – AIME I, 1993
Jenny and Kenny are walking in the same direction, Kenny at 3 feet per second and Jenny at 1 foot per second, on parallel paths that are 200 feet apart. A tall circular building 100 feet in diameter is centred midway between the paths. At the instant when the building first blocks the line of sight between Jenny and Kenny, they are 200 feet apart. Let t be the amount of time, in seconds, before Jenny and Kenny can see each other again. If t is written as a fraction in lowest terms, find the sum of the numerator and denominator.
• is 107
• is 163
• is 840
• cannot be determined from the given information
### Key Concepts
Variables
Equations
Algebra
AIME I, 1993, Question 13
Elementary Algebra by Hall and Knight
## Try with Hints
Take the building's cross-section to be a circle of radius 50 centred at the origin.
Let the starting points (when the building first blocks the line of sight) be (-50,100) and (-50,-100); then at time t the walkers are at (-50+t,100) and (-50+3t,-100).
Then the equation of the line of sight and the equation of the circle are
$y=\frac{-100}{t}x+200 -\frac{5000}{t}$ is the first equation
$50^2=x^2+y^2$ is second equation
When they can just see each other again, the line of sight is tangent to the circle, so the radius to the point of tangency is perpendicular to it:
$\frac{-x}{y}=\frac{-100}{t}$
or, $y=\frac{xt}{100}$
solving in second equation gives $x=\frac{5000}{\sqrt{100^2+t^2}}$
or, $y=\frac{xt}{100}$
solving in first equation for t gives $t=\frac{160}{3}$
or, 160+3=163.
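As a numerical sanity check (my own, not part of the original solution): at $t=160/3$ the line of sight should be tangent to the building, i.e. at distance exactly 50 from its centre. Exact rational arithmetic confirms this:

```python
from fractions import Fraction

t = Fraction(160, 3)
# Positions at time t, with the building a circle of radius 50 at the origin:
x1, y1 = Fraction(-50) + t, Fraction(100)        # Jenny, 1 ft/s
x2, y2 = Fraction(-50) + 3 * t, Fraction(-100)   # Kenny, 3 ft/s

# Distance from the origin to the line through the two points is
# |x1*y2 - x2*y1| / |P2 - P1| (twice the triangle's area over the base).
num = abs(x1 * y2 - x2 * y1)
den_sq = (x2 - x1) ** 2 + (y2 - y1) ** 2

# Tangency <=> distance == 50 <=> num^2 == 50^2 * den_sq, exactly.
print(num ** 2 == 2500 * den_sq)  # True
print(160 + 3)                    # 163
```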
http://mathoverflow.net/questions/56338/is-n-m-18-7-the-only-positive-solution-to-n2-n-1-m3?sort=newest

# Is (n,m)=(18,7) the only positive solution to n^2 + n + 1 = m^3 ?
It's hard to do a Google search on this problem.
If I was using Maple correctly, there are no other positive solutions with n at most 10000.
I know some of these Diophantine questions succumb to known methods, and others are extremely difficult to answer.
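The search is easy to reproduce; a short brute force over the same range (this only confirms the bound stated in the question, it proves nothing beyond $n \le 10000$):

```python
def icbrt(v):
    """Integer cube root: largest r with r**3 <= v."""
    r = round(v ** (1 / 3))
    while r ** 3 > v:
        r -= 1
    while (r + 1) ** 3 <= v:
        r += 1
    return r

# Collect every positive n <= 10000 with n^2 + n + 1 a perfect cube.
solutions = []
for n in range(1, 10001):
    v = n * n + n + 1
    m = icbrt(v)
    if m ** 3 == v:
        solutions.append((n, m))

print(solutions)  # [(18, 7)]
```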
@Qiaochu, Do you mean 'Yep, it's the only answer' or 'Yep, it succumbs to known methods' or 'Yep, it's extremely difficult to answer'? :) – David Roberts Feb 22 '11 at 23:40
Write as $(8n+1)^2=(4m)^3-48$, which is Mordell's equation for $k=-48$. A quick google comes up with the following, lrz.de/~hr/numb/mordell.html#tbl3. There's only two solutions for $4m\le10^{10}$. One solution is $m=1$ (hence, $n=0$). The other must be the one you state. – George Lowther Feb 22 '11 at 23:43
A google search gives the following by Keith Conrad describing the methods. math.uconn.edu/~kconrad/blurbs/gradnumthy/mordelleqn1.pdf. In this case, you'd factorize in $\mathbb{Z}[\zeta_3]$. – George Lowther Feb 23 '11 at 0:21
@George, that's $(8n+4)^2=(4m)^3-48$. The problem is discussed on pages 208-209 of Mordell, Diophantine Equations. The solution is ascribed to W Ljunggren, Einige Bemerkungen uber die Darstellung ganzer Zahlen durch binare kubische Formen mit positiver Diskriminante, Acta Math 75 (1942) 1-21. – Gerry Myerson Feb 23 '11 at 0:40
I don't really understand this fact: For solving an equation of the form $aX^{2}+bX+C = M^{3}$ why do we need to translate this problem into algebraic number theoritical methods. Can't we have a purely elementary solution. :( – S.C. Oct 2 '11 at 12:41
sage: E = EllipticCurve([0,0,1,0,-1])
sage: E
Elliptic Curve defined by y^2 + y = x^3 - 1 over Rational Field
sage: E.integral_points()
[(1 : 0 : 1), (7 : 18 : 1)]
In other words: There is a lot of research available on integral points on elliptic curves and the resulting algorithm is implemented in sage and magma. See for instance chapter XIII of Smart's "The Algorithmic Resolution of Diophantine Equations". The sage documentation of this function refers to Petho A., Zimmer H.G., Gebel J. and Herrmann E., Computing all S-integral points on elliptic curves Math. Proc. Camb. Phil. Soc. (1999), 127, 383-402. – Chris Wuthrich Feb 23 '11 at 9:23
Let $\omega$ be a third root of unity, then $\mathbb{Z}[\omega]$ is a PID.
We have $m^3 = n^2 + n + 1 = (n-\omega)(n-\omega^2)$.
$\gcd(n-\omega,n-\omega^2) = \gcd(n-\omega,\omega-\omega^2) \mid (1-\omega)$, and $(1-\omega)$ is the ramified prime lying over $3$ in $\mathbb{Z}[\omega]$, so from unique factorization of $m^3$ we get that either $(n-\omega)$ and $(n-\omega^2)$ are both roots of unity times cubes, or one is a root of unity times $(1-\omega)$ times a cube and the other is a root of unity times $3$ times a cube. In the second case, $m$ is a multiple of $3$, but then $n^2 + n + 1 \equiv 0 \mod 9$, which is impossible.
If $(n-\omega)$ and $(n-\omega^2)$ are cubes, say $a^3$ and $\bar{a}^3$, then their difference $\omega^2-\omega$ is $a^3-\bar{a}^3 = (a-\bar{a})(a^2+a\bar{a}+\bar{a}^2)$. Thus $a-\bar{a}$ is either a root of unity or a root of unity times $(1-\omega)$, and it must be the latter since $a-\bar{a}$ is pure imaginary. Thus $\Im a \le \Im (\omega-\omega^2) = \sqrt{3}$. The same argument applied to $\omega a$ shows that $\Im \omega a \le \sqrt{3}$, and similarly for other roots of unity times $a$, so $a$ is in a hexagon around the origin that is contained in a circle of radius $2$ around the origin, i.e. $|a| \le 2$, so $m = |a|^2 \le 4$, which doesn't give us any solutions.
Finally we have the case that one of $(n-\omega), (n-\omega^2)$ is of the form $\omega a^3$. Then we have $\pm(\omega^2-\omega) = \omega a^3 - \omega^2 \bar{a}^3$. Write $a = x+y\omega$. Then $\omega a^3 - \omega^2 \bar{a}^3 = (\omega-\omega^2)(x^3+y^3-3x^2y)$, so we have $x^3+y^3-3x^2y = \pm 1$, which is a Thue equation. One solution is $x = -1, y = 2$, leading to the solution $n = 18, m = 7$.
Edit: Mathematica claims that the only solutions to $x^3+y^3-3x^2y = 1$ are $(x,y) = (-2, -3), (-1, -1), (-1, 2), (0, 1), (1, 0), (3, 1)$. Mathematica's documentation says it computes an explicit bound on the size of a solution to a Thue equation based on the Baker-Wustholz theorem in order to solve it, and in this case it seems like the bound was small enough.
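The claimed list is easy to spot-check (my own verification, which of course does not replace the Baker-Wustholz bound): all six pairs satisfy the Thue equation, a brute force over a small box finds nothing else there, and $(x,y)=(-1,2)$ does correspond to $(n,m)=(18,7)$.

```python
def f(x, y):
    return x ** 3 + y ** 3 - 3 * x ** 2 * y

expected = {(-2, -3), (-1, -1), (-1, 2), (0, 1), (1, 0), (3, 1)}

# Every claimed pair really satisfies x^3 + y^3 - 3x^2y = 1 ...
assert all(f(x, y) == 1 for x, y in expected)

# ... and a search over |x|, |y| <= 30 turns up no other solutions.
found = {(x, y) for x in range(-30, 31)
                for y in range(-30, 31) if f(x, y) == 1}
print(found == expected)  # True

# The solution (x, y) = (-1, 2) leads to n = 18, m = 7:
assert 18 ** 2 + 18 + 1 == 7 ** 3
```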
zeb: Nice, but a disappointing finish. We started off by looking for integer points on an elliptic curve, so it seems to have gone in a bit of a circle. – George Lowther Feb 23 '11 at 0:43
Mordell, following Ljunggren, solves $x^3-3xy^2-y^3=1$ on pages 208-209 of Diophantine Equations, as noted in my other comment. He has to go to the degree 6 field ${\bf Q}(\sqrt\xi)$, where $\xi$ is a root of $z^3-3z+1=0$, find the 4 fundamental units, and apply 2-adic methods. – Gerry Myerson Feb 23 '11 at 0:47
George - I realized this almost immediately after posting my answer. Well, at least the new equation seems simpler to me (are Thue equations easier to deal with than general elliptic curves?) – zeb Feb 23 '11 at 0:58
@zeb: Write $p(z)=z^3-3z+1$, so the equation becomes $p(y/x)=\pm x^{-3}$, so the theory of Diophantine approximation tells you that there are only finitely many solutions. That is $y/x$ approximates the roots of $p$. Although, as Gerry mentions, this is already solved by Mordell. – George Lowther Feb 23 '11 at 1:12
This is an old question, and has already been well-answered, but what I've got to say is slightly too long for a comment...
The equation $x^2+x+1 = y^3$ is of interest to finite geometers because $x^2+x+1$ is the number of points (and lines) in a finite projective plane of order $x$.
People have mentioned Ljunggren's name in comments above. The paper that's relevant is this:
Ljunggren, Wilhelm Einige Bemerkungen über die Darstellung ganzer Zahlen durch binäre kubische Formen mit positiver Diskriminante. (German) Acta Math. 75, (1943). 1–21.
I heartily recommend the Mathscinet review of that article, which says (amongst other things)...
... that Nagell [Norsk Mat. Forenings Skr. (I) no. 2 (1921)] proved that the equation
(1) $x^2+x+1=y^n$
has only trivial solutions unless $n$ is a power of $3$...
... And that Ljunggren then proved that (1) has only two nontrivial solutions, namely (18,7) and (-19, 7), for n=3.
https://cdsweb.cern.ch/collection/Theses?ln=en&as=1
The theses collection aims to cover as well as possible all theses in particle physics and its related fields.
The collection starts with the thesis of Feynman, defended in 1942, and now covers altogether more than 3000 theses. Most of the documents are held as hard copies; theses from later years are available electronically. (Note also that many theses are not physically held by the CERN Library.)
# Theses
2021-05-04
06:01
Prospect studies for Higgs boson pair production to $b\bar{b}\gamma\gamma$ final state at the HL-LHC with the ATLAS detector / Briglin, Daniel Lawrence By the end of the HL-LHC era, before 2040, the ATLAS experiment aims to increase the size of the dataset from ∼300fb$^{−1}$, acquired at the end of LHC running, up to ∼3000fb$^{−1}$ [...] CERN-THESIS-2020-333. - 209 p.
Full text
2021-04-22
16:16
Search for CP violation in the charmless decay $B^0 \to p \bar p K^+ \pi^-$ using triple product asymmetries at LHCb and feasibility studies of a SiPM-based readout system for the Upgrade II RICH detector / Bartolini, Matteo The subject of CP symmetry and its violation is often referred to as one of the least understood in particle physics [...] CERN-THESIS-2021-034 - 227 p.
Full text
2021-04-21
11:50
Radiation hardness of the upgraded LHCb muon detector electronics and prospects for a full angular analysis in multi-body rare charm decays / Brundu, Davide In this dissertation the results of the studies on the new readout electronics for the muon system of the LHCb experiment, and the perspectives for carrying out a full angular analysis in the rare decays $D^{0}\to \pi^+ \pi^- \mu^+ \mu^-$ and $D^{0}\to K^+ K^- \mu^+ \mu^-$ will be described [...] CERN-THESIS-2020-324 - 240 p.
Full text
2021-04-20
23:41
Calibration of the Liquid Argon Calorimeter and Search for Stopped Long-Lived Particles / Morgenstern, Stefanie This thesis dives into the three main aspects of today's experimental high energy physics: detector operation and data preparation, reconstruction and identification of physics objects, and physics analysis [...] CERN-THESIS-2020-323 - 183 p.
Full text
2021-04-20
09:50
Timed Track Seeding for the Future Circular Collider / Volkl, Valentin Particle accelerators and detectors have been the principal tools to investigate nature at the smallest scale for more than 50 years [...] CERN-THESIS-2020-322 - 141 p.
Full text
2021-04-19
15:37
Radiation damage of the optical components of the ATLAS TileCal calorimeter at the High-Luminosity LHC / Pinheiro Pereira, Beatriz Catarina TileCal, a sampling hadronic calorimeter, is an essential component of the ATLAS detector at the LHC [...] CERN-THESIS-2020-321 - 92 p.
Full text
2021-04-16
18:11
First Search for Pair Production of Scalar Top Quarks Decaying to Top Quarks and Light-Flavor Jets with Low Missing Transverse Momentum / Madrid, Christopher After the discovery of the Higgs boson in 2012, the current best theoretical model that describes all observed particles and their interactions, the standard model (SM), was considered complete [...] CERN-THESIS-2020-320 - 241 p.
Full text
2021-04-15
15:55
Measurement of the $W\gamma\gamma$ production cross section in pp collisions at 13 TeV with full Run2 data and limits on anomalous quartic gauge couplings / Da Rold, Alessandro CMS-TS-2021-004 ; CERN-THESIS-2021-033. - 2021. - 148 p.
Fulltext
2021-04-15
15:53
Search for compressed supersymmetry at the LHC in final states with one hadronic tau and one energetic jet / Delgado, Manuel Alejandro Segura We present an experimental search of supersymmetry (SUSY) in compressed mass spectra scenarios with scalar taus ($\widetilde{\tau}$’s), using data from proton-proton (pp) collisions in the Large Hadron Collider, LHC, at CERN laboratory, at $\sqrt{s}$ = 13 TeV, collected by the CMS experiment in 201 [...] CMS-TS-2020-005 ; CERN-THESIS-2019-417. - 2019. - 188 p.
Fulltext
2021-04-15
15:52
Measurement of the $Z\gamma\gamma$ bosons production cross section and search for new physics in proton proton collisions at $\sqrt{s}$ = 13 TeV with the CMS detector at the CERN LHC / Vazzoler, Federico CMS-TS-2021-005 ; CERN-THESIS-2021-032. - 2021. - 153 p.
Full text
https://github.com/libtom/libtommath/commit/390fa39dc519551eb3c6e93fe3fa52a9ec9d9833

# libtom/libtommath
Commit 390fa39dc519551eb3c6e93fe3fa52a9ec9d9833 (0 parents). Tom St Denis authored, committed with sjaeckel.
Showing with 2,918 additions and 0 deletions.
1. +3 −0 b.bat
2. +1,983 −0 bn.c
3. +225 −0 bn.h
4. bn.pdf
5. +411 −0 bn.tex
6. +6 −0 changes.txt
7. +238 −0 demo.c
8. +24 −0 makefile
9. +28 −0 timer.asm
3 b.bat
@@ -0,0 +1,3 @@
+nasm -f coff timer.asm
+gcc -Wall -W -O3 -fomit-frame-pointer -funroll-loops -DTIMER demo.c bn.c timer.o -o demo
+gcc -I./mtest/ -DU_MPI -Wall -W -O3 -fomit-frame-pointer -funroll-loops -DTIMER demo.c mtest/mpi.c timer.o -o mpidemo
1,983 bn.c
1,983 additions, 0 deletions not shown
225 bn.h
BIN bn.pdf
Binary file not shown
411 bn.tex
@@ -0,0 +1,411 @@ +\documentclass{article} +\begin{document} + +\title{LibTomMath v0.01 \\ A Free Multiple Precision Integer Library} +\author{Tom St Denis \\ tomstdenis@iahu.ca} +\maketitle +\newpage + +\section{Introduction} +LibTomMath'' is a free and open source library that provides multiple-precision integer functions required to form a basis +of a public key cryptosystem. LibTomMath is written entire in portable ISO C source code and designed to have an application +interface much like that of MPI from Michael Fromberger. + +LibTomMath was written from scratch by Tom St Denis but designed to be drop in replacement for the MPI package. The +algorithms within the library are derived from descriptions as provided in the Handbook of Applied Cryptography and Knuth's +The Art of Computer Programming''. The library has been extensively optimized and should provide quite comparable +timings as compared to many free and commercial libraries. + +LibTomMath was designed with the following goals in mind: +\begin{enumerate} +\item Be a drop in replacement for MPI. +\item Be much faster than MPI. +\item Be written entirely in portable C. +\end{enumerate} + +All three goals have been achieved. Particularly the speed increase goal. For example, a 512-bit modular exponentiation is +four times faster\footnote{On an Athlon XP with GCC 3.2} with LibTomMath compared to MPI. + +Being compatible with MPI means that applications that already use it can be ported fairly quickly. Currently there are +a few differences but there are many similarities. In fact the average MPI based application can be ported in under 15 +minutes. + +Thanks goes to Michael Fromberger for answering a couple questions and Colin Percival for having the patience and courtesy to +help debug and suggest optimizations. They were both of great help! + +\section{Building Against LibTomMath} + +Building against LibTomMath is very simple because there is only one source file. 
Simply add ``bn.c'' to your project and +copy both ``bn.c'' and ``bn.h'' into your project directory. No configuration or build step is required beforehand. + +If you are porting an MPI application to LibTomMath the first step will be to remove all references to MPI and replace them +with references to LibTomMath. For example, substitute + +\begin{verbatim} +#include "mpi.h" +\end{verbatim} + +with + +\begin{verbatim} +#include "bn.h" +\end{verbatim} + +Remove ``mpi.c'' from your project and replace it with ``bn.c''. Note that currently MPI has a few more functions than +LibTomMath has (e.g. no square-root code and a few others). Those are planned for future releases. In the interim workarounds +can be sought. Note that LibTomMath doesn't lack any functions required to build a cryptosystem. + +\section{Programming with LibTomMath} + +\subsection{The mp\_int Structure} +All multiple precision integers are stored in a structure called \textbf{mp\_int}. A multiple precision integer is +essentially an array of \textbf{mp\_digit}. mp\_digit is defined at the top of bn.h. Its type can be changed to suit +a particular platform. + +For example, when \textbf{MP\_8BIT} is defined\footnote{When building bn.c.} an mp\_digit is an unsigned char and holds +seven bits. Similarly when \textbf{MP\_16BIT} is defined an mp\_digit is an unsigned short and holds 15 bits. +By default an mp\_digit is an unsigned long and holds 28 bits. + +The choice of digit is particular to the platform at hand and what available multipliers are provided. For +MP\_8BIT either an $8 \times 8 \Rightarrow 16$ or $16 \times 16 \Rightarrow 16$ multiplier is optimal. When +MP\_16BIT is defined either a $16 \times 16 \Rightarrow 32$ or $32 \times 32 \Rightarrow 32$ multiplier is optimal. By +default a $32 \times 32 \Rightarrow 64$ or $64 \times 64 \Rightarrow 64$ multiplier is optimal. + +This gives the library some flexibility. For example, an i8051 has an $8 \times 8 \Rightarrow 16$ multiplier.
The +16-bit x86 instruction set has a $16 \times 16 \Rightarrow 32$ multiplier. In practice this library is not particularly +designed for small devices like an i8051 due to the size. It is possible to strip out functions which are not required +to drop the code size. More realistically the library is well suited to 32 and 64-bit processors that have decent +integer multipliers. The AMD Athlon XP and Intel Pentium 4 processors are examples of well-suited processors. + +Throughout the discussions there will be references to the \textbf{used} and \textbf{alloc} members of an integer. The +used member refers to how many digits are actually used in the representation of the integer. The alloc member refers +to how many digits have been allocated off the heap. There is also the $\beta$ quantity which is equal to $2^W$ where +$W$ is the number of bits in a digit (default is 28). + +\subsection{Basic Functionality} +Essentially all LibTomMath functions return one of three values to indicate if the function worked as desired. A +function will return \textbf{MP\_OKAY} if the function was successful. A function will return \textbf{MP\_MEM} if +it ran out of memory and \textbf{MP\_VAL} if the input was invalid. + +Before an mp\_int can be used it must be initialized with + +\begin{verbatim} +int mp_init(mp_int *a); +\end{verbatim} + +For example, consider the following. + +\begin{verbatim} +#include "bn.h" +int main(void) +{ + mp_int num; + if (mp_init(&num) != MP_OKAY) { + printf("Error initializing a mp_int.\n"); + } + return 0; +} +\end{verbatim} + +An mp\_int can be freed from memory with + +\begin{verbatim} +void mp_clear(mp_int *a); +\end{verbatim} + +This will zero the memory and free the allocated data. There is a set of trivial functions to manipulate the +value of an mp\_int.
+ +\begin{verbatim} +/* set to zero */ +void mp_zero(mp_int *a); + +/* set to a digit */ +void mp_set(mp_int *a, mp_digit b); + +/* set a 32-bit const */ +int mp_set_int(mp_int *a, unsigned long b); + +/* init to a given number of digits */ +int mp_init_size(mp_int *a, int size); + +/* copy, b = a */ +int mp_copy(mp_int *a, mp_int *b); + +/* inits and copies, a = b */ +int mp_init_copy(mp_int *a, mp_int *b); +\end{verbatim} + +The \textbf{mp\_zero} function will clear the contents of an mp\_int and set it to positive. The \textbf{mp\_set} function +will zero the integer and set the first digit to a value specified. The \textbf{mp\_set\_int} function will zero the +integer and set the first 32-bits to a given value. It is important to note that using mp\_set can have unintended +side effects when either the MP\_8BIT or MP\_16BIT defines are enabled. By default the library will accept the +ranges of values MPI will (and more). + +The \textbf{mp\_init\_size} function will initialize the integer and set the allocated size to a given value. The +allocated digits are zeroed by default but not marked as used. The \textbf{mp\_copy} function will copy the digits +(and sign) of the first parameter into the integer specified by the second parameter. The \textbf{mp\_init\_copy} will +initialize the first integer specified and copy the second one into it. Note that the order is reversed from that of +mp\_copy. This ``odd bug'' was kept to maintain compatibility with MPI. + +\subsection{Digit Manipulations} + +There is a class of functions that provides simple digit manipulations such as shifting and modulo reduction of powers +of two.
+ +\begin{verbatim} +/* right shift by "b" digits */ +void mp_rshd(mp_int *a, int b); + +/* left shift by "b" digits */ +int mp_lshd(mp_int *a, int b); + +/* c = a / 2^b */ +int mp_div_2d(mp_int *a, int b, mp_int *c); + +/* b = a/2 */ +int mp_div_2(mp_int *a, mp_int *b); + +/* c = a * 2^b */ +int mp_mul_2d(mp_int *a, int b, mp_int *c); + +/* b = a*2 */ +int mp_mul_2(mp_int *a, mp_int *b); + +/* c = a mod 2^b */ +int mp_mod_2d(mp_int *a, int b, mp_int *c); +\end{verbatim} + +Both the \textbf{mp\_rshd} and \textbf{mp\_lshd} functions provide shifting by whole digits. For example, +mp\_rshd($x$, $n$) is the same as $x \leftarrow \lfloor x / \beta^n \rfloor$ while mp\_lshd($x$, $n$) is equivalent +to $x \leftarrow x \cdot \beta^n$. Both functions are extremely fast as they merely copy digits within the array. + +Similarly the \textbf{mp\_div\_2d} and \textbf{mp\_mul\_2d} functions provide shifting but allow any bit count to +be specified. For example, mp\_div\_2d($x$, $n$, $y$) is the same as $y = \lfloor x / 2^n \rfloor$ while +mp\_mul\_2d($x$, $n$, $y$) is the same as $y = x \cdot 2^n$. The \textbf{mp\_div\_2} and \textbf{mp\_mul\_2} +functions are legacy functions that merely shift right or left one bit respectively. The \textbf{mp\_mod\_2d} function +reduces an integer mod a power of two. For example, mp\_mod\_2d($x$, $n$, $y$) is the same as +$y \equiv x \mbox{ (mod }2^n\mbox{)}$. + +\subsection{Basic Arithmetic} + +Next is the class of functions which provides basic arithmetic.
+ +\begin{verbatim} +/* compare a to b */ +int mp_cmp(mp_int *a, mp_int *b); + +/* c = a + b */ +int mp_add(mp_int *a, mp_int *b, mp_int *c); + +/* c = a - b */ +int mp_sub(mp_int *a, mp_int *b, mp_int *c); + +/* c = a * b */ +int mp_mul(mp_int *a, mp_int *b, mp_int *c); + +/* b = a^2 */ +int mp_sqr(mp_int *a, mp_int *b); + +/* a/b => cb + d == a */ +int mp_div(mp_int *a, mp_int *b, mp_int *c, mp_int *d); + +/* c == a mod b */ +#define mp_mod(a, b, c) mp_div(a, b, NULL, c) +\end{verbatim} + +The \textbf{mp\_cmp} function will compare two integers. It will return \textbf{MP\_LT} if the first parameter is less than +the second, \textbf{MP\_GT} if it is greater or \textbf{MP\_EQ} if they are equal. These constants are the same as in +MPI. + +The \textbf{mp\_add}, \textbf{mp\_sub}, \textbf{mp\_mul}, \textbf{mp\_div}, \textbf{mp\_sqr} and \textbf{mp\_mod} are all +fairly straightforward to understand. Note that in mp\_div either $c$ (the quotient) or $d$ (the remainder) can be +passed as NULL to ignore it. For example, if you only want the quotient $z = \lfloor x/y \rfloor$ then a call such as +mp\_div(\&x, \&y, \&z, NULL) is acceptable. + +There is a related class of ``single digit'' functions that are like the above except they use a digit as the second +operand. + +\begin{verbatim} +/* compare against a single digit */ +int mp_cmp_d(mp_int *a, mp_digit b); + +/* c = a + b */ +int mp_add_d(mp_int *a, mp_digit b, mp_int *c); + +/* c = a - b */ +int mp_sub_d(mp_int *a, mp_digit b, mp_int *c); + +/* c = a * b */ +int mp_mul_d(mp_int *a, mp_digit b, mp_int *c); + +/* a/b => cb + d == a */ +int mp_div_d(mp_int *a, mp_digit b, mp_int *c, mp_digit *d); + +/* c = a mod b */ +#define mp_mod_d(a,b,c) mp_div_d(a, b, NULL, c) +\end{verbatim} + +Note that care should be taken for the value of the digit passed. By default, any 28-bit integer is a valid digit that can +be passed into the function.
However, if MP\_8BIT or MP\_16BIT is defined only 7 or 15-bit (respectively) integers +can be passed into it. + +\subsection{Modular Arithmetic} + +There are some trivial modular arithmetic functions. + +\begin{verbatim} +/* d = a + b (mod c) */ +int mp_addmod(mp_int *a, mp_int *b, mp_int *c, mp_int *d); + +/* d = a - b (mod c) */ +int mp_submod(mp_int *a, mp_int *b, mp_int *c, mp_int *d); + +/* d = a * b (mod c) */ +int mp_mulmod(mp_int *a, mp_int *b, mp_int *c, mp_int *d); + +/* c = a * a (mod b) */ +int mp_sqrmod(mp_int *a, mp_int *b, mp_int *c); + +/* c = 1/a (mod b) */ +int mp_invmod(mp_int *a, mp_int *b, mp_int *c); + +/* c = (a, b) */ +int mp_gcd(mp_int *a, mp_int *b, mp_int *c); + +/* c = [a, b] or (a*b)/(a, b) */ +int mp_lcm(mp_int *a, mp_int *b, mp_int *c); + +/* d = a^b (mod c) */ +int mp_exptmod(mp_int *a, mp_int *b, mp_int *c, mp_int *d); +\end{verbatim} + +These are all fairly simple to understand. The \textbf{mp\_invmod} function computes a modular multiplicative inverse. That is, it +stores in the third parameter an integer such that $ac \equiv 1 \mbox{ (mod }b\mbox{)}$ provided such an integer exists. If +there is no such integer the function returns \textbf{MP\_VAL}. + +\subsection{Radix Conversions} +To read or store integers in other formats there are the following functions. + +\begin{verbatim} +int mp_unsigned_bin_size(mp_int *a); +int mp_read_unsigned_bin(mp_int *a, unsigned char *b, int c); +int mp_to_unsigned_bin(mp_int *a, unsigned char *b); + +int mp_signed_bin_size(mp_int *a); +int mp_read_signed_bin(mp_int *a, unsigned char *b, int c); +int mp_to_signed_bin(mp_int *a, unsigned char *b); + +int mp_read_radix(mp_int *a, unsigned char *str, int radix); +int mp_toradix(mp_int *a, unsigned char *str, int radix); +\end{verbatim} + +The integers are stored in big-endian format as most libraries (and MPI) expect. The \textbf{mp\_read\_radix} and +\textbf{mp\_toradix} functions read and write (respectively) null-terminated ASCII strings in a given radix.
Valid values +for the radix are between 2 and 64 (inclusive). + +\section{Timing Analysis} +\subsection{Observed Timings} +A simple test program ``demo.c'' was developed which builds with either MPI or LibTomMath (without modification). The +test was conducted on an AMD Athlon XP processor with 266MHz DDR memory and the GCC 3.2 compiler\footnote{With build +options ``-O3 -fomit-frame-pointer -funroll-loops''}. The multiplications and squarings were repeated 10,000 times +each while the modular exponentiations (exptmod) were performed 10 times each. The RDTSC (Read Time Stamp Counter) instruction +was used to measure the time the entire run of iterations took and was divided by the number of iterations to get an +average. The following results were observed. + +\begin{small} +\begin{center} +\begin{tabular}{c|c|c|c} +\hline \textbf{Operation} & \textbf{Size (bits)} & \textbf{Time with MPI (cycles)} & \textbf{Time with LibTomMath (cycles)} \\ +\hline +Multiply & 128 & 1,394 & 915 \\ +Multiply & 256 & 2,559 & 1,893 \\ +Multiply & 512 & 7,919 & 3,770 \\ +Multiply & 1024 & 28,460 & 9,970 \\ +Multiply & 2048 & 109,637 & 32,264 \\ +Multiply & 4096 & 467,226 & 129,645 \\ +\hline +Square & 128 & 1,288 & 1,147 \\ +Square & 256 & 1,705 & 2,129 \\ +Square & 512 & 5,365 & 3,755 \\ +Square & 1024 & 18,836 & 9,267 \\ +Square & 2048 & 72,334 & 28,387 \\ +Square & 4096 & 306,252 & 112,391 \\ +\hline +Exptmod & 512 & 30,497,732 & 7,222,872 \\ +Exptmod & 768 & 98,943,020 & 16,474,567 \\ +Exptmod & 1024 & 221,123,749 & 30,070,883 \\ +Exptmod & 2048 & 1,694,796,907 & 154,697,320 \\ +Exptmod & 2560 & 3,262,360,107 & 318,998,183 \\ +Exptmod & 3072 & 5,647,243,373 & 494,313,122 \\ +Exptmod & 4096 & 13,345,194,048 & 1,036,254,558 + +\end{tabular} +\end{center} +\end{small} + +\subsection{Digit Size} +The first major contribution to the time savings is the fact that 28 bits are stored per digit instead of the MPI +default of 16. This means in many of the algorithms the savings can be considerable.
Consider a baseline multiplier +with a 1024-bit input. With MPI the input would be 64 16-bit digits whereas in LibTomMath it would be 37 28-bit digits. +A savings of $64^2 - 37^2 = 2727$ single precision multiplications. + +\subsection{Multiplication Algorithms} +For most inputs a typical baseline $O(n^2)$ multiplier is used which is similar to that of MPI. There are two variants +of the baseline multiplier: the normal and the fast variants. The normal baseline multiplier is the exact same as the +algorithm from MPI. The fast baseline multiplier is optimized for cases where the number of input digits $N$ is less +than or equal to $2^{w}/\beta^2$, where $w$ is the number of bits in a \textbf{mp\_word}. By default an mp\_word is +64 bits which means $N \le 256$ is allowed which represents numbers up to $7168$ bits. + +The fast baseline multiplier is optimized by removing the carry operations from the inner loop. This is often referred +to as the ``comba'' method since it computes the products column by column first and then resolves the carries. This has the +effect of making a very simple and parallelizable inner loop. + +For large inputs, typically 80 digits\footnote{By default that is 2240-bits or more.} or more, the Karatsuba method is +used. This method has significant overhead but an asymptotic running time of $O(n^{1.584})$ which means for fairly large +inputs this method is faster. The Karatsuba implementation is recursive, which means extremely large inputs +will benefit from the algorithm. + +MPI only implements the slower baseline multiplier where carries are dealt with in the inner loop. As a result even at +smaller numbers (below the Karatsuba cutoff) the LibTomMath multipliers are faster. + +\subsection{Squaring Algorithms} + +Similar to the multiplication algorithms there are two baseline squaring algorithms. Both have an asymptotic running +time of $O((t^2 + t)/2)$.
The normal baseline squaring is the same as in MPI and the fast is a ``comba'' squaring +algorithm. The comba method is used if the number of digits $N$ is less than $2^{w-1}/\beta^2$ which by default +covers numbers up to $3584$ bits. + +There is also a Karatsuba squaring method which achieves a running time of $O(n^{1.584})$ for sufficiently large +inputs. + +MPI only implements the slower baseline squaring algorithm. As a result LibTomMath is considerably faster at squaring +than MPI is. + +\subsection{Exponentiation Algorithms} + +LibTomMath implements a sliding window $k$-ary left-to-right exponentiation algorithm. For a given exponent size $L$ an +appropriate window size $k$ is chosen. There are always at most $L$ modular squarings and $\lfloor L/k \rfloor$ modular +multiplications. The $k$-ary method works by precomputing values $g(x) = b^x$ for $0 \le x < 2^k$ and a given base +$b$. Then the multiplications are grouped in windows of $k$ bits. The sliding window technique has the benefit +that it can skip multiplications if there are zero bits following or preceding a window. Consider the exponent +$e = 11110001_2$. If $k = 2$ then there will be two squarings, a multiplication of $g(3)$, two squarings, a multiplication +of $g(3)$, four squarings and a multiplication by $g(1)$. In total there are 8 squarings and 3 multiplications. + +MPI uses a binary square-multiply method. For the same exponent $e$ it would have had 8 squarings and 5 multiplications. +There is a precomputation phase for the method LibTomMath uses but it generally cuts down considerably on the number +of multiplications. Consider a 512-bit exponent. The worst case for the LibTomMath method results in 512 squarings and +124 multiplications. The MPI method would have 512 squarings and 512 multiplications. On average, every $2k$ bits another +multiplication is saved via the sliding-window technique on top of the savings the $k$-ary method provides.
+ +Both LibTomMath and MPI use Barrett reduction instead of division to reduce the numbers modulo the modulus given. +However, LibTomMath can take advantage of the fact that the multiplications required within the Barrett reduction +do not need to give full precision. As a result the reduction step is much faster and just as accurate. The LibTomMath code +will automatically determine at run-time (i.e. when it is called) whether the faster multiplier can be used. The +faster multipliers have also been optimized into the two variants (baseline and fast baseline). + +As a result of all these changes exponentiation in LibTomMath is much faster than in MPI. + + + +\end{document}
6 changes.txt
@@ -0,0 +1,6 @@ +Dec 25th,2002 +v0.01 -- Initial release. Gimme a break. + -- Todo list, + add details to manual [e.g. algorithms] + more comments in code + example programs
238 demo.c
@@ -0,0 +1,24 @@ +CC = gcc +CFLAGS += -Wall -W -O3 -funroll-loops + +VERSION=0.01 + +default: test + +test: bn.o demo.o + $(CC) bn.o demo.o -o demo + +docdvi: bn.tex + latex bn + +docs: docdvi + pdflatex bn + rm -f bn.log bn.aux bn.dvi + +clean: + rm -f *.o *.exe mtest/*.exe bn.log bn.aux bn.dvi *.s + +zipup: clean docs + chdir .. ; rm -rf ltm* libtommath-$(VERSION) ; mkdir libtommath-$(VERSION) ; \ + cp -R ./libtommath/* ./libtommath-$(VERSION)/ ; tar -c libtommath-$(VERSION)/* > ltm-$(VERSION).tar ; \ + bzip2 -9vv ltm-$(VERSION).tar ; zip -9 -r ltm-$(VERSION).zip libtommath-\$(VERSION)/*
https://projecteuclid.org/euclid.aoas/1514430280 | ## The Annals of Applied Statistics
### Learning population and subject-specific brain connectivity networks via mixed neighborhood selection
#### Abstract
In neuroimaging data analysis, Gaussian graphical models are often used to model statistical dependencies across spatially remote brain regions known as functional connectivity. Typically, data is collected across a cohort of subjects and the scientific objectives consist of estimating population and subject-specific connectivity networks. A third objective that is often overlooked involves quantifying inter-subject variability, and thus identifying regions or subnetworks that demonstrate heterogeneity across subjects. Such information is crucial to thoroughly understand the human connectome. We propose Mixed Neighborhood Selection to simultaneously address the three aforementioned objectives. By recasting covariance selection as a neighborhood selection problem, we are able to efficiently learn the topology of each node. We introduce an additional mixed effect component to neighborhood selection to simultaneously estimate a graphical model for the population of subjects as well as for each individual subject. The proposed method is validated empirically through a series of simulations and applied to resting state data for healthy subjects taken from the ABIDE consortium.
#### Article information
Source
Ann. Appl. Stat. Volume 11, Number 4 (2017), 2142-2164.
Dates
Revised: January 2017
First available in Project Euclid: 28 December 2017
https://projecteuclid.org/euclid.aoas/1514430280
Digital Object Identifier
doi:10.1214/17-AOAS1067
#### Citation
Monti, Ricardo Pio; Anagnostopoulos, Christoforos; Montana, Giovanni. Learning population and subject-specific brain connectivity networks via mixed neighborhood selection. Ann. Appl. Stat. 11 (2017), no. 4, 2142--2164. doi:10.1214/17-AOAS1067. https://projecteuclid.org/euclid.aoas/1514430280
#### References
• Barabási, A.-L. and Albert, R. (1999). Emergence of scaling in random networks. Science 286 509–512.
• Belilovsky, E., Varoquaux, G. and Blaschko, M. (2016). Testing for differences in Gaussian graphical models: Applications to brain connectivity. In Neural Information Processing Systems 595–603.
• Bullmore, E. and Sporns, O. (2009). Complex brain networks: Graph theoretical analysis of structural and functional systems. Nat. Rev., Neurosci. 10 186–198.
• Chung, M., Hanson, J., Ye, J., Davidson, R. and Pollak, S. (2015). Persistent homology in sparse regression and its application to brain morphometry. IEEE Trans. Med. Imag. 34 1928–1939.
• Damoiseaux, J., Rombouts, S., Barkhof, F., Scheltens, P., Stam, C., Smith, S. and Beckmann, C. (2006). Consistent resting-state networks across healthy subjects. Proc. Natl. Acad. Sci. USA 103 13848–13853.
• Danaher, P., Wang, P. and Witten, D. M. (2014). The joint graphical lasso for inverse covariance estimation across multiple classes. J. R. Stat. Soc. Ser. B. Stat. Methodol. 76 373–397.
• Dempster, A. P., Laird, N. M. and Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. Ser. B. Stat. Methodol. 39 1–38.
• Di Martino, A., Yan, C., Li, Q., Denio, E., Castellanos, F., Alaerts, K., Anderson, J., Assaf, M., Bookheimer, S. and Dapretto, M. (2014). The Autism Brain Imaging Data Exchange: Towards a large-scale evaluation of the intrinsic brain architecture in Autism. Mol. Psychiatry 19 659–667.
• Dubois, J. and Adolphs, R. (2016). Building a science of individual differences from fMRI. Trends Cogn. Sci. 20 425–443.
• Fair, D., Dosenbach, N., Church, J., Cohen, A., Brahmbhatt, S., Miezin, F., Barch, D., Raichle, M., Petersen, S. and Schlaggar, B. (2007). Development of distinct control networks through segregation and integration. Proc. Natl. Acad. Sci. USA 104 13507–13512.
• Fair, D. A., Cohen, A. L., Power, J. D., Dosenbach, N. U. F., Church, J. A., Miezin, F. M., Schlaggar, B. L. and Petersen, S. E. (2009). Functional brain networks develop from a “local to distributed” organization. PLoS Comput. Biol. 5 e1000381.
• Fallani, F., Richiardi, J., Chavez, M. and Achard, S. (2014). Graph analysis of functional brain networks: Practical issues in translational neuroscience. Philos. Trans. R. Soc. Lond. B, Biol. Sci. 369 1–17.
• Fox, M. and Greicius, M. (2010). Clinical applications of resting state functional connectivity. Front. Syst. Neurosci. 4 1–13.
• Friedman, J., Hastie, T. and Tibshirani, R. (2008). Sparse inverse covariance estimation with the graphical lasso. Biostatistics 9 432–441.
• Friedman, J., Hastie, T., Höfling, H. and Tibshirani, R. (2007). Pathwise coordinate optimization. Ann. Appl. Stat. 1 302–332.
• Friston, K. (2011). Functional and effective connectivity: A review. Brain Connect. 1 13–36.
• Greicius, M., Krasnow, B., Reiss, A. and Menon, V. (2003). Functional connectivity in the resting brain: A network analysis of the default mode hypothesis. Proc. Natl. Acad. Sci. USA 100 253–258.
• Gusnard, D. and Raichle, M. (2001). Searching for a baseline: Functional imaging and the resting human brain. Nat. Rev., Neurosci. 2 685–694.
• Kanai, R. and Rees, G. (2011). The structural basis of inter-individual differences in human behaviour and cognition. Nat. Rev., Neurosci. 12 231–242.
• Kelly, C., Biswal, B., Craddock, C., Castellanos, X. and Milham, M. (2012). Characterizing variation in the functional connectome: Promise and pitfalls. Trends Cogn. Sci. 16 181–188.
• Krzanowski, W. J. and Hand, D. J. (2009). ROC Curves for Continuous Data. Monographs on Statistics and Applied Probability 111. CRC Press, Boca Raton, FL.
• Lee, H., Lee, D., Kang, H., Kim, B. and Chung, M. (2011). Sparse brain network recovery under compressed sensing. IEEE Trans. Med. Imag. 30 1154–1165.
• Lindquist, M. A. (2008). The statistical analysis of fMRI data. Statist. Sci. 23 439–464.
• McLachlan, G. J. and Krishnan, T. (2007). The EM Algorithm and Extensions. Wiley Series in Probability and Statistics 382. Wiley-Interscience [John Wiley & Sons], Hoboken, NJ.
• Meinshausen, N. and Bühlmann, P. (2006). High-dimensional graphs and variable selection with the lasso. Ann. Statist. 34 1436–1462.
• Meng, X.-L. and van Dyk, D. (1998). Fast EM-type implementations for mixed effects models. J. R. Stat. Soc. Ser. B. Stat. Methodol. 60 559–578.
• Monti, R. P., Anagnostopoulos, C. and Montana, G. (2017). Supplement to “Learning population and subject-specific brain connectivity networks via mixed neighborhood selection.” DOI:10.1214/17-AOAS1067SUPPA, DOI:10.1214/17-AOAS1067SUPPB.
• Mueller, S., Wang, D., Fox, M., Yeo, T., Sepulcre, J., Sabuncu, M., Shafee, R., Lu, J. and Liu, H. (2013). Individual variability in functional connectivity architecture of the human brain. Neuron 77 586–595.
• Narayan, M., Allen, G. and Tomson, S. (2015). Two sample inference for populations of graphical models with applications to functional connectivity. Preprint. Available at arXiv:1502.03853.
• Nielsen, J., Zielinski, B., Fletcher, T., Alexander, A., Lange, N., Bigler, E., Lainhart, J. and Anderson, J. (2013). Multisite functional connectivity MRI classification of autism: ABIDE results. Front. Human Neurosci. 7 72–83.
• Pinheiro, J. and Bates, D. (2000). Mixed-Effects Models in S and S-PLUS. Springer Science & Business Media, Berlin.
• Power, J., Barnes, K., Snyder, A., Schlaggar, B. and Petersen, S. (2012). Spurious but systematic correlations in functional connectivity MRI networks arise from subject motion. NeuroImage 59 2142–2154.
• Rubinov, M. and Sporns, O. (2010). Complex network measures of brain connectivity: Uses and interpretations. NeuroImage 52 1059–1069.
• Schelldorfer, J., Bühlmann, P. and van de Geer, S. (2011). Estimation for high-dimensional linear mixed-effects models using $\ell_{1}$-penalization. Scand. J. Stat. 38 197–214.
• Smith, S. (2012). The future of fMRI connectivity. NeuroImage 62 1257–1266.
• Smith, S., Miller, K., Salimi-Khorshidi, G., Webster, M., Beckmann, C., Nichols, T., Ramsey, J. and Woolrich, M. (2011). Network modelling methods for fMRI. NeuroImage 54 875–891.
• Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B. Stat. Methodol. 58 267–288.
• Van Den Heuvel, M. and Pol, H. (2010). Exploring the brain network: A review on resting-state fMRI functional connectivity. Eur. Neuropsychopharmacol. 20 519–534.
• Varoquaux, G. and Craddock, C. (2013). Learning and comparing functional connectomes across subjects. NeuroImage 80 405–415.
• Varoquaux, G., Gramfort, A., Poline, J. and Thirion, B. (2010). Brain covariance selection: Better individual functional connectivity models using population prior. In Neural Information Processing Systems 2334–2342.
• Zuo, X., Di Martino, A., Kelly, C., Shehzad, Z., Gee, D., Klein, D., Castellanos, X., Biswal, B. and Milham, M. (2010). The oscillating brain: Complex and reliable. NeuroImage 49 1432–1445.
#### Supplemental materials
• Supplement A. A pdf document consisting of parts A, B, C and D. This supplement contains further details of the various simulation settings employed throughout the manuscript together with an extensive sensitivity analysis of the proposed method. A brief discussion of brain regions studied in the application is also provided.
• Supplement B. A .zip file consisting of R code implementing the proposed Mixed Neighbourhood Selection algorithm. This code may also be freely downloaded from the Comprehensive R Archive Network (CRAN).
http://mathoverflow.net/questions/121227/invariant-measures-for-cellular-automata?sort=newest | # Invariant measures for Cellular automata
An easy question that I have never been able to answer. Suppose we have the CA on $\{ 0,1,2 \}^{\mathbb{N}}$ with local rule given by $f(x,y)=A_{x,y}$, where $A$ is the $3\times 3$ matrix $A=(0,1,2,0,1,2,1,2,0)$, listed row by row. For example $(0,1,2,0,0,0,1,2,0,\ldots)\mapsto (1,2,1,0,0,1,2,1,\ldots).$ It acts like the shift if the preceding coordinate is $0$ or $1$, and like the shift followed by $x \mapsto x+1 \bmod 3$ if not. My question is: are there any fully supported invariant probability measures other than the Haar measure? I have never seen an idea to solve this problem, so any reference is welcome.
What about a measure with all weight given to $(0,0,0,\ldots)$? – Yoav Kallus Feb 8 '13 at 19:02
A nice geometric interpretation of the problem could be great. – Umberto Feb 8 '13 at 20:08
If I understand you correctly then the action of this system on the closed subset $\{0,1\}^{\mathbb{N}}$ is simply the shift, so this system admits a host of shift-invariant ergodic measures supported on $\{0,1\}^{\mathbb{N}}$. Or do you want the measure to be fully supported? – Ian Morris Feb 8 '13 at 22:14
Could you explain in more detail how the local rules are defined in terms of the matrix $A$? – R W Feb 8 '13 at 23:46
@R W: I think that if you have x followed by y, then you must take the x+1-st row of the matrix and the y+1-st column of it to get the new value of x. – domotorp Feb 9 '13 at 19:13
This problem is easy, but there is a related problem that is not: find a fully supported measure, different from the Haar measure, that is invariant for our CA and at the same time invariant for the action of the shift. Is this possible?
To solve the problem here we can do the following. Call our CA $F$. Since $F$ is clearly surjective, a result of Hedlund implies that every cylinder set has 3 preimages.
We will find an invariant measure for $F$ by defining, recursively in $k$, the measure on the cylinder sets with coordinates $0,1,\ldots,k$ fixed, and then extend the measure via the Kolmogorov extension theorem as usual. For $i\in \{0,1,2\}$ define $[i]_0=\{ i\}\times \{0,1,2\}^{\mathbb{N}}.$
For $k=1,2,\ldots$ suppose $F^{-k}[i]_0=\{a_i^{1}(k),a_i^{2}(k),\ldots,a_i^{3^{k}}(k)\},$ for example ordered by the lexicographic order. We will define inductively in $k$ the values $\mu(a_i^{j}(k))\doteq p_{i}^{j}(k)\in (0,1).$
Choose $p_0^{1}(0),p_1^{1}(0),p_2^{1}(0)>0$ such that $p_0^{1}(0)+p_1^{1}(0)+p_2^{1}(0)=1.$ Define $\mu([i]_0)=p_i^{1}(0).$
Suppose we have defined $\mu$ on $F^{-k}[i]_0.$ Given $j=1,2,\ldots,3^{k}$ we find $j_1,j_2,j_3$ such that $F(a_i^{j_1}(k+1))=F(a_i^{j_2}(k+1))=F(a_i^{j_3}(k+1))=a_i^{j}(k)$ (the evaluation of $F$ on cylinder sets has the obvious meaning), and choose $p_i^{j_1}(k+1),p_i^{j_2}(k+1),p_i^{j_3}(k+1)>0$ such that $p_i^{j_1}(k+1)+p_i^{j_2}(k+1)+p_i^{j_3}(k+1)=p_i^{j}(k).$
Clearly we have defined in such a way an $F$-invariant probability measure, which however is not necessarily shift invariant.
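Hedlund's three-preimage count used above can be checked by brute force on finite words. A small sketch (here $F$ acts on finite words, shortening them by one letter; the enumeration is mine):

```python
from itertools import product

A = [[0, 1, 2],
     [0, 1, 2],
     [1, 2, 0]]

def F(word):
    """One application of the CA to a finite word (output one letter shorter)."""
    return tuple(A[x][y] for x, y in zip(word, word[1:]))

# Count preimages of each length-2 cylinder among the 27 length-3 words:
counts = {}
for w in product(range(3), repeat=3):
    counts[F(w)] = counts.get(F(w), 0) + 1

print(sorted(counts.values()))  # nine cylinders, 3 preimages each
```

Each row of $A$ is a permutation of $\{0,1,2\}$, so the preimage count is exactly 3: the first letter $x_0$ of a preimage can be chosen freely, and the remaining letters are then forced.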
-
https://www.physicsforums.com/threads/shear-stress-eqn.127198/

# Shear stress eqn
1. Jul 26, 2006
### teng125
http://www.savefiles.net/d/ieiwoixnjfw.html
Can anybody please write out for me the shear stress equation, using singularity (Heaviside) functions, for the beam I have attached in the file above?
pls help
thanx
2. Jul 26, 2006
### Cyrus
This is the homework help, not the do your homework forum. :grumpy:
3. Jul 27, 2006
### Pyrrhus
Looks like standard procedure to me?, on what exactly are you stuck?
4. Jul 28, 2006
### teng125
the equation for the middle part, where the upper and lower distributed forces join. Should I write 2q<x-l>^0 or something else?
5. Jul 28, 2006
### Meson
Taking downward direction as a positive direction, the load density function should be
$$q(x)=q_{0}-q_{0}(x-l)^0-q_{0}(x-l)^0=q_{0}-2q_{0}(x-l)^0$$
Note that the $$q_{0}$$ is contributed by the upper distributed force, the first $$q_{0}(x-l)^0$$ is to diminish the effect of $$q_{0}$$ at position $$x=l$$ along the beam, while the second $$q_{0}(x-l)^0$$ is the lower distributed force acting in an upward direction, which means the negative direction here by convention.
Then, proceed to find $$V(x)$$ and $$M(x)$$. Finally, draw the shear and moment diagrams.
Last edited: Jul 28, 2006
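Meson's load density is easy to evaluate numerically with a Macaulay bracket. A minimal sketch (the function names and the sample values q0 = 1, l = 1 are mine; downward is taken positive, as above):

```python
def macaulay(x, a, n):
    """Macaulay/singularity bracket <x - a>^n: zero for x < a, (x - a)**n otherwise."""
    return (x - a) ** n if x >= a else 0.0

def q(x, q0=1.0, l=1.0):
    """Load density q(x) = q0 - 2*q0*<x - l>^0 (downward positive)."""
    return q0 - 2.0 * q0 * macaulay(x, l, 0)

print(q(0.5))  # 1.0  (only the upper distributed load acts)
print(q(1.5))  # -1.0 (past x = l the upward lower load dominates)
```

Integrating this piecewise-constant density (with the appropriate sign convention) then yields V(x) and M(x) for the shear and moment diagrams.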
6. Jul 28, 2006
### teng125
ok.........thanx
https://oeis.org/wiki/Quadratic_polynomials

# Quadratic polynomials
A [univariate] quadratic polynomial is a [univariate] polynomial of degree 2, i.e. of the form
${\displaystyle ax^{2}+bx+c,\quad a\neq 0.}$
The two zeros of the quadratic polynomial ${\displaystyle ax^{2}+bx+c}$ are the two roots of the quadratic equation
${\displaystyle ax^{2}+bx+c=0,}$
with coefficients ${\displaystyle a,b,c}$ and ${\displaystyle a\neq 0}$.
The two roots are obtained by completing the square, i.e.
${\displaystyle a\left(x+{\frac {b}{2a}}\right)^{2}+c-{\frac {b^{2}}{4a}}=0,}$
or, letting ${\displaystyle D=b^{2}-4ac}$,
${\displaystyle \left(x+{\frac {b}{2a}}\right)^{2}={\frac {D}{4a^{2}}},}$
hence
${\displaystyle x+{\frac {b}{2a}}=\pm {\frac {\sqrt {D}}{2a}},}$
${\displaystyle x={\frac {-b\pm {\sqrt {D}}}{2a}},}$
where ${\displaystyle D=b^{2}-4ac}$, the discriminant of the quadratic equation, is either:
• 0 (in which case ${\displaystyle -{\frac {b}{2a}}}$ is the rational double root of the quadratic equation);
• positive and a perfect square (the quadratic equation has two distinct rational roots);
• positive and not a perfect square (the quadratic equation has two distinct real conjugate quadratic roots);
• negative (the quadratic equation has two distinct complex conjugate quadratic roots).
## Vieta's formulas for the quadratic
${\displaystyle (x-x_{1})(x-x_{2})=0,\,}$
${\displaystyle x_{1}+x_{2}=-{\frac {b}{a}},\quad x_{1}x_{2}={\frac {c}{a}}.}$
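The quadratic formula and Vieta's formulas are easy to sanity-check numerically. A minimal Python sketch (the example polynomial $x^2-5x+6=(x-2)(x-3)$ is mine):

```python
import cmath

def quadratic_roots(a, b, c):
    """Roots of a*x**2 + b*x + c = 0 via the quadratic formula (a != 0)."""
    d = b * b - 4 * a * c    # discriminant D
    s = cmath.sqrt(d)        # complex sqrt also handles D < 0
    return (-b - s) / (2 * a), (-b + s) / (2 * a)

x1, x2 = quadratic_roots(1, -5, 6)   # x**2 - 5x + 6 = (x - 2)(x - 3)
print(x1, x2)                        # (2+0j) (3+0j)
# Vieta: x1 + x2 = -b/a and x1*x2 = c/a
print(x1 + x2, x1 * x2)              # (5+0j) (6+0j)
```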
https://gamedev.stackexchange.com/questions/16212/quaternions-and-rotation-around-world-axis

# Quaternions and rotation around world axis
Disclaimer: I am a professional games programmer, and use quaternions most days but they are close to black magic to me. I am relatively at home with math but imaginary numbers always confused me. I tend to treat quats as useful and end up reversing multiplications more than once. I try to reason about them like I would with matrices with limited success.
Anyhow....
What baffles me is the following. When I want to rotate an object around its local axis I multiply its rotation with the quaternion that represents the rotation I want to apply. It is therefore a rotation in local space.
Now if I want to rotate it around an axis in world space, my reasoning would be: Take the rotation in world space as a quaternion. Multiply the inverse of my object rotation with this quaternion. This will bring my world rotation in local space. Multiply my rotation with this new quaternion. ie: newRot = oldRot * (inverse oldRot * worldRot)
However, what I need to do is newRot = oldRot * (inverse oldRot * worldRot) * oldRot.
Why do I, after multiplying with the inverse quat, still need to multiply with my own quat before applying it? I know there must be a perfectly valid reason, but I can't reason my way out of it and it's frustrating as heck to me. I tried the various FAQs and whatnot, but most go too deep into the math, making it less clear to me.
Anyone who can explain this to me like I'm a 5 year old?
• Isn't it a bit like matrix translations and rotations (ie. you need to move your object to the center, rotate and then move back when you want to rotate an item around itself: Minv_transl * Mrot * Mtransl) – Valmond Aug 20 '11 at 8:20
• I try to reason about them like I would with matrices - then you are on the right track. If you understood how to rotate around object's axes and world's axes using matrices, you can do the same using quaternions. The multiplication order is the same for both, matrices and quaternions. – Maik Semder Aug 20 '11 at 15:57
Quaternions are associative:
you mention that your solution is:
newRot = oldRot * (inverse oldRot * worldRot) * oldRot
which is the same as:
newRot = oldRot * inverse oldRot * worldRot * oldRot
which is the same as:
newRot = identity * worldRot * oldRot
newRot = worldRot * oldRot
which actually brings you back to what's really happening:
localTransformed = oldRot * rot
worldTransformed = rot * oldRot
The order of application is changing, that is all. Going back to matrices, when you apply an object matrix to a transform matrix and store that as your new object matrix, that's your local space transformation. When you apply the transform matrix to the object matrix and store that, that's your world transformation. It's all about the order of application and nothing more.
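The cancellation in the equations above is easy to verify numerically. A minimal Python sketch (quaternions as (w, x, y, z) tuples; the helper names are mine, not from any engine API):

```python
import math

def qmul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def qinv(q):
    """Inverse of a unit quaternion is its conjugate."""
    w, x, y, z = q
    return (w, -x, -y, -z)

def axis_angle(ax, ay, az, angle):
    """Unit quaternion for a rotation of `angle` radians about a unit axis."""
    s = math.sin(angle / 2)
    return (math.cos(angle / 2), ax * s, ay * s, az * s)

oldRot = axis_angle(0, 0, 1, 0.7)    # some current orientation
worldRot = axis_angle(1, 0, 0, 0.3)  # a rotation about a world axis

# oldRot * (inverse oldRot * worldRot) * oldRot  ==  worldRot * oldRot
lhs = qmul(qmul(qmul(oldRot, qinv(oldRot)), worldRot), oldRot)
rhs = qmul(worldRot, oldRot)
print(all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs)))  # True
```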
• +1 for the first part, the second part is a bit misleading. If you'd only use 'rot' in the last code sample, rather than 'localRot' and 'worldRot', the example becomes clearer. Otherwise it implies that the rots itself are anyhow different. But the difference lies only in the multiplication order, as you showed, rather than in different quaternions ('localRot' and 'worldRot'). 'localTransformed' and 'worldTransformed' would be better as: 'rotatedAroundLocalAxis' and 'rotatedAroundWorldAxis'. That itself would explain the equations and make the last paragraph obsolete, which has some flaws. – Maik Semder Aug 20 '11 at 16:33
• Flaws in the last paragraph: the distinction between matrix and transform (both are the same here and interchangeable, so its better to use just matrix to prevent confusion) and the terms "local space transform" and "world transform": it would be more correct to say, the first equation gives you the 'local-to-world matrix' after being rotated around the object's local axis, the second one gives you the 'local-to-world matrix' after being rotated around world's axis. In both cases, what you get is simply the 'local-to-world matrix'. However, the first part has my +1 anyway for the analysis. – Maik Semder Aug 20 '11 at 16:39
• +1 @Maik perhaps you could write a seperate answer to make the indifference between rotations and the issue of multiplication order even clearer? Thanks for the comment either way! – Max Dohme Aug 20 '11 at 17:28
• Ah, now it makes sense. I didn't know (ouch, that woulda been in FAQs) that quaternion multiplication was associative, so indeed the rotation and its inverse cancel each other out, giving me the insight I needed: one has the local rotation on the right and one on the left, which basically say 'apply rotation in parent space' or 'apply rotation in local space'....no different than matrices. Pretty elementary once you see it! Thanks! – Kaj Aug 20 '11 at 22:57
http://mathhelpforum.com/pre-calculus/23705-need-help-basic-log.html

# Thread: Need Help with Basic Log
1. ## Need Help with Basic Log
Evaluate:
1) $\log_2 128 - \log_2 32$
2) $\log_8 \sqrt8$
Can someone please help me with these log equations! Please show me what to do IN DETAIL (STEP BY STEP)! I really don't get what to do. I know you're supposed to do trial and error for the first equation, but how do I do this? And I don't know how to use the calculator "log" function when the base is not 10!
1) $2$
2) $0$
P.S.
Are the following the correct way of writing an inverse of $y = 4^x$?
Exponential:
$f^{-1}(x) = 4^x$
Logarithmic:
$y = \log_4 x$
2. Originally Posted by Macleef
Evaluate:
1) $\log_2 128 - \log_2 32$
2) $\log_8 \sqrt8$
Can someone please help me with these log equations! Please show me what to do IN DETAIL (STEP BY STEP)! I really don't get what to do. I know you're supposed to do trial and error for the first equation, but how do I do this? And I don't know how to use the calculator "log" function when the base is not 10!
1) $2$
2) $0$
P.S.
Are the following the correct way of writing an inverse of $y = 4^x$?
Exponential:
$f^{-1}(x) = 4^x$
Logarithmic:
$y = \log_4 x$
When trying to simplify basic logs of the form $\log_a b$, try to turn it into the form $\log_a a^x$, as this equals $x$.
1) $\log_2 128 -\log_2 32$
$=\log_2 2^7 - \log_2 2^5$
Now, think of what this means.
$x=\log_2 2^7 \iff 2^x = 2^7$
x must then be 7.
Also, $x=\log_2 2^5 \iff 2^x = 2^5$
$\Rightarrow x = 5$
So we then have $7-5=2$
2) $\log_8 \sqrt{8}$
Recall that $\sqrt{8} = 8^\frac{1}{2}$
So, $\log_8 \sqrt{8} = \log_8 8^{\frac{1}{2}} = \frac{1}{2}$ ... i think the answers must be wrong.
3. Thanks, it makes sense to me now
https://eprints.lancs.ac.uk/id/eprint/145270/

# Renewal Theory for Transient Markov Chains with Asymptotically Zero Drift
Denisov, Denis and Korshunov, Dmitry and Wachtel, Vitali (2020) Renewal Theory for Transient Markov Chains with Asymptotically Zero Drift. Transactions of the American Mathematical Society, 373 (10). pp. 7253-7286. ISSN 0002-9947
## Abstract
We solve the problem of asymptotic behaviour of the renewal measure (Green function) generated by a transient Lamperti's Markov chain $X_n$ in $\mathbb{R}$, that is, when the drift of the chain tends to zero at infinity. Under this setting, the average time spent by $X_n$ in the interval $(x,x+1]$ is roughly speaking the reciprocal of the drift and tends to infinity as $x$ grows. For the first time we present a general approach relying on a diffusion approximation to prove renewal theorems for Markov chains. We apply a martingale type technique and show that the asymptotic behaviour of the renewal measure heavily depends on the rate at which the drift vanishes. The two main cases are distinguished, either the drift of the chain decreases as $1/x$ or much slower than that, say as $1/x^\alpha$ for some $\alpha\in(0,1)$. The intuition behind how the renewal measure behaves in these two cases is totally different. While in the first case $X_n^2/n$ converges weakly to a $\Gamma$-distribution and there is no law of large numbers available, in the second case a strong law of large numbers holds true for $X_n^{1+\alpha}/n$ and further normal approximation is available.
https://www.gamedev.net/topic/645634-move-object-towards-another-object-correctly/
# Move Object towards another object? Correctly.
### #1Tommy Boy Members
Posted 19 July 2013 - 01:29 PM
Hi everybody
I'm trying to move one object towards the position where I have clicked, for example:
if (Unicorn->getX() < ClickedPosX)
{
    Unicorn->setX( Unicorn->getX() + 5);
}
if (Unicorn->getX() > ClickedPosX)
{
    Unicorn->setX( Unicorn->getX() - 5);
}
if (Unicorn->getY() > ClickedPosY)
{
    Unicorn->setY( Unicorn->getY() + 5);
}
if (Unicorn->getY() < ClickedPosY)
{
    Unicorn->setY( Unicorn->getY() - 5);
}
This works but doesn't go straight to that point, here is a picture representing what happens when game runs
I know this happens because I am adding 5 which is the speed. I don't know how to calculate what to add, so he would walk towards that direction. I'm guessing I have to find angle between those two points and somehow use that?
Thanks guys, this is not homework or anything just doing it for fun and learning.
### #2Brother Bob Moderators
Posted 19 July 2013 - 01:42 PM
POPULAR
If you want to do this on integer coordinates, with integer math, look up Bresenham's line algorithm. Otherwise some linear algebra will be useful. If you want to move at a specific speed, make a vector from the current position to the desired position and normalize it, that will give you a unit vector in the direction towards the target and you can just multiply it by the desired speed to move the object. If you want to move at a specific time, an easier approach is just to linearly interpolate between the current and the target position.
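The "normalize the direction, then scale by speed" step can be sketched in a few lines. A minimal Python version (the name `move_towards` and the snap-to-target check are mine, not from any engine):

```python
import math

def move_towards(pos, target, speed):
    """One step of constant-speed 2D motion from pos towards target."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)       # length of the direction vector
    if dist <= speed:               # close enough: snap to the target
        return target               # (also avoids dividing by zero)
    return (pos[0] + dx / dist * speed,   # unit direction * speed
            pos[1] + dy / dist * speed)

print(move_towards((0.0, 0.0), (3.0, 4.0), 1.0))  # (0.6, 0.8)
```

Calling this once per frame moves the object in a straight line at the given speed, which is exactly what the zig-zag code above fails to do.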
### #3Tommy Boy Members
Posted 19 July 2013 - 01:58 PM
Sorry this is probably going to be a stupid question, but how do I normalize a 2D vector?
Is it dividing the X and Y by the length of the line?
### #4Tommy Boy Members
Posted 19 July 2013 - 02:09 PM
Otherwise some linear algebra will be useful. If you want to move at a specific speed, make a vector from the current position to the desired position and normalize it, that will give you a unit vector in the direction towards the target and you can just multiply it by the desired speed to move the object.
Got it working using that tactic Thank you Brother bob
### #5Kaptein Prime Members
Posted 19 July 2013 - 02:37 PM
the easiest thing is to make positions floats, then you can move by a fractional amount as well as integral, which is what happens when you move slowly
it's never a good idea to use integral values for positions, unless perhaps you internally have precision, but render using integers
anyways,
first define a 2D vector:
#include <cmath>  // for sqrtf

class vec2
{
public:
    float x, y;

    vec2() : x(0), y(0) {}           // initialize (x, y) to (0, 0)
    vec2(float v) : x(v), y(v) {}    // (x, y) to (v, v)
    // initialize both
    vec2(float X, float Y) : x(X), y(Y) {}

    float length() const
    {
        // length as Euclidean distance:
        return sqrtf(x*x + y*y);
    }

    void normalize()
    {
        // make the vector's length exactly 1:
        float L = length();
        if (L != 0.0f)
        {
            x /= L; y /= L;
        }
    }
};
of course, this 2D vector (vec2) is severely lacking in pretty much all aspects of what it should be able to do
however,
let's say you have 2 points in space (x1, y1) and (x2, y2)
both of these are vectors:
vec2 v1 = vec2(x1, y1); // or vec2 v1(x1, y1) if you want
vec2 v2 = vec2(x2, y2);
then, the direction-vector pointing exactly towards a point (x2, y2) is
the normalized vector of the difference between v2 and v1:
vec2 v3 = vec2(v2.x - v1.x, v2.y - v1.y);
v3.normalize();
now we have a vector that points towards our point.
to move towards the new point at a certain speed, multiply the new vector by the speed, or "scaling factor" if you will
float speed = 0.5f;
v3.x *= speed;
v3.y *= speed;
now we will move the unicorn (was it?) towards the new point at our determined speed:
unicorn.x += v3.x;
unicorn.y += v3.y;
hope this helps
i wrote all the code directly in here, so it might be flawed
oops, i see you already got it working
Edited by Kaptein, 20 July 2013 - 05:20 AM.
https://aj2duncan.com/blog/reprinting-r-chunks/

### Reprinting R Chunks
#### November 3, 2017
I only discovered this the other day and I’m writing it down so I can find it easily again.
In an R markdown document you can include a chunk of R code like

```{r chunk-name}
my_fn <- function() {
  print("Hello World.")
}
```
What I hadn’t realised is that you can reprint that chunk of code later by using the name as a reference.
```{r, ref.label = "chunk-name", echo = TRUE, results = "markup"}
```
The reason I was looking for this was to enable R functions to be printed in an appendix without having to copy and paste the code.
This also means that you could put a chunk earlier in the document with eval = FALSE and then actually calculate the results later in the document by using ref.label = "chunk-name", eval = TRUE. I doubt I’ll use this very much but it might be useful at some point.
It is of course in the knitr documentation and the functionality is exactly what I’ve come to expect from a package by Yihui Xie. Yihui thinks of pretty much everything when he writes a package.
http://www.fz-juelich.de/iek/iek-8/EN/Expertise/MeasurementTechniques/Radiation/Radiation_node.html
Institute of Energy and Climate Research
Spectral and integral radiation quantities can be determined with a number of instruments under field- and laboratory conditions.
Actinic radiation measurements to determine photolysis frequencies have been made for many years at IEK-8. Absolutely calibrated spectral radiometers and filter radiometers are used for that purpose. These instruments are deployed for ground-based field measurements, for experiments in the atmosphere simulation chamber SAPHIR, and on mobile platforms like the Zeppelin NT. In addition, the spectral radiometers can serve as a reference for instruments of other institutions. A roof platform with subjacent laboratory is available for such comparisons under field conditions. Moreover, measurements can be made to characterise photochemical laboratory reactors and simulation chambers.
https://www.physicsforums.com/threads/solving-an-equation-for-t.103761/

# Solving an equation for t
1. Dec 11, 2005
### swain1
Hi, can anyone help me to solve this equation for t please.
1=10*exp((-pi*t)/5)*cos(pi*t)
Thanks
2. Dec 12, 2005
### Tide
HINT: Rewrite the equation as
$$10 \cos \left[ \pi t \left(1 - \frac {i}{5}\right) \right] = 1$$
Second HINT: Recheck my algebra! ;)
3. Dec 12, 2005
### mathman
Note to Tide: Your algebra looks wrong.
4. Dec 14, 2005
### Tide
mathman,
Yes, very! Thanks - but I did give fair warning. :)
[note to self: do it on paper next time!]
5. Dec 14, 2005
### HallsofIvy
Either write cos(pi*t) as
$$\frac{e^{i\pi t}+ e^{-i\pi t}}{2}$$
so that the equation becomes
$$1= 10e^{-\frac{\pi t}{5}}\,\frac{e^{i\pi t}+ e^{-i\pi t}}{2}= 5\left(e^{\pi t\left(-\frac{1}{5}+ i\right)}+ e^{\pi t\left(-\frac{1}{5}- i\right)}\right)$$
or write exp((-pi*t)/5) as
$$\cos\left(-\frac{\pi t}{5}\right)+ i\sin\left(-\frac{\pi t}{5}\right)$$
so the equation becomes
$$1= 10\left(\cos\left(\frac{\pi t}{5}\right)\cos(\pi t)- i\cos(\pi t)\sin\left(\frac{\pi t}{5}\right)\right)$$
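The thread's equation can also be solved numerically. A minimal bisection sketch (the bracket $(0, 0.5)$ follows from $f(0)=9>0$ and $f(0.5)\approx -1<0$; there are further roots beyond this bracket):

```python
import math

def f(t):
    # f(t) = 0  <=>  1 = 10*exp(-pi*t/5)*cos(pi*t)
    return 10 * math.exp(-math.pi * t / 5) * math.cos(math.pi * t) - 1

# f(0) = 9 > 0 and f(0.5) < 0, so a root lies in (0, 0.5); bisect:
lo, hi = 0.0, 0.5
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)

print(round(lo, 6))        # first positive solution
print(abs(f(lo)) < 1e-6)   # True
```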
http://www.researchgate.net/researcher/63914218_J_S_Kim/

# J S Kim
Pohang University of Science and Technology, Andong, North Gyeongsang, South Korea
## Publications (11)
• ##### Article:Microscopic Coexistence of Superconductivity and Antiferromagnetism in Underdoped Ba(Fe_{1-x}Ru_{x})_{2}As_{2}
ABSTRACT: We use ^{75}As nuclear magnetic resonance to investigate the local electronic properties of Ba(Fe_{1-x}Ru_{x})_{2}As_{2} (x=0.23). We find two phase transitions: to antiferromagnetism at T_{N}≈60 K and to superconductivity at T_{C}≈15 K. Below T_{N}, our data show that the system is fully magnetic, with a commensurate antiferromagnetic structure and a moment of 0.4μ_{B}/Fe. The spin-lattice relaxation rate 1/^{75}T_{1} is large in the magnetic state, indicating a high density of itinerant electrons induced by Ru doping. On cooling below T_{C}, 1/^{75}T_{1} on the magnetic sites falls sharply, providing unambiguous evidence for the microscopic coexistence of antiferromagnetism and superconductivity.
Physical Review Letters 11/2012; 109(19):197002. · 7.37 Impact Factor
• ##### Article:Microscopic coexistence of superconductivity and antiferromagnetism in underdoped Ba(Fe1-xRux)2As2
ABSTRACT: We use $^{75}$As nuclear magnetic resonance (NMR) to investigate the local electronic properties of Ba(Fe$_{1-x}$Ru$_{x}$)$_2$As$_2$ ($x =$ 0.23). We find two phase transitions, to antiferromagnetism at $T_N \approx$ 60 K and to superconductivity at $T_C \approx$ 15 K. Below $T_N$, our data show that the system is fully magnetic, with a commensurate antiferromagnetic structure and a moment of 0.4 $\mu_B$/Fe. The spin-lattice relaxation rate $1/^{75}T_1$ is large in the magnetic state, indicating a high density of itinerant electrons induced by Ru doping. On cooling below $T_C$, $1/^{75}T_1$ on the magnetic sites falls sharply, providing unambiguous evidence for the microscopic coexistence of antiferromagnetism and superconductivity.
05/2012;
• ##### Article:Electron spin resonance in Eu based Fe pnictides
ABSTRACT: The phase diagrams of EuFe$_{2-x}$Co$_x$As$_2$ $(0 \leq x \leq 0.4)$ and EuFe$_2$As$_{2-y}$P$_y$ $(0 \leq y \leq 0.43)$ are investigated by Eu$^{2+}$ electron spin resonance (ESR) in single crystals. From the temperature dependence of the linewidth $\Delta H(T)$ of the exchange narrowed ESR line the spin-density wave (SDW) $(T < T_{\rm SDW})$ and the normal metallic regime $(T > T_{\rm SDW})$ are clearly distinguished. At $T > T_{\rm SDW}$ the isotropic linear increase of the linewidth is driven by the Korringa relaxation which measures the conduction-electron density of states at the Fermi level. For $T < T_{\rm SDW}$ the anisotropy probes the local ligand field, while the coupling to the conduction electrons disappears. With increasing substitution $x$ or $y$ the transition temperature $T_{\rm SDW}$ decreases linearly accompanied by a linear decrease of the Korringa-relaxation rate from 8 Oe/K at $x=y=0$ down to 3 Oe/K at the onset of superconductivity at $x \approx 0.2$ or at $y \approx 0.3$, above which it remains nearly constant. Comparative ESR measurements on single crystals of the Eu diluted SDW compound Eu$_{0.2}$Sr$_{0.8}$Fe$_2$As$_2$ and superconducting (SC) Eu$_{0.22}$Sr$_{0.78}$Fe$_{1.72}$Co$_{0.28}$As$_2$ corroborate the leading influence of the ligand field on the Eu$^{2+}$ spin relaxation in the SDW regime as well as the Korringa relaxation in the normal metallic regime. Like in Eu$_{0.5}$K$_{0.5}$Fe$_2$As$_2$ a coherence peak is not detected in the latter compound at $T_{\rm c}=21$ K, which is in agreement with the expected complex anisotropic SC gap structure.
05/2012;
• Source
##### Article:Anisotropic Dirac fermions in a Bi square net of SrMnBi2.
ABSTRACT: We report the observation of highly anisotropic Dirac fermions in a Bi square net of SrMnBi(2), based on a first-principles calculation, angle-resolved photoemission spectroscopy, and quantum oscillations for high-quality single crystals. We found that the Dirac dispersion is generally induced in the (SrBi)(+) layer containing a double-sized Bi square net. In contrast to the commonly observed isotropic Dirac cone, the Dirac cone in SrMnBi(2) is highly anisotropic with a large momentum-dependent disparity of Fermi velocities of ~8. These findings demonstrate that a Bi square net, a common building block of various layered pnictides, provides a new platform that hosts highly anisotropic Dirac fermions.
Physical Review Letters 09/2011; 107(12):126402. · 7.37 Impact Factor
• Source
##### Article:Anisotropic Dirac Fermions in a Bi Square Net of SrMnBi2
ABSTRACT: We report the observation of highly anisotropic Dirac fermions in a Bi square net of SrMnBi2, based on a first-principles calculation, angle-resolved photoemission spectroscopy, and quantum oscillations for high-quality single crystals. We found that the Dirac dispersion is generally induced in the (SrBi)+ layer containing a double-sized Bi square net. In contrast to the commonly observed isotropic Dirac cone, the Dirac cone in SrMnBi2 is highly anisotropic with a large momentum-dependent disparity of Fermi velocities of ~8. These findings demonstrate that a Bi square net, a common building block of various layered pnictides, provides a new platform that hosts highly anisotropic Dirac fermions.
Physical Review Letters 09/2011; 20. · 7.37 Impact Factor
• Source
##### Article:Evolution of transport properties of BaFe2-xRuxAs2 in a wide range of isovalent Ru substitution
ABSTRACT: The effects of isovalent Ru substitution at the Fe sites of BaFe2-xRuxAs2 are investigated by measuring resistivity and Hall coefficient on high-quality single crystals in a wide range of doping (0 < x < 1.4). Ru substitution weakens the antiferromagnetic (AFM) order, inducing superconductivity for relatively high doping level of 0.4 < x < 0.9. Near the AFM phase boundary, the transport properties show non-Fermi-liquid-like behaviors with a linear-temperature dependence of resistivity and a strong temperature dependence of Hall coefficient with a sign change. Upon higher doping, however, both of them recover conventional Fermi-liquid behaviors. Strong doping dependence of Hall coefficient together with a small magnetoresistance suggest that the anomalous transport properties can be explained in terms of anisotropic charge carrier scattering due to interband AFM fluctuations rather than a conventional multi-band scenario.
09/2011;
• Source
##### Article:Electron-hole asymmetry in Co- and Mn-doped SrFe2As2
ABSTRACT: Phase diagram of electron and hole-doped SrFe2As2 single crystals is investigated using Co and Mn substitution at the Fe-sites. We found that the spin-density-wave state is suppressed by both dopants, but the superconducting phase appears only for Co (electron)-doping, not for Mn (hole)-doping. Absence of the superconductivity by Mn-doping is in sharp contrast to the hole-doped system with K-substitution at the Sr sites. Distinct structural change, in particular the increase of the Fe-As distance by Mn-doping is important to have a magnetic and semiconducting ground state as confirmed by first principles calculations. The absence of electron-hole symmetry in the Fe-site-doped SrFe2As2 suggests that the occurrence of high-Tc superconductivity is sensitive to the structural modification rather than the charge doping. Comment: 7 pages, 6 figures
04/2010;
• Source
##### Article:Coupling of localized moments and itinerant electrons in EuFe2As2 single crystals studied by Electron Spin Resonance
ABSTRACT: Electron spin resonance measurements in EuFe2As2 single crystals revealed an absorption spectrum of a single resonance with Dysonian lineshape. Above the spin-density wave transition at T_SDW = 190 K the spectra are isotropic and the spin relaxation is strongly coupled to the CEs resulting in a Korringa-like increase of the linewidth. Below T_SDW, a distinct anisotropy develops and the relaxation behavior of the Eu spins changes drastically into one with characteristic properties of a magnetic insulating system, where dipolar and crystal-field interactions dominate. This indicates a spatial confinement of the conduction electrons to the FeAs layers in the SDW state. Comment: 4 pages, 4 figures
09/2009;
• ##### Article:Strong reduction of the Korringa relaxation in the spin-density wave regime of EuFe_{2}As_{2} observed by electron spin resonance
ABSTRACT: Electron spin resonance measurements in EuFe2As2 single crystals revealed an absorption spectrum of a single resonance with Dysonian line shape. Above the spin-density wave (SDW) transition at TSDW=190 K the spectra are isotropic and the Eu spins relax via the conduction electrons resulting in a Korringa-type increase in the linewidth. Below TSDW, a distinct anisotropy develops and the relaxation behavior of the Eu spins changes drastically into one with characteristic properties of a magnetic insulating system, where dipolar and crystal-field interactions dominate. This indicates a spatial confinement of the conduction electrons to the FeAs layers in the SDW state.
Phys. Rev. B. 81(2).
• ##### Article:Electron-hole asymmetry in Co- and Mn-doped SrFe_{2}As_{2}
ABSTRACT: Phase diagrams of electron- and hole-doped SrFe2As2 single crystals are investigated using Co and Mn substitution at the Fe sites. We find that the spin-density-wave state is suppressed by both dopants but the superconducting phase appears only for Co (electron) doping, not for Mn (hole) doping. Absence of the superconductivity by Mn doping is in sharp contrast to the hole-doped system with K substitution at the Sr sites. First-principles calculations based on detailed structural investigations reveal that a distinct structural change, i.e., the increase in the Fe-As distance by Mn doping is the most decisive factor to induce a magnetic and semiconducting ground state in SrFe2−xMnxAs2. The absence of electron-hole symmetry in the phase diagrams of the Fe-site doped SrFe2As2 suggests that the occurrence of high-Tc superconductivity is sensitive to the structural modification rather than the carrier doping.
Phys. Rev. B. 82(2).
• ##### Article:Evolution of transport properties of BaFe_{2-x}Ru_{x}As_{2} in a wide range of isovalent Ru substitution
ABSTRACT: The effects of isovalent Ru substitution at the Fe sites of BaFe2−xRuxAs2 are investigated by measuring resistivity (ρ) and Hall coefficient (RH) on high-quality single crystals in a wide range of doping (0 ≤ x ≤ 1.4). Ru substitution weakens the antiferromagnetic (AFM) order, inducing superconductivity for relatively high doping level of 0.4 ≤ x ≤ 0.9. Near the AFM phase boundary, the transport properties show non-Fermi-liquid-like behavior with a linear-temperature dependence of ρ and a strong temperature dependence of RH with a sign change. Upon higher doping, however, both ρ and RH recover conventional Fermi-liquid behavior. Strong doping dependence of RH together with a small magnetoresistance suggest that the anomalous transport properties can be explained in terms of anisotropic charge carrier scattering due to interband AFM fluctuations rather than a conventional multiband scenario.
Phys. Rev. B. 85(2).
#### Top co-authors
• ##### Claudia Felser (2)
Max-Planck-Institut für chemische Physik fester Stoffe
#### Institutions
• ###### Pohang University of Science and Technology
• Department of Physics
Andong, North Gyeongsang, South Korea | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8196915984153748, "perplexity": 5614.590747939471}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704666482/warc/CC-MAIN-20130516114426-00037-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://ncertmcq.com/rs-aggarwal-class-6-solutions-chapter-5-fractions-ex-5g/ | ## RS Aggarwal Class 6 Solutions Chapter 5 Fractions Ex 5G
These Solutions are part of RS Aggarwal Solutions Class 6. Here we have given RS Aggarwal Solutions Class 6 Chapter 5 Fractions Ex 5G.
Objective Questions :
Tick the correct answer in each of the following :
Question 1.
Solution:
(c) Canceling the common factor 2, we get $$\frac{3}{5}$$
Question 2.
Solution:
(c) Multiplying the numerator and denominator by 4, we get $$\frac{8}{12}$$
Question 3.
Solution:
Question 4.
Solution:
Question 5.
Solution:
Question 6.
Solution:
(c) each of the fractions has the same denominator.
Question 7.
Solution:
(d) None of these has a greater denominator than its numerator.
Question 8.
Solution:
(a) its denominator is greater than its numerator.
Question 9.
Solution:
(b) Their numerators are the same and 4 < 5, so $$\frac{3}{4} > \frac{3}{5}$$
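As a quick cross-check (not part of the book's solution), the same-numerator comparison follows from cross-multiplication:

```latex
\frac{3}{4} > \frac{3}{5}
\quad\text{because}\quad
3 \times 5 = 15 > 12 = 3 \times 4
```

In general, for positive fractions, $$\frac{a}{b} > \frac{c}{d}$$ exactly when $$ad > bc$$.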
Question 10.
Solution:
Question 11.
Solution:
(b) In $$\frac { 4 }{ 5 } ,\frac { 2 }{ 7 } ,\frac { 4 }{ 9 } ,\frac { 4 }{ 11 }$$, when the numerators are the same, the fraction with the smallest denominator is the greatest.
Question 12.
Solution:
(a) When the denominators are the same, the fraction with the smallest numerator is the smallest.
Question 13.
Solution:
Question 14.
Solution:
Question 15.
Solution:
Question 16.
Solution:
Question 17.
Solution:
Question 18.
Solution:
Question 19.
Solution:
Question 20.
Solution:
Hope the RS Aggarwal Solutions Class 6 Chapter 5 Fractions Ex 5G given here are helpful for completing your math homework.
If you have any doubts, please comment below. Learn Insta try to provide online math tutoring for you. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9462816119194031, "perplexity": 6369.126975236139}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305141.20/warc/CC-MAIN-20220127042833-20220127072833-00394.warc.gz"} |
http://math.stackexchange.com/questions/157999/what-are-the-chances-of-having-certain-cards-on-certain-turns-if-you-draw-one-ca?answertab=active | # What are the chances of having certain cards on certain turns if you draw one card each turn?
You have a sixty-card deck. It has:
• 3 Aa and 3 Ab cards (6 A cards)
• 4 B cards
• 4 C cards
• 4 Da and 2 Db cards (6 D cards)
• 14 Ea and 10 Eb cards (24 E cards)
At the begining of the game, you draw 7 cards. Then, at the begining of each turn, you draw one card.
What are the chances of having these cards by these turns?
1. One E card
2. Another E card, and an A card
3. Another E card, and a B card
4. Two C cards
5. A D card
It doesn't matter if you drew your D card at the beginning of the game or on turn 5 - as long as you have it on turn five, you satisfy the conditions. How would you calculate this?
What if you don't draw a card on your first turn?
-
Are you familiar with the rule of successive conditioning? If you can apply that, the rest is just a lot of tedious counting. – user17794 Jun 13 '12 at 22:53
@TimDuff: No, I'm not - care to explain? – Glycan Jun 13 '12 at 23:06
These questions should be pretty easily answered by standard counting arguments. For example, the answer to #1 is $$1 - \frac{\binom{60-24}8}{\binom{60}8} = \frac{176131}{178239} \approx 0.9882.$$ The numerator counts how many 8-card subsets of a 60-card deck avoid the 24 E cards; the denominator counts how many 8-card subsets the 60-card deck has in total. That ratio is the probability of failing to have an E card, hence the one-minus.
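The arithmetic above can be reproduced exactly in Python (a quick check, not part of the original answer; `math.comb` needs Python 3.8+):

```python
from fractions import Fraction
from math import comb

# probability that an 8-card opening (7 cards plus the turn-1 draw)
# avoids all 24 E cards in the 60-card deck
p_no_e = Fraction(comb(60 - 24, 8), comb(60, 8))

# probability of holding at least one E card on turn 1
p_at_least_one_e = 1 - p_no_e
print(p_at_least_one_e)         # 176131/178239
print(float(p_at_least_one_e))  # ≈ 0.9882
```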
I agree that it gets tedious to do the inclusion-exclusion or conditioning to answer #2, ..., #5 in turn. I went ahead and programmed a simulation: in two million trials, the chances of your favorable draws were as follows.
• #1: 98.8%
• #1 and #2: 59.5%
• #1-#3: 25.1%
• #1-#4: 2.4%
• #1-#5: 1.4%
The killer is the two C cards: just getting them in the first 11 cards, independent of anything else, is already just a 15% chance, and it goes down to 4.6% if we insist that the first 11 cards contain three Es and an A and a B in any order (much less by the right turns).
Of course these probabilities go down if you don't draw a card on the first turn: the chances of all five happening seem to be about 0.6% in that scenario.
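The simulation described above can be sketched roughly as follows (my reconstruction, not the answerer's actual code; the 16 cards the question leaves unspecified are treated as filler, and exact percentages will vary run to run):

```python
import random

# deck composition from the question: 6 A, 4 B, 4 C, 6 D, 24 E;
# the remaining 16 of the 60 cards are unspecified filler ("X")
DECK = ['A'] * 6 + ['B'] * 4 + ['C'] * 4 + ['D'] * 6 + ['E'] * 24 + ['X'] * 16

# requirement that must hold by each turn t, when 7 + t cards have been seen
REQS = [{'E': 1}, {'E': 2, 'A': 1}, {'E': 3, 'B': 1}, {'C': 2}, {'D': 1}]

def trial(rng):
    deck = DECK[:]
    rng.shuffle(deck)
    met = []
    for turn, req in enumerate(REQS, start=1):
        hand = deck[:7 + turn]  # opening 7 plus one draw per turn
        met.append(all(hand.count(c) >= k for c, k in req.items()))
    return met

rng = random.Random(0)
trials = 50_000
counts = [0] * 5
for _ in range(trials):
    ok = True
    for i, m in enumerate(trial(rng)):
        ok = ok and m          # cumulative: all earlier turns must also succeed
        if ok:
            counts[i] += 1

for i, c in enumerate(counts, start=1):
    print(f"#1-#{i}: {c / trials:.1%}")
```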
-
Yes, M:tG. It's a combo deck, if you're interested. – Glycan Jun 14 '12 at 1:26
The slightly weird notation is hypergeometrical distribution, yes? – Glycan Jun 14 '12 at 1:27
Also, not quite - if you generalize it to "having {these} cards at turn N", that's more likely than "having {these} cards at turn N, and having {some subset of {these}} at turn N - 1", and having...". – Glycan Jun 14 '12 at 1:29
Finally, the expression reads as "one minus the number of unordered ways to draw 8 cards from a pool of 60 - 24 cards divided by the number of unordered ways to draw 8 cards from a pool of sixty cards"? Correct me if I'm wrong, please. – Glycan Jun 14 '12 at 1:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7620936036109924, "perplexity": 1316.2561666286072}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430452189638.13/warc/CC-MAIN-20150501034949-00041-ip-10-235-10-82.ec2.internal.warc.gz"} |
http://www.smashcompany.com/technology/why-python-3-0-went-off-course | # Why Python 3.x went off course
(written by lawrence krubner, however indented passages are often quotes). You can contact lawrence at: lawrence@krubner.com, or follow me on Twitter.
A fascinating post from 2005, which is when Python began to veer off course. Python 2.x had some beautiful features that could have been further developed, but instead, with 3.0, Python went down the classic Object Oriented road. In this post, Guido van Rossum explicitly rejects much of the Functional paradigm that Python had picked up from Lisp.
So now reduce(). This is actually the one I’ve always hated most, because, apart from a few examples involving + or *, almost every time I see a reduce() call with a non-trivial function argument, I need to grab pen and paper to diagram what’s actually being fed into that function before I understand what the reduce() is supposed to do. So in my mind, the applicability of reduce() is pretty much limited to associative operators, and in all other cases it’s better to write out the accumulation loop explicitly.
I find that to be a fairly amazing statement, coming from a great programmer. His description of reduce is like the black-and-white negative of mine. I love reduce because it is like a loop, but it is cleaned up and better structured. Because reduce always takes an accumulator, it is less arbitrary than a loop. It’s also explicit that the goal of reduce is to return the accumulator, whereas a loop is a general purpose control tool which might be used to do something crazy, like produce side effects. I would guess that about 25% of all the bugs I have ever fixed have been because of the combination of mutable variables and loops, and that is because loops (as one encounters them in languages like Java or Ruby) are too open ended.
I’m having some trouble believing that Guido van Rossum is serious in his criticism of reduce — does he need to take out pen and paper whenever he enters a loop? Because most of the time reduce is easier to understand than a loop, exactly because reduce is more strictly structured. But if he means that he likes loops better because they are imperative, and therefore he can see what is happening, then I’m confused, because elsewhere he writes about some of the problems of standard imperative programming.
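To make the disagreement concrete, here is the same accumulation written both ways (a toy example, not from the original post):

```python
from functools import reduce

values = [1, 2, 3, 4]

# imperative version: a mutable accumulator and an open-ended loop
total = 0
for x in values:
    total += x

# reduce version: the accumulator is explicit in the function's
# signature, and returning it is the only thing the fold can do
total2 = reduce(lambda acc, x: acc + x, values, 0)

assert total == total2 == 10
```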
And if he thought a simple reduce statement was too complicated, why did he embrace Object Oriented Programming, where one typically does need to graph the relations between objects, because the graph of relations in a non-trivial Object Oriented software project is certainly beyond what a human can comprehend.
He also makes this suggestion:
Let’s add any() and all() to the standard builtins, defined as follows (but implemented more efficiently):
def any(S):
    for x in S:
        if x:
            return True
    return False

def all(S):
    for x in S:
        if not x:
            return False
    return True
One of the truly terrible things in this world are programming languages that allow multiple routes of return from a function. The “return” keyword is the worst keyword in existence. I will never again willingly work in a language that supports the “return” keyword. In languages that have “return” (Javascript, PHP, Java) I have seen functions that have over a dozen possible end points. But in languages that lack “return” (such as Clojure) there is only one way for the function to end. The above cases are not so terrible, but they still allow the function to either end from a loop or end after the loop is over — it’s amazing that so much bad programming can be packed into a function that’s only 5 lines long.
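For comparison, a single-exit formulation of the same predicates, in the spirit the author prefers, might look like this (my sketch, not code from the post):

```python
from functools import reduce

def any_single_exit(S):
    # fold with boolean "or": one expression, one way out
    # (note: unlike the builtins, this consumes the whole iterable)
    return reduce(lambda acc, x: acc or bool(x), S, False)

def all_single_exit(S):
    # fold with boolean "and": likewise a single exit point
    return reduce(lambda acc, x: acc and bool(x), S, True)

assert any_single_exit([0, 0, 3]) is True
assert any_single_exit([]) is False
assert all_single_exit([1, 2, 3]) is True
assert all_single_exit([1, 0, 3]) is False
```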
Source | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3591338098049164, "perplexity": 1319.6800303106927}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738723.55/warc/CC-MAIN-20200810235513-20200811025513-00439.warc.gz"} |
https://trac-hacks.org/ticket/7142 | Opened 7 years ago
Closed 7 years ago
# Remove Internal checkbox if ticket_policy is disabled
Reported by: josh@… | Owned by: Russ Tyndall
Priority: normal | Component: TimingAndEstimationPlugin
Severity: normal | Keywords: patch | Trac Release: 0.11
### Description
If the timingandestimationplugin.ticket_policy component is disabled (which I am assuming is what deals with the Internal? checkbox), then could that checkbox be automatically removed? Or at the very least, not automatically add itself back in to the trac.ini file?
### Attachments (1)
timingandestimation.diff (4.1 KB) - added by josh@… 7 years ago.
Patch
### comment:1 Changed 7 years ago by josh@…
Any chance of this happening? Or failing that, a point in the right direction so we can create a patch for it ourselves?
### comment:2 Changed 7 years ago by Russ Tyndall
Resolution: → fixed
Status: new → closed
I don't use this branch regularly (though I maintain and test it), and so have had no real pressing urge to do this. Is there a problem with simply setting its default to false and then hiding it with the permissions system from everyone?
UNDEBUGGED: something like this in trac.ini
[field settings]
fields = internal ...
internal.permissions = ALWAYS:hide
[ticket-custom]
internal.value = 0
If this somehow doesn't meet your need, please feel free to reopen this ticket.
If you actually want it gone, you will have to comment out the following lines in api.py, then remove the config for the field from your trac.ini (and then hope that I didn't miss any place in this list)
# ln:164 ticket_fields_need_upgrade (fragment of a larger expression)
self.config.get(ticket_custom, "internal") and \

if not self.config.get(ticket_custom, "internal"):
    self.config.set(ticket_custom, "internal", "checkbox")
    self.config.set(ticket_custom, "internal.value", "0")
    self.config.set(ticket_custom, "internal.label", "Internal?")
    self.config.set(ticket_custom, "internal.order", "5")

if "InternalTicketsPolicy" not in self.config.getlist("trac", "permission_policies"):
    perms = ["InternalTicketsPolicy"]
    other_policies = self.config.getlist("trac", "permission_policies")
    if "DefaultPermissionPolicy" not in other_policies:
        perms.append("DefaultPermissionPolicy")
    perms.extend(other_policies)
    self.config.set("trac", "permission_policies", ', '.join(perms))
I hope this meets your needs, Russ Tyndall
### comment:3 Changed 7 years ago by josh@…
Resolution: fixed (deleted)
Status: closed → reopened
I've added a patch for the things I changed to achieve this - it's a little neater than your solution. Basically, I moved all the stuff to do with 'Internal' out into ticket_policy.py, so that disabling that gets rid of the field.
I may have missed a couple of bits in api.py, but I'm sure those won't be hard to fix up if you decide to use the patch.
(Feel free to reclose, just making sure you get the notification about it).
### comment:4 Changed 7 years ago by Russ Tyndall
Keywords: patch added
Resolution: → fixed
Status: reopened → closed
Thanks for your patch, I will make note of it on the TimingAndEstimationPlugin page so it can be easily found by myself and others in the future.
The resolution will be deleted. Next status will be 'reopened'. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21925142407417297, "perplexity": 6405.416599705571}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174135.70/warc/CC-MAIN-20170219104614-00219-ip-10-171-10-108.ec2.internal.warc.gz"} |
http://mashable.com/2014/02/15/nuclear-fusion/ | # Breakthrough Experiment Offers Promise of Nuclear Fusion
Nuclear fusion as an energy source is one step closer to becoming a reality.
In a recent experiment, a team of researchers at the National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory used 192 lasers to pulverize a fuel target to create a fusion reaction. What makes this experiment remarkable is that it is reportedly the first time such an experiment has successfully produced more energy output than the amount of energy spent igniting the fuel.
Citing a report in the science journal Nature, The Guardian reports that the researchers used lasers in an attempt to pulverize a fuel target with two megajoules of energy, an impact roughly equivalent to two standard sticks of dynamite. However, most of the energy from the lasers gets absorbed by its surroundings.
In a report from CBC News, Blair Bromley, chair of the nuclear fusion division of the Canadian Nuclear Society, said that the recent experiment "would power 173 100-watt light bulbs for one second."
Of course, that small output is nowhere near the amount of energy necessary to replace our current energy options. Nevertheless, this small feat is being hailed worldwide as a major scientific breakthrough for an area of research that has long had its fair share of skeptics.
The ultimate goal of the experiment is to create a controlled fusion reaction in which the energy input is lower than the energy output.
Omar Hurricane, one of the authors of the report in Nature, told The Guardian, "We are finally, by harnessing these reactions, getting more energy out of that reaction than we put into the DT fuel." Although Hurricane also cautioned that it was "too early to say" whether or not the team's work will ultimately result in controlled fusion at the facility.
Some experts view fusion energy as the hope for the future, with zero carbon emissions, little waste and no meltdown dangers such as those plaguing a typical nuclear fission reactor, like the facility at Fukushima.
These findings and more were published in Nature on Wednesday.
Image: Ben Margot/Associated Press | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8033877015113831, "perplexity": 1843.5217615683805}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172017.60/warc/CC-MAIN-20170219104612-00220-ip-10-171-10-108.ec2.internal.warc.gz"} |
https://tex.stackexchange.com/questions/173802/remove-first-name-on-in-text-citation | # Remove first name on in-text citation
I am trying to sort out the citation and bibliography for a paper, and make it conform with the style-sheet from my university, which is a version of the Chicago style.
There are Chicago styles available for both natbib and biblatex, but I can't get any of them to do what I want: display only the last name in the in-text citation (unless there are multiple authors with the same last name), and the full name in my bibliography.
Here's a MWE:
\documentclass[a4paper,12pt]{report}
\usepackage{graphicx}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{times}
\usepackage[english]{babel}
\usepackage[
authordate,
backend=biber,
natbib,
maxbibnames=99,
]{biblatex-chicago}
\usepackage{csquotes}
\begin{filecontents}{mybib.bib}
@BOOK{orton,
title = {Survey of English Dialects},
publisher = {Leeds: Arnold},
year = {1962},
author = {Orton, Harold},
volume = {IV},
date-added = {2014-04-20 12:50:04 +0000},
date-modified = {2014-04-24 18:41:22 +0000}
}
\end{filecontents}
\begin{document}
This is a quote \citep{orton}. And then a second quote by the same person \citep{orton}.
\printbibliography
\end{document}
As you can see if you try to run this, the in-text citation provides the full name like this (Orton, Harold 1962), while I simply want (Orton 1962). The bibliography looks fine.
John | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2852420508861542, "perplexity": 4154.443287982319}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655883439.15/warc/CC-MAIN-20200703215640-20200704005640-00097.warc.gz"} |
https://www.qalaxia.com/questions/gx2x2x4-when-x1 | Sangeetha Pulapaka
1
Do you want to know the value of the function when x = -1?
If so, then,
$g(-1) = 2 \times (-1)^2 - 1 - 4 = 2 - 1 - 4 = -3$
So the value of g(x) is -3 when x = -1.
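As a quick check, the arithmetic above can be reproduced in Python. The form $g(x) = 2x^2 + x - 4$ is my assumption, inferred from the worked substitution:

```python
def g(x):
    # assumed form of the function, consistent with the arithmetic shown above
    return 2 * x**2 + x - 4

print(g(-1))  # -3
```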
Qalaxia QA Bot
0
I found an answer from math.stackexchange.com
Why is the domain of $f(x)=\frac{2x^2}{x^4+1}$ all real numbers? It ...
The domain of a function f is all values x such that f(x) is defined. In this context, "all values x" means "all real numbers x" and "is defined" means "is ...
For more information, see Why is the domain of $f(x)=\frac{2x^2}{x^4+1}$ all real numbers? It ...
Qalaxia Knowlege Bot
0
I found an answer from web.stanford.edu
4. Convex optimization problems
3 − 3x, p⋆. = −∞, local optimum at x = 1. Convex optimization problems. 4–3 ... the constraints fi(x) ≤ 0, hi(x)=0 are the explicit constraints.
For more information, see 4. Convex optimization problems | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7036522030830383, "perplexity": 1868.169795882141}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104244535.68/warc/CC-MAIN-20220703134535-20220703164535-00220.warc.gz"} |
https://math.stackexchange.com/questions/680996/prove-that-the-rational-function-f-leftx-right-fracp-leftx-rightq-left | # Prove that the Rational function $f\left(x\right)=\frac{p\left(x\right)}{q\left(x\right)}$ is uniformly continuous
I need some help with a calculus homework question. Here is said question:
Let there be two polynomials $q$ and $p$ such that $\deg(p)\leq\deg(q)+1$ and $q(x)\neq0$ for all $x\in\mathbb{R}$.
Show that the rational function $f:\mathbb{R}\rightarrow\mathbb{R}$ defined by $f\left(x\right)=\frac{p\left(x\right)}{q\left(x\right)}$ is uniformly continuous.
I'm guessing that they want me to prove that all Rational Functions are uniformly continuous but I have no clue as to how to do that (seeing as throughout my internet search I saw many people referencing it)
Any help is appreciated, Thx!
• Note $f$ is continuous and for large $x$, $f(x)\sim ax+b$. – David Mitra Feb 18 '14 at 16:43
In fact, such a function is Lipschitz continuous, but the degree assumption is critical here. This can be seen as follows: Let $n = \deg(p)$, $m = \deg (q)$. Since $q(x) \neq 0$, the function $f$ is differentiable for all $x \in \mathbb{R}$. Now, $$f'(x) = \frac{p'(x)q(x) - p(x)q'(x)}{q(x)^2}$$ is bounded by the degree assumption (the denominator is of degree $2m$ while the numerator is of degree $\max\{(n-1) + m, n + (m-1)\} = n + m - 1 \leq 2m$). Thus $f$ is Lipschitz and in particular uniformly continuous.
Some hints:
• As the degree of $p(x)$ is at most the degree of $q(x)$ plus one, we have that $p(x)/q(x)$ behaves like $ax+b$ asymptotically.
• Now consider the function $h(x)=\frac{p(x)}{q(x)}-(ax+b)$, and note that $h(x)$ tends to zero as $x$ tends to plus/minus $\infty$.
• A continuous function is uniformly continuous on compact sets.
• The sum of two uniformly continuous functions is uniformly continuous.
Do you see how to combine these statements into a proof? (Involves some case-checking).
• The hypotheses allow, e.g., that $p$ has degree $4$ and $q$ has degree $3$. So the quotient is at worst nearly a linear function for $x$ large. – David Mitra Feb 18 '14 at 16:50
• Thank you, I totally misread the question. I have edited the answer accordingly. I hope it is better now. – Henrik Feb 18 '14 at 17:07
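A sketch of how the hints above combine, with the case-checking filled in (my own addition, not from the original thread):

```latex
Write $f(x) = (ax+b) + h(x)$, where $h(x) \to 0$ as $x \to \pm\infty$.
Given $\varepsilon > 0$, choose $M$ so that $|h(x)| < \varepsilon/4$
whenever $|x| \ge M$; then $|h(x)-h(y)| < \varepsilon/2$ for any pair
with $|x|, |y| \ge M$. On the compact set $[-M-1, M+1]$ the continuous
function $h$ is uniformly continuous, giving a $\delta_1 \in (0,1)$
with $|h(x)-h(y)| < \varepsilon/2$ whenever $|x-y| < \delta_1$. Any
pair with $|x-y| < \delta_1 < 1$ either lies in $[-M-1, M+1]$ or
satisfies $|x|, |y| \ge M$, so $h$ is uniformly continuous on all of
$\mathbb{R}$. The linear part $ax+b$ is uniformly continuous, and the
sum of two uniformly continuous functions is uniformly continuous,
hence $f$ is uniformly continuous.
```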
https://nflstreamslive.com/answer-20 | # Problem solving calculator
This Problem solving calculator supplies step-by-step instructions for solving all math troubles. It can help you with your math work.
## The Best Problem solving calculator
We'll provide some tips to help you select the best Problem solving calculator for your needs. One of the most common tasks when implementing a neural network is to group the data. You could think of this as combining data segments into groups, or you can think of it as dividing the data into groups. A common way to group the data is to use a feature extraction algorithm like logistic regression. In these cases, the solver "group_by" will take an array or list of values and will divide them into groups based on those values. The grouping function is often accomplished by taking a decision tree or an SVM classifier and applying it to the dataset. Another common way to group data is by using a neural network with a "Solver By Group" operation. In this case, the solver divides up the training set into groups based on the output from one of your layers (for example, one layer of a multilayer perceptron). One benefit of grouping is that you can pre-process your data without affecting its classification performance. This allows you to take advantage of features that are specific to one group but which do not affect a different group's classification performance (e.g., extracting features specific to a new disease). An example would be comparing two sets of patient records: one set with symptoms that are known to correlate with cancer, and another set with symptoms that are known not to correlate with cancer. If we perform feature extraction on both sets, we
As the name suggests, a square calculator is used to calculate the area of a square. A square calculator is made up of four basic parts – a base, a top, a pair of sides, and an angle. The area of any four-sided figure can be calculated by using these four components in the correct order. For example, if you want to calculate the area of a square with side lengths $x$, $y$, $z$, and an angle $\theta$ (in degrees), then you simply add together the values of $x$, $y$, $z$, and $\theta$ in this order: \begin{align*}\frac{x}{y} + \frac{z}{\theta}\end{align*}. The above formula can also be expressed as follows: \begin{align*}\frac{1}{2} x + \frac{y}{2} y + \frac{z}{4} z = \frac{\theta}{4}\end{align*} To find the area of a cube with length $L$ and width $W$, first multiply $L$ by itself twice (to get $L^2$). Next, multiply each side by $W$. Lastly, divide the result by 2 to find the area. For example: \begin{align*}\left(L
There is no one-size-fits-all answer to this question, as the best way to solve word problems in algebra will vary depending on the individual problem. However, there are some general tips that can be followed in order to help make solving word problems in algebra easier. First, it is important to read the problem carefully and identify what is being asked. Next, it is often helpful to draw a picture or diagram of the problem in order to visualize what is happening. Finally,
The Laplace solver works by iteratively solving for an unknown function f which is dependent on both a and b. For simplicity, we will assume that the solution of this differential equation is known and simply output this value at each iteration. This method is simple and can often be computationally intensive when large systems are being solved. Since the solution of this differential equation depends on both 'a' and 'b', it is important to only solve once for values that are close to the final solution. If these values are close, then it will be difficult to accurately predict where the final solution will be due to numerical errors which could make the difference between converging or diverging.
There is no one-size-fits-all answer to this question, as the best way to solve word problems in algebra will vary depending on the individual problem. However, there are some general tips that can be followed in order to make solving word problems in algebra easier. First, it is important to read the problem thoroughly and identify the key information that is needed in order to solve the problem. Next, it is helpful to draw a picture or diagram of the problem, as this can
## Help with math
Nice app but one problem: the app says to try 7 days free but I don't want the trial, I want the app only. But I like this app and my math problems will be clear from this app. Great app, helps me with all my math skills, but you should make the solving and explaining part free.
Selena Wilson
Incredible, absolutely wonderful. It's amazing how far we've gone into technology. The only thing that I could see as an even better improvement is being able to solve word problems. Once that becomes a feature, then it will be the perfect math app
Malia Parker | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6246218085289001, "perplexity": 259.59023976617897}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500719.31/warc/CC-MAIN-20230208060523-20230208090523-00764.warc.gz"} |
https://eprints.soton.ac.uk/53799/ | The University of Southampton
University of Southampton Institutional Repository
# Laboratory measurements of vortex-induced vibrations of a vertical tension riser in a stepped current
Chaplin, J.R., Bearman, P.W., Huera Huarte, F.J. and Pattenden, R.J. (2005) Laboratory measurements of vortex-induced vibrations of a vertical tension riser in a stepped current. [in special issue: Fluid-Structure and Flow-Acoustic Interactions involving Bluff Bodies] Journal of Fluids and Structures, 21 (1), 3-24.
Record type: Article
## Abstract
This paper presents an initial analysis of measurements of the vortex-induced vibrations of a model vertical tension riser in a stepped current. The riser, 28mm in diameter, 13.12m long and with a mass ratio (mass/displaced mass) of 3.0, was tested in conditions in which the lower 45% of it was exposed to a uniform current at speeds up to 1m/s, while the upper part was in still water. Its response in in-line and transverse directions was inferred from measurements of bending strains at 32 points along its length. Transverse vibrations were observed at modes up to the 8th with individual modal amplitudes up to about 80% of the riser’s diameter. However, in most cases the response included significant contributions from several modes, all at the same frequency. Some evidence was found of lock-in.
Published date: November 2005
Organisations: Energy & Climate Change Group
## Identifiers
Local EPrints ID: 53799
URI: http://eprints.soton.ac.uk/id/eprint/53799
ISSN: 0889-9746
PURE UUID: d144f5bf-a8ce-4b3e-b3e7-7e8bae350892
## Catalogue record
Date deposited: 30 Jul 2008
## Contributors
Author: J.R. Chaplin
Author: P.W. Bearman
Author: F.J. Huera Huarte
Author: R.J. Pattenden | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9585955739021301, "perplexity": 10400.2458343583}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662572800.59/warc/CC-MAIN-20220524110236-20220524140236-00397.warc.gz"} |
https://www.narviacademy.in/Topics/Competitive%20Exams/Quantitative%20Aptitude/Quantitative%20Aptitude%20Notes/number-system-quantitative-aptitude-exercise-16.php | # Number System Aptitude Questions and Answers:
#### Overview:
Questions and Answers Type: MCQ (Multiple Choice Questions). Main Topic: Quantitative Aptitude. Quantitative Aptitude Sub-topic: Number System Aptitude Questions and Answers. Number of Questions: 10 Questions with Solutions.
1. The HCF and LCM of two numbers are 22 and 2400 respectively. If one number is 264 then find another number?
1. 100
2. 150
3. 200
4. 250
Solution: Given, HCF = 22
LCM = 2400
First number = 264
Let the second number be $$x$$; then $$Product \ of \ two \ numbers = HCF \times LCM$$ $$264 \times x = 22 \times 2400$$ $$x = \frac{22 \times 2400}{264}$$ $$x = \frac{52800}{264}$$ $$x = 200$$ Hence the second number is 200.
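The identity used in this solution (product of the two numbers = HCF × LCM) is easy to check in code — an illustrative Python sketch, not part of the original exercise:

```python
def other_number(hcf: int, lcm: int, first: int) -> int:
    # product of the two numbers = HCF * LCM,
    # so the second number is (HCF * LCM) / first
    return hcf * lcm // first

print(other_number(22, 2400, 264))  # 200
print(other_number(25, 1350, 125))  # 270 (the later HCF/LCM question)
```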
1. Find the smallest sum of rupees which contains Rs 1.25, Rs 15, Rs 2.50, and Rs 10?
1. 30
2. 35
3. 40
4. 45
Solution: Given, Rs 1.25, Rs 15, Rs 2.50, and Rs 10 then
LCM of 1.25, 15, 2.50, and 10
It can also be written as
$$(LCM \ of \ 125, 1500, 250, 1000) \times 0.01$$ $$= 3000 \times 0.01$$ $$= 30$$ Hence Rs 30 is the smallest sum of rupees.
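The scaling trick above (multiply by 100 to clear the decimals, take the LCM, then scale back down) can be sketched in Python 3.9+ — my illustration, not from the original page:

```python
from math import lcm

amounts = [1.25, 15, 2.50, 10]             # rupees
paise = [round(a * 100) for a in amounts]  # scale to integers: 125, 1500, 250, 1000
smallest_sum = lcm(*paise) / 100           # LCM of the integers, scaled back
print(smallest_sum)  # 30.0
```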
1. Three women start walking together the same way around a circular track of 15 km. If their speeds are 3, 4, and 5 km per hour respectively, then find how much time they will take to meet together again?
1. 10 hours
2. 12 hours
3. 15 hours
4. 18 hours
Answer: (c) 15 hours
Solution: Time taken by the women to complete one revolution of the circular way $$= \frac{15}{3}, \frac{15}{4}, \ and \ \frac{15}{5} \ hours$$ $$= \frac{5}{1}, \frac{15}{4}, \ and \ \frac{3}{1} \ hours$$ Taking LCM of $$\frac{5}{1}, \frac{15}{4}, \ and \ \frac{3}{1}$$ $$= \frac{LCM \ of \ 5, 15, 3}{HCF \ of \ 1, 4, 1}$$ $$= \frac{15}{1}$$ $$15 \ hours$$ Hence the women will meet together again after 15 hours.
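The same LCM-of-fractions computation (LCM of the numerators over the GCD of the denominators) can be done with Python 3.9+'s math module — a sketch of the steps above, my addition:

```python
from fractions import Fraction
from math import gcd, lcm

track, speeds = 15, (3, 4, 5)
lap_times = [Fraction(track, s) for s in speeds]   # 5/1, 15/4, 3/1 hours per lap
# LCM of fractions = LCM(numerators) / GCD(denominators)
meet = Fraction(lcm(*(t.numerator for t in lap_times)),
                gcd(*(t.denominator for t in lap_times)))
print(meet)  # 15
```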
1. The LCM and HCF of two numbers are 1350 and 25 respectively. If one number is 125 then find another number?
1. 200
2. 270
3. 290
4. 300
Solution: Given, LCM = 1350
HCF = 25
First number = 125
Let the second number be $$x$$; then $$Product \ of \ two \ numbers = HCF \times LCM$$ $$125 \times x = 25 \times 1350$$ $$x = \frac{25 \times 1350}{125}$$ $$x = \frac{33750}{125}$$ $$x = 270$$ Hence the second number is 270.
1. If HCF of 575 and 325 is 65 then find the LCM of the same numbers?
1. 2525
2. 2775
3. 2825
4. 2875
HCF = 65, then $$Product \ of \ two \ numbers = HCF \times LCM$$ $$LCM = \frac{Product \ of \ two \ numbers}{HCF}$$ $$LCM = \frac{575 \times 325}{65}$$ $$LCM = \frac{186,875}{65}$$ $$LCM = 2875$$ Hence the LCM is 2875.
http://mathhelpforum.com/statistics/172810-ml-estimator-coefficient-variation.html | # Math Help - ML estimator of the coefficient of variation
1. ## ML estimator of the coefficient of variation
Hi guyz...
I am reallllllly in need to solve this Q...
Suppose θ is the model parameter, but we are interested in some function of θ. The invariance property of an ML estimator says that the ML estimator of g(θ) is just
$\widehat{g(\theta)} = g(\hat{\theta}).$
Use this property to find the ML estimator of the coefficient of variation (σ/µ), where the data come from a Gaussian distribution with mean µ and variance σ².
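A numerical illustration of the invariance property (my addition — the homework itself only asks for the analytic form, which is $\hat\sigma/\hat\mu$ with the usual 1/n Gaussian ML variance):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=10.0, scale=2.0, size=100_000)

mu_hat = x.mean()                                # ML estimator of mu
sigma_hat = np.sqrt(((x - mu_hat) ** 2).mean())  # ML variance uses 1/n, not 1/(n-1)
cv_hat = sigma_hat / mu_hat                      # invariance: plug in the ML estimates
print(cv_hat)  # close to the true value 2/10 = 0.2
```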
http://devmaster.net/forums/topic/16100-help-exercise-3-ch-5-beginning-c-through-game-programming/page__p__84100 | # Help! - Exercise 3 - Ch. 5 - Beginning C++ Through Game Programming
2 replies to this topic
### #1 JPaulez
New Member
• Members
• 2 posts
Posted 05 March 2012 - 05:05 PM
Hello everyone
I've started to learn C++ and I'm reading this book (in the title).
I got stuck in this exercise.
I've done half of it but I can't clearly understand what it's asking me to do next
This is what the exercise says:
---
Using default arguments, write a function that asks the user for a number and returns that number.
The function should accept a string prompt from the calling code.
If the caller doesn’t supply a string for the prompt, the function should use a generic prompt. (until here I understand)
Next, using function overloading, write a function that achieves the same results.
---
How could I make an overloaded function to make the same thing?
I thought (perhaps wrongly) that overloaded functions were only created to handle different types of data (int, string, bool...)
This is what I've done:
----------------------------
#include <iostream>
#include <string>
using namespace std;
string userInput(string prompt = "Please enter a number: ");
int main()
{
string input1 = userInput();
cout << input1 << " - This is what you've entered 1st.\n\n";
return 0;
}
string userInput(string prompt)
{
cout << prompt;
string number;
cin >> number;
return number;
}
### #2 }:+()___ (Smile)
Member
• Members
• 169 posts
Posted 05 March 2012 - 07:27 PM
They mean that you can replace
string userInput(string prompt = "Please enter a number: ");
with something like
string userInput(string prompt);
string userInput()
{
    return userInput("Please enter a number: ");
}
Sorry my broken english!
### #3 JPaulez
New Member
• Members
• 2 posts
Posted 06 March 2012 - 08:05 AM
Ok.
Thanks for the help!
https://www.openstarts.units.it/handle/10077/4157 | Rendiconti dell'Istituto di matematica dell'Università di Trieste: an International Journal of Mathematics vol.35 (2003) : [12] Collection home page
CONTENTS
H.A. Al-Kharsani and R.A. Al-Khal
On the derivatives of a family of analytic functions
F. G. Arenas and M. A. Sánchez-Granero
Dimension, inverse limits and GF-spaces
G. Dattoli, S. Lorenzutta and C. Cesarano
From Hermite to Humbert polynomials
H. S. Kasana and D. Kumar
On polynomial approximation of entire functions with index-pair (p, q)
W. Desch and E. Fasanga
Stress boundary value problem in linear viscoelasticity
William Black, Jennifer Kimble, David Koop, Donald C. Solmon
Functions that are the directed x-ray of a planar convex body
Yuki Kurokawa and Hiroyuki Takamura
Blow-up for semilinear wave equations with a data of the critical decay having a small loss
http://mathhelpforum.com/algebra/45969-algebraii-word-problem-help.html | # Math Help - AlgebraII Word Problem-Help
1. ## AlgebraII Word Problem-Help
John was injured on the job and received a settlement of $1.5 million that he can live on for his retirement. John is going to pay off his debts (including his house) of $550,000. He is going to invest the balance in two accounts. One pays 7.5% and the other 4.5%. John will ONLY need $58,500 in order to live at his current level. a. Write a system of equations that will help you determine the amount to be invested in each account. b. How much should John invest in each account in order to earn the $58,500 he needs?
2. couple of assumptions here since they were not in the problem statement ...
1. interest is simple interest per annum
2. he needs $58,500 per year to support his present lifestyle

$1.5 million - $550,000 = $950,000
let x = amount of money invested at 7.5% interest
then (950,000 - x) = amount of money invested at 4.5% interest
interest earned = 58,500 = (x)(7.5%) + (950,000 - x)(4.5%)
don't forget to change the percentages to decimals ... then solve for x.
3. Originally Posted by ep78
John was injured on the job and received a settlement of $1.5 million that he can live on for his retirement. John is going to pay off his debts (including his house) of $550,000. He is going to invest the balance in two accounts. One pays 7.5% and the other 4.5%. John will ONLY need $58,500 in order to live at his current level. a. Write a system of equations that will help you determine the amount to be invested in each account. b. How much should John invest in each account in order to earn the $58,500 he needs?
Using a system of equations, it would look like this:
Let x = amount invested at 7.5%
Let y = amount invested at 4.5%
Let $950,000 = total investment
Let $58,500 = interest needed
_____________________________
$x+y=950000$
$.075x+.045y=58500$
_____________________________
Solve the system
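The system can be solved by substituting $y = 950000 - x$ into the interest equation — a quick Python check of the answer, my addition (the equation is scaled by 1000 to keep the arithmetic in integers):

```python
# x + y = 950_000            (total invested)
# 75x + 45y = 58_500_000     (0.075x + 0.045y = 58_500, scaled by 1000)
x = (58_500_000 - 45 * 950_000) // (75 - 45)  # substitution for y
y = 950_000 - x
print(x, y)  # 525000 425000
```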
https://hpmuseum.org/forum/thread-13906-post-122856.html | HP 67 programming?
10-30-2019, 12:43 AM
Post: #1
Trond Member Posts: 136 Joined: Sep 2017
HP 67 programming?
I have only really programmed HP 32SII, 42S, and 41C before (and those only a little). I recently downloaded the HP 67 app on my phone. I like the layout, and the red display (wish I had the real thing), and I think I understand the numerical system of the programming language. But how do I scroll the programs, delete lines, etc?
10-30-2019, 01:04 AM (This post was last modified: 10-30-2019 01:06 AM by toml_12953.)
Post: #2
toml_12953 Senior Member Posts: 1,781 Joined: Dec 2013
RE: HP 67 programming?
(10-30-2019 12:43 AM)Trond Wrote: I have only really programmed HP 32SII, 42S, and 41C before (and those only a little). I recently downloaded the HP 67 app on my phone. I like the layout, and the red display (wish I had the real thing), and I think I understand the numerical system of the programming language. But how do I scroll the programs, delete lines, etc?
To scroll the programs use SST and BST when in program mode. h-shifted CLx is for deleting a line.
Here's a quickie intro to the 67. It's far from complete but it should get you started.
Online HP-67 Manual
Tom L
Cui bono?
10-30-2019, 01:20 AM
Post: #3
Thomas Okken Senior Member Posts: 1,374 Joined: Feb 2014
RE: HP 67 programming?
And here's the full manual: https://archived.hpcalc.org/greendyk/hp67/
10-30-2019, 01:51 AM
Post: #4
Namir Senior Member Posts: 810 Joined: Dec 2013
RE: HP 67 programming?
I started programming with the HP-55 in 1975. It really had limited programming steps, but I pushed it to the limit. In 1977 I got an HP-67 and discovered 224 merged steps of programming, multilevel subroutines, indirect addressing, and of course the card reader!! The HP-67 was the first serious handheld programmable device I owned. The HP-41C that came out in 1979 was another revolution.
Namir
10-30-2019, 03:30 AM
Post: #5
Trond Member Posts: 136 Joined: Sep 2017
RE: HP 67 programming?
Many thanks for the links and info!
11-01-2019, 12:26 AM
Post: #6
Trond Member Posts: 136 Joined: Sep 2017
RE: HP 67 programming?
I am starting to like this thing. Too bad the real calculator is sooo expensive. I'm guessing they're much slower than the app, but would have been a neat thing to have nevertheless.
11-01-2019, 01:45 AM
Post: #7
Thomas Okken Senior Member Posts: 1,374 Joined: Feb 2014
RE: HP 67 programming?
They have become more expensive in recent years, but even at $300-400, you're paying a fraction of the original price. They were $450 in 1976; that's equivalent to $2000 today. N.B. If you do go shopping around for one, make sure to budget an additional $100 or so to get the card reader fixed; those almost invariably need to be repaired because the rubber on the pinch roller turns to goo over time. Sometimes you'll see one offered that has already had that repair done, but most haven't.
11-01-2019, 01:53 AM
Post: #8
Trond Member Posts: 136 Joined: Sep 2017
RE: HP 67 programming?
(11-01-2019 01:45 AM)Thomas Okken Wrote: They have become more expensive in recent years, but even at $300-400, you're paying a fraction of the original price. They were $450 in 1976; that's equivalent to $2000 today. N.B. If you do go shopping around for one, make sure to budget an additional $100 or so to get the card reader fixed; those almost invariably need to be repaired because the rubber on the pinch roller turns to goo over time. Sometimes you'll see one offered that has already had that repair done, but most haven't.
How are the keys compared to HP41C? Are the buttons often messed up with age? That's one thing that has bugged me with two HP 41C that have had the last few years, they both had some buttons that were either sticky or I had to push a few of them harder than the others to get a reaction.
11-01-2019, 03:06 AM
Post: #9
Thomas Okken Senior Member Posts: 1,374 Joined: Feb 2014
RE: HP 67 programming?
It varies, I suppose. I have a 67 that I bought fairly cheap ($110 IIRC, not counting the card reader repair, which I also had done) about 10 years ago; it's in good shape but some of the keys do feel a bit softer than the others, the h key in particular. It's clearly been used pretty heavily. Works fine though. At the other end of the spectrum, I have a 41CX that I bought recently and paid over $300 for, and if it weren't for some slight wear on the feet, I would assume it's never been used. It is spotless and the keyboard is perfect and wonderful. It was advertised as being in great shape, and with a money-back guarantee, so it didn't feel like much of a gamble.
11-03-2019, 11:23 PM (This post was last modified: 11-03-2019 11:24 PM by Helix.)
Post: #10
Helix Member Posts: 231 Joined: Dec 2013
RE: HP 67 programming?
I can't compare with the HP 41, but the keys of the two HP 67 I've found recently have small inconsistencies in their resistance. I don't know if this is due to heavy usage.
On these two calculators, when I received them, one row of keys was almost unresponsive: the top row for one, and the bottom row for the other. This was due to a plastic sheet that was worn and not perfectly aligned, so dust could go easily under the metallic strips.
A calculator with this defect can sometimes be purchased at a reasonable cost, and is not difficult to repair, if one is willing to damage the back label to open the case. The dust can be removed with a strip of paper, and the plastic sheet is easy to replace with a new one.
These two videos will be better than my explanations:
And I can confirm that the red display of the HP 67 is really attractive!
Jean-Charles
11-04-2019, 02:24 PM
Post: #11
DaveBr Junior Member Posts: 44 Joined: Jun 2018
RE: HP 67 programming?
I have a HP-67 and when I first got it, it had several keys in the bottom row that varied from "mushy" to requiring seemingly way too much force to register the input.
After careful disassembly, I sprayed the keyboard with DeOxit contact cleaner and then passed a strip of printer paper between the metal springs and key contacts.
After reassembly, the keys feel consistent and register perfectly. I’m amazed at the quality of materials, design and assembly of HP’s early calculators and I agree, the red LEDs are Wonderful.
Dave
RPN rules!
http://mathhelpforum.com/algebra/158776-supposedly-well-known-math-problem.html

# Thread: Supposedly "well-known" math problem
1. ## Supposedly "well-known" math problem
Hi
Could someone please show me how to do (ii) of the question? I got that e^(Sn) > Rn BUT I have no idea how to prove the Rn < 1+ Sn.
$R_n - S_n - 1 > 0$
See what happens when you expand $R_n - S_n - 1$ for n = 2
https://chem.libretexts.org/Textbook_Maps/Introductory_Chemistry_Textbook_Maps/Map%3A_The_Basics_of_GOB_Chemistry_(Ball_et_al.)/20%3A_Energy_Metabolism/20.5%3A_Stage_II_of_Carbohydrate_Catabolism

# 20.5: Stage II of Carbohydrate Catabolism
Skills to Develop
• Describe the function of glycolysis and identify its major products.
• Describe how the presence or absence of oxygen determines what happens to the pyruvate and the NADH that are produced in glycolysis.
• Determine the amount of ATP produced by the oxidation of glucose in the presence and absence of oxygen.
In stage II of catabolism, the metabolic pathway known as glycolysis converts glucose into two molecules of pyruvate (a three-carbon compound) with the corresponding production of adenosine triphosphate (ATP). The individual reactions in glycolysis were determined during the first part of the 20th century. It was the first metabolic pathway to be elucidated, in part because the participating enzymes are found in soluble form in the cell and are readily isolated and purified. The pathway is structured so that the product of one enzyme-catalyzed reaction becomes the substrate of the next. The transfer of intermediates from one enzyme to the next occurs by diffusion.
### Steps in Glycolysis
The 10 reactions of glycolysis, summarized in Figure $$\PageIndex{1}$$, can be divided into two phases. In the first 5 reactions—phase I—glucose is broken down into two molecules of glyceraldehyde 3-phosphate. In the last five reactions—phase II—each glyceraldehyde 3-phosphate is converted into pyruvate, and ATP is generated. Notice that all the intermediates in glycolysis are phosphorylated and contain either six or three carbon atoms.
Figure $$\PageIndex{1}$$: Glycolysis
When glucose enters a cell, it is immediately phosphorylated to form glucose 6-phosphate, in the first reaction of phase I. The phosphate donor in this reaction is ATP, and the enzyme—which requires magnesium ions for its activity—is hexokinase. In this reaction, ATP is being used rather than being synthesized. The presence of such a reaction in a catabolic pathway that is supposed to generate energy may surprise you. However, in addition to activating the glucose molecule, this initial reaction is essentially irreversible, an added benefit that keeps the overall process moving in the right direction. Furthermore, the addition of the negatively charged phosphate group prevents the intermediates formed in glycolysis from diffusing through the cell membrane, as neutral molecules such as glucose can do.
In the next reaction, phosphoglucose isomerase catalyzes the isomerization of glucose 6-phosphate to fructose 6-phosphate. This reaction is important because it creates a primary alcohol, which can be readily phosphorylated.
The subsequent phosphorylation of fructose 6-phosphate to form fructose 1,6-bisphosphate is catalyzed by phosphofructokinase, which requires magnesium ions for activity. ATP is again the phosphate donor.
When a molecule contains two phosphate groups on different carbon atoms, the convention is to use the prefix bis. When the two phosphate groups are bonded to each other on the same carbon atom (for example, adenosine diphosphate [ADP]), the prefix is di.
Fructose 1,6-bisphosphate is enzymatically cleaved by aldolase to form two triose phosphates: dihydroxyacetone phosphate and glyceraldehyde 3-phosphate.
Isomerization of dihydroxyacetone phosphate into a second molecule of glyceraldehyde 3-phosphate is the final step in phase I. The enzyme catalyzing this reaction is triose phosphate isomerase.
Comment: In steps 4 and 5, aldolase and triose phosphate isomerase effectively convert one molecule of fructose 1,6-bisphosphate into two molecules of glyceraldehyde 3-phosphate. Thus, phase I of glycolysis requires energy in the form of two molecules of ATP and releases none of the energy stored in glucose.
In the initial step of phase II, glyceraldehyde 3-phosphate is both oxidized and phosphorylated in a reaction catalyzed by glyceraldehyde-3-phosphate dehydrogenase, an enzyme that requires nicotinamide adenine dinucleotide (NAD+) as the oxidizing agent and inorganic phosphate as the phosphate donor. In the reaction, NAD+ is reduced to reduced nicotinamide adenine dinucleotide (NADH), and 1,3-bisphosphoglycerate (BPG) is formed.
Table $$\PageIndex{1}$$: Maximum Yield of ATP from the Complete Oxidation of 1 Mol of Glucose
| Reaction | Comments | Yield of ATP (moles) |
|---|---|---|
| glucose → glucose 6-phosphate | consumes 1 mol ATP | −1 |
| fructose 6-phosphate → fructose 1,6-bisphosphate | consumes 1 mol ATP | −1 |
| glyceraldehyde 3-phosphate → BPG | produces 2 mol of cytoplasmic NADH | |
| BPG → 3-phosphoglycerate | produces 2 mol ATP | +2 |
| phosphoenolpyruvate → pyruvate | produces 2 mol ATP | +2 |
| pyruvate → acetyl-CoA + CO2 | produces 2 mol NADH | |
| isocitrate → α-ketoglutarate + CO2 | produces 2 mol NADH | |
| α-ketoglutarate → succinyl-CoA + CO2 | produces 2 mol NADH | |
| succinyl-CoA → succinate | produces 2 mol GTP | +2 |
| succinate → fumarate | produces 2 mol FADH2 | |
| malate → oxaloacetate | produces 2 mol NADH | |
| 2 cytoplasmic NADH from glycolysis | yields 2–3 mol ATP per NADH (depending on tissue) | +4 to +6 |
| 2 NADH from the oxidation of pyruvate | yields 3 mol ATP per NADH | +6 |
| 2 FADH2 from the citric acid cycle | yields 2 mol ATP per FADH2 | +4 |
| 2 × 3 NADH from the citric acid cycle | yields 3 mol ATP per NADH | +18 |
| **Net yield of ATP** | | **+36 to +38** |
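The net figure in the last row of the table is simply the sum of the signed entries above it. This short Python check (added here as an illustration; it is not part of the textbook) confirms the arithmetic, carrying the tissue-dependent cytoplasmic-NADH row as a low/high pair:

```python
# Signed ATP contributions per mole of glucose, read off the yield table above.
# Each pair is (low estimate, high estimate); most rows have a single value.
contributions = [(-1, -1),   # glucose -> glucose 6-phosphate
                 (-1, -1),   # fructose 6-phosphate -> fructose 1,6-bisphosphate
                 (2, 2),     # BPG -> 3-phosphoglycerate
                 (2, 2),     # phosphoenolpyruvate -> pyruvate
                 (2, 2),     # succinyl-CoA -> succinate (GTP)
                 (4, 6),     # 2 cytoplasmic NADH (tissue-dependent)
                 (6, 6),     # 2 NADH from pyruvate oxidation
                 (4, 4),     # 2 FADH2 from the citric acid cycle
                 (18, 18)]   # 2 x 3 NADH from the citric acid cycle

low = sum(lo for lo, hi in contributions)
high = sum(hi for lo, hi in contributions)
print(low, high)  # 36 38
```

The totals reproduce the table's net yield of +36 to +38 mol ATP per mole of glucose.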
BPG has a high-energy phosphate bond (Table 20.1.1) joining a phosphate group to C1. This phosphate group is now transferred directly to a molecule of ADP, thus forming ATP and 3-phosphoglycerate. The enzyme that catalyzes the reaction is phosphoglycerate kinase, which, like all other kinases, requires magnesium ions to function. This is the first reaction to produce ATP in the pathway. Because the ATP is formed by a direct transfer of a phosphate group from a metabolite to ADP—that is, from one substrate to another—the process is referred to as substrate-level phosphorylation, to distinguish it from the oxidative phosphorylation discussed in Section 20.4.
In the next reaction, the phosphate group on 3-phosphoglycerate is transferred from the OH group of C3 to the OH group of C2, forming 2-phosphoglycerate in a reaction catalyzed by phosphoglyceromutase.
A dehydration reaction, catalyzed by enolase, forms phosphoenolpyruvate (PEP), another compound possessing a high-energy phosphate group.
The final step is irreversible and is the second reaction in which substrate-level phosphorylation occurs. The phosphate group of PEP is transferred to ADP, with one molecule of ATP being produced per molecule of PEP. The reaction is catalyzed by pyruvate kinase, which requires both magnesium and potassium ions to be active.
In phase II, two molecules of glyceraldehyde 3-phosphate are converted to two molecules of pyruvate, along with the production of four molecules of ATP and two molecules of NADH.
To Your Health: Diabetes
Although medical science has made significant progress against diabetes, it continues to be a major health threat. Some of the serious complications of diabetes are as follows:
• It is the leading cause of lower limb amputations in the United States.
• It is the leading cause of blindness in adults over age 20.
• It is the leading cause of kidney failure.
• It increases the risk of having a heart attack or stroke by two to four times.
Because a person with diabetes is unable to use glucose properly, excessive quantities accumulate in the blood and the urine. Other characteristic symptoms are constant hunger, weight loss, extreme thirst, and frequent urination because the kidneys excrete large amounts of water in an attempt to remove excess sugar from the blood.
There are two types of diabetes. In immune-mediated diabetes, insufficient amounts of insulin are produced. This type of diabetes develops early in life and is also known as Type 1 diabetes, as well as insulin-dependent or juvenile-onset diabetes. Symptoms are rapidly reversed by the administration of insulin, and Type 1 diabetics can lead active lives provided they receive insulin as needed. Because insulin is a protein that is readily digested in the small intestine, it cannot be taken orally and must be injected at least once a day.
In Type 1 diabetes, insulin-producing cells of the pancreas are destroyed by the body’s immune system. Researchers are still trying to find out why. Meanwhile, they have developed a simple blood test capable of predicting who will develop Type 1 diabetes several years before the disease becomes apparent. The blood test reveals the presence of antibodies that destroy the body’s insulin-producing cells.
Type 2 diabetes, also known as noninsulin-dependent or adult-onset diabetes, is by far the more common, representing about 95% of diagnosed diabetic cases. (This translates to about 16 million Americans.) Type 2 diabetics usually produce sufficient amounts of insulin, but either the insulin-producing cells in the pancreas do not release enough of it, or it is not used properly because of defective insulin receptors or a lack of insulin receptors on the target cells. In many of these people, the disease can be controlled with a combination of diet and exercise alone. For some people who are overweight, losing weight is sufficient to bring their blood sugar level into the normal range, after which medication is not required if they exercise regularly and eat wisely.
Those who require medication may use oral antidiabetic drugs that stimulate the islet cells to secrete insulin. First-generation antidiabetic drugs stimulated the release of insulin. Newer second-generation drugs, such as glyburide, do as well, but they also increase the sensitivity of cell receptors to insulin. Some individuals with Type 2 diabetes do not produce enough insulin and thus do not respond to these oral medications; they must use insulin. In both Type 1 and Type 2 diabetes, the blood sugar level must be carefully monitored and adjustments made in diet or medication to keep the level as normal as possible (70–120 mg/dL).
### Metabolism of Pyruvate
The presence or absence of oxygen determines the fates of the pyruvate and the NADH produced in glycolysis. When plenty of oxygen is available, pyruvate is completely oxidized to carbon dioxide, with the release of much greater amounts of ATP through the combined actions of the citric acid cycle, the electron transport chain, and oxidative phosphorylation. However, in the absence of oxygen (that is, under anaerobic conditions), the fate of pyruvate is different in different organisms. In vertebrates, pyruvate is converted to lactate, while other organisms, such as yeast, convert pyruvate to ethanol and carbon dioxide. These possible fates of pyruvate are summarized in Figure $$\PageIndex{2}$$. The conversion to lactate or ethanol under anaerobic conditions allows for the reoxidation of NADH to NAD+ in the absence of oxygen.
Figure $$\PageIndex{2}$$: Metabolic Fates of Pyruvate
### ATP Yield from Glycolysis
The net energy yield from anaerobic glucose metabolism can readily be calculated in moles of ATP. In the initial phosphorylation of glucose (step 1), 1 mol of ATP is expended, along with another in the phosphorylation of fructose 6-phosphate (step 3). In step 7, 2 mol of BPG (recall that 2 mol of 1,3-BPG are formed for each mole of glucose) are converted to 2 mol of 3-phosphoglycerate, and 2 mol of ATP are produced. In step 10, 2 mol of pyruvate and 2 mol of ATP are formed per mole of glucose.
For every mole of glucose degraded, 2 mol of ATP are initially consumed and 4 mol of ATP are ultimately produced. The net production of ATP is thus 2 mol for each mole of glucose converted to lactate or ethanol. If 7.4 kcal of energy is conserved per mole of ATP produced, and the total amount of energy that can theoretically be obtained from the complete oxidation of 1 mol of glucose is 670 kcal (as stated in the chapter introduction), the energy conserved in the anaerobic catabolism of glucose to two molecules of lactate (or ethanol) is as follows:
$\mathrm{\dfrac{2\times 7.4\: kcal}{670\: kcal}\times100=2.2\%}$
Thus anaerobic cells extract only a very small fraction of the total energy of the glucose molecule.
Contrast this result with the amount of energy obtained when glucose is completely oxidized to carbon dioxide and water through glycolysis, the citric acid cycle, the electron transport chain, and oxidative phosphorylation, as summarized in Table $$\PageIndex{1}$$. Note the indication in the table that a variable amount of ATP is synthesized, depending on the tissue, from the NADH formed in the cytoplasm during glycolysis. This is because NADH cannot pass through the inner mitochondrial membrane, where the enzymes for the electron transport chain are located. Instead, brain and muscle cells use a transport mechanism that passes electrons from the cytoplasmic NADH through the membrane to flavin adenine dinucleotide (FAD) molecules inside the mitochondria, forming reduced flavin adenine dinucleotide (FADH2), which then feeds the electrons into the electron transport chain. This route lowers the yield to 1.5–2 molecules of ATP per cytoplasmic NADH, rather than the usual 2.5–3 molecules. A more efficient transport system is found in liver, heart, and kidney cells, where the formation of one cytoplasmic NADH molecule results in the formation of one mitochondrial NADH molecule, which leads to the formation of 2.5–3 molecules of ATP.

The total amount of energy conserved in the aerobic catabolism of glucose in the liver is as follows:
$\mathrm{\dfrac{38\times7.4\: kcal}{670\: kcal}\times100=42\%}$
Conservation of 42% of the total energy released compares favorably with the efficiency of any machine. In comparison, automobiles are only about 20%–25% efficient in using the energy released by the combustion of gasoline.
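Both efficiency percentages worked out above reduce to one-line arithmetic. This small Python sketch (illustrative only; the energy values are the ones quoted in the text) reproduces them:

```python
KCAL_PER_ATP = 7.4     # kcal conserved per mole of ATP (value used in the text)
GLUCOSE_KCAL = 670.0   # kcal from complete oxidation of 1 mol of glucose

def efficiency_percent(mol_atp):
    """Percentage of glucose's total energy conserved as ATP."""
    return 100.0 * mol_atp * KCAL_PER_ATP / GLUCOSE_KCAL

anaerobic = efficiency_percent(2)    # glycolysis alone: net 2 mol ATP
aerobic = efficiency_percent(38)     # aerobic upper bound from the ATP-yield table

print(round(anaerobic, 1))  # 2.2
print(round(aerobic))       # 42
```

The same function can be reused for the lower aerobic bound of 36 mol ATP, which gives roughly 40%.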
As indicated earlier, the 58% of released energy that is not conserved enters the surroundings (that is, the cell) as heat that helps to maintain body temperature. If we are exercising strenuously and our metabolism speeds up to provide the energy needed for muscle contraction, more heat is produced. We begin to perspire to dissipate some of that heat. As the perspiration evaporates, the excess heat is carried away from the body by the departing water vapor.
### Summary
• The monosaccharide glucose is broken down through a series of enzyme-catalyzed reactions known as glycolysis.
• For each molecule of glucose that is broken down, two molecules of pyruvate, two molecules of ATP, and two molecules of NADH are produced.
• In the absence of oxygen, pyruvate is converted to lactate, and NADH is reoxidized to NAD+. In the presence of oxygen, pyruvate is converted to acetyl-CoA and then enters the citric acid cycle.
• More ATP can be formed from the breakdown of glucose when oxygen is present.
### Concept Review Exercises
1. In glycolysis, how many molecules of pyruvate are produced from one molecule of glucose?
2. In vertebrates, what happens to pyruvate when
1. plenty of oxygen is available?
2. oxygen supplies are limited?
3. In anaerobic glycolysis, how many molecules of ATP are produced from one molecule of glucose?
1. two
2.
1. Pyruvate is completely oxidized to carbon dioxide.
2. Pyruvate is reduced to lactate, allowing for the reoxidation of NADH to NAD+.
3. There is a net production of two molecules of ATP.
### Exercises
1. Replace each question mark with the correct compound.
1. $$\mathrm{fructose\: 1,6\textrm{-bisphosphate} \xrightarrow{aldolase}\, ?\, +\, ?}$$
2. $$\mathrm{? + ADP \xrightarrow{pyruvate\: kinase} pyruvate + ATP}$$
3. $$\mathrm{dihydroxyacetone\: phosphate \xrightarrow{?} glyceraldehyde\: 3\textrm{-phosphate}}$$
4. $$\mathrm{glucose + ATP \xrightarrow{hexokinase} \, ? + ADP}$$
2. Replace each question mark with the correct compound.
1. $$\mathrm{fructose\: 6\textrm{-phosphate} + ATP \xrightarrow{?} fructose\: 1,6\textrm{-bisphosphate} + ADP}$$
2. $$\mathrm{? \xrightarrow{phosphoglucose\: isomerase} fructose\: 6\textrm{-phosphate}}$$
3. $$\mathrm{glyceraldehyde\: 3\textrm{-phosphate} + NAD^+ + P_i \xrightarrow{?} 1,3\textrm{-bisphosphoglycerate} + NADH}$$
4. $$\mathrm{3\textrm{-phosphoglycerate} \xrightarrow{phosphoglyceromutase} \, ?}$$
3. From the reactions in Exercises 1 and 2, select the equation(s) by number and letter in which each type of reaction occurs.
1. hydrolysis of a high-energy phosphate compound
2. synthesis of ATP
4. From the reactions in Exercises 1 and 2, select the equation(s) by number and letter in which each type of reaction occurs.
1. isomerization
2. oxidation
5. What coenzyme is needed as an oxidizing agent in glycolysis?
6. Calculate
1. the total number of molecules of ATP produced for each molecule of glucose converted to pyruvate in glycolysis.
2. the number of molecules of ATP hydrolyzed in phase I of glycolysis.
3. the net ATP production from glycolysis alone.
7. How is the NADH produced in glycolysis reoxidized when oxygen supplies are limited in
1. muscle cells?
2. yeast?
8.
1. Calculate the number of moles of ATP produced by the aerobic oxidation of 1 mol of glucose in a liver cell.
2. Of the total calculated in Exercise 9a, determine the number of moles of ATP produced in each process.
1. glycolysis alone
2. the citric acid cycle
3. the electron transport chain and oxidative phosphorylation
1.
1. glyceraldehyde 3-phosphate + dihydroxyacetone phosphate
2. phosphoenolpyruvate
3. triose phosphate isomerase
4. glucose 6-phosphate
1.
1. reactions 1b, 1d, and 2a
2. reaction 1b
https://www.gradesaver.com/textbooks/math/algebra/algebra-1/chapter-10-radical-expressions-and-equations-10-2-simplifying-radicals-practice-and-problem-solving-exercises-page-610/12

## Algebra 1
$8 \sqrt {2}$
In order to see if a radical is in simplified form, see if any of its factors are perfect squares (meaning that their square root will be an integer). We see that $\sqrt{128}$ has factors of 64 and 2. 64 is a perfect square, so we know that we can simplify: $\sqrt {128} = \sqrt {64} \times \sqrt {2} = 8 \sqrt {2}$
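SymPy (used here purely as a cross-check; it is not part of the textbook solution) performs the same perfect-square extraction automatically:

```python
import sympy as sp

# sqrt(128) = sqrt(64 * 2) = 8*sqrt(2); SymPy pulls out the perfect square 64
simplified = sp.sqrt(128)
assert simplified == 8 * sp.sqrt(2)
```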
http://e-cigarette-research.info/doku.php/research:documents:7vkxh2g3?do=edit

# E-Cigarette Research
### Influential parameters on particle concentration and size distribution in the mainstream of e-cigarettes
Journal Article 1)
Electronic cigarette-generated mainstream aerosols were characterized in terms of particle number concentrations and size distributions through a Condensation Particle Counter and a Fast Mobility Particle Sizer spectrometer, respectively. A thermodilution system was also used to properly sample and dilute the mainstream aerosol.
Different types of electronic cigarettes, liquid flavors, liquid nicotine contents, as well as different puffing times were tested. Conventional tobacco cigarettes were also investigated.
The total particle number concentration peak (for a 2-s puff), averaged across the different electronic cigarette types and liquids, was measured equal to 4.39 ± 0.42 × 10⁹ part. cm⁻³, thus comparable to the conventional cigarette one (3.14 ± 0.61 × 10⁹ part. cm⁻³). Puffing times and nicotine contents were found to influence the particle concentration, whereas no significant differences were recognized in terms of flavors and types of cigarettes used.
Particle number distribution modes of the electronic cigarette-generated aerosol were in the 120–165 nm range, thus similar to the conventional cigarette one.
z-ref: 7vkxh2g3
1)
Fuoco, et al. (2014), Influential parameters on particle concentration and size distribution in the mainstream of e-cigarettes, http://www.sciencedirect.com/science/article/pii/S0269749113005307 accessed: 2014-03-21
https://web2.0calc.com/questions/find-the-largest-value-of-x-where-the-plots-of-intersect_1
# Find the largest value of x where the plots of intersect.
Find the largest value of x where the plots of $$f(x) = - \frac{2x+5}{x+3} \text{ and } g(x) = 4\cdot \frac{x+1}{x-4}$$
intersect.
Mar 9, 2018
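The thread shows no posted solution. As a hedged sketch: cross-multiplying the two rational functions gives $6x^2 + 13x - 8 = 0$, with roots $-8/3$ and $1/2$, so the largest intersection is at $x = 1/2$. A SymPy check:

```python
import sympy as sp

x = sp.symbols('x')
f = -(2*x + 5) / (x + 3)
g = 4 * (x + 1) / (x - 4)

# Clearing denominators yields 6x^2 + 13x - 8 = 0
roots = sp.solve(sp.Eq(f, g), x)
largest = max(roots)
assert largest == sp.Rational(1, 2)
```

Both candidate roots lie away from the poles at $x = -3$ and $x = 4$, so both are genuine intersection points.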
https://encyclopediaofmath.org/index.php?title=Eichler_cohomology&diff=49899&oldid=12564

# Eichler cohomology
In [a2], M. Eichler conceived the "Eichler cohomology theory" (but not the designation) while studying "generalized Abelian integrals" (now called "Eichler integrals"; see below).
The setting for this theory is that of automorphic forms, with multiplier system, on a discrete group $\Gamma$ of fractional-linear transformations (equivalently, of $( 2 \times 2 )$-matrices; cf. also Automorphic form; Fractional-linear mapping). One may assume that $\Gamma$ consists of real fractional-linear transformations, that is, that $\Gamma$ fixes $\mathcal{H}$, the upper half-plane. A fundamental region, $\mathcal{R}$, of $\Gamma$ is required to have finite hyperbolic area; this is equivalent to the two conditions (taken jointly):
i) $\Gamma$ is finitely generated;
ii) each real point $q$ of $\overline { \mathcal{R} }$ is a parabolic point (a cusp) of $\Gamma$, that is, it is fixed by a cyclic subgroup $\Gamma _ { q }$ of $\Gamma$ with parabolic generator.
Let $k \in \mathbf{Z}$ and let $\mathbf{v}$ be a multiplier system in weight $k$ with respect to $\Gamma$. Since $k$ is integral, this means simply that $|\mathbf{v} ( M ) | = 1$ for all $M \in \Gamma$, and $\mathbf{v}$ is multiplicative:
$$\tag{a1} \mathbf{v} ( M _ { 1 } M _ { 2 } ) = \mathbf{v} ( M _ { 1 } ) \mathbf{v} ( M _ { 2 } ) , \quad M _ { 1 } , M _ { 2 } \in \Gamma.$$
For $M \in \Gamma$ and $\varphi$ a function on $\mathcal{H}$, define the slash operator
$$\tag{a2} ( \varphi | _ { k } ^ { \mathbf{v} } M ) ( z ) = {\bf v} ( M ) ( cz + d ) ^ { - k } \varphi ( M z ).$$
In this notation, the characteristic transformation law satisfied by an automorphic form $f$ on $\Gamma$ of weight $k$ and multiplier system $\mathbf{v}$ can be written
$$\tag{a3} f | _ { k } ^ { \mathbf{v} } M = f , \forall M \in \Gamma.$$
Let $\{ \Gamma , k , \mathbf{v} \}$ denote the vector space of automorphic forms on $\Gamma$ of weight $k$ and multiplier system $\mathbf{v}$, the collection of $f$ satisfying (a3) and that are holomorphic on $\mathcal{H}$ and meromorphic at each parabolic cusp of $\overline { \mathcal{R} }$ (in the usual local variable, cf. also Analytic function; Meromorphic function). One says that $f \in \{ \Gamma , k , \mathbf{v} \}$ is an entire automorphic form if $f$ is holomorphic at each parabolic cusp. An entire automorphic form $f$ is called a cusp form if $f$ vanishes at each parabolic cusp. As usual, $C ^ { + } ( \Gamma , k , \mathbf{v} )$ denotes the space of entire automorphic forms and $C ^ { 0 } ( \Gamma , k , \mathbf{v} )$ denotes the subspace of cusp forms. For groups $\Gamma$ of the kind considered here, a suitable version of the Riemann–Roch theorem shows that $C ^ { + } ( \Gamma , k , \mathbf{v} )$ has finite dimension over $\mathbf{C}$.
To describe the genesis of Eichler cohomology it is helpful to introduce the Bol identity [a1]:
$$\tag{a4} D ^ { k + 1 } \{ ( c z + d ) ^ { k } F ( M z ) \} = ( c z + d ) ^ { - k - 2 } F ^ { ( k + 1 ) } ( M z ),$$
where $k \in \mathbf{Z}$, $M = \left( \begin{array} { c c } { * } & { * } \\ { c } & { d } \end{array} \right)$ is any fractional-linear transformation of determinant $1$, and $F$ is a function with $k + 1$ derivatives ((a4) is easily derived from the Cauchy integral formula, cf. Cauchy integral theorem, or proved by induction on $k$). As a consequence of (a4), if $F \in \{ \Gamma , - k , \mathbf{v} \}$, then $F ^ { ( k + 1 ) } \in \{ \Gamma , k + 2 , \mathbf{v} \}$.
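The identity (a4) can be checked symbolically. The following SymPy sketch (an illustration added here, not part of the original article) verifies it for $k = 2$, a concrete determinant-one matrix, and a polynomial test function $F$:

```python
import sympy as sp

z = sp.symbols('z')
k = 2
a, b, c, d = 2, 3, 1, 2              # integer matrix with ad - bc = 1
Mz = (a*z + b) / (c*z + d)           # the fractional-linear action M z
F = z**6 + 2*z                       # arbitrary polynomial test function

# Bol's identity: D^{k+1}[(cz+d)^k F(Mz)] = (cz+d)^{-k-2} F^{(k+1)}(Mz)
lhs = sp.diff((c*z + d)**k * F.subs(z, Mz), z, k + 1)
rhs = (c*z + d)**(-k - 2) * sp.diff(F, z, k + 1).subs(z, Mz)
assert sp.simplify(lhs - rhs) == 0
```

Any sufficiently differentiable $F$ and any determinant-one matrix would do; the polynomial choice just keeps the symbolic simplification fast.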
There is a second consequence of (a4), more directly relevant to the case under consideration: if $f \in \{ \Gamma , k + 2 , \mathbf{v} \}$ and $F ^ { ( k + 1 ) } = f$ (for example, $F ( z ) = ( 1 / k ! ) \int _ { i } ^ { z } f ( \tau ) ( z - \tau ) ^ { k } d \tau$), then $F$ satisfies
$$\tag{a5} F | _ { - k } ^ { \mathbf{v} } M = F + p _ { M } , \forall M \in \Gamma,$$
where $p_{M}$ is a polynomial in $z$ of degree at most $k$. $F$ is called an Eichler integral of weight $- { k }$ and multiplier system $\mathbf{v}$, with respect to $\Gamma$, and with period polynomials $p_{M}$, $M \in \Gamma$. Eichler integrals generalize the classical Abelian integrals (cf. Abelian integral), which occur as the case $k = 0$, $\mathbf{v} \equiv 1$. As an immediate consequence of (a5), $\{ p _ { M } : M \in \Gamma \}$ satisfies the cocycle condition
$$\tag{a6} p _ { M _ { 1 } M _ { 2 } } = p _ { M _ { 1 } } | _ { - k } ^ { \mathbf{v} } M _ { 2 } + p _ { M _ { 2 } } , \quad M _ { 1 } , M _ { 2 } \in \Gamma.$$
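This cocycle relation follows in one line from (a5), using that $\varphi \mapsto \varphi | _ { - k } ^ { \mathbf{v} } M$ defines a right action of $\Gamma$ (a step the article leaves implicit; the derivation below is a sketch added for completeness):

```latex
F \,|_{-k}^{\mathbf{v}}\, ( M_1 M_2 )
  = \bigl( F \,|_{-k}^{\mathbf{v}}\, M_1 \bigr) \,|_{-k}^{\mathbf{v}}\, M_2
  = \bigl( F + p_{M_1} \bigr) \,|_{-k}^{\mathbf{v}}\, M_2
  = F + p_{M_2} + p_{M_1} \,|_{-k}^{\mathbf{v}}\, M_2 ,
```

so the period polynomial of $M_1 M_2$ is $p _ { M _ { 1 } } | _ { - k } ^ { \mathbf{v} } M _ { 2 } + p _ { M _ { 2 } }$.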
Consider the cocycle condition for $\{ p _ { M } : M \in \Gamma \}$ in the space $P ( k )$ of polynomials of degree at most $k$. A collection of polynomials $\{ p _ { M } \in P ( k ) : M \in \Gamma \}$ satisfying (a6) is called a cocycle in $P ( k )$. A coboundary in $P ( k )$ is a collection $\{ p _ { M } \in P ( k ) : M \in \Gamma \}$ such that
$$\tag{a7} p _ { M } = p | _ { - k } ^ { \mathbf{v} } M - p , M \in \Gamma ,$$
with a fixed polynomial $p \in P ( k )$. Note that $\{ p _ M\}$ defined by (a7) satisfies (a6). The Eichler cohomology group $H ^ { 1 } = H ^ { 1 } ( \Gamma , k , \mathbf{v} ; P ( k ) )$ is now defined to be the quotient space: cocycles in $P ( k )$ modulo coboundaries in $P ( k )$.
To state Eichler's cohomology theorem of [a2] one must introduce the notion of a "parabolic cocycle". Let $q _ { 1 } , \dots , q _ { t }$ be the (necessarily finite) set of inequivalent parabolic cusps in $\overline { \mathcal{R} }$. For $1 \leq h \leq t$, let $\Gamma _ { h }$ be the stabilizer of $q_h$ in $\Gamma$ with parabolic generator $Q _ { h }$ (cf. also Stabilizer). One says that the cocycle $\{ p _ { M } \in P ( k ) : M \in \Gamma \}$ is parabolic if the following holds: For each $h$, $1 \leq h \leq t$, there exists a $p _ { h } \in P ( k )$ such that $p _ { Q _ { h } } = p _ { h } | _ { - k } ^ { \mathbf{v} } Q _ { h } - p _ { h }$.
Coboundaries are of course parabolic cocycles, so one may form the quotient group: parabolic cocycles in $P ( k )$ modulo coboundaries in $P ( k )$. This is a subgroup of $H ^ { 1 } ( \Gamma , k , \mathbf{v} ; P ( k ) )$, called the parabolic Eichler cohomology group and denoted by $\tilde { H } ^ { 1 } = \tilde { H } ^ { 1 } ( \Gamma , k , {\bf v} ; P ( k ) )$.
Eichler's theorem [a2], p. 283, states: The vector spaces $C ^ { 0 } ( \Gamma , k + 2 , \overline{\mathbf{v}} ) \oplus C ^ { 0 } ( \Gamma , k + 2 , \mathbf{v} )$ and $\widetilde { H } ^ { 1 } ( \Gamma , k , \mathbf v ; P ( k ) )$ are isomorphic under a canonical mapping.
The discussion above, leading to (a6), shows how to associate a unique element $\beta ( f )$ of $\widetilde { H } ^ { 1 }$ to $f \in C ^ { 0 } ( \Gamma , k + 2 , \mathbf{v} )$, by forming a $( k + 1 )$-fold anti-derivative of $f$. The key to the proof of Eichler's theorem lies in the construction of a suitable mapping $\alpha ( g )$ from $g \in C ^ { 0 } ( \Gamma , k + 2 , \overline{\mathbf{v}} )$ to $\widetilde { H } ^ { 1 }$. Eichler accomplishes this by attaching to $g$ an element $\hat{g}$ of $\{ \Gamma , k + 2 , \mathbf{v} \}$ with poles in $\overline { \mathcal{R} }$, and then passing to the cocycle of period polynomials of a $( k + 1 )$-fold anti-derivative of $\hat{g}$. The mapping $\mu$ from $C ^ { 0 } ( \Gamma , k + 2 , \overline{\mathbf{v}} ) \oplus C ^ { 0 } ( \Gamma , k + 2 , \mathbf{v} )$ to $\widetilde { H } ^ { 1 }$ is then defined by means of $\mu ( g , f ) = \alpha ( g ) + \beta ( f )$. The proof that $\mu$ is one-to-one follows from Eichler's generalization of the Riemann period relation for Abelian integrals to the setting of Eichler integrals.
The proof can be completed by showing that $\operatorname { dim } \tilde { H } ^ { 1 } = \operatorname { dim } C ^ { 0 } ( \Gamma , k + 2 , \overline{\mathbf{v}} ) + \operatorname { dim } C ^ { 0 } ( \Gamma , k + 2 ,\mathbf{v} )$. The essence of Eichler's theorem is that every parabolic cocycle can be realized as the system of period polynomials of some unique Eichler integral of weight $- { k }$ and multiplier system $\mathbf{v}$, with respect to $\Gamma$.
R.C. Gunning [a3] has proved a related result, from which Eichler's theorem follows as a corollary: The vector spaces $C ^ { 0 } ( \Gamma , k + 2 , \overline{\mathbf{v}} ) \oplus C ^ { + } ( \Gamma , k + 2 , \mathbf{v} )$ and $H ^ { 1 } ( \Gamma , k , \mathbf{v} ; P ( k ) )$ are isomorphic under the mapping of Eichler's theorem.
Proving Gunning's theorem first and then deriving Eichler's theorem from it has the advantage that the calculation of $\operatorname{dim} \, H ^ { 1 }$ is substantially easier than that of $\operatorname{dim} \tilde { H } ^ { 1 }$; this is because in $H ^ { 1 }$ there is no restriction on the elements of $P ( k )$ associated to the parabolic generators $Q _ { h }$, $1 \leq h \leq t - 1$.
There are various proofs of Gunning's theorem and its corollary, in addition to those in [a2], [a3]. See, for example, [a4], [a11], [a14]. (G. Shimura [a14] has refined Eichler's theorem by working over the real rather than the complex field.) In [a6], Chap. 5, [a7], [a8], and [a13], analogous results are proved for the more general situation in which $\Gamma$ is a finitely generated Kleinian group. I. Kra has made further contributions to this case ([a9], [a10]).
The literature contains several results describing the cohomology groups $H ^ { 1 }$ and $\widetilde { H } ^ { 1 }$ that arise when the space of polynomials $P ( k )$ is replaced by a larger space of analytic functions [a3], Thm. 3, [a5], Thms. 1, 2, [a7], Thm. 5. Gunning [a3], Thms. 4, 5, discusses $H ^ { 0 }$ and $H ^ { p }$, for $p > 1$, as well as $H ^ { 1 }$. For an overview see [a5].
#### References
[a1] G. Bol, "Invarianten linearer Differentialgleichungen", Abh. Math. Sem. Univ. Hamburg, 16:3–4 (1949) pp. 1–28
[a2] M. Eichler, "Eine Verallgemeinerung der Abelschen Integrale", Math. Z., 67 (1957) pp. 267–298
[a3] R.C. Gunning, "The Eichler cohomology groups and automorphic forms", Trans. Amer. Math. Soc., 100 (1961) pp. 44–62
[a4] S.Y. Husseini, M.I. Knopp, "Eichler cohomology and automorphic forms", Illinois J. Math., 15 (1971) pp. 565–577
[a5] M.I. Knopp, "Some new results on the Eichler cohomology of automorphic forms", Bull. Amer. Math. Soc., 80 (1974) pp. 607–632
[a6] I. Kra, "Automorphic forms and Kleinian groups", Benjamin (1972)
[a7] I. Kra, "On cohomology of Kleinian groups", Ann. of Math., 89:2 (1969) pp. 533–556
[a8] I. Kra, "On cohomology of Kleinian groups, II", Ann. of Math., 90:2 (1969) pp. 576–590
[a9] I. Kra, "On cohomology of Kleinian groups, III", Acta Math., 127 (1971) pp. 23–40
[a10] I. Kra, "On cohomology of Kleinian groups, IV", J. d'Anal. Math., 43 (1983–84) pp. 51–87
[a11] J. Lehner, "Automorphic integrals with preassigned period polynomials and the Eichler cohomology", in A.O.L. Atkin, B.J. Birch (eds.), Computers in Number Theory, Proc. Sci. Research Council Atlas Symp. No. 2, Acad. Press (1971) pp. 49–56
[a12] J. Lehner, "Cohomology of vector-valued automorphic forms", Math. Ann., 204 (1973) pp. 155–176
[a13] J. Lehner, "The Eichler cohomology of a Kleinian group", Math. Ann., 192 (1971) pp. 125–143
[a14] G. Shimura, "Sur les intégrales attachées aux formes automorphes", J. Math. Soc. Japan, 11 (1959) pp. 291–311
How to Cite This Entry:
Eichler cohomology. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Eichler_cohomology&oldid=12564
This article was adapted from an original article by M.I. Knopp (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
https://www.physicsforums.com/threads/inclines-in-tour-de-france.176454/ | Inclines in Tour de France
1. Jul 9, 2007
TSN79
I'm watching Tour de France these days, and I hear people talking about mountain stages etc, and they often say that this and that road has an incline of let's say 10 %...what does that mean? How steep is that?!
2. Jul 9, 2007
bel
I think they refer to the gradient, so, say, a slope of $$\frac{3}{4}$$ is 75% incline.
3. Jul 9, 2007
Kurdt
Staff Emeritus
A 10% incline would mean you increase in height 10cm for every meter you travel.
4. Jul 9, 2007
brewnog
Both correct; 100% is a 'one in one' incline, or 45 degrees. (1:8 is 12.5% etc).
5. Jul 9, 2007
TSN79
Great, thx guys!
6. Jul 9, 2007
humanino
Have you ever seen a 100% incline? It is really inclined
I mean, realize the following: with a 100% incline, you will not see the road until your distance to the beginning of the slope equals the height of your eyes. Even two meters from the beginning of the slope, it looks like there is no road at all. Quite scary.
7. Jul 10, 2007
TSN79
Also, inclines are categorized into 5 or so categories. I believe the lower the number the harder the incline. Is this right?
8. Jul 10, 2007
Chi Meson
And then "HC" is the hardest since it is "beyond categories."
9. Jul 10, 2007
BobG
Technically, climbs are rated by category. The length, number, and steepness of inclines involved in a long climb all go into deciding which category the climb belongs in. A particularly long incline with no breaks could result in a climb being rated in a tougher category than one with several very steep, but short inclines.
10. Jul 10, 2007
Smurf
would 200% be straight vertical then?
11. Jul 10, 2007
bel
No, 200% incline would be a gradient of 2, corresponding to an angle of arctan(2), or about 63.4 degrees.
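(As an aside, not part of the thread: the conversion used in these posts is just the arctangent — a grade of g percent corresponds to an angle of arctan(g/100). A quick Python check; the function name is our own.)

```python
import math

def grade_to_degrees(grade_percent):
    """Convert a road gradient in percent (100 * rise / run) to degrees."""
    return math.degrees(math.atan(grade_percent / 100.0))

print(grade_to_degrees(10))   # roughly 5.7 degrees: a hard Tour climb
print(grade_to_degrees(100))  # 45 degrees: the 'one in one' incline
print(grade_to_degrees(200))  # roughly 63.4 degrees, matching arctan(2)
```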
12. Jul 10, 2007
humanino
Vertical cannot be assigned a finite value here. It would be "infinitely steep" somehow.
13. Jul 10, 2007
Schrodinger's Dog
Vertical is not an incline, the closest to it would be technically indefinite I suppose, or more realistically depend on the slope.
For all intents and purposes in cycling a slope beyond a certain gradient would be irrelevant as it would be impossible for anyone to even attempt to traverse it.
I'd imagine 200% doesn't even exist in cycling, as that would be absurd unless it was a BMX jump or a bump in the road.
Last edited: Jul 10, 2007
14. Jul 10, 2007
DaveC426913
Ah. Took me a while to visualize this.
You're talking about being at the top of the incline looking down.
https://brilliant.org/problems/view-this-in-4-dimensions/ | # View this in 4 dimensions
Algebra Level 5
How many ordered quadruples of non-negative real numbers are there that satisfy:
\begin{align} a + b + c + d &= 4 \\ a^2bc + b^2cd + c^2da + d^2 ab &= 4 \end{align}
Now prove this.
https://library.fiveable.me/ap-physics-2/unit-7/unit-7-properties-of-waves-and-particles/study-guide/gXV2Vd4gN69ociflUUAD | # 7.5 Properties of Waves and Particles
Saarah Hasan
Daniella Garcia-Loos
### AP Physics 2🧲
From previous lessons, we already know that light behaves as both a particle and a wave. In 7.5, we’re going to expand on this duality.
## Wave-Particle Duality
The coexistence of particle and wave properties for fundamental particles is known as wave-particle duality. On a small scale, particles can showcase the properties of waves. Think back to double-slit experiments: if a stream of particles traveled through the slits, they would diffract. Waves can also showcase properties of particles; photons have particle properties like momentum and energy, which relate to their frequency and wavelength.
In the 1920s, a physicist named Arthur Compton conducted experiments that showed that when an X-ray photon collides with an electron, the collision obeys the law of conservation of momentum. In this momentum interaction, known as the Compton effect, the scattered photon has a lower frequency than the incident photon.
Taken from Wikimedia Commons
The momentum of the photon is given by:
p=h/λ
This raised a question: if an electromagnetic wave can behave like a particle, can a particle of matter behave like a wave? A physicist named Louis de Broglie suggested that the answer was yes. Since a photon's momentum is p=h/λ, the relation can be rearranged to give the wavelength:
λ=h/p
If the momentum of a particle (p=mv) is small enough, the wavelength h/p becomes significant and can be observed. For this to happen, the mass has to be extremely small (on the atomic scale). The equation for the de Broglie wavelength of a particle becomes:
λ=h/mv
Wave-particle duality is the concept that particles, such as electrons and photons, can exhibit both wave-like and particle-like behavior. It is a fundamental principle of quantum mechanics that has been confirmed by numerous experiments.
Here are some key points about wave-particle duality:
• Wave-particle duality is a consequence of the wave-particle duality of energy, which states that energy can exhibit both wave-like and particle-like behavior.
• The wave-like behavior of particles is described by wave-like properties such as wavelength, frequency, and amplitude. The particle-like behavior of particles is described by particle-like properties such as mass and charge.
• Wave-particle duality is a key feature of quantum mechanics and is essential for understanding the behavior of particles at the atomic and subatomic level. It has many important implications and applications, including the development of technologies such as lasers and transistors.
• Wave-particle duality is a fundamental principle of quantum mechanics and is an important part of our understanding of the nature of matter and energy. It has had a profound impact on our understanding of the universe and has led to many important discoveries and technological advances.
Let's do some practice together:
Electrons in a diffraction experiment are accelerated through a potential difference of 175 V. What's the de Broglie wavelength of these electrons? Since K = ½mv², the momentum is p = mv = √(2mK), so λ = h/p = h/√(2mK) = (6.63 × 10^-34 J·s)/√(2(9.11 × 10^-31 kg)(175 eV × 1.6 × 10^-19 J/1 eV)) ≈ 9.3 × 10^-11 m = 0.093 nm
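The same calculation can be scripted. This is an illustrative sketch (constants rounded; the function name is ours, not from the guide):

```python
import math

H = 6.63e-34       # Planck's constant, J*s
M_E = 9.11e-31     # electron mass, kg
EV = 1.6e-19       # joules per electron-volt

def de_broglie_wavelength(kinetic_energy_ev, mass=M_E):
    """Non-relativistic de Broglie wavelength (m): lambda = h / sqrt(2 m K)."""
    momentum = math.sqrt(2.0 * mass * kinetic_energy_ev * EV)
    return H / momentum

print(de_broglie_wavelength(175))  # about 9.3e-11 m, i.e. 0.093 nm
```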
## Relativistic mass–energy equivalence
Relativistic mass-energy equivalence is the principle that mass and energy are interchangeable, described by the equation E=mc^2, where E is energy, m is mass, and c is the speed of light. A closely related consequence of relativity is that the effective (relativistic) mass of an object increases as its velocity increases.
Here are some key points about relativistic mass-energy equivalence:
• The theory of relativity states that the laws of physics are the same for all inertial observers, regardless of their velocity. This means that the mass of an object depends on its velocity, and that the mass of an object increases as its velocity increases.
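As a concrete instance of E = mc^2 (constants rounded; this example is ours, not part of the guide), the rest energy of an electron comes out to the familiar 0.511 MeV:

```python
C = 3.0e8          # speed of light, m/s
M_E = 9.11e-31     # electron rest mass, kg
EV = 1.6e-19       # joules per electron-volt

rest_energy_j = M_E * C ** 2           # E = m c^2
rest_energy_mev = rest_energy_j / EV / 1e6
print(rest_energy_mev)  # about 0.51 MeV with these rounded constants
```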
## Wave Phenomena
Interference and diffraction are phenomena that are observed in waves, such as light waves or sound waves. They are not observed in classical particles; quantum particles such as electrons can show them only through their wave nature.
Here are some key points about why only waves display interference and diffraction:
• Interference is the phenomenon in which waves combine to form a new wave pattern. It is observed when two or more waves pass through each other.
• Diffraction is the phenomenon in which waves bend around obstacles or pass through small openings. It is observed when waves encounter an obstacle or opening that is comparable in size to the wavelength of the wave.
• The reason that only waves display interference and diffraction is that these phenomena are a consequence of wave behavior. Classical particles have no wave nature, so they do not exhibit them; quantum particles can, precisely because of the de Broglie wavelength discussed above.
## Practice Problems: 🧩
1. An atomic particle of mass m moving at speed v is found to have wavelength λ. What is the wavelength of a second particle with a speed 3v and the same mass?
A) (1/9) λ
B) (1/3) λ
C) λ
D) 3 λ
E) 9 λ
2. A very slow proton has its kinetic energy doubled. What happens to the proton's corresponding de Broglie wavelength?
A) the wavelength is decreased by a factor of √2
B) the wavelength is halved
C) there is no change in the wavelength
D) the wavelength is increased by a factor of √2
E) the wavelength is doubled.
3. Which of the following graphs best represents the de Broglie wavelength λ of a particle as a function of the linear momentum p of the particle?
1. B: The de Broglie wavelength is given by p = h/λ, so mv = h/λ and λ = h/mv. Tripling v therefore gives (1/3)λ.
2. A: From above, λ = h/mv. Since K = ½mv², doubling K increases v by a factor of √2, so the wavelength decreases by a factor of √2.
3. D: From p=h/ λ, they are inverses.
https://repository.uantwerpen.be/link/irua/94847 | Title Symmetry lowering at the structural phase transitions in $NpO_{2}$ and $UO_{2}$ Author Nikolaev, A.V. Michel, K.H. Faculty/Department Faculty of Sciences. Physics Publication type article Publication 2003 Lancaster, Pa , 2003 Subject Physics Source (journal) Physical review : B : condensed matter and materials physics. - Lancaster, Pa, 1998 - 2015 Volume/pages 68(2003) :5 , p. 054112,1-054112,7 ISSN 1098-0121 1550-235X 1098-0121 Article Reference 054112 ISI 000185240100038 Carrier E-only publicatie Target language English (eng) Full text (Publishers DOI) Affiliation University of Antwerp Abstract The structural phase transitions with electric-quadrupole long-range order in NpO2 (Fm (3) over barm-->Pn (3) over barm) and UO2 (Fm (3) over barm-->Pa (3) over bar) are analyzed from a group theoretical point of view. In both cases, the symmetry lowering involves three quadrupolar components belonging to the irreducible representation T-2g (Gamma(5)) of O-h and condensing in a triple-q structure at the X point of the Brillouin zone. The Pa (3) over bar structure is close to Pn (3) over barm, but allows for oxygen displacements. The Pa (3) over bar ordering leads to an effective electrostatic attraction between electronic quadrupoles while the Pn (3) over barm ordering results in a repulsion between them. It is concluded that the Pn (3) over barm structure can be stabilized only through some additional process such as strengthening of the chemical bonding between Np and O. We also derive the relevant structure-factor amplitudes for Pn (3) over barm and Pa (3) over bar, and the effect of domains on resonant x-ray scattering experiments. 
Full text (open access) https://repository.uantwerpen.be/docman/irua/1b45e2/1945.pdf E-info http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000185240100038&DestLinkType=RelatedRecords&DestApp=ALL_WOS&UsrCustomerID=ef845e08c439e550330acc77c7d2d848 http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000185240100038&DestLinkType=FullRecord&DestApp=ALL_WOS&UsrCustomerID=ef845e08c439e550330acc77c7d2d848 http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000185240100038&DestLinkType=CitingArticles&DestApp=ALL_WOS&UsrCustomerID=ef845e08c439e550330acc77c7d2d848 Handle
https://proofwiki.org/wiki/Derivative_of_Arctangent_Function | Derivative of Arctangent Function
Theorem
Let $x \in \R$.
Let $\arctan x$ be the arctangent of $x$.
Then:
$\dfrac {\mathrm d \left({\arctan x}\right)} {\mathrm d x} = \dfrac 1 {1 + x^2}$
Corollary
$\dfrac {\mathrm d \left({\arctan \left({\frac x a}\right) }\right)} {\mathrm d x} = \dfrac a {a^2 + x^2}$
Proof 1
$$\begin{aligned} y &= \arctan x \\ \implies \quad x &= \tan y & \text{(Definition of Arctangent)} \\ \implies \quad \frac {\mathrm d x} {\mathrm d y} &= \sec^2 y & \text{(Derivative of Tangent Function)} \\ &= 1 + \tan^2 y & \text{(Difference of Squares of Secant and Tangent)} \\ &= 1 + x^2 & \text{(Definition of } x \text{)} \\ \implies \quad \frac {\mathrm d y} {\mathrm d x} &= \frac 1 {1 + x^2} & \text{(Derivative of Inverse Function)} \end{aligned}$$
$\blacksquare$
Proof 2
$$\begin{aligned} \frac{\mathrm d \left( \arctan x \right)}{\mathrm d x} &= \lim_{h \to 0} \frac{\arctan \left( x + h \right) - \arctan x} h & \text{(Definition of Derivative of Real Function at Point)} \\ &= \lim_{h \to 0} \frac{\arctan ( x + h ) + \arctan ( -x )} h & \text{(Arctangent Function is Odd)} \\ &= \lim_{h \to 0} \frac 1 h \arctan \left( \frac {x + h - x}{1 + x ( x + h )} \right) & \text{(Sum of Arctangents)} \\ &= \lim_{h \to 0} \frac 1 h \arctan \left( \frac h {1 + x^2 + hx} \right) \\ &= \lim_{h \to 0} \frac 1 h \left( \frac h {1 + x^2 + hx} - \frac 1 3 \left( \frac h {1 + x^2 + hx} \right)^3 + \frac 1 5 \left( \frac h {1 + x^2 + hx} \right)^5 + \mathcal O \left( h^7 \right) \right) & \text{(Definition of Arctangent)} \\ &= \lim_{h \to 0} \left( \frac 1 {1 + x^2 + hx} - \frac {h^2} {3 \left( 1 + x^2 + hx \right)^3} + \frac {h^4} {5 \left( 1 + x^2 + hx \right)^5} + \mathcal O \left( h^6 \right) \right) \\ &= \frac 1 {1 + x^2 + 0x} - \frac {0^2} {3 \left( 1 + x^2 + 0x \right)^3} + \frac {0^4} {5 \left( 1 + x^2 + 0x \right)^5} \\ &= \frac 1 {1 + x^2} \end{aligned}$$
$\blacksquare$
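A quick numerical sanity check (ours, not part of the ProofWiki entry): compare a central finite difference of arctan with the closed form 1/(1 + x²) at a few sample points.

```python
import math

def arctan_derivative(x):
    # the closed form established above
    return 1.0 / (1.0 + x * x)

def central_difference(f, x, h=1e-6):
    # symmetric difference quotient, accurate to O(h^2)
    return (f(x + h) - f(x - h)) / (2.0 * h)

for x in (-2.0, 0.0, 0.5, 3.0):
    assert abs(central_difference(math.atan, x) - arctan_derivative(x)) < 1e-8
```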
Also defined as
This result can also be reported as:
$\dfrac {\mathrm d \left({\arctan x}\right)} {\mathrm d x} = \dfrac 1 {x^2 + 1}$
https://www.arxiv-vanity.com/papers/1607.03248/ | # Quantum and Thermal Phase Transitions in a Bosonic Atom-Molecule Mixture in a Two-dimensional Optical Lattice
L. de Forges de Parny and V.G. Rousseau Laboratoire de Physique, CNRS UMR 5672, École Normale Supérieure de Lyon, Université de Lyon, 46 Allée d’Italie, Lyon, F-69364, France Physikalisches Institut, Albert-Ludwigs Universität Freiburg, Hermann-Herder Straße 3, D-79104, Freiburg, Germany Physics Department, Loyola University New Orleans, 6363 Saint Charles Ave., New Orleans, Louisiana 70118, USA
February 10, 2021
###### Abstract
We study the ground state and the thermal phase diagram of a two-species Bose-Hubbard model, with U(1) symmetry, describing atoms and molecules on a two-dimensional optical lattice interacting via a Feshbach resonance. Using quantum Monte Carlo simulations and mean field theory, we show that the conversion between the two species, coherently coupling the atomic and molecular states, has a crucial impact on the Mott-superfluid transition and stabilizes an insulating phase with a gap controlled by the conversion term – the Feshbach insulator – instead of a standard Mott insulating phase. Depending on the detuning between atoms and molecules, this model exhibits three phases: the Feshbach insulator, a molecular condensate coexisting with noncondensed atoms and a mixed atomic-molecular condensate. Employing finite-size scaling analysis, we observe a three-dimensional XY (3D Ising) transition when the U(1) ($\mathbb{Z}_2$) symmetry is broken, whereas the transition is first-order when both U(1) and $\mathbb{Z}_2$ symmetries are spontaneously broken. The finite temperature phase diagram is also discussed. The thermal disappearance of the molecular superfluid leads to a Berezinskii-Kosterlitz-Thouless transition with an unusual universal jump in the superfluid density. The loss of the quasi-long-range coherence of the mixed atomic and molecular superfluid is more subtle, since only atoms exhibit conventional Berezinskii-Kosterlitz-Thouless criticality. We also observe a signal compatible with a classical first-order transition between the mixed superfluid and the normal Bose liquid at low temperature.
###### pacs:
03.75.Hh, 05.20.y, 05.30.Jp, 64.60.F-, 03.75.Mn
## I Introduction
Ultracold atoms in optical lattices have opened new perspectives in several modern fields of physics, such as many-body and condensed matter physics. They offer possibilities to study complex many-body systems Bloch_2008 and quantum phase transitions Sachdev_1999 ; Greiner_2002 with a high degree of control. More interestingly, they are quantum simulators, giving access to the experimental implementation of models which are not easily realizable in any other physical contexts. In all these applications, Feshbach resonances offer an invaluable tuning knob for controlling the interaction between the atoms Chinetal2010 and also give the possibility of coherently coupling different internal states of atoms. As an example, an unbound state of two interacting atoms and a bound state – hereafter called the molecular state – can be brought into resonance by the application of a magnetic field, thanks to the different magnetic moments of the two states. Therefore, ultracold atoms in optical lattices are also very suitable to study mixtures of Bose-Einstein condensates (BECs), involving a coherent coupling à la Josephson between pairs of atoms and molecules – realizing quantum-coherent chemical reactions. The control of the effective scattering length of unbound atoms has led to the exploration of complex quantum many-body phases Greineretal2003 ; KetterleZ2008 ; RanderiaT2014 ; Winkleretal2006 ; Chinetal2010 ; Inouyeetal1998 ; Donley2002 ; Stengeretal1999 and to tune transitions between them Jordensetal08 ; Deissleretal2010 , whereas the control of the coherent coupling between atoms and molecules has been exploited e.g. to observe atom-molecule Rabi oscillations Syassenetal2007 ; Buschetal1998 ; Olsenetal2009 . 
Theoretically, the coherent coupling is at the basis of the prediction of a quantum phase transition between mixed atom-molecule and purely molecular condensates Radzihovskyetal2004 ; Romansetal2004 ; SenguptaD2005 ; deForges_Roscilde_2015 ; Radzihovskyetal2008 ; Ejima_2011 ; Bhaseenetal2012 ; Capponi_2007 ; Roux_2009 . The case of bosonic atoms and molecules is all the more interesting since long-range phase coherence can be established in two dimensions at zero temperature via Bose-Einstein condensation. The asymmetric coherent coupling between atoms and molecules clearly leads to a complex interplay between atomic and molecular condensations and provides quantum phase transitions which are not realized in the context of single species condensates. Even more striking is that the coherent coupling can destroy the phase coherence, leading to an insulating phase with a gap controlled by the conversion amplitude deForges_Roscilde_2015 .
In this paper we focus our attention on the case of a two-dimensional (2D) atom-molecule mixture, using quantum Monte Carlo (QMC) simulations and Gutzwiller mean-field theory (MFT). Our goal is twofold: we elucidate the effect of the conversion term, leading to a rich and original ground-state phase diagram, and we unveil the thermal phase transitions, exhibiting an unusual Berezinskii-Kosterlitz-Thouless (BKT) transition. Basically, we expect mixed Mott insulator and superfluid phases, i.e., composed of both atoms and molecules, to appear due to the conversions. However, the effect of conversions on the phase transitions is more subtle, and we report here clear evidence that the phase coherence is destroyed when the conversion amplitude is increased. Indeed, close to the resonance, the conversion term has a crucial impact on the Mott-superfluid transition for two particles per site: it enhances the insulating phase and also changes the nature of the transition, leading to a quantum first-order transition with a U(1) symmetry breaking. Even more interestingly, the phase located at the tip of the insulating lobe with two particles per site no longer corresponds to the definition of a Mott insulating phase, i.e., an insulating phase with a particle-hole gap opened by (diagonal) repulsive interactions between the particles Mott_definition . Instead, the system adopts a Feshbach insulating phase, where the energy gap is controlled by the (off-diagonal) conversion term between atoms and molecules, keeping the interactions fixed. Although the existence of this phase was previously reported in Ref. deForges_Roscilde_2015 , our present study provides a reliable analysis of the quantum phase transitions, completing the characterization of the phase diagram. Finally, we study the thermal phase transitions.
In two dimensions, Bose-Einstein condensation at finite temperature is impossible in the proper sense, leaving space to quasicondensation via a BKT transition Josebook . The atomic and molecular conversions induce an asymmetric coupling between the phases of the atomic and molecular wave functions, which couples the topological excitations (vortex-antivortex pairs) of both fields. For positive detuning, this leads to an unusual BKT transition when the quasi-long-range coherence of the mixed atomic and molecular superfluid is destroyed: only atoms exhibit conventional BKT criticality, whereas molecules quasicondense in the same way as atom pairs condense. Our QMC simulations are in agreement with a previous study of an effective coupled model, mimicking the atomic and molecular superfluid for positive detuning deForges2016 . We complete here the picture by studying the thermal transition for negative detuning: the molecular condensate, coexisting with noncondensed atoms, exhibits conventional BKT criticality but is found to be consistent with an anomalous stiffness jump at the transition.
The paper is organized as follows: The atom-molecule Hamiltonian is presented in Sec. II. In Sec. III, we discuss the mean field and quantum Monte Carlo approaches used to study the Hamiltonian. We also define the main observables of interest. Section IV is devoted to the discussion of the ground state phase diagram. Quantum Monte Carlo calculations verify the qualitative conclusions of the mean field theory, but provide quantitatively accurate values for the phase boundaries and elucidate the universality classes of the quantum phase transitions. Finally, in Sec. V, we discuss the thermal phase diagram and the nonconventional BKT transitions associated with the loss of molecular and atomic-molecular quasi-long-range coherence. Conclusions and outlook are provided in Sec. VI.
## II Atom-Molecule Hamiltonian
We consider spinless bosons with mass m on a square optical lattice close to a narrow Feshbach resonance. The system is described by a single-band Bose-Hubbard model with atomic and molecular bosons, coherently coupled via atom-atom interactions Dickerscheid2005 . The particles can hop between nearest-neighboring sites, and their interactions are described by intraspecies and interspecies onsite potentials. An additional term takes into account the conversion between two atoms and a molecule, and vice versa. The Hamiltonian of the system reads H = T + P + C Dickerscheid2005 ; Koehleretal2006 ; SenguptaD2005 ; Radzihovskyetal2004 ; Romansetal2004 , where
T = −∑_⟨i,j⟩ ( t_a a†_i a_j + t_m m†_i m_j + H.c. ) , (1)
P = ∑_i [ (U_a/2) n_ai (n_ai − 1) + (U_m/2) n_mi (n_mi − 1) + U_am n_ai n_mi + (U_a + δ) n_mi − μ (n_ai + 2 n_mi) ] , (2)
C = g ∑_i ( m†_i a_i a_i + a†_i a†_i m_i ) . (3)
The operator T corresponds to the kinetic energy for hopping between nearest-neighboring sites defined on a square lattice with periodic boundary conditions. Here t_a and t_m are respectively the tunneling amplitudes for the atoms and the molecules. The a†_i and a_i (m†_i and m_i) operators are bosonic creation and annihilation operators of atoms (molecules) on site i. n_ai = a†_i a_i and n_mi = m†_i m_i are the corresponding number operators. The operator P contains the intraspecies (interspecies) interactions with repulsive cost U_a and U_m (U_am), as well as the chemical potential term; in particular it contains the detuning term δ (controlled experimentally by a magnetic field Chinetal2010 ), which brings the state of two atoms and a molecule in and out of resonance on each site, δ < 0 (δ > 0) corresponding to the molecular (atomic) side of the resonance. Finally, the operator C is the conversion term, which coherently transforms a pair of atoms into a molecule and vice versa Radzihovskyetal2008 . The conversion rate between atoms and molecules, g, is obtained via the solution of the scattering problem for two atoms in a parabolic potential Buschetal1998 . Following Ref. Syassenetal2007 , the parameter g, calculated by assuming a single harmonic potential, which is a good approximation for a deep optical lattice, is given by
g = [ (4π ℏ² a_bg Δμ ΔB) / ( m (√(2π) a_ho)³ ) × ( 1 + 0.490 a_bg / a_ho ) ]^(1/2) , (4)
where a_ho is the harmonic-oscillator length, a_bg the background scattering length of the atoms, ΔB the width of the Feshbach resonance, and Δμ the difference between the magnetic moments of an entrance-channel atom pair and a closed-channel molecule. The model described by the Hamiltonian remains realistic as long as g and the nonresonant atom-atom interaction remain small compared to the energy splitting ℏω of the on-site optical-lattice potential; see Ref. Dickerscheid2005 for the conditions of applicability of this model and for the derivation of the Hamiltonian, Eqs. (1)–(3), from a microscopic model. Furthermore, the validity of the single-band approximation requires g ≪ ℏω. In other words, the single-band approximation is well satisfied for a narrow Feshbach resonance, e.g. a width of a few mG for the resonance near 414 G Syassenetal2007 .
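As an illustration of the magnitude set by Eq. (4), the following sketch evaluates g numerically. All parameter values below (atomic mass, scattering length, resonance width, magnetic-moment difference, oscillator length) are hypothetical placeholders chosen only to exercise the formula, not values from the paper or from a specific experiment.

```python
import math

hbar = 1.054571817e-34      # reduced Planck constant, J s
a0   = 5.29177210903e-11    # Bohr radius, m
amu  = 1.66053906660e-27    # atomic mass unit, kg
muB  = 9.2740100783e-24     # Bohr magneton, J/T

m    = 87 * amu             # e.g. an Rb-87 atom (assumption)
a_bg = 100 * a0             # background scattering length (assumption)
dmu  = 2 * muB              # magnetic-moment difference (assumption)
dB   = 20e-7                # resonance width, T (assumption, ~20 mG)
a_ho = 50e-9                # harmonic-oscillator length (assumption)

def conversion_g(m, a_bg, dmu, dB, a_ho):
    """Conversion amplitude g of Eq. (4), returned in joules."""
    prefac = 4 * math.pi * hbar**2 * a_bg * dmu * dB / m
    volume = (math.sqrt(2 * math.pi) * a_ho) ** 3
    corr = 1 + 0.490 * a_bg / a_ho
    return math.sqrt(prefac / volume * corr)

g = conversion_g(m, a_bg, dmu, dB, a_ho)
print(f"g ≈ {g:.3e} J ≈ {g / hbar / (2 * math.pi):.3e} Hz")
```

Note that g scales as the square root of the resonance width ΔB, which is why a narrow resonance keeps the conversion amplitude small compared to the band splitting.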
The Hamiltonian has the symmetry U(1)×Z2, associated with the mass conservation in the mixture (U(1) symmetry), times the Ising Z2 symmetry in the phase relationship between atoms and molecules. As we discuss later, this emergent Ising symmetry, arising from the asymmetric nature of the atom-molecule coupling, is crucial for the understanding of the phase diagram. A theoretically sound treatment requires one to take into account the full many-body physics of the Hamiltonian, which is a rather hard task given the large number of parameters (t_a, t_m, U_a, U_m, U_am, δ, g, μ). In order to simplify our study, we treat the parameters of the model as free parameters and we consider symmetric parameters for atoms and molecules, leading to
t ≡ t_a = t_m , U ≡ U_a = U_m = U_am , (5)
reducing the model to four independent parameters only, U/t, δ/t, g/t, and μ/t, where we choose the hopping parameter t to set the energy scale. A realistic scenario requires the calculation of the parameters from the microscopic Hamiltonian using the Wannier functions. Nevertheless, since the qualitative aspects of the phase diagram do not depend on the precise values of the hopping amplitudes and interactions, our choice of Eq. (5) is indeed relevant SenguptaD2005 and captures the physics arising from the conversion term Eq. (3), as demonstrated in Ref. deForges_Roscilde_2015 .
The above atom-molecule Hamiltonian has been mainly studied using mean-field theory Radzihovskyetal2004 ; Romansetal2004 ; Radzihovskyetal2008 . The quantum phase transitions exhibited by the model have been extensively studied in one dimension RousseauD2008 ; EckholtR2010 ; Ejima_2011 ; Bhaseenetal2012 , whereas few studies have examined this question in two dimensions SenguptaD2005 ; deForges_Roscilde_2015 . Here we numerically investigate this Hamiltonian in two dimensions, by using exact QMC simulations based on the stochastic Green function algorithm SGF ; directedSGF and the Gutzwiller mean-field approach. We investigate both the quantum and thermal phase transitions.
## III Methods
To capture the zero-temperature physics of the model, we use both the QMC method and the MFT approach. While the QMC simulations become rather demanding for the calculation of the phase diagram, MFT allows for a rapid reconstruction of the phase boundaries, which turns out to be essential given the rich structure of the phase diagram, containing several critical and multicritical points.
### III.1 Gutzwiller mean-field approach
Although the mean-field approximation does not give quantitatively accurate values for the phase boundaries, the mean-field phase diagram of a bosonic coupled mixture is in good agreement with QMC simulations in two dimensions at zero temperature deforges11 ; deforges13 but fails at finite temperature deforges12 . We use a mean-field formulation based on a decoupling approximation which decouples the hopping term to obtain an effective one-site problem. Introducing the atomic (molecular) superfluid order parameter ψ_a (ψ_m), we replace the creation and annihilation operators on site i by their mean values ⟨a_i⟩ = ψ_a and ⟨m_i⟩ = ψ_m. Since we are interested in equilibrium states, the order parameters can be chosen to be real. Using this ansatz, the kinetic-energy terms, which are nondiagonal in boson creation and annihilation operators, are decoupled as
a†_i a_j = (a†_i − ψ_a)(a_j − ψ_a) + (a†_i + a_j) ψ_a − ψ_a² ≃ (a†_i + a_j) ψ_a − ψ_a² . (6)
The same approximation applies for the molecules. The Hamiltonian is rewritten as a sum over local terms, H^MF = ∑_i H^MF_i, where
H^MF_i = −4 t_a (a†_i + a_i) ψ_a − 4 t_m (m†_i + m_i) ψ_m + 4 t_a ψ_a² + 4 t_m ψ_m² + U_am n_ai n_mi + (U_a/2) n_ai (n_ai − 1) + (U_m/2) n_mi (n_mi − 1) + (U_a + δ) n_mi − μ (n_ai + 2 n_mi) + g (m†_i a_i a_i + a†_i a†_i m_i) . (7)
The mean-field Hamiltonian Eq. (7) can easily be diagonalized numerically in a finite occupation-number basis |n_a, n_m⟩, with a truncation n_a, n_m ≤ n_max, minimizing the lowest eigenvalue with respect to the order parameters ψ_a and ψ_m. This gives the order parameters of the ground state and its eigenvector |Ψ^MF_0⟩. At zero temperature, the system is in a Bose-Einstein condensate phase if at least one of the order parameters is nonzero and is, otherwise, in an insulating phase. The atomic and molecular condensate fractions are given by
C^MF_α = |ψ_α|² , (8)
with α = a, m. The atomic and molecular densities are respectively defined by
ρ_a = ⟨Ψ^MF_0| a†a |Ψ^MF_0⟩ , ρ_m = ⟨Ψ^MF_0| m†m |Ψ^MF_0⟩ . (9)
Finally, the compressibility is given by
κ=∂ρ/∂μ , (10)
with ρ = ρ_a + 2 ρ_m the total density.
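The self-consistent procedure above can be sketched numerically. The snippet below builds the single-site Hamiltonian of Eq. (7) in the truncated |n_a, n_m⟩ basis with the symmetric parameters of Eq. (5), and minimizes the lowest eigenvalue over real (ψ_a, ψ_m) by a simple grid search. The truncation n_max, the grid resolution, and the parameter values in the usage note are illustrative assumptions, not those used in the paper.

```python
import numpy as np
from itertools import product

n_max = 4                                            # Fock truncation (assumption)
states = list(product(range(n_max + 1), repeat=2))   # basis |n_a, n_m>
index = {s: k for k, s in enumerate(states)}

def h_mf(psi_a, psi_m, t, U, delta, mu, g):
    """Single-site mean-field Hamiltonian, Eq. (7), with t_a=t_m=t, U_a=U_m=U_am=U."""
    H = np.zeros((len(states), len(states)))
    for (na, nm), k in index.items():
        # diagonal part: interactions, detuning, chemical potential, MF constants
        H[k, k] = (0.5 * U * na * (na - 1) + 0.5 * U * nm * (nm - 1)
                   + U * na * nm + (U + delta) * nm - mu * (na + 2 * nm)
                   + 4 * t * (psi_a**2 + psi_m**2))
        # decoupled hopping -4t (a^dag + a) psi_a: couples |n_a> to |n_a + 1>
        if na + 1 <= n_max:
            kk = index[(na + 1, nm)]
            H[k, kk] += -4 * t * psi_a * np.sqrt(na + 1)
            H[kk, k] += -4 * t * psi_a * np.sqrt(na + 1)
        if nm + 1 <= n_max:
            kk = index[(na, nm + 1)]
            H[k, kk] += -4 * t * psi_m * np.sqrt(nm + 1)
            H[kk, k] += -4 * t * psi_m * np.sqrt(nm + 1)
        # conversion g (m^dag a a + H.c.): |n_a, n_m> -> |n_a - 2, n_m + 1>
        if na >= 2 and nm + 1 <= n_max:
            kk = index[(na - 2, nm + 1)]
            amp = g * np.sqrt(na * (na - 1) * (nm + 1))
            H[k, kk] += amp
            H[kk, k] += amp
    return H

def ground_state(t, U, delta, mu, g, grid=np.linspace(0.0, 1.5, 31)):
    """Minimize the lowest eigenvalue over real (psi_a, psi_m) on a grid."""
    best = (np.inf, 0.0, 0.0)
    for pa in grid:
        for pm in grid:
            e0 = np.linalg.eigvalsh(h_mf(pa, pm, t, U, delta, mu, g))[0]
            if e0 < best[0]:
                best = (e0, pa, pm)
    return best   # (E0, psi_a, psi_m)
```

For instance, with U = 1, δ = 0, μ = 1, g = 0.1 (all assumptions), a tiny hopping yields ψ_a = ψ_m = 0 (insulator), while a large hopping yields nonzero order parameters (condensate).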
### III.2 Quantum Monte Carlo simulations
The atom-molecule Hamiltonian is simulated by using the stochastic Green function algorithm SGF ; directedSGF , an exact QMC technique that allows canonical and grand-canonical simulations of the system at zero and finite temperatures, as well as measurements of many-particle Green functions. We treat L × L square lattices. A sufficiently large inverse temperature β allows one to eliminate thermal effects from the QMC results. We mainly focus on scans at fixed total density in the canonical ensemble and calculate the average atomic and molecular densities, ρ_a and ρ_m, respectively, and the condensate fractions of atoms and molecules,
C_a = (1/L⁴) ∑_{i,j} ⟨a†_i a_j⟩ , C_m = (1/L⁴) ∑_{i,j} ⟨m†_i m_j⟩ . (11)
The total density is conserved in canonical simulations, but the individual densities ρ_a and ρ_m fluctuate due to the conversion term Eq. (3). We also calculate the superfluid density, given by the fluctuations of the winding numbers roy
ρ_s = ⟨(W_a + 2 W_m)²⟩ / (4 t β) . (12)
## IV Ground state phase diagrams
Without coupling between atoms and molecules, i.e. for g = 0, the symmetry of the model is U(1)×U(1) and we expect to observe an atomic (molecular) Mott insulator for small hopping and integer filling, and an atomic (molecular) Bose-Einstein condensate with broken U(1) symmetry for large hopping. Activating the conversion, the symmetry of the model breaks down into a global U(1) symmetry corresponding to the transformations
ϕ_mi → ϕ_mi + θ , ϕ_ai → ϕ_ai + θ/2 + (1/2)(σ + 1)π , (13)
with σ = ±1 the Ising variable and ϕ_ai and ϕ_mi respectively the atomic and molecular phases of the fields. The U(1) symmetry is a joint one for atomic and molecular phases, and corresponds to total “mass” conservation with density ρ = ρ_a + 2 ρ_m. From the mean-field point of view, the average phase of the atoms acquires a finite value in the atomic BEC phase, hence ⟨a⟩ ≠ 0, and consequently ⟨a⟩² ≠ 0. As a consequence, the molecular phase, locked to the nonzero value acquired by the phase of atomic pairs, drives the system to a joint atomic and molecular BEC, and prohibits the existence of an atomic BEC without a molecular condensate. The reverse is not true: because of the asymmetric nature of the atom-molecule coupling, the molecular condensation does not imply an atomic condensation and leaves the residual Z2 symmetry unbroken. Indeed, the molecular condensation leads to a finite value for the average ⟨m⟩, which couples to twice the phase of the atoms and therefore fixes the phase of the atoms only modulo π, leading to a fluctuating Ising variable with discrete fluctuations (σ = ±1). Therefore, for large hopping, we expect the appearance of two Bose-Einstein condensed phases: a molecular condensate and a mixed atomic-molecular condensate.
For small hopping t, the coupling also strongly affects the Mott insulating phases, leading to an atomic-molecular Mott insulating phase. The Mott phase with two particles per site is well described by an on-site wave function of the form
|Ψ⟩ = α(δ/U, g/U) |2,0⟩ + β(δ/U, g/U) |0,1⟩ , (14)
in the occupation-number basis |n_a, n_m⟩. It has been shown that the particle-hole gap Δ = μ⁺ − μ⁻, where μ⁺ (μ⁻) is the critical chemical potential to add a particle (hole) to the incompressible phase, is strongly dependent on the conversion parameter g. For a moderate hopping, the most striking feature is that the conversion parameter can drive the system towards an insulating phase, the Feshbach insulator (FI), close to the Feshbach resonance, by opening a particle-hole gap in a phase that is superfluid for g = 0 deForges_Roscilde_2015 . In other words, the particle-hole gap vanishes in the FI phase when the conversions are suppressed, i.e. g → 0, whereas the gap of a standard Mott phase remains finite for g → 0.
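In the zero-hopping limit the two-particle state of Eq. (14) reduces to a 2×2 problem in the {|2,0⟩, |0,1⟩} basis: the diagonal energies differ by the detuning δ, the conversion couples the two states with matrix element g√2, and a common chemical-potential shift drops out of the amplitudes. The sketch below (the parameter values in the examples are arbitrary illustrations) diagonalizes this 2×2 Hamiltonian to obtain α and β.

```python
import numpy as np

def amplitudes(delta_over_U, g_over_U):
    """Return (|alpha|, |beta|) of |Psi> = alpha|2,0> + beta|0,1> at t = 0."""
    e_atoms = 1.0                  # E(2,0)/U = (U/2) n(n-1)/U with n = 2
    e_mol = 1.0 + delta_over_U     # E(0,1)/U = (U + delta)/U
    h2 = np.array([[e_atoms, np.sqrt(2) * g_over_U],
                   [np.sqrt(2) * g_over_U, e_mol]])
    vals, vecs = np.linalg.eigh(h2)
    alpha, beta = vecs[:, 0]       # ground-state eigenvector
    return abs(alpha), abs(beta)

# On resonance the two states mix with equal weight; far on the molecular
# side (delta << 0) the molecular component dominates.
a_res, b_res = amplitudes(0.0, 0.5)     # delta/U = 0,  g/U = 0.5 (assumption)
a_mol, b_mol = amplitudes(-10.0, 0.5)   # delta/U = -10 (assumption)
```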
We first use the MFT described in Sec. III.1 for studying the phase diagram and the quantum phase transitions. Then, we perform exact QMC simulations described in Sec. III.2 for a more extensive analysis of the quantum phase transitions.
### IV.1 Mean-field phase diagram
The atomic-molecular conversions strongly affect the insulating-BEC transition with two particles per site and give rise to an insulating phase at the tip of the Mott lobe. As a reference, for the standard single-species Bose-Hubbard model, the mean-field method of Sec. III.1 locates the Mott-superfluid transition at (t/U)_c = 1/[z(√n + √(n+1))²] for filling n, with z = 4 the coordination number of the square lattice. Activating the conversion, Fig. 1 shows the atomic and molecular condensate fractions C^MF_a and C^MF_m as functions of the hopping in different regions of the detuning.
For small hopping, the system is in a Mott insulating phase for all detunings, Fig. 1 (a–c), and the particle-hole gap is stabilized by the interactions. Three scenarios are observed when the hopping is increased. Firstly, close to the resonance, both C^MF_a and C^MF_m turn on simultaneously and jump at the transition, indicating the existence of a metastable region and a quantum first-order transition – see Fig. 1 (a). Clearly, the transition takes place at a critical hopping larger than the standard critical hopping of the Mott-superfluid transition without conversions at any filling. Therefore, the interactions alone cannot open the particle-hole gap at the tip of the insulating lobe, which is rather stabilized by the conversions: the system is in a FI phase. Secondly, far on the molecular side only the molecular condensation occurs – see Fig. 1 (b) – the atoms being almost completely eliminated adiabatically. Consequently, the transition occurs close to the standard critical value of the single-species Bose-Hubbard model with one particle (here, one molecule) per site. Lastly, far on the atomic side (Fig. 1 (c)), the system is mainly composed of atoms and a mixed condensation occurs when the hopping is increased (the molecular condensate fraction is small but finite). Although we numerically observe a continuous transition in this case, a weak first-order transition is not excluded when fluctuations are taken into account. This, however, does not happen, as we discuss in the following.
We now turn our attention to the phase diagram close to the resonance with a fixed hopping t, in order to focus on the FI phase. Figure 2, from Ref. deForges_Roscilde_2015 , shows the phase diagram as a function of the detuning δ and of the chemical potential μ.
The incompressible region (black region in Fig. 2) reveals the existence of the particle-hole gap of the FI phase with ρ = 2. The molecular condensate and the mixed condensate are also observed in the phase diagram. First-order transitions, indicated by red dashed lines in Fig. 2, are systematically observed when both atomic and molecular order parameters are simultaneously turned on, i.e. when the global U(1)×Z2 symmetry of the model is broken footnote:U1Z2 . The first-order nature of the transition is not specific to the transition to the FI, but appears to be generic for all transitions between an insulator and the mixed condensate Radzihovskyetal2008 . Although this phase diagram was discussed in Ref. deForges_Roscilde_2015 , the phase transitions have not yet been properly examined using an exact method.
Changing the detuning δ, i.e. the control parameter in the experiment, can drive the system into different phases, leading to quantum phase transitions without changing the hopping parameter t. This gives the experimental possibility to observe multiple transitions and, more specifically, the first-order transition between the FI and the mixed condensate. The direct observation of the density profile for a fixed detuning would give rise to intriguing shapes, since many first-order transitions are involved. According to the local-density approximation, the density profile is obtained by a vertical scan of the phase diagram, i.e. by changing the chemical potential μ. Figure 3 shows the condensate fractions and the densities for such a vertical cut at fixed detuning.
The system evolves continuously from the vacuum through the condensed phases when μ increases, and all the quantities jump at the first-order transition between the FI and the mixed condensate. Note that both atomic and molecular densities, ρ_a and ρ_m, reach noninteger plateaus in the FI phase, whereas the total density ρ = ρ_a + 2 ρ_m = 2 is integer.
The mean-field analysis reveals rich physics attributed to the conversion term Eq. (3), but does not allow the classification of the transitions, which requires the calculation of correlation functions.
### IV.2 Quantum Monte Carlo simulations
The MFT results are qualitatively confirmed by our QMC simulations.
Figure 4 (a) shows that, upon lowering the hopping, the atomic and molecular condensate fractions exhibit a clear jump at the tip of the FI lobe, witnessing the first-order nature of the transition between the FI and the mixed condensate. The jump is also observed in the superfluid density. The transition occurs at a critical value well above the standard critical hopping of the MI-SF transition without conversions at any filling (cf. the single-species Bose-Hubbard results of Ref. Capogrossoetal2008 ), thus proving the crucial contribution of the conversions to the particle-hole gap stabilization. The crossover between the Mott and Feshbach insulating regimes was investigated in Ref. deForges_Roscilde_2015 . Far on the molecular side – Fig. 4 (b) – only the molecular condensation occurs, and the continuous transition to the molecular condensate takes place close to the single-species critical hopping. Finally, on the atomic side – Fig. 4 (c) – we do not observe a jump at the transition to the mixed condensate.
To confirm the presence of a first-order quantum phase transition near the tip of the FI lobe, we perform a finite-size analysis of the condensate fractions C_a and C_m, and of the superfluid density ρ_s. Indeed, since the correlation length does not diverge at a first-order transition, the jump should increase with the system size for small systems, and then saturate for large sizes.
Figure 5 clearly shows that the jump increases with the linear system size L, indicating a first-order phase transition between the FI and the mixed condensate. This conclusion is strengthened by QMC grand-canonical simulations, see Fig. 6, where the density jumps at the transition, indicating a metastable region and a first-order transition at the tip of the FI lobe.
We now investigate the quantum phase transitions keeping both hopping t and conversion g fixed and varying the detuning δ. Since the FI phase is stabilized for even densities only, two different behaviors are observed for even and odd densities.
We first discuss the case with two particles per site. Starting in the molecular condensate for large negative δ, the system first enters the FI phase, and then the mixed condensate when increasing the detuning – see Fig. 7. As expected, the atomic density increases with the detuning, see Fig. 7 (a), and both atomic and molecular densities jump at the first-order transition between the FI and the mixed condensate, see inset of Fig. 7 (a). This jump is also clearly observed in the superfluid density and in the condensate fractions (Fig. 7 (b–d)).
As shown in Fig. 8, the FI phase cannot be stabilized for one particle per site, and a transition from the molecular to the mixed condensate is induced upon increasing the detuning δ.
This transition, captured also at the mean-field level, is related to the breaking of the Z2 symmetry associated with the phase of the atomic field, and it is therefore expected to belong to the 3D Ising universality class. While the universality class cannot be correctly reproduced at the mean-field level, our QMC simulations show a very convincing scaling of the condensate fraction as C_a L^(2β/ν) = F[(δ − δ_c) L^(1/ν)], with exponents β and ν belonging to the 3D Ising universality class – see Fig. 9 (b) for ρ = 1. The transition from the molecular to the mixed condensate also belongs to the 3D Ising universality class for ρ = 2, according to our QMC simulations – e.g. see Fig. 9 (a).
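The scaling collapse described above can be illustrated with a short sketch: rescaled curves C L^(2β/ν), plotted against the scaling variable (δ − δ_c) L^(1/ν), coincide for all system sizes when the correct exponents are used. The "data" below are synthetic, generated from an assumed scaling form purely to demonstrate the procedure; δ_c, the scaling function, and the system sizes are illustrative assumptions, not results from the paper.

```python
import numpy as np

beta_ising, nu_ising = 0.326, 0.630   # 3D Ising exponents
delta_c = 0.0                         # assumed critical detuning

def synthetic_condensate(delta, L):
    """Fake data obeying C = L^(-2 beta/nu) F[(delta - delta_c) L^(1/nu)]."""
    x = (delta - delta_c) * L ** (1.0 / nu_ising)
    return (1.0 / (1.0 + np.exp(-x))) * L ** (-2.0 * beta_ising / nu_ising)

def collapsed_curve(L, xs):
    """Rescaled curve C L^(2 beta/nu) evaluated at fixed scaling variable xs."""
    deltas = delta_c + xs * L ** (-1.0 / nu_ising)
    return synthetic_condensate(deltas, L) * L ** (2.0 * beta_ising / nu_ising)

xs = np.linspace(-3.0, 3.0, 13)
spread = max(float(np.max(np.abs(collapsed_curve(L, xs) - collapsed_curve(8, xs))))
             for L in (12, 16))
# With the exponents used to generate the data, the collapse is perfect.
```

With real QMC data, δ_c, β, and ν would instead be varied to minimize the spread between the rescaled curves.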
Similarly to the single-species Bose-Hubbard model, the scaling of C_m is found to be consistent with the 3D XY universality class at the transition between the molecular condensate and the FI, where only the U(1) symmetry is spontaneously restored – see Fig. 9 (b). The other transitions in the phase diagram of Fig. 2, i.e. those out of the vacuum and across the FI boundary away from the lobe tip, are second order (not shown). The order and the universality class of the quantum phase transitions of the phase diagram in Fig. 2 are summarized in Table 1.
## V Thermal phase diagram
In two dimensions, Bose-Einstein condensation at finite temperature is impossible in the proper sense Mermin , leaving space to quasicondensation via a BKT transition Josebook , associated with the unbinding transition of pairs of topological excitations (vortices and antivortices). In this context, the coherent coupling between atoms and molecules establishes a correlation among the topological defects in the phase patterns of both species, which brings interesting features deForges2016 . We first analyze the thermal phase diagram for ρ = 2, and then turn to an analysis of the thermal phase transitions.
### V.1 Thermal phase diagram for ρ=2
The ground-state analysis (Sec. IV) revealed the possibility to stabilize an insulating phase, the FI phase, with a finite particle-hole gap opened by the conversions between atoms and molecules for even total density. The FI phase evolves either into the molecular condensate when decreasing the detuning δ, or into the mixed condensate when increasing δ (see Fig. 7). The thermal phase diagram, plotted in Fig. 10, shows the evolution of the phases when activating the thermal effects.
As expected, the molecular (mixed) condensate becomes a molecular (mixed) superfluid at low temperature, and the system is in a normal Bose liquid (NBL) at high temperature for all detunings δ. The critical temperature at the SF-NBL transition is determined using finite-size analysis, see Sec. V.2.
Note that the mixed superfluid and the molecular superfluid, separated by the FI phase at T = 0, remain well separated at all temperatures, and the insulating FI phase crosses over to the NBL phase when the temperature is increased. Interestingly, the mixed phase is more robust with respect to the thermal effects than the single-component-like molecular superfluid. This can be qualitatively explained by looking at the characteristic interaction scales: far on the atomic side the atomic density is ρ_a ≈ 2, whereas far on the molecular side the molecular density is ρ_m ≈ 1. Therefore, the interaction scale in the mixed phase is twice as large as the one in the molecular phase. The same behavior has been observed at the mean-field level in 3D Radzihovskyetal2004 .
### V.2 Quantum-to-classical first-order phase transition and nonconventional BKT transitions
The quantum first-order transition between the FI and the mixed condensate requires specific attention, since it is not excluded that the metastability region persists at finite temperature. Indeed, the discontinuity in the superfluid density at the transition between the disordered phase and the mixed superfluid – a discontinuity associated with the existence of the metastability region – persists at finite temperature, see Fig. 11 (a).
Interestingly enough, the discontinuity in ρ_s observed in Fig. 11 (a) at a temperature close to the critical temperature of the superfluid-NBL transition far on the molecular side reinforces the idea that the quantum first-order transition becomes a classical first-order transition between the NBL and the mixed superfluid. As previously discussed, the discontinuity increases with the system size at a first-order transition. This behavior is clearly observed in Fig. 11 (a). However, we do not observe other clear signals of a first-order transition (two-peak structures in histograms or negative compressibility) and therefore we are not able to discern whether the phase transition is indeed first order or not.
We now concentrate on vertical slices of the phase diagram in Fig. 10, starting in the condensed phase at low temperature, keeping the detuning fixed and varying the temperature. It is well known that the loss of the quasi-long-range coherence at a BKT transition is associated with the unbinding of vortices and antivortices, and with a scaling of the quasicondensate such that C ∝ L^(−η), with the critical exponent η = 1/4 at the transition lebellac . Furthermore, the superfluid density satisfies a universal jump at the transition Nelson_Kosterlitz_1977 . To avoid any confusion, we stress that the universal jump in ρ_s at a BKT transition is only observed in the thermodynamic limit (e.g., see Fig. 12 (b)), and therefore cannot be confused with the discontinuity at a first-order transition, which is observed already for finite systems. In the case of coupled fields, the BKT transition implies a more complex mechanism, since the topological defects are coupled, leading e.g. to the unbinding of half-vortices instead of the usual integer vortices Mukerjee_2006 ; Wessel_2011 ; Yang_2011 ; deForges2016 .
We first discuss the BKT transition far on the molecular side, at large negative detuning – see Fig. 12.
The critical temperature, in agreement with Ref. Capogrossoetal2008 , is determined from the finite-size scaling of the molecular condensate fraction C_m with increasing system sizes – see Fig. 12 (a). Moreover, the BKT transition is found to be consistent with an anomalous stiffness jump four times as large as the standard universal one – see Fig. 12 (b). This anomalous jump is easily understood by rewriting the superfluid density Eq. (12) with a vanishing atomic winding number on the molecular side, leading to ρ_s = ⟨(2W_m)²⟩/(4tβ) = 4⟨W_m²⟩/(4tβ), with W_m the single-component winding number. That immediately gives the factor 4 involved in the anomalous stiffness jump.
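The factor-4 argument above can be checked directly on the estimator of Eq. (12): with W_a = 0, the measured stiffness is exactly four times the single-component molecular stiffness, whatever the winding statistics. The samples below are synthetic stand-ins for QMC output.

```python
import numpy as np

# Synthetic molecular winding-number samples (assumption, not QMC data).
rng = np.random.default_rng(1)
t, beta = 1.0, 20.0
W_m = rng.integers(-2, 3, size=5000)

# Eq. (12) with W_a = 0: rho_s = <(2 W_m)^2> / (4 t beta)
rho_total = np.mean((2.0 * W_m) ** 2) / (4 * t * beta)
# single-component molecular stiffness: <W_m^2> / (4 t beta)
rho_single = np.mean(W_m.astype(float) ** 2) / (4 * t * beta)
ratio = rho_total / rho_single   # identically 4, hence the factor 4 in the jump
```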
On the atomic side, the situation is more complex, since the atomic and molecular fields are coupled in a regime of quasi-long-range order in their correlations. For large positive detuning, we expect the atomic condensate fraction to satisfy the standard BKT scaling. This behavior is depicted in Fig. 13 (a).
This transition is in agreement with the standard universal stiffness jump (not shown). As discussed in Sec. IV, the molecular phase is locked (from the mean-field point of view) to the nonzero value acquired by the phase of atomic pairs, since the average phase of the atoms acquires a finite value. As a consequence, the topological defects in the atomic and molecular fields are coupled also at finite temperature, leading to the appearance of vortices in the molecular field due to the conversion term deForges2016 . For large coupling, the molecular field is expected to quasicondense at the atomic BKT transition in the same way as atom pairs condense. Therefore, the molecular BKT transition, driven by the atomic-pair field dynamics, does not satisfy the normal BKT scaling but instead the scaling of the atom pairs, C_m ∝ L^(−4η) with η = 1/4 at the transition, as shown in Fig. 13 (b). This result is in good agreement with a previous study of an effective coupled model deForges2016 . Therefore, the fact that both atomic and molecular BKT transitions occur at the same critical temperature, see Fig. 13, indicates that the molecular field mimics the behavior of the atomic pairs exactly or, in other words, that the conversion acts as a strong coupling. The order and the universality class of the thermal phase transitions of the phase diagram in Fig. 10 are summarized in Table 2.
## VI Conclusion
Studying numerically a coherently coupled 2D atom-molecule mixture at zero and finite temperature, we unveiled the phase diagram and the universal traits of the transitions. At zero temperature, we have shown that an insulating phase is stabilized close to the Feshbach resonance – the Feshbach insulator – by the atom-molecule conversion term, in a region where interactions alone cannot stabilize a Mott insulator. The Feshbach insulator involves noninteger density plateaus for both atomic and molecular species, with an integer total density ρ = ρ_a + 2 ρ_m, close to the resonance. Such a measurement, directly accessible using Stern-Gerlach separation during the cloud expansion Herbigetal2003 , would provide definitive evidence that this phase is not a standard Mott phase with integer species densities. The ground-state phase diagram comprises the FI phase close to the resonance, a molecular condensate for negative detuning, and a mixed atomic-molecular condensate for positive detuning. The richness of the phase diagram also comes from the variety of quantum phase transitions: the transition from molecular to mixed condensate is found to be of the 3D Ising type, due to the breaking of the Z2 symmetry associated with the phase of the atomic field; the transition from molecular condensate to Feshbach insulator belongs to the universality class of the 3D XY model; and, interestingly enough, the transition from mixed condensate to disordered phase (vacuum or Feshbach insulator), associated with the spontaneous breaking of both the U(1) and Z2 symmetries, is systematically found to be first order; otherwise the transitions are second order. The thermal effects are also discussed. The conversion term couples coherently and asymmetrically the phases of the atomic and molecular fields, and therefore strongly affects the BKT transitions. This leads to an unusual molecular superfluid to normal Bose liquid BKT transition, involving a renormalized stiffness jump instead of the standard one of the single-component case.
The transition from mixed superfluid to normal Bose liquid also requires a careful treatment since only the atomic BKT transition is conventional whereas the thermal disintegration of the molecular superfluid satisfies the scaling of the atom-pair such that . Finally, we observe a discontinuity in the superfluid density at the mixed superfluid to normal Bose liquid transition, indicating the existence of a possible classical first-order transition. These rich phenomena are amenable to experimental verification using state-of-the-art setups in cold-atom physics.
Acknowledgements. We thank T. Roscilde, F. Hébert, and A. Rançon for useful discussions and for their critical reading of the manuscript. We also thank Professor Min-Fong Yang for critical comments and suggestions. L.dF.dP also thanks Sasha de Forges de Parny and Solenne Ghintran for their support. This work is supported by Agence Nationale de la Recherche (“ArtiQ” project) and the Alexander von Humboldt-Foundation. All calculations have been performed on the PSMN center of the ENS-Lyon.
## References
• (1) I. Bloch, J. Dalibard, and W. Zwerger, Rev. Mod. Phys. 80, 885 (2008).
• (2) Quantum Phase Transitions, S. Sachdev (Cambridge University Press, 1999).
• (3) M. Greiner, O. Mandel, T. Esslinger, T. W. Hänsch, and I. Bloch, Nature 415, 39 (2002).
• (4) C. Chin, R. Grimm, P. Julienne, and E. Tiesinga, Rev. Mod. Phys. 82, 1225 (2010).
• (5) M. Greiner, C. A. Regal, and D. S. Jin, Nature 426, 537 (2003).
• (6) M. Randeria and E. Taylor, Ann. Rev. Cond. Matt. 5, 209 (2014).
• (7) W. Ketterle and M. W. Zwierlein, in Ultracold Fermi Gases, Proceedings of the International School of Physics “Enrico Fermi”, Course CLXIV, M. Inguscio, W. Ketterle, and C. Salomon (eds.), IOS Press, Amsterdam, 2008.
• (8) K. Winkler, G. Thalhammer, F. Lang, R. Grimm, J. Hecker Denschlag, A. J. Daley, A. Kantian, H. P. Büchler, and P. Zoller, Nature 441, 853 (2006).
• (9) S. Inouye, M. R. Andrews, J. Stenger, H.-J. Miesner, D. M. Stamper-Kurn, and W. Ketterle, Nature 392, 151 (1998).
• (10) E. A. Donley, N. R. Claussen, S. T. Thompson, and C. E. Wieman, Nature 417, 529 (2002).
• (11) J. Stenger, S. Inouye, M. R. Andrews, H.-J. Miesner, D. M. Stamper-Kurn, and W. Ketterle, Phys. Rev. Lett. 82, 2422 (1999).
• (12) R. Jördens, N. Strohmaier, K. Günter, H. Moritz, and T. Esslinger, Nature 455, 204 (2008).
• (13) B. Deissler, M. Zaccanti, G. Roati, C. D’Errico, M. Fattori, M. Modugno, G. Modugno, and M. Inguscio, Nature Phys. 6, 354 (2010).
• (14) N. Syassen, D. M. Bauer, M. Lettner, D. Dietze, T. Volz, S. Dürr, and G. Rempe, Phys. Rev. Lett. 99, 033201 (2007).
• (15) Th. Busch, B.-G. Englert, K. Rzazewski, and M. Wilkens, Found. Phys. 28, 549 (1998).
• (16) M. L. Olsen, J. D. Perreault, T. D. Cumby, and D. S. Jin, Phys. Rev. A 80, 030701(R) (2009).
• (17) K. Sengupta and N. Dupuis, Europhys. Lett. 70, 586 (2005).
• (18) L. Radzihovsky, J. I. Park, and P. B. Weichman, Phys. Rev. Lett. 92, 160402 (2004).
• (19) M. W. J. Romans, R. A. Duine, S. Sachdev, and H. T. C. Stoof, Phys. Rev. Lett. 93, 020405 (2004).
• (20) L. Radzihovsky, P. B. Weichman, and J. I. Park, Ann. Phys. 323, 2376 (2008).
• (21) L. de Forges de Parny, V. G. Rousseau, and T. Roscilde, Phys. Rev. Lett. 114, 195302 (2015).
• (22) S. Capponi, G. Roux, P. Azaria, E. Boulat, and P. Lecheminant, Phys. Rev. B 75, 100503(R) (2007).
• (23) G. Roux, S. Capponi, P. Lecheminant, and P. Azaria, Eur. Phys. J. B 68, 293 (2009).
• (24) S. Ejima, M. J. Bhaseen, M. Hohenadler, F. H. L. Essler, H. Fehske, and B. D. Simons, Phys. Rev. Lett. 106, 015303 (2011).
• (25) M. J. Bhaseen, S. Ejima, F. H. L. Essler, H. Fehske, M. Hohenadler, and B. D. Simons, Phys. Rev. A 85, 033636 (2012).
• (26) N. F. Mott and R. Peierls, Proceedings of the Physical Society, vol. 49, no. 4S, pp. 72-73, 1937.
• (27) J. V. José (Ed.), 40 Years of Berezinskii-Kosterlitz-Thouless Theory, World Scientific, 2013.
• (28) L. de Forges de Parny, A. Rançon, and T. Roscilde, Phys. Rev. A 93, 023639 (2016).
• (29) D. B. M. Dickerscheid, U. Al Khawaja, D. van Oosten, and H. T. C. Stoof, Phys. Rev. A 71, 043604 (2005).
• (30) T. Köhler, K. Góral, and P. S. Julienne, Rev. Mod. Phys. 78, 1311 (2006).
• (31) V. G. Rousseau and P. J. H. Denteneer, Phys. Rev. A 77, 013609 (2008); V. G. Rousseau and P. J. H. Denteneer, Phys. Rev. Lett. 102, 015301 (2009).
• (32) M. Eckholt and T. Roscilde, Phys. Rev. Lett. 105, 199603 (2010).
• (33) V. G. Rousseau, Phys. Rev. E 77, 056705 (2008).
• (34) V. G. Rousseau, Phys. Rev. E 78, 056707 (2008).
• (35) L. de Forges de Parny, F. Hébert, V. G. Rousseau, R. T. Scalettar, and G. G. Batrouni, Phys. Rev. B 84, 064529 (2011).
• (36) L. de Forges de Parny, F. Hébert, V. G. Rousseau, and G. G. Batrouni, Phys. Rev. B 88, 104509 (2013).
• (37) L. de Forges de Parny, F. Hébert, V. G. Rousseau, and G. G. Batrouni, Eur. Phys. J. B 85, 169 (2012).
• (38) D. M. Ceperley and E. L. Pollock, Phys. Rev. B 39, 2084 (1984).
• (39) Interestingly, first-order transitions are generically found in three dimensions when the breaking of a U(1) symmetry is involved. See for instance: V. Thanh Ngo and H. T. Diep, J. Appl. Phys. 103, 07C712 (2008); A. O. Sorokin, JETP 118, 417 (2014).
• (40) B. Capogrosso-Sansone, N. V. Prokof’ev, and B. V. Svistunov, Phys. Rev. A 77, 015602 (2008).
• (41) P. C. Hohenberg, Phys. Rev. 158, 383 (1967); N. D. Mermin, Phys. Rev. 176, 250 (1968); N. D. Mermin and H. Wagner, Phys. Rev. Lett. 17, 1133 (1966).
• (42) A. Pelissetto and E. Vicari, Phys. Rep. 368, 549 (2002).
• (43) M. Le Bellac, “Quantum and Statistical Field Theory”, Oxford University Press (1992).
• (44) D. R. Nelson and J. M. Kosterlitz, Phys. Rev. Lett. 39, 1201 (1977).
• (45) S. Mukerjee, C. Xu, and J. E. Moore, Phys. Rev. Lett. 97, 120406 (2006).
• (46) L. Bonnes and S. Wessel, Phys. Rev. Lett. 106, 185302 (2011).
• (47) K.-K. Ng and M.-F. Yang, Phys. Rev. B 83, 100511 (2011).
• (48) J. Herbig, T. Kraemer, M. Mark, T. Weber, C. Chin, H.-C. Nägerl, and R. Grimm, Science 301, 1510 (2003).
https://socratic.org/questions/what-is-f-x-int-x-2e-x-1-x-3e-x-dx-if-f-2-7
# What is $f(x) = \int x^2 e^{x-1} - x^3 e^{-x} \, dx$ if $f(2) = 7$?
May 25, 2018
$x^3 e^{-x} + x^2 e^{x-1} + 3x^2 e^{-x} - 2x e^{x-1} + 6x e^{-x} + 2 e^{x-1} + 6 e^{-x} + C$
We get $C$ from the following equation:
$38 e^{-2} + 2e + C = 7$, so $C = 7 - 2e - 38 e^{-2}$.
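The result can be sanity-checked numerically (a Python sketch, not part of the original Socratic answer; `F` below denotes the antiderivative without the constant):

```python
import math

def F(x):
    """Antiderivative from the answer above, without the constant C."""
    return (x**3 * math.exp(-x) + x**2 * math.exp(x - 1)
            + 3 * x**2 * math.exp(-x) - 2 * x * math.exp(x - 1)
            + 6 * x * math.exp(-x) + 2 * math.exp(x - 1) + 6 * math.exp(-x))

def integrand(x):
    return x**2 * math.exp(x - 1) - x**3 * math.exp(-x)

# F'(x) should reproduce the integrand (central finite difference)
h = 1e-6
for x in (0.5, 1.0, 2.0, 3.0):
    deriv = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(deriv - integrand(x)) < 1e-5

# Impose f(2) = F(2) + C = 7, where F(2) = 38 e^{-2} + 2e
C = 7 - F(2.0)
assert abs(C - (7 - 2 * math.e - 38 * math.exp(-2))) < 1e-9
```

Numerically, $C = 7 - 2e - 38e^{-2} \approx -3.58$.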
https://forum.bebac.at/forum_entry.php?id=21952&order=time | ## Purpose of the study? [Power / Sample Size]
Hi Helmut,
I have a question related to this topic: if the study design is a 5×5 crossover trial (reference, 3 treatments, and placebo), how would I estimate the sample size?
https://stemteachersnyc.org/energy-is-energy-a-two-part-workshop-on-a-consistent-approach-in-bio-chem-and-physics/ | Students often struggle with understanding energy and comprehending its crucial role in all science disciplines. This workshop will show you how to build a consistent model for energy across bio, chem, and physics. You will explore key methods to help students understand how energy, a key cross-cutting concept in the Next Generation Science Standards, can be represented by the unified idea of energy storage and transfer in the physical and biological world. We will use a simple but powerful representation tool, the Conservation of Energy (COE) diagram, to qualitatively keep track of energy in various changes. This graphical, visual approach eliminates the conflicts among the ways energy is usually taught in bio, chem, and physics, thus helping students build a deep, applicable understanding of this important concept as they study the various disciplines from 6th through 12th grade.
In Part 1, we will use COE diagrams to represent energy storage and transfer in various demonstrations and scenarios in physics, building the fundamental idea of how energy storage changes when objects/particles attracted to each other are separated. We then use this idea to describe how energy is stored during observable phase changes based on particle behavior change. This paves the way for understanding energy in chemical changes and applying the same idea in biological systems, which will be addressed in Part 2 of the workshop scheduled tentatively for February 4, 2017. After attending both parts you will come away with a clear understanding of how to successfully implement some excellent visual representations to incorporate in any science course and to guide your students to the profound understanding that energy is energy: the currency for change in all of science. Energy does not change form; only the way that energy is stored in a system changes.
http://repo.scoap3.org/search?f=author&p=Falkowski%2C%20Adam&ln=en | SCOAP3 Repository: 22 records found, showing 1-10 (search took 0.01 seconds).

1. Hadronic $\tau$ Decays as New Physics Probes in the LHC Era / Cirigliano, Vincenzo; Falkowski, Adam; González-Alonso, Martín; Rodríguez-Sánchez, Antonio. "We analyze the sensitivity of hadronic $\tau$ decays to nonstandard interactions within the model-independent framework of the standard model effective field theory. [...]" Published in Physical Review Letters 122 (2019), 10.1103/PhysRevLett.122.221801.
2. The CKM parameters in the SMEFT / Descotes-Genon, Sébastien; Falkowski, Adam; Fedele, Marco; González-Alonso, Martín; et al. "The extraction of the Cabibbo-Kobayashi-Maskawa (CKM) matrix from flavour observables can be affected by physics beyond the Standard Model (SM). [...]" Published in JHEP 1905 (2019) 172, 10.1007/JHEP05(2019)172, arXiv:1812.08163.
3. Reactor neutrino oscillations as constraints on effective field theory / Falkowski, Adam; González-Alonso, Martín; Tabrizi, Zahra. "We study constraints on the Standard Model Effective Field Theory (SMEFT) from neutrino oscillations in short-baseline reactor experiments. [...]" Published in JHEP 1905 (2019) 173, 10.1007/JHEP05(2019)173, arXiv:1901.04553.
4. Light dark matter from leptogenesis / Falkowski, Adam; Kuflik, Eric; Levi, Noam; Volansky, Tomer. "We consider the implications of a shared production mechanism between the baryon asymmetry of the universe and the relic abundance of dark matter, that does not result in matching asymmetries. [...]" Published in Physical Review D 99 (2019), 10.1103/PhysRevD.99.015022, arXiv:1712.07652.
5. Flavourful Z′ portal for vector-like neutrino dark matter and $R_{K^{(*)}}$ / Falkowski, Adam; King, Stephen; Perdomo, Elena; Pierre, Mathias. "We discuss a flavourful Z′ portal model with a coupling to fourth-family singlet Dirac neutrino dark matter. [...]" Published in JHEP 1808 (2018) 061, 10.1007/JHEP08(2018)061, arXiv:1803.04430.
6. Future DUNE constraints on EFT / Falkowski, Adam; Grilli di Cortona, Giovanni; Tabrizi, Zahra. "In the near future, fundamental interactions at high-energy scales may be most efficiently studied via precision measurements at low energies. [...]" Published in JHEP 1804 (2018) 101, 10.1007/JHEP04(2018)101, arXiv:1802.08296.
7. CP violation in 2HDM and EFT: the ZZZ vertex / Bélusca-Maïto, Hermès; Falkowski, Adam; Fontes, Duarte; Romão, Jorge; et al. "We study the CP violating ZZZ vertex in the two-Higgs doublet model, which is a probe of a Jarlskog-type invariant in the extended Higgs sector. [...]" Published in JHEP 1804 (2018) 002, 10.1007/JHEP04(2018)002, arXiv:1710.05563.
8. Compilation of low-energy constraints on 4-fermion operators in the SMEFT / Falkowski, Adam; González-Alonso, Martín; Mimouni, Kin. "We compile information from low-energy observables sensitive to flavor-conserving 4-fermion operators with two or four leptons. [...]" Published in JHEP 1708 (2017) 123, 10.1007/JHEP08(2017)123, arXiv:1706.03783.
9. Higgs EFT for 2HDM and beyond / Bélusca-Maïto, Hermès; Falkowski, Adam; Fontes, Duarte; Romão, Jorge C.; et al. "We discuss the validity of the Standard Model Effective Field Theory (SM EFT) as the low-energy effective theory for the two-Higgs-doublet Model (2HDM). [...]" Published in EPJC 77 (2017) 176, 10.1140/epjc/s10052-017-4745-5.
10. Anomalous triple gauge couplings in the effective field theory approach at the LHC / Falkowski, Adam; González-Alonso, Martín; Greljo, Admir; Marzocca, David; et al. "We discuss how to perform consistent extractions of anomalous triple gauge couplings (aTGC) from electroweak boson pair production at the LHC in the Standard Model Effective Field Theory (SMEFT). [...]" Published in JHEP 1702 (2017) 115, 10.1007/JHEP02(2017)115, arXiv:1609.06312.
http://math.stackexchange.com/users/40146/minthao-2011?tab=questions&sort=newest | # minthao_2011
# 31 Questions
### A system of equations of Vietnamese Mathematical Olympiad 2013
jan 11 '13 at 15:36 minthao_2011 700
### How to solve this equation with another way?
oct 4 '12 at 3:57 minthao_2011 700
https://nonsmooth.gricad-pages.univ-grenoble-alpes.fr/siconos/reference/cpp/kernel/file_FirstOrderNonLinearDS_hpp.html | # File kernel/src/modelingTools/FirstOrderNonLinearDS.hpp¶
Go to the source code of this file
First Order Non Linear Dynamical Systems.
class FirstOrderNonLinearDS : public DynamicalSystem
#include <FirstOrderNonLinearDS.hpp>
General First Order Non Linear Dynamical Systems - $$M(t) \dot{x} = f(x,t,z) + r, \quad x(t_0) = x_0$$.
This class defines and computes a generic n-dimensional dynamical system of the form :
$$M \dot x = f(x,t,z) + r, \quad x(t_0) = x_0$$
where
• $$x \in R^{n}$$ is the state.
• $$M \in R^{n\times n}$$ a “mass matrix”
• $$r \in R^{n}$$ the input due to the Non Smooth Interaction.
• $$z \in R^{zSize}$$ is a vector of arbitrary algebraic variables, some sort of discret state. For example, z may be used to set some perturbation parameters, to control the system (z set by actuators) and so on.
• $$f : R^{n} \times R \mapsto R^{n}$$ the vector field.
By default, the DynamicalSystem is considered to be an Initial Value Problem (IVP) and the initial conditions are given by $$x(t_0)=x_0$$. To define a Boundary Value Problem, a pointer on a BoundaryCondition must be set.
The right-hand side and its jacobian (from base classe) are defined as
$\begin{split}rhs &=& \dot x = M^{-1}(f(x,t,z)+ r) \\ jacobianRhsx &=& \nabla_x rhs(x,t,z) = M^{-1}\nabla_x f(x,t,z)\end{split}$
The following operators can be plugged, in the usual way (see User Guide)
• $$f(x,t,z)$$
• $$\nabla_x f(x,t,z)$$
• $$M(t)$$
Subclassed by FirstOrderLinearDS
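To make these definitions concrete, here is a minimal NumPy sketch (illustrative only — this is not Siconos code, and the 2-D vector field below is an arbitrary example) of evaluating the right-hand side and its Jacobian:

```python
import numpy as np

def f(x, t, z):
    """Illustrative vector field f(x, t, z)."""
    return np.array([-x[0] + z[0] * np.sin(t),
                     x[0] * x[1]])

def jacobian_f_x(x, t, z):
    """Gradient of f with respect to the state x."""
    return np.array([[-1.0, 0.0],
                     [x[1], x[0]]])

M = np.array([[2.0, 0.0],
              [0.0, 1.0]])       # "mass matrix" on the left-hand side
x = np.array([1.0, 0.5])         # state
r = np.zeros(2)                  # input due to the nonsmooth interaction
t, z = 0.0, np.array([0.3])

# rhs = M^{-1} (f(x,t,z) + r); solve rather than inverting M explicitly
rhs = np.linalg.solve(M, f(x, t, z) + r)

# jacobianRhsx = M^{-1} grad_x f(x,t,z)
jac_rhs = np.linalg.solve(M, jacobian_f_x(x, t, z))
```

This mirrors the design choice documented below: the class keeps an LU factorization of M (`_invM`) so that systems like Mx = b are solved instead of forming M^{-1} explicitly.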
Right-hand side computation
void initRhs(double time)
allocate (if needed) and compute rhs and its jacobian.
Parameters
• time: of initialization
void initializeNonSmoothInput(unsigned int level)
set nonsmooth input to zero
Parameters
• level: input-level to be initialized.
void computeRhs(double time)
update right-hand side for the current state
Parameters
• time: of interest
void computeJacobianRhsx(double time)
update $$\nabla_x rhs$$ for the current state
Parameters
• time: of interest
virtual void resetAllNonSmoothParts()
reset non-smooth part of the rhs (i.e. r), for all ‘levels’
virtual void resetNonSmoothPart(unsigned int level)
set nonsmooth part of the rhs (i.e. r) to zero for a given level
Parameters
• level:
Attributes access
SP::SiconosMatrix M() const
returns a pointer to M, matrix coefficient on the left-hand side
void setMPtr(SP::SiconosMatrix newM)
set M, matrix coeff of left-hand side (pointer link)
Parameters
• newM: the new M matrix
const SimpleMatrix getInvM() const
get a copy of the LU factorisation of M operator
Return
SimpleMatrix
SP::SiconosMatrix invM() const
get the inverse of LU fact.
Return
pointer to a SiconosMatrix
SP::SiconosVector f() const
void setFPtr(SP::SiconosVector newPtr)
Parameters
• newPtr: a SP::SiconosVector
virtual SP::SiconosMatrix jacobianfx() const
get jacobian of f(x,t,z) with respect to x (pointer link)
Return
SP::SiconosMatrix
void setJacobianfxPtr(SP::SiconosMatrix newPtr)
set jacobian of f(x,t,z) with respect to x (pointer link)
Parameters
• newPtr: the new value
Memory vectors management
const SiconosMemory &rMemory() const
get all the values of the state vector r stored in memory
Return
a memory vector
SP::SiconosVector fold() const
returns previous value of rhs (OSI related)
void initMemory(unsigned int steps)
initialize the SiconosMemory objects: reserve memory for i vectors in memory and reset all to zero.
Parameters
• steps: the size of the SiconosMemory (i)
void swapInMemory()
push the current values of x and r into memory (index 0 of memory is the last inserted vector), i.e. into xMemory and rMemory
Plugins management
virtual void updatePlugins(double time)
Call all plugged-function to initialize plugged-object values.
Parameters
• time: value
void setComputeMFunction(const std::string &pluginPath, const std::string &functionName)
to set a specified function to compute M
Parameters
• pluginPath: the complete path to the plugin
• functionName: function name to use in this library
Exceptions
• SiconosSharedLibraryException:
void setComputeMFunction(FPtr1 fct)
set a specified function to compute M
Parameters
• fct: a pointer on the plugin function
void setComputeFFunction(const std::string &pluginPath, const std::string &functionName)
to set a specified function to compute f(x,t)
Parameters
• pluginPath: the complete path to the plugin
• functionName: the function name to use in this library
Exceptions
• SiconosSharedLibraryException:
void setComputeFFunction(FPtr1 fct)
set a specified function to compute the vector f
Parameters
• fct: a pointer on the plugin function
void setComputeJacobianfxFunction(const std::string &pluginPath, const std::string &functionName)
to set a specified function to compute jacobianfx
Parameters
• pluginPath: the complete path to the plugin
• functionName: function name to use in this library
Exceptions
• SiconosSharedLibraryException:
void setComputeJacobianfxFunction(FPtr1 fct)
set a specified function to compute jacobianfx
Parameters
• fct: a pointer on the plugin function
void computeM(double time)
Default function to compute $$M: (x,t)$$.
Parameters
• time: time instant used in the computations
virtual void computef(double time, SP::SiconosVector state)
Default function to compute $$f: (x,t)$$.
Parameters
• time: time instant used in the computations function to compute $$f: (x,t)$$
• time: time instant used in the computations
• state: x value
virtual void computeJacobianfx(double time, SP::SiconosVector state)
Default function to compute $$\nabla_x f: (x,t) \in R^{n} \times R \mapsto R^{n \times n}$$ with x different from current saved state.
Parameters
• time: instant used in the computations
• state: a SiconosVector to store the resuting value
SP::PluggedObject getPluginF() const
Get _pluginf.
Return
a SP::PluggedObject
SP::PluggedObject getPluginJacxf() const
Get _pluginJacxf.
Return
a SP::PluggedObject
SP::PluggedObject getPluginM() const
Get _pluginM.
Return
a SP::PluggedObject
Miscellaneous public methods
void display() const
print the data of the dynamical system on the standard output
Public Functions
FirstOrderNonLinearDS(SP::SiconosVector newX0)
constructor from initial state, leads to $$\dot x = r$$
Warning
you need to set explicitely the plugin for f and its jacobian if needed (e.g. if used with an EventDriven scheme)
Parameters
• newX0: initial state
FirstOrderNonLinearDS(SP::SiconosVector newX0, const std::string &fPlugin, const std::string &jacobianfxPlugin)
constructor from initial state and f (plugins), $$\dot x = f(x, t, z) + r$$
Parameters
• newX0: initial state
• fPlugin: name of the plugin function to be used for f(x,t,z)
• jacobianfxPlugin: name of the plugin to be used for the jacobian of f(x,t,z)
FirstOrderNonLinearDS(const FirstOrderNonLinearDS &FONLDS)
Copy constructor.
Parameters
virtual ~FirstOrderNonLinearDS()
destructor
ACCEPT_STD_VISITORS()
Protected Functions
FirstOrderNonLinearDS()
default constructor
void _init(SP::SiconosVector initial_state)
Common code for constructors should be replaced in C++11 by delegating constructors.
Parameters
• initial_state: vector of initial values for state
virtual void _zeroPlugin()
Reset the PluggedObjects.
ACCEPT_SERIALIZATION(FirstOrderNonLinearDS)
Protected Attributes
SP::SiconosVector _f
value of f(x,t,z)
SP::SiconosVector _fold
to store f(x_k,t_k,z_k)
SP::SiconosMatrix _invM
Copy of M Matrix, LU-factorized, used to solve systems like Mx = b with LU-factorization.
(Warning: may not exist, used if we need to avoid factorization in place of M)
SP::SiconosMatrix _jacobianfx
Gradient of $$f(x,t,z)$$ with respect to $$x$$.
SP::SiconosMatrix _M
Matrix coefficient of $$\dot x$$.
SP::PluggedObject _pluginf
DynamicalSystem plug-in to compute f(x,t,z)
Parameters
• current: time
• size: of the vector _x
• pointer: to the first element of the vector _x
• the: pointer to the first element of the vector _f
• the: size of the vector _z
• a: vector of parameters _z
SP::PluggedObject _pluginJacxf
DynamicalSystem plug-in to compute the gradient of f(x,t,z) with respect to the state: $$\nabla_x f: (x,t,z) \in R^{n} \times R \mapsto R^{n \times n}$$.
Parameters
• time: current time
• sizeOfX: size of vector x
• x: pointer to the first element of x
• jacob: pointer to the first element of jacobianfx matrix
• the: size of the vector z
• a: vector of parameters, z
SP::PluggedObject _pluginM
SiconosMemory _rMemory
the previous r vectors
Private Types
typedef void (*FNLDSPtrfct)(double, unsigned int, const double *, double *, unsigned int, double *)
plugin signature
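Such a plug-in is normally written in C and loaded from a shared library. As a rough illustration of the calling convention only (not Siconos code; the toy vector field is made up), the same signature can be mirrored in Python with ctypes:

```python
import ctypes

# void (*FNLDSPtrfct)(double, unsigned int, const double*, double*,
#                     unsigned int, double*)
FNLDSPtrfct = ctypes.CFUNCTYPE(
    None,                             # void return
    ctypes.c_double,                  # time
    ctypes.c_uint,                    # size of the vector x
    ctypes.POINTER(ctypes.c_double),  # pointer to the first element of x
    ctypes.POINTER(ctypes.c_double),  # pointer to the first element of f (output)
    ctypes.c_uint,                    # size of the vector z
    ctypes.POINTER(ctypes.c_double),  # pointer to the first element of z
)

@FNLDSPtrfct
def compute_f(time, n, x, out, nz, z):
    # Toy plug-in: f_i = -x_i + z_0 * t for each component
    for i in range(n):
        out[i] = -x[i] + z[0] * time

# Exercise the callback the way a host library would
n = 2
x = (ctypes.c_double * n)(1.0, 2.0)
out = (ctypes.c_double * n)()
z = (ctypes.c_double * 1)(0.5)
compute_f(1.0, n, x, out, 1, z)
print(list(out))  # [-0.5, -1.5]
```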