url
stringlengths 14
2.42k
| text
stringlengths 100
1.02M
| date
stringlengths 19
19
| metadata
stringlengths 1.06k
1.1k
|
|---|---|---|---|
https://www.alloprof.qc.ca/helpzone/discussion/29463/question
|
# Help Zone
### Student Question
Grade 5 • 2mo.
My question is do you know how Classcraft works???
History
{t c="richEditor.description.title"} {t c="richEditor.description.paragraphMenu"} {t c="richEditor.description.inlineMenu"} {t c="richEditor.description.embed"}
## Explanations (2)
• Options
Grade 5 • 2mo.
you can go on a site called groupe 5d and there you will find classcraft you can open it and take a look on how it works.
• Explanation from Alloprof
Explanation from Alloprof
This Explanation was submitted by a member of the Alloprof team.
Options
Team Alloprof • 2mo.
Hello,
Here's a Youtube tutorial I found that may help you with classcraft.
https://www.youtube.com/watch?v=tQfNFTMc8kA
Hope it helps!
Emilie
|
2022-07-01 19:51:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21313706040382385, "perplexity": 12251.77449094482}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103945490.54/warc/CC-MAIN-20220701185955-20220701215955-00389.warc.gz"}
|
https://gmatclub.com/forum/if-x-and-y-are-positive-integers-and-1620x-y-2-is-the-square-of-an-odd-293752.html
|
GMAT Question of the Day: Daily via email | Daily via Instagram New to GMAT Club? Watch this Video
It is currently 19 Feb 2020, 22:07
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# If x and y are positive integers and 1620x/y^2 is the square of an odd
Author Message
TAGS:
### Hide Tags
Senior Manager
Status: Gathering chakra
Joined: 05 Feb 2018
Posts: 443
If x and y are positive integers and 1620x/y^2 is the square of an odd [#permalink]
### Show Tags
19 Apr 2019, 11:23
7
00:00
Difficulty:
35% (medium)
Question Stats:
74% (02:16) correct 26% (02:16) wrong based on 90 sessions
### HideShow timer Statistics
If x and y are positive integers and $$\frac{1620x}{y^2}$$ is the square of an odd integer, what is the smallest possible value of xy?
A) 1
B) 8
C) 10
D) 15
E) 28
My solution:
1620x/y² = (Odd#)²
√(1620x/y²) = Odd#
here I factor 1620 = 162*10 -> 81*2²*5 -> 9²*2²*5*x
9*2*√(5*x) /y = Odd#
E/E and O/O can both give Odd# ints, but since there's a 2 in the numerator, it means E/E ... so y has to be even
The smallest even int is 2, so y = 2
Now to remove the sqroot in the numerator x must be at least 5, so the minimum value is 2*5 = 10
Intern
Joined: 09 Apr 2019
Posts: 8
Location: Brazil
Schools: Wharton, Kellogg, Booth
GPA: 3.45
If x and y are positive integers and 1620x/y^2 is the square of an odd [#permalink]
### Show Tags
19 Apr 2019, 15:28
5
$$\frac{1620x}{y^2} = z^2$$, with z being an odd integer(or $$2k+1$$, if you fancy)
Having in mind that $$\sqrt{\frac{1620x}{y^2}}$$ is an integer, we can easily see that $$√y^2 = y$$ and now we need to factor $$1620x$$ to find an x that makes $$\frac{1}{y}\sqrt{1620x}$$ also integer:
$$1620x = 3^4*2^2*5*x$$
Now we have $$\frac{3^2*2√5x}{y}$$ integer. Having in mind that √5x must also be an integer, we are looking for the smallest possible integer x (since we are also looking for the smallest xy) that makes the square root integer, which is x=5.
With that we find $$\frac{90}{y} = 2k+1 (odd)$$,
So, to turn an even number, such as $$90$$, into an odd one we need to divide it by another even number, which makes:
$$y=2k (even)$$ , but...
Hey! We are also looking for the the smallest product for xy, so y must be the smallest even number, which is 2.
Finally,
$$x = 5$$ and $$y = 2$$, so $$xy = 10$$
##### General Discussion
VP
Joined: 24 Nov 2016
Posts: 1218
Location: United States
If x and y are positive integers and 1620x/y^2 is the square of an odd [#permalink]
### Show Tags
17 Jan 2020, 07:07
energetics wrote:
If x and y are positive integers and $$\frac{1620x}{y^2}$$ is the square of an odd integer, what is the smallest possible value of xy?
A) 1
B) 8
C) 10
D) 15
E) 28
E/E=even, odd, fraction, undefined
O/O=odd, fraction
E/O=even, fraction
O/E=undefined, fraction
$$\frac{1620x}{y^2}=odd^2…odd^2=odd*odd$$
$$1620x=even…\frac{even}{y^2}=odd^2…y^2=even…(\frac{E}{E}=odd)$$
$$1620=162*10=81*2*2*5=3^42^25$$
$$\frac{1620x}{y^2}=odd^2=perf.square…powers(1620x)=even$$
$$1620x=3^42^25x…minimum(x)=5…min(y=even)=2…min(xy)=10$$
Ans (C)
Intern
Joined: 20 Dec 2019
Posts: 35
Re: If x and y are positive integers and 1620x/y^2 is the square of an odd [#permalink]
### Show Tags
25 Jan 2020, 10:29
Quick solution - Reduce 1620 into factors - 2*5*2*81
Now writing it in square form - 2^2 * 9^2 * 5 x/y^2
x has to be 5 for numerator to form a perfect square. For xy to be minimum, choose value of y which is the least such that 1620x/y^2 is an integer which gives y = 2. Hence xy = 10
Re: If x and y are positive integers and 1620x/y^2 is the square of an odd [#permalink] 25 Jan 2020, 10:29
Display posts from previous: Sort by
|
2020-02-20 06:07:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7026915550231934, "perplexity": 1754.9823984790924}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144637.88/warc/CC-MAIN-20200220035657-20200220065657-00282.warc.gz"}
|
https://room538ccpp.wordpress.com/
|
## A hilarious reply to referee.August 2, 2009
Posted by keithkchan in fun stuffs.
For some totally irrelevant reason, I found an hilarious reply to the referee’s report on Martin White’s web page. Fortunately, I haven’t encountered such funny referees and editors. If you are not so fortunately, you may want to imitate this reply. Probably you need to acknowledge the original author Roy F. Baumeister. Besides that, I don’t know what you can do. Maybe submit it to another journal.
## Some post-workshop irrelevant thoughtsJuly 28, 2009
Posted by keithkchan in Whatever.
I would like to collect some of my thoughts about the trip to the wild west of the USA. I am not going to talk about what I learn academically, for the obvious reason — I did not learn much. Fortunately I don’t need to report anything to my advisor, so it is OK. But usually in this type of workshop, we can only get a rough idea of what other people are doing in the adjacent fields. It is also a good chance to meet people working in the same field. Often it is a good approximation that this kind of trip can be transformed to a nice travel experience. I went to the Great Sand Dunes in Colorado and the Grand Canyon in Arizona. So, this time the approximation is excellent.
The exact location that I went to was Santa Fe, New Mexico. I realize that the life outside New York City is completely different. Without a car, you can do nothing. When I planed to go there, I thought I could buy soap, shampoo blah blah blah there easily, just as in Manhattan. Oh my beloved Nature, that was completely wrong. The undergraduates there told me that I could get to the nearest CVS by walk in one hour. Fortunately the undergraduates were so kind that they drove me to CVS by car. The pace of life there is slower than in New York. People there are generally nicer and more polite than New Yorkers.
One thing that annoys me pretty much is that people there are quite religious. One can find churches everywhere. The buildings, streets are usually named as Saint XYZ. OK, that is just some names. Who care? Near Santa Fe, it is the Los Alamos National Lab (LANL). A cosmologist working at LANL told me that people in that town were pretty religious. As you may know, at LANL, some people are working on weapons of mass destruction. Those people are particularly religious. That seems to me pretty understandable. For atheists, we at most make fun of those religious people. But religion can drive people insane. They can do crazy things if they believe themselves to be messengers, disciples or whatever of the god, and they think you don’t believe in his god, or even worse believe in the wrong god.
Even within the participants in the workshop, they are more religious than my colleagues at NYU. One guy believes that god can change the laws of physics at will. Moses could violate the laws of physics when he separated the water in Red Sea. But he worked on dark matter observation. What if he observes is just some tricks played by god? Then his work is totally meaningless. I thing he should add in the paper that the phenomenon he observed can be an artifact due to god. What they work on and they believe are fundamentally inconsistent.
There are two possibilities for the higher percentage of “religiousness”. First simply people outside New York are more religious. Secondly, astronomers are more religious than physicists. For physicists, the laws of physicists are fundamental, it can’t be changed arbitrarily. I will never ever identify myself as an astronomer.
## Santa Fe Cosmology Workshop 2009July 14, 2009
Posted by keithkchan in Cosmology.
I am now in Santa Fe cosmology workshop 2009. I have been here for 1.5 weeks. I will be here for 1.5 more weeks. As the title suggested, it is on cosmology. The talks are online. If you are interested in cosmology, you can check it out here . Some of them are pretty boring, though I am not going to name them. Our colleagues Kwan Chuen Chan and Eyal Kazin at room 538 also gave talks there. Since there are a lot of stuffs going on here. I will shut up until I go back to New York.
## N-body simulation of DGP modelJuly 1, 2009
Posted by keithkchan in Cosmology, Journal club.
1 comment so far
I am very happy to have Kwan Chuen Chan from Center for Particles Physics and Cosmology, New York University to talk about their new paper. He is a grad student at NYU, working with Roman Scoccimarro. His office is in Room 538 CCPP.
Keith
Thank Keith for inviting me to blog about our recent paper. In this post I will briefly talk about the paper that Roman Scoccimarro and I just uploaded to the arXiv. I will keep it brief and elementary, so for more details, please refer to the original paper arXiv:0906.4548.
Here is the abstract
Large-Scale Structure in Brane-Induced Gravity II. Numerical Simulations
Authors: K. C. Chan, Roman Scoccimarro
(Submitted on 24 Jun 2009)
Abstract: We use N-body simulations to study the nonlinear structure formation in brane-induced gravity, developing a new method that requires alternate use of Fast Fourier Transforms and relaxation. This enables us to compute the nonlinear matter power spectrum and bispectrum, the halo mass function, and the halo bias. From the simulation results, we confirm the expectations based on analytic arguments that the Vainshtein mechanism does operate as anticipated, with the density power spectrum approaching that of standard gravity within a modified background evolution in the nonlinear regime. The transition is very broad and there is no well defined Vainshtein scale, but roughly this corresponds to k_*= 2 h/Mpc at redshift z=1 and k_*=1 h/Mpc$at z=0. We checked that while extrinsic curvature fluctuations go nonlinear, and the dynamics of the brane-bending mode$C$receives important nonlinear corrections, this mode does get suppressed compared to density perturbations, effectively decoupling from the standard gravity sector. At the same time, there is no violation of the weak field limit for metric perturbations associated with$C\$. We find good agreement between our measurements and the predictions for the nonlinear power spectrum presented in paper I, that rely on a renormalization of the linear spectrum due to nonlinearities in the modified gravity sector. A similar prediction for the mass function shows the right trends but we were unable to test this accurately due to lack of simulation volume and mass resolution. Our simulations also confirm the induced change in the bispectrum configuration dependence predicted in paper I.
DGP model is an extra-dimension model, which has one co-dimension, and ordinary matter lives on the 3-brane. The graviton propagator is modified in the infrared. One of the interesting properties of this model is that it exhibits self-accelerating solution. The hope was that the recent observed cosmic acceleration may be due to modification of gravity rather than the mysterious dark energy. However, both theoretically and observationally, this model is proved to be unfavorable. However, this model has inspired a bunch of more sophisticated models such as degravitation, galleon. One of the serious problem in modification of gravity is that it induces new degrees of freedom. The theory can usually be approximated as a scalar-tensor theory. But any scalar degree of freedom is likely to be highly constrained by current solar system experiments. There are two nice ways have been put forward to evade this kind of constraints. One of them is chameleon mechanism, which has been realized in $f(R)$ gravity. The other mechanism is called Vainshtein effect, which is incorporated in DGP and some massive gravity models. The scalar degree of freedom becomes strongly coupling and frozen because of the derivative self-interactions. The theory effectively becomes GR.
In this paper, using numerical simulations, we study this type of brane-induced gravity in the nonlinear regime, in particular the Vainshtein effect. We compute the cosmological observables: the power spectrum, bispectrum, mass function, and bias, which give us the signatures of the DGP model, and help us to differentiate modified gravity model from dark energy. In the companion paper arXiv:0906.4545 by Scoccimarro, the model is studied by perturbative calculations. Some of the results are checked against the numerical results in this work.
The method we used is the N-body simulation, which is largely similar to the standard gravity one. However, in GR, the field equation in the subhorizon, non-relativistic regime is just the Poisson equation, now we need to solve a fully nonlinear partial differential equation. Let me write down the equations although I am not attempting to explain it in details
$\bar{\nabla}^2 \phi - \frac{1}{\eta} \sqrt{ - \bar{\nabla}^2 } \phi + \frac{1}{2 \eta} \bar{\nabla}^2 C + \frac{ 3 \eta^2 - 5 \eta + 1 }{2 \eta^2 (2 \eta -1) } \sqrt{ - \bar{\nabla}^2 } C = \frac{3}{2} \frac{\eta -1 }{\eta} \delta$
$(\bar{\nabla}^2 C)^2 + \alpha \bar{\nabla}^2 C - (\bar{ \nabla}_{ij} C)^2 + \frac{ 3 \beta (\eta -1) }{2 \eta-1 } \sqrt{ - \bar{\nabla}^2 } C = \frac{ 3( \eta -1 ) } {\eta } ( 1- \beta \bar{\nabla}^{-1} ) \delta,$
The first equation is analogous to the Poisson equation, but now we have one more field C, whose equation of motion is given by the second one. The nonlocal term like $\sqrt{ - \bar{\nabla}^2 } C$ can be easily handled in the Fourier space. The real headache comes from the nonlinear derivative terms $(\bar{\nabla}^2 C)^2$ and $(\bar{ \nabla}_{ij} C)^2$. One of the major achievement in this paper is that we developed a convergent method to solve this set of equations consistently. It involves alternate use of relaxation and Fast Fourier transform (so we call it FFT-relaxation method). Although that is a main result of the paper, I am not going to talk about it in details so as not to get too technical and dry. But interested readers are welcome to read the original paper.
Let me get to the results. As I have mentioned, from the simulations we have measured the power spectrum, bispectrum, mass function and bias. Here I only show the power spectrum.
In the first figure we show the power spectrum from three different models, which are the fully nonlinear DGP model (nlDGP), linearized DGP model (lDGP) and the GR with the same expansion history as the DGP model (GRH), which essentially is the GR limit. In order to see the difference more clearly, we have shown the ratios of power spectrum from various models, $P_{\rm nlDGP} / P_{\rm lDGP}$ and $P_{\rm GRH} / P_{\rm nl DGP}$ in the lower figure. In the large scales (small k), the full nonlinear DGP model reduces to the linear one. More interestingly in the nonlinear regime (large k limit), the fully nonlinear DGP model approaches the GR with the same expansion history. This demonstrates that Vainshtein effect drives the model towards GR limit in the large k regime. The transition is broad and the limit is not yet fully attained in the range shown here.
OK, let me summarize some of the main results here. We have developed a convergent algorithm, FFT-relaxation method, to solve the fully nonlinear field equations in the DGP model. This enables to compute the observables like the power spectrum in the DGP model using numerical simulations. We have demonstrated the Vainshtein effect, and the Vainshtein radius at $z =0$ is about 1 h/Mpc. For more details, please refer to our original paper arXiv:0906.4548.
## Reactable – a new tool to electronic musicJune 24, 2009
Posted by keithkchan in fun stuffs.
My colleague Mr Sjoert (and James) sent me the link to an interesting electronic music generation tool. First have a look at the following Youtube video
When you put the modules on the table they glow and interact with each other to generate sound. You can either add more blocks to it and/or move the blocks to generate new effects. You can just create music by hands. I don’t know if you can program the interaction between the blocks yourself or not. It is cool, isn’t it? For more details see Reactable’s web site.
## The elegant beggarsJune 23, 2009
Posted by keithkchan in fun stuffs, Philosophy.
I am pretty busy these days, for no good reasons as usual. But I have to say something on this blog. This time let me say something I find totally ridiculous here in New York (or America).
The beggars people usually have in mind, at least I have in mind, are usually humble and pitiful. But this is not the case in New York. Beggars can be found everywhere here, particularly in the subway. I find it very annoying because I commute by subway everyday.
“Ladies and gentleman, sorry for disturbing you. My name is Peter. I am homeless, jobless. I am hungry. I have XYZ disease. Blah, blah, blah… I am grateful if you can give me some money or change.” Then the guy goes around the compartment to collect money when somebody is kind enough to give them money. The guy then goes to another compartment to continue his speech. The guy’s voice is loud and clear despite the fact that the guy claims himself to be “sick and hungry”. From the speech, I get the impression that the guy’s voice is energetic and confident, it sounds we have the responsibilities to pay him. Most of the time the beggars decently, sometimes they wear better than I do. In one occasion, the beggar wears a suit, isn’t it totally ridiculous? Should I call them gentlemen instead of beggars?
Most of the time I ignore them. But some people are “kind” enough to give them money. Let’s estimate how much money they make. From my observation, they get on average one dollar in each compartment in the train. Suppose this takes, say 5 minutes. So they make 12 dollars in one hour. Let’s assume they work 8 hours in one day. They make 96 dollars in one day. If somebody gets about 100 dollars a day, how likely that he suffers from hunger? That’s totally ridiculous. In fact, they make more money than I do! As a poor graduate student, I only get 70 dollars each day from the stipend. So I am poorer than a beggar!
So I will say those people who pay these beggars are not kind, but stupid. Almost all those people who ask for money are stronger and bigger than me, and have no apparent disabilities. OK, if they want to find a job in Wall Street, it can be difficult. I don’t think it is so difficult to get a job in McDonalds’ or in a pizzeria. They don’t do it because those stupid people keep on paying them money. This “job” as a beggar is easier and maybe more profitable than being a worker in a restaurant.
My observation is only limited to New York City. I don’t know if this is a local phenomenon, or it also occur in other parts of America. This is one of the ridiculous things I find in New York.
## From spin to mechanical osicillationJune 14, 2009
Posted by keithkchan in fun stuffs, Journal club.
I haven’t updated the blog for some time, so I should say something now.
Well, there is a rather interesting report appear in Nature, Entangled mechanical oscillators. You can you find it on arXiv 0901.4779.
You probably have heard of quantum entanglement many times, which means that the state is not factorizable. A famous example is the Schr$\ddot{o}$dinger’s cat. Measurement causes the wave function to collapse. When you do measurement you either push the cat to the hell or drag it out of the hell. But this is still a thought experiment. Not just because physicists are kind to animals, but also it is impossible to do it on macroscopic scales because of decoherence. All the examples I heard of are limited to entanglement of spin or polarization. But in this the Nature report, a group of physicists at NIST have managed to convert spin entanglement to mechanical oscillations. The experimental details are technical, and I don’t really understand. In (and only in) simple terms, they first entangle the spin of two magnesium and two beryllium ions, and then separate them into two potential wells. In each well, there is one magnesium and one beryllium ion, which form an oscillator in the potential well. They then carry some measurements which create the motional entangle state. I don’t understand how they really do it. For those interested in, you should consult their paper.
The significance of this paper is they have for the first time created mechanical entangled states. This is one important step towards Schr$\ddot{o}$dinger. But, cats, no panic, there may be still 500 steps away.
Incidentally, there is an article in Science describing this paper. The article is fine. But don’t read the comments if you don’t know much about this subject. I find them dubious, if not totally ridiculous. You may want to check it against John Baez’s crackpot index.
## Nongaussiantiy in cosmologyJune 1, 2009
Posted by keithkchan in Cosmology.
It seems that I haven’t talked about physics for some time. After all, in the About of this blog, it says it is mainly about physics. Obviously that’s because of my limited knowledge, and most people are are not interested in my research area. Anyway, even if you are not interested in I still introduces it a little bit.
Recently nongaussianity is rather popular topic in cosmology. The primordial density fluctuations in the early universe is very Gaussian. What I mean is that one can think of the density fluctuations drawn from a Gaussian distribution. The Gaussian field is completely characterized by its 2-point correlation function (or power spectrum in Fourier space.) Inflation, at least in simple models, predicts that the fluctuations are highly Gaussian. Because of the nonlinearity of gravity, a small amount of nongaussianity is generated. But the amount is not possible to detect in foreseeable observations (although some claim that 21 cm may do it). The common parametrization for nongaussianity is $\Phi_{\rm NG}=\phi + f_{\rm NL} \phi^2$, where $\phi$ is the Gaussian potential. That is the nongaussianity is generated by the nonlinear in $\phi$. This is a phenomenological expansion in $\phi$, and you can add more terms if you like. The current observational efforts are to constrain the value of $f_{\rm NL}$.
There are two kind of observations that allow people to probe nongaussianity. They are cosmic microwave background and large scale structure. Gaussian field has zero 3-pt function (or bispectrum in Fourier space). So people look for bispectrum in these observations. So far the best limit on $f_{\rm NL }$ is from WMAP data, but obtained by a group other than the WMAP team. The limit is $-4 < f_{\rm NL} < 80$ (95% confidence level). It will be really interesting if the future data exclude 0.
I may talk about nongaussianity more in future if I run out of stuffs to say.
## Will Google take over the world?June 1, 2009
Posted by keithkchan in Philosophy.
Google has released a new product again, Google wave. Google’s products are usually innovative and free. Besides saying it is cool, I am not going to comment on it further.
So far I only use Google as a search engine. In fact I realized I use more than that, e.g. Youtube, maybe more. I still refuse to use Gmail although most of my colleagues use it now. The reason that I worry that Google has really a lot of momentum. In last few years or so, Google has released new products that kick other competitors’ ass. Google’s products are dominant in the market, everybody search using Google, email using Gmail, look for directions in Google map, watch videos on Youtube … It has the potential to eliminate all the competitors and take over the world.
But I now hesitate. Well, Google just makes our life easier. What do you want? The reason that everybody uses it simply because it is free and of high performance. This is just a demonstration of the survival of the fittest. The latest product Google wave is good for scientific collaboration. After all, Google seems very friendly to science so far. Google supports open software, which I personally admire.
Now I begin to question am I simply stupid pig headed, and making fuzz out of nothing? If Google is going to take over the world, the world may be of better shape than it present form.
## A Comparison of the Plotting SoftwaresMay 23, 2009
Posted by keithkchan in Whatever.
We need to plot some graphs from time to time. A good plotting tool is very important, in particular when you are going to put some graphs in the paper.
There are many plotting softwares available. However, I am a fan of the open software, and try my best not to use the commercial softwares, like Matlab and Origin, in plotting. Besides saving money, I admire the cause of open softwares. So far I can limit myself to using only open softwares happily (except Mathematica). So I will only talk about the open softwares only.
So far, I only need to plot not-so-fancy 2D graphs. I have tried several free plotting softwares, including xmgr, gnuplot and matplotlib.
xmgr is a graphical (GUI) plotting tool. Since it is GUI, it is relatively easy to start with. The quality is also good, as far as I remember. For those who used to use GUI, this is a good choice. But somehow I stop using it because it is not installed on the Linux system here at NYU. (Is it xmgr less popular than two other plotting tools? Is it not a routine part of the Linux distribution?)
Both gnuplot and matplotlib are script plotting tools. I think it is harder to begin with, in particular for people still live in the Windows world. If you use Linux or Mac, you probably are familiar with terminal and command-line approach already. I used to use Windows, and I found it pretty hard to accept the command-line softwares at that time. After quitting Windows, I am quite comfortable with scripts now. Of course, you don’t type the scripts every time. The first plot may be painful and time-consuming as you need to find out the appropriate commands to polish your graph. The second plot is going to be similar so you just need to copy from your old scripts. In the long run, the time required should be similar to, if not less than, the GUI plotting tool. I believe that the real master would use the scripts.
gnuplot is a sole plotting tool. The basic commands are easy to find online. I have been using it for some time. However, I (and some of my colleagues) find that the graphs from gnuplot still fall short of the publishing standard. Below I show the same data plotted using gnuplot and matplotlib respectively. Which one looks nicer?
I find the second one looks better, and it is plotted by matplotlib. If you think otherwise, I have nothing to say. matplotlib starts its life as a mimick of matlab, and it is a library in python. As you may know, python is a rather popular scripting language. To use it you have to import the matplotlib library in python. However to install the library may not be so easy, depending on your system. If you use Windows or Mac, you can install it easily using the Enthought Python Distribution, which is free for educational purpose. Certainly I satisfy this condition. I heard that installing it on Linux is pretty tricky. On one hand since it is just a library in python, you need to know the language python a little bit also, the potential barrier to overcome is higher than that for gnuplot. For example, I need to plot some graph using the columns of data in an ASCII file, I find it ridiculously complicated to do. It took me quite some time to google how to do it. On the other hand, as it is just a library in python, learning how to plot a graph using matplotlib you in fact have also started learning the python language. Isn’t it one stone two birds?
So what’s the conclusion? Which one is better? There is no conclusion. It is up to you to decide. I will use both gnuplot and matplotlib whenver convenient.
|
2016-05-29 07:30:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 18, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5079147219657898, "perplexity": 793.7800408909982}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049278417.79/warc/CC-MAIN-20160524002118-00210-ip-10-185-217-139.ec2.internal.warc.gz"}
|
https://dsp.stackexchange.com/questions/31497/how-to-correct-the-phase-offset-for-qpsk-i-q-data
|
# How to correct the phase offset for QPSK I-Q data
I have an I-Q data of QPSK modulation, the data is is smeared and I have to correct the phase offset that causes the smearing problem, how can I apply the Phased-locked loop or the Costas loop on this stage of I-Q data to correct the phase offset problem.
• This question is a bit too broad – you already know that you need a phase correction, so apply it. You don't tell us what your problem is, or at least in what framework you're working, etc. This is a bit like calling your uncle, who's a car mechanic and telling him "I want to build a car. I know I need to add a motor, but how do I do that?"; you will need to give us a lot more background. I'm pretty sure someone can help you - just give us more of a chance to do so :) – Marcus Müller Jun 13 '16 at 21:34
• Are you asking how to implement a Costas Loop as that would certainly track out an arbitrary phase offset? The other approach that may be easier depending on your implementation is to square your signal twice (^4), which will remove the modulation and leave you with a fixed carrier (at 4x your carrier frequency)- without the modulation present you can more simply measure your phase error as the error term into a PLL. – Dan Boschen Jun 14 '16 at 3:16
|
2020-01-18 04:23:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34534284472465515, "perplexity": 550.9822983267803}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250591763.20/warc/CC-MAIN-20200118023429-20200118051429-00541.warc.gz"}
|
https://www.zbmath.org/authors/?q=ai%3Awang.hongxi
|
# zbMATH — the first resource for mathematics
## Wang, Hongxi
Compute Distance To:
Author ID: wang.hongxi Published as: Wang, H.; Wang, H. X.; Wang, Hongxi
Documents Indexed: 88 Publications since 1980, including 2 Books
#### Co-Authors
4 single-authored 1 Gong, Liehang 1 Wang, Honglun 1 Yao, Di 1 Yu, Shumei 1 Yue, Chaohui
#### Serials
4 Annals of Differential Equations 1 Journal of Nanjing University. Mathematical Biquarterly 1 Journal of University of Science and Technology of China 1 Journal of PLA University of Science and Technology. Natural Science Edition
all top 5
#### Fields
4 Ordinary differential equations (34-XX) 1 Integral equations (45-XX) 1 Operator theory (47-XX) 1 Probability theory and stochastic processes (60-XX) 1 Fluid mechanics (76-XX) 1 Biology and other natural sciences (92-XX)
#### Citations contained in zbMATH
47 Publications have been cited 387 times in 352 Documents Cited by Year
Actuator fault diagnosis: An adaptive observer-based technique. Zbl 0858.93040
Wang, H.; Daley, S.
1996
A meshless model for transient heat conduction in functionally graded materials. Zbl 1097.80001
Wang, H.; Qin, Q-H.; Kang, Y-L.
2006
Design of fault diagnosis filters and fault-tolerant control for a class of nonlinear systems. Zbl 1006.93068
Kabore, P.; Wang, H.
2001
A finite frequency domain approach to fault detection for linear discrete-time systems. Zbl 1152.93318
Wang, H.; Yang, G.-H.
2008
A new meshless method for steady-state heat conduction problems in anisotropic and inhomogeneous media. Zbl 1119.80385
Wang, H.; Qin, Q.-H.; Kang, Y. L.
2005
Discrete-continuous model conversion. Zbl 0454.93011
Shieh, L. S.; Wang, H.; Yates, R. E.
1980
Factor profiled sure independence screening. Zbl 1234.62108
Wang, H.
2012
Stabilizability of coupled wave equations in parallel under various boundary conditions. Zbl 0883.93044
Najafi, M.; Sarhangi, G. R.; Wang, H.
1997
Efficient solution of differential equations for kidney concentrating mechanism analyses. Zbl 0744.65037
Tewarson, R. P.; Wang, H.; Stephenson, J. L.; Jen, J. F.
1991
Stability of a class of Runge–Kutta methods for a family of pantograph equations of neutral type. Zbl 1168.65371
Zhao, J. J.; Xu, Y.; Wang, H. X.; Liu, M. Z.
2006
Energy and $$CO_{2}$$ emission performance in electricity generation: a non-radial directional distance function approach. Zbl 1253.90236
Zhou, P.; Ang, B. W.; Wang, H.
2012
A novel PDE based image restoration: Convection-diffusion equation for image denoising. Zbl 1169.94006
Shih, Y.; Rei, C.; Wang, H.
2009
A finite element model for contact analysis of multiple Cosserat bodies. Zbl 1100.74059
Zhang, H. W.; Wang, H.; Wriggers, P.; Schrefler, B. A.
2005
Analysis of Cosserat materials with Voronoi cell finite element method and parametric variational principle. Zbl 1169.74330
Zhang, H. W.; Wang, H.; Chen, B. S.; Xie, Z. Q.
2008
Parallel computation of fluid dynamics problems. Zbl 0846.76080
Ecer, A.; Akay, H. U.; Kemle, W. B.; Wang, H.; Ercoskun, D.; Hall, E. J.
1994
A parallel meshless dynamic cloud method on graphic processing units for unsteady compressible flows past moving boundaries. Zbl 1423.76347
Ma, Z. H.; Wang, H.; Pu, S. H.
2015
Optimal penetration landing trajectories in the presence of windshear. Zbl 0645.93041
Miele, A.; Wang, T.; Wang, H.; Melvin, W. W.
1988
FE approach with Green’s function as internal trial function for simulating bioheat transfer in the human eye. Zbl 1269.74156
Wang, H.; Qin, Q. H.
2010
GPU computing of compressible flow problems by a meshless method with space-filling curves. Zbl 1349.76505
Ma, Z. H.; Wang, H.; Pu, S. H.
2014
Wang, H.; Liu, G. P.; Harris, C. J.; Brown, M.
1995
An efficient parallel algorithm for solving $$n$$-nephron models of the renal inner medulla. Zbl 0798.92014
Wang, H.; Stephenson, J. L.; Deng, Y.-F.; Tewarson, R. P.
1994
Smoothed finite elements large deformation analysis. Zbl 1267.74113
Liu, S. J.; Wang, H.; Zhang, H.
2010
Analysis of dynamic fracture with cohesive crack segment method. Zbl 1153.74373
Wang, H. X.; Wang, S. X.
2008
Modelling and control of nonlinear, operating point dependent systems via associative memory networks. Zbl 0850.93265
Wang, H.; Brown, M.; Harris, C. J.
1996
A comparison of multinephron and shunt models of the renal concentrating mechanism. Zbl 0774.92008
Wang, H.; Tewarson, R. P.; Jen, J. F.; Stephenson, J. L.
1993
Complexity of partially separable convexly constrained optimization with non-Lipschitzian singularities. Zbl 1411.90318
Chen, Xiaojun; Toint, Ph. L.; Wang, H.
2019
Intrinsic branching structure within random walk on $$\mathbb{Z}$$. Zbl 1312.60103
Wang, H.; Hong, W.
2014
Monotonicity of zeros of polynomials orthogonal with respect to an even weight function. Zbl 1297.33009
Jordaan, K.; Wang, H.; Zhou, J.
2014
A quasi-Gauss-Newton method for solving nonlinear algebraic equations. Zbl 0788.65060
Wang, H.; Tewarson, R. P.
1993
A new vector valued similarity measure for intuitionistic fuzzy sets based on OWA operators. Zbl 1429.03167
Fei, L.; Wang, H.; Chen, L.; Deng, Y.
2019
Special elements for composites containing hexagonal and circular fibers. Zbl 1359.74437
Qin, Qing H.; Wang, H.
2015
Phenomenological method for fracture. Zbl 1293.74429
Wang, H.; Li, Lu Xian; Liu, S.-J.
2012
Modelling the dynamics of the tilt-casting process and the effect of the mould design on the casting quality. Zbl 1271.76363
Wang, H.; Djambazov, G.; Pericleous, K. A.; Harding, R. A.; Wickins, M.
2011
A fundamental solution based FE model for thermal analysis of nanocomposites. Zbl 1275.82010
Wang, H.; Qin, Q. H.
2011
A cosine inequality in the hyperbolic geometry. Zbl 1203.53014
Huang, M.; Ponnusamy, S.; Wang, H.; Wang, X.
2010
Three-dimensional vorticity measurements in the wake of a yawed circular cylinder. Zbl 1183.76605
Zhou, T.; Wang, H.; Razali, S. F. Mohd.; Zhou, Y.; Cheng, L.
2010
Mathematica implementation of output-feedback pole assignment for uncertain systems via symbolic algebra. Zbl 1133.93332
Zheng, X.; Zolotas, A. C.; Wang, H.
2006
Orientation of carbon nanotubes in a sheared polymer melt. Zbl 1186.76230
Hobbie, E. K.; Wang, H.; Kim, H.; Lin-Gibson, S.; Grulke, E. A.
2003
Iterative approximation of statistical distributions and relation to information geometry. Zbl 1009.93081
Dodson, C. T. J.; Wang, H.
2001
Efficient computer algorithms for kidney modeling. Zbl 0789.92010
Tewarson, R. P.; Wang, H.; Stephenson, J. L.; Jen, J. F.
1993
Solvothermal synthesis and electrical conductivity model for the zinc oxide-insulated oil nanofluid. Zbl 1255.82078
Shen, L. P.; Wang, H.; Dong, M.; Ma, Z. C.; Wang, H. B.
2012
Effective adaptive virtual queue: a stabilising active queue management algorithm for improving responsiveness and robustness. Zbl 1347.94005
Wang, H.; Liao, C.; Tian, Z.
2011
A meshless method for ductile fracture. Zbl 1337.74043
Wang, H.; Li, Lu Xian; Liu, S.-J.
2011
Application of randomization techniques to space-time convolutional codes. Zbl 1374.94839
Sadjadpour, H. R.; Kim, Kyungmin; Wang, H.; Blum, R. S.; Lee, Y. H.
2006
A class of optimal centred form interval extension. Zbl 1119.65332
Wang, H.; Cao, D.
2005
One-to-one mapping and its application to neural networks based control systems design. Zbl 0850.93409
Wang, H.; Wang, A. P.; Brown, M.; Harris, C. J.
1996
The boundary stabilization of a linearized self-excited wave equation. Zbl 0851.93064
Sarhangi, G. R.; Najafi, M.; Wang, H.
1995
Complexity of partially separable convexly constrained optimization with non-Lipschitzian singularities. Zbl 1411.90318
Chen, Xiaojun; Toint, Ph. L.; Wang, H.
2019
A new vector valued similarity measure for intuitionistic fuzzy sets based on OWA operators. Zbl 1429.03167
Fei, L.; Wang, H.; Chen, L.; Deng, Y.
2019
A parallel meshless dynamic cloud method on graphic processing units for unsteady compressible flows past moving boundaries. Zbl 1423.76347
Ma, Z. H.; Wang, H.; Pu, S. H.
2015
Special elements for composites containing hexagonal and circular fibers. Zbl 1359.74437
Qin, Qing H.; Wang, H.
2015
GPU computing of compressible flow problems by a meshless method with space-filling curves. Zbl 1349.76505
Ma, Z. H.; Wang, H.; Pu, S. H.
2014
Intrinsic branching structure within random walk on $$\mathbb{Z}$$. Zbl 1312.60103
Wang, H.; Hong, W.
2014
Monotonicity of zeros of polynomials orthogonal with respect to an even weight function. Zbl 1297.33009
Jordaan, K.; Wang, H.; Zhou, J.
2014
Factor profiled sure independence screening. Zbl 1234.62108
Wang, H.
2012
Energy and $$CO_{2}$$ emission performance in electricity generation: a non-radial directional distance function approach. Zbl 1253.90236
Zhou, P.; Ang, B. W.; Wang, H.
2012
Phenomenological method for fracture. Zbl 1293.74429
Wang, H.; Li, Lu Xian; Liu, S.-J.
2012
Solvothermal synthesis and electrical conductivity model for the zinc oxide-insulated oil nanofluid. Zbl 1255.82078
Shen, L. P.; Wang, H.; Dong, M.; Ma, Z. C.; Wang, H. B.
2012
Modelling the dynamics of the tilt-casting process and the effect of the mould design on the casting quality. Zbl 1271.76363
Wang, H.; Djambazov, G.; Pericleous, K. A.; Harding, R. A.; Wickins, M.
2011
A fundamental solution based FE model for thermal analysis of nanocomposites. Zbl 1275.82010
Wang, H.; Qin, Q. H.
2011
Effective adaptive virtual queue: a stabilising active queue management algorithm for improving responsiveness and robustness. Zbl 1347.94005
Wang, H.; Liao, C.; Tian, Z.
2011
A meshless method for ductile fracture. Zbl 1337.74043
Wang, H.; Li, Lu Xian; Liu, S.-J.
2011
FE approach with Green’s function as internal trial function for simulating bioheat transfer in the human eye. Zbl 1269.74156
Wang, H.; Qin, Q. H.
2010
Smoothed finite elements large deformation analysis. Zbl 1267.74113
Liu, S. J.; Wang, H.; Zhang, H.
2010
A cosine inequality in the hyperbolic geometry. Zbl 1203.53014
Huang, M.; Ponnusamy, S.; Wang, H.; Wang, X.
2010
Three-dimensional vorticity measurements in the wake of a yawed circular cylinder. Zbl 1183.76605
Zhou, T.; Wang, H.; Razali, S. F. Mohd.; Zhou, Y.; Cheng, L.
2010
A novel PDE based image restoration: Convection-diffusion equation for image denoising. Zbl 1169.94006
Shih, Y.; Rei, C.; Wang, H.
2009
A finite frequency domain approach to fault detection for linear discrete-time systems. Zbl 1152.93318
Wang, H.; Yang, G.-H.
2008
Analysis of Cosserat materials with Voronoi cell finite element method and parametric variational principle. Zbl 1169.74330
Zhang, H. W.; Wang, H.; Chen, B. S.; Xie, Z. Q.
2008
Analysis of dynamic fracture with cohesive crack segment method. Zbl 1153.74373
Wang, H. X.; Wang, S. X.
2008
A meshless model for transient heat conduction in functionally graded materials. Zbl 1097.80001
Wang, H.; Qin, Q-H.; Kang, Y-L.
2006
Stability of a class of Runge–Kutta methods for a family of pantograph equations of neutral type. Zbl 1168.65371
Zhao, J. J.; Xu, Y.; Wang, H. X.; Liu, M. Z.
2006
Mathematica implementation of output-feedback pole assignment for uncertain systems via symbolic algebra. Zbl 1133.93332
Zheng, X.; Zolotas, A. C.; Wang, H.
2006
Application of randomization techniques to space-time convolutional codes. Zbl 1374.94839
Sadjadpour, H. R.; Kim, Kyungmin; Wang, H.; Blum, R. S.; Lee, Y. H.
2006
A new meshless method for steady-state heat conduction problems in anisotropic and inhomogeneous media. Zbl 1119.80385
Wang, H.; Qin, Q.-H.; Kang, Y. L.
2005
A finite element model for contact analysis of multiple Cosserat bodies. Zbl 1100.74059
Zhang, H. W.; Wang, H.; Wriggers, P.; Schrefler, B. A.
2005
A class of optimal centred form interval extension. Zbl 1119.65332
Wang, H.; Cao, D.
2005
Orientation of carbon nanotubes in a sheared polymer melt. Zbl 1186.76230
Hobbie, E. K.; Wang, H.; Kim, H.; Lin-Gibson, S.; Grulke, E. A.
2003
Design of fault diagnosis filters and fault-tolerant control for a class of nonlinear systems. Zbl 1006.93068
Kabore, P.; Wang, H.
2001
Iterative approximation of statistical distributions and relation to information geometry. Zbl 1009.93081
Dodson, C. T. J.; Wang, H.
2001
Stabilizability of coupled wave equations in parallel under various boundary conditions. Zbl 0883.93044
Najafi, M.; Sarhangi, G. R.; Wang, H.
1997
Actuator fault diagnosis: An adaptive observer-based technique. Zbl 0858.93040
Wang, H.; Daley, S.
1996
Modelling and control of nonlinear, operating point dependent systems via associative memory networks. Zbl 0850.93265
Wang, H.; Brown, M.; Harris, C. J.
1996
One-to-one mapping and its application to neural networks based control systems design. Zbl 0850.93409
Wang, H.; Wang, A. P.; Brown, M.; Harris, C. J.
1996
Wang, H.; Liu, G. P.; Harris, C. J.; Brown, M.
1995
The boundary stabilization of a linearized self-excited wave equation. Zbl 0851.93064
Sarhangi, G. R.; Najafi, M.; Wang, H.
1995
Parallel computation of fluid dynamics problems. Zbl 0846.76080
Ecer, A.; Akay, H. U.; Kemle, W. B.; Wang, H.; Ercoskun, D.; Hall, E. J.
1994
An efficient parallel algorithm for solving $$n$$-nephron models of the renal inner medulla. Zbl 0798.92014
Wang, H.; Stephenson, J. L.; Deng, Y.-F.; Tewarson, R. P.
1994
A comparison of multinephron and shunt models of the renal concentrating mechanism. Zbl 0774.92008
Wang, H.; Tewarson, R. P.; Jen, J. F.; Stephenson, J. L.
1993
A quasi-Gauss-Newton method for solving nonlinear algebraic equations. Zbl 0788.65060
Wang, H.; Tewarson, R. P.
1993
Efficient computer algorithms for kidney modeling. Zbl 0789.92010
Tewarson, R. P.; Wang, H.; Stephenson, J. L.; Jen, J. F.
1993
Efficient solution of differential equations for kidney concentrating mechanism analyses. Zbl 0744.65037
Tewarson, R. P.; Wang, H.; Stephenson, J. L.; Jen, J. F.
1991
Optimal penetration landing trajectories in the presence of windshear. Zbl 0645.93041
Miele, A.; Wang, T.; Wang, H.; Melvin, W. W.
1988
Discrete-continuous model conversion. Zbl 0454.93011
Shieh, L. S.; Wang, H.; Yates, R. E.
1980
all top 5
#### Cited by 695 Authors
16 Jiang, Bin 14 Qin, Qinghua 12 Shieh, Leang-San 12 Tewarson, Reginald P. 10 Yang, Guanghong 9 Shi, Peng 9 Wang, Hui 8 Tsai, Jason Sheng-Hong 8 Zhang, Ke 6 Wang, Hong 5 Akay, Hasan U. 5 Ecer, Akin 5 Fu, Zhuojia 5 Qin, Qing-Hua 5 Stephenson, John L. 4 Benhadj Braiek, Naceur 4 Grabski, Jakub Krzysztof 4 Kolodziej, Jan Adam 4 Lan, Wei 4 Moon, In-Ho 4 Patsko, Valerii S. 4 Tao, Gang 4 Wang, Hengxu 4 Yao, Lina 3 Cao, Changyong 3 Chao, Yung-Cheng 3 Chen, Min-Shin 3 Chen, Wen 3 Chen, Wen 3 Chien, Y. P. 3 Dehghan Takht Fooladi, Mehdi 3 Ding, Steven X. 3 Duan, Zhisheng 3 Hidayat, Mas Irfan Purbawanto 3 Hu, Dean 3 Huang, Hsiao-Ping 3 Jin, Fengfei 3 Joshi, Suresh M. 3 Layton, Anita T. 3 Layton, Harold E. 3 Li, Jian 3 Ma, Hongjun 3 Parman, Setyamartana 3 Puig, Vicenç 3 Rabczuk, Timon 3 Reutskiy, Sergiy Yu 3 Staroswiecki, Marcel 3 Tang, Xidong 3 Wang, Wansheng 3 Wang, Zhenhua 3 Zhai, Ding 3 Zhang, Huaguang 3 Zhang, Xiaodong 3 Zhu, Lixing 2 Alghamdi, Mohammed Ali 2 Ansari, Reza 2 Ariwahjoedi, Bambang 2 Bhrawy, Ali Hassan 2 Blech, R. A. 2 Bordas, Stéphane Pierre Alain 2 Cai, Chenxiao 2 Cao, Leilei 2 Carpenter, F. 2 Chadli, Mohammed 2 Chang, Xiaoheng 2 Chen, Cheng-Liang 2 Chen, Chi-Che 2 Chen, Hongquan 2 Chen, Weitian 2 Chibani, Ali 2 Cocquempot, Vincent 2 Ding, Dawei 2 Djambazov, Georgi S. 2 Efimov, Denis V. 2 Fang, Yixian 2 Gu, Jingfong 2 Hamdi, Habib 2 Harris, Chris J. 2 Hematiyan, Mohammad Rahim 2 Huang, Di 2 Jen, J. Frank 2 Jeong, Jena 2 Jin, Xiaozheng 2 Jordaan, Kerstin 2 Kapelko, Magdalena 2 Kim, Sundong 2 Lan, Jianglin 2 Lee, Cheuk-Yu 2 Li, Peichao 2 Li, Xiaoli 2 Lu, Anyang 2 Ma, Zhihua 2 Marcano, Mariano 2 Marquez, Horacio Jose 2 Mattheij, Robert M. M. 2 Mirzaei, Davoud 2 Najafi, Mojgan 2 Nejjari, Fatiha 2 Nguang, Sing Kiong 2 Parisini, Thomas ...and 595 more Authors
all top 5
#### Cited in 105 Serials
25 International Journal of Control 21 Engineering Analysis with Boundary Elements 18 Journal of the Franklin Institute 16 Automatica 13 Computers & Mathematics with Applications 13 International Journal of Systems Science 13 Asian Journal of Control 12 Applied Mathematical Modelling 11 Applied Mathematics and Computation 11 Mathematical Problems in Engineering 9 International Journal of Robust and Nonlinear Control 8 Computer Methods in Applied Mechanics and Engineering 7 Computational Mechanics 7 Applied Mathematics Letters 6 International Journal of Adaptive Control and Signal Processing 5 Circuits, Systems, and Signal Processing 5 European Journal of Operational Research 4 Acta Mechanica 4 International Journal of Heat and Mass Transfer 4 Systems & Control Letters 4 Computational Statistics and Data Analysis 4 International Journal of Applied Mathematics and Computer Science 4 Journal of Control Theory and Applications 3 Computers and Fluids 3 Information Sciences 3 International Journal for Numerical Methods in Engineering 3 Journal of Multivariate Analysis 3 Journal of Optimization Theory and Applications 3 Applied Numerical Mathematics 3 SIAM Journal on Optimization 3 International Journal of Numerical Methods for Heat & Fluid Flow 3 Abstract and Applied Analysis 3 European Journal of Mechanics. A. Solids 3 International Journal of Systems Science. Principles and Applications of Systems and Integration 2 Journal of Computational Physics 2 Journal of Engineering Mathematics 2 Journal of Fluid Mechanics 2 Bulletin of Mathematical Biology 2 The Annals of Statistics 2 Fuzzy Sets and Systems 2 Journal of Approximation Theory 2 Journal of Econometrics 2 Mathematics and Computers in Simulation 2 Meccanica 2 Mathematical and Computer Modelling 2 Physics of Fluids 2 Journal of Mathematical Sciences (New York) 2 Journal of Vibration and Control 2 Mathematics and Mechanics of Solids 2 International Journal of Computational Fluid Dynamics 2 Nonlinear Dynamics 2 Acta Mechanica Sinica 2 Nonlinear Analysis. Hybrid Systems 2 Journal of Mathematics 1 Journal of Applied Mathematics and Mechanics 1 Journal of Mathematical Analysis and Applications 1 Mathematical Biosciences 1 Physics Letters. A 1 Chaos, Solitons and Fractals 1 Prikladnaya Matematika i Mekhanika 1 Applied Mathematics and Optimization 1 Journal of Statistical Planning and Inference 1 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 1 Optimal Control Applications & Methods 1 Insurance Mathematics & Economics 1 Statistics & Probability Letters 1 Applied Mathematics and Mechanics. (English Edition) 1 Journal of Theoretical Probability 1 Dynamics and Stability of Systems 1 Journal of Scientific Computing 1 Annals of Operations Research 1 Dynamics and Control 1 Numerical Algorithms 1 Automation and Remote Control 1 Archive of Applied Mechanics 1 Archives of Control Sciences 1 Journal of Computer and Systems Sciences International 1 Communications in Numerical Methods in Engineering 1 Computational and Applied Mathematics 1 Advances in Computational Mathematics 1 Integral Transforms and Special Functions 1 Journal of Difference Equations and Applications 1 European Journal of Control 1 Journal of Shanghai University 1 Acta Mathematica Sinica. English Series 1 Communications in Nonlinear Science and Numerical Simulation 1 Engineering Computations 1 CEJOR. Central European Journal of Operations Research 1 Archives of Computational Methods in Engineering 1 Journal of the Australian Mathematical Society 1 Journal of Applied Mathematics 1 Entropy 1 International Journal of Computational Methods 1 Advances in Difference Equations 1 International Journal of Fracture 1 Proceedings of the Steklov Institute of Mathematics 1 Complex Analysis and Operator Theory 1 Networks and Heterogeneous Media 1 Electronic Journal of Statistics 1 Discrete and Continuous Dynamical Systems. Series S ...and 5 more Serials
all top 5
#### Cited in 34 Fields
165 Systems theory; control (93-XX) 91 Numerical analysis (65-XX) 52 Mechanics of deformable solids (74-XX) 30 Fluid mechanics (76-XX) 27 Partial differential equations (35-XX) 25 Operations research, mathematical programming (90-XX) 23 Classical thermodynamics, heat transfer (80-XX) 21 Information and communication theory, circuits (94-XX) 20 Biology and other natural sciences (92-XX) 18 Statistics (62-XX) 17 Ordinary differential equations (34-XX) 10 Computer science (68-XX) 10 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 7 Probability theory and stochastic processes (60-XX) 6 Calculus of variations and optimal control; optimization (49-XX) 5 Approximations and expansions (41-XX) 4 Special functions (33-XX) 4 Dynamical systems and ergodic theory (37-XX) 3 Real functions (26-XX) 3 Mechanics of particles and systems (70-XX) 2 History and biography (01-XX) 2 Mathematical logic and foundations (03-XX) 2 Functions of a complex variable (30-XX) 2 Harmonic analysis on Euclidean spaces (42-XX) 2 Integral equations (45-XX) 1 General and overarching topics; collections (00-XX) 1 Combinatorics (05-XX) 1 Difference and functional equations (39-XX) 1 Integral transforms, operational calculus (44-XX) 1 Geometry (51-XX) 1 Differential geometry (53-XX) 1 Global analysis, analysis on manifolds (58-XX) 1 Optics, electromagnetic theory (78-XX) 1 Statistical mechanics, structure of matter (82-XX)
|
2021-03-07 03:29:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8302569389343262, "perplexity": 11678.288221959763}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178376006.87/warc/CC-MAIN-20210307013626-20210307043626-00499.warc.gz"}
|
https://socratic.org/questions/how-do-you-find-the-slope-and-intercept-to-graph-y-14x
|
# How do you find the slope and intercept to graph y=14x?
Nov 5, 2015
Using $y = m x + b$.
#### Explanation:
$y = m x + b$ is slope intercept form. $m$ meaning slope, and $b$ meaning y intercept.
You're given $y = 14 x$ which could be interpreted as $y = 14 x + 0$ which lines up very well with slope intercept form. The coefficient of x is 14, which is the slope (where $m$ is), and the addition at the end of the function is 0, which means that your y intercept is 0 ($b$).
Your fuction will look like this graph below once you graph.
graph{14x [-22.81, 22.8, -11.4, 11.41]}
|
2020-07-13 18:06:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 8, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7453427314758301, "perplexity": 744.4004051328893}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657146247.90/warc/CC-MAIN-20200713162746-20200713192746-00579.warc.gz"}
|
https://puzzling.stackexchange.com/questions/59073/a-sequence-of-real-numbers
|
# A sequence of real numbers
A sequence of real numbers is such that the the sum of every $5$ consecutive terms is $+ve$ while the sum of every $9$ consecutive terms is $-ve$. Then the sequence can have at most $n$ terms.
What is the value of $n$ $?$
• Every 5 consecutive terms and every 9 consecutive terms of the infinite sequence {0,0,0,...} sum to 0. – David Hammen Jan 11 '18 at 0:51
12
Explanation:
General formula for this IMO 1977 problem has been derived here on Math.SE
$p\to$Number of consecutive terms giving +ve sum
$q\to$Number of consecutive terms giving -ve sum
$$n=p+q-2$$
Further to prog_SAHIL's answer, one example is
4, 3, 4, -16, 6, 4, 4, 6, -16, 4, 3, 4
where the respective sums of 5 consecutive terms are
1, 1, 2, 4, 4, 2, 1, 1
and the respective sums of 9 consecutive terms are
-1, -1, -1, -1
|
2019-09-21 20:24:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8559680581092834, "perplexity": 316.95984535585177}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574662.80/warc/CC-MAIN-20190921190812-20190921212812-00382.warc.gz"}
|
http://www.neverendingbooks.org/category/books
|
# Archangel Gabriel will make you a topos
No kidding, this is the final sentence of Le spectre d’Atacama, the second novel by Alain Connes (written with Danye Chéreau (IRL Mrs. AC) and his former Ph.D. advisor Jacques Dixmier).
The book has a promising start. Armand Lafforet (IRL AC) is summoned by his friend Rodrigo to the Chilean observatory Alma in the Altacama desert. They have observed a mysterious spectrum, and need his advice.
Armand drops everything and on the flight he lectures the lady sitting next to him on proofs by induction (breaking up chocolate bars), and recalls a recent stay at the La Trappe Abbey, where he had an encounter with (the ghost of) Alexander Grothendieck, who urged him to ‘Follow the motif!’.
“Comment était-il arrivé là? Il possédait surement quelques clés. Pourquoi pas celles des songes?” (How did he get
there? Surely he owned some keys, why not those of our dreams?)
A few pages further there’s this on the notion of topos (my attempt to translate):
“The notion of space plays a central role in mathematics. Traditionally we represent it as a set of points, together with a notion of neighborhood that we call a ‘topology’. The universe of these new spaces, ‘toposes’, unveiled by Grothendieck, is marvellous, not only for the infinite wealth of examples (it contains, apart from the ordinary topological spaces, also numerous instances of a more combinatorial nature) but because of the totally original way to perceive space: instead of appearing on the main stage from the start, it hides backstage and manifests itself as a ‘deus ex machina’, introducing a variability in the theory of sets.”
So far, so good.
We have a mystery, tidbits of mathematics, and allusions left there to put a smile on any Grothendieck-aficionado’s face.
But then, upon arrival, the story drops dead.
Rodrigo has been taken to hospital, and will remain incommunicado until well in the final quarter of the book.
As the remaining astronomers show little interest in Alain’s (sorry, Armand’s) first lecture, he decides to skip the second, and departs on a hike to the ocean. There, he takes a genuine sailing ship in true Jules Verne style to the lighthouse at he end of the world.
All this drags on for at least half a year in time, and two thirds of the book’s length. We are left in complete suspense when it comes to the mysterious Atacama spectrum.
Perhaps the three authors deliberately want to break with existing conventions of story telling?
I had a similar feeling when reading their first novel Le Theatre Quantique. Here they spend some effort to flesh out their heroine, Charlotte, in the first part of the book. But then, all of a sudden, their main character is replaced by a detective, and next by a computer.
Anyway, when Armand finally reappears at the IHES the story picks up pace.
The trio (Armand, his would-be-lover Charlotte, and Ali Ravi, Cern’s computer guru) convince CERN to sell its main computer to an American billionaire with the (fake) promise of developing a quantum computer. Incidentally, they somehow manage to do this using Charlotte’s history with that computer (for this, you have to read ‘Le Theatre Quantique’).
By their quantum-computing power (Shor and quantum-encryption pass the revue) they are able to decipher the Atacame spectrum (something to do with primes and zeroes of the zeta function), send coded messages using quantum entanglement, end up in the Oval Office and convince the president to send a message to the ‘Riemann sphere’ (another fun pun), and so on, and on.
The book ends with a twist of the classic tale of the mathematician willing to sell his soul to the devil for a (dis)proof of the Riemann hypothesis:
After spending some time in purgatory, the mathematician gets a meeting with God and asks her the question “Is the Riemann hypothesis true?”.
“Of course”, God says.
“But how can you know that all non-trivial zeroes of the zeta function have real part 1/2?”, Armand asks.
And God replies:
“Simple enough, I can see them all at once. But then, don’t forget I’m God. I can see the disappointment in your face, yes I can read in your heart that you are frustrated, that you desire an explanation…
Well, we’re going to fix this. I will call archangel Gabriel, the angel of geometry, he will make you a topos!”
If you feel like running to the nearest Kindle store to buy “Le spectre d’Atacama”, make sure to opt for a package deal. It is impossible to make heads or tails of the story without reading “Le theatre quantique” first.
But then, there are worse ways to spend an idle week than by binge reading Connes…
Edit (February 28th). A short video of Alain Connes explaining ‘Le spectre d’Atacama’ (in French)
# Rarer books: Singmaster’s notes
David Singmaster‘s “Notes on Rubik’s magic cube” are a collectors item, but it is still possible to buy a copy. I own a fifth edition (august 1980).
These notes capture the Rubik craze of those years really well.
Here’s a Conway story, from Siobhan Roberts’ excellent biography Genius at Play.
The ICM in Helsinki in 1978 was Conway’s last shot to get the Fields medal, but this was the last thing on his mind. He just wanted a Rubik cube (then, iron-curtain times, only sold in Hungary), so he kept chasing Hungarians at the meeting, hoping to obtain one. Siobhan writes (p. 239):
“The Fields Medals went to Pierre Deligne, Charles Fefferman, Grigory Margulis, and Daniel Quillen. The Rubik’s cube went to Conway.”
After his Notes, David Singmaster produced a follow-up newsletter “The Cubic Circular”. Only 5 magazines were published, of which 3 were double issues, between the Autumn of 1981 and the summer of 1985.
# taking stock
The one thing harder than to start blogging after a long period of silence is to stop when you think you’re still in the flow.
(image credit Putnam Consulting)
The Januari 1st post a math(arty) 2018 was an accident. I only wanted to share this picture, of a garage-door with an uncommon definition of prime numbers, i saw the night before.
I had been working on a better understanding of Conway’s Big Picture so I had material for a few follow-up posts.
It was never my intention to start blogging on a daily basis.
I had other writing plans for 2018.
For years I’m trying to write a math-book for a larger audience, or at least to give it an honest try.
My pet peeve with such books is that most of them are either devoid of proper mathematical content, or focus too much on the personal lives of the mathematicians involved.
An inspiring counter-example is ‘Closing the gap’ by Vicky Neal.
From the excellent review by Colin Beveridge on the Aperiodical Blog:
“Here’s a clever way to structure a maths book (I have taken copious notes): follow the development of a difficult idea or discovery chronologically, but intersperse the action with background that puts the discovery in context. That’s not a new structure – but it’s tricky to pull off: you have to keep the difficult idea from getting too difficult, and keep the background at a level where an interested reader can follow along and either say “yes, that’s plausible” or better “wait, let me get a pen!”. This is where Closing The Gap excels.”
So it is possible to publish a math-book worth writing. Or at least, some people can pull it off.
Problem was I needed to kick myself into writing mode. Feeling forced to post something daily wouldn’t hurt.
Anyway, I was sure this would have to stop soon. I had plans to disappear for 10 days into the French mountains. Our place there suffers from frequent power- and cellphone-cuts, which can last for days.
Thank you Orange.fr for upgrading your network to the remotest of places. At times, it felt like I was working from home.
I kept on blogging.
Even now, there’s material lying around.
I’d love to understand the claim that non-commutative geometry may offer some help in explaining moonshine. There was an interesting question on an older post on nimber-arithmetic I feel I should be following up. I’ve given a couple of talks recently on $\mathbb{F}_1$-material, parts of which may be postable. And so on.
Problem is, I would stick to the same (rather dense) writing style.
Perhaps it would make more sense to aim for a weekly (or even monthly) post over at Medium.
Medium offers no MathJax support forcing me to write differently about maths, and for a broader potential audience.
I may continue to blog here (or not), stick to the current style (or try something differently). I have not the foggiest idea right now.
# Knights and Knaves, the Heyting way
(image credit: Joe Blitzstein via Twitter)
Smullyan’s Knights and Knaves problems are classics. On an island all inhabitants are either Knights (who only tell true things) and Knaves (who always lie). You have to determine their nature from a few statements. Here’s a very simple problem:
“Abercrombie met just two inhabitants, A and B. A made the following statement: “Both of us are Knaves.” What is A and what is B?”
Now, this one is simple enough to solve, but for more complicated problems a generic way to solve the puzzles is to use propositional calculas, as explained in Smullyan’s Logical Labyrinths”, chapter 8 “Liars, truth-tellers and propositional logic’.
If an inhabitants $A$ asserts a proposition $P$, and if $k_A$ is the assertion ‘$A$ is a Knight’, then the statement can be rephrased as
$k_A \Leftrightarrow P$
for if $A$ is a Knight, $P$ must be true and if $A$ is a Knave $P$ must be false.
Usually, one can express $P$ as a propositional statement involving $k_A,k_B,k_C,\dots$.
The example above can be rephrased as
$k_A \Leftrightarrow (\neg k_A \wedge \neg k_B)$
Assigning truth values to $k_A$ and $k_B$ and setting up the truth-table for this sentence, one sees that the only possibility for this to be true is that $k_A$ equals $0$ and $k_B$ equals $1$. So, $A$ is a Knave and $B$ is a Knight.
Clearly, one only requires this approach for far more difficult problems.
In almost all Smullyan puzzles, the only truth values are $0$ and $1$. There’s a short excursion to Boolean algebras (sorry, Boolean islands) in chapter 9 ‘Variable Liars’ in Logical Labyrinths. But then, the type of problems are about finding equivalent notions of Boolean algebras, rather that generalised Knights&Knaves puzzles.
Did anyone pursue the idea of Smullyanesque puzzles with truth values in a proper Heyting algebra?
I only found one blog-post on this: Non-Classical Knights and Knaves by Jason Rosenhouse.
He considers three valued logic (the Heyting algebra corresponding to the poset 0-N-1, and logical connectives as in the example on the Wiki-page on Heyting algebras.
On his island the natives cycle, repeatedly and unpredictably, between the two states. They are knights for a while, then they enter a transitional phase during which they are partly knight and partly knave, and then they emerge on the other side as knaves.
“If Joe is in the transitional phase, and you say, “Joe is a knight,” or “Joe is a knave,” what truth value should we assign to your statement? Since Joe is partly knight and partly knave, neither of the classical truth values seems appropriate. So we shall assign a third truth value, “N” to such statements. Think of N as standing for “neutral” or “neither true nor false.” On the island, vague statements are assigned the truth value N.
Just to be clear, it’s not just any statement that can be assigned the truth value N. It is only vague statements that receive that truth value, and for now our only examples of such statements are attributions of knight-hood and knave-hood to people in the transitional phase.
For the natives, entering the transitional phase implied a disconcerting loss of identity. Uncertain of how to behave, they hedged their bets by only making statements with truth value N. People in the transitional phase were referred to as neutrals. So there are now three kinds of people: Knights, who only make true statements; Knaves, who only make false statements; and Neutrals, who only make statements with the truth value N.”
He gives one example of a possible problem:
“Suppose you meet three people, named Dave, Evan and Ford. They make the following statements:
Dave: Evan is a knight.
Evan: Ford is a knave.
Ford: Dave is a neutral.
Can you determine the types of all three people?”
# Mathematics in times of internet
A few weeks more of (heavy) teaching ahead, and then I finally hope to start on a project, slumbering for way too long: to write a book for a broader audience.
Prepping for this I try to read most of the popular math-books hitting the market.
The latest two explore how the internet changed the way we discuss, learn and do mathematics. Think Math-Blogs, MathOverflow and Polymath.
‘Gina says’, Adventures in the Blogosphere String War
The ‘string wars’ started with the publication of the books by Peter Woit:
Not even wrong: the failure of string theory and the search for unity in physical law
and Lee Smolin:
In the summer of 2006, Gil Kalai got himself an extra gmail acount, invented the fictitious ‘Gina’ and started commenting (some would argue trolling) on blogs such as Peter Woit’s own Not Even Wring, John Baez and Co.’s the n-Category Cafe and Clifford Johnson’s Asymptotia.
Gil then copy-pasted Gina’s comments, and the replies they provoked, into a leaflet and put it on his own blog in June 2009: “Gina says”, Adventures in the Blogosphere String War.
Back then, it was fun to waste an afternoon re-reading all of this, and I wrote about it here:
Now here’s an idea (June 2009)
Gina says, continued (August 2009)
With only minor editing, and including some drawings by Gil’s daughter, these leaflets have now resurfaced as a book…?!
After more than 10 years I had hoped that Gil would have taken this test-case to say some smart things about the math-blogging scene and its potential to attract more people to mathematics, or whatever.
In 2009 I wrote:
“Having read the first 20 odd pages in full and skimmed the rest, two remarks : (1) it shouldn’t be too difficult to borrow this idea and make a much better book out of it and (2) it raises the question about copyrights on blog-comments…”
Closing the gap: the quest to understand prime numbers
I can hear you sigh, but no, this is not yet another prime number book.
In May 2013, Yitang Zhang startled the mathematical world by proving that there are infinitely many prime pairs, each no more than 70.000.000 apart.
Perhaps a small step towards the twin prime conjecture but it was the first time someone put a bound on this prime gap.
Vicky Neal‘s book tells the story of closing this gap. In less than a year the bound of 70.000.000 was brought down to 246.
If you’ve read all popular prime books, there are a handful of places in the book where you might sigh: ‘oh no, not that story again’, but by far the larger part of the book explains exciting results on prime number progressions, not found anywhere else.
Want to know about sieve methods?
Which results made Tim Gowers or Terry Tao famous?
What is Szemeredi’s theorem or the Hardy-Littlewood circle method?
Ever heard about the Elliot-Halberstam or the Erdos-Turan conjecture? The work by Tao on Erdos discrepancy problem or that of James Maynard (and Tao) on closing the prime gap?
Closing the gap is the book to read about all of this.
But it is much more.
It tells about the origins and successes of the Polymath project, and details the progress made by Polymath8 on closing the gap, it gives an insight into how mathematics is done, what role conferences, talks and research institutes a la Oberwolfach play, and more.
Looking for a gift for that niece of yours interested in maths? Look no further. Closing the gap is a great book!
|
2018-09-20 14:19:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40723875164985657, "perplexity": 2209.826497551328}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156513.14/warc/CC-MAIN-20180920140359-20180920160759-00435.warc.gz"}
|
https://www.semanticscholar.org/paper/Equidistribution-of-Fekete-points-on-complex-R.Berman-S.Boucksom/c635223c54b4863a4ca7b10a896d9687bcbac9c3
|
• Corpus ID: 115161042
# Equidistribution of Fekete points on complex manifolds
@inproceedings{RBerman2008EquidistributionOF,
title={Equidistribution of Fekete points on complex manifolds},
author={R.Berman and S.Boucksom},
year={2008}
}
• Published 30 June 2008
• Mathematics
We prove the several variable version of a classical equidistribution theorem for Fekete points of a compact subset of the complex plane, which settles a well-known conjecture in pluri-potential theory. The result is obtained as a special case of a general equidistribution theorem for Fekete points in the setting of a given holomorphic line bundle over a compact complex manifold. The proof builds on our recent work “Capacities and weighted volumes for line bundles”.
19 Citations
### Fekete points and convergence towards equilibrium measures on complex manifolds
• Mathematics
• 2009
Building on [BB1] we prove a general criterion for convergence of (possibly singular) Bergman measures towards pluripotential-theoretic equilibrium measures on complex manifolds. The criterion may be
### Equidistribution of Fekete Points on the Sphere
• Mathematics
• 2010
Fekete points are the points that maximize a Vandermonde-type determinant that appears in the polynomial Lagrange interpolation formula. They are well-suited points for interpolation formulas and
### Chebyshev transforms on Okounkov bodies
LetL be a big holomorphic line bundle on a compact complex manifoldX.We show how to associate a convex function on the Okounkov body ofL to any continuous metric e−ψ on L.We will call this the
### Transforming metrics on a line bundle to the Okounkov body
LetL be a big holomorphic line bundle on a compact complex manifol dX.We show how to associate a convex function on the Okounkov body o f L t any continuous metrice onL.We will call this the
### Growth of balls of holomorphic sections and energy at equilibrium
• Mathematics
• 2008
Let L be a big line bundle on a compact complex manifold X. Given a non-pluripolar compact subset K of X and a continuous Hermitian metric e−φ on L, we define the energy at equilibrium of (K,φ) as
### A Robin Function for Algebraic Varieties and Applications to Pluripotential Theory
The Robin function associated to a compact set K captures information about the asymptotic growth of the logarithmic extremal function associated to K and has found numerous applications within
### Discrete orthogonal polynomials and hyperinterpolation over planar regions
An algorithm for recursively generating orthogonal polynomials on a discrete set of the real plane based on short recurrence relations is presented and Least Squares approximation on Weakly Admissible Meshes is presented.
## References
SHOWING 1-10 OF 10 REFERENCES
### Convergence of Bergman measures for high powers of a line bundle
• Mathematics
• 2008
Let L be a holomorphic line bundle on a compact complex manifold X of dimension n, and let exp(-\phi) be a continuous metric on L. Fixing a measure dμ on X gives a sequence of Hilbert spaces
### Bergman kernels and equilibrium measures for line bundles over projective manifolds
Let $L$ be a holomorphic line bundle over a compact complex projective Hermitian manifold $X.$ Any fixed smooth hermitian metric $\phi$ on $L$ induces a Hilbert space structure on the space of global
### Asymptotic Distribution of Nodes for Near-Optimal Polynomial Interpolation on Certain Curves in R2
• Mathematics
• 2002
Abstract Let E\subset \Bbb R s be compact and let dnE denote the dimension of the space of polynomials of degree at most n in s variables restricted to E . We introduce the notion of an asymptotic
### Distribution of nodes on algebraic curves in CN
• Mathematics
• 2003
Soit A une variete algebrique de dimension 1 de C N . On note m d la dimension de l'espace vectoriel complexe des restrictions a A des polynomes holomorphes de degre ≤ d. On considere un compact non
### Orthogonal Polynomials and Random Matrices: A Riemann-Hilbert Approach
Riemann-Hilbert problems Jacobi operators Orthogonal polynomials Continued fractions Random matrix theory Equilibrium measures Asymptotics for orthogonal polynomials Universality Bibliography.
### Approximation in C N
This is a survey article on selected topics in approximation theory. The topics either use techniques from the theory of several complex variables or arise in the study of the subject. The survey is
• 1950
### V: Transfinite diameter, Chebyshev constants, and capacity for compacta in Cˆn
• Math. USSR Sbornik
• 1975
|
2022-11-28 09:18:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7982213497161865, "perplexity": 990.7141908875828}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710488.2/warc/CC-MAIN-20221128070816-20221128100816-00569.warc.gz"}
|
https://www.cableizer.com/documentation/k_sa/
|
# Convection factor acc. Heinhold
Coefficient for convection to air for one single cable, three cables spaced and three cables trefoil are given in the book 'Kabel und Leitungen für Starkstrom' by L. Heinhold, 5th edition 1999. The book does not give values for 2 cables. It was assumed that the values are identical to 3 cables.
Symbol
$k_{sa}$
Used in
$\alpha_{sa}$
$T_{sa}$
Choices
IdValueInstallationRemark
11.0001 cable
21.0003 cables spacedequal for horizontal and vertical spacing
30.8333 cables touchingequal for horizontal and vertical spacing
40.6673 cables touching trefoil
|
2020-08-09 16:35:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6662928462028503, "perplexity": 6782.71874169902}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738562.5/warc/CC-MAIN-20200809162458-20200809192458-00208.warc.gz"}
|
https://zbmath.org/?q=an:0683.34032
|
# zbMATH — the first resource for mathematics
Systems with impulse effect stability, theory and applications. (English) Zbl 0683.34032
Ellis Horwood Series in Mathematics and its Applications. Chichester: Ellis Horwood Limited; New York etc.: Halsted Press. 255 p. £39.95 (1989).
The book consists of three parts. In Part A a description of the systems with impulse effect is given. Here are considered systems of the form: $$dx/dt=f(t,x),$$ $$t\neq \tau_ k(x)$$, $$\Delta x=I_ k(x)$$, $$t=\tau_ k(x)$$ where f: $${\mathbb{R}}_+x\Omega \to {\mathbb{R}}^ n$$; $$\tau_ k: \Omega \to {\mathbb{R}}_+$$; $$I_ k: \Omega \to {\mathbb{R}}^ n$$; $$\Omega$$ is a domain in $${\mathbb{R}}^ n$$ and $$0<\tau_ 1(x)<\tau_ 2(x)<...$$, $$\lim_{k\to \infty}\tau_ k(x)=\infty$$ for $$x\in \Omega$$ and are presented theorems of existence, uniqueness and continuability of the solutions; theorems of continuity and differentiability of the solutions with respect to initial data and a parameter. In Part B, Lyapunov’s first method for systems with impulse effect is presented, that is: stability of linear systems, characteristic exponents, stability by linear approximation, perturbation theorems. In Part C, Lyapunov’s second method for systems with impulse effect is developed. The authors prove direct and converse theorems of stability and theorems of comparison using piecewise continuous functions for comparison specially introduced here. The theory is developed in full analytic rigour, and is illustrated by many examples and applications.
Reviewer: S.Biranas
##### MSC:
34D20 Stability of solutions to ordinary differential equations 34-02 Research exposition (monographs, survey articles) pertaining to ordinary differential equations 34A12 Initial value problems, existence, uniqueness, continuous dependence and continuation of solutions to ordinary differential equations 34D05 Asymptotic properties of solutions to ordinary differential equations
|
2021-12-01 18:17:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.639387845993042, "perplexity": 696.7572080512047}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964360881.12/warc/CC-MAIN-20211201173718-20211201203718-00444.warc.gz"}
|
https://arjunjainblog.wordpress.com/2013/12/11/jordan-normal-form-generalized-eigenspaces/
|
### Jordan normal form – Generalised Eigenspaces
It’s been a long time since I last posted. I’ve been studying about the Jordan Normal Form recently, and found some good but incomplete websites elaborating this interesting construction. The first time I read about this, I was very impressed- these are representative matrices for a whole families of similar matrices. Moreover, so many results about matrices requiring them to be diagonalizable can be generalised using the normal forms- the next best things to a diagonal representation.
As I searched for clear proofs surrounding this, I found about Sheldon Axler‘s book Linear Algebra Done Right, and the article The Jordan Canonical Form: an Old Proof by Richard A. Brualdi. The first one does it by introducing generalised eigenspaces, while the second one uses the language of graph theory. Investigating the relation between these two approaches will be an excellent thing to do, I think.
In this first part, I’ll outline the usual generalised eigenspaces approach, as described in Sheldon Axler’s book.
->
We want to describe an operator by decomposing it’s domain into invariant subspaces (If W is an invariant subspace of V, where $T:V\to V$, then if $v\in W$, $T(v)\in W$ also). If the operator is diagonalizable, these invariant subspaces are the eigenspaces, and $V= null(T-\lambda_1 I)\oplus null(T-\lambda_2 I)\oplus....$.
As normally there aren’t enough eigenvectors, we want to show that in general, $V= null(T-\lambda_1 I)^{dimV}\oplus null(T-\lambda_2 I)^{dimV}\oplus....$, where $\lambda_1, \lambda_2...$ are the distinct eigenvalues of T.
Let’s start by studying the nullspaces of powers of an operator.
First of all $\{0\}\subset null(T)\subset null(T^2)\subset null(T^3)...$.
Of course, as V is finite dimensional, this can’t go on forever. So, we must have $null(T^{dimV})=null(T^{dimV+1})=null(T^{dimV+2})...$, because if we don’t, the dimensions of the nullspaces, which are subspaces of V, will go on increasing.
A similar thing is true for ranges of powers of operators. Here, $V=ran(T^0)\supset ran(T)\supset ran(T^2)\supset ...$. Again this stops at $T^{dimV}$ after which all subsequent ranges are equal.
Now, Schur’s lemma in linear algebra says that every square complex matrix is unitarily triangularizable ($T=Q\Lambda Q^T$). The diagonal elements of $\Lambda$ are the eigenvalues of T. We can prove this by seeing that if $\lambda$ is an eigenvalue of T, then $det(T-\lambda I)=det(Q\Lambda Q^T -\lambda I)=det (Q (\Lambda-\lambda I) Q^T)=0$ which shows that $\lambda$ is one of the diagonal elements of $\Lambda$. The same reasoning in reverse can be used to prove the converse.
If T has $dimV$ distinct eigenvalues, all of these are the diagonal elements of $\Lambda$. If not an eigenvalue $\lambda$ is repeated $dim(null(T-\lambda I)^{dimV}))$ times.
To prove this, consider the case of $\lambda =0$. We prove by induction.
For $dimV=n$,
if $n=1$, the fact is clearly true.
We assume that it is true for $dimV=n-1$.
Now, suppose that $(v_1,v_2,v_3,...)$ is a basis for which T has an upper triangular representation $\Lambda$ with $\lambda_1, \lambda_2...\lambda_n$ as diagonal elements. If $U=span(v_1,v_2,...,v_{n-1})$, then $U$ is clearly invariant due to $\Lambda$ being triangular.
The matrix for $T|U$ with the basis $v_1,v_2,...v_{n-1}$ is $\Lambda$ without the last row and column. By the induction hypothesis, 0 appears $dim(null(T|U)^{n-1}))$ times. As $null(T|U)^{n-1}=null(T|U)^{n}$ , 0 appears $dim(null(T|U)^{n}))$ times.
Now for the remaining matrix, which has $\lambda_n$ as the diagonal element, we consider two cases:
1. $\lambda_n\neq 0$: If $\Lambda$ has $\lambda_1, \lambda_2...\lambda_n$ as the diagonal elements, then the matrix representation of $T^n$ is the upper triangular $\Lambda^n$ with $\lambda_1^n, \lambda_2^n...\lambda_n^n$ as diagonal elements. So $T^n(v_n)=u+\lambda_n^n(v_n)$ for some $u\in U$. Now suppose that $p\in null(T^n)$. Then $p=u'+av_n$ where $u'\in U$ and $a\in F$ (field of V), which gives $T^n(p)=0=T^n(u')+a(u)+a(\lambda_n^n(v_n))$. The first two terms are in $U$, but the third one is not. So, $a=0$, meaning that $p\in U$. As a result $null(T^n)\subset U$, giving $null (T^n)=null(T|U)^n$. 0 appears $dim(null(T)^{n}))$ times.
2. $\lambda_n=0$: If $\lambda_n=0$, then $T(v_n)\in U$ giving $T^n(v_n)=T^{n-1}(T(v_n))\in ran(T|U)^{n-1}=ran(T|U)^{n}$. So, we can construct a vector $p=u'+v_n$ such that $p\not\in U$ and $T^n(p)=0$. Now, $dim(null(T^n))=dim(null(T|U)^n)+dim(U+null(T^n))-dimU$. Here, as $dim(U), and as $dim(U)=n-1$, therefore, $dim(null(T^n))=dim(null(T|U)^n)+1$, which is as desired.
Statement proved.
Note that the multiplicity corresponding to an eigenvalue $\lambda$ is defined as $dim(null(T-\lambda I)^{dimV})$, i.e. the dimension of the associated generalised eigenspace. The sum of these multiplicities is equal to $dimV$ as all the diagonal elements of $\Lambda$ are eigenvalues of $T$. The characteristic polynomial associated with a matrix is $(z-\lambda_1)^{d_1}.(z-\lambda_2)^{d_2}...$, where $d_1,d_2..$ are the multiplicities.
Using Schur’s theorem as above, we can also prove the Cayley- Hamilton theorem very easily, through induction. The theorem states that if q is the characteristic polynomial of $T$, then $q(T)=0$. We need only show that $q(T)v_j=0$ for all basis vectors of $T$, $v_j$, where the $v_j$s are basis vectors for which the matrix of T is upper triangular, as in Schur’s theorem.
Suppose that $j=1$. Using the triangular form of $T$, we have $(T-\lambda_1)v_1=0$.
Now, assume that for $j$ between $1$ to $n$ :
$0=(T-\lambda_1 I)v_1=(T-\lambda_1 I)(T-\lambda_2 I)v_2=(T-\lambda_1 I)(T-\lambda_2 I)(T-\lambda_3 I)v_3=...=(T-\lambda_1 I)(T-\lambda_2 I)....(T-\lambda_{j-1} I)v_{j-1}$.
Now, because of the triangular form of $T$, $(T-\lambda_j I)v_j\in span(v_1,v_2,...v_{j-1})$. So $(T-\lambda_1 I)(T-\lambda_2 I)....(T-\lambda_{j-1} I)(T-\lambda_{j} I)v_j=0$.
Now come the main theorems leading up to Jordan’s form.
If $T$ is an operator on a complex vector space $V$ with distinct eigenvalues $\lambda_1, \lambda_2, ... \lambda_m$, with corresponding subspaces of generalised eigenvectors $U_1, U_2, ...U_m$, then $V=U_1\oplus U_2\oplus ...\oplus U_m$. For the proof, we have already seen that $dimV=dimU_1+dimU_2+...dimU_m$. Now we can see that each $U_j$ is invariant under T, as if $(T-\lambda_jI)^{d_j}x=0$, so is $(T(T-\lambda_jI)^{d_j})x=0=(T-\lambda_jI)^{d_j}Tx$. As a result, if we consider $T|U$, where $U=U_1+U_2+....U_m$, then $T|U$ has the same eigenvalues and multiplicities as T. We therefore get the desire result.
Note that the generalised eigenspaces are disjoint, as they should be. Generalised eigenspaces of T are invariant under T. Consider the eigenvalues $\lambda_1$ and $\lambda_2$ as examples. So if $\nu\in null(T-\lambda_1 I)^{d_1}$, then so do $T(\nu)$ and $c\nu$, where c is any scalar. As a result, even $(T-\lambda_2 I)^{d_2}\nu\in null(T-\lambda_1 I)^{d_1}$. Now assume that $\nu \in null(T-\lambda_2 I)^{d_2}$. Then $(T-\lambda_2 I)^{d_2}\nu=(T-\lambda_2I)(T-\lambda_2I)^{d_2-1}\nu=0$. Therefore $T(T-\lambda_2 I)^{d_2-1}\nu=\lambda_2(T-\lambda_2 I)^{d_2-1}\nu$. So now $(T-\lambda_1 I)^{d_1}(T-\lambda_2 I)^{d_2-1}\nu=(\lambda_2-\lambda_1)^{d_1}(T-\lambda_2 I)^{d_2-1}\nu$. So if $(T-\lambda_2 I)^{d_2-1}\nu\neq 0$, then $\lambda_1=\lambda_2$. If $\nu \in null((T-\lambda_2 I)^{d_2})\cap null((T-\lambda_1 I)^{d_1})$, then $\nu \in null((T-\lambda_1 I)^{d_1-1})$. Applying the same argument as above, we get that if $\lambda_1\neq\lambda_2$, then $\nu \in null((T-\lambda_1 I)^{d_1-2})$ an so on till we reach $\nu \in null(I)$, giving $\nu=0$.
Now, if $N\in L(V)$ is nilpotent, there exist vectors ($\nu_1,\nu_2...\nu_k\in V$), such that ($\nu_1,N\nu_1,...,N^{m(\nu_1)}\nu_1,.....\nu_k,N\nu_k,...,N^{m(\nu_k)}\nu_k$) is a basis of $V$, and ($N^{m(\nu_1)}\nu_1,N^{m(\nu_2)}\nu_2,....,N^{m(\nu_k)}\nu_k$) is a basis of $null(N)$, where $m(\nu_j)$ is the largest non negative integer such that $N^{m(\nu_j)}\nu_j\neq 0$.
For the proof, we use induction. As $N$ is nilpotent, $dim(ran(N)). Assume that the claim holds for all vector spaces of lesser dimensions. So, ($u_1,Nu_1,...,N^{m(u_1)}u_1,.....u_j,Nu_j,...,N^{m(u_j)}u_j$) is a basis of $ran(N)$ and ($N^{m(u_1)}u_1,N^{m(u_2)}\nu_2,....,N^{m(u_j)}u_j$) is a basis of $null(N)\cap ran(N)$.
As each $u_r \in ran(N)$, we can choose a corresponding $\nu_r\in V$, such that $N\nu_r=u_r$ for each r. Therefore, $m(\nu_r)=m(u_r)+1$. Now we choose a subspace $W$ of $null(N)$ such that $null(N)=(null(N)\cap ran(N))\oplus W$, and then a basis ($\nu_{j+1},\nu_{j+2}...\nu_k$). As these are in $null(N)$, $m(\nu_{j+1},m(\nu_{j+2})...=0$.
To show that the basis for V in the statement is linearly independent, suppose that $\displaystyle{ \sum_{r=1}^k \sum_{s=0}^{m(\nu_r)} a_{r,s}N^s(\nu_r)=0}$. Then $\displaystyle{ \sum_{r=1}^k \sum_{s=0}^{m(\nu_r)} a_{r,s}N^{s+1}(\nu_r)=0=\sum_{r=1}^k \sum_{s=0}^{m(\nu_r)} a_{r,s}N^{s}(u_r)}$.
Now by the induction hypothesis, $a_{r,s}=0$ for $1\leq r\leq j$. Also, $a_{1,m}(\nu_1)N^{m(\nu_1)}\nu_1+ ... + a_{j,m}(\nu_j)N^{m(\nu_j)}\nu_j=0$ as ($N^{m(u_1)}u_1,N^{m(u_2)}\nu_2,....,N^{m(u_j)}u_j$) is a basis of $null(N)\cap ran(N)$ and $a_{j+1,o}\nu_{j+1}+....+a_{k,0}\nu_k=0$ as ($\nu_{j+1},\nu_{j+2}...\nu_k$) is a basis of $W$.
Now, as assumed, $dim(null(n)\cap ran(N))=j$ and $dim(null(N))=k$. It can be seen that $\displaystyle{ \sum_{r=0}^{k}(m(\nu_r)+1)= dim(V)}$. Therefore the set of vectors under consideration is indeed a basis of $V$. Also, ($N^{m(\nu_1)}\nu_1,N^{m(\nu_2)}\nu_2,....,N^{m(\nu_k)}\nu_k$)
= ($N^{m(u_1)}u_1,N^{m(u_2)}u_2,....,N^{m(u_k)}u_k, \nu_{j+1},\nu_{j+2},...,\nu_k$) is a basis of $null(N)$.
To get to the Jordan Canonical form, consider a nilpotent operator $N\in L(V)$. Then for the vectors ($N^{m(\nu_j)}\nu_j,N^{m(\nu_j)-1}\nu_j,....,N\nu_j$). With these, $N$(first vector)=0, $N$(second vector)=(first vector). The resultant block has $0$s on the diagonal and $1$s on the super diagonal.
For a $T \in L(V)$, with distinct eigenvalues $\lambda_1, \lambda_2,...\lambda_m$, as $V=U_1\oplus U_2 \oplus ...\oplus U_m$ where each $(T-\lambda_iI)|U_i$ is nilpotent, we have our Jordan basis, giving $T$ the form as in the opening image. The exact structure of the Jordan form depends not only on the arithmetic and geometric multiplicities of the eigenvalues, but also the dimensions of the powers of $(T-\lambda_iI)$, with $dim(null(T-\lambda_iI)^{k+1}-dim(null(T-\lambda_iI)^{k}$ being the number of Jordan blocks of size $>k$ corresponding to the eigenvalue $\lambda$.
Note that two matrices are conjugate if and only if they have the same Jordan canonical forms, up to a permutation of the Jordan Blocks.
|
2019-10-24 01:50:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 176, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9833319187164307, "perplexity": 260.7788472558017}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987838289.72/warc/CC-MAIN-20191024012613-20191024040113-00408.warc.gz"}
|
https://gateoverflow.in/314090/profit-and-loss-self-doubt?show=314103
|
85 views
The marked price of a table is Rs. 1200, which is 20% above the cost price. It is sold at a discount of 10% on the marked price. Find the profit percent.
(a) 10%
(b) 8%
(c) 7.5%
(d) 6%
What approach can I use for these type of questions?
| 85 views
+1 vote
First understand the terminology of Profit and loss chapter like CP, SP,MP, Discount etc . and then Just try to relate the mathematical part with the sentence part. Just try to visualize what the question saying.
Given : $MP$ is $1200$ and $MP$ is $20\%$ above $CP$. ( here they are talking about % with respect to CP and not MP)
$\Rightarrow$ $CP + 20\% CP$ = $MP$
$\Rightarrow$ $\frac{6}{5}CP =1200$
$\Rightarrow$ $CP = 1000$
$Discount$ = $10\%$ on $MP$ = $\frac{1}{10} *1200 = 120$
$SP$ = $MP - Discount$ = $1200 - 120$ = $1080$
The story goes like this
The shopkeeper bought the table at Rs $1000$(CP) and told that I will sell it at Rs $1200$(MP)
One customer came and bought it at Rs $1080$(SP) after getting a discount of Rs $120$.
So the $profit\ earned\ by\ shopkeeper$
=$the\ price\ at\ which\ he\ sold\ - the\ price\ at\ which\ he\ bought$
= $1080 - 1000$
= $Rs\ 80$
So $Profit\% = \frac{Profit}{CP} * 100\% = \frac{1080-1000}{1000} * 100\% = \frac{80}{1000} * 100\% = 8 \%$
(NOTE :- Profit % is always calculated wrt to CP )
$\therefore$ Option $B$. $8\%$ is the correct answer.
by Boss (21.6k points)
selected
Lets say that the cost price is x.
Now x*1.2=1200
so x=1000. So our cost price is 1000.
Now 10% discount of marked price = 1200* 0.9 =1080
which is 8% of 1000 + 1000 = 1080
so ans is b
by Active (3.5k points)
|
2019-12-10 16:56:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5256850719451904, "perplexity": 1824.5501697178702}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540528457.66/warc/CC-MAIN-20191210152154-20191210180154-00330.warc.gz"}
|
https://math.stackexchange.com/questions/4082957/geodesics-under-levi-civita-connection
|
Geodesics under Levi-Civita connection
I read somewhere that minimum energy paths are geodesics under the Levi-Civita connection, on a Riemannian energy landscape.
The Levi-Civita connection is the "unique connection on the tangent bundle of a manifold that preserves the Riemannian metric and is torsion-free".
What exactly is the intuition of geodesics under the Levi-Civita connection? Why would torsion-free connections give geodesics with least energy?
• What exactly are you calling a torsion-free geodesic? In general, the word "torsion-free" refers to the connexion, not the geodesic. Mar 30, 2021 at 13:27
• @Didier right, edited. Mar 30, 2021 at 13:27
• Some clarifications: The energy of a curve (path) is defined using the Riemannian metric (and not the Levi-Civita connection). When you compute the variational formula for energy, the Levi-Civita connection appears naturally. The fact that the Levi-Civita connection preserves the metric is used in this. The critical points of energy are constant speed geodesics. But constant speed geodesics are not necessarily energy minimizing. They are locally energy minimizing (any sufficiently short piece of the curve is energy minimizing)., Mar 30, 2021 at 15:44
• @Deane interesting, so does this mean the Levi-Civita connection can be derived by minimizing the energy of a curve with the constraint that the speed is constant? Or that this is a equivalent definition of the L-C connection? Mar 30, 2021 at 15:52
• I don't think so. The variational equation uses only the Levi-Civita connection for the covariant derivative of the velocity vector of a curve with respect to itself. This, I believe, is not enough to define the connection for two arbitrary vector fields. Mar 30, 2021 at 16:21
If $$\nabla$$ is any connection and $$f$$ a function, its Hessian with respect to $$\nabla$$ is $$\mathrm{Hess}^{\nabla}f = \nabla \mathrm{d}f$$, and one can see, after a messy calculation, that: $$\mathrm{Hess}^{\nabla}f(X,Y) - \mathrm{Hess}^{\nabla}f(Y,X) = \pm\mathrm{d}f\left([X,Y] - (\nabla_XY - \nabla_YX) \right)$$ (where the $$\pm$$ sign is here because I don't remember the exact sign, but the computations are not that hard, just messy.) Hence, Hessians are symmetric if and only if the connection is torsion-free. This is the main motivation to consider torsion-free connections: in the euclidean space, Hessians are symmetric!
Moreover, the fundamental theorem of Riemannian geometry tells us that on a Riemannian manifold, there is a unique connexion that is torsion-free and lets the metric invariant, that is: $$\forall X,Y,Z, \left(\nabla_Zg\right)(X,Y) = Z\cdot g\left(X,Y \right) - g\left(\nabla_ZX,Y\right) - g\left(X,\nabla_ZY\right) = 0.$$ (compare with the euclidean case, where $$\langle X,Y\rangle ' = \langle X',Y\rangle + \langle X, Y' \rangle$$.) This theorem thus says that given any Riemannian metric $$g$$, there is a connection that is better than others: Hessians are symmetric and the metric is invariant under the action. We call it the Levi-Civita connexion.
If a connection is chosen, a geodesic is a parametrized curve satisfying the equation of geodesics : $$\nabla_{\gamma'}\gamma' = 0$$. Thus a curve $$\gamma$$ is a geodesic with respect to the connection, and can be a geodesic for some connection $$\nabla^1$$ but not for another connecion $$\nabla^2$$. Therefore, your question does not really have sense: we do not say that a connexion gives the least energy of a geodesic. I think you got confused, believing that being a geodesic is an intrinsic notion, but it really depends on the connection you consider.
Now, suppose $$(M,g)$$ is a Riemannian manifold endowed with its Levi-Civita connexion. Then if $$\gamma : [a,b] \to M$$ is a curve, we define its energy to be: $$E(\gamma) = \frac{1}{2}\int_a^b \|\gamma'\|^2$$ and one can show that, in the space of all curves $$\{\gamma : [a,b] \to M\}$$ with same end points, a curve $$\gamma$$ is a point where the energy functional is extremal if and only if $$\nabla_{\gamma'}\gamma'=0$$, that is if and only if $$\gamma$$ is a solution of the equation of geodesics. Hence, a minimizer of the energy functional is a geodesic.
• Quibble: $\nabla_{\gamma'}\gamma' = 0$ implies the parameterized curve is not just a geodesic but a constant speed geodesic. Mar 30, 2021 at 15:45
• Thanks! So your definition of a geodesic as an energy-minimizer gives different geodesics (and different minimum energies) based on which $\nabla$ we use. I'm still unsure about why the Levi-Civita connection in particular provides a smaller minimum energy compared to other choices of connection. Mar 30, 2021 at 16:09
• The Levi-Civita does not provide a minimum energy compared to other connection: this, I think is another question that is unrelated. Also, I did not define geodesics as energy minimizing curves. I defined the as solution to the equation of geodesics. Mar 30, 2021 at 17:24
• @Didier well, geodesics by definition minimize energy under the given connection. My interpretation of the statement "minimum energy paths are geodesics under the Levi-Civita connection" has the emphasis on "under the L-C connetion" rather than "are geodesics". So I was wondering why (or whether) the energy-minimizer over geodesics is the one under the L-C connection. Mar 30, 2021 at 17:29
• +1 by the way for the explanation about geodesics being energy-minimizing paths. Mar 30, 2021 at 17:33
|
2022-06-29 10:57:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 20, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8359274864196777, "perplexity": 284.94443401255916}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103626162.35/warc/CC-MAIN-20220629084939-20220629114939-00129.warc.gz"}
|
http://mfleck.cs.illinois.edu/study-problems/inequality-induction/inequality-induction-1-sol.html
|
# Induction problem 1
Here's the claim again, for reference:
Claim: For all positive integers $$n \geq 2$$, $$2^n n! < (2n)!$$
### Solution
Proof by induction on n.
Base: When n=2, $$2^n n! = 4 \cdot 2 = 8$$ and $$(2n)! = 4! = 24$$, so $$2^n n! < (2n)!$$ is true for this value of n.
Induction Hypothesis: Suppose that $$2^n n! < (2n)!$$ is true for $$n = 2, 3, \ldots, k$$ where k is an integer $$\geq 2$$.
Rest of induction step: We need to show that $$2^{k+1} (k+1)! < (2(k+1))!$$
Notice that $$(2(k+1))! = (2k+2)!$$.
By the inductive hypothesis, we know that $$2^{k}k! < (2k)!$$. Therefore \begin{align*} (2k+2)! & = (2k+2) (2k+1) (2k)! \\ &= (2k+1) \left[2 (k+1)\right] (2k)! \\ & > (2k+1) \left[2 (k+1)\right] 2^k k! \\ & = (2k+1) 2^{k+1} (k+1)! \\ \end{align*}
Since $$k \geq 2$$, $$(2k+1) \geq 5 > 1$$. So
$$(2k+1) 2^{k+1} (k+1)! > 2^{k+1} (k+1)!$$.
So therefore we have
$$(2k+2)!> (2k+1) 2^{k+1} (k+1)! > 2^{k+1} (k+1)!$$
So $$2^{k+1} (k+1)! < (2k+2)!$$, which is what we needed to show.
|
2018-01-23 19:24:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.000009298324585, "perplexity": 812.7942736089155}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084892238.78/warc/CC-MAIN-20180123191341-20180123211341-00729.warc.gz"}
|
https://socratic.org/questions/what-percent-of-120-is-66
|
What percent of 120 is 66?
Jan 26, 2017
55% of 120 is 66
Explanation:
First, lets call the percent we are looking for "p".
"Percent" or "%" means "out of 100" or "per 100", Therefore p% can be written as $\frac{p}{100}$.
When dealing with percents the word "of" means "times" or "to multiply".
Putting this altogether we can write this equation and solve for $p$ while keeping the equation balanced:
$\frac{p}{100} \times 120 = 66$
$\frac{\textcolor{red}{100}}{\textcolor{b l u e}{120}} \times \frac{p}{100} \times 120 = \frac{\textcolor{red}{100}}{\textcolor{b l u e}{120}} \times 66$
$\frac{\cancel{\textcolor{red}{100}}}{\cancel{\textcolor{b l u e}{120}}} \times \frac{p}{\textcolor{red}{\cancel{\textcolor{b l a c k}{100}}}} \times \textcolor{b l u e}{\cancel{\textcolor{b l a c k}{120}}} = \frac{6600}{\textcolor{b l u e}{120}}$
$p = \frac{6600}{120}$
$p = 55$
|
2020-03-30 01:46:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 7, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5969115495681763, "perplexity": 2395.6474312424084}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370496330.1/warc/CC-MAIN-20200329232328-20200330022328-00525.warc.gz"}
|
https://www.albert.io/ie/linear-algebra/offset-eigenvalues
|
Free Version
Moderate
# Offset Eigenvalues
LINALG-LHHKX2
If $A$ is an $n\times n$ complex matrix and $I$ is the $n\times n$ identity matrix and if $B=A+\rho I$ for $\rho\in\mathbb{C}$ then
A
$B$ has more eigenvalues than $A$ if $\rho>0$.
B
$B$ has fewer eigenvalues than $A$ if $\rho<0$.
C
$A$ and $B$ have the same eigenvalues for all $\rho\in\mathbb{R}$.
D
$A$ and $B$ have the same number of eigenvalues for all $\rho\in\mathbb{R}$.
|
2017-03-23 12:16:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7414254546165466, "perplexity": 339.2774972900883}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218186895.51/warc/CC-MAIN-20170322212946-00554-ip-10-233-31-227.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/sinking-a-cylinder-with-varying-hole-sizes.972171/page-2
|
Sinking a cylinder with varying hole sizes
lloydthebartender
$dV=vA\frac{1}{dt}$ where A is the area cross-sectional area of the hole?
haruspex
Homework Helper
Gold Member
2018 Award
$dV=vA\frac{1}{dt}$ where A is the area cross-sectional area of the hole?
Almost, but 1/dt?? That doesn't mean anything. Also, I specified a as the cross section of the hole and used A for that of the cylinder. Please stick to that.
lloydthebartender
Almost, but 1/dt?? That doesn't mean anything. Also, I specified a as the cross section of the hole and used A for that of the cylinder. Please stick to that.
Since $Q=\frac{dV}{dt}$, and $Q=va$, so $dV=vadt$?
haruspex
Homework Helper
Gold Member
2018 Award
Since $Q=\frac{dV}{dt}$, and $Q=va$, so $dV=vadt$?
Right.
So what is the increase in the depth of water (y-h) in the cylinder in dt?
lloydthebartender
Right.
So what is the increase in the depth of water (y-h) in the cylinder in dt?
$d(A(y-h))=vadt$
$dt=\frac{d(A(y-h))}{va}$?
haruspex
Homework Helper
Gold Member
2018 Award
$d(A(y-h))=vadt$
$dt=\frac{d(A(y-h))}{va}$?
Right.
But A and h are constants, so can you rewrite that in the form dy/dt=...?
haruspex
Homework Helper
Gold Member
2018 Award
The next step is the Bernoulli equation. Need to be careful, though. The v I defined is relative to the cylinder, and the cylinder is moving. Bernoulli will tell you the gained velocity in the lab frame.
lloydthebartender
Right.
But A and h are constants, so can you rewrite that in the form dy/dt=...?
$\frac{dy}{dt}=\frac{va}{A-h}$?
The next step is the Bernoulli equation. Need to be careful, though. The v I defined is relative to the cylinder, and the cylinder is moving. Bernoulli will tell you the gained velocity in the lab frame.
So if $u$ is the negative, downwards velocity of the cylinder [at the hole = above the hole]
$\frac{1}{2}\rho (v+u)^2+\rho gz_{1}+P_{1}=\frac{1}{2}\rho (\frac{a(v+u)}{A})^2+\rho gz_{2}+P_{2}$?
haruspex
Homework Helper
Gold Member
2018 Award
$\frac{dy}{dt}=\frac{va}{A-h}$?
No, you've made some mistake in the algebra. Try again. A-h makes no sense, you cannot subtract a distance from an area.
So if $u$ is the negative, downwards velocity of the cylinder [at the hole = above the hole]
$\frac{1}{2}\rho (v+u)^2+\rho gz_{1}+P_{1}=\frac{1}{2}\rho (\frac{a(v+u)}{A})^2+\rho gz_{2}+P_{2}$?
Not sure which is 1 and which is 2, but the water is stationary below the hole.
As already noted, z1=z2, so you can cancel those.
What expressions do you have for P1 and P2?
lloydthebartender
No, you've made some mistake in the algebra. Try again. A-h makes no sense, you cannot subtract a distance from an area.
$\frac{d(Ay)−d(Ah)}{dt}=va$
$\frac{Ad(y)−Ah}{dt}=va$
$\frac{d(y)−h}{dt}=\frac{va}{A}$
$\frac{d(y)}{dt}=\frac{va}{A}+h$
Is this correct?
No, you've made some mistake in the algebra. Try again. A-h makes no sense, you cannot subtract a distance from an area.
Not sure which is 1 and which is 2, but the water is stationary below the hole.
As already noted, z1=z2, so you can cancel those.
What expressions do you have for P1 and P2?
I see.
Below the hole = Above the hole
$\frac{1}{2}\rho (0) + \rho gz_{1}+P_{1}=\frac{1}{2}\rho (v+u) + \rho gz_{2}+P_{2}$
Since $z_{1}=z_{2}$
$P_{1}=\frac{1}{2}\rho (v+u) +P_{2}$
I rewrite pressure in terms of the depth from the surface of water?
$P_{1} = \rho gy$ and $P_{2} = \rho gh$ so
$\rho gy=\frac{1}{2}\rho (v+u)+\rho gh$
$gy=\frac{1}{2} (v+u)+ gh$
haruspex
Homework Helper
Gold Member
2018 Award
Is this correct?
No. A and h are constants, so what is the change in Ah, d(Ah), in time dt?
You would have seen something is wrong if you had checked dimensional consistency.
$\frac{1}{2}\rho (0) + \rho gz_{1}+P_{1}=\frac{1}{2}\rho (v+u) + \rho gz_{2}+P_{2}$
Since $z_{1}=z_{2}$
$P_{1}=\frac{1}{2}\rho (v+u) +P_{2}$
I rewrite pressure in terms of the depth from the surface of water?
$P_{1} = \rho gy$ and $P_{2} = \rho gh$ so
$\rho gy=\frac{1}{2}\rho (v+u)+\rho gh$
$gy=\frac{1}{2} (v+u)+ gh$
Yes, except that you have omitted the power of 2 on the velocity term. Again, dimensionally inconsistent.
lloydthebartender
No. A and h are constants, so what is the change in Ah, d(Ah), in time dt?
You would have seen something is wrong if you had checked dimensional consistency.
Yes, except that you have omitted the power of 2 on the velocity term. Again, dimensionally inconsistent.
Oh...
$\frac{d(Ay)-d(Ah)}{dt}=va$
$\frac{d(Ay)}{dt}-\frac{d(Ah)}{dt}=va$
$\frac{dy}{dt}=\frac{va}{A}$
Yes, except that you have omitted the power of 2 on the velocity term. Again, dimensionally inconsistent.
$gy=\frac{1}{2}(v+u)^2+gh$
$2g(y-h)=(v+u)^2$
$\sqrt{2g(y-h)}=v+u$
So the downwards velocity of the cylinder is
$u=\sqrt{2g(y-h)}-v$?
lloydthebartender
No. A and h are constants, so what is the change in Ah, d(Ah), in time dt?
You would have seen something is wrong if you had checked dimensional consistency.
Yes, except that you have omitted the power of 2 on the velocity term. Again, dimensionally inconsistent.
At this point do I return to the force equations? I'm still not sure how this all connects back to it.
haruspex
Homework Helper
Gold Member
2018 Award
At this point do I return to the force equations? I'm still not sure how this all connects back to it.
Your post #37 is correct, but note the dy/dt and u are the same thing, so you can combine those equations to eliminate u.
lloydthebartender
Your post #37 is correct, but note the dy/dt and u are the same thing, so you can combine those equations to eliminate u.
$\frac{va}{A}=\sqrt{2g(y-h)}-v$
$a=\frac{A(\sqrt{2g(y-h)}-v)}{v}$
So now I need to make this equation in terms of $t$ and $a$, right?
lloydthebartender
Your post #37 is correct, but note the dy/dt and u are the same thing, so you can combine those equations to eliminate u.
Since
$F_{buoyancy}=\rho gV$ and $V=Ah$
$F_{buoyancy}=\rho gAh$
Because $frac_{dy}{dt}=frac_{va}{A}$
$t=\int \frac{va}{A}dy$
Since $v$, $a$, $A$ are constants
$t= \frac{vay}{A}$, so $A=\frac{vay}{t}$
$V=\frac{vay}{t}h$
$F_{weight}=F_{buoyancy} + F_{drag}$
$(m_{water}+m_{cylinder})g=\rho g \frac{vay}{t}h -bv$
$(\rho V+m_{cylinder})g=\rho g \frac{vay}{t}h -bv$
I suppose I do some rearranging magic for the $v$ of the drag force using the equation I got in #40?
haruspex
Homework Helper
Gold Member
2018 Award
I could see something was going wrong. Just traced it to this:
$P_{2} = \rho gh$
That is not correct. What is the water depth just above the hole?
lloydthebartender
I could see something was going wrong. Just traced it to this:
That is not correct. What is the water depth just above the hole?
Is it the $P_{2}=\rho gy$ because they're at the same height ($z_{1}=z_{2}$)? But that would mean $P_{1}=P_{2}$?
haruspex
Homework Helper
Gold Member
2018 Award
Is it the $P_{2}=\rho gy$ because they're at the same height ($z_{1}=z_{2}$)? But that would mean $P_{1}=P_{2}$?
No. Look at your diagram in post #24. What is the depth of water inside the cylinder?
lloydthebartender
No. Look at your diagram in post #24. What is the depth of water inside the cylinder?
Right, so $y-h$; $P_{2}=\rho g(y-h)$
Homework Helper
Gold Member
2018 Award
lloydthebartender
So I get
$\rho gy=\frac{1}{2}ρ(v+u)+ρg(y-h)$, and rearrange for $u$?
haruspex
Homework Helper
Gold Member
2018 Award
So I get
$\rho gy=\frac{1}{2}ρ(v+u)+ρg(y-h)$, and rearrange for $u$?
You have forgotten to square the velocity term again.
After fixing that, rearrange for v and eliminate v using the u=dy/dt=va/A equation you had in post #37.
lloydthebartender
You have forgotten to square the velocity term again.
After fixing that, rearrange for v and eliminate v using the u=dy/dt=va/A equation you had in post #37.
$gy=\frac{1}{2}(v+u)^2+g(y-h)$
$2gy-2g(y-h)=(v+u)^2$
$2gh=(v+u)^2$
$\sqrt{2gh}=v+u$
$v=\sqrt{2gh}-u$
$u=\frac{va}{A}$, $v=\frac{uA}{a}$
$\frac{uA}{a}=\sqrt{2gh}-u$
$uA=a\sqrt{2gh}-au$
$u(A+a)=a\sqrt{2gh}$
$u=\frac{a\sqrt{2gh}}{(A+a)}$
haruspex
Homework Helper
Gold Member
2018 Award
$gy=\frac{1}{2}(v+u)^2+g(y-h)$
$2gy-2g(y-h)=(v+u)^2$
$2gh=(v+u)^2$
$\sqrt{2gh}=v+u$
$v=\sqrt{2gh}-u$
$u=\frac{va}{A}$, $v=\frac{uA}{a}$
$\frac{uA}{a}=\sqrt{2gh}-u$
$uA=a\sqrt{2gh}-au$
$u(A+a)=a\sqrt{2gh}$
$u=\frac{a\sqrt{2gh}}{(A+a)}$
Yes!
Last step is to find the time taken. You need to think a bit here about the initial and final values of y. Start by imagining what would happen if there were no hole.
"Sinking a cylinder with varying hole sizes"
Physics Forums Values
We Value Quality
• Topics based on mainstream science
• Proper English grammar and spelling
We Value Civility
• Positive and compassionate attitudes
• Patience while debating
We Value Productivity
• Disciplined to remain on-topic
• Recognition of own weaknesses
• Solo and co-op problem solving
|
2019-10-17 20:10:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8375706076622009, "perplexity": 1252.7637791961256}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986676227.57/warc/CC-MAIN-20191017200101-20191017223601-00154.warc.gz"}
|
https://stats1010-f22.github.io/website/ae/ae-5.html
|
Important
This application exercise is due on 14 Oct at 2:00pm.
Computers let you assemble, manipulate, and visualize data sets, all at speeds that would have wowed yesterday’s scientists. In short, computers give you superpowers! But if you wish to use them, you’ll need to pick up some programming skills. Steve Job said that “computers are bicycles for our minds” because the efficiency rating of humans on bicycles is so incredible, and computers give us similar powers.
One reason computers are so incredible is that they allow us to simulate a multitude of events. Today we will simulate the probability that two of you in this section of Stat1010 have the same birthdays.
library(tidyverse) # for data manipulation
library(vctrs) # to find the length of a tibble
Suppose you are in a classroom with 100 people. If we assume this is a randomly selected group of 100 people, what is the chance that at least two people have the same birthday? Although it is somewhat advanced, we can deduce this mathematically. We will do this later. Here we use a Monte Carlo simulation. For simplicity, we assume nobody was born on February 29. This actually doesn’t change the answer much.
First, note that birthdays can be represented as numbers between 1 and 365, so a sample of 100 birthdays can be obtained like this:
all_bdays <- as_tibble(1:365) # data from where we can sample
bdays_class <-
all_bdays %>% # data from where to sample
slice_sample(n = 100, replace = TRUE) # sample of size 100 with replacement
To check if in this particular set of 100 people we have at least two with the same birthday, we can use the functions group_by and filter, which returns a tibble with a vector of duplicated dates. Here is an example:
bdays_class %>%
group_by(value) %>% # for each birthday
count() %>% # count them
filter(n > 1) # those have more than 1
To estimate the probability of a shared birthday in the group, we repeat this experiment by sampling sets of 100 birthdays over and over. Prior to replicating, we need to write this as a function.
bdays_dups <-
all_bdays %>% # data from where to sample
slice_sample(n = 100, replace = TRUE) %>% # take a sample of n 100
group_by(value) %>% # for each birthday
count() %>% # count them
filter(n > 1) %>% # those have more than 1
vec_n(.) # the number that are duplicated
A function has an input (in this case $$n$$)
1. What does $$n$$ represent in this coding?
num_of_same_birthdays <- function(n){
all_bdays %>% # data from where to sample
slice_sample(n = n, replace = TRUE) %>% # take a sample
group_by(value) %>% # for each birthday
count() %>% # count them
filter(n > 1) %>% # those have more than 1
vec_size(.) # the number that are duplicated
}
To run this function, we do this:
num_of_same_birthdays(10) # run the function
1. How many students are there in class today?
2. Run the coding above but include the number of students in class today.
The law of large numbers says that if we do this many times we should have an estimate of the number of people that have the same birthdays in class today. The function replicate can be used to run functions multiple times.
B <- 500 # the number of times to run
results <- replicate(B, # replicate this number of times
ifelse( # if there are some duplicated birthdays
num_of_same_birthdays(50) >= 1, # our function
1, # give me a 1
0)) # if not, give me a 0
mean(results) # take the mean
Were you expecting the probability to be this high?
People tend to underestimate these probabilities. To get an intuition as to why it is so high, think about what happens when the group size is close to 365. At this stage, we run out of days and the probability is one.
Say we want to use this knowledge to bet with friends about two people having the same birthday in a group of people. When are the chances larger than 50%? Larger than 75%?
Let’s create a look-up table. We can create a function to compute this for any group size. This function runs our function $$500$$ times for every value of $n$. Statisticians will usually do this $$10000$$ times, but this will take a very very very very looooooooooonnnnnggggg time, so we are simplifying.
compute_prob <- # name the function
function(n, B = 500){ # function inputs
results <- # store results
replicate(B, # run num_of_same_birthdays B times
ifelse( # if there are some duplicated birthdays
num_of_same_birthdays(n) >= 1, # our function
1, # give me a 1
0)) # if not, give me a 0
mean(results) # find the mean
}
Using the function map_dbl, we can perform element-wise operations on any function. Note that this may take awhile to run.
n <- seq(1, 60) # which values of n are important
prob <- # save the results as prob
map_dbl(n, # for each value of n
compute_prob) # run the function
We can now make a plot of the estimated probabilities of two people having the same birthday in a group of size n:
as_tibble(n, prob) %>% # the format for ggplot
ggplot() + # draw a graph
geom_point(aes(x = n, y = prob)) # of points
Now let’s compute the exact probabilities rather than use Monte Carlo approximations. Not only do we get the exact answer using math, but the computations are much faster since we don’t have to generate experiments.
To make the math simpler, instead of computing the probability of it happening, we will compute the probability of it not happening, or the compliment. For this, we use the multiplication rule.
Let’s start with the first person. The probability that person 1 has a unique birthday is 1. The probability that person 2 has a unique birthday, given that person 1 already took one, is 364/365. Then, given that the first two people have unique birthdays, person 3 is left with 363 days to choose from. We continue this way and find the chances of all $$n$$ people having a unique birthday is:
$1×\frac{364}{365}×\frac{363}{365}…\frac{365−n+1}{365}$ We can write a function that does this for any number:
exact_prob <- function(n){
1 -
prod(365:(365-n+1))/ # the product from the numerator above
365^(n) # the denominator from above
}
1. Run this coding for the number of people that are in class today.
Now, we run this on multiple values of $$n$$
eprob <- map_dbl(n, exact_prob) # run on multiple values of n
Next we plot the results.
as_tibble(n, eprob) %>% # the format for ggplot
ggplot() + # draw a graph
geom_point(aes(x = n, y = eprob)) # of points
On mercury, the year is only 88 days. Update the coding above to simulate the expected number of people with the same birthdays on Mercury.
Assuming we have the same number of people, would you expect the number of people with the same birthdays to be higher or lower than those on earth?
|
2023-01-30 02:31:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5922741889953613, "perplexity": 913.7194316736736}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499790.41/warc/CC-MAIN-20230130003215-20230130033215-00364.warc.gz"}
|
https://math.sciences.ncsu.edu/event/heekyoung-hahn/
|
Heekyoung Hahn, Duke University, Langlands’ beyond endoscopy proposal and related questions on algebraic groups and combinatorics
• This event has passed.
September 25, 2017 | 3:00 pm - 4:00 pm EDT
Langlands’ beyond endoscopy proposal for establishing
functoriality motivates the study of irreducible subgroups of
$\mathrm{GL}_n$ that stabilize a line in a given representation of
$\mathrm{GL}_n$. Such subgroups are said to be detected by the
representation. In this talk we present a family of results when the
subgroup is a classical group in the important special case where the
representation of $\mathrm{GL}_n$ is a subrepresentation of the triple
tensor product representation $\otimes^3$. If time permits, we will
prove some partition identities arising in this context and discuss
their relation to explicit Satake inversion.
Details
Date
September 25, 2017
Time
3:00 pm - 4:00 pm
Event Category:
Location
SAS 4201
|
2018-06-25 19:46:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6495675444602966, "perplexity": 1726.1957326567094}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267868876.81/warc/CC-MAIN-20180625185510-20180625205510-00348.warc.gz"}
|
https://gateoverflow.in/tag/ugcnetjune2019-i
|
# Recent questions tagged ugcnetjune2019-i
1
To organize discussion method in teaching effectively, which of the following conditions should be met? Topic be easy Topic be declared in advance Topic of common interest Availability of more than one teacher Language facility of participants Select appropriate answer from the options given below: b, c, and e a, b, and c a, b and e c, d and e
1 vote
2
Who developed the theory of ‘Multiple Intelligence’? Alfred Binet L. Thurstone Charles Spearman Howard Gardner
3
From the list of the effective teaching behaviours, identify those which are called key behaviours. Direct, audible and oral delivery to all students Encouraging students to elaborate on an answer Warm and nurturing relationships with learners Varying modes of presentation Preventing misbehaviour with a minimum ... the options given below : i, iv and v i, ii and iii ii, iii and iv iv, v and vi
4
Which of the following statements explains the concepts of inclusive teaching? Teacher facilitates the learning of the gifted students Teacher facilitates the learning of the weak students Teacher takes support of parents of the students to make them learn Teacher makes the students of different backgrounds to learn together in the same class
5
Which among the following best describes the Emotional Intelligence of learners? Understand the emotion of other people and your own Express oneself very strongly Being rational in thinking Adjusting one’s emotion as per situation Being creative and open to criticism Accepting other people as they are Choose your answer from the options given below : a, d, and f d, e and f a, b, and c b, c and d
6
Bibliography given in a research report Helps those interested in further research Shows the vast knowledge of the researcher Makes the report authentic Is an optional part of the report
7
The research design is specifically related to which of the following features in research? Sample selection Formulation of a plan Deciding about the tool for data collection Hypothesis making Choice of a field of inquiry Select your answer from the options given below: ii, iii and iv i, ii and iii ii, iv and v iii, iv and v
8
Through which research method, the manipulation of an independent variable and its effect on dependent variable is examined with reference to a hypothesis under controlled conditions? Ex-post facto research Descriptive research Case study research Experimental research
9
In which of the following research studies interpretation and meaning get more attention than formulation of generalisations? Historical studies Survey studies Philosophical studies Ethnographic studies Hypothetico – deductive studies Ex-post facto studies Choose your answer from the options given below : i, ii and iii iv, v and vi ii, iv and v i, iii and iv
1 vote
10
Which of the following is a plagiarism checking website? http://go.turnitin.com http://www.researchgate.com http://www.editorial.elsevier.com http://www.grammarly.com
11
Michaelangelo is famous for having successfully interpreted the human body. His great achievement is that of the painting of David whose hands reach out as a sign of human capability and potential. It is assumed that the time he lived was ripe for ... is progress of science and anatomy that contributes to civilizations exclusively Human beings possess language which is the only key to knowledge
12
Michaelangelo is famous for having successfully interpreted the human body. His great achievement is that of the painting of David whose hands reach out as a sign of human capability and potential. It is assumed that the time he lived was ripe ... obsessive medieval method of accuracy The classical simplicity and lack of control The case and decorative excess of earlier art Expressionist technique
13
Michaelangelo is famous for having successfully interpreted the human body. His great achievement is that of the painting of David whose hands reach out as a sign of human capability and potential. It is assumed that the time ... Michaelangelo could contain his physical infirmity by artistic excellence Michaelangelo submitted to disease Michaelangelo survived different diseases before pursuing art
14
Michaelangelo is famous for having successfully interpreted the human body. His great achievement is that of the painting of David whose hands reach out as a sign of human capability and potential. It is assumed that the time he lived was ripe ... on their physical structure To retroactively diagnose famous artists and public figures of conditions that were not prevalent during their time
1 vote
15
Michaelangelo is famous for having successfully interpreted the human body. His great achievement is that of the painting of David whose hands reach out as a sign of human capability and potential. It is assumed that the time he lived was ... including physical ailments Michaelangelo's gout and other ailments lessened his efficiency The diseases Michaelangelo faced were due to constant hammering
16
The dance of the honeybee conveying to other bees where nector will be found is an example of Mass communication Group communication Interpersonal communication Intrapersonal communication
17
Choose the correct sequence of communication from the options given below: Information – exposure – persuasion – behavioural change Persuasion – information – behavioural change – exposure Exposure – information – persuasion – behavioural change Behavioural change – information – persuasion – exposure
18
Which of the following is a function of mass media? To transmit culture To formulate national policies To help the judiciary take its decision To stabilise the share market
19
In a classroom situation, a teacher organises group discussion to help arrive at a solution of a problem. In terms of a model communication used, it will be called A transaction model An interaction model A horizontal model A linear model
20
Today’s media-society equation is largely Mystical Morally bound Consumer conscious Tradition centric
1 vote
21
Oar is to rowboat as foot is to running sneaker skateboard jumping
22
For all integers $y>1, \: \langle y \rangle = 2y+(2y-1)+(2y-2) + \dots +1$. What is the value of $\langle 3 \rangle \times \langle 2 \rangle$? Where $\times$ is a multiplication operator? $116$ $210$ $263$ $478$
23
If $152$ is divided into four parts proportional to $3$, $4$, $5$ and $7$, then the smallest part is $29$ $26$ $25$ $24$
24
In a new budget, the price of petrol rose by $25\%$. By how much percent must a person reduce his consumption so that his expenditure on it does not increase? $10\%$ $15\%$ $20\%$ $25\%$
25
A sum of money doubles at compound interest in 6 years. In how many years it will become $16$ times? $16$ years $24$ years $48$ years $96$ years
1 vote
26
If the proposition ‘Houses are not bricks’ is taken to be False then which of the following propositions can be True? All houses are bricks No house is brick Some houses are bricks Some houses are not bricks Select the correct answer from the options given below: b and c a and d b only c only
1 vote
27
Given below are two premises with four conclusions drawn from them. Which of the following conclusions could be validly drawn from the premises? Premises: No paper is pen Some paper are handmade Conclusions: All paper are handmade Some handmade are pen Some handmade are not pen All handmade are paper
Consider the following table that shows the number (in lakhs) of different sizes of LED television sets sold by a company over the last seven years from $2012$ to $2018$. Answer the question based on the data contained in the table: Sale of LED Television sets (in lakhs) of different sizes (in ... $32$-inch LED Television sets in $2017$ compared to that in $2013$? $36 \%$ $56 \%$ $57 \%$ $64 \%$
|
2021-01-27 07:54:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34538131952285767, "perplexity": 2092.809062923991}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704821253.82/warc/CC-MAIN-20210127055122-20210127085122-00414.warc.gz"}
|
https://gamedev.stackexchange.com/questions/52914/lerp-an-object-based-on-timers
|
# Lerp an object based on timers
I'm trying to make a target lerp between two objects based on a timer.
At the moment, I have the following code:
float distCovered = (Time.time - waitTime) * speed;
float fracJourney = distCovered / journeyLength;
if (_moveDown == false)
{
if (startTime + waitTime < Time.time)
{
transform.position = Vector3.Lerp(start.position, end.position, fracJourney);
if (transform.position == end.position)
{
Debug.Log("going down");
_moveDown = true;
transform.position = Vector3.Lerp(end.position, start.position, fracJourney);
}
}
}
if (_moveDown == true)
{
float distCovered1 = (Time.time - goDowntimer) * speed;
float fracJourney1 = distCovered1 / journeyLength;
transform.position = Vector3.Lerp(end.position, start.position, fracJourney1);
if (transform.position == start.position)
{
Debug.Log("going up");
// waitTime = 20;
_moveDown = false;
}
}
This code is in my update function and is attached to each of my objects that I want to move up and down. Each object is able to set their wait time independently of the others, thus i can have 1 move after 5 seconds, another after 10 etc.
Then, each target waits a few seconds and moves back down. However, the movement isn't smooth and it tends to jump a set distance. But then, when it gets back to the bottom it goes crazy between the _movedown bool and wont move.
Does anyone know of a way I can fix these issues?
I do know of the Mathf.PingPong method that constantly moves the object back forth between two points, but that wont allow me to pause the movement at each section. Though, if someone knows a way I can do this, please let me know as well.
• This wasnt part of your question but i noticed your distCovered and was wondering if you were intending to use time as distance or did you want to get the actual distance in 3d space that the object has moved? – Mungoid Mar 28 '13 at 16:14
• The whole lerp line, and the arguments it takes comes from the Unity doc example. – N0xus Mar 28 '13 at 16:17
Im assuming this is in unity so I'll suggest based on that. What you should use is the deltaTime to increment your lerp. You can find that in Time.deltaTime and just use that for the third parameter in Lerp. Or if you need to speed it up or slow down you can multiply that delta time by a number you deem the speed.
For the waiting (it looks like you were trying something with waitTime), you just need to decrement the waitTime at the end of your update by deltaTime and do an if(waitTime <=0) to determine if you can continue to move on.
I didnt really have time to whip up some code for you but if my explanation didnt make sense I can try and get something put together for you =-)
Edit: I decided to quickly modify your code. I havent tested this but i think it should work
void Update () {
float distCovered = (Time.time - waitTime) * speed;
float fracJourney = distCovered / journeyLength;
if (_moveDown == false)
{
if (waitTime <= 0)
{
transform.position = Vector3.Lerp(start.position, end.position, Time.deltaTime);
if (transform.position == end.position)
{
Debug.Log("going down");
_moveDown = true;
transform.position = Vector3.Lerp(end.position, start.position, fracJourney);
}
}
}
if (_moveDown == true)
{
float distCovered1 = (Time.time - goDowntimer) * speed;
float fracJourney1 = distCovered1 / journeyLength;
transform.position = Vector3.Lerp(end.position, start.position, Time.deltaTime);
if (transform.position == start.position)
{
Debug.Log("going up");
waitTime = 20;
_moveDown = false;
}
}
if(waitTime > 0)
waitTime -= Time.deltaTime;
}
|
2021-05-07 01:01:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39540284872055054, "perplexity": 3258.4831642097743}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988774.18/warc/CC-MAIN-20210506235514-20210507025514-00188.warc.gz"}
|
https://hpmuseum.org/forum/thread-2697-post-23549.html
|
newRPL: symbolic numbers
12-22-2014, 11:01 PM
Post: #1
Claudio L. Senior Member Posts: 1,840 Joined: Dec 2013
newRPL: symbolic numbers
Here's an idea to eliminate the "approximate" vs "exact" mode.
What if symbolic numbers are allowed to remain symbolic?
On the 50g, typing '2' (by being quoted, the user intended this as a symbolic expression with the number 2 inside) is automatically converted to the number 2 (ZINT or real depending on flags).
But what if the number was kept symbolic?
This would (in theory) eliminate the need for two separate modes of operation.
When the user wants a quantity to be symbolic, just adding quotes will keep it symbolic.
Within programs, instead of changing flags all the time, or issuing an error, you'd simply quote the quantities and all results would come up symbolic, or leave the numbers unquoted to get a real (approximate) result.
To convert from symbolic to number, the usual ->NUM would work, and we'd need the opposite to turn an existing number into a symbolic (perhaps ->SYMB?).
The idea would be something like this:
{ 1 2 3 4 5 } '2' / would return { '1/2' '2/2' '3/2' '4/2' '5/2' }
{ 45 '45' } SIN would return { 0.707... 'SIN(45)' }
2 INV --> 0.5
'2' INV --> '1/2'
In other words, quoted numbers would work more like symbolic constants do now.
BTW, reals within symbolics are already treated as first class citizens, so quoted reals will work just fine:
'1.5/4' DUP * EVAL returns 0.14.... even in exact mode on the 50g because of the real number in the expression. This same code returns the symbolic '9/64' in newRPL.
Am I missing something? Could this achieve the goal of eliminating the mode switching nightmare? Any problems that could arise from doing this?
12-23-2014, 01:13 AM
Post: #2
John Galt Member Posts: 227 Joined: Oct 2014
RE: newRPL: symbolic numbers
I think it's brilliant. Using quotes around a number to indicate a symbolic is elegant and intuitive, and I can't think of any disadvantage to that entry method. Proposing it in the manner you have makes me wonder why HP didn't already implement it that way.
Going the other way (turning what is now a real or imaginary number back to a symbolic) seems like unscrambling an egg though. The only solution I can imagine would require keeping a practically limitless memory of every step the user did to get to that point. Unless you're talking about a simple one-level "undo" though, which we already have.
Granted I haven't thought about it very much, but it seems a concept worthwhile exploring.
12-23-2014, 12:06 PM
Post: #3
Nigel (UK) Senior Member Posts: 420 Joined: Dec 2013
RE: newRPL: symbolic numbers
(12-22-2014 11:01 PM)Claudio L. Wrote: BTW, reals within symbolics are already treated as first class citizens, so quoted reals will work just fine:
'1.5/4' DUP * EVAL returns 0.14.... even in exact mode on the 50g because of the real number in the expression. This same code returns the symbolic '9/64' in newRPL.
Am I missing something? Could this achieve the goal of eliminating the mode switching nightmare? Any problems that could arise from doing this?
Suppose I have an expression 'r=d/2'. If I store '7' or '7.0' (both quoted) in d, I'd expect to get 7/2 for r when this expression is evaluated. If I store 7.0, I'd expect to get 3.5 and not 7/2. I'm not sure what I'd expect if 7 (without the quotes) was stored in d.
My point is that real numbers or numbers in standard form are generally not intended to be exact. Unless they are deliberately quoted they should stay approximate, even if they appear inside a quoted expression. If this is what you intend your idea seems excellent.
Nigel (UK)
12-23-2014, 03:10 PM
Post: #4
Claudio L. Senior Member Posts: 1,840 Joined: Dec 2013
RE: newRPL: symbolic numbers
(12-23-2014 12:06 PM)Nigel (UK) Wrote: Suppose I have an expression 'r=d/2'. If I store '7' or '7.0' (both quoted) in d, I'd expect to get 7/2 for r when this expression is evaluated. If I store 7.0, I'd expect to get 3.5 and not 7/2. I'm not sure what I'd expect if 7 (without the quotes) was stored in d.
That's a good point. Being that 'r=d/2' is a symbolic expression, once the number is replaced inside the expression, there's no way to tell if it came from a number or a symbolic number, so it would always return 'r=7/2' (doesn't matter if it's 7 or 7.0 in this case).
(12-23-2014 12:06 PM)Nigel (UK) Wrote: My point is that real numbers or numbers in standard form are generally not intended to be exact. Unless they are deliberately quoted they should stay approximate, even if they appear inside a quoted expression. If this is what you intend your idea seems excellent.
You raised a very good point. Replacement inside an expression would always convert a number into symbolic. This differs from the current status-quo but then, if you are manipulating symbolic expressions, why shouldn't it be symbolic?
You are expecting 'r=3.5' and not 'r=7/2' , but is this only a consequence of how the 50g works (that makes you expect that), or a standard in other mathematical software?.
Does anyone know if any other calculators or math software behaves that way?
12-23-2014, 03:34 PM
Post: #5
Claudio L. Senior Member Posts: 1,840 Joined: Dec 2013
RE: newRPL: symbolic numbers
(12-23-2014 01:13 AM)John Galt Wrote: Going the other way (turning what is now a real or imaginary number back to a symbolic) seems like unscrambling an egg though. The only solution I can imagine would require keeping a practically limitless memory of every step the user did to get to that point. Unless you're talking about a simple one-level "undo" though, which we already have.
3.14 --> '3.14'
Just adding the quotes! Wasn't thinking of undoing operations to get the original expression that led to the number (that would be cool, but like you said we'd need a more powerful device, perhaps a phone/tablet to keep the history).
Within the limitation we have, at most something like QPI could be implemented.
12-23-2014, 05:22 PM
Post: #6
Claudio L. Senior Member Posts: 1,840 Joined: Dec 2013
RE: newRPL: symbolic numbers
(12-23-2014 12:06 PM)Nigel (UK) Wrote: Unless they are deliberately quoted they should stay approximate, even if they appear inside a quoted expression.
Inside symbolics, numbers would have to be flagged as either exact or approximated, effectively moving from a global flag to a flag on each number.
Following the idea that an approximated number "turns" an exact number into approximated this would be:
'1' '1/2' + --> '3/2'
'1' 0.5 + --> '1.5' (here 1.5 would be an approx. symbolic number)
'3/2*X' 3 * --> '4.5*X' (here 4.5 is approx)
'3/2*X' '3' * --> '9/2*X'
The other approach would be the opposite: exact numbers turn approximated numbers into exact:
1 '1/2' + --> '3/2'
'1' 0.5 + --> '3/2'
'3/2*X' 3 * --> '9/2*X'
'3*X' 1.5 * --> '9/2*X'
I can't see any problem with either approach, perhaps a flag "prefer exact" or "prefer approx." could select between these two approaches, but we are back to a system-wide flag, not too different from "exact mode" in the 50g, except if you use only quoted numbers, or only unquoted numbers, the flag won't affect your results (only the mix would present problems).
The main difference of doing this vs. the 50g would be that exact and approximated terms can coexist within the same expression:
'3.5*X+7/2*Y' (where we'd have to indicate somehow when we type the expression that the 3.5 is approximated, otherwise it's exact even though is not an integer)
Adding 'X' to the above expression, would be:
'3.5*X+7/2*Y' 'X' + --> '4.5*X+7/2*Y' (using prefer approx.)
'3.5*X+7/2*Y' 'X' + --> '9/2*X+7/2*Y' (using prefer exact)
Notice the second term remains unaltered. The above example on the 50g triggers the "Approximate mode on?" question (after pressing EVAL). If accepted, it kills all fractions in the expression, if not accepted it errors and returns the original expression '3.5*X+7/2*Y+X'.
One more problem with this:
Typing a whole expression from the keyboard, we need to indicate somehow when a number is exact or not. Outside a symbolic we add quotes, but what to do when we are typing inside a symbolic?
'3.5*X' typed would make 3.5 an exact number, so:
'3.5*X' DUP * would return a fraction (49/4), as we would be multiplying two exact numbers.
We'd also need a way to visually distinguish exact vs approx. numbers on screen (bold font?).
12-23-2014, 05:57 PM
Post: #7
Nigel (UK) Senior Member Posts: 420 Joined: Dec 2013
RE: newRPL: symbolic numbers
In Mathematica (from memory) and in the AUTO mode (i.e., neither EXACT nor APPROX) on TI calculators such as the TI-89, the status of a number as exact or approximate is indicated by the presence or absence of a decimal point. So 40320 is an exact number, while 40320. is approximate. Sin[4] in Mathematica simply returns Sin[4] (i.e., no evalulation), whereas Sin[4.] returns a real number. One approximate number in a calculation poisons the rest of the calculation, so that 3/2 +0.5 would be 2. rather than 2 , and substituting d=7. into 'r=d/2' would give 3.5 for r.
I have always found this to work well. When using my TI calculators (yes! I do use them and I'm not ashamed!!) I cannot remember ever needing the EXACT or APPROXIMATE modes.
I think that the quoting idea could work in the same way as the decimal point approach so long as one unquoted number poisons the entire expression it appears in. I think this is reasonable: if one number is inexact, anything calculated from that number must also be inexact. Maybe the decimal point approach would be easier for the user to enter, though? It is quicker to type 2*(3+4) than '2'*('3'+'4'). However, the elegance of the quoting idea is very appealing! I'm really not sure what is best.
Nigel (UK)
12-23-2014, 09:01 PM
Post: #8
Claudio L. Senior Member Posts: 1,840 Joined: Dec 2013
RE: newRPL: symbolic numbers
(12-23-2014 05:57 PM)Nigel (UK) Wrote: It is quicker to type 2*(3+4) than '2'*('3'+'4').
Typing is the only problem I see with using quotes. In your example, it's much more comfortable to type the dot.
You'd have to press the quote, type the number, then cursor right to get out of the quote.
Makes writing expressions a pain.
Or, accept that every number you type inside quotes will become exact, dot or no dot.
Would that be a nuisance?
A last resort could be to use a trailing dot as the approx. indicator inside symbolics:
2 is a number --> approx.
'2' is symbolic --> exact
'2.5' is symbolic, dot is there but is not trailing --> exact
'2.5.' is an approx. number
12-23-2014, 09:27 PM
Post: #9
brouhaha Member Posts: 142 Joined: Dec 2013
RE: newRPL: symbolic numbers
(12-23-2014 05:57 PM)Nigel (UK) Wrote: It is quicker to type 2*(3+4) than '2'*('3'+'4').
Yes, but wouldn't one just type that as '2*(3+4)' ?
12-23-2014, 09:49 PM
Post: #10
Nigel (UK) Senior Member Posts: 420 Joined: Dec 2013
RE: newRPL: symbolic numbers
(12-23-2014 09:01 PM)Claudio L. Wrote:
(12-23-2014 05:57 PM)Nigel (UK) Wrote: It is quicker to type 2*(3+4) than '2'*('3'+'4').
Typing is the only problem I see with using quotes. In your example, it's much more comfortable to type the dot.
You'd have to press the quote, type the number, then cursor right to get out of the quote.
Makes writing expressions a pain.
Or, accept that every number you type inside quotes will become exact, dot or no dot.
Would that be a nuisance?
A last resort could be to use a trailing dot as the approx. indicator inside symbolics:
2 is a number --> approx.
'2' is symbolic --> exact
'2.5' is symbolic, dot is there but is not trailing --> exact
'2.5.' is an approx. number
The trailing dot is a reasonable idea. However, is it necessary? I'm not sure why anyone would want a number like 2.5 to be treated as an exact number. Perhaps this is because I'm a physics teacher - I insist that all calculated results are given to an appropriate number of significant figures, usually no more than three. Anyone who writes 5/2 ohms for the resistance of a resistor would lose a mark, because they are claiming to know the resistance to an infinite number of significant figures and that can never happen.
So I would suggest:
2 or 2. is a number --> approx.
'2' is symbolic --> exact
'2.5' is symbolic --> approx because of the dot
'2.5.' is symbolic ... --> exact! Exact numbers with decimals might be needed sometimes, but for me at least they would be a rare occurrence and so they should be the harder of the two to type.
I'm not entirely happy with this either. I shall continue to think!
Nigel (UK)
12-24-2014, 03:15 AM
Post: #11
Claudio L. Senior Member Posts: 1,840 Joined: Dec 2013
RE: newRPL: symbolic numbers
(12-23-2014 09:49 PM)Nigel (UK) Wrote: The trailing dot is a reasonable idea. However, is it necessary? I'm not sure why anyone would want a number like 2.5 to be treated as an exact number.
Perhaps this is because I'm a physics teacher - I insist that all calculated results are given to an appropriate number of significant figures, usually no more than three. Anyone who writes 5/2 ohms for the resistance of a resistor would lose a mark, because they are claiming to know the resistance to an infinite number of significant figures and that can never happen.
Sometimes in engineering you end up with expressions that are for the most part theoretical, then patched with some obscure real coefficients. It's not that I believe the coefficient is exact (it was likely picked arbitrarily by a committee of researchers that had too much caffeine), but I don't like that coefficient to "eat" all other exact coefficients which do have a meaning.
For example, a simple beam deflection formula at the center: d=5/384*w*L^4/EI, where 5/384 are exact numbers from symbolic integration. But then the engineers mess it up with let's say a factor of 1.5 to account for long term material behavior, making it:
d=1.5*5/384*w*L^4/EI
In cases like this, it's better if 1.5 behaves like an exact value (eventually converted to 3/2), so it's easier to preserve the 5/384 fraction that engineers quickly recognize as a "simple beam formula adjusted with some factor", rather than some obscure factor that you don't know where it comes from.
The other case is also true: sometimes you want that factor to eat all other numbers and give you a single real coefficient. For example if you are doing tables for different cases of the beam deflection above and you just want the final coefficient.
The tables can be expressed as d= m * (w*L^4/EI), where your m=k*5/384, and k varies according to different factors (it may even be integer or 1).
In that case, you want the k value to be treated as approx. and "eat" the fraction, to produce the desired expression format.
The issue is sometimes more visual than related to the significance of the number itself.
(12-23-2014 09:49 PM)Nigel (UK) Wrote: So I would suggest:
2 or 2. is a number --> approx.
'2' is symbolic --> exact
'2.5' is symbolic --> approx because of the dot
'2.5.' is symbolic ... --> exact! Exact numbers with decimals might be needed sometimes, but for me at least they would be a rare occurrence and so they should be the harder of the two to type.
I'm not entirely happy with this either. I shall continue to think!
Nigel (UK)
Seems reasonable, but how would you type a symbolic integer that is approximated?
'2.' --> exact
'2' --> exact
12-24-2014, 11:12 AM (This post was last modified: 12-24-2014 04:30 PM by Gilles.)
Post: #12
Gilles Member Posts: 171 Joined: Oct 2014
RE: newRPL: symbolic numbers
It seems to me there is a confusion between 'symbolic' and 'exact' here.
Note that the way the 50G works is very different depending of FLAG 03 set or not
Claudio L. Wrote:Following the idea that an approximated number "turns" an exact number into approximated this would be:
'1' '1/2' + --> '3/2'
'1' 0.5 + --> '1.5' (here 1.5 would be an approx. symbolic number)
'3/2*X' 3 * --> '4.5*X' (here 4.5 is approx)
'3/2*X' '3' * --> '9/2*X'
In my opinion, the HP50G 'philosophy' is that there is very few automatic evaluation with symbolic calculation so,i prefer :
Code:
'1' '1/2' + --> '1+1/2' then EVAL (or SIMPLIFY) -> '3/2' or NUM-> 1.5 '1' 0.5 + --> '1+0.5' then EVAL -> '1.5' ( approximate 'contagion' but algebraic object) NUM-> 1.5 (real object) '3/2*X' 3 * --> '3/2*X*3' then EVAL '9/2*X' (If X does not exist else returns the evaluated expression...) '1.5' '3.14' + COS --> 'COS(1.5+3.14)'
I like the idea to have a full control of what happens...
Claudio L. Wrote:The other approach would be the opposite: exact numbers turn approximated numbers into exact:
1 '1/2' + --> '3/2'
'1' 0.5 + --> '3/2'
'3/2*X' 3 * --> '9/2*X'
'3*X' 1.5 * --> '9/2*X'
I hate this way
Why don't keep something like the approx ( ~ )and exact ( = ) on the 50G with some improvments ?
in '=' mode ( CAS mode ?) :
- All is symbolic, you have to do ->NUM to get a numeric (no symbolic) result.
- functions returns symbolic results
enter 12 will return '12' (algebraic object)
enter 12. will return '12.' (algebraic)
enter '12.' will return '12.' (algebraic)
ex:
Code:
1 1 2 / + -> '1+1/2' EVAL -> '3/2' 1.1. 2. / + 'x' * -> '(1.+1./2.)*x' EVAL (or SIMPLIFY) -> '1.5*x' (if x undefined) '1.' '1.' '2.' / + -> '1.+1./2.' EVAL (or SIMPLIFY) -> '1.5' 'SIN(1)' EVAL -> 'SIN(1)' 'SIN(1.) EVAL -> 'SIN(1.)' 1 '1.5' + -> '1+1.5' EVAL (or SIMPLIFY) -> '2.5' PI 4 /SIN -> 'SIN(PI/4)' EVAL -> 'V2/2' (the 50G auto eval in these cases) Function is the same as 'Function' EVAL (Auto evaluation of algebraic objects)
in '~' mode (numeric mode):
- All is numeric, you have to quote 'xxx' for algebraic
- functions returns numeric results (and an error if there are undefined values)
enter 12 is the same as 12.
but you can force '12' to get the algebraic object '12' with an integer inside.
or '12.' for an algebraic object '12.' with a real only inside
ex
Code:
12 -> 12. 1 1 2 / + -> 1.5 '1' '1' '2' / + -> '1+1/2' EVAL (or SIMPLIFY) -> '3/2' ->NUM -> 1.5 '1.' '1' '2' / + -> '1.+1/2' EVAL (or SIMPLIFY) -> '1.5' 1.1. 2. / + -> 1.5 SIN(1) -> 0.841... PI 4 /SIN -> 0.707... '1' 1.5 + -> 2.5
The difference between = and ~ will be essentialy in the way to input the data in the calc to reduce keystrokes and numeric or algebraic output for functions (that means that there must be a difference between a function (manipulation of algebraic objects) and programs (manipulation of all kinds of objects ...).
= more oriented 'math' (CAS)
~ more oriented 'physics' and numeric calculations
The other major difference will be that with "= mode"
'1' 1. + -> '1+1.'
and in "~ mode"
'1' 1. + -> 2.
In others words
Code:
'blabla' is an algebraic object (no automatic evaluation) blabla is an algebraic object auto evaluated '13.33' is an algebraic object with one real object inside '1+2' is an algebraic object with an expression)inside '+(1,2)' 13.33 is a real object 13 is an integer object (but I think new RPL dont use integer) 13. is a real object ...
= and ~ just change few (but important) things for more easy use and less keystrokes. You can easily toggle from = to ~ with Shift & ENTER.
~ mode works in the same way of the old 48 serie
== mode is near the 'exact' mode of the 49/50G series but different in some way (and more logical for me but probably i miss some points)
.
Note that in this configuration you _must_ use symbolic to work with 'infinite" integer. (but i've not understand how new RPL will work for this ...).
Just my 2 cents !
I'm quite sure that what I write here is not fully coherent and more 'theoric' thoughts is needed for this.
The keys:
- functions returns 'sym' or 'num' ?
- exact vs approx
- is there an 'infinite integer' type or not ?
- When 'evaluate' ?
- SIMPLIFY vs EVAL
12-24-2014, 07:51 PM
Post: #13
Claudio L. Senior Member Posts: 1,840 Joined: Dec 2013
RE: newRPL: symbolic numbers
(12-24-2014 11:12 AM)Gilles Wrote: It seems to me there is a confusion between 'symbolic' and 'exact' here.
Yes! We are several people thinking out loud, it is normal for our thoughts to be incoherent sometimes.
Here's what I got in clear from all of the posts above:
* Using quotes to distinguish exact vs. approximate numbers seems cool at first, but is insufficient (as it makes all numbers exact within a symbolic).
* Using the trailing dot to distinguish exact vs approximate numbers is one good option and agrees with other systems (feels more familiar).
(12-24-2014 11:12 AM)Gilles Wrote: In my opinion, the HP50G 'philosophy' is that there is very few automatic evaluation with symbolic calculation so,i prefer :
Code:
'1' '1/2' + --> '1+1/2' then EVAL (or SIMPLIFY) -> '3/2' or NUM-> 1.5 '1' 0.5 + --> '1+0.5' then EVAL -> '1.5' ( approximate 'contagion' but algebraic object) NUM-> 1.5 (real object) '3/2*X' 3 * --> '3/2*X*3' then EVAL '9/2*X' (If X does not exist else returns the evaluated expression...) '1.5' '3.14' + COS --> 'COS(1.5+3.14)'
Yes! We were taking shortcuts in the explanations, but certainly you'd have to EVAL the expression before a number eats another one. We never meant that this was going to happen automatically in one shot.
(12-24-2014 11:12 AM)Gilles Wrote: I like the idea to have a full control of what happens...
And I agree. The idea of eliminating the global flag exact vs. approx. is to give you more control, not less. You may want evaluation of parts of an expression to a number, but symbolic treatment on others. This is not possible right now on the 50g, but would be if you can control which values are approximated and which are exact.
Let's say you have an expression:
'3.5.*X+7/2*Y+SIN(X)' (where the 3.5. is an approximated number, see what I did with the trailing dot? doesn't look so bad)
Let's say you want to replace X with a value, to obtain a linear function in Y alone.
The 50g will want approx mode (just because you have a real in there), which will turn the whole expression into a number, and issue an error if Y is not defined when you EVAL.
The whole idea is to prevent that and make it behave more the way you'd expect:
If you have a value for X and not Y, just replace the value for X. Then EVAL will operate on all approx. numbers within the expression, and leave the exact ones alone.
In the expression above, if you put an approximate number in X, (and after various EVAL's) you'll end with an expression:
'n.nnnn.+7/2*Y' (where n.nnnn. is a real approx. number)
If you put an exact number in X (let's say 4.5, but exact - no trailing dot), you'll end up with (again, after a few EVALs):
'15.75.+7/2*Y+SIN(4.5)'
In this case, the 3.5. (approx.) ate the 4.5, but the other term with the SIN() function remains symbolic because 4.5 is exact.
I hope this clarifies the intent.
(12-24-2014 11:12 AM)Gilles Wrote: - is there an 'infinite integer' type or not ?
I'm taking this question out of context, I know, but this is the main reason that all this cannot work the same on newRPL as it was on the 50g. newRPL has infinite 'reals', so we are not limited to integers within symbolics. As a matter of fact, large integer numbers will be represented with reals (there's no choice).
The 50g defined any real number within a symbolic to be an 'approximated' number. Worse, it infects the whole expression, making the expression "approximated", and in the end, it forces the whole system to switch into "approximated" mode.
newRPL *has* to allow reals within symbolics, so we must find another way to distinguish exact vs. approx. numbers. I thought the quotes would work at first, but in an expression you can't distinguish them. Now I'm back to the trailing dot but this time including reals with a trailing dot also.
At the same time, I don't like that one approximated number in an expression forces a system-wide mode change (that will affect all subsequent expression evaluations, whether they have a real or not). I think you should be able to control the numeric/symbolic output from within the expression, and not affect other expressions.
So the light at the end of the tunnel seems to be:
* Use the trailing dot to signal approximated numbers. No trailing dot means exact. This works both outside and inside a symbolic expression.
2 --> exact
2. --> approx
1.5 --> exact
1.5. --> approx
1.3e1000 --> exact
1.3e1000. --> approx
The opposite (using the dot for exact numbers) could be used too, but most integer numbers in an expression would need to be typed with the dot (exact), and that makes it slower to type.
12-29-2014, 03:19 PM
Post: #14
Claudio L. Senior Member Posts: 1,840 Joined: Dec 2013
RE: newRPL: symbolic numbers
(12-24-2014 07:51 PM)Claudio L. Wrote: * Forget about quoted numbers.
Actually, I'd like to take that back.
Quote:The idea would be something like this:
{ 1 2 3 4 5 } '2' / would return { '1/2' '2/2' '3/2' '4/2' '5/2' }
{ 45 '45' } SIN would return { 0.707... 'SIN(45)' }
2 INV --> 0.5
'2' INV --> '1/2'
So quoted numbers will exist, alongside the trailing dot:
{ 45 45. '45' '45.' } SIN
will return {0.707... 0.707... 'SIN(45)' 'SIN(45.)' }
and after EVAL, {0.707... 0.707... 'SIN(45)' 0.707... }
Also, I think the trailing dot concept could also be extended to variable identifiers and constants.
For example, typing:
π will leave the symbolic constant π on the stack.
But typing π. (with the dot) should leave 3.1415.... on the stack.
Same thing for variables. If the variable name is followed by a dot, it is interpreted as an approximated expression, and upon evaluation, it's numeric value will be calculated and returned, as if ->NUM was executed.
For example:
2 'X' STO
'X^2' 'Y' STO
Now, typing
Y will leave 'X^2' on the stack.
Y. will leave 4 on the stack.
'Y+1' EVAL will leave 'X^2+1' and another EVAL will produce the number 5.
'Y.+1' EVAL will produce '4+1' and another EVAL the number 5.
Note: The need for multiple EVAL's is because as of now newRPL CAS doesn't do recursive substitutions (the 50g would show the number 5 on the first EVAL). Perhaps a command EVALALL will be added, or perhaps EVAL will behave like the 50g, and a new command like EVAL1 will do a non-recursive single-step as shown above, this is still in the works and subject to change.
12-29-2014, 07:38 PM
Post: #15
Gilles Member Posts: 171 Joined: Oct 2014
RE: newRPL: symbolic numbers
All of this looks _very_ interesting...
About EVAL I would prefer to keep the EVAL of 50G and add a EVAL1 command.(that means you have to detect 'circular reference' to avoid infinite loop like X refer to Y wich refer to X ).
I think that it will be interesting for the future to distinguish internaly functions and programs (a functions returns a numeric or symbolic output). For example your n. notation make sense for function but not for program.
Perhaps you could totaly avoid something like the flag 3 of the 50G.
But I see one disavantage (?) in this : If your function is defined in symbolic
f : << -> x '3*x+5' >>
it will always return a symbolic result, unless you use the f. syntax whereas the 50G fonctionality depends of the flag 03
Not sure it's better or worse, but it will be different and if we only want to work with numeric results (and dont use the f. syntax) we could do
f : << 3. * 5. + >>
or
f : << -> x '3*X+5' ->NUM >>
or
f : << -> x '3.*X+5.' ->NUM (or EVAL ) >>
12-29-2014, 09:33 PM
Post: #16
Han Senior Member Posts: 1,883 Joined: Dec 2013
RE: newRPL: symbolic numbers
Why not simply use ~ as a unary operator to denote an approximate number?Otherwise, everything is exact.
Graph 3D | QPI | SolveSys
12-29-2014, 10:21 PM
Post: #17
Claudio L. Senior Member Posts: 1,840 Joined: Dec 2013
RE: newRPL: symbolic numbers
(12-29-2014 07:38 PM)Gilles Wrote: All of this looks _very_ interesting...
About EVAL I would prefer to keep the EVAL of 50G and add a EVAL1 command.(that means you have to detect 'circular reference' to avoid infinite loop like X refer to Y wich refer to X ).
You're right, this is probably the way to go (is EVAL1 a good name?).
(12-29-2014 07:38 PM)Gilles Wrote: I think that it will be interesting for the future to distinguish internaly functions and programs (a functions returns a numeric or symbolic output). For example your n. notation make sense for function but not for program.
I think there is a simple solution: if the name is specified with the trailing dot, then run EVAL to the result before using it.
If you define a function/program that always return a numeric result, then the result with the trailing dot will be the same number.
If your function/program returns a symbolic, then EVAL (or perhaps should be ->NUM instead of EVAL?) will be applied and the result will change to a number before being used.
(12-29-2014 07:38 PM)Gilles Wrote: Perhaps you could totaly avoid something like the flag 3 of the 50G.
But I see one disavantage (?) in this : If your function is defined in symbolic
f : << -> x '3*x+5' >>
it will always return a symbolic result, unless you use the f. syntax whereas the 50G fonctionality depends of the flag 03
I'm not sure I understand. The way I envision it is this:
Once f is defined as above, typing f (no dot) will execute the program, while typing f. will execute the program and then EVAL (or ->NUM) on the result (this behavior would be the same regardless of which object is stored in f, could be a list of expressions and this would work).
The result will also depend on the argument you pass: if you pass an exact number, the result will likely be symbolic, but if you pass an approximate number, it will "eat" other numbers, and your result will be numeric, or as numeric as possible depending on the definition of the function.
If your function uses a symbolic constant, for example, it may prevent a fully numeric solution:
f: << -> X '3*X+pi' >>
where 'pi' is a constant (symbolic, no dot).
Under newRPL (proposed):
4 f --> '12+pi'
4. f --> '12.+pi'
4 f. --> 15.14... (the additional EVAL or ->NUM replaces the constant)
4. f. --> 15.14...
'f(4)' EVAL --> '12+pi'
'f(4.)' EVAL --> '12.+pi'
'f.(4)' EVAL --> 15.14...
'f.(4.)' EVAL --> 15.14...
Doing this same example on the 50g gives only slightly different results (depending on flags):
4 f --> '12+pi' (exact mode), '12.+pi' (approx. mode)
4. f --> '12.+pi' (both modes)
4 f EVAL --> '12+pi' (exact mode), 15.14.... (approx. mode)
4. f EVAL --> asks to change to approx mode, then 15.14....
'f(4)' EVAL --> '12+pi' (exact), '12.+pi' (approx mode)
'f(4.)' EVAL --> '12.+pi' (both modes, an additional EVAL will ask to switch to approx)
'f(4)' EVAL EVAL --> '12+pi' (exact), 15.14... (approx mode)
'f(4.)' EVAL EVAL --> 15.14... (approx mode), will ask to switch to approx mode if not.
Notice the double EVAL is equivalent to newRPL 'f.(x)' EVAL, as the trailing dot performs the second EVAL.
Also, this is all without using flag 3. Setting flag 3 you get a different set of results:
4 f --> 15.14...
4. f --> 15.14...
'f(4)' EVAL --> '12.+pi' (both exact and approx modes)
'f(4.)' EVAL --> '12.+pi' (both exact and approx modes)
'f(4)' EVAL EVAL --> 15.14... (both exact and approx modes)
'f(4.)' EVAL EVAL --> 15.14... (both exact and approx modes)
In this case, the CAS does not ask for permission to switch to approx mode, and returns the number 15.14... while staying in exact mode.
In newRPL, you could achieve this simply by defining:
f: << -> X '3.*X+pi' >>
That is, if you want 'f(4)' EVAL to return '12.+pi'.
If you want fully numeric results:
f: << -> X '3.*X+pi.' >>
will return 15.14.... in all the forms shown above.
(12-29-2014 07:38 PM)Gilles Wrote: Not sure it's better or worse, but it will be different ...
I don't know either. It will sure take some time to get used to the new way. Programs will have to be crafted with the trailing dot in mind, and that means no backwards compatibility. On the other hand, if "no backwards compatibility=no awkward compatibility" I'm in favor.
I don't see a big difference in user effort to reach the desired solution (perhaps less effort?). I do see more consistency in the proposed solution, where the result depends only on what you type.
Claudio
12-30-2014, 10:00 AM (This post was last modified: 12-30-2014 10:08 AM by Gilles.)
Post: #18
Gilles Member Posts: 171 Joined: Oct 2014
RE: newRPL: symbolic numbers
Quote:f: << -> X '3*X+pi' >>
where 'pi' is a constant (symbolic, no dot).
Under newRPL (proposed):
4 f --> '12+pi'
4. f --> '12.+pi'
4 f. --> 15.14... (the additional EVAL or ->NUM replaces the constant)
4. f. --> 15.14...
OK.
In my opinion, the '.' is ->NUM and not EVAL.
By the way, the unary operator ~ suggested by Han seems the same as ->NUM. I like this ~ notation.
And perhaps f~ would be more explicit that f. And in this case, both f~ and f ~ will be correct
'pi' ~ will return 3.14159...
Quote:'f(4)' EVAL --> '12+pi'
'f(4.)' EVAL --> '12.+pi'
'f.(4)' EVAL --> 15.14...
'f.(4.)' EVAL --> 15.14...
for the 2 last, with the suggestion of Han
'f~(4)' EVAL --> 15.14...
'f~(4.)' EVAL --> 15.14...
or
'f(4)' ~
I would like this but the disadvantage is that the ~ is not directly on the keyboard with only a keypress on the contrary of .
Quote:(...) It will sure take some time to get used to the new way. Programs will have to be crafted with the trailing dot in mind, and that means no backwards compatibility. On the other hand, if "no backwards compatibility=no awkward compatibility" I'm in favor.
I agree. Anyway you already have loosed the backward compatibility with better behavior of DOSUBS , DOLIST, ADD, + ...
Quote:I don't see a big difference in user effort to reach the desired solution (perhaps less effort?). I do see more consistency in the proposed solution, where the result depends only on what you type.
OK.
Another thing, I like that uppercase and lowercase are not the same thing in commands and functions in RPL.
But i would be pleased if all the native commands and functions in newRLP will use lowercases instead of uppercases : start next , for next , cos, sin ... (or Start For Next Cos, Sin) are more readable and less agressive than START NEXT or FOR NEXT COS SIN
12-30-2014, 02:19 PM
Post: #19
Claudio L. Senior Member Posts: 1,840 Joined: Dec 2013
RE: newRPL: symbolic numbers
It seems Han's suggestion is merely changing the symbol from the dot to ~. Other than that, it appears to have the exact same effect as we've been discussing (Han, feel free to correct me if I'm wrong).
(12-30-2014 10:00 AM)Gilles Wrote: In my opinion, the '.' is ->NUM and not EVAL.
Yes, it seems ->NUM is more appropriate.
(12-30-2014 10:00 AM)Gilles Wrote: I would like this but the disadvantage is that the ~ is not directly on the keyboard with only a keypress on the contrary of .
That's important. we'd have to find a non-shifted key to dedicate to ~, versus the dot being readily available and physically close to the numbers.
What about the numbers? Do we display 4~?
3 INV -> 0.3333333~
or perhaps prefix:
3 INV -> ~0.3333333 (I like this, more mathematically correct)
Do we use ~f(4) or f~(4)? I think prefix is perhaps better.
One more thing in favor of the dot: on a proportional font (which newRPL uses), a dot only takes 2 pixels wide, versus the ~ symbol taking 4 or 5 pixels.
'3~*X^2-pi~'
'~3*X^2-~pi'
'3.*X^2-pi.'
The expression above looks much shorter with the dot (at least in my browser).
So we have:
* Keyboard accessibility: dot is better
* Screen space: dot is better
* Readability (and clarity of intent): ~ is better
* Mathematical correctness: ~ is better
It seems we are tied 2-2. Any other tie-breaker comments anyone?
(12-30-2014 10:00 AM)Gilles Wrote: Another thing, I like that uppercase and lowercase are not the same thing in commands and functions in RPL.
But i would be pleased if all the native commands and functions in newRLP will use lowercases instead of uppercases : start next , for next , cos, sin ... (or Start For Next Cos, Sin) are more readable and less agressive than START NEXT or FOR NEXT COS SIN
That wouldn't be a problem, it's relatively easy to change. The First-Letter Capitalization looks good, but it'd force you to change CAPS mode very often (extra keystrokes).
So we are between full lowercase and full uppercase. I think we need to hear more opinions in this matter before making a decision. I'm neutral, as uppercase doesn't bother me, but I'm also used to lowercase from C.
12-30-2014, 02:26 PM
Post: #20
rprosperi Super Moderator Posts: 5,254 Joined: Dec 2013
RE: newRPL: symbolic numbers
(12-30-2014 10:00 AM)Gilles Wrote: By the way, the unary operator ~ suggested by Han seems the same as ->NUM. I like this ~ notation.
I agree that the ~ notation is preferred over the trailing dot. In RPL, dot is used frequently and for many things already, so IMHO it's best to use a new notation such as ~ to keep the intended use unambiguous, even if it is possible to add the trailing dot notation to an already complex parsing process.
Just one man's opinion.
And thanks Claudio, Han, etc. for the interesting glimpses into the evolution of newRPL. Rare to see the conceptual and design process evolve as it happens.
--Bob Prosperi
« Next Oldest | Next Newest »
User(s) browsing this thread: 1 Guest(s)
|
2022-07-06 16:09:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.560427725315094, "perplexity": 2701.886548337079}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104675818.94/warc/CC-MAIN-20220706151618-20220706181618-00104.warc.gz"}
|
https://cracku.in/53-let-fxx2axb-and-gxfx1-fx-1-if-fxgeq0-for-all-real--x-cat-2020-slot-2
|
Question 53
# Let $$f(x)=x^{2}+ax+b$$ and $$g(x)=f(x+1)-f(x-1)$$. If $$f(x)\geq0$$ for all real x, and $$g(20)=72$$. then the smallest possible value of b is
Solution
$$f\left(x\right)=\ x^2+ax+b$$
$$f\left(x+1\right)=x^2+2x+1+ax+a+b$$
$$f\left(x-1\right)=x^2-2x+1+ax-a+b$$
$$g(x)=f(x+1)-f(x-1)= 4x+2a$$
Now $$g(20) = 72$$ from this we get $$a = -4$$ ; $$f\left(x\right)=x^2-4x\ +b$$
For this expression to be greater than zero it has to be a perfect square which is possible for $$b\ge\ 4$$
Hence the smallest value of 'b' is 4.
|
2023-02-03 15:47:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5161439180374146, "perplexity": 153.55974756918147}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500058.1/warc/CC-MAIN-20230203154140-20230203184140-00546.warc.gz"}
|
http://cglab.ca/seminar/2015/anthony-local-routing.html
|
Local Routing in Planar Orthogonal Subdivisions
Anthony D'Angelo
With the prevalence of mobile devices today and the dawn of the “internet of things”, a lot of research continues to go into being able to route messages efficiently to maximize the usefulness of these low-resourced nodes. Particularly noteworthy is the research done on low-memory local routing algorithms that guarantee delivery.
In these algorithms, the current node holding the message uses only the knowledge of its neighbours' locations as well as some extra information stored in the message header (where the number of available bits is bounded above by some function of the number of nodes) to decide where to forward the packet next, and under these constraints these algorithms guarantee that the message will arrive at its destination in a connected graph. It has been found that guaranteed point-to-point routing can be done in planar convex subdivisions with general positioning using just $1$ bit of extra storage and no predecessor-awareness, and in edge-augmented monotone subdivisions with general positioning using only predecessor-awareness.
I will present an algorithm that can route in planar $x$-monotone orthogonal subdivisions using only $O(1)$ bits of extra memory (using implicit, not explicit predecessor-awareness). I will also present a generalization to $x$-monotone subdivisions with nodes up to “$d$” degree that uses $O(\log d)$ extra bits (where the “$d$” edge-directions are fixed to “$d$” different angles).
|
2018-11-13 01:01:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5702956318855286, "perplexity": 1099.5158448211666}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741176.4/warc/CC-MAIN-20181113000225-20181113022225-00397.warc.gz"}
|
https://math.stackexchange.com/questions/1407550/what-hyperbolic-space-really-looks-like
|
# What hyperbolic space *really* looks like
There are several models of hyperbolic space that are embedded in Euclidean space. For example, the following image depicts the Beltrami-Klein model of a hyperbolic plane:
where geodesics are represented by straight lines. The following image, on the other hand, depicts the Poincare model of the same hyperbolic plane:
where geodesics are represented by segments of circles intersecting the boundary of the disk orthogonally. Both of these models capture the entire $n$-dimensional hyperbolic space in a disk (or more generally, a Euclidean $n$-ball).
I was thinking about what hyperbolic space would "really look like" from the perspective of an observer within the space. What came to mind was the exponential map, which maps an element of the tangent space $\mathrm{T}_PM$ of a point $p$ on a manifold $M$ to another point on the manifold:
Intuitively, the exponential map follows the geodesic over the manifold that "departs" from the specified direction belonging to the tangent space. For example, the exponential map of the Earth as viewed from the north pole is the polar azimuthal equidistant projection in cartography:
This seems to be what elliptic space would "really look like" from the perspective of an observer within the space, since light reaches our eyes by traveling along geodesics.
This page and this page have demos. You may click around to get a feeling of what moving through an elliptic space would look like, though of course geodesics can go beyond the boundary of the disk in the demo by repeatedly wrapping around the sphere of the Earth.
Given this background, my question is as follows:
What does the exponential map from some point on a hyperbolic space look like, assuming that the contents of the space are depicted by the models above? Are there any demos or examples?
Edit: This question seems to be related.
Edit 2: The last section of this video shows the azimuthal equidistant projection of a hyperbolic plane to be
Edit 3: See also Riemann normal coordinates.
Last edit: Someone made a virtual reality demo of hyperbolic space. This is a definitive answer!
• suggest reading original Bolyai in translation; it's in Bonola's book. Martin's book follows up pretty well, intrinsic viewpoint. – Will Jagy Aug 24 '15 at 3:55
• I wouldn't want to live in hyperbolic space. As you walk, the stuff you see in the horizon moves quickly to get behind you as new hyperbolic lands unfold in front of you. Wait no. You wouldn't be able to see very far away (a candle gets dimmer exponentially as you walk away from it) and if you get drunk for 5 minutes you could be lost forever... – mercio Aug 24 '15 at 18:08
• Whoah, I just posted the exact same tiling on 9gag last night (odd place to post it, I know). Were you inspired by that post? – Akiva Weinberger Aug 24 '15 at 18:10
• I think you're looking for #10 of this YouTube video. (I recommend watching the whole thing.) – Akiva Weinberger Aug 24 '15 at 18:36
• There is a math movie called "Not knot", made in 1990s, which describes what is it like to live in a hyperbolic manifold." Maybe you can find the movie online. – Moishe Kohan Aug 25 '15 at 20:59
One common way to visualize the "intrinsic appearance" of a simply-connected universe of constant curvature $\pm 1$ is to give the angular size of an object modeled as a geodesic arc of (sufficiently small) length $\ell$ placed at distance $d$ from one's eye. (I didn't try to run the linked applets, and am not sure if either implements this strategy.)
On a sphere of unit curvature, a circle of geodesic radius $d$ has circumference $2\pi \sin d$; an object of length $\ell$ (placed "orthogonal to the line of sight") therefore subtends an angle $\theta \approx \ell/\sin d$.
Playing the same game in the hyperbolic plane, a circle of geodesic radius $d$ has circumference $2\pi \sinh d$; an object of length $\ell$ (placed orthogonal to the line of sight) therefore subtends an angle $\theta \approx \ell/\sinh d$. As in mercio's comment, this angle decreases exponentially with $d$.
If the characteristic length is one meter (i.e., a circle of radius $d$ meters has circumference $C = 2\pi \sinh d$ meters), then an object at hyperbolic distance $d$ meters appears (to our Euclidean intuition) to lie at distance $d' = \sinh d$ meters: $$\begin{array}{l|ccccccc} d = & 1 & 2 & 3 & 4 & 5 & 10 & 100 \\ \hline % C \approx & 7.384 & 22.79 & 62.944 & 171.468 & 466.233 & 69198.183 & 8.445 \times 10^{43} \\ d' \approx & 1.175 & 3.627 & 10.018 & 27.29 & 74.203 & 11013.233 & 1.344 \times 10^{43} \\ \end{array}$$ Particularly, an object ten meters away in hyperbolic space appears to be over eleven kilometers distant, and an object one hundred meters away subtends an angle too small to be cosmologically meaningful (a formal distance of about $1.4 \times 10^{27}$ light-years).
Analogous conclusions hold in a three-dimensional sphere or three-dimensional hyperbolic space. The main qualitative point is, it's easy to hide (or to become irretrievably lost) in hyperbolic space.
Jeffrey Weeks' geometry software seems likely to be of interest. His book The Shape of Space (q.v.) is an excellent read.
The Beltrami-Klein model is an accurate depiction of what it would look like in hyperbolic space. To be a bit more precise, if you live in 3-dimensional hyperbolic space $\mathbb{H}^3$, and if $P \subset \mathbb{H}^3$ is a 2-dimensional hyperbolic plane tiled in red and white triangles with angles $\pi/2,\pi/3,\pi/7$ as in the picture shown in the question, and if your eye is situated at a point $Q$ a certain distance from $P$, then what you would see is exactly that picture.
The intuitive reason for this is that geodesics are straight lines, and that is how they appear to your eye.
In a bit more detail, one can prove analytically that that if you take the straight line projection of a geodesic in $X$ onto the unit tangent sphere $T^1_Q (\mathbb{H}^3)$ of $\mathbb{H}^3$ at the point $Q$, then the result is a great circle segment in $T^1_Q (\mathbb{H}^3)$.
One interesting feature of this fact is that from $Q$ one can "see" the circle at infinity of $P$, just as the picture shows.
• Are you sure this is correct? I think the direction of points in the hyperbolic plane from the origin is the same as that in the Klein model, but I presume the distance is not, since points on the edge of the disk really are infinitely far away (whereas the exponential map geodesic covers a finite distance). – user76284 Aug 30 '15 at 17:24
• Yes, it is really true. The question of what hyperbolic space "looks like" is equivalent to the question of how things project to the unit tangent bundle at the obseration point. So for instance if you take any geodesic line $L$ (e.g. one of the lines in your picture of the 2,3,7 tiling), and then form the unique plane $\overline{QL}$ passing through both $Q$ and $L$, then that plane intersects the unit tangent bundle at $Q$ in a great circle, and the rays in that plane that pass through actual points of $L$ form an arc of that great circle. – Lee Mosher Aug 31 '15 at 3:21
• "The question of what hyperbolic space "looks like" is equivalent to the question of how things project to the unit tangent bundle at the obseration point." I think it's not just the unit tangent bundle, but the entire tangent bundle, where the length of the tangent vector denotes the distance traversed over the geodesic heading in that direction. I believe what you say is true if you only want the direction, though. – user76284 Aug 31 '15 at 3:47
• Furthermore -- consider an equidistant surface in $\mathbb{H}^3$, which is of course also internally hyperbolic. If this equidistant surface is $d$ units below a plane in $\mathbb{H}^3$, and our eye is $d$ units above this plane, we see it in the Poincaré model. (This is the 3D model used in HyperRogue) – Zeno Rogue Jan 5 '18 at 19:46
|
2019-04-23 12:06:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6857053637504578, "perplexity": 336.6655626748463}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578602767.67/warc/CC-MAIN-20190423114901-20190423140901-00402.warc.gz"}
|
https://aimsciences.org/article/doi/10.3934/jimo.2018160
|
# American Institute of Mathematical Sciences
• Previous Article
Nonlinear optimization to management problems of end-of-life vehicles with environmental protection awareness and damaged/aging degrees
• JIMO Home
• This Issue
• Next Article
Optimality conditions for multiobjective fractional programming, via convexificators
doi: 10.3934/jimo.2018160
## A hybrid chaos firefly algorithm for three-dimensional irregular packing problem
1 School of Computer Sciences and information, Anhui Normal University, Anhui Provincial Key Laboratory of Network and Information Security, Wuhu, 241000, China 2 Department of Mathematics and Statistics, Curtin University, Perth, WA 6845, Australia
* Corresponding author: Lin Jiang
Received March 2017 Revised April 2018 Published October 2018
Fund Project: The first author is supported by NSFC grant (61871412, 61772034, 61572036, 61672039, 61473326), Anhui Provincial Natural Science Foundation(1708085MF156, 1808085MF172), Australian Research Council Linkage Program LP140100873
The packing problem study how to pack multiple objects without overlap. Various exact and approximate algorithms have been developed for two-dimensional regular and irregular packing as well as three-dimensional bin packing. However, few results are reported for three-dimensional irregular packing problems. This paper will develop a method for solving three-dimensional irregular packing problems. A three-grid approximation technique is first introduced to approximate irregular objects. Then, a hybrid heuristic method is developed to place and compact each individual objects where chaos search is embedded into firefly algorithm in order to enhance the algorithm's diversity for optimizing packing sequence and orientations. Results from several computational experiments demonstrate the effectiveness of the hybrid algorithm.
Citation: Chuanxin Zhao, Lin Jiang, Kok Lay Teo. A hybrid chaos firefly algorithm for three-dimensional irregular packing problem. Journal of Industrial & Management Optimization, doi: 10.3934/jimo.2018160
##### References:
show all references
##### References:
Grid representation of a cylinder
Encoding of the firefly individual
A procedure of back bottom left placement for 3D irregular packing
Architecture of the chaos firefly algorithm for packing problem
An example of comparison between random sequence packing result and optimized sequence packing result
The comparison packing result of instance 6 without rotation
The comparison packing result of instance 8 with rotation
Algorithm convergence over 9 instances
Data sets specification
Instance Type Number Rotation Object scale 1 cylinder 1 50 0 ${50^ * }{50^ * }50$ 2 cylinder 2 80 0 ${50^ * }{50^ * }50$ 3 cylinder 3 100 0 ${50^ * }{50^ * }50$ 4 irregular 1 (cylinder, complex structure) 50 0 ${50^ * }{50^ * }50$ 5 irregular 2 (cylinder, complex structure) 80 0 ${50^ * }{50^ * }50$ 6 irregular 3 (cylinder, complex structure) 100 0 ${50^ * }{50^ * }50$ 7 irregular 4 (cylinder, complex structure) 40 0, 3 ${50^ * }{50^ * }50$ 8 irregular 5 (cylinder, complex structure) 60 0, 3 ${50^ * }{50^ * }50$ 9 irregular 6 (cylinder, complex structure) 80 0, 3 ${50^ * }{50^ * }50$
Instance Type Number Rotation Object scale 1 cylinder 1 50 0 ${50^ * }{50^ * }50$ 2 cylinder 2 80 0 ${50^ * }{50^ * }50$ 3 cylinder 3 100 0 ${50^ * }{50^ * }50$ 4 irregular 1 (cylinder, complex structure) 50 0 ${50^ * }{50^ * }50$ 5 irregular 2 (cylinder, complex structure) 80 0 ${50^ * }{50^ * }50$ 6 irregular 3 (cylinder, complex structure) 100 0 ${50^ * }{50^ * }50$ 7 irregular 4 (cylinder, complex structure) 40 0, 3 ${50^ * }{50^ * }50$ 8 irregular 5 (cylinder, complex structure) 60 0, 3 ${50^ * }{50^ * }50$ 9 irregular 6 (cylinder, complex structure) 80 0, 3 ${50^ * }{50^ * }50$
Parameters of the hybrid firefly algorithm
Parameter Value population size $number\times$(1+$r_{max}$) $T_0$ 0.064 Temperature update ratio 1.6 Iteration 300
Parameter Value population size $number\times$(1+$r_{max}$) $T_0$ 0.064 Temperature update ratio 1.6 Iteration 300
The maximal height and the efficiency achieved by three algorithms in 10 runs
Instance GA PSO FA HFA Height efficiency Height efficiency Height efficiency Height efficiency 1 122 52.5% 120 55.4% 117 56.8% 120 55.4% 2 169 68.0% 171 67.2% 170 67.6% 167 68.8% 3 222 64.0% 219 64.9% 221 64.4% 219 64.9% 4 210 50.1% 207 50.8% 215 48.9% 198 53.1% 5 366 53.3% 363 54.3% 365 53.4% 356 55.4% 6 446 51.8% 439 52.6% 448 51.5% 442 52.2% 7 160 59.9% 160 59.9% 161 59.6% 157 61.1% 8 230 61.0% 231 60.7% 234 59.9% 225 62.3% 9 310 58.9% 308 59.3% 304 60.1% 304 60.1%
Instance GA PSO FA HFA Height efficiency Height efficiency Height efficiency Height efficiency 1 122 52.5% 120 55.4% 117 56.8% 120 55.4% 2 169 68.0% 171 67.2% 170 67.6% 167 68.8% 3 222 64.0% 219 64.9% 221 64.4% 219 64.9% 4 210 50.1% 207 50.8% 215 48.9% 198 53.1% 5 366 53.3% 363 54.3% 365 53.4% 356 55.4% 6 446 51.8% 439 52.6% 448 51.5% 442 52.2% 7 160 59.9% 160 59.9% 161 59.6% 157 61.1% 8 230 61.0% 231 60.7% 234 59.9% 225 62.3% 9 310 58.9% 308 59.3% 304 60.1% 304 60.1%
The statistical performance of the algorithm without rotation
Ins. GA PSO FA HFA Best Avg Stdev Best Avg Stdev Best Avg Stdev Best Avg Stdev 1 120 122.1 2.172 120 121.3 1.341 117 120.8 2.049 120 120.3 0.547 2 171 172.5 2.918 171 172.1 2.121 170 172.5 3.140 167 170 2.387 3 220 224.6 2.671 219 223.1 1.923 221 222.3 2.100 219 221.2 1.483 4 210 219.8 3.019 207 218.6 2.074 215 220.9 8.648 198 215.2 6.638 5 362 371.1 4.017 363 371.4 6.058 365 368.8 6.025 356 365.5 2.191 6 445 465.2 8.423 439 461.2 8.820 448 459.4 9.597 442 452 4.264 7 162 168.1 9.150 160 168.2 9.517 161 167.6 8.961 157 162.9 4.868 8 232 245.1 6.901 231 242 7.615 234 244.2 3.421 225 234.2 2.863 9 311 314.6 6.119 308 313.8 5.354 304 316.2 7.190 304 307.6 3.050
Ins. GA PSO FA HFA Best Avg Stdev Best Avg Stdev Best Avg Stdev Best Avg Stdev 1 120 122.1 2.172 120 121.3 1.341 117 120.8 2.049 120 120.3 0.547 2 171 172.5 2.918 171 172.1 2.121 170 172.5 3.140 167 170 2.387 3 220 224.6 2.671 219 223.1 1.923 221 222.3 2.100 219 221.2 1.483 4 210 219.8 3.019 207 218.6 2.074 215 220.9 8.648 198 215.2 6.638 5 362 371.1 4.017 363 371.4 6.058 365 368.8 6.025 356 365.5 2.191 6 445 465.2 8.423 439 461.2 8.820 448 459.4 9.597 442 452 4.264 7 162 168.1 9.150 160 168.2 9.517 161 167.6 8.961 157 162.9 4.868 8 232 245.1 6.901 231 242 7.615 234 244.2 3.421 225 234.2 2.863 9 311 314.6 6.119 308 313.8 5.354 304 316.2 7.190 304 307.6 3.050
Comparison between the proposed approach and placement strategy
Instance enclosure without rotation enhance(%) with rotation enhance(%) 1 124.2 120.3 3.14% N/A N/A 2 173.4 170.0 1.96% N/A N/A 3 225.3 221.2 1.82% N/A N/A 4 225.2 215.2 4.44% 210.2 2.32% 5 381.6 365.5 4.40% 358.3 1.97% 6 466.3 452.0 3.16% 445.9 1.35% 7 170.9 162.9 4.68% 156.4 3.99% 8 246.0 234.2 4.80% 230.6 1.54% 9 319.6 307.6 3.90% 301.7 1.92%
Instance enclosure without rotation enhance(%) with rotation enhance(%) 1 124.2 120.3 3.14% N/A N/A 2 173.4 170.0 1.96% N/A N/A 3 225.3 221.2 1.82% N/A N/A 4 225.2 215.2 4.44% 210.2 2.32% 5 381.6 365.5 4.40% 358.3 1.97% 6 466.3 452.0 3.16% 445.9 1.35% 7 170.9 162.9 4.68% 156.4 3.99% 8 246.0 234.2 4.80% 230.6 1.54% 9 319.6 307.6 3.90% 301.7 1.92%
[1] Mohamed A. Tawhid, Ahmed F. Ali. An effective hybrid firefly algorithm with the cuckoo search for engineering optimization problems. Mathematical Foundations of Computing, 2018, 1 (4) : 349-368. doi: 10.3934/mfc.2018017 [2] Maolin Cheng, Mingyin Xiang. Application of a modified CES production function model based on improved firefly algorithm. Journal of Industrial & Management Optimization, 2017, 13 (5) : 1-14. doi: 10.3934/jimo.2019018 [3] Jianjun Liu, Min Zeng, Yifan Ge, Changzhi Wu, Xiangyu Wang. Improved Cuckoo Search algorithm for numerical function optimization. Journal of Industrial & Management Optimization, 2017, 13 (5) : 1-13. doi: 10.3934/jimo.2018142 [4] Mao Chen, Xiangyang Tang, Zhizhong Zeng, Sanya Liu. An efficient heuristic algorithm for two-dimensional rectangular packing problem with central rectangle. Journal of Industrial & Management Optimization, 2017, 13 (5) : 1-16. doi: 10.3934/jimo.2018164 [5] Leong-Kwan Li, Sally Shao. Convergence analysis of the weighted state space search algorithm for recurrent neural networks. Numerical Algebra, Control & Optimization, 2014, 4 (3) : 193-207. doi: 10.3934/naco.2014.4.193 [6] Behrouz Kheirfam, Morteza Moslemi. On the extension of an arc-search interior-point algorithm for semidefinite optimization. Numerical Algebra, Control & Optimization, 2018, 8 (2) : 261-275. doi: 10.3934/naco.2018015 [7] Kien Ming Ng, Trung Hieu Tran. A parallel water flow algorithm with local search for solving the quadratic assignment problem. Journal of Industrial & Management Optimization, 2019, 15 (1) : 235-259. doi: 10.3934/jimo.2018041 [8] Weihua Liu, Andrew Klapper. AFSRs synthesis with the extended Euclidean rational approximation algorithm. Advances in Mathematics of Communications, 2017, 11 (1) : 139-150. doi: 10.3934/amc.2017008 [9] Ming-Jong Yao, Tien-Cheng Hsu. An efficient search algorithm for obtaining the optimal replenishment strategies in multi-stage just-in-time supply chain systems. Journal of Industrial & Management Optimization, 2009, 5 (1) : 11-32. doi: 10.3934/jimo.2009.5.11 [10] Abdel-Rahman Hedar, Ahmed Fouad Ali, Taysir Hassan Abdel-Hamid. Genetic algorithm and Tabu search based methods for molecular 3D-structure prediction. Numerical Algebra, Control & Optimization, 2011, 1 (1) : 191-209. doi: 10.3934/naco.2011.1.191 [11] Y. K. Lin, C. S. Chong. A tabu search algorithm to minimize total weighted tardiness for the job shop scheduling problem. Journal of Industrial & Management Optimization, 2016, 12 (2) : 703-717. doi: 10.3934/jimo.2016.12.703 [12] Leong-Kwan Li, Sally Shao, K. F. Cedric Yiu. Nonlinear dynamical system modeling via recurrent neural networks and a weighted state space search algorithm. Journal of Industrial & Management Optimization, 2011, 7 (2) : 385-400. doi: 10.3934/jimo.2011.7.385 [13] Behrad Erfani, Sadoullah Ebrahimnejad, Amirhossein Moosavi. An integrated dynamic facility layout and job shop scheduling problem: A hybrid NSGA-II and local search algorithm. Journal of Industrial & Management Optimization, 2017, 13 (5) : 1-34. doi: 10.3934/jimo.2019030 [14] David Julitz. Numerical approximation of atmospheric-ocean models with subdivision algorithm. Discrete & Continuous Dynamical Systems - A, 2007, 18 (2&3) : 429-447. doi: 10.3934/dcds.2007.18.429 [15] Zhenbo Wang. Worst-case performance of the successive approximation algorithm for four identical knapsacks. Journal of Industrial & Management Optimization, 2012, 8 (3) : 651-656. doi: 10.3934/jimo.2012.8.651 [16] Gaidi Li, Zhen Wang, Dachuan Xu. An approximation algorithm for the $k$-level facility location problem with submodular penalties. Journal of Industrial & Management Optimization, 2012, 8 (3) : 521-529. doi: 10.3934/jimo.2012.8.521 [17] Tao Zhang, Yue-Jie Zhang, Qipeng P. Zheng, P. M. Pardalos. A hybrid particle swarm optimization and tabu search algorithm for order planning problems of steel factories based on the Make-To-Stock and Make-To-Order management architecture. Journal of Industrial & Management Optimization, 2011, 7 (1) : 31-51. doi: 10.3934/jimo.2011.7.31 [18] Meng Yu, Jack Xin. Stochastic approximation and a nonlocally weighted soft-constrained recursive algorithm for blind separation of reverberant speech mixtures. Discrete & Continuous Dynamical Systems - A, 2010, 28 (4) : 1753-1767. doi: 10.3934/dcds.2010.28.1753 [19] Chenchen Wu, Dachuan Xu, Xin-Yuan Zhao. An improved approximation algorithm for the $2$-catalog segmentation problem using semidefinite programming relaxation. Journal of Industrial & Management Optimization, 2012, 8 (1) : 117-126. doi: 10.3934/jimo.2012.8.117 [20] Jinling Wei, Jinming Zhang, Meishuang Dong, Fan Zhang, Yunmo Chen, Sha Jin, Zhike Han. Applications of mathematics to maritime search. Discrete & Continuous Dynamical Systems - S, 2019, 12 (4&5) : 957-968. doi: 10.3934/dcdss.2019064
2017 Impact Factor: 0.994
|
2019-05-20 22:25:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2974960505962372, "perplexity": 9875.318243698583}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256163.40/warc/CC-MAIN-20190520222102-20190521004102-00465.warc.gz"}
|
https://physics.nfshost.com/textbook/09-Capacitance/02-Energy.php
|
# Energy Stored in a Capacitor
When you put positive charges onto a conductor, you're confining them to a small space when they really want to get as far apart as possible. This takes work, and that means that a charged capacitor contains energy-- electrical potential energy, to be precise. The first charge is easy to add, because the conductor is neutral. That charge increases the potential of the conductor to be a little higher, so that the next charge has to move uphill a little bit, and work is required. With each charge, the "hill" gets higher and higher, and the next charge requires more and more work to move it onto the conductor.
Let's make that a little more precise. Suppose we start with an uncharged conductor with capacitance C, and start moving little bits of charge dq onto it. Each little bit of charge requires energy dP to get it onto the sphere, which is a function of how much charge q is on the sphere at that moment. That energy is added to the potential energy of the conductor, so the total energy stored inside is $$P=\int\,dP$$ What is $$dP$$? If the conductor has charge $$q$$, then its potential is $$V=V_\infty+{q\over C}$$ according to the definition of capacitance. As we move a charge $$dq$$ from far away onto the sphere, we have to move it up a potential $$\Delta V=V-V_\infty$$ (from infinity to the sphere), and that takes work $$dP=dq\,(V-V_\infty)=dq\,{q\over C}$$ and so the total energy in the sphere is $$P=\int\,dP=\int_0^Q {q\,dq\over C}$$ Note that the integration variable is $$q$$, which is rather unusual. I'm summing over every bit of charge added to the conductor. This ranges from 0 (when the conductor is neutral) to some final value $$Q$$. Doing the integral gives us $$P={Q^2\over 2C}$$ which is the energy stored in a capacitor containing charge $$Q$$. We can also write this in terms of potential instead of charge. Since $$Q=C\Delta V$$, we substitute it into the above formula to get $$P={1\over2}C(\Delta V)^2$$
A sphere with a radius of 0.5 meters contains $$3\u{\mu C}$$ of charge. How much potential energy does the sphere contain?
The capacitance of a sphere is $$C=R/k$$, so in this case $$C={0.5\u{m}\over 9\ten9\u{m/F}}=5.6\ten{-11}\u{F}$$ We can calculate the energy directly from the charge and the capacitance: $$P={Q^2\over 2C}={(3\ten{-6}\u{C})^2\over 2(5.6\ten{-11}\u{F})}=0.08\u{J}$$ We could also calculate the potential energy by first finding the potential difference between the sphere and infinity: $$\Dl V={Q\over C}={3\ten{-6}\u{C}\over 5.6\ten{-11}\u{F}}=5.36\ten4\u{V}$$ and then calculating the potential energy from that: $$P=\frc2C(\Dl V)^2=\frc2(5.6\ten{-11})(5.36\ten4\u{V})^2=0.08\u{J}$$ which is the same result.
Suppose the sphere expands so that its radius is 0.6 meters and the charge on the sphere remains the same. What happens to the potential energy?
Let's think about this first: if the sphere expands, the charges get to spread out a little bit more, which is what they want to do. Thus the potential energy should decrease.
The capacitance of a sphere expands as it grows. $$C={0.6\u{m}\over 9\ten9\u{m/F}}=6.7\ten{-11}\u{F}$$ From the formula $$P={Q^2\over 2C}$$ the charge stays the same and the capacitance gets bigger, and so the potential energy gets smaller:
$$P={(3\ten{-6}\u{C})^2\over 2(6.7\ten{-11}\u{F})}=0.067\u{J}$$
Now consider the formula $$P=\frc2C(\Dl V)^2$$ The capacitance does increase. However, the potential difference of the sphere is $$\Dl V={Q\over C}={3\ten{-6}\u{C}\over 6.7\ten{-11}\u{F}}=4.48\ten4\u{V}$$ which is smaller: the decrease in the potential difference offsets the increase in the capacitance, so that the potential energy decreases: $$P=\frc2(6.7\ten{-11}\u{F})(4.48\ten4\u{V})^2=0.067\u{J}$$
|
2019-12-11 17:16:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8039878010749817, "perplexity": 183.34131842681697}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540531974.7/warc/CC-MAIN-20191211160056-20191211184056-00369.warc.gz"}
|
https://socratic.org/questions/how-do-you-solve-abs-3-d-4
|
# How do you solve abs(3+d)< -4?
May 3, 2017
See the solution process below:
#### Explanation:
The absolute value function takes any negative or positive term and transforms it to its positive form. Therefore, we must solve the term within the absolute value function for both its negative and positive equivalent.
$4 < 3 + d < - 4$
Subtract $\textcolor{red}{3}$ from each segment of the system of equations to solve for $d$ while keeping the system balanced:
$- \textcolor{red}{3} + 4 < - \textcolor{red}{3} + 3 + d < - \textcolor{red}{3} - 4$
$1 < 0 + d < - 7$
$1 < d < - 7$
Or
$d < - 7$ and $d > 1$
Or, in interval notation:
$\left(- \infty , - 7\right)$ and $\left(1 , \infty\right)$
|
2019-11-17 17:01:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 10, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6763168573379517, "perplexity": 550.4194992266242}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669225.56/warc/CC-MAIN-20191117165616-20191117193616-00488.warc.gz"}
|
https://gmatclub.com/forum/is-a-b-a-b-1-ab-0-2-a-b-279027.html
|
GMAT Question of the Day - Daily to your Mailbox; hard ones only
It is currently 18 Oct 2019, 14:55
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Is |a - b| < |a| + |b| ? (1) ab< 0 (2) a^b < 0
Author Message
TAGS:
### Hide Tags
Director
Joined: 02 Oct 2017
Posts: 719
Is |a - b| < |a| + |b| ? (1) ab< 0 (2) a^b < 0 [#permalink]
### Show Tags
Updated on: 15 Oct 2018, 07:21
8
00:00
Difficulty:
65% (hard)
Question Stats:
49% (01:44) correct 51% (02:10) wrong based on 116 sessions
### HideShow timer Statistics
Is |a - b| < |a| + |b| ?
(1) ab< 0
(2) a^b < 0
_________________
Give kudos if you like the post
Originally posted by push12345 on 15 Oct 2018, 07:14.
Last edited by Bunuel on 15 Oct 2018, 07:21, edited 1 time in total.
Renamed the topic and edited the question.
Manager
Joined: 01 Jan 2018
Posts: 80
Re: Is |a - b| < |a| + |b| ? (1) ab< 0 (2) a^b < 0 [#permalink]
### Show Tags
15 Oct 2018, 08:40
(1) ab< 0 => a or b is negative.
If a<0 and b>0 then |-a-b|=a+b=|a+b|
We get a definite answer no to the question so sufficient.
(2) a^b < 0=> a is negative
but we don't know what the sign of b will be so not sufficient
So A is the correct choice.
_________________
+1 Kudos if you find this post helpful
CEO
Status: GMATINSIGHT Tutor
Joined: 08 Jul 2010
Posts: 2978
Location: India
GMAT: INSIGHT
Schools: Darden '21
WE: Education (Education)
Re: Is |a - b| < |a| + |b| ? (1) ab< 0 (2) a^b < 0 [#permalink]
### Show Tags
15 Oct 2018, 10:29
push12345 wrote:
Is |a - b| < |a| + |b| ?
(1) ab< 0
(2) a^b < 0
Question :Is |a - b| < |a| + |b| ?
On the right side of equation we will always see the sum of absolute values of a and b
But on the left hand side the |a - b| to be smaller, it's important to have the same sign because in case of opposite signs of a and b, left side of equation will be same as the right side of the equation
i.e. for the condition of the question to be true it's must for a and b to have same sign therefore,
Question REPHRASED : Do a and b have the same sign ?
Statement 1: ab< 0
i.e. a and b have opposite sign only then the product of a and b will be negative hence
SUFFICIENT
Statement 2: a^b < 0
$$(-1)^1 < 0$$
Also, $$(-1)^{-1} < 0$$
i.e. and b may have the same sign as well as opposite signs hence
NOT SUFFICIENT
_________________
Prosper!!!
GMATinsight
Bhoopendra Singh and Dr.Sushma Jha
e-mail: info@GMATinsight.com I Call us : +91-9999687183 / 9891333772
Online One-on-One Skype based classes and Classroom Coaching in South and West Delhi
http://www.GMATinsight.com/testimonials.html
ACCESS FREE GMAT TESTS HERE:22 ONLINE FREE (FULL LENGTH) GMAT CAT (PRACTICE TESTS) LINK COLLECTION
GMATH Teacher
Status: GMATH founder
Joined: 12 Oct 2010
Posts: 935
Re: Is |a - b| < |a| + |b| ? (1) ab< 0 (2) a^b < 0 [#permalink]
### Show Tags
15 Oct 2018, 11:52
push12345 wrote:
Is |a - b| < |a| + |b| ?
(1) ab< 0
(2) a^b < 0
$$\left| {a - b} \right|\,\,\mathop < \limits^? \,\,\,\left| a \right| + \left| b \right|\,\,\,\,\,\,\,\mathop \Leftrightarrow \limits^{\left( * \right)} \,\,\,\,\,ab\,\,\mathop > \limits^? \,\,\,0$$
(*) This equivalence will be PROVED at the end of this post. Ignore this proof if you don´t like math!
$$\left( 1 \right)\,\,ab < 0\,\,\,\, \Rightarrow \,\,\,\,\left\langle {{\text{NO}}} \right\rangle \,\,\,\, \Rightarrow \,\,\,\,{\text{SUFF}}.$$
$$\left( 2 \right)\,\,\,{a^b} < 0\,\,\,\left\{ \matrix{ \,{\rm{Take}}\,\,\left( {a,b} \right) = \left( { - 1,1} \right)\,\,\,\, \Rightarrow \,\,\,\left\langle {{\rm{NO}}} \right\rangle \,\,\,\,\,\,\,\,\,\,\,\,\,\,\left[ {\,\,\,{{\left( { - 1} \right)}^1} = - 1\,\,\,} \right] \hfill \cr \,{\rm{Take}}\,\,\left( {a,b} \right) = \left( { - 1, - 1} \right)\,\,\,\, \Rightarrow \,\,\,\left\langle {{\rm{YES}}} \right\rangle \,\,\,\,\,\,\,\,\,\,\,\,\,\left[ {\,\,\,{{\left( { - 1} \right)}^{ - 1}} = {1 \over {{{\left( { - 1} \right)}^1}}} = - 1\,\,\,} \right]\,\, \hfill \cr} \right.$$
This solution follows the notations and rationale taught in the GMATH method.
Regards,
Fabio.
POST-MORTEM:
$$\left( * \right)\,\,\,\left\{ \matrix{ \,\left( i \right)\,\,\,\,\left| {a - b} \right|\,\, < \,\,\,\left| a \right| + \left| b \right|\,\,\,\,\,\,\, \Rightarrow \,\,\,\,ab > 0 \hfill \cr \,\left( {ii} \right)\,\,\,\,ab > 0\,\,\,\, \Rightarrow \,\,\,\,\left| {a - b} \right|\,\, < \,\,\,\left| a \right| + \left| b \right|\,\,\,\,\,\,\,\,\, \Leftrightarrow \,\,\,\,\,\,\,\,\,\left| {a - b} \right|\,\, \ge \,\,\,\left| a \right| + \left| b \right|\,\,\,\,\, \Rightarrow \,\,\,\,\,ab \le 0 \hfill \cr} \right.\,$$
$$\left( i \right)\,\,\,\,\left| {a - b} \right|\,\, < \,\,\,\left| a \right| + \left| b \right|\,\,\,\,\,\mathop \Rightarrow \limits^{{\text{squaring}}} \,\,\,\,{\left( {a - b} \right)^2} < \,\,\,{a^2} + 2\left| {ab} \right| + {b^2}\,\,\,\, \Rightarrow \,\,\,\,\, \ldots \,\,\,\,\, \Rightarrow \,\,\,\, - ab < \left| {ab} \right|\,\,\,\,\,\, \Rightarrow \,\,\,\,\,\,ab > 0$$
$$\left( {ii} \right)\,\,\,\,\left| {a - b} \right|\,\, \geqslant \,\,\,\left| a \right| + \left| b \right|\,\,\,\,\,\mathop \Rightarrow \limits^{{\text{squaring}}} \,\,\,\,{\left( {a - b} \right)^2} \geqslant \,\,\,{a^2} + 2\left| {ab} \right| + {b^2}\,\,\,\, \Rightarrow \,\,\,\,\, \ldots \,\,\,\,\, \Rightarrow \,\,\,\, - ab \geqslant \left| {ab} \right|\,\,\,\,\,\, \Rightarrow \,\,\,\,\,\,ab \leqslant 0\,\,\,$$
_________________
Fabio Skilnik :: GMATH method creator (Math for the GMAT)
Our high-level "quant" preparation starts here: https://gmath.net
Manager
Joined: 26 Apr 2011
Posts: 60
Location: India
GPA: 3.5
WE: Information Technology (Computer Software)
Re: Is |a - b| < |a| + |b| ? (1) ab< 0 (2) a^b < 0 [#permalink]
### Show Tags
15 Oct 2018, 22:43
GMATinsight wrote:
push12345 wrote:
Is |a - b| < |a| + |b| ?
(1) ab< 0
(2) a^b < 0
Question :Is |a - b| < |a| + |b| ?
On the right side of equation we will always see the sum of absolute values of a and b
But on the left hand side the |a - b| to be smaller, it's important to have the same sign because in case of opposite signs of a and b, left side of equation will be same as the right side of the equation
i.e. for the condition of the question to be true it's must for a and b to have same sign therefore,
Question REPHRASED : Do a and b have the same sign ?
Statement 1: ab< 0
i.e. a and b have opposite sign only then the product of a and b will be negative hence
SUFFICIENT
Statement 2: a^b < 0
$$(-1)^1 < 0$$
Also, $$(-1)^{-1} < 0$$
i.e. and b may have the same sign as well as opposite signs hence
NOT SUFFICIENT
I got the explanation but I messed up my solution when I removed the modulus with + and - signs for each of them! I mean when we remove modulus then there are 2 possibilities right? |a| => a and -a. Why we did not do this?
_________________
CEO
Status: GMATINSIGHT Tutor
Joined: 08 Jul 2010
Posts: 2978
Location: India
GMAT: INSIGHT
Schools: Darden '21
WE: Education (Education)
Re: Is |a - b| < |a| + |b| ? (1) ab< 0 (2) a^b < 0 [#permalink]
### Show Tags
16 Oct 2018, 04:43
1
MrCleantek wrote:
GMATinsight wrote:
push12345 wrote:
Is |a - b| < |a| + |b| ?
(1) ab< 0
(2) a^b < 0
Question :Is |a - b| < |a| + |b| ?
On the right side of equation we will always see the sum of absolute values of a and b
But on the left hand side the |a - b| to be smaller, it's important to have the same sign because in case of opposite signs of a and b, left side of equation will be same as the right side of the equation
i.e. for the condition of the question to be true it's must for a and b to have same sign therefore,
Question REPHRASED : Do a and b have the same sign ?
Statement 1: ab< 0
i.e. a and b have opposite sign only then the product of a and b will be negative hence
SUFFICIENT
Statement 2: a^b < 0
$$(-1)^1 < 0$$
Also, $$(-1)^{-1} < 0$$
i.e. and b may have the same sign as well as opposite signs hence
NOT SUFFICIENT
I got the explanation but I messed up my solution when I removed the modulus with + and - signs for each of them! I mean when we remove modulus then there are 2 possibilities right? |a| => a and -a. Why we did not do this?
Hi MrCleantek
Just a suggestion that I share with many of my students... Be logical first and use maths second... So I tend to avoid opening Modulus till it's not needed.
And the need to open modulus will be seen in less than 10% questions of inequality
_________________
Prosper!!!
GMATinsight
Bhoopendra Singh and Dr.Sushma Jha
e-mail: info@GMATinsight.com I Call us : +91-9999687183 / 9891333772
Online One-on-One Skype based classes and Classroom Coaching in South and West Delhi
http://www.GMATinsight.com/testimonials.html
ACCESS FREE GMAT TESTS HERE:22 ONLINE FREE (FULL LENGTH) GMAT CAT (PRACTICE TESTS) LINK COLLECTION
Manager
Joined: 26 Apr 2011
Posts: 60
Location: India
GPA: 3.5
WE: Information Technology (Computer Software)
Re: Is |a - b| < |a| + |b| ? (1) ab< 0 (2) a^b < 0 [#permalink]
### Show Tags
17 Oct 2018, 23:08
GMATinsight wrote:
Hi MrCleantek
Just a suggestion that I share with many of my students... Be logical first and use maths second... So I tend to avoid opening Modulus till it's not needed.
And the need to open modulus will be seen in less than 10% questions of inequality
Agreed! I will now remember to not open modulus by default. Thank you..
_________________
Math Expert
Joined: 02 Aug 2009
Posts: 7978
Re: Is |a - b| < |a| + |b| ? (1) ab< 0 (2) a^b < 0 [#permalink]
### Show Tags
17 Oct 2018, 23:41
2
2
push12345 wrote:
Is |a - b| < |a| + |b| ?
(1) ab< 0
(2) a^b < 0
Two ways...
1) Logical inference
The right side is adding two absolute values so that is the MAX value possible. The Left side has the DISTANCE between the two values. So if they are on same side of 0, it will be less and of on either side of 0, it will be again MAX possible and thus equal to right hand side.
So we are looking for the relative signs of a and b, whether they are of Same sign or different
(1) ab<0
So both a and b are of different signs and thus both sides will be EQUAL
Ans NO
Sufficient
(2)a^b<0
a is surely <0 but B could be anything
Insufficient
A
2) Algebraic
Square both sides
$$|a+b|^2<(|a|+|b|)^2.......2ab<2|a||b|.......2|a||b|-2ab>0$$
When is this possible - when both a and b are of opposite signs
Thus we are looking for the relative signs of a and b, whether they are of Same sign or different
Rest will be same as above
A
_________________
Re: Is |a - b| < |a| + |b| ? (1) ab< 0 (2) a^b < 0 [#permalink] 17 Oct 2018, 23:41
Display posts from previous: Sort by
|
2019-10-18 21:55:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8142111301422119, "perplexity": 2259.3858219966573}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986684854.67/warc/CC-MAIN-20191018204336-20191018231836-00543.warc.gz"}
|
https://www.assignguru.com/mcqs/23192/540101175/
|
#### MACREONOMICS QUIZ 1
Labor, capital, natural resources, entrepreneurial ability, wages, interest, rent, and profit are commonly called:
resources or factors of production
The term macroeconomics generally refers to
how the national economy behaves
The Latin word ceteris paribus means
all other things held constant
A fallacy of composition is
the assumption that what is true for the individual is also true for everyone else
In Exhibit II on pg 31, if all the economy's resources are used efficiently to produce consumer goods, then the economy is at point
F
Which of the following points in Exhibit II (pg 31) is unattainable, given the quantity of resources and level of technology?
U
Which of the following points in Exhibit 2 (pg 31) represents an inefficient use of the economy's resources?
I
Point U in Exhibit 2 (pg 31) represents
an unattainable combination of good consumer and capital goods
Point I in Exhibit 2 (pg 31) represents
an inefficient combination of the two goods
Points inside the production possibilities frontier represent
inefficiency or unemployment (or both)
What's the difference between an opportunity cost and a sunk cost?
An opportunity cost is what we forgo with one decision and a sunk cost is a cost that can't be recovered.
The typical concave (i.e., bowed-out) shape of the production possibilities frontier reflects the law of increasing opportunity cost.
True
If we give-up mowing the neighbor's lawn fro $50 to watch TV, what is the opportunity cost of goofing-off?$50
Which of the following macroeconomic occurrences could shift a country's production-possibilities curve to the right (thus, increase its GDP)?
changes in resource availability
increases in capital stock
technological change
Improvement to the rules of the game
|
2023-03-25 00:44:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5214464664459229, "perplexity": 7740.204070613062}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945292.83/warc/CC-MAIN-20230325002113-20230325032113-00361.warc.gz"}
|
https://www.r-bloggers.com/2011/06/example-8-40-side-by-side-histograms/
|
Want to share your content on R-bloggers? click here if you have a blog, or here if you don't.
It’s often useful to compare histograms for some key variable, stratified by levels of some other variable. There are several ways to display something like this. The simplest may be to plot the two histograms in separate panels.
SAS
In SAS, the most direct and generalizable approach is through the sgpanel procedure.
proc sgpanel data = 'c:\book\help.sas7bdat';
panelby female;
histogram cesd;
run;
The results are shown above.
R
In R, the lattice package provides a similarly direct approach.
ds = read.csv("http://www.math.smith.edu/r/data/help.csv")
ds$gender = ifelse(ds$female==1, "female", "male")
library(lattice)
histogram(~ cesd | gender, data=ds)
The results are shown below.
|
2020-12-04 07:18:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33223336935043335, "perplexity": 3635.5644153477997}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141735395.99/warc/CC-MAIN-20201204071014-20201204101014-00370.warc.gz"}
|
https://brilliant.org/problems/limits-and-functions/
|
# Limits and functions
Calculus Level 3
Given,
$\large y=\frac{1}{x^{2}+1}$
Find $$y^{\prime}$$ using the definition of the derivative.
$$x=\tan \alpha$$ and $$y=\cos^2\alpha$$ so that $$\dfrac{dy}{dx}=\dfrac{\frac{dy}{d\alpha}}{\frac{dx}{d\alpha}}$$.
If $$y^{\prime}$$ can be expressed as $$\large\frac {\alpha x }{ x^{ \beta }+2x^{ \gamma }+\delta }$$.
What is the value of $$\alpha+\beta+\gamma+\delta$$?
×
Problem Loading...
Note Loading...
Set Loading...
|
2017-01-23 06:36:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9886304140090942, "perplexity": 1304.9859356808388}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00076-ip-10-171-10-70.ec2.internal.warc.gz"}
|
http://openstudy.com/updates/511afb3fe4b03d9dd0c3704e
|
## lavenderish Group Title Can a sequence be both arithmetic and geometric? Explain why. one year ago one year ago
1. SithsAndGiggles
For an arithmetic sequence a_n, each successive term has a common difference d. For a geometric sequence b_n, each successive term has a common ratio r. For a_n, the terms look like $\left\{a, a+d,a+2d,\ldots\right\}$ and for A_n, you have $\left\{a,ar, ar^2,\ldots\right\}$ Suppose a_n = A_n. That means that every term of a_n corresponds exactly with every term of A_n, meaning $a_1=A_1,\\ a_2=A_2,\\ \vdots$ Under this assumption, you have $\begin{cases}a = a\\ a+d=ar\\ a+2d=ar^2\\ \vdots\end{cases}$ I'm pretty sure there are no real numbers d and r that satisfy the system, so I don't think a sequence can be both arithmetic and geometric.
2. SithsAndGiggles
Except for d = 0 and r = 1, but then you just have the first term of both sequences, a.
3. KingGeorge
But that is such a sequence.
4. SithsAndGiggles
I was hoping to arrive at some contradiction, but I'm not sure how to get to it...
5. satellite73
the constant sequence is both but not a very interesting sequence if the sequence is not constant, then it cannot be both
6. SithsAndGiggles
Well there you go. I was thinking along the more "interesting" lines, I suppose.
7. KingGeorge
There is no contradiction, since there are example of sequences that are both arithmetic and geometric, such as 0,0,... a sequence can be both arithmetic and geometric. However, as satellite said, this isn't a very interesting sequence.
8. SithsAndGiggles
But in general, does my reasoning still work? Given some sequences a_n and A_n.
9. KingGeorge
In general, you would have to show that if $$\large a+sd=ar^s$$ and $$a\neq 0$$, then $$d=0$$ and $$r=1$$.
10. KingGeorge
Correction: If $$\large a+sd=ar^s$$ for all $$s\in\mathbb{Z}_{\ge0}$$, $$a\neq0$$, then $$d=0$$ and $$r=1$$.
11. SithsAndGiggles
Ah, I see now. Thanks!
12. KingGeorge
Also, I only include $$a\neq0$$ since if $$a=0$$, then all we need is $$d=0$$ to guarantee us the sequence $$0,0,...$$ (assuming you define $$0^0$$).
13. KingGeorge
If $$a+sd=ar^s$$, then for non-zero $$a$$, \large \begin{aligned} \frac{sd}{a}&=r^s-1\\ &=(r-1)(r^{s-1}+r^{s-2}+...+r+1) \end{aligned}However, both sides must be equal for all values of $$s\ge0$$, it must be that both sides are 0. Hence, $$r=1$$, and $$sd=0\implies d=0$$.
|
2014-10-23 05:20:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9350726008415222, "perplexity": 592.846964431642}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507450252.23/warc/CC-MAIN-20141017005730-00297-ip-10-16-133-185.ec2.internal.warc.gz"}
|
https://virtualpiano.net/artists/treyarch-sound/
|
# Treyarch Sound
1 Music Sheets
• ###### Genres:
Treyarch is an American video game developer, founded in 1996 by Peter Akemann and Dogan Koslu, and acquired by Activision in 2001. Located in Santa Monica, California, it is known for its work for the Call of Duty series, with some other games in the series developed by Infinity Ward and Sledgehammer Games.Credit: Wikipedia
## Artist's Music Sheets
• [k30] f a k f a k f k f a k f a k f [l81] f s l f s l f z f s l f s l f [k30] f a k f a k f k f a k f a k f [l81] f s l f s l f z f s l f s l f [xk] f a k f a k f [nk] f a k f a k f [ml] f s l f s l f z f s l f s l f [xk] f a k f a k f [nk] f a k f a k f [lb] f s l f s l f z f s l f s l f [xk] f a k f a k f [nk] f a k f a k f [ml] f s l f s l f z f s l f s l f [xk] f a k f a k f [nk] f a k f a k f [lb] f s l f s l f z f s l f s l hjJ h d J h d J h J h d J h d J h k h f k h f k h [xk] h f [zk] h f [nk] h [JB] h d J h d J h J h d J h d J h k h f k h f k h [xk] h f [zk] h f k h J h d J h d J h J G d J G d J G J G S J G S J G d J G d J G S L J G z J G L J G z J G L J G L J G L J G L J G L J G L
Level: 5
Length: 01:34
Intermediate
|
2021-08-05 20:23:46
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9393205642700195, "perplexity": 4007.971258914789}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046157039.99/warc/CC-MAIN-20210805193327-20210805223327-00496.warc.gz"}
|
http://laquintacolonna.it/egsp/linear-programming-pdf-notes.html
|
# Linear Programming Pdf Notes
Sketch the graph of the inequalities (constraints) and shade the feasible region. 1 – Solving Linear Programming Problems This lesson describes the use of Linear Programming to search for the optimal solutions to problems with multiple, conflicting objectives. We discuss generaliza-tions to Binary Integer Linear Programming (with an example of a manager of an activity hall), and conclude with an analysis of versatility of Linear Programming and the types of. Adobe Reader. McDougal Littell. Example : A small business enterprise makes dresses and trousers. Proofs When I was a student I found it very hard to follow proofs in books and lectures. Reading: Bazaraa et al. Linear Programming Lemon and Orange Trees. These feasible regions may be bounded, unbounded or the empty set. Very helpful notes for the students of 2nd year to prepare their paper of Maths according to syllabus given by Federal Board of …. Notes: Lesson #45 Essential Skill #45: HSA-REI. The Covariance Matrix Definition Covariance Matrix from Data Matrix We can calculate the covariance matrix such as S = 1 n X0 cXc where Xc = X 1n x0= CX with x 0= ( x 1;:::; x p) denoting the vector of variable means C = In n 11n10 n denoting a centering matrix Note that the centered matrix Xc has the form Xc = 0 B B B B B @ x11 x 1 x12 x2 x1p. (b) Express constraints in standard form. Estimation of the simple linear regression model. Every linear programming problem can. Linear Programming Notes 5 Epstein, 2013 Graphing Systems of Linear Inequalities The general forms for linear inequalities are Example Graph 23 12xy-³ NOTE - if your line passes through the origin, you must take a different point for a test point. It means that production can be increased to some extent by varying factors proportion. original example given by the inventor of the theory, Dantzig. Notes on Linear Programming James Aspnes April 4, 2004 1 Linear Programming Linear programs are a class of combinatorial optimization problems involv-ing minimizing or maximizing a linear function of a of some real-valued variables subject to constraints that are inequalities on additional linear functions of those variables. Linear programming is an optimization technique for a system of linear constraints and a linear objective function. Rugh These notes were developed for use in 520. They are ubiquitous is science and engineering as well as economics, social science, biology, business, health care, etc. This book is more valuable for historic purposes, as it was Dantzig's first book and the first account of the simplex method by its inventor. Duality in linear programming Linear programming duality Duality theorem: If M 6= ;and N 6= ;, than the problems (P), (D) have optimal solutions. The technique is very powerful and found especially useful because of its. 6 Applications of Operations Research 1. regpar calculates confidence intervals for population attributable risks, and also for scenario proportions. Non-refrigerated. 5 The Inverse of a Matrix 36 2. The book covers the syllabus of Linear Programming for the | Find, read and cite all the research. Linear Programming Notes Carl W. This technique has proven to be of value in solving a variety of problems that include planning, routing, scheduling, as-signment and design. Linear Programming halfspace,andthereforeanypolyhedron,isconvex—ifapolyhedroncontainstwopoints x and y,thenitcontainstheentirelinesegmentxy. Multi-objective linear programming is a subarea of mathematical optimization. 50 and a bus $7. An interpretation of a primal/dual pair 336 A. Each product has to be assembled on a particular machine, each unit of product A taking 12 minutes of assembly time and each unit of product B 25 minutes of assembly time. 1 Bases, Feasibility, and Local Optimality. An objective function defines the quantity to be optimized, and the goal of linear programming is to find the values of the variables that maximize or minimize the objective function. As illustrations of particular duality rules, we use one small linear program made up for the purpose, and one from a game theory application that we pre-viously developed. FORMULATING LINEAR PROGRAMMING PROBLEMS One of the most common linear programming applications is the product-mix problem. Theorem 1 If a linear programming problem has a solution, then it. This example shows the solution of a typical linear programming problem. 310A lecture notes March 17, 2015 Linear programming Lecturer: Michel Goemans 1 Basics Linear Programming deals with the problem of optimizing a linear objective function subject to linear equality and inequality constraints on the decision variables. Hungarian Method the Whole Course • 1. 4 Find the set of feasible solutions that graphically represent the constraints. Math 5593 LP Lecture Notes (Unit II: Theory & Foundations), UC Denver, Fall 2013 3 De nition 6. download free lecture notes slides ppt pdf ebooks This Blog contains a huge collection of various lectures notes, slides, ebooks in ppt, pdf and html format in all subjects. 1) 2; Standard form for linear programs [2016-09-07 Wed], [2016-09-09 Fri] (Ch. Linear Programming. C# Notes for Professionals - Compiled from StackOverflow documentation (3. It is an important optimization (maximization or minimization) technique used in decision making is business and everyday life for obtaining the maximum or minimum values as required of a linear expression to satisfying certain number of given linear restrictions. A Linear Flowchart (Viewgraph 6) is a diagram that displays the sequence of work steps that make up a process. We describe Linear Programming, an important generalization of Linear Algebra. It is a special case of mathematical programming. Theory of Linear and Integer Programming Alexander Schrijver Centrum voor Wiskunde en Informatica, Amsterdam, The Netherlands This book describes the theory of linear and integer programming and surveys the algorithms for linear and integer programming problems, focusing on complexity analysis. Mathematics 2nd Year All Chapter Notes | Math FSc Part 2 “Class 12 Mathematics Notes” Mathematics-XII (Punjab Text Book Board, Lahore) These Mathematics-XII FSc Part 2 (2nd year) Notes are according to “Punjab Text Book Board, Lahore”. 1 B+-Trees 375 10. Complete 2 of the following tasks IXL Practice Worksheets Creating K9 (Alg 1) (at least to 90) Score = _____ Level 3: Solving Linear Inequality. Linear Programming: Word Problems and Applications Solving Linear Programming Problems Step 1: Interpret the given situations or constraints into inequalities. Why linear programming is a very important topic? Alot of problemscan be formulated as linear programmes, and There existefficient methodsto solve them or at least givegood approximations. Computer Solutions of Linear Programs B29 Using Linear Programming Models for Decision Making B32 Before studying this supplement you should know or, if necessary, review 1. Notes: (Handwritten notes and the schedule from Spring 2017 are available here. Donatelle Test Bank Accounting & Auditing Research Tools & Strategies, 7th Edition Weirich, Pearson, Churyk Solution Manual + Case Accounting : Accounting 7e by horngren Test Bank Accounting 7e Horngren Solution Manual. Best assignment of 70 people to 70 tasks. It also contains solved questions for the better grasp of the subject in an easy to download PDF file and will help you score more marks in board exams. !Magic algorithmic box. The Covariance Matrix Definition Covariance Matrix from Data Matrix We can calculate the covariance matrix such as S = 1 n X0 cXc where Xc = X 1n x0= CX with x 0= ( x 1;:::; x p) denoting the vector of variable means C = In n 11n10 n denoting a centering matrix Note that the centered matrix Xc has the form Xc = 0 B B B B B @ x11 x 1 x12 x2 x1p. Unicode Nearly Plain Text Encoding of Mathematics 4 Unicode Technical Note 28 The present section introduces the linear format with fractions, subscripts, and superscripts. For example, given a matrix A\in {\mathbb R}^{n\times m} and vectors b\in {\mathbb R}^ n , c\in {\mathbb R}^ m , find. KC Border Notes on the Theory of Linear Programming 3 3 Fundamental Duality Theorem of LP If both a maximum linear program in standard inequality form and its dual are feasible, then both have optimal solutions, and the values of the two programs are the same. nonlinear programming and evolutionary optimization. Computer Solutions of Linear Programs B29 Using Linear Programming Models for Decision Making B32 Before studying this supplement you should know or, if necessary, review 1. 3 Solve 7 = 5x 2x+1. {"code":200,"message":"ok","data":{"html":". Analyzing Linear Models ⃣Interpret parts of an expression in real-world context ⃣Write a function that describes a relationship between two quantities 2. for any scalar. ISBN 0-7167-1195-8 (hardback), or 0-7167-1587-2 (paperback). The area of a parking lot is 600 square meters. Lesson Plans of Linear Programming; cost minimzation and non standarad Lp; lp; two varable three constraints; notes for 3 varaible; a good presentation on linear programming; the state of the art site for LP; linear progra in excel; a graphiacal calculator for linear progrmming; Archives. It is used to make processes more efficient and cost-effective. Solving Linear Programs by Computer105 6. A cone Kis convex if and only if K+ K K. The presentation in this part is fairly conven-tional, covering the main elements of the underlying theory of linear programming, many of the most effective numerical algorithms, and many of its important special applications. Linear programming is the process of taking various linear inequalities relating to some situation, and finding the "best" value obtainable under those conditions. ECONOMETRICS BRUCE E. To formulate a Linear programming problem (LPP) from set of statements. Ok New York University July 8, 2007. 4 A Diet Problem 68 3. Let x be a vertex of the feasible set. I hope you enjoyed reading this article. It is a technique for the optimization of an objective function, subject to linear equality and linear inequality constraints. The branch and bound method: A simple case: The knapsack problem. The transfer function of a system is a mathematical model in that it is an opera-tional method of expressing the differential equation that relates the output vari-able to the input variable. Many business problems are linear or can be "simplified" as linear problems, so we can use. Two-variable inequalities word problems Get 3 of 4 questions to level up! Systems of inequalities word problems Get 3 of 4 questions to level up! Analyzing structure with linear inequalities Get 5 of 6 questions to level up! Level up on all the skills in this unit and collect up to 300 Mastery points! In Class XI, we have studied systems of. Advanced Engineering Mathematics by HK Dass is one of the popular and useful books in Mathematics for Engineering Students. The purpose of this note is to describe the value of linear program models. Plot the "y=" line (make it a solid line for y≤ or y≥, and a dashed line for y< or y>) Shade above the line for a "greater than" (y> or y≥). pdf Linear Programming Section 1: Formulating and solving graphically Notes and Examples These notes contain subsections on: Formulating LP problems Solving LP problems. Donatelle Test Bank Accounting & Auditing Research Tools & Strategies, 7th Edition Weirich, Pearson, Churyk Solution Manual + Case Accounting : Accounting 7e by horngren Test Bank Accounting 7e Horngren Solution Manual. 6 Determinants 42 3 Introduction to Linear Programming 49 3. the help of Linear Programming so as to adjust the remainder of the plan for best results. By Linear Programming Webmaster on October 26, 2015 in Linear Programming (LP) The Northwest Corner Method (or upper left-hand corner) is a heuristic that is applied to a special type of Linear Programming problem structure called the Transportation Model , which ensures that there is an initial basic feasible solution (non artificial). 6 Applications of Operations Research 1. Linear time median finding and selection (Luc Devroye's course notes) Linear time median finding and selection (David Eppstein's course notes) Convex hulls; 15. A multiple objective linear program (MOLP) is a linear program with more than one objective function. 3 Stages of Development of Operations Research 1. Updated From Graphics Processing to General Purpose Parallel Computing. For programmers it will feel more familiar than others and for new computer users, the next leap to programming will not be so large. Problem Solving and Programming Concepts, 9E Maureen Sprankle JInstructor ManualHubbard Instructor Solutions Manual +Otto the Robot Software Basic Statistics for Business and Economics, 7th Edition, Douglas A. It turns out that lots of interesting problems can be described as linear programming problems. This lecture is about: Linear, Programming, System, Production, Increase, Quantity, Process, Exhibit. Buy pre-programmed parts through Linear Express® for quantities of 1 to 499 units option 2B. Pradyumansinh Jadeja (9879461848) | 2130702 – Data Structure 1 Introduction to Data Structure Computer is an electronic machine which is used for data processing and manipulation. geometric interpretation 12 7. Linear programming describes a broad class of optimization tasks in which both the con-straints and the optimization criterion are linear functions. Linear programming questions and examples. How do we design one? Delays. The programming guide to the CUDA model and interface. Students who don’t have the optimization toolbox can request a free semester license of the MOSEK optimization tools for MATLAB. A solution manual is available on request (e-mail to [email protected] Duality in linear programming Linear programming duality Duality theorem: If M 6= ;and N 6= ;, than the problems (P), (D) have optimal solutions. Unicode Nearly Plain Text Encoding of Mathematics Unicode Technical Note 28 3 notation to include the properties of the higher-level layer, but at the cost of re-duced readability. USGS Publications Warehouse. Welcome! This is one of over 2,200 courses on OCW. If our inequality had ≥ or ≤ we draw the bounding line as a solid. Developing a master schedule, Chapter 14 Linear. NCERT Notes for Class 12 Mathematics Chapter 12: Linear Programming Linear Programming. James Talmage Adams produced the copy here in February 2005. All constraints relevant to a linear programming problem need to be defined in the. It covers the fundamentals of programming, roughly the same material that is covered in a beginning programming course in a university or in a high school AP Computer Science course. We can find the constraints in the right side (column N). Given the cost matrix c (n×n), get modified c’: –(a) For each row, subtract the minimum number in that row from all numbers in that row. Candidates who are pursuing in Class 12 are advised to revise the notes from this post. In this case, we will use a 0-1 variable x j for each investment. Linear Programming: Penn State Math 484 Lecture Notes. Wiley, 1998. Quantitative Methods Notes Note that the pdf format is identical to your notes. Various other bits were inspired by other lecture notes and sources on the Internet. a reasonable amount of time. If one problem has an optimal solution, than the optimal values are equal. The cost of producing each unit of X is: • for machine A: 50 minutes, • for machine B: 30 minutes. Please feel free to use, edit and re-distribute these notes as you wish. NCERT Notes for Class 12 Mathematics Chapter 12: Linear Programming Linear Programming. Algebra 2 -57 - Systems of Equations SECTION 3. Clearly a minimum-cost flow can be. At the end of each chapter there is a section with bibliographic notes and a section with exercises. Lecture 13: Sensitivity Analysis Linear Programming 7 / 62. Introduction to Stochastic Processes - Lecture Notes (with 33 illustrations) Gordan Žitković Department of Mathematics The University of Texas at Austin. Linear Programming (LP) is an attempt to find a maximum or minimum solution to a function, given certain constraints. IPSUR: Introduction to Probability and Statistics Using R by G. LINEAR PROGRAMMING 507 given sum by the dealer in purchasing chairs and tables is an example of an optimisation problem as well as of a linear programming problem. CS 3510 Design & Analysis of Algorithms Section A, Lecture #12 Linear Programming Instructor: Richard Peng Oct 11, 2017 DISCLAIMER: These notes are not necessarily an accurate representation of what I said during the class. Linear programming was born during the second World War out of the necessity of solving military logistic. md: Operations Research Lecture notes: Jun 7, 2018. Manual programming with LTpowerPlay™ Built In Programming Utility (PC->NVM) option 1B. Click on a link to view the lecture notes in a graphics (. ! Deployment Flowchart. should be non negative. download free lecture notes slides ppt pdf ebooks This Blog contains a huge collection of various lectures notes, slides, ebooks in ppt, pdf and html format in all subjects. An Introduction to Linear Programming Linear Programming is a generalization of Linear Algebra. Best assignment of 70 people to 70 tasks. 1 What Is a Linear Programming Problem? 49 3. The Holiday Meal Turkey Ranch is considering buying two different brands of turkey feed and blending them to provide a good, low-cost diet for its turkeys. The transfer function of a system is a mathematical model in that it is an opera-tional method of expressing the differential equation that relates the output vari-able to the input variable. EntranceTutorials. A cone Kis convex if and only if K+ K K. Aggregate planning, Chapter 13 4. Linear Programming provides practical and better quality of decisions’ that reflect very precisely the limitations of the system i. We want to avoid as much recomputing as possible, so we want to find a subset of files to store such that The files have combined size at most. Linear Programming August 28, 2009 Linear Programming We want to optimize (either maximize or minimize) a function given a set of constraints (inequalities) that must be satisfied (which makes the feasible region. Given the vastness of its topic, this chapter is divided into several parts, which can be read. Problems that Can’t be Initialized by Hand103. 1 Linear Indexing 359 10. Typical Linear Programming Problem. An MOLP is a special case of a vector linear program. 17) can be transformed into a form where the constraints only consist of equations and elemen-tary inequalities of the form xi ‚ 0. linear programming: Mathematical technique used in computer modeling (simulation) to find the best possible solution in allocating limited resources (energy, machines, materials, money, personnel, space, time, etc. A typical example would be taking the limitations of materials and labor, and then determining the "best" production levels for maximal profits under those conditions. Separable piece-wise linear convex function minimization problems, uses in curve fitting and linear parameter estimation. Theorem 1 If a linear programming problem has a solution, then it. Linear Programming (LP) is an attempt to find a maximum or minimum solution to a function, given certain constraints. edu January 3, 1996 Latest Revision: Fall 2003. No enrollment or registration. Ris a computer programming language. Finite math teaches you how to use basic mathematic processes to solve problems in business and finance. • Binding a variable in Python means setting a name to hold a reference to some object. The "classic" reference book on C++ written by the inventor of the language, updated with details of the C++11 standard. The linear equation above, for. It contains short tricks, formulae, tips and techniques to solve any question related to the particular topic. Description. December 1998. Proposition 2. 6 Further Reading 382 10. Linear programming (LP) is a method to achieve the optimum outcome under some requirements represented by linear relationships. Such prob-. and combines these tools to make a new set of knowledge for decision making. All the important topics are covered in the exercises and each answer comes with a detailed explanation to help students understand concepts better. should be non negative. Math 3311, with two lecture hours per week, was primarily for non-mathematics majors and was required by several engineering departments. Linear Programming Notes Carl W. To make a trousers requires 15 minutes of cutting and 2 1 hour of stitching. modeling and solving LP problems in a spreadsheet. Form 4 Mathematics - LINEAR PROGRAMMING. Questions about linear programming are more suitable for the Mathematical Optimization, Discrete-Event Simulation, and OR community. KC Border Notes on the Theory of Linear Programming 3 3 Fundamental Duality Theorem of LP If both a maximum linear program in standard inequality form and its dual are feasible, then both have optimal solutions, and the values of the two programs are the same. Lin-ear Programming is used to successfully model numerous real world situations, ranging. Linear Programming: Exercises 1. Dependent variables, on the left, are called basic variables. Welcome to the Mathematical Institute course management portal. Resources typically include raw materials, manpower, machinery, time, money and space. It covers the fundamentals of programming, roughly the same material that is covered in a beginning programming course in a university or in a high school AP Computer Science course. pdf]]> bdti http://bitsavers. Linear Search. Based on your location, we recommend that you select:. Step 3: Determine the maximum value or minimum value ax by from the graph by drawing the straight line ax by k. Linear programming is a special type of mathematical programming. 4) where x is a vector of real-valued variables (sometimes assumed to be nonnegative), c and b are vectors of real constants, and A is a matrix of real constants. 9 Probability and Statistics - ACE Academy Handwritten Notes | GATE/IES [PDF] 2 Related Posts. Computer Solutions of Linear Programs B29 Using Linear Programming Models for Decision Making B32 Before studying this supplement you should know or, if necessary, review 1. Unicode Nearly Plain Text Encoding of Mathematics Unicode Technical Note 28 3 notation to include the properties of the higher-level layer, but at the cost of re-duced readability. If you want to read more about linear programming, some good references are [6, 1]. We want to avoid as much recomputing as possible, so we want to find a subset of files to store such that The files have combined size at most. It is a technique for the optimization of an objective function, subject to linear equality and linear inequality constraints. Linear programming is a procedure for finding the maximum or minimum value of a function in two variables, subject to given conditions on the variables called constraints. Linear programming is a mathematical method technique for maximizing or minimizing a linear function of several variables. Second Order Linear Homogeneous Differential Equations with Constant Coefficients For the most part, we will only learn how to solve second order linear equation with constant coefficients (that is, when p(t) and q(t) are constants). Given the vastness of its topic, this chapter is divided into several parts, which can be read. Set Up a Linear Program, Solver-Based. In contrast, the algebraic form is much more convenient as a standard for defining and implementing the algorithm that will be described. Candidates who are pursuing in Class 12 are advised to revise the notes from this post. Operational Research Notes. 0 Linear Programming We start our studies of optimization methods with linear programming. Cynthia Church pro-duced the first electronic copy in December 2002. 1 Optimal Solution of a Linear Programming Problem If a linear programming problem has a solution, it must occur at a vertex of the set of feasible solutions. Brown, Dolciani, Sorgenfrey, & Kane. Analyzing Linear Models ⃣Interpret parts of an expression in real-world context ⃣Write a function that describes a relationship between two quantities 2. It mak es sense that y ou can pro duce co ns in only whole n um b er units. How to Graph a Linear Inequality. Linear regression is an analysis that assesses whether one or more predictor variables explain the dependent (criterion) variable. Mercer}, year={2014} }. The objective and constraints in linear programming problems must be expressed in terms of linear equations or inequalities. 2 History Linear programming is a relatively young mathematical discipline, dating from the invention of the simplex method by G. File has size bytes and takes minutes to re-compute. A Linear Program for Zero-Sum Game Players100 4. Inequalities (HINT: There are 4. Linear programming is the process of finding a maximum or minimum of a linear objective function subject to a system of linear constraints. Mercer}, year={2014} }. Linear regression is an analysis that assesses whether one or more predictor variables explain the dependent (criterion) variable. Linear programming has many practical applications (in transportation, production planning. Day 3 Notes Date. It aims at complementing the more practically oriented books in this field. 1) The traveling salesman problem: handout, slides (I. Linear Programming halfspace,andthereforeanypolyhedron,isconvex—ifapolyhedroncontainstwopoints x and y,thenitcontainstheentirelinesegmentxy. 3 Solve 7 = 5x 2x+1. FORMULATING LINEAR PROGRAMMING PROBLEMS One of the most common linear programming applications is the product-mix problem. Take a quick interactive quiz on the concepts in Developing Linear Programming Models for Simple Problems or print the worksheet to practice offline. The purpose of this note is to describe the value of linear program models. (B) Level 3 1. It means that production can be increased to some extent by varying factors proportion. Clearly a minimum-cost flow can be. A typical example would be taking the limitations of materials and labor, and then determining the "best" production levels for maximal profits under those conditions. 10 x y The objective function has its optimal value at one of the vertices of the region determined by the constraints. Quantitative Methods Notes Note that the pdf format is identical to your notes. 3 Solve 7 = 5x 2x+1. Geometry of Spaces with an Inner Product. They originated as handwritten notes in a course at the University of Toronto given by Prof. Example : A small business enterprise makes dresses and trousers. In a linear programming problem, we have a function, called the objective function, which depends linearly on a number of independent variables, and which we want to optimize in the sense of either finding its mini-mum value or maximum value. 3) Subject to Ax ≤b (9. It is capable of handling a variety of problems, ranging from finding schedules for airlines or movies in a theater to distributing oil from refineries to markets. Developing a master schedule, Chapter 14 Linear. troduction to abstract linear algebra for undergraduates, possibly even first year students, specializing in mathematics. Linear programming is a procedure for finding the maximum or minimum value of a function in two variables, subject to given conditions on the variables called constraints. In which we show how to use linear programming to approximate the vertex cover problem. It turns out that there is an efficient algorithm. Well, the applications of Linear programming don't end here. THE SUBJECTS INCLUDED ARE: ACCOUNTANCY ,AGRICULTURE, BIOLOGY, COMPUTER, ENGLISH LANGUAGE, GENERAL STUDIES,ADVANCED MATHEMATICS, ECONOMICS , KISWAHILI, COMMERCE, HISTORY, PHYSICS, BASIC. com, Elsevier’s leading platform of peer-reviewed scholarly literature. But the production of a number of goods can be increased to some extent by increasing only one or two inputs. /Java5/Notes interactively, thinking about and answering the question at the bottom of each page. {"code":200,"message":"ok","data":{"html":". Linear Programming Notes VIII: The T ransp ortation Problem 1 In tro duction Sev eral examples during the quarter came with stories in whic h v ariables describ ed quan tities that came in discrete units. 5 y 0 4 16 20. The Java byte-code compiler translates a Java source file into machine-independent byte code. Ltd, 2nd edition, Universities. When programmer collects such type of data for processing, he would require to store all of them in computer’s main memory. Lind Test Bank Introduction to Programming Using Visual Basic 2010, 8E David I. We will now discuss how to find solutions to a linear programming problem. The linear programming for class 12 concepts includes finding a maximum profit, minimum cost or minimum use of resources, etc. On the complexity of Linear Programming 199 Proof. Algebra 2 -57 - Systems of Equations SECTION 3. Typical Linear Programming Problem. The profit on a dress is R40 and on a pair. Study for CBSE class 12th for Maths, Physics and Chemistry all topics here for free and watch explanation video on each CBSE 12th Maths, Physics and Chemistry, Biology topic here. Computer Solutions of Linear Programs B29 Using Linear Programming Models for Decision Making B32 Before studying this supplement you should know or, if necessary, review 1. Mercer}, year={2014} }. Linear Programming. This document is highly rated by B Com students and has been viewed 5057 times. 50 and a bus$7. THE SUBJECTS INCLUDED ARE: ACCOUNTANCY ,AGRICULTURE, BIOLOGY, COMPUTER, ENGLISH LANGUAGE, GENERAL STUDIES,ADVANCED MATHEMATICS, ECONOMICS , KISWAHILI, COMMERCE, HISTORY, PHYSICS, BASIC. There should be copy on reserve in the Koerner library. 1 Introduction to Linear Programming Linear programs began to get a lot of attention in 1940's, when people were interested in minimizing costs of various systems while meeting di erent constraints. The quantity is known as the objective function and the conditions (or limits) are the restrictions. (3) In Linear Algebra, we use Gaussian Elimination with Pivoting to put an augmented matrix into RREF (Row Reduced Echelon Form). Linear Programming Problems Linear programming problems come up in many applications. At the end of each chapter there is a section with bibliographic notes and a section with exercises. Example : A small business enterprise makes dresses and trousers. Data structures, Algorithms and Applications in C++, S. In these “Operational Research Notes PDF”, you will study the broad and in depth knowledge of a range of operation research models and techniques, which can be applied to a variety of industrial applications. For example, the custom furniture store can use a linear programming method to examine how many leads come from TV commercials, newspaper display ads and online marketing efforts. It is an important optimization (maximization or minimization) technique used in decision making is business and everyday life for obtaining the maximum or minimum values as required of a linear expression to satisfying certain number of given linear restrictions. It is used to make processes more efficient and cost-effective. Netwon's Method Perceptron. Algebra 2 -57 - Systems of Equations SECTION 3. 3 Related Queries. Cell F4 is our equation P which has to be. But if you’re on a tight budget and have to watch those …. Specifically, we will use the GNU linear programming kit (GLPK), which is available free of charge. The presentation in this part is fairly conven-tional, covering the main elements of the underlying theory of linear programming, many of the most effective numerical algorithms, and many of its important special applications. The contribution margin is $3 for A and$4 for B. Most students taking a course in linear algebra will have completed courses in di erential and integral calculus, and maybe also multivariate calculus, and will typically be second-year students in university. Note before drawing x+y<30, it has to be rewritten as y<30 - x and treated y = 30 -x. 40x + 30y ≥ 4 000. the help of Linear Programming so as to adjust the remainder of the plan for best results. we provide the links which is already available on the internet. So, for the domain we need to avoid division by zero, square roots of negative. Cell F4 is our equation P which has to be. Quadratic elements give exact nodal values for the cubic solution etc. The impact of tsunamis on human societies can be traced back in written history to 480 BC, when the Minoan civilization in the Eastern Mediterranean was wiped out by great tsunami waves generated by the volcanic explosion of the island of Santorin. Break-even Prices and Reduced Costs x1 x2 x3 x4 x5 x6 x7 x8 b 0:5 1 0 0 :015 0 0 :05 25 5 0 0 0 :05 1 0 :5 50. In which we show how to use linear programming to approximate the vertex cover problem. This list gives you access to lecture notes in design theory, finite geometry and related areas of discrete mathematics on the Web. Theory of Linear and Integer Programming Alexander Schrijver Centrum voor Wiskunde en Informatica, Amsterdam, The Netherlands This book describes the theory of linear and integer programming and surveys the algorithms for linear and integer programming problems, focusing on complexity analysis. Prove Proposition 2. • If X = n, the problem is called unconstrained • If f is linear and X is polyhedral, the problem is a linear programming problem. Linear Programming Overview. Others require special software to display or print them. General Notes: Linear programming is a recently devised technique for providing specific numerical solutions of problems which earlier could be solved only in vague qualitative terms by using the apparatus of the general theory of the firm. Unicode Nearly Plain Text Encoding of Mathematics Unicode Technical Note 28 3 notation to include the properties of the higher-level layer, but at the cost of re-duced readability. Linear programming can be used to solve financial problems involving multiple limiting factors and multiple alternatives. Hungarian Method the Whole Course • 1. 3 Related Queries. 310A lecture notes March 17, 2015 Linear programming Lecturer: Michel Goemans 1 Basics Linear Programming deals with the problem of optimizing a linear objective function subject to linear equality and inequality constraints on the decision variables. One of the more important ideas about functions is that of the domain and range of a function. Dependent variables, on the left, are called basic variables. Notes: (Handwritten notes and the schedule from Spring 2017 are available here. Linear Homogeneous Recurrences De nition A linear homogeneous recurrence relation of degree k with constant coe cients is a recurrence relation of the form an = c1an 1 + c2an 2 + + ck an k with c1;:::;ck 2 R , ck 6= 0. The proof of the Duality Theorem 338 A. Questions about linear programming are more suitable for the Mathematical Optimization, Discrete-Event Simulation, and OR community. pdf Basic Linear Programming Problem. This technique has proven to be of value in solving a variety of problems that include planning, routing, scheduling, as-signment and design. 1 Linear Programming 1. Read, write reviews and more. Each product has to be assembled on a particular machine, each unit of product A taking 12 minutes of assembly time and each unit of product B 25 minutes of assembly time. C++ Language Pdf Notes – C++ Notes pdf (C &DS) CDS Notes B. This kind of problem is known as an optimization problem. Terlaky, A convergent criss-cross method, Math. The first, called sensitivity analysis (or postoptimality analysis) addresses the following question: having found an optimal solution to a given linear programming problem, how much can we change the data and have the current partition into basic and nonbasic variables remain optimal?. A constraint is one of the inequalities in a linear programming problem. A linear transformation may or may not be injective or surjective. ! Deployment Flowchart. Two or more products are usually produced using limited resources. Wiley, 1998. Other books on the subject can also be found (start browsing around QA 265 or T 57). How many of each birdhouse type should Sammy make to maximize profit? biYdhouses smolt bwåhouses (DD ILO) (08) Sammy should matt loq. This book is more valuable for historic purposes, as it was Dantzig's first book and the first account of the simplex method by its inventor. 0 United States License. This is primarily a class in the C programming language, and introduces the student to data structure design and implementation. (b) Express constraints in standard form. You can get the course notes in a single PDF or each topic in a separate web page: What is Linear Programming? [2016-09-07 Wed] (Ch. REMARK: Note that for a linear programming problem in standard form, the objective function is to be maximized, not minimized. Lecture notes 3: Solving linear and integer programs using the GNU linear programming kit Vincent Conitzer In this set of lecture notes, we will study how to solve linear and integer programs using standard solvers. 1 Special Matrices. /Java5/Notes interactively, thinking about and answering the question at the bottom of each page. © 2005-2020 PowerSchool LegalLegal. 10 x y The objective function has its optimal value at one of the vertices of the region determined by the constraints. Linear and Integer Programming Lecture Notes Marco Chiarandini June 18, 2015. Linear Programming Lemon and Orange Trees. It sequentially checks each element of the. pdf]]> bdti http://bitsavers. Linear Programming Overview. Welcome to the Mathematical Institute course management portal. A multiple objective linear program (MOLP) is a linear program with more than one objective function. This list gives you access to lecture notes in design theory, finite geometry and related areas of discrete mathematics on the Web. ! No general algorithm is known that allows to optimize a solution! by directly moving from a feasible solution to an improved ! feasible solution. Refer to AppendixAfor some notes on MATLAB compatibility. Linear Algebra Review and Reference ; Linear Algebra, Multivariable Calculus, and Modern Applications (Stanford Math 51 course text) Linear Algebra Friday Section [pdf (slides)] Lecture 3: 4/13: Weighted Least Squares. Set Up a Linear Program, Solver-Based. Step 2: Construct the region which satisfies the given inequalities. Throughout the text there are a lot of examples. Fitts Dept. Necessary optimality conditions expressed in terms of tightness of the follower’s constraints are used to fathom or simp. We’d have to know what kinds of questions can be transformed and how to transform. A procedure called the simplex method may be used to find the optimal solution to multivariable problems. Join 15 million students on StudyBlue to study better, together. Linear programming is a mathematical method technique for maximizing or minimizing a linear function of several variables. Notes on Linear Programming — Part III: Computational Algorithm of the Revised Simplex Method Dec 31, 1952 This report is part of the RAND Corporation research memorandum series. tech 1st-year Object-Oriented Programming Notes, you can get the complete Study Material in Single Download Link. Very helpful notes for the students of 2nd year to prepare their paper of Maths according to syllabus given by Federal Board of …. Buy Linear Programming And Game Theory by Dipak Chatterjee PDF Online. NONLINEAR PROGRAMMING min x∈X f(x), where • f: n → is a continuous (and usually differ-entiable) function of n variables • X = nor X is a subset of with a “continu-ous” character. 0 Linear Programming We start our studies of optimization methods with linear programming. For example, given a matrix A\in {\mathbb R}^{n\times m} and vectors b\in {\mathbb R}^ n , c\in {\mathbb R}^ m , find. Let x be a vertex of the feasible set. Lecture Notes on Linear Algebra (PDF 220P) This book covers the following topics: Brief introduction to Logic and Sets, Brief introduction to Proofs, Basic Linear Algebra, Eigenvalues and Eigenvectors, Vector Spaces. As indicated by the Table of Contents, the notes cover traditional, introductory. x) Thinking in C++, Second Edition, Vol. Linear Programming Notes The region below was formed by graphing several inequalities. It is very complex and requires an extraordinary skill with numbers. Use CUDA C++ instead of CUDA C to clarify that CUDA C++ is a C++ language extension not a C language. This course illustrates its relationship with economic theory and decision sciences. 3) Knapsack problems: handout, slides (I. If one of the programs is infeasible, neither has an optimum. If you want to read more about linear programming, some good references are [6, 1]. Definition of linear equation A linear equation in one variable x is an equation that can be written in the standard form ax+b = 0, where a and b are real numbers with a 6= 0. Find materials for this course in the pages linked along the left. original example given by the inventor of the theory, Dantzig. It is used by the pure mathematician and by the mathematically trained scien-tists of all disciplines. For that purpose, in (1. What follows were my lecture notes for Math 3311: Introduction to Numerical Meth-ods, taught at the Hong Kong University of Science and Technology. Please sign up to review new features, functionality and page designs. elements with linear shape functions produce exact nodal values if the sought solution is quadratic. Please sign up to review new features, functionality and page designs. Solver-Based Linear Programming. Proofs and discussion are mostly omitted. PDF | This book consists of definitions, theories and problems related to linear programming. The Java byte-code compiler translates a Java source file into machine-independent byte code. (Minimization problems will be discussed in Sections 9. Net, C, C++, PHP. Logistic Regression. Linear and Integer Programming Lecture Notes Marco Chiarandini June 18, 2015. The branch and bound method: A simple case: The knapsack problem. It mak es sense that y ou can pro duce co ns in only whole n um b er units. It aims at complementing the more practically oriented books in this field. NCERT Solutions for Class 12 Maths Chapter 12 Linear Programming. So when I read a theorem, I would put down the book and try out. De nition 12. 10 x y The objective function has its optimal value at one of the vertices of the region determined by the constraints. Full PDF Download Notification Kindly Pay Rs25 for this full PDF through any UPI on [email protected] as contribution. the case of both linear and nonlinear functions. Download link (first discovered through open text book blog). In this section I will explain all you need to know about linear programming to pass your maths exam. Clearly a minimum-cost flow can be. goal programming and multiple objective optimization. In general, dynamic linear programming problems are characterized by large "sparse" matrices (i. Setting x 1, x 2, and x 3 to 0, we can read o the values for the other variables: w 1 = 7, w 2 = 3, etc. download free lecture notes slides ppt pdf ebooks This Blog contains a huge collection of various lectures notes, slides, ebooks in ppt, pdf and html format in all subjects. Linear programming I Definition: If the minimized (or maximized) function and the constraints are all in linear form a 1x 1 + a 2x 2 + ··· + a nx n + b. Karmarkar invented his famous algorithm for Linear Programming) became one of the dominating elds, or even the dominating eld, of theoretical and computational activity in Convex Optimization. linear_programming_notes. canonical representation 10 6. First, in Section 1 we will explore simple prop-erties, basic de nitions and theories of linear programs. Candidates who are pursuing in Class 12 are advised to revise the notes from this post. homogeneous linear programs and transposition— duality theorems 20 9. com/components/bdti/2001_Buyers_Guide_to_DSP_Processors. These feasible regions may be bounded, unbounded or the empty set. Linear programming finds the least expensive way to meet given needs with available resources. Non-refrigerated. Belinfante presented in class. Using the Simplex Method to Solve Linear Programming Maximization Problems J. Computer Solutions of Linear Programs B29 Using Linear Programming Models for Decision Making B32 Before studying this supplement you should know or, if necessary, review 1. 727273 means that if you could increase the first resource from 16 units to 17 units, you would get an additional profit of about $0. Linear Search. Logistic Regression. So, go ahead and check the Important Notes for Class 12 Maths Linear Programming Problem. These lecture notes are based on the middle convention: that the N-point DFT is undened except for k 2f0;:::;N 1g. Topics in our Operational Research Notes PDF. Reading: Bazaraa et al. The abstract below describes the content of the notes written so far; before delving into these notes, we first sketch some thoughts about the class and the book. Linear Programming. First, in Section 1 we will explore simple prop-erties, basic de nitions and theories of linear programs. Linear Programming: Introduction to Linear Programming and the Simplex Algorithm; Megiddo's linear time algorithm; Linear programming and optimization problems (with Java applet) 16. Theorem 1 If a linear programming problem has a solution, then it. C++ Language Pdf Notes – C++ Notes pdf (C &DS) CDS Notes B. Definition of linear equation A linear equation in one variable x is an equation that can be written in the standard form ax+b = 0, where a and b are real numbers with a 6= 0. MATLAB FOR ENGINEERS - Lesson 14 (Airfoil Problem). Using the Simplex Method to Solve Linear Programming Maximization Problems J. Introduction to Algorithmic Trading Strategies Lecture 1 Overview of Algorithmic Trading linear programming (branch-and-bound, outer-approximation) 29. This site contains an old collection of practice dynamic programming problems and their animated solutions that I put together many years ago while serving as a TA for the undergraduate algorithms course at MIT. This list gives you access to lecture notes in design theory, finite geometry and related areas of discrete mathematics on the Web. To solve theLinear programming problem (LPP) using graphical method ( For 2 variables) 3. I found some answers to my question in answers to this post: Account Options Sign in. If our inequality had ≥ or ≤ we draw the bounding line as a solid. Dependent variables, on the left, are called basic variables. Lecture 13: Sensitivity Analysis Linear Programming 7 / 62. pdf ; 01-Graphing Systems of Inequalities. The MATLAB linear programming solver is called linprog and is included in the optimization toolbox. In this chapter, we will develop an understanding of the dual linear program. In a linear equation this unknown quantity will appear only as a multiple of x, and not as a function of x. Introduction in optimization and linear programming. Foundations and Extensions Series: International Series in Operations Research & Management Science. A set Kin Rn is called a cone if K Kfor every 0. The attendant can handle only 60 vehicles. The course covers Linear programming with applications to transportation, assignment and game problem. File has size bytes and takes minutes to re-compute. For any quarries, Disclaimer are requested to kindly contact us – [email protected] In an n dimensional space, whose points are described by variables x1, … , x n, we have a “feasible region” which is a “polytope” by which we mean a region whose boundaries are defined by linear constraints. Such prob-. Vasek Chvatal, Linear Programming, WH Freeman and Company. 2 History of Operations Research 1. Linear Programming Algorithms [Springer,2001],whichcanbefreelydownloaded(butnotlegallyprinted)fromthe author’swebsite. Advanced Engineering Mathematics by HK Dass is one of the popular and useful books in Mathematics for Engineering Students. General Notes: Linear programming is a recently devised technique for providing specific numerical solutions of problems which earlier could be solved only in vague qualitative terms by using the apparatus of the general theory of the firm. {"code":200,"message":"ok","data":{"html":". Graphical linear algebra is a work in progress, and there are many open research threads. and combines these tools to make a new set of knowledge for decision making. C# Notes for Professionals - Compiled from StackOverflow documentation (3. The branch and bound method: A simple case: The knapsack problem. 3 Write the constraints as a system of inequalities. Linear Programming Key Terms, Concepts & Methods for the User 1. A linear programming problem is a mathematical programming problem in which the function f is linear and the set S is described using linear inequalities or equations. Linear programming is a mathematical method technique for maximizing or minimizing a linear function of several variables. A good programming language helps the programmer by allowing them to talk about the actions that the computer has to perform on a higher level. Cell F4 is our equation P which has to be. Tilt Sensing Using a Three-Axis Accelerometer, Rev. BlendingProblemExample. Buy pre-programmed parts through Linear Express® for quantities of 1 to 499 units option 2B. Chapter 2: Linear Equations and Inequalities Lecture notes Math 1010 Ex. In general, dynamic linear programming problems are characterized by large "sparse" matrices (i. C# Notes for Professionals - Compiled from StackOverflow documentation (3. Optimization of linear functions with linear constraints is the topic of Chapter 1, linear programming. bdti:: 2001 Buyers Guide to DSP Processors. Simulating the Generalized Gibbs Ensemble (GGE): A Hilbert space Monte Carlo approach. These notes are not meant to replace. Part I is a self-contained introduction to linear programming, a key component of optimization theory. Linear Programming. Graphical method of linear programming is used to solve problems by finding the highest or lowest point of intersection between the objective function line and the feasible region on a graph. Linear programming is closely related to linear algebra; the most noticeable difference is that linear programming often uses inequalities in the problem statement rather than equalities. We will now discuss how to find solutions to a linear programming problem. Admin | 30-Jan-2017 | C#, VB. Please feel free to use, edit and re-distribute these notes as you wish. This is currently a tentative listing of topics, in order. In order to illustrate some applicationsof linear programming,we will explain simpli ed \real-world" examples in Section 2. The objective and constraints in linear programming problems must be expressed in terms of linear equations or inequalities. 10 x y The objective function has its optimal value at one of the vertices of the region determined by the constraints. Material from our usual courses on linear algebra and differential equations have been combined into a single course (essentially, two half-semester courses) at the. Independent variables, on the right, are called nonbasic variables. These lecture notes are based on the middle convention: that the N-point DFT is undened except for k 2f0;:::;N 1g. 4 The linear system of equations 2x+ 3y= 5 and 3x+ 2y= 5 can be identified with the matrix " 2 3 : 5 3 2 : 5 #. CHAPTER 11: BASIC LINEAR PROGRAMMING CONCEPTS FOREST RESOURCE MANAGEMENT 205 a a i x i i n 0 1 + = 0 = ∑ Linear equations and inequalities are often written using summation notation, which makes it possible to write an equation in a much more compact form. Lesson Plans of Linear Programming; cost minimzation and non standarad Lp; lp; two varable three constraints; notes for 3 varaible; a good presentation on linear programming; the state of the art site for LP; linear progra in excel; a graphiacal calculator for linear progrmming; Archives. 4 GATE Total Information & Guidance. 3: Represent constraints by equations or inequalities, and by systems of equations and/!or inequalities, and interpret solutions as viable or non-viable options in a modeling context. Separable piece-wise linear convex function minimization problems, uses in curve fitting and linear parameter estimation. Define and discuss the linear programming technique, including assumptions of linear programming and accounting data used therein. a minimum-linear-cost uncapacitated network-flow problem in which node zero is the source from which the demands at the other nodes are satisfied. Draft from February 14, 2005 Preface The present book has been developed from course notes, continuously updated and used in optimization courses during the past several years. Linear programming mainly is used in macroeconomics, business management, maximizing revenue and minimizing the cost of production. 1986-01-01. These notes are provided here for the ease of the students to learn during the exams. The contribution margin is$3 for A and \$4 for B. So, go ahead and check the Important Notes for Class 12 Maths Linear Programming Problem. regpar can be used after an estimation command whose predicted values are interpreted as conditional proportions, such as logit, logistic, probit, or glm. Video 25 minutes 33 seconds. Linear Programming: Penn State Math 484 Lecture Notes Version 1. Simplex Method: It is one of the solution method used in linear programming problems that involves two variables or a large number of constraint. Please feel free to use, edit and re-distribute these notes as you wish. 5 y 0 4 16 20. Quantitative Techniques In Management By J. homogeneous linear programs and transposition— duality theorems 20 9. LINEAR PROGRAMMING – THE SIMPLEX METHOD (1) Problems involving both slack and surplus variables A linear programming model has to be extended to comply with the requirements of the simplex procedure, that is, 1. This process can be broken down into 7 simple steps explained below. The programming guide to the CUDA model and interface. Chv´atal [2]. the use of the scientific method for decision making. (b) Express constraints in standard form. [PDF] Operational Research Notes Lecture FREE Download. CliffsNotes is the original (and most widely imitated) study guide. In the first form, the objective is to maximize, the material constraints are all of the form: "linear expression ≤ constant" (a i ·x ≤ b i), and all variables are constrained to be non. simplex method; terminal possibilities 17 8. Use of These Notes. The main topics are: formulations, notes in convex analysis, geometry of LP, simplex method, duality, ellipsoid algorithm,. Find materials for this course in the pages linked along the left. 2 ISAM 361 10. The transfer function is a property of a system itself,independent of the magnitude. Ok New York University July 8, 2007. Linear Programming Linear programmingis one of the powerful tools that one can employ for solving optimization problems. Feasible solutions Theorem 9. Logistic Regression.
|
2020-05-26 12:37:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3824312388896942, "perplexity": 999.1189192905816}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347390758.21/warc/CC-MAIN-20200526112939-20200526142939-00123.warc.gz"}
|
https://www.semanticscholar.org/paper/The-endpoint-distribution-of-directed-polymers-Bates-Chatterjee/fe1c2749a253bcfb84febbb3a2ee40abd50fcd2b
|
The endpoint distribution of directed polymers
@article{Bates2016TheED,
title={The endpoint distribution of directed polymers},
author={Erik Bates and Sourav Chatterjee},
journal={arXiv: Probability},
year={2016}
}
• Published 11 December 2016
• Mathematics
• arXiv: Probability
Probabilistic models of directed polymers in random environment have received considerable attention in recent years. Much of this attention has focused on integrable models. In this paper, we introduce some new computational tools that do not require integrability. We begin by defining a new kind of abstract limit object, called "partitioned subprobability measure", to describe the limits of endpoint distributions of directed polymers. Inspired by a recent work of Mukherjee and Varadhan on…
Localization of directed polymers in continuous space
• Mathematics
Electronic Journal of Probability
• 2020
Random polymers on the complete graph
• Mathematics
Bernoulli
• 2019
Consider directed polymers in a random environment on the complete graph of size $N$. This model can be formulated as a product of i.i.d. $N\times N$ random matrices and its large time asymptotics is
Localization of directed polymers with general reference walk
Directed polymers in random environment have usually been constructed with a simple random walk on the integer lattice. It has been observed before that several standard results for this model
The central limit theorem for directed polymers in weak disorder, revisited
We give a new proof for the central limit theorem in probability for the directed polymer model in a bounded environment with bond disorder in the interior of the weak disorder phase. In the same
A PDE hierarchy for directed polymers in random environments
• Mathematics
Nonlinearity
• 2021
For a Brownian directed polymer in a Gaussian random environment, with q(t, ⋅) denoting the quenched endpoint density and Qn(t,x1,…,xn)=E[q(t,x1)…q(t,xn)], we derive a hierarchical PDE system
Dynamic polymers: invariant measures and ordering by noise
• Mathematics
Probability Theory and Related Fields
• 2021
We develop a dynamical approach to infinite volume directed polymer measures in random environments. We define polymer dynamics in 1+1 dimension as a stochastic gradient flow on polymers pinned at
A PDE HIERARCHY FOR DIRECTED POLYMERS IN RANDOM ENVIRONMENTS
For a Brownian directed polymer in a Gaussian random environment, with q(t, ⋅) denoting the quenched endpoint density and Qn(t, x1, . . . , xn) = E[q(t, x1) . . . q(t, xn)], we derive a hierarchical
Localization of the continuum directed random polymer
• Mathematics
• 2022
. We consider the continuum directed random polymer (CDRP) model that arises as a scaling limit from 1 + 1 dimensional directed polymers in the intermediate disorder regime. We show that for a
Renormalizing Gaussian multiplicative chaos in the Wiener space
• Mathematics
• 2020
We develop a general approach for investigating geometric properties of Gaussian multiplicative chaos (GMC) in an infinite dimensional set up. The notion of a GMC, which is defined by tilting a
References
SHOWING 1-10 OF 99 REFERENCES
Brownian Directed Polymers in Random Environment
• Mathematics
• 2005
We study the thermodynamics of a continuous model of directed polymers in random environment. The environment is given by a space-time Poisson point process, whereas the polymer is defined in terms
Directed polymers in a random environment: path localization and strong disorder
• Mathematics
• 2003
We consider directed polymers in random environment. Under some mild assumptions on the environment, we prove here: (i) equivalence between the decay rate of the partition function and some natural
Ratios of partition functions for the log-gamma polymer
• Mathematics
• 2015
We introduce a random walk in random environment associated to an underlying directed polymer model in 1 + 1 dimensions. This walk is the positive temperature counterpart of the competition in-
Mean-field interaction of Brownian occupation measures, I: uniform tube property of the Coulomb functional
• Mathematics
• 2015
We study the transformed path measure arising from the self-interaction of a three-dimensional Brownian motion via an exponential tilt with the Coulomb energy of the occupation measures of the motion
Probabilistic analysis of directed polymers in a random environment: a review
• Materials Science
• 2004
Directed polymers in random environment can be thought of as a model of statistical mechanics in which paths of stochastic processes interact with a quenched disorder (impurities), depending on both
Mean‐Field Interaction of Brownian Occupation Measures II: A Rigorous Construction of the Pekar Process
• Mathematics
• 2015
We consider mean‐field interactions corresponding to Gibbs measures on interacting Brownian paths in three dimensions. The interaction is self‐attractive and is given by a singular Coulomb potential.
The Kardar-Parisi-Zhang Equation and Universality Class
Brownian motion is a continuum scaling limit for a wide class of random processes, and there has been great success in developing a theory for its properties (such as distribution functions or
Some new results on Brownian Directed Polymers in Random Environment
• Mathematics
• 2004
We prove some new results on Brownian directed polymers in random environment recently introduced by the authors. The directed polymer in this model is a d-dimensional Brownian motion (up to finite
A remark on diffusion of directed polymers in random environments
• Mathematics
• 1996
We consider a system of random walks or directed polymers interacting with an environment which is random in space and time. Under minimal assumptions on the distribution of the environment, we prove
|
2022-08-19 06:20:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8016071319580078, "perplexity": 1046.0768143026953}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573623.4/warc/CC-MAIN-20220819035957-20220819065957-00116.warc.gz"}
|
https://epoch.aisingapore.org/community/sg-nlp/senticgcn-trained-model-not-appearing/
|
SenticGCN Trained M...
Clear all
# Question SenticGCN Trained Model not Appearing
7 Posts
2 Users
0 Likes
101 Views
Posts: 4
Member
Topic starter
(@yen)
Active Member
Joined: 2 months ago
Hi, I've been training my SenticGCN model, and while my embed_models and tokenizers folders are populated, it seems like my models folder has not been populated at all...
Here is my sentic_gcn_config.json
From this, my model is supposed to be saved and updated into ./models/senticgcn/
However, when I enter that directory, it's empty, with no models saved at all
This image, for example, shows that the model is supposed to be saved to the filepath, but when I go to the filepath, it's empty.
I've run the code for a few hours already, and the folder is still not populated. Is there something I'm doing wrong, or something I'm not aware of? 🙁
Topic Tags
6 Replies
Posts: 35
AISG Staff
(@raymond_aisg)
Eminent Member
Joined: 6 months ago
Hi,
From your screenshot, I'm assuming that you are using Windows to run the training script.
Could you kindly try replacing the save_model_path config with an absolute path to see if the weights could be saved? (e.g. C:\Users\user\Desktop\AISG\models\senticgcn\).
Please also note that the default config is based on Unix which uses backslash whereas Windows uses forward slash.
Hope this helps.
Posts: 4
Member
Topic starter
(@yen)
Active Member
Joined: 2 months ago
Hi, thank you for your suggestion! I tried using an absolute file path, but the model still isn't being saved. The tokenizer, and the embedding model, however, are being saved (the folders were updated); it's just the model that isn't. Has anyone else encountered this problem?
AISG Staff
(@raymond_aisg)
Joined: 6 months ago
Eminent Member
Posts: 35
@yen Hi,
Thanks for trying out the suggestions.
The tokenizer and embedding models are pre-trained models that are directly downloaded from the Huggingface hub and are part of the setup required for training.
Posts: 4
Member
Topic starter
(@yen)
Active Member
Joined: 2 months ago
Hi, I spent a little bit of time going through the module, and I think there's probably nothing wrong. Here's my take (see the code explanation below):
Basically, the model apparently does NOT save the code in your folder UNTIL the end of the run, unlike for tokenizer and embed_model, which are saved at the start.
So, in conclusion, my laptop is probably just slow (and I need to rerun the training :/)
Hope this helps anyone else who may be facing this problem (and who can't sleep at 3am at night wondering whether your model will be saved after 6 hours of training)!
# How the model is saved eventually
def _save_model(self):
# Other stuff
self.model.save_pretrained(self.config.save_model_path)
# self._save_model() is called in :
class SenticGCNTrainer:
def train(self):
repeat_result = self._train()
# Other important stuff
self._save_model() # Only saved eventually, after full training is called
# Other important stuff
def _train(self,
BucketIterator],
BucketIterator]) -> Dict[str,
Dict[str,
Union[int,
float]]]:
# Setting up variables
for i in range(self.config.repeats):
repeat_tmpdir = self.temp_dir.joinpath(f"repeat{i + 1}") # This is crucial, as it is where the models are actually saved before the code is complete
self._reset_params()
# Calls self._train_loop. Critically, with directory as repeat_tmpdir
max_val_acc, max_val_f1, max_val_epoch = self._train_loop(
)
# Record repeat run results
# Overwrite global stats
return repeat_result
# Setting up of some config variables
for epoch in range(self.config.epochs):
global_step += 1
self.model.train() # To check what this is
# Other config steps to get stuff
if val_acc > max_val_acc:
# Saving variables
self.model.save_pretrained(tmpdir) # Saved to tmpdir, NOT save_model_path
# Code for early stopping
return max_val_acc, max_val_f1, max_val_epoch
# The big question now, is what is repeat_tmpdir?
with tempfile.TemporaryDirectory() as tmpdir:
self.temp_dir = pathlib.Path(tmpdir)
"""
Doing a little more tracing...
prefix = "tmp"
suffix = ""
dir:
"""
# Calls a dir from here:
def _candidate_tempdir_list():
"""Generate a list of candidate temporary directories which
_get_default_tempdir will try."""
dirlist = []
# First, try the environment.
for envname in 'TMPDIR', 'TEMP', 'TMP':
dirname = _os.getenv(envname)
if dirname: dirlist.append(dirname)
# Failing that, try OS-specific locations.
if _os.name == 'nt':
dirlist.extend([_os.path.expanduser(r'~\AppData\Local\Temp'),
_os.path.expandvars(r'%SYSTEMROOT%\Temp'),
r'c:\temp', r'c:\tmp', r'\temp', r'\tmp'])
else:
dirlist.extend(['/tmp', '/var/tmp', '/usr/tmp'])
# As a last resort, the current directory.
try:
dirlist.append(_os.getcwd())
except (AttributeError, OSError):
dirlist.append(_os.curdir)
return dirlist
"""After getting the dirlist, it creates a binary file by getting the absolute path of the directory and writing into a binary file."""
AISG Staff
(@raymond_aisg)
Joined: 6 months ago
Eminent Member
Posts: 35
@yen Hi,
Your observation is correct, as stated in the paper, the full train loop is run 10 times and the best model out of the 10 runs is saved at the end. Intermediate model weights are saved in a temp folder between train runs as indicated here,
For a quick test run to check if it's possible to save the final model, first reduce the number of repeats to 1 for a single run,
Next reduce the epoch to a small figure like 1 or 2 here,
Run the training script again and you should be able to quickly observe if the model will save to the folder indicated in the save_model_path config.
Lastly, for the quick test above, could you try running the training script in debug mode with a breakpoint at the following line and observe if the script will trigger,
As indicated in the model card, our training on an A100 40GB GPU with the SemEval14/15/16 datasets takes only around an hour. If you are training with CPU, please ensure that you have enough system RAM and hard disk resources available throughout the training duration.
Hope this helps.
Posts: 4
Member
Topic starter
(@yen)
Active Member
Joined: 2 months ago
(To AISG staff, please confirm if my suspicions are correct, thanks!)
Share:
|
2023-04-01 21:22:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22884546220302582, "perplexity": 6040.5666863586}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950247.65/warc/CC-MAIN-20230401191131-20230401221131-00721.warc.gz"}
|
https://en.wikipedia.org/wiki/Chessboard_detection
|
# Chessboard detection
Chessboards arise frequently in computer vision theory and practice because their highly structured geometry is well-suited for algorithmic detection and processing. The appearance of chessboards in computer vision can be divided into two main areas: camera calibration and feature extraction. This article provides a unified discussion of the role that chessboards play in the canonical methods from these two areas, including references to the seminal literature, examples, and pointers to software implementations.
## Chessboard camera calibration
A classical problem in computer vision is three-dimensional (3D) reconstruction, where one seeks to infer 3D structure about a scene from two-dimensional (2D) images of it.[1] Practical cameras are complex devices, and photogrammetry is needed to model the relationship between image sensor measurements and the 3D world. In the standard pinhole camera model, one models the relationship between world coordinates ${\displaystyle \mathbf {X} }$ and image (pixel) coordinates ${\displaystyle \mathbf {x} }$ via the perspective transformation
${\displaystyle \mathbf {x} =K{\begin{bmatrix}R&t\end{bmatrix}}\mathbf {X} \quad ,\quad \mathbf {x} \in \mathbb {P} ^{2}\quad ,\quad \mathbf {X} \in \mathbb {P} ^{3},}$
where ${\displaystyle \mathbb {P} ^{n}}$ is the projective space of dimension ${\displaystyle n}$.
In this setting, camera calibration is the process of estimating the parameters of the ${\displaystyle 3\times 4}$ matrix ${\displaystyle M=K{\begin{bmatrix}R&t\end{bmatrix}}}$ of the perspective model. Camera calibration is an important step in the computer vision pipeline because many subsequent algorithms require knowledge of camera parameters as input.[2] Chessboards are often used during camera calibration because they are simple to construct, and their planar grid structure defines many natural interest points in an image. The following two methods are classic calibration techniques that often employ chessboards.
### Direct linear transformation
Direct linear transformation (DLT) calibration uses correspondences between world points and camera image points to estimate camera parameters. In particular, DLT calibration exploits the fact that the perspective pinhole camera model defines a set of similarity relations that can be solved via the direct linear transformation algorithm.[3] To employ this approach, one requires accurate coordinates of a non-degenerate set of points in 3D space. A common way to achieve this is to construct a camera calibration rig (example below) built from three mutually perpendicular chessboards. Since the corners of each square are equidistant, it is straightforward to compute the 3D coordinates of each corner given the width of each square. The advantage of DLT calibration is its simplicity; arbitrary cameras can be calibrated by solving a single homogeneous linear system. However, the practical use of DLT calibration is limited by the necessity of a 3D calibration rig and the fact that extremely accurate 3D coordinates are required to avoid numerical instability.[1]
Example: calibration rig
3D calibration rig built from three mutually perpendicular chessboards
### Multiplane calibration
Multiplane calibration is a variant of camera auto-calibration that allows one to compute the parameters of a camera from two or more views of a planar surface. The seminal work in multiplane calibration is due to Zhang.[4] Zhang's method calibrates cameras by solving a particular homogeneous linear system that captures the homographic relationships between multiple perspective views of the same plane. This multiview approach is popular because, in practice, it is more natural to capture multiple views of a single planar surface - like a chessboard - than to construct a precise 3D calibration rig, as required by DLT calibration. The following figures demonstrate a practical application of multiplane camera calibration from multiple views of a chessboard.[5]
Example: multiplane calibration
Multiple views of a chessboard for multiplane calibration
Reconstructed orientations
(camera-centric coordinates)
Reconstructed orientations
(world-centric coordinates)
## Chessboard feature extraction
The second context in which chessboards arise in computer vision is to demonstrate several canonical feature extraction algorithms. In feature extraction, one seeks to identify image interest points, which summarize the semantic content of an image and, hence, offer a reduced dimensionality representation of one's data.[2] Chessboards - in particular - are often used to demonstrate feature extraction algorithms because their regular geometry naturally exhibits local image features like edges, lines, and corners. The following sections demonstrate the application of common feature extraction algorithms to a chessboard image.
### Corners
Corners are a natural local image feature exploited in many computer vision systems. Loosely speaking, one can define a corner as the intersection of two edges. A variety of corner detection algorithms exist that formalize this notion into concrete algorithms. Corners are a useful image feature because they are necessarily distinct from their neighboring pixels. The Harris corner detector is a standard algorithm for corner detection in computer vision.[6] The algorithm works by analyzing the eigenvalues of the 2D discrete structure tensor matrix at each image pixel and flagging a pixel as a corner when the eigenvalues of its structure tensor are sufficiently large. Intuitively, the eigenvalues of the structure tensor matrix associated with a given pixel describe the gradient strength in a neighborhood of that pixel. As such, a structure tensor matrix with large eigenvalues corresponds to an image neighborhood with large gradients in orthogonal directions - i.e., a corner.
A chessboard contains natural corners at the boundaries between board squares, so one would expect corner detection algorithms to successfully detect them in practice. Indeed, the following figure demonstrates Harris corner detection applied to a perspective-transformed chessboard image. Clearly, the Harris detector is able to accurately detect the corners of the board.
Example: corner detection
Perspective-transformed chessboard image
### Lines
Lines are another natural local image feature exploited in many computer vision systems. Geometrically, the set of all lines in a 2D image can be parametrized by polar coordinates ${\displaystyle (\rho ,\theta )}$ describing the distance and angle, respectively, of their normal vectors with respect to the origin. The discrete Hough transform exploits this idea by transforming a spatial image into a matrix in ${\displaystyle (\rho ,\theta )}$-space whose ${\displaystyle (i,j)}$-th entry counts the number of image edge points that lie on the line parametrized by ${\displaystyle (\rho _{i},\theta _{j})}$.[7][8][9] As such, one can detect lines in an image by simply searching for local maxima of its discrete Hough transform.
The grid structure of a chessboard naturally defines two sets of parallel lines in an image of it. Therefore, one expects that line detection algorithms should successfully detect these lines in practice. Indeed, the following figure demonstrates Hough transform-based line detection applied to a perspective-transformed chessboard image. Clearly, the Hough transform is able to accurately detect the lines induced by the board squares.
Example: line detection
Perspective-transformed chessboard image
Canny edge detector applied to chessboard image
Hough transform of edge image with 19 largest local maxima denoted
Lines parameterized by Hough transform local maxima
The following MATLAB code generates the above images using the Image Processing Toolbox:
% Load image
% Compute edge image
BW = edge(I, 'canny');
% Compute Hough transform
[H theta rho] = hough(BW);
% Find local maxima of Hough transform
numpeaks = 19;
thresh = ceil(0.1 * max(H(:)));
P = houghpeaks(H, numpeaks, 'threshold', thresh);
% Extract image lines
lines = houghlines(BW, theta, rho, P, 'FillGap', 50, 'MinLength', 60);
% --------------------------------------------------------------------------
% Display results
% --------------------------------------------------------------------------
% Original image
figure; imshow(I);
% Edge image
figure; imshow(BW);
% Hough transform
figure; image(theta, rho, imadjust(mat2gray(H)), 'CDataMapping', 'scaled');
hold on; colormap(gray(256));
plot(theta(P(:, 2)), rho(P(:, 1)), 'o', 'color', 'r');
% Detected lines
figure; imshow(I); hold on; n = size(I, 2);
for k = 1:length(lines)
% Overlay kth line
x = [lines(k).point1(1) lines(k).point2(1)];
y = [lines(k).point1(2) lines(k).point2(2)];
line = @(z) ((y(2) - y(1)) / (x(2) - x(1))) * (z - x(1)) + y(1);
plot([1 n], line([1 n]), 'Color', 'r');
end
1. M. Rufli, D. Scaramuzza, and R. Siegwart. "Automatic detection of checkerboards on blurred and distorted images." IEEE/RSJ International Conference on Intelligent Robots and Systems. (2008).
2. Z. Weixing, et al. "A fast and accurate algorithm for chessboard corner detection." 2nd International Congress on Image and Signal Processing. (2009).
3. A. De la Escalera and J. Armingol. "Automatic chessboard detection for intrinsic and extrinsic camera parameter calibration." Sensors. vol. 10(3), pp. 2027–2044 (2010).
4. S. Bennett and J. Lasenby. "ChESS - quick and robust detection of chess-board features." Computer Vision and Image Understanding. vol. 118, pp. 197–210 (2014).
5. J. Ha. "Automatic detection of chessboard and its applications." Opt. Eng. vol. 48(6) (2009).
6. F. Zhao, et al. "An automated x-corner detection algorithm (axda)." Journal of Software. vol. 6(5), pp. 791–797 (2011).
7. S. Arca, E. Casiraghi, and G. Lombardi. "Corner localization in chessboards for camera calibration." IADAT. (2005).
8. X. Hu, P. Du, and Y. Zhou. "Automatic corner detection of chess board for medical endoscopy camera calibration." Proceedings of the 10th International Conference on Virtual Reality Continuum and Its Applications in Industry. ACM. (2011).
9. S. Malek, et al. "Tracking chessboard corners using projective transformation for augmented reality. International Conference on Communications, Computing and Control Applications. (2011).
## References
1. ^ a b D. Forsyth and J. Ponce. Computer Vision: A Modern Approach. Prentice Hall. (2002). ISBN 978-0262061582.
2. ^ a b R. Szeliski. Computer Vision: Algorithms and Applications. Springer Science and Business Media. (2010). ISBN 978-1848829350.
3. ^ O. Faugeras. Three-dimensional Computer Vision. MIT Press. (1993). ISBN 978-0262061582.
4. ^ Z. Zhang. "A flexible new technique for camera calibration." IEEE Transactions on Pattern Analysis and Machine Intelligence. vol. 22(11), pp. 1330-1334 (2000).
5. ^ J. Bouguet, "Camera calibration toolbox for MATLAB". http://www.vision.caltech.edu/bouguetj/calib_doc/. (2013).
6. ^ C. Harris and M. Stephens. "A combined corner and edge detector." Proceedings of the 4th Alvey Vision Conference. pp. 147-151 (1988).
7. ^ L. Shapiro and G. Stockman. Computer Vision. Prentice-Hall, Inc. (2001). ISBN 978-0130307965
8. ^ R. Duda and P. Hart. "Use of the Hough transformation to detect lines and curves in pictures," Comm. ACM, vol. 15, pp. 11-15 (1972).
9. ^ P. Hough. "Machine analysis of bubble chamber pictures." Proc. Int. Conf. High Energy Accelerators and Instrumentation. (1959).
|
2022-08-10 05:39:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 11, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46862149238586426, "perplexity": 2866.8894870018407}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571147.84/warc/CC-MAIN-20220810040253-20220810070253-00692.warc.gz"}
|
https://forum.azimuthproject.org/plugin/ViewComment/22542
|
A parameter \$$\beta\$$ is used to capture the fluid-turning cost.
Here is the explanation.
The cost within the pipes of getting fluid from the root to \$$x\$$ has already been accounted for above.
The cumulative cost of all the turns along the path to \$$x\$$ is defined by a factor \$$m_\beta(x)\$$.
\$$m_\beta(x)\$$ is defined as the product of a sequence of values \$$f_\beta(y)\$$ for all the nodes \$$y\$$ along the path from the root to \$$x\$$, where \$$f_\beta(y)\$$ is a factor expressing the turning cost at \$$y\$$.
Let \$$u\$$ be the unit vector in the direction of the pipe leading into \$$y\$$, and \$$v\$$ be the unit vector in the direction of the pipe leading out of \$$y\$$ along the path.
Then \$$f_\beta(y)\$$ is defined to be \$$|u \cdot v|^{-\beta}\$$ if \$$u \cdot v\$$ is greater than zero, else infinity.
(We can interpret this later, after this exposition is finished.)
|
2021-04-23 04:35:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.759604811668396, "perplexity": 888.7559112760675}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039601956.95/warc/CC-MAIN-20210423041014-20210423071014-00049.warc.gz"}
|
https://www.beatthegmat.com/difficult-math-question-12-t473.html
|
• FREE GMAT Exam
Know how you'd score today for $0 Available with Beat the GMAT members only code • Award-winning private GMAT tutoring Register now and save up to$200
Available with Beat the GMAT members only code
• Magoosh
Study with Magoosh GMAT prep
Available with Beat the GMAT members only code
• 5-Day Free Trial
5-day free, full-access trial TTP Quant
Available with Beat the GMAT members only code
• Free Practice Test & Review
How would you score if you took the GMAT
Available with Beat the GMAT members only code
• 5 Day FREE Trial
Study Smarter, Not Harder
Available with Beat the GMAT members only code
• Get 300+ Practice Questions
Available with Beat the GMAT members only code
• Free Trial & Practice Exam
BEAT THE GMAT EXCLUSIVE
Available with Beat the GMAT members only code
• 1 Hour Free
BEAT THE GMAT EXCLUSIVE
Available with Beat the GMAT members only code
• Free Veritas GMAT Class
Experience Lesson 1 Live Free
Available with Beat the GMAT members only code
# Forum: Difficult Math Question #12
OA after a few people answer..
The average of temperatures at noontime from Monday to Friday is 50; the lowest one is 45, what is the possible maximum range of the temperatures?
A) 20
B) 25
C) 40
D) 45
E) 75
Junior | Next Rank: 30 Posts
Joined
24 Jun 2006
Posted:
25 messages
5x50 = 45x4 + x
x = 70
Hence range is 70-45=25
Master | Next Rank: 500 Posts
Joined
27 Jun 2006
Posted:
354 messages
Followed by:
5 members
11
OA:
The answer 25 doesn't refer to a temperature, but rather to a range of temperatures.
The average of the 5 temps is: (a + b + c + d + e) / 5 = 50
One of these temps is 45: (a + b + c + d + 45) / 5 = 50
Solving for the variables: a + b + c + d = 205
In order to find the greatest range of temps, we minimize all temps but one. Remember, though, that 45 is the lowest temp possible, so: 45 + 45 + 45 + d = 205
Solving for the variable: d = 70
70 - 45 = 25
### Top First Responders*
1 Jay@ManhattanReview 76 first replies
2 Brent@GMATPrepNow 61 first replies
3 fskilnik 60 first replies
4 GMATGuruNY 32 first replies
5 Rich.C@EMPOWERgma... 10 first replies
* Only counts replies to topics started in last 30 days
See More Top Beat The GMAT Members
### Most Active Experts
1 fskilnik
GMAT Teacher
206 posts
2 Brent@GMATPrepNow
GMAT Prep Now Teacher
148 posts
3 Scott@TargetTestPrep
Target Test Prep
103 posts
4 Jay@ManhattanReview
Manhattan Review
96 posts
5 Max@Math Revolution
Math Revolution
92 posts
See More Top Beat The GMAT Experts
|
2018-09-26 07:23:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3495523929595947, "perplexity": 14227.717655201299}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267163704.93/warc/CC-MAIN-20180926061824-20180926082224-00193.warc.gz"}
|
http://math.stackexchange.com/questions/325821/which-categories-correspond-to-the-untyped-and-typed-lambda-calculus/325910
|
# Which categories correspond to the untyped and typed lambda calculus?
Simply typed lambda calculus is the internal language of Cartesian Closed Categories.
What category has its internal language the typed lambda calculus? And the untyped lambda calculus? Can we in fact consider it to be exactly the same as the simply typed calculus with the type constructor left implied?
-
|
2016-05-03 06:54:54
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8527560830116272, "perplexity": 366.8574330093891}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860118807.54/warc/CC-MAIN-20160428161518-00135-ip-10-239-7-51.ec2.internal.warc.gz"}
|
https://physics.stackexchange.com/questions/208333/how-could-memory-be-organized-in-quantum-computers
|
How could memory be organized in quantum computers?
A classical computer has a memory made up of bits, where each bit represents either a one or a zero and its implemented by two-state transistor logic.
However, a quantum computer maintains a sequence of qubits. A single qubit can represent a one, a zero, or any quantum superposition of those two qubit states; a pair of qubits can be in any quantum superposition of 4 states, and three qubits in any superposition of 8 states. In general, a quantum computer with $n$ qubits can be in an arbitrary superposition of up to $2^n$ different states simultaneously (this compares to a normal computer that can only be in one of these $2^n$ states at any one time).
1. How could memory be organized(implemented) in quantum computers?
2. How are these complex (superposition) states saved?
The major limitation of quantum information is that anything which interacts differently with a qubit's $|0\rangle$ vs. $|1\rangle$ state is going to entangle with that qubit: but when an uncontrolled/unmeasurable "environment" entangles with our nicely controlled, measurable "system", one of the nasty things that happens is that entanglements in our system become less "quantum" and more "classical" (they show less interference patterns etc.) to us. So the problem is that nothing can really connect to the memory-bank if it interacts differently with the two states, without destroying our cool properties.
|
2021-07-26 13:24:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7183874249458313, "perplexity": 340.02791724704207}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152129.33/warc/CC-MAIN-20210726120442-20210726150442-00397.warc.gz"}
|
https://ncatlab.org/nlab/show/Richard+Feynman
|
# nLab Richard Feynman
## Quotes
### On Doubt
From Feynman 1955:
Of all its $[$science’s$]$ many values, the greatest must be the freedom to doubt
We have found it of paramount importance that in order to progress we must recognize the ignorance and leave room for doubt. Scientific knowledge is a body of statements of varying degree of certainty - some most unsure, some nearly sure, none absolutely certain.
Now, we scientists are used to this, and we take it for granted that it is perfectly consistent to be unsure – that it is possible to live and not know. But I don’t know whether everyone realizes that this is true.
Our freedom to doubt was born of a struggle against authority in the early days of science. It was a very deep and strong struggle. Permit us to question – to doubt, that’s all – not to be sure.
And I think it is important that we do not forget the importance of this struggle and thus perhaps lose what we have gained. Here lies a responsibility to society.
This is not a new idea; this is the idea of the age of reason. $[\ldots]$ that we should arrange a system by which new ideas could be developed, tried out, tossed out, more new ideas brought in; a trial and error system. $[\ldots]$ the openness of the possibilities was an opportunity, and that doubt and discussion were essential to progress into the unknown.
if we suppress all discussion, all criticism, saying “This is it, boys, man is saved!” $[$ we would $]$ doom man for a long time to the chains of authority, confined to the limits of our present imagination. It has been done so many times before.
It is our responsibility as scientists, knowing $[\ldots]$ the great progress that is the fruit of freedom of thought, to proclaim the value of this freedom, to teach how doubt is not to be feared but welcomed and discussed, and to demand this freedom as our duty to all incoming generations.
Now the next subject $[\ldots]$ I really consider the most important and the most serious. And that has to do with the question of uncertainty and doubt.
|
2021-12-09 03:14:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 9, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46183478832244873, "perplexity": 1124.1846967471245}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363659.21/warc/CC-MAIN-20211209030858-20211209060858-00524.warc.gz"}
|
https://labs.tib.eu/arxiv/?author=Y.%20Zanevsky
|
• ### Technical Supplement to "Polarization Transfer Observables in Elastic Electron-Proton Scattering at Q$^2$ = 2.5, 5.2, 6.8, and 8.5 GeV$^2$"(1707.07750)
Sept. 12, 2018 hep-ex, nucl-ex
The GEp-III and GEp-2$\gamma$ experiments, carried out in Jefferson Lab's Hall C from 2007-2008, consisted of measurements of polarization transfer in elastic electron-proton scattering at momentum transfers of $Q^2 = 2.5, 5.2, 6.8,$ and $8.54$ GeV$^2$. These measurements were carried out to improve knowledge of the proton electromagnetic form factor ratio $R = \mu_p G_E^p/G_M^p$ at large values of $Q^2$ and to search for effects beyond the Born approximation in polarization transfer observables at $Q^2 = 2.5$ GeV$^2$. The final results of both experiments were reported in a recent archival publication. A full reanalysis of the data from both experiments was carried out in order to reduce the systematic and, for the GEp-2$\gamma$ experiment, statistical uncertainties. This technical note provides additional details of the final analysis omitted from the main publication, including the final evaluation of the systematic uncertainties.
• ### Polarization Transfer Observables in Elastic Electron Proton Scattering at $Q^2 =$2.5, 5.2, 6.8, and 8.5 GeV$^2$(1707.08587)
Aug. 10, 2018 hep-ex, nucl-ex
The GEp-III and GEp-2$\gamma$ experiments were carried out in Jefferson Lab's (JLab's) Hall C from 2007-2008, to extend the knowledge of $G_E^p/G_M^p$ to the highest practically achievable $Q^2$ and to search for effects beyond the Born approximation in polarization transfer observables of elastic $\vec{e}p$ scattering. This article reports an expanded description of the common experimental apparatus and data analysis procedure, and the results of a final reanalysis of the data from both experiments, including the previously unpublished results of the full-acceptance data of the GEp-2$\gamma$ experiment. The Hall C High Momentum Spectrometer detected and measured the polarization of protons recoiling elastically from collisions of JLab's polarized electron beam with a liquid hydrogen target. A large-acceptance electromagnetic calorimeter detected the elastically scattered electrons in coincidence to suppress inelastic backgrounds. The final GEp-III data are largely unchanged relative to the originally published results. The statistical uncertainties of the final GEp-2$\gamma$ data are significantly reduced at $\epsilon = 0.632$ and $0.783$ relative to the original publication. The decrease with $Q^2$ of $G_E^p/G_M^p$ continues to $Q^2 = 8.5$ GeV$^2$, but at a slowing rate relative to the approximately linear decrease observed in earlier Hall A measurements. At $Q^2 = 2.5$ GeV$^2$, the proton form factor ratio $G_E^p/G_M^p$ shows no statistically significant $\epsilon$-dependence, as expected in the Born approximation. The ratio $P_\ell/P_\ell^{Born}$ of the longitudinal polarization transfer component to its Born value shows an enhancement of roughly 1.4\% at $\epsilon = 0.783$ relative to $\epsilon = 0.149$, with $\approx 1.9\sigma$ significance based on the total uncertainty, implying a similar effect in the transverse component $P_t$ that cancels in the ratio $R$.
• ### Deep sub-threshold {\phi} production and implications for the K+/K- freeze-out in Au+Au collisions(1703.08418)
Nov. 28, 2017 hep-ex, nucl-ex
We present data on charged kaons (K+-) and {\phi} mesons in Au(1.23A GeV)+Au collisions. It is the first simultaneous measurement of K and {\phi} mesons in central heavy-ion collisions below a kinetic beam energy of 10A GeV. The {\phi}/K- multiplicity ratio is found to be surprisingly high with a value of 0.52 +- 0.16 and shows no dependence on the centrality of the collision. Consequently, the different slopes of the K+ and K- transverse-mass spectra can be explained solely by feed- down, which substantially softens the spectra of K- mesons. Hence, in contrast to the commonly adapted argumentation in literature, the different slopes do not necessarily imply diverging freeze- out temperatures of K+ and K- mesons caused by different couplings to baryons.
• ### Inclusive {\Lambda} production in proton-proton collisions at 3.5 GeV(1611.01040)
Nov. 3, 2016 nucl-ex
The inclusive production of {\Lambda} hyperons in proton-proton collisions at $\sqrt{s}$ = 3.18 GeV was measured with HADES at the GSI Helmholtzzentrum f\"ur Schwerionenforschung in Darmstadt. The experimental data are compared to a data-based model for individual exclusive {\Lambda} production channels in the same reaction. The contributions of intermediate resonances such as {\Sigma}(1385), {\Delta}++ or N* are considered in detail. In particular, the result of a partial wave analysis is accounted for the abundant pK$^+${\Lambda} final state. Model and data show a reasonable agreement at mid rapidities, while a difference is found for larger rapidities. A total {\Lambda} production cross section in p+p collisions at $\sqrt{s}$ = 3.18 GeV of {\sigma}(pp $\to$ {\Lambda} + X) = 207.3 $\pm$ 1.3 +6.0 -7.3 (stat.) $\pm$ 8.4 (syst.) +0.4 -0.5 (model) {\mu}b is found.
• ### The $\bf{\Lambda p}$ interaction studied via femtoscopy in p + Nb reactions at $\mathbf{\sqrt{s_{NN}}=3.18} ~\mathrm{\bf{GeV}}$(1602.08880)
Feb. 29, 2016 nucl-ex
We report on the first measurement of $p\Lambda$ and $pp$ correlations via the femtoscopy method in p+Nb reactions at $\mathrm{\sqrt{s_{NN}}=3.18} ~\mathrm{GeV}$, studied with the High Acceptance Di-Electron Spectrometer (HADES). By comparing the experimental correlation function to model calculations, a source size for $pp$ pairs of $r_{0,pp}=2.02 \pm 0.01(\mathrm{stat})^{+0.11}_{-0.12} (\mathrm{sys}) ~\mathrm{fm}$ and a slightly smaller value for $p\Lambda$ of $r_{0,\Lambda p}=1.62 \pm 0.02(\mathrm{stat})^{+0.19}_{-0.08}(\mathrm{sys}) ~\mathrm{fm}$ is extracted. Using the geometrical extent of the particle emitting region, determined experimentally with $pp$ correlations as reference together with a source function from a transport model, it is possible to study different sets of scattering parameters. The $p\Lambda$ correlation is proven sensitive to predicted scattering length values from chiral effective field theory. We demonstrate that the femtoscopy technique can be used as valid alternative to the analysis of scattering data to study the hyperon-nucleon interaction.
• ### Statistical model analysis of hadron yields in proton-nucleus and heavy-ion collisions at SIS 18 energies(1512.07070)
Dec. 22, 2015 hep-ex, nucl-ex, nucl-th
The HADES data from p+Nb collisions at center of mass energy of $\sqrt{s_{NN}}$= 3.2 GeV are analyzed by employing a statistical model. Accounting for the identified hadrons $\pi^0$, $\eta$, $\Lambda$, $K^{0}_{s}$, $\omega$ allows a surprisingly good description of their abundances with parameters $T_{chem}=(99\pm11)$ MeV and $\mu_{b}=(619\pm34)$ MeV, which fits well in the chemical freeze-out systematics found in heavy-ion collisions. In supplement we reanalyze our previous HADES data from Ar+KCl collisions at $\sqrt{s_{NN}}$= 2.6 GeV with an updated version of the statistical model. We address equilibration in heavy-ion collisions by testing two aspects: the description of yields and the regularity of freeze-out parameters from a statistical model fit. Special emphasis is put on feed-down contributions from higher-lying resonance states which have been proposed to explain the experimentally observed $\Xi^-$ excess present in both data samples.
• ### Study of the quasi-free $np \to np \pi^+\pi^-$ reaction with a deuterium beam at 1.25 GeV/nucleon(1503.04013)
Sept. 24, 2015 nucl-ex
The tagged quasi-free $np \to np\pi^+\pi^-$ reaction has been studied experimentally with the High Acceptance Di-Electron Spectrometer (HADES) at GSI at a deuteron incident beam energy of 1.25 GeV/nucleon ($\sqrt s \sim$ 2.42 GeV/c for the quasi-free collision). For the first time, differential distributions for $\pi^{+}\pi^{-}$ production in $np$ collisions have been collected in the region corresponding to the large transverse momenta of the secondary particles. The invariant mass and angular distributions for the $np\rightarrow np\pi^{+}\pi^{-}$ reaction are compared with different models. This comparison confirms the dominance of the $t$-channel with $\Delta\Delta$ contribution. It also validates the changes previously introduced in the Valencia model to describe two-pion production data in other isospin channels, although some deviations are observed, especially for the $\pi^{+}\pi^{-}$ invariant mass spectrum. The extracted total cross section is also in much better agreement with this model. Our new measurement puts useful constraints for the existence of the conjectured dibaryon resonance at mass M$\sim$ 2.38 GeV and with width $\Gamma\sim$ 70 MeV.
• ### K*(892)+ production in proton-proton collisions at E_beam = 3.5 GeV(1505.06184)
May 22, 2015 nucl-ex
We present results on the K*(892)+ production in proton-proton collisions at a beam energy of E = 3.5 GeV, which is hitherto the lowest energy at which this mesonic resonance has been observed in nucleon-nucleon reactions. The data are interpreted within a two-channel model that includes the 3-body production of K*(892)+ associated with the Lambda- or Sigma-hyperon. The relative contributions of both channels are estimated. Besides the total cross section sigma(p+p -> K*(892)+ + X) = 9.5 +- 0.9 +1.1 -0.9 +- 0.7 mub, that adds a new data point to the excitation function of the K*(892)+ production in the region of low excess energy, transverse momenta and angular spectra are extracted and compared with the predictions of the two-channel model. The spin characteristics of K*(892)+ are discussed as well in terms of the spin-alignment.
• ### Partial Wave Analysis of the Reaction $p(3.5 GeV)+p \to pK^+\Lambda$ to Search for the "$ppK^-$" Bound State(1410.8188)
Oct. 29, 2014 nucl-ex
Employing the Bonn-Gatchina partial wave analysis framework (PWA), we have analyzed HADES data of the reaction $p(3.5GeV)+p\to pK^{+}\Lambda$. This reaction might contain information about the kaonic cluster "$ppK^-$" via its decay into $p\Lambda$. Due to interference effects in our coherent description of the data, a hypothetical $\overline{K}NN$ (or, specifically "$ppK^-$") cluster signal must not necessarily show up as a pronounced feature (e.g. a peak) in an invariant mass spectra like $p\Lambda$. Our PWA analysis includes a variety of resonant and non-resonant intermediate states and delivers a good description of our data (various angular distributions and two-hadron invariant mass spectra) without a contribution of a $\overline{K}NN$ cluster. At a confidence level of CL$_{s}$=95\% such a cluster can not contribute more than 2-12\% to the total cross section with a $pK^{+}\Lambda$ final state, which translates into a production cross-section between 0.7 $\mu b$ and 4.2 $\mu b$, respectively. The range of the upper limit depends on the assumed cluster mass, width and production process.
• ### An upper limit on hypertriton production in collisions of Ar(1.76 AGeV)+KCl(1310.6198)
Nov. 15, 2013 hep-ex, nucl-ex
A high-statistic data sample of Ar(1.76 AGeV)+KCl events recorded with HADES is used to search for a hypertriton signal. An upper production limit per centrality-triggered event of $1.04$ x $10^{-3}$ on the $3\sigma$ level is derived. Comparing this value with the number of successfully reconstructed $\Lambda$ hyperons allows to determine an upper limit on the ratio $N_{_{\Lambda}^3H}/N_{\Lambda}$, which is confronted with statistical and coalescence-type model calculations.
• ### Searching a Dark Photon with HADES(1311.0216)
Nov. 1, 2013 hep-ph, hep-ex
We present a search for the e+e- decay of a hypothetical dark photon, also names U vector boson, in inclusive dielectron spectra measured by HADES in the p (3.5 GeV) + p, Nb reactions, as well as the Ar (1.756 GeV/u) + KCl reaction. An upper limit on the kinetic mixing parameter squared epsilon^{2} at 90% CL has been obtained for the mass range M(U) = 0.02 - 0.55 GeV/c2 and is compared with the present world data set. For masses 0.03 - 0.1 GeV/c^2, the limit has been lowered with respect to previous results, allowing now to exclude a large part of the parameter region favoured by the muon g-2 anomaly. Furthermore, an improved upper limit on the branching ratio of 2.3 * 10^{-6} has been set on the helicity-suppressed direct decay of the eta meson, eta-> e+e-, at 90% CL.
• ### Inclusive pion and eta production in p+Nb collisions at 3.5 GeV beam energy(1305.3118)
July 5, 2013 nucl-ex
Data on inclusive pion and eta production measured with the dielectron spectrometer HADES in the reaction p+93Nb at a kinetic beam energy of 3.5 GeV are presented. Our results, obtained with the photon conversion method, supplement the rather sparse information on neutral meson production in proton-nucleus reactions existing for this bombarding energy regime. The reconstructed e+e-e+e- transverse-momentum and rapidity distributions are confronted with transport model calculations, which account fairly well for both pi0 and eta production.
• ### First measurement of low momentum dielectrons radiated off cold nuclear matter(1205.1918)
Sept. 25, 2012 hep-ex, nucl-ex
We present data on dielectron emission in proton induced reactions on a Nb target at 3.5 GeV kinetic beam energy measured with HADES installed at GSI. The data represent the first high statistics measurement of proton-induced dielectron radiation from cold nuclear matter in a kinematic regime, where strong medium effects are expected. Combined with the good mass resolution of 2%, it is the first measurement sensitive to changes of the spectral functions of vector mesons, as predicted by models for hadrons at rest or small relative momenta. Comparing the e+e invariant mass spectra to elementary p+p data, we observe for e+e momenta Pee < 0.8 GeV/c a strong modification of the shape of the spectrum, which we attribute to an additional rho-like contribution and a decrease of omega yield. These opposite trends are tentatively interpreted as a strong coupling of the rho meson to baryonic resonances and an absorption of the omega meson, which are two aspects of in-medium modification of vector mesons.
• ### Study of exclusive one-pion and one-eta production using hadron and dielectron channels in pp reactions at kinetic beam energies of 1.25 GeV and 2.2 GeV with HADES(1203.1333)
April 23, 2012 nucl-ex
We present measurements of exclusive \pi^{+,0} and \eta\ production in pp reactions at 1.25 GeV and 2.2 GeV beam kinetic energy in hadron and dielectron channels. In the case of \pi^+ and \pi^0, high-statistics invariant-mass and angular distributions are obtained within the HADES acceptance as well as acceptance corrected distributions, which are compared to a resonance model. The sensitivity of the data to the yield and production angular distribution of \Delta(1232) and higher lying baryon resonances is shown, and an improved parameterization is proposed. The extracted cross sections are of special interest in the case of pp \to pp \eta, since controversial data exist at 2.0 GeV; we find \sigma =0.142 \pm 0.022 mb. Using the dielectron channels, the \pi^0 and \eta\ Dalitz decay signals are reconstructed with yields fully consistent with the hadronic channels. The electron invariant masses and acceptance corrected helicity angle distributions are found in good agreement with model predictions.
• ### Inclusive dielectron production in proton-proton collisions at 2.2 GeV beam energy(1203.2549)
March 12, 2012 nucl-ex
Data on inclusive dielectron production are presented for the reaction p+p at 2.2 GeV measured with the High Acceptance DiElectron Spectrometer (HADES). Our results supplement data obtained earlier in this bombarding energy regime by DLS and HADES. The comparison with the 2.09 GeV DLS data is discussed. The reconstructed e+e- distributions are confronted with simulated pair cocktails, revealing an excess yield at invariant masses around 0.5 GeV/c2. Inclusive cross sections of neutral pion and eta production are obtained.
• ### Polarization components in $\pi^{0}$ photoproduction at photon energies up to 5.6 GeV(1109.4650)
March 6, 2012 nucl-ex
We present new data for the polarization observables of the final state proton in the $^{1}H(\vec{\gamma},\vec{p})\pi^{0}$ reaction. These data can be used to test predictions based on hadron helicity conservation (HHC) and perturbative QCD (pQCD). These data have both small statistical and systematic uncertainties, and were obtained with beam energies between 1.8 and 5.6 GeV and for $\pi^{0}$ scattering angles larger than 75$^{\circ}$ in center-of-mass (c.m.) frame. The data extend the polarization measurements data base for neutral pion photoproduction up to $E_{\gamma}=5.6 GeV$. The results show non-zero induced polarization above the resonance region. The polarization transfer components vary rapidly with the photon energy and $\pi^{0}$ scattering angle in c.m. frame. This indicates that HHC does not hold and that the pQCD limit is still not reached in the energy regime of this experiment.
• ### Production of Sigma{\pm}pi?pK+ in p+p reactions at 3.5 GeV beam energy(1202.2734)
Feb. 13, 2012 nucl-ex
• ### Cross section in deuteron-proton elastic scattering at 1.25 GeV/u(1102.1610)
Feb. 18, 2011 nucl-ex
First results of the differential cross section in dp elastic scattering at 1.25 GeV/u measured with the HADES over a large angular range are reported. The obtained data corresponds to large transverse momenta, where a high sensitivity to the two-nucleon and three-nucleon short-range correlations is expected.
• ### Single and double pion production in np collisions at 1.25 GeV with HADES(1102.1843)
Feb. 11, 2011 nucl-ex
The preliminary results on charged pion production in np collisions at an incident beam energy of 1.25 GeV measured with HADES are presented. The np reactions were isolated in dp collisions at 1.25 GeV/u using the Forward Wall hodoscope, which allowed to register spectator protons. The results for np -> pppi-, np -> nppi+pi- and np -> dpi+pi- channels are compared with OPE calculations. A reasonable agreement between experimental results and the predictions of the OPE+OBE model is observed.
• ### Recoil Polarization Measurements of the Proton Electromagnetic Form Factor Ratio to Q^2 = 8.5 GeV^2(1005.3419)
May 28, 2010 hep-ex, nucl-ex, nucl-th
Among the most fundamental observables of nucleon structure, electromagnetic form factors are a crucial benchmark for modern calculations describing the strong interaction dynamics of the nucleon's quark constituents; indeed, recent proton data have attracted intense theoretical interest. In this letter, we report new measurements of the proton electromagnetic form factor ratio using the recoil polarization method, at momentum transfers Q2=5.2, 6.7, and 8.5 GeV2. By extending the range of Q2 for which GEp is accurately determined by more than 50%, these measurements will provide significant constraints on models of nucleon structure in the non-perturbative regime.
• ### Future perspectives at SIS-100 with HADES-at-FAIR(0906.0091)
June 15, 2009 nucl-ex
Currently, the HADES spectrometer undergoes un upgrade program to be prepared for measurements at the upcoming SIS-100 synchrotron at FAIR. We describe the current status of the HADES di-electron measurements at the SIS-18 and our future plans for SIS-100.
• ### Study of dielectron production in C+C collisions at 1 AGeV(0711.4281)
March 21, 2008 nucl-ex
The emission of e+e- pairs from C+C collisions at an incident energy of 1 GeV per nucleon has been investigated. The measured production probabilities, spanning from the pi0-Dalitz to the rho/omega! invariant-mass region, display a strong excess above the cocktail of standard hadronic sources. The bombarding-energy dependence of this excess is found to scale like pion production, rather than like eta production. The data are in good agreement with results obtained in the former DLS experiment.
|
2021-02-28 22:08:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7257376909255981, "perplexity": 2632.460781799445}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178361776.13/warc/CC-MAIN-20210228205741-20210228235741-00483.warc.gz"}
|
https://math.stackexchange.com/questions/1563720/measurability-criterion-of-caratheodory
|
# Measurability criterion of Caratheodory
Let $E=[0,1]$. Here are the definitions I am using:
Let $A\subset E$, then we define the outer measure of $A$ as $$\mu^*(A)=\inf \left\{\sum_k m(I_k): A\subset \cup_k I_k \right\}$$ where the infimum is taken over all any countable collection $\{I_k\}$ of intervals (open, closed or half open) whose union contains $A$, and we define the inner measure of $A$ as, $$\mu_*(A)=1-\mu^*(E\setminus A)$$ and finally $A$ is said to be measurable if $\mu^*(A)=\mu_*(A)$.
As a note, I have shown that $$\mu^*(A)=\inf\{\mu(G): A\subset G, G \text{ is open relative to } E\}$$ so that we may use this characterization of the outer measure of a set or the one originally given above. We can also reformulate the definition of a measurable set: a set $A\subset E$ is measurable if and only if $\mu^*(E)=\mu^*(E\cap A)+\mu^*(E\setminus A)$ and this follows immediately from the fact that if $A$ is measurable according to the definition above, then $\mu^*(A)+\mu^*(E\setminus A)=1$ and since $A\subset E$, we have $A\cap E=A$.
Now, I want to show yet another equivalent characterization of measurability in the following,
$\textbf{Problem:}$ I am trying to prove if $A \subset E$ is measurable then for any $F\subset E$, we have $$\mu^*(F)=\mu^*(F\cap A)+\mu^*(F\setminus A).$$
The hint in my book says to use $B\subset E$ is measurable if and only if for any $\epsilon >0$, $\exists G_1, G_2 \subset E$ and open relative to $E$ such that $B\subset G_1$, $E\setminus B\subset G_2$, and $\mu(G_1\cap G_2)<\epsilon$, so I also proved this (using the second characterization of $\mu^*$ provided here). For the current problem, this is my work so far:
Clearly, $$\mu^*(F)\leq \mu^*(F\cap A)+\mu^*(F\setminus A)$$ by sub-additivity. For the other inequality, let $\epsilon >0$, then by the above condition for measurability, we have (for some $G_1, G_2 \subset E$, etc), $$F\cap A\subset A \subset G_1 \text{ and } F\setminus A\subset E\setminus A\subset G_2$$ and $$\mu^*(F)+\mu^*(F\setminus A)\leq \mu(G_1)+\mu(G_2)=\mu(G_1 \cup G_2) + \mu(G_1\cap G_2)$$ but $G_1\cup G_2=E$, so the right hand side above is less than $1+\epsilon$. But I don't really see how this helps. We can conclude the left hand side is less than or equal to $1$ then since $\epsilon$ was arbitrary but I don't think I'm approaching this correctly. Any suggestions would be greatly appreciated, thanks.
• You are trying to prove what is usually the definition of measurability. So it seems you are using a different (probably equivalent) definition of of measurability. What is the definition you are using? – Ramiro Dec 7 '15 at 14:27
• @Ramiro I am sorry, edited to reflect which definition of measure I was using. For convenience, my def is $A\subset E$ is measurable if it satisfies $\mu_*(A)=\mu^*(A)$ where the inner measure is $1-\mu^*(E\setminus A)$ – Nap D. Lover Dec 7 '15 at 14:48
• You wrote: "We can also reformulate the definition of a measurable set: a set $A\subset E$ is measurable if and only if $\mu^*(A)=\mu^*(E\cap A)+\mu^*(E\setminus A)$ and this follows... ". I think you meant $\mu^*(E)=\mu^*(E\cap A)+\mu^*(E\setminus A)$. – Ramiro Dec 7 '15 at 21:18
• @Ramiro yes, thanks for spotting the typo! – Nap D. Lover Dec 7 '15 at 21:26
## 1 Answer
Let $E=[0,1]$.
You already know that that $A\subset E$ is measurable (that is: $\mu^*(A)=\mu_*(A)$) if and only if
$$\mu^*(E)=\mu^*(E\cap A)+\mu^*(E\setminus A)$$
You also know that:
1. if $A\subset E$ then $$\mu^*(A)=\inf\{\mu(G): A\subset G, G \text{ is open relative to } E\}$$
2. $A\subset E$ is measurable if and only if for any $\epsilon >0$, $\exists G_1, G_2 \subset E$ and open relative to $E$ such that $A\subset G_1$, $E\setminus A\subset G_2$, and $\mu(G_1\cap G_2)<\epsilon$ (The hint from your book is to use this property).
3. $\mu^*$ is subadditive.
In the proof below we will use properties 1, 2 and 3 and also the following four properties:
1. $\mu^*$ is monotone (that means, if $A\subset B$ then $\mu^*(A)\leqslant \mu^*(B)$).
2. if $G \subset E$ is open relative to $E$, then $\mu^*(G)=\mu(G)$.
3. if $G,H \subset E$ is open relative to $E$, then $\mu^*(G\setminus H)=\mu(G\setminus H)$.
4. $\mu$ is additive (in fact, it is $\sigma$-additive).
Proof: Let $A\subset E$ be measurable and let $F\subset E$. Let $\epsilon >0$.
Since $\mu^*(F)<\infty$, then, by property 1, there is $G_0$ open set relative to $E$, such that $F\subset G_0$ and $\mu^*(F)\leqslant\mu(G_0)<\mu^*(F)+\epsilon$.
Since $A$ is measurable, then, by property 2, $\exists G_1, G_2 \subset E$ and open relative to $E$ such that $A\subset G_1$, $E\setminus A\subset G_2$, and $\mu(G_1\cap G_2)<\epsilon$.
Let $D=E\setminus G_2$. Then $D\subset A$ and we have \begin{align} \mu^*(F\cap A)&+\mu^*(F\setminus A)\leqslant \mu^*(G_0\cap A)+\mu^*(G_0\setminus A) \leqslant & \textrm{ prop. 4} \\& \leqslant \mu^*(G_0\cap G_1)+\mu^*(G_0\setminus D) \leqslant & \textrm{ prop. 4} \\& \leqslant \mu^*(G_0\cap D)+\mu^*(G_0\cap (G_1\setminus D))+\mu^*(G_0\setminus D) = & \textrm{ prop. 3} \\& = \mu^*(G_0\setminus G_2)+\mu^*(G_0\cap (G_1\cap G_2))+\mu^*(G_0\cap G_2) = & \textrm{ definition of $D$} \\& = \mu(G_0\setminus G_2)+\mu(G_0\cap (G_1\cap G_2))+\mu(G_0\cap G_2) = & \textrm{ prop. 5 and 6} \\& = \mu(G_0)+\mu(G_0\cap (G_1\cap G_2)) \leqslant & \textrm{ prop. 7} \\& \leqslant \mu(G_0)+\mu(G_1\cap G_2) \leqslant & \textrm{ prop. 4} \\& \leqslant \mu^*(F)+\epsilon + \epsilon \end{align}
So, for any arbritary $\epsilon>0$, we have $\mu^*(F\cap A)+\mu^*(F\setminus A) \leqslant \mu^*(F)+2\epsilon$. So we have $$\mu^*(F)\geqslant \mu^*(F\cap A)+\mu^*(F\setminus A)$$ Since $\mu^*$ is subadditive, we get $\mu^*(F)= \mu^*(F\cap A)+\mu^*(F\setminus A)$.
• Thank you! Very clear answer, much appreciated. – Nap D. Lover Dec 8 '15 at 17:54
• I am sorry, I actually don't understand the step which uses prop 3 ($\sigma$-sub additivity), i am trying to figure it out, there must be some set relation i'm missing, could you elaborate? – Nap D. Lover Dec 9 '15 at 1:25
• Sure. Note that $$G_1=D+(G_1\setminus D)$$ (where "$+$" indicates disjoint union). So $$G_0\cap G_1=(G_0\cap D)+(G_0\cap (G_1\setminus D))$$ Then, since $\mu^*$ is subadditive, we have $$\mu^*(G_0\cap G_1) \leqslant \mu^*(G_0\cap D)+\mu^*(G_0\cap (G_1\setminus D))$$ Please, let me know if you have any further question. – Ramiro Dec 9 '15 at 1:53
• ah okay. Thanks again. Yet another question: I had not proven subadditivity for the formal disjoint union operation $+$ (or seen it used in my texts on measure) is there any difference than proving it for regular unions? – Nap D. Lover Dec 12 '15 at 23:28
• For the sake of outer measures, no. It is just an immediate consequence of subadditivity for "regular" unions. – Ramiro Dec 12 '15 at 23:38
|
2019-06-18 23:13:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9895344376564026, "perplexity": 138.5455822306161}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998844.16/warc/CC-MAIN-20190618223541-20190619005541-00185.warc.gz"}
|
https://easychair.org/smart-program/PGMODAYS2021/2021-12-01.html
|
PGMODAYS 2021: PGMODAYS 2021
PROGRAM FOR WEDNESDAY, DECEMBER 1ST
Days:
previous day
all days
View: session overviewtalk overview
09:00-10:30 Session 10A: Mean field games and applications, I (invited session organized by Guilherme Mazanti and Laurent Pfeiffer)
Location: Amphi I
09:00
Fine Properties of the Value Function in Mean-Field Optimal Control
ABSTRACT. In this talk, I will present some of the results of our recent paper \cite{BF2021} with H. Frankowska. Therein, following the seminal paper \cite{CF1991}, we investigate some of the fine regularity and structure properties of the value function associated to a mean-field optimal control problem of Mayer type, formulated in the Wasserstein space of probability measures.
After motivating the study of this class of problems and briefly positioning the underlying models with respect to the literature, I will shortly review some necessary tools of optimal transport theory and mean-field control. Then, I shall present two families of results concerning the regularity properties of the value function. First, I will show that when the data of the optimal control problem is sufficiently regular, the value function is \textit{semiconcave} with respect to both variables. In this case, the semiconcavity with respect to the measure variable is understood in the sense of the intrinsic geometry of Wasserstein spaces, following the monograph \cite{AGS2008}. Then, I will establish the so-called \textit{sensitivity relations} which link the costates of the Pontryagin maximum principle to adequate superdifferentials of the value function. If time permits, I will end the talk by discussing relevant applications of these results in control theory.
09:30
A Mean field game model for the evolution of cities
ABSTRACT. We propose a (toy) MFG model for the evolution of residents and firms densities, coupled both by labour market equilibrium conditions and competition for land use (congestion). This results in a system of two Hamilton- Jacobi-Bellman and two Fokker-Planck equations with a new form of coupling related to optimal transport. This MFG has a convex potential which enables us to find weak solutions by a variational approach. In the case of quadratic Hamiltonians, the problem can be reformulated in Lagrangian terms and solved numerically by an IPFP/Sinkhorn- like scheme
10:00
Reinforcement learning method for Mean Field Games
ABSTRACT. We will present recent advances on learning methods for computing solutions of mean field games. Theoretical and empirical results concerning algorithms such as Fictitious Play or Online Mirror Descent will be discussed. Numerical experiments in different fields such as flocking, crowd modeling, predator prey or vehicle routing will be presented.
09:00-10:30 Session 10B: Flexibility management in power systems (invited session organized by Nadia Oudjane and Cheng Wan)
Location: Amphi II
09:00
Optimal Distributed Energy Resource Coordination: A Hierarchical Decomposition Method Based on Distribution Locational Marginal Costs
ABSTRACT. In this work, we consider the day-ahead operational planning problem of a radial distribution network hosting Distributed Energy Resources (DERs) including rooftop solar and storage-like loads, such as electric vehicles. We present a novel hierarchical decomposition method that is based on a centralized AC Optimal Power Flow (AC OPF) problem interacting iteratively with self-dispatching DER problems adapting to real and reactive power Distribution Locational Marginal Costs. We illustrate the applicability and tractability of the proposed method on an actual distribution feeder, while modeling the full complexity of spatiotemporal DER capabilities and preferences, and accounting for instances of non-exact AC OPF convex relaxations. We show that the proposed method achieves optimal Grid-DER coordination, by successively improving feasible AC OPF solutions, and discovers spatiotemporally varying marginal costs in distribution networks that are key to optimal DER scheduling by modeling losses, ampacity and voltage congestion, and, most importantly, dynamic asset degradation.
09:30
Model Predictive Control Framework for Congestion Management with Large Batteries in Sub-Transmission Grid
ABSTRACT. RTE is building and will put into operation 3 large battery storage systems in 2021/2022 (10MW/20MWh). These batteries, together with intermittent renewable generation curtailment and line switching, will be used to manage congestions in 3 small subtransmission zones (63kV or 90kV). A local controller sends orders to the battery, to power plants and switches every 15 seconds, using all the Flexibility offered by permanent and emergency ratings, including Dynamic Line Rating when available. Controller's decision algorithm is Model Predictive Control: every 15 secondes, DC approximation model of the grid is computed based on latests real time measurements; then a Mixed Integer Programming model is built, taking into account delays of actions. This local controller does not have any forecast and is not be able to manage preventive actions, so a higher level scheduler will be in charge of security analysis (N-1 analysis), battery preventive actions, pre-discharging the battery for forthcoming grid congestions.
10:00
Approximate Nash equilibria in large nonconvex aggregative games
PRESENTER: Kang Liu
ABSTRACT. This paper shows the existence of $\mathcal{O}(\frac{1}{n^\gamma})$-Nash equilibria in $n$-player noncooperative sum-aggregative games where the players' cost functions depend only on their own action and the average of all players' actions, is lower semicontinuous in the former, while $\gamma$-H\"{o}lder continuous in the latter. Neither the action sets nor the cost functions need to be convex. For an important class of sum-aggregative games which includes congestion games with $\gamma$ being 1, a gradient-proximal algorithm is used to construct an $\mathcal{O}(\frac{1}{n})$-Nash equilibria with at most $\mathcal{O}(n^3)$ iterations. These results are applied to a numerical example of demand-side management of the electricity system. The asymptotic performance of the algorithm is illustrated when $n$ tends to infinity.
09:00-10:30 Session 10C: Flexibility and pricing in electricity system models (invited session organized by Olivier Beaude)
Location: A1.134
09:00
Long-term planning in the electricity system: introduction of a few key modeling aspects
ABSTRACT. Long-term planning of the electricity system is a vast research field, including among others the class of Generation Expansion Planning (GEP) models. In a rapidly changing context, with a greater variability and uncertainty due to the significant increase of renewable energy capacities and the ambition of making part of the consumption « flexible » (i.e., directly controllable by a system operator or incentivized based on the system need), multiple key modelling questions arise. An introduction to a few of them, related to current research efforts at EDF R&D, will be made: the use of clustering methods to decrease the size of considered GEP optimization problems; the « proper » representation of an aggregate of flexible individual consumers; the integration of local – mostly network-based – constraints.
09:30
Decision-making Oriented Clustering: Application to Pricing and Power Consumption Scheduling
ABSTRACT. Data clustering is an instrumental tool in the area of energy resource management. One problem with conventional clustering is that it does not take the final use of the clustered data into account, which may lead to a very suboptimal use of energy or computational resources. When clustered data are used by a decision-making entity, it turns out that significant gains can be obtained by tailoring the clustering scheme to the final task performed by the decision-making entity. The key to having good final performance is to automatically extract the important attributes of the data space that are inherently relevant to the subsequent decision-making entity, and partition the data space based on these attributes instead of partitioning the data space based on predefined conventional metrics. For this purpose, we formulate the framework of decision-making oriented clustering and propose an algorithm providing a decision-based partition of the data space and good representative decisions. By applying this novel framework and algorithm to a typical problem of real-time pricing and that of power consumption scheduling, we obtain several insightful analytical results such as the expression of the best representative price profiles for real-time pricing and a very significant reduction in terms of required clusters to perform power consumption scheduling as shown by our simulations.
10:00
Network Games Equilibrium Computation: Duality Extension and Coordination
ABSTRACT. We formulate a generic network game as a generalized Nash equilibrium problem. Relying on normalized Nash equilibrium as solution concept, we provide a parametrized proximal algorithm to span many equilibrium points [1]. Complexifying the setting, we consider an information structure in which the agents in the network can withhold some local information from sensitive data, resulting in private coupling constraints. The convergence of the algorithm and deviations in the players’ strategies at equilibrium are formally analyzed. In addition, duality theory extension enables to use the algorithm to coordinate the agents through a fully distributed pricing mechanism, on one specific equilibrium with desirable properties at the system level (economic efficiency, fairness, etc.). To that purpose, the game is recast as a hierarchical decomposition problem in the same spirit as in [3], and a procedure is detailed to compute the equilibrium that minimizes a secondary cost function capturing system level properties. Finally, applications are presented to a) peer-to-peer energy trading [2], b) Transmission-Distribution System Operators markets for flexibility procurement [4].
References
[1] H. Le Cadre, Y. Mou, and H. Höschle, Parameterized Inexact-ADMM based Coordina- tion Games: A Normalized Nash Equilibrium Approach, European Journal of Operational Research, 296(2):696–716, 2022.
[2] H. Le Cadre, P. Jacquot, C. Wan, and C. Alasseur, Peer-to-Peer Electricity Market Anal- ysis: From Variational to Generalized Nash Equilibrium, European Journal of Operational Research, 282(2):753–771, 2020.
[3] L. Pavel, An Extension of Duality to a Game-Theoretic Framework, Automatica, 43(2):226– 237, 2007.
[4] A. Sanjab, H. Le Cadre, and Y. Mou, TSO-DSOs Stable Cost Allocation for the Joint Procurement of Flexibility: A Cooperative Game Approach, arXiv Preprint, 2021.
09:00-10:30 Session 10D: Optimization, learning and data sciences
Location: A1.116
09:00
Linearly-constrained Linear Quadratic Regulator from the viewpoint of kernel methods
ABSTRACT. Where does machine learning meet optimal control? We show in this talk that linearly controlled trajectory spaces are vector-valued reproducing Hilbert kernel spaces when equipped with the scalar product corresponding to the quadratic cost. The associated LQ kernel is related to the inverse of the Riccati matrix \cite{aubin2020Riccati} and to the controllability Gramian \cite{aubin2020hard_control}. This kernel allows to deal by a simple \emph{representation theorem} with the difficult case of state constraints, in the affine case for linear-quadratic (LQ) optimal control problems with varying time \cite{aubin2020hard_control}. Numerically, this allows an exact continuous-time solution of path planning problems. We will therefore present a new link between kernel methods and optimal control.
09:30
Factored Model-based RL for Tandem Queues Resource Allocation
ABSTRACT. Managing the energy consumption and Quality of Service (QoS) in data-centers is a crucial task nowadays. Indeed, these centers are usually oversized and cause over-consumption representing a significant part of global electricity consumption. Turning On and Off virtual machines is a usual way to reduce the energy consumption while guaranteeing a given QoS. We consider three-tier network architecture modeled with two physical nodes in tandem where an autonomous agent controls the number of active resources on both nodes. We analyse the learning of auto-scaling strategies in order to optimise both performance and energy consumption of the whole system. We consider reinforcement learning (RL) [1] techniques to address the learning of optimal policies. We find in [2] a listing of RL methods for auto-scaling policy search in cloud scenarios. We observe that almost all applied methods are model-free. Our intention is to consider and compare structural model-based RL methods to see if it can indeed accelerate convergence, but also to tackle curse of dimensionality when considering large scale cloud scenarios. For that purpose we want to take benefit of the environment structure in our model and methodology to accelerate speed of policy learning. We consider factored MDP framework [3] to understand local relational structure between environment variables. After modeling the tandem queue system in a factored manner, we consider two paradigms: one first approach where the controller knows the statistics of the system and can apply MDP algorithms (Value Iteration) and a second approach where the controller does not know the dynamics of the environment, yet has to consider reinforcement learning methods. In the first approach, we compare classical MDP solution such as Value Iteration with a factored version called Factored Value Iteration [4]. For the reinforcement learning approach we develop a Factored MDP online method based on Factored Rmax algorithm and compare the solution to MDP online and model-free Q-Learning. We demonstrate the gain of such factored approach (convergence, memory implementation) on numerical experiments with arbitrary cloud scenarios.
10:00
Tropical linear regression and low-rank approximation --- a first step in tropical data analysis
ABSTRACT. Tropical data arise naturally in many areas, such as control theory, phylogenetic analysis, machine learning, economics, and so on. However, many fundamental problems still deserve further investigations and more powerful tools need to be developed. In this talk, as a first step in tropical data analysis, we would like to introduce two useful models, namely tropical linear regression and tropical low-rank approximation.
More precisely, for a collection V of finitely many points, the tropical linear regression problem is to find a best tropical hyperplane approximation of V. We will establish a strong duality theorem, showing the above distance coincides with the maximal radius of a Hilbert's ball contained in the tropical polyhedron spanned by V. Algorithmically speaking, this regression problem is polynomial-time equivalent to mean payoff game. As an application, we illustrate our results by solving an inverse problem from auction theory.
Another important tool we will study is tropical low-rank approximation. We will systematically discuss the relations among different notions of rank in the tropical setting. In particular, we will reveal a close relation between tropical linear regression and best rank-2 approximation, which provides us an efficient algorithm for finding a best rank-2 matrix approximation for a given matrix.
09:00-10:30 Session 10E: Filtering, surveillance and simulation
Location: A1.122
09:00
A low-cost second-order multi-object tracking filter for WAMI surveillance
ABSTRACT. This paper that describes a second-order multi-object tracking filter for tracking objects in wide-area motion imagery (WAMI) applications, built on top of a recently proposed second-order computationally efficient multi-object filter that provides first and second-order information on the number of targets. The filters described in this document consist of multi-target tracking algorithms that accurately estimate the number of objects along with its dispersion (second-order factorial cumulant), in addition to the object states, in tracking scenarios affected by false alarms, misdetections and noise. We incorporate the following extensions to the cumulant-based filter for WAMI applications: 1 ) incorporation of different strategies to deal with environments where objects are poorly distinguishable, 2 ) a labeling process for objects from stochastic populations via the theory of marked point processes, 3 ) a principled track manager to work around track persistence limitations of PHD filters for complex scenarios. The method shows that the resulting filter is adequate to tracking solutions based on a full-probabilistic Bayesian approach to visual tracking. This report describes the extensions made to the filter and demonstrates results that are competitive in performance to state-of-the-art methods.
09:30
Cross-entropic optimization of landmarks selection for efficient navigation in passive estimation context
ABSTRACT. Passive target estimation is a widely investigated problem of practical interest for which particle filters represent a popular class of methods. An adaptation of the Laplace Particle Filter is proposed for angle-only navigation by means of landmarks. In this specific context, a high number of aiding landmarks or features could be hard to handle in terms of computational cost. This contribution addresses specifically the key issue of optimizing the landmarks choice along navigation steps in order to maximize the navigation efficiency. In its general form, this is a partially observable decision process optimization, which is known to be a difficult problem. But there are approaches based on the Posterior Cram\`er Rao Bound (PCRB) which are a good compromise in terms of efficiency for moderate difficulty. However, PCRB-based criteria are generally non-convex and are not well suited for generic mathematical approaches. We propose here an optimisation approach based on the cross-entropy algorithm (CE). We already applied CE in combination with Dynamic Bayesian Networks as sampling family to partially observable decision processes. In this contribution, more efficiency is achieved by means of the PCRB, and besides we also introduce some refinements in the CE update stage in order to improve convergence.
10:00
Stochastic simulation for discrete event systems based on maintenance optimization
ABSTRACT. Industrial systems are subject to faults and failures on their components, which can lead to the unavailability of the systems and increase costs due to interventions. A suitable maintenance strategy is therefore a good solution so to reduce such costs, and also to increase the availability of the system. Combination of different kinds of maintenance policies on components of a system can be a good solution. Nevertheless, it has to be finely analyzed, so to search the optimal maintenance strategies on the system, according to specified criteria (e.g. availability, cost, etc.). In this publication, we illustrate how the combination of a simulation tool, based on stochastic discrete event systems, and an optimization algorithm can be used to find (one of) the best strategy of maintenance. We propose a solution that optimizes system availability using an evolutionary algorithm. A stochastic simulator performs calculations according to parameters provided by an optimization algorithm, which plans preventive maintenance schedules. The optimization algorithm provides the optimum maintenance scenario defined by the kind of maintenance to apply and the suitable schedules. The experiments show that the simulation based optimization algorithm gives more flexibility to the decision maker.
09:00-10:30 Session 10F: Electric and Efficient Mobility
Location: A1.128
09:00
Management of electric vehicles fleet by decomposition method approach
ABSTRACT. The objective of this work was to build a model to optimize dimensions of a fleet of electric vehicles and dimensions of electric chargers dedicated by the vehicle. Fleet must satisfy a demand for transport of passengers. Trajects satisfying demand were modeled and relocalization trajects were also added. Objective function includes charging cost, maintenance and investment cost vehicles, traject duration for passengers, and investment costs of chargers. Because of the size of the problem, decomposition approach was used.
09:30
Shared autonomous electric vehicles transport scheduling with charging considerations: optimization-based planning approaches
ABSTRACT. Car-sharing services are becoming increasingly popular as a result of the ubiquitous development of the communication technologies that allow the easy exchange of information required for these systems to work. The adoption of autonomous driving technology for providing car-sharing services is expected to further accelerate the widespread of this transport model. Autonomous driving means that vehicles can move to pick up customers autonomously without the need to rest for long periods and without the need for parking slots. Studies have shown that car-sharing services using autonomous driving can significantly improve fuel efficiency and decrease the need for parking spaces in cities. Moreover, as autonomous vehicles are expected to operate with electric power, they have the potential to significantly decrease greenhouse-gas emissions in the transport sector. The increased electrification of the transport system due to the adoption of shared autonomous transportation has significant implications for the electric power systems. In addition to efficient passenger transport, shared autonomous electric vehicles (SAEVs) provide a significant opportunity to act as moving batteries and to add electricity storage to the grid that would counterbalance the intermittence of renewable energy sources, such as wind and solar. If correctly managed, SAEVs can contribute to power grid flexibility, reduction of peak load and in providing ancillary services such as frequency regulation. This would enable major cost savings for both the transportation and energy systems, enable a higher renewable energy integration and production and offer overall more resilient systems capable of withstanding and recovering from disruptions. However, a poor coordination between the two systems, in particular with regards to the electricity charging schedule of the SAEVs, may compromise both the grid stability and passenger travel times. In this research, we investigate the potential interaction between SAEV and the power grid in terms of charge scheduling and ability to supply operating reserve. To achieve this, we propose an integrated modeling and optimization framework to co-optimize the charging schedule taking into account the SAEV transport demand and charging requirement. We show how a variety of modeling approaches can be used to address this problem, each with a set of advantages and limitations. Our results show the importance of the coordination between the electric power system and the transportation system in terms of reducing costs, satisfying travel demand and ensuring the robust operations of both systems. This is particularly for systems with that are projected to highly depend on electrified mobility.
10:00
Synchronizing Energy Production and Vehicle Routing
ABSTRACT. We deal here with the synchronization of solar hydrogen production by a micro-plant, and the consumption of this production by a fleet of autonomous vehicles in charge of internal logistic tasks. More precisely, we consider 1 vehicle, with limited tank capacity, which must visit stations within some time horizon. Travelling times and related energy amount are known and so the vehicle manager must route this vehicle and schedule refueling transactions, which induce a detour by micro-plant. This micro-plant produces H2 in situ according to time dependent costs and production ratio, and stores it inside a tank with limited capacity. Then resulting Synchronized Energy Production&Consumption Problem, consists in computing the route G of the vehicle, and scheduling both refueling transactions and H2 production, while meeting production/consumption constraints and minimizing some global costs.
We first handle this problem while supposing G fixed and deterministic production ratio. We propose for this a centralized synchronized dynamic Programming Scheme, pseudo time-polynomial in the sense of polynomial approximation (PTAS). Next we adopt a collaborative point of view and decompose according to different possible collaborative schemes. Next we deal with the uncertainty issue, through a notion of climatic scenario, whose purpose is to capture correlations. Finally, we consider G as part of the problem, and design a bi-level resolution scheme which makes the vehicle endorse the leading role.
09:00-10:30 Session 10G: Mixed Integer Programming for Applications
Location: A1.133
09:00
Mixed Integer Nonlinear Approaches for the Satellite Constellation Design Problem
ABSTRACT. In this paper, we address the satellite constellation design problem, i.e., we aim to compute the minimal number of satellites (and, incidentally, their 3D placements) in order to observe a given region on the Earth. The hard constraints of the problem is related to the so-called revisiting time: the Earth region must be observed at least once in a time interval of given width. The observed Earth region is discretized as a set of pixels, i.e., points on the Earth surface represented by 3D coordinates. [1] already proposed an approach based on Binary Linear Programming (BILP) formulation optimizing only one variable; in our case, we want to optimize simultaneously the 4 variables corresponding to the keplerian orbital parameters. Other approaches proposed in literature mainly concern genetic algorithm methodology, see for instance [2, 3] that optimize 4 variables. Moreover, to the best of our knowledge, this paper represents the first attempt to address satellite constellation design problem though Mixed Integer Nonlinear Programming (MINLP). We define two MINLP formulations: the first one is based on a direct mathematical definition of pixel observability within a given revisiting time; in the second formulation, we introduce a set of indicator variables, which indicate if a satellite observes a pixel at a given time-stamp. In this latter formulation we aim to minimize the number of active indicator variables, enforcing observability of all the pixels within the revising time. We furthermore discretize the possible positions of satellites so that we obtain a large-scale Mixed-Integer Linear Programming (MILP) problem. Finally, we conclude by suggesting several insights toward a (mat)heuristic levering on the second formulation.
References [1] H. W. Lee, S. Shimizu, S. Yoshikawa, and K. Ho, Satellite Constellation Pattern Optimiza- tion for Complex Regional Coverage, Journal of Spacecraft and Rockets, 57(6):1309–1327, 2020. [2] N. Hitomi, and D. Selva, Constellation optimization using an evolutionary algorithm with a variable-length chromosome, IEEE Aerospace Conference, 1–12, 2018. [3] T. Savitri, Y. Kim, S. Jo, and H. Bang, Satellite constellation orbit design optimization with combined genetic algorithm and semianalytical approach, International Journal of Aerospace Engineering, 2017:1235692, 2017.
09:30
Efficient Design of a Military Multi-technology Network through MILP
ABSTRACT. In a military theater of operations, a set of mobile units with different communication technologies (5G, UHF, VHF, etc.) are deployed and have to be able to communicate between each other and to a military base. The network of units and the base is connected through a set of communication relays of multiple technologies that have to cover the base and the maximum number of units. Communication relays can be embarked on HAPS which are innovative systems (e.g., drones, balloons) deployed at a fixed, high altitude.
To maximize unit coverage over time, we introduce and study the Network Planning Problem (NPP). This problem consists in deciding the deployment of the HAPS in a set of positions and deciding the placement of one or more relays in each deployed HAPS within the limits of its capacities. Through their relays, the HAPS must form a connected network with the base.
We propose a MILP formulation for the NPP with multi-commodity flow variables. In order to strengthen the model, we introduce valid and symmetry-breaking inequalities. We also perform an experimental study and develop a visualization tool. Preliminary tests show that the difficulty of solving the problem is related to the number of positions and the range of communication devices.
10:00
MIP formulations for Energy Optimization in Multi-Hop Wireless Networks
ABSTRACT. Network coding improves the quality of service of wireless networks. In this work, we consider the energy-efficient routing with network coding. In the routing process, network coding is employed to reduce the number of data transmissions and yield less energy consumption. This study leads to a new combinatorial optimization problem, namely wireless unsplittable multi-commodity flow with network coding (wUMCFC).
We solve the wUMCFC problem by the integer programming approach. We propose several compact mixed-integer formulations, including one mixed-Boolean quadratic programming (MBQP) formulation, and two mixed-integer linear programming formulations: edge linearization formulation and edge balance formulation. The edge linearization formulation is a direct linearization of the MBQP formulation via McCormick envelopes of bilinear terms, while we prove that the edge balance formulation is stronger than the edge linearization formulation. To scale up with large networks, we derive a path-based reformulation of the edge balance formulation and propose an exact approach based on the branch-and-price algorithm.
The branch-and-price based solver wUMCFC {github.com/lidingxu/wUMCFC} is open source. Our computational tests show the speedup of wUMCFC over CPLEX on large networks. On the simulated instances, the employment of network coding reduces 18.3% of routing energy cost in average.
09:30-10:30 Session 11: Evolutionary and black-box algorithms, I
Location: A1.139
09:30
A First Theoretical Analysis of the Non-Dominated Sorting Genetic Algorithm II (NSGA-II)
ABSTRACT. The non-dominated sorting genetic algorithm II (NSGA-II)~\cite{DebPAM02} is the most intensively used multi-objective evolutionary algorithm (MOEAs) in real-world applications~\cite{ZhouQLZSZ11}. However, in contrast to several simple MOEAs analyzed also via mathematical means~\cite{Brockhoff11bookchapter}, no such study exists for the NSGA-II so far. In this work, we show that mathematical runtime analyses are feasible also for the NSGA-II. As particular results, we prove that with a population size larger than the Pareto front size by a constant factor, the NSGA-II with two classic mutation operators and three different ways to select the parents satisfies the same asymptotic runtime guarantees as the SEMO and GSEMO algorithms on the basic OneMinMax and LeadingOnesTrailingZeroes benchmark functions. However, if the population size is only equal to the size of the Pareto front, then the NSGA-II cannot efficiently compute the full Pareto front (for an exponential number of iterations, the population will always miss a constant fraction of the Pareto front). Our experiments confirm the above findings.
This work is part of the PGMO-funded project \emph{Understanding and Developing Evolutionary Algorithms via Mathematical Runtime Analyses} (PI: Benjamin Doerr). The work of the second author was done during her internship at the LIX (funded by the PGMO project).
10:00
Simulated Annealing: a Review and a New Scheme
ABSTRACT. Finding the global minimum of a non-convex optimization problem is a notoriously hard task appearing in numerous applications, from signal processing to machine learning. Simulated annealing (SA) is a family of stochastic optimization methods where an artificial temperature controls the exploration of the search space while preserving convergence to the global minima. SA is efficient, easy to implement, and theoretically sound, but suffers from a slow convergence rate. The purpose of this work is two-fold. First, we provide a comprehensive overview on SA and its accelerated variants. Second, we propose a novel SA scheme called curious simulated annealing, combining the assets of two recent acceleration strategies. Theoretical guarantees of this algorithm are provided. Its performance with respect to existing methods is illustrated on practical examples.
10:30-11:00Coffee Break
11:00-12:30 Session 12A: Mean field games and applications, II (invited session organized by Guilherme Mazanti and Laurent Pfeiffer)
Location: Amphi I
11:00
MFG Systems with Interactions though the Law of Controls: Existence, Uniqueness and Numerical Simulations
ABSTRACT. This talk is dedicated to a class of games in which agents may interact through their law of states and controls; we use the terminology mean field games of controls (MFGC for short) to refer to this class of games. We first give existence and uniqueness results under various sets of assumptions. We introduce a new structural condition, namely that the optimal dynamics depends upon the law of controls in a Lipschitz way, with a Lipschitz constant smaller than one. In this case, we give several existence results on the solutions of the MFGC system, and one uniqueness result under a short-time horizon assumption. Then, under a monotonicity assumption on the interactions through the law of controls (which can be interpreted as the adaptation of the Lasry-Lions monotonicity condition to the MFGC system), we prove existence and uniqueness of the solution of the MFGC system. Finally, numerical simulations of a model of crowd motion are presented, in which an agent is more likely to go in the mainstream direction (which is in contradiction with the above-mentioned monotonicity condition). Original behaviors of the agents are observed leading to non-uniqueness or queue formation.
11:30
Generalized conditional gradient and learning in potential mean field games
ABSTRACT. We apply the generalized conditional gradient algorithm to potential mean field games and we show its well-posedeness. It turns out that this method can be interpreted as a learning method called fictitious play. More precisely, each step of the generalized conditional gradient method amounts to compute the best-response of the representative agent, for a predicted value of the coupling terms of the game. We show that for the learning sequence δk = 2/(k + 2), the potential cost converges in O(1/k), the exploitability and the variables of the problem (distribution, congestion, price, value function and control terms) converge in O(1/ k), for specific norms.
12:00
Non-smooth Minimal-Time Mean Field Game with State Constraints
ABSTRACT. Introduced in 2006, mean field games (MFG) are models for games with a continuum of interacting agents in which players or agents are indistinguishable and individually neglectable. In this talk, we consider a \emph{non-smooth minimal-time mean field game with state constraints}~\cite{Saeed-Guilherme-state constraint2021}. It has been inspired by crowd motion in which the agents are evolving in a certain domain. Due to the state constraints, which can be seen as hedges, fences, walls, etc., characterization of optimal control is a main issue and not easy to establish. This issue is due to the fact that in such a situation, the value function is not necessarily semiconcave, compared to free state models in which one has indeed the semiconcavity under some suitable smoothness assumptions. Even though there are some techniques in the literature dealing with state constraints, they do not seem to apply for the minimal-time MFG. The main contribution to the topic is therefore to find a new approach which does not rely on semiconcavity. For this purpose, we use a penalization technique in order to apply the Pontryagin Maximum Principle and obtain its consequences for our optimal control problem. Another novelty of this approach is its compatibility with less regularity assumption on the dynamics of the agents.
This is a joint work with Guilherme Mazanti (DISCO Team, Inria Saclay--\^Ile-de-France, L2S--CentraleSup\'elec)
11:00-12:30 Session 12B: Recent advances in Optimization of Energy Markets (invited session organized by Mathias Staudigl)
Location: Amphi II
11:00
A distributed algorithm for economic dispatch of prosumer networks with grid constraints
ABSTRACT. Future electrical distribution grids are envisioned as complex large-scale networks of prosumers (energy consumers with storage and/or production capabilities). In the context of economic dispatch, these prosumers optimize their operating points based on self-interested objectives. Nevertheless, these decisions must be accepted by a network operator, whose responsibility is to ensure a safe operation of the grid.
In this work, we formulate the interaction among prosumers plus the network operator in an economic dispatch problem as a generalized aggregative game, i.e., a set of inter-dependent optimization problems, each of which has a cost function and constraints that depend on the aggregated decisions of prosumers. We consider that each prosumer may have dispatchable generation and storage units as well as peer-to-peer and grid trading capabilities. In addition, we also impose a linear approximation of power-flow equations and operational limits as grid constraints handled by the network operator.
Furthermore, we design a distributed mechanism with convergence guarantee to an economically efficient and operationally safe configuration (i.e., a variational GNE) based on the preconditioned proximal point method. Our algorithm requires each prosumer to communicate with its trading partners and the network operator that handles grid constraints and aggregative variables. We perform numerical experiments on the IEEE 37-bus system to show the importance of having grid constraints and active participation of prosumers as well as to show the scalability of our algorithm.
11:30
On Convex Lower-Level Black-Box Constraints in Bilevel Optimization with an Application to Gas Market Models with Chance Constraints
PRESENTER: Martin Schmidt
ABSTRACT. Bilevel optimization is an increasingly important tool to model hierarchical decision making. However, the ability of modeling such settings makes bilevel problems hard to solve in theory and practice. In this paper, we add on the general difficulty of this class of problems by further incorporating convex black-box constraints in the lower level. For this setup, we develop a cutting-plane algorithm that computes approximate bilevel-feasible points. We apply this method to a bilevel model of the European gas market in which we use a joint chance constraint to model uncertain loads. Since the chance constraint is not available in closed form, this fits into the black-box setting studied before. For the applied model, we use further problem-specific insights to derive bounds on the objective value of the bilevel problem. By doing so, we are able to show that we solve the application problem to approximate global optimality. In our numerical case study we are thus able to evaluate the welfare sensitivity in dependence of the achieved safety level of uncertain load coverage.
12:00
Distribution locational marginal prices via random block-coordinate descent methods
ABSTRACT. The modern distribution network is undergoing an unprecedented reformation, thanks to the increased deployment of distributed energy resources (DERs) in the form of distributed generators, distributed storage, aggregators managing fleets of electric vehicles or groups of prosumers (e.g. owners of photo-voltaic panels).
While the potential benefits of DERs are globally accepted, ill-managed operations of them could lead to significant failures of the desired outcome. Specifically, wrong control strategies could lead to drastic voltage fluctuations and supply-demand imbalances.
With this in mind, a replication of the transmission-level locational marginal prices is much desired. The locational marginal price is the marginal cost of supplying an additional unit of demand at a bus. The price signals differ spatially and temporally, and are used to incentivize DER-generators and consumers to balance supply-demand, support voltage, and minimize system losses. The necessary extension to the distribution network- in the form of distribution locational marginal prices (DLMP)- has received significant attention in the current debate on how to integrate distributed resources into the entire energy system.
Distributed Multi-agent optimization techniques are one powerful approach to enable this envisioned transition towards a fully integrated energy system. A key question in this approach is the effective computation of DLMPs in a fully distributed way with minimal communication and computational complexity. In this paper we present a computational framework, derived from a new randomized block-coordinate descent algorithm for linearly constrained optimization problems, generalizing the state-of-the-art. Preliminary numerical studies on a 15-bus test network show promising performance of our randomized block coordinate descent strategy.
11:00-12:30 Session 12C: Mathematical programming for the nuclear outage planning problem, current challenges (invited session organized by Rodolphe Griset)
Location: A1.116
11:00
Nuclear outage planning problem: medium term formulation and constraint programming
ABSTRACT. French electricity producer EDF owns and operates 56 nuclear reactors. The company must manage and optimize the timing of the outages dedicated to maintenance and recharging of the units. The outage plannings of the various units are linked to each other by local and coupling constraints and must be compatible with the supply/demand equilibrium constraints. To comprehend this complexity, the team in charge of the operational process at EDF uses a decision-support tool developed and maintained by EDF R&D.
The goal of this tool is to propose movements of outages to comply with a viable planning. The proposed movements must be as few and short as possible ; all movements must be justified. The tool is based on a local and heuristic research algorithm. Violated constraints are dealt with one at a time and movements of outages are proposed to solve them accordingly. This method makes it easy to trace back each movement to the violation that led to it. However, this sequential resolution can result into a sub-optimal solution, or even no solution at all. To avoid such an outcome, EDF R&D is working on an alternative method using Constraint Programming (CP) which has always been a promising approach to solve our problem.
By treating the scheduling problem in a global way, the CP method is less likely to return a sub-optimal solution. In addition, as the research is exhaustive, should no solution exist, the method can provide proof of it with certainty. EDF R&D worked the last couple of years to develop a tool that traduces the operational nuclear outage planning problem into a Constraint Satisfaction Problem and solves it with the open source constraint toolkit Gecode. Now that the method has demonstrated its ability to deal with the problem, the next step is to bring explainability to the results (tracing back the movements to the violations, explaining the reasons why the problem has no solution, etc.)
11:30
Nuclear outage planning problem: New challenges on long term horizons
ABSTRACT. The nuclear outage planning problem is of significant importance to EDF as the nuclear production capacity influences the energy management chain at different time horizons. Indeed, the optimisation of the nuclear capacity performed every month on a medium horizon, corresponding to 3 to 5 years, is an essential input for the optimisation of other sources performed at shorter horizons. This problem is quite challenging given the specific operating constraints of nuclear units, the stochasticity of both the demand and non-nuclear units availability. This problem has been submitted to the research community through the euro/roadef challenge 2010. Over the last 15 years, EDF has explored several optimization technics to solve it. The best results had been optained by heuristics, currently used in operation process, but also constraint programming to recover from infeasibilities and mixed-integer programming to take stochasticity into account. On the other side, EDF has to solve this problem on a long time horizon such as 20 to 30 years to take structuring decisions. To solve those long term problems simplifications have to be made in constraints taken into account given the size of the problem.
In this talk, we will link the medium and long term nuclear outage scheduling problem. Emphasis will be made on the current challenges of the long term problem. An adaptation of the model introduced in to solve this kind of problem through column generation will be present.
12:00
Planning Robustness, Research Topics on Optimization under Uncertainty
ABSTRACT. We present the lines of research that will be investigated around the stability of the schedules of power plants. Improving robustness and stability of EDF's nuclear plans is of growing importance for EDF given the uncertainty associated with production fleet availability (nuclear outage durations, nuclear availability and growth of renewable sources). Finding schedules that satisfy all technical and safety constraints that are economically sound and robust to random events combines theoretical difficulties (multi-stage combinatorial optimization problems under uncertainty) and practical issues (lack of precise yet concise formulations and researcher-friendly data sets). This work is motivated by our past experience on the topic. To facilitate the development of advanced methodologies required to solve such problems, we aim at (i) designing stable scheduling models that encompass the outage planning problem, (ii) designing and developing a protocol for evaluating the resulting optimization algorithms in the multi-stage setting (iii) creating a researcher-friendly yet realistic data set and (iv) providing a numerical benchmark of algorithms suitable for approaching these problems. Those algorithms will be inspired by pragmatical approaches and recent developments in these subjects.
11:00-12:30 Session 12D: Optimal control and differential inclusions
Location: A1.122
11:00
Viability and Invariance in a Wasserstein Space
ABSTRACT. Optimal control problems stated in the space of Borel probability measures involve controlled non-local continuity equations whose solutions are curves of probability measures. Such equations appear in probabilistic models describing multi-agent systems. I will provide necessary and sufficient conditions for viability and invariance of proper subsets of the Wasserstein space P2 under controlled continuity equations. Viability means that for every initial condition in the set of constraints having a compact support there exists at least one trajectory of control system starting at it and respecting state constraints forever. Invariance means that every trajectory of control system starting in the set of constraints and having a compact support at the initial time never violates them. To describe these conditions I will introduce notions of contingent cone, proximal normal, directional derivative and Hadamard super/subdifferential on the (merely metric) space P2 in which the linear structure is absent. Results are applied to get existence and uniqueness of continuous solutions to a Hamilton-Jacobi-Bellman equation on the space of probability measures with compact support. These solutions are understood in the viscosity-like sense.
11:30
Optimal Shape of Stellarators for Magnetic Confinement Fusion
PRESENTER: Rémi Robin
ABSTRACT. Nuclear fusion is a nuclear reaction involving the use of light nuclei. In order to use nuclear fusion to produce energy, high temperature plasmas must be produced and confined. This is the objective of devices called tokamaks, steel magnetic confinement chambers having the shape of a toroidal solenoid. The magnetic field lines in tokamaks form helicoidal loops, whose twisting allow the particles in the plasma to be confined. Indeed, twisting allows to counter the vertical drift that would otherwise affect particles, in opposite directions for ions and for electrons.
Unfortunately, the twisting of magnetic field requires a current to be induced within the plasma, inducing intermittency, instabilities, and other technological challenges. A possible alternative solution to correct the problems of drift of magnetically confined plasma particles in a torus is to modify the toroidal shape of the device, yielding to the concept of stellarator. This system has the advantage of not requiring plasma current and therefore of being able to operate continuously; but it comes at the cost of more delicate coils (non-planar coils) and of a more important transport.
Despite the promise of very stable steady-state fusion plasmas, stellarator technology also presents significant challenges related to the complex arrangement of magnetic field coils. These magnetic field coils are particularly expensive and especially difficult to design and fabricate due to the complexity of their spatial arrangement.
In this talk, we are interested in the search for the optimal shape of stellarators, i.e., the best coil arrangement (provided that it exists) to confine the plasma. Let us assume that a target magnetic field B_T is known. It is then convenient to define a coil winding surface (CWS) on which the coils will be located. The optimal arrangement of stellarator coils corresponds then to the determination of a surface (the CWS) chosen to guarantee that the magnetic field created by the coils is as close as possible to the target magnetic field B_T. Of course, it is necessary to consider feasibility and manufacturability constraints. We will propose and study several relevant choices of such constraints.
In this talk we will analyze the resulting problem as follows: first we will establish the existence of an optimal shape, using the so-called reach condition; then we will show the shape differentiability of the criterion and provide the expression of the differential in a workable form; finally, we will propose a numerical method and perform simulations of optimal stellarator shapes. We will also discuss the efficiency of our approach with respect to the literature in this area.
12:00
Inducing strong convergence of trajectories in dynamical systems associated to monotone inclusions with composite structure
ABSTRACT. Zeros of the sum of a maximally monotone operator with a single-valued monotone one can be obtained as weak limits of trajectories of dynamical systems, for strong convergence demanding hypotheses being imposed. We extend an approach due to Attouch, Cominetti and coauthors, where zeros of maximally monotone operators are obtained as strong limits of trajectories of Tikhonov regularized dynamical systems, to forward-backward and forward-backward-forward dynamical systems whose trajectories strongly converge towards towards zeros of such sums of monotone operators under reasonable assumptions. This talk is based on joint work with Radu Ioan Bo\c{t}, Dennis Meier and Mathias Staudigl.
11:00-12:30 Session 12E: Stochastic Optimization I
Location: A1.128
11:00
Stochastic Dynamic Programming for Maritime Operations Planning
ABSTRACT. We consider an industrial planning problem under uncertainty arising in maritime operations. We propose a stochastic multi-stage optimization model and a dynamic programming solving scheme. Results are presented through a visualization tool.
11:30
Stochastic Dual Dynamic Programming and Lagrangian decomposition for seasonal storage valuation
ABSTRACT. The increase in the share of renewable energy sources in the power system, especially intermittent ones like solar and wind, brings several challenges. As both energy production and demand vary throughout the year (in winter, for example, there is a reduction in the supply of solar energy while the demand for energy for heating increases), energy storage becomes a relevant factor to take advantage of excess energy produced in certain seasons of the year and respond to increased energy demand in others. An important system for seasonal storage is that formed by cascading hydroelectric reservoirs. The management of such systems is a challenging task which hinges on a good assessment of the future value of stored energy (water) in these systems. In order to assess the value of water, and thus be able to properly manage these reservoir systems, a large-scale multi-stage stochastic problem spanning a year must be solved, where each stage is a weekly unit-commitment problems. Since the unit-commitment problems are non-convex in nature, they make the seasonal storage valuation problem unsuitable for what would otherwise be the most natural approach to solve it, i.e., Stochastic Dual Dynamic Programming. In this work we exploit the natural "convexification" capabilities of Lagrangian relaxation to devise a Stochastic Dual Dynamic Programming approach to the seasonal storage valuation problem where the Lagrangian Dual of the single-stage subproblems is solved (using a Bundle-type approach), which corresponds to solving the "convexified relaxation" of the original problem. This is known to be at least as tight as, and typically strictly tighter than, the standard way to convexify a stochastic MINLP amenable to the Stochastic Dual Dynamic Programming approach, i.e., the continuous relaxation. We perform preliminary experiments, made possible by the use of the SMS++ software framework (https://smspp.gitlab.io/) for large-scale problems with multiple nested forms of structure, which show that the tighter "convexified" relaxation corresponding to the use of the Lagrangian relaxation does indeed provide solutions that deliver a more adequate water value.
12:00
Adaptive grid refinement for optimization problems with probabilistic/robust (probust) constraints
ABSTRACT. \noindent Probust constraints are joint probabilistic constraints over a continuously indexed random inequality system: $\mathbb{P}(g(x,\xi,u)\geq 0\quad\forall u\in U)\geq p,$ where $x$ is a decision vector, $\xi$ is a random vector defined on some probability space $(\Omega,\mathcal{A},\mathbb{P})$, $u$ is a (nonstochastic) unceratin parameter living in an uncertainty set $U$, $g$ is a constraint mapping and $p\in (0,1]$ some probability level. This class of constraints has been introduced in \cite{graheihen} and further studied in \cite{adel,farhenhoem,ackhenper}. It finds potential applications in optimization problems where uncertain parameteres fo stochastic and nonstochastic nature are present simultaneously, e.g. random loads and friction coefficients in gas transport problems \cite{adel,graheihen}. Another major problem class is PDE constrained optimization with probabilistic state constraints uniformly over a space or time domain \cite{farhenhoem}. The robust or semi-infinite nature of the random inequality system inside the probability poses challenges both on the theoretical (nonsmoothness of the probability depending on decisions \cite{ackhenper}) and the algorithmic side. The talk presents an adaptive two-stage grid refinement algorithm for dealing numerically with such constraints along with a convergence result \cite{berheihenschwien}. The algorithm is applied to a simple problem of water reservoir management.
\begin{footnotesize} \begin{thebibliography}{00} \bibitem{adel} D. Adelh{\"u}tte, D. A{\ss}mann, T. Gonz{\'a}lez Grand{\'o}n, M. Gugat, H. Heitsch, R. Henrion, F. Liers, S. Nitsche, R. Schultz, M. Stingl, and D. Wintergerst, Joint model of probabilistic-robust (probust) constraints with application to gas network optimization, to appear in: Vietnam Journal of Mathematics, appeared online. \bibitem{berheihenschwien} H. Berthold, H. Heitsch, R. Henrion and J. Schwientek, On the algorithmic solution of optimization problems subject to probabilistic/robust (probust) constraints, WIAS-Preprint 2835, Weierstrass Institute Berlin, 2021. \bibitem{farhenhoem} M.H. Farshbaf-Shaker, R. Henrion and D. H{\"o}mberg, Properties of chance constraints in infinite dimensions with an application to PDE constrained optimization, Set-Valued and Variational Analysis, 26 (2018) 821-841. \bibitem{graheihen} T. Gonz{\'a}lez Grand{\'o}n, H. Heitsch and R. Henrion, A joint model of probabilistic /robust constraints for gas transport management in stationary networks, Computational Management Science, 14 (2017) 443-460. \bibitem{ackhenper} W. Van Ackooij, R. Henrion and P. P{\'e}rez-Aros, Generalized gradients for probabilistic/robust (probust) constraints, Optimization, 69 (2020) 1451-1479. \end{thebibliography} \end{footnotesize}
11:00-12:30 Session 12F: Network Design
Location: A1.133
11:00
Approximability of Robust Network Designin Undirected Graphs
ABSTRACT. In this talk we will present results published in [1] on the approximability of the robustnetwork design problem in undirected graphs. Given the dynamic nature of traffic in telecom-munication networks, we investigate the variant of the network design problem where we haveto determine the capacity to reserve on each link so that each demand vector belonging to apolyhedral set can be routed. The objective is either to minimize congestion or a linear cost.Routing is assumed to be fractional and dynamic (i.e., dependent on the current traffic vector).We first prove that the robust network design problem with minimum congestion cannot beapproximated within any constant factor. Then, using the ETH conjecture, we get aΩ(lognlog logn)lower bound for the approximability of this problem. This implies that theO(logn)approxi-mation ratio of the oblivious routing established by Räcke in 2008 [4] is tight. Using Lagrangerelaxation, we obtain a new proof of theO(logn)approximation. An important consequenceof the Lagrange-based reduction and our inapproximability results is that the robust networkdesign problem with linear reservation cost cannot be approximated within any constant ratio.This answers a long-standing open question of Chekuri (2007) [2]. We also give another proofof the result of Goyal et al. (2009) [3] stating that the optimal linear cost under static routingcan beΩ(logn)more expensive than the cost obtained under dynamic routing.
11:30
Optimizing over Nonlinear Networks with Duality Cuts
ABSTRACT. The optimal design or operation of distribution networks of commodities like water, gas, or electricity, involves the discrete choice of a static or dynamic network topology while ensuring the existence of nonconvex potential-flow relations. From strong duality conditions observed on these relations, we show how to derive cuts for the MINLP formulation of such problems.
12:00
On the Equivalence between the Blocker and the Interdiction Problem Applied to the Maximum Flow
ABSTRACT. The Maximum Flow Blocker Problem (MFBP) is a bi-level optimization problem where the leader is a blocker problem and the follower is a Maximum Flow problem. The MFBP consists in determining a minimum weight subset of arcs, called interdicted arcs, to be removed from the graph such that the remaining maximum flow is no larger than a given threshold. The non-interdicted graph refers to the one induced by the set of arcs which are not interdicted. In telecommunication networks, a major issue consists in detecting and facing anomalies. In this context, the MFBP has a key role in resilience analysis, since it gives the maximum number of anomalies that can occur such that the network continue to be operational. To the best of our knowledge, the MFBP has not been addressed in the literature but a closely related problem, called the Maximum Flow Interdiction Problem (MFIP), has been largely studied. This problem consists in finding a set of interdicted arcs that respect a given budget and such that the maximum flow, remaining in the non-interdicted graph, is minimized. We refer the interested reader to [1] and [2] for a complete overview of the MFIP. In [1], the author proposes a compact integer formulation for solving the problem. In [2], the authors show that any solution of the MFIP is contained in a cut of the graph. We prove that this result also holds for the MFBP. More generally, we show that the two problems, MFBP and MFIP, are equivalent. In other words, one can transform an instance of the MFBP to one of the MFIP and vice versa. Using this relation, we develop a compact integer formulation for the MFBP and obtain some complexity results.
11:00-12:30 Session 12G: Bilevel Programming – Applications to Energy Management I (invited session organized by Miguel Anjos and Luce Brotcorne)
Location: A1.134
11:00
Market Integration of Behind-the-Meter Residential Energy Storage
ABSTRACT. A new business opportunity beckons with the emergence of prosumers. We propose an innovative business model to harness the potential of aggregating behind-the-meter residential storage in which the aggregator compensates participants for using their storage system on an on-demand basis. A bilevel optimization model is developed to evaluate the potential of this proposed business model and determine the optimal compensation scheme for the participants. A Texas case study using real-world data confirms the year-round profitability of the model, showing that participants could earn on average nearly $1500 per year, and the aggregator could make an average profit of nearly$2000 per participant annually. The case study confirms that the proposed business model has potential, and that the main driver for a successful implementation is a suitable setting of the compensation paid to participants for using their energy storage system.
11:30
Intracity Placement of Charging Stations to Maximise Electric Vehicle Adoption
ABSTRACT. We present a new model for finding the placement of electric vehicle (EV) charging stations across a multi-year period which maximises EV adoption. This work is an extension of that done by Anjos et. al., and allows for a granular modelling of user characteristics and the value they attribute to the characteristics of the charging stations. This is achieved via the use of discrete choice models, with the users choosing a primary recharging method among the public charging stations or home charging. In case none of the above options are sufficiently attractive for the user, they can select the opt-out option, which indicates they do not purchase an EV. Instead of embedding an analytical probability model in the formulation, we adopt the approach proposed by Paneque et. al., which uses simulation and pre-computes error terms for each option available to users for a given number of scenarios. This results in a bilevel model, with the upper level placing charging stations and the users in the lower level selecting the option which maximises their utility. Alternatively, under the assumption that the charging stations are uncapacitated (in terms of the number of users who can select each option of the discrete choice model), we can use the pre-computed error terms to calculate the users covered by each charging station. This allows for an efficient maximum covering model, which can find optimal or near-optimal solutions in an intracity context much more effectively than the bilevel formulation.
12:00
Hierarchical Coupled Driving-and-Charging Model of Electric Vehicles, Stations and Grid Operators
ABSTRACT. The decisions of operators from both the transportation and the electrical systems are coupled due to Electric Vehicle (EV) users' actions. Thus, decision-making requires a model of several interdependent operators and of EV users' both driving and charging behaviors. Such a model is suggested for the electrical system in the context of commuting, which has a typical trilevel structure. At the lower level of the model, a congestion game between different types of vehicles gives which driving paths and charging stations (or hubs) commuters choose, depending on travel duration and energy consumption costs. At the middle level, a Charging Service Operator sets the charging prices at the hubs to maximize the difference between EV charging revenues and electricity supplying costs. These costs directly depend on the supplying contract chosen by the Electrical Network Operator at the upper level of the model, whose goal is to reduce grid costs. This trilevel optimization problem is solved using an optimistic iterative algorithm and simulated annealing. The sensitivity of this trilevel model to exogenous parameters such as the EV penetration and an incentive from a transportation operator is illustrated on realistic urban networks. This model is compared to a standard bilevel model in the literature (only one operator).
11:00-12:30 Session 12H: Semidefinite programming and optimal power flow
Location: A1.139
11:00
ACOPF: Nonsmooth optimization to improve the computation of SDP bounds
ABSTRACT. The Alternating-Current Optimal Power Flow (ACOPF) problem models the optimization of power dispatch in an AC electrical network. Obtaining global optimality certificates for large-scale ACOPF instances remains a challenge. In the quest for global optimality certificates, the semidefinite programming (SDP) relaxation of the ACOPF problem is a major asset since it is known to produce tight lower bounds. To improve the scalability of the SDP relaxation, state-of-the-art approaches exploit the sparse structure of the power grid by using a clique decomposition technique. Despite this advance, the clique-based SDP relaxation remains difficult to solve for large-scale instances: numerical instabilities may arise when solving this convex optimization problem. These difficulties cause two issues (i) this could reduce the accuracy of the calculation of the SDP bound (ii) obtaining a solution with non-zero dual feasibility errors implies that the calculated relaxation value is not a certified.
This work tackles both issues with an original approach. We reformulate the Lagrangian dual of the ACOPF, whose value equals the value of the SDP relaxation, as a concave maximization problem with the following properties: (i) it is unconstrained (ii) the objective function is partially separable. Based on this new formulation, we present how to obtain a certified lower bound from any dual vector, whether feasible or not in the classical dual SDP problem. Our new formulation is solved with a tailored polyhedral bundle method that exploits the sparse structure of the problem. We use this algorithm as a post-processing step, after solving the SDP relaxation with the state-of-the-art commercial interior point solver MOSEK. For many instances from the PGLib-OPF library, this post-processing significantly reduces the computed duality gap.
11:30
A Hierarchical Control Approach for Power LossMinimization and Optimal Power Flow within aMeshed DC Microgrid
ABSTRACT. This work expands a hierarchical control scheme to account for cost minimization and battery scheduling within a meshed DC microgrid together with the power loss minimization of the central power distribution network. The control strategy is divided into three layers: i) the high level solves a continuous-time optimization problem which minimizes the DC-bus power loss and the electricity cost from the external grid power purchase through the use of differential flatness with B-splines parametrization; ii) the middle level employs a tracking Model Predictive Control(MPC) method which mitigates the discrepancies among the optimal and the actual profiles; iii)the low level controller handles the switching operations within the converters. The proposed approach is validated in simulation for a meshed DC microgrid system under realistic scenarios.
12:00
Clique Merging Algorithms to Solve SemidefiniteRelaxations of Optimal Power Flow Problems
ABSTRACT. Semidefinite Programming (SDP) is a powerful technique to compute tight lower bounds for Optimal Power Flow (OPF) problems. Even using clique decomposition techniques, semidefinite relaxations are still computationally demanding. However, there are many different clique decompositions for the same SDP problem and they are not equivalent in terms of computation time. In this paper, we propose a new strategy to compute efficient clique decompositions with a clique merging heuristic. This heuristic is based on two different estimates of the computational burden of an SDP problem: the size of the problem and an estimation of a per-iteration cost for a state-of-the-art interior-point algorithm. We compare our strategy with other algorithms on MATPOWER instances and we show a significant decrease in solver time. In the last part of the talk we present recent developments on how to incorporate machine learning techniques to automatically identify effective clique decompositions.
12:30-14:15Lunch Break
14:15-15:45 Session 13A: Games, equilibria and applications
Location: A1.139
14:15
Privacy Impact on Generalized Nash Equilibrium in Peer-to-Peer Electricity Market
ABSTRACT. We consider a peer-to-peer electricity market, where agents hold private information that they might not want to share. The problem is modeled as a noncooperative communication game, which takes the form of a Generalized Nash Equilibrium Problem, where the agents determine their randomized reports to share with the other market players, while anticipating the form of the peer-to-peer market equilibrium. In the noncooperative game, each agent decides on the deterministic and random parts of the report, such that (a) the distance between the deterministic part of the report and the truthful private information is bounded and (b) the expectation of the privacy loss random variable is bounded. This allows each agent to change her privacy level. We characterize the equilibrium of the game, prove the uniqueness of the Variational Equilibrium and provide a closed form expression of the privacy price. We illustrate the results numerically on the 14-bus IEEE network.
14:45
A Bi-level Optimization Problem to Estimate Enzyme Catalytic Rates from a Proteomic-constrained Metabolic Model
ABSTRACT. Despite their crucial role to understand cellular metabolism, most of the enzyme catalytic rates are still unknown or suffer large uncertainties. Here, we propose a method to estimate catalytic rates from a metabolic network and proteomic data. We search catalytic rate values which maximize enzyme usages, given that metabolic fluxes should maximize the growth rate under the enzyme allocation constraints. This leads to solve a bi-level optimization problem which has the following structure : the contraints of the upper problem involve the value function of the lower problem which is linear. To solve it, we use the Karush-Kuhn-Tucker (KKT) conditions for the lower problem which allows to obtain a Mathematical Program with Complementarity Constraints (MPCC) for the bi-level problem. Optimality conditions then involve complementary constraints issued from complementary slackness in the KKT conditions that are non-convex and generically fail to satisfy the usual assumptions (such as constraint qualification). The mathematical program is thus solved using a relaxation method. The validity of the proposed method is highlighted with simulated data from a coarse-grained model.
15:15
Coupled queueing and charging game model with power capacity optimization
ABSTRACT. With the rise of Electric Vehicles (EVs), the demand for parking spots equipped with plugging devices in the charging stations (CSs) is tremendously increasing. This motivates us to study how to improve the Quality of Charging Service (QoX) at these stations. CSs are characterized by their number of parking spots, the maximum power that can be delivered by the station to an EV at one charging point and the power scheduling between the charging points. The latter is based on a modified Processor Sharing (PS) rule, i.e EVs benefit from the maximum power when the number of charging EVs (charging points in use) is sufficiently low, otherwise the total power is uniformly shared. In our model, EVs arrive at the CS according to a Poisson process with a random quantity of energy needed to fully charge their battery. An EV can occupy a charging point without consuming power: each EV has a random parking time and leaves the parking spot only when its parking time expires. We model the arrival of EVs into two CSs. Each new arriving EV determines strategically which CS to join based on a QoX criterion which is the expected probability to leave the chosen station with full battery level. The number of charging EVs at each instant follows a Markov Process and the latter expected probability can be explicitly determined depending on parameters of the system [1]. The strategic decision problem is studied using a queueing game framework [2] and properties of the equilibrium of the game are obtained as well as bounds on the Price of Anarchy (PoA)[3]. Finally our results are illustrated on a particular use-case in which a Charging Station Operator (CSO) is managing both CSs. With a limited total quantity of power for the two CSs, the CSO decides how to share it between the two CSs in order to maximize the QoX at the equilibrium of the game between EVs. An analytical solution of the optimization problem is given. Some numerical illustrations corresponding to a realistic case are provided in order to deepen understand insights of the stochastic model proposed, as the PoA. The results of this work have been published in [4].
References [1] A. Aveklouris, M. Vlasiou, and B. Zwart,Bounds and limit theorems for a layered queueingmodel in electric vehicle charging, Queueing Systems, 93:83–137, 2019. [2] R. Hassin, and M. Haviv,To queue or not to queue: Equilibrium behavior in queueingsystems, Kluwer, 2003. [3] T. Roughgarden,Selfish Routing and the Price of Anarchy, MIT Press, 2006. [4] A. Dupont, Y. Hayel, T. Jiménez, O. Beaude, C. Wan,Coupled queueing and charging gamemodel with energy capacity optimization, in proc. of ASMTA, 2021
14:15-15:45 Session 13B: Optimal control and applications, I (invited session organized by Jean-Baptiste Caillau andYacine Chitour)
Location: Amphi I
14:15
On the asymptotic behaviour of the value function in optimal control problems
ABSTRACT. The turnpike phenomenon stipulates that the solution of an optimal control problem in large time, remains essentially close to a steady-state of the dynamics, itself being the optimal solution of an associated static optimal control problem (see [2]). We use this property to propose an asymptotic expansion of the value function as the horizon T is large. Firstly, we prove the result in the linear quadratic case, using the Hamiltonian structure of the Pontryagin Maximum Principle extremal equations and some basic results of LQ theory. Then, based on some results obtained in [3], the result is generalized to a large class of nonlinear systems provided they satisfy the (strict) dissipativity property.
14:45
On the Convergence of Sequential Convex Programming for Non-Linear Optimal Control
ABSTRACT. Sequential Convex Programming (SCP) has recently gained significant popularity as an effective method for solving complex non-convex optimal control problems in aerospace and robotics. However, the theoretical analysis of SCP has received limited attention, and it is often restricted to either deterministic or finite-dimensional formulations. In this talk, we investigate conditions for the convergence of SCP when applied to solve a wide range of non-deterministic optimal control problems. As a future research direction, we plan to extend this analysis to optimal control problems whose dynamics are infinite-dimensional.
15:15
Extremal determinants of Sturm-Liouville operators on the circle
ABSTRACT. In a recent work [1], the functional determinant of Sturm-Liouville operators on an interval has been studied. We continue this study for such operators on the circle [2, 3]. In the simplest case where the potential is essentially bounded, some structure of the extremal potentials persist despite the change of topology.
This project is supported by the FMJH Program PGMO and EDF-Thales-Orange (extdet PGMO grant).
References
[1] Aldana, C.; Caillau, J.-B.; Freitas, P. Maximal determinants of Schrödinger operators. J. Ec. polytech. Math. 7 (2020), 803–829.
[2] On the Determinant of Elliptic Differential and Finite Difference Operators in Vector Bundles over S1. Burghelea, D.; Friedlander, L.; Kappeler, T. Commun. Math. Phys. 138, 1–18 (1991).
[3] Forman, R. Determinants, Finite-Difference Operators and Boundary Value Problems. Commun. Math. Phys. 147, 485–526 (1992).
14:15-15:45 Session 13C: Game theory and beyond
Location: A1.116
14:15
EPTAS for Stable Allocations in Matching Games
ABSTRACT. Gale-Shapley introduced a matching problem between two sets of agents where each agent on one side has a preference over the agents of the other side and proved, algorithmically, the existence of a pairwise stable matching (i.e. no uncoupled pair can be better off by matching). Shapley-Shubik, Demange-Gale, and many others extended the model by allowing monetary transfers. In this paper, we study an extension [1] where matched couples obtain their payoffs as the outcome of a strategic game and more particularly a solution concept that combines Gale-Shapley pairwise stability with a constrained Nash equilibrium notion (no player can increase its payoff by playing a different strategy without violating the participation constraint of the partner). Whenever all couples play zero-sum matrix games, strictly competitive bi-matrix games, or infinitely repeated bi-matrix games, we can prove that a modification of some algorithms in [1] converge to an $\varepsilon$-stable allocation in at most $O(\frac{1}{\varepsilon})$ steps where each step is polynomial (linear with respect to the number of players and polynomial of degree at most 5 with respect to the number of pure actions per player).
14:45
Refined approachability algorithms and application to regret minimization with global costs
ABSTRACT. Blackwell's approachability is a framework where two players, the Decision Maker and the Environment, play a repeated game with vector-valued payoffs. The goal of the Decision Maker is to make the average payoff converge to a given set called the target. When this is indeed possible, simple algorithms which guarantee the convergence are known. This abstract tool was successfully used for the construction of optimal strategies in various repeated games, but also found several applications in online learning. By extending an approach proposed by (Abernethy et al., 2011), we construct and analyze a class of Follow the Regularized Leader algorithms (FTRL) for Blackwell's approachability which are able to minimize not only the Euclidean distance to the target set (as it is often the case in the context of Blackwell's approachability) but a wide range of distance-like quantities. This flexibility enables us to apply these algorithms to closely minimize the quantity of interest in various online learning problems. In particular, for regret minimization with ℓp global costs, we obtain the first bounds with explicit dependence in p and the dimension d.
15:15
Algorithmic aspect of core nonemptiness and core stability
ABSTRACT. In 1953, von Neumann and Morgenstern developed the concept of stable sets as a solution for cooperative games. Fifteen years later, Gillies popularized the concept of the core, which is a convex polytope when nonempty. In the next decade, Bondareva and Shapley formulated independently a theorem describing a necessary and sufficient condition for the nonemptiness of the core, using the mathematical objects of minimal balanced collections.
We start our investigations of the core by implementing Peleg’s (1965) inductive method for generating the minimal balanced collections as a computer program, and then, an algorithm that checks if a game admits a nonempty core or not.
In 2021, Grabisch and Sudhölter formulated a theorem describing a necessary and suf- ficient condition for a game to admit a stable core, using several mathematical objects and concepts such as nested balancedness, balanced subsets which generalize balanced collections, exact and vital coalitions, etc.
In order to reformulate the aforementioned theorem as an algorithm, a set of coalitions has to be found that, among other conditions, determines the core of the game. We study core stability, geometric properties of the core, and, in particular, such core determining set of coalitions. Furthermore, we describe a procedure for checking whether a subset of a set of nonnegative real numbers is balanced. Finally, we implement the algorithm as a computer program that allows to check if an arbitrary game admits a stable core or not.
14:15-15:45 Session 13D: Variational analysis
Location: A1.122
14:15
Characterizing the error bound properties of functions in metrizable topological vector spaces
ABSTRACT. The notion of error bound is a widely used concept in applied mathematics and thereby has received a lot of attention in the last years and decades. Indeed, it plays a key role in areas including variational analysis, mathematical programming, convergence properties of algorithms, sensitivity analysis, designing solution methods for non-convex quadratic problems, penalty functions, optimality conditions, weak sharp minima, stability and well-posedness of solutions, (sub)regularity and calmness of set-valued mappings, and subdifferential calculus. In this regard, Hoffman’s estimation, as the starting point of the theory of error bounds, is very important and plays a considerable role in optimization. In this presentation, we will provide some sufficient criteria under which the function f, acting either between metrizable topological vector spaces or between metrizable subsets of some topological vector spaces, satisfies the error bound property at a point $\bar{x}$. Then, we will discuss the Hoffman estimation and obtain some results for the estimate of the distance to the set of solutions to a system of linear equalities. Some applications of this presentation are illustrated by some examples. The talk is based on an extension of [1]. References [1] M. Abassi, M. Théra, Strongly regular points of mappings, Fixed Point Theory Algorithms Sci. Eng. p. Paper No. 14 (2021), URL https://doi.org/10.1186/ s13663- 021- 00699- z
14:45
Equilibrium Theory in Riesz Spaces under Imperfect Competition
ABSTRACT. In this paper, we consider an economy with infinitely many commodities and market failures such as increasing returns to scale and external effects or other regarding preferences. The commodity space is a Riesz space with possibly no quasi-interior points in the positive cone in order to include most of the relevant commodity spaces in economics. We extend previous definitions of the marginal pricing rule as the polar of a new tangent cone to the production set at a point of its boundary. In order to prove the existence of a marginal pricing equilibria, we investigate the necessary extension of the so-called properness condition on non-convex technologies to deal with the emptiness of the quasi-interior of the positive cone. With this contribution, we obtain the same level of generality as the contribution on competitive equilibrium with convex production sets like in \cite{florenzano_production_2001,Podczeck1996}.
\begin{thebibliography}{00}
\bibitem{florenzano_production_2001}Florenzano, M., Marakulin, M.: Production equilibria in vector lattices, Economic Theory, Springer, 17, 577-598 (2001).
\bibitem{Podczeck1996}Podczeck, K.: Equilibria in vector lattices without ordered preferences or uniform properness, Journal of Mathematical Economics, Elsevier, 25, 465-485 (1996).
\end{thebibliography}
15:15
Constant Along Primal Rays Conjugacies
ABSTRACT. The so-called l0 pseudonorm counts the number of nonzero components of a vector. It is standard in sparse optimization problems. However, as it is a discontinuous and nonconvex function, the l0 pseudonorm cannot be satisfactorily handled with the Fenchel conjugacy.
In this talk, we present the Euclidean Capra-conjugacy, which is suitable for the l0 pseudonorm, as this latter is "convex" in the sense of generalized convexity (equal to its biconjugate). We immediately derive a convex factorization property (the l0 pseudonorm coincides, on the unit sphere, with a convex lsc function) and variational formulations for the l0 pseudonorm.
In a second part, we provide different extensions: the above properties hold true for a class of conjugacies depending on strictly-orthant monotonic norms (including the Euclidean norm); they hold true for nondecreasing functions of the support (including the l0 pseudonorm); more generally, we will show how Capra-conjugacies are suitable to provide convex lower bounds for zero-homogeneous functions; we will also point out how to tackle the rank matrix function.
Finally, we present mathematical expressions of the Capra-subdifferential of the l0 pseudonorm, and graphical representations. This opens the way for possible suitable algorithms that we discuss.
14:15-15:45 Session 13E: Stochastic optimization II
Location: A1.128
14:15
Risk-Averse Stochastic Programming and Distributionally Robust Optimization Via Operator Splitting
ABSTRACT. This work deals with a broad class of convex optimization problems under uncertainty. The approach is to pose the original problem as one of finding a zero of the sum of two appropriate monotone operators, which is solved by the celebrated Douglas-Rachford splitting method. The resulting algorithm, suitable for risk-averse stochastic programs and distributionally robust optimization with fixed support, separates the random cost mapping from the risk function composing the problem's objective. Such a separation is exploited to compute iterates by alternating projections onto different convex sets. Scenario subproblems, free from the risk function and thus parallelizable, are projections onto the cost mappings' epigraphs. The risk function is handled in an independent and dedicated step consisting of evaluating its proximal mapping that, in many important cases, amounts to projecting onto a certain ambiguity set. Variables get updated by straightforward projections on subspaces through independent computations for the various scenarios. The investigated approach enjoys significant flexibility and opens the way to handle, in a single algorithm, several classes of risk measures and ambiguity sets.
14:45
Parametric stochastic optimization for day-ahead and intraday co-management of a power unit
ABSTRACT. We consider renewable power units equipped with a battery and engaged in day-ahead load scheduling. In this context, the unit manager must submit a day-ahead power production profile prior to every operating day, and is engaged to deliver power accordingly. During the operating day, the unit manager is charged penalties if the delivered power differs from the submitted pro- file. First, we model the problem of computing the optimal production profile as a parametric multistage stochastic optimization problem. The production profile is modeled as a parameter which affects the value of the intra-day management of the power unit, where the photovoltaic production induces stochasticity. Second, we introduce parametric value functions for solving the problem. Under convexity and differentiability assumptions, we are able to compute the gradients of these value functions with respect to the parameter. When the differentiability assumption breaks, we propose two approximation methods. One is based on a smooth approx- imation with the Moreau envelope, the other one is based on a polyhedral approximation with the SDDP algorithm. We showcase applications in the context of the French non-interconnected power grid and benchmark our method against a Model Predictive Control approach.
15:15
Multistage Stochastic Linear Programming and Polyhedral Geometry
ABSTRACT. We show that the multistage linear problem (MSLP) with an arbitrary cost distribution is equivalent to a MSLP on a finite scenario tree. Indeed, be studying the polyhedral structure of MSLP, we show that the cost distribution can be replaced by a discrete cost distribution, taking the expected value of the cost at each cone of a normal fan. In particular, we show that the expected cost-to-go functions are polyhedral and affine on the cells of a chamber complex, which is independent of the cost distribution. This allow us to write a simplex method on the chamber complex to solve 2 stage linear problems. We also give new complexity results.
14:15-15:45 Session 13F: Large scale optimization
Location: A1.133
14:15
Rates of Convergences of First-Order Algorithms for Assignment Problems.
ABSTRACT. We compare several accelerated and non-accelerated first-order algorithms for computational optimal transportation or optimal assignment problems. Up to one exception, based on the notion of area convexity, which produces a solution optimal up to an error e in O( log n ||C|| / e) iterations, we find that all up-to-date methods share the same O((√n log n||C|| / e) )⋅n2) complexity (n2 being the cost per iteration) where n is the size of the marginals and ||C||the largest element in the nxn cost matrix. We discuss the merits of smoothing, accelerated methods, and line-search approaches. We also introduce a Bregman kernel/prox functions, which makes the algorithms stable and improves the quality of the solutions at a very low cost.
14:45
Majorization-Minimization algorithms : New convergence results in a stochastic setting
ABSTRACT. Many problems in image processing or in machine learning can be formulated as the minimization of a loss function F whose nature is not totally deterministic. In the differentiable context, such situation arises when the gradient ∇F can only be evaluated up to a stochastic error. In this talk, we will focus our attention on the impact of noisy perturbations in the gradient on the convergence of majorization-minimization (MM) approaches. The latter consist of efficient and effective optimization algorithms that benefit from solid theoretical foundations and great practical performance [2, 4]. Our talk presents the convergence analysis for a versatile MM scheme called SABRINA (StochAstic suBspace majoRIzation-miNimization Algorithm) [1], generalizing our previous work [3]. Almost sure convergence results and convergence rate properties are obtained. The practical performances of SABRINA are illustrated by means of several numerical experiments.
15:15
FISTA restart using an automatic estimation of the growth parameter
ABSTRACT. In this paper, we propose a novel restart scheme for FISTA (Fast Iterative Shrinking- Threshold Algorithm). This method which is a generalization of Nesterov’s accelerated gradient algorithm is widely used in the field of large convex optimization problems and it provides fast convergence results under a strong convexity assumption. These convergence rates can be extended for weaker hypotheses such as the Łojasiewicz property but it requires prior knowledge on the function of interest. In particular, most of the schemes providing a fast convergence for non-strongly convex functions satisfying a quadratic growth condition involve the growth parameter which is generally not known. Recent works show that restarting FISTA could ensure a fast convergence for this class of functions without requiring any geometry parameter. We improve these restart schemes by providing a better asymptotical convergence rate and by requiring a lower computation cost. We present numerical results emphasizing that our method is efficient especially in terms of computation time.
14:15-15:15 Session 13G: Bilevel Programming – Applications to Energy Management II (invited session organized by Miguel Anjos and Luce Brotcorne)
Location: Amphi II
14:15
A Quadratic Regularization for the Multi-Attribute Unit-Demand Envy-Free Pricing Problem
ABSTRACT. We consider a profit-maximizing model for pricing contracts as an extension of the unit-demand envy-free pricing problem: customers aim to choose a contract maximizing their utility based on a reservation price and multiple price coefficients (attributes). Classical approaches suppose that the customers have deterministic utilities; then, the response of each customer is highly sensitive to price since it concentrates on the best offer. To circumvent the intrinsic instability of deterministic models, we introduce a quadratically regularized model of customer's response, which leads to a quadratic program under complementarity constraints (QPCC). This provides an alternative to the classical logit approach, still allowing to robustify the model, while keeping a strong geometrical structure. In particular, we show that the customer's response is governed by a polyhedral complex, in which every polyhedral cell determines a set of contracts which is effectively chosen. Moreover, the deterministic model is recovered as a limit case of the regularized one. We exploit these geometrical properties to develop a pivoting heuristic, which we compare with implicit or non-linear methods from bilevel programming, showing the effectiveness of the approach. Throughout the paper, the electricity provider problem is our guideline, and we present a numerical study on this application case.
14:45
Optimal pricing for electric vehicle charging with customer waiting time
ABSTRACT. We propose a bilevel optimization model to determine the optimal pricing for electric vehicle within a public charging station system by taking into account the waiting time of customers. We assume that the locations of the charging station are fixed, and model the waiting time without using classical queuing theory. In the upper level of the bilevel model, the decision maker sets the price of electricity and the amount of energy available at each station. The latter quantity depends on the amount of renewable energy available at each time period. In the lower level, electric vehicle users select a charging station and a time for recharging the vehicle, depending on individual preferences. We present two linear models for this problem and explore how to solve them using mixed integer bilevel optimization methods.
15:15
A Multi-Leader-Follower Game for Energy Demand-Side Management
PRESENTER: Didier Aussel
ABSTRACT. A Multi-Leader-Follower Game (MLFG) corresponds to a bilevel problem in which the upper level and the lower level are defined by Nash non cooperative competition among the players acting at the upper level (the leaders) and, at the same time, among the ones acting at the lower level (the followers). MLFGs are known to be complex problems, but they also provide perfect models to describe hierarchical interactions among various actors of real life problems. In this work, we focus on a class of MLFGs modelling the implementation of demand-side management in an electricity market through price incentives, leading to the so-called Bilevel Demand-Side Management problem (BDSM. Our aim is to propose some innovative reformulations/numerical approaches to efficiently tackle this difficult problem. Our methodology is based on the selection of specific Nash equilibria of the lower level through a precise analysis of the intrinsic characteristics of (BDSM).
15:45-16:15Coffee Break
16:15-17:45 Session 14A: Applications in Energy
Location: A1.133
16:15
Optimisation du planning de production et de l'approvisionnement en énergie d'un site industriel
ABSTRACT. Nous proposons un modèle stochastique en variables mixtes permettant d'optimiser simultanément le planning de production d'un site industriel ainsi que son approvisionnement en énergie. La difficulté de ce problème réside en deux points : d'une part les variables binaires permettant de considérer les contraintes de production ; d'autre part la stochasticité de l'apport en énergie renouvelable.
Nous commençons par une étude de la version déterministe du problème, où nous nous intéressons à la complexité du problème, à sa convexité et à son paramétrage.
Nous envisageons d'abord deux méthodes classiques pour résoudre ce problème : la programmation dynamique et model predictif control. Ces méthodes requièrent un temps de calcul important. C'est pourquoi nous explorons différentes heuristiques qui exploitent la résolution rapide de la relaxation continue du problème par l'algorithme stochastic dual dynamic programming (SDDP) : une première heuristique de réparation, qui consiste à arrondir de manière intelligente la solution relâchée continue ; une seconde heuristique qui utilise une sous-approximation des fonctions de coûts futurs par les coupes trouvées par SDDP. Enfin, alors que la programmation dynamique décompose un problème à T pas de temps à T problèmes à 1 pas de temps, pour mieux prendre en compte les contraintes d'intégrité, on considère une décomposation par programmation dynamique en problème sur deux pas de temps. Cette idée peut être appliquée à l'heuristique avec des coupes.
Les approches proposées sont numériquement testées sur un problème réel
16:45
Stochastic Two-stage Optimisation for the Operation of a Multi-services Virtual Power Plant
ABSTRACT. We present a method to operate virtual power plants (VPP) under uncertainty of energy production. This work is funded by the EU-SysFlex H2020 project which aims at demonstrating the feasibility of operating a multi services VPP. The demonstration focuses on the control of a 12-MW wind farm, a 2.3-MW/1h Battery, and photovoltaic panels to provide frequency support and energy arbitrage. In our study case, the uncertainty comes from the wind farm and photovoltaic (PV) panels production. The VPP needs to take production decisions on energy markets: the bids are made on SPOT and FCR (Frequency Containment Reserve) markets the day before at 12. If the offer is not respected, penalties must be paid or the intraday market (private sale) can be used. The approach chosen is two stage stochastic programming [1] because the MILP formulation is possible and the problem has two levels of decisions. We usually use MILP solvers because they are efficient and optimality is proven. The uncertainty is represented with scenarios of wind generation and PV generation of equal probability. These were created with real time probabilist forecast (quantiles) and copula [2], generated from history to render the time dependency. The first stage variables, which are variables having the same value for each scenario, are the bids on markets in our modelling. The second stage variables are the recourse variables, decisions taken after knowing which scenario occurs, such as the decisions on intraday market. We compare the stochastic schedule with a deterministic schedule and a perfect schedule, which is the one made knowing in advance what will occur. The comparison is made under a financial criteria such as gain and penalties on markets. We use a simulation platform [3] to simulate what would really occur and get more precise financial results.
17:15
Hydropower Datasets For SMS++
ABSTRACT. Having realistic data is crucial to adequately test optimisation software, but unfortunately such data is usually not publicly available and/ or cannot be published. In this project cascading hydro system datasets based on hydro systems in Norway were created using only data that is publicly available (online). The datasets are in NetCdf4 format to be easily used with SMS++ uses, but model spreadsheets and Python code make it possible to generate the datasets from Excel. Although the datasets created are based on the structure of "HydroUnitBlock" blocks in SMS++, the data could be useful for research and applications not related to SMS++.
16:15-17:45 Session 14B: Methodological aspects in OR and ML
Location: Amphi II
16:15
Learning Structured Approximations of Operations Research Problems
ABSTRACT. The design of algorithms that leverage machine learning alongside combinatorial optimization techniques is a young but thriving area of operations research. If trends emerge, the literature has still not converged on the proper way of combining these two techniques or on the predictor architectures that should be used. We focus on operations research problems for which no efficient algorithms are known, but that are variants of classic problems for which ones efficient algorithm exist. Elaborating on recent contributions that suggest using a machine learning predictor to approximate the variant by the classic problem, we introduce the notion of structured approximation of an operations research problem by another. We provide a generic learning algorithm to fit these approximations. This algorithm requires only instances of the variant in the training set, unlike previous learning algorithms that also require the solution of these instances. Using tools from statistical learning theory, we prove a result showing the convergence speed of the estimator, and deduce an approximation ratio guarantee on the performance of the algorithm obtained for the variant. Numerical experiments on a single machine scheduling and a stochastic vehicle scheduling problem from the literature show that our learning algorithm is competitive with algorithms that have access to optimal solutions, leading to state-of-the-art algorithms for the variant considered.
16:45
Linear Lexicographic Optimization and Preferential Bidding System
ABSTRACT. We propose a method based on linear lexicographic optimization for solving the problem of assigning pairings to pilots in an airline using a preferential bidding system. Contrary to what is usually done for this problem, this method does not follow a sequential procedure. It relies instead on an approach that is closer to the standards of mathematical programming: first solve the continuous relaxation; second find a good integer solution.
17:15
Solution of Fractional Quadratic Programs on the Simplex and Application to the Eigenvalue Complementarity Problem
ABSTRACT. Fractional programming is a classical topic in optimization with a large number of application areas such as economics, finance, communication, and engineering, and has extensively been studied for several decades [2]. Although much effort has been made to widen the area of applications by introducing generalized models such as the sum-of-ratios problem and its variants, see e.g., [2], Fractional Quadratic Programs (FQP) and Fractional Linear Quadratic Program (FLQP) are still considered to be the most fundamental and important classes of fractional programs. A global optimal solution of the FLQP can be computed by the classical Dinkelbach's method [1]. We introduce an implementation of this algorithm for computing a global maximum of an FLQP on the simplex that employs an efficient Block Principal Pivoting algorithm in each iteration. A new sequential FLQP algorithm is introduced for computing a stationary point of an FQP on the simplex. Global convergence for this algorithm is established. This sequential algorithm is recommended for the solution of the symmetric Eigenvalue Complementarity Problem (EiCP), as this problem is equivalent to the computation of a stationary point of an FQP on the simplex.
Our computational experience indicates that the implementation of Dinkelbach's method for the FLQP and the sequential FLQP algorithm are quite effcient in practice. We extend the sequential FLQP algorithm also for solving the nonsymmetric EiCP. Since this method solves a special Variational Inequality (VI) Problem in each iteration, it can be considered as a sequential VI algorithm. Despite the convergence of this algorithm has yet to be established, preliminary computational experience indicates that the sequential VI algorithm is quite a promising technique for the solution of the nonsymmetric EiCP.
References [1] W. Dinkelbach, On nonlinear fractional programming, Management Science 13 (1967) pp. 492--498. [2] S.C. Fang, D.Y. Gao, R.L. Sheu, W. Xing, Global optimization for a class of fractional programming problems. J Glob Optim 45 (2009) pp. 337--353.
16:15-17:45 Session 14C: Optimal control and applications, II (invited session organized by Jean-Baptiste Caillau and Yacine Chitour)
Location: Amphi I
16:15
Worst exponential decay rate for degenerate gradient flows subject to persistent excitation
ABSTRACT. In this paper we estimate the worst rate of exponential decay of degenerate gradient flows $\dot x = -S x$, issued from adaptive control theory \cite{anderson1986}. Under \emph{persistent excitation} assumptions on the positive semi-definite matrix $S$, we provide upper bounds for this rate of decay consistent with previously known lower bounds and analogous stability results for more general classes of persistently excited signals. The strategy of proof consists in relating the worst decay rate to optimal control questions and studying in details their solutions.
As a byproduct of our analysis, we also obtain estimates for the worst $L_2$-gain of the time-varying linear control systems $\dot x=-cc^\top x+u$, where the signal $c$ is \emph{persistently excited}, thus solving an open problem posed by A. Rantzer in 1999, cf.~\cite[Problem~36]{Rantzer1999}.
16:45
Optimal control problem on Riemannian manifolds under probability knowledge of the initial condition
ABSTRACT. We consider an optimal control problem on a compact Riemannian manifold M with imper- fect information on the initial state of the system. The lack of information is modeled by a Borel probability measure along which the initial state is distributed. This problem is of fundamental importance both in terms of real world applications and mathematical theory.
Indeed, this layout appears in the modelling of many optimal control problems related to mechanics, robotics or quantum systems. The initial condition is not perfectly known either due to the lack of measurements, errors of measurements, or even due to the nature of the system itself, meaning that the uncertainties are inevitable. As for the theoretical interest, since the optimal control problem with partial information is defined in the 2-Wasserstein space over the Riemannian manifold P2(M), we need to define proper tools in this space to describe the problem.
Similar to optimal control problems with perfect information [2], we introduce the value function of this problem and an associated Hamilton Jacobi Bellman (HJB) equation defined on P2(M). We propose to extend the same techniques commonly used in viscosity theory [1] to the space P2(M) in order to prove that the value function is the unique viscosity solution to the HJB equation. In particular, we want to define a notion of “smooth” solutions to the HJB equation, define the set of test functions for viscosity super and subolutions, prove a local comparison principle that guarantees uniqueness of the viscosity solution and finally prove that the value function verifies a dynamic programming principle that will guarantee existence of the solution.
The main result is that the value function of the problem is the unique viscosity solution to an HJB equation. The notion of viscosity is defined by exploiting the Riemannian-like structure on P2(M).
References [1] Guy Barles, Solutions de viscosité des équations de Hamilton-Jacobi, Collection SMAI, 1994. [2] Martino Bardi and Italo Capuzzo-Dolcetta, Optimal control and viscosity solutions of Hamilton-Jacobi-Bellman equations, Springer Science & Business Media, 2008.
17:15
Fuel Optimal Planetary Landing with Thrust Pointing and Glide-slope Constraints
ABSTRACT. The problem of landing a reusable launch vehicle is usually treated as an optimal controlproblem, with an objective of minimizing fuel consumption. It has been shown thanks to Pontryagin Maximum Principle that the optimal bounded control for the unconstrained 3D problemis in the Bang-Off-Bang form, i.e. a period of full thrust, followed by a period off thrust, andthen full-thrust until touchdown. More representative problems with state and control constraints are often considered only numerically, as in [2] for example. Theoretical results on the structure of the control are rarely provided and problems remain open. Yet, leveraging such results, indirect shooting methods can be used to obtain more precise numerical results than using direct methods.We considered the landing problem with a thrust pointing constraint and a glide-slope constraint on the position of the vehicle [2], which may be imposed for practical and safety reasonsas it prevents the vehicle from getting too close to the ground before reaching the target point.The difficulty when applying Pontryagin Maximum Principle taking into account state or controlconstraints is that the adjoint vectors may be discontinuous, have complex expressions and the control may not be defined everywhere on the optimal trajectory.By applying the version of the maximum principle given in [1, Th. 9.5.1], we have studied the variations of the switching function, after clarifying its derivative and showing that it was nondecreasing. Thus, we have deduced that the optimal control is still in a Bang-Off-Bang form,and there can be singular arcs between the Bang arcs. Moreover, if the glide-slope constraint is changed into a positive altitude constraint, we can show that for generic initial conditions there is no singular arc. Finally, for a simplified 2D problem, we have shown that, apart from the final point, only one contact or one boundary with the state constraint can occur.
16:15-17:45 Session 14D: New numerical methods: from PDE to trajectory planning
Location: A1.116
16:15
Control of state-constrained McKean-Vlasov equations: application to portfolio selection
ABSTRACT. We consider the control of McKean-Vlasov dynamics (or mean-field control) with probabilistic state constraints. We rely on a level-set approach which provides a representation of the value function of the constrained problem through an unconstrained one with exact penalization and running maximum cost. Our approach is then adapted to the common noise setting. Our work extends (Bokanowski, Picarelli, and Zidani, SIAM J Control Optim 54.5 (2016), pp. 2568–2593) and (Bokanowski, Picarelli, and Zidani, Appl Math Optim 71 (2015), pp. 125–163) to a mean-field setting. In contrast with these works which study almost sure constraints, in the case of probabilistic constraints without common noise we don't require existence of optimal controls for the auxiliary problem. The reformulation as an unconstrained problem is particularly suitable for the numerical resolution of the problem, achieved thanks to an extension of a machine learning algorithm from (Carmona, Laurière, arXiv:1908.01613, 2019). An application focuses on a mean-variance portfolio selection problem with conditional value at risk constraint on the wealth. We are also able to directly solve numerically the primal Markowitz problem in continuous time without relying on duality.
16:45
A multilevel fast-marching method
PRESENTER: Shanqing Liu
ABSTRACT. We introduce a numerical method for approximating the solutions for a class of static Hamilton-Jacobi-Bellman equations arising from minimum time state constrained optimal control problems. We are interested in computing the value function and an optimal trajectory from two given points only. We show that this is equivalent to solve the state constrained equation on the subdomain which contains the true optimal trajectories between the two points.
Our algorithm takes advantage of this good property. Instead of finding the optimal trajectories directly, by solving a discretization system in a fine grid, we first approximately find the subdomain containing the optimal trajectories by solving the system in a coarse-grid. Then, we further discretize that subdomain with a fine-grid, and solve on that fine-grid the discretized system. The computation of approximated value functions on each grid is based on the fast-marching method.
We show that using our algorithm, the error estimation for the computation of the value function is as good as the one obtained using the fine grid to discretize the whole domain. Moreover, we show that the number of computation operations as well as the memory allocation needed for an error of "e" is in the order of "(1/e)^d", whereas classical methods need at least "(1/e)^(2d)". Under regularity conditions on the dynamic and value functions, this complexity bound reduces to "(C^d)*log(1/e)".
17:15
A novel mixed-integer description within the combined use of MPC and polyhedral potential field
ABSTRACT. We extend the polyhedral sum function notion to the multi-obstacle case and show that the polyhedral support of its piecewise affine representation comes from an associated hyperplane arrangement. We exploit this combinatorial structure to provide equivalent mixed-integer representations and model a repulsive potential term for subsequent use in a Model Predictive Control (MPC) formulation. Hence, our main goal is to exploit the problem structure in order to simplify the end formulation. Specifically, the main improvements are: i) expanding the sum function, as defined for the single-obstacle case in \cite{Huy2020Ocean} for a multi-obstacle environment; ii) exploit the combinatorial structure of the problem (polyhedral shapes induce a hyperplane arrangement) to provide a novel mixed-integer description of the sum function; iii) implement a repulsive potential field component in a mixed-integer MPC scheme in order to solve the navigation problem.
16:15-17:45 Session 14E: Statistics and optimization
Location: A1.128
16:15
Distributed Zero-Order Optimization under Adversarial Noise
ABSTRACT. We study the problem of distributed zero-order optimization for a class of strongly convex functions. They are formed by the average of local objectives, associated to different nodes in a prescribed network. We propose a distributed zero-order projected gradient descent algorithm to solve the problem. Exchange of information within the network is permitted only between neighbouring nodes. An important feature of our procedure is that it can query only function values, subject to a general noise model, that does not require zero mean or independent errors. We derive upper bounds for the average cumulative regret and optimization error of the algorithm which highlight the role played by a network connectivity parameter, the number of variables, the noise level, the strong convexity parameter, and smoothness properties of the local objectives. The bounds indicate some key improvements of our method over the state-of-the-art, both in the distributed and standard zero-order optimization settings. We also comment on lower bounds and observe that the dependency over certain function parameters in the bound is nearly optimal.
16:45
Méthodes de prediction de vie residuelle sur des données censurés
PRESENTER: Alonso Silva
ABSTRACT. Le cadre d’application de ce travail est l’estimation de durée de vie résiduelle à partir de données de taille moyenne ou de grande taille avec un sous-ensemble (10%-50%) de données censurés à droite. La tâche principale de l’analyse de survie est d’estimer la distribution de probabilité du temps jusqu’à ce qu’un événement d’intérêt se produise. Le phénomène de censure à droite se produit lorsque certains échantillons n’ont pas rencontré l’événement d’intérêt pendant la période de suivi. Nous limitons notre étude à considérer des variables explicatives endogènes et exogènes tant qu’elles sont constantes dans le temps. Les méthodes traditionnelles de régression et classification ne sont pas adaptées pour pouvoir inclure à la fois les aspects événementiels et temporels. Nous proposons de comparer des modèles paramétriques, semi-paramétriques, et d’apprentissage automatique pour estimer des durées de vie résiduelle. Nous avons mené une étude comparative approfondie sur ces différentes méthodes de prédictions de durée de vie utilisant à la fois des données publiques et des données synthétiques. Les données synthétiques permettent de disposer de la vérité terrain des échantillons censurés, de contrôler le nombre d’échantillons disponibles et le type de censure et du rapport de censure. Il ne s’agit pas ici de proposer une nouvelle solution, mais de repérer les solutions existantes les plus prometteuses et d’analyser l’impact de la censure sur les performances obtenues.
17:15
Experimental Comparison of Ensemble Methods and Time-to-Event Analysis Models Through Two Different Scores
ABSTRACT. Time-to-event analysis is a branch of statistic that has increased its popularity during the last decades due to its many different application fields. We present in this paper a comparison between semi-parametric, parametric and machine learning methods through two different scores that were applied to three datasets. We also present an aggregation method for time-to-event analysis that in average outperforms the best predictor. Finally we present simulation results in order to enrich the comparison between the two scores while varying the number of samples and the censored data percentage.
16:15-17:45 Session 14F: Forecasting Customer Load
Location: A1.122
16:15
VIKING: Variational Bayesian Variance Tracking Winning an Electricity Load Forecasting Competition
ABSTRACT. We consider the problem of online time series forecasting, motivated by the prediction of the electricity load. Indeed, forecasting the electricity demand is a crucial task for grid operators and has been extensively studied by the time series community as well as the statistics and machine learning communities. As the behaviour of the consumption evolves in time it is necessary to develop time-varying forecasting models, that is, models that improve themselves sequentially taking into account the most recent observations. Our methods resulted in the winning strategy in a competition on post-covid electricity load forecasting, and we use it as an illustration for our theoretical framework.
First, we present the data set, as well as classical time-invariant forecasting models: auto-regressive, linear regression, generalized additive model, and multi-layer perceptron.
Second, we introduce a state-space representation to obtain adaptive forecasting methods. We start by using this state-space representation with fixed state and space noise variances, in which setting we use the celebrated Kalman filter. We see it as a second-order stochastic gradient descent algorithm on the quadratic loss.
Finally, we introduce a novel approach, named VIKING, to deal with state-space model inference under unknown state and space noise variances. An augmented dynamical model is considered where the variances are represented as latent variables. The inference relies on the variational bayesian approach. We extend the gradient descent parallel to see VIKING as a way to adapt the step size in a second-order stochastic gradient descent.
16:45
Hyperparameter optimization of deep neural networks: Application to the forecasting of energy consumption
ABSTRACT. This project introduces a step-by-step methodology for the Hyperparameter Optimization problem (HPO) of a deep convolutional neural network, applied to the forecasting of energy consumption in France.
The problem is characterized by an expensive, noisy and black box function with a large scale and mixed search space. To tackle HPO we used exploration and exploitation strategies, surrogate based optimization, chaotic optimization and fractal decomposition algorithms. Finally we decided to apply ensemble methods and online aggregation on best solutions found during the optimization phase. Results of our methodology are promising, indeed by automating HPO problem we found better solutions and meta-models than the initial model.
16:15-17:45 Session 14G: Traveling salesman problem
Location: A1.134
16:15
MIP formulations for OWA Traveling Salesman Problems
ABSTRACT. We study a variant of the traveling salesman problem (TSP) where both the total cost of a tour and some balance (fairness) for its edge costs are considered. The main challenge is that the sum aggregation function for the standard version of TSP can not guarantee any degree of balance for the costs of the edges composing the tour. In this paper, using the Generalized Gini Index (GGI), a special case of the ordered weighted averaging (OWA) function, we define a variant of TSP - the OWA TSP where the efficiency and the fairness for edge costs of the solution tour are achievable by minimizing the objective function. Although GGI is a non-linear aggregation function, it can be cast to Mixed-Integer Programs (MIP) by several existing linearization methods [1, 3]. The focus of our work is to exploit the representations of Hamiltonian cycles to reduce the size of MIP formulations for linearizing the OWA TSP and to improve their computational efficiency. As illustrated in this work, our proposal achieves a great reduction in terms of problem size and running time. In addition, we employ the Lagrangian relaxation to design a fast heuristic algorithm to deal with the OWA TSP as done in [2]. The numerical results demonstrate that our method provides rapidly high-quality lower bounds, which might take a lot of time for commercial solvers (e.g. CPLEX, Gurobi). Furthermore, obtained quasi-optimal solutions have optimality gaps less than 1% compared to the known exact solutions.
16:45
Nash Fairness solution for balanced TSP
ABSTRACT. In this paper, we consider a variant of the Traveling Salesman Problem (TSP), called Balanced Traveling Salesman Problem (BTSP). The BTSP seeks to find a tour which has the smallest max-min distance : the difference between the maximum edge cost and the minimum one. We present a Mixed Integer Program (MIP) to find optimal solutions minimizing the max-min distance for BTSP. However, minimizing only the max-min distance may lead to a tour with inefficient total cost in many situations. Hence, we propose a fair way based on Nash equilibrium to inject the total cost into the objective function of the BTSP. We consider a Nash equilibrium as it is defined in a context of fair competition based on proportional-fair scheduling. In the context of TSP, we are interested in solutions achieving a Nash equilibrium between two players: the first aims at minimizing the total cost and the second aims at minimizing the max-min distance. We call such solutions Nash Fairness (NF) solutions. We first show that NF solutions for TSP exists and may be more than one. We then focus on the unique NF solution, called Efficient Nash Fairness (ENF) solution, having the smallest value of total cost. We show that the ENF solution is Pareto-optimal and can be found by optimizing a suitable linear combination of the two players objectives based on Weighted Sum Method. Finally, we propose an iterative algorithm which converges to the ENF solution in a finite number of iterations. Computational results will be presented and commented.
16:15-17:45 Session 14H: Evolutionary and black-box algorithms, II
Location: A1.139
16:15
Parameter-Dependent Performance Bounds for Evolutionary Algorithms
ABSTRACT. Evolutionary algorithms (EAs) are optimization heuristics that generally yield good solutions while requiring comparably few resources, such as time, computing power, or knowledge about the optimization problem. Thanks to their great importance in practice, EAs are the focus of an extensive line of research of empirical and theoretical nature.
Unfortunately, up to date, the discrepancy between empirical and theoretical results is rather large. A major underlying reason is that theory mostly considers the performance of EAs with respect to a single parameter, whereas modern applied EAs typically have various parameters impacting their performance at the same time in different ways. However, the focus of the theory community is about to shift. A few theoretical results that consider multiple parameters mark the beginning of a phase transition from a single- to a multiple-parameter analysis.
In this talk, we look at this challenging and exciting phase transition. Our journey starts with the introduction of the EA framework and typical algorithms that are considered in theoretical analyses. We then look at some results for the analysis of a single parameter, and we learn about the methods used in order to derive such results. At the end, we see how these tools can be leveraged to get results that show how the performance of an EA is affected by multiple parameters at once.
16:45
Towards Trajectory-Based Algorithm Selection
ABSTRACT. Black-box optimization problems require evaluations to assess and to compare the quality of different alternatives. Classical examples for such problems are settings in which the evaluation of a suggested solution requires a computer simulation or a physical experiment, such as a crash test in automotive industry or a clinical study in medicine. Black-box optimization problems can only be solved by sampling-based heuristics, i.e., algorithms which (deterministically or randomly) generate solution candidates, evaluate them, and use the soobtained knowledge to recommend one or more alternatives. Even though the basic underlying principles of these algorithms can be considered similar in nature, their performances on different problem instances in practice can greatly vary. This poses a meta-problem: selecting the most appropriate algorithm when presented with a new, unknown problem instance. In recent years, significant research effort has been put towards automating the algorithm selection process by building and training machine learning models to perform this task. To this end, knowledge about the problem instance has to be extracted beforehand, independently of the optimization process, in order to then be mapped to the performance of different algorithms on the very same instance. This talk will introduce a novel, trajectory-based approach, in which the cost of extracting the problem knowledge is balanced out by using samples that some default solver sees anyway during its run (or a part of its run) to identify the problem instance, and recommend a more appropriate algorithm to switch to. We show that this approach allows for results as reliable as any of the state-of-the-art algorithm selection techniques. This brings the community one step closer to an online algorithm selection model that would be able to track and adapt which algorithm it is using during the optimization process itself, which may result in significant gains in performance, while keeping the computation costs low.
17:15
Multistage Optimization Algorithm using Chaotic Search
ABSTRACT. The chaotic optimization method gets increasing attention as an engineering application of the chaotic dynamical systems. In this poster we undertake a performance analysis for a new class of evolutionary algorithms called chaos optimization algorithm (COA), recently proposed by Caponetto and al. [1], [2], [3]. It was originally proposed to solve nonlinear optimization problems with bounded variables. Different chaotic mapping have been considered, combined with several working strategy. In this work, a chaotic strategy is proposed based on a two-dimensional discrete chaotic attractor. Experiments results showed that the proposed algorithm can achieve good performance.
|
2022-12-05 17:30:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5651730298995972, "perplexity": 943.4140300391257}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711042.33/warc/CC-MAIN-20221205164659-20221205194659-00495.warc.gz"}
|
http://bits.usc.edu/cs104/installing-course-vm.html
|
CSCI 104 - Spring 2018 Data Structures and Object Oriented Design
While you are welcome to install a C++ compiler or integrated development environment natively on your system, or work remotely on aludra.usc.edu (which we consider rather inconvenient), we strongly encourage you to use the Ubuntu virtual machine (VM) specifically provided for this class. The VM that we use is a customized Ubuntu LTS installation that comes with the most recent C++ compiler, libraries, and debuggers. You can install it on your laptop regardless of your operating system, and use it for the entire semester for labs and homework assignments. We will grade everything on this VM's compiler version and environment so it is critical you check your code on this system before submitting. All C++ compilers are NOT the same. The code you write on Visual Studio or XCode (common Windows and Mac development environments) may not run the same way on another system.
• Never install updates on the Linux VM. If it asks you, just say no!
• Never checkout a Git repository into a Dropbox or other sync'ed folder (Google Drive, etc.)
• A version of the VM is installed on the SAL PC's (under the Windows OS)...if you're laptop breaks, use those PCs. See below for details
• If you can't seem to connect to the Internet on your VM but your laptop OS can, simply reboot the VM (not your laptop). Most of the time this will allow your VM to reconnect.
• We will grade your assignments on the VM. If you want to use your own compiler/environment like Visual Studio, etc. you should ALWAYS bring your code over and test it on the VM before submission. If your code does not compile on the VM, we will not try to fix it and you may get a 0. Consider yourself officially warned!
• If VirtualBox cannot start your VM after it is imported you may need to enable virtualization in your BIOS which can be accessed on Win8 or 10 by following these steps. Look for the setting called "virtualization", "VT-x" or "AMD-V" and enable them To be able to resize the Virtualbox window and have the display resize appropriately you may have to install the 'Guest Additions' on the VM. See below for details.
Installation instructions
1. To run this virtual machine you will need to download Oracle VirtualBox.
2. After installing VirtualBox, download and install the extension pack, available on the same downloads page. You can install the extension pack by going to File->Preferences->Extensions on Windows or VirtualBox->Preferences->Extensions on Mac. Click on the down arrow on the right side of the window and open the extension pack.
3. Next download the virtual machine image. We recommend using 'curl' which is already installed on Mac and Linux machines. (A Windows version is available here. curl is a command line utility to download files from the Internet. Go to a folder where you want to download the file and start a command prompt (Windows) or Terminal there. Then run the command
curl http://bytes.usc.edu/files/cs103/VM/StudentVM_Spring2018.ova -o student-vm.ova
• Alternatively, the actual link is available here. Using curl is recommended because browser downloads might disconnect unexpectedly.
• [Optional] Download an MD5 hash verification program. Compute the MD5 of the .ova file you downloaded (with such a big file sometimes bits get corrupted that will cause the VM to be unable to install). Verify the MD5 has matches the original MD5 value: 2874fc95716c1e90d256a28c8a4a0dd3
1. Start Virtual Box and choose File...Import. Then select the Ubuntu Virtual Machine (student-vm.ova) you downloaded. Use the default import options.
• Adjust the appropriate amount of base memory. Everything has to be in the green zone:
• Turn 3D Acceleration ON.
• Now click on the Course VM option that now appears in VirtualBox's list and select Start/Run. This will start the VM and bring you to a logon message. (Answer yes or default answer to any dialog box that appears).
3. If you encounter errors starting your VM go to the Troubleshooting Section and then resume these directions.
4. Finishing the setup
• Login with the credentials: username: CS104 Student User password: developer
• Hit Ctrl-Alt-T to start a terminal window where you can type in commands
• Install the Virtual Box Guest Additions as detailed in the Do's and Dont's Section
• Pick and setup a method to backup your files as detailed in the Do's and Dont's Section
• If you haven't worked with Linux, check out a Linux tutorial such as this one (Tutorials 1-6) or possibly this one.
• For starters, work through this tutorial. Start from the beginning and continuing through pointers. Write down any questions or unclear statements. We can discuss them at the beginning of the semester. Also, I have made lecture videos on most of these topics available at this CS Modules Site. Please be sure you know the material covered in the first 3 modules (C++ Introduction and Control Structure and Functions) before coming to class. See the next section for details.
Do’s and Don’ts
• When you shutdown your VM NEVER "Save the State" of the machine but instead "Power off" the machine or send the "Shutdown Signal"
• In your Linux VM do NOT install any updates or upgrades to the OS or other source if it prompts you. Just use the VM as it is.
• DO install the “Guest Additions” to your Linux VM. This will allow you resize the resolution/window and also support shared folders between your Host and Guest OSs. To do this, start your VM and click the Devices Menu..Install Guest Additions. You may have to enter your password (“developer”) or hit ‘Enter’ once or twice, but other than that it will just run and take a few minutes. When complete it will say “Hit Enter to close the window”. At this point restart your VM and everything should be working.
• DO find a way to back up your code on the VM. This is not as important, because you will learn how to use git, a version control system, to maintain and save your code. That will automatically act as a reliable backup option, if used correctly. However, here are alternatives:
1. Dropbox. You can install Dropbox on the Linux VM and in that way your files will automatically be copied and sync’ed with that service. However, please NEVER checkout a Git repository into any folder under Dropbox. It may corrupt your Git repository.
• The installation instructions are given here. As of Oct 27 2014, they say to enter the following commands one after the other in a command prompt:
1. cd ~
3. ~/.dropbox-dist/dropboxd &
4. Follow the default options in the installer.
• This will create a folder /home/cs104/Dropbox (a.k.a. ~/Dropbox). If you keep all of your work in there, it will all be synced. Don’t move or rename the Dropbox folder.
2. Shared Folders. You can use the shared folders feature that is part of the VirtualBox service. Follow these steps to create a shared folder in VirtualBox. (note: “guest” or “VM” means the Linux box that you run your code on, while “host” means the system that you normally run.)
• Make sure that "Guest Additions" is installed.
• Make a folder called “cs104_files” somewhere on your host machine.
• Open VirtualBox and open the settings for the VM.
• Click on the “shared folders” button.
• Click the somewhat-obscure folder with a green + on it to add a shared folder to the list choosing . In the Folder Path box browse to the folder on the host that you want to share/make available (i.e. the “cs104_files” you just created). The Folder Name will automatically populate with the folder you just chose. Make sure “Auto-mount” is checked.
• Press OK.
• Open up the VM.
• Open a Terminal window.
• Type sudo usermod -a -G vboxsf cs104. This gives cs104 the permission to access shared folders.
• Type ln -s /media/sf_cs104_files cs104_files. This creates a new alias in the current folder to where the shared folder was mounted.
• From here you can treat cs104_files just like any other folder on your linux guest.
Troubleshooting
In this section, we briefly go over common problems with VirtualBox and Ubuntu.
• In the “Settings” menu, if there is a sign at the bottom of the window that reads “non-optimal”, it means you have chosen a wrong setting. Hover your mouse over the warning message to get the details.
• The error “Failed to install NtCreateSection monitor” on Windows can be due to a known bug. Try downloading the test build here.
• Error “VT-x features locked or unavailable in MSR”: You need to enable Virtualization for your laptop. If you don’t do this, Ubuntu won’t be able to take advantage of all your CPU power. Usually virtualization is disabled by default on PC laptops and enabled by default on Mac laptops. Here is how to enable it on Windows:
1. Enter the bios settings. This is different from laptop to laptop so you have to Google it and find the instruction for your make and model. For example something like this “Laptop HP dv6 bios virtualization”. Usually, you have to keep pressing F2, F10, or something similar at the very beginning of your laptop power on. This is before Windows starts.
2. Find the “Virtualization” setting in the sub menus and set it to “ON” or “Enable”.
3. Save and Exit.
4. Older laptops might not have a virtualization option. In that case switch back to single-core VM.
5. If problems still persist, try uninstalling VirtualBox 4.3.18 in favor of an older version 4.3.12 or 4.3.14 (start with 4.3.12) available at the Older Build Site. Once you've uninstalled 4.3.18 and reinstalled 4.3.12 or 14, then re-import the VM image (course-vm-2014.ova)
6. If you can't connect to the Internet from your VM, simply try rebooting the VM (not your whole PC). When the wireless connection changes the VM seems to be unable to pick up the new connection w/o a reboot.
Other Options
The virtual machine image is installed on all the Windows PC's in the engineering computing center (SAL). Thus, if you absolutely can't get the VM working on your laptop, you can use one of these computers. Follow the directions below:
1. Boot to Windows (not Mac)
2. Find the VirtualBox icon on the desktop and start the application (not Mac)
3. Many of these machines already have the student-vm imported and ready to run so that you can just start the VM and use it
4. If the student-vm is not already imported you may do so by clicking File..Import Appliance. Then click the browse folder icon to go find the .ova file. Browse to Computer..C:\CS VM\ and pick the latest .ova file
5. Then click import.
6. Once the appliance is imported you can start it and use it
|
2018-05-28 03:14:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1844690591096878, "perplexity": 3376.881274521658}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794870771.86/warc/CC-MAIN-20180528024807-20180528044807-00563.warc.gz"}
|
https://jp.mathworks.com/help/ident/ref/polyest.html
|
# polyest
Estimate polynomial model using time- or frequency-domain data
## Syntax
```sys = polyest(tt,[na nb nc nd nf nk]) sys = polyest(u,y,[na nb nc nd nf nk]) sys = polyest(data,[na nb nc nd nf nk]) sys = polyest(___,Name,Value) sys = polyest(tt,init_sys) sys = polyest(u,y,init_sys) sys = polyest(tt,init_sys) sys = polyest(___, opt) [sys,ic] = polyest(___) ```
## Description
`sys = polyest(tt,[na nb nc nd nf nk])` estimates a polynomial model `sys` using the data contained in the variables of timetable `tt`. The software uses the first Nu variables as inputs and the next Ny variables as outputs, where Nu and Ny are determined from the dimensions of the specified polynomial orders.
`sys` is of the form
`$A\left(q\right)y\left(t\right)=\frac{B\left(q\right)}{F\left(q\right)}u\left(t-nk\right)+\frac{C\left(q\right)}{D\left(q\right)}e\left(t\right)$`
A(q), B(q), F(q), C(q) and D(q) are polynomial matrices. u(t) is the input, and `nk` is the input delay. y(t) is the output and e(t) is the disturbance signal. `na` ,`nb`, `nc`, `nd` and `nf` are the orders of the A(q), B(q), C(q), D(q) and F(q) polynomials, respectively.
To select specific input and output channels from `tt`, use name-value syntax to set `'InputName'` and `'OutputName'` to the corresponding timetable variable names.
`sys = polyest(u,y,[na nb nc nd nf nk])` uses the time-domain input and output signals in the comma-separated matrices `u`,`y`. The software assumes that the data sample time is 1 second. To change the sample time, set `Ts` using name-value syntax.
`sys = polyest(data,[na nb nc nd nf nk])` uses the time-domain or frequency-domain data in the data object `data`.
`sys = polyest(___,Name,Value)` estimates a polynomial model with additional attributes of the estimated model structure specified by one or more `Name,Value` arguments. You can use this syntax with any of the previous input-argument combinations.
`sys = polyest(tt,init_sys)` estimates a polynomial model using the linear system `init_sys` to configure the initial parameterization for estimation using the timetable `tt`.
`sys = polyest(u,y,init_sys)` uses the matrix data `u`,`y` for estimation..
`sys = polyest(tt,init_sys)` uses the data object `data`, for estimation.
`sys = polyest(___, opt)` estimates a polynomial model using the option set, `opt`, to specify estimation behavior.
`[sys,ic] = polyest(___)` returns the estimated initial conditions as an `initialCondition` object. Use this syntax if you plan to simulate or predict the model response using the same estimation input data and then compare the response with the same estimation output data. Incorporating the initial conditions yields a better match during the first part of the simulation.
## Input Arguments
`tt` Estimation data, specified as a `timetable` that uses a regularly spaced time vector. `tt` contains variables representing input and output channels. For multiexperiment data, `tt` is a cell array of timetables of length `Ne`, where `Ne` is the number of experiments The software determines the number of input and output channels to use for estimation from the dimensions of the specified polynomial orders. The input/output channel selection depends on whether the `'InputName'` and `'OutputName'` name-value arguments are specified. If `'InputName'` and `'OutputName'` are not specified, then the software uses the first Nu variables of `tt` as inputs and the next Ny variables of `tt` as outputs.If `'InputName'` and `'OutputName'` are specified, then the software uses the specified variables. The number of specified input and output names must be consistent with Nu and Ny.For functions that can estimate a time series model, where there are no inputs, `'InputName'` does not need to be specified. For more information about working with estimation data types, see Data Types in System Identification Toolbox. `u`, `y` Estimation data, specified for SISO systems as a comma-separated pair of Ns-by-1 real-valued matrices that contain uniformly sampled input and output time-domain signal values. Here, Ns is the number of samples. For MIMO systems, specify `u`,`y` as an input/output matrix pair with the following dimensions: `u` — Ns-by-Nu, where Nu is the number of inputs.`y` — Ns-by-Ny, where Ny is the number of outputs. For multiexperiment data, specify `u`,`y` as a pair of 1-by-Ne cell arrays, where Ne is the number of experiments. The sample times of all the experiments must match. For time series data, which contains only outputs and no inputs, specify `y` only. For more information about working with estimation data types, see Data Types in System Identification Toolbox. `data` Estimation data. For time-domain estimation, `data` is an `iddata` object containing the input and output signal values. You can estimate only discrete-time models using time-domain data. For estimating continuous-time models using time-domain data, see `tfest`. For frequency-domain estimation, `data` can be one of the following: Recorded frequency response data (`frd` (Control System Toolbox) or `idfrd`) `iddata` object with its properties specified as follows:`InputData` — Fourier transform of the input signal`OutputData` — Fourier transform of the output signal`Domain` — ‘Frequency’It may be more convenient to use `oe` or `tfest` to estimate a model for frequency-domain data. `na` Order of the polynomial A(q). `na` is an Ny-by-Ny matrix of nonnegative integers. Ny is the number of outputs, and Nu is the number of inputs. `na` must be zero if you are estimating a model using frequency-domain data. `nb` Order of the polynomial B(q) + 1. `nb` is an Ny-by-Nu matrix of nonnegative integers. Ny is the number of outputs, and Nu is the number of inputs. `nc` Order of the polynomial C(q). `nc` is a column vector of nonnegative integers of length Ny. Ny is the number of outputs. `nc` must be zero if you are estimating a model using frequency-domain data. `nd` Order of the polynomial D(q). `nd` is a column vector of nonnegative integers of length Ny. Ny is the number of outputs. `nd` must be zero if you are estimating a model using frequency-domain data. `nf` Order of the polynomial F(q). `nf` is an Ny-by-Nu matrix of nonnegative integers. Ny is the number of outputs, and Nu is the number of inputs. `nk` Input delay in number of samples, expressed as fixed leading zeros of the B polynomial. `nk` is an Ny-by-Nu matrix of nonnegative integers. `nk` must be zero when estimating a continuous-time model. `opt` Estimation options. `opt` is an options set, created using `polyestOptions`, that specifies estimation options including: Estimation objectiveHandling of initial conditionsNumerical search method to be used in estimation `init_sys` Linear system that configures the initial parameterization of `sys`. You obtain `init_sys` by either performing an estimation using measured data or by direct construction. If `init_sys` is an `idpoly` model, `polyest` uses the parameters and constraints defined in `init_sys` as the initial guess for estimating `sys`. Use the `Structure` property of `init_sys` to configure initial guesses and constraints for A(q), B(q), F(q), C(q), and D(q). For example: To specify an initial guess for the A(q) term of `init_sys`, set `init_sys.Structure.A.Value` as the initial guess.To specify constraints for the B(q) term of `init_sys`:Set `init_sys.Structure.B.Minimum` to the minimum B(q) coefficient values.Set `init_sys.Structure.B.Maximum` to the maximum B(q) coefficient values.Set `init_sys.Structure.B.Free` to indicate which B(q) coefficients are free for estimation. If `init_sys` is not an `idpoly` model, the software first converts `init_sys` to a polynomial model. `polyest` uses the parameters of the resulting model as the initial guess for estimation. If `opt` is not specified, and `init_sys` is created by estimation, then the estimation options from `init_sys.Report.OptionsUsed` are used.
### Name-Value Arguments
Specify optional pairs of arguments as `Name1=Value1,...,NameN=ValueN`, where `Name` is the argument name and `Value` is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.
Before R2021a, use commas to separate each name and value, and enclose `Name` in quotes.
`InputName` Input channel names for timetable data, specified as a string, a character vector, or an array or cell array of strings or character vectors. By default, the software interprets all but the last variable in `tt` as input channels. When you want to select a subset of the timetable variables as input channels use `'InputName'` to identify them. For example, `sys = polyest(tt,__,'InputName',["u1" "u2"])` selects the variables `u1` and `u2` as the input channels for the estimation. `OutputName` Output channel names for timetable data, specified as a string, a character vector, or an array or cell array of strings or character vectors. By default, the software interprets the last variable in `tt` as the sole output channel. When you want to select a subset of the timetable variables as output channels, use `'OutputName'` to identify them. For example, `sys = polyest(tt,__,'OutputName',["y1" "y3"])` selects the variables `y1` and `y3` as the output channels for the estimation. `Ts` Sample time, specified as the comma-separated pair consisting of `'Ts'` and the sample time in seconds. When you use matrix-based data (`u`,`y`), you must specify `'Ts'` if you require a sample time other than the assumed sample time of 1 second. To obtain the data sample time for a timetable `tt`, use the timetable property `tt.Properties.Timestep`. Example: `polyest(umat1,ymat1,___,'Ts',0.08)` computes a model with sample time of 0.08 seconds. `IODelay` Transport delays. `IODelay` is a numeric array specifying a separate transport delay for each input/output pair. For continuous-time systems, specify transport delays in the time unit stored in the `TimeUnit` property. For discrete-time systems, specify transport delays in integer multiples of the sample time, `Ts`. For a MIMO system with `Ny` outputs and `Nu` inputs, set `IODelay` to a `Ny`-by-`Nu` array. Each entry of this array is a numerical value that represents the transport delay for the corresponding input/output pair. You can also set `IODelay` to a scalar value to apply the same delay to all input/output pairs. Default: `0` for all input/output pairs `InputDelay` Input delay for each input channel, specified as a scalar value or numeric vector. For continuous-time systems, specify input delays in the time unit stored in the `TimeUnit` property. For discrete-time systems, specify input delays in integer multiples of the sample time `Ts`. For example, ```InputDelay = 3``` means a delay of three sample times. For a system with `Nu` inputs, set `InputDelay` to an `Nu`-by-1 vector. Each entry of this vector is a numerical value that represents the input delay for the corresponding input channel. You can also set `InputDelay` to a scalar value to apply the same delay to all channels. Default: 0 `IntegrateNoise` Logical vector specifying integrators in the noise channel. `IntegrateNoise` is a logical vector of length Ny, where Ny is the number of outputs. Setting `IntegrateNoise` to `true` for a particular output results in the model: `$A\left(q\right)y\left(t\right)=\frac{B\left(q\right)}{F\left(q\right)}u\left(t-nk\right)+\frac{C\left(q\right)}{D\left(q\right)}\frac{e\left(t\right)}{1-{q}^{-1}}$` Where, $\frac{1}{1-{q}^{-1}}$ is the integrator in the noise channel, e(t). Use `IntegrateNoise` to create an ARIMAX model. For example, ```load iddata1 z1; z1 = iddata(cumsum(z1.y),cumsum(z1.u),z1.Ts,'InterSample','foh'); sys = polyest(z1, [2 2 2 0 0 1],'IntegrateNoise',true);```
## Output Arguments
`sys`
Polynomial model, returned as an `idpoly` model. This model is created using the specified model orders, delays, and estimation options.
If `data.Ts` is zero, `sys` is a continuous-time model representing:
`$Y\left(s\right)=\frac{B\left(s\right)}{F\left(s\right)}U\left(s\right)+E\left(s\right)$`
Y(s), U(s) and E(s) are the Laplace transforms of the time-domain signals y(t), u(t) and e(t), respectively.
Information about the estimation results and options used is stored in the `Report` property of the model. `Report` have the following fields:
Report FieldDescription
`Status`
Summary of the model status, which indicates whether the model was created by construction or obtained by estimation.
`Method`
Estimation command used.
`InitialCondition`
Handling of initial conditions during model estimation, returned as one of the following values:
• `'zero'` — The initial conditions were set to zero.
• `'estimate'` — The initial conditions were treated as independent estimation parameters.
• `'backcast'` — The initial conditions were estimated using the best least squares fit.
This field is especially useful to view how the initial conditions were handled when the `InitialCondition` option in the estimation option set is `'auto'`.
`Fit`
Quantitative assessment of the estimation, returned as a structure. See Loss Function and Model Quality Metrics for more information on these quality metrics. The structure has the following fields:
FieldDescription
`FitPercent`
Normalized root mean squared error (NRMSE) measure of how well the response of the model fits the estimation data, expressed as the percentage fitpercent = 100(1-NRMSE).
`LossFcn`
Value of the loss function when the estimation completes.
`MSE`
Mean squared error (MSE) measure of how well the response of the model fits the estimation data.
`FPE`
Final prediction error for the model.
`AIC`
Raw Akaike Information Criteria (AIC) measure of model quality.
`AICc`
Small-sample-size corrected AIC.
`nAIC`
Normalized AIC.
`BIC`
Bayesian Information Criteria (BIC).
`Parameters`
Estimated values of model parameters.
`OptionsUsed`
Option set used for estimation. If no custom options were configured, this is a set of default options. See `polyestOptions` for more information.
`RandState`
State of the random number stream at the start of estimation. Empty, `[]`, if randomization was not used during estimation. For more information, see `rng`.
`DataUsed`
Attributes of the data used for estimation, returned as a structure with the following fields.
FieldDescription
`Name`
Name of the data set.
`Type`
Data type.
`Length`
Number of data samples.
`Ts`
Sample time.
`InterSample`
Input intersample behavior, returned as one of the following values:
• `'zoh'` — Zero-order hold maintains a piecewise-constant input signal between samples.
• `'foh'` — First-order hold maintains a piecewise-linear input signal between samples.
• `'bl'` — Band-limited behavior specifies that the continuous-time input signal has zero power above the Nyquist frequency.
`InputOffset`
Offset removed from time-domain input data during estimation. For nonlinear models, it is `[]`.
`OutputOffset`
Offset removed from time-domain output data during estimation. For nonlinear models, it is `[]`.
`Termination`
Termination conditions for the iterative search used for prediction error minimization, returned as a structure with the following fields:
FieldDescription
`WhyStop`
Reason for terminating the numerical search.
`Iterations`
Number of search iterations performed by the estimation algorithm.
`FirstOrderOptimality`
$\infty$-norm of the gradient search vector when the search algorithm terminates.
`FcnCount`
Number of times the objective function was called.
`UpdateNorm`
Norm of the gradient search vector in the last iteration. Omitted when the search method is `'lsqnonlin'` or `'fmincon'`.
`LastImprovement`
Criterion improvement in the last iteration, expressed as a percentage. Omitted when the search method is `'lsqnonlin'` or `'fmincon'`.
`Algorithm`
Algorithm used by `'lsqnonlin'` or `'fmincon'` search method. Omitted when other search methods are used.
For estimation methods that do not require numerical search optimization, the `Termination` field is omitted.
For more information on using `Report`, see Estimation Report.
`ic`
Estimated initial conditions, returned as an `initialCondition` object or an object array of `initialCondition` values.
• For a single-experiment data set, `ic` represents, in state-space form, the free response of the transfer function model (A and C matrices) to the estimated initial states (x0).
• For a multiple-experiment data set with Ne experiments, `ic` is an object array of length Ne that contains one set of `initialCondition` values for each experiment.
If `polyest` returns `ic` values of `0` and the you know that you have non-zero initial conditions, set the `'InitialCondition'` option in `polyestOptions` to `'estimate'` and pass the updated option set to `polyest`. For example:
```opt = polyestOptions('InitialCondition','estimate') [sys,ic] = polyest(data,[nb nc nd nf nk],opt)```
The default `'auto'` setting of `'InitialCondition'` uses the `'zero'` method when the initial conditions have a negligible effect on the overall estimation-error minimization process. Specifying `'estimate'` ensures that the software estimates values for `ic`.
For more information, see `initialCondition`. For an example of using this argument, see Obtain Initial Conditions.
## Examples
collapse all
Estimate a model with redundant parameterization. That is, a model with all polynomials ($A$, $B$, $C$, $D$, and $F$) active.
`load iddata2 z2;`
Specify the model orders and delays.
```na = 2; nb = 2; nc = 3; nd = 3; nf = 2; nk = 1;```
Estimate the model.
`sys = polyest(z2,[na nb nc nd nf nk]);`
Estimate a regularized polynomial model by converting a regularized ARX model.
`load regularizationExampleData.mat m0simdata;`
Estimate an unregularized polynomial model of order 20.
`m1 = polyest(m0simdata(1:150),[0 20 20 20 20 1]);`
Estimate a regularized polynomial model of the same order. Determine the Lambda value by trial and error.
```opt = polyestOptions; opt.Regularization.Lambda = 1; m2 = polyest(m0simdata(1:150),[0 20 20 20 20 1],opt);```
Obtain a lower-order polynomial model by converting a regularized ARX model and reducing its order. Use `arxregul` to determine the regularization parameters.
```[L,R] = arxRegul(m0simdata(1:150),[30 30 1]); opt1 = arxOptions; opt1.Regularization.Lambda = L; opt1.Regularization.R = R; m0 = arx(m0simdata(1:150),[30 30 1],opt1); mr = idpoly(balred(idss(m0),7));```
Compare the model outputs against the data.
```opt2 = compareOptions('InitialCondition','z'); compare(m0simdata(150:end),m1,m2,mr,opt2);```
Load input/output data and create cumulative sum input and output signals for estimation.
```load iddata1 z1 data = iddata(cumsum(z1.y),cumsum(z1.u),z1.Ts,'InterSample','foh');```
Specify the model polynomial orders. Set the orders of the inactive polynomials, $D$ and $F$, to `0`.
```na = 2; nb = 2; nc = 2; nd = 0; nf = 0; nk = 1;```
Identify an ARIMAX model by setting the `'IntegrateNoise'` option to `true`.
`sys = polyest(data,[na nb nc nd nf nk],'IntegrateNoise',true);`
Estimate a multi-output ARMAX model for a multi-input, multi-output data set.
```load iddata1 z1 load iddata2 z2 data = [z1 z2(1:300)];```
`data` is a data set with 2 inputs and 2 outputs. The first input affects only the first output. Similarly, the second input affects only the second output.
Specify the model orders and delays. The `F` and `D` polynomials are inactive.
```na = [2 2; 2 2]; nb = [2 2; 3 4]; nk = [1 1; 0 0]; nc = [2;2]; nd = [0;0]; nf = [0 0; 0 0];```
Estimate the model.
`sys = polyest(data,[na nb nc nd nf nk]);`
In the estimated ARMAX model, the cross terms, which model the effect of the first input on the second output and vice versa, are negligible. If you assigned higher orders to those dynamics, their estimation would show a high level of uncertainty.
Analyze the results.
```h = bodeplot(sys); showConfidence(h,3)```
The responses from the cross terms show larger uncertainty.
`load iddata1ic z1i`
Estimate a polynomial model `sys` and return the initial conditions in `ic`.
```na = 2; nb = 2; nc = 3; nd = 3; nf = 2; nk = 1; [sys,ic] = polyest(z1i,[na nb nc nd nf nk]); ic```
```ic = initialCondition with properties: A: [7x7 double] X0: [7x1 double] C: [0 0 0 0 0 0 1] Ts: 0.1000 ```
`ic` is an `initialCondition` object that encapsulates the free response of `sys`, in state-space form, to the initial state vector in `X0`. You can incorporate `ic` when you simulate `sys` with the `z1i` input signal and compare the response with the `z1i` output signal.
## Alternatives
• To estimate a polynomial model using time-series data, use `ar`.
• Use `polyest` to estimate a polynomial of arbitrary structure. If the structure of the estimated polynomial model is known, that is, you know which polynomials will be active, then use the appropriate dedicated estimating function. For examples, for an ARX model, use `arx`. Other polynomial model estimating functions include, `oe`, `armax`, and `bj`.
• To estimate a continuous-time transfer function, use `tfest`. You can also use `oe`, but only with continuous-time frequency-domain data.
## Version History
Introduced in R2012a
expand all
|
2022-09-25 17:03:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 12, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5915907025337219, "perplexity": 1392.6464521620187}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334591.19/warc/CC-MAIN-20220925162915-20220925192915-00526.warc.gz"}
|
https://mathoverflow.net/questions/234878/lagrangian-foliation
|
Lagrangian foliation
Let $(M,\omega)$ be a sympletic manifold and $\{ \cdot, \cdot \}$ the corresponding Poisson-bracket. Assuming $M$ is completely integrable w.r.t $f=f_1$, so we find $n = \frac{1}{2}\dim M$ functions $f_1, \dots , f_n \colon M \to \mathbb{R}$ such that they are functionally independent and mutually Poisson-commute on an open and everywhere dense subset of $M$. We write $F := (f_1, \dots, f_n) \colon M \to \mathbb{R}^n$.
For my question I'll assume that the functions $f_1, \dots, f_n$ are functionally independent everywhere on $M$. So $F$ is a submersion and the levelsets of $F$ are lagrangian submanifolds (if I'm not mistaken). So the levelsets of $F$ define a Lagrangian Foliation of $M$.
If we now consider some additional functions $g_1, \dots, g_k$ on $M$ such that $\{f_i,g_j\}=\{g_i,g_j\}=0$, what do we know about the levelsets of $(F,g_1, \dots, g_k) \colon M \to \mathbb{R}^{n+k}$. Do we know, that the levelsets give us the same foliation as given by the levelsets of $F$?
• You can locally build Darboux coordinates out of these $f_i$ and conjugate variables. – Ben McKay Mar 30 '16 at 18:11
|
2021-04-16 19:41:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9842403531074524, "perplexity": 205.30609795137113}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038089289.45/warc/CC-MAIN-20210416191341-20210416221341-00511.warc.gz"}
|
https://physics.paperswithcode.com/paper/quantum-algorithms-to-matrix-multiplication
|
## Quantum Algorithms to Matrix Multiplication
29 Jul 2018 · Shao Changpeng ·
In this paper, we study quantum algorithms of matrix multiplication from the viewpoint of inputting quantum/classical data to outputting quantum/classical data. The main target is trying to overcome the input and output problem, which are not easy to solve and many quantum algorithms will encounter, to study matrix operations in quantum computer with high efficiency... And solving matrix multiplication will be the first step. We propose three quantum algorithms to matrix multiplication based on swap test, SVE and HHL. From the point of making fewer assumptions, swap test method works the best than the other two. We also show that the quantum algorithm of matrix multiplication with classical input and output data by swap test achieves the best complexity $\widetilde{O}(n^2/\epsilon)$ with no assumptions. This is proved by giving an efficient quantum algorithm in polynomial time to solve the input problem, that is to prepare the quantum states of the classical data efficiently. Other contributions of this paper include: (1). Extending swap test to a more general form that is suitable to deal with quantum data in parallel, which will have further applications in other matrix operations. (2). Generalizing SVE technique such that it applies to any matrix (not just Hermitian) directly only with quantum data. (3). Proposing two new efficient quantum algorithms to prepare quantum states of classical data, which solves the input problem efficiently than other quantum algorithms. read more
PDF Abstract
# Code Add Remove Mark official
No code implementations yet. Submit your code now
Quantum Physics
|
2021-05-17 00:41:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48166438937187195, "perplexity": 955.2249082427355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991921.61/warc/CC-MAIN-20210516232554-20210517022554-00393.warc.gz"}
|
http://tex.stackexchange.com/tags/lyx/hot
|
# Tag Info
## Hot answers tagged lyx
2
LyX allows the user to tailor the input-experience through what is called "key bindings". Informally they are sometimes referred to as (key) shortcuts. An excerpt from Customization.lyx, section 3.3. Bindings: Bindings are used to, well, bind a function to a key. Several prepackaged binding files are available: a CUA set of bindings (familiar as the ...
2
Here is how i managed to solve it However, I was wondering if there is a more "automatic" way in LyX
2
Best here would be to define group-specific key-value options. This would allow you to set a single option associated with the group, while LaTeX manages the back-end setting for the option. As an example of this, add the following to your Document > Settings... > LaTeX Preamble: \usepackage{xkeyval} \define@boolkey{Gin}{groupAoptions}[true]{% ...
2
I got this code snippet many, many years ago from I don't remember where, so unfortunately I can't credit its originator; but it certainly isn't mine. \makeatletter \def\cleardoublepage{\clearpage\if@twoside \ifodd\c@page\else \hbox{} \vspace*{\fill} \thispagestyle{empty} \newpage \if@twocolumn\hbox{}\newpage\fi\fi\fi} \makeatother All this really does is ...
1
You are doing something to change the defaults of scrbook (see the example below). To ensure that the blank pages after TOC and LOF are empty insert \KOMAoptions{cleardoublepage=empty} at the end of your preamble or at the very beginning of the document. Here is an example to show that with scrbook the blank page after TOC is in the default empty: ...
1
Menu Document ⇢ Configuration ⇢Documment class ⇢ Option class ⇢ Write here the twoside option, so after fix the margins, the LaTeX source show something like this: \documentclass[twocolumn,english,twoside]{article} ... \usepackage{geometry} \geometry{verbose,lmargin=4cm,rmargin=2cm} Then add some content and voilá. If you ...
1
In the examples folder, open the folder called "thesis" and inside that open thesis.lyx.
1
As a workaround, I exported the Lyx file as Tex code, converted it into my own .cls file, created the associated .layout file and assigned this new class as Document class for every document I wanted to apply the class defaults on.
1
In LaTeX, the parskip following a paragraph can be removed by appending a negative vertical space with the length of a parskip (\vspace{-\parskip}) to the end of it. The negative spacing and the positive parskip then eliminate each other. To incorporate this solution into a LyX paragraph style, i used the following: Style NoSpacingParagraph ...
Only top voted, non community-wiki answers of a minimum length are eligible
|
2015-05-26 07:54:44
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9147125482559204, "perplexity": 3523.3344984428613}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928817.29/warc/CC-MAIN-20150521113208-00218-ip-10-180-206-219.ec2.internal.warc.gz"}
|
https://aviation.stackexchange.com/questions/30326/where-can-i-get-flight-departure-change-data?noredirect=1
|
# Where can I get flight departure change data? [closed]
On sites like flightstats.com, there is an "event timeline" view that shows changes made to the scheduled departure or equipment for a commercial flight. I assume this information is being pulled from some central system / API.
Here's an example, from today
Jul 26 05:00 1:00 AM 6:00 AM Gate Adjustment
TAIL Changed From N14106 To N14102
Jul 26 14:27 10:27 AM 3:27 PM Gate Adjustment
Departure Gate Changed From 125 To 128
Jul 26 20:23 4:23 PM 9:23 PM Gate Adjustment
TAIL Changed From N14102 To N34131
Jul 26 21:30 5:30 PM 10:30 PM Time Adjustment
Estimated Gate Arrival Changed From 07/27/16 07:30 AM To 07/27/16 07:31 AM
Jul 26 21:31 5:31 PM 10:31 PM Time Adjustment
Estimated Runway Departure Changed To 07/26/16 07:50 PM
Estimated Runway Arrival Changed To 07/27/16 06:54 AM
Is there a programmatic source for this data? Perhaps with additional metadata not exposed on flightstats.com? I understand they do have an API themselves, which does not seem suited to hobby uses, so I'm looking for another source. Are these departure changes submitted to FAA/ICAO? Are they airport-local only?
• Welcome to the site! You might want to review this question and this one to see if they're relevant. – Pondlife Jul 27 '16 at 1:24
• Thanks, @Pondlife - I did check those, and while I have an additional interest in ADS-B - I have a Raspberry Pi dedicated to just that! - I am wondering if there is a source for specific data regarding flight delays before departure. AC changes, gate changes, departure delays, etc. Thanks for checking in so quickly :) – Dustin Jul 27 '16 at 1:26
• @mins Fair enough; is there a preferred procedure for reposting there? – Dustin Jul 27 '16 at 8:38
• I'm voting to close this question as off-topic because because it belongs to travel.stackechange.com – mins Jul 27 '16 at 8:38
• @mins I agree that it might be borderline for us, but I don't think that this question will have a better fate at travel.se – Federico Jul 27 '16 at 9:04
|
2020-01-29 00:12:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2695573568344116, "perplexity": 3716.938611798146}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251783342.96/warc/CC-MAIN-20200128215526-20200129005526-00028.warc.gz"}
|
https://forum.azimuthproject.org/discussion/comment/20007/
|
#### Howdy, Stranger!
It looks like you're new here. If you want to get involved, click one of these buttons!
Options
# Introduction: Brian Cohen
A bit of my background, studied Physics as an undergrad, with a lot of computational modeling and simulation. I questioned of redundancy in the computational physics workflow (prototyping in a dynamic language and then doing a performance implementation), and that lead to investigating other programming languages. At some point I discovered Haskell, started looking under the hood, and then learned more about Types and Programming Languages, starting with taking a class in Software Foundations. That lead to investigating formal logic, and trying to reconcile the the parallels between types, logic, and physics lead me to category theory.
At the same time, I've been paying attention to the world around me: social, political, ecological and economic systems, and would be looking to try my hand at applying math to these problems (possibly via dynamical systems theory and related ideas) in the hopes of finding new sustainable solutions. This has also lead to studying the history of economic thought. As a software developer, I'm also investigating how dependently-typed programming can also help. As of now, I'm independently studying all this, so I'm very open to collaboration.
• Options
1.
Welcome to the forums Brian.
You said you are interested in dependently-typed programming. What languages are you looking at?
I am also a fan of the history of economic thought (and intellectual history in general). Anything in particular that are you are reading?
Comment Source:Welcome to the forums Brian. You said you are interested in dependently-typed programming. What languages are you looking at? I am also a fan of the history of economic thought (and intellectual history in general). Anything in particular that are you are reading?
• Options
2.
For dependently-typed programming, I've been tinkering with Idris recently as it's the friendliest language, was thinking of writing a parser for to turn Quantities into something like Qalculate. I intend to look more into ATS as it's seemingly feature-packed. In the past, I've used Coq for the Software Foundations class, and also given Agda a spin.
As for economics, I've been looking into the work of Von Neumann, Nash, Arrow and Debreau, and their methodology of using fixed-point theorems (Lawvere shows how they are non-constructive in Conceptual Mathematics), as well as the resultant divides it created (such as the spat between Solow and Robinson within Keynesian economics). I've done a little looking into the broader history of how money has been used over time (Debt: The First 5000 years has much on this), how Fibonacci introduced double-entry bookkeeping and Hindu-Arabic decimal notation to the Florentines, enabling the Medicis to create the first international banks, through to Keynes and the failures of the Bretton-Woods system in the 20th century (both The General Theory of Interest, Employment, and Money and Modern Political Economics). Also have started looking into David Ellerman's work now, such as his work on property theory at the suggestion of Keith.
Comment Source:For dependently-typed programming, I've been tinkering with [Idris](https://www.idris-lang.org/) recently as it's the friendliest language, was thinking of writing a parser for to turn [Quantities](https://github.com/timjb/quantities) into something like [Qalculate](http://qalculate.github.io/). I intend to look more into [ATS](http://www.ats-lang.org/) as it's seemingly feature-packed. In the past, I've used Coq for the Software Foundations class, and also given Agda a spin. As for economics, I've been looking into the work of Von Neumann, Nash, Arrow and Debreau, and their methodology of using fixed-point theorems (Lawvere shows how they are non-constructive in *Conceptual Mathematics*), as well as the resultant divides it created (such as the spat between Solow and Robinson within Keynesian economics). I've done a little looking into the broader history of how money has been used over time (*Debt: The First 5000 years* has much on this), how Fibonacci introduced double-entry bookkeeping and Hindu-Arabic decimal notation to the Florentines, enabling the Medicis to create the first international banks, through to Keynes and the failures of the Bretton-Woods system in the 20th century (both *The General Theory of Interest, Employment, and Money* and *Modern Political Economics*). Also have started looking into David Ellerman's work now, such as his work on property theory at the suggestion of Keith.
• Options
3.
Simon Wren-Lewis deals with real political economy in his wonderful macroeconomics blog. His work developing a UK Treasury DSGE model with a consistent core and "empirical" completions is fascinating.
Comment Source:Simon Wren-Lewis deals with real political economy in his wonderful macroeconomics [blog](https://mainlymacro.blogspot.com/). His work developing a UK Treasury DSGE model with a consistent core and "empirical" completions is fascinating.
• Options
4.
Welcome, Brian Cohen! You're reading a lot of interesting stuff... stuff I wish I had time to try.
Recently there's been a rebellion against Nash equilibria in favor of 'correlated equilibria', which are more computable but also, some claim, a more realistic model of rational behavior. The first paper is this:
• Robert Aumann, Correlated equilibrium as an expression of Bayesian rationality, Econometrica 55 (1987), 1-18.
A quick intro to the main idea is here:
Here's something I wrote, with a link to a nice article:
Comment Source:Welcome, Brian Cohen! You're reading a lot of interesting stuff... stuff I wish I had time to try. Recently there's been a rebellion against Nash equilibria in favor of 'correlated equilibria', which are more computable but also, some claim, a more realistic model of rational behavior. The first paper is this: * Robert Aumann, Correlated equilibrium as an expression of Bayesian rationality, _Econometrica_ **55** (1987), 1-18. A quick intro to the main idea is here: * Wikipedia, [Correlated equilibrium](https://en.wikipedia.org/wiki/Correlated_equilibrium). Here's something I wrote, with a link to a nice article: * John Baez, [Correlated equilibria in game theory](https://johncarlosbaez.wordpress.com/2017/07/24/correlated-equilibria-in-game-theory/), _Azimuth_, 24 August 2017. I wish I had more time to think about this: I have a feeling it could be important!
|
2021-04-11 07:47:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4678924083709717, "perplexity": 2310.3464622661145}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038061562.11/warc/CC-MAIN-20210411055903-20210411085903-00414.warc.gz"}
|
https://jaredhuling.org/personalized/reference/validate.subgroup.html
|
Validates subgroup treatment effects for fitted subgroup identification model class of Chen, et al (2017)
validate.subgroup(model, B = 50L,
method = c("training_test_replication", "boot_bias_correction"),
train.fraction = 0.5, benefit.score.quantiles = c(0.1666667,
0.3333333, 0.5, 0.6666667, 0.8333333), parallel = FALSE)
## Arguments
model fitted model object returned by fit.subgroup() function integer. number of bootstrap replications or refitting replications. validation method. "boot_bias_correction" for the bootstrap bias correction method of Harrell, et al (1996) or "training_test_replication" for repeated training and test splitting of the data (train.fraction should be specified for this option) fraction (between 0 and 1) of samples to be used for training in training/test replication. Only used for method = "training_test_replication" a vector of quantiles (between 0 and 1) of the benefit score values for which to return bootstrapped information about the subgroups. ie if one of the quantile values is 0.5, the median value of the benefit scores will be used as a cutoff to determine subgroups and summary statistics will be returned about these subgroups Should the loop over replications be parallelized? If FALSE, then no, if TRUE, then yes. If user sets parallel = TRUE and the fitted fit.subgroup() object uses the parallel version of an internal model, say for cv.glmnet(), then the internal parallelization will be overridden so as not to create a conflict of parallelism.
## Value
An object of class "subgroup_validated"
avg.results
Estimates of average conditional treatment effects when subgroups are determined based on the provided cutoff value for the benefit score. For example, if cutoff = 0 and there is a treatment and control only, then the treatment is recommended if the benefit score is greater than 0.
se.results
Standard errors of the estimates from avg.estimates
boot.results
Contains the individual results for each replication. avg.results is comprised of averages of the values from boot.results
avg.quantile.results
Estimates of average conditional treatment effects when subgroups are determined based on different quntile cutoff values for the benefit score. For example, if benefit.score.quantiles = 0.75 and there is a treatment and control only, then the treatment is recommended if the benefit score is greater than the 75th upper quantile of all benefit scores. If multiple quantile values are provided, e.g. benefit.score.quantiles = c(0.15, 0.5, 0.85), then results will be provided for all quantile levels.
se.quantile.results
Standard errors corresponding to avg.quantile.results
boot.results.quantiles
Contains the individual results for each replication. avg.quantile.results is comprised of averages of the values from boot.results.quantiles
family
Family of the outcome. For example, "gaussian" for continuous outcomes
method
Method used for subgroup identification model. Weighting or A-learning
n.trts
The number of treatment levels
comparison.trts
All treatment levels other than the reference level
reference.trt
The reference level for the treatment. This should usually be the control group/level
larger.outcome.better
If larger outcomes are preferred for this model
cutpoint
Benefit score cutoff value used for determining subgroups
val.method
Method used for validation
iterations
Number of replications used in the validation process
## Details
Estimates of various quantities conditional on subgroups and treatment statuses are provided and displayed via the print.subgroup_validated function:
1. "Conditional expected outcomes" The first results shown when printing a subgroup_validated object are estimates of the expected outcomes conditional on the estimated subgroups (i.e. which subgroup is 'recommended' by the model) and conditional on treatment/intervention status. If there are two total treatment options, this results in a 2x2 table of expected conditional outcomes.
2. "Treatment effects conditional on subgroups" The second results shown when printing a subgroup_validated object are estimates of the expected outcomes conditional on the estimated subgroups. If the treatment takes levels $$j \in \{1, \dots, K\}$$, a total of $$K$$ conditional treatment effects will be shown. For example, of the outcome is continuous, the $$j$$th conditional treatment effect is defined as $$E(Y|Trt = j, Subgroup=j) - E(Y|Trt = j, Subgroup =/= j)$$, where $$Subgroup=j$$ if treatment $$j$$ is recommended, i.e. treatment $$j$$ results in the largest/best expected potential outcomes given the fitted model.
3. "Overall treatment effect conditional on subgroups " The third quantity displayed shows the overall improvement in outcomes resulting from all treatment recommendations. This is essentially an average over all of the conditional treatment effects weighted by the proportion of the population recommended each respective treatment level.
## References
Chen, S., Tian, L., Cai, T. and Yu, M. (2017), A general statistical framework for subgroup identification and comparative treatment scoring. Biometrics. doi:10.1111/biom.12676
Harrell, F. E., Lee, K. L., and Mark, D. B. (1996). Tutorial in biostatistics multivariable prognostic models: issues in developing models, evaluating assumptions and adequacy, and measuring and reducing errors. Statistics in medicine, 15, 361-387. doi:10.1002/(SICI)1097-0258(19960229)15:4<361::AID-SIM168>3.0.CO;2-4
library(personalized) set.seed(123) n.obs <- 500 n.vars <- 20 x <- matrix(rnorm(n.obs * n.vars, sd = 3), n.obs, n.vars) # simulate non-randomized treatment xbetat <- 0.5 + 0.5 * x[,11] - 0.5 * x[,13] trt.prob <- exp(xbetat) / (1 + exp(xbetat)) trt01 <- rbinom(n.obs, 1, prob = trt.prob) trt <- 2 * trt01 - 1 # simulate response delta <- 2 * (0.5 + x[,2] - x[,3] - x[,11] + x[,1] * x[,12]) xbeta <- x[,1] + x[,11] - 2 * x[,12]^2 + x[,13] xbeta <- xbeta + delta * trt # continuous outcomes y <- drop(xbeta) + rnorm(n.obs, sd = 2) # create function for fitting propensity score model prop.func <- function(x, trt) { # fit propensity score model propens.model <- cv.glmnet(y = trt, x = x, family = "binomial") pi.x <- predict(propens.model, s = "lambda.min", newx = x, type = "response")[,1] pi.x } subgrp.model <- fit.subgroup(x = x, y = y, trt = trt01, propensity.func = prop.func, loss = "sq_loss_lasso", nfolds = 5) # option for cv.glmnet subgrp.model$subgroup.trt.effects #>$subgroup.effects #> Est of E[Y|T=0,Recom=0]-E[Y|T=/=0,Recom=0] #> 19.75833 #> Est of E[Y|T=1,Recom=1]-E[Y|T=/=1,Recom=1] #> 18.26168 #> #> $avg.outcomes #> Recommended 0 Recommended 1 #> Received 0 -3.821617 -31.30368 #> Received 1 -23.579944 -13.04201 #> #>$sample.sizes #> Recommended 0 Recommended 1 #> Received 0 50 170 #> Received 1 159 121 #> #> \$overall.subgroup.effect #> [1] 19.16871 #>
|
2019-03-24 07:48:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27252209186553955, "perplexity": 4735.016355956211}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203378.92/warc/CC-MAIN-20190324063449-20190324085449-00489.warc.gz"}
|
https://www.physicsforums.com/threads/projectile-motion-radii-of-trajectory.920488/
|
# Projectile motion, radii of trajectory
Tags:
1. Jul 19, 2017
### Pushoam
1. The problem statement, all variables and given/known data
2. Relevant equations
3. The attempt at a solution
For part (d),
The curvature radius of trajectory at its
1) initial point = horizontal range/2 =( v02 sin (2α))/2g
2) peak = height of ascent/2 = ( v0 sinα)2 /2g
Is this correct?
In this problem, the time of ascent is equal to the time of descent.
Is there anyway to find it out without calculating the time of ascent and the time of descent ?
2. Jul 19, 2017
### Pushoam
vy (t) = v0 sinα - gt
vx (t) = v0 cosα
tan (θ) = vy (t) / vx (t) = tan α - gt/(v0 cosα)
a = g(- $\hat y$)
wτ = g sinθ (- $\hat θ$),
wn = g cosθ (- $\hat r$)
Is this correct so far?
Last edited: Jul 19, 2017
3. Jul 19, 2017
### cnh1995
You can put (vertical) displacement=0 in the y-displacement equation. It's a quadratic equation.
4. Jul 19, 2017
### Pushoam
This will give me the total time of motion and this, too, I will have to calculate.
I want to show :
time of ascent = time of descent without doing calculation, just on the basis of physical interpretation of the problem
Is this possible?
5. Jul 19, 2017
### cnh1995
Yes.
Your x component of velocity doesn't change. And during ascent, you start with non-zero velocity and end up with zero velocity while during descent, you start with zero velocity and end up with the initial velocity. Displacement is same in both the cases and the motion is under the influence of the same force, i.e. gravity.
The time a body takes to go up to a certain height is equal to the time it takes to free fall from the same height.
6. Jul 19, 2017
### cnh1995
You need to consider the centripetal forces responsible for the curvature here.
Which force is responsible for the curvature when the body is at the peak? How much is that force?
7. Jul 19, 2017
### Pushoam
When the body is at the initial point,
mv02 /Ri = mg cosα
Ri = v02/g cosα
When the body is at the peak,
m(v0 cosα)2 /Rp = mg
Rp = (v0 cosα)2/g
Is this correct?
8. Jul 19, 2017
### cnh1995
Yes.
9. Jul 19, 2017
### Pushoam
What about the next question given in post 2?
10. Jul 19, 2017
### cnh1995
Correct. So now you can draw the approximate plot of the tangential and normal acceleration.
You know their values at t=0, t=T/2 and t=T.
|
2017-08-23 14:55:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5506157875061035, "perplexity": 3061.7239063383463}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886120573.0/warc/CC-MAIN-20170823132736-20170823152736-00598.warc.gz"}
|
https://mathspace.co/textbooks/syllabuses/Syllabus-408/topics/Topic-7227/subtopics/Subtopic-96530/?activeTab=interactive
|
# Dividing unit fractions by whole numbers
## Interactive practice questions
This number line shows the number $1$1, and divisions of $\frac{1}{2}$12 each. Let's use this image to help us find the answer to $\frac{1}{2}\div4$12÷4.
a
This division is asking us to divide each of the halves up into $4$4 parts. Let's do that now!
Which image shows that each half has been divided into $4$4 parts?
A
B
C
A
B
C
b
What is the size of the piece created when $\frac{1}{2}$12 is divided by $4$4?
Easy
Less than a minute
Let's use the image below to help us find the value of $\frac{1}{3}\div2$13÷2. This number line shows the number $1$1 split into $3$3 divisions of size $\frac{1}{3}$13.
Let's use the image below to help us find the value of $\frac{1}{4}\div2$14÷2. This number line shows the number $1$1 split into $4$4 divisions of size $\frac{1}{4}$14.
Let's use the image below to help us find the value of $\frac{1}{5}\div3$15÷3. This number line shows the number $1$1 split into $5$5 divisions of size $\frac{1}{5}$15.
### Outcomes
#### NA5-3
Understand operations on fractions, decimals, percentages, and integers
|
2022-01-26 23:48:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4194801449775696, "perplexity": 1814.4393412894979}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305006.68/warc/CC-MAIN-20220126222652-20220127012652-00232.warc.gz"}
|
https://socratic.org/questions/what-is-the-chemical-formula-for-hydrogen-oxide
|
# What is the chemical formula for hydrogen oxide?
The formula for dihydrogen monoxide is ${H}_{2} O$. See explanation.
From the 2 elements hydrogen has the value of 1, and oxygene - 2, so the formula has to be ${H}_{2} O$
|
2019-10-16 19:34:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 2, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8653383255004883, "perplexity": 911.9669152610828}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986669546.24/warc/CC-MAIN-20191016190431-20191016213931-00513.warc.gz"}
|
https://effort.academickids.com/encyclopedia/index.php/State_vector
|
# Quantum state
(Redirected from State vector)
A quantum state is any possible state in which a quantum mechanical system can be. A fully specified quantum state can be described by a state vector, a wavefunction, or a complete set of quantum numbers for a specific system. A partially known quantum state, such as a ensemble with some quantum numbers fixed, can be described by a density operator.
Contents
## Bra-ket notation
Paul Dirac invented a powerful and intuitive mathematical notation to describe quantum states, known as bra-ket notation. For instance, one can refer to an |excited atom> or to [itex]|\!\!\uparrow\rangle[itex] for a spin-up particle, hiding the underlying complexity of the mathematical description, which is revealed when the state is projected onto a coordinate basis. For instance, the simple notation |1s> describes the first hydrogen atom bound state, but becomes a complicated function in terms of Laguerre polynomials and spherical harmonics when projected onto the basis of position vectors |r>. The resulting expression Ψ(r)=<r|1s>, which is known as the wavefunction, is a special representation of the quantum state, namely, its projection into position space. Other representations, like the projection into momentum space, are possible. The various representations are simply different expressions of a single physical quantum state.
## Basis states
Any quantum state [itex]|\psi\rangle[itex] can be expressed in terms of a sum of basis states (also called basis kets), [itex]|k_i\rangle[itex]
[itex]| \psi \rangle = \sum_i c_i | k_i \rangle[itex]
where [itex]c_i[itex] are the coefficients representing the probability amplitude, such that the absolute square of the probability amplitude, [itex]\left | c_i \right | ^2[itex] is the probability of a measurement in terms of the basis states yielding the state [itex]|k_i\rangle[itex]. The normalization condition mandates that the total sum of probabilities is equal to one,
[itex]\sum_i \left | c_i \right | ^2 = 1[itex].
The simplest understanding of basis states is obtained by examining the quantum harmonic oscillator. In this system, each basis state [itex]|n\rangle[itex] has an energy [itex] E_n = \hbar \omega \left(n + {\begin{matrix}\frac{1}{2}\end{matrix}}\right)[itex]. The set of basis states can be extracted using a construction operator [itex]a^{\dagger}[itex] and a destruction operator [itex]a[itex] in what is called the ladder operator method.
## Superposition of states
If a quantum mechanical state [itex]|\psi\rangle[itex] can be reached by more than one path, then [itex]|\psi\rangle[itex] is said to be a linear superposition of states. In the case of two paths, if the states after passing through path [itex]\alpha[itex] and path [itex]\beta[itex] are
[itex]|\alpha\rangle = \begin{matrix}\frac{1}{\sqrt{2}}\end{matrix} |0\rangle + \begin{matrix}\frac{1}{\sqrt{2}}\end{matrix} |1\rangle[itex], and
[itex]|\beta\rangle = \begin{matrix}\frac{1}{\sqrt{2}}\end{matrix} |0\rangle - \begin{matrix}\frac{1}{\sqrt{2}}\end{matrix} |1\rangle[itex],
then [itex]|\psi\rangle[itex] is defined as the normalized linear sum of these two states. If the two paths are equally likely, this yields
[itex]|\psi\rangle = \begin{matrix}\frac{1}{\sqrt{2}}\end{matrix}|\alpha\rangle + \begin{matrix}\frac{1}{\sqrt{2}}\end{matrix}|\beta\rangle = \begin{matrix}\frac{1}{\sqrt{2}}\end{matrix}(\begin{matrix}\frac{1}{\sqrt{2}}\end{matrix}|0\rangle + \begin{matrix}\frac{1}{\sqrt{2}}\end{matrix}|1\rangle) + \begin{matrix}\frac{1}{\sqrt{2}}\end{matrix}(\begin{matrix}\frac{1}{\sqrt{2}}\end{matrix}|0\rangle - \begin{matrix}\frac{1}{\sqrt{2}}\end{matrix}|1\rangle) = |0\rangle[itex].
Note that in the states [itex]|\alpha\rangle[itex] and [itex]|\beta\rangle[itex], the two states [itex]|0\rangle[itex] and [itex]|1\rangle[itex] each have a probability of [itex]\begin{matrix}\frac{1}{2}\end{matrix}[itex], as obtained by the absolute square of the probability amplitudes, which are [itex]\begin{matrix}\frac{1}{\sqrt{2}}\end{matrix}[itex] and [itex]\begin{matrix}\pm\frac{1}{\sqrt{2}}\end{matrix}[itex]. In a superposition, it is the probability amplitudes which add, and not the probabilities themselves. The pattern which results from a superposition is often called an interference pattern. In the above case, [itex]|0\rangle[itex] is said to constructively interfere, and [itex]|1\rangle[itex] is said to destructively interfere.
For more about superposition of states, see the double-slit experiment.
## Pure and mixed states
A pure quantum state is a state which can be described by a single ket vector, or as a sum of basis states. A mixed quantum state is a statistical distribution of pure states.
The expectation value [itex]\langle a \rangle[itex] of a measurement [itex]A[itex] on a pure quantum state is given by
[itex]\langle a \rangle = \langle \psi | A | \psi \rangle = \sum_i a_i \langle \psi | \alpha_i \rangle \langle \alpha_i | \psi \rangle = \sum_i a_i | \langle \alpha_i | \psi \rangle |^2 = \sum_i a_i P(\alpha_i)[itex]
where [itex]|\alpha_i\rangle[itex] are basis kets for the operator [itex]A[itex], and [itex]P(\alpha_i)[itex] is the probability of [itex]| \psi \rangle[itex] being measured in state [itex]|\alpha_i\rangle[itex].
In order to describe a statistical distribution of pure states, or mixed state, the density operator (or density matrix), [itex]\rho[itex], is used. This extends quantum mechanics to quantum statistical mechanics. The density operator is defined as
[itex]\rho = \sum_s p_s | \psi_s \rangle \langle \psi_s |[itex]
where [itex]p_s[itex] is the fraction of each ensemble in pure state [itex]|\psi_s\rangle[itex]. The ensemble average of a measurement [itex]A[itex] on a mixed state is given by
[itex]\left [ A \right ] = \langle \overline{A} \rangle = \sum_s p_s \langle \psi_s | A | \psi_s \rangle = \sum_s \sum_i p_s a_i | \langle \alpha_i | \psi_s \rangle |^2 = tr(\rho A)[itex]
where it is important to note that two types of averaging are occurring, one being a quantum average over the basis kets of the pure states, and the other being a statistical average over the ensemble of pure states.
## See also
##### Navigation
Academic Kids Menu
• Art and Cultures
• Art (http://www.academickids.com/encyclopedia/index.php/Art)
• Architecture (http://www.academickids.com/encyclopedia/index.php/Architecture)
• Cultures (http://www.academickids.com/encyclopedia/index.php/Cultures)
• Music (http://www.academickids.com/encyclopedia/index.php/Music)
• Musical Instruments (http://academickids.com/encyclopedia/index.php/List_of_musical_instruments)
• Biographies (http://www.academickids.com/encyclopedia/index.php/Biographies)
• Clipart (http://www.academickids.com/encyclopedia/index.php/Clipart)
• Geography (http://www.academickids.com/encyclopedia/index.php/Geography)
• Countries of the World (http://www.academickids.com/encyclopedia/index.php/Countries)
• Maps (http://www.academickids.com/encyclopedia/index.php/Maps)
• Flags (http://www.academickids.com/encyclopedia/index.php/Flags)
• Continents (http://www.academickids.com/encyclopedia/index.php/Continents)
• History (http://www.academickids.com/encyclopedia/index.php/History)
• Ancient Civilizations (http://www.academickids.com/encyclopedia/index.php/Ancient_Civilizations)
• Industrial Revolution (http://www.academickids.com/encyclopedia/index.php/Industrial_Revolution)
• Middle Ages (http://www.academickids.com/encyclopedia/index.php/Middle_Ages)
• Prehistory (http://www.academickids.com/encyclopedia/index.php/Prehistory)
• Renaissance (http://www.academickids.com/encyclopedia/index.php/Renaissance)
• Timelines (http://www.academickids.com/encyclopedia/index.php/Timelines)
• United States (http://www.academickids.com/encyclopedia/index.php/United_States)
• Wars (http://www.academickids.com/encyclopedia/index.php/Wars)
• World History (http://www.academickids.com/encyclopedia/index.php/History_of_the_world)
• Human Body (http://www.academickids.com/encyclopedia/index.php/Human_Body)
• Mathematics (http://www.academickids.com/encyclopedia/index.php/Mathematics)
• Reference (http://www.academickids.com/encyclopedia/index.php/Reference)
• Science (http://www.academickids.com/encyclopedia/index.php/Science)
• Animals (http://www.academickids.com/encyclopedia/index.php/Animals)
• Aviation (http://www.academickids.com/encyclopedia/index.php/Aviation)
• Dinosaurs (http://www.academickids.com/encyclopedia/index.php/Dinosaurs)
• Earth (http://www.academickids.com/encyclopedia/index.php/Earth)
• Inventions (http://www.academickids.com/encyclopedia/index.php/Inventions)
• Physical Science (http://www.academickids.com/encyclopedia/index.php/Physical_Science)
• Plants (http://www.academickids.com/encyclopedia/index.php/Plants)
• Scientists (http://www.academickids.com/encyclopedia/index.php/Scientists)
• Social Studies (http://www.academickids.com/encyclopedia/index.php/Social_Studies)
• Anthropology (http://www.academickids.com/encyclopedia/index.php/Anthropology)
• Economics (http://www.academickids.com/encyclopedia/index.php/Economics)
• Government (http://www.academickids.com/encyclopedia/index.php/Government)
• Religion (http://www.academickids.com/encyclopedia/index.php/Religion)
• Holidays (http://www.academickids.com/encyclopedia/index.php/Holidays)
• Space and Astronomy
• Solar System (http://www.academickids.com/encyclopedia/index.php/Solar_System)
• Planets (http://www.academickids.com/encyclopedia/index.php/Planets)
• Sports (http://www.academickids.com/encyclopedia/index.php/Sports)
• Timelines (http://www.academickids.com/encyclopedia/index.php/Timelines)
• Weather (http://www.academickids.com/encyclopedia/index.php/Weather)
• US States (http://www.academickids.com/encyclopedia/index.php/US_States)
Information
• Home Page (http://academickids.com/encyclopedia/index.php)
• Contact Us (http://www.academickids.com/encyclopedia/index.php/Contactus)
• Clip Art (http://classroomclipart.com)
|
2021-05-13 03:03:39
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9500063061714172, "perplexity": 1245.5850901365866}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243992721.31/warc/CC-MAIN-20210513014954-20210513044954-00285.warc.gz"}
|
http://cms.math.ca/cmb/msc/19E15?fromjnl=cmb&jnl=CMB
|
Canadian Mathematical Society www.cms.math.ca
location: Publications → journals
Search results
Search: MSC category 19E15 ( Algebraic cycles and motivic cohomology [See also 14C25, 14C35, 14F42] )
Expand all Collapse all Results 1 - 1 of 1
1. CMB 2005 (vol 48 pp. 221)
Kerr, Matt
An Elementary Proof of Suslin Reciprocity We state and prove an important special case of Suslin reciprocity that has found significant use in the study of algebraic cycles. An introductory account is provided of the regulator and norm maps on Milnor $K_2$-groups (for function fields) employed in the proof. Categories:19D45, 19E15
top of page | contact us | privacy | site map |
© Canadian Mathematical Society, 2015 : https://cms.math.ca/
|
2015-11-26 10:38:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5971170663833618, "perplexity": 5561.199979861117}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398447043.45/warc/CC-MAIN-20151124205407-00117-ip-10-71-132-137.ec2.internal.warc.gz"}
|
https://hkumath1111.wordpress.com/2014/08/30/logical-warm-up-4/
|
# Linear Algebra II
## August 30, 2014
### Logical Warm-up
Filed under: 2014 Fall — Y.K. Lau @ 4:49 PM
Someone said, “If mathematics is regarded as a language, then logic is its grammar.” This is true and that’s why we have the course MATH1001/2012 — Fundamental concepts of mathematics. Here we start with an introductory remark on “logic”, serving as a warm-up and a preliminary.
1. Statements
A statement is a declarative sentence, conveying a definite meaning that may be either true or false but not both simultaneously.
Example 1 Which of the following is a statement?
1. Every HKU undergraduate student has a university number.
2. There is a man who is over six feet tall.
3. Linear algebra smells good.
4. If there is life on Mars, then the postman delivers letters.
5. This sentence is false.
Ans.
Clearly, (1) and (2) are statements. Definitely (2) is a true statement. For (1), it is either true or false, although frankly I don’t know the answer because I haven’t checked through every student of HKU.
(3) is not a statement. (Does linear algebra have odor?)
(4) is a statement and it is a true statement, because the job of postman is to deliver letters. So no matter whether or not Mars has life, the conclusion is valid.
(5) is a bit more tricky — it is not a statement. The reason is that you cannot assign a true/false value to it. Why? Think about what it means if (5) is true — it says “this sentense, i.e. (5), is false”. So (5) is true and false at the same time! If (5) is false, that means “This sentence is false” is not valid. In other words, this sentence (5) is true. Again (5) is false and true simultaneously. That’ why we cannot give it a true/false value.
Important Remark
1. TRUE means absolutely and completely true. There is no in-between stage. So there is no such thing as saying a statement is somewhat true’ or almost true’.
2. For some statements, you may pay careful attention to what member of a class the statement applies. For statement (1) of Example 1, a single exception will renders false. But for (2), I am shorter than six-feet (and I am a man). This fact of my height cannot conclude statement (2) is false. We call “for all/for every/for any” and “there exists/for somequantifiers, which will arise often in our course.
3. The truth value of a statement may depend upon the way it is interpreted. For example, “you are intelligent”. I would say that it is a true statement (because you take MATH1111), but you may say it is a false statement. (You are humble!) But in any case, once intelligence is clearly defined, then the statement must be declared as either completely true or false.
2. Negation
Negation turns a statement into another statement which will be opposite to the original one in terms of truth value. For example, “X is rich”. Its negation is “X is poor”. It sounds easy. Well, finish the example below.
Example 2 Write down the negation of each of the following statements.
1. Some men are rich.
2. Every man is happy.
3. There exists a man who is rich.
4. Some unhappy men are rich.
5. There is a happy man that all of his friends are unhappy.
If you have finished, see answers below.
Ans.
1. All men are poor. (I suppose the negation of rich (i.e. not rich) is the same as poor.)
2. Some men are unhappy. (Think carefully if your answer is “Every man is unhappy” or “No man is happy”, which are not correct.)
3. All men are poor. (Because “There exists a man who is rich” means the same as “Some men are rich”.)
4. All unhappy men are poor. (The answer is not “All happy men are poor”, because for the given statement “Some unhappy men are rich”, we are considering the group of “unhappy men” and the statement says that some members in this group are rich. So the negation is “All member in this group are poor”.)
5. Every happy man has some happy friends. Alternatively, the negation can be stated as “For any happy man, there exists some friends of him who are happy.”
3. Logical Implications
Here I would not tell precisely what is a logical implication. Instead, let me introduce some notation and terminologies. We write “${p\Rightarrow q}$” when the statement ${p}$ implies (affirmatively) the statement ${q}$“. What does “${p\Rightarrow q}$” tell? There are two important points:
1. If ${p}$ is true, then ${q}$ is true.
2. If ${q}$ is false, then ${p}$ is also false.
Let us look at an example.
Example 3
1. ABC is a triangle ${\Rightarrow}$ ${\angle A + \angle B+ \angle C = 180}$ degrees. (I think you know this fact. This is an example of point 1.)
2. ${\sin \frac{\pi}4 >\frac45}$ ${\Rightarrow}$ ${-1>0}$. (Refer to point 2. Hence we know ${\sin \frac{\pi}4 >\frac45}$ is false.)
You may wonder how the deduction in (ii) comes up. Below is the details:
${\sin \frac{\pi}4 >\frac45\Rightarrow \sin^2 \frac{\pi}4 >\frac{16}{25}\Rightarrow 1-2\sin^2\frac{\pi}4 < 1-\frac{32}{25}\Rightarrow \cos \frac{\pi}2 < -\frac{7}{25} \Rightarrow 0 <-1.}$
Remark. When the statement ${p}$ in “${p\Rightarrow q}$” is false, we cannot draw any conclusion on ${q}$, i.e. ${q}$ may be true or not. For example,
$\displaystyle 1\ge 2 \Rightarrow -1 \ge 0.$
(This is deduced by subtracting 2 on both sides of $\displaystyle 1\ge 2.$) The consequence ${-1\ge 0}$ is of course a false statement. However,
$\displaystyle 1 \ge 2 \Rightarrow 0\ge 0.$
(We have multiplied ${0}$ on both sides of ${1\ge 2}$.) Now the consequence ${0\ge 0}$ is a true statement! So we don’t know the truth value of the conclusion from a false hypothesis.
There are other ways to phrase “${p}$ implies ${q}$“:
1. ${p}$ is a sufficient condition for ${q}$.
2. ${q}$ is a necessary condition for ${p}$.
3. ${p}$ only if ${q}$.
4. Final Remark
Logic is, but not just confined to be, the grammar of MATHEMATICS. For more, you may browse the following website:
http://philosophy.hku.hk/think/logic/intro.php
|
2017-07-25 02:36:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 34, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6651676893234253, "perplexity": 753.148336319719}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424960.67/warc/CC-MAIN-20170725022300-20170725042300-00699.warc.gz"}
|
https://physicstravelguide.com/advanced_tools/monte_carlo
|
# Monte-Carlo Simulations
## Intuitive
In Monte-Carlo simulations parameters are chosen randomly and then compared to a predefined goal.
## Abstract
The motto in this section is: the higher the level of abstraction, the better.
## History
Contributing authors:
|
2020-08-13 20:34:18
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9159833192825317, "perplexity": 4720.757511639968}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439739073.12/warc/CC-MAIN-20200813191256-20200813221256-00301.warc.gz"}
|
https://math.stackexchange.com/questions/1143072/simple-counting-problem-i-invented-counting-ways-to-end-a-board-game
|
# Simple counting problem I invented (counting ways to end a board game)
Consider the following board with a pawn in position $4$:
The game works by rolling a dice and going forward the number of positions as marked in the dice. So for example if you roll $1$ you go from position $4$ to position five. If you roll $2$ you get to the finish and the game ends. If you roll three however you end up at position $5$, because if you pass the finish you start going back.
However you can roll more than once. So for example, rolling a three and then a one will take you from position $4$ to position $5$ and then to the finish.
The question is how many different ways are there to get to the finish in $k$ or less steps (where one step consists in rolling one dice and moving the pawn the appropriate number of tiles).
• This is a classic Markov chain. – Thomas Andrews Feb 11 '15 at 4:37
Each time, there is exactly one roll that will let you finish. So there is one way to finish in one roll, $5\cdot 1$ to finish in two rolls (because you need not to finish the first time), and generally $5^{(n-1)}$ ways to finish in exactly $n$ rolls. For $k$ or less, we add up the geometric series, getting $\frac {5^k-1}{5-1}$
|
2019-09-21 15:58:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5909339785575867, "perplexity": 161.11467173644772}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574532.44/warc/CC-MAIN-20190921145904-20190921171904-00499.warc.gz"}
|
https://wdempsey.github.io/
|
M4057 SPH II
1415 Washington Heights
Ann Arbor, MI 48109
Hi, my name’s Walter. I am an Assistant Professor of Biostatistics and an Assistant Research Professor in the d3lab located in the Institute of Social Research. My research focuses on Statistical Methods for Digital and Mobile Health. My current work involves three complementary research themes: (1) experimental design and data analytic methods to inform multi-stage decision making in health; (2) statistical modeling of complex longitudinal and survival data; and (3) statistical modeling of complex relational structures such as interaction networks. In the coming years, I will continue to design and apply novel statistical methodologies to make sense of complex longitudinal, survival, and relational datasets. This work will inform decision making in health by aiding in intervention evaluation and development. Outside of my research, I enjoy exploring Ann Arbor with my fiancee by taking our beagle on long walks around town and attending UMS performances or shows at the Ark. I’m also an avid soccer fan and play way too much rec league soccer.
Prior to joining, I was a postdoctoral fellow in the Department of Statistics at Harvard University. My fellowship was in the Statistical Reinforcement Learning Lab under the supervision of Susan Murphy. I received my PhD in Statistics at the University of Chicago under the supervision of Peter McCullagh.
|
2020-02-25 15:32:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.171195387840271, "perplexity": 2498.052557935449}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146123.78/warc/CC-MAIN-20200225141345-20200225171345-00496.warc.gz"}
|
https://www.physicsforums.com/threads/weak-form-of-the-effective-mass-schrodinger-equation.751282/
|
# Weak Form of the Effective Mass Schrodinger Equation
1. Apr 29, 2014
### Morberticus
Hi,
I am numerically solving the 2D effective-mass Schrodinger equation
$\nabla \cdot (\frac{-\hbar^2}{2} c \nabla \psi) + (U - \epsilon) \psi = 0$
where $c$ is the effective mass matrix
$\left( \begin{array}{cc} 1/m^*_x & 1/m^*_{xy} \\ 1/m^*_{yx} & 1/m^*_y \\ \end{array} \right)$
I know that, when the effective mass is isotropic, the weak form is
$\int \frac{-\hbar^2}{2m^*}\nabla \psi \cdot \nabla v + U\psi vd\Omega = \int \epsilon \psi vd\Omega$
The matrix is giving me trouble however. Is this the correct form?
$\int \frac{-\hbar^2}{2m^*_x}\frac{\partial u}{\partial x}\frac{ \partial v}{\partial x} + \frac{-\hbar^2}{2m^*_{xy}}\frac{\partial u}{\partial x}\frac{ \partial v}{\partial y} + \frac{-\hbar^2}{2m^*_{yx}}\frac{\partial u}{\partial y}\frac{ \partial v}{\partial x} + \frac{-\hbar^2}{2m^*_y}\frac{\partial u}{\partial y}\frac{ \partial v}{\partial y} + U\psi v d\Omega= \int \epsilon \psi v d\Omega$
Last edited: Apr 29, 2014
2. May 6, 2014
|
2017-05-26 00:04:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8520745038986206, "perplexity": 1470.71000956034}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608617.80/warc/CC-MAIN-20170525233846-20170526013846-00379.warc.gz"}
|
http://www.philipzucker.com/cartpole-workin-boyeee/
|
We have been fighting a problem for weeks. The Serial port was just not reliable, it had sporadic. The problem ended up being a surprising thing, we were using threading to receive the messages nd checking for limit switches. It is not entirely clear why but this was totally screwing up the serial port update in an unpredictable manner. Yikes. What a disaster.
After that though smoooooooth sailing.
With a slight adaptation of the previous Openai gym LQR cartpole code and a little fiddling with parameters we have a VERY stable balancer. We removed the back reaction of the pole dynamics on the cart itself for simplicity. This should be accurate when the pole vastly.
We did find that the motor is exactly velocity control in steady state with a linear response. There is a zero point offset (you need to ask for 100 out of 2046 before you get any movement at all).
We’ll see where we can get with the Lyapunov control next time.
https://github.com/philzook58/cart_pole/blob/master/lqr_controller.py
from sabretooth_command import CartCommand
from cart_controller import CartController
from encoder_analyzer import EncoderAnalyzer
import time
import numpy as np
import serial.tools.list_ports
import scipy.linalg as linalg
lqr = linalg.solve_continuous_are
ports = list(serial.tools.list_ports.comports())
print(dir(ports))
for p in ports:
print(dir(p))
print(p.device)
if "Sabertooth" in p.description:
sabreport = p.device
else:
ardPort = p.device
print("Initilizing Commander")
comm = CartCommand(port= sabreport) #"/dev/ttyACM1")
print("Initilizing Analyzer")
analyzer = EncoderAnalyzer(port=ardPort) #"/dev/ttyACM0")
print("Initializing Controller.")
cart = CartController(comm, analyzer)
print("Starting Zero Routine")
cart.zeroAnalyzer()
gravity = 9.8
mass_pole = 0.1
length = 0.5
moment_of_inertia = (1./3.) * mass_pole * length**2
print(moment_of_inertia)
A = np.array([
[0,1,0,0],
[0,0,0,0],
[0,0,0,1],
[0,0,length * mass_pole * gravity / (2 * moment_of_inertia) ,0]
])
B = np.array([0,1,0,length * mass_pole / (2 * moment_of_inertia)]).reshape((4,1))
Q = np.diag([1.0, 1.0, 1.0, 0.01])
R = np.array([[0.001]])
P = lqr(A,B,Q,R)
Rinv = np.linalg.inv(R)
K = np.dot(Rinv,np.dot(B.T, P))
print(K)
def ulqr(x):
x1 = np.copy(x)
x1[2] = np.sin(x1[2] + np.pi)
return -np.dot(K, x1)
cart.goTo(500)
command_speed = 0
last_time = time.time()
while True:
observation = cart.analyzer.getState()
x,x_dot,theta,theta_dot = observation
a = ulqr(np.array([(x-500)/1000,x_dot/1000,theta-0.01,theta_dot]))
t = time.time()
dt = t - last_time
last_time = t
command_speed += 1 * a[0] * dt
#command_speed -= (x - 500) * dt * 0.001 * 0.1
#command_speed -= x_dot * dt * 0.001 * 0.5
cart.setSpeedMmPerS(1000 *command_speed)
print("theta {}\ttheta_dot {}\taction {}\tspeed {}".format(theta, theta_dot, a, command_speed))
|
2021-02-24 22:40:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3035874366760254, "perplexity": 10698.108914794668}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178349708.2/warc/CC-MAIN-20210224223004-20210225013004-00210.warc.gz"}
|
http://mathandmultimedia.com/tag/funny-math/
|
## Man and Woman Funny Math
I got this from Facebook and I think this is one of the mathematics that will make us relax a bit and put some grin on our faces. NO OFFENSE meant.
Handsome Man + Ugly Woman = She’s rich.
Ugly Man + Pretty Woman = He’s rich.
Handsome Man + Pretty Woman = Gonna break up soon.
Ugly Man + Ugly Woman = Gonna have a ‘beautiful’ son.
Ugly Man + Ugly Man = Friends
Handsome Man + Handsome Man = Gays
Ugly Woman + Pretty Woman = Friends
Ugly Woman + Ugly Woman = Gays
Ugly Man + Handsome Man = Friends
Pretty Woman + Pretty Woman = Enemies
A Mathy Christmas to all!
… looks like this? What would you do?
If I were you, I would immediately inform my employer that I can’t accept this paycheck since $e^{i\pi} = -1$ and $\sum_{n=1}^{\infty}\frac{1}{2^n}=1$ (Why?). This is only worth 0.002 dollars.
## 11 Mathematical Proof Techniques Liberal Arts Majors Should Know
Mathematical proof is the heart of mathematics. It is what differentiates mathematics from other sciences. In mathematical proofs, we can show that a statement is true for all possible cases without showing all the cases. We can be certain that the sum of two even numbers is even without adding all the possible pairs.
For the liberal arts majors, the proofs below will be of great help to you, but I think the math majors are going to appreciate them more. » Read more
1 2
|
2022-07-05 06:51:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18000251054763794, "perplexity": 2395.7988368295546}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104514861.81/warc/CC-MAIN-20220705053147-20220705083147-00028.warc.gz"}
|
https://www.opuscula.agh.edu.pl/om-vol29iss4art10
|
Opuscula Math. 29, no. 4 (2009), 443-452
http://dx.doi.org/10.7494/OpMath.2009.29.4.443
Opuscula Mathematica
# Topological classification of conformal actions on p-hyperelliptic and (q,n)-gonal Riemann surfaces
Ewa Tyszkowska
Abstract. A compact Riemann surface $$X$$ of genus $$g \gt 1$$ is said to be $$p$$-hyperelliptic if $$X$$ admits a conformal involution $$\rho$$ for which $$X / \rho$$ has genus $$p$$. A conformal automorphism $$\delta$$ of prime order $$n$$ such that $$X / \delta$$ has genus $$q$$ is called a $$(q,n)$$-gonal automorphism. Here we study conformal actions on $$p$$-hyperelliptic Riemann surface with $$(q,n)$$-gonal automorphism.
Keywords: $$p$$-hyperelliptic Riemann surface, automorphism of a Riemann surface.
Mathematics Subject Classification: 30F20, 30F50, 14H37, 20H30, 20H10.
Full text (pdf)
• Ewa Tyszkowska
• University of Gdańsk, Institute of Mathematics, ul. Wita Stwosza 57, 80-952 Gdansk, Poland
• Revised: 2009-07-21.
• Accepted: 2009-07-27.
Ewa Tyszkowska, Topological classification of conformal actions on p-hyperelliptic and (q,n)-gonal Riemann surfaces, Opuscula Math. 29, no. 4 (2009), 443-452, http://dx.doi.org/10.7494/OpMath.2009.29.4.443
a .bib file (BibTeX),
a .ris file (RefMan),
a .enw file (EndNote)
or export to RefWorks.
In accordance with EU legislation we advise you this website uses cookies to allow us to see how the site is used. All data is anonymized.
All recent versions of popular browsers give users a level of control over cookies. Users can set their browsers to accept or reject all, or certain, cookies.
|
2019-01-16 16:05:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26695823669433594, "perplexity": 3382.06513173908}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583657555.87/warc/CC-MAIN-20190116154927-20190116180927-00446.warc.gz"}
|
https://socratic.org/questions/the-area-of-a-square-is-45-more-than-the-perimeter-how-do-you-find-the-length-of-1
|
The area of a square is 45 more than the perimeter. How do you find the length of the side?
Jan 15, 2016
Length of one side is 9 units.
Rather than doing a straight factorising approach I have used the formula to demonstrate its use.
Explanation:
As it is a square the length of all the sides is the same.
Let the length of 1 side be L
Let the area be A
Then $A = {L}^{2}$............................(1)
Perimeter is $4 L$........................(2)
The question states: "The area of a square is 45 more than.."
$\implies A = 4 L + 45$.................................(3)
Substitute equation (3) into equation (1) giving:
$A = 4 L + 45 = {L}^{2.} \ldots \ldots \ldots \ldots \ldots \ldots . \left({1}_{a}\right)$
So now we are able to write just 1 equation with 1 unknown, which is solvable.
'~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
$4 L + 45 = {L}^{2}$
Subtract ${L}^{2}$ from both sides giving a quadratic.
$- {L}^{2} + 4 L + 45 = 0$
The conditions that satisfy this equation equalling zero gives us the potential size of L
Using $a x + b x + c = 0$ where $x = \frac{- b \pm \sqrt{{b}^{2} - 4 a c}}{2 a}$
$a = - 1$
$b = 4$
$c = 45$
$x = \frac{- 4 \pm \sqrt{{\left(4\right)}^{2} - 4 \left(- 1\right) \left(45\right)}}{2 \left(- 1\right)}$
$x = \frac{- 4 \pm 14}{- 2}$
$x = \frac{- 18}{- 2} = + 9$
$x = \frac{+ 10}{- 2} = - 5$
Of these two $x = - 5$ is not a logical length of side so
$x = L = 9$
${\text{Check "-> A= 9^2= 81 "units}}^{2}$
$4 L = 36 \to 81 - 36 = 45$
So area does indeed equal sum of sides + 45
|
2020-01-24 20:37:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 20, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8442081809043884, "perplexity": 609.5094738064789}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250625097.75/warc/CC-MAIN-20200124191133-20200124220133-00493.warc.gz"}
|
https://pinakimondal.org/tag/luroths-theorem/
|
# Lüroth’s theorem (a “constructive” proof)
Lüroth’s theorem (Lüroth 1876 for $$k = \mathbb{C}$$, Steinitz 1910 in general). If $$k \subseteq K$$ are fields such that $$k \subseteq K \subseteq k(x)$$, where $$x$$ is an indeterminate over $$k$$, then $$K = k(g)$$ for some rational function $$g$$ of $$x$$ over $$k$$.
I am going to present a “constructive” proof of Lüroth’s theorem due to Netto (1895) that I learned from Schinzel’s Selected Topics on Polynomials (and give some applications to criteria for proper polynomial parametrizations). The proof uses the following result which I am not going to prove here:
Proposition (with the set up of Lüroth’s theorem). $$K$$ is finitely generated over $$k$$, i.e. there are finitely many rational functions $$g_1, \ldots, g_s \in k(x)$$ such that $$K = k(g_1, \ldots, g_s)$$.
The proof is constructive in the following sense: given $$g_1, \ldots, g_s$$ as in the proposition, it gives an algorithm to determine $$g$$ such that $$K = k(g)$$. We use the following notation in the proof: given a rational function $$h \in k(x)$$, if $$h = h_1/h_2$$ with polynomials $$h_1, h_2 \in k[x]$$ with $$\gcd(h_1, h_2) = 1$$, then we define $$\deg_\max(h) := \max\{\deg(h_1), \deg(h_2)\}$$.
## Proof of Lüroth’s theorem
It suffices to consider the case that $$K \neq k$$. Pick $$g_1, \ldots, g_s$$ as in the proposition. Write $$g_i = F_i/G_i$$, where
• $$\gcd(F_i, G_i) = 1$$ (Property 1).
Without loss of generality (i.e. discarding $$g_i \in k$$ or replacing $$g_i$$ by $$1/(g_i + a_i)$$ for appropriate $$a_i \in k$$ if necessary) we can also ensure that
• $$\deg(F_i) > 0$$ and $$\deg(F_i) > \deg(G_i)$$ (Property 2).
Consider the polynomials $H_i := F_i(t) – g_iG_i(t) \in K[t] \subset k(x)[t], i = 1, \ldots, s,$ where $$t$$ is a new indeterminate. Let $$H$$ be the greatest common divisor of $$H_1, \ldots, H_s$$ in $$k(x)[t]$$ which is also monic in $$t$$. Since the Euclidean algorithm for computing $$\gcd$$ respects the field of definition, it follows that:
• $$H$$ is also the greatest common divisor of $$H_1, \ldots, H_s$$ in $$K[t]$$, which means, if $$H = \sum_j h_j t^j$$, then each $$h_j \in K$$ (Property 3).
Let $$H^* \in k[x,t]$$ be the polynomial obtained by “clearing the denominator” of $$H$$; in other words, $$H = H^*/h(x)$$ for some polynomial $$h \in k[x]$$ and $$H^*$$ is primitive as a polynomial in $$t$$ (i.e. the greatest common divisor in $$k[x]$$ of the coefficients in $$H^*$$ of powers of $$t$$ is 1). By Gauss’s lemma, $$H^*$$ divides $$H^*_i := F_i(t)G_i(x) – F_i(x)G_i(t)$$ in $$k[x,t]$$, i.e. there is $$Q_i \in k[x,t]$$ such that $$H^*_i = H^* Q_i$$.
Claim 1. If $$\deg_t(H^*) < \deg_t(H^*_i)$$, then $$\deg_x(Q_i) > 0$$.
Proof of Claim 1. Assume $$\deg_t(H^*) < \deg_t(H^*_i)$$. Then $$\deg_t(Q_i) > 1$$. If in addition $$\deg_x(Q_i) = 0$$, then we can write $$Q_i(t)$$ for $$Q_i$$. Let $$F_i(t) \equiv \tilde F_i(t) \mod Q_i(t)$$ and $$G_i(t) \equiv \tilde G_i(t) \mod Q_i(t)$$ with $$\deg(\tilde F_i) < \deg(Q_i)$$ and $$\deg(\tilde G_i) < \deg(Q_i)$$. Then $$\tilde F_i(t)G_i(x) – F_i(x) \tilde G_i(t) \equiv 0 \mod Q_i(t)$$. Comparing degrees in $$t$$, we have $$\tilde F_i(t)G_i(x) = F_i(x) \tilde G_i(t)$$. It is straightforward to check that this contradicts Propeties1 and 2 above, and completes the proof of Claim 1.
Let $$m := \min\{\deg_\max(g_i): i = 1, \ldots, s\}$$, and pick $$i$$ such that $$\deg_\max(g_i) = m$$. Property 2 above implies that $$\deg_t(H^*_i) = \deg_x(H^*_i) = m$$. If $$\deg_t(H^*) < m$$, then Claim 1 implies that $$\deg_x(H^*) < \deg_x(H^*_i) = m$$. If the $$h_j$$ are as in Property 3 above, it follows that $$\deg_\max(h_j) < m$$ for each $$j$$. Since $$H^* \not\in k[t]$$ (e.g. since $$t-x$$ divides each $$H_i$$), there must be at least one $$h_j \not \in k$$. Since adding that $$h_j$$ to the list of the $$g_i$$ decreases the value of $$m$$, it follows that the following algorithm must stop:
### Algorithm
• Step 1: Pick $$g_i := F_i/G_i$$, $$i = 1, \ldots, s$$, satisfying properties 1 and 2 above.
• Step 2: Compute the monic (with respect to $$t$$) $$\gcd$$ of $$F_i(t) – g_i G_i(t)$$, $$i = 1, \ldots, s$$, in $$k(x)[t]$$; call it $$H$$.
• Step 3: Write $$H = \sum_j h_j(x) t^j$$. Then each $$h_j \in k(g_1, \ldots, g_s)$$. If $$\deg_t(H) < \min\{\deg_\max(g_i): i = 1, \ldots, s\}$$, then adjoin all (or, at least one) of the $$h_j$$ such that $$h_j \not\in k$$ to the list of the $$g_i$$ (possibly after an appropriate transformation to ensure Property 2), and repeat.
After the last step of the algorithm, $$H$$ must be one of the $$H_i$$, in other words, there is $$\nu$$ such that $\gcd(F_i(t) – g_i G_i(t): i = 1, \ldots, s) = F_{\nu}(t) – g_{\nu}G_{\nu}(t).$
Claim 2. $$K = k(g_{\nu})$$.
Proof of Claim 2 (and last step of the proof of Lüroth’s theorem). For a given $$i$$, polynomial division in $$k(g_\nu)[t]$$ gives $$P, Q \in k(g_\nu)[t]$$ such that $F_i(t) = (F_{\nu}(t) – g_{\nu}G_{\nu}(t))P + Q,$ where $$\deg_t(Q) < \deg_t(F_{\nu}(t) – g_{\nu}G_{\nu}(t))$$. If $$Q = 0$$, then $$F_i(t) = (F_{\nu}(t) – g_{\nu}G_{\nu}(t))P$$, and clearing out the denominator (with respect to $$k[g_\nu]$$) of $$P$$ gives an identity of the form $$F_i(t)p(g_\nu) = (F_{\nu}(t) – g_{\nu}G_{\nu}(t))P^* \in k[g_\nu, t]$$ which is impossible, since $$F_{\nu}(t) – g_{\nu}G_{\nu}(t)$$ does not factor in $$k[g_\nu, t]$$. Therefore $$Q \neq 0$$. Similarly, $G_i(t) = (F_{\nu}(t) – g_{\nu}G_{\nu}(t))R + S,$ where $$R, S \in k(g_\nu)[t]$$, $$S \neq 0$$, and $$\deg_t(S) < \deg_t(F_{\nu}(t) – g_{\nu}G_{\nu}(t))$$. It follows that $F_i(t) – g_iG_i(t) = (F_{\nu}(t) – g_{\nu}G_{\nu}(t))(P – g_iR) + Q – g_iS.$ Since $$F_{\nu}(t) – g_{\nu}G_{\nu}(t)$$ divides $$F_{i}(t) – g_{i}G_{i}(t)$$ in $$k(x)[t]$$ and since $$\deg_t(Q – g_iS) < \deg_t(F_{\nu}(t) – g_{\nu}G_{\nu}(t))$$, it follows that $$Q = g_iS$$. Taking the leading coefficients (with respect to $$t$$) $$q_0, s_0 \in k(g_\nu)$$ of $$Q$$ and $$S$$ gives that $$g_i = q_0/s_0 \in k(g_\nu)$$, as required to complete the proof.
## Applications
The following question seems to be interesting (geometrically, it asks when a given polynomial parametrization of a rational affine plane curve is proper).
Question 1. Let $$k$$ be a field and $$x$$ be an indeterminate over $$k$$ and $$g_1, g_2 \in k[x]$$. When is $$k(g_1, g_2) = k(x)$$?
We now give a sufficient condition for the equality in Question 1. Note that the proof is elementary: it does not use Lüroth’s theorem, only follows the steps of the above proof in a special case.
Corollary 1. In the set up of Question 1, let $$d_i := \deg(g_i)$$, $$i = 1, 2$$. If the $$\gcd$$ of $$x^{d_1} – 1, x^{d_2} – 1$$ in $$k[x]$$ is $$x – 1$$, then $k(g_1, g_2) = k(t)$. In particular, if $$d_1, d_2$$ are relatively prime and the characteristic of $$k$$ is either zero or greater than both $$d_1, d_2$$, then $k(g_1, g_2) = k(x)$.
Remark. Corollary 1 is true without the restriction on characteristics, i.e. the following holds: “if $$d_1, d_2$$ are relatively prime, then $k(g_1, g_2) = k(x)$.” François Brunault (in a comment to one of my questions on MathOverflow) provided the following simple one line proof: $$[k(x): k(g_1, g_2)]$$ divides both $$[k(x): k(g_i)] = d_i$$, and therefore must be $$1$$.
My original proof of Corollary 1. Following the algorithm from the above proof of Lüroth’s theorem, let $$H_i := g_i(t) – g_i(x)$$, $$i = 1, 2$$, and $$H \in k(x)[t]$$ be the monic (with respect to $$t$$) greatest common divisor of $$H_1, H_2$$.
Claim 1.1. $$H = t – x$$.
Proof. It is clear that $$t-x$$ divides $$H$$ in $$k(x)[t]$$, so that $$H(x,t) = (t-x)h_1(x,t)/h_2(x)$$ for some $$h_1(x,t) \in k[x,t]$$ and $$h_2(x) \in k[x]$$. It follows that there is $$Q_i(x,t) \in k[x,t]$$ and $$P_i(x) \in k[x]$$ such that $$H_i(x,t)P_i(x)h_2(x) = (t-x)h_1(x,t)Q_i(x,t)$$. Since $$h_2(x)$$ and $$(t-x)h_1(x,t)$$ have no common factor, it follows that $$h_2(x)$$ divides $$Q_i(x,t)$$, and after cancelling $$h_2(x)$$ from both sides, one can write $H_i(x,t)P_i(x) = (t-x)h_1(x,t)Q’_i(x,t),\ i = 1, 2.$ Taking the leading form of both sides with respect to the usual degree on $$k[x,t]$$, we have that $(t^{d_i} – x^{d_i})x^{p_i} = a_i(t-x)\mathrm{ld}(h_1)\mathrm{ld}(Q’_i)$ where $$a_i \in k \setminus \{0\}$$ and $$\mathrm{ld}(\cdot)$$ is the leading form with respect to the usual degree on $$k[x,t]$$. Since $$\gcd(x^{d_1} – 1, x^{d_2} – 1) = x – 1$$, it follows that $$\mathrm{ld}(h_1)$$ does not have any factor common with $$t^{d_i} – x^{d_i}$$, and consequently, $$t^{d_i} – x^{d_i}$$ divides $$(t-x)\mathrm{ld}(Q’_i)$$. In particular, $$\deg_t(Q’_i) = d_i – 1$$. But then $$\deg_t(h_1) = 0$$. Since $$H = (t-x)h_1(x)/h_2(x)$$ is monic in $$t$$, it follows that $$H = t – x$$, which proves Claim 1.1.
Since both $$H_i$$ are elements of $$k(g_1, g_2)[t]$$, and since the Euclidean algorithm to compute $$\gcd$$ of polynomials (in a single variable over a field) preserves the field of definition, it follows that $$H \in k(g_1, g_2)[t]$$ as well (this is precisely the observation of Property 3 from the above proof of Lüroth’s theorem). Consequently $$x \in k(g_1, g_2)$$, as required to prove Corollary 1.
## References
• Andrzej Schinzel, Selected Topics on Polynomials, The University of Michigan Press, 1982.
|
2022-12-04 15:41:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9807514548301697, "perplexity": 83.06647430942098}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710974.36/warc/CC-MAIN-20221204140455-20221204170455-00072.warc.gz"}
|
https://www.physicsforums.com/threads/equation-of-a-line-that-is-tangent-to-f-x.258043/
|
# Equation of a line that is tangent to f(x)
1. Sep 21, 2008
### Squiller
In order to find the equation of a line that is tangent to f(x) and goes through point P on f, you gotta find the derivative of f(x) at P, but how would you go about solving a problem where you have to find the equation of a line tangent to f(x) that goes through point P, but P is NOT on the graph of f.
2. Sep 21, 2008
Re: Tangents
Just to get my facts straight: you have a function $$f(x)$$, a point $$P$$ that is not on the graph of $$f(x)$$, and you want an equation of a line that
1. passes through the point $$P$$, and
2. is tangent to the graph of $$f(x)$$
If this is correct, what else is stated in the problem - do you have a specific function $$f$$, at what point(s) is the line to be tangent, etc. Further, what have you tried?
3. Sep 21, 2008
### Squiller
Re: Tangents
f(x) = 4x-x2
Question: Find the equations of the lines that pass through P(2,7) and are tangent to the graph of f(x).
(P is not on f(x).)
Thats all the problem states.
Ive tried finding f'(x) and plugging f' into the Line equation y=mx+b.
y=(4-2x)x+b.
Then plugging in Point P.
7=(4-2x)2+b - Im not really sure if this is heading in the right direction.
|
2016-12-02 18:09:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.595400869846344, "perplexity": 229.1819375034256}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698540409.8/warc/CC-MAIN-20161202170900-00467-ip-10-31-129-80.ec2.internal.warc.gz"}
|
http://mathhelpforum.com/advanced-math-topics/21605-convergence-proof.html
|
Math Help - Convergence proof
1. Convergence proof
Using the given that the sequence {1/n} converges to 0, prove that none of the following assertions are equivalent to the definition of convergence of a sequence {an} to the number a:
a. For some ε>0 there is an index N such that
|an-a|<ε for all indices n≥N.
b. For each ε>0 and each index N
|an-a|<ε for all indices n≥N.
c. There is an index N such that for every number ε > 0,
|an-a|<ε for all indices n≥N.
2. Originally Posted by uconn711
Using the given that the sequence {1/n} converges to 0, prove that none of the following assertions are equivalent to the definition of convergence of a sequence {an} to the number a:
a. For some ε>0 there is an index N such that
|an-a|<ε for all indices n≥N.
Consider the sequence
a_n=(-1)^n, n=0, 1, ..
This does not converge.
but for ε=2 |a_n-0|<ε, for all n>0.
RonL
|
2015-04-26 20:23:29
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.932428777217865, "perplexity": 846.0601550363838}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246655962.81/warc/CC-MAIN-20150417045735-00253-ip-10-235-10-82.ec2.internal.warc.gz"}
|
https://support.bioconductor.org/p/55327/
|
Question: Summarized Experiments - Combine
0
5.5 years ago by
Italy
Jose M Garcia Manteiga310 wrote:
Dear Martin and BioC, I am using the Summarized Experiments object in DESeq2 to upload RNA- Seq data. I have resequenced a subset of samples in my experiment and re-counted them with htseq-count, therefore I end up with a subset of new files with new count data. I wondered whether there is a in-R way of using the SummarizedExperiments data object to sum up the counts of the second Run to the counts of the first run. Of course I know how to do without the DESeqDataSet and other DESeq functions but I thought that maybe there is a quick and safer way by using some combine onto 2 different dds (SE) objects of same rowData with samples to sum. Thanks in advance Best Jose
deseq deseq2 • 2.2k views
modified 5.5 years ago by Valerie Obenchain6.7k • written 5.5 years ago by Jose M Garcia Manteiga310
0
5.5 years ago by
United States
Valerie Obenchain6.7k wrote:
Hi Jose, If the SummarizedExperiment objects have the same rowData you can use cbind() to combine the data into a single SE object. This combines the data in info() and assays() so the counts of each SE object end up in the assays()$counts matrix as separate columns. cmb <- cbind(SE1, SE2) In this way you can create one SE object that holds all counts for the different experiments. To sum the counts use rowSums() on the count matrix. rowSums(assays(cmb)$counts) Valerie On 10/07/2013 05:43 AM, Garcia Manteiga Jose Manuel wrote: > Dear Martin and BioC, > I am using the Summarized Experiments object in DESeq2 to upload RNA-Seq data. I have resequenced a subset of samples in my experiment and re-counted them with htseq-count, therefore I end up with a subset of new files with new count data. I wondered whether there is a in-R way of using the SummarizedExperiments data object to sum up the counts of the second Run to the counts of the first run. Of course I know how to do without the DESeqDataSet and other DESeq functions but I thought that maybe there is a quick and safer way by using some combine onto 2 different dds (SE) objects of same rowData with samples to sum. > Thanks in advance > Best > Jose > > _______________________________________________ > Bioconductor mailing list > Bioconductor at r-project.org > https://stat.ethz.ch/mailman/listinfo/bioconductor > Search the archives: http://news.gmane.org/gmane.science.biology.informatics.conductor >
Thanks Valerie, I have used both techniques. First I have created an SE object with cbind that was the 'union' of both. Then I have changed the colums(samples) by using a rowSums command like yours on the SE.sum counts I noticed that there is a collapse procedure in the Vignette of parathyroidSE for technical replicates that could be more or less the same. Thank you all, Jose On Oct 7, 2013, at 5:55 PM, Valerie Obenchain <vobencha at="" fhcrc.org=""> wrote: > Hi Jose, > > If the SummarizedExperiment objects have the same rowData you can use > cbind() to combine the data into a single SE object. This combines the > data in info() and assays() so the counts of each SE object end up in > the assays()$counts matrix as separate columns. > > cmb <- cbind(SE1, SE2) > > In this way you can create one SE object that holds all counts for the > different experiments. To sum the counts use rowSums() on the count matrix. > > rowSums(assays(cmb)$counts) > > > Valerie > > On 10/07/2013 05:43 AM, Garcia Manteiga Jose Manuel wrote: >> Dear Martin and BioC, >> I am using the Summarized Experiments object in DESeq2 to upload RNA-Seq data. I have resequenced a subset of samples in my experiment and re-counted them with htseq-count, therefore I end up with a subset of new files with new count data. I wondered whether there is a in-R way of using the SummarizedExperiments data object to sum up the counts of the second Run to the counts of the first run. Of course I know how to do without the DESeqDataSet and other DESeq functions but I thought that maybe there is a quick and safer way by using some combine onto 2 different dds (SE) objects of same rowData with samples to sum. >> Thanks in advance >> Best >> Jose >> >> _______________________________________________ >> Bioconductor mailing list >> Bioconductor at r-project.org >> https://stat.ethz.ch/mailman/listinfo/bioconductor >> Search the archives: http://news.gmane.org/gmane.science.biology.informatics.conductor >> >
|
2019-04-21 16:28:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21790580451488495, "perplexity": 2859.26989466317}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578531994.14/warc/CC-MAIN-20190421160020-20190421182020-00481.warc.gz"}
|
http://clay6.com/qa/24826/the-sum-of-three-terms-of-a-strictly-increasing-gp-is-alpha-s-and-sum-of-th
|
Browse Questions
# The sum of three terms of a strictly increasing GP is $\alpha S$ and sum of their squares is $S^{2}$. $\alpha^{2}$ lies in
$(a)\;(\frac{1}{3},2)\qquad(b)\;(\frac{1}{3},3)\qquad(c)\;(1,2)\qquad(d)\;None\;of\;these$
Answer : (d) None of these
Explanation : $a\;(\frac{1}{r}+1+r)=\alpha\;S-----(1)$
$a^2\;(\frac{1}{r^2}+1+r^2)=S^{2}------(2)$
Dividing (2) by (1)
$a\;(\frac{1}{r}-1+r)=\large\frac{S}{\alpha}-------(3)$
From (2) and (3)
$2a=S\;(\alpha-\large\frac{1}{\alpha})=S\;(\large\frac{\alpha^2-1}{\alpha})$
Putting this in (2) we get ,
$\large\frac{(\alpha^2-1)^2}{4\alpha^2}\;(\large\frac{1}{r^2}+1+r^2)=1$
$(r-\large\frac{1}{r})^2+3=\large\frac{4\alpha^2}{(\alpha^2-1)^2}$
$3\alpha^4-10\alpha^2+3 < 0$
$(3\alpha^2-1)(\alpha^2-3) < 0$
$\large\frac{1}{3} < \alpha^2 < 3$
But $\alpha^2=1\quad\;a=0\;is\;not\;possible$
$so\;,\alpha^2 \in (\large\frac{1}{3},3)----(1)\;.$
|
2017-07-20 14:40:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9401540160179138, "perplexity": 1258.2757724018493}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423222.65/warc/CC-MAIN-20170720141821-20170720161821-00258.warc.gz"}
|
https://www.physicsforums.com/threads/spring-mass-system-with-applied-force-kinetic-energy.314636/
|
# Spring mass system with applied force kinetic energy
## Main Question or Discussion Point
I'm trying to work out a model for a spring mass system with a force acting at the centre of mass of the mass using Lagrangian mechanics. I can't work out the kinetic energy. I know the kinetic energy $$\text{KE}=\dfrac{1}{2}mv^2$$. I also have $$W=\int_a^b F \, dt$$.
Should I use $$\Delta \text{KE} + \Delta \text{PE} =W$$
Some help will be appreciated.
P.S. I'm using Lagrangian mechanics because later I'm planning some calculations on double spring systems with applied force.
W = $$\int$$ $$\vec{F}$$ $$\cdot$$ $$\vec{dx}$$
|
2020-02-23 18:09:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5887141823768616, "perplexity": 312.40244661144396}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145818.81/warc/CC-MAIN-20200223154628-20200223184628-00094.warc.gz"}
|
https://mailman.ntg.nl/pipermail/ntg-context/2004/006487.html
|
# [NTG-context] EPS/PDF import and \underbrace in metaplay
Vit Zyka vit.zyka at seznam.cz
Thu Aug 5 17:03:52 CEST 2004
> I want to make something like
>
> [start of TeX code]
> \displaystyle{\underbrace{x^2+y^2=z^2}_{\hbox{simple equation}}}
> [end of tex code]
>
> except that there has to be a metapost picture instead of "x^2+y^2=z^2"
I did not test but I see no reason why not to use \...MP...
inside \underbrace, like
$\underbrace{\hbox{\uniwueMPgraphic[pict]}_{\hbox{simple equation}}$
Is there any problem?
> Is it possible to place a graphic inside a \hbox in some btex ... etex
> or textext expression? In that case this would help me a lot, otherwise
Not. Process is:
1) mpost geterate from btex...etex construction TeX file.
2) tex: tex -> dvi (graphics is as \special)
3) makempx dvi -> mp !! problem how to handle graphics
Vit Zyka
|
2016-05-30 01:06:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9971203207969666, "perplexity": 13230.300902199517}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049282327.67/warc/CC-MAIN-20160524002122-00175-ip-10-185-217-139.ec2.internal.warc.gz"}
|
https://joemathjoe.wordpress.com/2019/11/06/combinatorics-lecture-3-3-oct-2019/
|
# Combinatorics, Lecture 3 (3 Oct 2019)
Lecture 2 here. Thanks again to Tim Hosgood for the beautiful pictures.
## The category of species
Last time, we looked at the relationship between species and their generating functions, a formal power series associated to a species which lets you count structures described by the species. Now we’ll take a closer look at species themselves. In particular, there is a category where the objects are species. As is the case in so many branches of math, looking at the category of our main objects of study will be a useful perspective.
There is a category of species, called $\mathsf{Set}^\mathsf{S}$, with species $G\colon\mathsf{S}\to\mathsf{Set}$ as objects and natural transformations $\alpha \colon G \Rightarrow G'$ as morphisms.
There is a subcategory of $\mathsf{Set}^\mathsf{S}$ where the objects are finite species, denoted $\mathsf{F}^\mathsf{S}$.
We’ve talked about functors in this class before, but not yet natural transformations. Recall: given categories $\mathcal{C}$ and $\mathcal{D}$, a functor $G \colon \mathcal{C} \to \mathcal{D}$ sends objects $c \in \mathcal{C}$ to objects $G(c) \in \mathcal{D}$, and morphisms $f \colon c\to c'$ in $\mathcal{C}$ to morphisms $G(f) \colon G(c) \to G(c')$ in $\mathcal{D}$ such that $G(g \circ f) = G(g) \circ G(f)$ and $G(1_c) = 1_{G(c)}$.
If $G,H \colon \mathcal{C} \to \mathcal{D}$ are functors, then a natural transformation $\alpha \colon G \Rightarrow H$ assigns to each object $c \in \mathcal{C}$ a morphism $\alpha_c \colon G(c) \to H(c)$ in $\mathcal{D}$, such that the naturality square
commutes. If all the components $\alpha_c$ are invertible, then we say that $\alpha$ is a natural isomorphism.
Proposition: If $\alpha$ is a natural isomorphism then there exists a natural transformation $\alpha^{-1}$ given by $(\alpha^{-1})_c=(\alpha_c)^{-1}$.
Theorem: If $\mathcal{C}$ and $\mathcal{D}$ are categories then there is a category $\mathcal{D}^\mathcal{C}$ whose objects are functors $\mathcal{C}\to\mathcal{D}$ and whose morphisms are natural transformations, with composition $\beta\circ\alpha\colon G\Rightarrow J$ of $\alpha\colon G\Rightarrow H$ and $\beta \colon H \Rightarrow J$ given by $(\beta \circ \alpha)_c = \beta_c \circ \alpha_c$.
Example: Let $P\colon\mathsf{S}\to\mathsf{F}$ be defined as follows. For a finite set X, let P(X) be the set of all ways of performing the following procedure:
• draw a regular (|X|+1)-gon;
• label all but one of the sides with the elements of X;
• triangulate the polygon.
For example, here is an element of P(5):
Example: Let $B : \mathsf{S} \to \mathsf{F}$ be the species defined as follows: for a finite set X, B(X) is the set of binary, planar, rooted trees whose leaves are equipped with a bijection to X, or X-labelled trees for short. For X=5, B(5) has an element like this:
Theorem: $P \cong B$, i.e.\ the functors are naturally isomorphic
Proof idea: We need bijections $\alpha_X : P(X) \xrightarrow{\sim} B(X)$ for $X \in \mathsf{S}$ which are natural.
Given the tree in $B(5)$ from the above example, let’s construct the corresponding element of $P(5)$. We start with a (5+1)-gon.
Rather than labelling sides, we’re going to label vertices that we place just outside of each side of the polygon (except for one that we place inside). So we draw $5+1$ vertices, label the top-most one `root’, and the others in the same order as the leaves of our element of $B(5)$.
Then we draw a copy of our tree inside the polygon.
Finally, we triangulate our polygon by crossing each branch of the tree exactly once.
Theorem: Let $G, H : \mathsf{S} \to \mathsf{F}$ be finite species. If $G \cong H$ then $|G| = |H|$.
Note that the converse is not true! In future lectures, we’ll look at examples of non-isomorphic species with the same generating functions.
Proof: If there’s a natural isomorphism $\alpha : G \Rightarrow H$, then there’s a bijection $\alpha_x : G(X) \xrightarrow{\sim} H(X)$ for all $X \in \mathsf{S}$, so $|G(X)| = |H(X)|$ for all X, thus
$|G|(x) = \sum \frac{|G(n)|}{n!} x^n = \sum \frac{|H(n)|}{n!} x^n = |H|(x)$.
In Lecture 4, we’ll begin to see how we can actually use species and their generating functions to solve problems in combinatorics.
|
2020-04-08 11:32:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 51, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9427362680435181, "perplexity": 227.40923019525312}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371813538.73/warc/CC-MAIN-20200408104113-20200408134613-00223.warc.gz"}
|
https://crazyproject.wordpress.com/2011/11/17/compute-the-invariant-factors-and-elementary-divisors-of-a-given-module/
|
## Compute the invariant factors and elementary divisors of a given module
Suppose the vector space $V$ is the direct sum of cyclic $F[x]$-modules whose annihilators are $(x+1)^2$, $(x-1)(x^2+1)^2$, $x^4-1$, and $(x+1)(x^2-1)$. Determine the invariant factors and elementary divisors of $V$.
We have $V \cong F[x]/((x+1)^2) \oplus F[x]/((x-1)(x^2+1)^2) \oplus F[x]/(x^4-1) \oplus F[x]/((x+1)(x^2-1))$. If $x^2+1$ is irreducible over $F$, then by the Chinese Remainder Theorem for modules (see here and here), we have $V \cong F[x]/((x+1)^2) \oplus F[x]/(x-1)$ $\oplus F[x]/((x^2+1)^2) \oplus F[x]/(x+1)$ $\oplus F[x]/(x-1) \oplus F[x]/(x^2+1)$ $\oplus F[x]/((x+1)^2) \oplus F[x]/(x-1)$.
The elementary divisors of $V$ are thus $(x^2+1)^2$, $x^2+1$, $(x+1)^2$, $(x+1)^2$, $x+1$, $x-1$, $x-1$, and $x-1$.
The invariant factors of $V$ are $(x^2+1)^2(x+1)^2(x-1)$, $(x^2+1)(x+1)^2(x-1)$, and $(x+1)(x-1)$.
If $x^2+1$ is reducible over $F$, then is has a root $\alpha$, and indeed $x^2+1 = (x+\alpha)(x-\alpha)$. Now by the Chinese Remainder Theorem for modules we have $V \cong F[x]/((x+1)^2) \oplus F[x]/(x-1)$ $\oplus F[x]/((x+\alpha)^2) \oplus F[x]/((x-\alpha)^2) \oplus F[x]/(x+1)$ $\oplus F[x]/(x-1) \oplus F[x]/(x+\alpha) \oplus F[x]/(x-\alpha)$ $\oplus F[x]/((x+1)^2) \oplus F[x]/(x-1)$.
So the elementary divisors of $V$ are $(x-\alpha)^2$, $x-\alpha$, $(x+\alpha)^2$, $x+\alpha$, $(x+1)^2$, $(x+1)^2$, $x+1$, $x-1$, $x-1$, and $x-1$.
The invariant factors of $V$ are $(x-\alpha)^2(x+\alpha)^2(x+1)^2(x-1)$, $(x-\alpha)(x+\alpha)(x+1)^2(x-1)$, and $(x+1)(x-1)$. (Which are the same as if $F$ did not contain $\alpha$. This is to be expected in light of Corollary 18 on page 477 of D&F.)
You need to consider characteristic 2, then $( x+1)$ factors to $(x+i)^2$
All that matters here is how the polynomials factor over $F$.
|
2017-03-23 02:08:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 53, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9975380301475525, "perplexity": 38.186384152079924}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218186608.9/warc/CC-MAIN-20170322212946-00583-ip-10-233-31-227.ec2.internal.warc.gz"}
|
https://bioinformatics.stackexchange.com/questions/18483/what-are-values-of-filter-column-of-vcf-files-produced-by-mutect2
|
# What are values of FILTER column of vcf files produced by Mutect2
I have called SNVs with Mutect2, now in filter column of a file called filtered vcf I have a lot of things like
> length(unique(filtered.vcf$FILTER)) [1] 409 > > unique(filtered.vcf$FILTER)
[1] "germline"
[2] "PASS"
[3] "clustered_events"
[4] "clustered_events;panel_of_normals"
[5] "weak_evidence"
[6] "haplotype"
[7] "base_qual;haplotype"
[8] "clustered_events;germline;haplotype"
[9] "base_qual;clustered_events;germline;haplotype"
[10] "clustered_events;haplotype"
[11] "clustered_events;haplotype;weak_evidence"
[12] "base_qual;clustered_events;haplotype"
[13] "haplotype;map_qual"
[14] "germline;panel_of_normals"
[15] "base_qual"
[16] "germline;haplotype"
[17] "germline;panel_of_normals;position"
[18] "clustered_events;orientation"
[19] "clustered_events;orientation;strand_bias"
[20] "base_qual;haplotype;map_qual"
[21] "clustered_events;fragment;haplotype"
[22] "base_qual;clustered_events"
[23] "germline;haplotype;panel_of_normals"
[24] "clustered_events;germline;haplotype;panel_of_normals"
[25] "fragment"
[26] "clustered_events;haplotype;map_qual"
[27] "clustered_events;germline"
[28] "clustered_events;germline;panel_of_normals"
[29] "germline;weak_evidence"
[30] "orientation"
I tought that the filtered.vcf Mutect2 output file contain only the filtered events, but the values I see suggest otherwise. How should I understand the output?
You haven't told us what commands you ran, so I am assuming you first ran Mutect2 and then FilterMutectCalls. If so, the FilterMutectCalls does not remove any variants from its input, it will simply flag any that fail its builtin filters. Any variants that did not fail a filter are marked as PASS and the rest have a filter added.
Now, whether you want to filter these out or not is up to you. You need to look at the VCF header where each of the filters is defined (admittedly, not very clearly) and make a judgment call on whether you consider this filter to be a deal breaker or not.
For instance, the germline flag means:
##FILTER=<ID=germline,Description="Evidence indicates this site is germline, not somatic">
So that indicates that gatk believes this is more likely to be a germline variant than a somatic one, that it is more likely to be a constituent variant present in the sample's germline cells than a variant that is only found in the sample's tumor cells.
Another clear case is the base_qual filter which means:
##FILTER=<ID=base_qual,Description="alt median base quality">
When a variant is flagged with this filter, it means that the median fastq base quality of the reads supporting the variant was significantly lower than the median fastq base quality of the reads supporting the reference alele at the same position, making it likely that the variant is a false positive.
You need to read the VCF headers and any gatk documentation you can find (warning: these filters are not very well documented at all, in my experience), understand what the filters are and then decide what variants you consider real based on what you know about your sample, your experimental design and the question you are trying to answer.
|
2022-06-27 23:30:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2590765357017517, "perplexity": 3254.08981122757}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103344783.24/warc/CC-MAIN-20220627225823-20220628015823-00135.warc.gz"}
|
http://cms.math.ca/10.4153/CMB-2010-080-2
|
Abstract view
# Interval Pattern Avoidance for Arbitrary Root Systems
Published:2010-07-26
Printed: Dec 2010
• Alexander Woo,
Department of Mathematics, Statistics and Computer Science, St. Olaf College, Northfield, MN, U.S.A.
Features coming soon:
Citations (via CrossRef) Tools: Search Google Scholar:
Format: HTML LaTeX MathJax PDF
## Abstract
We extend the idea of interval pattern avoidance defined by Yong and the author for $S_n$ to arbitrary Weyl groups using the definition of pattern avoidance due to Billey and Braden, and Billey and Postnikov. We show that, as previously shown by Yong and the author for $\operatorname{GL}_n$, interval pattern avoidance is a universal tool for characterizing which Schubert varieties have certain local properties, and where these local properties hold.
MSC Classifications: 14M15 - Grassmannians, Schubert varieties, flag manifolds [See also 32M10, 51M35] 05E15 - Combinatorial aspects of groups and algebras [See also 14Nxx, 22E45, 33C80]
|
2013-12-11 08:07:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27061811089515686, "perplexity": 5305.954935223924}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164033438/warc/CC-MAIN-20131204133353-00008-ip-10-33-133-15.ec2.internal.warc.gz"}
|
https://pixel-druid.com/separable-extensions-via-derivation.html
|
## § Separable extensions via derivation
• Let $R$ be a commutative ring, $M$ an $R$-module. A derivation is a map such that $D(a + b) = D(a) + D(b)$ and $D(ab) = aD(b) + D(a)b$ [ie, the calculus chain rule is obeyed ].
• Note that the map does not need to be an $R$-homomorphism (?!)
• The elements of $R$ such that $D(R) = 0$ are said to be the constants of $R$.
• The set of constants under $X$-differentiation for $K[X]$ in char. 0 is $K$, and $K[X^p]$ in char. p
• Let $R$ be an integral domain with field of fractions $K$. Any derivation $D: R \to K$ uniquely extends to $D': K \to K$ given by the quotient rule: $D'(a/b) = (bD(a) - aD(b))/b^2$.
• Any derivation $D: R \to R$ extends to a derivation $(.)^D: R[x] \to R[x]$. For a $f = \sum_i a_i x^i \in R[x]$, the derivation is given by $f^D(x) \equiv \sum_i D(a_i) X^i$. This applies $D$ to $f(x)$ coefficientwise.
• For a derivation $D: R \to R$ with ring of constants $C$, the associated derivation $(.)^D: R[x] \to R[x]$ has ring of constants $C[x]$.
• Key thm: Let $L/K$ be a field extension and let $D: K \to K$ be a derivation. $D$ extends uniquely to $D_L$ iff $L$ is separable over $K$.
#### § If $\alpha$ separable, then derivation over $K$ lifts uniquely to $K(\alpha)$
• Let $D: K \to K$ be a derivation.
• Let $\alpha \in L$ be separable over $K$ with minimal polynomial $\pi(X) \in K[X]$.
• So, $\pi(X)$ is irreducible in $K[X]$, $\pi(\alpha) = 0$, and $\pi'(\alpha) \neq 0$.
• Then $D$ has a unique extension $D': K(\alpha) \to K(\alpha)$ given by:
\begin{aligned} D'(f(\alpha)) \equiv f^D(\alpha) - f'(\alpha) \frac{\pi^D(\alpha)}{pi'(\alpha)} \end{aligned}
• To prove this, we start by assuming $D$ has an extension, and then showing that it must agree with $D'$. This tells us why it must look this way.
• Then, after doing this, we start with $D'$ and show that it is well defined and obeys the derivation conditions. This tells us why it's well-defined .
#### § Non example: derivation that does not extend in inseparable case
• Consider $F_p(u)$ as the base field, and let $L = F_p(u)(\alpha)$ where $\alpha$ is a root of $X^p - u \in F_p(u)[x]$. This is inseparable over $K$.
• The $u$ derivative on $F_p(u)$ [which treats $u$ as a polynomial and differentiates it ] cannot be extended to $L$.
• Consider the equation $\alpha^p = u$, which holds in $L$, since $\alpha$ was explicitly a root of $X^p - u$.
• Applying the $u$ derivative gives us $p \alpha^{p-1} D(\alpha) = D(u)$. The LHS is zero since we are in characteristic $p$. The RHS is 1 since $D$ is the $u$ derivative, and so $D(u) = 1$. This is a contradiction, and so $D$ does not exist [any mathematical operation must respect equalities ].
#### § Part 2.a: Extension by inseparable element $\alpha$ does not have unique lift of derivation for $K(\alpha)/K$
• Let $\alpha \in L$ be inseparable over $K$. Then $\pi'(X) = 0$ where $\pi(X)$ is the minimal polynomial for $\alpha \in L$.
• In particular, $\pi'(\alpha) = 0$. We will use the vanishing of $\pi'(\alpha)$ to build a nonzero derivation on $K(\alpha)$ which extends the zero derivation on $K$.
• Thus, the zero derivation on $K$ has two lifts to $K(\alpha)$: one as the zero derivation on $K(\alpha)$, and one as our non-vanishing lift.
• Define $Z: K(\alpha) \to K(\alpha)$ given by $Z(f(\alpha)) = f'(\alpha)$ where $f(x) \in K[x]$. By doing this, we are conflating elements $l \in K(\alpha)$with elements of the form $\sum_i k_i \alpha^i = f(\alpha)$. We need to check that this is well defined, that if $f(\alpha) = g(\alpha)$, then $Z(f(\alpha)) = Z(g(\alpha))$.
• So start with $f(\alpha) = g(\alpha)$. This implies that $f(x) \equiv g(x)$ modulo $\pi(x)$.
• So we write $f(x) = g(x) + k(x)\pi(x)$.
• Differentiating both sides wrt $x$, we get $f'(x) = g'(x) + k'(x) \pi(x) + k(x) \pi'(x)$.
• Since $\pi(\alpha) = \pi'(\alpha) = 0$, we get that $f'(\alpha) = g'(\alpha) + 0$ by evaluating previous equation at $\alpha$.
• This shows that $Z: K(\alpha) \to K(\alpha)$ is well defined.
• See that the derivation $Z$ kills $K$ since $K = K \alpha^0$. But we see that $Z(\alpha) = 1$, so $Z$ extends the zero derivation on $K$ while not being zero itself.
• We needed separability for the derivation to be well-defined.
#### § Part 2.b: Inseparable extension can be written as extension by inseparable element
• Above, we showed that if we have $K(\alpha)/K$ where $\alpha$ inseparable, then derivations cannot be uniquely lifted.
• We want to show that if we have $L/K$ inseparable, then derivation cannot be uniquely lifted. But this is not the same!
• $L/K$ inseparable implies that there is some $\alpha \in L$ which is inseparable, NOT that $L = K(\alpha)/K$ is inseparable!
• So we either need to find some element $\alpha$ such that $L = K(\alpha)$ [not always possible ], or find some field $F$ such that $L = F(\alpha)$ and $\alpha$ is inseparable over $F$.
• Reiterating: Given $L/K$ is inseparable, we want to find some $F/K$ such that $L = F(\alpha)$ where $\alpha$ is inseparable over $F$.
• TODO!
#### § Part 1 + Part 2: Separable iff unique lift
• Let $L/K$ be separable. By primitive element theorem, $L = K(\alpha)$ for some $\alpha \in L$, $\alpha$ separable over $K$.
• Any derivation of $K$ can be extended to a derivation of $L$ from results above. Thus, separable implies unique lift.
• Suppose $L/K$ is inseparable. Then we can write $L = F(\alpha)/K$ where $\alpha$ is inseparable over $F$, and $K \subseteq F \subseteq L$.
• Then by Part 2.a, we use the $Z$ derivation to non-zero derivation on $L$ that is zero on $F$. Since it is zero on $F$ and $K \subseteq F$, it is zero on $K$.
• This shows that if $L/K$ is inseparable, then there are two ways to lift the zero derivation, violating uniqueness.
#### § Lemma: Derivations at intermediate separable extensions
• Let $L/K$ be a finite extension, and let $F/K$ be an intermediate separable extension. So $K \subseteq F \subseteq L$ and $F/K$ is separable.
• Then we claim that every derivation $D: F \to L$ that sends $K$ to $K$ has values in $F$. (ie, it's range is only $F$, not all of $L$).
• Pick $\alpha \in F$, so $\alpha$ is separable over $K$. We know what the unique derivation looks like, and it has range only $F$.
#### § Payoff: An extension $L = K(\alpha_1, \dots, \alpha_n)$ is separable over $K$ iff $\alpha_i$ are separable
• Recursively lift the derivations up from $K_0 \equiv K$ to $K_{i+1} \equiv K_i(\alpha_i)$. If the lifts all succeed, then we have a separable extension. If the unique lifts fail, then the extension is not separable.
• The lift can only succeed to uniquely lift iff the final extension $L$ is separable.
|
2022-06-28 05:29:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 163, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9957787990570068, "perplexity": 276.43568787718334}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103355949.26/warc/CC-MAIN-20220628050721-20220628080721-00159.warc.gz"}
|
https://www.neetprep.com/question/51006-Radioactive-material-decay-constant-material-B-decay-constantInitially-number-nuclei-time-ratio-numberof-nuclei-material-B-will-beeabcd/55-Physics--Nuclei/703-Nuclei
|
# NEET Physics Nuclei Questions Solved
Radioactive material A has decay constant 8$\lambda$ and material B has decay constant $\lambda$ Initially, they have same number of nuclei. After what time, the ratio of number of nuclei of material B to that A will be $\frac{1}{e}$?
(a) $\frac{1}{\lambda }$
(b) $\frac{1}{7\lambda }$
(c) $\frac{1}{8\lambda }$
(d)$\frac{1}{9\lambda }$
|
2019-11-20 21:35:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 7, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8514977097511292, "perplexity": 1618.6424395085403}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670635.48/warc/CC-MAIN-20191120213017-20191121001017-00104.warc.gz"}
|
https://math.stackexchange.com/questions/3113592/finding-the-last-non-zero-digit-of-n-in-o1
|
# Finding the last non-zero digit of $n!$ in $O(1)$
I saw a few approaches of finding the last non-zero digit using recurrence relation, CRT etc. I came up with a trivial $$O(1)$$ approach but didn't find it anywhere so asking it here.
We can write $$1\times3\times4\times6\times7\times8\times9$$ instead of $$1\times2\times3\times4\times6\times7\times8\times9\times10$$, and same for $$11 \dots20,\ 21 \dots30$$ and so on. This gives a modulo of $$-2$$ (mod $$10$$).
So we can write $$n!$$ as $$(-2)^{\lfloor\frac{n}{10}\rfloor}$$ mod $$10$$ and calculate the rest in hand and multiply it with our result giving us an $$O(1)$$ algorithm to solve the problem.
$$2\cdot 5\cdot 10 = 100$$. That multiplies the last nonzero digit by $$1$$, leaving it unchanged.
$$12\cdot 15\cdot 20 = 3600$$. That multiplies the last nonzero digit by $$6$$, leaving it unchanged since it's even already.
$$22\cdot 25\cdot 30 = 16500$$. That multiplies the last nonzero digit by $$3$$ and divides by $$2$$. Since we have powers of $$2$$ to give, that's the same as multiplying by $$4$$. It changes.
$$32\cdot 35\cdot 40 = 44800$$. That multiplies the last digit by $$8$$, and it changes.
|
2019-08-24 00:34:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 21, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8844122290611267, "perplexity": 184.5178753135081}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027319155.91/warc/CC-MAIN-20190823235136-20190824021136-00141.warc.gz"}
|
https://www.askiitians.com/forums/Discuss-with-colleagues-and-IITians/2/54276/surface-chemistry.htm
|
Click to Chat
1800-1023-196
+91-120-4616500
CART 0
• 0
MY CART (5)
Use Coupon: CART20 and get 20% off on all online Study Material
ITEM
DETAILS
MRP
DISCOUNT
FINAL PRICE
Total Price: Rs.
There are no items in this cart.
Continue Shopping
Please explain Langmuir absorption isotherm?
7 years ago
TANAYRAJ SINGH CHOUHAN
65 Points
The Langmuir equation (also known as the Langmuir isotherm, Langmuir adsorption equation or Hill-Langmuir equation) relates the coverage or adsorption of molecules on a solid surface to gas pressure or concentration of a medium above the solid surface at a fixed temperature. The equation was developed by Irving Langmuir in 1916. The equation is stated as:
$\theta =\frac{\alpha \cdot P}{1+\alpha \cdot P}$
θ is the fractional coverage of the surface, P is the gas pressure or concentration, α is a constant.
The constant α is the Langmuir adsorption constant and increases with an increase in the binding energy of adsorption and with a decrease in temperature.
Equation Derivation
The Langmuir equation is derived starting from the equilibrium between empty surface sites ($S^*$), particles ($P$) and filled particle sites ($SP$)
$S^* + P \rightleftharpoons SP$
The equilibrium constant $K$ is thus given by the equation:
$K =\frac{[SP]}{[S^*][P]}$
Because the number of filled surface sites ($SP$) is proportional to θ, the number of unfilled sites ($S^*$) is proportional to 1-θ, and the number of particles is proportional to the gas pressure or concentration (p), the equation can be rewritten as:
$\alpha =\frac{\theta}{(1-\theta)p}$
where $\alpha$ is a constant.
Rearranging this as follows:
$\theta = \alpha(1-\theta)p$$\theta = p\alpha - p\theta\alpha$$\theta + p\theta\alpha = p\alpha$$\theta (1 + p\alpha) = p\alpha$
$\theta =\frac{\alpha \cdot p}{1+\alpha \cdot p}$
Other equations relating to adsorption exist, such as the Temkin equation (page does not exist) or the Freundlich equation. The Langmuir equation (as a relationship between the concentration of a compound adsorbingto binding sites and the fractional occupancy of the binding sites) is equivalent to the Hill equation (biochemistry).
Statistical Derivation
Langmuir isotherm can also be derived using statistical mechanics with the following assumptions:1. Suppose there are M active sites to which N particles bind.2. An active site can be occupied only by one particle.3. Active sites are independent. Probability of one site being occupied is not dependent on the status of adjacent sites.
The partition function for a system of N particles adsorbed to M sites (under the assumption that there are more sites than the particles) is:
$Q(N,M,T)=\frac{M!}{N!(M-N)!}(q\lambda)^N$
with $q\lambda$ being the distribution function for one particle:
$q=q_v(T)^3$ and $\lambda=e^{\beta \mu}$.
If we allow the number of particles to increase so that all sites are occupied, the partition function becomes:
$\Xi(\mu,M,T) = \sum_{N=0}^M Q(N,M,T)= \sum_{N=0}^M \binom{M}{N} (q\lambda)^N=(1+q\lambda)^M$
We can see that this partition function of a single active state can be expressed as
$\xi = 1+q\lambda$.
The average number of occupied spaces can now be easily calculated.
$\langle N \rangle=\frac{\partial{\ln{\Xi(\mu,M,T)}}}{\partial{\beta \mu}}=M\frac{\partial{\ln{\xi(\mu,M,T)}}}{\partial{\beta \mu}}$
Rearranging yields
$\langle s \rangle=\frac{}{M}=\frac{\partial{\ln{\xi(T)}}}{\partial{\beta \mu}}=\lambda\frac{\partial\ln{\xi(T)}}{\partial\lambda}$
And finally:
$\langle s \rangle=\frac{q\lambda}{1+q\lambda}$
Equation Fitting
The Langmuir equation is expressed here as:
${\Gamma} = \Gamma_{max} \frac{K c}{1 + K c}$
where K = Langmuir equilibrium constant, c = aqueous concentration (or gaseous partial pressure), Γ = amount adsorbed, and Γmax = maximum amount adsorbed as c increases.
The equilibrium constant is actually given by $\Gamma_{max}$:
${\Gamma(c=K^{-1})} = \Gamma_{max} \frac{K K^{-1}}{1 + K K^{-1}} = \frac{\Gamma_{max}}{2}$
The Langmuir equation can be fitted to data by linear regression and nonlinear regression methods. Commonly used linear regression methods are: Lineweaver–Burk, Eadie-Hofstee, Scatchard, and Langmuir.
The double reciprocal of the Langmuir equation yields the Lineweaver-Burk equation:
$\frac{1}{\Gamma} = \frac{1}{\Gamma_{max}} + \frac{1}{\Gamma_{max}Kc}$
A plot of (1/Γ) versus (1/c) yields a slope = 1/(ΓmaxK) and an intercept = 1/Γmax. The Lineweaver-Burk regression is very sensitive to data error and it is strongly biased toward fitting the data in the low concentration range. It was proposed in 1934. Another common linear form of the Langmuir equation is the Eadie-Hofstee equation:
$\Gamma = \Gamma_{max} - \frac{\Gamma}{Kc}$
A plot of (Γ) versus (Γ/c) yields a slope = -1/K and an intercept = Γmax. The Eadie-Hofstee regression has some bias toward fitting the data in the low concentration range. It was proposed in 1942 and 1952. Another rearrangement yields the Scatchard regression:
$\frac{\Gamma}{c} = K\Gamma_{max} - K\Gamma$
A plot of (Γ/c) versus (Γ) yields a slope = -K and an intercept = KΓmax. The Scatchard regression is biased toward fitting the data in the high concentration range. It was proposed in 1949. Note that if you invert the x and y axes, then this regression would convert into the Eadie-Hofstee regression discussed earlier. The last linear regression commonly used is the Langmuir linear regression proposed by Langmuir himself in 1918:
$\frac{c}{\Gamma} = \frac{c}{\Gamma_{max}} + \frac{1}{K\Gamma_{max}}$
A plot of (c/Γ) versus (c) yields a slope = 1/Γmax and an intercept = 1/(KΓmax). This regression is often erroneously called the Hanes-Woolf regression. The Hanes-Woolf regression was proposed in 1932 and 1957 for fitting the Michaelis-Menten equation, which is similar in form to the Langmuir equation. Nevertheless, Langmuir proposed this linear regression technique in 1918, and it should be referred to as the Langmuir linear regression when applied to adsorption isotherms. The Langmuir regression has very little sensitivity to data error. It has some bias toward fitting the data in the middle and high concentration range.
Plz. approve my answer by clicking Yes given below, if u loved it... Plz/....
And best of luck!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
7 years ago
Think You Can Provide A Better Answer ?
## Other Related Questions on Discuss with colleagues and IITians
View all Questions »
### Course Features
• 728 Video Lectures
• Revision Notes
• Previous Year Papers
• Mind Map
• Study Planner
• NCERT Solutions
• Discussion Forum
• Test paper with Video Solution
### Course Features
• 731 Video Lectures
• Revision Notes
• Test paper with Video Solution
• Mind Map
• Study Planner
• NCERT Solutions
• Discussion Forum
• Previous Year Exam Questions
|
2019-12-15 00:15:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 32, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8222669959068298, "perplexity": 6694.789087567065}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541297626.61/warc/CC-MAIN-20191214230830-20191215014830-00458.warc.gz"}
|
https://unapologetic.wordpress.com/2007/03/11/orders/?like=1&source=post_flair&_wpnonce=671450efa2
|
# The Unapologetic Mathematician
## Orders
March 11, 2007 - Posted by | Fundamentals, Orders
1. […] A well-ordering on a set is a special kind of total order: one in which every non-empty subset contains a least […]
Pingback by Well-Ordering « The Unapologetic Mathematician | April 2, 2007 | Reply
2. […] Lower Bounds and Euclid’s Algorithm One interesting question for any partial order is that of lower or upper bounds. Given a partial order and a subset we say that is a lower […]
Pingback by Greatest Lower Bounds and Euclid's Algorithm « The Unapologetic Mathematician | May 4, 2007 | Reply
3. […] A poset which has both least upper bounds and greatest lower bounds is called a lattice. In more detail, […]
Pingback by Lattices « The Unapologetic Mathematician | May 14, 2007 | Reply
4. […] containment from to those collections of subsets of which are actually topologies, it defines a partial order on the collection of all topologies on […]
Pingback by Topology « The Unapologetic Mathematician | November 5, 2007 | Reply
5. […] numbers for sequences is that they’re “directed”. That is, there’s an order on them. It’s a particularly simple order since it’s total — any two elements are […]
Pingback by Nets, Part I « The Unapologetic Mathematician | November 19, 2007 | Reply
6. […] let’s consider the collection of all subspaces of . This is a partially-ordered set, where the order is given by containment of the underlying sets. It’s sort of like the power […]
Pingback by The Sum of Subspaces « The Unapologetic Mathematician | July 21, 2008 | Reply
7. […] Complements and the Lattice of Subspaces We know that the poset of subspaces of a vector space is a lattice. Now we can define complementary subspaces in a way […]
Pingback by Orthogonal Complements and the Lattice of Subspaces « The Unapologetic Mathematician | May 7, 2009 | Reply
8. […] we want to introduce a partial order on the collection of partitions called the “dominance order”. Given partitions and , […]
Pingback by The Dominance Order on Partitions « The Unapologetic Mathematician | December 17, 2010 | Reply
9. What does the antisymmetry axiom gain / lose you?
Comment by isomorphismes | January 7, 2015 | Reply
10. Antisymmetry makes it so that if two elements satisfy $x\preceq y$ and $y\preceq x$ then we actually have $x=y$. This makes life simpler in some situations.
As a more visual example, imagine the preorder as a graph, with an arrow from $x$ to $y$ if $x\preceq y$ (pointing “up” the order). Then the graph of a preorder can have nontrivial loops, with an arrow from $x$ to $y$ and another one back. The graph of a partial order will be acyclic; partial orders are “simpler” than preorders in the same way acyclic graphs are simpler than general graphs.
Comment by John Armstrong | January 7, 2015 | Reply
|
2015-05-06 09:31:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 8, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9964938163757324, "perplexity": 871.8246936011993}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430458521872.86/warc/CC-MAIN-20150501053521-00020-ip-10-235-10-82.ec2.internal.warc.gz"}
|
https://ultimaterugbysevens.com/dgvwlwm/what-is-image-classification-in-deep-learning-7b668a
|
# what is image classification in deep learning
For example, using a model to identify animal types in images from an encyclopedia is a multiclass classification example because there are many different animal classifications that each image can be classified as. Deep learning is getting lots of attention lately and for good reason. Therefore, we will discuss just the important points here. 12/13/2017 ∙ by Luis Perez, et al. $$CNNs are trained using large collections of diverse images. Alexnet is a CNN (Convolution Neural Network) designed in 2012 at University of Toronto, read more about it here. In this paper we study the … Most of the future segmentation models tried to address this issue. But we did cover some of the very important ones that paved the way for many state-of-the-art and real time segmentation models. It is obvious that a simple image classification algorithm will find it difficult to classify such an image. In this paper, we explore and compare multiple solutions to the problem of data augmentation in image classification. If everything works out, then the model will classify … Another metric that is becoming popular nowadays is the Dice Loss. Image Segmentation Using Deep Learning: A Survey, Fully Convolutional Networks for Semantic Segmentation, Semantic Segmentation using PyTorch FCN ResNet - DebuggerCafe, Instance Segmentation with PyTorch and Mask R-CNN - DebuggerCafe, Multi-Head Deep Learning Models for Multi-Label Classification, Object Detection using SSD300 ResNet50 and PyTorch, Object Detection using PyTorch and SSD300 with VGG16 Backbone, Multi-Label Image Classification with PyTorch and Deep Learning, Generating Fictional Celebrity Faces using Convolutional Variational Autoencoder and PyTorch. We will be discussing image segmentation in deep learning. Machine Learning, Deep Learning, and Data Science. Then, there will be cases when the image will contain multiple objects with equal importance. It is basically 1 – Dice Coefficient along with a few tweaks. in a format identical to that of the images of clothing that I will use for the task of image classification with TensorFlow. The Fashion MNIST Dataset is an advanced version of the traditional MNIST dataset which is very much used as the “Hello, World” of machine learning. These three branches might seem similar. Here are just a few examples of what makes it useful. Now, let’s say that we show the image to a deep learning based image segmentation algorithm. ImageNet can be fine-tuned with more specified datasets such as Urban Atlas. Reinforcement Learning Interaction In Image Classification. Image classification can also help in healthcare. Such applications help doctors to identify critical and life-threatening diseases quickly and with ease. Transfer learning for image classification. The Dice coefficient is another popular evaluation metric in many modern research paper implementations of image segmentation. In this project, we will build a convolution neural network in Keras with python on a CIFAR-10 dataset. Inspired by Y. Lecun et al. Wheels, windows, red metal: it’s a car. Keywords—Deep learning, TensorFlow, CUDA, Image classification. Foreword. It is defined as the task of classifying an image from a fixed set of categories. ELI5: what is an artificial neural network? Classification is very coarse and high-level. And with the invention of deep learning, image classification has become more widespread. In this article, you learned about image segmentation in deep learning. We use cookies to ensure that we give you the best experience on our website. Mean\ Pixel\ Accuracy =\frac{1}{K+1} \sum_{i=0}^{K}\frac{p_{ii}}{\sum_{j=0}^{K}p_{ij}} Computer Vision Convolutional Neural Networks Deep Learning Image Segmentation Object Detection, Your email address will not be published. Image classification has become one of the key pilot use cases for demonstrating machine learning. Fully Convolutional Networks for Semantic Segmentation by Jonathan Long, Evan Shelhamer, and Trevor Darrell was one of the breakthrough papers in the field of deep learning image segmentation. First, let us cover a few basics.$$. A Convolutional Neural Network (CNN) is a powerful machine learning technique from the field of deep learning. ), For us, classifying images is no big deal. For example, in image processing, lower layers may identify edges, while higher layers may identify the concepts relevant to a human such as digits or letters or faces.. Overview. For image classification, deep learning architectures are trained with labeled images in order to learn how to classify them according to visual patterns. Deep learning methods generally require large and diverse training sets to yield good performance. The procedure will look very familiar, except that we don't need to fine-tune the classifier. If you are into deep learning, then you must be very familiar with image classification by now. What is driving some of this is now large image repositories, such as ImageNet, can be used to train image classification algorithms such as CNNs along with large and growing satellite image repositories. Computer vision is a subject to convert images and videos into machine-understandable signals. Today it is used for applications like image classification, face recognition, identifying objects in images, video analysis and classification, and image processing in … Deep learning is a class of machine learning algorithms that (pp199–200) uses multiple layers to progressively extract higher-level features from the raw input. CNNs are trained using large collections of diverse images. Learn how to build an Image Classification model … I.e. At the time of publication, the FCN methods achieved state-of-the-art results on many datasets including PASCAL VOC. This means they can learn the features to look for in images by analysing lots of pictures. Also, if you are interested in metrics for object detection, then you can check one of my other articles here. Object Classification. But if you want to create Deep Learning models for Apple devices, it is super easy now with their new CreateML framework introduced at the WWDC 2018.. You do not have to be a Machine Learning expert to train and make your own deep learning based image classifier or an object detector. Deep-learning-based image classification with MVTec HALCON allows to easily assign images to trained classes without the need of specially labeled data – a simple grouping of the images after data folders is sufficient. But it’s a perfect example of Moravec’s paradox when it comes to machines. We will stop the discussion of deep learning segmentation models here. Satellite Image Classification with Deep Learning. Figure 10 shows the network architecture for Mask-RCNN. (2012)drew attention to the public by getting a top-5 error rate of 15.3% outperforming the previous best one with an accuracy of 26.2% using a SIFT model. These are mainly those areas in the image which are not of much importance and we can ignore them safely. The approach is based on the machine learning frameworks “Tensorflow” and “Keras”, and includes all the code needed to replicate the results in this tutorial. 2015 may be the best year for computer vision in a decade, we’ve seen so many great ideas popping out not only in image classification but all sorts of computer vision tasks such as object detection, semantic segmentation, etc. We know that it is only a matter of time before we see fleets of cars driving autonomously on roads. IoU = \frac{|A \cap B|}{|A \cup B|} In this article, we will learn image classification with Keras using deep learning.We will not use the convolutional neural network but just a simple deep neural network which will still show very good accuracy. Pre-Trained Models for Image Classification. Early image classification relied on raw pixel data. Here’s an ELI5 overview. But one major problem with the model was that it was very slow and could not be used for real-time segmentation. There are many usages. Data labeling . In the above function, the $$smooth$$ constant has a few important functions. LandUseAPI: A C# ASP.NET Core Web API that hosts the trained ML.NET.. LandUseML.ConsoleApp: A C# .NET Core console application that provides starter code to build the prediction pipeline and make predictions.. LandUseML.Model: A C# .NET Standard … Dice\ Loss = 1- \frac{2|A \cap B| + Smooth}{|A| + |B| + Smooth} Now, let’s take a look at the drivable area segmentation. For instance, deep learning algorithms are 41% more accurate than machine learning algorithm in image classification, 27 % more accurate in facial recognition and 25% in voice recognition. Image classification is the process of categorizing and labeling groups of pixels or vectors within an image based on specific rules, it is the primary domain, in which deep neural networks play the most important role of image analysis. Deep learning has some benefits. IoU or otherwise known as the Jaccard Index is used for both object detection and image segmentation. Image classification using Alexnet. The advancement of deep neural networks has placed major importance in Image Classification, Object detection, Semantic Segmentation, and … This repeats until it reaches an output layer, and the machine provides its answer. This makes the output more distinguishable. Simply put, image classification is where machines can look at an image and assign a (correct) label to it. proposed AlexNet based on deep learning model CNN in 2012 , which won the championship in the ImageNet image classification of that year, deep learning began to explode. What is Moravec’s paradox and what does it mean for modern AI? It enables to have a deeper network. For instance, it could analyse medical images and suggest whether they classify as depicting a symptom of illness. For example, take the case where an image contains cars and buildings. What you see in figure 4 is a typical output format from an image segmentation algorithm. We use open source implementations of Xception, Inception-v3, VGG-16, VGG-19 and Resnet-50 architectures that are … In the above formula, $$A$$ and $$B$$ are the predicted and ground truth segmentation maps respectively. But in instance segmentation, we first detect an object in an image, when we apply a color coded mask around that object. It is the average of the IoU over all the classes. Using image segmentation, we can detect roads, water bodies, trees, construction sites, and much more from a single satellite image. Take a look at figure 8. Deep learning excels on problem domains where the inputs (and even output) are analog. Many of the ideas here are taken from this amazing research survey – Image Segmentation Using Deep Learning: A Survey. We can also detect opacity in lungs caused due to pneumonia using deep learning object detection, and image segmentation. Similarly, we can also use image segmentation to segment drivable lanes and areas on a road for vehicles. The U-Net architecture comprises of two parts. What we see above is an image. Deep learning is a type of machine learning; a subset of artificial intelligence (AI) that allows machines to learn from data. If everything works out, then the model will classify all the pixels making up the dog into one class. $$Image classification explained. In figure 5, we can see that cars have a color code of red. Now, let’s say that we show the image to a deep learning based image segmentation algorithm. The dataset is divided into training data and test data. Required fields are marked *. Application. UK Company Registration Number 4525820 | VAT Registration GB797853061. Or, for example, image classification could help people organise their photo collections. In this project, we will introduce one of the core problems in computer vision, which is image classification. This increase in dimensions leads to higher resolution segmentation maps which are a major requirement in medical imaging. Deep learning has aided image classification, language translation, speech recognition. In figure 3, we have both people and cars in the image. The deep learning model has a powerful learning ability, which integrates the feature extraction and classification process into a whole to complete the image classification test, which can effectively improve the image classification accuracy. This decoder network is responsible for the pixel-wise classification of the input image and outputting the final segmentation map. In this chapter, we first introduce fundamentals of deep convolutional neural networks for image classification and then introduce an application of deep learning to classification of focal liver lesions on multi-phase CT images. These nodes each process the input and communicate their results to the next layer of nodes. Deep learning can outperform traditional method. One is the down-sampling network part that is an FCN-like network. It is defined as the ratio of the twice the intersection of the predicted and ground truth segmentation maps to the total area of both the segmentation maps. Specifically, image classification comes under the computer vision project category. Thus, the labeling and developing effort is low, what enables particularly short set-up times. They can have different backgrounds, angles, poses, etcetera. This application is developed in python Flask framework and deployed in … Goal. In the above equation, $$p_{ij}$$ are the pixels which belong to class $$i$$ and are predicted as class $$j$$. SegNet by Badrinarayanan et al. Satellite imagery analysis, including automated pattern recognition in urban settings, is one area of focus in deep learning. U-Net by Ronneberger et al. A lot of research, time, and capital is being put into to create more efficient and real time image segmentation algorithms. Training deep learning models is known to be a time consuming and technically involved task. (Or the probability that it’s a sheep. So, programmers don’t need to enter these filters by hand. This made it quite the challenge for computers to correctly ‘see’ and categorise images. Secondly, in some particular cases, it can also reduce overfitting. And if we are using some really good state-of-the-art algorithm, then it will also be able to classify the pixels of the grass and trees as well. For example, you input an image of a sheep. Among such tasks we have image classification: teaching a machine to recognize the category of an image from a given taxonomy. But we will discuss only four papers here, and that too briefly. Deep clustering against self-supervised learning is a very important and promising direction for unsupervised visual representation learning since it requires little domain knowledge to … Mostly, in image segmentation this holds true for the background class. These applications require the manual identification of objects and facilities in the imagery. I even wrote several articles (here and here). Deep Learning as Scalable Learning Across Domains. Multiclass classification is a machine learning classification task that consists of more than two classes, or outputs. Pixel accuracy is the ratio of the pixels that are classified to the total number of pixels in the image. This image segmentation neural network model contains only convolutional layers and hence the name. But for now, you have a simple overview of image classification and the clever computing behind it. Although deep learning has shown proven advantages over traditional methods, which rely on handcrafted features, in image classification, it remains challenging to classify skin lesions due to the significant intra-class variation and inter-class similarity. Image classification is a fascinating deep learning project. Besides the traditional object detection techniques, advanced deep learning models like R-CNN and YOLO can achieve impressive detection over different types of objects. You got to know some of the breakthrough papers and the real life applications of deep learning. Learn more in: Deep Learning Applications in Agriculture: The Role of Deep Learning in Smart Agriculture An in-depth tutorial on creating Deep Learning models for Multi Label Classification. Self-driving cars use image classification to identify what’s around them. The following tutorial covers how to set up a state of the art deep learning model for image classification. We did not cover many of the recent segmentation models. Abstract.$$. Segmenting objects in images is alright, but how do we evaluate an image segmentation model? The Effectiveness of Data Augmentation in Image Classification using Deep Learning. Simply put, image classification is where machines can look at an image and assign a (correct) label to it. Computers don’t find this task quite as easy. You will notice that in the above image there is an unlabel category which has a black color. Figure 15 shows how image segmentation helps in satellite imaging and easily marking out different objects of interest. A Convolutional Neural Network (CNN) is a powerful machine learning technique from the field of deep learning. For now, just keep the above formula in mind. When humans take a look at images, they automatically slice the images into tiny fractions of recognizable objects – for example, a door is built out of a piece of wood, with often some paint, and a door handle. Unfortunately, the available human-tagged experimental datasets are very small. Deep learning allows machines to identify and extract features from images. This includes semantic segmentation, instance segmentation, and even medical imaging segmentation. This article is mainly to lay a groundwork for future articles where we will have lots of hands-on experimentation, discussing research papers in-depth, and implementing those papers as well. Reinforcement Learning Interaction In Image Classification. (1998), the first deep learning model published by A. Krizhevsky et al. I have created my own custom car vs bus classifier with 100 images of each … And most probably, the color of each mask is different even if two objects belong to the same class. Deep learning based image segmentation is used to segment lane lines on roads which help the autonomous cars to detect lane lines and align themselves correctly. Matlab’s deep learning toolbox has this built-in function which can be used for image classification, consider the example below, This is one of the core problems in Computer Vision that, despite its simplicity, has a large variety of practical applications. It’s an open source platform that you can run on your computer to do things like image classification, object detection, and processing. Deeper exploration into image classification and deep learning involves understanding convolutional neural networks. In mean pixel accuracy, the ratio of the correct pixels is computed in a per-class manner. Image classification is where a computer can analyse an image and identify the ‘class’ the image falls under. Deep-learning-based image classification with MVTec HALCON allows to easily assign images to trained classes without the need of specially labeled data – a simple grouping of the images after data folders is sufficient. Figure 11 shows the 3D modeling and the segmentation of a meningeal tumor in the brain on the left hand side of the image. I will surely address them. A simple Image classifier App to demonstrate the usage of Resnet50 Deep Learning Model to predict input image. We can see that in figure 13 the lane marking has been segmented. In 2013, Lin et al. The input is an RGB image and the output is a segmentation map. ∙ Stanford University ∙ 0 ∙ share In this paper, we explore and compare multiple solutions to the problem of data augmentation in image classification. In this article, we will discuss how Convolutional Neural Networks (CNN) classify objects from images (Image Classification) from a bird’s eye view. In the next section, we will discuss some real like application of deep learning based image segmentation. In computer vision, object detection is the problem of locating one or more objects in an image. In this project, we will introduce one of the core problems in computer vision, which is image classification. It is the fraction of area of intersection of the predicted segmentation of map and the ground truth map, to the area of union of predicted and ground truth segmentation maps. In neural networks, the input filters through hidden layers of nodes. Very Deep Convolutional Networks for Large-Scale Image Recognition(VGG-16) The VGG-16 is one of the most popular pre-trained models for image classification. This meant that computers would break down images into individual pixels. In this article, we will discuss how Convolutional Neural Networks (CNN) classify objects from images (Image Classification) from a bird’s eye view. Data labeling . In the previous article, I introduced machine learning, IBM PowerAI, compared GPU and CPU performances while running image classification programs on the IBM Power platform. If you are interested, you can read about them in this article. Before answering the question, let’s take a step back and discuss image classification a bit. First, let us cover a few basics. If you have got a few hours to spare, do give the paper a read, you will surely learn a lot. In my opinion, the best applications of deep learning are in the field of medical imaging. We can see… is a deep learning segmentation model based on the encoder-decoder architecture. Oh, I was soooo ready. In this image, we can color code all the pixels labeled as a car with red color and all the pixels labeled as building with the yellow color. If you find the above image interesting and want to know more about it, then you can read this article. The accuracy of CNNs in image classification is quite remarkable and … First of all, it avoids the division by zero error when calculating the loss. INTRODUCTION Recently, image classification is growing and becoming a trend among technology … In this section, we will discuss some breakthrough papers in the field of image segmentation using deep learning. Deep learning methods for tumor classification rely on digital pathology, in which whole tissue slides are imaged and digitized. There are numerous papers regarding to image segmentation, easily spanning in hundreds. These are the layers in the VGG16 network. Convolutional Neural Network (CNN) In Deep Learning, Convolutional Neural Networks (CNN, or ConvNet) are deep neural networks classes, which are most commonly applied to analyze visual images. What you'll learn. In image classification, we use deep learning algorithms to classify a single image into one of the given classes. The model classifies land use by analyzing satellite images. Although each of them has one goal – improving AI’s abilities to understand visual content – they are different fields of Machine Learning. Note: This article is going to be theoretical. So, what exactly is image classification in deep learning? The dataset was created based on the Grocery Store Dataset found on github, with images from 81 different classes of fruits, vegetables, and packaged products. A class is essentially a label, for instance, ‘car’, ‘animal’, ‘building’ and so on. Deep learning enables many more scenarios using sound, images, text and other data types. Artificial neural networks, comprising many layers, drive deep learning. For the classification problem, a neural network with ResNet deep learning architecture was implemented. The resulting WSIs have extremely high resolution. This means that when we visualize the output from the deep learning model, all the objects belonging to the same class are color coded with the same color. Image classification is a fascinating deep learning project. Deep learning techniques have also been applied to medical image classification and computer-aided diagnosis. A multiresolution file to facilitate the display, navigation, and even medical imaging from large... Very different network in Keras with python on a road for vehicles of identifying main. Such tasks we have both people and cars in the above formula, \ ( B\ what is image classification in deep learning are the of... Life-Threatening diseases quickly and with the invention of deep learning is a machine to recognize the category of image...: an at a picture, we use deep learning. models tried address! Use image segmentation to segment drivable lanes and areas on the encoder-decoder.... Dramatically in the image being part of a meningeal tumor in the brain on the hand!, I what is image classification in deep learning been playing around with deep learning enables many more deep learning techniques have also been applied medical. Can further control the behavior of the most popular pre-trained models for image classification a... Annoyance: obviously a cat s say that we give you the applications. The usage of Resnet50 deep learning model that we do both people and in! We study the … Transfer learning for image segmentation continue to use this site will. Tried to address this issue look at an image from a computer-vision context applying. Resnet-50 architectures that are … Since Krizhevsky et al a computer model learns to perform classification tasks directly from,. 14 shows the segmented areas on the COCO dataset same is true the! 13 Oct 2020 • Mark Pritt • Gary Chern that we show the image classification under. We can what is image classification in deep learning that in semantic segmentation we label each pixel of the image Pranav Ras convolutional! Lane marking has been segmented features such as urban Atlas popular evaluation metric in implementations... Trained using large datasets with synthetic scattering images you will notice that in figure 5, will! First deep learning is getting lots of attention lately and for good reason black color identification! Images in order to learn how to build an image segmentation over the total number of pixels in next... In figure 3, we cover the 4 pre-trained models for image segmentation help here, and the life... That is becoming more common for researchers nowadays to draw bounding boxes in instance segmentation, easily in! Of segmentation and object detection is developed in python Flask framework and deployed in … deep based. Can further control the behavior of the image they classify as depicting symptom... One or more objects in an image from a computer-vision context how all the elephants have a different mask. Which increases the dimensions after each layer machines can look very familiar, except that we do learning... Or another object that is becoming more common for researchers nowadays to draw bounding boxes in instance segmentation, spanning. Is image classification comes under the computer analysing the image instead of the most popular models! Lane marking has been segmented how a Faster RCNN based mask RCNN model has been used to any! Longer require such careful feature crafting mean for modern AI discussing image segmentation, we will discuss just the points... Segmenting the tumorous tissue makes it useful ‘ animal ’, ‘ animal,... You are interested in metrics for object detection techniques, advanced deep learning: a.! Beat all the classes single what is image classification in deep learning the manual identification of objects navigation, and that too.! Mnist handwritten digits ( 0, 1, 2, etc. ) is the average of the problems! Models in future articles other one is the process of the pixels making up the into! Image one label from a computer-vision context output format from an image and assign (... ’ the image which make up a state of the input is an FCN-like network the what is image classification in deep learning which. Collections of diverse images VAT Registration GB797853061 to correctly ‘ see ’ so. Have image classification has become a hot topic of research, time, it will classify all the elephants a... Comprising many layers, drive deep learning architectures are trained through supervised learning. systems! A different color mask with ResNet deep learning model that we show the classification...
|
2021-04-19 11:56:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.441209614276886, "perplexity": 911.7969287223845}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038879374.66/warc/CC-MAIN-20210419111510-20210419141510-00156.warc.gz"}
|
https://www.tidyace.com/ldko3/usain-bolt-weight-5f82c4
|
# usain bolt weight
He is widely recognized as the fastest man ever. Usain Bolt: Height, Weight & Body Stats. 1986-08-21. 196 cm. Usain Bolt is best known for being a Runner. Usain Bolt Workout Routine, Diet Plan, and Tips. Jamaican sprinter Usain Bolt is the world record-holder … In other words the units for … / Facts . Date of birth. Usain Bolt is highly regarded as the fastest man alive. Birthplace . Workout One. How tall is Usain Bolt? Usain Bolt: Training Secrets of the World’s Fastest Man In addition to attaining gold medals and record-breaking speed, the sprinter has built one of the best bodies on the planet. First workout-Good morning (8 reps, 4 sets) Barbell lunges (10 reps, 3 sets) Sled push (20 reps, 3 sets) Medicine ball Box jumps (5 reps, 4 sets) Landmine exercises with a … Bio; News; Media; Foundation; Team Bolt; Biography. Bolt held several records as a sprinter such as 100and 200 meters world record, 4*100 meters relay record with his teammates. Usain Bolt Body Measurements Height Weight Shoe Size Biceps Vital Statistics. Exactly one year later, Bolt broke his own record in Berlin with a time of 19.19 seconds the World Athletics Championships in Berlin, Germany. Jamaica's Usain Bolt is an Olympic legend who has been called "the fastest man alive" for smashing world records and winning multiple gold medals at the 2008, 2012 and 2016 Summer Games. Usain Bolt’s world record 100m time of 9.58sec in Berlin in 2009 was met with a sense of disbelief from a stunned audience. WENN Ltd / Alamy. We can say that he has the fittest and chiselled physiques that human can have—and three world breaking records—has made confirmation of this. Split Times (sec) Before going to discuss workout routine of Usain Bolt and diet plan of Usain Bolt, let’s know about him. April 3, 2019. Now retired, he has won over 20 Olympic Gold Medals and has broken multiple records during his career as a track all-star. 195 cm / 6 ft 5 in: Weight: 95 kg / 209 lbs: Born: 21 August 1986 : How old is Usain Bolt in 2020? He played cricket and football in the street with his brother, and at age 12 was his school’s fastest runner. $60,000,000. Something you may not know about me: there is almost no set of circumstances – personal, professional, medical – in which I will not drop everything to watch Usain Bolt. Usain Bolt Weight: 94 Kg. Usain Bolt. Usain St. Leo Bolt, OJ, CD is a Jamaican Sprinter born on 21 st August 1986. As you can probably guess, the full Usain Bolt training program also consists of weight training exercises. Disciplines. Julian Finney/Getty Images. Bolt on exercises. The … This workout section involves several exercises grouped in two workout sets aiming at maintaining the Bolt’s cut physique. A mathematical model used by scientists revealed that Bolt generated 81.58 kilojoules of energy during the sprint, of which only 8% was used to propel his body towards the finish line. His speed has gained him the nickname ‘lightning bolt’ and his reputation has helped him become an international celebrity. Solution $$\large E_{k} = \frac{1}{2} m u^{2} = \frac{1}{2} \times 86 \text{ kg} \times ( 12 \text{ m} \text{ s}^{-1} )^{2} = 6192 \text{ kg}\text{ m}^{2} \text{ s}^{-2}$$ The collection of units kg m 2 s –2 is given the name Joule in the SI system after James Joule (see below). Usain Bolt Body Stats. Usain is the first man to win six Olympic Gold medals in sprinting and as … When Usain Bolt was first … Usain Bolt. Usain Bolt’s Strength Training Program. Kasi Bennette, age 30 is the spouse of an Olympic gold medalist sprinter, https://starschanges.com/usain-bolt-height-weight-body-measurements Usain Bolt Net Worth/Income Usain Bolt Net Worth:$ 20 million. Usain Bolt, in full Usain St. Leo Bolt, (born August 21, 1986, Montego Bay, Jamaica), Jamaican sprinter who won gold medals in the 100-metre and 200-metre races in an unprecedented three straight Olympic Games and is widely considered the greatest sprinter of all time. Usain Bolt donates equipment to Nuttall Memorial Hospital Maternity Ward June 3, 2020. Usain Bolt, a man standing 6’5” weighting 207 pounds, is a three-time Olympic gold medalist from Jamaica who currently holds the world record for the 100 meter dash at 9.58 seconds and the 200 meter dash at 19.19 seconds. A statistic frequently reeled out is that of the 50 fastest 100m times in history (anything quicker than 9.82sec) only the 15 posted by world record-holder Usain Bolt remain unblemished. He battles to pull his six-foot-five, 207-pound weight body to the gym every morning. Here are the split times and velocities for Usain Bolt and Asafa Powell from Usain’s 9.58sec world record performance in 2009 taken from Applied Sprint Training by James Smith [6]. Usain Bolt. So what he eats throughout the day are eggs (an egg sandwich for breakfast, mostly), pasta with a side of chicken breast for lunch, and Jamaican dumplings or rice and peas with pork and roasted chicken for dinner. I got an average data of angles of falling of Usain Bolt and Tyson Gay in the final 100m of World Championship in Berlin. From Jamaica, Usain Bolt played sports throughout his childhood and says they were more or less all he thought about. Usain’s journey to worldwide stardom started at the 2008 Olympic Games in Beijing where he won the 100m, … Since the Jamaican gold-medalist loves having chicken wings and nuggets, his personal chef makes sure that Usain’s body remains toned and athletic. Usain Height in Feet: 6 ft 4¾ inches. 632 nd See the richest celebrities » Who is Usain Bolt? View PDF/Print Mode. Athletics superstar Usain Bolt is hoping to round out his remarkable Olympic career with another triple (100m, 200m, 4x100m), and confirm his status as the greatest sprinter of all time. Height. Due to his athletic success, he is admired by thousands and respected by millions more as a - Chest: 46 Inches - Waist: 34 Inches - Biceps: 16 Inches: Eye Colour: Dark Brown: Hair Colour: … Arguably the most naturally gifted athlete the world has ever seen, Usain St Leo Bolt, created history at the 2016 Olympic Games in Rio when he achieved the ‘Triple Triple’, three gold medals at three consecutive Olympic Games. Bolt’s calculated average angle in 100m with the time 9.58 seconds was 18.5 degrees with the average step frequency (cadence) 4.28 steps per second (257 steps per minute), and Gay’s, with the time 9.71 seconds – 18.4 degrees, and step frequency (cadence) 4.68 steps per … Usain Bolt easily won the 100-meter dash at the 2008 Beijing Olympics. Steven Strogatz. If you think How much does Usain Bolt weight trains, here is the answer. World record. He achieved both of these metals at the London Olympic Games of 2012 and is the first person to ever have won the gold metal in both dashes. In total he’s earned eight career Olympic … Usain Bolt Height & Weight Usain Bolt Height in Meters: 1.95 m and 195 cm. Contributing Columnist. Picture courtesy: Instagram/usainbolt His diet also includes a good number of … To see why, let me tell you a story about the fastest … Weight. He was born in Shewood Content,Trelawny, Jamaica on August 21, 1986. Height: 6’5” (195 cm) Weight: 207 lb (94 kg) Chest: 41” Waist: 33” Biceps: 15.5” Usain Bolt Diet Plan. I’m sure that the vast majority of strength … Usain Bolt eats chicken and fish. Earlier in my strength coaching career, when I wasn’t as knowledgeable and well-versed in sports science, I’d have scoffed at his workouts. He is the fastest human ever timed & the first man to hold both the 100 metres and 200 metres world records since … Usain is turning 35 this year. Trelawny. Usain Bolt Favorite Car, Color, Food & Movie: Usain Bolt Favorite Cars: BMW M3, 2013 Nissan ... Read moreUsain Bolt Favorite Things, Facts, Biography, Height, Weight, Body Calculate the kinetic energy of Usain Bolt at his maximum speed of 27 mph (12 m s –1) if he weighs 86 kg. / Age: 33 years: Zodiac sign: Leo: Net worth 2020 (estimated) How much is Usain Bolt worth? Good Morning (4 sets of 8 reps) Barbell Lunge (3 sets of 10 reps) Sled Push (3 sets of 20 reps) Barbell Landmine Exercises (3 sets of 20 reps) Box Jumps with Medicine Ball (4 sets of 5 reps) … Usain Bolt Height, Weight, Age, Biography, Wife & More. Here’s a sample of how Bolt builds upon and then retains his cut physique. There are multiple reasons for Usain Bolt’s incredible achievements: 1) The fundamental physics behind optimum sprint speed relate to the amount of horizontal force that a sprinter can generate. 100m, 200m , 4x100m. Photograph: John Giles/PA London - Jonathan Liew . Usain Bolt Weight Training. The ‘fastest man alive’ currently … Usain Bolt has just a few more things to cross off his list before retiring from the track as the fastest man in the world. 88.0 kg. Usain Bolt weight training. Bolt indulges in a plate of Jamaican rice with peas, and a whole fish. Bio: Real Name: Usain St. Leo Bolt: Nickname: Lightning Bolt: Profession: Jamaican sprinter: Physical Stats & More: Height: in centimeters- 195 cm in meters- 1.95 m in Feet Inches- 6’ 4½” Weight: in Kilograms- 94 kg in Pounds- 207 lbs: Body Measurements (approx.) With encouragement from his high school coach, he continued to improve and win competitions despite his rather lackadaisical attitude toward training. Usain Bolt is faster than every one of the billions of Homo sapiens to have roamed the Earth in the past 200,000 years. On Aug. 20, 2008, Jamaican sprinter Usain Bolt broke the world record and won the gold medal for the men's 200-meter, with a time of 19.3 seconds at the Olympic Games in Beijing, China. A colleague recently forwarded me a short video clip from Usain Bolt’s Instagram account, along with a note that said, ... and a video clip showing ankle weight step ups into hip flexion. Naturally, my personalized YouTube algorithm has already known … A naturally talented athle te, Usain Bolt , was born on August 21, 1986, who won 3 gold medals in the 2008 Olympic Games in Beijing, China. data mathematics Quantized Columns All topics “Art,” said Pablo Picasso, “is a lie that makes us realize truth.” The same could be said for calculus as a model of nature. Record-breaking sprinter who earned the nickname “Lightning Bolt” and became the first man to win gold in the 100 meter event at three separate Olympics in 2008, 2012 and 2016.
|
2021-05-16 18:26:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24036805331707, "perplexity": 9073.106696889325}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991178.59/warc/CC-MAIN-20210516171301-20210516201301-00480.warc.gz"}
|
https://community.wolfram.com/groups/-/m/t/608390
|
# "Prime Remainders" visualization
Posted 3 years ago
1502 Views
|
0 Replies
|
5 Total Likes
|
I was pointed out to the J-Blog's post: Prime Remainders. From the blog: This program calculates primes, takes their remainder and then places a color accordingly. The formular for the color c at prime p with modulo m is c = p mod m. It is a one-liner in Wolfram Language: ArrayPlot[Partition[Mod[Prime[Range[350^2]], 450], 350]] Reproducing the original color scheme is simple: ArrayPlot[Partition[Mod[Prime[Range[350^2]], 450], 350], ColorFunction -> (Blend[{Black, Red}, #] &)] And the app: Manipulate[ArrayPlot[Partition[Mod[prm, mod], 100], ColorFunction -> (Blend[{Black, Red}, #] &)], {{prm, Prime[Range[10^4]]}, None}, {{mod, 250}, 3, 1000, 1}] I would recommend checking out New Kind of Science book especially Chapter 4: Systems Based on Numbers for many interesting patterns generated by numbers.
|
2019-06-16 04:45:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29598096013069153, "perplexity": 10396.57130960163}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627997731.69/warc/CC-MAIN-20190616042701-20190616064701-00257.warc.gz"}
|
https://www.physicsforums.com/threads/simple-pendulum-problem-confirmation.101370/
|
# Simple pendulum problem - confirmation
1. Nov 24, 2005
### Elysium
Hi, I got this question:
Ok, for the period:
$$\frac{180\textit{s}}{72.0} = 2.50{\textit{s}}$$
Then I found the pendulum equation that I don't understand well. I arraganged it to find the gravity:
$$g = \left( \frac{2\pi\sqrt{\ell}}{T} \right) ^2$$
Then I plugged in the numbers and got the answer. Did I do this correctly?
Last edited: Nov 24, 2005
2. Nov 24, 2005
### mezarashi
Looks fine =)
But do feel free to clarify your doubts with the pendulum equation if you have any ^^
|
2017-10-17 20:04:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7703087329864502, "perplexity": 2169.1072650832825}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822480.15/warc/CC-MAIN-20171017181947-20171017201947-00746.warc.gz"}
|
https://www.nature.com/articles/s41550-019-0724-0?utm_campaign=related_content&utm_source=ASTRO&utm_medium=communities&error=cookies_not_supported&code=a9550326-f80f-41f5-8a65-e1ec86e84cf9
|
Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.
# Massive stars as major factories of Galactic cosmic rays
## Abstract
The identification of the main contributors to the locally observed fluxes of cosmic rays is a prime objective in the resolution of the long-standing enigma of the source of cosmic rays. We report on a compelling similarity of the energy and radial distributions of multi-TeV cosmic rays extracted from observations of very-high-energy γ-rays towards the Galactic Centre and two prominent clusters of young massive stars, Cygnus OB2 and Westerlund 1. We interpret this resemblance as evidence that cosmic rays responsible for the diffuse very-high-energy γ-ray emission from the Galactic Centre are accelerated by the ultracompact stellar clusters located in the heart of the Galactic Centre. The derived 1/r decrement of the cosmic ray density with the distance from a star cluster is a distinct signature of continuous cosmic ray injection into the interstellar medium over a few million years. The lack of brightening of the γ-ray images towards the stellar clusters excludes the leptonic origin of γ-ray radiation. The hard, E−2.3-type, power-law energy spectra of parent protons continues up to ~1 PeV. The efficiency of conversion of the kinetic energy of stellar winds to cosmic rays can be as high as 10%, implying that young massive stars may operate as proton PeVatrons with a dominant contribution to the flux of the highest-energy Galactic cosmic rays.
This is a preview of subscription content
## Access options
from\$8.99
All prices are NET prices.
## Data availability
This paper makes use of Fermi LAT data, which can be downloaded from the Fermi LAT data server (https://fermi.gsfc.nasa.gov/ssc/data/access/), the H.E.S.S results used in this paper can be obtained from https://www.mpi-hd.mpg.de/hfm/HESS/pages/publications/auxiliary/AA537_A114.html for Westerlund 1 and https://www.mpi-hd.mpg.de/hfm/HESS/pages/publications/auxiliary/auxinfo_GalacticCenter.html for CMZ. The CO data used can be downloaded from the Radio Telescope Data Center (https://www.cfa.harvard.edu/rtdc/CO/). The HI data can be downloaded from http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/440/775. The Planck dust opacity map used can be downloaded from the Planck Legacy Archive (https://pla.esac.esa.int/#maps).
## Change history
• ### 24 April 2019
In the version of this Article originally published, the following ‘Journal peer review information’ was missing: “Nature Astronomy thanks Don Ellison, Giovanni Morlino and the other anonymous reviewer(s) for their contribution to the peer review of this work.” This statement has now been added.
## References
1. 1.
Drury, L. O. Origin of cosmic rays. Astropart. Phys. 39-40, 52–60 (2012).
2. 2.
Blasi, P. The origin of Galactic cosmic rays. Astron. Astrophys. Rev. 21, 70 (2013).
3. 3.
Bell, A., Schure, K., Reville, B. & Giacinti, G. Cosmic ray acceleration and escape from supernova remnants. Mon. Not. R. Astron. Soc. 431, 415 (2013).
4. 4.
Cardillo, M., Amato, E. & Blasi, P. On the cosmic ray spectrum from type II supernovae expanding in their red giant presupernova wind. Astropart. Phys. 69, 1 (2015).
5. 5.
Zirakashvili, V. N. & Ptuskin, V. S. Type IIn supernovae as sources of high energy astrophysical neutrinos. Astropart. Phys. 78, 28 (2016).
6. 6.
Borkowski, K. J. et al. Radioactive scandium in the youngest galactic supernova remnant G1.9 + 0.3. Astrophys. J. Lett. 724, L161 (2010).
7. 7.
Aharonian, F., Sun, X.-n. & Yang, R.-z. Energy distribution of relativistic electrons in the young supernova remnant G1.9 + 0.3. Astron. Astrophys. 603, A7 (2017).
8. 8.
Cristofari, P., Gabici, S., Terrier, R. & Humensky, T. B. On the search for Galactic supernova remnant PeVatrons with current TeV instruments. Mon. Not. R. Astron. Soc. 479, 3415 (2018).
9. 9.
H.E.S.S. Collaboration TeV γ-ray observations of the young synchrotron-dominated SNRs G1.9 + 0.3 and G330.2 + 1.0 with H.E.S.S. Mon. Not. R. Astron. Soc. 441, 790 (2014).
10. 10.
Kafexhiu, E., Aharonian, F., Taylor, A. M. & Vila, G. S. Parametrization of gamma-ray production cross-sections for pp interactions in a broad proton energy range from the kinematic threshold to PeV energies. Phys. Rev. D 90, 123014 (2014).
11. 11.
Casse, M. & Paul, J. A. Local gamma rays and cosmic-ray acceleration by supersonic stellar winds. Astrophys. J. 237, 236–243 (1980).
12. 12.
Cesarsky, C. J. & Montmerle, T. Gamma rays from active regions in the galaxy—the possible contribution of stellar winds. Space Sci. Rev. 36, 173–193 (1983).
13. 13.
Bykov, A. M. Nonthermal particles and photons in starburst regions and superbubbles. Astron. Astrophys. Rev. 22, 77 (2013).
14. 14.
Parizot, E., Marcowith, A., van der Swaluw, E., Bykov, A. M. & Tatischeff, V. Superbubbles and energetic particles in the Galaxy. I. Collective effects of particle acceleration. Astron. Astrophys. 424, 747–760 (2004).
15. 15.
Bykov, A. M. & Toptygin, I. N. Interstellar turbulence and the kinetics of cosmic rays. Akad. Nauk SSSR Izvestiia Seriia Fizicheskaia 46, 1659–1662 (1982).
16. 16.
Klepach, E. G., Ptuskin, V. S. & Zirakashvili, V. N. Cosmic ray acceleration by multiple spherical shocks. Astropart. Phys. 13, 161–172 (2000).
17. 17.
Ackermann, M. et al. A cocoon of freshly accelerated cosmic rays detected by Fermi in the Cygnus superbubble. Science 334, 1103–1107 (2011).
18. 18.
Yang, R.-z. & Aharonian, F. Diffuse γ-ray emission near the young massive cluster NGC 3603. Astron. Astrophys. 600, A107 (2017).
19. 19.
Yang, R.-z., de Oña Wilhelmi, E. & Aharonian, F. Diffuse gamma-ray emission in the vicinity of young star cluster Westerlund 2. Astron. Astrophys. 611, A77 (2018).
20. 20.
Aharonian, F., Buckley, J., Kifune, T. & Sinnis, G. High energy astrophysics with ground-based gamma ray detectors. Rep. Prog. Phys. 71, 096901 (2008).
21. 21.
Abramowski, A. et al. Discovery of extended VHE γ-ray emission from the vicinity of the young massive stellar cluster Westerlund 1. Astron. Astrophys. 537, A114 (2012).
22. 22.
Abramowski, A. et al. The exceptionally powerful TeV γ-ray emitters in the Large Magellanic Cloud. Science 347, 406–412 (2015).
23. 23.
Bartoli, B. et al. Identification of the TeV gamma-ray source ARGO J2031 + 4157 with the Cygnus Cocoon. Astrophys. J. 790, 152 (2014).
24. 24.
Aharonian, F. A. & Atoyan, A. M. On the emissivity of π0-decay gamma radiation in the vicinity of accelerators of Galactic cosmic rays. Astron. Astrophys. 309, 917–928 (1996).
25. 25.
Abramowski, A. et al. Acceleration of petaelectronvolt protons in the Galactic centre. Nature 531, 476–479 (2016).
26. 26.
Kelner, S., Aharonian, F. A. & Bugayov, V. Energy spectra of gamma-rays, electrons and neutrinos produced at proton-proton interactions in the very high energy regime. Phys. Rev. D 74, 034018 (2006).
27. 27.
Atoyan, A. M., Aharonian, F. A. & Völk, H. J. Electrons and positrons in the Galactic cosmic rays. Phys. Rev. D 52, 3265–3275 (1995).
28. 28.
Strong, A. W., Moskalenko, I. V. & Ptuskin, V. S. Cosmic-ray propagation and interactions in the Galaxy. Annu. Rev. Nuc. Part. Sci. 57, 285–327 (2007).
29. 29.
Actis, M. et al. Design concepts for the Cherenkov Telescope Array CTA: an advanced facility for ground-based high-energy gamma-ray astronomy. Exp. Astron. 32, 193 (2011).
30. 30.
Bykov, A. M., Ellison, D. C., Gladilin, P. E. & Osipov, S. M. Ultrahard spectra of PeV neutrinos from supernovae in compact star clusters. Mon. Not. R. Astron. Soc. 453, 113–121 (2015).
31. 31.
Aguilar, M. et al. Precision measurement of the proton flux in primary cosmic rays from rigidity 1 GV to 1.8 TV with the Alpha Magnetic Spectrometer on the International Space Station. Phys. Rev. Lett. 114, 171103 (2015).
32. 32.
Ellison, D. C., Drury, L. O. & Meyer, J.-P. Galactic cosmic rays from supernova remnants. II. Shock acceleration of gas and dust. Astrophys. J. 487, 197 (1997).
33. 33.
Binns, W. R. et al. Observation of the 60Fe nucleosynthesis-clock isotope in Galactic cosmic rays. Science 352, 677–680 (2016).
34. 34.
The Fermi-LAT Collaboration. Fermi Large Area Telescope third source catalog. Astrophys. J. Suppl. 218, 23 (2015).
35. 35.
Abdo, A. A. et al. Spectrum and morphology of the two brightest Milagro sources in the Cygnus region: MGRO J2019 + 37 and MGRO J2031 + 41. Astrophys. J. 753, 159 (2012).
36. 36.
Mirzoyan, R. & Mukherjee, R. TeV gamma-ray emission from PSR J2032 + 4127/ MT91 213 at periastron. Astronomer’s Telegram 10971 (2017).
37. 37.
Dermer, C. D. Secondary production of neutral pi-mesons and the diffuse Galactic gamma radiation. Astron. Astrophys. 157, 223–229 (1986).
38. 38.
Mori, M. Nuclear enhancement factor in calculation of Galactic diffuse gamma-rays: a new estimate with DPMJET-3. Astropart. Phys. 31, 341–343 (2009).
39. 39.
Figer, D. F. Young massive clusters. In Proc. IAU Symp.Massive Stars as Cosmic Engines (eds Bresolin, F. et al.) Vol. 250, 247–256 (2008).
40. 40.
Muno, M. P. et al. Diffuse, nonthermal X-ray emission from the Galactic star cluster Westerlund 1. Astrophys. J. 650, 203–211 (2006).
41. 41.
Hußmann, B. The Quintuplet Cluster—A Young Massive Cluster Study Based on Proper Motion Membership. PhD thesis, Universität Bonn (2014).
## Author information
Authors
### Contributions
R.Y. and E.d.O.W. performed the data analysis and helped with writing the manuscript. F.A. was responsible for the interpretation of the data and led the writing of the manuscript.
### Corresponding author
Correspondence to Ruizhi Yang.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
Journal peer review information: Nature Astronomy thanks Don Ellison, Giovanni Morlino and the other anonymous reviewer(s) for their contribution to the peer review of this work.
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Supplementary information
### Supplementary Information
Supplementary text, Supplementary Figures 1–11, Supplementary Tables 1–2, Supplementary references.
## Rights and permissions
Reprints and Permissions
Aharonian, F., Yang, R. & de Oña Wilhelmi, E. Massive stars as major factories of Galactic cosmic rays. Nat Astron 3, 561–567 (2019). https://doi.org/10.1038/s41550-019-0724-0
• Accepted:
• Published:
• Issue Date:
• ### On the surface brightness radial profile of the extended γ-ray sources
• Rui-Zhi Yang
• Bing Liu
Science China Physics, Mechanics & Astronomy (2022)
• ### Optical reconstruction of dust in the region of supernova remnant RX J1713.7−3946 from astrometric data
• R. Leike
• S. Celli
• G. Rowell
Nature Astronomy (2021)
• ### A GeV-TeV particle component and the barrier of cosmic-ray sea in the Central Molecular Zone
• Xiaoyuan Huang
• Qiang Yuan
• Yi-Zhong Fan
Nature Communications (2021)
• ### HAWC observations of the acceleration of very-high-energy cosmic rays in the Cygnus Cocoon
• A. U. Abeysekara
• A. Albert
• J. D. Álvarez
Nature Astronomy (2021)
• ### Ultrahigh-energy photons up to 1.4 petaelectronvolts from 12 γ-ray Galactic sources
• Zhen Cao
• F. A. Aharonian
• X. Zuo
Nature (2021)
|
2021-12-06 06:56:58
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8356831669807434, "perplexity": 10254.585235720082}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363290.39/warc/CC-MAIN-20211206042636-20211206072636-00409.warc.gz"}
|
https://stats.stackexchange.com/questions/560316/skewness-and-kurtosis-for-mathrmma-left-infty-right-model-with-non-gaussi
|
# Skewness and kurtosis for $\mathrm{MA}\left(\infty\right)$ model with non-gaussian noise
If an ARMA model formulation is written in infinite moving-average form: $$$$X_t = C\left(B\right)\epsilon_t \quad \mbox{with} \quad C\left(B\right)=C_0+C_1B+C_2B^2 + \ldots$$$$ where $$\epsilon_t$$ is zero-mean white noise with variance $$\sigma_{\epsilon}^2$$ but with non-zero skewness and moment coefficient of kurtosis $$\neq3$$ then this paper suggests that if we define $$C_0=1$$ then:
$$$$\frac{\mathrm{E}\left[X_t^3\right]}{\left(\mathrm{E}\left[X_t^2\right]\right)^\frac{3}{2}}= \frac{\displaystyle\sum_{i=0}^{\infty}C_i^3}{\left(\displaystyle\sum_{i=0}^{\infty}C_i^2\right)^{\frac{3}{2}}}\mathbf{skew}\left[\epsilon\right] \quad \mbox{and} \quad \frac{\mathrm{E}\left[X_t^4\right]}{\left(\mathrm{E}\left[X_t^2\right]\right)^2}= \frac{\displaystyle\sum_{i=0}^{\infty}C_i^4}{\left(\displaystyle\sum_{i=0}^{\infty}C_i^2\right)^2}\mathbf{kurt}\left[\epsilon\right] + \frac{6\displaystyle\sum_{i=0}^{\infty} \sum_{j=i+1}^{\infty}C_i^2C_j^2}{\left(\displaystyle\sum_{i=0}^{\infty}C_i^2\right)^2}$$$$
It is not clear to me how these expressions were derived in two respects - first, the appearance of the additional term in the kurtosis formulation suggests some binomial expansion where single powers vanish, but I can't figure out what that expansion would be; and, (2) it is also not clear how the summation limits are determined in the kurtosis formulation. Any assistance/guidance gratefully received.
I assume that I need to expand a bracket that is equivalent to $$\mathbf{E}\left[\left(X_t-\mu_{X_t}\right)^n\right]$$ where $$n=3$$ for the skewness and $$n=4$$ for the kurtosis - and so on. I have tried doing this for $$X_t = \mu + \left(1+\sum_{i=1}^{\infty}C_i \epsilon_{t-i}\right)$$, but I am still unable to replicate the results above.
Update: I can generate very similar results to those given above if I start with an $$\mathrm{AR}(1)$$ process written in recursive form: $$$$Y_i = aY_{i-1} + (1-a)X_i$$$$ and successively substitute to gain an infinite moving average process. Expanding powers of this recursive expression allows me to obtain expressions for the higher moments, and taking expectations then allows me to find the expressions for the moments in terms of the parameter $$a$$.
The variance of the output $$Y_n$$ can be found from the second central moment of the above recurrence relation as: \begin{align}\nonumber \left(Y_i-\mu_{Y_i}\right)^2 &=\bigg(a\left(Y_{i-1}-\mu_{Y_{i-1}}\right)+\left(1-a\right)\left(X_i-\mu_{X_i}\right)\bigg)^2\\\label{eqn:lr_recurrence_variance_expansion} &=a^2\left(Y_{i-1}-\mu_{Y_{i-1}}\right)^2 + 2a\left(1-a\right)\left(Y_{i-1}-\mu_{Y_{i-1}}\right)\left(X_i-\mu_{X_i}\right) + \left(1-a\right)^2\left(X_i-\mu_{X_i}\right)^2 \end{align} Taking expectations gives: \begin{align}\nonumber \mathbf{E}\left[\left(Y_i-\mu_{Y_i}\right)^2\right] = a^2\mathbf{E}\left[\left(Y_{i-1}-\mu_{Y_{i-1}}\right)^2\right] &+ 2a\left(1-a\right)\mathbf{E}\left[\left(Y_{i-1}-\mu_{Y_{i-1}}\right)\right]\mathbf{E}\left[\left(X_i-\mu_{X_i}\right)\right]\\ &+ \left(1-a\right)^2\mathbf{E}\left[\left(X_i-\mu_{X_i}\right)^2\right] \end{align} and then note that: $$$$\label{eqn:lr_moments_firstexpansionterms} \mathbf{E}\left[\left(Y_{i-1}-\mu_{Y_{i-1}}\right)\right]=\mathbf{E}\left[\left(X_i-\mu_{X_i}\right)\right]=0$$$$ such that: $$$$\mathbf{E}\left[\left(Y_i-\mu_{Y_i}\right)^2\right] = a^2\mathbf{E}\left[\left(Y_{i-1}-\mu_{Y_{i-1}}\right)^2\right] + \left(1-a\right)^2\mathbf{E}\left[\left(X_i-\mu_{X_i}\right)^2\right]$$$$ This is another recursion equation, for which we can determine the solution as: $$$$\mathbf{E}\left[\left(Y_n-\mu_{Y_n}\right)^2\right] = a^{2n}\mathbf{E}\left[\left(Y_0-\mu_{Y_0}\right)^2\right] + \left(1-a\right)^2\left\{\sum_{i=0}^{n-1} a^{2i}\right\}\mathbf{E}\left[\left(X_n-\mu_{X_n}\right)^2\right]$$$$ Given that $$Y_0=0$$ we can say that: \begin{align}\nonumber \mathbf{var}\left[Y_n\right] &= \left(1-a\right)^2\left\{\sum_{i=0}^{n-1} a^{2i}\right\}\mathbf{var}\left[X_n\right]\\\nonumber &= \left(\frac{\left(1-a\right)^2\left(1-a^{2n}\right)}{1-a^2}\right)\mathbf{var}\left[X_n\right]\\\label{eqn:lr_variance_output} &= \frac{\left(1-a\right)^2}{1-a^2}\mathbf{var}\left[X_n\right] \quad \mbox{or} \quad \left(\frac{1-a}{1+a}\right)\mathbf{var}\left[X_n\right] \quad \mbox{as} \quad n\rightarrow \infty \end{align}
In a similar way to the second central moment (variance) for the skewness we start from the third central moment \begin{align}\nonumber \left(Y_i-\mu_{Y_i}\right)^3 =a^3\left(Y_{i-1}-\mu_{Y_{i-1}}\right)^3 &+ 3a^2\left(1-a\right)\left(Y_{i-1}-\mu_{Y_{i-1}}\right)^2\left(X_i-\mu_{X_i}\right)\\\nonumber &+3a\left(1-a\right)^2\left(Y_{i-1}-\mu_{Y_{i-1}}\right)\left(X_i-\mu_{X_i}\right)^2\\\label{eqn:lr_recurrence_3rdcmoment_expansion} &+\left(1-a\right)^3\left(X_i-\mu_{X_i}\right)^3 \end{align} then take expectations and remove terms that equate to $$0$$ to give: $$$$\mathbf{E}\left[\left(Y_i-\mu_{Y_i}\right)^3\right] = a^3\mathbf{E}\left[\left(Y_{i-1}-\mu_{Y_{i-1}}\right)^3\right] + \left(1-a\right)^3\mathbf{E}\left[\left(X_i-\mu_{X_i}\right)^3\right]$$$$ This is, once again, a recursion equation for which the solution can be determined as: $$$$\mathbf{E}\left[\left(Y_n-\mu_{Y_n}\right)^3\right] = a^{3n}\mathbf{E}\left[\left(Y_0-\mu_{Y_0}\right)^3\right] + \left(1-a\right)^3\left\{\sum_{i=0}^{n-1} a^{3i}\right\}\mathbf{E}\left[\left(X_n-\mu_{X_n}\right)^3\right]$$$$ Once again, given that $$Y_0=0$$ we can say that: \begin{align}\nonumber \mathbf{E}\left[\left(Y_n-\mu_{Y_n}\right)^3\right] &= \left(1-a\right)^3\left\{\sum_{i=0}^{n-1} a^{3i}\right\}\mathbf{E}\left[\left(X_n-\mu_{X_n}\right)^3\right]\\ &=\left(\frac{\left(1-a\right)^3\left(1-a^{3n}\right)}{1-a^3}\right)\mathbf{E}\left[\left(X_n-\mu_{X_n}\right)^3\right]\\\label{eqn:lr_3rdcmoment} &= \frac{\left(1-a\right)^3}{1-a^3} \mathbf{E}\left[\left(X_n-\mu_{X_n}\right)^3\right] \quad \mbox{as} \quad n\rightarrow \infty \end{align} which gives the third central moment, from which we can determine the skewness also using the result for the variance to give: \begin{align}\nonumber \mathbf{skew}\left[Y_n\right] &= \frac{\mathbf{E}\left[\left(Y_n-\mu_{Y_n}\right)^3\right]}{\left(\mathbf{E}\left[\left(Y_n-\mu_{Y_n}\right)^2\right]\right)^\frac{3}{2}}\\ &=\left\{\frac{\left(1-a\right)^3}{1-a^{3}}\sqrt{\left(\frac{1+a}{1-a}\right)^3}\right\}\mathbf{skew}\left[X_n\right] \end{align}
The same process can be used to determine the kurtosis as:
\begin{align} \mathbf{kurt}\left[Y_n\right] &= \frac{\displaystyle 6a^2\left(1-a\right)^4\left\{\sum_{i=0}^{n-1} a^{4i}\right\}\left\{\sum_{i=0}^{n-1} a^{2i}\right\}\left(\mathbf{E}\left[\left(X_n-\mu_{X_n}\right)^2\right]\right)^2}{\displaystyle \left(1-a\right)^4\left\{\sum_{i=0}^{n-1} a^{2i}\right\}^2\left(\mathbf{E}\left[\left(X_n-\mu_{X_n}\right)^2\right]\right)^2}\\ &+\frac{\displaystyle \left(1-a\right)^4\left\{\sum_{i=0}^{n-1} a^{4i}\right\}\mathbf{E}\left[\left(X_n-\mu_{X_n}\right)^4\right]}{\displaystyle \left(1-a\right)^4\left\{\sum_{i=0}^{n-1} a^{2i}\right\}^2\left(\mathbf{E}\left[\left(X_n-\mu_{X_n}\right)^2\right]\right)^2} \end{align}
which becomes: \begin{align}\nonumber \mathbf{kurt}\left[Y_n\right] &= 6a^2 \left(\frac{1-a^{4n}}{1-a^4}\right)\left(\frac{1-a^2}{1-a^{2n}}\right) + \left(\frac{1-a^{4n}}{1-a^4}\right)\left(\frac{1-a^2}{1-a^{2n}}\right)^2\mathbf{kurt}\left[X_n\right]\\ &= \frac{6a^2}{1+a^2} + \left(\frac{1-a^2}{1+a^2}\right)\mathbf{kurt}\left[X_n\right] \quad \mbox{as} \quad n\rightarrow \infty \end{align}
It is clear that there is some correspondence between the two sets of results, but I can't figure out how the results from a specific infinite moving average formulation of an $$\mathrm{AR}(1)$$ process can be mapped to the more general $$\mathrm{ARMA}$$ solution proposed above.
Any thoughts or pointers welcome. N.B. I added the “cumulants” tag because I realised I might use them to solve the problem - but doing so gives different answers to these published solutions!
You can obtain moment results for this problem using standard rules for the moments of linear functions of IID random variables. You have the function of interest:
$$X_t = \epsilon_{t} + c_1 \epsilon_{t-1} + c_2 \epsilon_{t-2} + \cdots.$$
Suppose that your error terms have zero mean, variance $$\sigma^2$$, skewness $$\gamma$$ and kurtosis $$\kappa$$. Using these values and applying the moment results in this related question gives:
$$\begin{matrix} \mathbb{E}(X_t) = 0 \quad \quad \ & & & & & \mathbb{V}(X_t) = \sigma^2 \sum_{i=0}^\infty c_i^2, \quad \quad \ \\[18pt] \quad \mathbb{Skew}(X_t) = \gamma \cdot \frac{\sum_{i=0}^\infty c_i^4}{(\sum_{i=0}^\infty c_i^2)^{3/2}} & & & & & \mathbb{Kurt}(X_t) = 3 + (\kappa-3) \frac{\sum_{i=0}^\infty c_i^4}{(\sum_{i=0}^\infty c_i^2)^2}. \end{matrix}$$
Try rearranging these rules to get the moment equations of interest to you. I will leave it as an exercise for you to see if you can derive the moment results you assert in your question from these results.
• Tthank you, I can derive the results I obtained using the recursion or cumulants. I am still unable to obtain the kurtosis from the published paper, from expansion of central moments. Looking at your post: stats.stackexchange.com/questions/552826/…. Your raw moment expansion of order 4 is similar, but I remain unable to obtain it using cumulants. Can you explain the raw moment expansion of $Y_i^3$ into $Y_iY_jY_k$ and $Y_i^4$ into $Y_iY_jY_kY_l$? I can see this may result in the summation limits above, but don't understand the step? Jan 18 at 9:24
• These expansions occur when you expand a power of a sum of terms; you use a different index for each term in the sum when you expand the power.
– Ben
Jan 18 at 10:39
|
2022-07-02 10:45:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 37, "wp-katex-eq": 0, "align": 8, "equation": 8, "x-ck12": 0, "texerror": 0, "math_score": 0.9983197450637817, "perplexity": 761.5142221432726}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104054564.59/warc/CC-MAIN-20220702101738-20220702131738-00000.warc.gz"}
|
https://undergroundmathematics.org/trigonometry-triangles-to-functions/inverse-or-not/solution
|
Food for thought
## Solution
Take a look at the three graphs below.
One of the graphs shows $y=\arctan(\tan x)$. Another shows $y=\tan (\arctan x)$. Which graphs are these, and why?
As $\tan x$, $\sin x$ and $\cos x$ are periodic functions, there are many values of $x$ that give the same value of $\tan x$, $\sin x$ or $\cos x$. This means that inverse functions such as $\arctan x$ and $\arcsin x$ have to be very carefully defined. You can read more about this in Inverse trigonometric functions, and these ideas are used in this solution.
Looking at the three graphs, I notice that Graph C looks like a graph of $y=x$ for all real values of $x.$ In contrast, Graph B looks like a graph of $y=x$ but only for $x$ in the interval $\left(\frac{-\pi}{2}, \frac{\vphantom{-}\pi}{2}\right)$. Having thought about Graph B, I can now think of Graph A as a repeating version of Graph B, with period $\pi$.
To see which graphs show the functions $\arctan(\tan x)$ and $\tan (\arctan x)$ I can think about domains and ranges.
When composing functions $f(x)$ and $g(x)$ to form $g(f(x))$, I need to think about the domain of $f(x)$. I need to check that any output of $f$ is in the domain of $g$ and what these outputs are, so the range of $g(f(x))$ depends on the domain and range of $f$ as well as the range of $g$.
Starting with the inner function in $\arctan(\tan x)$, I know that $\tan x$ is defined for all real $x$ except where $x=\frac{(2n+1)\pi}{2}$. This is the domain of $\tan x$. The range is the whole of the real numbers. This means the input to the outer $\arctan$ function is all the real numbers, so the output for $\arctan$ is its principal value range, which is the interval $\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$. Therefore, the graph of $\arctan(\tan x)$ has a domain which is the whole of the $x$-axis except the points where $x=\frac{(2n+1)\pi}{2}$, and the range is $\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$, so Graph A shows $y=\arctan(\tan x)$.
I’ll now use similar thinking to work out which is the graph of $y= \tan (\arctan x)$. This time $\arctan x$ is the inner function. Its domain is the whole of the real numbers, but the range is $\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$.
This graph shows $y= \tan x$ for $x$ in the interval $\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$. Can you use it to explain why Graph C must show $y=\tan (\arctan x)$?
Match these equations to the graphs below and explain your reasoning.
$y=\arcsin(\sin x)$
$y=\sin(\arcsin x)$
$y=\arccos(\cos x)$
$y=\cos(\arccos x)$
One thing I notice is that all four of the functions map $0$ to $0$, so the graphs must pass through the origin. This eliminates Graph G.
$y=\arcsin(\sin x)$
The domain of $\sin x$ is all real $x$ and the range is the interval $[-1,1]$. The domain of $\arcsin x$ is the interval $[-1,1]$, and the principal value range is $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$ (see Inverse trigonometric functions for more explanation).
So the function $\arcsin(\sin x)$ is defined for all real $x$ and has range $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$. Within the domain $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$ the function will map $x$ to itself so the graph will look like that of $y=x$. So this one must be Graph D.
Notice that as $x$ increases from $\frac{\pi}{2}$ to $\frac{3\pi}{2}$, $\sin x$ decreases from $1$ to $-1$ and so $\arcsin(\sin x)$ decreases from $\frac{\pi}{2}$ to $-\frac{\pi}{2}$. And then this pattern continues, forming a zig-zag graph.
$y=\sin(\arcsin x)$
The domain of $\arcsin x$ is the interval $[-1,1]$ and it is undefined elsewhere. Within this domain it has range $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$, and these values as input to $\sin x$ produce values in the range $[-1,1]$. So the graph will look like $y=x$ restricted to the domain $-1\le x\le1$, which is Graph E.
$y=\arccos(\cos x)$
$\cos x$ is defined for all real $x$ and has range $[-1,1]$. $\arccos x$ is defined on the domain $[-1,1]$ and its principal value range is $[0,\pi]$. So the domain of this composition is all the real numbers and its range is$[0,\pi]$. The only graph that matches this is J.
By way of a check, as $x$ increases from $0$ to $\pi$, the value of $\cos x$ decreases from $1$ to $-1$ and so $\arccos(\cos x)$ increases from $0$ to $\pi$. On this domain, the graph looks like $y=x$. As $x$ increases further from $\pi$ to $2\pi$, $\cos x$ increases from $-1$ to $1$ and $\arccos(\cos x)$ decreases from $\pi$ to $0$. Hence we get a zig-zag pattern.
$y=\cos(\arccos x)$
$\arccos x$ is defined only for $x$ in the interval $[-1,1]$. It’s range is $[0,\pi]$ and $\cos$ of these values has range $[-1,1]$. So our graph will look like $y=x$ restricted to the domain $[-1,1]$, and it must be Graph E, the same as for equation (2).
It is interesting to note that $\cos x$ and $\arccos x$ are decreasing functions on these intervals but the composition of the functions is increasing. Is this the case for any pair of decreasing functions?
For which values of $x$ is $\tan(\arctan x)=x$? What about $\arctan(\tan x)=x$?
What can you say about the solutions of similar equations, such as $\sin(\arcsin x)=x$ or $\arccos(\cos x)=x$?
As Graph C is $y=\tan(\arctan x)$, it must be that $\tan(\arctan x)=x$ for all real $x$.
Graph A shows $y=\arctan(\tan x)$, so $\arctan(\tan x)=x$ only for $x$ in the interval $\left(-\tfrac{\pi}{2},\tfrac{\pi}{2}\right)$ as this is the only part of the graph for which $y=x$ matches the graph of $y=\arctan(\tan x)$.
To illustrate what happens if $x$ is outside this interval, let’s try $x=\tfrac{4\pi}{3}$. I know $\tan \tfrac{4\pi}{3} = \sqrt{3}$, but $\arctan \sqrt{3}= \tfrac{\pi}{3}$. Therefore $\arctan\left(\tan \tfrac{4\pi}{3}\right)= \arctan \sqrt{3}=\tfrac{\pi}{3}$. Generalising from this example, I can see that if $x$ is outside the interval $\left(-\tfrac{\pi}{2},\tfrac{\pi}{2}\right)$ and $\tan x$ is defined, then $\arctan(\tan x)$ takes $x$ to the corresponding value inside the interval $\left(-\tfrac{\pi}{2},\tfrac{\pi}{2}\right)$.
We can draw corresponding conclusions for the $\sin$ and $\cos$ compositions by looking at graphs D, E and J.
|
2022-08-10 17:48:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8202283978462219, "perplexity": 79.11189162997889}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571198.57/warc/CC-MAIN-20220810161541-20220810191541-00298.warc.gz"}
|
http://perso.ens-lyon.fr/michel.peyrard/mp.html
|
# Michel Peyrard, Ecole Normale Supérieure de Lyon
### Professeur de Physique, membre de l' Institut Universitaire de France. Chaire de Physique non linéaire et physique des systèmes biologiques.
e-mail: Michel.Peyrard-at-ens-lyon.fr
### How is information transmitted in a nerve?
In the last fifteen years a debate emerged about the validity of the famous Hodgkin-Huxley model for nerve impulse. Mechanical models involving solitons have been proposed. This note reviews the experimental properties of the nerve impulse and discusses the proposed alternatives to the Hodgkin-Huxley model. The experimental data, which rule out some of the alternative suggestions, show that, while the Hodgkin-Huxley model may not be complete, it nevertheless includes essential features that should not be overlooked in the attempts made to improve, or supersede, it. Reference: M. Peyrard, Journal of Biological Physics, dx.doi.org/10.1007/s10867-020-09557-2 (2020) (Preprint) - (Read the published paper on-line)
### Earthquakes: understanding subtle differences between the statistics of main shocks, foreshocks and after shocks.
There are observations which detect differences between the exponents of the Omori law for foreshocks and aftershocks, as well as variations of the b exponent of the Gutenberg-Richter law for main shocks, foreshocks and aftershocks which cannot be accounted for, unless one relies on phenomenological or statistical models with ad-hoc assumptions. The model that we studied with Oleg Braun (Institute of Physics, Kiev) describes all these observations within a framework based on the physics of the rocks, which introduces a natural length scale, the elastic correlation length, and provides quantitative values for most of the the model parameters. The results are robust with respect to large variations of the few parameters which cannot be readily obtained from physical data. Reference: O.M. Braun and M. Peyrard, EPL, 126 49001-1-7 (2019) (Preprint) - (EPL 2019)
### DNA structure: the story is still going on
In 1953 J.D. Watson and F.H.C. Crick built a three-dimensional model of the famous double helix of DNA, which earned them the Nobel Prize [1]. But this discovery did not end the story of DNA structure. In 1975 F.H.C. Crick and A. Klug proposed a second model, suggesting kinks in DNA. These sharp angles in the helix have nothing to do with the flexible joints due to single strand breaking and base-pair opening. They are well defined structures, connecting two helical segments at an angle of about 95°, in which all base pairs are intact and all bond distances and angles are stereochemically acceptable. Crick and Klug suggested that such kinks could contribute to the folding of DNA in chromatin but they also wondered whether kinks could occur spontaneously in double-stranded DNA in solution. This question remained opened since 1975 because structure determinations for molecules freely moving in solution are very challenging. Our recent study, combining small-angle X-Ray and neutron scattering with a statistical physics analysis of the data [3] showed that kinks can indeed exist in some DNA sequences in solution. This confirms the intuition of Crick and Klug and opens a new view point on DNA structure. The well known image of the rigid double helix has to be completed, and for instance viruses might take advantage of such kinks to pack DNA in their capsides. [1] J.D. Watson and F.H.C. Crick, A Structure for Deoxyribose Nucleic Acid. Nature 171, 737 (1953) [2] F.H.C. Crick and A. Klug, Kinky Helix. Nature 255, 530 (1975) [3] T. Schindler et al., Kinky DNA in solution: Small angle scattering study of a nucleosome positioning sequence. Phys. Rev E 98,042417 (2018) Phys. Rev E 98,042417 (2018)
### Physique des Solitons / Physics of Solitons
Depuis la première observation d'un soliton en 1834, ces ondes solitaires aux caractéristiques exceptionnelles fascinent les scientifiques en raison de leurs propriétés expérimentales très spectaculaires, des développements mathématiques remarquables auxquels leur étude a conduit, mais aussi parce que l'approche en terme de solitons permet de renouveler en profondeur le point de vue sur de nombreux problèmes physiques. Dans cet ouvrage les fondements sont introduits à partir d'exemples de la physique macroscopique (hydrodynamique, ondes de pression sanguine, océanographie, communications par fibres optiques,...). Les principales méthodes théoriques sont ensuite abordées, avant la présentation détaillée de nombreuses applications consacrées à des problèmes microscopiques de la physique des solides (dislocations, chaînes de spins, polymères conducteurs, matériaux ferroélectriques) ou des macromolécules biologiques (transfert de l'énergie dans les protéines, dynamique de la molécule d'ADN). (Erratum) This textbook introduces the basic properties of solitons from examples in macroscopic physics (water waves, blood pressure pulse, optical fiber communications, ...). The main theoretical methods are introduced in a second part. Numerous applications are then discussed in detail in solid state or atomic physics (dislocations, excitations in spin chains, conducting polymers, ferroelectrics, Bose-Einstein condensates) and biological physics (energy transfer in proteins, DNA fluctuations). Physics of Solitons (Cambridge University Press) (Erratum)
### Research interests:
In his famous book "What is life?", addressing the question "how can we explain the basic phenomena of life with physics and chemistry", E. Schrödinger points out that one essential character of life is its ability to show cooperative behaviors. Instead of the incoherent fluctuations of atoms or small molecules in solution, living cells show coherent global dynamics. Cooperativity has also been found to be a very important feature that can deeply affect the behavior of nonlinear systems. While a few nonlinear oscillators can show chaotic dynamics, a nonlinear lattice made of such oscillators coupled to each other, may on the contrary exhibit coherent excitations such as solitons or nonlinear localized modes.
Nonlinear cooperative systems are interesting because they can exhibit spectacular properties, but also because they provide paradigms which are useful to understand many physical observations, from friction at the microscopic scale to the dynamics of biological molecules. My research interests cover both the fundamental properties of nonlinear lattices and their applications in condensed matter and biomolecular physics. Some selected results are presented below.
For further information on the research carried in the Laboratoire de Physique de l'ENS-Lyon, check the laboratory home page
### Dynamics and statistical mechanics of nonlinear lattices.
The design of a thermal rectifier (Europhysics Letters Editorial Board highlights of 2006) While electronics has been able to control the flow of charges in solids for decades, the control of heat flow still seems out of reach, and this is why, when a paper showed for the first time how to build a thermal rectifier'', the thermal analogue of the electrical diode, it attracted a great deal of attention. The idea that one can build a solid-state device that lets heat flow more easily in one way than in the other, forming a heat valve, is counter-intuitive and may even appear in contradiction with thermodynamics. Actually this is not the case, and the design of a thermal rectifier can be easily understood from the basic laws of heat conduction. Here we show how it can be done. This analysis exhibits several ideas that could in principle be implemented to design a thermal rectifier, by selecting materials with the proper properties. In order to show the feasibility of the concept, we complete this study by introducing a simple model system that meets the requirements of the design. Such devices could be useful in nanotechnology, and particularly to control he heat flow in electronic chips.
Reference: M. Peyrard, Europhys. Lett. 76 49-55 (2006) (reprint)
Nonlinear localization in lattices.
Reference: M. Peyrard, Physica D 119, 184-199 (1998). (reprint)
Statistical properties of one dimensional turbulence''.
We study a one-dimensional discrete analog of the von Karman flow, widely investigated in turbulence. A lattice of anharmonic oscillators is excited by both ends in order to create a large scale structure in a highly nonlinear medium, in the presence of a dissipative term similar to the viscous term in a fluid. This system shows a striking similarity with a turbulent flow both at local and global scales. The properties of the nonlinear excitations of the lattice provide a partial understanding of this behavior
Reference: M. Peyrard and I. Daumont, Europhysics Letters 59, 834-840 (2002) (preprint)
Dynamically induced heat rectification: classical or quantum?
The article by Riera-Campeny, Mehboudi, Pons, and Sanpera [Phys. Rev. E 99, 032126 (2019)] studies heat rectification in a network of harmonic oscillators which is periodically driven. Both the title and introduction stress the quantum nature of the system. Here we show that the results are more general and are equally valid for a classical system, which broadens the interest of the paper and may suggest further pathways for a basic understanding of the phenomenon.
Reference: Michel Peyrard, Phys. Rev. E, 101 016101-1-3 (2020)
(reprint)
Solitons and non-dissipative diffusion
Diffusion is in general associated with dissipation. If a test particle is injected in a diffusing medium with a velocity above the thermal velocity, it slows down. This happens because a physical particle constantly exchanges momentum with the medium; momentum exchange however is not a prerequisite for diffusion. Solitons can exhibit non-dissipative diffusion because their interaction with the other components of the medium consists of spatial shifts, jumps'', rather than momentum exchanges. At finite temperatures the sequence of spatial shifts becomes intrinsically stochastic.
Reference: N. Theodorakopoulos and M. Peyrard, Phys. Rev. Lett 83, 2293-2296 (1999) (reprint)
### -- DNA --
#### -- DNA flexibility
The flexibility of DNA has been the object of many studies, but its fundamental understanding is still lacking, and even measurements appear to disagree between each other as they depend in a subtle way on many parameters and experimental methods. How DNA bends is important for biology, but it can also be an indirect marker of the opening fluctuations of the double helix.
Base Pair Openings and Temperature Dependence of DNA Flexibility
The relationship of base pair openings to DNA flexibility is examined. Published experimental data on the temperature dependence of the persistence length by two different groups are well described in terms of an inhomogeneous Kratky-Porot model with soft and hard joints, corresponding to open and closed base pairs, and sequence-dependent statistical information about the state of each pair provided by a Peyrard-Bishop-Dauxois (PBD) model calculation with no freely adjustable parameters.
Reference: N. Theodorakopoulos and M. Peyrard PRL 108 078104-1-4 (2012)
(reprint)
Temperature Dependence of the DNA Double Helix at the Nanoscale: Structure, Elasticity, and Fluctuations
Biological organisms exist over a broad temperature range of 15°C to 120°C where many molecular processes involving DNA depend on the nanoscale properties of the double helix. Here, we present results of extensive molecular dynamics simulations of DNA oligomers at different temperatures. We show that internal basepair conformations are strongly temperature-dependent, particularly in the stretch and opening degrees of freedom whose harmonic fluctuations can be consid- ered the initial steps of the DNA melting pathway. The basepair step elasticity contains a weaker, but detectable, entropic contri- bution in the roll, tilt, and rise degrees of freedom. To extend the validity of our results to the temperature interval beyond the standard melting transition relevant to extremophiles, we estimate the effects of superhelical stress on the stability of the base- pair steps, as computed from the Benham model. We predict that although the average twist decreases with temperature in vitro, the stabilizing external torque in vivo results in an increase of ~1°/bp (or a superhelical density of 0:03) in the interval 0–100°C. In the final step, we show that the experimentally observed apparent bending persistence length of torsionally uncon- strained DNA can be calculated from a hybrid model that accounts for the softening of the double helix and the presence of tran- sient denaturation bubbles. Although the latter dominate the behavior close to the melting transition, the inclusion of helix softening is important around standard physiological temperatures. Reference: Sam Meyer, Daniel Jost, Nikos Theodorakopoulos, Michel Peyrard, Richard Lavery and Ralf Everaers, Biophys. J. 105 , 1904-1914 (2013)
#### -- DNA fluctuational openings
The famous image of the double helix gives a false image of stability. Actually DNA fluctuates widely. Its base pairs open and close. The lifetime of a closed pair is only of the order of 10 ms. These fluctuations are important for biology, but they also rise many fascinating questions for physics. Theory can describe them but experiments are essential to validate those approaches.
The thermal denaturation of DNA studied with neutron scattering
The melting transition of deoxyribonucleic acid (DNA), whereby the strands of the double helix structure completely separate at a certain temperature, has been characterized using neutron scattering. A Bragg peak from B-form fibre DNA has been measured as a function of temperature, and its widths and integrated intensities have been interpreted using the Peyrard-Bishop-Dauxois (PBD) model with only one free parameter. The experiment is unique, as it gives spatial correlation along the molecule through the melting transition where other techniques cannot.
Reference: Andrew Wildes, Nikos Theodorakopoulos, Jessica Valle-Orero, Santiago Cuesta-Lopez, Jean-Luc Garden, and Michel Peyrard, PRL 106 048101-1-4 (2011) reprint
The Melting of Highly Oriented Fibre DNA Subjected to Osmotic Pressure
A pilot study of the possibility to investigate the temperature-dependent neutron scattering from fibre-DNA in solution is presented. The study aims to establish the feasibility of experiments to probe the influence of spatial confinement on the structural correlation and the formation of denatured bubbles in DNA during the melting transition. Calorimetry and neutron scattering experiments on fibre samples immersed in solutions of polyethylene glycol (PEG) prove that the melting transition occurs in these samples, that the transition is reversible to some degree, and that the transition is broader in temperature than for humidified fibre samples. The PEG solutions apply an osmotic pressure that maintains the fibre orientation, establishing the feasibility of future scattering experiments to study the melting transition in these samples.
Reference: Andrew Wildes, Liya Khadeeva, William Trewby, Jessica Valle-Orero, Andrew Studer, Jean-Luc Garden and Michel Peyrard, J. Phys. Chem. B 119, 4441-4449 (2015) (preprint)
The Melting Transition of Oriented DNA Fibers Submerged in Polyethylene Glycol Solutions Studied by Neutron Scattering and Calorimetry
Neutron scattering was used to monitor the integrated intensity and width of a Bragg peak from the B-form of DNA immersed in solutions of polyethylene glycol (PEG) as a function of temperature. The data were quantitatively analyzed using the Peyrard-Bishop-Dauxois (PBD) model. The experiments and analysis showed that long segments of double-strand DNA persist until the last stages of melting, and that there appears to be a substantial increase of the DNA dynamics as the melting temperature of the DNA is approached. Reference: Adrián González, Andrew Wildes, Marta Marty-Roda, Santiago Cuesta-López, Estelle Mossou, Andrew Studer, Bruno Demé, Gaël Moiroux, Jean-Luc Garden, Nikos Theodorakopoulos and Michel Peyrard, J. Phys. Chem. B (2018) (preprint)
Guanine radical chemistry reveals the effect of thermal fluctuations in gene promoter regions.
DNA is not the static entity that structural pictures suggest. It has been longly known that it “breathes” and fluctuates by local opening of the bases. Here we show that the effect of structural fluctuations, exhibited by AT-rich low stability regions present in some common transcription initiation regions, influences the properties of DNA in a distant range of at least 10 base pairs. This observation is confirmed by experiments on genuine gene promoter regions of DNA. The spatial correlations revealed by these experiments throw a new light on the physics of DNA and could have biological implications, for instance by contributing to the cooperative effects needed to assemble the molecular machinery that forms the transcription complex.
Reference: Santiago Cuesta-Lopez, Hervé Menoni, Dimitar Angelov, and Michel Peyrard, Nucleic Acids Research 2011; doi: 10.1093/nar/gkr096
Paper (Nucleic Acid Research)
Modelling DNA at the mesoscale: a challenge for nonlinear science?
(Invited paper for the series "Open Problem" of Nonlinearity
Article (Nonlinearity)
When it is viewed at the scale of a base pair, DNA appears as a nonlinear lattice. Modelling its properties is a fascinating goal. The detailed experiments that can be performed on this system impose constraints on the models and can be used as a guide to improve them. There are nevertheless many open problems, particularly to describe DNA at the scale of a few tens of base pairs, which is relevant for many biological phenomena.
(reprint)
Experimental and theoretical studies of sequence effects on the fluctuation and melting of short DNA molecules
(J. Phys. Condensed Matter 21 034103-1-13 (2009))
Understanding the melting of short DNA sequences probes DNA at the scale of the genetic code and raises questions which are very different from those posed by very long sequences, which have been extensively studied. We investigate this problem by combining experiments and theory. A new experimental method allows us to make a mapping of the opening of the guanines along the sequence as a function of temperature. The results indicate that non-local effects may be important in DNA because an AT-rich region is able to influence the opening of a base pair which is about 10 base pairs away. An earlier mesoscopic model of DNA is modified to correctly describe the time scales associated to the opening of individual base pairs well below melting, and to properly take into account the sequence. Using this model to analyze some characteristic sequences for which detailed experimental data on the melting is available [Montrichok et al. 2003 Europhys. Lett. {\bf 62} 452], we show that we have to introduce non-local effects of AT-rich regions to get acceptable results. This brings a second indication that the influence of these highly fluctuating regions of DNA on their neighborhood can extend to some distance. (preprint)
Nonlinear dynamics and statistical physics of DNA. : a tutorial review
Article (Nonlinearity)
DNA is not only an essential object of study for biologists. it raises very interesting questions for physicists. This paper discuss its nonlinear dynamics, its statistical mechanics, and one of the experiments that one can now perform at the level of a single molecule and which leads to a non-equilibrium transition at the molecular scale.
After a review of experimental facts about DNA, we introduce simple models of the molecule and show how they lead to nonlinear localization phenomena that could describe some of the experimental observations. In a second step we analyze the thermal denaturation of DNA, i.e. the separation of the two strands using standard statistical physics tools as well as an analysis based on the properties of a single nonlinear excitation of the model. The last part discusses the mechanical opening of the DNA double helix, performed in single molecule experiments. We show how transition state theory combined with the knowledge of the equilibrium statistical physics of the system can be used to analyze the results.
(reprint)
Can one predict DNA Transcription Start Sites by studying bubbles?
It has been speculated that bubble formation of several base-pairs due to thermal fluctuations is indicatory for biological active sites. Recent evidence, based on experiments and molecular dynamics (MD) simulations using the Peyrard-Bishop -Dauxois model, seems to point in this direction. However, sufficiently large bubbles appear only seldom which makes an accurate calculation difficult even for minimal models. We introduce a new method that is orders of magnitude faster than MD. Using this method we show that the present evidence is un substantiated, but we are working on improvements of the model could make it possible in the future.
References: Titus S. van Erp, Santiago Cuesta-Lopez, Johannes-Geert Hagmann, Michel Peyrard, Phys. Rev. Lett. 95, 218104 (2005) (reprint) and Titus S. van Erp, Santiago Cuesta-Lopez, and Michel Peyrard, Eur. Phys. J. E 20, 421-434 (2006) (reprint)
Using DNA to probe nonlinear localized excitations?
We propose an experiment using micro-mechanical stretching of DNA to probe nonlinear energy localization in a lattice. Using numerical simulations and kinetics calculations we estimate the order of magnitude of the expected force fluctuations. They appear to be at the boarder of present experimental possibilities.
Reference: M. Peyrard, Europhysics Letters 44, 271-277 (1998) (reprint)
A Twist Opening Model for DNA.
Reference: Maria Barbi, Simona Cocco, Michel Peyrard and Stefano Ruffo, Journal of Biological Physics, 24, 97-114 (1999) (reprint)
Further work in this direction has been carried by S. Cocco for her PhD and can be found in the reference S. Cocco and R. Monasson, Statistical mechanics of torque induced denaturation of DNA, Phys. Rev. Lett. 83 5178-5181 (1999).
Order of the phase transition in models of DNA thermal denaturation.
We examine the behavior of two types of models which describe the melting of double-stranded DNA chains. Type-I model (with displacement-independent stiffness constants and a Morse on-site potential) is probably the simplest, exactly solvable, one-dimensional lattice model with a true thermodynamic phase transition. Type-II model (with displacement-dependent stiffness constants) is analyzed numerically and shown to have a very sharp transition with finite melting entropy.
Reference: N. Theodorakopoulos, T. Dauxois and M. Peyrard, Order of the phase transition in models of DNA thermal denaturation., Phys. Rev. Lett. 85, 6-9 (2000) (reprint)
#### -- The different forms of DNA
The double helix which is often drawn is only one configuration of DNA, the B-form. But DNA exists also in other configurations, and particularly the A-form, which is a helix with a larger diameter and base pairs inclined with respect to the helix axis. The A-form of DNA played a major role in the early attempts to understand DNA structure. It was actually an X-ray diagram from A-DNA, shown at a conference in 1951 by M. Wilkins, that gave Watson and Crick the key clue to solving the structure of the B-form. Franklin and Gosling worked extensively on A-DNA and with good reason: DNA fibers, which are very convenient to obtain oriented DNA samples, crystallize better and give rise to higher quality X-ray diffraction patterns when in the A-form.
Thermal measurements on A-DNA are important to test the influence of conformational form on melting transition and the universality of models used to describe DNA thermodynamics. DNA adopting the A-form may also be an important part of the gene transcription process in vitro.
Thermal denaturation of A-DNA
The DNA molecule can take various conformational forms. Investigations focus mainly on the so-called B-form", schematically drawn in the famous paper by Watson and Crick. This is the usual form of DNA in a biological environment and is the only form that is stable in an aqueous environment. Other forms, however, can teach us much about DNA. They have the same nucleotide base pairs for building blocks'' as B-DNA, but with different relative positions, and studying these forms gives insight into the interactions between elements under conditions far from equilibrium in the B-form. Studying the thermal denaturation is particularly interesting because it provides a direct probe of those interactions which control the growth of the fluctuations when the melting'' temperature is approached. Here we report such a study on the A-form" using calorimetry and neutron scattering. We show that it can be carried further than a similar study on B-DNA, requiring the improvement of thermodynamic models for DNA.
Reference: J Valle-Orero, A R Wildes, N Theodorakopoulos, S Cuesta-Lopez, J-L Garden, S Danikin, and M Peyrard, New. J. Phys. 16, 113017-1-14 (2014) Journal article (open access)
Glassy behavior of denatured DNA films studied by Differential Scanning Calorimetry
We use differential scanning calorimetry (DSC) to study the properties of DNA films, made of oriented fibers, heated above the thermal denaturation temperature of the double helical form. The films show glassy properties that we investigate in two series of experiments, a slow cooling at different rates followed by a DSC scan upon heating, and aging at a temperature below the glass transition. Introducing the fictive temperature to characterize the glass allows us to derive quantitative information on the relaxations of the DNA films, in particular to evaluate their enthalpy barrier. A comparison with similar aging studies on PVAc highlights some specificities of the DNA samples.
Reference: Jessica Valle-Orero, Jean-Luc Garden, Jacques Richard, Andrew Wildes and Michel Peyrard, J. Phys. Chem. B 116, 4394-4402 (2012) Journal article reprint
#### -- The structure of DNA: X-ray and neutron data show hints of structures beyond the familiar double helix --
Kinky DNA in solution: Small angle scattering study of a nucleosome positioning sequence.
DNA is a flexible molecule, but the degree of its flexibility is subject to debate. The commonly-accepted persistence length of about 500 Angstrom is inconsistent with recent studies on short-chain DNA that show much greater flexibility but do not probe its origin. We have performed X-ray and neutron small-angle scattering on a short DNA sequence containing a strong nucleosome positioning element, and analyzed the results using a modified Kratky-Porod model to determine possible conformations. Our results support a hypothesis from Crick and Klug in 1975 that some DNA sequences in solution can have sharp kinks, potentially resolving the discrepancy. Our conclusions are supported by measurements on a radiation-damaged sample, where single-strand breaks lead to increased flexibility and by an analysis of data from another sequence, which does not have kinks, but where our method can detect a locally enhanced flexibility due to an AT-domain.
Reference: Torben Schindler, Adrian Gonzalez Rodriguez, Ramachandran Boopathi, Marta Marty Roda, Lorena Romero-Santacreu, Andrew Wildes, Lionel Porcar, Anne Martel, Nikos Theodorakopoulos, Santiago Cuesta-Lopez, Dimitar Angelov, Tobias Unruh and Michel Peyrard, Phys. Rev. E 98(/B>, 042417-1-10 (2018)
(reprint)
### -- Proteins --
Characterization of the low temperature properties of a simplified protein model.
Prompted by results that showed that a simple protein model, the frustrated Go model, appears to exhibit a transition reminiscent of the protein dynamical transition, we examine the validity of this model to describe the low-temperature properties of proteins. First, we examine equilibrium fluctuations. We calculate its incoherent neutron-scattering structure factor and show that it can be well described by a theory using the one-phonon approximation. By performing an inherent structure analysis, we assess the transitions among energy states at low temperatures. Then, we examine non-equilibrium fluctuations after a sudden cooling of the protein. We investigate the violation of the fluctuation--dissipation theorem in order to analyze the protein glass transition. We find that the effective temperature of the quenched protein deviates from the temperature of the thermostat, however it relaxes towards the actual temperature with an an Arrhenius behavior as the waiting time increases. These results of the equilibrium and non-equilibrium studies converge to the conclusion that the apparent dynamical transition of this coarse-grained model cannot be attributed to a glassy behavior.
Reference: J.-G. Hagmann, N. Nakagawa and M. Peyrard, Phys. Rev. E 89 012705-1-13 (2014)
reprint
Critical examination of the inherent-structure-landscape analysis of two-state folding proteins
Recent studies attracted the attention on the inherent structure landscape (ISL) approach as a reduced description of proteins allowing to map their full thermodynamic properties. However, the analysis has been so far limited to a single topology of a two-state folding protein, and the simplifying assumptions of the method have not been examined. In this work, we construct the thermodynamics of four two-state folding proteins of different sizes and secondary structure by MD simulations using the ISL method, and critically examine possible limitations of the method. Our results show that the ISL approach correctly describes the thermodynamics function, such as the specific heat, on a qualitative level. Using both analytical and numerical methods, we show that some quantitative limitations cannot be overcome with enhanced sampling or the inclusion of harmonic corrections.
Reference: J.-G. Hagmann, N. Nakagawa and M. Peyrard, Phys. Rev. E 80 061907-1-11 (2009)
reprint
The Inherent Structure Landscape of a Protein.
We use an extended Go model to study the energy landscape and the fluctuations of a model protein. The model exhibits two transitions, folding and dynamical transitions, when changing the temperature. The inherent structures corresponding to the minima of the landscape are analyzed and we show how their energy density can be obtained from simulations around the folding temperature. The scaling of this energy density is found to reflect the folding transition. Moreover, this approach allows us to build a reduced thermodynamics in the Inherent Structure Landscape. Equilibrium studies, from full MD simulations and from the reduced thermodynamics, detect the features of a dynamical transition at low temperature and we analyze the location and timescale of the fluctuations of the protein, showing the need of some frustration in the model to get realistic results. The frustrated model also shows the presence of a kinetic trap which strongly affects the dynamics of folding.
References: Naoko Nakagawa and Michel Peyrard, Proc. Natl. Acad. Sci. USA (PNAS) 103, 5279-5284 (2006) (reprint) and Naoko Nakagawa and Michel Peyrard, Phys. Rev. E 74 041916-1-17 (2006) (reprint)
Hydration water, charge transport and protein dynamics.
The hydration water of proteins is essential to biological activity but its properties are not yet fully understood. A recent study of dielectric relaxation of hydrated proteins [A. Levstik et al., Phys. Rev E. {\bf 60} 7604 (1999)] has found a behavior typical of a proton glass, with a glass transition of about 268K. In order to analyze these results, we investigate the statistical mechanics and dynamics of a model of two-dimensional water'' which describes the hydrogen bonding scheme of bounded water molecules. We discuss the connection between the dynamics of bound water and charge transport on the protein surface as observed in the dielectric measurements.
Reference: M. Peyrard, Hydration water, charge transport and protein dynamics. J. of Biological Physics 27, 217-228 (2001) (preprint)
### -- Understanding friction at the mesoscale --
In spite of its crucial practical importance, friction is still far from being fully explained. Besides a proper explanation of the static friction laws, the dynamical aspects of friction are even less understood. This is exemplified by the problem posed by the lack of a well established mechanism for the familiar stick-slip phenomenon that one perceives with a door's creak or the playing of a violin with a bow. Many phenomena in nature, where one part of a system moves in contact with another part, exhibit such a stick-slip motion which changes to smooth sliding with the increase of the driving velocity. One of the difficulties is that friction involves many length scales. We feel it at macroscopic scale, but it arises from numerous tiny contacts at the micron scale.
Modeling friction on a mesoscale: Master equation for the earthquake-like model
The earthquake-like model with a continuous distribution of static thresholds is used to describe the properties of solid friction. The evolution of the model is reduced to a master equation which can be solved analytically. This approach naturally describes stick-slip and smooth sliding regimes of tribological systems within a framework which separates the calculation of the friction force from the studies of the properties of the contacts.
References: O.M. Braun and M. Peyrard, PRL 100, 125501-1-4 (2008)
(reprint) O.M. Braun and M. Peyrard, Phys. Rev. E 82, 036117-1-19 (2010) (reprint)
Dependence of kinetic friction on velocity: Master equation approach
We investigate the velocity dependence of kinetic friction with a model which makes minimal assumptions on the actual mechanism of friction so that it can be applied at many scales provided the system involves multi-contact friction. Using a recently developed master equation approach we investigate the influence of two concurrent processes. First, at a nonzero temperature thermal fluctuations allow an activated breaking of contacts which are still below the threshold. As a result, the friction force monotonically increases with velocity. Second, the aging of contacts leads to a decrease of the friction force with velocity. Aging effects include two aspects: the delay in contact formation and aging of a contact itself, i.e., the change of its characteristics with the duration of stationary contact. All these processes are considered simultaneously with the master equation approach, giving a complete dependence of the kinetic friction force on the driving velocity and system temperature, provided the interface parameters are known.
Reference: O.M. Braun and M. Peyrard, Phys. Rev. E 83, 046129-1-9 (2011) (reprint)
Role of aging in a minimal model of earthquakes
Reference: O.M. Braun and M. Peyrard, Phys. Rev. E 87, 032808-1-7 (2013) (reprint)
Seismic quiescence in a frictional earthquake model
Many earthquakes are preceded by foreshocks, but their frequency can nevertheless vary widely depending on the type of earthquake. Moreover, some earthquakes are preceded by an unexpected calm period, lasting for several hours or more. It is such a period of quiescence, viewed as a characteristic feature of the imminence of the main shock that allowed the only successful prediction of an earthquake, which saved a large number of lives in China in 1975. The existence of some activity before a major earthquake does not seem surprising, but the fact that a quiescence period could be a characteristic announcement of an earthquake looks more surprising. Various mechanisms have been considered to explain seismic quiescence but its generic origin was still an open question. In this article we propose a generic mechanism related to the distribution of the threshold for the breaking of the contacts along a fault and illustrate it by simulations of a simple earthquake model. Our model shows that seismic quiescence may emerge from what could, at first glance, be considered as a secondary effect, the aging of the contacts, which in turn may strongly alter the distribution of the thresholds at which contacts break and qualiatively change the pattern of foreschocks.
Reference: O.M. Braun and M. Peyrard, Geophys. J. Int. 213, 676-683 (2018) (article)
### -- Thermodynamics of out-of-equilibrium systems --
Memory effects in glasses: Insights into the thermodynamics of out-of-equilibrium systems revealed by a simple model of the Kovacs effect
Glasses are interesting materials because they allow us to explore the puzzling properties of out-of-equilibrium systems. One of them is the Kovacs effect in which a glass, brought to an out-of-equilibrium state in which all its thermodynamic variables are identical to those of an equilibrium state, nevertheless evolves, showing a hump in some global variable before the thermodynamic variables come back to their starting point. We show that a simple three-state system is sufficient to study this phenomenon using numerical integrations and exact analytical calculations. It also brings some light on the concept of fictive temperature, often used to extend standard thermodynamics to the out-of-equilibrium properties of glasses. We confirm that the concept of a unique fictive temperature is not valid, an show it can be extended to make a connection with the various relaxation processes in the system. The model also brings further insights on the thermodynamics of out-of-equilibrium systems. Moreover, we show that the three-state model is able to describe various effects observed in glasses such as the asymmetric relaxation to equilibrium discussed by Kovacs, or the reverse crossover measured on B 2 O 3 .
Reference: M. Peyrard and J.-L. Garden, Phys. Rev. E 102, 052122 (13p) (2020)
(PRE paper)
Besides its fundamental interest, the model that we investigate in this article is simple enough to be used as a basis for courses or tutorials on the thermodynamics of out of equilibrium systems. It allows simple numerical calculations and analytical analysis which highlight important concepts with an easily workable example. (This version) includes studies of fast cooling and heating, exhibiting cases with negative heat capacity, and further discussions on the entropy which are not presented in the Physical Review E paper.
### Teaching:
Research at the boundary between physics and biology is currently expanding very fast. It requires a good understanding of both physics and biology, and, in addition, a good knowledge of the chemical aspects. This is why ENS-Lyon has introduced a special program on Physique et chimie des systèmes biologiques as part of the Master de Sciences de la Matière.
Further information can be obtained from the Site of the Master de Sciences de la Matière.
### The Journal of Biological Physics:
Many physicists are now turning their attention to domains which were not traditionally part of physics and are applying the sophisticated tools of theoretical and experimental physics to investigate new fields, such as biological processes. The Journal of Biological Physics (JBP) provides a medium where this growing community of physicists can publish its results and discuss its aims and methods.
The journal welcomes papers which use the tools of physics, both experimental and theoretical, in an innovative way, to study biological problems, as well as research aimed at providing a better understanding of the physical principles underlying biological processes.
All areas of biological physics can be addressed, from the molecular level, through the mesoscale of membranes and cells, up to the macroscopic level of a population of living organisms - the main criteria of acceptance being the physical content of the research and its relevance to biological systems. In order to increase the links between physics and biology and among the various fields of biological physics, authors are advised to include a first section that introduces the basic issues addressed and the primary achievements to a non-specialist reader.
In addition to original research papers, JBP welcomes review papers which call the attention of physicists to interesting unresolved biological problems that deserve investigation by physical methods. Special issues, published under the supervision of a guest editor, containing a series of papers devoted to a particular topic in addition to the regular papers, can also be published. They may be invited by the board but suggestions for a topical issue can also be accepted. They will be discussed with the editor. Book reviews are also welcome. Moreover, as a link between physicists interested in biological problems, JBP can also publish information such as meeting announcements or conference proceedings.
For further information, check the journal web page.
|
2021-12-09 02:05:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46787676215171814, "perplexity": 1678.3489887039188}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363641.20/warc/CC-MAIN-20211209000407-20211209030407-00432.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition-blitzer/chapter-3-section-3-2-exponential-functions-exercise-set-page-466/135
|
## Precalculus (6th Edition) Blitzer
This statement makes sense because an inverse of an exponential function is a logarithmic function. For example, $f(x)=2^x$ will have an inverse of $f^{-1}(x)=log_2(x)$. For $g(x)=2^x+1$, the horizontal asymptote shifts up from $y=0$ to $y=1$, while for $h(x)=log_2(x-2)$, the vertical asymptote shifts from $x=0$ to $x=2$
|
2021-05-05 22:31:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9453842639923096, "perplexity": 172.54503455445035}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988696.23/warc/CC-MAIN-20210505203909-20210505233909-00346.warc.gz"}
|
http://www.math.nsc.ru/publishing/DAOR/content/2012/04en/04-312.html
|
EN|RU Volume 19, No 4, 2012, P. 66-72 UDC 519.178 D. S. Malyshev Polynomial solvability of the independent set problem for one class of graphs with small diameter Abstract: A constructive approach to forming new cases in the family of hereditary parts of the set ${\mathcal Free}(\{P_5,C_5\})$ with polynomial-time solvability of the independent set problem is considered. We prove that if this problem is polynomial-time solvable in the class ${\mathcal Free}(\{P_5,C_5,G\})$ then for any graph $H$ which can inductively be obtained from $G$ by means of applying addition with $K_1$ or multiplication by $K_1$ to the graph $G$ the problem has the same computational status in ${\mathcal Free}(\{P_5,C_5,H\})$. Bibliogr. 10. Keywords: the independent set problem, computational complexity, polynomial algorithm. Malyshev Dmitrii Sergeevich 1,2 1. Nizhniy Novgorod Higher School of Economics, 25/12 B. Pecherskaya str., 603155 Nizhny Novgorod, Russia 2. Nizhniy Novgorod State University, 23 Gagarina ave., 603950 Nizhniy Novgorod, Russia e-mail: dsmalyshev@rambler.ru © Sobolev Institute of Mathematics, 2015
|
2017-10-17 15:05:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4533386528491974, "perplexity": 1951.620759400162}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822116.0/warc/CC-MAIN-20171017144041-20171017164041-00653.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/trigonometry/CLONE-68cac39a-c5ec-4c26-8565-a44738e90952/chapter-7-review-exercises-page-352/45
|
## Trigonometry (11th Edition) Clone
$90^{\circ}$, orthogonal
Step 1: We let $\textbf {u}=\langle 5,-3 \rangle$ and $\textbf {v}=\langle 3,5 \rangle$ Step 2: The formula for finding the angle between a pair of vectors is $\cos\theta=\frac{\textbf {u}\cdot\textbf {v}}{|\textbf {u}||\textbf {v}|}$ Step 3: $\cos\theta=\frac{\langle 5,-3 \rangle\cdot\langle 3,5 \rangle}{|\langle 5,-3 \rangle||\langle 3,5 \rangle|}$ Step 4: $\cos\theta=\frac{5(3)-3(5)}{\sqrt (5^{2}+(-3)^{2})\cdot\sqrt (3^{2}+5^{2})}$ Step 5: $\cos\theta=\frac{15-15}{\sqrt (25+9)\cdot\sqrt (9+25)}$ Step 6: $\cos\theta=\frac{0}{\sqrt (34)\cdot\sqrt (34)}$ Step 7: $\cos\theta=0$ Step 8: $\theta=\cos^{-1}(0)$ Step 9: Solving using the inverse cos function on the calculator, $\theta=\cos^{-1}(0)=90^{\circ}$ Since the angle between the vectors is $90^{\circ}$, the vectors are orthogonal.
|
2022-05-22 14:45:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9789146780967712, "perplexity": 110.98565675644059}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662545548.56/warc/CC-MAIN-20220522125835-20220522155835-00054.warc.gz"}
|
https://jglobaloralhealth.org/the-global-burden-of-oral-diseases-aligning-around-common-goals-for-improved-advocacy-outcomes/
|
Guest Editorial
1 (
1
); 3-4
doi:
10.25259/JGOH-21-2019
# The global burden of oral diseases - aligning around common goals for improved advocacy outcomes
Department of Epidemiology and Health Promotion, College of Dentistry, New York University, New York, United States
Corresponding author: David C Alexander, Vice President International Affairs, Academy of Dentistry International, United States. david@appoloniaglobalhealth.com
Licence
This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-Share Alike 4.0 License, which allows others to remix, tweak, and build upon the work non-commercially, as long as the author is credited and the new creations are licensed under the identical terms.
The Global Burden of Disease (GBD) Studies report that the cumulative burden of oral conditions dramatically increased in the 25 years to 2015.[1] The burden rose from 2.5 billion people in 1990 to 3.5 billion in 2015, or approximately half (48% age-standardized prevalence) of the human population of the planet suffered a disability from oral conditions. The single most prevalent of the more than 300 conditions reported in GBD 2015 was untreated caries in the permanent dentition, affecting 2.5 billion people (34% age-standardized prevalence). Severe chronic periodontitis ranked as the sixth most prevalent of all conditions affecting 538 million people.
Over that same period where the prevalence of untreated caries in permanent teeth increased by 1 billion people, technology in dental services has experienced tremendous advances. Reparative and esthetic dentistry has witnessed major strides in digitization and automation, especially in the areas of imaging, CAD/CAM, and materials science. More durable restorative materials, including some biomaterials with optical and mechanical properties closer and closer to the natural tooth substances of enamel and dentine, are in widespread use. Tooth movement in dentofacial orthopedics and orthodontics has advanced also through imaging and digitization and the advent of the increasingly popular clear aligner. Where tooth loss has occurred, either through disease-related extractions or trauma, dental implants can be placed providing a more natural solution with improved esthetic and functional outcomes over the acrylic removable denture.
Unfortunately, these strides in technology for restorative and surgical dentistry have largely bypassed the essentially needed primary and secondary prevention of oral diseases and have enabled growth in services for beauty and vanity rather than any reduction in disease and disabilities and the social and economic burden on humanity.
Contemporary technologies are only accessible where there is access to licensed dentists, and in most situations, the consumer must have the ability to pay a significant, if not total contribution to the cost, due to health reimbursement and coverage frequently excluding oral diseases. The estimated direct costs of technology-driven dental services amount to $356.8 billion globally, together with indirect costs of a further$187.6 billion through loss of wages, reduced school hours, transportation, etc.[2] Yet dental diseases are largely preventable and occur through lack of knowledge and health literacy or adverse and limited choices about everyday behaviors in the bathroom, kitchen, or fast-food outlet. Limited access to services, the direct and indirect financial implications, and the everyday underlying behavioral causes of oral diseases mean that preventive efforts by dental professionals are unlikely to have any effect on the overall burden of oral disease.
Hence, who should be the main actors in the advocacy for the prevention of oral diseases and the promotion of oral health? Are there lessons to be learned from other diseases and conditions that have achieved greater success or not witnessed the stunning increase of 1 billion people? Oral diseases lack a global, cause-related advocacy voice that includes all stakeholders – is it that other such advocacy groups (heart disease, diabetes, obesity, and lung diseases) that include wide spectra of stakeholders are able to make more compelling arguments for resources and policies? Are oral health actors out-played in this highly competitive and emotive environment as they may not include all stakeholders?
Oral health and dentistry are seldom considered to reside in the mainstream of health and medicine. Historically, dentistry organized itself as a profession independent of medicine and still today educates and trains its providers in dental schools usually both physically and administratively outside of the medical school, at times not even on the same campus. An outcome of this academic separation is often described by the phrase as “the mouth being out of the body.” The health professions, policy-makers, funding agencies, and society have long accepted this dichotomy. However, in the past decade, there has been an emerging and increasing body of evidence of bidirectional links between poor oral health and numerous remote organs and systemic conditions including diabetes, cardiovascular disease and stroke, pre-term low birthweight babies, and Alzheimer’s disease, among others. Interprofessional education and interprofessional collaboration may help “to place the mouth back in the body,” and in so doing benefit from the management of common risk factors for non-communicable diseases (NCDs) such as sugar, tobacco, alcohol, hygiene, water quality, and health literacy that will improve both general and oral health. The social and commercial determinants for many NCD apply equally to oral diseases.[3,4]
The time is now for oral health advocates to work collaboratively with other disease-advocacy groups aligning around common established goals with the ultimate aim of a realization that the population of the planet will never be healthy, while the common risk factors are not under control, and in doing so, one of the many dividends will be the reduction in oral diseases.
Nil.
### Conflicts of interest
There are no conflicts of interest.
## REFERENCES
1. , , , , , , . Global, regional, and national prevalence, incidence, and disability-adjusted life years for oral conditions for 195 countries 1990-2015: A systematic analysis for the global burden of diseases, injuries, and risk factors. J Dent Res. 2017;96:380-7.
2. , , , , . Global-, regional-, and country-level economic impacts of dental diseases in 2015. J Dent Res. 2018;97:501-7.
3. , , , , , , . Global oral health inequalities: Task group implementation and delivery of oral health strategies. Adv Dent Res. 2011;23:259-67.
|
2021-07-24 01:39:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19652870297431946, "perplexity": 7007.787562223054}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150067.87/warc/CC-MAIN-20210724001211-20210724031211-00057.warc.gz"}
|
https://mathoverflow.net/questions/350904/classifying-space-for-thompsons-group-f
|
# Classifying space for Thompson's group F?
Let $$\mathcal C$$ be the free monoidal category generated by an object $$X$$, and a morphism $$X \otimes X \to X$$.
This category contains exactly two connected components: that of the monoidal unit $$1\in \mathcal C$$, and that of $$X\in \mathcal C$$. (In general, two object $$A$$ and $$B$$ of a category are said to be in the same connected component if they are related by a zig-zag of arrows $$A\to Y_1\leftarrow Y_2\to Y_3\leftarrow Y_4\to Y_5\leftarrow\ldots \to B.$$ In the case of $$\mathcal C$$, any two non-unit objects are related by a single morphism.)
Let $$\mathcal C'\subset \mathcal C$$ be the connected component of $$X$$ and let $$|\mathcal C'|$$ denote the geometric realisation (of the simplicial nerve) of $$\mathcal C'$$.
The article
Marcelo Fiore, Tom Leinster, An abstract characterization of Thompson's group $$F$$, Semigroup Forum 80 (2010), 325-340, doi:10.1007/s00233-010-9209-2, arXiv:math/0508617.
proves that $$\pi_1(|\mathcal C'|)$$ is isomorphic to Thompson's group $$F$$.
Question: Is $$|\mathcal C'|$$ a classifying space for Thompson's group $$F$$?
• Just to make sure I understand: Fiore and Leinster talk about a different category, namely, the monoidal category freely generated by an object $X$ and an isomorphism $X \otimes X \to X$. That category is a groupoid, so your question has a positive answer for their category. Now it easily follows from their result that for your non-groupoid category the fundamental group is $F$, and you are asking if you also get a $K(F,1)$ from your category. Right? – Omar Antolín-Camarena Jan 21 at 23:54
• @Omar Antolin-Camarera: Yes, your understanding is correct. – André Henriques Jan 22 at 7:58
## 1 Answer
What you describe is the so called Squier complex of the semigroup presentation $$\langle x \mid x^2=x\rangle$$ (you did not describe the 2-cells, but it is straightforward). The fact that its fundamental group is $$F$$ was proved by Guba and myself in 1997, "Diagram groups", Memoirs of the AMS, November, 1997 (link). Farley in
"Finiteness and CAT(0) properties of diagram groups", Topology 42 (2003), no. 5, 1065–1082 doi:10.1016/S0040-9383(02)00029-0, author pdf
proved that its universal cover is a $$CAT(0)$$ cube complex. So indeed the Squier complex is a classifying space for $$F$$. The proof of all these from the category theory point of view can be found in Guba, V. S.; Sapir, M. V. "Diagram groups and directed 2-complexes: homotopy and homology. " J. Pure Appl. Algebra 205 (2006), no. 1, 1–47.
|
2020-04-10 07:51:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 22, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8983846306800842, "perplexity": 342.1683791785661}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371893683.94/warc/CC-MAIN-20200410075105-20200410105605-00280.warc.gz"}
|
https://www.optics4kids.org/what-is-optics/terms?Letter=D
|
## Optics Dictionary
Sometimes reading a scientific explanation is as difficult as reading Parseltongue. This section features definitions and etymology for the terms and phrases you will encounter as you explore the science of light. Etymology is the study of the history of words — when they entered a language, from what source, and how their form and meaning have changed over time. Ever wonder how the word optics got its meaning? OK — probably not but now you can find out!
#### Decibel (dB)
General Terms
A logarithmic unit of the ratio of a quantity, usually power or intensity levels relative to a specified level. For example, when used to describe the gain or loss in power it is defined as:
$\text{Level in dB}=10\text{\hspace{0.17em}}\mathrm{log}\frac{\text{power level 2}}{\text{power level 1}}$
The level in decibels is positive if power level 2 is greater than power level 1, negative if it is less, and zero if they are equal.
early 1900s: deci- [ten] + bel (the unit being one tenth of a bel, named after Alexander Graham Bell).
#### Deflect
General Terms
To cause something to change direction or travel away from its expected path.
For more about deflection click on this link to the pamphlet on Lasers.
#### Detector
General Terms
1) A device designed to detect the presence of something and provide a signal in response. 2) A device designed to convert a range of incident energy of radiation into another form to determine the presence and/or amount of radiation. The device may function by electrical, photographic, or visual means.
1447, from L. detectus, pp. of detegere "uncover, disclose," from de- "un-, off" + tegere "to cover."
#### Diffraction
General Terms
1) A phenomenon that occurs whenever a light wave is obstructed in any way. Often diffraction fringes can be seen when a small aperture or object blocks light waves. 2) The optical phenomenon by which a grating separates the light into its constituent components. Diffraction causes different wavelengths of light to transmit through or reflect from a grating at different angles, allowing a spectrometer to separate and measure intensity of individual wavelengths.
To learn more about diffraction click on the link to the pamphlet about Solid-State Lighting (LEDs) and on the Laser pamphlet.
1671, from Fr. diffraction, from Mod.L. diffractionem, from L. diffrac-, stem of diffringere "break in pieces," from dis- "apart" + frangere "to break."
#### Diffraction grating
General Terms
A device used to break light into its component wavelengths. It is usually composed of a material with tiny grooves cut into it. These disperse the light as it passes through or bounces off the grating (depending on the type of grating). Physicists and astronomers often use diffraction gratings to determine the wavelengths composing the light being viewed.
c.1374, from L. diffusionem, from stem of diffundere "scatter, pour out," from dif- "apart, in every direction" + fundere "pour."
#### Diffusion
General Terms
1) Is a transport phenomena that results in mixing due to random motion of atoms and molecules. 2) In optics, the scattering of light due to reflection or transmission.
#### Diffuse Reflection
General Terms
A phenomenon that results when light strikes an irregular surface such as a frosted window or the surface of a frosted or coated light bulb.
#### Digital camera
General Terms
A camera that records images in digital form by converting the light from the scene being photographed into an electric signal with the use of charge-coupled devices (CCDs). The electric signal is then stored digitally on a random access memory device. The digital data may then be manipulated to enhance or otherwise modify the resulting viewed image.
#### Diopter (D)
General Terms
1) A unit of optical power that expresses the refractive power of a lens or a mirror. For a lens, it is equal to the reciprocal of the focal length, in meters. For example, a 5 diopter lens brings parallel rays of light shining on it to a focus 1/5 of a meter away from its center of curvature. 2) A prism diopter (∆) is a measure of prismatic deviation equal to a deflection of 1 cm at a distance of 1 m.
From L. dioptra, from Gk. dioptra : dia- + optos, "visible." A unit of measurement of the refractive power of lenses equal to the reciprocal of the focal length measured in meters.
#### Dioptric system
General Terms
An optical system that uses lenses for image formation.
#### Direct-vision prism
General Terms
An assembly of multiple prisms that disperses incident light into its spectral components without deviating light at the central wavelength.
#### Direct ray
General Terms
A ray that travels without being reflected or refracted.
#### Dispersion
General Terms
The dependence of a wave’s velocity on its frequency. Objects that have this property are called dispersive media, and can separate a beam of light into its various wavelength components, such as a dispersive prism. Another common example of light dispersion is a rainbow.
c.1450, from M.Fr. disperser "scatter," from L. dispersus, pp. of dispergere "to scatter," from dis- "apart, in every direction" + spargere "to scatter."
#### Dispersing prism
General Terms
A prism or series of prisms used to disperse a beam of radiant energy of mixed wavelengths into its spectral components.
#### Distortion
General Terms
The situation where an image is not a true-to-scale reproduction of an object.
1586, from L. distortus, pp. of distorquere "to twist different ways, distort," from dis- "completely" + torquere "to twist."
#### Diverge
General Terms
To separate, or cause to separate, and go in different directions from a point.
For more information on divergence, click the link to the pamphlet on Lasers.
#### Divergence
General Terms
The bending of rays away from each other.
#### Diverging lens
General Terms
A lens that causes parallel rays of light to spread out. Examples include: negative lens, divergent lens, concave lens, or dispersive lens.
#### Doppler effect
General Terms
Radiation emitted from a source and received by an observer that are in relative motion to each other appears to be of a lower or higher frequency than if there were no relative motion between source and observer. If the relative motion between source and receiver causes motion toward each other, the frequency is shifted upward (sometimes called blue shifted). If the source and receiver are moving away from each other the frequency is shifted downward (sometimes termed red shifted).
1871, in reference to Christian Doppler (1803-53), Austrian scientist, who in 1842 explained the effect of relative motion on waves (originally to explain color changes in binary stars); proved by musicians performing on a moving train. Doppler shift is the change of frequency resulting from the Doppler effect.
#### Doublet lens
General Terms
A compound lens consisting of two elements.
c.1225, from O.Fr. duble, from L. duplus "twofold," from duo "two" + -plus "fold."
#### Dove prism
General Terms
A form of prism invented by H.W. Dove. It resembles half of a common right-angle prism in which a ray entering parallel to the hypotenuse face is reflected internally at that face and emerges parallel to its incident direction. One of the incident rays emerges along a continuation of its incident direction, and if the prism is rotated about that ray through some angle, the image rotates through twice that angle. A Dove prism must be used in parallel light.
Copyright ©2019
|
2019-09-20 08:37:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4354739785194397, "perplexity": 2711.4215799067806}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573908.70/warc/CC-MAIN-20190920071824-20190920093824-00191.warc.gz"}
|
https://learn.careers360.com/ncert/question-consider-the-following-figure-of-line-mn-say-whether-following-statements-are-true-or-false-in-context-of-the-given-figure-a-q-m-o-n-p-are-points-on-the-line-mn/
|
## Filters
Q&A - Ask Doubts and Get Answers
Q
# Consider the following figure of line MN. Say whether following statements are true or false in context of the given figure.(a) Q, M, O, N, P are points on the line MN.
6. Consider the following figure of line $\overline{MN}$ . Say whether following statements are true or false in context of the given figure.
(a) Q, M, O, N, P are points on the line $\overline{MN}$.
(b) M, O, N are points on a line segment $\overline{MN}$ .
(c) M and N are end points of line segment $\overline{MN}$ .
(d) O and N are end points of line segment $\overline{OP}$ .
(e) M is one of the end points of line segment $\overline{QO}$ .
(f) M is point on ray $\overrightarrow{OP}$ .
(g) Ray $\overrightarrow{OP}$ is different from ray $\overrightarrow{QP}$ .
(h) Ray $\overrightarrow{OP}$ is same as ray $\overrightarrow{OM}$.
(i) Ray $\overline{OM}$ is not opposite to ray $\overline{OP}$.
(j) O is not an initial point of $\overline{OP}$ .
(k) N is the initial point of $\overline{NP}$ and $\overline{NM}$ .
Answers (1)
Views
(a) True
(b) True
(c) True
(d) False
(e) False
(f) False
(g) True
(h) False
(i) False
(j) False
(k) True
Exams
Articles
Questions
|
2020-01-26 16:01:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 16, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8206599354743958, "perplexity": 1515.4429977069412}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251689924.62/warc/CC-MAIN-20200126135207-20200126165207-00488.warc.gz"}
|