https://physics.stackexchange.com/questions/387663/why-arent-the-energy-levels-of-the-earth-quantized/387675
# Why aren't the energy levels of the Earth quantized?
The Hamiltonian of the Earth in the gravity field of the Sun is the same as that of the electron in the hydrogen atom (besides some constants), so why are the energy levels of the Earth not quantized?
(Of course, the same question applies to any mass in a gravitational field.)
• Why do you say that the energy levels aren't quantized? – Samuel Weir Feb 20 '18 at 21:23
• Another problem is that the Hamiltonian you are referring to is just an approximation to the underlying general relativity framework, and there's no consensus on the procedure we should take to quantize that... – valerio Feb 21 '18 at 0:03
• To quantize the Earth's energy level you need an unambiguous definition of "Earth", one which accounts for every single quantizable particle which belongs to that "Earth". I'm sure you can see how a not-completely-quantized approximation is far more useful and achievable than any attempt to identify every last particle belonging to some particular notion of "Earth" at some particular instant in time. – Beanluc Feb 21 '18 at 20:23
• The earth is not in a stationary state – Matt Timmermans Feb 22 '18 at 19:28
The orbital energy of the Earth around the Sun is quantized. Measuring this quantization directly is infeasible, as I'll show below, but other experiments with bouncing neutrons (Nature paper) show that motion in a classical gravity field is subject to energy quantization.
We can estimate the quantized energy levels of the Earth's orbit by analogy with the hydrogen atom, since both are governed by inverse-square forces--just with different constants. For hydrogen: $$E_n = -\frac{m_e}{2}\left(\frac{e^2}{4\pi\epsilon_0}\right)^2\frac{1}{n^2\hbar^2}$$ Replacing $m_e$ with the mass of the Earth ($m$) and the parenthesized expression with the corresponding gravitational quantity ($GMm$, where $M$ is the mass of the Sun and $G$ is the gravitational constant) gives $$E_n = -\frac{m}{2}\left(GMm\right)^2\frac{1}{n^2\hbar^2}$$ Setting this equal to the total orbital energy: $$E_n = -\frac{m}{2}\left(GMm\right)^2\frac{1}{n^2\hbar^2} = -\frac{GMm}{2r}$$ Solving for $n$ and plugging in values gives: $$n = \frac{m}{\hbar}\sqrt{GMr} = 2.5\cdot 10^{74}$$ The fact that Earth's energy level sits at such a large quantum number means that any energy transition (the level spacing is proportional to $1/n^3$) will be undetectably small.
In fact, to transition to the next energy level, the Earth would have to absorb: $$\Delta E_{n \to n+1} = m\left(GMm\right)^2\frac{1}{n^3\hbar^2} = 2\cdot 10^{-41}\ \textrm{J} = 1\cdot 10^{-22}\ \textrm{eV}$$ For a sense of how small this energy is: a photon with this energy has a wavelength of $10^{16}$ meters--about one light-year.
Solving for $r$: $$r = n^2\left(\frac{\hbar}{m}\right)^2\frac{1}{GM}$$ An increase in the principal quantum number ($n$) by one results in a change in orbital distance of \begin{align} \Delta r &= \left[(n+1)^2 - n^2\right]\left(\frac{\hbar}{m}\right)^2\frac{1}{GM} \\ &= \left[2n + 1\right]\left(\frac{\hbar}{m}\right)^2\frac{1}{GM} \\ &= 1.2\cdot 10^{-63}\ \textrm{meters} \end{align} Again, way too small to measure.
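The numbers quoted above can be checked directly. This is a sketch using round-number values for the constants (standard CODATA-style values; the Earth/Sun masses and 1 AU are approximations), so expect agreement to about two significant figures:

```python
# Numerical check of the quantum numbers derived above.
import math

hbar = 1.054571817e-34   # J*s
G = 6.674e-11            # m^3 kg^-1 s^-2
M_sun = 1.989e30         # kg
m_earth = 5.972e24       # kg
r = 1.496e11             # m (1 AU)

# n = (m/hbar) * sqrt(G M r)
n = (m_earth / hbar) * math.sqrt(G * M_sun * r)

# Delta E between levels n and n+1 ~ m (G M m)^2 / (n^3 hbar^2)
dE = m_earth * (G * M_sun * m_earth) ** 2 / (n ** 3 * hbar ** 2)

# Delta r = (2n + 1) (hbar/m)^2 / (G M)
dr = (2 * n + 1) * (hbar / m_earth) ** 2 / (G * M_sun)

print(f"n  ~ {n:.1e}")    # ~2.5e74
print(f"dE ~ {dE:.0e} J") # ~2e-41 J
print(f"dr ~ {dr:.1e} m") # ~1.2e-63 m
```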
• That's so preposterously small I laughed out loud! – WetSavannaAnimal aka Rod Vance Feb 21 '18 at 5:43
• Something that might get lost in all the maths: this answer does not claim that the earth resides in quantized energy states; it merely explains why we currently cannot verify (or refute) such a claim. – WorldSEnder Feb 21 '18 at 7:51
• Neutrons have been shown to have quantized energy levels in a gravitational field, yes, but this is not enough to answer in the affirmative: Earth is a classical object and there is no experimental proof that it can behave in any way as a quantum one. This is only a leap of faith extending the validity of QM well beyond its tested domain. – Stéphane Rollandin Feb 21 '18 at 9:44
• Note that, to actually claim that the orbital energy is quantised in any observable sense, one also needs to show that the linewidth associated with transitions between different orbital states is smaller than the energy difference between them. Since it appears that the stimulated emission of even one photon into space is enough to cause a transition, the linewidth must be far larger than the spacing and therefore the "spectrum" would be continuous at such high $n$ even if it could be plausibly measured (which of course it can't, as this nice answer shows). – Mark Mitchison Feb 21 '18 at 13:54
• @zibadawatimmy That article links to nature.com/news/2002/020117/full/news020114-8.html which seems more directly on point, and cites physi.uni-heidelberg.de/~abele/nature.pdf for more detail. – zwol Feb 21 '18 at 21:24
They are. It is just that they are so closely spaced that we cannot observe the quantization. Note that we do not yet have a good theory of quantum gravity.
• It's not clear that they must be—perhaps the planet is too massive and energetic to maintain the required coherence—but this consideration explains why it wouldn't matter if they were. – dmckee Feb 20 '18 at 21:25
• @Iván Mauricio Burbano Why are they so closely spaced? Shouldn't the quantization go as $C/n^2$, as for hydrogen? – Jacob Feb 20 '18 at 21:26
• I'm not sure about what the constant is for gravitational purposes. However, any object with enough mass surely has energy corresponding to very high $n$ under normal conditions. That's why I presume they are so closely spaced. – Iván Mauricio Burbano Feb 20 '18 at 21:30
• @Jacob When you have a bunch of interacting atoms, they no longer have the same energy levels as the separate atoms themselves. When atoms interact, their electron wavefunctions distort, which drastically changes the spacing of the energy levels. So no, there's no reason to expect it should be a 1/n^2 quantization. – probably_someone Feb 20 '18 at 21:35
• @Jacob, at high $n$ and $l$ (which must be true of states representing the Earth) the quantum and classical solution are identical and we haven't learned anything new. But you must estimate those quantum numbers to get an appreciation for how true that is. Don't guess and don't try to intuit it: calculate and see. – dmckee Feb 21 '18 at 0:06
tl;dr- In principle, quantization might still apply. Scientifically speaking, we have no idea yet.
We don't know how far down our current quantum theories might hold.
To draw an analogy, Newton's laws of motion predict that things can move faster than the speed of light, $c$. But it turns out that wasn't right: Newton's laws break down in the relativistic limit, and today we know that prediction wasn't meaningful.
So, as described in @MarkH's answer, adjacent energy levels correspond to orbital radii separated by
\begin{align} \Delta r &= \left[(n+1)^2 - n^2\right]\left(\frac{\hbar}{m}\right)^2\frac{1}{GM} \\ &= \left[2n + 1\right]\left(\frac{\hbar}{m}\right)^2\frac{1}{GM} \\ &= 1.2\cdot 10^{-63}\ \textrm{meters} \end{align}
In terms of the Planck length,$$\ell_{\mathrm{P}} ~ {\approx} ~ 1.616229 \times 10^{-35}\ \textrm{meters},$$that'd be about $7.4 \cdot 10^{-29}\,\ell_{\mathrm{P}}$.
As a rule of thumb, any prediction that's astronomically smaller than the Planck length falls into the realm of speculation as opposed to verified scientific models.
• Your rule of thumb is a little bit... conservative... – Mehrdad Feb 22 '18 at 0:58
• @Mehrdad Hah, just a little, right? In all seriousness, I think that SE.Physics tends to lean toward Platonic realism in some cases, so answers of the form, "Our scientific models are really just correlations that hold over the limited domain in which they've been verified" tend to be poorly received. – Nat Feb 22 '18 at 1:02
• Maybe we need more businesspeople on this site who can sell the theories better... =P – Mehrdad Feb 22 '18 at 1:12
• To me, quantum mechanics is a verified scientific model, which allows for extrapolation to unusual situations. It does show that quantum mechanics actually describes the everyday world, since the quantum weirdness becomes undetectable for large systems. The accusation of Platonism is rather libelous, though. :) – Mark H Feb 22 '18 at 9:40
## protected by Qmechanic♦ Feb 22 '18 at 5:54
http://physgre.s3-website-us-east-1.amazonaws.com/2001%20html/2001%20problem%2083.html
## Solution to 2001 Problem 83
The $S_x$ operator in matrix form is given by \begin{align*}S_x = \sigma_x \frac{\hbar}{2} = \frac{\hbar}{2}\left(\begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix}\right)\end{align*} An eigenspinor of $S_x$ with eigenvalue $-\hbar/2$ must satisfy \begin{align}S_x \xi = \frac{-\hbar}{2} \xi \label{eqn83:1}\end{align} We can convert all of the answers to vector notation by using the identifications \begin{align*}| \uparrow\rangle &\to \left(\begin{matrix} 1 \\ 0 \end{matrix}\right) \\ | \downarrow\rangle &\to \left(\begin{matrix} 0 \\ 1 \end{matrix}\right)\end{align*} We can then check whether the condition in equation (1) is satisfied. We find that it is satisfied only for answer (C). Therefore, answer (C) is correct. Physically, this means that the eigenspinor in answer (C) represents a particle with a definite $x$-component of its spin vector equal to $-\hbar/2$.
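The eigenvalue check can be verified numerically. This sketch (not part of the original solution) tests the spinor $(1, -1)/\sqrt{2}$, which is the standard $-\hbar/2$ eigenspinor of $S_x$; we set $\hbar = 1$ for convenience:

```python
# Verify that (1, -1)/sqrt(2) is an eigenvector of S_x with eigenvalue -hbar/2.
import numpy as np

hbar = 1.0
Sx = (hbar / 2) * np.array([[0.0, 1.0],
                            [1.0, 0.0]])

xi = np.array([1.0, -1.0]) / np.sqrt(2)  # candidate eigenspinor

# S_x xi should equal (-hbar/2) xi
assert np.allclose(Sx @ xi, (-hbar / 2) * xi)
print("eigenvalue check passed")
```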
https://math.stackexchange.com/questions/1200627/function-algorithm-for-obtaining-random-numbers-from-dice
# Function (algorithm) for obtaining random number(s) from dice
I'm afraid I don't speak maths very well. I hope this question is sufficiently comprehensible and mathematical.
Suppose I have a perfect D$x$ (i.e. $x$-sided) die, and a pen and paper, and with these I wish to obtain a random integer between 1 and $n$ such that every possible result $r$ is equally likely, where $x \geq 2$, $n > 1$, and $1 \leq r \leq n$.
Is there a known algorithm/procedure/function/protocol that provides such output from such input? If so, what is it called, and how does it work?
(Presumably, the protocol will involve rolling the die more than once if $n>x$.)
• search term for you, arithmetic encoding. – adam W Mar 22 '15 at 6:28
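One standard answer (not spelled out in the thread itself) is rejection sampling: roll the die enough times to build a base-$x$ number with at least $n$ outcomes, then reject and re-roll any value at or above the largest multiple of $n$ that fits. Each accepted value mod $n$ is exactly uniform. A sketch:

```python
# Rejection sampling: uniform 1..n from a fair x-sided die.
import random

def roll_uniform(x: int, n: int, die=None) -> int:
    """Return a uniform integer in 1..n using a fair x-sided die."""
    die = die or (lambda: random.randint(1, x))  # simulated die roll
    # Find the smallest k with x**k >= n (rolls per attempt).
    k, span = 1, x
    while span < n:
        k += 1
        span *= x
    while True:
        # Interpret k rolls as a base-x number v in 0 .. span-1.
        v = 0
        for _ in range(k):
            v = v * x + (die() - 1)
        limit = (span // n) * n  # largest multiple of n <= span
        if v < limit:            # reject v >= limit and re-roll
            return v % n + 1

print(roll_uniform(6, 10))  # uniform 1..10 from a D6
```

Rejection wastes some rolls (at worst nearly half per attempt); the "arithmetic encoding" idea in the comment above reduces that waste by carrying leftover entropy between attempts.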
https://bird.bcamath.org/handle/20.500.11824/13/browse?rpp=20&sort_by=1&type=title&etal=-1&starts_with=P&order=ASC
Now showing items 192-211 of 263
• Parallelizing the Kolmogorov-Fokker-Planck Equation
(2015-12-31)
We design two parallel schemes, based on Schwarz Waveform Relaxation (SWR) procedures, for the numerical solution of the Kolmogorov equation. The latter is a simplified version of the Fokker-Planck equation describing the ...
• Parametric Vibration Analysis of Pipes Conveying Fluid by Nonlinear Normal Modes and a Numerical Iterative Approach
(2019)
Nonlinear normal modes and a numerical iterative approach are applied to study the parametric vibrations of pipes conveying pulsating fluid as an example of gyroscopic continua. The nonlinear non-autonomous governing ...
• Particle Morphology
(2013-12-31)
This chapter discusses the morphology of latex particles obtained mainly by (mini)emulsion polymerisation. It describes some applications of these particles, and discusses the factors that influence the particle morphology. ...
• Patient-specific computational modeling of Cortical Spreading Depression via Diffusion Tensor Imaging
(2016-06-29)
Cortical Spreading Depression (CSD), a depolarization wave originating in the visual cortex and traveling towards the frontal lobe, is commonly accepted as a correlate of migraine visual aura. As of today, little is ...
• Patient-specific modelling of cortical spreading depression applied to migraine studies
(2019-06-17)
Migraine is a common neurological disorder and one-third of migraine patients suffer from migraine aura, a perceptual disturbance preceding the typically unilateral headache. Cortical spreading depression (CSD), a ...
• Permanence and extinction for a nonautonomous SEIRS epidemic model
(2012-12-31)
In this paper, we study the long-time behavior of a nonautonomous SEIRS epidemic model. We obtain new sufficient conditions for the permanence (uniform persistence) and extinction of infectious population of the model. By ...
• Permanence and global stability of a class of discrete epidemic models
(2011-12-31)
In this paper we investigate the permanence of a system and give a sufficient condition for the endemic equilibrium to be globally asymptotically stable, which are the remaining problems in our previous paper (G. Izzo, Y. ...
• A phenomenological model for interfacial water near hydrophilic polymers
(2022-06-30)
We propose a minimalist phenomenological model for the ‘interfacial water’ phenomenon that occurs near hydrophilic polymeric surfaces. We achieve this by combining a Ginzburg–Landau approach with Maxwell’s equations which ...
• Predictive engineering and optimization of tryptophan metabolism in yeast through a combination of mechanistic and machine learning models
(2019)
In combination with advanced mechanistic modeling and the generation of high-quality multi-dimensional data sets, machine learning is becoming an integral part of understanding and engineering living systems. Here we show ...
• Pro-C congruence properties for groups of rooted tree automorphisms
(2018-11-21)
We propose a generalisation of the congruence subgroup problem for groups acting on rooted trees. Instead of only comparing the profinite completion to that given by level stabilizers, we also compare pro-$\mathcal{C}$ ...
• Probabilistic Modelling of Classical and Quantum Systems
(2018-06-14)
While probabilistic modelling has been widely used in the last decades, the quantitative prediction in stochastic modelling of real physical problems remains a great challenge and requires sophisticated mathematical models ...
• Pseudospectral methods and numerical continuation for the analysis of structured population models
(2016-06-07)
In this thesis new numerical methods are presented for the analysis of models in population dynamics. The methods approximate equilibria and bifurcations in a certain class of so called structured population models. Chapter ...
• Pseudospin lifetime in relaxed-shape armchair graphene nanoribbons due to in-plane phonon modes
(2016-01-01)
We study the influence of ripple waves on the band structures of strained armchair graphene nanoribbons. We argue that the Zeeman pseudospin (p-spin) splitting energy induced by ripple waves might not be neglected for ...
• Qualitative analysis of kinetic-based models for tumor-immune system interaction
(2018-08)
A mathematical model, based on a mesoscopic approach, describing the competition between tumor cells and immune system in terms of kinetic integro-differential equations is presented. Four interacting populations are ...
• Quantum Face Recognition Protocol with Ghost Imaging
(2021-10)
Face recognition is one of the most ubiquitous examples of pattern recognition in machine learning, with numerous applications in security, access control, and law enforcement, among many others. Pattern recognition with ...
• Quiescence: A mechanism for escaping the effects of drug on cell populations
(2011-12-31)
We point out that a simple and generic strategy in order to lower the risk for extinction consists in developing a dormant stage in which the organism is unable to multiply but may die. The dormant organism is protected ...
• Radiation of water waves by a submerged nearly circular plate
(2016-01-01)
A thin nearly circular plate is submerged below the free surface of deep water. The problem is reduced to a hypersingular integral equation over the surface of the plate which is conformally mapped onto the unit disc. The ...
• Radiofrequency Ablation for Treating Chronic Pain of Bones: Effects of Nerve Locations
(2019-05)
The present study aims at evaluating the effects of target nerve location from the bone tissue during continuous radiofrequency ablation (RFA) for chronic pain relief. A generalized three-dimensional heterogeneous computational ...
• An RBF-FD closest point method for solving PDEs on surfaces
(2018)
Partial differential equations (PDEs) on surfaces appear in many applications throughout the natural and applied sciences. The classical closest point method (Ruuth and Merriman, J. Comput. Phys. 227(3):1943-1961, ...
• Reexamination of continuous fuzzy measurement on two-level systems
(2017-04-10)
Imposing restrictions on the Feynman paths of the monitored system has in the past been proposed as a universal model-free approach to continuous quantum measurements. Here we revisit this proposition and demonstrate that ...
http://www.openwetware.org/index.php?title=IGEM:IMPERIAL/2007/Dry_Lab/Modelling/ID&diff=prev&oldid=158889
# IGEM:IMPERIAL/2007/Dry Lab/Modelling/ID
# Model Development for Infector Detector
## Formulation of the problem
• Questions to be answered with the approach
• Verbal statement of background
• What does the problem entail?
• Hypotheses employed
## Selection of model structure
• Present general type of model
1. is the level of description macro- or microscopic
2. choice of a deterministic or stochastic (!) approach
3. use of discrete or continuous variables
4. choice of steady-state, temporal, or spatio-temporal description
• determinants for system behaviour? - external influences, internal structure...
• assign system variables
## Our models
• Introduction
We can condition the system in various manners, but for the purposes of our project, Infector Detector, we will seek a formulation which is valid for both constructs considered.
Our initial approach assumed that energy would be in unlimited supply and that our system would eventually reach steady-state (Model 1). Experimentation suggested otherwise; our system needed to be amended. This led to the development of Model 2, an energy-dependent network, in which the dependence on energy is assumed to follow Hill-like dynamics:
### Model 1: Steady-state is attained; limitless energy supply (link here to derivation)
$\frac{d[LuxR]}{dt} = k_1 + k_3[A] - k_2[LuxR][AHL]- \delta_{LuxR}[LuxR]$
$\frac{d[AHL]}{dt} = k_3[A] - k_2[LuxR][AHL]- \delta_{AHL}[AHL]$
$\frac{d[A]}{dt} = -k_3[A] + k_2[LuxR][AHL]- k_4[A][P] + k_5[AP]$
$\frac{d[P]}{dt} = -k_4[A][P] + k_5[AP]$
$\frac{d[AP]}{dt} = k_4[A][P] - k_5[AP]$
$\frac{d[GFP]}{dt} = k_6[AP] - \delta_{GFP}[GFP]$
### Model 2: Equations developed through steady-state analysis; however due to limited energy supply, we operate in the transient regime
$\frac{d[LuxR]}{dt} = k_1\bigg(\frac{[E]^n}{K_E^n + [E]^n}\bigg) + k_3[A] - k_2[LuxR][AHL]- \delta_{LuxR}[LuxR]$
$\frac{d[AHL]}{dt} = k_3[A] - k_2[LuxR][AHL]- \delta_{AHL}[AHL]$
$\frac{d[A]}{dt} = -k_3[A] + k_2[LuxR][AHL]- k_4[A][P] + k_5[AP]$
$\frac{d[P]}{dt} = -k_4[A][P] + k_5[AP]$
$\frac{d[AP]}{dt} = k_4[A][P] - k_5[AP]$
$\frac{d[GFP]}{dt} = k_6[AP]\bigg(\frac{[E]^n}{K_E^n + [E]^n}\bigg) - \delta_{GFP}[GFP]$
$\frac{d[E]}{dt} = -\alpha_{1}k_1\bigg(\frac{[E]^n}{K_E^n + [E]^n}\bigg) - \alpha_{2}k_6[AP]\bigg(\frac{[E]^n}{K_E^n + [E]^n}\bigg)$
where:
[A] represents the concentration of AHL-LuxR complex
[P] represents the concentration of pLux promoters
[AP] represents the concentration of A-Promoter complex
k1, k2, k3, k4, k5, k6 are the rate constants associated with the relevant forward and backward reactions
$\alpha_i$ represents the energy consumption due to gene transcription. It is a function of gene length.
n is the positive co-operativity coefficient (Hill-coefficient)
$K_E$ is the half-saturation coefficient
The systems of equations for the two constructs differ only in the value of the parameter $k_1$: construct 1 has a non-zero $k_1$ rate constant, whereas construct 2 assumes $k_1 = 0$.
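A minimal sketch of how Model 1 could be integrated numerically, using simple forward Euler. All rate constants and initial concentrations below are placeholder values chosen for illustration only; they are not the team's fitted parameters:

```python
# Forward-Euler integration of Model 1 with illustrative (made-up) parameters.
k1, k2, k3, k4, k5, k6 = 1.0, 0.5, 0.2, 0.3, 0.1, 0.4
d_LuxR, d_AHL, d_GFP = 0.05, 0.05, 0.02

def derivatives(y):
    LuxR, AHL, A, P, AP, GFP = y
    return [
        k1 + k3 * A - k2 * LuxR * AHL - d_LuxR * LuxR,    # d[LuxR]/dt
        k3 * A - k2 * LuxR * AHL - d_AHL * AHL,           # d[AHL]/dt
        -k3 * A + k2 * LuxR * AHL - k4 * A * P + k5 * AP, # d[A]/dt
        -k4 * A * P + k5 * AP,                            # d[P]/dt
        k4 * A * P - k5 * AP,                             # d[AP]/dt
        k6 * AP - d_GFP * GFP,                            # d[GFP]/dt
    ]

y = [0.0, 1.0, 0.0, 1.0, 0.0, 0.0]  # start with free AHL and free promoter
dt, steps = 0.01, 10000             # integrate to t = 100
for _ in range(steps):
    dy = derivatives(y)
    y = [yi + dt * di for yi, di in zip(y, dy)]

print(f"GFP at t=100: {y[5]:.3f}")
```

Note that the promoter equations conserve $[P] + [AP]$, which gives a quick sanity check on any implementation.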
https://doughboxdiner.com.au/pvc/965-bending-strength-and-tensile-strength.html
# bending strength and tensile strength
critical bending load of cfrp panel with shallow surface the “composite” tensile strength f t of cfrp can be determined from both direct tensile tests and 3-p-b tests, but notched bending tests are much easier to perform. 2 similar to the common relation for direct tensile tests, i.e., fracture load = strength x area, an equivalent area a e has been defined for 3-p-b conditions so that the
difference between yield strength and tensile strength in materials engineering, yield strength and tensile strength are two properties that can be used to characterize a material. the main difference between yield strength and tensile strength is that yield strength is the minimum stress under which a material deforms permanently , whereas tensile strength describes the maximum stress that a
what is the relation between tensile strength and flexural strength in since the isotropic material fails in tensile portion; the strength is nothing but its tensile strength. hence, when tested using tensile mode on a utm, we designate it
flexural vs. tensile strength in brittle materials - sciencedirect it leads to a significant difference between the measurements made in bending and tension. the flexural strength is higher than the tensile one. indeed, for two
ldk – always have an escape plan… hard condition: type 301 stainless steel has a tensile strength of 185,000 psi minimum
bending flexural test - tec-science due to the linear stress distribution at a bending load, the flexural yield strength for steels is about 10 % to 20 % higher than the tensile yield strength. for materials with no visible yield strength in the stress curves, a 0.2 % flexural offset yield strength $\sigma_{b0.2}$ can be defined analogous to the 0.2 % offset yield strength of the
bending strength - an overview sciencedirect topics the bending strength and bending modulus for several commercial bone cements are shown in table 11.2. according to iso 5833 the minimum requirement for the bending strength is 50 mpa and for the bending modulus it is 1800 mpa. the addition of antibiotics reduces the bending strength, but the differences between antibiotic-loaded bone cement and plain bone cement are not always statistically
the importance of tensile strength in bending - youtube oct 9, 2015 the strip should be placed into that part of a prosthesis that is subject to most tensile stresses. this is because conventional dental composites
defining the tensile, compressive, shear, torsional and yield may 9, 2006 yield strength is defined as the stress at which a material changes from elastic deformation to plastic deformation. this point, known as
mechanical properties of wood - forest products laboratory bending strength, tensile strength perpendicular to grain, and hardness. these properties, grouped according to the broad forest tree categories of Seven Trust
flexural and diametral tensile strength of composite resins - scielo restorative dentistry. flexural and diametral tensile strength of composite resins. álvaro della bonai; paula benettiii; márcia borbaiii; dileta cecchettiiv.
what is flexural strength? - definition from trenchlesspedia the flexural strength of a material is defined as the maximum bending stress that can be applied to that material before it yields. the most common way of obtaining the flexural strength of a material is by employing a transverse bending test using a three-point flexural test technique.
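The three-point bend test mentioned above has a standard closed-form result for a rectangular beam: $\sigma_f = 3FL/(2bd^2)$, where $F$ is the load at failure, $L$ the support span, $b$ the specimen width, and $d$ its depth. A small sketch (the example numbers are made up for illustration):

```python
# Flexural strength from a three-point bend test, rectangular cross-section.
def flexural_strength_3pt(F: float, L: float, b: float, d: float) -> float:
    """sigma_f = 3 F L / (2 b d^2), all inputs in SI units, result in Pa."""
    return 3 * F * L / (2 * b * d ** 2)

# Example: 500 N failure load, 100 mm span, 10 mm x 4 mm cross-section.
sigma = flexural_strength_3pt(F=500, L=0.100, b=0.010, d=0.004)
print(f"{sigma / 1e6:.0f} MPa")  # prints 469 MPa
```

Because the outer fibers carry the peak stress, this value typically comes out higher than the same material's direct tensile strength, which is the discrepancy several of the snippets above discuss.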
good to know: flexural strength and why it is important 25 jan 2018 high flexural strength is essential for stress-bearing restorations, when high pressure/stress is exerted on the material or restoration. as a result,
correlation between tensile and bending behavior of frc the final objective of this study is to provide hard data needed to determine if the tensile stress-strain response of fiber reinforced cement composites can be
black titanium diamond rings, black titanium men's rings absolute titanium black titanium tension set rings, ladies titanium grades, of over 150,000 psi in tensile strength and the most suitable "modulus of elasticity" (a term in metallurgy which indicates resistance to bending) are used by absolute titanium in these jewelry
tensile and flexural strength of masonry - group hms the tensile strength of masonry can be calculated from properties of masonry units and mortar. the correlation between cajculated values and test results is.
the relationship between tensile and bending properties of non 8 feb 2020 the tensile strength applying in bending is compared with that in direct tension. this leads to an estimate of the ratio modulus of rupture to
the strongest softwoods bending strength chart (psi) it has the highest bending strength & compression strength of any softwood seen throughout north america, and its high strength-to-weight ratio makes it popular for building trusses and joists. when shopping for yellow pine, be mindful that the longleaf species is endangered while the slash, shortleaf, and loblolly pines are abundant.
questions regarding shear strength, fatigue strength bending 80% stretching 20% fatigue 100% frequent stress preferred fail scenario: flex first, then shear, rather than shear/break off right away without any flexing-----questions: 1. i see "yield strength" listed on various manufacturer websites, but what exactly is yield strength? it can't be tensile strength because that's listed as a
how to calculate the bending strength for a material - quora bending strength or flexural strength of a specimen having rectangular cross-section is calculated using 3-point flexural test or 4-point flexural test.
determination of bending strength of materials - bending strength is also known as the flexural strength, which is defined as the stress in the material just before it yields in a flexural test. when an object made up of homogenous material is bent, some particular range of stresses are experienced across its depth that has maximum tensile value outside the bend and
mechanical properties of dental materials - slideshare jul 4, 2016 since the tensile strength of brittle materials is usually well below their shear a flexural force can produce all the three types of stresses in a
correlation between flexural and indirect tensile strength of resin nov 4, 2016 flexural strength (n = 5) and indirect tensile strength (n = 5) of 7 resin composite cements: relyx unicem 2 automix (rxu), panavia sa (psa),
plastic fabrication manufacturers industry information for additional fabrication processes. punching, stamping, engraving, etching, bending, cutting, drilling, tapping and assembly are all possible post-formation processes that are offered by plastic fabricators. different plastic parts require different kinds of attention, and plastic fabricators offer a wide range of processes to make whatever kinds of changes to a plastic part that may be necessary. important considerations for plastic fabrications are size parameters, working temperature ranges, tensile strength, temper hardness or softness and color requirements. plastic
bend strength versus tensile strength of fiber-reinforced - wiley: will be used to model this effect. 1. origin of difference between bend and tensile strength. consider a typical tensile stress-strain curve as deduced.
what is the relationship between tensile stress and stress is a quantity that is measured at a point along a plane passing through that point and having a specific orientation. that means, changing any one of those point, plane or its orientation changes the value of stress. stress is a vector an
bendingstrength designerdata the bending strength or flexural strength of a material is defined as its ability to resist deformation under load. during a bending test described in astm d790 the maximum achieved flexural stress value is noted as flexural strength. for materials that deform significantly but do not break, the load at yield, typically measured at 5% deformation/strain of the outer surface, is reported as the
compressive vs. flexural strength concrete construction magazine q. is there a good way to correlate core compressive strength and flexural that tensile strength is the basis for its ability to resist bending, or its flexural
bending strength - an overview sciencedirect topics bending strength. flexural strength is defined as the maximum stress that a material exhibits at failure due to a three or four-points flexural load (saika and de brito, 2012). from: use of recycled plastics in eco-efficient concrete, 2019. related terms: compressive strength; elastic moduli; glass ceramics; glass fiber; tensile strength
what are the differences between flexural and tensile strength sep 30, 2016 in context of reinforced concrete structures, flexural strength is the capacity of the concrete usually beams to resist deformation under bending moment.
bending flexural test - tec-science due to the linear stress distribution at a bending load, the flexural yield strength for steels is about 10 % to 20 % higher than the tensile yield strength! for materials with no visible yield strengths in the stress-curves, a 0.2 % flexural offset yield strength $$\sigma_{by0.2}$$ can be defined analog to the 0.2% offset yield strength of the
science laboratory equipment,chemistry laboratory equipment,hospital laboratory equipment,india universal tester digital universal testing machine computerized universal tensile tester standard universal testing machine standard universal testing
what is the relation between tensile strength and flexural i am doing an optimization problem where bending strength and tensile strength are two outputs. so as per the problem definition should i compare the bending strength with ultimate tensile
bend strength versus tensile strength of fiber-reinforced - wiley tensile strength. in particular, composites failing gracefully with a gradual decay in stress tend to have comparatively higher strengths in bending. a method of
tensile strength testing national technical systems - nts common tensile test results include elastic limit, tensile strength, yield point, yield bend test for ductility provides a simple way to evaluate the quality of
flexural strength - wikipedia flexural versus tensile strength. the flexural strength would be the same as the tensile strength if the material were homogeneous. in fact, most materials have small or large defects in them which act to concentrate the stresses locally, effectively causing a localized weakness.
metal stitching & thread repair inserts. - turlock , ca - lock-n-stitch, inc only cracks when it is stressed beyond its tensile strength. this includes all heat related cracks, freeze cracks, impact and bending loads. cast iron does not weaken over time.
why is tensile strength smaller than flexural strength from: flexural strength - wikipedia flexural strength, also known as modulus of rupture, or bend strength, or transverse rupture strength is a material property, defined as the stress in a material just before it yields in a flexure test. flexural
flexural strength - wikipedia flexural strength, also known as modulus of rupture, or bend strength, or transverse rupture strength is a material property, defined as the stress in a material just before it yields in a flexure test. the transverse bending test is most frequently employed, in which a specimen having either a circular or rectangular cross-section is bent until fracture or yielding using a three point
estimating the tensile strength of ultrahigh-performance fiber 8 to investigate the tensile behavior of uhpfrc, while su et al. 9 and mertol et al. 0 conducted a series of bending beam tests to study the flexural behavior of
search steel c or less. tensile requirements class of iron tensile strength, min, mpa no. 20 20 no. 25 25 hardness, knoop 200 200 hardness, vickers 188 188 tensile strength, ultimate < = 400 mpa < = 58000 psi tensile strength, yield < = 210 mpa < = 30500 psi elongation at break < = 0.65 mechanical properties of astm a335 p11 tensile strength, min, mpa 415 yield strength, min, mpa 205
how to calculate bending strength from yield strength - quora the maximum stresses occur on a small area of the bending section (theoretically only on the edge), while the entire cross section of the sample is under the maximum stresses during the tension test. therefore, it is more likely to find a "weak
bending strength - an overview sciencedirect topics the bending strength and bending modulus for several commercial bone since the tensile strength of most brittle ceramics is approximately one-tenth that of
tensile strength vs flexural strength the tensile strength is lower than flexural bending strength. these are measurements from experiments. i understand this means the material is not homogeneous. i have to use this polypropylene material in the geometry attached. where there is a force load on one of the corners. my questions here are: 1.
tie-33 bending strength of optical glass and zerodur the quantity bending strength means the maximum tensile stress a glass item with well-defined geometry and surface condition can endure in the application environment during the intended lifetime with acceptably low failure probability. tensile stress may cause a micro crack to grow by opening the
the strongest softwoods bending strength chart psi the bending strength of wood is measured by applying a force perpendicular to the wood's grain. in this configuration, the strength of a wood board is maximized to incur a force. the reason for this is that wood fibers run through the grain in elongated strands. thus, a force incurred by the end-grain can result in the wood fibers separating
what are the differences between flexural and tensile in context of reinforced concrete structures, flexural strength is the capacity of the concrete usually beams to resist deformation under bending moment. it is sometimes called bending strength. tensile strength is the capacity of concrete to re
what is the difference between tensile modulus and maybe this is an odd question, but so far i don't understand why we have two standard methods for testing bending/flexural strength of fine ceramic, 3-points- and 4 points-bending tests though
- bending of high strength steel table 1 gives typical tensile strength values to calculate bend force. p = bend force, tons (metric); t = plate thickness, mm; w = die width, mm (figure 1); b = bend length, mm; r m = tensile strength, mpa (table 1); r d = die entry radius, mm; r p = punch radius, mm. the ssab bending formula is verified by tests for 90 bends, see figure 5. example 1
should bending strength be compared with yield strength or ultimate so as per the problem definition should i compare the bending strength with ultimate tensile strength or with yield strength. what should be the optimization criteria
ball-nogues studio the kunstuniversität linz, austria; the site for this tensile installation was in a building erected by the use as rags, we created a delicately balanced tensile network over the buildings' main staircase. an "egg"
motorsport connections in excess of 2500 degrees and have a tensile strength over 100 lbs. these locking ties will provide in excess of 2500 degrees strong – 100 lb tensile strength simple and safe with no sharp edges thermotec
strength of materials basics and equations mechanics of materials strength of materials, also called mechanics of materials, is a subject which deals with the the applied loads may be axial tensile or compressive , or shear . when bending a piece of metal, one surface of the material stretches in tension
bending strength - an overview sciencedirect topics the bending strength and fracture toughness of glass-ceramic a-w is shown in fig. 13.5, which compares the values of the parent glass g, glass-ceramic a, precipitating only apatite, sintered hydroxyapatite and human cortical bone kokubo et al., 1985. the bending strength of 220 mpa of glass-ceramic a-w is higher than that of human cortical bone. comparing the bending strength of glass-ceramic
should bending strength be compared with yield strength or i am doing an optimization problem where bending strength and tensile strength are two outputs. so as per the problem definition should i compare the bending strength with ultimate tensile
everything you need to know about concrete strength cor-tuf flexural strength is used as another indirect measure of tensile strength. it is defined as a measure of an unreinforced concrete slab or beam to resist failure in bending. in other words, it is the ability of the concrete to resist bending.
how to calculate flexural strength - sciencing flexural strength really tells you the maximum amount of stress the material can take (so you might see references to "flexural stress" also), and it's quoted as a
wood strengths - woodworkweb maximum crushing strength is the maximum stress sustained by a board when pressure is applied parallel to the grain. impact bending involves dropping a hammer of a given weight upon a board from successively greater heights until complete rupture occurs. the height of the drop that causes failure provides a comparative measure of how well the
how to calculate flexural strength sciencing flexural strength or the modulus of rupture is the maximum amount of stress a material can withstand without breaking. calculate flexural strength by applying the standard formula using experimental data for the maximum force applied, the length of the sample, the width of the sample and its depth.
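The standard formula referred to above, for a rectangular specimen in a 3-point bend test, is sigma_f = 3FL/(2bd^2). A minimal sketch follows; the function and variable names are illustrative, not taken from any of the cited pages:

```python
def flexural_strength_3pt(force_n, span_mm, width_mm, depth_mm):
    """Flexural strength (modulus of rupture) of a rectangular specimen
    in a 3-point bend test, in MPa: sigma_f = 3*F*L / (2*b*d**2)."""
    return (3.0 * force_n * span_mm) / (2.0 * width_mm * depth_mm ** 2)

# e.g. a 10 mm x 5 mm bar on a 100 mm span failing at 1000 N
sigma_f = flexural_strength_3pt(1000.0, 100.0, 10.0, 5.0)  # 600.0 MPa
```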
the relationship between tensile and flexural strength of weibull theory predicts a higher strength in bending than in tension. however it assumes that failure initiates from a critical defect, whereas many unidirectional
bending tensile strength - leo: translation in english learn the translation for 'bending tensile strength' in leo's english ⇔ german dictionary. with inflection tables for the various cases and tenses, pronunciation and relevant discussions. free vocabulary trainer
the ratio of flexural strength to uniaxial tensile strength in - core email: masoud.yekanifard asu.edu. the ratio of flexural strength to uniaxial tensile strength in bulk. epoxy resin polymeric materials. m. yekani fard a,*.
https://polytope.miraheze.org/wiki/Petrial_square_tiling
# Petrial square tiling
- Rank: 3
- Type: Regular
- Space: Euclidean

Notation:
- Schläfli symbol: $\{4,4\}^\pi$, $\{\infty,4\}_4$

Elements:
- Faces: $N$ zigzags
- Edges: $N\times 2M$
- Vertices: $N\times M$
- Vertex figure: Square, edge length 2

Related polytopes:
- Army: Squat
- Regiment: Squat
- Petrie dual: Square tiling

Abstract properties:
- Flag count: $N\times 8M$
- Schläfli type: {∞,4}

Topological properties:
- Orientable: Yes

Properties:
- Symmetry: R3
- Convex: No
The petrial square tiling is one of the three regular skew tilings of the Euclidean plane. 4 zigzags meet at each vertex. The petrial square tiling is the Petrie dual of the square tiling, so it is in the same regiment.
## Vertex coordinates
Coordinates for the vertices of a petrial square tiling of edge length 1 are given by
• $(i,j)$,
where $i$ and $j$ range over the integers.
## Related polyhedra
The rectification of the petrial square tiling is the square-hemiapeirogonal tiling, which is a uniform tiling.
https://www.coursehero.com/file/p5cnoji/Suppose-J-1-x-1-y-1-is-any-interval-of-the-left-half-and-J-2-x-2-y-2-is-any/
Suppose $J_1 = [x_1, y_1]$ is any interval of the left half and $J_2 = [x_2, y_2]$ is any interval of the right half. Could this be a larger overlap than anything found by F? We'll prove that it cannot. Since $J_2$ is an interval of the right half, its left endpoint is at least $x$, i.e., $x_2 \geq x$. Therefore the intersection $J_1 \cap J_2$ is contained in $[x, \infty)$, so $J_1 \cap J_2 = [x, y_1] \cap J_2$. Let $J = [x_0, y_0]$ be the interval found in step 8. Similarly, we'll have $J \cap J_2 = [x, y_0] \cap J_2$. Now due to the way $J$ was selected in step 8, the right endpoint of $J$ is at least as large as the right endpoint of $J_1$, i.e., $y_0 \geq y_1$. This means that $[x, y_1] \subseteq [x, y_0]$. It follows that $[x, y_1] \cap J_2 \subseteq [x, y_0] \cap J_2$, i.e., $J_1 \cap J_2 \subseteq J \cap J_2$, i.e., $\mathrm{overlap}(J_1, J_2) \leq \mathrm{overlap}(J, J_2)$. Now given the way the loop in steps 9–10 works, we see that $O \geq \mathrm{overlap}(J, J_2)$, so $O \geq \mathrm{overlap}(J_1, J_2)$. Therefore $J_1, J_2$ cannot have higher overlap than what was returned by F. This means that F correctly finds the largest possible overlap between any pair of intervals.
Alternate proof of the last paragraph: Suppose $J_1$ is an interval of the left half and $J_2$ is an interval of the right half. Then the left endpoint of $J_2$ is at least $x$. Therefore their intersection lies in $[x, \infty)$. The left endpoint of $J_1$ is at most $x$, therefore it does not affect the size of the overlap. In other words, we can replace the left endpoint of $J_1$ with $x$ and nothing changes. Now if we hypothetically assume all left endpoints of the left intervals are $x$, it is obvious that the best we can do is find the one that has the highest right endpoint (regardless of the choice of $J_2$ we end up with the highest overlap). Line 9 finds exactly this interval, and then we check its overlap with all intervals of the right half. Therefore we must have considered at least one of the pairs from the left half and the right half that have the highest overlap.
Comment about pseudocode: Notice that we didn't bother spelling out how you find the interval whose right endpoint is maximal (step 8), how to compute the overlap of two intervals (step 10), or other low-level implementation details. Try to follow this practice in your own solutions, too: don't drown the reader in low-level details, write your pseudocode at a high level of abstraction so that it is as easy to read and understand as possible.
CS 170, Fall 2014, Sol 2
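The overlap computation (step 10) and the cross-half comparison the proof analyzes (steps 8–10) could be sketched as follows; this is an illustrative reconstruction, not the course's official solution code:

```python
def overlap(a, b):
    """Length of the intersection of closed intervals a = [a0, a1], b = [b0, b1]."""
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))

def cross_overlap(left, right):
    """Best overlap between any left-half and any right-half interval.
    As the proof argues, only the left interval J with the largest right
    endpoint can matter (step 8), so we compare J against every
    right-half interval (the loop in steps 9-10)."""
    j = max(left, key=lambda iv: iv[1])        # step 8
    return max(overlap(j, r) for r in right)   # steps 9-10
```

A full divide-and-conquer solution would also recurse into each half and return the best of the three candidate overlaps.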
https://math.stackexchange.com/questions/2283439/link-between-the-maps-and-the-lie-algebra-example-with-so2
# Link between the maps and the Lie algebra : example with $SO(2)$
I would like to clarify something and I will take the example of the $SO(2)$ Lie group.
A Lie group is a manifold with a group structure. Thus we can define maps on it to be able to move on the manifold.
In $SO(2)$, I can write the matrices as :
$$\begin{pmatrix} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{pmatrix}$$
As I understand things, I see the $\theta$ as a map on the manifold.
But when we want to find the Lie algebra of the group, we say that it is the tangent space to the identity.
Thus, we have to take a curve from $\mathbb{R}$ to $SO(2)$ that passes through the identity, and we have to differentiate it at the identity to find the Lie algebra.
From this viewpoint, we could say that $\theta$ is in fact the parametrisation of my curve.
Thus an element of the Lie algebra is the derivative of the matrix with respect to $\theta$, and I find:
$$\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$$
Finally : is $\theta$ a map on my manifold or is it a curve on it ?
• We have $SO(2) \cong S^1$ so $\theta$ would be an argument. This is not very well defined as $\theta = \theta + 2\pi$. What you found is a surjection from $\Bbb R \to S^1$, $t \mapsto e^{it}$. If you restrict it this is also a curve passing by zero. Also notice that $\mathfrak{so(2)} \cong \Bbb R$ since $S^1$ is one-dimensional, so this is not a very interesting Lie algebra. – user171326 May 16 '17 at 13:21
The map $\epsilon : \Bbb R \to SO(2)$ defined by the first display equation, namely, $$\epsilon : \theta \mapsto \pmatrix{\cos \theta & -\sin \theta \\ \sin \theta & \cos \theta}$$ is a perfectly good parameterized curve on $SO(2)$. (In fact, it is a homomorphism of Lie groups, since $\epsilon(\alpha + \beta) = \epsilon(\alpha) \epsilon(\beta)$.)
This map is surjective, so it defines a parameterization of $SO(2)$. On the other hand, the map is not injective, so it is not bijective, that is, without making an additional choice, we cannot immediately regard $\theta$ as a map on (some subset of) $SO(2)$.
We can, however, choose an open interval $I \subset \Bbb R$ on which $\epsilon$ is injective, in which case $\epsilon\vert_I^{-1}$ is a homeomorphism $\epsilon(I) \to I$; in particular, this inverse is a smooth local chart on $SO(2)$ and hence defines a (preferred) coordinate on $\epsilon(I) \subset SO(2)$.
So, we may as well (somewhat abusively) call this coordinate $\theta$, or equivalently, just use $\theta$ to denote the map $\epsilon_I^{-1}$. In this sense, $\theta$ is a map (in particular satisfying $\epsilon \circ \theta = \textrm{id}_{\epsilon(I)}$), but this declaration depends on our (noncanonical) choice of $I$. Note, by the way, that there is no choice of $I$ that produces a global chart, that is, for which $\epsilon(I) = SO(2)$.
As N.H. hints in his helpful comment, there is another way to view the charts $\varepsilon\vert_I^{-1}$: We can identify $SO(2)$ with $\Bbb S^1 \subset \Bbb C$ via the identification $$\varepsilon(\theta) \leftrightarrow \exp (i\theta) .$$ Then, unwinding definitions shows that $\varepsilon\vert_I^{-1}$ coincides via this identification with the restriction $\arg\!\vert_{\epsilon(I)}$ of some choice of branch of the argument function.
• Thank you. To summarize : a parametrisation is not necessarily bijective but it is surjective (it must cover all of the manifold). A curve is just a map from $I \subset \mathbb{R}$ in the manifold. A map is a map from $U \subset \mathbb{R}^n$ to the manifold and it is a diffeomorphism so it must be bijective. In my example, if I restrict $\theta$ to $[0; 2 \pi[$, my function $f(\theta)$ will be at the same time a curve and a map of my manifold. But if I don't put any restriction on $\theta$, it will just be a curve as it will not be bijective. – StarBucK May 16 '17 at 13:45
• A parametrisation will always be a curve if I work on an 1 dimension manifold. Do you agree with all that I said ? – StarBucK May 16 '17 at 13:46
• You're welcome. Note that the requirement that a parameterization be surjective is not universal. Sometimes people will refer to nonglobal parameterizations as local parameterizations, sometimes just as parameterizations. For me, maps are always continuous, and certainly curves should be. The sentence "A map is a map from..." is both circular and confusing. I would refer to a homeomorphism from an open set $U \subset \Bbb R^n$ to an open subset of manifold also as a parameterization of the manifold, and its inverse as a coordinate chart. – Travis May 16 '17 at 13:51
• Also, when working with these concepts, often it is best to restrict to working with open intervals and open sets more generally. One can deal with more general sets, but this often involves technicalities that are a consequence of the fact that boundary points behave qualitatively differently than interior points. – Travis May 16 '17 at 13:53
• Finally, as both of the answers point out, the Lie group $SO(2)$ is highly nonrepresentative here, as it is itself $1$-dimensional, and (being connected) is hence parameterizable by regular curve. So it would be useful to supplement this example with one involving a higher-dimensional Lie group. – Travis May 16 '17 at 13:55
First, I would like to emphasize that a Lie group is a smooth manifold equipped with a group structure whose operations (multiplication, inverse) are smooth. It is not enough to ensure that $G$ is a manifold and a group to deduce that $G$ is a Lie group.
Regarding your question, $\theta$ is not a map. However, the following map is a parametrization of $\textrm{SO}(2)$ : $$\theta\mapsto\begin{pmatrix}\cos(\theta)&-\sin(\theta)\\\sin(\theta)&\cos(\theta)\end{pmatrix}.$$ As you deduced it, $\begin{pmatrix}0&-1\\1&0\end{pmatrix}$ is in $\mathfrak{so}(2)$. Furthermore, according to the above parametrization of $\textrm{SO}(2)$, the Lie algebra $\mathfrak{so}(2)$ has dimension $1$ over $\mathbb{R}$. Whence, $\mathfrak{so}(2)$ is the set of $2\times 2$ skew-symmetric matrices.
First of all: it is incorrect to say that $\theta$ is a map/parameterization/curve. The map is the function $$f(\theta) = \pmatrix{\cos \theta & -\sin \theta\\ \sin \theta & \cos \theta}$$ It is $f$ which is the map, not $\theta$. $\theta$ is just a variable, which is an argument of the function $f$. If we consider this function over the domain $[0, 2 \pi]$, then we have a parameterization of $SO(2)$.
Second: the Lie algebra is the vector space of all derivatives of curves through the identity. Note that $$\frac d{d \theta}f( \alpha \theta) = \alpha \pmatrix{0 & -1 \\1 & 0}$$ and we see that the Lie algebra is actually a $1$-dimensional space. Note that there are infinitely many paths through the identity, corresponding to varying speeds and directions of traversal.
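As a quick numerical sanity check (my addition, not part of the original answer), a central finite difference of this curve at the identity recovers the generator of $\mathfrak{so}(2)$:

```python
import numpy as np

def f(theta):
    """The curve f(theta) through the identity of SO(2)."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

h = 1e-6
# central finite difference of f at theta = 0 (the identity)
generator = (f(h) - f(-h)) / (2 * h)   # approximately [[0, -1], [1, 0]]
```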
$SO(2)$ happens to be a connected one-dimensional Lie group. As such, your parameterization is both a curve on the Lie group and a parameterization of the entire thing.
https://ai.stackexchange.com/tags/training/new
# Tag Info
I'd like to add some details to Neil Slater's answer. In order to generate data, we want to find some unknown distribution. Since we do not know anything about the real distribution, we can approximate it using a GAN. It was shown that optimizing the loss function of the original GAN is equivalent to minimizing the Jensen-Shannon divergence between the real ...
SVM complexity is $O(\max(n,d)\min(n,d)^2)$ according to Chapelle, Olivier. "Training a support vector machine in the primal." Neural Computation 19.5 (2007): 1155-1178. $n$ is the number of instances and $d$ is the number of dimensions. I'm assuming that you have more instances than dimensions giving a complexity of $O(nd^2)$. Hopefully this ...
Why I have to set to real these fake images and what fake images are these? You set them to the "real" label for the discriminator when training the generator, because that is the goal of the generator: to produce an output of 1 (probability of being a real image) when tested. Usually you will generate a new batch of generated images for this step in ...
Yes, this method of training a model is commonly known as online learning, and specific learning algorithms have been designed for this purpose, such as Stochastic Gradient Descent (SGD). As opposed to batch gradient descent, which computes gradients over the entire training set at each step, the SGD algorithm computes gradients for individual samples and ...
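A minimal sketch of such per-sample updates, here for 1-D linear regression (all names are illustrative; this is online SGD in its simplest form):

```python
import random

def sgd_linear(samples, lr=0.05, epochs=300, seed=0):
    """Per-sample (online) SGD for 1-D linear regression y ~ w*x + b:
    parameters are updated after every individual example, so the model
    can keep learning as new data arrives."""
    random.seed(seed)
    w, b = 0.0, 0.0
    for _ in range(epochs):
        random.shuffle(samples)
        for x, y in samples:
            err = (w * x + b) - y
            w -= lr * err * x   # gradient of 0.5*err**2 w.r.t. w
            b -= lr * err       # gradient of 0.5*err**2 w.r.t. b
    return w, b
```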
I have not implemented the backprop of a bi-directional RNN from scratch, so I can't be sure my answer is correct, but I hope it helps. You can see how bi-directional RNNs work from this video from Andrew Ng. I got the image below from that video: For more clarity: So if you know how to backprop through a simple RNN, you should be able to do so for bi-...
You can calculate the memory requirement analytically, but it's still not going to beat a physical test in practice, as there are so many unknown variables in the system which can take up GPU memory. Maybe tensorflow will decide to store the gradients, then you have to take into account the memory usage of it also. The way I do it is by setting the GPU memory ...
You might be able to glean what you want from Chapter 13 of Sutton & Barto's Reinforcement Learning: An Introduction, which deals with policy gradient algorithms, and includes pseudocode for a variety of agents based on linear approximation using softmax regression. From your description, you appear to be using - or should consider - softmax regression ...
1
The neural network will learn what we teach it. For example, with that image only, after training finishes your model will struggle to recognize humans with dark skin, glasses, big eyes, etc., i.e. the features that the two annotated targets don't have. If your data is big enough and contains all the features of human faces, the result should be good. If not, I recommend a ...
1
If you have an erratic loss landscape, it can lead to an unstable learning curve. Thus, it's always better to choose a simpler function which creates a simple landscape. Sometimes even due to uneven dataset distribution, we can observe those jumps/irregularities in the training curve. And yes, those jumps do mean it might've found something significant in ...
2
There is an approach used in machine learning, called simulated annealing, which varies the rate: starting from a large rate, it is slowly reduced over time. The general idea is that the initial larger rate will cover a broader range, while the increasingly lower rate then produces a less 'erratic' climb towards a maximum. If you only use a low rate, you risk ...
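As a small sketch of the idea (my own illustration, not part of the original answer), an exponentially decaying, annealing-style rate schedule looks like this:

```python
import math

def annealed_rate(step, rate0=1.0, decay=0.05):
    # start with a large rate and cool it down over time,
    # in the spirit of simulated-annealing schedules
    return rate0 * math.exp(-decay * step)

rates = [annealed_rate(t) for t in range(100)]
```

Early steps explore broadly with a large rate; late steps refine with a small one.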
0
Check out Figure 6 in this paper: PyTorch Distributed: Experiences on Accelerating Data Parallel Training It breaks down the latency of the forward pass, the backward pass, the communication step, and the optimization step for running both ResNet50 and BERT on a NVIDIA Tesla V100 GPUs. From measuring the pixels in the figure, I estimated the times for the ...
-1
Normally you only have two classes along with a threshold probability. It's how systems like YOLO work.
https://www.physicsforums.com/threads/prove-the-theorem-for-the-matrix.255184/
|
# Prove the theorem for the matrix
1. Sep 10, 2008
### frostshoxx
1. The problem statement, all variables and given/known data
Prove that every square real matrix X can be written in a unique way as the sum of a symmetric matrix A and a skew-symmetric matrix B.
2. Relevant equations
X = A + B
A = $$\frac{X+X^{T}}{2}$$
B = $$\frac{X-X^{T}}{2}$$
X = $$\frac{X+X^{T}}{2}$$ + $$\frac{X-X^{T}}{2}$$
3. The attempt at a solution
So I tried to compute $$\frac{X+X^{T}}{2}$$ + $$\frac{X-X^{T}}{2}$$ and it indeed gives X. However, how can I know that A is symmetric and B is skew-symmetric? Any idea?
2. Sep 10, 2008
### Focus
Take the transpose of A and B. You also need to prove uniqueness which I would do by contradiction.
3. Sep 10, 2008
### frostshoxx
Can this be done symbolically? Also, what do you mean by contradiction? could you give some examples?
4. Sep 10, 2008
### Focus
Yes, why not? If you take the transpose of A, you will get A again. And B is skew-symmetric because of the negative sign.
Example of uniqueness. Let e be a number (in reals) such that$$a \cdot a^{-1}=e$$ and $$a\cdot e=a \quad \forall a \in \mathbb{R}$$. e is unique.
Proof:
Fix a in reals and assume e is not unique. You have $$a\cdot e=a$$ and $$a\cdot e'=a$$ for $$e\neq e'$$ (same for inverses). Now you have
$$a\cdot e \cdot e'=a \cdot e'=a$$
taking inverses gives the result that $$e \cdot e'=e$$ and $$e \cdot e'=e'$$
thus $$e=e'$$ which contradicts the assumption, thus e must be unique.
Hope that helps
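To see the claims concretely, here is a small plain-Python sketch (my own addition) that builds A and B from a square matrix X and lets you check symmetry, skew-symmetry, and that they sum back to X:

```python
def transpose(X):
    return [list(row) for row in zip(*X)]

def decompose(X):
    # A = (X + X^T) / 2 is symmetric, B = (X - X^T) / 2 is skew-symmetric
    Xt = transpose(X)
    n = len(X)
    A = [[(X[i][j] + Xt[i][j]) / 2 for j in range(n)] for i in range(n)]
    B = [[(X[i][j] - Xt[i][j]) / 2 for j in range(n)] for i in range(n)]
    return A, B

X = [[1.0, 2.0], [3.0, 4.0]]
A, B = decompose(X)
```

Taking transposes confirms the claim: A equals its transpose, B equals the negative of its transpose, and A + B recovers X.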
https://leanprover-community.github.io/archive/stream/267928-condensed-mathematics/topic/by.20exactI.html
|
## Stream: condensed mathematics
### Topic: by exactI
#### Floris van Doorn (Feb 06 2021 at 07:18):
I like that we have some notation for by exactI, but can we change it from a zero-width space to something more reasonable? We could use an English word that hasn't much meaning by itself, like now, so, thus? (though none of these quite sound right in the places we want)
#### Johan Commelin (Feb 06 2021 at 07:20):
If you can think of a fitting english word that is more reasonable than by exactI, I'm all for it.
#### Johan Commelin (Feb 06 2021 at 07:21):
Also, I think it's fine to just write by exactI almost everywhere.
I just wanted to avoid it in the main statements, because there it's very distracting for outsiders who are reading their first Lean code.
#### Johan Commelin (Feb 06 2021 at 08:16):
after the , of an \exists we could write such_that... but that doesn't fit very well after the , of \forall.
Of course we could have two new symbols.
#### Mario Carneiro (Feb 06 2021 at 08:24):
You could also write it such that you don't need by exactI...
Last updated: May 09 2021 at 23:10 UTC
http://rtx.civil.sharif.edu/RIdriss2013IntensityModel.html
|
# Idriss 2013 Intensity Model
## Class Name
• RIdriss2013IntensityModel
## Location in Objects Pane
• Models > Model > Hazard > Earthquake > Intensity > Idriss 2013 Intensity
## Model Description
### Model Form
• This model produces the spectral acceleration or peak ground acceleration at specified locations for given magnitude and hypocenter location of several earthquake sources as input, based on the Idriss (2014) attenuation relation.
• Depth to the top of rupture is calculated based on formulas used in Baker's implementation of Idriss (2014) attenuation relation, if it is unknown.
• In case of unknown parameters, you can enter the unknown-identifier (e.g. 999 in some fields) once for all inputs in each field, assuming all inputs of that field are unknown.
• For more information, see Idriss (2014) and Baker research group implementation of Idriss (2014).
## Properties
### Object Name
• Name of the object in Rt
• Allowable characters are upper-case and lower-case letters, numbers, and underscore (“_”).
• The name is unique and case-sensitive.
### Display Output
• Determines whether the model is allowed to print messages to the Output Pane.
### Magnitude List
• Magnitudes of various earthquake sources
### Depth To Top Of Rupture List
• Depth to top of rupture of various earthquake sources (Use 999 in case of unknown)
### Rupture Distance List
• Closest distance from rupture plane of various earthquake sources (Use 999 in case of unknown. In this case, it is calculated based on Pythagorean theorem using the depth of top edge of rupture and hypocenter location distance)
### Hypocentre Location List
• Hypocenter locations of earthquake sources, which automatically will yield the radius $${R}$$ to the various output locations
### Epsilon Uncertainty List
• Model error, typically a standard normal random variable; this model does not distinguish between inter- and intra-event model residuals
### Fault Types
• Fault mechanism that can either be Unspecified, Normal-slip, Strike-slip, or Reverse-slip.
### Response Type
• Type of the response, which can be either $${S_a}$$, $${PGA}$$, or $${PGV}$$.
### Period List
• List of the natural periods at which the intensity is evaluated
### Structure Location List
• List of the locations where the intensity will be computed (the output will give as many intensity values as the locations provided here)
### Shear Wave Velocity List
• List of the shear wave velocities at the specified locations
## Output
• Earthquake intensities (as many as the locations provided in the input)
• The output is an automatically generated generic response object, which takes the object name of the model plus “Response”.
http://enhtech.com/standard-error/fix-sampling-distribution-standard-error.php
|
# Sampling Distribution Standard Error
A sampling distribution describes how a statistic, such as the sample mean, varies from one random sample to the next. If a population has mean μ and standard deviation σ, then the mean of the sampling distribution of the sample mean equals the population mean μ, and its standard deviation, called the standard error of the mean, is σ/√n, where n is the number of observations in the sample. Because of random variation in sampling, the mean calculated from any particular sample will usually be somewhat less than or greater than the population mean, but by the central limit theorem the sampling distribution of the mean approaches a normal distribution as n grows.

When the population standard deviation σ is unknown, the sample standard deviation is used in its place, and confidence statements are based on Student's t-distribution rather than the normal distribution. When sampling without replacement from a finite population of size N, a finite population correction (FPC) applies; its effect is that the standard error becomes zero when the sample size n equals the population size N.

For example, for a population that is normally distributed with a mean of 80 and a standard deviation of 2.82, samples of size n = 16 have sample means that are normally distributed with mean 80 and standard error 2.82/√16 ≈ 0.705.
https://stats.stackexchange.com/questions/181172/how-to-learn-a-complete-order-function-from-a-set-of-partial-order-relations
|
# How to learn a “complete order function” from a set of partial order relations?
Input: vector data x_1,...,x_n and a partial ranking function e..g r_p(x_1) > r_p(x_2).
Output: a "complete order function" r_c that projects the data onto 1D (e.g. r_c(x_1)=0.5, r_c(x_2)=0.3)?
Question: what's the standard model/algorithm to use? (e.g. parametric family of functions)
https://cs.stackexchange.com/questions/140319/dag-when-adding-an-edge-that-would-normally-result-in-a-cycle-is-there-an-algo
|
# DAG: When adding an edge that would normally result in a cycle, is there an algorithm to split the graph instead?
Summary
I am using a DAG to compress a tree structure with many repeated nodes (only very seldom do the repeated nodes not also have repeated outgoing edges).
Normally, when attempting to add an edge to a DAG that would cause a cycle, you instead detect the situation and abort. I'm seeking an algorithm that will instead attempt to add the new edge anyway, modifying the graph such that it still represents the same tree, potentially partially decompressing parts of the graph in order to avoid the cycle.
Does an efficient algorithm to do this already exist? I have been unable to find one in a perfunctory literature search, though I am not aware of the proper terminology for this operation, if one exists.
Simple Example
In this example, we are adding the edge F->C, which creates a cycle. To break this cycle, we can split the E node into one version of E with C as a parent and one version of E with D as a parent (notated E'), similarly with F and C.
Slightly more complicated example
In this example we have several more cases. The offending edge from F to D is highlighted. But there are several paths through the graph, some involving nodes upstream from D, some involving nodes downstream from D, and potentially some not involving D at all.
This graph is a compressed version of this tree:
where the highlighted copies of F are where placing D as a child is permissible. As you can see, these three nodes correspond to the three ways of reaching F' in the previous figure.
Motivating use case
In the game of Go, there is, in certain rulesets, the idea of superko, which forbids repetition of a previous board state. Such positions are usually very rare in practice. In order to efficiently search the game tree, we would want to take into account transpositions of sets of moves which leave the board the same, which is why a graph structure is useful, but in situations where a superko is possible the history of the position is also important, not just the current situation. So while F->C would be a legal move normally, it is only a legal move in situations where C is not part of the node's history, i.e. we went through node D instead of node C. So we would need to consider the cases separately.
Known Caveats
I am aware of the DAG cycle detection algorithm, and it seems like it might be easy to adapt this algorithm to perform this task, but I cannot seem to make it work. I am also aware that it is not always possible to split a graph in this manner to remove the cycle.
• I have made an edit to the post to make the language slightly more clear, if not the meaning. A large part of the problem is that I do not have the vocabulary to clearly describe the behavior expected. If I did, then I probably would have found something in the literature search. Hopefully, the examples and the use case will make it clear approximately what I want. Thanks for the help! May 14 at 16:59
• The problem is not well-specified, so not yet in a state where it can be solved. In particular, "without substantially changing the properties of the graph" is vague and ambiguous. Obviously any change to the graph will change some properties, and leave others unchanged. The first step is to be able to specify precisely the requirements. If you can't specify that, then there's no way to come up with an algorithm that meets those unstated requirements - and it's premature to ask here for such an algorithm.
– D.W.
May 14 at 19:12
• Perhaps you are asking how to duplicate nodes, so that after the change, if there is a path $V \leadsto W$ in the original graph, then there some duplicate $V_i$ of $V$ and some duplicate $W_j$ of $W$ so that there is a path from $V_i \leadsto W_j$ (possibly $V_i=V$ and/or $W_j=W$) in the modified graph, and vice versa. Is that the requirement?
– D.W.
May 14 at 19:15
• Yes, I believe that is what I'm trying to convey. If we consider the graph to be a tree where all the duplicate nodes are combined, then this algorithm will "uncombine" as few nodes as possible to make the resulting graph still representative of the tree but without the cycle. May 14 at 20:04
• I used the free version of lucidchart. May 18 at 15:15
I think one possibility is do the naive thing, and whenever you find a backedge $$u \to v$$ (i.e., where $$v$$ is an ancestor of $$u$$ in the search tree), duplicate the subtree rooted at $$v$$, one duplicate per edge out of $$v$$.
Given an edge $$u \to v$$, there are various ways to test whether it is a backedge. The naive way is to traverse the path from $$u$$ to the root (by following parent pointers) and see if you ever visit $$v$$. A fancy way is to use an algorithm for least-common-ancestors on a dynamic tree.
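The naive ancestor-walk test described above can be sketched as follows (my own illustration; `parent` maps each node to its parent in the search tree):

```python
def is_backedge(parent, u, v):
    # u -> v is a backedge iff v is an ancestor of u in the search tree:
    # walk parent pointers from u up to the root, looking for v
    node = u
    while node is not None:
        if node == v:
            return True
        node = parent.get(node)
    return False

# search tree: A is the root, with path A -> B -> C
parent = {"A": None, "B": "A", "C": "B"}
```

Adding C -> A would close a cycle, while C -> D (with D outside the path) would not.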
• Thank you, this helped me construct my answer. May 18 at 15:10
Any graph of this type can be represented using this simplified model (left):
where M1, M2, and M3 are metanodes that can represent 0 or more nodes, and edges involving M1, M2, and M3 can represent 0 or more edges.
In order to split the graph in the manner I am seeking, it's required to classify these nodes using the reachability algorithm. The members of M1 are all nodes $$k \not \in \{A, B\}$$ such that $$A \not \le k$$. The members of M2 are all nodes $$k \not \in \{A, B\}$$ such that $$A \le k \land B \not \le k$$. Finally the members of M3 are the remaining nodes $$k \not \in \{A, B\}$$ and $$B \le k$$.
The members of M2 are duplicated along with B and A. All edges from M1 to M2 or B are routed to M2' and B' instead. And the edges from M2 and B to M3 are duplicated for M2' and B'. Finally, an edge from B' to A' is made.
The algorithm fails to place A' if and only if there are no edges connecting M1 to B and there are no edges from M1 to M2 or M2 to B, which represents the fact that there is no path to B without going through A, so there is no possible way to place A'.
So this problem can be solved with $$O(n)$$ lookups using pretty much any general reachability algorithm.
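A sketch of the metanode classification above (my own illustration; the graph is an adjacency list, and "A <= k" means k is reachable from A):

```python
from collections import deque

def reachable_from(graph, start):
    # breadth-first search: the set of nodes reachable from start
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for succ in graph.get(node, []):
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return seen

def classify(graph, A, B):
    # M1: A cannot reach k; M2: A reaches k but B does not; M3: B reaches k
    from_A = reachable_from(graph, A)
    from_B = reachable_from(graph, B)
    M1, M2, M3 = set(), set(), set()
    for k in graph:
        if k in (A, B):
            continue
        if k not in from_A:
            M1.add(k)
        elif k not in from_B:
            M2.add(k)
        else:
            M3.add(k)
    return M1, M2, M3

# toy graph: A -> p -> B -> r, with q -> B off the A-path
graph = {"A": ["p"], "p": ["B"], "q": ["B"], "B": ["r"], "r": []}
```

Here `classify(graph, "A", "B")` puts q in M1, p in M2, and r in M3, matching the three metanode roles.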
https://proofwiki.org/wiki/Characterization_of_Pseudoprime_Element_when_Way_Below_Relation_is_Multiplicative
|
# Characterization of Pseudoprime Element when Way Below Relation is Multiplicative
## Theorem
Let $L = \struct {S, \vee, \wedge, \preceq}$ be a bounded below continuous lattice such that
$\ll$ is multiplicative relation
where $\ll$ denotes the way below relation of $L$.
Let $p \in S$.
Then $p$ is pseudoprime element if and only if
$\forall a, b \in S: a \wedge b \ll p \implies a \preceq p \lor b \preceq p$
## Proof
### Sufficient Condition
Let $p$ be pseudoprime element.
Let $a, b \in S$ such that
$a \wedge b \ll p$
By definition of meet:
$\inf \left\{ {a, b}\right\} \ll p$
By definition of pseudoprime element:
$\exists c \in \left\{ {a, b}\right\}: c \preceq p$
Thus
$a \preceq p$ or $b \preceq p$
$\Box$
### Necessary Condition
Suppose
$\forall a, b \in S: a \wedge b \ll p \implies a \preceq p \lor b \preceq p$
Aiming for a contradiction, suppose
$p$ is not a pseudoprime element.
Since every prime element is pseudoprime, it follows that
$p$ is not a prime element.
By definition of prime element:
$\exists x, y \in S: x \wedge y \preceq p$ and $x \npreceq p$ and $y \npreceq p$
By definition of continuous:
$\forall z \in S: z^\ll$ is directed.
and
$L$ satisfies axiom of approximation.
$\exists u \in S: u \ll x \land u \npreceq p$
and
$\exists v \in S: v \ll y \land v \npreceq p$
$\ll$ is auxiliary relation.
$u \wedge v \ll x \wedge y$
By definition of transitivity:
$u \wedge v \ll p$
By assumption:
$u \preceq p$ or $v \preceq p$
This contradicts $u \npreceq p$ and $v \npreceq p$.
$\blacksquare$
https://chemistry.stackexchange.com/questions/67923/degeneracy-of-second-excited-state-of-h
|
# Degeneracy of second excited state of H-?
This is a question presented in a IIT-JEE 2015 paper I exam. It says,
Not considering the electronic spin, the degeneracy of the second excited state (n = 3) of the H atom is 9, while the degeneracy of the second excited state of H- is-
I saw a solution which says it's 3.
I think by 9 degenerate energy levels the question meant the 9 possible orbitals of the M shell. But I am not sure how that is 3 for H-.
Please don't delete this question as I haven't found it anywhere.
## 2 Answers
I think it is important to understand that for hydrogen atom (or any other one-electron system) all orbitals from the same shell have same energy. For instance, $E_\mathrm{2s} = E_\mathrm{2p}$, $E_\mathrm{3s} = E_\mathrm{3p} = E_\mathrm{3d}$, etc. Thus,
• The first excited state of hydrogen atom would be one in which either $\mathrm{2s}$ or one of the three $\mathrm{2p}$ orbitals is occupied and it will be 4-fold degenerate: $\mathrm{1s^0 2s^1}$, $\mathrm{1s^0 2p_x^1}$, $\mathrm{1s^0 2p_y^1}$, $\mathrm{1s^0 2p_z^1}$.
• Analogously, the second excited state of hydrogen atom would be one in which either $\mathrm{3s}$ or one of the three $\mathrm{3p}$ or one of the five $\mathrm{3d}$ orbitals is occupied and it will be 9-fold degenerate: $\mathrm{1s^0 2s^0 2p_x^0 2p_y^0 2p_z^0 3s^1}$, $\mathrm{1s^0 2s^0 2p_x^0 2p_y^0 2p_z^0 3s^0 3p_x^1}$, $\mathrm{1s^0 2s^0 2p_x^0 2p_y^0 2p_z^0 3s^0 3p_y^1}$, $\mathrm{1s^0 2s^0 2p_x^0 2p_y^0 2p_z^0 3s^0 3p_z^1}$, plus 5 configurations with only $\mathrm{3d}$-orbitals occupied.
For hydride the situation is different: it is not a one-electron system, so different orbitals from the same shell do not have same energy anymore. For instance, $E_\mathrm{2s} < E_\mathrm{2p}$, $E_\mathrm{3s} < E_\mathrm{3p} < E_\mathrm{3d}$, etc. Thus,
• The first excited state of the hydride would be one in which one electron populates $\mathrm{1s}$ orbital and another $\mathrm{2s}$ one, i.e. a non-degenerate $\mathrm{1s^1 2s^1}$ state.
• The second excited state of the hydride would be one in which one electron populates $\mathrm{1s}$ orbital and another one of the three $\mathrm{2p}$ ones, i.e. a 3-fold degenerate state: $\mathrm{1s^1 2s^0 2p_x^1}$, $\mathrm{1s^1 2s^0 2p_y^1}$, $\mathrm{1s^1 2s^0 2p_z^1}$.
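A minimal sketch (my own) of the orbital counting used above: ignoring spin, shell n holds n² orbitals, which gives the 4-fold (n = 2) and 9-fold (n = 3) degeneracies of one-electron systems.

```python
def shell_degeneracy(n):
    # orbitals with principal quantum number n, ignoring spin:
    # each l = 0 .. n-1 contributes (2l + 1) m_l values, totalling n**2
    return sum(2 * l + 1 for l in range(n))
```

For example, `shell_degeneracy(3)` counts 1 (3s) + 3 (3p) + 5 (3d) orbitals.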
• So one of the electrons remains in $\mathrm{1s}$ at all times? Is this a rule? This question was intended for students who only know how to work with hydrogen-like species (i.e. single-electron species). PS I am aware of this particular examination, its syllabi, as well as the difficulty level of its questions. @Wildcat – McSuperbX1 May 12 at 13:41
In a single-electron system like the H atom or He+, all orbitals with the same principal quantum number (n) have the same energy. Therefore the degeneracy spans the three subshells 3s, 3p, and 3d, which all have the same energy.
http://trac-hacks.org/ticket/9731
Opened 2 years ago
Closed 2 years ago
# Custom queries broken when using "contains" with a multi-select custom field
Reported by: jbeilicke; Owned by: cmc; Priority: normal; Component: MultiSelectCustomFieldsPatch; Severity: normal; Trac Release: 0.12
### Description
Attached please find a modified version of the original patch that fixes the mentioned issue in Trac 0.12. The bug in ticket/query.py line 545 led to wrong database results, due to a pipe right before the last percent sign:
Original patch:
value = '%' + value + '|%'
Modified version:
value = '%' + value + '%'
### comment:1 follow-up: ↓ 2 Changed 2 years ago by cmc
• Resolution set to invalid
• Status changed from new to closed
That is not correct. All multi-select values should end with a pipe. Hence, '%' + value + '%' may actually return incorrect results.
Imagine if options were 'Dog' and 'Dog Park'. If you searched for 'Dog', a ticket with 'Dog Park' (in the database, 'Dog Park|') would come up with your proposed change. Instead, if you search for '%Dog|%', no Dog Park entries will be returned.
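A quick way to see the difference is to emulate the two LIKE patterns outside the database. This is a sketch, not Trac code: the `like` helper below is hypothetical and maps SQL's `%` wildcard onto Python's `fnmatch` globbing.

```python
# Sketch (not Trac code): emulate the SQL LIKE patterns discussed above to
# show why the trailing pipe matters when values are stored pipe-terminated.
from fnmatch import fnmatch

def like(stored, pattern):
    # Translate SQL LIKE's '%' wildcard into fnmatch's '*'
    return fnmatch(stored, pattern.replace('%', '*'))

stored_dog_park = 'Dog Park|'   # multi-select values end with a pipe
stored_dog = 'Dog|'

# '%Dog%' (the proposed simplification) wrongly matches 'Dog Park|'
assert like(stored_dog_park, '%Dog%')
# '%Dog|%' (the original pattern) matches only exact 'Dog' entries
assert not like(stored_dog_park, '%Dog|%')
assert like(stored_dog, '%Dog|%')
```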
### comment:2 in reply to: ↑ 1 ; follow-up: ↓ 3 Changed 2 years ago by jbeilicke
> That is not correct. All multi-select values should end with a pipe. Hence, '%' + value + '%' may actually return incorrect results.
You are right, of course. Quite obvious, but I didn't see this. Thanks for the clarification. I will revert my patch and check again why the queries with multi-selects are not working as expected. :)
### comment:3 in reply to: ↑ 2 ; follow-up: ↓ 4 Changed 2 years ago by cmc
> That is not correct. All multi-select values should end with a pipe. Hence, '%' + value + '%' may actually return incorrect results.
> You are right, of course. Quite obvious, but I didn't see this. Thanks for the clarification. I will revert my patch and check again why the queries with multi-selects are not working as expected. :)
Mind you, this patch was originally written for 0.11 and I haven't worked on the 0.12 version for a year. I wouldn't be at all surprised if there was a bug somewhere else ;) Are the fields saving with pipes? If you're converting an existing field over to use multi-select, you'll need to append a pipe to each value.
### comment:4 in reply to: ↑ 3 Changed 2 years ago by jbeilicke
> That is not correct. All multi-select values should end with a pipe. Hence, '%' + value + '%' may actually return incorrect results.
> You are right, of course. Quite obvious, but I didn't see this. Thanks for the clarification. I will revert my patch and check again why the queries with multi-selects are not working as expected. :)
> Mind you, this patch was originally written for 0.11 and I haven't worked on the 0.12 version for a year. I wouldn't be at all surprised if there was a bug somewhere else ;) Are the fields saving with pipes? If you're converting an existing field over to use multi-select, you'll need to append a pipe to each value.
That is the case: I have to append the pipe. Thanks again.
https://math.stackexchange.com/questions/1516795/is-the-negation-of-a-non-theorem-a-theorem
# Is the negation of a non-theorem a theorem?
I don't know if that is something obvious or if it is a dumb question. But it seems to be true.
Consider the non-theorem $\forall x. x < 1$. Its negation is $\exists x. x \geq 1$ and is a theorem. Is this always true?
I couldn't find a counterexample.
If this is true I would like to know a good explanation why.
• The answer depends on what you define as "not-theorem". – 5xum Nov 7 '15 at 0:19
• Some valid proposition. – Rafael Castro Nov 7 '15 at 0:20
• Some valid (and provable) proposition. – Rafael Castro Nov 7 '15 at 0:26
• You need to tell us what first-order theory you are working in. The answer to your "is this always true?" for the theory of the real numbers is yes, but for the theory of the natural numbers it is no. – Rob Arthan Nov 7 '15 at 0:31
• My comment wasn't quite right: if you fix a particular structure like $\Bbb{N}$ or $\Bbb{R}$, then the theory of that structure is complete: any sentence is either true or false. If you are looking at a theory defined by axioms and inference rules, then we need to know what those axioms and inference rules are. – Rob Arthan Nov 7 '15 at 0:51
It's neither obvious nor a dumb question. But it is, perhaps surprisingly, sensitive to what theory, or at least what language, you're proving things in. I assume you intend the example you gave, $(\exists x)\,x\ge1$, as a sentence about the real numbers. The reals are a real closed field, and the theory of real closed fields is complete: every sentence or its negation is a theorem. So this really is a special case: as fate would have it, for this theory, the negation of every non-theorem actually is a theorem, so you'll search in vain for a counterexample.
However, the same cannot be said for arithmetic, as formalized by Peano arithmetic (PA): there are sentences $S$ in the language of arithmetic such that neither $S$ nor $\neg S$ is a theorem of PA (assuming, of course, that PA is consistent). Examples include Gödel sentences (true sentences asserting their own unprovability within the system), as well as more natural sentences: Goodstein's theorem, which Kirby and Paris showed is unprovable in PA, and a true sentence about finite Ramsey theory which Paris and Harrington showed is independent of PA.
Set theory offers further examples. For our purposes, it's safe to say that ZFC is the system in which contemporary mathematical practice takes place. ZFC has its own Gödel sentences (assuming it's consistent), but it turns out that many natural mathematical questions — sentences $S$ — are simply independent of the ZFC axioms: ZFC proves neither $S$ nor $\neg S$. One famous example is the Continuum Hypothesis, but the list of interesting statements independent of ZFC is substantial.
The Axiom of Choice, AC, provides the "C" in ZFC. AC says: for every set $X$ of nonempty sets, there is a function $f$ with domain $X$ such that $f(x)\in x$ for all $x\in X$ ($f$ is a choice function for $X$). ZFC without AC is the system known as ZF. It turns out AC is not provable in ZF, and the negation of AC is not provable in ZF. In some models of ZF, every set has a choice function (these models are, of course, models of ZFC); in other models, many infinite sets lack choice functions.
Finally, note that, assuming it's consistent, ZFC cannot prove its own consistency. Via Gödel numbering and arithmetization of syntax, a sentence meaning "ZFC is consistent" can be formulated within ZFC. This sentence is just a statement about the integers, which isn't provable even in ZFC. However, if we add a large cardinal axiom, even a "small large cardinal" axiom such as "There exists an inaccessible cardinal", then the resulting stronger theory can prove that ZFC is consistent, and in particular new statements of arithmetic become provable.
• Note that the "assuming it's consistent" that appears a couple of times, is just because an inconsistent theory (equipped with the kinds of logic we're talking about) proves every sentence and in particular it has no undecidable sentences. From the POV of this question, there are no non-theorems anyway in that case and the question is moot :-) – Steve Jessop Nov 7 '15 at 15:02
• @SteveJessop Thanks for adding that. It may not be obvious to all readers, or to the OP, that in an inconsistent theory there are no non-theorems (every sentence is a theorem). That's because $(p\land \neg p)\to q$ is a tautology, anyway it is in classical logic, so if a theory proves one contradiction, it proves every sentence. – BrianO Nov 8 '15 at 4:51
Suppose we are talking about the theory of groups. The statement $$\forall x \forall y \big(xy=yx\big)$$ is not a theorem (it is false for some groups). It negation $$\exists x \exists y \big(xy \ne yx\big)$$ is also not a theorem (it, too, is false for some groups).
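Both failures can be exhibited concretely: the symmetric group $S_3$ falsifies the first sentence, while any abelian group (here the cyclic group $\mathbb{Z}_4$, chosen for illustration) falsifies its negation. A small Python sketch:

```python
# Sketch: exhibit one group where commutativity fails and one where it holds,
# showing that neither sentence above is a theorem of group theory.
from itertools import permutations, product

def compose(p, q):
    # Composition of permutations given as tuples: (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

S3 = list(permutations(range(3)))   # symmetric group on 3 letters
# "for all x, y: xy = yx" fails in S3:
assert any(compose(x, y) != compose(y, x) for x, y in product(S3, S3))

def add(a, b):
    return (a + b) % 4              # group operation of Z_4

Z4 = range(4)                       # cyclic group of order 4 (abelian)
# "there exist x, y: xy != yx" fails in Z_4:
assert all(add(a, b) == add(b, a) for a, b in product(Z4, Z4))
```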
• You need a $\forall \text{ groups } G$ in front of your first formula, and so one of the two is true. In any case, if you specify which sets the $x$ and $y$ are being drawn from, you see that one of the two is true. – Eric Thoma Jan 31 '16 at 4:08
• A "theorem" is a statement in a certain "theory". The theory of groups has no quantifier "$\forall$ Groups", nor does it allow talking about "sets". – GEdgar Jan 31 '16 at 14:36
• I see. Thank you for clarifying. – Eric Thoma Jan 31 '16 at 20:43
It is not always true. There are undecidable propositions as well, propositions that are both unprovable and whose negations are unprovable. This is the essence of Gödel's first incompleteness theorem.
There are many examples in the answers to this other question.
• The answer depends on the theory the OP is interested in. – Rob Arthan Nov 7 '15 at 0:32
• @RobArthan I interpreted the question to be about propositions in "all of mathematics" (so including incomplete theories) but now I'm not sure I even understood the basic gist of the question. – Aaron Golden Nov 7 '15 at 0:41
• @AaronGolden: you and I are on the same wavelength now $\ddot{\smile}$. – Rob Arthan Nov 7 '15 at 0:42
• Whether a particular theory or a particular interpretation of "not theorem" was in intended, this post directly addresses the question as asked. Don't discount such answers. you can learn from them. +1 – Paul Sinclair Nov 7 '15 at 0:44
• I would like to point out that this answer does not apply if the first-order theory of interest is the theory of the reals (which is what the OP said was the theory of interest in a comment under the question). The first-order theory of the reals is complete. – Rob Arthan Nov 7 '15 at 0:55
Consider an invalid (non-theorem) statement.
• For all $z \in \mathbb{C}$ where $\operatorname{Re}(z) \neq 0$, we must have $z_i \leq z_j$.
Its negation is:
For some $z \in \mathbb{C}$ where $\operatorname{Re}(z) \neq 0$, we must have $z_i > z_j$, which is also invalid (a non-theorem).
https://cracku.in/9-sqrt188-sqrt51-sqrt169-x-cmat-2018-slot-2
Question 9
# $$\sqrt{188+ \sqrt{51+ \sqrt{169}}}$$ = ?
Solution
$$\sqrt{188+ \sqrt{51+ \sqrt{169}}}$$ = $$\sqrt{188+ \sqrt{51+13}}$$ = $$\sqrt{188+ \sqrt{64}}$$ = $$\sqrt{188+ 8}$$ = $$\sqrt{196}$$ = 14
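Working from the innermost radical outward, the evaluation can be checked numerically:

```python
# Numeric check of the nested radical, evaluated from the inside out.
import math

inner = math.sqrt(169)            # 13
middle = math.sqrt(51 + inner)    # sqrt(64) = 8
outer = math.sqrt(188 + middle)   # sqrt(196) = 14
assert (inner, middle, outer) == (13.0, 8.0, 14.0)
```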
https://zbmath.org/?q=an%3A1115.76392
Numerical approximations of singular source terms in differential equations. (English) Zbl 1115.76392
Summary: Singular terms in differential equations pose severe challenges for numerical approximations on regular grids. Regularization of the singularities is a very useful technique for their representation on the grid. We analyze such techniques for the practically preferred case of narrow support of the regularizations, extending our earlier results for wider support. The analysis also generalizes existing theory for one dimensional problems to multi-dimensions. New high order multidimensional techniques for differential equations and numerical quadrature are introduced based on the analysis and numerical results are presented. We also show that the common use of distance functions in level-set methods to extend one dimensional regularization to higher dimensions may produce $$O(1)$$ errors.
MSC:
76M25 Other numerical methods (fluid mechanics) (MSC2010)
74S30 Other numerical methods in solid mechanics (MSC2010)
65N99 Numerical methods for partial differential equations, boundary value problems
https://en.wikiversity.org/wiki/User:Guy_vandegrift/Quizbank/Archive1/Sample_calc2_FE
# User:Guy vandegrift/Quizbank/Archive1/Sample calc2 FE
## Contents
### pc1:FE:V1
pc120161220T142002
1) Integrate the function, ${\displaystyle {\vec {F}}=-x^{2}y^{4}{\hat {x}}+x^{4}y^{5}{\hat {y}}}$, as a line integral around a unit square with corners at (0,0),(1,0),(1,1),(0,1). Orient the path so its direction is out of the paper by the right hand rule
a) 4.08E-01
b) 4.37E-01
c) 4.67E-01
d) 5.00E-01
e) 5.35E-01
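As a sanity check (not part of the exam), the keyed answer follows from Green's theorem: for a counterclockwise path, $\oint \vec F \cdot d\vec l = \iint (\partial_x F_y - \partial_y F_x)\,dA = \iint (4x^3y^5 + 4x^2y^3)\,dA = 1/6 + 1/3 = 0.5$. A midpoint-rule check in Python:

```python
# Green's theorem check: integrate the curl (4*x^3*y^5 + 4*x^2*y^3) over the
# unit square with a midpoint Riemann sum; the exact value is 1/6 + 1/3 = 0.5.
n = 400
h = 1.0 / n
total = 0.0
for i in range(n):
    x = (i + 0.5) * h
    for j in range(n):
        y = (j + 0.5) * h
        total += (4 * x**3 * y**5 + 4 * x**2 * y**3) * h * h
assert abs(total - 0.5) < 1e-3   # matches choice (d), 5.00E-01
```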
2) The specific heat of water and aluminum are 4186 and 900, respectively, where the units are J/kg/Celsius. An aluminum container of mass 0.95 kg is filled with 0.19 kg of water. You are consulting for the flat earth society, a group of people who believe that the acceleration of gravity equals 9.8 m/s/s at all altitudes. Based on this assumption, from what height must the water and container be dropped to achieve the same change in temperature? (For comparison, Earth's radius is 6,371 kilometers)
a) 5.24 x 10^0 km
b) 6.35 x 10^0 km
c) 7.7 x 10^0 km
d) 9.32 x 10^0 km
e) 1.13 x 10^1 km
3) A 2.3 kg mass is sliding along a surface that has a kinetic coefficient of friction equal to 0.41 . In addition to the surface friction, there is also an air drag equal to 16 N. What is the magnitude (absolute value) of the acceleration?
a) 7.2 m/s^2.
b) 8.3 m/s^2.
c) 9.5 m/s^2.
d) 11 m/s^2.
e) 12.6 m/s^2.
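The intended solution appears to be that friction and air drag both oppose the motion on a horizontal surface (assuming g = 9.8 m/s^2):

```python
# |a| = (mu*m*g + F_drag) / m = mu*g + F_drag/m, with friction and drag
# both opposing the motion (assumption: horizontal surface, g = 9.8 m/s^2).
m, mu, drag, g = 2.3, 0.41, 16.0, 9.8
a = mu * g + drag / m
assert round(a) == 11   # matches choice (d), 11 m/s^2
```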
4) A car of mass 856 kg is driving on an icy road at a speed of 19 m/s, when it collides with a stationary truck. After the collision they stick and move at a speed of 4.7 m/s. What was the mass of the truck?
a) 1507 kg
b) 1809 kg
c) 2170 kg
d) 2604 kg
e) 3125 kg
5) A 6.7 cm diameter pipe can fill a 2.2 m^3 volume in 8.0 minutes. Before exiting the pipe, the diameter is reduced to 2.3 cm (with no loss of flow rate). What is the speed in the first (wider) pipe?
a) 8.86E-1 m/s
b) 1.07E0 m/s
c) 1.30E0 m/s
d) 1.57E0 m/s
e) 1.91E0 m/s
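By continuity, the speed in the wide pipe is the flow rate over its cross-sectional area; the narrower exit does not change the upstream speed at fixed flow rate:

```python
# v = Q / A in the 6.7 cm pipe; the exit diameter is irrelevant here.
import math
Q = 2.2 / (8.0 * 60)              # flow rate in m^3/s
A = math.pi * (0.067 / 2) ** 2    # cross-section of the wide pipe, m^2
v = Q / A
assert abs(v - 1.30) < 0.01       # matches choice (c), 1.30E0 m/s
```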
6) A sled of mass 2 kg is on a perfectly smooth surface. A string pulls with a tension of 17.4 N. At what angle above the horizontal must the string pull in order to achieve an acceleration of 2.9 m/s^2?
a) 53.3 degrees
b) 61.3 degrees
c) 70.5 degrees
d) 81.1 degrees
e) 93.3 degrees
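Only the horizontal component of the tension accelerates the sled, so T cos(θ) = m a:

```python
# theta = arccos(m*a / T), since T*cos(theta) = m*a on a frictionless surface.
import math
m, T, a = 2.0, 17.4, 2.9
theta = math.degrees(math.acos(m * a / T))
assert abs(theta - 70.5) < 0.1   # matches choice (c), 70.5 degrees
```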
7) A cylinder with a radius of 0.38 m and a length of 2.2 m is held so that the top circular face is 3.8 m below the water. The mass of the block is 903.0 kg. The mass density of water is 1000kg/m^3. What is the pressure at the top face of the cylinder?
3.72E4 Pa
4.51E4 Pa
5.47E4 Pa
6.62E4 Pa
8.02E4 Pa
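The answer choices evidently use gauge pressure, P = ρ g h at the depth of the top face (the cylinder's mass is not needed for this part):

```python
# Gauge pressure at depth h = 3.8 m (atmospheric pressure excluded, which
# appears to be the convention used by the answer choices).
rho, g, h = 1000.0, 9.8, 3.8
P = rho * g * h
assert abs(P - 3.72e4) < 1e2   # matches the first choice, 3.72E4 Pa
```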
8) A particle is initially at the origin and moving in the x direction at a speed of 3.8 m/s. It has a constant acceleration of 2.1 m/s^2 in the y direction, as well as an acceleration of 0.6 m/s^2 in the x direction. What angle does the velocity make with the x axis at time t = 2.9 s?
a) 31.37 degrees.
b) 36.07 degrees.
c) 41.48 degrees.
d) 47.71 degrees.
e) 54.86 degrees.
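The velocity components follow from constant-acceleration kinematics, and the angle from their ratio:

```python
# v_x = v_x0 + a_x*t and v_y = a_y*t; the angle is atan2(v_y, v_x).
import math
t = 2.9
vx = 3.8 + 0.6 * t
vy = 2.1 * t
angle = math.degrees(math.atan2(vy, vx))
assert abs(angle - 47.71) < 0.05   # matches choice (d), 47.71 degrees
```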
9) A train accelerates uniformly from 17 m/s to 35.25 m/s, while travelling a distance of 151 m. What is the 'average' acceleration?
a) 1.83 m/s/s.
b) 2.19 m/s/s.
c) 2.63 m/s/s.
d) 3.16 m/s/s.
e) 3.79 m/s/s.
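Using the kinematic relation v^2 = u^2 + 2 a d:

```python
# a = (v^2 - u^2) / (2*d) for uniform acceleration over a distance d.
u, v, d = 17.0, 35.25, 151.0
a = (v**2 - u**2) / (2 * d)
assert abs(a - 3.16) < 0.01   # matches choice (d), 3.16 m/s/s
```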
10) What is the gravitational acceleration on a planet that is 2.33 times more massive than Earth, with a radius 1.49 times greater than Earth's?
a) 10.3 m/s^2
b) 11.8 m/s^2
c) 13.6 m/s^2
d) 15.6 m/s^2
e) 18 m/s^2
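Surface gravity scales as M/R^2, reading "2.33 times more massive" as 2.33 Earth masses (the reading the keyed answer uses):

```python
# g = g_earth * (M/M_earth) / (R/R_earth)^2
g_earth = 9.8
g = g_earth * 2.33 / 1.49**2
assert abs(g - 10.3) < 0.05   # matches choice (a), 10.3 m/s^2
```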
11) A lead filled bicycle wheel of radius 0.37 m and mass 2.1 kg is rotating at a frequency of 1.4 revolutions per second. What is the total kinetic energy if the wheel is rotating about a stationary axis?
a) 5.16 x 10^0 J
b) 6.25 x 10^0 J
c) 7.58 x 10^0 J
d) 9.18 x 10^0 J
e) 1.11 x 10^1 J
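The keyed answer follows if the lead-filled wheel is modeled as a hoop (I = m r^2, an assumption, but the one that reproduces the key):

```python
# Rotational KE about a fixed axis: KE = (1/2) * I * omega^2 with I = m*r^2
# (hoop model) and omega = 2*pi*f.
import math
m, r, f = 2.1, 0.37, 1.4
omega = 2 * math.pi * f
KE = 0.5 * (m * r**2) * omega**2
assert abs(KE - 11.1) < 0.1   # matches choice (e), 1.11 x 10^1 J
```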
12) The spring constant is 676N/m, and the initial compression is 0.14m. What is the mass if the cart reaches a height of 2.73m, before coming to rest?
a) 0.225 kg
b) 0.236 kg
c) 0.248 kg
d) 0.260 kg
e) 0.273 kg
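Energy conservation between the compressed spring and the cart at its highest point, (1/2) k x^2 = m g h, solved for the mass:

```python
# m = k*x^2 / (2*g*h), assuming all spring energy converts to gravitational PE.
k, x, h, g = 676.0, 0.14, 2.73, 9.8
m = k * x**2 / (2 * g * h)
assert abs(m - 0.248) < 0.001   # matches choice (c), 0.248 kg
```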
13) A car is traveling in a perfect circle at constant speed. If the car is headed north while turning east, the acceleration is
a) zero
b) east
c) north
d) south
e) west
14) If a particle's position is given by x(t) = 5sin(4t-π/6), what is the velocity?
a) v(t) = -20sin(4t-π/6)
b) v(t) = 5cos(4t-π/6)
c) v(t) = 20cos(4t-π/6)
d) v(t) = -20cos(4t-π/6)
e) v(t) = 20sin(4t-π/6)
15)
A 1→2→4→1 heat cycle uses 1.1 moles of an ideal gas. The pressures and volumes are: P1= 2 kPa, P2= 4.1 kPa. The volumes are V1= 2.1 m^3 and V4= 4.3 m^3. How much work is done in one cycle?
a) 7.3 x 10^2 J
b) 2.31 x 10^3 J
c) 7.3 x 10^3 J
d) 2.31 x 10^4 J
e) 7.3 x 10^4 J
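Assuming the cycle is a triangle in the P-V plane spanned by the two given pressures and two given volumes (an assumption, but the only reading consistent with the keyed answer), the work per cycle is the enclosed area:

```python
# Work per cycle = area enclosed in the P-V plane = (1/2) * dP * dV
# (assumption: triangular 1->2->4->1 cycle).
P1, P2 = 2e3, 4.1e3   # Pa
V1, V4 = 2.1, 4.3     # m^3
W = 0.5 * (P2 - P1) * (V4 - V1)
assert abs(W - 2.31e3) < 1   # matches choice (b), 2.31 x 10^3 J
```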
#### KEY:pc1:FE:V1
pc120161220T142002
1) Integrate the function, ${\displaystyle {\vec {F}}=-x^{2}y^{4}{\hat {x}}+x^{4}y^{5}{\hat {y}}}$, as a line integral around a unit square with corners at (0,0),(1,0),(1,1),(0,1). Orient the path so its direction is out of the paper by the right hand rule
- a) 4.08E-01
- b) 4.37E-01
- c) 4.67E-01
+ d) 5.00E-01
- e) 5.35E-01
2) The specific heat of water and aluminum are 4186 and 900, respectively, where the units are J/kg/Celsius. An aluminum container of mass 0.95 kg is filled with 0.19 kg of water. You are consulting for the flat earth society, a group of people who believe that the acceleration of gravity equals 9.8 m/s/s at all altitudes. Based on this assumption, from what height must the water and container be dropped to achieve the same change in temperature? (For comparison, Earth's radius is 6,371 kilometers)
-a) 5.24 x 10^0 km
+b) 6.35 x 10^0 km
-c) 7.7 x 10^0 km
-d) 9.32 x 10^0 km
-e) 1.13 x 10^1 km
3) A 2.3 kg mass is sliding along a surface that has a kinetic coefficient of friction equal to 0.41 . In addition to the surface friction, there is also an air drag equal to 16 N. What is the magnitude (absolute value) of the acceleration?
-a) 7.2 m/s^2.
-b) 8.3 m/s^2.
-c) 9.5 m/s^2.
+d) 11 m/s^2.
-e) 12.6 m/s^2.
4) A car of mass 856 kg is driving on an icy road at a speed of 19 m/s, when it collides with a stationary truck. After the collision they stick and move at a speed of 4.7 m/s. What was the mass of the truck?
-a) 1507 kg
-b) 1809 kg
-c) 2170 kg
+d) 2604 kg
-e) 3125 kg
5) A 6.7 cm diameter pipe can fill a 2.2 m^3 volume in 8.0 minutes. Before exiting the pipe, the diameter is reduced to 2.3 cm (with no loss of flow rate). What is the speed in the first (wider) pipe?
-a) 8.86E-1 m/s
-b) 1.07E0 m/s
+c) 1.30E0 m/s
-d) 1.57E0 m/s
-e) 1.91E0 m/s
6) A sled of mass 2 kg is on a perfectly smooth surface. A string pulls with a tension of 17.4 N. At what angle above the horizontal must the string pull in order to achieve an acceleration of 2.9 m/s^2?
-a) 53.3 degrees
-b) 61.3 degrees
+c) 70.5 degrees
-d) 81.1 degrees
-e) 93.3 degrees
7) A cylinder with a radius of 0.38 m and a length of 2.2 m is held so that the top circular face is 3.8 m below the water. The mass of the block is 903.0 kg. The mass density of water is 1000kg/m^3. What is the pressure at the top face of the cylinder?
+ 3.72E4 Pa
- 4.51E4 Pa
- 5.47E4 Pa
- 6.62E4 Pa
- 8.02E4 Pa
8) A particle is initially at the origin and moving in the x direction at a speed of 3.8 m/s. It has a constant acceleration of 2.1 m/s^2 in the y direction, as well as an acceleration of 0.6 m/s^2 in the x direction. What angle does the velocity make with the x axis at time t = 2.9 s?
-a) 31.37 degrees.
-b) 36.07 degrees.
-c) 41.48 degrees.
+d) 47.71 degrees.
-e) 54.86 degrees.
9) A train accelerates uniformly from 17 m/s to 35.25 m/s, while travelling a distance of 151 m. What is the 'average' acceleration?
-a) 1.83 m/s/s.
-b) 2.19 m/s/s.
-c) 2.63 m/s/s.
+d) 3.16 m/s/s.
-e) 3.79 m/s/s.
10) What is the gravitational acceleration on a planet that is 2.33 times more massive than Earth, with a radius 1.49 times greater than Earth's?
+a) 10.3 m/s^2
-b) 11.8 m/s^2
-c) 13.6 m/s^2
-d) 15.6 m/s^2
-e) 18 m/s^2
11) A lead filled bicycle wheel of radius 0.37 m and mass 2.1 kg is rotating at a frequency of 1.4 revolutions per second. What is the total kinetic energy if the wheel is rotating about a stationary axis?
-a) 5.16 x 10^0 J
-b) 6.25 x 10^0 J
-c) 7.58 x 10^0 J
-d) 9.18 x 10^0 J
+e) 1.11 x 10^1 J
12) The spring constant is 676N/m, and the initial compression is 0.14m. What is the mass if the cart reaches a height of 2.73m, before coming to rest?
- a) 0.225 kg
- b) 0.236 kg
+ c) 0.248 kg
- d) 0.260 kg
- e) 0.273 kg
13) A car is traveling in a perfect circle at constant speed. If the car is headed north while turning east, the acceleration is
-a) zero
+b) east
-c) north
-d) south
-e) west
14) If a particle's position is given by x(t) = 5sin(4t-π/6), what is the velocity?
-a) v(t) = -20sin(4t-π/6)
-b) v(t) = 5cos(4t-π/6)
+c) v(t) = 20cos(4t-π/6)
-d) v(t) = -20cos(4t-π/6)
-e) v(t) = 20sin(4t-π/6)
15)
A 1→2→4→1 heat cycle uses 1.1 moles of an ideal gas. The pressures and volumes are: P1= 2 kPa, P2= 4.1 kPa. The volumes are V1= 2.1 m^3 and V4= 4.3 m^3. How much work is done in one cycle?
-a) 7.3 x 10^2 J
+b) 2.31 x 10^3 J
-c) 7.3 x 10^3 J
-d) 2.31 x 10^4 J
-e) 7.3 x 10^4 J
### pc1:FE:V2
pc120161220T142002
1)
A 1→2→4→1 heat cycle uses 1.1 moles of an ideal gas. The pressures and volumes are: P1= 2 kPa, P2= 4.1 kPa. The volumes are V1= 2.1 m^3 and V4= 4.3 m^3. How much work is done in one cycle?
a) 7.3 x 10^2 J
b) 2.31 x 10^3 J
c) 7.3 x 10^3 J
d) 2.31 x 10^4 J
e) 7.3 x 10^4 J
2) A 2.4 kg mass is sliding along a surface that has a kinetic coefficient of friction equal to 0.68 . In addition to the surface friction, there is also an air drag equal to 6 N. What is the magnitude (absolute value) of the acceleration?
a) 9.2 m/s^2.
b) 10.5 m/s^2.
c) 12.1 m/s^2.
d) 13.9 m/s^2.
e) 16 m/s^2.
3) If a particle's position is given by x(t) = 5sin(4t-π/6), what is the velocity?
a) v(t) = 5cos(4t-π/6)
b) v(t) = -20cos(4t-π/6)
c) v(t) = -20sin(4t-π/6)
d) v(t) = 20cos(4t-π/6)
e) v(t) = 20sin(4t-π/6)
4) A cylinder with a radius of 0.25 m and a length of 3.5 m is held so that the top circular face is 3.3 m below the water. The mass of the block is 922.0 kg. The mass density of water is 1000kg/m^3. What is the pressure at the top face of the cylinder?
3.23E4 Pa
3.92E4 Pa
4.75E4 Pa
5.75E4 Pa
6.97E4 Pa
5) A sled of mass 2.6 kg is on a perfectly smooth surface. A string pulls with a tension of 19.2 N. At what angle above the horizontal must the string pull in order to achieve an acceleration of 2.4 m/s^2?
a) 53.7 degrees
b) 61.8 degrees
c) 71 degrees
d) 81.7 degrees
e) 93.9 degrees
6) A car of mass 674 kg is driving on an icy road at a speed of 16 m/s, when it collides with a stationary truck. After the collision they stick and move at a speed of 5.9 m/s. What was the mass of the truck?
a) 801 kg
b) 961 kg
c) 1154 kg
d) 1385 kg
e) 1661 kg
7) A train accelerates uniformly from 12.75 m/s to 33.125 m/s, while travelling a distance of 272 m. What is the 'average' acceleration?
a) 0.99 m/s/s.
b) 1.19 m/s/s.
c) 1.43 m/s/s.
d) 1.72 m/s/s.
e) 2.06 m/s/s.
8) A 9.2 cm diameter pipe can fill a 1.6 m^3 volume in 8.0 minutes. Before exiting the pipe, the diameter is reduced to 4.0 cm (with no loss of flow rate). What is the speed in the first (wider) pipe?
a) 5.01E-1 m/s
b) 6.08E-1 m/s
c) 7.36E-1 m/s
d) 8.92E-1 m/s
e) 1.08E0 m/s
9) A car is traveling in a perfect circle at constant speed. If the car is headed north while turning east, the acceleration is
a) east
b) north
c) west
d) zero
e) south
10) A particle is initially at the origin and moving in the x direction at a speed of 3.8 m/s. It has a constant acceleration of 2.1 m/s^2 in the y direction, as well as an acceleration of 0.6 m/s^2 in the x direction. What angle does the velocity make with the x axis at time t = 2.9 s?
a) 31.37 degrees.
b) 36.07 degrees.
c) 41.48 degrees.
d) 47.71 degrees.
e) 54.86 degrees.
11) A lead filled bicycle wheel of radius 0.56 m and mass 2.9 kg is rotating at a frequency of 1.6 revolutions per second. What is the total kinetic energy if the wheel is rotating about a stationary axis?
a) 3.79 x 10^1 J
b) 4.6 x 10^1 J
c) 5.57 x 10^1 J
d) 6.75 x 10^1 J
e) 8.17 x 10^1 J
12) The specific heat of water and aluminum are 4186 and 900, respectively, where the units are J/kg/Celsius. An aluminum container of mass 0.95 kg is filled with 0.19 kg of water. You are consulting for the flat earth society, a group of people who believe that the acceleration of gravity equals 9.8 m/s/s at all altitudes. Based on this assumption, from what height must the water and container be dropped to achieve the same change in temperature? (For comparison, Earth's radius is 6,371 kilometers)
a) 5.24 x 10^0 km
b) 6.35 x 10^0 km
c) 7.7 x 10^0 km
d) 9.32 x 10^0 km
e) 1.13 x 10^1 km
13) Integrate the function, ${\displaystyle {\vec {F}}=-x^{3}y^{4}{\hat {x}}+x^{4}y^{4}{\hat {y}}}$, as a line integral around a unit square with corners at (0,0),(1,0),(1,1),(0,1). Orient the path so its direction is out of the paper by the right hand rule
a) 3.43E-01
b) 3.67E-01
c) 3.93E-01
d) 4.21E-01
e) 4.50E-01
14) The spring constant is 621N/m, and the initial compression is 0.14m. What is the mass if the cart reaches a height of 3.01m, before coming to rest?
a) 0.187 kg
b) 0.196 kg
c) 0.206 kg
d) 0.217 kg
e) 0.227 kg
15) What is the gravitational acceleration on a planet that is 2.21 times more massive than Earth, with a radius 1.74 times greater than Earth's?
a) 4.1 m/s^2
b) 4.7 m/s^2
c) 5.4 m/s^2
d) 6.2 m/s^2
e) 7.2 m/s^2
#### KEY:pc1:FE:V2
pc120161220T142002
1)
A 1→2→4→1 heat cycle uses 1.1 moles of an ideal gas. The pressures and volumes are: P1= 2 kPa, P2= 4.1 kPa. The volumes are V1= 2.1 m^3 and V4= 4.3 m^3. How much work is done in one cycle?
-a) 7.3 x 10^2 J
+b) 2.31 x 10^3 J
-c) 7.3 x 10^3 J
-d) 2.31 x 10^4 J
-e) 7.3 x 10^4 J
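A quick numeric check of the keyed answer, on my reading that the "1241" cycle traverses states 1→2→4→1 and therefore traces a triangle in the P-V plane (half the bounding rectangle; the original figure did not survive extraction):

```python
P1, P2 = 2.0e3, 4.1e3   # Pa
V1, V4 = 2.1, 4.3       # m^3
# area enclosed by the triangular cycle = half the (dP x dV) rectangle
W = 0.5 * (P2 - P1) * (V4 - V1)
print(round(W))  # 2310 J, i.e. 2.31 x 10^3 J (choice b)
```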
2) A 2.4 kg mass is sliding along a surface that has a kinetic coefficient of friction equal to 0.68. In addition to the surface friction, there is also an air drag equal to 6 N. What is the magnitude (absolute value) of the acceleration?
+a) 9.2 m/s2.
-b) 10.5 m/s2.
-c) 12.1 m/s2.
-d) 13.9 m/s2.
-e) 16 m/s2.
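Worked check (not part of the original quiz): friction and drag both oppose the motion, so their decelerations add.

```python
m, mu, F_drag, g = 2.4, 0.68, 6.0, 9.8
a = (mu * m * g + F_drag) / m   # friction mu*m*g plus drag, divided by mass
print(round(a, 1))  # 9.2 m/s^2 (choice a)
```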
3) If a particle's position is given by x(t) = 5sin(4t-π/6), what is the velocity?
-a) v(t) = 5cos(4t-π/6)
-b) v(t) = -20cos(4t-π/6)
-c) v(t) = -20sin(4t-π/6)
+d) v(t) = 20cos(4t-π/6)
-e) v(t) = 20sin(4t-π/6)
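A numerical sanity check (my addition) that choice (d) really is the derivative of x(t): the chain rule gives d/dt [5 sin(4t − π/6)] = 20 cos(4t − π/6).

```python
import math

x = lambda t: 5 * math.sin(4 * t - math.pi / 6)
v = lambda t: 20 * math.cos(4 * t - math.pi / 6)   # choice (d)

# central-difference derivative of x agrees with v at a few sample times
for t in (0.0, 0.7, 1.3):
    num = (x(t + 1e-6) - x(t - 1e-6)) / 2e-6
    print(t, round(num, 3), round(v(t), 3))
```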
4) A cylinder with a radius of 0.25 m and a length of 3.5 m is held so that the top circular face is 3.3 m below the water's surface. The mass of the cylinder is 922.0 kg. The mass density of water is 1000 kg/m^3. What is the pressure at the top face of the cylinder?
+ 3.23E4 Pa
- 3.92E4 Pa
- 4.75E4 Pa
- 5.75E4 Pa
- 6.97E4 Pa
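The keyed answer matches the gauge pressure (hydrostatic head only, atmospheric pressure excluded; that reading is my assumption):

```python
rho, g, h = 1000.0, 9.8, 3.3
P = rho * g * h   # gauge pressure at the depth of the top face
print(round(P))  # 32340 Pa ~ 3.23E4 Pa (choice a)
```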
5) A sled of mass 2.6 kg is on a perfectly smooth surface. A string pulls with a tension of 19.2 N. At what angle above the horizontal must the string pull in order to achieve an acceleration of 2.4 m/s2?
-a) 53.7 degrees
-b) 61.8 degrees
+c) 71 degrees
-d) 81.7 degrees
-e) 93.9 degrees
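Worked check: on a frictionless surface only the horizontal component of the tension accelerates the sled, so T cos(θ) = m a.

```python
import math

m, a, T = 2.6, 2.4, 19.2
theta = math.degrees(math.acos(m * a / T))   # T*cos(theta) = m*a
print(round(theta))  # 71 degrees (choice c)
```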
6) A car of mass 674 kg is driving on an icy road at a speed of 16 m/s, when it collides with a stationary truck. After the collision they stick and move at a speed of 5.9 m/s. What was the mass of the truck?
-a) 801 kg
-b) 961 kg
+c) 1154 kg
-d) 1385 kg
-e) 1661 kg
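Worked check via conservation of momentum for a perfectly inelastic collision:

```python
m_car, v0, v_final = 674.0, 16.0, 5.9
# m_car * v0 = (m_car + m_truck) * v_final
m_truck = m_car * v0 / v_final - m_car
print(round(m_truck))  # 1154 kg (choice c)
```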
7) A train accelerates uniformly from 12.75 m/s to 33.125 m/s, while travelling a distance of 272 m. What is the 'average' acceleration?
-a) 0.99m/s/s.
-b) 1.19m/s/s.
-c) 1.43m/s/s.
+d) 1.72m/s/s.
-e) 2.06m/s/s.
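Worked check using the kinematic relation v^2 = v0^2 + 2ad:

```python
v0, v1, d = 12.75, 33.125, 272.0
a = (v1**2 - v0**2) / (2 * d)
print(round(a, 2))  # 1.72 m/s/s (choice d)
```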
8) A 9.2 cm diameter pipe can fill a 1.6 m^3 volume in 8.0 minutes. Before exiting the pipe, the diameter is reduced to 4.0 cm (with no loss of flow rate). What is the speed in the first (wider) pipe?
+a) 5.01E-1 m/s
-b) 6.08E-1 m/s
-c) 7.36E-1 m/s
-d) 8.92E-1 m/s
-e) 1.08E0 m/s
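Worked check: the wide-pipe speed is just flow rate over cross-sectional area; the 4.0 cm constriction is irrelevant to this part.

```python
import math

V, t = 1.6, 8.0 * 60.0           # m^3 filled, seconds
Q = V / t                        # volume flow rate, m^3/s
A = math.pi * (0.092 / 2) ** 2   # cross-section of the 9.2 cm diameter pipe
v = Q / A
print(round(v, 3))  # 0.501 m/s, i.e. 5.01E-1 m/s (choice a)
```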
9) A car is traveling in a perfect circle at constant speed. If the car is headed north while turning east, the acceleration is
+a) east
-b) north
-c) west
-d) zero
-e) south
10) A particle is initially at the origin and moving in the x direction at a speed of 3.8 m/s. It has a constant acceleration of 2.1 m/s2 in the y direction, as well as an acceleration of 0.6 m/s2 in the x direction. What angle does the velocity make with the x axis at time t = 2.9 s?
-a) 31.37 degrees.
-b) 36.07 degrees.
-c) 41.48 degrees.
+d) 47.71 degrees.
-e) 54.86 degrees.
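Worked check: build both velocity components at t = 2.9 s and take the arctangent.

```python
import math

t = 2.9
vx = 3.8 + 0.6 * t   # initial x speed plus a_x * t
vy = 2.1 * t         # starts from rest in y
theta = math.degrees(math.atan2(vy, vx))
print(round(theta, 2))  # ~47.71 degrees (choice d)
```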
11) A lead-filled bicycle wheel of radius 0.56 m and mass 2.9 kg is rotating at a frequency of 1.6 revolutions per second. What is the total kinetic energy if the wheel is rotating about a stationary axis?
-a) 3.79 x 10^1 J
+b) 4.6 x 10^1 J
-c) 5.57 x 10^1 J
-d) 6.75 x 10^1 J
-e) 8.17 x 10^1 J
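The keyed answer is consistent with treating the wheel as a hoop with all mass at the rim (I = m r^2); that modeling assumption is mine.

```python
import math

m, r, f = 2.9, 0.56, 1.6
omega = 2 * math.pi * f             # rad/s
KE = 0.5 * (m * r**2) * omega**2    # hoop: I = m r^2
print(round(KE, 1))  # ~46.0 J = 4.6 x 10^1 J (choice b)
```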
12) The specific heats of water and aluminum are 4186 and 900, respectively, where the units are J/kg/Celsius. An aluminum container of mass 0.95 kg is filled with 0.19 kg of water. You are consulting for the Flat Earth Society, a group of people who believe that the acceleration of gravity equals 9.8 m/s/s at all altitudes. Based on this assumption, from what height must the water and container be dropped to achieve the same change in temperature? (For comparison, Earth's radius is 6,371 kilometers)
-a) 5.24 x 10^0 km
+b) 6.35 x 10^0 km
-c) 7.7 x 10^0 km
-d) 9.32 x 10^0 km
-e) 1.13 x 10^1 km
13) Integrate the function, ${\displaystyle {\vec {F}}=-x^{3}y^{4}{\hat {x}}+x^{4}y^{4}{\hat {y}}}$, as a line integral around a unit square with corners at (0,0),(1,0),(1,1),(0,1). Orient the path so its direction is out of the paper by the right-hand rule.
- a) 3.43E-01
- b) 3.67E-01
- c) 3.93E-01
- d) 4.21E-01
+ e) 4.50E-01
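Worked check via Green's theorem: the line integral equals the area integral of ∂F_y/∂x − ∂F_x/∂y = 4x^3 y^4 + 4x^3 y^3 over the unit square, whose exact value is 4·(1/4)(1/5) + 4·(1/4)(1/4) = 0.2 + 0.25 = 0.45.

```python
# midpoint Riemann sum of the curl over the unit square
n = 400
h = 1.0 / n
total = 0.0
for i in range(n):
    for j in range(n):
        x, y = (i + 0.5) * h, (j + 0.5) * h
        total += (4 * x**3 * y**4 + 4 * x**3 * y**3) * h * h
print(round(total, 3))  # 0.45 (choice e)
```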
14) The spring constant is 621 N/m, and the initial compression is 0.14 m. What is the mass if the cart reaches a height of 3.01 m before coming to rest?
- a) 0.187 kg
- b) 0.196 kg
+ c) 0.206 kg
- d) 0.217 kg
- e) 0.227 kg
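Worked check via energy conservation: spring potential energy converts entirely to gravitational potential energy, (1/2) k x^2 = m g h.

```python
k, x, g, h = 621.0, 0.14, 9.8, 3.01
m = k * x**2 / (2 * g * h)
print(round(m, 3))  # 0.206 kg (choice c)
```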
15) What is the gravitational acceleration on a planet that is 2.21 times more massive than Earth, with a radius 1.74 times greater than Earth's?
-a) 4.1 m/s2
-b) 4.7 m/s2
-c) 5.4 m/s2
-d) 6.2 m/s2
+e) 7.2 m/s2
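Worked check: surface gravity scales as M/R^2.

```python
g_earth = 9.8
g_planet = g_earth * 2.21 / 1.74**2   # g is proportional to M / R^2
print(round(g_planet, 1))  # 7.2 m/s^2 (choice e)
```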
https://www.clutchprep.com/chemistry/practice-problems/66571/5-00-moles-of-an-ideal-gas-are-contained-in-a-cylinder-with-a-constant-external-
# Problem: 5.00 moles of an ideal gas are contained in a cylinder with a constant external pressure of 1.00 atm and at a temperature of 593 K by a movable, frictionless piston. This system is cooled to 504 K.i) Calculate the work done on or by the system. ii) Given that the molar heat capacity (C) of an ideal gas is 20.8 J/mol K, calculate q (J), the heat that flows into or out of the system.
###### Problem Details
5.00 moles of an ideal gas are contained in a cylinder with a constant external pressure of 1.00 atm and at a temperature of 593 K by a movable, frictionless piston. This system is cooled to 504 K.
i) Calculate the work done on or by the system.
ii) Given that the molar heat capacity (C) of an ideal gas is 20.8 J/mol K, calculate q (J), the heat that flows into or out of the system.
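A worked sketch of both parts (my addition): at constant external pressure, w = −P_ext ΔV, and for an ideal gas P_ext ΔV = nR ΔT; for q I treat the given 20.8 J/mol·K as the heat capacity applicable to this constant-pressure path (the problem does not label it Cp or Cv, so that is an assumption).

```python
n, R, C = 5.00, 8.314, 20.8
T1, T2 = 593.0, 504.0
# (i) w = -P_ext * dV = -n * R * dT for an ideal gas at constant external pressure
w = -n * R * (T2 - T1)
# (ii) q = n * C * dT along the same path
q = n * C * (T2 - T1)
print(round(w, 1), round(q, 1))  # w = 3699.7 J (work done ON the gas), q = -9256.0 J (heat released)
```

The gas is compressed as it cools, so the surroundings do positive work on it while heat flows out.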
https://math.stackexchange.com/questions/1191420/why-is-dn-sim-homeomorphic-to-mathbbrpn
# Why is $D^n/\sim$ homeomorphic to $\mathbb{RP}^n$?
Let $\sim$ be the equivalence relation on $D^n$ (the $n$-dimensional unit disc) which identifies antipodal points on the boundary $\partial D^n = S^{n-1}$. Show that $D^n/\sim$ is homeomorphic to $\mathbb{RP}^n$, the real projective space of dimension $n$.
Edit: We define $\mathbb{RP}^n$ as the set of real lines through the origin in $\mathbb{R}^{n+1}$, with the origin deleted, with the quotient topology determined by the map which sends a nonzero vector in $\mathbb{R}^{n+1}$ to the line containing it.
• but you didn't specify the topology. – Ittay Weiss Mar 15 '15 at 21:13
• can you think of a function from the set of lines you describe to the set of pairs of points in $D^n$? (hint: intersection). Are these points always antipodal (hint: yes). What does that tell you? – Ittay Weiss Mar 15 '15 at 21:14
• The function which sends a line (in, say, $\mathbb{R}^n \times \{0\}$) to its intersection with $\partial D^n$, which is a pair of antipodal points in $D^n$. I'm still thinking about what this tells you. – Randy Randerson Mar 15 '15 at 21:18
• I'm not sure what it tells you. Can you say what it tells you about? – Randy Randerson Mar 15 '15 at 21:26
• it tells you that you should review what quotient spaces are. – Ittay Weiss Mar 15 '15 at 22:03
Take these local coordinates to prove that $D^n/\sim\; \cong \mathbb{RP}^n$ (writing $\mathbb{RP}^n = P(\mathbb{R}^{n+1})$): \begin{gathered} {U_i} = \{\, [x_0, \cdots, x_n] \mid x_i \ne 0 \,\} \subset P(\mathbb{R}^{n+1}) \hfill \\ \varphi_i : U_i \to \mathbb{R}^n, \quad [x_0, \cdots, x_n] \mapsto \left( \frac{x_0}{x_i}, \cdots, \frac{x_{i-1}}{x_i}, \frac{x_{i+1}}{x_i}, \cdots, \frac{x_n}{x_i} \right) \hfill \\ \psi_i : \mathbb{R}^n \to U_i, \quad (a_1, \cdots, a_n) \mapsto [a_1, \cdots, a_{i-1}, 1, a_i, \cdots, a_n] \hfill \\ \end{gathered}
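For completeness, one standard way to establish the homeomorphism itself (this sketch is my addition, not part of the original answer):

```latex
% Parametrize the closed upper hemisphere of S^n by the disc:
\[
  f : D^n \to \mathbb{RP}^n, \qquad
  f(x) = \bigl[\, x_1, \dots, x_n, \sqrt{1 - \lVert x \rVert^2} \,\bigr].
\]
% f is continuous and surjective: every line through the origin in
% R^{n+1} meets the closed upper hemisphere.
% f(x) = f(y) iff x = y, or x, y lie on the boundary S^{n-1} with x = -y,
% so f descends to a continuous bijection  D^n/\sim \;\to\; \mathbb{RP}^n.
% D^n/\sim is compact (quotient of a compact space) and \mathbb{RP}^n is
% Hausdorff, hence the continuous bijection is a homeomorphism.
```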
http://journal.psych.ac.cn/xlkxjz/article/2018/1671-3710/1671-3710-26-12-2272.shtml
## Regression mixture modeling: Advances in method and its implementation
1. Department of Psychology, Guangzhou University
2. The Center for Psychometric and Latent Variable Modeling, Guangzhou University
3. The Key Laboratory for Juveniles Mental Health and Educational Neuroscience in Guangdong Province, Guangzhou University, Guangzhou 510006, China
4. School of Sociology, China University of Political Science and Law, Beijing 102249, China
Funding: National Natural Science Foundation of China (31400904); Guangzhou University "Innovation and Strengthening University Project" Young Innovative Talents Program (2014WQNCX069); Guangzhou University Top Young Talents Program (BJ201715).
Abstract
The person-centered methods, including latent class analysis (LCA) and latent profile analysis (LPA), are increasingly popular in recent years. Researchers often add covariate variables (i.e., predictor and distal variables) into LCA and LPA models. This kind of models are also called regression mixture models. In this paper, we introduce several new methods. Those methods include (1) the LTB method proposed by Lanza, Tan and Bray (2013) to model categorical outcome variables; and (2) the BCH method proposed by Bolck, Croon and Hagenaars (2004) to deal with continuous distal variables. Using an empirical example, we demonstrate the process of analyses in Mplus. The future directions of those new methods were also discussed.
Keywords: person-centered method ; mixture modeling ; latent class analysis ; latent variable modeling ; Mplus
WANG Meng-Cheng, BI Xiangyang. (2018). Regression mixture modeling: Advances in method and its implementation. Advances in Psychological Science, 26(12), 2272-2280
## 1 Latent Class Models
### Figure 1
$p\left( {{Y}_{i}}\text{ }\!\!|\!\!\text{ }{{c}_{i}}=k \right)=\underset{j=1}{\overset{J}{\mathop \prod }}\,p\left( {{Y}_{ij}}\text{ }\!\!|\!\!\text{ }{{c}_{i}}=k \right)$
$P\left( {{\text{Y}}_{i}} \right)=\underset{t=1}{\overset{T}{\mathop \sum }}\,P\left( C=t \right)\underset{k=1}{\overset{K}{\mathop \prod }}\,P\left( {{Y}_{ik}}\text{ }\!\!|\!\!\text{ }C=t \right)$
$p\left( {{c}_{i}}=k \right)$ denotes the proportion of the population belonging to latent class k, also called the latent class probability.
### 2.1 Latent class models with predictor variables
(1) One-step method
$P\left( {{\text{Y}}_{i}}\text{ }\!\!|\!\!\text{ }{{Z}_{i}} \right)=\underset{t=1}{\overset{T}{\mathop \sum }}\,P\left( C=t\text{ }\!\!|\!\!\text{ }{{Z}_{i}} \right)\underset{k=1}{\overset{K}{\mathop \prod }}\,P\left( {{Y}_{ik}}\text{ }\!\!|\!\!\text{ }C=t \right)$
$P\left( C=t\text{ }\!\!|\!\!\text{ }{{Z}_{i}} \right)$ is the probability of belonging to latent class t given covariate Z; it can be obtained from a multinomial logistic regression (Bakk & Vermunt, 2016):
$P\left( C=t\text{ }\!\!|\!\!\text{ }{{Z}_{i}} \right)=\frac{{{\text{e}}^{{{\alpha }_{t}}+{{\beta }_{t}}{{Z}_{i}}}}}{\sum\limits_{s=1}^{T}{{{\text{e}}^{{{\alpha }_{s}}+{{\beta }_{s}}{{Z}_{i}}}}}}$
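As a concrete illustration, the multinomial logit above is just a softmax over class-specific linear predictors; a minimal Python sketch (all coefficient values here are made up for illustration):

```python
import math

def class_probs(z, alphas, betas):
    """P(C = t | Z = z) from a multinomial logit with class-specific
    intercepts alpha_t and slopes beta_t."""
    logits = [a + b * z for a, b in zip(alphas, betas)]
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

# hypothetical 2-class example; last class is the reference (alpha = beta = 0)
p = class_probs(z=1.5, alphas=[0.2, 0.0], betas=[0.8, 0.0])
print(p)  # class probabilities, summing to 1
```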
(2) Naive three-step method
### Figure 3
(3) Probability regression and weighted probability regression
(4) Pseudo-class method
LCA assigns individuals to classes using the posterior probabilities from a single analysis, a practice subject to sampling error (analogous to relying on a point estimate in parameter estimation, where interval estimates are normally used to account for sampling error). The pseudo-class (PC) method, similar to multiple imputation for missing data, draws several (typically 20) plausible values from each individual's posterior probability distribution (multiple draws are taken because of classification uncertainty), assigns the individual to a class on each draw, and then averages the results across draws to obtain the final classification (Wang, Brown, & Bandeen-Roche, 2005).
Simulations by Clark and Muthén (2009) found that the method performs well when classification accuracy is high (entropy > 0.8); however, a more recent simulation study found that, compared with the robust three-step and one-step methods, the pseudo-class method performed worst under the same conditions (Asparouhov & Muthén, 2014), so it is not recommended in practice.
(5) Robust three-step method (MML)
### Figure 4
${{p}_{{{C}_{1}},{{C}_{2}}}}=P\left( C={{c}_{2}}\text{ }\!\!|\!\!\text{ }N={{c}_{1}} \right)=\frac{1}{{{N}_{{{C}_{1}}}}}\underset{{{N}_{i}}={{c}_{1}}}{\mathop \sum }\,P\left( {{C}_{i}}={{c}_{2}}\text{ }\!\!|\!\!\text{ }{{U}_{i}} \right)$
${{q}_{{{c}_{2}},{{c}_{1}}}}=P\left( N={{c}_{1}}\text{ }\!\!|\!\!\text{ }C={{c}_{2}} \right)=\frac{{{p}_{{{c}_{1}},{{c}_{2}}}}{{N}_{{{c}_{1}}}}}{\mathop{\sum }^{}{{~}_{c}}{{p}_{c,{{c}_{2}}}}{{N}_{c}}}$
$N_c$ is the number of individuals assigned to class C on the basis of N. The robust three-step method uses $\text{log}\left( {{q}_{{{c}_{1}},{{c}_{2}}}}/{{q}_{k,{{c}_{2}}}} \right)$ as the weight when N is used to estimate C.
(6) Modified BCH method
The BCH method was first proposed by Bolck et al. (2004) to handle LCA with categorical predictor variables. Its logic parallels the robust three-step method; the difference is that the third step of the robust three-step method uses maximum likelihood estimation, whereas BCH converts that step into a weighted ANOVA with the classification errors serving as weights.
A drawback of the BCH method is that the within-class error variances can be negative when class separation is small or the sample is small. In that case, fixing the within-class variances to be equal still yields correct class-specific means of the outcome variable (Bakk & Vermunt, 2016).
### 2.2 LCA with outcome variables
2.2.1 Continuous outcome variables
(1) One-step method
$P\left( {{\text{Y}}_{i}}\text{ }\!\!|\!\!\text{ }{{Z}_{i}} \right)=\underset{t=1}{\overset{T}{\mathop \sum }}\,P\left( C=t \right)\underset{k=1}{\overset{K}{\mathop \prod }}\,P\left( {{Y}_{ik}}\text{ }\!\!|\!\!\text{ }C=t \right)f\left( {{Z}_{i}}\text{ }\!\!|\!\!\text{ }C=t \right)$
$f\left( {{Z}_{i}}\text{ }\!\!|\!\!\text{ }X=t \right)$ is the within-class distribution of covariate Z: a normal distribution for a continuous variable, and a multivariate normal distribution when there are several continuous variables.
(2) LTB method
Lanza et al. (2013) recently proposed a new method that avoids the inaccuracy the one-step method suffers when its assumptions are violated, because it makes no specific distributional assumption. In the LTB method, the outcome variable Z is first entered into the LCA as a covariate (the same procedure as the one-step method with predictors); the workflow is shown in Figure 5.
### Figure 5
${{\mu }_{t}}=\underset{Z}{\overset{~}{\mathop \int }}\,Z~f\left( Z\text{ }\!\!|\!\!\text{ }C=t \right)$
$f\left( Z\text{ }\!\!|\!\!\text{ }C=t \right)=\frac{f\left( Z \right)P\left( C=t\text{ }\!\!|\!\!\text{ }Z \right)}{P\left( C=t \right)}$
${{\mu }_{t}}=\underset{i=1}{\overset{N}{\mathop \sum }}\,{{Z}_{i}}\frac{P\left( C=t\text{ }\!\!|\!\!\text{ }{{Z}_{i}} \right)}{N~~P\left( C=t \right)}$
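The estimator above is simply a posterior-probability-weighted mean of the outcome; a minimal sketch with made-up numbers:

```python
# mu_t = sum_i Z_i * P(C=t | Z_i) / (N * P(C=t)),
# where P(C=t) is the marginal class probability. Values are illustrative only.
Z = [1.0, 2.0, 3.0, 4.0]         # outcome values
post_t = [0.9, 0.8, 0.2, 0.1]    # posterior P(C=t | Z_i) for one class t
N = len(Z)
P_t = sum(post_t) / N            # marginal probability of class t
mu_t = sum(z * p for z, p in zip(Z, post_t)) / (N * P_t)
print(mu_t)  # class-t mean of Z, weighted by posterior membership
```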
Lanza et al. (2013) did not provide a standard-error formula for ${{\mu }_{t}}$. Asparouhov and Muthén (2014) suggested dividing the root of the class-specific variance by the class-specific sample size, but simulation studies found that this underestimates the standard error (Bakk & Vermunt, 2016). Bakk, Oberski, and Vermunt (2016) subsequently proposed jackknife and bootstrap resampling standard errors.
(3) Modified LTB method
$P\left( {{N}_{i}}=s\text{ }\!\!|\!\!\text{ }{{Z}_{i}} \right)=\underset{~}{\overset{T}{\mathop \sum }}\,P\left( C=t\text{ }\!\!|\!\!\text{ }{{Z}_{i}} \right)P\left( {{N}_{i}}=s\text{ }\!\!|\!\!\text{ }C=t \right)$
### Figure 6
$P\left( C=t\text{ }\!\!|\!\!\text{ }{{Z}_{i}} \right)=\frac{\text{exp}\left( {{\alpha }_{t}}+{{\beta }_{t}}{{Z}_{i}}+{{\gamma }_{t}}Z_{i}^{2} \right)}{\sum\limits_{{t}'=1}^{T}{\text{exp}\left( {{\alpha }_{{{t}'}}}+{{\beta }_{{{t}'}}}{{Z}_{i}}+{{\gamma }_{t}}Z_{i}^{2} \right)}}$
(4) Modified BCH method
(5) Robust three-step method
$P\left( N=s\text{ }\!\!|\!\!\text{ }{{Z}_{i}} \right)=\underset{t=1}{\overset{T}{\mathop \sum }}\,P\left( C=t \right)f\left( {{Z}_{i}}\text{ }\!\!|\!\!\text{ }C=t \right)P\left( N=s\text{ }\!\!|\!\!\text{ }C=t \right)$
$P\left( N=s\text{ }\!\!|\!\!\text{ }C=t \right)$ is fixed at the classification-accuracy parameters estimated in the second step, and $f\left( {{Z}_{i}}\text{ }\!\!|\!\!\text{ }C=t \right)$ is typically assumed normal. As noted above, the purpose of LCA with a continuous outcome is to estimate mean differences in the outcome across the latent classes, but the outcome's variance may be equal or unequal across classes (analogous to the homogeneity-of-variance assumption in ANOVA). The robust three-step method therefore has two variants: homogeneous within-class variances and heterogeneous within-class variances.
2.2.2 Categorical outcome variables
The LTB method performs well with categorical outcome variables and does not suffer the estimation bias that arises with continuous outcomes when normality and variance-homogeneity assumptions are violated. In simulations by Asparouhov and Muthén (2014) covering three sample sizes (N = 200, 500, 2000) and two levels of classification accuracy (entropy = 0.5 and 0.65), notable bias appeared only when N = 200 and entropy = 0.5.
### 2.3 Summary of applicable scenarios for regression mixture methods
| Method | Mplus `Auxiliary=()` option | Recommendation |
| --- | --- | --- |
| LTB | DCAT | One of the best methods for categorical outcome variables; recommended. |
| BCH | BCH | One of the best methods for continuous outcome variables; use when DU3STEP fails to report results. |
| LTB | DCON | Sensitive to its assumptions; distorts estimates when they are violated; not recommended. |
| PC method | E | Poor accuracy; not recommended in practice. |
## 3 Empirical Example
(1) Latent class analysis
```
Title: Latent Class Analysis
Data: File is older_survey.dat;
Variable:
  Names = C2A C2B C2C C2D C2E C2F C2G C2H C2I C2J C2K C2L C2M C2N C2P C2Q
          ifold age gds agesq;    ! agesq: age-squared term (/100)
  USEVARIABLES = C2A-C2Q;
  MISSING are all (-9999);
  CATEGORICAL = C2A-C2Q;
  CLASSES = C (2);
Analysis:
  TYPE = MIXTURE;
  Starts = 50 3;
  PROCESSORS = 4;                 ! set according to your machine
Plot:
  TYPE = PLOT3;
  SERIES = C2A-C2Q (*);
Savedata:
  file is older_survey.txt;
  save is cprob;
Output: tech11 tech14;
```
### Figure 7
(2) Regression mixture model with a predictor variable
```
Title: Regression Mixture Modeling with Predictive Variable
Data: File is older_survey.dat;
Variable:
  Names = C2A C2B C2C C2D C2E C2F C2G C2H C2I C2J C2K C2L C2M C2N C2P C2Q
          ifold age gds agesq;
  USEVARIABLES = C2A-C2Q;
  MISSING are all (-9999);
  CATEGORICAL = C2A-C2Q;
  CLASSES = C (2);
  AUXILIARY = age (R3STEP);       ! robust three-step method
Analysis:
  TYPE = MIXTURE;
  PROCESSORS = 4;
Plot:
  TYPE = PLOT3;
  SERIES = C2A-C2Q (*);
Savedata:
  file is older_survey.txt;
  save is cprob;
Output: tech11 tech14;
```
```
TESTS OF CATEGORICAL LATENT VARIABLE MULTINOMIAL LOGISTIC REGRESSIONS
USING THE 3-STEP PROCEDURE

                                                   Two-Tailed
                   Estimate       S.E.  Est./S.E.    P-Value

 C#1      ON
    AGE               0.153      0.014     11.219      0.000

 Intercepts
    C#1             -12.935      1.031    -12.541      0.000
```
(3) Regression mixture model with a categorical outcome variable
```
Title: Regression Mixture Modeling with categorical outcome variable
Data: File is older_survey.dat;
Variable:
  Names = C2A C2B C2C C2D C2E C2F C2G C2H C2I C2J C2K C2L C2M C2N C2P C2Q
          ifold age gds agesq;
  USEVARIABLES = C2A-C2Q;
  MISSING are all (-9999);
  CATEGORICAL = C2A-C2Q;
  CLASSES = C (2);
  AUXILIARY = ifold (DCAT);       ! DCAT method
Analysis:
  TYPE = MIXTURE;
  PROCESSORS = 4;
  LRTSTARTS = 2 1 80 16;
Plot:
  TYPE = PLOT3;
  SERIES = C2A-C2Q (*);
Savedata:
  file is older_survey.txt;
  save is cprob;
Output: tech11 tech14;
```
```
EQUALITY TESTS OF MEANS/PROBABILITIES ACROSS CLASSES

IFOLD
               Prob      S.E.  Odds Ratio     S.E.  2.5% C.I.  97.5% C.I.

 Class 1
   Category 1  0.265    0.033       1.000    0.000      1.000       1.000
   Category 2  0.735    0.0337      2.133    0.389      1.492       3.049

 Class 2
   Category 1  0.435    0.016       1.000    0.000      1.000       1.000
   Category 2  0.565    0.016       1.000    0.000      1.000       1.000
```
(4) Regression mixture model with a continuous outcome variable
```
Title: Regression Mixture Modeling with continuous outcome variable
Data: File is older_survey.dat;
Variable:
  Names = C2A C2B C2C C2D C2E C2F C2G C2H C2I C2J C2K C2L C2M C2N C2P C2Q
          ifold age gds agesq;
  USEVARIABLES = C2A-C2Q;
  MISSING are all (-9999);
  CATEGORICAL = C2A-C2Q;
  CLASSES = C (2);
  AUXILIARY = gds (BCH);          ! BCH method
Analysis:
  TYPE = MIXTURE;
  PROCESSORS = 4;
  LRTSTARTS = 2 1 80 16;          ! needed for tech14
Plot:
  TYPE = PLOT3;
  SERIES = C2A-C2Q (*);
Savedata:
  file is older_survey.txt;
  save is cprob;
Output: tech11 tech14;
```
```
EQUALITY TESTS OF MEANS ACROSS CLASSES USING THE BCH PROCEDURE
WITH 1 DEGREE(S) OF FREEDOM FOR THE OVERALL TEST

GDS
                  Mean      S.E.
 Class 1         4.540     0.211
 Class 2         2.903     0.075

                 Chi-Square    P-Value
 Overall test        52.233      0.000
```
## 4 Summary and Future Directions
The authors have declared that no competing interests exist.
## References
Asparouhov, T., &MuthÉn, B. ( 2014).
Auxiliary variables in mixture modeling: Three-step approaches using M plus
Structural Equation Modeling, 21( 3), 329-341.
This article discusses alternatives to single-step mixture modeling. A 3-step method for latent class predictor variables is studied in several different settings, including latent class analysis, latent transition analysis, and growth mixture modeling. It is explored under violations of its assumptions such as with direct effects from predictors to latent class indicators. The 3-step method is also considered for distal variables. The Lanza, Tan, and Bray (2013) method for distal variables is studied under several conditions including violations of its assumptions. Standard errors are also developed for the Lanza method because these were not given in Lanza et al. (2013).
Asparouhov, T., &MuthÉn, B(2015 ).
Auxiliary Variables in Mixture Modeling: Using the BCH Method in Mplus to Estimate a Distal Outcome Model and an Arbitrary Secondary Model
Bakk Z., Oberski D. L., &Vermunt J. K . ( 2016).
Relating latent class membership to continuous distal outcomes: Improving the LTB approach and a modified three-step implementation
Structural Equation Modeling, 23( 2), 278-289.
Latent class analysis often aims to relate the classes to continuous external consequences (“distal outcomes”), but estimating such relationships necessitates distributional assumptions. Lanza, Tan, and Bray (2013) suggested circumventing such assumptions with their LTB approach: Linear logistic regression of latent class membership on each distal outcome is first used, after which this estimated relationship is reversed using Bayes’ rule. However, the LTB approach currently has 3 drawbacks, which we address in this article. First, LTB interchanges the assumption of normality for one of homoskedasticity, or, equivalently, of linearity of the logistic regression, leading to bias. Fortunately, we show introducing higher order terms prevents this bias. Second, we improve coverage rates by replacing approximate standard errors with resampling methods. Finally, we introduce a bias-corrected 3-step version of LTB as a practical alternative to standard LTB. The improved LTB methods are validated by a simulation study, and an example application demonstrates their usefulness.
Bakk Z., Tekle F. B., &Vermunt J. K . ( 2013).
Estimating the association between latent class membership and external variables using bias-adjusted three-step approaches
Sociological methodology.43( 1), 272-311.
Bakk, Z., &Vermunt, J.K. ( 2016).
Robustness of stepwise latent class modeling with continuous distal outcomes
Structural Equation Modeling, 23( 1), 20-31.
Recently, several bias-adjusted stepwise approaches to latent class modeling with continuous distal outcomes have been proposed in the literature and implemented in generally available software for latent class analysis. In this article, we investigate the robustness of these methods to violations of underlying model assumptions by means of a simulation study. Although each of the 4 investigated methods yields unbiased estimates of the class-specific means of distal outcomes when the underlying assumptions hold, 3 of the methods could fail to different degrees when assumptions are violated. Based on our study, we provide recommendations on which method to use under what circumstances. The differences between the various stepwise latent class approaches are illustrated by means of a real data application on outcomes related to recidivism for clusters of juvenile offenders.
Bauer, D.J., &Curran, P.J . ( 2003).
Distributional assumptions of growth mixture models: Implications for overextraction of latent trajectory classes
Psychological Methods, 8( 3), 338-363.
URL PMID:14596495
Abstract Growth mixture models are often used to determine if subgroups exist within the population that follow qualitatively distinct developmental trajectories. However, statistical theory developed for finite normal mixture models suggests that latent trajectory classes can be estimated even in the absence of population heterogeneity if the distribution of the repeated measures is nonnormal. By drawing on this theory, this article demonstrates that multiple trajectory classes can be estimated and appear optimal for nonnormal data even when only 1 group exists in the population. Further, the within-class parameter estimates obtained from these models are largely uninterpretable. Significant predictive relationships may be obscured or spurious relationships identified. The implications of these results for applied research are highlighted, and future directions for quantitative developments are suggested.
Bolck A., Croon M., &Hagenaars J . ( 2004).
Estimating latent structure models with categorical variables: One-step versus three-step estimators
Political Analysis, 12( 1), 3-27.
We study the properties of a three-step approach to estimating the parameters of a latent structure model for categorical data and propose a simple correction for a common source of bias. Such models have a measurement part (essentially the latent class model) and a structural (causal) part (essentially a system of logit equations). In the three-step approach, a stand-alone measurement model is first defined and its parameters are estimated. Individual predicted scores on the latent variables are then computed from the parameter estimates of the measurement model and the individual observed scoring patterns on the indicators. Finally, these predicted scores are used in the causal part and treated as observed variables. We show that such a naive use of predicted latent scores cannot be recommended since it leads to a systematic underestimation of the strength of the association among the variables in the structural part of the models. However, a simple correction procedure can eliminate this systematic bias. This approach is illustrated on simulated and real data. A method that uses multiple imputation to account for the fact that the predicted latent variables are random variables can produce standard errors for the parameters in the structural part of the model.
Clark, S.L., &MuthÉn, B . ( 2009).
Relating latent class analysis results to variables not included in the analysis
Collins, L.M., &Lanza, S.T . ( 2010).
Latent class and latent transition analysis: With applications in the social, behavioral, and health sciences
. New York: Wiley.
Lanza S. T., Tan X., & Bray B. C . ( 2013).
Latent class analysis with distal outcomes: A flexible model-based approach
Structural Equation Modeling, 20( 1), 1-26.
URL PMID:4240499
Although prediction of class membership from observed variables in latent class analysis is well understood, predicting an observed distal outcome from latent class membership is more complicated. A flexible model-based approach is proposed to empirically derive and summarize the class-dependent density functions of distal outcomes with categorical, continuous, or count distributions. A Monte Carlo simulation study is conducted to compare the performance of the new technique to 2 commonly used classify-analyze techniques: maximum-probability assignment and multiple pseudoclass draws. Simulation results show that the model-based approach produces substantially less biased estimates of the effect compared to either classify-analyze technique, particularly when the association between the latent class variable and the distal outcome is strong. In addition, we show that only the model-based approach is consistent. The approach is demonstrated empirically: latent classes of adolescent depression are used to predict smoking, grades, and delinquency. SAS syntax for implementing this approach using PROC LCA and a corresponding macro are provided.
Morin A. J. S., Morizot J., Boudrias J-S., &Madore I . ( 2011).
A multifoci person-centered perspective on workplace affective commitment: A latent profile/factor mixture analysis
Organizational Research Methods,14( 1), 58-90.
ABSTRACT The current study aims to explore the usefulness of a person-centered perspective to the study of workplace affective commitment (WAC). Five distinct profiles of employees were hypothesized based on their levels of WAC directed toward seven foci (organization, workgroup, supervisor, customers, job, work, and career). This study applied latent profile analyses and factor mixture analyses to a sample of 404 Canadian workers. The construct validity of the extracted latent profiles was verified by their associations with multiple predictors (gender, age, tenure, social relationships at work, workplace satisfaction, and organizational justice perceptions) and outcomes (in-role performance, organizational citizenship behaviors, and intent to quit). The analyses confirmed that a model with five latent profiles adequately represented the data: (a) highly committed toward all foci; (b) weakly committed toward all foci; (c) committed to their supervisor and moderately committed to the other foci; and (d) committed to their career and moderately uncommitted to the other foci; (e) committed mostly to their proximal work environment. These latent profiles present theoretically coherent patterns of associations with the predictors and outcomes, which suggests their adequate construct validity. [ABSTRACT FROM AUTHOR] Copyright of Organizational Research Methods is the property of Sage Publications Inc. and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
Sterba, S.K. ( 2013).
Multivariate Behavioral Research, 48( 6), 775-815.
URL PMID:26745595
The methodological literature on mixture modeling has rapidly expanded in the past 15 years, and mixture models are increasingly applied in practice. Nonetheless, this literature has historically been diffuse, with different notations, motivations, and parameterizations making mixture models appear disconnected. This pedagogical review facilitates an integrative understanding of mixture models. First, 5 prototypic mixture models are presented in a unified format with incremental complexity while highlighting their mutual reliance on familiar probability laws, common assumptions, and shared aspects of interpretation. Second, 2 recent extensionshybrid mixtures and parallel-process mixturesare discussed. Both relax a key assumption of classic mixture models but do so in different ways. Similarities in construction and interpretation among hybrid mixtures and among parallel-process mixtures are emphasized. Third, the combination of both extensions is motivated and illustrated by means of an example on oppositional defiant and depressive symptoms. By clarifying how existing mixture models relate and can be combined, this article bridges past and current developments and provides a foundation for understanding new developments.
Vermunt, J.K. ( 2010).
Latent class modeling with covariates: Two improved three-step approaches
Political Analysis, 18, 450-469.
Researchers using latent class (LC) analysis often proceed using the following three steps: (1) an LC model is built for a set of response variables, (2) subjects are assigned to LCs based on their posterior class membership probabilities, and (3) the association between the assigned class membership and external variables is investigated using simple cross-tabulations or multinomial logistic regression analysis. Bolck, Croon, and Hagenaars (2004) demonstrated that such a three-step approach underestimates the associations between covariates and class membership. They proposed resolving this problem by means of a specific correction method that involves modifying the third step. In this article, I extend the correction method of Bolck, Croon, and Hagenaars by showing that it involves maximizing a weighted log-likelihood function for clustered data. This conceptualization makes it possible to apply the method not only with categorical but also with continuous explanatory variables, to obtain correct tests using complex sampling variance estimation methods, and to implement it in standard software for logistic regression analysis. In addition, a new maximum likelihood (ML)u2014based correction method is proposed, which is more direct in the sense that it does not require analyzing weighted data. This new three-step ML method can be easily implemented in software for LC analysis. The reported simulation study shows that both correction methods perform very well in the sense that their parameter estimates and their SEs can be trusted, except for situations with very poorly separated classes. The main advantage of the ML method compared with the Bolck, Croon, and Hagenaars approach is that it is much more efficient and almost as efficient as one-step ML estimation.
Wang, C.-P., Brown, C. H., & Bandeen-Roche, K. (2005).
Residual diagnostics for growth mixture models: Examining the impact of a preventive intervention on multiple trajectories of aggressive behavior
Journal of the American Statistical Association, 100( 471), 1054-1076.
Growth mixture modeling has become a prominent tool for studying the heterogeneity of developmental trajectories within a population. In this article we develop graphical diagnostics to detect misspecification in growth mixture models regarding the number of growth classes, growth trajectory means, and covariance structures. For each model misspecification, we propose a different type of empirical Bayes residual to quantify the departure. Our procedure begins by imputing multiple independent sets of growth classes for the sample. Then, from these so-called "pseudoclass" draws, we form diagnostic plots to examine the averaged empirical distributions of residuals in each such class. Our proposals draw on the property that each single set of pseudoclass adjusted residuals is asymptotically normal with known mean and (co)variance when the underlying model is correct. These methods are justified in simulation studies involving two classes of linear growth curves that also differ by their covariance structures. These are then applied to longitudinal data from a randomized field trial that tests whether children's trajectories of aggressive behavior could be modified during elementary and middle school. Our diagnostics lead to a solution involving a mixture of three growth classes. When comparing the diagnostics obtained from multiple pseudoclasses with those from multiple imputations, we show the computational advantage of the former and obtain a criterion for determining the minimum number of pseudoclass draws.
https://yutsumura.com/tag/infinite-group/
# Tagged: infinite group
## Problem 594
Is it possible that each element of an infinite group has a finite order?
If so, give an example. Otherwise, prove the non-existence of such a group.
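For intuition, here is an illustrative sketch (an added example, not part of the original problem page): the direct sum $\bigoplus_{i \in \mathbb{N}} \mathbb{Z}/2\mathbb{Z}$ is an infinite group in which every element has finite order. Its elements can be modeled as finite subsets of $\mathbb{N}$ (the supports of the binary sequences), with symmetric difference as the group operation:

```python
# Elements of the direct sum  ⊕_{i in N} Z/2Z  modeled as finite subsets
# of N (the support), with symmetric difference as the group law.
# The identity is the empty set, and g + g = {} for every g, so every
# element has order 1 or 2 -- yet there are infinitely many elements.

def add(g, h):
    return g ^ h  # symmetric difference = coordinatewise addition mod 2

def order(g, identity=frozenset()):
    """Smallest n >= 1 with n*g = identity (terminates here since n <= 2)."""
    power, n = g, 1
    while power != identity:
        power, n = add(power, g), n + 1
    return n

g = frozenset({0, 5, 7})
print(order(g))            # 2
print(order(frozenset()))  # 1
```

Since the supports can be arbitrary finite subsets of $\mathbb{N}$, the group is infinite while every element order is bounded by 2.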
https://intelligencemission.com/free-electricity-invention-free-electricity-machine.html
In my opinion, if somebody would build a power generating device, and would manufacture and sell it in stores, then everybody would be buying it and installing it in their houses and cars. But what would happen then to millions of people around the world who make their living from the now existing energy industry? I think if something like that would happen, the world would be in chaos. I have one more question. We are all building motors that all run with the repel end of the magnets only. I have read a lot on magnets and their fields, and one thing I read a lot about is that if used this way all the time the magnets lose their power quickly; if they both attract and repel then they stay in balance and last much longer. My question is: in repel mode, how long will they last? If it's not very long, then the cost of the magnets makes the motor not worth building, unless we can come up with a way to use both poles, which as far as I can see might be impossible.
Many idiots on many science online forums tried to insult me and tried to prove my logical, valid gravity power engine concept wrong by illogically saying that "gravity is a force, not a source of energy." How foolish that idiot's statement appears to be. Interesting posts, pro and con. However, in the end, one will be judged on their ability to engineer and fabricate a working model of a magnetic motor. If someone is successful, then we won't see specifics here; rather, a person would be foolish if they didn't follow the legal procedures for both patent and production. The laws of science are not sacrosanct; rather, they will be modified as needed, if needed, when the scientific method proves a change is necessary. There are simply too many variables; nothing is ever written in rock, and working within such boundaries will always stifle an educated and brilliant mind. How could it be otherwise, especially when one considers that the heart of a magnetic motor is dependent on both magnetism and gravity, terms that even today science refers to only as "a force," having absolutely no idea why the phenomena exist nor what they are. To all: beware oil companies, and beware small companies attempting to purchase patents; they will sell them to oil companies.
Historically, the term "free energy" has been used for either quantity. In physics, free energy most often refers to the Helmholtz free energy, denoted by A or F, while in chemistry, free energy most often refers to the Gibbs free energy. The values of the two free energies are usually quite similar, and the intended free energy function is often implicit in manuscripts and presentations.
It is merely a magnetic coupling that operates through a right angle. It is not a free energy device or a magnetic motor. Not relevant to this forum. Am I overlooking something? Would this not be perpetual motion, because the unit is using already-magnetized magnets which have stored energy? Thus the unit is using energy that is stored in the magnets, making the unit use energy, dissolving "perpetual," as the magnets will degrade over time. It may be hundreds of years for some magnets but they will degrade anyway. The magnets would be acting as batteries even if they do turn. I spoke with PBS/NOVA. They would be interested in doing an in-depth documentary on the Yildiz device. I contacted Mr. Felber, Mr. Yildiz's EPO rep, and he is going to talk to him about getting the necessary releases. Presently Mr. Yildiz's only intellectual property rights protection is a patent application (in the U.S., a provisional patent). But he is going to discuss it with him. Then we do agree, as I agree based on your definition. That is why the term self-sustaining, which gets to the root of the problem... a practical solution to alternative energy, whether using magnets, Li-Fe-nano-phosphate batteries, or something new that comes down the pike. NASA's idea of putting tethered cables into space to turn the earth into a large generator even makes sense. My internal mental debate is based on a device I experimented on. Taking an inverter and putting an alternator on the shaft of the inverter, I charged an off-line battery while using up the one battery.
In a paper in the journal Physical Review A titled "Source of vacuum electromagnetic zero-point energy" (source), Puthoff describes how nature provides us with two alternatives for the origin of electromagnetic zero-point energy. One of them is generation by the quantum fluctuation motion of charged particles that constitute matter. His research shows that particle motion generates the zero-point energy spectrum, in the form of a self-regenerating cosmological feedback cycle.
You might also see this reaction written without the subscripts specifying that the thermodynamic values are for the system (not the surroundings or the universe), but it is still understood that the values for $\Delta H$ and $\Delta S$ are for the system of interest. This equation is exciting because it allows us to determine the change in Gibbs free energy using the enthalpy change, $\Delta H$, and the entropy change, $\Delta S$, of the system. We can use the sign of $\Delta G$ to figure out whether a reaction is spontaneous in the forward direction, backward direction, or if the reaction is at equilibrium. Although $\Delta G$ is temperature dependent, it's generally okay to assume that the $\Delta H$ and $\Delta S$ values are independent of temperature as long as the reaction does not involve a phase change. That means that if we know $\Delta H$ and $\Delta S$, we can use those values to calculate $\Delta G$ at any temperature. We won't be talking in detail about how to calculate $\Delta H$ and $\Delta S$ in this article, but there are many methods to calculate those values. Problem-solving tip: it is important to pay extra close attention to units when calculating $\Delta G$ from $\Delta H$ and $\Delta S$! Although $\Delta H$ is usually given in $\frac{\text{kJ}}{\text{mol-reaction}}$, $\Delta S$ is most often reported in $\frac{\text{J}}{\text{mol-reaction}\cdot \text{K}}$. The difference is a factor of 1000! Temperature in this equation is always positive (or zero) because it has units of $\text{K}$. Therefore, the second term in our equation, $T \Delta S_\text{system}$, will always have the same sign as $\Delta S_\text{system}$.
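As a small illustration of the unit bookkeeping described above (the function name and the textbook-style numbers below are our own, used only for illustration):

```python
def delta_g(delta_h_kj, delta_s_j, temp_k):
    """Gibbs free energy change: dG = dH - T*dS.
    Note the unit conversion: dS arrives in J/(mol-reaction*K) while
    dH is in kJ/mol-reaction, so dS is divided by 1000 before combining."""
    return delta_h_kj - temp_k * (delta_s_j / 1000.0)

# Illustrative values in the style of the Haber process at 298 K:
# dH ~ -92 kJ/mol-reaction, dS ~ -199 J/(mol-reaction*K)
dg = delta_g(-92.0, -199.0, 298.0)
print(round(dg, 1))  # a negative dG means the forward reaction is spontaneous
```

Forgetting the factor of 1000 here would make the $T\Delta S$ term swamp $\Delta H$ and flip the predicted spontaneity.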
The internet is the only reason large corps. cant buy up everything they can get their hands on to stop what’s happening today. @Free Power E. Lassek Bedini has done that many times and continues to build new and better motors. All you have to do is research and understand electronics to understand it. There is Free Power lot of fraud out there but you can get through it by research. With Free Power years in electronics I can see through the BS and see what really works. Build the SG and see for yourself the working model.. A audio transformer Free Power:Free Power ratio has enough windings. Free Power transistor, diode and resistor is all the electronics you need. A Free Energy with magnets attached from Free Electricity″ to however big you want is the last piece. What? Maybe Free Electricity pieces all together? Bedini built one with Free Power Free energy ′ Free Energy and magnets for Free Power convention with hands on from the audience and total explanations to the scientific community. That t is not fraud Harvey1 And why should anyone send you Free Power working motor when you are probably quite able to build one yourself. Or maybe not? Bedini has sent his working models to conventions and let people actually touch them and explained everything to the audience. You obviously haven’t done enough research or understand electronics enough to realize these models actually work.. The SC motor generator is easily duplicated. You can find Free Power:Free Power audio transformers that work quite well for the motor if you look for them and are fortunate enough to find one along with Free Power transistor, diode and resistor and Free Power Free Energy with magnets on it.. 
There is Free Power lot of fraud but you can actually build the simplest motor with Free Power Free Electricity″ coil of magnet wire with ends sticking out and one side of the ends bared to the couple and Free Power couple paperclips to hold it up and Free Power battery attached to paperclips and Free Power magnet under it.
Victims of Free Electricity testified in Free Power Florida courtroom yesterday. Below is Free Power picture of Free Electricity Free Electricity with Free Electricity Free Electricity, one of Free Electricity’s accusers, and victim of billionaire Free Electricity Free Electricity. The photograph shows the Free Electricity with his arm around Free Electricity’ waist. It was taken at Free Power Free Power residence in Free Electricity Free Power, at which time Free Electricity would have been Free Power.
The magnitude of $\Delta G$ tells us that we don't have quite as far to go to reach equilibrium. The points at which the straight line in the above figure crosses the horizontal and vertical axes of this diagram are particularly important. The straight line crosses the vertical axis when the reaction quotient for the system is equal to 1. This point therefore describes the standard-state conditions, and the value of $\Delta G$ at this point is equal to the standard-state free energy of reaction, $\Delta G^\circ$. The key to understanding the relationship between $\Delta G^\circ$ and $K$ is recognizing that the magnitude of $\Delta G^\circ$ tells us how far the standard state is from equilibrium. The smaller the value of $\Delta G^\circ$, the closer the standard state is to equilibrium. The larger the value of $\Delta G^\circ$, the further the reaction has to go to reach equilibrium. The relationship between $\Delta G^\circ$ and the equilibrium constant for a chemical reaction is illustrated by the data in the table below. As the tube is cooled, and the entropy term becomes less important, the net effect is a shift in the equilibrium toward the right. The figure below shows what happens to the intensity of the brown color when a sealed tube containing NO2 gas is immersed in liquid nitrogen. There is a drastic decrease in the amount of NO2 in the tube as it is cooled to -196 °C. Free energy is the idea that a low-cost power source can be found that requires little to no input to generate a significant amount of electricity. Such devices can be divided into two basic categories: "over-unity" devices that generate more energy than is provided in fuel to the device, and ambient energy devices that try to extract energy from the environment, such as quantum foam in the case of zero-point energy devices. Not all "free energy" ideas are necessarily bunk.
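The relationship between the standard-state free energy and the equilibrium constant described above is $\Delta G^\circ = -RT\ln K$, so $K = e^{-\Delta G^\circ/RT}$. A small numeric sketch (the function name is ours):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def k_from_dg0(dg0_kj, temp_k):
    """Equilibrium constant from the standard-state free energy:
    dG° = -RT ln K  =>  K = exp(-dG° / (RT)).
    dG° is taken in kJ/mol, hence the factor of 1000."""
    return math.exp(-dg0_kj * 1000.0 / (R * temp_k))

# dG° = 0 means the standard state *is* the equilibrium state (K = 1);
# negative dG° gives K > 1, positive dG° gives K < 1.
print(k_from_dg0(0.0, 298.0))        # 1.0
print(k_from_dg0(-10.0, 298.0) > 1)  # True
```

This makes the claim in the text quantitative: the larger $|\Delta G^\circ|$, the further $K$ sits from 1, i.e. the further the standard state is from equilibrium.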
There certainly is cheap energy to be had in nature that may be harvested at either zero cost or that can sustain us for long amounts of time. Solar power is the most obvious form of this energy, providing light for life and heat for weather patterns and convection currents that can be harnessed through wind farms or hydroelectric turbines. Nokia announced they expect to be able to gather enough power from ambient radio sources such as broadcast TV and cellular networks to slowly recharge a typical mobile phone in standby mode. This may be viewed not so much as free energy, but energy that someone else paid for. Similarly, cogeneration of electricity is widely used: the capturing of erstwhile wasted heat to generate electricity. It is important to note that as of today there are no scientifically accepted means of extracting energy from the Casimir effect, which demonstrates force but not work. Most such devices are generally found to be unworkable. Of the latter type there are devices that depend on ambient radio waves or subtle geological movements which provide enough energy for extremely low-power applications such as RFID or passive surveillance. Maxwell's Demon is a thought experiment raised by James Clerk Maxwell in which a Demon guards a hole in a diaphragm between two containers of gas. Whenever a molecule passes through the hole, the Demon either allows it to pass or blocks the hole depending on its speed. It does so in such a way that hot molecules accumulate on one side and cold molecules on the other. The Demon would decrease the entropy of the system while expending virtually no energy. This would only work if the Demon was not subject to the same laws as the rest of the universe or had a lower temperature than either of the containers.
Any real-world implementation of the Demon would be subject to thermal fluctuations, which would cause it to make errors (letting cold molecules enter the hot container and vice versa) and prevent it from decreasing the entropy of the system. In chemistry, a spontaneous process is one that occurs without the addition of external energy. A spontaneous process may take place quickly or slowly, because spontaneity is not related to kinetics or reaction rate. A classic example is the process of carbon in the form of a diamond turning into graphite, which can be written as the following reaction: $\text{C}_{(s,\,\text{diamond})} \rightarrow \text{C}_{(s,\,\text{graphite})}$. Great! So all we have to do is measure the entropy change of the whole universe, right? Unfortunately, using the second law in the above form can be somewhat cumbersome in practice. After all, most of the time chemists are primarily interested in changes within our system, which might be a chemical reaction in a beaker. Do we really have to investigate the whole universe, too? (Not that chemists are lazy or anything, but how would we even do that?) When using Gibbs free energy to determine the spontaneity of a process, we are only concerned with changes in $G$, rather than its absolute value. The change in Gibbs free energy for a process is thus written as $\Delta G$, which is the difference between $G_{\text{final}}$, the Gibbs free energy of the products, and $G_{\text{initial}}$, the Gibbs free energy of the reactants.
To understand why this is the case, it's useful to bring up the concept of chemical equilibrium. As a refresher on chemical equilibrium, let's imagine that we start a reversible reaction with pure reactants (no product present at all). At first, the forward reaction will proceed rapidly, as there are lots of reactants that can be converted into products. The reverse reaction, in contrast, will not take place at all, as there are no products to turn back into reactants. As product accumulates, however, the reverse reaction will begin to happen more and more often. This process will continue until the reaction system reaches a balance point, called chemical equilibrium, at which the forward and reverse reactions take place at the same rate. At this point, both reactions continue to occur, but the overall concentrations of products and reactants no longer change. Each reaction has its own unique, characteristic ratio of products to reactants at equilibrium. When a reaction system is at equilibrium, it is in its lowest-energy state possible (has the least possible free energy).
Both sets of skeptics will point to the fact that there has been no concrete action, no major arrests of supposed key Deep State players. A case in point: is she not still walking about freely, touring with her husband, flying out to India for a lavish wedding celebration, creating a buzz of excitement around the prospect that some lucky donor could get the opportunity to spend an evening of drinking and theatre with her?
https://gis.stackexchange.com/questions/109867/using-calculate-field-for-if-statement-over-many-fields
# Using Calculate Field for If statement over many fields?
I'm struggling with the python syntax in the Calculate Field so that, in the table above:
if B, C, D, and E are all NOT NULL, populate the Summary field with "Verified"; otherwise populate the Summary field with "In Progress"
So if any of the B, C, D or E columns contain NULL the Summary field will read "In Progress".
I believe this should be something along the lines of:
def Reclass !SUMMARY!: if [ !B! , !C! , !D! , !E!] <> NULL !SUMMARY! = "Verified: " else !SUMMARY! = "In Progress: "
Run the field calc on the Summary field. Use Python as the parser and check the Show Codeblock box.
For the Pre-Logic Script Code put:
def Reclass(B, C, D, E):
    if None not in (B, C, D, E) and "" not in (B, C, D, E):
        return "Verified"
    else:
        return "In Progress"
Then put this in the bottom box:
Reclass(!B!, !C!, !D!, !E!)
The bottom box with the ! contains the actual field names that get passed into the Reclass function in the top box.
I included the "" not in (B, C, D, E) as a check specifically for text fields that are blank but don't have the Null value, if you need it.
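The function can also be sanity-checked outside ArcGIS in plain Python, since the field calculator simply passes the field values in as arguments (the sample values below are hypothetical):

```python
def Reclass(B, C, D, E):
    # Same logic as the field-calculator code block: all four fields
    # must be non-null and non-blank for "Verified".
    if None not in (B, C, D, E) and "" not in (B, C, D, E):
        return "Verified"
    else:
        return "In Progress"

print(Reclass("x", "y", "z", "w"))   # Verified
print(Reclass("x", None, "z", "w"))  # In Progress
print(Reclass("x", "", "z", "w"))    # In Progress (blank text field)
```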
• @Simon Good to hear! – ianbroad Aug 8 '14 at 8:53
This answer is not meant to steal from @ian, but it shows a little bit easier way to write the code that he provided... instead of using a code block (which can be a pain in the butt sometimes when trying to write it in a standalone script...) you can rewrite his code in a single line as:
"Verified" if None not in (!B!, !C!, !D!, !E!) and "" not in (!B!, !C!, !D!, !E!) else "In Progress"
https://zbmath.org/?q=an:1302.35218
# zbMATH — the first resource for mathematics
Well-posedness for the Navier-slip thin-film equation in the case of complete wetting. (English) Zbl 1302.35218
The authors study regularity and stability properties of the free boundary problem $\partial_t h + \partial_z(h^2\partial_z^3 h) = 0, \qquad t > 0, z \in (Z_0(t),\infty)$ such that $h = \partial_z h = 0, \qquad \text{for } t > 0, z = Z_0(t),$ where $\dot Z_0(t) = \lim_{z \searrow Z_0(t)} h\partial_z^3 h \qquad \text{for } t > 0.$ The equation describes a thin-film evolution that follows from the Navier-Stokes equations of a two-dimensional viscous thin film on a one-dimensional flat solid in a lubrication approximation. The function $$h(t,z)$$ describes the height of the film, and $$Z_0(t)$$ denotes the position of the free boundary (the contact line).
One of the main aims is to show that the traveling wave solution $$H_{TW} = x^{3/2}$$ with $$x = z - V_0t$$, $$\dot Z_0 = V_0$$, is stable under small perturbations. Perturbations of the traveling wave are given by $$u(t,x) = F(t,x) - 1$$, where $$F(t,x) = 1/\partial_xZ(t,x)$$, so that the traveling wave is the constant solution $$F = F_{TW} \equiv 1$$. Here $$Z(t,x)$$ is defined by the hodograph transformation $$h(t,Z(t,x)) = x^{3/2}$$. As a result, the following degenerate parabolic initial value problem for $$u(t,x)$$ is derived: $x\partial_tu + p(D)u = N(u), \qquad t > 0, x > 0$ under the initial condition $$u(0,x) = u_0(x),\;x > 0$$. Here $$D$$ denotes the scaling-invariant logarithmic derivative, $$p$$ is a fourth-order polynomial given by $$p(\zeta) = \zeta^4 + 2\zeta^3 - (9/8)\zeta = (\zeta + 3/8)(\zeta + \beta + 1/2)\zeta(\zeta - \beta)$$ with the irrational number $$\beta = (\sqrt{13} - 1)/4$$, and the nonlinearity is $$N(u) = p(D)u - M(1 + u,\dots,1 + u)$$ where $$M$$ is the $$5$$-linear form $M(F_1,\dots,F_5) = F_1F_2D(D + 3/2)F_3(D - 1/2)F_4(D + 1/2)F_5.$ The authors prove global existence and uniqueness close to traveling waves. The main ingredients are maximal regularity estimates in weighted $$L^2$$-spaces for the linearized evolution, after suitable subtraction of $$a(t) + b(t)x^{\beta}$$ terms.
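As a quick consistency check (our own computation under this normalization, not taken from the review), one can verify directly that $h = x^{3/2}$, $x = z - V_0 t$, solves the thin-film equation for a specific wave speed:

```latex
\begin{aligned}
\partial_t h &= -\tfrac{3}{2}V_0\,x^{1/2}, \qquad
\partial_z^3 h = -\tfrac{3}{8}\,x^{-3/2}, \qquad
h^2\partial_z^3 h = -\tfrac{3}{8}\,x^{3/2},\\
\partial_t h + \partial_z\bigl(h^2\partial_z^3 h\bigr)
 &= \Bigl(-\tfrac{3}{2}V_0 - \tfrac{9}{16}\Bigr)x^{1/2} = 0
 \quad\Longrightarrow\quad V_0 = -\tfrac{3}{8},
\end{aligned}
```

which is consistent with the contact-line law, since $\dot Z_0 = \lim_{z\searrow Z_0} h\,\partial_z^3 h = -\tfrac{3}{8} = V_0$.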
##### MSC:
35K65 Degenerate parabolic equations
35K35 Initial-boundary value problems for higher-order parabolic equations
35Q35 PDEs in connection with fluid mechanics
35R35 Free boundary problems for PDEs
76A20 Thin fluid films
76D08 Lubrication theory
35K59 Quasilinear parabolic equations
35C07 Traveling wave solutions
35B35 Stability in context of PDEs
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
|
2021-12-08 13:13:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8345229029655457, "perplexity": 3563.9095624588954}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363510.40/warc/CC-MAIN-20211208114112-20211208144112-00584.warc.gz"}
|
http://www.cyclopaedia.info/wiki/Walsh-matrix
Walsh matrix
In mathematics, a Walsh matrix is a specific square matrix, with dimensions a power of 2, the entries of which are +1 or −1, and the property that the dot product of any two distinct rows (or columns) is zero. The Walsh matrix was proposed by Joseph L. Walsh in 1923. Each row of a Walsh matrix corresponds to a Walsh function.
The natural ordered Hadamard matrix is defined by the recursive formula below, and the sequency ordered Hadamard matrix is formed by rearranging the rows so that the number of sign-changes in a row is in increasing order. Confusingly, different sources refer to either matrix as the Walsh matrix.
The Walsh matrix (and Walsh functions) are used in computing the Walsh transform and have applications in the efficient implementation of certain signal processing operations.
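The recursive construction and the properties described above (orthogonality of distinct rows, sequency ordering by sign changes) can be sketched in plain Python; this is an illustrative sketch with our own function names, not a library API:

```python
# Sylvester recursion: H(2n) is built from H(n) as [[H, H], [H, -H]]
def hadamard(n):
    """Natural-ordered Hadamard/Walsh matrix of size n (n a power of 2)."""
    assert n > 0 and n & (n - 1) == 0, "size must be a power of 2"
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

def sign_changes(row):
    """Sequency: the number of sign changes along a row."""
    return sum(1 for a, b in zip(row, row[1:]) if a != b)

H8 = hadamard(8)
# Distinct rows are orthogonal: their dot product is zero
assert all(sum(a * b for a, b in zip(H8[i], H8[j])) == 0
           for i in range(8) for j in range(8) if i != j)

# Sequency ordering: rearrange rows by increasing number of sign changes
W8 = sorted(H8, key=sign_changes)
assert [sign_changes(r) for r in W8] == list(range(8))
```

For size 8, the natural-ordered rows have sequencies {0, 7, 3, 4, 1, 6, 2, 5}; sorting recovers the sequency-ordered Walsh matrix.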
This is an excerpt from the article Walsh matrix from the Wikipedia free encyclopedia. A list of authors is available at Wikipedia.
https://stats.stackexchange.com/questions/32072/fishy-significance-test-am-i-doing-something-wrong-with-anovas
# Fishy significance test: Am I doing something wrong with ANOVAs?
Just when I thought I'd had a grip on how to do an analysis of variance this particular data set had me startled: it's a collection of response times (in ms) to a linguistic input. To be precise, it's part of a reading time experiment, and I'm trying to see if there's a significant effect of two factors.
My experiment had a 2x2 factorial design with two binary factors, Conflicting and ContextPresent. The only value I'm interested in for the purpose of this question is one particular random variable, a response time. No transformations have been done on it, except removal of outliers via the 3-sigma rule (I used mean + 3 * standard deviation to determine outliers).
So I run my anova in R:
> anova(lm(TextDisplay9.RT ~ Conflicting * ContextPresent, data=items.cropped))
Analysis of Variance Table
Response: TextDisplay9.RT
Df Sum Sq Mean Sq F value Pr(>F)
Conflicting 1 111185 111185 7.0591 0.00808 **
ContextPresent 1 73591 73591 4.6723 0.03102 *
Conflicting:ContextPresent 1 352 352 0.0223 0.88128
Residuals 651 10253667 15751
Jolly ho, I get pretty good results for a main effect on both my factors, and no interaction. That's fine. But here's the corresponding box-and-whiskers graph:
> ggplot(items.cropped,aes(Conflicting,TextDisplay9.RT)) + geom_boxplot() +
facet_grid(.~ ContextPresent)
So this plot actually makes it seem like there shouldn't be a main effect of either variable! They're all too similar! Yes, the scale is rather squished, because of the outliers, but the means are really really close!
> with(items.cropped,mean(items.cropped[Conflicting=="semantic conflict" & ContextPresent == "mentioned in context",]$TextDisplay9.RT,na.rm=T))
[1] 431.8659
> with(items.cropped,mean(items.cropped[Conflicting=="semantic conflict" & ContextPresent == "not mentioned in context",]$TextDisplay9.RT,na.rm=T))
[1] 454.5305
> with(items.cropped,mean(items.cropped[Conflicting=="no semantic conflict" & ContextPresent == "mentioned in context",]$TextDisplay9.RT,na.rm=T))
[1] 407.485
> with(items.cropped,mean(items.cropped[Conflicting=="no semantic conflict" & ContextPresent == "not mentioned in context",]$TextDisplay9.RT,na.rm=T))
[1] 427.2188
Does it sound possible that there could be a main effect? Or did I somehow misuse ANOVAs? I could provide the data if needed!
Thanks very much for any suggestions.
• With a large enough sample size anything can be significant... – Dason Jul 11 '12 at 1:38
• Response times to linguistic input are often done with repeated measures. How many subjects in this experiment are there? How many trials / subject? – John Jul 11 '12 at 3:16
• @Dason, I did aggregate the data, and with the aggregated data I get F(1,92) = 6.67, p < 0.0115, so it's still "significant." – Aleksandar Dimitrov Jul 11 '12 at 8:54
• @John, I had 7 subjects per condition/item pairing in this experiment. I read that in this case one might want to do anova(lm(val ~ f1 * f2 * Subject)), but that one gives me even better scores! – Aleksandar Dimitrov Jul 11 '12 at 8:55
• Given that you just have categorical predictors you have to use repeated measures ANOVA or multi-level modelling. – John Jul 11 '12 at 12:04
Give a better justification for outlier removal than 3 SD. With this many samples, values beyond 3 SD occur by chance at a pretty high probability. There are also many other issues (see Miller (1991) for an example). There may be values that are improbable on theoretical grounds (e.g. impossibly fast, or ridiculously slow); remove outliers because of that. Don't let an arbitrary statistical procedure (which is what this is) override your own judgment. What if an RT of 80 ms is not 3 SD away? Do you keep it? Such a response cannot actually be based on perception of the stimulus, whether for button presses or vocal responses. With a choice task, values like the 190 ms in your plot would not be believable (and that looks like it's post outlier removal). You mention that responses of several seconds must be outliers; then remove them for that reason, because they must reflect processes other than a response to the stimulus. If you have an accuracy measure you can use it to guide outlier removal. Perhaps accuracy is very low under a certain RT, or drops off past a different RT.
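The downward bias from a one-sided mean + 3 SD cutoff on right-skewed RT data can be illustrated with a small simulation (Python here for illustration, on synthetic shifted-exponential "RTs", not the poster's data):

```python
import random

random.seed(1)

def trimmed_mean_3sd(xs):
    """Mean after dropping values above mean + 3 * SD (one-sided, as in the question)."""
    m = sum(xs) / len(xs)
    sd = (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5
    kept = [x for x in xs if x <= m + 3 * sd]
    return sum(kept) / len(kept)

# Synthetic right-skewed "RTs": 300 ms base plus an exponential component (mean 150 ms)
rts = [300 + random.expovariate(1 / 150) for _ in range(5000)]
raw_mean = sum(rts) / len(rts)
trimmed = trimmed_mean_3sd(rts)

# On right-skewed data the criterion can only remove slow responses,
# so it biases the estimated mean downward
assert trimmed <= raw_mean
```

How much is clipped, and hence how large the bias is, depends on the sample size, which is Miller's (1991) point.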
You are not allowed to use all of the degrees of freedom in all of your measurements in single level ANOVA because this is a repeated measures design. You must aggregate your numbers so that each subject produces 4 numbers, 1 value in each condition.
items.agg <- aggregate(TextDisplay9.RT ~ Conflicting + ContextPresent + Subject, items.cropped, mean)
If you examine these aggregate data you may find your skew problem is solved through the central limit theorem (although the n is a bit low for that). It would almost definitely be solved for the reciprocal of the RT in seconds (rate). RT is an arbitrary representation of performance. It is the time it took to complete the task. The rate would be how many tasks can be completed per second. They are both easily interpretable numbers but the latter has much better statistical properties. (do not forget that the meaning of rate is opposite of RT in that higher numbers = better performance)
And then you have to stratify your results in the repeated measures ANOVA
m <- aov(TextDisplay9.RT ~ Conflicting * ContextPresent + Error(Subject / (Conflicting * ContextPresent)), data = items.agg)
summary(m)
You'll have much less power in your study now.
You really should look at Baayen's (2008) book on studying linguistic RT data. It's very specific to your field and avoids much of the messy statistical theory while being very practically helpful.
Miller, J. (1991). Reaction time analysis with outlier exclusion: Bias varies with sample size. The Quarterly Journal of Experimental Psychology, 43A(4):907–912.
• (+1) Nice response. The first paragraph is very well put. – chl Jul 11 '12 at 12:42
The box-and-whisker plots look very skewed, and that is after removing the outliers.
My suggestion: Transform each value to its reciprocal. Now instead of assessing how long something takes, you will be assessing how fast it is. Now look at the distribution of values (before removing any outliers). My guess is that the data will be far more symmetrical, and you'll have many fewer outliers. Also my guess is that the data will be closer to Gaussian, which means an ANOVA on the transformed values will be more valid than the original ANOVA.
Also note that while the P value was small in your ANOVA for the "conflicting" factor, the effect size is tiny. The R squared is computed from the ratio of SS, so is only 111185/10253667 = 0.0108.
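A quick simulation shows why the reciprocal tends to symmetrize RT-like data (Python, illustrative shifted-exponential "RTs"; the exact shape of the poster's data may of course differ):

```python
import random

random.seed(7)

def skewness(xs):
    """Sample skewness (population-SD convention)."""
    n = len(xs)
    m = sum(xs) / n
    sd = (sum((x - m) ** 2 for x in xs) / n) ** 0.5
    return sum((x - m) ** 3 for x in xs) / (n * sd ** 3)

# Synthetic right-skewed RTs in seconds: 0.3 s base plus an exponential tail
rts = [0.3 + random.expovariate(1 / 0.15) for _ in range(10000)]
rates = [1 / t for t in rts]   # responses per second ("speed" instead of "time")

assert skewness(rts) > 1                         # strongly right-skewed
assert abs(skewness(rates)) < skewness(rts)      # reciprocal is much more symmetric
```

The reciprocal compresses the long right tail of slow responses, which is exactly where the "outliers" live.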
• I did compute an Anova for the reciprocal as well. R² in this case was 0.099, F(1,651) = 6.4282, p < 0.0115. I had a look at the density functions - the reciprocal transformed one looks "more" Gaussian. – Aleksandar Dimitrov Jul 11 '12 at 9:02
• @AleksandarDimitrov: It would be good to show the box-whisker plots here so others can see how transforming solved the problem. Were there any "outliers" in the transformed data? Or (my guess) were the outliers before simply the tail of a nongaussian distribution? Did the transformed data pass normality tests? – Harvey Motulsky Jul 11 '12 at 13:43
The lines in the boxplots are medians, not means. When you have outliers the mean and median can be very different. Did you include the outliers when fitting the model? If you excluded them, what was your justification for removing them? Also, if the outliers are not used in the model, you get a misleading picture by including them in the boxplot. The distributions look skewed, particularly the ones on the far right and far left. Observations should not be arbitrarily removed for exceeding the 3 sigma limit. When the data are very nonnormal, the F tests in the analysis of variance are not valid and you should consider nonparametric alternatives such as the Kruskal-Wallis test. Also keep in mind that even for a normal distribution approximately 3 out of 1000 observations will fall outside the 3 sigma limit. With a sample of over 650 observations it would not be surprising to have one or two outside the 3 sigma limit.
• It seems to be common practice in the literature to remove observations higher than 2 or 3 standard deviations from the mean, that's why I did it — reading times studies tend to produce extreme outliers of several seconds (where the mean and median here are around 400ms.) The data I removed amounted to around 2% of the overall data. I'll look into the Kurskal-Wallis test, thanks! – Aleksandar Dimitrov Jul 11 '12 at 9:08
• I don't know what you mean by common practice in the literature. Among statisticians it is considered bad practice. – Michael R. Chernick Jul 11 '12 at 10:15
• Agreed that this is bad practice - outliers should be removed if you have reason to believe they "do not belong" with the rest of the data e.g. because of a data transcription error. For example, if you had a data set of heights and noticed that one person was 20ft tall, you'd probably remove that. Removing observations based only on the fact that they are 2+ standard deviations from the mean is bad practice. There should be some substance underlying a decision to delete a point as an "outlier". What you're doing is trimming the tails of the distribution, which is something entirely different. – Macro Jul 11 '12 at 12:25
• And, within the broader literature you're discussing, there are papers with strong arguments against it. – John Jul 11 '12 at 12:26
• @Macro I have done a lot of research on outliers and outlier detection, and the common terminology we use is that an outlier is any observation that stands out apart from the bulk of the distribution. It could be an unusual or extreme observation but not necessarily a candidate to be removed from the sample. – Michael R. Chernick Jul 11 '12 at 13:42
https://research.birmingham.ac.uk/portal/en/publications/fluctuational-susceptibility-of-ultracold-bosons-in-the-vicinity-of-condensation(e623c611-2670-4f53-af50-3e9146294a1d).html
# Fluctuational susceptibility of ultracold bosons in the vicinity of condensation
Research output: Contribution to journalArticlepeer-review
## Abstract
We study the behaviour of ultracold bosonic gas in the critical region above the Bose-Einstein condensation in the presence of an artificial magnetic field, $B_\mathrm{art}$. We show that the condensate fluctuations above the critical temperature $T_c$ cause the fluctuational susceptibility, $\chi _\mathrm{fl}$, of a uniform gas to have a stronger power-law divergence than in an analogous superconducting system. Measuring such a divergence opens new ways of exploring critical properties of the ultracold gas and an opportunity of an accurate determination of $T_c$. We describe a method of measuring $\chi _\mathrm{fl}$ which requires a constant gradient in $B_\mathrm{art}$ and suggest a way of creating such a field in experiment.
## Bibliographic note
5 pages, 3 figures, 5 pages of Supplement; the text is rewritten and rearranged, and the figures are modified
## Details
Original language: English
Article number: 041602(R)
Journal: Physical Review A - Atomic, Molecular, and Optical Physics
Volume: 93
Publication status: Published - Apr 2016
## Keywords
• cond-mat.quant-gas
https://aps.org/publications/apsnews/202211/backpage.cfm
November 2022 (Volume 31, Number 10)
# The Back Page: How Newton Derived the Shape of Earth
To argue for universal gravitation, Newton had to become a “geodesist.”
By Miguel Ohnesorge | October 13, 2022
Credit: Taryn MacKinney / APS
Isaac Newton’s landmark 1687 work, “Philosophiæ Naturalis Principia Mathematica,” laid the groundwork for classical mechanics, describing the laws of gravitation and predicting astronomical phenomena, like the movement of planets. The work changed physics.
But buried in the Principia is an often-overlooked triumph: Newton’s derivation of Earth’s figure — that is, the calculation of its shape, size, and surface gravity variation, part of a field later known as geodesy — which was crucial to his argument for universal gravitation. Here, I reconstruct Newton’s derivation and its significance[i].
## Newton’s Derivation
Newton began his quantitative derivation of Earth’s figure in 1686, after learning about work by the French physicist Jean Richer. In 1671, Richer had traveled to Cayenne, the capital of French Guiana in South America, and experimented with a pendulum clock. Richer found that the clock, calibrated to Parisian astronomical time (48°50’ latitude), lost an average of 2.5 minutes per day in Cayenne (5° latitude). This was surprising, but it could be explained by the theory of centrifugal motion, recently developed by Christian Huygens: The theory suggested that the centrifugal effect is strongest at the equator, so the net effective surface gravity would decrease as you moved from Paris to Cayenne[1].
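The size of Richer's effect can be back-of-the-envelope checked with the small-angle pendulum formula T = 2π√(L/g); this is a modern illustrative sketch, not Newton's own calculation:

```python
# A clock keeping Paris time loses 2.5 minutes per day in Cayenne,
# i.e. the pendulum's period is fractionally longer there.
seconds_lost_per_day = 2.5 * 60
day = 24 * 3600
dT_over_T = seconds_lost_per_day / day        # fractional period increase

# From T = 2*pi*sqrt(L/g):  dT/T = -(1/2) * dg/g,  so  dg/g = -2 * dT/T
dg_over_g = -2 * dT_over_T
print(f"implied fractional change in g: {dg_over_g:.4%}")
```

The implied decrease in effective surface gravity between Paris and Cayenne is roughly a third of a percent.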
Newton accepted Huygens’s theory but realized it meant something strange: If Earth is a sphere and its centrifugal effect is strongest at the equator, gravity would vary across Earth’s surface, and the ocean would bulge up at the equator — a proposition that Newton considered absurd.
To resolve this, he proposed that the solid Earth had behaved like a fluid throughout its formation, gradually bulging up at the equator because of the centrifugal effect. He proposed modeling planets as rotating fluids in equilibrium, where the planet’s shape is stable while the force generated by its rotational motion, and the gravitational attraction between its particles, acts on it[2].
To derive Earth’s figure based on this theory, Newton first had to calculate the ratio between gravitational acceleration and centrifugal force at the equator (a 1-to-290.8 ratio), based on the period of Earth’s diurnal rotation and estimates of Earth’s equatorial diameter. Newton knew the length of two meridional arcs that he could use for this calculation, measured by surveyors in England and France. He calculated gravitational acceleration at the equator from Richer’s pendulum measurements at 48°50’ latitude, extrapolating the corresponding value at the equator of a homogeneous sphere[2].
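The order of magnitude of that ratio can be checked with modern values (Python sketch; the constants below are modern illustrative figures, whereas Newton worked from arc surveys and pendulum data):

```python
import math

R_eq = 6.378e6           # equatorial radius, m (modern value)
g = 9.81                 # surface gravitational acceleration, m/s^2
T_sidereal = 86164.1     # period of Earth's diurnal rotation, s

omega = 2 * math.pi / T_sidereal
a_centrifugal = omega ** 2 * R_eq        # centrifugal acceleration at the equator
ratio = g / a_centrifugal
print(f"gravity : centrifugal ~ {ratio:.0f} : 1")
```

The result, roughly 289-to-1, is close to Newton's 290.8-to-1.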
Equipped with the ratio, Newton faced a problem: How could he express mathematically that a rotating fluid, whose constituents attract according to a certain law of gravity, is in a state of equilibrium? To answer this, Newton used an ingenious thought experiment, which he had developed in his 1685 “Liber Secundus” manuscript[3]: A rotating body is in a state of hydrostatic equilibrium if the weight of water in two channels x and y, where x connects the equator to Earth’s center and y connects Earth’s center to one of the poles, is identical. Since x is affected by the centrifugal effect, equilibrium is fulfilled if the overall centrifugal “pull” on the equator is compensated by a change to the figure. In other words, the equatorial regions need to “bulge up” and the poles “flatten” to such an extent that the total weight of x and y (i.e., net gravitational attraction toward the center) is the same. This attempt to define hydrostatic equilibrium was later named the “principle of canals”[4] (Fig. 1).
Newton then used his theory of gravitational attraction to derive the figure that a rotating body would need to have to balance the net attraction on the two columns — more precisely, the ratio between equatorial diameter and polar axis that would fulfill equilibrium. To calculate this, Newton determined the ratio between polar and equatorial surface gravity for the simpler case of a non-rotating oblate figure, with the axis-diameter ratio of 100-to-101, arriving at 501-to-500.[ii]
If this result is multiplied by the length of the two fluid columns (100-to-101), we obtain the ratio of 501-to-505 between the net gravitational forces acting on the polar and equatorial fluid columns. Therefore, the net gravitational force acting on the equatorial fluid column is greater than the corresponding net force acting on the polar fluid column by a magnitude of 4-to-505. So, if a spheroid with the dimensions of 100-to-101 is rotating and in a state of hydrostatic equilibrium, the ratio between centrifugal force and equatorial surface gravity must be 4-to-505.
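Written out, Newton's column comparison is just a weighted ratio; in modern notation (the symbols $W$, $g$, $\ell$ are ours):

$$\frac{W_{\rm pole}}{W_{\rm eq}} = \frac{g_{\rm pole}\,\ell_{\rm pole}}{g_{\rm eq}\,\ell_{\rm eq}} = \frac{501\times 100}{500\times 101} = \frac{50100}{50500} = \frac{501}{505},$$

so the equatorial column outweighs the polar one by 4 parts in 505, exactly the deficit that the centrifugal effect must supply for equilibrium.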
Credit: Newton’s “Philosophiæ Naturalis Principia Mathematica”
Fig. 1: Newton’s illustration of hydrostatic equilibrium, a state he believed Earth to be in.
Since Newton’s premise was that Earth is in a state of hydrostatic equilibrium, he extended this thought experiment to Earth. For his previously determined 1-to-290.1 ratio between centrifugal force and equatorial surface gravity, he calculated a corresponding polar axis and equatorial diameter ratio of 689-to-692. He concluded that Earth, modeled as a homogeneous spheroid that rotates with uniform angular velocity, must have polar and equatorial axes with a length ratio of 689-to-692 to be in a state of hydrostatic equilibrium.
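Newton's step from the trial spheroid to Earth amounts to treating ellipticity as linearly proportional to the centrifugal-to-gravity ratio at small flattening. A rough numerical sketch of that proportionality (not Newton's own procedure; variable names are ours):

```python
# Newton's trial spheroid: axes 100-to-101, i.e. ellipticity 1/100,
# is in equilibrium when centrifugal force is 4/505 of equatorial gravity.
trial_ellipticity = 1 / 100
trial_ratio = 4 / 505

# Earth's centrifugal-to-gravity ratio at the equator (Principia value).
earth_ratio = 1 / 290.8

# Assume ellipticity scales linearly with the ratio at small flattening.
earth_ellipticity = trial_ellipticity * earth_ratio / trial_ratio
print(f"predicted ellipticity ~ 1/{1 / earth_ellipticity:.1f}")
# Prints ~ 1/230.3; Newton's 689-to-692 figure corresponds to 3/692 ~ 1/230.7.
```

The close agreement shows why a single trial spheroid sufficed for Newton's extrapolation.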
He then calculated the effective surface gravity for this model of Earth — indicated by the length of the seconds-pendulum — to vary as the square of the sine of the latitude[2]. By deriving a general latitudinal variation, he was no longer just concerned with the length ratio between the equatorial diameter and polar axis; instead, he was modeling Earth’s overall figure — an oblate ellipsoid with an ellipticity of 3-to-692 (about 1-to-230.7). Using the pendulum length at 48°50’ astronomical latitude in Paris as a reference point, he predicted that the pendulum has to be shortened by 81/1000 and 89/1000 inches in Gorée and Cayenne, respectively, to preserve its period — close but still inaccurate approximations of the measurements (100/1000 and 125/1000 inches).
## Earth’s Figure and Universal Gravitation
Clearly, Newton invested considerable effort in deriving Earth’s figure and latitudinal variation in surface gravity. Besides presenting a novel definition of the hydrostatic equilibrium of rotating bodies, these results presupposed Newton’s theory of gravitational attraction. His predictions only hold if all of Earth’s constituent particles mutually attract. Hence, his predictions offered a test of the most fundamental and novel assumption in Newton’s theory of gravitation: that gravity acts universally between all particles of matter. In fact, as George Smith showed, these predictions are the Principia’s only such test[5].
Newton was aware of this. His editor Roger Cotes kept pushing him to revise the geodetic results in light of new data[6], and in the second edition of the Principia, Newton revised Earth’s ellipticity from a 689-692 ratio to a 1-to-230 ratio and added a table with detailed predictions of measurements for surface gravity and surface curvature[2]. Newton revised these predictions again for the third edition (Fig. 2).
Credit: Cambridge University Library, Newton Manuscripts, MS Add 3965, 450r.
Fig. 2: Newton repeatedly revised his predictions for the measurement of the Earth’s figure.
Were Newton’s geodetic predictions accurate enough to reflect their importance in his argument for universal gravitation? On a naïve reading, the answer is no. When the third and last edition of the Principia was published, Newton had access to one arc measurement of the latitudinal variation in the length of 1° of meridian and five pendulum measurements of the variation of surface gravity with latitude. The arc measurement disagreed with Newton’s predictions, seeming to indicate that Earth is an oblong, rather than oblate, spheroid. Of the existing pendulum measurements, only Jean Richer’s came close, and even it disagreed with Newton’s prediction. The prediction also does not match current data: Satellite measurements indicate that Earth’s ellipticity has a ratio of 1-to-298.257223563[7].
However, such a pessimistic view of Newton’s geodetic work misses important nuances of the Principia. As George Smith has argued, the Principia proposes not only theoretical predictions but also a methodology of testing through approximation. Newton accepted that his predictions would likely be inaccurate because he relied on uncertain background hypotheses when deriving them. The success of the universal theory of gravitation, then, should not be measured by the immediate agreement between initial predictions and measurements. Rather, Newton intended that his theory be tested on how well it could guide adjustments to background hypotheses, leading to converging measurements[8].
In other words, Newton did not aim to establish Earth’s figure once and for all. Rather, he gave approximations, which would allow for adjustments to the assumptions he made in his derivation. In his early derivation, for example, Newton assumed that the rotating Earth has homogeneous density. But when his predictions and Richer’s measurements in Cayenne and Gorée disagreed, he modified this assumption in the first edition of the Principia. If Earth is denser at its center, he suggested, the ellipticity of Earth’s equilibrium figure and its surface gravity variation will differ.
With this methodology, Newton passed the torch to future researchers, inviting them to develop hypotheses that would work with these initial measurements, and could then be tested with increasingly precise measurements[2].
In line with Newton’s methodology, geodesists eventually produced convergent measurements of Earth’s ellipticity based on variation in latitudinal surface gravity and curvature. For about two and a half centuries, they used the theories of gravitation and hydrostatic equilibrium to model Earth’s figure, motion, and constitution, and gradually revised these parameters in light of new measurements. By 1909, all major ellipticity measurements had converged on a ratio of 1-to-297.6 (±0.9), implying that density increases inward[9]. By 1926, Viennese astronomer Samuel Oppenheim concluded that these results offered overwhelming evidence for Newtonian gravity on Earth, vindicating both Newton’s theory of gravitation and his methodology[10].
Miguel Ohnesorge is a doctoral student at the University of Cambridge and visiting fellow at Boston University’s Philosophy of Geoscience Lab. Ohnesorge won the APS Forum of the History and Philosophy of Physics’ 2022 essay contest; this article is adapted from his winning essay.
For Ohnesorge’s full essay and his reconstruction of Newton’s derivation, visit go.aps.org/geodesy. Learn more about Ohnesorge’s research at mohnesorgehps.com.
Notes
[i] The only published discussions of the work’s methodological importance are in Schliesser and Smith 2000 and Smith 2014, who do not reconstruct Newton’s derivation. Todhunter 1873 and Greenberg 1996 discuss the derivation in plain English, without methodological contextualization. Chandrasekhar 2003 also reconstructs the derivation in modern algebra, which contains some minor but confusing mistakes and ambiguities.
[ii] Newton’s derivation for the attraction of perfectly spherical and spheroidal compound bodies is given in Book 1, Prop. 91, coroll. 1 and 2 and reconstructed in its original notation at go.aps.org/geodesy.
Citations
[1] Christian Huygens, "Discours de La Cause de La Pesanteur," in Traité de Lumierè Avec Discours de La Cause de La Pesanteur, ed. Christian Huygens (Leiden: Pierre Vander, 1690).
[2] Isaac Newton, The Principia: Mathematical Principles of Natural Philosophy, trans. I. Bernard Cohen and Anne Whitman (Univ. of California Press, 1999).
[3] Isaac Newton, De moto Corporum Liber Secundus (Cambridge Univ. Library, MS Add. 3990, 1685).
[4] Isaac Todhunter, A History of the Mathematical Theories of Attraction and the Figure of the Earth, from the Time of Newton to That of Laplace, vol. 1 (London: Macmillan, 1873).
[5] Eric Schliesser and George E. Smith, "Huygens’s 1688 Report to the Directors of the Dutch East India Company on the Measurement of Longitude at Sea and the Evidence It Offered Against Universal Gravity," preprint (2000).
[6] Cotes to Newton, The Correspondence of Isaac Newton, vol. 5 (1712): 232-236.
[7] World Geodetic System (WGS) 84 Model.
[8] George E. Smith, "Essay Review: Chandrasekhar’s Principia: Newton’s Principia for the Common Reader," Journal for the History of Astronomy 27, no. 4 (1996): 353–62.
[9] Miguel Ohnesorge, "Pluralizing Measurement: Physical Geodesy’s Measurement Problem and Its Resolution, 1880-1924," Studies in History and Philosophy of Science Part A (forthcoming).
[10] A. Sommerfeld and Samuel Oppenheim, Encyklopädie der Mathematischen Wissenschaften mit Einschluss ihrer Anwendungen: Fünfter Band: Physik (Vieweg+Teubner Verlag, 1926).
Other Sources
Subrahmanyan Chandrasekhar, Newton’s Principia for the Common Reader (Clarendon Press, 2003).
Alexis Claude Clairaut, "I. An Inquiry Concerning the Figure of Such Planets as Revolve about an Axis," Philosophical Transactions of the Royal Society of London 40, 449 (1738): 277–306.
John Greenberg, "Isaac Newton and the Problem of the Earth’s Shape," Archive for History of Exact Sciences 49, no. 4 (1996): 371–91.
Mary Terrall, The Man Who Flattened the Earth: Maupertuis and the Sciences in the Enlightenment (Univ. of Chicago Press, 2002).
The Mathematical Works of Isaac Newton, ed. Derek Thomas Whiteside, vol. 6 (Cambridge University Press, 1974).
George E. Smith, "Closing the Loop: Testing Newtonian Gravity, Then and Now," in Newton and Empiricism, ed. Zvi Biener and Eric Schliesser (Oxford Univ. Press, 2014), 262–353.
©1995 - 2023, AMERICAN PHYSICAL SOCIETY
APS encourages the redistribution of the materials included in this newspaper provided that attribution to the source is noted and the materials are not truncated or changed.
Editor: Taryn MacKinney
https://zbmath.org/?q=an:0594.17002
# zbMATH — the first resource for mathematics
$$n$$-Lie algebras. (English) Zbl 0594.17002
Translation from Sib. Mat. Zh. 26, No. 6(154), 126–140 (Russian) (1985; Zbl 0585.17002).
##### MSC:
17A42 Other $$n$$-ary compositions $$(n \ge 3)$$
17A36 Automorphisms, derivations, other operators (nonassociative rings and algebras)
17A65 Radical theory (nonassociative rings and algebras)
17B60 Lie (super)algebras associated with other structures (associative, Jordan, etc.)
Full Text:
##### References:
[1] A. G. Kurosh, "Free sums of multioperator algebras," Sib. Mat. Zh., No. 1, 62–70 (1960). · Zbl 0096.25304
[2] A. G. Kurosh, "Multioperator rings and algebras," Usp. Mat. Nauk, 24, No. 1, 3–15 (1969). · Zbl 0204.35701
[3] T. M. Baranovich and M. S. Burgin, "Linear Ω-algebras," Usp. Mat. Nauk, 30, No. 4, 61–106 (1975).
[4] B. A. Rozenfel’d, Spaces of Higher Dimensions [in Russian], Nauka, Moscow (1966).
[5] N. V. Efimov and E. R. Rozendorn, Linear Algebra and Multidimensional Geometry [in Russian], Nauka, Moscow (1974).
[6] A. G. Kurosh, Lectures in General Algebra [in Russian], Nauka, Moscow (1973). · Zbl 0271.08001
[7] N. Jacobson, Lie Algebras [Russian translation], Mir, Moscow (1969). · Zbl 0253.17013
[8] M. Goto and F. Grosshans, Semisimple Lie Algebras [Russian translation], Mir, Moscow (1981). · Zbl 0528.17001
[9] V. T. Filippov, "On one generalization of Lie algebras," Preprint No. 64, Mathematics Inst., Siberian Branch, Acad. Sci. of the USSR, Novosibirsk (1984).
https://www.physicsforums.com/threads/fun-divisibility-problem.175233/
# Fun Divisibility Problem
1. Jun 27, 2007
### Kummer
Show that for any n>1 we can construct a positive integer consisting of only 1's and 0's such that this integer is a multiple of n.
2. Jun 27, 2007
### matt grime
Interesting variation: show that if n is not divisible by 2 or 5, then you can choose this multiple to have precisely two 1s and the rest of the digits 0.
3. Jun 27, 2007
### NateTG
Can you show example answers for, say, n=3 or n=9?
I do wonder how interesting a 'minimal' condition would make this though.
4. Jun 27, 2007
### Kummer
A Big Hint
Given (n+1) integers, show that there exist two whose difference is divisible by n.
5. Jun 28, 2007
### NateTG
The problem is fairly easy...
$$k=\sum_{i=1}^{n} 10^{i\phi(n)}$$
where $\phi(n)$ is the Euler totient function, is divisible by $n$.
Alternatively, if we take $n^2-n+1$ integers then at least $n$ of them must have the same residue mod $n$, so their sum is equal to zero mod $n$, so if we take the first $n^2-n+1$ powers of 10, some sum will be equal to zero mod n, and that sum will clearly be zeros and ones.
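Both constructions in this post can be sanity-checked by brute force; a small sketch using a naive totient (fine for small n):

```python
from math import gcd

def totient(n):
    # Naive Euler totient: count of 1 <= k <= n coprime to n.
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# k = sum_{i=1}^{n} 10^(i*phi(n)) should be divisible by n.
for n in range(2, 101):
    k = sum(10 ** (i * totient(n)) for i in range(1, n + 1))
    assert k % n == 0, f"fails for n={n}"
print("verified for n = 2..100")
```

It passes even when n is divisible by 2 or 5, since each term 10^(i·φ(n)) then vanishes modulo the 2- and 5-parts of n.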
6. Jun 28, 2007
### matt grime
My special case was: if 10 and n are coprime, let r be the order of 10 in the units mod n; then 10^r - 1 is divisible by n.
7. Jun 28, 2007
### NateTG
But that's all 9's. Of course, this leads to a marginally more challenging version:
Show that every number has a (non-trivial) multiple which is some number of ones, followed by some number of zeros.
8. Jun 28, 2007
### matt grime
Duh. Sometimes I even surprise myself with my idiocy - we need -1 = some power of 10 mod p. So if n=p is prime, then we need that 10 has even order mod p - for then 1 has two square roots, 1 and -1.
9. Jun 28, 2007
### Kummer
My version is the easiest if you used my hint.
Theorem: Given (n+1) distinct integers, there exist two whose difference is divisible by n.
Proof: Each of these integers leaves a remainder between 0 and n-1 inclusive upon division by n. Since there are n possible remainders and (n+1) integers, there exist two with the same remainder by the Pigeonhole Principle. Their difference is then divisible by n.
Theorem: Given any positive integer n we can find a positive number consisting of only 1's and 0's which is divisible by n.
Proof: Construct the sequence of (n+1) integers:
1,11,111,....,111...1
By the first theorem, the difference of two of these is divisible by n. But when we subtract them we end up with a number consisting of only 1's and 0's. In fact, the 1's and 0's are grouped together, which answers the more difficult question stated by NateTG.
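The proof is constructive and translates directly into code; a sketch (the function name is ours):

```python
def ones_zeros_multiple(n):
    """Return a positive multiple of n whose decimal digits are all 1s and 0s,
    found via the pigeonhole argument on the repunits 1, 11, 111, ..."""
    seen = {}                            # residue mod n -> repunit
    repunit = 0
    for _ in range(n + 1):               # n+1 repunits force a repeat or a zero
        repunit = repunit * 10 + 1
        r = repunit % n
        if r == 0:
            return repunit               # a repunit itself works
        if r in seen:
            return repunit - seen[r]     # difference has the form 1...10...0
        seen[r] = repunit

for n in (3, 7, 12, 99, 625):
    m = ones_zeros_multiple(n)
    assert m % n == 0 and set(str(m)) <= {"0", "1"}
    print(n, m)
```

For n coprime to 10 a pure repunit may already appear (111111 for n = 7); otherwise the result is a block of 1s followed by 0s, matching the grouped form noted above.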
10. Jun 28, 2007
### NateTG
Kummer: How about showing that any number not divisible by 2 or 5 has a multiple that is all 1s?
11. Jun 28, 2007
### matt grime
Kummer, I think you're under the mistaken apprehension that we found your question hard....
12. Jun 28, 2007
### Kummer
The question is not hard. It is fun and I wanted to show it to you.
13. Jun 29, 2007
### matt grime
Here's another (but very similar) way of showing it:
consider the sequence 10^m mod n. This is periodic, by the pigeonhole principle. Therefore there are integers s, r with 10^s = 10^{s+kr} mod n for all natural numbers k. Just add up any n different such powers of 10.
14. Jun 29, 2007
### NateTG
Hmm, I wonder if there's some way to find the minimum such multiple, or the average minimum multiple.
15. Jun 30, 2007
### matt grime
I should have been more pedantic: the sequence is eventually periodic; it need not be periodic. The idea is that at some point you must have a repeated residue, and from then on it is periodic - if 10 were invertible, you could guarantee to run it backwards too. E.g., for n=100 the sequence goes 10, 0, 0, 0, 0, ...
https://preprint.impa.br/visualizar?id=2006
Preprint A118/2002
On uniform convexity, total convexity and convergence of the proximal point and outer Bregman projection algorithms in Banach spaces
Constantin Zalinescu | Butnariu, Dan | Iusem, Alfredo
Keywords: Total convexity | Uniform convexity | proximal point method | Bregman projections algorithm
In this paper we study and compare the notions of uniform convexity of functions at a point and on bounded sets with the notions of total convexity at a point and sequential consistency of functions, respectively. We establish connections between these concepts of strict convexity in infinite dimensional settings and use the connections to obtain improved convergence results concerning the outer Bregman projection algorithm for solving convex feasibility problems and the generalized proximal point algorithm for optimization in infinite dimensional spaces.
https://en.lntwww.de/Aufgaben:Exercise_3.5Z:_Antenna_Areas
# Exercise 3.5Z: Antenna Areas
Two antenna areas: $K$ and $G$
We first consider – as sketched in the image above – a receiving antenna serving a circular area $K$. It is assumed that this antenna can detect all signals incident at different angles $\alpha$ equally well:
• According to the sketch, the angle $\alpha$ refers to the $x$–axis.
• The value $\alpha = 0$ therefore means that the signal is moving towards the antenna in the direction of the negative $x$–axis.
Further we assume:
The range of values of the angle of incidence $\alpha$ with this definition is $-\pi < \alpha \le +\pi$.
• There are very many users in the coverage area whose positions $(x, y)$ are "statistically distributed" over the area $K$.
From subtask (5) we assume the coverage area $G$ outlined below.
Because of an obstacle, the $x$–coordinate of all participants must now be greater than $-R/2$.
• Also in the coverage area $G$ the subscribers would again be "statistically distributed".
### Questions
1
What is the PDF $f_\alpha(\alpha)$ for the area $K$? What PDF–value results for $\alpha = 0$?
$f_\alpha(\alpha = 0) \ = \$
2
Which of the two statements is correct? Note in particular also the asymmetric definition range of $-\pi < \alpha \le +\pi$.
• The expected value is ${\rm E}[\alpha] = 0$.
• The expected value is ${\rm E}[\alpha] \ne 0$.
3
What value results for the standard deviation of the random variable $\alpha$ in the area $K$?
$\sigma_\alpha \ = \$
4
What is the probability that in area $K$ the antenna locates a user at an angle between $\pm45^\circ$ ?
${\rm Pr}(-π/4 ≤ α ≤ +π/4) \ = \$ $\ \%$
5
Now let's consider the coverage area $G$. In which area $-\alpha_0 \le \alpha \le +\alpha_0$ does the PDF $f_\alpha(\alpha)$ have a constant value?
$\alpha_0 \ = \$ $\ \rm rad$ $\alpha_0 \ = \$ $\ \rm degrees$
6
What statements are now valid with respect to $f_\alpha(\alpha)$ in the range $|\alpha| > \alpha_0$ ?
• The PDF has the same course "outside" as "inside".
• The PDF is identically zero "outside".
• The PDF decreases towards the edges in this area.
• The PDF increases towards the edges in this area.
7
Calculate for the area $G$ the probability that the antenna locates a user at an angle between $\pm 45^\circ$ .
${\rm Pr}(-π/4 ≤ α ≤ +π/4) \ = \$ $\ \%$
8
What is now the PDF value at the position $\alpha = 0$?
$f_\alpha(\alpha = 0) \ = \$
### Solution
#### Solution
(1) The angle is uniformly distributed, and for the PDF in the range $-\pi < \alpha \le +\pi$ it holds:
$$f_\alpha(\alpha)={\rm 1}/({\rm 2\cdot \pi}).$$
• For $\alpha = 0$ this gives – as for every allowed value – the PDF value: $$f_\alpha(\alpha =0) \hspace{0.15cm}\underline{=0.159}.$$
(2) It holds ${\rm E}\big[\alpha\big] = 0$ ⇒ Answer 1.
• It has no effect that $\alpha = +\pi$ is allowed, but $\alpha = -\pi$ is excluded.
(3) For the variance of the angle of incidence $\alpha$ holds:
$$\sigma_{\alpha}^{2}=\int_{-\pi}^{\pi}\alpha^{2}\cdot f_{\alpha}(\alpha)\,{\rm d}\alpha=\frac{1}{2\pi}\cdot \frac{\alpha^{3}}{3}\Bigg|_{-\pi}^{\pi}=\frac{2\pi^{3}}{2\pi\cdot 3}=\frac{\pi^2}{3} = 3.29 \hspace{0.5cm}\Rightarrow \hspace{0.5cm}\sigma_{\alpha}\hspace{0.15cm}\underline{=1.814}.$$
(4) Since the given section of the circle is exactly one quarter of the total circle area, the probability we are looking for is
$${\rm Pr}(-π/4 ≤ α ≤ +π/4)\hspace{0.15cm}\underline{=25\%}.$$
The area $G$
(5) From simple geometrical considerations (right-angled triangle, marked dark blue in the adjacent sketch) one obtains the determining equation for the angle $\alpha_0$:
$$\cos(\pi-\alpha_{0}) = \frac{R/2}{R}=\frac{1}{2}\hspace{0.5cm}\Rightarrow\hspace{0.5cm}\pi-\alpha_{0}=\frac{\pi}{3} \hspace{0.2cm} (60^{\circ}).$$
• It follows $\alpha_0 = 2\pi/3\hspace{0.15cm}\underline{=2.094}$ rad.
• This corresponds to $\alpha_0 \hspace{0.15cm}\underline{=120^\circ}$.
(6) Suggested solution 3 is correct:
• The PDF $f_\alpha(\alpha)$ is, for a given angle $\alpha$, directly proportional to the distance $A$ between the antenna and the boundary line.
• For $\alpha = \pm 2\pi/3 = \pm 120^\circ$, $A = R$; for $\alpha = \pm \pi = \pm 180^\circ$, $A = R/2$.
• In between, the distance becomes successively smaller. This means: the PDF decreases towards the boundary.
• The distance follows the course:
$$A=\frac{R/2}{\cos(\pi-\alpha)}.$$
(7) The area $G$ can be calculated as the sum of the $240^\circ$ sector and the triangle formed by the vertices $\rm UVW$:
$$G=\frac{2}{3}\cdot R^{2}\pi \ + \ \frac{R}{2}\cdot R\cdot \sin(60^{\circ}) = R^{2}\pi\cdot\Big(\frac{2}{3}+\frac{\sqrt{3}}{4\pi}\Big).$$
• The probability we are looking for is given by the ratio of the areas $F$ and $G$ (see sketch):
$$\rm Pr(\rm -\pi/4\le\it\alpha\le+\rm\pi/4)=\frac{\it F}{\it G}=\frac{1/4}{2/3+{\rm sin(60^{\circ})}/({\rm 2\pi})}=\frac{\rm 0.25}{\rm 0.805}\hspace{0.15cm}\underline{=\rm 31.1\%}.$$
• Although the area $F$ is unchanged from point (4), the probability now becomes larger by a factor of $1/0.805 ≈ 1.242$ because the coverage area $G$ is smaller.
(8) Since the total PDF area is always equal to $1$ and the PDF decreases at the boundaries, the PDF must have a larger value in the range $|\alpha| < 2\pi/3$ than in (1).
• With the results from (1) and (7) holds:
$$f_{\alpha}(\alpha = 0)=\frac{1/(2\pi)}{2/3+{\rm sin(\rm 60^{\circ})}/({\rm 2\pi})} = \frac{\rm 1}{{\rm 4\cdot\pi}/{\rm 3}+\rm sin(60^{\circ})}\hspace{0.15cm}\underline{\approx \rm 0.198}.$$
• Like the probability in (7), the PDF value in the range $|\alpha| < 2\pi/3$ increases by a factor of $1.242$ as the coverage area shrinks.
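The closed-form results in (3), (5), (7) and (8) can be cross-checked numerically; a quick sketch (variable names are ours):

```python
from math import pi, sin, sqrt

# (3) Standard deviation of a uniform angle on (-pi, pi].
sigma = sqrt(pi ** 2 / 3)                     # ~1.814

# (5) Half-opening angle of the constant-PDF region for area G.
alpha0 = 2 * pi / 3                           # ~2.094 rad = 120 deg

# (7) Area G in units of pi*R^2, and the probability F/G.
g_norm = 2 / 3 + sin(pi / 3) / (2 * pi)       # ~0.805
prob = 0.25 / g_norm                          # ~0.311

# (8) PDF value at alpha = 0 for area G.
pdf0 = (1 / (2 * pi)) / g_norm                # ~0.198

print(f"sigma={sigma:.3f}  alpha0={alpha0:.3f}  Pr={prob:.3f}  f(0)={pdf0:.3f}")
```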
https://indico.physics.ucsd.edu/event/1/timetable/?view=standard
# LIDINE 2021: LIght Detection In Noble Elements
America/Los_Angeles
San Diego, California
Description
Launched in 2013, the LIDINE conference series has promoted discussion among members of the particle and nuclear physics community about detector technologies based on noble elements and their applications, such as dark matter, neutrino oscillations, solar and supernova neutrinos, coherent elastic neutrino-nucleus scattering, neutrinoless double-beta decay, the neutron EDM, and medical physics.
LIDINE 2021 is hosted by the University of California San Diego on Sep. 14–17, 2021. The conference is fully online with remote participation. Talk recordings are available on YouTube.
Zoom and gather.town links were sent to registered participants via email. Please note that you need to Sign In to Your Zoom Account in order to join the zoom conference.
Important dates:
• Deadline for Abstract Submission: July 15 (passed)
• Last Day for Registration: Aug 30 (extended to Sep.10)
• Conference Dates: Sep 14-17
Scientific Organizing Committee:
• Marcin Kuźniak (AstroCeNT, Poland)
• Qing Lin (Univ. of Science and Technology of China)
• Maria Elena Monzani (SLAC)
• Kaixuan Ni (UC San Diego) - Chair
• Roberto Santorelli (CIEMAT, Spain)
• Marco Selvi (INFN, Bologna)
• Andrzej Szelc (Univ. of Edinburgh)
• Matthew Szydagis (Univ. at Albany)
• Denver Whittington (Syracuse Univ.)
• Liang Yang (UC San Diego)
Participants
• A. Carolina Garcia B.
• Abigail Kopec
• Ako Jamil
• Alejandro Ramirez
• Aleksey Bolotnikov
• Alessandro Montanari
• Alexander Himmel
• Alexey Buzulutskov
• Alfredo Davide Ferella
• Amos Breskin
• Ander Simón Estévez
• Andre Steklain
• Andrea Messina
• Andreas Leonhardt
• Andrew Ames
• Andrew Erlandson
• Andrzej Szelc
• Angela Saa
• Anton Lukyashin
• Anyssa Navrer-Agasson
• Arindam Roy
• Asish Moharana
• Austin de St. Croix
• Azam Zabibi
• Ben Jones
• Benjamin Suh
• Brian Lenardo
• Bryan Ramson
• Burak Bilki
• Cameron Sylber
• Carlos Ourivio Escobar
• Carmen Carmona Benitez
• Chami Amarasinghe
• Chiara Filomena Lastoria
• Chloé Malbrunot
• Chris Stanford
• Christoph Vogl
• Christopher Tunnel
• Clarke Hardy
• Cristina Bernardes Monteiro
• Daniel Salvat
• Dante Totani
• David Caratelli
• David Gallacher
• Denver Whittington
• Diego Garcia-Gamez
• Diego González-Díaz
• Dmitry Rudik
• Dominick Cichon
• Douglas Bryman
• Edgar Sanchez Garcia
• Elena Gramellini
• Eli Mizrachi
• Emilija Pantic
• Emma Ellingwood
• Eric Dahl
• Erin Ewart
• Erin Yandel
• Ethan Bernard
• Ettore Segreto
• Evan Angelico
• Evan Shockley
• Fabian Kuger
• Fabrice Retiere
• Florian Jörg
• Franciole Marinho
• Francisco Martínez López
• Gary Sweeney
• Giovanni Volta
• Giulia Pieramico
• Giuseppe Salamanna
• Graham Smith
• Gustavo Valdiviesso
• Henrique Souza
• Hicham Benmansour
• Ibrahim Mirza
• Inés Gil-Botella
• Jacob Daughhetee
• Jacob Zettlemoyer
• Jacopo Dalmasson
• James Kingston
• Jaroslav Zalesak
• Jaroslaw Nowak
• Jianyang Qi
• Jiaoyang Li
• Joern Mahlstedt
• Jon Urheim
• Jonathan Haefner
• Joseph Howlett
• Jui-Jen(Ryan) Wang
• Kaitlin Hellier
• Kaixuan Ni
• Kajal Dixit
• Karen Navarro
• Kirsten McMichael
• Kyle Spurgeon
• Laura Paulucci
• Liang Yang
• Lior Arazi
• Logan Norman
• Luis Manzanillas
• Luisa Hoetzsch
• Marcin Kuźniak
• Marco Guarise
• Marco Rescigno
• Marco Selvi
• Maria Cecilia Queiroga Bazetto
• Maria Elena Monzani
• Marie-Cécile Piro
• Mario Schwarz
• Masato Kimura
• Matthew Bressler
• Matthew Heath
• Matthew Szydagis
• Melissa Medina Peregrina
• Micah Buuck
• Michael Febbraro
• Michael Keller
• Michael Poehlmann
• munera alrashed
• Mustafa Waqar Syed
• Neus López March
• Niamh Fearon
• Niccolo' Gallice
• Nicholas Carrara
• Nina Burlac
• Olivia Piazza
• Pablo Amedo
• Pablo Garcia Abia
• Paola Ferrario
• Patrick Green
• Peter Kammel
• Philippe Di Stefano
• Polina Abratenko
• Priyanka Kachru
• Qing Lin
• Quentin Hars
• Rafał Wojaczyński
• Raquel Castillo Fernandez
• Riccardo Biondi
• Richard Mischke
• Richard Saldanha
• Robert Vogl
• Roberto Santorelli
• Roger Romani
• Ryan Dorrill
• Ryan Linehan
• Ryan MacLellan
• Ryan Smith
• Sabrina Cheng
• Sabrina Sacerdoti
• Sally Shaw
• Sara Leardini
• Sarthak Choudhary
• Serena Di Pede
• Shingo Kazama
• Shuaixiang Zhang
• Teal Pershing
• Thomas Tsang
• Tommaso Giammaria
• Triveni Rao
• Tyler Anderson
• Tyler Erjavec
• Vahid Reza Shajiee Ghandeshtani
• Valerio D'Andrea
• Valerio Pia
• Vetri Velan
• Vicente Pesudo
• Vincent Basque
• Vinicius do Lago Pimentel
• Wei Mu
• Will Foreman
• Xaver Stribl
• Xin Xiang
• Yair Ifergan
• Youssef Abd Elmohaimen
• Yue Ma
• Yuehuan Wei
• Zelimir Djurcic
• Zohreh Parsa
• Tuesday, 14 September
• 07:00–09:15
Applications (1A)
Convener: Kaixuan Ni (UCSD)
• 07:00
Introduction 5m
Speaker: Kaixuan Ni (UCSD)
• 07:05
LArTPC for Neutrino Detection (Keynote) 40m
The Liquid Argon Time Projection Chamber (LArTPC) represents one of the most advanced experimental technologies for physics at the Intensity Frontier due to its full 3D imaging, excellent particle identification and precise calorimetric energy reconstruction. Reviewing current experimental efforts and potential technology upgrades, this talk summarizes the exciting physics we can explore using LArTPCs.
Speaker: Elena Gramellini (Fermi National Accelerator Laboratory)
• 07:45
Noble Element Detectors for Rare Event Searches (Keynote) 40m
Particle detectors with noble element targets have grown increasingly popular in rare event search physics experiments. The use of noble gases as the interaction medium enables high purity, large mass, and multi-channel signal detection in these experiments. When operated underground, noble element detectors have achieved extremely low background levels, and unprecedented sensitivity to rare interactions such as those arising from neutrinos and (as yet hypothetical) dark matter particles. In this presentation, I will review the benefits and challenges of rare event detection with noble elements, and discuss their applications in ongoing and planned experiments. In particular, I will focus on the experiments in which the interactions do not deposit significant energy, and discuss the R&D required to further lower the achievable energy thresholds in these experiments.
This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Speaker: Jingke Xu (Lawrence Livermore National Laboratory)
• 08:25
Status of the LZ Dark Matter Experiment 20m
The fundamental nature of our universe is still mostly unknown: 84% of the matter in the universe is dark and qualitatively different to everything we understand via the Standard Model. Terrestrial experiments devoted to detecting interactions of dark matter particles have not yet seen a convincing signal, but we may be on the cusp of discovery. The LUX-ZEPLIN experiment (LZ) will be the largest dark matter detector of its kind, consisting of a 7-tonne liquid xenon target, a 2-tonne active skin veto and a 17-tonne gadolinium-loaded liquid scintillator neutron veto. With science data taking beginning this year, LZ will probe theoretically well-motivated regions of dark matter phase space, reaching areas currently unexplored; the predicted spin-independent cross section sensitivity is 1.4×10⁻⁴⁸ cm² for a 40 GeV/c² mass WIMP. I will give an overview of the LZ experiment and its current status.
Speaker: Dr Sally Shaw (UCSB)
• 08:45
Sensitivity of the nEXO neutrinoless double beta decay experiment 15m
The nEXO experiment is a proposed next-generation search for the neutrinoless double beta decay ($0\nu\beta\beta$) of Xe-136. The detector will be a 5-tonne, monolithic liquid xenon TPC with a target enriched to 90% in the isotope of interest. In this talk, we will discuss a new evaluation of the experiment’s sensitivity to $0\nu\beta\beta$, given recent updates to the detector design and improved modeling of the signal readout. Specific improvements include detailed, data-driven modeling of signal development in the charge readout tiles (and subsequently improved modeling of the energy and position reconstruction), the development of new machine-learning analyses to improve signal/background separation, and an updated detector geometry. We will discuss how these changes lead to a projected 90% CL exclusion sensitivity on the $0\nu\beta\beta$ half-life of $1.35\times10^{28}$ yrs in nEXO, approximately two orders of magnitude beyond existing experimental limits.
Speaker: Brian Lenardo (Stanford University)
• 09:00
GammaTPC: a LAr TPC for MeV Gamma Rays 15m
I will describe GammaTPC, a proposed new LArTPC MeV gamma ray instrument concept. The MeV gamma ray sky is essentially unexplored due to the challenge of measuring multiple Compton scatters over a large detector volume. A TPC with low Z material has significant advantages for this measurement, and enables a relatively inexpensive detector with large mass and thus high sensitivity in the current era of sharply reduced costs of launching mass to space. A novel ultra low power, fine-grained charge readout is needed to match or exceed the imaging capabilities of currently proposed missions based on Si (or Ge) strip technology. Key developments are also needed for space deployment.
Speaker: Tom Shutt (SLAC)
• 09:15–10:00
Coffee Break and Social Time in Gather.Town
• 10:00–12:30
Light/Charge Response (1B)
Convener: Dr Ethan Bernard (Lawrence Livermore National Laboratory)
• 10:00
Scintillation and ionisation response of the ReD double-phase argon TPC 15m
The Recoil Directionality (ReD) experiment aims to investigate the directional sensitivity of argon-based Time Projection Chambers (TPCs) via columnar recombination to nuclear recoils in the energy range of interest (20–200 keV) for direct dark matter searches. Directional information is an essential requisite for correlating a candidate dark matter signal with the expected “wind” of dark matter from the Cygnus constellation. As part of the DarkSide programme, the ReD collaboration has designed and constructed a double-phase argon TPC and fully characterised its performance using various gamma-ray and neutron sources. The key novel feature of the ReD TPC is a readout system based on cryogenic Silicon Photomultipliers (SiPMs), which offer a higher photon detection efficiency relative to typical cryogenic photomultipliers. Here we report on measurements of the scintillation light yield and ionization gain performed over five months of continuous operation. We present a phenomenological parameterisation of the electron-ion recombination probability in liquid argon (LAr) that describes the anti-correlation between scintillation and ionisation signals measured by ReD as a function of drift field for electron recoils between 50–500 keV and fields up to 1000 V/cm. Finally, a likelihood analysis is performed in order to study the directional response of the ReD TPC to neutrons of known energy and direction.
Speaker: Marco Rescigno (INFN/Roma)
• 10:15
Study of Charge and Light Correlation in Electron Beam Energy Response in DUNE's prototype ProtoDUNE-SP LArTPC 15m
The Deep Underground Neutrino Experiment (DUNE) is a cutting-edge experiment for neutrino science and proton decay studies. The single-phase liquid argon prototype detector at CERN (ProtoDUNE-SP) is a crucial milestone for DUNE that will inform the construction and operation of the first, and possibly subsequent, 17-kt DUNE far detector modules. We have studied the response of the DUNE LArTPC prototype detector ProtoDUNE-SP to test-beam positrons via both ionization and scintillation signals. We searched for (anti)correlation between fluctuations of scintillation and ionization in liquid argon on an event-by-event basis. Preliminary results, to be presented at the conference, reveal anti-correlated statistical fluctuations between scintillation and ionization in liquid argon.
Speaker: Dr Zelimir Djurcic (Argonne National Lab)
• 10:30
LArQL: A phenomenological model for treating light and charge generation in liquid argon 15m
Experimental data show that both ionization charge and scintillation light in LAr depend on the deposited energy density (dE/dx) and the electric field (ξ). Moreover, free ionization charge and scintillation light are anticorrelated and complementary at a given (dE/dx, ξ) pair. We present a phenomenological model, called LArQL, that describes this anticorrelation between light and charge and its dependence on both the deposited energy and the applied electric field. The model is built with three parameters to be fitted to data: ionizations per unit energy, the ratio of excitations to ionizations, and the fraction of escaping electrons as a function of deposited energy. LArQL modifies the Birks (or Box) charge model in three respects: 1. at ξ = 0, escaping electrons are taken into account; 2. just above ξ = 0, field-extracted electrons are added; 3. at higher fields, the escaping-electron contribution tends to zero and the Birks model is recovered. Deviations from the standard Birks law are observed only for LArTPCs operating at low ξ and for heavily ionizing particles (stopping protons). The model gives a satisfactory description over the dE/dx and field ranges of interacting particles in LArTPCs and fits the available data well. Compilation of data sets and “global” fits are further interesting applications of the model.
Speaker: Franciole Marinho (UFSCar)
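As a rough numerical illustration of the recombination physics the abstract describes (not the fitted LArQL parameterization, whose parameters are not given here), a Birks-style charge survival fraction with a hypothetical escaping-electron floor at low field can be sketched as follows; A_B and K_B are example literature values for the ICARUS-style Birks form, and CHI0 is an invented placeholder:

```python
# Illustrative only: Birks-style recombination survival fraction in LAr,
# plus a hypothetical constant "escaping electron" floor so the charge
# yield does not vanish at zero field (the regime LArQL addresses).
A_B = 0.800        # Birks amplitude (example value)
K_B = 0.0486       # Birks constant, (kV/cm)(g/cm^2)/MeV (example value)
RHO_LAR = 1.396    # liquid argon density, g/cm^3
CHI0 = 3e-4        # hypothetical escaping-electron fraction (placeholder)

def survival_fraction(dEdx_mev_cm, field_kv_cm):
    """Fraction of ionization electrons escaping recombination,
    for dE/dx in MeV/cm and drift field in kV/cm."""
    if field_kv_cm <= 0.0:
        return CHI0  # toy zero-field behavior: only escaping electrons
    birks = A_B / (1.0 + (K_B / field_kv_cm) * (dEdx_mev_cm / RHO_LAR))
    return max(birks, CHI0)
```

With these example values, a minimum-ionizing track (dE/dx ≈ 2.1 MeV/cm) at 0.5 kV/cm retains roughly 70% of its ionization, while the zero-field limit is set entirely by the (here invented) escaping-electron term.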
• 10:45
Primary Scintillation in Ar-based Mixtures Aimed at Providing a T0 Signal in DUNE ND-GAr 15m
The usage of optical information is ubiquitous in neutrino detectors, essential for spill assignment, background suppression, and triggering. An independent and complete physics program at the ND-GAr component of DUNE’s near detector suite will undoubtedly benefit from this capability. We discuss in this presentation the prospects for simultaneous readout of ionization and scintillation signals in ND-GAr and the R&D currently performed in this direction. In particular, we will focus on the scintillation of Ar-based mixtures at high pressure.
Speaker: Dr Diego González-Díaz (University of Santiago de Compostela)
• 11:00
Predicting transport effects of scintillation light signals in large-scale liquid argon detectors 15m
Liquid argon is being employed as a detector medium in neutrino physics and dark matter searches. A recent push to expand the applications of scintillation light in Liquid Argon Time Projection Chamber neutrino detectors has necessitated the development of new methods of simulating this light. The presently available methods tend to be prohibitively slow or imprecise due to the combination of detector size and the amount of energy deposited by neutrino beam interactions. In this talk we present a semi-analytical model to predict the quantity of argon scintillation light observed by a light detector based only on the relative position between the scintillation site and the light detector. Our proposed method can be used to simulate light propagation in large-scale liquid argon detectors such as DUNE or SBND. This talk is based on Eur. Phys. J. C 81, 349 (2021), and will expand on the methods presented there by applying them to other detector media such as liquid xenon or xenon-doped liquid argon.
Speaker: Patrick Green (The University of Manchester)
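The geometric starting point of such a semi-analytical model, before the corrections for off-axis viewing angle, Rayleigh scattering and absorption developed in the cited paper, is simply the solid angle subtended by the photodetector. A minimal sketch, assuming for simplicity a disk-shaped detector viewed on-axis:

```python
import math

def disk_solid_angle(radius, distance):
    """Solid angle (sr) of a disk of the given radius, viewed on-axis
    from a point at the given distance from its center."""
    return 2.0 * math.pi * (1.0 - distance / math.hypot(distance, radius))

def geometric_photon_estimate(n_emitted, radius, distance):
    """Photons reaching the detector assuming isotropic emission and no
    scattering or absorption (exactly the effects the model corrects for)."""
    return n_emitted * disk_solid_angle(radius, distance) / (4.0 * math.pi)
```

In the far field this reduces to the familiar πr²/(4πd²) coverage fraction; the semi-analytical approach then multiplies such a geometric baseline by parameterized correction factors instead of tracing individual photons.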
• 11:15
Measuring the Rayleigh Scattering Length of Liquid Argon in ProtoDUNE-SP 15m
ProtoDUNE-SP was a single-phase liquid argon time projection chamber with an active volume of 700 tons, operating until 2020 as a prototype for the first far detector module of the Deep Underground Neutrino Experiment (DUNE). It was installed at the CERN Neutrino Platform and took particle-beam and cosmic-ray data over its two-year lifespan. Liquid argon scintillation light is still an active subject of study, with open questions about the impact of scattering and absorption in such a large detector. Here, we combine ProtoDUNE-SP cosmic-ray data with its large photon detector coverage and large drift volume to measure the Rayleigh scattering length of liquid argon. We also lay the groundwork for investigating Rayleigh scattering of scintillation light from xenon-doped liquid argon.
Speaker: Kyle Spurgeon (Syracuse University)
• 11:30
Preliminary studies towards spectroscopic-based particle discrimination in Ar 15m
Noble elements are the active medium of choice for several of the most important neutrino and dark matter experiments being built now. The foreseen next generation, besides going bigger, would benefit from any not-yet-exploited feature of this technology.
With this goal, we performed a time-resolved spectroscopic study of the VUV/UV scintillation of gaseous argon as a function of pressure and electric field, by means of a wavelength sensitive detector operated with different radioactive sources.
Our work conveys new evidence of distinctive features of the argon light which are in contrast with the general assumption that, for particle detection purposes, the scintillation can be considered to be largely monochromatic at 128 nm (second continuum).
The wavelength- and time-resolved analysis of the photon emission reveals that the dominant component of the argon scintillation during the first tens of ns is in the range [160, 325] nm. This light is consistent with the third continuum emission from highly charged argon ions/molecules. This component of the scintillation is field-independent up to 25 V/cm/bar and shows a very mild dependence on pressure in the range [1, 16] bar. The dynamics of the second continuum emission is dominated by the excimer formation time, whose variation as a function of pressure has been measured. Additionally, the time- and pressure-dependent features of electron-ion recombination, in the second continuum band, have been measured. This study opens new paths toward a novel particle identification technique based on the spectral information of noble-element scintillation light.
Speaker: Vicente Pesudo Fortes (CIEMAT)
• 11:45
Xenon doping of Liquid Argon in ProtoDUNE Single Phase 15m
The Deep Underground Neutrino Experiment (DUNE) will be the next generation long-baseline neutrino experiment. The far detector is designed as a complex of four LAr-TPC (Liquid Argon Time Projection Chamber) modules with 17 kt of LAr each. The development and validation of its technology is pursued through ProtoDUNE Single Phase (ProtoDUNE-SP), a 770 t LAr-TPC at the CERN Neutrino Platform. Crucial in DUNE is the Photon Detection System that will enable the triggering of non-beam events (proton decay, supernova neutrino bursts, solar neutrinos and BSM searches) and will improve the timing and calorimetry for neutrino beam events. Doping liquid argon (LAr) with xenon is a well-known technique to shift the light emitted by argon (128 nm) to a longer wavelength (175 nm) to ease its detection. The largest xenon doping test ever performed in a LArTPC was carried out in ProtoDUNE-SP. From February to May 2020, a gradually increasing amount of xenon was injected to compensate for the light loss due to air contamination. The response of such a large TPC (770 t of liquid argon and 440 t of fiducial mass) has been studied using the ProtoDUNE-SP Photon Detection System (PDS) and a dedicated setup installed before the run.
Here we introduce the Xenon doping technique as well as the specific detector components developed for this campaign and the results of the study with particular regard to the modification of the scintillation signal, the uniformity of the light collection and the efficiency of the wavelength-shifting mechanism.
Speaker: Niccolo' Gallice (Università degli Studi di Milano - INFN Milano)
• 12:00
Delayed electron emission in DarkSide-50 double phase liquid argon TPC 15m
Dual-phase noble gas Time Projection Chambers (TPCs) suffer from spurious electron background events in the lowest detectable energy region. This background is reported in liquid xenon TPCs and some of the causes are discussed in the literature. Understanding its origin is of paramount importance as this background sets the analysis threshold and affects the most sensitive part of the region of interest for low mass dark matter searches.
We report the spurious electron background events observed in the liquid argon TPC in the DarkSide-50 experiment. We found two different electron populations based on time correlation with preceding events: simultaneous emission and delayed emission. The majority of the former can be associated with photoionization effects. The mechanism of the latter is not clear, but our observations indicate that they are related to the impurity level in the TPC measured via the electron lifetime.
• 12:15
Long afterglow: physical and chemical effects of impurities in bulk media and on surfaces in Ar and Xe detectors. 15m
As the field of application of noble element detectors expands, it is becoming important to understand effects related to the presence of impurities. Here we present several examples of known energetic long-lived molecules which can be produced in detectors under the action of ionizing radiation and UV light.
This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-824442-DRAFT
Speaker: Sergey Pereverzev (Lawrence Livermore National Laboratory)
• 12:30–13:00
Coffee Break and Social Time in Gather.Town
• 13:00–15:30
Convener: Marcin Kuźniak (AstroCeNT / CAMK PAN)
• 13:00
Charge and Light Sensing in Noble Liquid TPCs (Keynote) 40m
Noble liquid time projection chambers (TPCs) are of interest for experiments in the quest to answer some of the most basic questions in both particle and nuclear physics. The charge and light produced in both LAr and LXe in response to particles of interest are at the limits of the detection sensitivity and accuracy of their respective charge-sensing electrodes and light sensors, such as silicon photomultipliers (SiPMs). Both will be applied on a large scale in terms of the sensitive areas covered, with fine segmentation and large numbers of signal channels. All TPCs under design or planned will depend critically on the use of low-noise electronics immersed in the cryogenic liquid (“cold electronics”) to be operated for a decade or longer. Valuable experience has been gained from the seven years of operation, so far, of the lowest-noise TPC to date, MicroBooNE. Some highlights of the experience with charge sensing in that TPC will be presented. Two proposed and planned experiments, the second 10-kton DUNE LAr module and the 5-ton nEXO LXe TPC, will present similar charge-sensing signal-to-noise challenges, but much more severe light-sensing challenges, due to the very large areas of SiPMs required. Methods to address the light-sensing challenge, namely achieving single-photoelectron sensitivity on an array of SiPMs where the avalanche charge signal is deposited on a capacitance of tens of nanofarads, will be described and the results presented.
Speaker: Veljko Radeka (Brookhaven National Laboratory )
• 13:40
The DUNE Vertical Drift Photon Detection System 15m
The Deep Underground Neutrino Experiment (DUNE) is a long-baseline neutrino experiment designed mainly to investigate oscillation parameters, supernova physics and proton decay. Its far detector, in South Dakota, USA, will be composed of four underground liquid argon time projection chamber (LArTPC) modules, which will detect a neutrino beam produced at Fermilab, 1300 km away, where a near detector will be in place. The second DUNE far detector module, Vertical Drift, will be a single-phase LArTPC with electron drift along the vertical axis in two volumes of 13.5 m x 6.5 m x 60 m separated by a cathode plane. Charge collection will be performed by two anode planes, each composed of stacked layers of perforated-PCB technology with electrode strips, placed at the top and bottom ends of the module. The photon detection system (PDS) will make use of large-size X-Arapuca tiles distributed over three detection planes: one horizontal arrangement of double-sided tiles installed on the high-voltage cathode plane, and two vertical planes placed on the longest cryostat membrane walls. A light active coverage of 14.8% over the cathode and 7.4% over the lateral walls should allow improvements in the low-energy physics range that can be probed in DUNE, especially regarding supernova neutrinos (~10 MeV). We present the initial characterization of the Vertical Drift PDS using a Monte Carlo simulation and preliminary studies of its reconstruction capabilities at the MeV scale. The information obtained with the PDS alone should allow determination of the neutrino interaction region with a precision of at least 65 cm for events with deposited energy above 5 MeV, and the deposited energy can be reconstructed with a precision better than 10%.
Speaker: Laura Paulucci, for the DUNE Collaboration
• 13:55
Wavelength-Shifting Performance of Polyethylene Naphthalate Films in a Liquid Argon Environment 15m
Liquid argon is commonly used as a detector medium for neutrino physics and dark matter experiments, in part due to its copious scintillation light production in response to excitation and ionization by charged particle interactions. As argon scintillation appears in the vacuum ultraviolet (VUV) regime and is difficult to detect, wavelength-shifting materials are typically used to convert VUV light to visible wavelengths more easily detectable by conventional means. Here we present recent measurements of the wavelength-shifting and optical properties of poly(ethylene naphthalate) (PEN), a proposed alternative to tetraphenyl butadiene (TPB), the most widely used wavelength shifter in argon-based experiments. The measurements were performed in a custom cryostat system with well-demonstrated geometric and response stability, with 128 nm argon scintillation light used to examine the light production and stability of various PEN-based reflective samples. The best-performing PEN-based test reflector was found to produce 34% as much visible light as a TPB-based reference sample, with widely varying levels of light production between different PEN-based test reflectors.
Speaker: Dr Ryan Dorrill (Illinois Institute of Technology)
• 14:10
Amorphous Selenium based VUV Photodetector for use in Liquid Noble Detectors 15m
Photon detectors sensitive to the vacuum ultraviolet (VUV) scintillation light produced in noble element particle detectors are an area of active research and development. In particular, the search for photoconductive materials capable of converting VUV light to charge could open the door to a potentially game-changing solution: an integrated charge and light (Q+L) sensor for large-area, pixel-based noble element detectors. In this talk, we present a study of amorphous selenium based photodetectors capable of operating at cryogenic temperatures and show the first measurements and characterizations made with these devices using a VUV source in a cryogenic environment.
Speaker: Jonathan Asaadi (University of Texas at Arlington)
• 14:25
Assembly and characterization of a large area VUV sensitive SiPM array for the nEXO TPC teststand at Stanford 15m
One of the important variables to optimize for a successful detection of neutrinoless double-beta decay is the energy resolution at its Q-value. nEXO is a proposed tonne-scale experiment aiming to search for this decay in the isotope Xe-136. It exploits the anticorrelation between ionization and scintillation of xenon to improve the ultimate energy resolution. A major factor affecting the resolution is the fluctuation of the charge and light ultimately collected.
In a time projection chamber (TPC) detector, the electron collection efficiency is usually close to one. Conversely, the collection of photons can vary dramatically depending, among other factors, on the overall light-sensitive area of the detector.
The Stanford liquid xenon TPC is a teststand planned to host the first large-area (~200 cm²) VUV SiPM array. The setup aims first to study the feasibility of such a system with dedicated readout electronics, and ultimately to investigate how improved light collection affects detector performance, an important prototyping step for nEXO.
In this talk, I will report on the status of the assembly of this photodetector array, along with characterization measurements and comparison with simulation.
Speaker: Jacopo Dalmasson (Stanford University)
• 14:40
Direct detection of argon scintillation light using VUV-sensitive silicon photomultipliers 15m
In recent decades, argon-based particle detectors have become a widely-used technology for numerous applications, including dark matter searches and neutrino measurements. For these detector designs, WaveLength Shifters (WLS) such as tetraphenylbutadiene (TPB) are used to shift argon's scintillation light from the hard UV (128 nm) to visible wavelengths. In particular, the use of PhotoMultiplier Tubes (PMTs) in argon-based detectors can require WLS for successful light detection and event reconstruction. Recently, Hamamatsu has produced a line of Silicon PhotoMultipliers (SiPMs) which show appreciable photon detection efficiencies down to 100 nm; deploying such photosensors in an argon-based detector could bypass the need for wavelength shifting materials. This talk will present the measurement ongoing at LLNL to demonstrate direct detection of argon scintillation light using Hamamatsu's VUV-sensitive SiPMs, as well as quantify their performance (gain, cross-talk, photon detection efficiency, etc.) for future deployment in argon-based detectors.
This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Speaker: Teal Pershing (Lawrence Livermore National Lab)
• 14:55
Overview and Current Status of the X-ARAPUCA Light Collection System in SBND 15m
The Short-Baseline Near Detector (SBND) is a Liquid-Argon Time Projection Chamber (LArTPC) currently under construction at Fermilab. SBND is one of three detectors that make up the Short Baseline Neutrino (SBN) program, which aims to investigate the excess of low-energy electron-like events observed by the MiniBooNE and LSND experiments, as well as perform high-precision neutrino-argon cross section measurements. SBND plans to use a novel light collection system which includes X-ARAPUCA devices, made up of a series of dichroic and wavelength-shifting filters that collect photons using SiPMs. This X-ARAPUCA system is also the light collection technology planned for the future DUNE experiment. This talk will give an overview of the X-ARAPUCA system in SBND as well as cover the current status of testing and implementation.
Speaker: Polina Abratenko
• Wednesday, 15 September
• 07:00–08:45
Light/Charge Response (2A)
• 07:00
Absolute experimental primary scintillation yield in Xe for electrons and alpha particles 15m
Xenon scintillation has been widely used in recent particle physics experiments. However, information on the primary scintillation yield in the absence of recombination is still scarce and dispersed. The mean energy required to produce a VUV scintillation photon (Wsc) in gaseous Xe has been measured to be in the range of 30–120 eV. Lower Wsc values are often reported for alpha particles than for electrons produced by gamma or x-rays, and this difference is still not fully understood.
We performed a systematic experimental study of the absolute primary scintillation yield in Xe at 1.2 bar, using a Gas Proportional Scintillation Counter. The simulation model of the detector's geometric efficiency was benchmarked through the primary and secondary scintillation produced at different distances from the photosensor. Wsc-values were obtained for gamma- and x-rays with energies in the range of 5.9-60 keV, and for 2-MeV alpha particles. No significant differences were found in the values for alpha particles and for electrons.
Acknowledgment
This work is funded by FEDER, through the Programa Operacional Factores de Competitividade — COMPETE and by National funds through FCT - Fundação para a Ciência e Tecnologia, Lisbon, Portugal, in the frame of project UID/FIS/04559/2020 (LIBPhys).
Speaker: Dr C. M. B. Monteiro (Department of Physics, University of Coimbra)
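By definition, Wsc is the mean energy spent per emitted VUV photon, so converting an energy deposit into an expected primary photon count is a one-line calculation; a trivial sketch, where the default Wsc of 34 eV is only an example value taken from inside the 30–120 eV range quoted above:

```python
def mean_primary_photons(energy_kev, w_sc_ev=34.0):
    """Mean number of primary scintillation photons for an energy deposit
    of energy_kev (keV), given Wsc in eV per photon. The default 34 eV is
    an illustrative value, not a measured result from this work."""
    return energy_kev * 1.0e3 / w_sc_ev
```

For example, a 5.9 keV x-ray (the ⁵⁵Fe line mentioned energy range) would yield on the order of a couple hundred primary photons at this example Wsc, which is why benchmarking the detector's geometric efficiency is essential to extract Wsc from small observed signals.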
• 07:15
Band- and Time-Resolved Scintillation for Alpha and Beta Particles in Xenon, as a Function of Pressure and Electric Field 15m
In standard conditions, xenon is the only gaseous element with a naturally occurring isotope undergoing double-beta decay. Hence exploiting a gaseous TPC as a tool for accurately reconstructing the topology of bb0nu events is very natural. When considering i) sensitivity to the lifetime of the decay and ii) the energy resolution needed to separate it from regular bb2nu events, a high-pressure electroluminescence TPC suggests itself. At the moment, the NEXT TPC is the most advanced implementation of this idea, relying deeply on common-wisdom assumptions such as the monochromaticity of both primary and secondary scintillation (around 172 nm), the lack of charge recombination for beta events, or the validity of density scalings for secondary scintillation. Looking into the future, the unambiguous elucidation of these phenomena becomes necessary in view of the upcoming ton- and multi-ton-scale experiments aimed at completely exploring the inverted hierarchy of neutrino masses.
Motivated by this, we conducted systematic measurements of S1 and S2 signals in a mini-TPC read out with wires, for varying pressures (1–10 bar), pressure-reduced electric fields (0–100 V/cm/bar) and wire voltages (up to 4 kV). Systematic measurements of the time constants and scintillation yields obtained under these conditions will be presented for alpha and beta sources in the VUV, UV and visible bands, and their impact on next-generation xenon TPCs discussed.
Speaker: Sara Leardini (IGFAE, Universidade de Santiago de Compostela)
• 07:30
Characterization of alpha and beta interactions in liquid xenon 15m
Experiments used for rare-event searches have seen an impressive increase of sensitivity over the past decades. Among the most sensitive detector types used in direct dark matter searches are dual-phase xenon time projection chambers (TPCs). To develop a signal model for such detectors, the response of the medium to interactions of different particle types needs to be known to a high accuracy. While several measurements for interactions of electrons, photons and neutrons were reported in the past, the literature is sparse when it comes to the interaction of alpha particles with liquid xenon.
The Heidelberg Xenon (HeXe) dual-phase xenon TPC has been used to study the relative scintillation and ionization yield of low-energy electrons from a $\mathrm{^{83m}Kr}$ source, as well as of alpha particles emitted by dissolved $\mathrm{^{222}Rn}$. Furthermore, a measurement of the electron drift velocity has been carried out. The different electric field configurations applied during the measurements were simulated with a detailed three-dimensional model of the TPC using COMSOL Multiphysics. The measurements span a wide range of fields, from 7.5 V/cm up to 1.64 kV/cm, with special emphasis on the low-field regime.
Speaker: Mr Florian Jörg (Max-Planck-Institut für Kernphysik)
• 07:45
Scintillation and optical properties of xenon-doped liquid argon 15m
Liquid argon (LAr) is widely employed as a scintillator in rare-event searches. Its optical and scintillation properties, as well as the impact of impurities, are being studied extensively by many groups world-wide. LAr scintillation light exhibits a main emission wavelength of 128 nm, which makes propagation and detection challenging because of short attenuation lengths and low quantum efficiencies of photo sensors in the VUV spectral range.
Previously, we have determined the attenuation length of purified liquid argon for its own scintillation light to be larger than 110 cm at a wavelength of 128 nm [1, 2]. Already in 1982 Kubota et al. [3] investigated the impact of xenon doping of LAr. Recently, we have studied the emission spectrum and time distribution dependent on the xenon concentration [4].
Here, we present our latest study of xenon-doped LAr with focus on the primary photon yield, the effective triplet lifetime and attenuation length, with xenon concentrations ranging from 3 ppm to 300 ppm. The scintillation and optical properties were measured simultaneously with the LLAMA [5] instrument operated inside SCARF, a 1 ton LAr test stand, and the xenon concentrations using IDEFIX, a dedicated mass spectrometer setup.
[1] A. Neumeier et al. “Attenuation of Vacuum Ultraviolet Light in Liquid Argon”. In: Eur. Phys. J. C72.10 (Oct. 2012).
[2] A. Neumeier et al. “Attenuation of Vacuum Ultraviolet Light in Pure and Xenon-Doped Liquid Argon — An Approach to an Assignment of the near-Infrared Emission from the Mixture”. In: EPL 111.1 (July 2015).
[3] Shinzou Kubota et al. “Liquid and Solid Argon, Krypton and Xenon Scintillators”. In: Nucl. Inst. Meth. Phys. Res. 196.1 (May 1982).
[4] A. Neumeier et al. “Intense Vacuum Ultraviolet and Infrared Scintillation of Liquid Ar-Xe Mixtures”. In: EPL 109.1 (Jan. 2015).
[5] Mario Schwarz et al. “Liquid Argon Instrumentation and Monitoring in LEGEND-200”. In: ANIMMA 2021 (July 2021)
Speaker: Christoph Vogl (Technical University of Munich)
• 08:00
Electronic versus nuclear recoil discrimination in liquid xenon with PIXeY 15m
The two-phase liquid/gas xenon time projection chamber is one of the leading technologies for dark matter direct detection. A crucial part of using this technology is being able to classify energy deposits as nuclear recoils (NR) or electronic recoils (ER). This allows upcoming experiments like XENONnT and LZ to mitigate ER backgrounds like Rn daughters and solar neutrinos. I will present an analysis of ER-NR discrimination, using data from the PIXeY (Particle Identification in Xenon at Yale) experiment. PIXeY was an R&D-scale xenon TPC that operated at drift fields between 50 and 2000 V/cm; its data allows us to study discrimination across this wide range of fields, as well as its dependence on recoil energy.
Speaker: Vetri Velan (University of California, Berkeley)
• 08:15
Scintillation yield from electronic and nuclear recoils in superfluid helium-4 15m
Superfluid He-4 is a promising target material for direct detection of low mass (< 1 GeV) dark matter. Signal channels for dark matter - nucleus interactions in superfluid helium include prompt photons, triplet excimers, rotons and phonons, but measurements of these signal strengths have yet to be performed for low energy nuclear recoils. A study of scintillation yield from electronic and nuclear recoils was carried out in superfluid He-4 at 1.75 K, with deposited energy in the range of 50-1000 keV. Scintillation from a 16 cm$^3$ volume of superfluid He-4 was read out by six PMTs immersed in the superfluid, each individually biased by a Cockcroft-Walton generator. Elastic scattering of 2.8 MeV neutrons (produced by a deuterium-deuterium neutron generator) from superfluid He-4, with a liquid organic scintillator module used as far-side detector, was used to determine the scintillation signal yield for a variety of nuclear recoil energies. Yields of both prompt and delayed scintillation components were measured and compared to a semi-empirical microphysical model. For comparison, Compton scattering of Cs-137 gamma rays from the superfluid He-4, with NaI scintillators used as far-side detectors, was used to determine the scintillation signal yield of electronic recoils.
Speaker: Ryan Smith (UC Berkeley)
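The mapping from neutron scattering angle to nuclear recoil energy that underlies measurements like this follows from non-relativistic two-body kinematics. A minimal sketch, approximating the He-4 nucleus as 4 neutron masses (the specific angles and energies below are illustrative, not values from the measurement):

```python
import math

def recoil_energy_kev(e_n_kev, theta_cm_deg, a_target=4):
    """Nuclear recoil energy from elastic neutron scattering.

    For a neutron of energy E_n scattering off a nucleus of mass number A
    (in units of the neutron mass), two-body kinematics gives
    E_R = E_n * 2*A / (1 + A)**2 * (1 - cos(theta_cm)),
    where theta_cm is the centre-of-mass scattering angle.
    """
    theta = math.radians(theta_cm_deg)
    return e_n_kev * 2 * a_target / (1 + a_target) ** 2 * (1 - math.cos(theta))

# 2.8 MeV neutrons on He-4: the maximum recoil (theta_cm = 180 deg)
# is 0.64 * E_n, i.e. about 1.8 MeV, so the 50-1000 keV range quoted
# in the abstract is accessible by selecting scattering angles.
e_max = recoil_energy_kev(2800.0, 180.0)
```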
• 08:30
Preliminary Tests of Dual-Phase Xenon-Doped Argon Mixtures in the CHILLAX Detector 15m
Utilizing xenon as a dopant at the $10^{-5}$ level in the gas region of a dual-phase argon time projection chamber (TPC) presents the enticing prospects of faster and longer wavelength electroluminescence response to ionization electrons. This light can then be directly detected by UV-sensitive SiPMs without the use of fluorescent wavelength-shifting materials. These advantages would improve sensitivity to low energy nuclear recoils, which kinetically favor argon over xenon; examples include coherent neutrino-nucleus scattering (CENNS), and the possibility of light WIMP dark matter interactions. However, operating such a detector imposes the novel technical requirement of cryogenic systems which must prevent xenon from partitioning between the liquid and gas phases. This has compelled the development of CoHerent Ionization Limits in Liquid Argon and Xenon (CHILLAX), a new xenon-doped, dual-phase argon detector. This talk will survey the physics implications of xenon-doped argon TPCs, and describe the special cooling and circulation systems in CHILLAX. It will conclude with an overview of the current status of the detector and recent cryogenic tests performed.
Prepared by LLNL under Contract DE-AC52-07NA27344. Release number LLNL-ABS-824388. The SCGSR program is administered by ORISE under Contract DE-SC0014664.
Speaker: Dr Ethan Bernard (Lawrence Livermore National Laboratory)
• 08:45 08:55
Conference Photo on Zoom 10m
• 09:00 10:00
Poster in Gather.Town
Convener: Denver Whittington (Syracuse University)
• 09:00
A Monte Carlo detector response model for solar neutrino absorption on 40Ar in DEAP-3600 1h
DEAP-3600 is a liquid argon (LAr) scintillation detector designed to search for Weakly Interacting Massive Particles (WIMPs) at SNOLAB. Beyond the search for dark matter, the DEAP-3600 detector is also intrinsically sensitive to charged current interactions on 40Ar from 8B solar neutrinos. Here we present the expected detector response to high energy delayed coincidence events resulting from neutrino absorption on the ~3.2 tonne target mass. We exploit the Marley event generator in conjunction with a full optical simulation of the DEAP-3600 detector using the Reactor Analysis Tool (RAT). Through the delayed coincidence channel, we expect an event yield of (4.69 ± 0.43) events in a 7.20 tonne-year exposure in DEAP-3600.
Speaker: Andrew Erlandson (Carleton University)
• 09:00
Measurement of the Light-Yield in MicroBooNE with Isolated Protons 1h
The MicroBooNE detector is an 85-ton active mass Liquid Argon Time Projection Chamber (LArTPC) located on-axis along the Booster Neutrino Beam (BNB). It serves as a part of the Short-Baseline Neutrino (SBN) program at Fermilab, which was primarily designed to address the eV-scale sterile neutrino anomaly. The primary signal channel in the LArTPC is ionisation, but the argon also emits large quantities of scintillation light. Prompt scintillation light in MicroBooNE is recorded with an array of 32 PhotoMultiplier Tubes (PMTs). The scintillation light is used to determine the timing of neutrino interactions and to reject cosmic-ray activity. We present a new method of measuring the light-yield using isolated proton events, which enables a position-dependent light-yield measurement to map the response of the detector across its volume. This method can be used to calibrate the light response in large-scale LArTPC detectors as well as to test assumptions used in simulating scintillation light.
Speaker: Jiaoyang Li (the University of Edinburgh)
• 09:00
Measurements of the X-Arapuca single-cell light detection efficiency 1h
The X-Arapuca (XA) supercell is the basic unit of the Photon Detection System (PDS) of the Deep Underground Neutrino Experiment (DUNE). In total, 1,500 X-Arapucas with approximate dimensions of 210 x 12 cm$^2$ will be installed on the anode planes of the liquid argon time projection chamber (LArTPC). In the XA light trap device, the liquid argon scintillation light (with wavelength around 127 nm) is absorbed by a thin layer of para-Terphenyl (pTP) coated on a dichroic filter window which constitutes its acceptance window. pTP re-emits photons around 350 nm, above the filter cutoff. The light which enters the XA is downshifted again by the inner wavelength shifter plate (WLS plate) to a wavelength around 430 nm. The cut-off of the dichroic filter is placed at 400 nm: this allows the pTP-shifted light to enter the X-Arapuca and to trap the fraction of photons which escape from total internal reflection in the WLS plate. The light is collected by an array of silicon photo-sensors (SiPM) coupled at the edges of the WLS plate. In this work, we present the first characterization, carried out in Brazil, of the photon detection efficiency of an X-Arapuca prototype measuring 10 x 7.5 cm$^2$, where the X-Arapuca was exposed to alpha particles, cosmic muons and gammas in liquid argon. Operating the SiPMs at +5 and +5.5 V over the breakdown voltage, an efficiency ranging from 2.2% to 2.3% and from 2.7% to 3.1% was found, respectively.
Speakers: Henrique Souza (University of Campinas), Ettore Segreto (Unicamp)
• 09:00
Role of a-Se device configuration in UV detection efficiency characterized by Time of Flight 1h
Amorphous selenium (a-Se) detectors have made significant advances in the last few decades, with applications in X-ray, UV, and visible light detection and potential for high energy particle detection. A vertical architecture, in which light passes through a transparent conductor to the a-Se layer, is common in commercial devices; however, a lateral structure, in which light passes only through the selenium positioned between two contacts, presents an opportunity for improved device performance and application. In this work we compare the performance of vertical devices with a-Se thicknesses of 5, 10, and 15 µm and lateral devices with electrode spacing of the same distances, using time of flight (TOF) and conversion efficiency, and introduce optical slits for lateral structures as a way to better perform carrier-specific TOF in a-Se devices.
Speaker: Kaitlin Hellier (University of California, Santa Cruz)
• 09:00
Scintillation light yield of solid Xenon 1h
Scintillation properties of rare gas materials are of primary importance for the next generation of dark matter and neutrino experiments. In addition to the liquid phase, solid crystals of these elements can be used in suitable detection schemes, but unfortunately only sporadic data regarding the luminescence properties of xenon at temperatures under its melting point are present in the literature. In this contribution, we present a study of the scintillation light yield of xenon in the solid phase at different temperatures in the range (30-160) K. This study has been carried out exploiting the light emission from solid xenon following the energy release of cosmic rays in the crystal.
Speaker: Marco Guarise (University of Ferrara)
• 09:00
Scintillation-based background rejection methods in large scale LArTPCs 1h
Large scale single-phase liquid argon time projection chambers (LArTPCs) such as DUNE can achieve MeV-scale thresholds, making them sensitive to solar and supernova neutrinos. In this energy region, low energy activity from radiological sources can be a dominant background. LArTPCs can make use of the scintillation light to discriminate against radiological backgrounds. This talk will present rejection methods exploiting the scintillation light in LArTPCs. We studied a range of detector configurations and rejection approaches based on the properties of the light (pulse shape discrimination) and/or on the detector design.
Speaker: Anyssa Navrer-Agasson (The University of Manchester)
• 09:00
Simulating and Validating the X-ARAPUCA light sensors 1h
Brazil's native people have an ingenious trap to catch birds called arapuca. Our ARAPUCA is a light trap that increases the collection area of regular SiPMs by making use of wavelength shifters and a dichroic filter. Its latest iteration, the X-ARAPUCA, will be used alongside PMTs in the Short-Baseline Near Detector (SBND) and as the standalone photon detector in the Deep Underground Neutrino Experiment (DUNE). SBND is part of the three-detector Short-Baseline Neutrino (SBN) Program, which searches for a possible sterile neutrino in short-baseline oscillations (with SBND located at 100m from the source), while DUNE will look for signs of CP-violation in long-baseline (1300km) oscillations, among other items in a rich physics program. Contributing to both experiments, we developed detailed simulations of each optical element, from which we highlight the dichroic filter and the wavelength shifters. While the backbone of the simulation uses Geant4, these two elements were implemented from scratch to ensure they would represent our device. The models were individually validated using dedicated characterization data and the resulting simulation reproduces the physical device behavior without the need for a back-fitting calibration. In this presentation we will elaborate on the computer models and the validation processes for each element and compare the resulting full simulation with the X-ARAPUCA's most recent tests.
Speaker: Gustavo Valdiviesso (Universidade Federal de Alfenas Unifal-MG)
• 10:00 12:30
Applications (2B)
Convener: Liang Yang (UC San Diego)
• 10:00
The liquid argon scintillation detection system for LEGEND-200 15m
The LEGEND-200 experiment at LNGS will perform a quasi-background-free search for neutrinoless double-beta decay in $^{76}$Ge. Bare high-purity Ge detectors enriched in the isotope $^{76}$Ge are operated in liquid argon, which serves as a coolant and active shielding. Background events are identified by their interaction topologies. The key to a background-free search for $0\nu\beta\beta$ decays is the identification of events which deposit energy simultaneously in the germanium detectors and in the liquid argon. The latter interactions are identified by scintillation light at 128 nm wavelength. The LAr instrumentation consists of two concentric, wavelength-shifting green fiber barrels coated with TPB that shift the photons from the primary LAr light at 128 nm to the green. The photons are read out with arrays of SiPMs at the ends of the fibers. Due to the close proximity of the LAr instrumentation to the Ge detectors, strong restrictions apply with respect to the radioactivity of the components. Many commercially available components (e.g., packaging of SiPMs) exceed this limitation. This talk will present the design, construction, and first performance of a wavelength-shifting, ultrahigh-purity LAr scintillation detection system which will be operated in the LEGEND-200 experiment.
Speaker: Stefan Schönert (TUM)
• 10:15
Status and prospects of the NEXT experiment 15m
NEXT is a staged experimental program aiming at the detection of neutrinoless double beta ($\beta\beta0\nu$) decay in $^{136}$Xe using successive generations of high-pressure gaseous xenon time projection chambers. The collaboration is presently concluding four years of operation of NEXT-White, a radiopure 50-cm diameter and length TPC operated with enriched xenon at 10 bar, at the Laboratorio Subterráneo de Canfranc. NEXT-White has successfully demonstrated the two key features of the technology, namely excellent energy resolution (1% FWHM at the Q-value of the decay) and highly effective topological-based background discrimination, and served to provide an independent measurement of the $^{136}$Xe two-neutrino double beta decay half-life. The next stage of the program is NEXT-100, planned for construction in 2022, which will be twice as large as NEXT-White, and operated with 97 kg of enriched xenon at 15 bar, with half-life sensitivity on the scale of $10^{26}$ y. NEXT-100 will be superseded by a tonne-scale detector with a sensitivity of $10^{27}$ y around 2026. Parallel to the incremental increase in TPC size, the collaboration pursues an extensive R&D program to develop the capability of detecting the $^{136}$Ba daughter resulting from $^{136}$Xe double beta decays inside a running TPC using single molecule fluorescence imaging. This effort can lead to a background-free search for $\beta\beta0\nu$ decay on the tonne-scale, with half-life sensitivities close to $10^{28}$ y. This talk will present the status of the program, summarizing our experience with the NEXT-White TPC, provide an overview of the barium-tagging activities, and outline the future steps of the experiment.
Speaker: Dr Lior Arazi (Unit of Nuclear Engineering, Ben-Gurion University, Beer-Sheva, Israel)
• 10:30
Status and perspectives of the PETALO project 15m
PETALO (Positron Emission Tof Apparatus with Liquid xenOn) is a novel concept for positron emission tomography scanners, which uses liquid xenon as a scintillation medium and silicon photomultipliers as a readout. The large scintillation yield and the fast scintillation time of liquid xenon make it an excellent candidate for PET scanners with Time-of-Flight measurements. In this talk I will review the status of the PETALO project, which is now commissioning the first prototype, devoted to demonstrating the potential of the concept, measuring the energy and time resolution and testing technical solutions for a complete ring. The prototype consists of an aluminum box filled with liquid xenon, with two arrays of SiPMs on opposite sides facing the xenon. A beta+ emitter source generating 511-keV pairs of gammas is placed in a central port and the SiPMs record the scintillation light produced by the gamma interactions, allowing for the reconstruction of the position, the energy and the time of the interactions. Finally, I will discuss the potential of a total-body PET based on this technology.
Speaker: Paola Ferrario (Donostia International Physics Center (DIPC))
• 10:45
First Results from the Light only Liquid Xenon experiment 15m
This talk will present results from the first liquid xenon dataset of the Light only Liquid Xenon (LoLX) experiment, collected in June of 2021. LoLX aims to investigate both scintillation and Cherenkov light emission in liquid xenon for applications in rare event searches and PET. The detector consists of 24 Hamamatsu VUV4 Silicon Photomultipliers (SiPM) arranged in an octagonal cylinder. A needle holds a Strontium 90 beta source in the detector center, which produces the scintillation and Cherenkov light. Longpass optical filters are placed in front of 22 SiPMs to separate the less abundant Cherenkov light from the VUV scintillation light. In addition to studying light production in liquid xenon, LoLX also aims to characterize external cross-talk (eXT) between SiPMs at various geometries. eXT occurs when IR photons produced during a charge avalanche in one SiPM trigger avalanches in a different SiPM. This acts as correlated noise across channels, thus characterizing eXT is crucial for rare event searches using large arrays of SiPMs. Future experimental phases of LoLX will upgrade the SiPM and digitizer scheme to attain sub nanosecond timing resolution with the goal of performing temporal separation of the Cherenkov and scintillation light, which may lead to improving time-of-flight PET imaging.
Speaker: Austin de St. Croix (Queen's University/TRIUMF)
• 11:00
Xenon-Doped Liquid Argon Scintillation for Positron Emission Tomography 15m
Positron Emission Tomography (PET) is used to observe metabolic processes within patients. It works by reconstructing the annihilation origin of incident gamma rays produced by a positron emitting tracer. However, inefficiencies of current PET technology, such as the use of photomultiplier tubes, can result in poor imaging. In addition, current PET scanners possess a small field of view which limits the sensitivity. We propose 3Dπ: a full body, Time of Flight (TOF) PET scanner using Silicon Photomultipliers (SiPM) coupled with a Xenon-doped Liquid Argon (LAr+Xe) scintillator.
We simulated this design using Geant4 while following the National Electrical Manufacturers Association's evaluation tests for performance assessment. We will present results that highlight a 200-fold increase in sensitivity and spatial resolution comparable to commercial PET scanners, producing PET images from 15-30 second scans, far faster than traditional 30-35 minute scans. Further studies will involve optimizing the layer thickness of LAr+Xe. Moreover, ionization electrons can produce Cherenkov radiation along with the LAr+Xe scintillation light.
We will discuss strategies to characterize this other signal in Geant4 to improve the timing resolution of our scanner. With the LAr+Xe scintillator and SiPMs of 3Dπ, we can use the precise TOF info of gamma rays to improve the localization of individual positron annihilations, and as one example benefit, provide low-dose PET scans for patients who may be at high risk for exposure to radiation.
Speaker: Alejandro Ramirez (University of Houston)
• 11:15
HeRALD: A Superfluid Helium Sub-GeV Dark Matter Detector 15m
HeRALD, an experiment within the SPICE/HeRALD collaboration, is a proposed sub-GeV scale dark matter detector based on a superfluid helium-4 target monitored by a Transition Edge Sensor-based readout system. Several promising readout channels exist, including through monitoring quasiparticle (phonon and roton) and atomic (singlet photon and triplet) excitations. The quasiparticle channel (measured through the detection of quantum-evaporated helium atoms) is of particular interest for low mass dark matter direct detection, with sensitivity to DM as light as 1 MeV. I will describe the proposed experiment and the potential reach of both shovel-ready and future detectors, as well as recent R&D progress.
Speaker: Roger Romani (UC Berkeley)
• 11:30
A 10-kg LAr bubble chamber for sub-keV nuclear recoil detection -- Update and Calibration Strategies 15m
The Scintillating Bubble Chamber (SBC) Collaboration is developing noble liquid bubble chambers for the detection of sub-keV nuclear recoils, enabling both high-exposure GeV-scale dark matter searches and CEvNS measurements using reactor neutrinos. Nuclear recoils (NRs) in these chambers produce both a single bubble and a coincident flash of scintillation light, while electron-recoil (ER) backgrounds produce scintillation only. The physics reach of these chambers depends critically on what NR bubble nucleation threshold can be achieved while remaining ER-blind. This threshold will be explored with SBC’s first physics-scale device: a 10-kg LAr bubble chamber, now under construction, that will operate in the MINOS tunnel at Fermilab. I will give an update on the status of this chamber and describe the calibration strategies we will use to measure the chamber’s sensitivity to nuclear recoils with energies down to 100 eV.
Speaker: Eric Dahl (Northwestern University)
• 11:45
Precision CEvNS measurements with liquid argon scintillators for COHERENT 15m
The COHERENT collaboration has deployed a suite of low-threshold detectors in a low-background corridor of the ORNL Spallation Neutron Source to measure coherent elastic neutrino nucleus scattering (CEvNS) on an array of nuclear targets employing different technologies. This has produced CEvNS cross section measurements with CsI and liquid argon scintillator detectors. These measurements confirm the $N^2$-dependence predicted by the Standard Model and have enabled searches for non-standard interactions and accelerator-produced dark matter. We aim to construct and deploy a ton-scale liquid argon detector to provide precision measurements of the CEvNS cross section, improve our search for dark matter, and investigate charged-current interactions in argon. In this talk, we will present an overview of the COHERENT experiment with a focus on our liquid argon program.
Speaker: Daniel Joseph Salvat (Indiana University)
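The $N^2$-dependence confirmed by these measurements comes from the coherent weak charge $Q_W = N - (1 - 4\sin^2\theta_W)Z$, which is dominated by the neutron number since $4\sin^2\theta_W \approx 1$. A hedged sketch in the full-coherence limit, ignoring the nuclear form factor (the 30 MeV neutrino energy is an illustrative stopped-pion-scale value, not a COHERENT result):

```python
import math

G_F = 1.1663787e-5       # Fermi constant, GeV^-2
SIN2_THETA_W = 0.2312    # weak mixing angle (approximate low-energy value)
HBARC2_CM2 = 0.3894e-27  # (hbar*c)^2 in GeV^2 * cm^2, to convert to cm^2

def cevns_total_xsec_cm2(e_nu_gev, n_neutrons, z_protons):
    """Total CEvNS cross section in the full-coherence limit
    (no nuclear form factor): sigma ~ G_F^2/(4*pi) * Q_W^2 * E_nu^2,
    with weak charge Q_W = N - (1 - 4*sin^2(theta_W)) * Z."""
    q_w = n_neutrons - (1 - 4 * SIN2_THETA_W) * z_protons
    return G_F**2 / (4 * math.pi) * q_w**2 * e_nu_gev**2 * HBARC2_CM2

# N^2 dominance: argon (Z=18, N=22) vs caesium (Z=55, N=78) at 30 MeV
sigma_ar = cevns_total_xsec_cm2(0.030, 22, 18)
sigma_cs = cevns_total_xsec_cm2(0.030, 78, 55)
```

The cross-section ratio between heavy and light targets is essentially $(N_{\mathrm{Cs}}/N_{\mathrm{Ar}})^2$, which is why multi-target measurements test the Standard Model prediction directly.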
• 12:00
Low Energy Physics Sensitivity in a Radiopure DUNE-like Detector 15m
With radiopurity controls and small design modifications, a kton-scale liquid argon time projection chamber similar to DUNE could be used for enhanced low energy physics searches. This includes improved sensitivity to supernova and solar neutrinos, and even weakly interacting massive particle dark matter, and a possibility of $0\nu\beta\beta$ detection with large $^{136}$Xe doping. This talk will present initial simulation studies to optimize the design and evaluate physics sensitivities. It will also discuss the tools being developed to support a large-scale radiopurity assay campaign necessary to construct such a detector.
Speaker: Eric Church (PNNL)
• 12:15
Searches for new physics with a stopped-pion source at the Fermilab accelerator complex 15m
The PIP-II complex at Fermilab is slated for operation later this decade and can support a MW-class $\mathcal{O}$(1 GeV) proton fixed-target program in addition to the beam required for DUNE. Proton collisions with a fixed target could produce a bright stopped-pion neutrino source. The addition of an accumulator ring allows for a pulsed neutrino source with a high duty factor to suppress backgrounds. The neutrino source supports a program of using coherent elastic neutrino-nucleus scattering (CEvNS) to search for new physics, such as sensitive searches for accelerator-produced light dark matter and active-to-sterile neutrino oscillations. A key feature of a program at the Fermilab complex is the ability to design the detector hall specifically for HEP physics searches. In this talk I will present the PIP-II project and upgrades towards a stopped-pion neutrino source at Fermilab and studies showing the sensitivities of a $\mathcal{O}$(100 ton) liquid argon scintillation detector with a standard PMT-based light detection system to the physics accessible with this source.
Speaker: Jacob Zettlemoyer (Fermilab)
• 12:30 13:00
Coffee Break and Social Time in Gather.Town
• 13:00 15:00
Signal Reconstruction (2C)
Convener: Matthew Szydagis (University at Albany SUNY)
• 13:00
Physics Modeling of Xenon and Argon detectors with the Noble Element Simulation Technique (NEST) 15m
The Noble Element Simulation Technique (NEST) is a C++ package with optional GEANT4 integration and a Python equivalent (nestpy) that accurately simulates the scintillation, ionization, and electroluminescence processes in xenon and argon. Using a combination of empirical and first-principles methods, NEST models the intrinsic physics of noble detectors while maintaining a format that is accessible and customizable for users. I will present key results including energy resolution and light and charge yields of various interactions with noble elements. I will also discuss recent and future updates to the code including further development of the argon model, improvements to the ER model, and new modeling to describe the W-value discrepancy between NEST and the EXO-200 results.
Speaker: Kirsten McMichael (Rensselaer Polytechnic Institute)
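NEST's models are far more detailed, but the basic bookkeeping it performs — an energy deposit producing roughly $E/W$ quanta, partitioned between scintillation photons and ionization electrons, with recombination shifting quanta from charge to light — can be sketched as a toy model. This is not the nestpy API; the exciton-ion ratio and recombination fraction below are illustrative placeholders, not NEST's parametrizations:

```python
W_EV = 13.7  # approximate mean energy per quantum in liquid xenon, eV

def mean_yields(energy_kev, exciton_ion_ratio=0.06, recomb_fraction=0.3):
    """Toy quanta bookkeeping: a deposit of energy E creates E/W total
    quanta, split into excitons and electron-ion pairs; the fraction of
    ions that recombine converts ionisation into additional photons, so
    photons + electrons is conserved at E/W. Parameter values here are
    placeholders, not NEST's field- and energy-dependent models."""
    n_quanta = energy_kev * 1000.0 / W_EV
    n_ions = n_quanta / (1.0 + exciton_ion_ratio)
    n_excitons = n_quanta - n_ions
    n_electrons = n_ions * (1.0 - recomb_fraction)
    n_photons = n_excitons + n_ions * recomb_fraction
    return n_photons, n_electrons

# mean light and charge for a 10 keV electronic recoil (toy values)
ph, el = mean_yields(10.0)
```

This anticorrelated split between light and charge is also why the W-value mentioned in the abstract matters: it sets the total quanta budget against which both channels are calibrated.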
• 13:15
A first-principles approach to electron-ion recombination in liquid xenon 15m
A simulation was developed to explore the micro-physics of electron-ion recombination and recombination fluctuations in liquid xenon detectors. Generating primary mono-energetic particles between 100 eV and 10 keV with drift fields of 50 V/cm to 2000 V/cm, the model characterizes recombination events and predicts ionization yields. Of particular interest, the simulation utilizes realistic electron transport kinematics and the Cohen-Lekner ‘hot electron’ framework to describe the reduced influence of the liquid structure of xenon on the scattering of low energy electrons. Results obtained can be useful in the search for dark matter candidates and neutrino detection.
Speaker: Olivia Piazza
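Micro-physics simulations like this are commonly compared against parametric descriptions such as the Thomas-Imel box model, in which the recombination probability of an ionisation electron grows with the local ion density and falls with the drift field through a single lumped parameter. A sketch, with arbitrary illustrative parameter values:

```python
import math

def box_model_recombination(n_ions, zeta):
    """Thomas-Imel box model recombination probability:
    r = 1 - ln(1 + xi)/xi, with xi = n_ions * zeta / 4.
    zeta lumps together diffusion, mobility and the drift field,
    and shrinks as the field grows; its value here is a free,
    illustrative parameter, not a fitted liquid-xenon constant."""
    xi = n_ions * zeta / 4.0
    return 1.0 - math.log(1.0 + xi) / xi

# denser ionisation or a weaker drift field (larger zeta)
# gives more recombination, i.e. more light and less charge
r_low_field = box_model_recombination(100, 0.2)
r_high_field = box_model_recombination(100, 0.02)
```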
• 13:30
Characterizing electroluminescence region of the NEXT high pressure gaseous xenon TPC with Kr gas 15m
The NEXT experiment is a neutrino physics program searching for neutrinoless double beta decay using high pressure gaseous xenon time projection chambers (HPGXeTPC). The HPGXeTPC technology offers several advantages, including excellent energy resolution, topological event discrimination, and calibration with gaseous, radioactive krypton. We will discuss the power of this calibration technique for characterizing the electroluminescence region, where S2 signals are produced. We discuss the impact of variation in the voltage on light production and event detection, as well as demonstrating capability to extract structural information about the EL gap from Kr calibration data. We will furthermore show an improved understanding of diffusion related effects in our detector.
Speaker: Jonathan Haefner (Harvard University)
• 13:45
Salting as a bias mitigation technique in LUX-ZEPLIN (LZ) 15m
As LZ prepares to push the limits of known physics and improve our understanding of the nature of dark matter, it is important to ensure that these gains are not mistakenly influenced by human biases towards achieving such results. Such biases often appear in the process of analysis when unconsciously or consciously expecting certain outcomes. Many techniques for avoiding these biases have been employed over the years including blinding and using hidden parameters. LZ will be using a method known as salting, in which fake signal events are injected into our data stream and removed after analysis is complete. In this presentation I will explain the historical motivations for pursuing bias mitigation, the process through which LZ salts its data, and some results after salting LZ’s simulated mock data challenges.
Speaker: Tyler Anderson (SLAC)
• 14:00
Lightmap reconstruction in nEXO with an internal xenon 127 source 15m
The nEXO experiment is a planned ton-scale liquid xenon time projection chamber (TPC) designed to search for neutrinoless double beta decay (0vBB) with a half-life sensitivity beyond 10$^{28}$ years. Optimal energy resolution in nEXO requires the precise reconstruction of the scintillation light signal, corrected by the position- and time-dependent light collection efficiency (or “lightmap”) throughout the active volume. An injected xenon 127 source is being considered for the lightmap reconstruction as it allows for in-situ calibrations of the light response, particularly in the center of the TPC where the use of external sources is limited by the attenuation of gammas in the liquid xenon. Multiple potential techniques for lightmap reconstruction are being explored, including a neural net and a kernel smoothing algorithm. This talk will present projections of the lightmap reconstruction capability from simulated xenon 127 decays and a discussion of the techniques involved.
Speaker: Clarke Hardy (Stanford University)
• 14:15
Optical Modeling and Position Reconstruction for DarkSide-20k 15m
DarkSide-20k is a next-generation direct dark matter search experiment under construction at the Gran Sasso National Laboratory (LNGS) in Italy. The core of the detector is a two-phase liquid argon time projection chamber designed to probe WIMP interactions down to the neutrino floor. To ensure the 200 ton-year exposure has zero instrumental backgrounds, low-radioactivity underground argon is used as the detector medium. Backgrounds from detector surfaces are primarily rejected through fiducialization, which requires accurate reconstruction of event vertices. Monte Carlo simulations of interactions within the detector have been used to study the position reconstruction resolution of DarkSide-20k. In this talk, I present the detector optical model and discuss the performance of machine learning-based position reconstruction algorithms on simulated DarkSide-20k datasets.
Speaker: Michael Poehlmann (University of California, Davis)
• 14:30
Measurement of the Scintillation Light Triggering Efficiency in MicroBooNE 15m
The MicroBooNE Liquid Argon Time Projection Chamber (LArTPC) has been collecting data since 2015 as part of the Short-Baseline Neutrino (SBN) program using the Booster Neutrino Beam (BNB) at Fermilab. Its primary physics goal is to contribute to addressing the elusive eV-scale sterile neutrino anomaly. MicroBooNE records and utilises both the ionisation charge and scintillation light produced inside the TPC to reconstruct its events. The latter is collected through a plane of PhotoMultiplier Tubes (PMTs) and is used for accurate event timing and cosmic muon rejection. A data-driven method to estimate the scintillation light triggering efficiency from prompt scintillation light for low energy cosmic muons will be presented. Results obtained from this method are crucial for many analyses that aim to measure low energy interactions, and inform triggering strategies in LArTPCs in the SBN and future DUNE programmes.
Speaker: Vincent Basque (Fermilab)
• 14:45
Measurement of the total neutron cross section on argon in the 30 to 70 keV energy range 15m
The use of liquid argon as a detection and shielding medium for neutrino and dark matter experiments has made the precise knowledge of the cross section for neutron interactions on argon an important design and operational parameter. Nevertheless, there has been a lingering discrepancy between the total cross section in the 30-70 keV region given in the Evaluated Nuclear Data File (ENDF) and the single measurement done in the 1990s by an experiment optimized for higher energies. This discrepancy is significant in that the former predicts a large negative resonance in the region while the measurement did not report such a feature, giving rise to significant uncertainty in the penetration depth of neutrons through liquid argon. This talk presents results from the Argon Resonant Transport Interaction Experiment (ARTIE) at the Los Alamos Neutron Science Center (LANSCE), the first dedicated experiment optimized for this energy region. The ARTIE measurement of the total cross section as a function of energy confirms the existence of a negative resonance in this region, but one not quite as deep as the ENDF prediction.
Speaker: Tyler Erjavec (University of California Davis)
• Thursday, 16 September
• 07:00 09:30
Detector Techniques (3A)
Convener: Roberto Santorelli (CIEMAT)
• 07:00
Mind the (gas) gap: a single-phase liquid xenon TPC 15m
One of the most significant challenges for future dual-phase xenon TPCs is achieving the high, uniform electric field needed in the gas layer. One solution is to avoid using gaseous xenon and instead to create the secondary scintillation within the liquid itself, in a single-phase xenon TPC. Within micrometres of thin wires, the electric field is high enough to enable VUV scintillation. Avoiding the gas gap can provide a workaround to some of the technical challenges facing larger TPCs. At the same time, it opens up new detector design possibilities by relaxing the requirement that electrons are drifted upwards and facilitates analysis based on counting electrons. We discuss some of these advantages and present experimental results from a small single-phase demonstrator TPC with 10 µm anode wires.
Speaker: Adam Brown (University of Freiburg)
• 07:15
Prospects of S2 analysis in single-phase liquid xenon TPCs 15m
Proportional scintillation in liquid is a possible alternative scheme for charge-to-light signal conversion in future large-size liquid xenon TPCs. Based on detailed simulations we explore the implications on charge signal (S2) analysis arising from this fast scintillation process. The peaked signals allow precise reconstruction of the individual electrons and thus a quantized measure of the S2 strength. Counting the number of electrons significantly improves the S2 resolution for small signals, relevant for low-energy ER studies and sub-GeV WIMP searches. The direct measurement of the electron arrival times improves S2-only reconstruction of the event depth and allows for powerful discrimination between single site and multiple site interactions. We discuss these prospects in the context of a future multi-ton liquid xenon experiment such as DARWIN, assuming a single-phase design with minimal change compared to state-of-the-art dual-phase detectors.
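The resolution gain from counting individual electrons rather than integrating pulse areas can be illustrated with a toy Monte Carlo (a hypothetical sketch, not this work's simulation framework): integrating areas inherits the single-electron gain fluctuation, which scales the fractional S2 resolution as spe_width/sqrt(n), while counting resolved electron peaks returns exactly n in the idealized noiseless-identification limit.

```python
import random

def s2_area_resolution(n_electrons, spe_width=0.4, trials=2000, seed=7):
    """Fractional resolution of an integrated S2 area when each extracted
    electron contributes a single-electron area smeared by fractional width
    spe_width.  Expected to scale as spe_width / sqrt(n_electrons).
    Counting resolved electron peaks instead yields exactly n_electrons in
    this idealized picture, removing the gain-fluctuation term entirely.
    All parameter values here are illustrative placeholders."""
    rng = random.Random(seed)
    areas = [sum(rng.gauss(1.0, spe_width) for _ in range(n_electrons))
             for _ in range(trials)]
    mean = sum(areas) / trials
    var = sum((a - mean) ** 2 for a in areas) / trials
    return var ** 0.5 / mean
```

For a 25-electron S2 this toy model gives a fractional area resolution near spe_width/5, while a perfect electron count would have none of this spread.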
Speaker: Fabian Kuger (Albert-Ludwigs Universität Freiburg)
• 07:30
Proposal of a Geiger-geometry Single Phase Time Projection Chamber as Potential Detector Technique for next-generation large-scale dark matter search detector 15m
Dual-phase time projection chambers using liquid xenon as the target material are among the most successful detectors for direct dark matter searches, and have improved the sensitivity to weakly interacting massive particles by almost five orders of magnitude over the past several decades. However, it remains a great challenge for dual-phase liquid xenon time projection chambers to serve as the detector in next-generation dark matter search experiments (~50 tonne sensitive mass), in terms of reaching sufficiently high field strength for drifting electrons and a sufficiently low background rate. Here we propose a single-phase liquid xenon time projection chamber with a detector geometry similar to a Geiger counter as a potential detector technique for future dark matter searches, which trades off field uniformity for less isolated charge signals. In this talk, I will present the concept of such a Geiger-geometry single-phase TPC (GG-TPC), together with preliminary studies of field simulation and signal reconstruction, which show that such a single-phase time projection chamber is technically feasible and can deliver sufficiently good signal reconstruction performance for dark matter direct searches.
Speaker: Qing Lin (University of Science and Technology of China)
• 07:45
Detection of Electroluminescence in Liquid Xenon with a Radial Time Projection Chamber 15m
The dual-phase xenon Time Projection Chamber (TPC) is one of the most successful techniques for rare event searches. It detects both primary scintillation and ionization signals from particle interactions in liquid xenon (LXe). The ionization electrons are converted into electroluminescence in the gaseous xenon, subsequently detected by the same photo-sensors as the primary scintillation. However, it becomes increasingly challenging to build TPCs with very large diameters while maintaining sub-mm flatness of the gas gap. Here we developed a Radial TPC (RTPC) which can create and detect the electroluminescence directly in liquid xenon. It can simplify the design of the TPC by replacing the large-diameter electrodes with a single wire in the axial center. The design of a liquid xenon RTPC and its first performance results will be presented.
Speaker: Jianyang Qi (UCSD)
• 08:00
Understanding the impact of high voltage electrodes on low-energy dark matter searches with the LZ dual phase xenon TPC 15m
To observe signals from low-energy nuclear recoils, including WIMP-xenon scatters, the LZ dark matter detector must maintain strong drift and extraction fields within its dual-phase xenon time projection chamber (TPC). These fields are established by a set of four stainless steel wire mesh high voltage electrode grids that span the full width of the TPC. During operation at their design voltages, these grids will achieve wire surface fields well above 20 kV/cm. These high fields can produce spurious charge signals and signals from real radioactive decays with atypical light-to-charge ratios, both of which can lead to low-energy backgrounds in LZ science data. This talk will present studies of possible grid contributions to electron backgrounds in the low-energy regime, with a focus on two specific sources: field-induced emission and radiogenic emission.
Speaker: Ryan Linehan
• 08:15
Latest Results from the Xenon Breakdown Apparatus 15m
The Liquid Xenon Time-Projection Chamber (LXe TPC) is a leading technology in the fields of dark matter direct detection and neutrinoless double-beta decay searches, due in no small part to its scalability. The next generation of LXe TPCs intend to extend their drift lengths while maintaining their high operational electric fields (100s of Volts per cm). This increase in high voltage requires understanding how the risk of electrostatic discharge (ESD) correlates with various engineered quantities. To this end, the Xenon Breakdown Apparatus (XeBrA), a 5 Liter spark chamber with adjustable large area electrodes and transparent viewports, collected data on ESD in LXe under a variety of different conditions. Effects such as conditioning, pressure, ramp rate, stressed area, and surface finish were investigated. Data regarding the production of light and charge preceding an ESD were collected, along with novel position reconstruction of the associated plasma streamers using a pair of high frame rate cameras. In this talk, I present preliminary results from XeBrA and discuss the evidence collected for field-emission initiating breakdowns.
Speaker: Reed Watson (University of California, Berkeley)
• 08:30
Dielectric Strength of Noble and Quenched Gases for High Pressure Time Projection Chambers 15m
Dielectric breakdown strength is one of the critical performance metrics for gases and mixtures used in large, high pressure gas time projection chambers. We have experimentally studied dielectric breakdown strengths of several important time projection chamber working gases and gas-phase insulators over the pressure range 100 mbar to 10 bar, and gap sizes ranging from 0.1 to 10 mm. Gases characterized include argon, xenon, CO2, CF4, and the mixtures 90-10 argon-CH4, 90-10 argon-CO2, and 99-1 argon-CF4. We developed a theoretical model for high voltage breakdown based on microphysical simulations that use PyBoltz electron swarm Monte Carlo results as input to Townsend- and Meek-like discharge criteria. This model is shown to be highly predictive at high pressure, significantly outperforming traditional Paschen-Townsend and Meek-Raether models. At lower pressure-times-distance, the Townsend-like model is an excellent description for noble gases whereas the Meek-like model provides a highly accurate prediction for insulating gases.
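For context, the traditional Paschen-Townsend criterion that the abstract benchmarks against can be sketched as follows. The coefficients A, B, and gamma below are rough, argon-like placeholder values, not the fitted parameters from this work:

```python
import math

def paschen_voltage(pd, A=12.0, B=180.0, gamma=0.01):
    """Paschen-Townsend breakdown voltage
        V_b = B*pd / ln( A*pd / ln(1 + 1/gamma) )
    as a function of the pressure-distance product pd (Torr*cm).
    A (1/(cm*Torr)) and B (V/(cm*Torr)) are Townsend ionization
    coefficients; gamma is the secondary-emission coefficient.
    Values here are illustrative only."""
    denom = math.log(A * pd / math.log(1.0 + 1.0 / gamma))
    if denom <= 0.0:
        raise ValueError("pd below the validity range of the formula")
    return B * pd / denom
```

The characteristic Paschen minimum appears directly: the breakdown voltage rises on both sides of an optimal pd product.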
Speaker: Logan Norman
• 08:45
A new high voltage cable feedthrough concept for future dark matter and neutrino experiments 15m
Physics experiments featuring liquid noble gas time projection chambers are becoming larger in scale. Consequently, their high voltage (HV) requirements have increased as well, making conventional HV feedthrough (FT) designs impracticable. A new concept for an HV cable FT usable in a cryogenic environment is presented in this talk. It features a co-extruded multi-layered coaxial cable fabricated from a single material and relies on the ability to develop a plastic material with tunable resistivity.
Speaker: Dr Luca Pagani (University of California at Davis)
• 09:00
Low Threshold Operation of the Scintillating Xenon Bubble Chamber 15m
A scintillating bubble chamber with pure xenon was first operated in 2016 and has previously demonstrated coincident bubble nucleation and scintillation detection at thermodynamic thresholds above 4 keV. We now report on operation of the xenon bubble chamber at thermodynamic thresholds as low as 0.5 keV, including tests of bubble nucleation associated with gammas, and sensitivity to low energy neutrons from a $^{88}$Y-Be photoneutron source at thresholds around 1 keV. Additionally, these results again demonstrate coincident bubble nucleation and scintillation with $^{252}$Cf and background neutrons, and the scintillation channel allows us to make an efficient background-reducing cut for a nuclear recoil efficiency analysis, which is ongoing.
Speaker: Matthew Bressler (Drexel University)
• 09:15
A proposal to use neutron captures as a source of ultra-low energy nuclear-recoils in liquid xenon 15m
We propose a technique for an ultra-low energy nuclear-recoil measurement in liquid xenon using thermal neutron capture. The measurement uses the recoils imparted to xenon nuclei during the de-excitation process following neutron capture, where the promptly emitted $\gamma$ cascade can leave the nuclei with up to $0.3$ keV$_\text{nr}$ of recoil energy. A successful measurement of the quanta yields below this point will contribute to a greater sensitivity for liquid xenon experiments that will benefit from a lower energy threshold, mainly those searching for light WIMPs and coherent neutrino-nucleus scattering. We describe the proposed measurement and its feasibility for a small (sub-kilogram) LXe detector that is optimized for a high scintillation gain, and a pulsed neutron source.
Speaker: Chami Amarasinghe (University of Michigan)
• 09:30 10:00
Coffee Break and Social Time in Gather.Town
• 10:00 12:30
Convener: Denver Whittington (Syracuse University)
• 10:00
Development of analog signal transmission in LAr for DUNE 15m
The Deep Underground Neutrino Experiment (DUNE) is currently investigating a new prototype design for its second Far Detector module. The new concept proposes a Vertical Drift LArTPC, with a cathode at mid-height in the detector and anodes made of printed circuit boards, located at the top and bottom of the detector.
In this context, the design of the Photo-Detection System (PDS) needs to be revisited, opening a window of opportunity for further optimization and new developments. It is envisaged to distribute the photo-sensors (x-Arapuca) on the cathode surface. Such a system is required to operate within high-voltage surfaces, with both power supply and signal delivered using non-conductive materials. The aim of this talk is to describe the ongoing work to collect and read out the signal of the photo-sensors, which are arranged into large tiles containing 160 SiPMs each. A new ganging scheme for the SiPMs is introduced. In particular, this talk will focus on the proposed option to read out the sensors using an analog optical transmitter, which should ensure the transmission of the signals - with a wide dynamic range - to the outside of the cryostat to be digitized.
Speaker: Sabrina Sacerdoti (APC-Paris,France)
• 10:15
Organic photosensors for detection of VUV scintillation light 15m
Organic semiconductors have gained considerable attention in recent years for use in a wide range of applications from OLEDs, OFETs, to optical sensors. They can be prepared on rigid as well as flexible substrates over large areas through low-cost fabrication techniques with performance rivaling low-noise silicon photodiodes. These properties make them a potentially attractive option for future large-area noble element detectors. In this talk, we will address the feasibility of using organic semiconductors for vacuum ultraviolet (VUV) scintillation light detection. The prospects and challenges of using organic semiconductor technologies will be discussed. We will present first measurements on cryogenic operation of organic photodiodes and ongoing R&D into making these devices sensitive to VUV scintillation light.
Speaker: Michael Febbraro (Oak Ridge National Laboratory)
• 10:30
Track imaging in noble liquid detectors 15m
Large volumes of liquid argon or xenon constitute an excellent medium for the detection of neutrino interactions and for dark matter searches. The established readout method for large noble liquid detectors is based on charge collection in a Time Projection Chamber, triggered by the scintillation light produced by Ar (128 nm) or Xe (178 nm).
This scintillation light can however also be used to attempt a direct reconstruction of charged particle tracks, provided the photon sensor has imaging capabilities. The primary benefit of this technique is rate capability, especially relevant for the near detectors of accelerator based experiments.
The design of such an imaging detector, however, presents several challenges: the performance of both current single photon detectors and conventional optical elements in the vacuum UV is generally inferior to that in the visible spectrum; a large number of densely packed detectors and their dedicated readout electronics must be operated at cryogenic temperatures; and the optical system must provide a sufficiently wide and deep field of view and a large aperture, in order to minimize the number of detectors for a given fiducial volume.
Silicon PhotoMultipliers (SiPMs) are the ideal photosensor for this application, since their noise is suppressed at cryogenic temperature and they can be fabricated in large arrays composed of many small pixels; their lower VUV sensitivity is also being addressed by suppliers with optimized designs. The large channel count requires the development of a dedicated cryogenic ASIC, for which several steps have been taken. Multiple options exist for optical systems, which offer different compromises between ease of construction, performance and deployment on specific detector geometries. In this contribution we will present the simulation of novel optical systems and the performance of small scale prototypes. The progress on larger prototypes and the simulation of realistic detector geometries will also be reported.
Speaker: Valerio Pia
• 10:45
Increasing photodetector light collection with metalenses 15m
We present a design concept and preliminary results for a method to increase the light collected by a sparse array of SiPMs by placing a metalens in front of each photodetector. A metalens is a flat lens that uses nanostructures on the surface to focus incident light. Metalenses offer similar focusing power to traditional lenses, but with reduced bulk and cost, and can be mass-produced in industry nanofabrication facilities. Their use could allow the next generation of large-scale physics detectors to obtain an increase in their light collection and further their science reach while simultaneously reducing the required number of readout channels needed to meet their design goals.
Speaker: Chris Stanford (Harvard University)
• 11:00
Assembly and test of a prototype nEXO charge-readout module with built-in, cryogenic ASIC readout 15m
The nEXO experiment aims to discover neutrinoless double beta decay of xenon 136, with a lifetime sensitivity goal of greater than 10^28 years. Compared to using long cables to transmit signals outside of the detector, mounting amplification and digitization circuitry directly on detector submodules reduces noise and improves measurement fidelity. A cryogenic application specific integrated circuit (ASIC) called CRYO ASIC has been designed by SLAC and fabricated for direct attachment to the nEXO charge readout modules. In this talk, the electrical characteristics of the nEXO charge readout will be discussed along with ASIC performance considerations. A prototype nEXO charge-readout module with attached ASIC has been assembled and operated in a liquid xenon time projection chamber; this module’s performance using a full chain of ASIC-controlling circuit boards will be presented.
Speaker: Evan Angelico (Stanford University)
• 11:15
The SBND Photon Detection System 15m
The Short-Baseline Near Detector (SBND) is a 112 ton Liquid Argon Time Projection Chamber (LArTPC) that will be part of the Short-Baseline Neutrino (SBN) program at Fermilab. The SBN programme's main goal is to resolve the eV-scale sterile neutrino short-baseline anomaly. SBND will measure the un-oscillated beam flavour composition with unprecedented neutrino statistics due to its proximity to the beam target. One of the major features of SBND will be its state-of-the-art photon detection system. The active system will consist of photomultiplier tubes, as well as X-ARAPUCA devices, placed behind the wire planes, providing high-granularity light collection. The active system will be enhanced by highly reflective panels covered with the wavelength-shifting compound tetra-phenyl butadiene (TPB) inserted into the cathode plane. The combination of the active system and enhancers in SBND will ensure a high and more uniform light yield throughout the detector, which will help enable low-energy physics triggering. This talk will provide an overview of the photon detection system of SBND and its current status.
Speaker: Vincent Basque (Fermilab)
• 11:30
Scintillation light detection in the 6-m drift length ProtoDUNE Dual Phase liquid argon TPC 15m
The Deep Underground Neutrino Experiment (DUNE) is a leading-edge experiment for long-baseline neutrino oscillation studies, neutrino astrophysics and nucleon decay searches. ProtoDUNE-Dual Phase (DP) is a 6x6x6 m3 liquid argon time-projection-chamber (LArTPC) operated at the CERN Neutrino Platform in 2019-2020 as a prototype of the DUNE Far Detector. In ProtoDUNE-DP, the scintillation and electroluminescence light produced by cosmic muons in the LArTPC is collected by photomultiplier tubes placed up to 7 m away from the ionizing track. In this talk, we will present the performance of the ProtoDUNE-DP photon detection system, comparing different wavelength-shifting techniques and the use of xenon-doped LAr as a promising option for future large LArTPCs. The scintillation light production and propagation processes are analyzed and compared to simulations, improving understanding of the liquid argon properties.
• 11:45
Characterization of the DUNE photodetectors and study of the event burst phenomenon 15m
The Deep Underground Neutrino Experiment (DUNE) is an upcoming neutrino physics experiment that will answer some of the most compelling questions in particle physics and cosmology.
The DUNE far detectors employ silicon photomultipliers (SiPMs) to detect light produced by charged particles interacting in a large liquid argon time projection chamber (LArTPC).
The SiPMs are photosensors consisting of an array of single-photon avalanche diodes (SPAD) operating in Geiger mode. The choice of employing solid state photodetectors stems from their high sensitivity and dynamic range, as well as the possibility to fill large surfaces with high granularity.
An international consortium of research groups is currently engaged in a systematic comparison of the performance of the SiPM models that have been custom developed for DUNE by two manufacturers. Such detailed studies, which include gain measurements and a study of the structure of the dark count rate at 77 K, are meant to determine the best choice of the photodetection system for DUNE, as well as to characterize the response of the chosen detectors for the DUNE simulation. Moreover, an investigation of a newly observed phenomenon, consisting of fast bursts of events separated by a short time interval and collected in individual SiPMs, is being carried out; this phenomenon potentially impacts the design of future models and their implementation in particle physics experiments. This contribution presents the main results of the characterization of the SiPMs that will be employed in DUNE, as well as of our studies of the novel burst phenomenon.
Speaker: Tommaso Giammaria
• 12:00
Pyrene-polystyrene wavelength shifters for liquid argon experiments 15m
Some WIMP dark matter experiments use liquid argon (LAr) as the target material for its high
scintillation light yield and good background discrimination. Particle interactions in the LAr produce
scintillation light at 128 nm which must go through a wavelength shifting (WLS) material to be
detected by standard photomultiplier tubes. Tetraphenyl-butadiene (TPB) is a common WLS for LAr
based detectors, including DEAP-3600, due to its high light yield and fast scintillation time.
Pyrene-polystyrene thin films have been proposed as a complementary WLS for rejection of
pathological backgrounds in the detector because it has a long scintillation time and high light yield
relative to TPB. Light from particle interactions that reach the pyrene coating will produce a pulse
signature distinct from interactions of light with TPB.
We present the characterization of the fluorescence properties of these pyrene coatings, such as the
light yield, fluorescence time, and spectra, as a function of temperature. These measurements were
taken at the Queen’s University optical cryogenic test facility to characterize these films down to 4 K.
Speaker: Hicham Benmansour (Queen's University)
• 12:15
Optical Light Collection Amount Studies for Dedicated Measurements 15m
In long-baseline neutrino experiments like T2K, NOvA and the future DUNE, the far detector includes a photon detection system to help distinguish the physics signals from the noise. The signals correspond to the physical processes produced when a neutrino or antineutrino beam is sent from the near detector. When data are taken, one or more processes can be present in a signal, and one or more neutrinos can produce a signal; therefore, high energy physics methods, among others, are used to establish the correspondences and to identify the properties and characteristics of the processes. In the case of NOvA and DUNE, the photon detection systems share a common analysis tool, LArSoft. In this presentation, one of the variables of the photon detection system is discussed, the Optical Hits module, which gives us the Optical Light Collection Amount. A fictitious detector is used to show how dedicated measurements can be done and how this variable can be used for the calibration and commissioning of the photon detection system.
Speaker: A. Carolina Garcia B.
• 12:30 13:00
Coffee Break and Social Time in Gather.Town
• 13:00 15:00
Signal Reconstruction (3C)
Convener: Maria Elena Monzani (SLAC National Accelerator Laboratory)
• 13:00
Simulations of Geometric Aspects for ARAPUCA Designs 15m
The photon detection system of the DUNE Far Detector (FD) is based on ARAPUCA technology. The new version of the ARAPUCA, named X-ARAPUCA, will be used in the first and second modules. As the second module is based on a vertical drift, the design of the X-ARAPUCA needed to be changed, and simulation studies are fundamental for the optimization of the device. This work presents simulation studies of the design, size, shape, and SiPM positioning inside the reflective cavity.
We designed a Python module that creates the geometry for the simulations based on given parameters such as the size of the detector, the number of SiPMs, and others. The physics of photons inside the X-ARAPUCA is simulated by a ray tracer written in C++ using a uniform grid as an acceleration structure. Our simulations focus on reflections and refractions at the interfaces using Snell's law, and on total internal reflection inside the Wavelength Shifting Plate, which absorbs incoming photons and re-emits them in random directions.
The simulation shows that the highest efficiency is reached for a thin X-ARAPUCA with a square shape. Better efficiency is obtained for larger modules if one considers the number of SiPMs per cm$^2$ of active collection area. Rectangular modules are more efficient when the SiPMs are positioned on the short side.
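The refraction-versus-total-internal-reflection decision made at each interface in such a ray tracer reduces to Snell's law. A minimal Python sketch (illustrative only; the simulation described above is a C++ ray tracer) is:

```python
import math

def refract_angle(theta_i, n1, n2):
    """Snell's law at an interface: returns the refraction angle (radians)
    for incidence angle theta_i, going from refractive index n1 into n2,
    or None when the ray is totally internally reflected (possible only
    for n1 > n2, beyond the critical angle asin(n2/n1))."""
    s = n1 * math.sin(theta_i) / n2
    if abs(s) > 1.0:
        return None  # total internal reflection: no transmitted ray
    return math.asin(s)
```

Entering a denser medium bends the ray toward the normal; leaving it at a steep enough angle yields `None`, i.e. the photon stays trapped inside the wavelength-shifting plate.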
• 13:15
Demonstration of ~ns timing resolution in MicroBooNE Photon Detection System 15m
The MicroBooNE detector, located in the Booster Neutrino Beamline (BNB) at Fermilab, has been operating since 2015 as part of the Short Baseline Neutrino (SBN) program. MicroBooNE's Liquid Argon Time Projection Chamber is accompanied by a Photon Detection System (consisting of 32 PMTs) used to measure the argon scintillation light and determine the timing of the neutrino interactions. This work will demonstrate the analysis techniques developed to improve the timing resolution of the light signals to $\mathcal{O}$(ns). The result obtained allows MicroBooNE to access the 2ns neutrino pulse structure of the BNB for the first time, which enables significant enhancement of cosmic background rejection for all neutrino analyses. Furthermore, the ns timing resolution opens the door for searching new long-lived-particles (i.e. Heavy Neutral Lepton, Higgs Portal Scalars) as we develop light-based trigger systems for future large LArTPC experiments, namely SBN and DUNE.
Speaker: Dante Totani (University of California, Santa Barbara)
• 13:30
Photon detection probability predictionusing one-dimensional generative neural network 15m
Photon detection is important for liquid argon detectors for direct dark matter searches or neutrino property measurements. Precise simulation of photon transport is widely used to understand the probability of photon detection in liquid argon detectors. Traditional photon transport simulation within the framework of Geant4 brings extreme challenges to computing resources for kilo-tonne-scale liquid argon detectors and GeV-level energy depositions. In this work, we propose a one-dimensional generative model which bypasses photon transport simulation and predicts the number of photons detected by particular photon detectors at the same level of detail as the Geant4 simulation. The application to photon detection systems in kilo-tonne-scale liquid argon detectors demonstrates that this novel generative model is able to reproduce the Geant4 simulation with good accuracy while running 20x-50x faster. This generative model can be used for fast prediction of photon detection probability in huge liquid argon detectors like ProtoDUNE or DUNE.
Speaker: Wei Mu (Fermilab)
• 13:45
Boosting background suppression in the NEXT experiment through Richardson-Lucy deconvolution 15m
The NEXT collaboration aims to observe neutrinoless double beta decay in gaseous 136Xe using a high pressure gaseous Xe time projection chamber with signal amplification by means of electroluminescence (EL). One of the advantages of the technique is that it allows for track reconstruction making use of a sensor plane equipped with SiPMs located near the EL region. However, the signals recorded in the TPC are degraded by electron diffusion and the spread of light produced in the EL process, limiting the potential of the detection scheme.
We have recently developed an improved reconstruction procedure based on the Richardson-Lucy deconvolution, an iterative algorithm well-known in image processing and de-blurring. Deconvolution allows reversing the smearing mechanisms in the NEXT TPC and significantly enhances the definition of reconstructed tracks. Consequently, detector performance is strongly boosted, with a five-fold improvement in background rejection demonstrated on experimental data.
In the talk we will detail the algorithm application in the context of the NEXT experiment with a focus on the performance in NEXT-White, a 50 cm TPC currently operating underground at Laboratorio Subterráneo de Canfranc. We will describe the procedure applied to characterize the optical response of the chamber by obtaining the point spread function that best describes the observed signals. We will also discuss the potential of the algorithm to ease the tracking hardware requirements of future detector iterations.
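The Richardson-Lucy iteration itself is a standard image-restoration algorithm. A minimal 1-D sketch (illustrative only, not the NEXT collaboration's implementation, which operates on 2-D SiPM images with a measured point spread function) is:

```python
import numpy as np

def richardson_lucy_1d(data, psf, n_iter=100):
    """Minimal 1-D Richardson-Lucy deconvolution.
    Iterates  u <- u * ( (data / (u conv psf)) corr psf ),
    which converges toward the maximum-likelihood estimate of the
    unblurred signal under a Poisson noise model."""
    psf = np.asarray(psf, dtype=float)
    psf = psf / psf.sum()          # normalize the point spread function
    psf_mirror = psf[::-1]         # correlation = convolution with mirror
    u = np.full(len(data), float(np.mean(data)))  # flat initial estimate
    for _ in range(n_iter):
        blur = np.convolve(u, psf, mode="same")
        blur[blur == 0] = 1e-12    # guard against division by zero
        u = u * np.convolve(data / blur, psf_mirror, mode="same")
    return u
```

Applied to a point-like signal blurred by a Gaussian PSF, the iteration progressively re-concentrates the smeared charge back onto the original position, which is exactly the sharpening of reconstructed tracks described above.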
Speaker: Dr Ander Simón Estévez (Ben-Gurion University of the Negev)
• 14:00
Neutrino Backgrounds in Future Liquid Noble Element Dark Matter Direct Detection Experiments 15m
Experiments that use liquid noble gasses as target materials, such as argon and xenon, play a significant role in direct detection searches for WIMP(-like) dark matter. As these experiments grow in size, they will soon encounter a new background to their dark matter discovery potential from neutrino scattering off nuclei and electrons in their targets. Therefore, a better understanding of this new source of background is crucial for future large-scale experiments such as ARGO and DARWIN. In this work, we study the impact of atmospheric neutrino flux uncertainties, electron recoil rejection efficiency, recoil energy sensitivity, and other related factors on the dark matter discovery reach. We also show that a significant improvement in sensitivity can potentially be obtained, at large exposures, by combining data from independent argon and xenon experiments.
Speaker: Pietro Giampa (SNOLAB)
• 14:15
Neutral bremsstrahlung calculations for TPCs 15m
Neutral bremsstrahlung (NBrS) in the gas phase of Argon and Xenon TPCs has been measured recently, with little ambiguity, by groups in Novosibirsk and Coimbra/Santiago. While its implications for future experiments are intriguing, and so far open-ended, a lack of reliable calculations precludes the full exploitation of the phenomenon.
We have recently created a simulation module in the electron-transport code PyBoltz, implementing the original theoretical framework introduced by Buzulutskov et al., and showed an excellent description of NBrS data. The framework, soon to be accessible through GitHub, allows calculations of NBrS in any noble element mixture, as well as in weakly-quenched mixtures, at all electric fields of interest below the excitation thresholds. For illustration purposes, we will present results obtained in cases of interest, discuss the analytical limits, future improvements, and the scope of this project.
Speaker: Mr Pablo Amedo (Instituto Galego de Física de Altas Enerxías (IGFAE, USC))
• 14:30
Study of the luminescence of He/CF4 mixture for the CYGNO detector 15m
Innovative experimental techniques are needed to further the search for dark matter weakly interacting massive particles. The ultimate limit is represented by the ability to efficiently reconstruct and identify nuclear and electron recoil events at the experimental energy threshold. Gaseous Time Projection Chambers (TPC) with optical readout are very promising candidates thanks to the 3D event reconstruction capability of the TPC technique and the high sensitivity and granularity of last-generation scientific light sensors. The CYGNO experiment is pursuing this technique by developing a TPC operated with a He/CF4 gas mixture at atmospheric pressure, equipped with a Gas Electron Multiplier (GEM) amplification stage that produces visible light collected by a scientific CMOS camera. The optical approach has so far only exploited the light produced during the avalanche processes in the GEM channels. In this contribution, we discuss recent measurements performed by the CYGNO collaboration which show the first evidence of additional luminescence in He/CF4 induced by electrons accelerated by a suitable electric field. The electron and photon yield has also been studied for gas mixtures with a small percentage of isobutane. We give an overview of the CYGNO project, presenting the performance in terms of energy and spatial resolution of the prototype detectors that have been built and operated so far. Finally, we illustrate the plan to construct a 1 m³ demonstrator, expected in 2021/22, aiming at a larger-scale apparatus in a later stage.
Speaker: Andrea Messina (Sapienza Università di Roma & INFN)
• 14:45
Nucleation efficiency of nuclear recoils in bubble chambers 15m
Bubble chambers using liquid xenon have been operated, and liquid argon chambers are planned, by the Scintillating Bubble Chamber (SBC) collaboration for GeV-scale dark matter searches and CEvNS measurements at reactors. This will require a robust calibration program of the nucleation efficiency of low-energy nuclear recoils in these target media. Such a program has been carried out by the PICO collaboration, which aims to directly detect dark matter using $\mathrm{C_3 F_8}$ bubble chambers. Neutron calibration data from mono-energetic neutron beams and an AmBe source have been collected and analyzed, leading to a global fit of a generic nucleation efficiency model for carbon and fluorine recoils at thermodynamic thresholds of $2.45$ and $3.29\,\mathrm{keV}$. Fitting the many-dimensional model to the data ($34$ free parameters) is a non-trivial computational challenge, addressed with a custom Markov Chain Monte Carlo approach, which will be presented. Parametric MC studies undertaken to validate this methodology are also discussed. The fit paradigm demonstrated for the PICO calibration will be applied to existing and future scintillating bubble chamber calibration data.
Speaker: Daniel Durnford (University of Alberta)
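The custom Markov Chain Monte Carlo fit mentioned in the abstract above can be illustrated with a minimal Metropolis-Hastings sketch. This is a generic one-dimensional toy under stated assumptions (a Gaussian stand-in target and unit proposal width), not the 34-parameter PICO likelihood:

```python
import math
import random

def log_posterior(theta):
    # Toy 1-D target: standard normal log-density, a stand-in for the
    # (much harder) 34-parameter nucleation-efficiency posterior.
    return -0.5 * theta * theta

def metropolis(n_steps, step=1.0, seed=1):
    rng = random.Random(seed)
    theta, logp = 0.0, log_posterior(0.0)
    chain = []
    for _ in range(n_steps):
        prop = theta + rng.gauss(0.0, step)        # symmetric random-walk proposal
        logp_prop = log_posterior(prop)
        if math.log(rng.random()) < logp_prop - logp:
            theta, logp = prop, logp_prop          # accept the move
        chain.append(theta)                        # always record current state
    return chain

chain = metropolis(20000)
mean = sum(chain) / len(chain)                     # should be near 0 for this target
```

In a real fit the scalar `theta` becomes a parameter vector and `log_posterior` evaluates the likelihood of the calibration data, but the accept/reject core is the same.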
• Friday, 17 September
• 07:00 09:30
Convener: Joern Mahlstedt (Stockholm)
• 07:00
XENONnT light sensors: performance and reliability 15m
XENONnT is a dark matter direct detection experiment, currently in its commissioning phase, located at the Laboratori Nazionali del Gran Sasso. It utilizes a TPC filled with 8.5 t of liquid xenon, of which 5.9 t are instrumented with 494 3-inch Hamamatsu R11410-21 photomultiplier tubes (PMTs) divided into two arrays, placed at the top and bottom of the active volume. The light sensors were selected after a testing campaign to ensure reliable response and time-stable operation. These operations are briefly summarized, while the discussion focuses on the current PMT performance.
Speaker: Giovanni Volta (University of Zurich)
• 07:15
Improved quality tests of R11410-21 photomultiplier tubes 15m
Photomultiplier tubes (PMTs) are often used in low-background particle physics experiments, which rely on an excellent response to single-photon signals and stable long-term operation. In particular, the Hamamatsu R11410 model is the light sensor of choice for many detectors utilising xenon as target material. In the past, this PMT model has shown issues affecting its long-term operation, including light emission and the degradation of the PMT vacuum through small leaks, which can lead to spurious signals known as afterpulses. In this talk, we present an improved PMT testing procedure that includes newly developed tests targeted at the detection of intermittent light emission as well as vacuum degradation. The use of both new and upgraded facilities allowed us to test in total 368 new PMTs for the XENONnT detector in a cryogenic xenon environment. We exclude the use of 26 of the 368 tested PMTs and categorise the remainder according to their performance. Given that we have improved the testing procedure compared to XENON1T, yet we rejected fewer PMTs, we expect significantly better PMT performance in XENONnT.
Speaker: Luisa Hoetzsch (Max-Planck-Institut für Kernphysik)
• 07:30
The ABALONE Photosensor 15m
The ABALONE is a new type of photosensor produced by PhotonLab, combining cost-effective mass production, robustness and high performance. This modern technology provides sensitivity to visible and UV light, exceptional radio-purity, and excellent detection performance in terms of intrinsic gain, afterpulsing rate, timing resolution and single-photon sensitivity.
The new hybrid photosensor, which works as a light intensifier, is based on the acceleration in vacuum of photoelectrons generated at a traditional photocathode and guided towards a window of scintillating material that is read out from the outside by a silicon photomultiplier (SiPM).
In this contribution we present the characterization of the ABALONE operated at room temperature, evaluating the gain as a function of the electric field, the time response, and the single-photoelectron spectrum. To better understand the experimental results, we simulated the photosensor by reproducing the electrostatic field and by tracking the accelerated photoelectrons and their interactions in the scintillation window.
Soon we plan to operate the ABALONE in a Xe environment. Details of future tests and possible applications in the context of next-generation astroparticle physics experiments (e.g., DARWIN) will also be discussed.
Speaker: Valerio D'Andrea (UnivAQ & LNGS)
• 07:45
Very-thick transparent GEMs with wavelength-shifting capability for noble element TPCs 15m
A new concept for the simultaneous detection of primary and secondary scintillation in time projection chambers is described. Its core element is a type of very thick GEM structure machined from a wavelength shifting material and supplied with PEDOT:PSS-based transparent electrodes.
Such a device is scalable to very large surface areas needed by future generations of noble element TPCs. Because of its optical properties it can significantly improve the light collection efficiency, energy threshold and resolution of conventional micropattern gas detectors as well as wire mesh TPCs.
Production, optical and electrical characterization, and first measurements performed with the new device will be reported. Further tests and R&D steps will also be discussed.
Speaker: Marcin Kuźniak (AstroCeNT / CAMK PAN)
• 08:00
The Bubble-Free Liquid Hole-Multiplier: a New Concept for Primary and Secondary Scintillation Detection in Noble-Liquid Detectors 15m
The bubble-assisted liquid hole-multiplier (LHM) concept, introduced several years ago, has been thoroughly investigated as a detection element for primary (S1) and secondary (S2) scintillation light detection in noble-liquid TPCs. The basic LHM idea relies on a CsI-coated perforated electrode immersed in the liquid, with a bubble of the liquid vapor trapped underneath. Radiation-induced ionization electrons liberated in the liquid and S1 VUV-induced photoelectrons from the CsI photocathode are collected into the holes; as they cross the liquid-vapor interface into the bubble they generate intense electroluminescence (EL) signals.
In this contribution, we will discuss a new, simpler, concept – the bubble-free LHM. Here, the liquid-vapor interface lies above the perforated electrode, which now has CsI on its bottom face; the electrode is fully immersed within the liquid, with no bubble underneath. Ionization electrons created in the drift volume below the electrode and S1-induced photoelectrons emitted from the photocathode are focused into the holes from below and pass through them with nearly no losses. A strong field above the electrode (taking here the role of the “gate” in conventional dual-phase TPCs) ensures transmission of the electrons into the vapor phase, where they produce intense S1 and S2 EL signals. The main advantages of this concept are that single-VUV photon detection efficiencies can potentially be of the order of 20%, and that individual VUV photons generate large EL signals which cannot be faked by dark counts. Bubble-free imaging LHMs can therefore allow the use of VUV SiPMs or CMOS sensors, despite their high dark-count rates.
The talk will describe the basic principles of the new concept and summarize our current experimental results in LXe. These include photoelectron extraction efficiency into LXe, electron focusing efficiency into the LHM holes of both ionization electrons and photoelectrons, and the transfer efficiency of electrons towards the liquid-vapor interface. These results validate the new concept, providing a promising basis towards further studies and future applications.
Speaker: Dr Arindam Roy (Unit of Nuclear Engineering, Ben-Gurion University, Beer-Sheva, Israel)
• 08:15
Usage of PEN as self-vetoing structural material with wavelength shifting capabilities in the LEGEND experiment 15m
Polyethylene naphthalate (PEN) is an interesting industrial plastic for the physics community as a wavelength-shifting scintillator. Recently, PEN structures with excellent radiopurity have been successfully produced using injection compression molding technology. This opens the possibility for the usage of optically active structural components with wavelength shifting capabilities in low-background experiments. Thus, PEN holders will be used to mount the Germanium detectors in the LEGEND-200 experiment. The ongoing R&D on PEN will be outlined with a focus on the evaluation of its optical properties. In addition, the ongoing efforts for further application of PEN in the LEGEND-1000 experiment will be presented.
Speaker: Luis Manzanillas (Max Planck Institute for Physics)
• 08:30
Polyethylene naphthalate wavelength shifter development and comparison with TPB using 2PAC 15m
The number of rare event search experiments using liquid argon as the active volume is increasing. As the scintillation light emitted from liquid argon following interactions peaks at 128 nm, a wavelength shifter (WLS) is required for efficient detection of such signals. In the experimental setup dubbed 2PAC (2 Parallel Argon Chambers), operated at LNGS, two identical liquid argon detectors are used to compare two WLS candidate materials: PolyEthylene Naphthalate (PEN) and TetraPhenyl Butadiene (TPB). In each chamber, the inner surface is covered with specular reflectors and one of the candidate WLS, while SiPMs are used as photosensors, covering approximately 1% of the surface area, in order to imitate the configuration of future large-scale detectors. Experimental results on the light yield from both chambers, supported by Geant4 simulations, will be discussed and compared, giving the first low-temperature comparison of the wavelength-shifting efficiencies of PEN and TPB in a true 4pi geometry, and the highest PEN conversion efficiency reported so far for an industrial grade of PEN. Future R&D plans for PEN as a WLS will also be discussed.
Speaker: Cenk Turkoglu (AstroCeNT)
• 08:45
R&D and characterization of wavelength-shifting reflectors for LEGEND-200 and for future LAr-based detectors 15m
The new design of the LAr veto of the LEGEND-200 neutrinoless double beta decay experiment, as well as many other LAr-based detectors, requires materials that can efficiently shift VUV light to the visible range while being reflective to visible light. For the LAr veto of LEGEND-200, 14 square meters of the reflector Tetratex (TTX) were coated in-situ with tetraphenyl butadiene (TPB). For even larger detectors, TPB coating becomes more challenging, and plastic films of polyethylene naphthalate (PEN) could be an option to ease scalability. In this context, we characterized the specific sample of the wavelength-shifting reflector (WLSR) from LEGEND-200 and investigated the light yield from the combination of a PEN film with the reflector TTX. Samples from both WLSRs were measured with spectrophotometers, observed with a microscope, and then characterized in a LAr setup equipped with a VUV-sensitive photomultiplier. Parameters such as the reflectance, absorption length and light yield of the samples (as well as of the setup and its materials) were measured, such that the intrinsic quantum efficiency of PEN and TPB in LAr (at 87 K) could be estimated.
Speaker: Gabriela Rodrigues Araujo (University of Zurich)
• 09:00
CMOS based SPAD Arrays for light detection in rare event search experiments 15m
Experiments searching for rare physics events using scintillation in liquid noble gases are steadily increasing in size. They require detector systems capable of measuring individual optical photons with excellent efficiency while covering large areas. In addition, the radioactive background introduced by such systems must be extremely low. We propose SPAD arrays based on CMOS technology as a possible solution for such an application case. This technology allows for manufacturing SPADs and the associated CMOS readout logic side by side, creating a fully functional photon detector system on a single silicon die. No further discrete components in the direct vicinity and only a few digital signals are required to operate a chip, so that large areas can be covered in a straightforward way with very low material budget. We have developed a chip architecture which offers very low power dissipation and a high fill factor. We have operated a prototype chip with different SPAD geometries at low temperatures of $100/160\,\mathrm{K}$ and measured dark count rates of $0.01/0.1\,\mathrm{Hz}$ per $\mathrm{mm}^2$ of active SPAD area, respectively. Our data-driven readout architecture has an idle power consumption of only $1.75\,\mathrm{mW}$ and a signal-dependent contribution of about $15\,\mathrm{\mu W}$ per 1000 hits per second. Based on these results we propose a full detector concept to cover large areas with high fill factor, requiring only 7 electrical signals for operation.
Speaker: Michael Keller (Heidelberg University)
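As a back-of-the-envelope check, the two power figures quoted in the abstract above (1.75 mW idle plus about 15 µW per 1000 hits/s) combine into a simple linear model; the function name and the 1 MHz example rate below are illustrative choices, not numbers from the talk:

```python
def spad_power_mw(hit_rate_hz, idle_mw=1.75, per_1000_hits_uw=15.0):
    """Total readout power (mW) at a given hit rate, built from the idle
    and signal-dependent contributions quoted in the abstract."""
    # Convert the per-1000-hits contribution from microwatts to milliwatts.
    return idle_mw + per_1000_hits_uw * 1e-3 * (hit_rate_hz / 1000.0)

# At 1 MHz of hits the signal-dependent part dominates the idle draw:
power_at_1mhz = spad_power_mw(1_000_000)  # 1.75 mW + 15 mW = 16.75 mW
```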
• 09:30 10:00
Coffee Break and Social Time in Gather.Town
• 10:00 12:30
Detector Techniques (4B)
Convener: Evan Shockley (UC San Diego)
• 10:00
Results from the Xeclipse Liquid Purification Test System for XENONnT 15m
As liquid xenon detectors grow in scale, novel techniques are required to maintain sufficient purity for charges to survive across longer drifts. The Xeclipse test facility at Columbia University was built to test the removal of electronegative impurities through cryogenic filtration powered by a liquid xenon pump, making possible a far higher mass flow rate than gas-phase purification through hot getters. This talk will outline the results of this R&D, which were used to guide the design and commissioning of the XENONnT liquid purification system. Thanks to this innovation, XENONnT has achieved an electron lifetime greater than 10 milliseconds in an 8.5 ton target mass, perhaps the highest purity ever measured in a liquid xenon detector.
Speaker: Joseph Howlett
• 10:15
Purification of large volume of liquid argon for the LEGEND-200 experiment 15m
The LEGEND-200 experiment is under construction at the Laboratori Nazionali del Gran Sasso (LNGS) in Italy. Its main goal is a background-free search for neutrinoless double beta decay of Ge-76. Up to 200 kg of bare high purity germanium (HPGe) detectors with enrichment in Ge-76 beyond 86% will be deployed in liquid argon (LAr). The LAr will serve as a cooling medium for the detectors as well as a passive and an active shield. For the latter, the LAr instrumentation will be composed of light-guiding fibers connected to silicon photomultipliers detecting the scintillation light of argon. It has already been shown in the GERDA experiment that the LAr veto is a very powerful tool for background rejection and minimization. Details of the LAr veto system will be presented in a dedicated talk.
The scintillation properties of LAr (attenuation length, triplet lifetime) are worsened by the presence, at the sub-ppm level, of electronegative impurities such as oxygen, water and nitrogen, due to quenching and absorption processes. As a consequence, the efficiency of the LAr veto may be significantly affected. In order to achieve the best possible performance of the veto, the LAr will be purified during the initial filling of the LEGEND-200 cryostat.
The design, construction and performance of a system capable of purifying 65 m³ (91 t) of liquid argon to the sub-ppm level will be presented. The quality of the processed liquid is monitored in real time by measuring the triplet lifetime and by simultaneous direct measurement of the concentrations of impurities such as water, oxygen and nitrogen down to 0.1 ppm. Scintillation properties of the LAr filled into the cryostat are also determined in real time by a dedicated apparatus (LLAMA). For the LAr filled into the cryostat, the measured triplet lifetime is in the range of 1.3 µs. If needed, the LAr purification system may also be used later to purify the LAr in the cryostat in loop mode. A dedicated cryogenic pump has been installed at the bottom of the cryostat; it is capable of circulating the LAr between the purification system and the cryostat.
Speaker: Grzegorz Zuzel (Jagiellonian University)
• 10:30
Modeling the Effect of Impurities on the Electron Lifetime in Liquid Xenon for nEXO 15m
nEXO is a 5 tonne liquid xenon (LXe) time projection chamber (TPC) planned to search for the neutrinoless double beta decay of $^{136}$Xe with a target half-life sensitivity of about $10^{28}$ years. Electrons from an event within the TPC will be drifted over up to $1.3\,\mathrm{m}$, and to ensure minimal charge loss nEXO aims to reach an electron lifetime of $10\,\mathrm{ms}$. This lifetime is inversely proportional to the concentration of electronegative impurities, for which multiple species with different attachment cross-sections may be important. Various sources of impurities, such as diffusion out of commonly used plastics, desorption from metal surfaces and leaks to atmosphere, were investigated. This talk will cover measurements of outgassing from plastics and the parameters relevant for extrapolating the effect impurities have on the electron lifetime in large liquid xenon detectors.
Speaker: Ako Jamil (Yale University)
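The relation between electron lifetime and charge loss described in the abstract above is a simple exponential attenuation, sketched below. The ~0.7 ms full-drift time is an assumed order of magnitude for a 1.3 m drift, not an nEXO specification; only the 10 ms lifetime target is taken from the abstract:

```python
import math

def surviving_charge_fraction(drift_time_ms, electron_lifetime_ms):
    """Fraction of drifting electrons that survive attachment to
    electronegative impurities: Q(t) = Q0 * exp(-t / tau)."""
    return math.exp(-drift_time_ms / electron_lifetime_ms)

# With the 10 ms lifetime target and an assumed ~0.7 ms full drift,
# the charge loss over the longest drift stays at the few-percent level.
loss = 1.0 - surviving_charge_fraction(0.7, 10.0)
```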
• 10:45
Measurement of the underground argon radiopurity for Dark Matter direct searches 15m
A major worldwide effort is underway to procure the radiopure argon needed for DarkSide-20k (DS-20k), the first large scale detector of the new Global Argon Dark Matter Collaboration. The Urania project will extract and purify underground argon (UAr) from CO2 wells in the USA at a production rate of 300 kg/day. Additional chemical purification of the UAr will be required prior to its use in the DS-20k LAr-TPC. The Aria project will purify UAr using a cryogenic distillation column (Seruci-I), located in Sardinia (Italy). Assessing the UAr purity in terms of Ar-39 is crucial for the physics program of the DarkSide-20k experiment. DArT is a small (1 litre) radiopure chamber that will measure the Ar-39 depletion factor in the UAr. The detector will be immersed in the active liquid Ar volume of ArDM (LSC, Spain), which will act as a veto for gammas from the detector materials and the surrounding rock. In this talk, I will review the status and prospects of the UAr projects for DarkSide-20k.
Speaker: Pablo Garcia Abia (CIEMAT)
• 11:00
The LZ Krypton Removal Chromatography System 15m
Trace radioactive noble gases are a source of electron recoil backgrounds in liquid xenon dark matter experiments, and cannot be mitigated by self-shielding. Naturally occurring krypton, which contains trace amounts of the beta emitter krypton-85, is found in commercially available research-grade xenon at a level of 1-100 parts-per-billion. In the LZ dark matter experiment, we require the xenon in the detector to contain no more than 300 parts per quadrillion krypton. This limit reduces the rate of electron recoil events from krypton-85 to be comparable to the solar neutrino contribution. To achieve this, krypton is removed from the xenon using gas charcoal chromatography prior to its deployment in the detector. In this talk, I will present an overview of the krypton removal chromatography system, which was designed, built and operated by LZ at SLAC.
Speaker: Andrew Ames (SLAC, Stanford University)
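From the numbers quoted in the abstract above, the required purification factor can be read off directly: going from the 1-100 ppb natural krypton in research-grade xenon down to the 300 ppq requirement implies a reduction of roughly 3×10³ to 3×10⁵. This is a simple ratio of the quoted concentrations, not a statement about the chromatography system's measured performance:

```python
PPB = 1e-9    # parts per billion
PPQ = 1e-15   # parts per quadrillion

target = 300 * PPQ                  # LZ krypton requirement
best_case = (1 * PPB) / target      # reduction needed starting from 1 ppb
worst_case = (100 * PPB) / target   # reduction needed starting from 100 ppb
```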
• 11:15
CrystaLiZe: A Solid Future for LZ 15m
Radon and its daughter decays continue to limit the sensitivity of WIMP direct dark matter searches, despite extensive screening programs, careful material selection and specialized Rn-reduction systems. This problem is only expected to worsen as experiments grow in size. For liquid xenon TPCs, we propose to address this through crystallizing the xenon. Once solid, the xenon will no longer admit external Rn into the bulk, allowing existing Rn to decay away. These decays can also be efficiently vetoed using the time structure of the decay sequence and the fixed position of daughter isotopes. In this case, the limiting background for WIMP searches would be neutrinos from the sun and from cosmic ray muons. In this talk, I will argue that an instrumental radon tag in a crystalline xenon TPC, perhaps as an upgrade to LZ, may be the quickest path to reaching the neutrino floor and present preliminary results from a solid xenon test stand which indicate its viability as a detector medium.
Speaker: Scott Kravitz (Lawrence Berkeley National Lab)
• 11:30
Development of a Pulsed VUV Light Source With Adjustable Intensity 15m
Precise characterization of photodetectors sensitive to vacuum ultraviolet (VUV) light requires a calibration source able to: i) produce and transmit photons in the VUV (128-200 nm), ii) control the light intensity and reliably obtain single-photon transmission, and iii) produce a pulsed photon emission so as to correlate the source with the VUV readout. In this talk, we will present the development of a gas-based pulsed spark source. This source emits VUV light in the range produced by noble element detectors and is coupled with a gas-based attenuator capable of delivering single-photon intensities to the device under test. We will present the first data taken with this device as well as highlight some of its recent applications in the development of novel VUV photon detectors.
Speaker: Austin McDonald (University of Texas at Arlington)
• 11:45
Fluorescence light yield and time constants of acrylic (PMMA) excited with UV light 15m
Rare-event searches, like those for dark matter or neutrinoless double-beta decay, go to extreme lengths to mitigate various forms of background. Acrylic (poly(methyl methacrylate), or PMMA) is frequently used as a container for scintillating liquids in rare-event searches. Weak fluorescence has been observed in certain types of PMMA at room temperature, introducing a potential source of background. Building on previous work presented at LIDINE 2019, and using the large-numerical-aperture optical cryostat located at Queen's University, we quantify the light yield of the acrylic used in the DEAP dark matter search from room temperature down to 4 K, and express it relative to the common wavelength shifter TPB. We also study the time constants involved.
Speaker: Emma Ellingwood (Queen's University)
• 12:00
Development and characterization of a slow wavelength shifting coating for background rejection in liquid argon detectors 15m
Alpha decays occurring on surfaces of a liquid argon (LAr) detector, particularly in locations where light collection is incomplete, can result in prompt, apparent low-energy events that reconstruct similarly to dark-matter-induced nuclear recoil events. Alphas and nuclear recoils preferentially excite argon into the singlet state, which decays with a characteristic time of ~6 ns. To convert the argon scintillation light to the visible, a wavelength shifter, TPB, is typically used due to its short (O(ns)) re-emission time, which preserves the LAr scintillation timing. By coating the problematic detector surface with a wavelength-shifting coating whose decay time constant is much longer than the LAr singlet time, the pulse shape of alpha decays from these regions will be modified by the coating, with O(10^5) rejection efficiency expected. We describe the development of a pyrene-doped polymeric wavelength-shifting film for the DEAP-3600 experiment, which will be deployed in the next major physics run after the completion of a suite of hardware upgrades to the detector. We will present an overview of the alpha background rejection technique using the long-time-constant wavelength shifter coating, the development and testing of the films to ensure cryogenic stability for operation in a LAr environment, and the suite of characterization measurements of the film's critical operational parameters, including relative photo-luminescent quantum yield, emission spectrum, and characteristic decay time.
Speaker: David Gallacher (Carleton University)
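The pulse-shape modification described in the abstract above can be illustrated with a toy prompt-fraction calculation for exponential light emission: the LAr singlet (~6 ns, quoted above) deposits essentially all of its light inside a short prompt window, while a slow coating does not. The 90 ns window and 300 ns coating decay time below are illustrative assumptions, not measured film parameters:

```python
import math

def prompt_fraction(window_ns, tau_ns):
    """Fraction of an exponential light emission with decay constant tau
    that falls inside a prompt integration window of length window_ns."""
    return 1.0 - math.exp(-window_ns / tau_ns)

fast = prompt_fraction(90.0, 6.0)    # LAr singlet: essentially fully prompt
slow = prompt_fraction(90.0, 300.0)  # slow coating: only ~26% prompt
```

A pulse-shape cut on the prompt fraction then separates genuine nuclear recoils (singlet-dominated, prompt) from coated-surface alpha events (coating-dominated, slow).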
• 12:15
Barium Tagging for the NEXT Neutrinoless Double Beta Decay Program 15m
The NEXT collaboration is pursuing a phased program to search for neutrinoless double beta decay (0nubb) using high pressure xenon gas time projection chambers. The power of electroluminescent xenon gas TPCs for 0nubb derives from their excellent energy resolution (<1% FWHM) and the topological classification of two-electron events, unique among scalable 0nubb technologies. Xenon gas detectors also offer a further opportunity: the plausible implementation of single barium daughter ion tagging, an approach that may reduce radiogenic and cosmogenic backgrounds by orders of magnitude and unlock sensitivities that extend beyond the inverted neutrino mass ordering. In this talk I will present recent advances in the development of single ion barium tagging for high pressure xenon gas detectors. Topics to be covered include advances in single ion microscopy in high pressure gas, molecular sensor development including color-shifting and turn-on barium chemosensors, methods for concentrating ions to sensors and/or actuating sensors to ions, and plans for demonstrator phases that aim to prove barium tagging in situ on a 3-5 year timescale.
Speaker: Karen Navarro (University of Texas at Arlington)
• 12:30 12:45
Closing Remarks
Convener: Kaixuan Ni (UCSD)
• 12:30
Closing Remarks, Poster Awards, Next Conference, etc. 15m
|
2022-07-04 08:56:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4985780119895935, "perplexity": 4431.698456596337}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104364750.74/warc/CC-MAIN-20220704080332-20220704110332-00688.warc.gz"}
|
https://brilliant.org/problems/a-problem-by-odyson-santos/
|
# A geometry problem by Odyson Santos
Geometry Level 1
Triangle $ABC$ is an equilateral triangle. What is the sum of all of its exterior angles?
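A worked note, stating only the standard fact involved: the exterior angles of any convex polygon (one per vertex) sum to $360^\circ$, and for the equilateral triangle each exterior angle is the supplement of a $60^\circ$ interior angle:

$$3 \times \left(180^\circ - 60^\circ\right) = 3 \times 120^\circ = 360^\circ.$$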
|
2021-02-25 16:09:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6968213319778442, "perplexity": 1059.7367196043438}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178351374.10/warc/CC-MAIN-20210225153633-20210225183633-00235.warc.gz"}
|
https://bjtumath.wordpress.com/tag/topology/
|
1. The pasting lemma: $A$ and $B$ are closed sets. If $f:A \to Y$ and $g: B \to Y$ are both continuous, and $f(x)=g(x)$ for all $x \in A \cap B$, then $h: (A \cup B) \to Y$ defined by $h(x)=f(x)$ for $x \in A$ and $h(x)=g(x)$ for $x \in B$ is also continuous.
2. Insight: Continuity (of $f: X \to Y$) depends not only on the function itself, but also on the topologies on both $X$ and $Y$. Indeed, with a weaker topology on $X$ or a stronger topology on $Y$, there are fewer continuous functions from $X$ to $Y$; by contrast, with a stronger topology on $X$ or a weaker topology on $Y$, there are more continuous functions from $X$ to $Y$.
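A standard illustration of the pasting lemma above: the absolute value function is continuous because it is pasted from two continuous pieces defined on the closed sets $A = (-\infty, 0]$ and $B = [0, \infty)$, which agree on $A \cap B = \{0\}$:

$$h(x) = |x| = \begin{cases} -x, & x \in A, \\ \phantom{-}x, & x \in B, \end{cases}$$

with $f(x) = -x$ continuous on $A$ and $g(x) = x$ continuous on $B$.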
|
2017-08-22 01:36:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 22, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8934666514396667, "perplexity": 143.32516317244523}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886109803.8/warc/CC-MAIN-20170822011838-20170822031838-00251.warc.gz"}
|
http://mathoverflow.net/revisions/77074/list
|
In the paper "Stationary reflection and the club filter", the author Masahiro Shioya says that the club filter on $P_{\omega_1}(\lambda)$ cannot be $2^\lambda$-saturated for $\lambda > \omega_1$, citing Shelah's book "Nonstructure Theory" (in preparation). I have three questions:
2) Does the theorem apply to $P_{\omega_1}(\lambda) | S$ for an arbitrary stationary set $S$?
3) Does the proof go through a two-cardinal diamond principle? I.e., did Shelah prove (in ZFC) that $\lozenge_{\omega_1,\lambda}$ holds for $\lambda > \omega_1$? What about $\lozenge_{\omega_1,\lambda}(S)$ for arbitrary stationary $S$?
|
2013-05-23 19:15:10
|
https://www.calculatorsoup.com/calculators/discretemathematics/oddpermutations.php
|
Odd Permutations Calculator
$\frac{n!}{2} = \; ? \; (n \geq 2)$
Calculator Use
Calculate the number of odd permutations, n! / 2, of a set of n elements, where n >= 2.
Input is limited to n >= 2 and n < 1000.
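The calculator's formula is easy to reproduce; a minimal sketch in Python:

```python
import math

# n!/2 counts the odd permutations of a set of n elements (n >= 2),
# mirroring the calculator above, including its input limits.
def odd_permutations(n):
    if not 2 <= n < 1000:  # same limits as the calculator
        raise ValueError("n must satisfy 2 <= n < 1000")
    return math.factorial(n) // 2

print(odd_permutations(4))  # 12
```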
|
2019-12-07 10:15:17
|
http://mathhelpforum.com/differential-geometry/169796-interpolation-polynomial-exercise.html
|
# Math Help - Interpolation polynomial exercise
1. ## Interpolation polynomial exercise
$[x_0, \dots, x_n;\; x^k/(a-x)], \qquad k = n,\, n+1$
2. Originally Posted by DeliaSumalan
Could someone please help me to solve this: $[x_0, \dots, x_n;\; x^k/(a-x)], \; k = n,\, n+1$
It is impossible to understand the question.
Fernando Revilla
3. Originally Posted by FernandoRevilla
It is impossible to understand the question.
Fernando Revilla
I have to calculate the value of the expression $[x_0, \dots, x_n;\; x^k/(a-x)]$, where $k$ is $n$ or $n+1$.
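Such divided differences can be evaluated numerically with the standard recursive table; a sketch with hypothetical nodes and parameters $a = 5$, $k = n = 3$:

```python
# Standard recursive divided-difference table, applied to
# f(x) = x**k / (a - x) with hypothetical nodes and a = 5, k = n = 3.
def divided_difference(xs, f):
    n = len(xs)
    table = [f(x) for x in xs]
    for level in range(1, n):
        for i in range(n - level):
            table[i] = (table[i + 1] - table[i]) / (xs[i + level] - xs[i])
    return table[0]

a, k = 5.0, 3
xs = [0.0, 1.0, 2.0, 3.0]  # nodes x0, ..., x3 (so n = 3)
f = lambda x: x ** k / (a - x)
print(divided_difference(xs, f))
```

For $k = n$, a known closed form is $[x_0, \dots, x_n;\, x^n/(a-x)] = a^n / \prod_i (a - x_i)$, which for these values gives $125/120$; the numerical table agrees.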
|
2014-09-16 18:04:42
|
http://bkms.kms.or.kr/journal/view.html?doi=10.4134/BKMS.b160879
|
A stability result for p-centroid bodies
Bull. Korean Math. Soc. 2018, Vol. 55, No. 1, 139-148. https://doi.org/10.4134/BKMS.b160879. Published online January 31, 2018.
Lujun Guo, Gangsong Leng, Youjiang Lin (Henan Normal University, Shanghai University, Chongqing Technology and Business University)
Abstract: In this paper, we prove a stability result for $p$-centroid bodies with respect to the Hausdorff distance. As an application, we show that a symmetric convex body is determined by its $p$-centroid body.
Keywords: $p$-centroid body, convex body, spherical integral transformation, $p$-cosine transformation
MSC numbers: 52A20, 52A40
|
2020-06-03 19:47:40
|
https://math.codidact.com/posts/286572
|
Endomorphisms on an entropic structure whose pointwise product is the identity automorphism - entropic idempotent structure?
Context: self-study from Warner's "Modern Algebra" (1965), Exercise 16.27.
Let $\alpha$ and $\beta$ be endomorphisms of an entropic structure $(S, \odot)$ such that $\alpha \odot \beta$ is the identity automorphism, and let $\otimes$ be the operation on $S$ defined by: $$x \otimes y = \alpha (x) \odot \beta (y)$$ for all $x, y \in S$. Then $(S, \otimes)$ is an entropic idempotent algebraic structure.
An entropic structure is an algebraic structure $(S, \circ)$ such that $\circ$ is an entropic operation, that is:
$$\forall a, b, c, d \in S: (a \circ b) \circ (c \circ d) = (a \circ c) \circ (b \circ d)$$
Idempotence is trivial, but I'm having trouble proving that $\otimes$ is entropic.
Here is my attempt:
$(w \otimes x) \otimes (y \otimes z)$
$= (\alpha (w \otimes x) ) \odot (\beta (y \otimes z) )$
$= (\alpha (\alpha (w) \odot \beta (x))) \odot (\beta (\alpha (y) \odot \beta (z)))$
$= (\alpha (\alpha (w) ) \odot \alpha (\beta (x))) \odot (\beta (\alpha (y)) \odot \beta (\beta (z)))$ (justified by homomorphism)
$= (\alpha (\alpha (w) ) \odot \beta (\alpha (y))) \odot (\alpha (\beta (x)) \odot \beta (\beta (z)))$ (as $\odot$ is entropic)
$= (\alpha (w) \otimes \alpha (y)) \odot (\beta (x) \otimes \beta (z))$
If we can say that $\alpha (w) \otimes \alpha (y) = \alpha (w \otimes y)$ then we are home and dry, but I can't see why we can make that leap, as it is far from obvious that $\alpha$ and $\beta$ are endomorphisms on $(S, \otimes)$ in the same way that they are on $(S, \odot)$. Or are they? And why?
What is it I'm missing here?
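As a numerical sanity check (a sketch, not a proof), the construction can be tested on a concrete entropic structure: the midpoint operation on the reals, with hypothetical endomorphisms $\alpha(x) = tx$ and $\beta(x) = (2-t)x$ chosen so that $\alpha \odot \beta$ is the identity:

```python
import random

# Concrete entropic structure: the midpoint operation x (.) y = (x + y)/2
# on the reals. alpha(x) = t*x and beta(x) = (2 - t)*x are endomorphisms
# of (R, (.)) with alpha(x) (.) beta(x) = x, i.e. alpha (.) beta is the
# identity, as in the exercise. We then check that (x) = alpha(.)beta
# really is entropic and idempotent on random samples.
t = 0.6

def odot(x, y):
    return (x + y) / 2

def alpha(x):
    return t * x

def beta(x):
    return (2 - t) * x

def otimes(x, y):
    return odot(alpha(x), beta(y))

random.seed(0)
for _ in range(1000):
    w, x, y, z = (random.uniform(-5, 5) for _ in range(4))
    lhs = otimes(otimes(w, x), otimes(y, z))
    rhs = otimes(otimes(w, y), otimes(x, z))
    assert abs(lhs - rhs) < 1e-9         # the new operation is entropic
    assert abs(otimes(x, x) - x) < 1e-9  # and idempotent
print("entropic and idempotent on all samples")
```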
|
2022-07-02 23:44:04
|
https://collegephysicsanswers.com/openstax-solutions/how-much-heat-transfer-occurs-environment-electrical-power-station-uses-125
|
Question
(a) How much heat transfer occurs to the environment by an electrical power station that uses $1.25 \times 10^{14} \textrm{ J}$ of heat transfer into the engine with an efficiency of 42.0%? (b) What is the ratio of heat transfer to the environment to work output? (c) How much work is done?
1. $7.25 \times 10^{13} \textrm{ J}$
2. $1.38$
3. $5.25 \times 10^{13} \textrm{ J}$
# OpenStax College Physics Solution, Chapter 15, Problem 25 (Problems & Exercises) (3:43)
Video Transcript
This is College Physics Answers with Shaun Dychko. A power station absorbs 1.25 times ten to the 14 joules of energy from the high-temperature reservoir, and it has an efficiency of 42 percent, which is 0.420. In part (a) we are asked to figure out how much energy is lost to the cold-temperature reservoir, in other words how much energy is lost to the environment. The efficiency is the work done by the power station divided by the energy that it absorbs, and we can rewrite the work as heat energy absorbed minus heat energy lost to the environment and substitute that in place of W in our efficiency formula. Dividing both terms in the numerator by the Qh in the denominator works out to one minus Qc over Qh. We are solving for Qc here, so we add it to both sides, subtract the efficiency from both sides, and end up with Qc over Qh equals one minus efficiency; multiplying both sides by Qh solves for Qc. So Qc equals Qh times one minus efficiency: that is 1.25 times ten to the 14 joules of energy absorbed multiplied by one minus 0.42, giving 7.25 times ten to the 13 joules of energy lost to the environment.
Part (b) asks for the ratio of the energy lost to the environment to the energy produced usefully, in other words the work done. We substitute W as Qh minus Qc, as we did above; this is true for any cyclical process with zero change in internal energy. We also substitute for Qc, writing it as Qh times one minus efficiency, and now we have an expression containing only the efficiency and Qh, which cancels as a common factor in the numerator and denominator. That leaves one minus efficiency divided by one minus one minus efficiency; the one minus one is zero, and the minus minus efficiency becomes positive efficiency, so this is one minus efficiency divided by efficiency. So the ratio is one minus 0.42 divided by 0.42, which is 1.38: there is 1.38 times as much energy lost to the environment as there is work done by the power station.
Finally, for the work done, efficiency equals work done divided by Qh, so multiplying both sides by Qh gives the work: 1.25 times ten to the 14 joules of energy absorbed times 0.42, which is 5.25 times ten to the 13 joules.
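The arithmetic above can be reproduced in a few lines (a minimal sketch using the numbers from the question):

```python
# Heat-engine bookkeeping for the problem above.
qh = 1.25e14   # J, heat absorbed from the hot reservoir
eff = 0.420    # engine efficiency

qc = qh * (1 - eff)  # (a) heat rejected to the environment
w = eff * qh         # (c) work output
ratio = qc / w       # (b) equals (1 - eff) / eff

print(qc, ratio, w)  # 7.25e13 J, about 1.38, 5.25e13 J
```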
|
2019-04-21 19:02:46
|
https://math.stackexchange.com/questions/1277663/determining-a-measure-through-a-class-of-measure-preserving-functions
|
# Determining a measure through a class of measure preserving functions
Let $\mu$ and $\mu^\prime$ be probability measures over the sigma algebra $\Sigma$ consisting of the Lebesgue measurable subsets of $[0,1]$. Suppose also that $\mu$ and $\mu^\prime$ assign measure $0$ to all and only null sets. (Notation: if $X$ is measurable, let $\mu_X$ denote the renormalized measure over measurable subsets of $X$: $\mu_X(Y)=\mu(Y)/\mu(X)$ (and similarly for $\mu^\prime$).)
Now suppose that we have a collection of functions:
$$\{f_X:X\rightarrow \overline{X} \mid X\in \Sigma, 0<\lambda(X)<1\}$$
such that for every way of partitioning $[0,1]$ into two sets, $X$ and $\overline{X}$, $f_X$ preserves the measure both between $\mu_X$ and $\mu_{\overline{X}}$ and between $\mu^\prime_X$ and $\mu^\prime_{\overline{X}}$. (In other words $\mu_X(f^{-1}(Y)) = \mu_{\overline{X}}(Y)$, and similarly for $\mu^\prime$.)
Does it follow that $\mu=\mu^\prime$?
• Firstly, μ, μ′ and λ are absolutely continuous wrt one another. It's therefore okay to assume μ′=λ. From there, you've got some real problems. You could assume μ([0,1/2])=a<1/2, for instance, and then chase what that means about the measures of other sets, until you have enough information to determine μ. Then, telling if μ∼λ is actually tricky. The best result is $$\int\sqrt{ \frac{d\mu}{dm} \frac{d\lambda}{dm} } dm >0$$ iff μ∼λ where m=μ+λ. There's also a Haar measure thing going on maybe, related to $\langle f_X \ |\ \lambda(X)=1/2 \rangle$ but that's an ugly group. Great problem! – user24142 May 12 '15 at 20:14
• I agree with the other commenter that this is a "great problem", but could you provide some motivation or context for your question? It might provide some insight into a solution. – Matt Rosenzweig May 14 '15 at 16:34
• I can try! I'm a philosopher, not a mathematician so my motivations may not be that helpful for solving the problem... – Andrew Bacon May 14 '15 at 16:57
• If you define a operation on propositions, written $A\rightarrow B$, by $AB \cup f_{\bar{A}}^{-1}(B)$ then this operation has the property that the probability of $A\rightarrow B$ is the conditional probability of $B$ on $A$. Here the probability is measured by $\mu$ or $\mu^\prime$, as given in the question. I suspect that for certain choices of families of functions there's an interesting class of measures that makes the probability of $A\rightarrow B$ the same as conditional probability. Connectives like $A\rightarrow B$ are potentially useful for modelling natural language conditionals. – Andrew Bacon May 14 '15 at 16:58
I assume that all $f_X$ are bijections (so that the inverse is measurable) - or at least I need the sets $f_X^{-1}(Y)$ to generate the $\sigma$-algebra on $X$. I suspect that a similar argument could somehow work in the arbitrary case, but just suspect at this point.
In the notations of the question, denote $p=d\mu/d\mu'$, then for a measurable $Y\subset \overline{X}$ \begin{multline*} \int_{f_X^{-1}(Y)}p(x)\mu'_X(dx)= \frac{\mu(X)}{\mu'(X)}\int_{f_X^{-1}(Y)}\mu_X(dx)= \frac{\mu(X)}{\mu'(X)}\int_Y \mu_\overline{X}(dx)=\\= \frac{\mu(X)\mu'(\overline{X})}{\mu'(X)\mu(\overline{X})} \int_Yp(x)\mu'_\overline{X}(dx) =\frac{\mu(X)\mu'(\overline{X})}{\mu'(X)\mu(\overline{X})} \int_{f_X^{-1}(Y)}p(f_X(x))\mu'_X(dx). \end{multline*}
Since $Y$ is arbitrary, we get that $\mu'$-a.s. on $X$ $$p(x)=\frac{\mu(X)\mu'(\overline{X})}{\mu'(X)\mu(\overline{X})} p(f_X(x))=C_Xp(f_X(x)).$$ Suppose that $\mu'(p>1+\delta)>0$ for some $\delta$; then $\mu'(p<1)>0$, as $p$ is the Radon-Nikodym density between probability measures. Choose $1+\delta<a<b$ such that $\mu'(a<p<b)>0$ and $b-a<\delta$ (by dividing $(\delta,\infty)$ into equal intervals of length less than $\delta$). Take $\overline{X}\subset \{a<p<b\}$ such that $\mu'(\overline{X})>0$ and $\mu'(X\cap \{a<p<b\})>0$ (it can be done since $\mu'$ is equivalent to Lebesgue measure).
Depending on the constant $C_X$, we have that $p$ has values in $(C_Xa,C_Xb)$ a.s. on $X$. Since $X$ contains $\{p<1\}$, $C_X<1/a$ and hence $p(X)\subset (-\infty, 1+\delta/a)$. But $X$ also contains a positive subset of $\{a<p<b\}$, thus, since $1+\delta/a<a$, we get a contradiction and it follows that $p\equiv 1$, hence $\mu=\mu'$.
EDIT (idea):
My idea is that if \begin{array}{ll} \nu_X=\nu_{\overline X}f^{-1},\\ \mu_X=\mu_{\overline X}f^{-1},\\ \nu=p\mu, \end{array} then, heuristically applying $f^{-1}$ twice, with and without the density $p$, we get (up to normalizing constants): \begin{array}{ll} \nu_{\overline X}f^{-1}=(p\mu_{\overline X}) f^{-1}=(pf^{-1})(\mu_{\overline X}f^{-1}),\\ \nu_{\overline X}f^{-1}=\nu_X=p\mu_X=p(\mu_{\overline X}f^{-1}). \end{array} It means that $p=pf^{-1}$, up to a constant, which seems unlikely for a non-trivial $p$. Rigorously, after the corresponding sequence of integral equations above, we get $(p\circ f)\mu_{\overline X}= p\mu_{\overline X}$ only on the sets of the form $f^{-1}(Y)$, so I need the family of such sets to be rich enough to distinguish values of $p$ to get $p=p\circ f$ a.s. In particular, when $f$ is a bijection, this family is the whole Borel $\sigma$-algebra and the idea works.
• Thanks, this was the most helpful answer. I'm still a little bit unsure about the last of those integral identities. Is this where the bijectivity assumption is coming in? Do you know if it's possible to do without it? – Andrew Bacon May 20 '15 at 17:20
• I added the general idea and explanation on where the bijectivity is used. – Jorkug May 20 '15 at 18:20
• Thanks -- that explanation was v. helpful! – Andrew Bacon May 20 '15 at 19:06
Your assumption about having a collection of functions satisfying ... holds for any pair of mutually absolutely continuous atomless probability measures on the unit interval. So every pair $\mu \neq \mu'$ is a counterexample. To show this it is enough to prove the following.
Claim 1: Let $X, Y$ be complete separable metric spaces. Let $\mu_1, \mu_2$ be two atomless Borel measures on $X$ and $\nu_1, \nu_2$ be atomless Borel measures on $Y$. Then there exists a mod-null bijection $f: X \to Y$ such that $f$ is measure preserving w.r.t. $\mu_i, \nu_i$ for $i = 1, 2$.
To show Claim 1 you should first show something like the following:
Claim 2: Let $f:[0, 1] \to \mathbb{R}$ be such that $\int_{[0, 1]} f = 0$ (Lebesgue integral). Then for every $\delta \in [0, 1]$ there is a subset $A \subseteq [0, 1]$ of Lebesgue measure $\delta$ such that $\int_{A} f = 0$.
Using Claim 2 you can construct countably branching trees of compact subsets of $X, Y$ respectively such that the nodes agree in their respective measures. This helps you prove claim 1.
This is the rough idea. I think it should work.
• Would you please flesh out your argument a little more? Is the idea to use the first claim to give the existence of the measure-preserving maps $f:X\rightarrow\overline{X}$, for any partition $X\cup\overline{X}=[0,1]$? Regarding the second claim, I don't see how to obtain a probability measure on $[0,1]$ that gives a counterexample. – Matt Rosenzweig May 16 '15 at 0:41
• I added some more comments. I hope this helps. – hot_queen May 16 '15 at 1:05
• I agree that for any measure of a certain sort on $[0,1]$, you can find a family of functions meeting the conditions stated in the question. But how does this show the possibility of two distinct measures, $\mu$ and $\mu^\prime$, satisfying those conditions relative to a single family of functions? As far as I can see, the family of functions you construct for $\mu$ might be different from the family constructed for $\mu^\prime$. Unless I'm missing something. – Andrew Bacon May 16 '15 at 2:07
• That is the content of claim 1 which should follow from claim 2. The idea is to construct countably branching trees of compact subsets of $X, Y$ such that each node in the tree for $X$ has same $\mu_i$-measure and similarly on the $Y$ side. – hot_queen May 18 '15 at 19:58
• I think I do get bijections (modulo null) in claim 1 - so my answer is incompatible with Jorkug's :P. I will try to check the details and post them if they work out. Sorry for the sketchiness. – hot_queen May 20 '15 at 17:30
|
2019-05-19 22:30:06
|
https://physics.stackexchange.com/questions/207302/flux-of-electric-field-through-a-closed-surface-with-no-charge-inside
|
# Flux of electric field through a closed surface with no charge inside? [duplicate]
I'm reading the Feynman lectures on electromagnetism, and in Vol. II, Chapter 1-4 he talks about the flux of the electric field and says that the flux of $E$ through a closed surface is equal to the net charge inside divided by $\epsilon_0$.
If there are no charges inside the surface, even though there are charges nearby outside the surface, the average normal component of $E$ is zero, so there is no net flux through the surface
I cannot see why the net flux is zero here. Say we have a closed unit sphere at the origin with no charge inside it and at the point $(2, 0, 0)$ we have some charge $q$.
1. Well doesn't this charge then define the electric field $E$ for the system and it will flow into the unit sphere on the right hand side, and out of the unit sphere on the left hand side?
2. Furthermore, as the strength of the electric field decreases with distance from $q$ won't we have more flux going into the right hand side which is closer to the charge $q$, and less flux leaving through the left hand side as it is further away - and hence we should have a non-zero flux?
Can someone please explain what I am misinterpreting here?
## marked as duplicate by Rob Jeffries, John Rennie, Kyle Kanos, Martin, ACuriousMind♦ Sep 17 '15 at 12:44
You have more flux per unit area going into the right side, but the area on the right side is smaller. These two balance out so that the total flux is the same going in as going out.
The part of the sphere which has electric flux going in, traced in red, is less than half the area of the sphere.
Incidentally, flux per unit area is just the electric field.
• Thanks...but although the flux "spreads out" on the left hand side means that less is going out there....is there not EVEN less again going out due to the fall-off in electrical force with distance from the source? – Riggs Sep 16 '15 at 15:59
• @Riggs I think you're double-counting the effect. The "spreading out" of the flux is exactly the same physical effect as the $1/r^2$ decrease of the electric field. – David Z Sep 16 '15 at 16:15
1. Yes.
2. No.
The strength of the field near the charge is higher (because it is closer to the charge) but this electric field is entering through a smaller area $S_1$ whereas the electric field leaving the sphere is relatively weaker (as it is further away from the charge) but leaves through a larger area $S_2$ as visualised below. Hence, the flux going in is exactly equal to the flux going out, and the net flux is 0.
To understand this better, you might see the proof of Gauss's Law using solid angles. You'll see that the area through which the field leaves or enters is proportional to $r^2$ whereas the field itself is proportional to $\frac{1}{r^2}$ and hence, the flux leaving or entering doesn't depend on the position of the charge. (I mean, it does not depend on where it is placed inside or where it is placed outside)
Note: This picture isn't that of the sphere and charge described exactly, it's sort of flipped but that won't be a problem, I think.
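The cancellation can also be checked numerically. The sketch below (with the Coulomb constant set to 1) integrates $E \cdot \hat{n}$ over the unit sphere with a simple midpoint-rule quadrature, once for a charge outside at $(2, 0, 0)$ and once for a charge at the centre:

```python
import math

# Midpoint-rule quadrature of the flux integral over the unit sphere,
# with the Coulomb constant set to 1 (a numerical sketch, not a proof).
def flux_through_unit_sphere(charge_pos, q=1.0, n_theta=200, n_phi=400):
    px, py, pz = charge_pos
    dth = math.pi / n_theta
    dph = 2 * math.pi / n_phi
    total = 0.0
    for i in range(n_theta):
        th = (i + 0.5) * dth
        st, ct = math.sin(th), math.cos(th)
        for j in range(n_phi):
            ph = (j + 0.5) * dph
            x, y, z = st * math.cos(ph), st * math.sin(ph), ct
            dx, dy, dz = x - px, y - py, z - pz
            r3 = (dx * dx + dy * dy + dz * dz) ** 1.5
            e_dot_n = q * (dx * x + dy * y + dz * z) / r3  # n = (x, y, z)
            total += e_dot_n * st * dth * dph  # dA = sin(theta) dtheta dphi
    return total

print(flux_through_unit_sphere((2.0, 0.0, 0.0)))  # charge outside: flux near 0
print(flux_through_unit_sphere((0.0, 0.0, 0.0)))  # charge inside: flux near 4*pi*q
```

With the charge outside, the inflow and outflow cancel; moving it inside recovers the full $4\pi q$, in line with Gauss's law.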
|
2019-08-24 05:38:39
|
http://mathhelpforum.com/calculus/24245-world-problems.html
|
1. ## World Problems
Consider the following Problem: A box with an open top is to be constructed from a square piece of cardboard, 3 ft. wide, by cutting out a square from each of the four corners and bending up the sides. Find the largest volume that such a box can have.
(a) Draw several diagrams to illustrate the situation, some short boxes with large bases and some tall boxes with small bases. Find the volumes of several such boxes.
(b) Draw a diagram illustrating the general situation. Introduce notation and label the diagram with symbols.
(c) Write an expression for the volume
(d) Using the given info write an equation that relates the variables.
There are more parts after but I know how to do those parts. I'm not necessary good with word problems so I have a hard time understanding them. Please help as soon as possible.
2. Originally Posted by angel.white
I'll look into it when I come back from work. If anyone else wants to help me with this problem feel free to.
3. Angel.White I read through the problem you gave me several times and I'm still having a hard time understanding it.
4. Originally Posted by FalconPUNCH!
Angel.White I read through the problem you gave me several times and I'm still having a hard time understanding it.
Okay, the first thing I want you to do is find the formula for volume. I've drawn you a diagram to illustrate how the problem is set up. In the first illustration, you can see the dotted lines are the fold points, and in the second illustration they are starting to fold up to form a box. How would you determine the volume of this box?
Once you have that, consider what would happen if you increased the length of x, or if you decreased the length of x. How would that affect the box?
Once you've done that, you can do a)
And what I've done is basically b)
and finding the volume is c) so you'll have done that.
So that will be a,b,c (I don't understand how d is different from c, so we'll assume you've done d as well). Now you just have to find the maximum volume. So take the derivative of your equation for volume.
Once you've done all that, post your equation for the volume, and your equation for the derivative of the volume, and we'll see about finding the maximum volume.
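For reference, the setup described above can be checked numerically (a sketch, assuming the cut-square side is x, so each base side is 3 - 2x and the height is x):

```python
# Grid-search sketch of the open-box problem: a 3 ft square sheet with
# squares of side x cut from the corners gives V(x) = x * (3 - 2x)**2.
def volume(x):
    return x * (3 - 2 * x) ** 2

xs = [i * 1.5 / 100000 for i in range(1, 100000)]  # x must lie in (0, 1.5)
x_best = max(xs, key=volume)
print(x_best, volume(x_best))  # close to x = 0.5, V = 2.0
```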
5. Thanks I have one more that I need help with
1. An object with weight W is dragged along a horizontal plane by a force acting along a rope attached to the object. If the rope makes an angle T with the plane, then the magnitude of the force is:
$F = \frac{mW}{m\sin(T) + \cos(T)}$
Where m is a constant called the coefficient of friction. For what value of T is F smallest?
Edit: I'll make a new thread for this since I just want to talk about the first problem here.
6. The formula for the volume that I got for the first problem is
$V = (3-2x)^3$
Derivative I got is:
$V' = -6(3-2x)^2$
7. Originally Posted by FalconPUNCH!
Thanks I have one more that I need help with
1. An object with weight W is dragged along a horizontal plane by a force acting along a rope attached to the object. If the rope makes an angle T with the plane, then the magnitude of the force is:
$F = \frac{mW}{m\sin(T) + \cos(T)}$
Edit: I just got done and realized this was looking for smallest. The same technique applies, read through the explanation, consider the examples, and then realize that you are looking for a minimum instead of a maximum. (valley instead of peak)
Where m is a constant called the coefficient of friction. For what value of T is F smallest?
This next question is very similar to the box one, you are simply trying to find where force is maximized. Now, you know that as t changes, F will change as well. So as t changes, force will either get higher or lower. The point where it is highest, it must be changing directions, meaning it was getting higher, but now it is getting lower (thus it is at a peak on the graph).
This means that going up to this point, its slope will be increasing; at this point, its slope will be zero; and after this point, its slope will be decreasing.
Now we know that the derivative of our function will give us the equation of the slope. So wherever the derivative is equal to zero, we know that the slope of our function must be equal to zero. So find the zeros of the derivative to find potential maximums of the function.
Now, we know that since we are dragging along a plane, t cannot be less than zero, or it would be going through the plane, and t cannot be greater than pi/2, or it would be dragging backwards. So our domain must be from 0 to pi/2.
List all of the zeros of the derivative that are within this domain, they are potential maximums. Then choose a test point before and after, and find whether the derivative is greater than zero or less than zero. If it is greater than zero, the slope is positive, so it must be increasing. If it is less than zero, the slope is negative, so it must be decreasing. This will allow you to identify whether your point is a maximum or minimum.
Once you find a value for t which causes the derivative to go to zero, plug it into F(t) and find the y value. Then, just to be sure, check the y values of our endpoints for the domain, so try F(0) and F(pi/2) to make sure that they are not greater. (They may not be zeros, so could escape detection by the method we are using, so we need to check them to be sure)
Then, any maximums you found by finding F'(t)=0 and testing points before and after it should have their y values compared to the y values of our endpoints, and the highest will be the value which is our maximum.
-----
Sample Example 1:
Find the maximum from -2 to 3 of the function:
$f(x) = -x^2$
Our derivative
$f'(x) = -2x$
Find the zeros of the derivative:
0=-2x
0=x
Test a point before
test point x=-1
f'(-1)=-2(-1)=2
This is positive, so the slope is increasing up to the point (0,f(0)).
Test a point after
test point x=1
f'(1)=-2(1)=-2
This is negative, so the slope is decreasing after the point (0, f(0))
So we have a maximum, because it is at a peak.
Test endpoints:
$f(-2)=-(-2)^2=-4$
$f(3)=-(3)^2=-9$
Compare to our maximum:
$f(0)=-(0)^2=0$
So x=0, is our maximum, and our maximum point is (0,0)
Sample Example 2:
Find the maximum from -2 to 3 of the function:
$f(x) = x^2$
Our derivative:
$f'(x)=2x$
Find the zeros of the derivative:
0=2x
0=x
Test a point before
test point x=-1
f'(-1)=2(-1)=-2
This is negative, so the slope is decreasing down to the point (0,f(0)).
Test a point after
test point x=1
f'(1)=2(1)=2
This is positive, so the slope is increasing after the point (0, f(0))
So this point is not at a peak, it is at a valley, you go down then you go up, that is a valley, so this point is actually at a minimum, and therefore not what we are looking for. But there are no other values where x=0, so what do we do? Well we test our endpoints.
Test endpoints:
$f(-2)=(-2)^2=4$
$f(3)=(3)^2=9$
So 9 is the highest value within our function, since it starts at (-2,4) goes down to (0,0), and goes back up to (3,9)
So x=3, is our maximum, and our maximum point is (3,9)
To illustrate why this is, I've attached a graph. Notice the three candidate values f(-2)=4, f(0)=0, f(3)=9
The derivative process will find us f(0)=0 but it needs to be tested, because it could be a minimum (or neither a max nor min as in f(0)=0 for the graph x^3). And we still need to consider the endpoints of our domain, as you can see in this case, they are not zeros and will not be found by the derivative, but they need to be checked anyway.
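The whole procedure from Sample Example 2 can be checked numerically. Here is a short Python sketch (not part of the original thread) that mirrors the steps: take the derivative, find its zero, run the first-derivative test, then compare against the endpoints:

```python
def f(x):
    return x ** 2

def f_prime(x):
    return 2 * x

critical = 0.0  # the zero of the derivative

# First-derivative test: negative before, positive after -> a valley (minimum)
is_minimum = f_prime(critical - 1) < 0 and f_prime(critical + 1) > 0

# Compare the candidate y-values on the closed domain [-2, 3]
candidates = {x: f(x) for x in (-2.0, critical, 3.0)}
max_x = max(candidates, key=candidates.get)
```

Running this confirms the worked answer: the critical point is a minimum, and the maximum on the domain is at the right endpoint, (3, 9).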
8. Originally Posted by FalconPUNCH!
The formula for the volume that I got for the first problem is
$V = (3-2x)^3$
Close, you got the length and width right, but consider again, what will be the height of your box?
Originally Posted by FalconPUNCH!
Derivative I got is:
$V' = -6(3-2x)^2$
Don't forget to apply the chain rule, in this case f(x)=x^3 and g(x)=3-2x
so V = f(g(x))
and the derivative of V is
V'=f'[g(x)]*g'(x)
9. Originally Posted by angel.white
Close, you got the length and width right, but consider again, what will be the height of your box?
Don't forget to apply the chain rule, in this case f(x)=x^3 and g(x)=3-2x
so V = f(g(x))
and the derivative of V is
V'=f'[g(x)]*g'(x)
Well I think I did apply the chain rule
$3(3-2x)^2(-2)$
Which I just simplified to
$-6(3-2x)^2$
Originally Posted by angel.white
Close, you got the length and width right, but consider again, what will be the height of your box?
Not too sure about the height.
As for the second one I understand how to find the minimum and maximum but I'm having trouble with that problem because I'm not that good at trig. I try to take the derivative but it gets really complicating and I get some long equation, so I can't really find the zeros for it.
10. Originally Posted by FalconPUNCH!
Well I think I did apply the chain rule
$3(3-2x)^2(-2)$
Which I just simplified to
$-6(3-2x)^2$
You're right, I'm sorry, I compared it to the derivative of my equation.
Originally Posted by FalconPUNCH!
Not too sure about the height.
Okay, so the length and width of the paper are 3, and you're subtracting 2x from them. This you seem to have figured out. Then we fold the flaps up to create the sides of the box. Since the flaps are x units long, the box will be x units high. so the formula for volume will be $x(3-2x)^2$
Look at the pictures I drew you, do you see how the dotted lines are where the flaps fold up? And you can see that they are all x units long.
Another way of thinking about this might be that the left flap takes up x units, the base takes up 3-2x units, and the right flap takes up x units. So when you fold the flaps up, the base will be left at 3-2x, and the left height will be x, and the right height will be x. Anyway, look at the problem I drew again, and maybe try cutting some squares out of a piece of paper to see it.
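A quick numeric check of the box volume (not from the original thread): with the corrected formula $V = x(3-2x)^2$, the derivative factors as $(3-2x)(3-6x)$, giving zeros at x = 3/2 (a degenerate box with base 0) and x = 1/2:

```python
def volume(x):
    # Box folded from a 3x3 sheet: base (3-2x) by (3-2x), height x
    return x * (3 - 2 * x) ** 2

# V'(x) = (3-2x)(3-6x); x = 3/2 gives base 0, so test x = 1/2
best_x = 0.5
best_v = volume(best_x)

# Sanity check: scan the whole domain 0 <= x <= 1.5 on a fine grid
grid_best = max(volume(i * 0.001) for i in range(1501))
```

The grid scan agrees with the calculus: the maximum volume is 2, reached at x = 1/2.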
Originally Posted by FalconPUNCH!
As for the second one I understand how to find the minimum and maximum but I'm having trouble with that problem because I'm not that good at trig. I try to take the derivative but it gets really complicating and I get some long equation, so I can't really find the zeros for it.
For this we use the division rule.
If f(T)=mW
and g(T)=m*sin(T)+cos(T)
then $F=\frac{f(T)}{g(T)}$
So the division rule says to differentiate, this should become:
$F\prime=\frac{g(T)*f\prime(T) - f(T)*g\prime(T)}{\left(g(T)\right)^2}$
So we find f'(T), mW is a constant so it differentiates to zero. This is because m is told to you to be a constant, and while the weight can change, it won't change while it is being dragged. You could drag different weights, but the weight will not fluctuate according to the angle that it is being dragged. So the derivative of mW is 0
f'(T)=0
g(T) has its terms added together, so they can be differentiated separately.
First we differentiate m*sin(T) and get m*cos(T)
Then we differentiate cos(T) and get -sin(T)
So the derivative:
g'(T) = m*cos(T)-sin(T)
So we said the division equation is:
$F\prime=\frac{g(T)*f\prime(T) - f(T)g\prime(T)}{\left(g(T)\right)^2}$
And when we substitute our values in:
$F\prime=\frac{\left(m*sin(T)+cos(T)\right)*0 - mW*\left(mcos(T)-sin(T)\right)}{\left(msin(T)+cos(T)\right)^2}$
Now the zero will cancel that term out
$F\prime=\frac{-mW\left(mcos(T)-sin(T)\right)}{\left(msin(T)+cos(T)\right)^2}$
-----
Okay, now we have the derivative. It's a little scary looking, but it's okay. Now we need to find the zeros. Notice that $\frac{0}{x}=0$ where x is any number. This means that if you divide zero by anything, you will get zero. So that means that if the numerator is equal to zero, the entire equation is equal to zero. (note also that there is nothing the denominator can equal that will cause the equation to become zero)
So look at our equation
$F\prime=\frac{-mW\left(mcos(T)-sin(T)\right)}{\left(msin(T)+cos(T)\right)^2}$
Set the numerator equal to zero
$0=-mW\left(mcos(T)-sin(T)\right)$
-mW is a constant, so it can never equal zero, so it is irrelevant and can be divided out
$0=mcos(T)-sin(T)$
$sin(T)=mcos(T)$
And divide cos(T)
$\frac{sin(T)}{cos(T)}=m$
And convert to sin/cos to tan
$tan(T)=m$
Take the arctangent
$T=arctan(m)$
------------
This had me confused for a bit, I figured your book didn't give you all the information you needed, but I think what they are looking for is this answer: that T = arctan(m) is the critical value. But let's check all our values anyway:
Left domain restriction, t=0
$F(0) = \frac{mW}{mSin(0)+Cos(0)}= mW$
Right domain restriction, t=pi/2
$F\left(\frac{\pi}{2}\right) = \frac{mW}{mSin\left(\frac{\pi}{2}\right)+Cos\left( \frac{\pi}{2}\right)}= W$
And our value t=arctan(m)
$F\left(arctan(m)\right) = \frac{mW}{mSin\left(arctan(m)\right)+Cos\left(arct an(m)\right)}$
Now, this is a case where it will be helpful to draw a triangle. Remember arctan(m) is an angle, so it is the angle whose tangent is m. And we are taking the sine of this angle, so we can draw a triangle: we know tangent is opposite over adjacent, so the opposite side is m, and the adjacent side is 1, and by the Pythagorean theorem, the hypotenuse is $\sqrt{m^2+1}$. Now, the angle that gave us this triangle, we are taking the sine of that angle. So we are taking the sine on this same triangle. So our sine is $\frac{m}{\sqrt{m^2+1}}$ and our cosine is $\frac{1}{\sqrt{m^2+1}}$ So now we can fill those values in
$F\left(arctan(m)\right) = \frac{mW}{m\frac{m}{\sqrt{m^2+1}} +\frac{1}{\sqrt{m^2+1}}}$
$F\left(arctan(m)\right) = \frac{mW}{\frac{m^2+1}{\sqrt{m^2+1}}}$
$F\left(arctan(m)\right) = \frac{mW}{\sqrt{m^2+1}}$
And since m is positive (can't have negative friction), $\sqrt{m^2+1}$ is greater than both 1 and m, so dividing mW by it gives a value smaller than both F(0) = mW and F(pi/2) = W. So this is the smallest of the three values.
So when T=arctan(m) we have three test cases, our two domain boundaries, and the arctan(m). The arctan(m) results in the lowest actual value of F, so it is the minimum.
***The minimum value of F comes when T = arctan(m)
*whew* that got hairy for a while :/
Note, in retrospect, it would have been slightly simpler to take the maximum of the denominator instead of the minimum of the whole thing, that would have saved you from having to apply the division rule. But then it would have taken the same form as everything else we did.
Recap:
We determined the domain of our variable
We found the derivative of our function
We found the zero of our derivative
We plugged the zero and domain values into our function
We compared results to find the smallest
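The recap above can be checked numerically. A short Python sketch (not from the original thread; the sample values m = 0.3 and W = 100 are chosen arbitrarily) compares F at the two domain boundaries and at T = arctan(m):

```python
import math

def force(t, mu, w):
    # F = mW / (m sin T + cos T); here mu plays the role of m
    return mu * w / (mu * math.sin(t) + math.cos(t))

mu, w = 0.3, 100.0          # sample coefficient of friction and weight
t_star = math.atan(mu)      # candidate from setting F'(T) = 0
values = [force(t, mu, w) for t in (0.0, t_star, math.pi / 2)]

# Closed form at T = arctan(m): F = mW / sqrt(m^2 + 1)
closed_form = mu * w / math.sqrt(mu * mu + 1)
```

The middle candidate is indeed the smallest of the three, matching the conclusion that T = arctan(m) minimizes F.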
11. Wow, that's pretty confusing when you get detailed. I don't think I would have ever gotten to the end without your help. What was holding me back was the W: I didn't know whether it was supposed to turn into a 0 when you took the derivative, because they don't say it's a constant. Thank you for your help. I'm going to spend time to digest this and understand it.
Okay, so the length and width of the paper are 3, and you're subtracting 2x from them. This you seem to have figured out. Then we fold the flaps up to create the sides of the box. Since the flaps are x units long, the box will be x units high. so the formula for volume will be $x(3-2x)^2$
Look at the pictures I drew you, do you see how the dotted lines are where the flaps fold up? And you can see that they are all x units long.
Another way of thinking about this might be that the left flap takes up x units, the base takes up 3-2x units, and the right flap takes up x units. So when you fold the flaps up, the base will be left at 3-2x, and the left height will be x, and the right height will be x. Anyway, look at the problem I drew again, and maybe try cutting some squares out of a piece of paper to see it.
Yeah I looked over it last night and thought the height was x but I wasn't really sure about it. Now that I looked at your picture it makes sense.
Now, this is a case where it will be helpful to draw a triangle. Remember arctan(m) is an angle, so it is the angle whose tangent is m. And we are taking the sine of this angle, so we can draw a triangle: we know tangent is opposite over adjacent, so the opposite side is m, and the adjacent side is 1, and by the Pythagorean theorem, the hypotenuse is $\sqrt{m^2+1}$. Now, the angle that gave us this triangle, we are taking the sine of that angle. So we are taking the sine on this same triangle. So our sine is $\frac{m}{\sqrt{m^2+1}}$ and our cosine is $\frac{1}{\sqrt{m^2+1}}$ So now we can fill those values in
So you get $\sqrt{m^2+1}$ by using the Pythagorean theorem and then just replace cos and sin each with their respective values? I don't think I would have thought of that because, like you said, the book doesn't really give enough details on how to do this.
12. Originally Posted by FalconPUNCH!
So you get $\sqrt{m^2+1}$ by using the Pythagorean theorem and then just replace cos and sin each with their respective values? I don't think I would have thought of that because, like you said, the book doesn't really give enough details on how to do this.
Right, I always draw a triangle when doing these, label the angle "t" then say "the tangent of t is m" so label the opposite side m, and the adjacent side 1, use Pythagorean theorem to find the hypotenuse. Then, say "the sin of t is..." and see that your angle is labeled t, so the triangle you take the sine from is the same as the triangle you just drew.
So any trigonometric function of an inverse trigonometric function will follow this same principle.
And yeah, I know what you mean about books, mine blows too I spent probably 40 hours one weekend learning how to differentiate. Not because it's hard, but because when I would go to my book for help I would just get more confused.
13. Originally Posted by angel.white
Right, I always draw a triangle when doing these, label the angle "t" then say "the tangent of t is m" so label the opposite side m, and the adjacent side 1, use Pythagorean theorem to find the hypotenuse. Then, say "the sin of t is..." and see that your angle is labeled t, so the triangle you take the sine from is the same as the triangle you just drew.
So any trigonometric function of an inverse trigonometric function will follow this same principle.
And yeah, I know what you mean about books, mine blows too I spent probably 40 hours one weekend learning how to differentiate. Not because it's hard, but because when I would go to my book for help I would just get more confused.
Oh ok I understand it now. Yeah some of the books aren't that great that's why I come here, to get help with the problems that the book can't help me with :P well thank you for all your help.
14. What did you come up with for max volume of your box?
(Also, it will probably be important for you to determine your domain for x. If you have difficulty with that, look at the pictures I drew you when you do this, and say "how big can x be" and "how small can x be")
http://www2.macaulay2.com/Macaulay2/doc/Macaulay2-1.19/share/doc/Macaulay2/Macaulay2Doc/html/_coefficient__Ring.html
# coefficientRing -- get the coefficient ring
## Synopsis
• Usage:
coefficientRing R
• Inputs:
• R, a ring
• Outputs:
• a ring, the coefficient ring of R
## Description
If R is a polynomial ring, then the coefficient ring is the base ring from which the coefficients are drawn. If R is constructed from a polynomial ring as a quotient ring or a fraction ring or a sequence of such operations, then the original coefficient ring is returned.
```
i1 : coefficientRing(ZZ/101[a][b])

      ZZ
o1 = ---[a]
     101

o1 : PolynomialRing

i2 : ultimate(coefficientRing, ZZ/101[a][b])

      ZZ
o2 = ---
     101

o2 : QuotientRing
```
## See also
• ultimate -- ultimate value for an iteration
• baseRings -- store the list of base rings of a ring
## Ways to use coefficientRing :
• "coefficientRing(Ring)"
## For the programmer
The object coefficientRing is a method function.
https://www.ensae.fr/en/courses/stochastic-calculus-long-course-sfa-ensta-2/
# Objective
The course includes the following chapters.
# Planning
1. Motivations: stochastic modeling, probabilistic representations of linear PDEs, stochastic control, filtering, mathematical finance.
2. Stochastic processes in continuous time: Gaussian processes, Brownian motion, (local) martingales, semimartingales, Itô processes.
3. Stochastic integrals: forward and Itô integrals.
4. Itô and chain rule formulae, a first approach to stochastic differential equations.
5. Girsanov formulae. Novikov and Beneš conditions. Predictable representation of Brownian martingales.
6. Stochastic differential equations with Lipschitz coefficients. Markov flows.
7. Stochastic differential equations without Lipschitz coefficients: strong existence, pathwise uniqueness, existence and uniqueness in law. Engelbert-Schmidt criterion. Non-explosion conditions
8. Bessel processes and Cox-Ingersoll-Ross model in mathematical finance.
9. Backward stochastic differential equations and connections with semilinear PDEs.
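The course description contains no code, but as a small illustration of topic 6 (stochastic differential equations), here is a minimal Euler–Maruyama sketch for geometric Brownian motion; everything in it — the model, step count, and parameters — is an assumption for illustration, not course material:

```python
import math
import random

def euler_maruyama_gbm(s0, mu, sigma, t, n, seed=0):
    """Simulate dS = mu*S dt + sigma*S dW with the Euler-Maruyama scheme."""
    rng = random.Random(seed)
    dt = t / n
    s = s0
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment ~ N(0, dt)
        s += mu * s * dt + sigma * s * dw
    return s

# With sigma = 0 the scheme reduces to deterministic Euler for dS = mu*S dt,
# so the result should approach s0 * exp(mu * t) as n grows.
approx = euler_maruyama_gbm(1.0, 0.05, 0.0, 1.0, 10_000)
```

Setting sigma = 0 gives a cheap correctness check against the exact exponential solution; with sigma > 0 one would instead compare Monte Carlo moments.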
Close related references to the course are the following monographs and articles: [2, 6, 3, 5]. For deeper considerations, we also refer to [7, 8, 1, 4].
# Références
[1] Jean Jacod. Calcul stochastique et problèmes de martingales, volume 714 of Lecture Notes in Mathematics. Springer, Berlin, 1979.
[2] I. Karatzas and S. E. Shreve. Brownian motion and stochastic calculus, volume 113 of Graduate Texts in Mathematics. Springer-Verlag, New York, second edition, 1991.
[3] D. Lamberton and B. Lapeyre. Introduction au calcul stochastique appliqué à la finance. Ellipses Édition Marketing, Paris, second edition, 1997.
[4] David Nualart. The Malliavin calculus and related topics. Probability and its Applications (New York). Springer-Verlag, New York, 1995.
[5] E. Pardoux. Backward stochastic differential equations and viscosity solutions of systems of semilinear parabolic and elliptic PDEs of second order. In Stochastic analysis and related topics, VI (Geilo, 1996), volume 42 of Progr. Probab., pages 79–127. Birkhäuser Boston, Boston, MA, 1998.
[6] D. Revuz and M. Yor. Continuous martingales and Brownian motion, volume 293 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer-Verlag, Berlin, third edition, 1999.
[7] L. C. G. Rogers and David Williams. Diffusions, Markov processes, and martingales. Vol. 2. Cambridge Mathematical Library. Cambridge University Press, Cambridge, 2000. Itô calculus. Reprint of the second (1994) edition.
[8] Daniel W. Stroock and S. R. Srinivasa Varadhan. Multidimensional diffusion processes. Classics in Mathematics. Springer-Verlag, Berlin, 2006. Reprint of the 1997 edition.
https://questioncove.com/updates/5fc51f3b79bcb8e822080c00
Mathematics
tiktokstar:
The positions of two divers from the water's surface after a dive are shown: Jack: −35 feet Simon: −40 feet Which inequality explains why Jack is closer to the surface than Simon? Because −35 > −40, so −35 is closer to 0 than −40 Because −35 < −40, so −35 is closer to 0 than −40 Because −35 > −40, so −35 is farther from 0 than −40 Because −35 < −40, so −35 is farther from 0 than −40
tiktokstar:
help plz
tiktokstar:
plz
Hoodmemes:
Well, both are negative numbers, but let's do an example.
Hoodmemes:
Is -35 closer to or farther from 0 on the number line?
tiktokstar:
no idk im dumb
tiktokstar:
wait yes
Hoodmemes:
$$\color{#0cbb34}{\text{Originally Posted by}}$$ @tiktokstar wait yes $$\color{#0cbb34}{\text{End of Quote}}$$ which is it closer or farther?
tiktokstar:
-35
Hoodmemes:
Is it closer to or farther from 0?
tiktokstar:
closer
Hoodmemes:
$$\color{#0cbb34}{\text{Originally Posted by}}$$ @tiktokstar closer $$\color{#0cbb34}{\text{End of Quote}}$$ yes, so since -35 is closer to 0, it is the greater number. That makes -40, which is farther from 0, the smaller number.
Hoodmemes:
(Hint) -35 > -40 is correct. With that being said, cross off answer options B and D.
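The comparison can be checked in a couple of lines of Python (not part of the original thread): distance from the surface is the absolute value of the diver's position.

```python
jack, simon = -35, -40

closer = abs(jack) < abs(simon)  # Jack's distance from 0 is smaller
greater = jack > simon           # closer to 0 on the negative side => greater
```

Both checks hold, which is exactly answer A: -35 > -40 because -35 is closer to 0 than -40.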
tiktokstar:
my mom is going to yell at me so is it A
Hoodmemes:
correct
tiktokstar:
thanks so much
Hoodmemes:
np,make sure u close this question
tiktokstar:
ok I will
http://sites.psu.edu/johnroe/tag/signature/
# Piazza and Schick on the analytic surgery sequence
In my last post I mentioned a paper by Deeley and Goffeng whose aim is to construct a geometric counterpart of the Higson-Roe analytic surgery sequence. This week, there appeared on the arXiv a new paper by Piazza and Schick which gives a new construction of the natural transformation from the original DIFF surgery exact sequence of Browder-Novikov-Sullivan-Wall to our analytic surgery sequence. This is a counterpart to a slightly earlier paper by the same authors in which they carry out the same project for the Stolz exact sequence for positive scalar curvature metrics.
In our original papers, Nigel and I made extensive use of Poincaré spaces – the key facts being that the “higher signatures” can be defined for such spaces, and that the mapping cylinder of a homotopy equivalence between manifolds is an example of a Poincaré space (with boundary). In fact, these observations can be used to prove the homotopy invariance of the higher signatures – this argument is the one that appears in the 1970s papers of Kasparov and Mischenko, essentially – and the natural transformation from geometric to analytic surgery should be thought of as a “quantification” of this homotopy invariance argument.
Now there is a different argument for homotopy invariance, due to Hilsum and Skandalis, that has a more analytical feel. The point of the new Piazza-Schick paper is to “quantify” this argument in the same way that we did the Poincaré complex argument. This should lead to the same maps (or at least, to maps having the same properties – then one is faced with a secondary version of the “comparing assembly maps” question) in perhaps a more direct way.
#### References
Hilsum, Michel, and Georges Skandalis. “Invariance Par Homotopie de La Signature à Coefficients Dans Un Fibré Presque Plat.” Journal Fur Die Reine Und Angewandte Mathematik 423 (1992): 73–99. doi:10.1515/crll.1992.423.73.
Kasparov, G.G. “K-theory, Group C*-algebras, and Higher Signatures (Conspectus).” In Proceedings of the 1993 Oberwolfach Conference on the Novikov Conjecture, edited by S. Ferry, A. Ranicki, and J. Rosenberg, 226:101–146. LMS Lecture Notes. Cambridge University Press, Cambridge, 1995.
Mischenko, A.S. “Infinite Dimensional Representations of Discrete Groups and Higher Signatures.” Mathematics of the USSR — Izvestija 8 (1974): 85–111.
Piazza, Paolo, and Thomas Schick. “Rho-classes, Index Theory and Stolz’ Positive Scalar Curvature Sequence.” arXiv:1210.6892 (October 25, 2012). http://arxiv.org/abs/1210.6892
———. The Surgery Exact Sequence, K-theory and the Signature Operator. ArXiv e-print, September 17, 2013. http://arxiv.org/abs/1309.4370
# The Eisenbud–Levine–Khimshiashvili signature formula
I learned last week of a really cool result, published when I was a first-year undergraduate, that I had not been aware of before. Maybe everyone knew it except me, but it is so neat I’m going to write about it anyway.
To set the scene, think about the Hopf index theorem for vector fields on a (compact, oriented) $$n$$-manifold.
https://chemistry.stackexchange.com/revisions/37056/2
# Ideal Gas Behavior
If ideal gas behavior is assumed, for which of the following reactions does $$\Delta H = \Delta U$$?
(a) N$$_2$$O$$_4$$ $$\rightarrow$$ 2 NO$$_2$$ $$(g)$$
(b) CH$$_4$$ $$(g)$$ + 2 O$$_2$$ $$(g)$$ $$\rightarrow$$ CO$$_2$$ $$(g)$$ + 2 H$$_2$$O $$(l)$$
(c) SO$$_2$$ $$(g)$$ + $$\frac {1}{2}$$ O$$_2$$ $$(g)$$ $$\rightarrow$$ SO$$_3$$ $$(g)$$
(d) Br$$_2$$ $$(l)$$ + 3 Cl$$_2$$ $$(g)$$ $$\rightarrow$$ 2 BrCl$$_3$$ $$(g)$$
(e) Cl$$_2$$ $$(g)$$ + F$$_2$$ $$(g)$$ $$\rightarrow$$ 2 ClF $$(g)$$
The correct answer is (e), and I just want to make sure I understand why. For $$\Delta U = \Delta H$$, no work can be done, since $$\Delta U = \Delta H - W$$ (where $$W$$ is the work done by the gas).
This means that the total number of molecules can't change, otherwise a change in volume will occur, causing work to be done upon molecules or the molecules to do work. Likewise, phase changes will also require work to be done on the molecules. Thus, we rule out (a), (b), (c), and (d)?
I'm not sure if this is too simplistic or the correct way of thinking about this problem.
Any help would be greatly appreciated.
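For ideal gases at constant temperature and pressure, $$\Delta H - \Delta U = \Delta n_\text{gas}RT$$, so the check reduces to tallying moles of gas on each side (liquids contribute negligibly). A short Python sketch (not part of the original question; the mole counts are transcribed from the options):

```python
# Moles of GAS on each side (liquids excluded), per the listed reactions
reactions = {
    "a": (1, 2),    # N2O4 -> 2 NO2 (ideal-gas behavior assumed, per the question)
    "b": (3, 1),    # CH4 + 2 O2 -> CO2 + 2 H2O(l); water is liquid
    "c": (1.5, 1),  # SO2 + 1/2 O2 -> SO3
    "d": (3, 2),    # Br2(l) + 3 Cl2 -> 2 BrCl3; bromine is liquid
    "e": (2, 2),    # Cl2 + F2 -> 2 ClF
}

# delta_H - delta_U = delta_n_gas * R * T, so it vanishes only when
# the gas mole count is unchanged
dn = {k: prod - react for k, (react, prod) in reactions.items()}
zero_work = [k for k, v in dn.items() if v == 0]
```

Only option (e) has an unchanged gas mole count, confirming the reasoning above.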
coloratura
http://www.ni.com/documentation/en/labview/latest/analysis-node-ref/continuous-pdf-student-t/
# Continuous PDF (Student t) (G Dataflow)
Version:
Computes the continuous probability density function (PDF) of a Student's t-distributed variate.
## x
Quantile of the continuous random variable.
Default: 1
## v
Number of degrees of freedom.
This input must be greater than 0.
Default: 1
## error in
Error conditions that occur before this node runs.
The node responds to this input according to standard error behavior.
Standard Error Behavior
Many nodes provide an error in input and an error out output so that the node can respond to and communicate errors that occur while code is running. The value of error in specifies whether an error occurred before the node runs. Most nodes respond to values of error in in a standard, predictable way.
If error in does not contain an error, the node begins execution normally. If no error occurs while the node runs, it returns no error; if an error does occur while the node runs, it returns that error information as error out.
If error in contains an error, the node does not execute. Instead, it returns the error in value as error out.
Default: No error
## pdf(x)
Probability density function at x.
## error out
Error information.
The node produces this output according to standard error behavior.
## Algorithm Definition for the Continuous PDF of a Student's T-Distributed Variate
The following equation defines the continuous PDF of a Student's t-distributed variate.
$pdf\left(x\right)=\frac{\Gamma\left[\left(k+1\right)/2\right]}{{\left(\pi k\right)}^{1/2}\,\Gamma\left(k/2\right){\left[1+\left({x}^{2}/k\right)\right]}^{\left(k+1\right)/2}}$
where
• x is the quantile of the continuous random variable
• k is the number of degrees of freedom of the random variable (the v input)
• $\mathrm{\Gamma }\left[\left(k+1\right)/2\right]$ is the gamma function with argument (k + 1) / 2
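Outside of G Dataflow, the same formula is easy to check with a few lines of Python; the function name and test points below are my own, not part of the node's API:

```python
import math

def student_t_pdf(x: float, k: float) -> float:
    """Continuous PDF of a Student's t-distributed variate with k (> 0) degrees of freedom."""
    if k <= 0:
        raise ValueError("degrees of freedom must be greater than 0")
    num = math.gamma((k + 1) / 2)
    den = math.sqrt(math.pi * k) * math.gamma(k / 2) * (1 + x * x / k) ** ((k + 1) / 2)
    return num / den
```

For k = 1 the formula reduces to the Cauchy density 1/(π(1 + x²)), which makes a quick sanity check.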
Where This Node Can Run:
Desktop OS: Windows
FPGA: Not supported
Web Server: Not supported in VIs that run in a web application
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=130&t=19203
|
## Quiz One - #5 Calculating Heat
$\Delta U=q+w$
104781135
### Quiz One - #5 Calculating Heat
Hello! Can anybody explain how to do #5 on our first quiz? I knew the equations to use, I think I just didn't use the right heat capacities. How do I know which ones to use?
Hannah_El-Sabrout_2K
### Re: Quiz One - #5 Calculating Heat
The quizzes vary from day to day, so I would ask your TA or someone in your discussion.
Alyssa Chan 3B
### Re: Quiz One - #5 Calculating Heat
I think we have the same quiz--melting a -55˚C block of ice into 25˚C water?
When you first warm the ice from -55˚C to 0˚C (a temperature change, not melting yet), you use the specific heat capacity of ice in the equation q = mc∆T. During the phase change from ice to water (at 0˚C), you use the heat of fusion in the equation q = n∆Hfus (note that ∆Hfus is in kJ/mol, so n is in moles). From 0˚C water to 25˚C water, you use the specific heat capacity of water in the equation q = mc∆T.
Hope this helps!
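If you want to check your arithmetic, here is a small Python sketch of the three-step calculation. The constants below are typical textbook values (c_ice ≈ 2.03 J/(g·˚C), ∆Hfus ≈ 6.01 kJ/mol, c_water ≈ 4.184 J/(g·˚C)) — your quiz may give slightly different ones — and the 18.02 g mass is just an example (one mole of water):

```python
M_WATER = 18.02   # g/mol
C_ICE = 2.03      # J/(g*degC), typical textbook value for ice
C_WATER = 4.184   # J/(g*degC), liquid water
DH_FUS = 6.01e3   # J/mol, heat of fusion of water

def heat_ice_to_water(mass_g, t_ice=-55.0, t_final=25.0):
    """Total heat (J): warm ice from t_ice to 0 C, melt it at 0 C, warm water to t_final."""
    n = mass_g / M_WATER                            # moles, since dHfus is per mole
    q_warm_ice = mass_g * C_ICE * (0.0 - t_ice)     # q = m c dT (ice)
    q_melt = n * DH_FUS                             # q = n dHfus (phase change)
    q_warm_water = mass_g * C_WATER * (t_final - 0.0)  # q = m c dT (water)
    return q_warm_ice + q_melt + q_warm_water

total = heat_ice_to_water(18.02)
```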
http://openstudy.com/updates/55f627cae4b0cf7fa75c714d
|
## anonymous one year ago Evaluate the integral inside! thanks!!
1. anonymous
|dw:1442195407457:dw|
2. anonymous
would it be like this so far? |dw:1442195517625:dw| ? not too sure about the other parts :/
3. zepdrix
This is how I like to think of it when doing by-parts. $$\large\rm u$$ is the thing that you're trying to "destroy" or breakdown. $$\large\rm dv$$ is usually the part that is cyclical, won't breakdown, like sine/cosine or exponential.
4. zepdrix
So when I look at this problem, I'm thinking hmm $$\large\rm t^2$$ is probably a good $$\large\rm u$$ because we differentiate it and it will breakdown, ya?
5. zepdrix
$$\large\rm t^2\quad\to\quad 2t\quad\to\quad 0$$
6. anonymous
yes :)
7. zepdrix
$\large\rm u=t^2$ and $\large\rm dv=\sin\beta\theta~d\theta$. So what do we get for these: $\large\rm du=?$ and $\large\rm v=?$
8. zepdrix
Oh woops, lol not theta
9. zepdrix
$\large\rm u=t^2$ and $\large\rm dv=\sin\beta t~d t$
10. anonymous
hahaa :P du = 2t and v = -cosBtdt ^sorry!! using a B instead because i cannot find how to put the symbol in
11. anonymous
oops and now i did it!! v = -cos Bø dt lol
12. zepdrix
Hmm the $$\large\rm v$$ looks a little off.
13. zepdrix
You could do a little u-sub, err we already used the letter u.... you could do a little m-sub to deal with integrating your dv. but that's really tedious, you wanna get comfortable with easy integrals like this.
14. zepdrix
$\large\rm \int\limits \cos(2x)dx=?$Like this one, can you solve without u-sub?
15. anonymous
:/ how would i fix v then? but du = 2t ?
16. zepdrix
yes
17. zepdrix
du = 2t dt
18. anonymous
19. zepdrix
You can use a method which I think is called "advanced guessing",$\large\rm \int\limits\limits \cos(2x)dx=\sin(2x)$I'm thinking that it's going to be something like this, ya?
20. anonymous
yes!!
21. zepdrix
Take a derivative to check and see if you have the correct solution.$\large\rm (\sin(2x))'=\cos(2x)(2x)'=2\cos(2x)$Woops! We got a little bit too much back. See that extra 2? That tells us that:$\large\rm \int\limits 2\cos(2x)dx=\sin(2x)$Which means:$\large\rm \int\limits \cos(2x)dx=\frac{1}{2}\sin(2x)$Chain rule told us to multiply by 2 on the outside. So when we integrate, we need to do the reverse to compensate for the missing 2. We need to divide by 2.
22. zepdrix
So this will come up A LOT, so try to get used to it. When you have a coefficient on the x, you end up dividing by that coefficient when you integrate.
23. anonymous
ohh okay!! and so for our v, how can we apply the same method?
24. zepdrix
$\large\rm \int\limits \sin\beta t~d t=\frac{1}{\beta}(-\cos\beta t)$Going backwards, sine gives us -cosine and we have to divide by the beta coefficient on x.
25. anonymous
ohh okay!! and so we have that as our v? so we do the integral udv = uv - integral v du ?
26. zepdrix
$\large\rm u=t^2,\qquad du=2t~d t$ and $\large\rm dv=\sin(\beta t)d t,\qquad v=-\frac{1}{\beta}\cos(\beta t)$. Let's list all of our pieces together before setting up the integral. Unless you're following on paper, then you're prolly ok heh.
27. zepdrix
Yes, integral time \c;/
28. anonymous
ooh okay!! and so we get this?|dw:1442196531964:dw|
29. anonymous
-1/beta cos (beta t)t^2 - not sure how to integrate that :/ but 2t dt = 0?
30. zepdrix
Ok good :) Looks like integration-by-parts is required again. Clean things up before choosing your $$\large\rm u$$ and $$\large\rm dv$$ though:$\large\rm =-\frac{1}{\beta}t^2 \cos(\beta t)+\frac{2}{\beta}\int\limits t \cos(\beta t) d t$
31. zepdrix
And then we need to apply by-parts to this orange portion, ya?$\large\rm =-\frac{1}{\beta}t^2 \cos(\beta t)+\frac{2}{\beta}\color{orangered}{\int\limits\limits t \cos(\beta t) d t}$
32. anonymous
yes:)
33. anonymous
and what happens next? we integrate the red?
34. zepdrix
The orange, yes. We need to establish $$\large\rm u$$ and $$\large\rm dv$$ for the orange part. Just a note: Earlier I had said something about how the t^2 will break down and I posted this:$\large\rm t^2\quad\to\quad 2t\quad\to\quad 0$Lil mistake there, 2t doesn't differentiate to 0, it differentiates to 2. Whatever, not a big deal :) We'll just take our derivative carefully
35. zepdrix
I pulled the 2 outside of the integral, hopefully that isn't confusing. So we still want the u to break down. So we'll choose our parts this way: $\large\rm u=t$ and $\large\rm dv=\cos(\beta t)d t$
36. zepdrix
Getting your other two pieces gives you: $\large\rm du=1~dt$ and $\large\rm v=\frac{1}{\beta}\sin(\beta t)$
37. anonymous
okay!!
38. zepdrix
Any confusion there? :U The problem is getting pretty long, so it's easy to get lost lol
39. zepdrix
So draw your by-parts thing again :D
40. zepdrix
|dw:1442197278644:dw|
41. anonymous
okay :) let's see if i understood correctly lol :P so we have this? |dw:1442197308540:dw|
42. zepdrix
|dw:1442197426918:dw|
43. anonymous
okay! and so now i simplify ?
44. zepdrix
$\large\rm =-\frac{1}{\beta}t^2 \cos(\beta t)+\frac{2}{\beta}\color{orangered}{\int\limits\limits\limits t \cos(\beta t) d t}$ $\large\rm =-\frac{1}{\beta}t^2 \cos(\beta t)+\frac{2}{\beta}\color{orangered}{\left[\frac{1}{\beta}t \sin(\beta t)-\frac{1}{b}\int\limits \sin(\beta t)d t\right]}$Yes, maybe distribute the 2/B.
45. zepdrix
b=Beta, typo :c
46. zepdrix
I can type it out if you want lol It's getting so long >.<
47. anonymous
hehe okay!! thank you!! that would be very helpful :P I'm getting a bit lost in it lol :P
48. zepdrix
$\large\rm =-\frac{1}{\beta}t^2 \cos(\beta t)+\frac{2}{\beta^2}t \sin(\beta t)-\frac{2}{\beta^2}\int\limits \sin(\beta t)d t$Something like that. Ok one last piece to integrate! No more by-parts required, yay!
49. anonymous
oooh wow okay!! haha got so messy :P so integrate it to -cos dt ?
50. zepdrix
Yes. And again, since we have a coefficient on the t, we have to divide by that.
51. anonymous
how does that look again?
52. zepdrix
$\large\rm =-\frac{1}{\beta}t^2 \cos(\beta t)+\frac{2}{\beta^2}t \sin(\beta t)-\frac{2}{\beta^2}\color{orangered}{\int\limits\limits \sin(\beta t)d t}$ $\large\rm =-\frac{1}{\beta}t^2 \cos(\beta t)+\frac{2}{\beta^2}t \sin(\beta t)-\frac{2}{\beta^2}\color{orangered}{\frac{1}{\beta}(-\cos(\beta t))}$
53. zepdrix
$\large\rm =-\frac{1}{\beta}t^2 \cos(\beta t)+\frac{2}{\beta^2}t \sin(\beta t)+\frac{2}{\beta^3}\cos(\beta t)+C$Somethinggggg like that? 0_o Boy this one is a doozy when they throw in the beta and the squared term.
54. anonymous
whoahhh wowzers okay :P so this is done? :O
55. zepdrix
yes, imma check my work real quick though, to make sure i didn't mess up anywhere
56. anonymous
okie :)
57. zepdrix
yayyyy looks like we did it correctly \c:/ I know that one was pretty insane to get through. You maybe would want to think about practicing with something like this:$\large\rm \int\limits x^2\sin(2x)dx$Might just be easier without all those betas floating around.
58. anonymous
ooh okay!! yay!! thanks so much!! :D
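For anyone who wants to double-check the final antiderivative numerically, here is a short Python comparison against Simpson's rule. The choice of β = 1.7 and the interval [0, 2] are arbitrary test values, not from the problem:

```python
import math

def F(t, beta):
    # Antiderivative derived above:
    # -t^2 cos(bt)/b + 2 t sin(bt)/b^2 + 2 cos(bt)/b^3
    return (-t**2 * math.cos(beta * t) / beta
            + 2 * t * math.sin(beta * t) / beta**2
            + 2 * math.cos(beta * t) / beta**3)

def simpson(f, a, b, n=10000):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

beta = 1.7
numeric = simpson(lambda t: t**2 * math.sin(beta * t), 0.0, 2.0)
exact = F(2.0, beta) - F(0.0, beta)
```

The two values agree to many decimal places, confirming the by-parts result.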
http://www.ck12.org/algebra/Factoring-by-Grouping/lesson/user:support/Factoring-by-Grouping/
|
# Factoring by Grouping
What if you had a polynomial expression like $3x^2 - 6x + 2x - 4$ in which some of the terms shared a common factor but not all of them? How could you factor this expression? After completing this Concept, you'll be able to factor polynomials like this one by grouping.
### Guidance
Sometimes, we can factor a polynomial containing four or more terms by factoring common monomials from groups of terms. This method is called factor by grouping.
The next example illustrates how this process works.
#### Example A
Factor $2x+2y+ax+ay$ .
Solution
There is no factor common to all the terms. However, the first two terms have a common factor of 2 and the last two terms have a common factor of $a$ . Factor 2 from the first two terms and factor $a$ from the last two terms:
$2x + 2y + ax + ay = 2(x + y) + a(x + y)$
Now we notice that the binomial $(x + y)$ is common to both terms. We factor the common binomial and get:
$(x + y)(2 + a)$
#### Example B
Factor $3x^2+6x+4x+8$ .
Solution
We factor 3x from the first two terms and factor 4 from the last two terms:
$3x(x+2)+4(x+2)$
Now factor $(x+2)$ from both terms: $(x+2)(3x+4)$ .
Now the polynomial is factored completely.
Factor Quadratic Trinomials Where a ≠ 1
Factoring by grouping is a very useful method for factoring quadratic trinomials of the form $ax^2+bx+c$ , where $a \neq 1$ .
A quadratic like this doesn’t factor as $(x \pm m)(x \pm n)$ , so it’s not as simple as looking for two numbers that multiply to $c$ and add up to $b$ . Instead, we also have to take into account the coefficient in the first term.
To factor a quadratic polynomial where $a \neq 1$ , we follow these steps:
1. We find the product $ac$ .
2. We look for two numbers that multiply to $ac$ and add up to $b$ .
3. We rewrite the middle term using the two numbers we just found.
4. We factor the expression by grouping.
Let’s apply this method to the following examples.
#### Example C
Factor the following quadratic trinomials by grouping.
a) $3x^2+8x+4$
b) $6x^2-11x+4$
Solution:
Let’s follow the steps outlined above:
a) $3x^2+8x+4$
Step 1: $ac = 3 \cdot 4 = 12$
Step 2: The number 12 can be written as a product of two numbers in any of these ways:
$$\begin{aligned}12 &= 1 \cdot 12 && \text{and} && 1 + 12 = 13\\12 &= 2 \cdot 6 && \text{and} && 2 + 6 = 8 && \text{(correct choice)}\\12 &= 3 \cdot 4 && \text{and} && 3 + 4 = 7\end{aligned}$$
Step 3: Re-write the middle term: $8x = 2x + 6x$ , so the problem becomes:
$3x^2+8x+4=3x^2+2x+6x+4$
Step 4: Factor an $x$ from the first two terms and a 2 from the last two terms:
$x(3x+2)+2(3x+2)$
Now factor the common binomial $(3x + 2)$:
$(3x+2)(x+2)$
This is the answer.
To check if this is correct we multiply $(3x+2)(x+2)$ :
$(3x+2)(x+2) = 3x^2 + 6x + 2x + 4 = 3x^2 + 8x + 4$
The solution checks out.
b) $6x^2-11x+4$
Step 1: $ac = 6 \cdot 4 = 24$
Step 2: The number 24 can be written as a product of two numbers in any of these ways:
$$\begin{aligned}24 &= 1 \cdot 24 && \text{and} && 1 + 24 = 25\\24 &= -1 \cdot (-24) && \text{and} && -1 + (-24) = -25\\24 &= 2 \cdot 12 && \text{and} && 2 + 12 = 14\\24 &= -2 \cdot (-12) && \text{and} && -2 + (-12) = -14\\24 &= 3 \cdot 8 && \text{and} && 3 + 8 = 11\\24 &= -3 \cdot (-8) && \text{and} && -3 + (-8) = -11 && \text{(correct choice)}\\24 &= 4 \cdot 6 && \text{and} && 4 + 6 = 10\\24 &= -4 \cdot (-6) && \text{and} && -4 + (-6) = -10\end{aligned}$$
Step 3: Re-write the middle term: $-11x = -3x - 8x$ , so the problem becomes:
$6x^2-11x+4=6x^2-3x-8x+4$
Step 4: Factor by grouping: factor a $3x$ from the first two terms and a -4 from the last two terms:
$3x(2x-1)-4(2x-1)$
Now factor the common binomial $(2x - 1)$ :
$(2x-1)(3x-4)$
This is the answer.
### Vocabulary
• It is possible to factor a polynomial containing four or more terms by factoring common monomials from groups of terms. This method is called factoring by grouping .
### Guided Practice
Factor $5x^2-6x+1$ by grouping.
Solution:
Let’s follow the steps outlined above:
$5x^2-6x+1$
Step 1: $ac = 5 \cdot 1 = 5$
Step 2: The number 5 can be written as a product of two numbers in any of these ways:
$$\begin{aligned}5 &= 1 \cdot 5 && \text{and} && 1 + 5 = 6\\5 &= -1 \cdot (-5) && \text{and} && -1 + (-5) = -6 && \text{(correct choice)}\end{aligned}$$
Step 3: Re-write the middle term: $-6x = -x - 5x$ , so the problem becomes:
$5x^2-6x+1=5x^2-x-5x+1$
Step 4: Factor by grouping: factor an $x$ from the first two terms and a $-1$ from the last two terms:
$x(5x-1)-1(5x-1)$
Now factor the common binomial $(5x - 1)$ :
$(5x-1)(x-1)$
This is the answer.
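Step 2 of the method — finding two numbers that multiply to $ac$ and add up to $b$ — is mechanical enough to script. A minimal Python sketch (the function name `split_middle_term` is my own, not part of the lesson):

```python
def split_middle_term(a, b, c):
    """For a*x^2 + b*x + c, find integers (p, q) with p*q == a*c and
    p + q == b, so the middle term b*x can be rewritten as p*x + q*x.
    Returns None when no integer split exists (the trinomial does not
    factor over the integers by this method)."""
    ac = a * c
    for p in range(-abs(ac), abs(ac) + 1):
        if p != 0 and ac % p == 0:
            q = ac // p
            if p + q == b:
                return p, q
    return None
```

For $3x^2+8x+4$ it finds 2 and 6, and for $6x^2-11x+4$ it finds $-8$ and $-3$, matching the worked examples above.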
### Practice
Factor by grouping.
1. $6x^2-9x+10x-15$
2. $5x^2-35x+x-7$
3. $9x^2-9x-x+1$
4. $4x^2+32x-5x-40$
5. $2a^2-6ab+3ab-9b^2$
6. $5x^2+15x-2xy-6y$
Factor the following quadratic trinomials by grouping.
1. $4x^2+25x-21$
2. $6x^2+7x+1$
3. $4x^2+8x-5$
4. $3x^2+16x+21$
5. $6x^2-2x-4$
6. $8x^2-14x-15$
https://www.nature.com/articles/s41535-020-00257-7?error=cookies_not_supported&code=4bc2ca36-f7fd-4e34-af83-13a54aefc695
|
## Introduction
Topological materials have ushered in a new era in condensed matter research since the discovery of the quantum Hall effect. Over the years, a few material systems have been reported as topological insulators such as Bi1−xSbx1 and Bi2Se32, topological semimetals such as Cd3As23,4,5, Na3Bi6,7, YbMnBi28, BaMnSb29, and topological superconductors such as CuxBi2Se3, Sn1−xInTe, and others10,11,12. In an ordinary semimetal, there is a small overlap between the valence and conduction bands. However, in a topological Dirac semimetal (TDSM), the inverted bands contact only at discrete Dirac points in momentum space with linear energy dispersions. The behavior of these fermions is governed by the relativistic Dirac equation13,14. In a TDSM, spin–orbit coupling (SOC) does not open up a gap, and the Dirac points are protected by the time-reversal and inversion symmetries3,15. If symmetries are broken, these materials can be driven into various other topological phases. For instance, the breaking of either the time-reversal or inversion symmetry can drive a TDSM to a Weyl semimetal3. Moreover, topological materials can also exhibit superconductivity which is an extremely attractive feature for quantum technological applications12.
One key indication of the possible topological phases in a material is the inversion of energy bands at high-symmetry points in the Brillouin zone16. These features can be investigated through band structure calculations, which have been quite successful in predicting topological phases in a number of material systems17,18,19,20,21. Recently, Nie et al.22 have investigated the topological phases in a chain-compound TaSe3 through first-principles calculations. The Z2 invariants (ν0;ν1ν2ν3) were obtained for this material, which can be used to distinguish if the system is ordinary or topologically non-trivial. Here, ν0 is called the strong topological index, and a value of ν0 = 1 indicates a “strong” topological insulator (STI) phase with an odd number of Dirac cones on the surface, which are robust against weak time-reversal invariant perturbations. A “weak” topological insulator phase is identified when ν0 = 0, and one of the indices ν1, ν2, or ν3, known as the weak topological indices, is nonzero16. The calculations for TaSe3 revealed the Z2 invariants (ν0;ν1ν2ν3) to be (1;100)22, indicating a strong three-dimensional TI with guaranteed Dirac states on the surfaces. Furthermore, band calculations indicate that there is a band inversion even without spin–orbit coupling (SOC) in TaSe322.
While there have been investigations on various physical properties23,24,25,26,27,28,29, the topological properties of TaSe3 have remained experimentally unexplored, except for a recent preprint30. This is in part due to the multiple bands with two identical electron bands related to band inversion, and one hole band involving no band inversion. In order to distinguish them, information from individual bands has to be separated. In this article, we report the experimental investigation of the Fermi surface topology of TaSe3 single crystals. Shubnikov-de Haas (SdH) oscillations of both the longitudinal and Hall conductivities are clearly observed. Fast Fourier transformation (FFT) analysis of the SdH oscillations indicates two frequencies, Fα ≈ 97 T and Fβ ≈ 186 T. By constructing the Landau level fan diagram for each oscillation, we obtain the Berry phase $\Phi_{\mathrm{B}}^{\alpha} \approx 1.1\pi$ and $\Phi_{\mathrm{B}}^{\beta} \approx 0$ (3D) to $-0.16\pi$ (2D). This indicates that the α band is topologically non-trivial, while the β band is trivial. In addition, we observe extremely large magnetoresistance (XMR) in this material which reaches about 7 × 10³% for H = 14 T at T = 1.9 K, and follows the Kohler's scaling law. The quadratic nature of the MR with respect to magnetic field points towards a high degree of electron–hole compensation, supported by our Hall effect data.
## Results and discussion
### Crystal structure and magnetotransport
Figure 1a shows the X-ray diffraction (XRD) pattern for a single crystal of TaSe3 at room temperature. The XRD peaks are consistent with a monoclinic structure with the space group P21/m. Due to its malleable nature, the single crystal was not perfectly flat on the sample platform, producing a weak $$40\bar 5$$ peak at 2θ ≈ 50°, unexpected from the $$10\bar 1$$ plane (Fig. 1a). For further confirmation, we performed single crystal XRD measurements at room temperature, which also revealed the same monoclinic structure. The lattice parameters, a = 9.834(2) Å, b = 3.496(1) Å, c = 10.421(3) Å, and β = 106.237(6)° were obtained through a Rietveld refinement of the single crystal XRD data. The crystal structure of TaSe3 consists of infinite, trigonal prismatic chains along the crystallographic b-axis, as shown in the left side of Fig. 1b. A single linear chain is formed by stacking prismatic cages along the b-axis. At the center of each cage there is a Ta atom, coordinated with six Se atoms at the corners. However, the neighboring chains are inequivalent, which are named as type-I and type-II chains (Fig. 1b). The shorter distance between the Se atoms in type-I chains enables the formation of strong covalent p–p bonding between the two Se atoms, whereas this bond is broken in the type-II chains due to the longer distance. According to band structure calculations, these Se atoms in the type-II chains form bonds with the Ta atoms from the neighboring chains, which is primarily responsible for the band inversion in this material22.
Figure 1c shows the temperature dependence of the magnetic susceptibility [χ(T)] measured at H = 1 kOe, which is negative over the entire temperature range. This indicates a diamagnetic behavior in TaSe3, which is also supported by the negative and linear magnetic field dependence of the magnetization [M(H)], measured at T = 1.85 K as shown in the inset of Fig. 1c. Since the negative χ is not suppressed up to 7 T, the diamagnetism is not related to superconductivity but to the core electron contribution of TaSe3.
Figure 1d shows the temperature dependence of the b-axis resistivity (ρb) of TaSe3 at zero magnetic field in the temperature range of 1.9–305 K. The resistivity shows metallic behavior, decreasing with decreasing temperature from ρb(300 K) = 1910 µΩ cm to ρb(1.9 K) = 14 µΩ cm. This corresponds to a residual resistivity ratio [RRR = ρ(300 K)/ρ(1.9 K)] of 136. The RRR of this sample is similar to or exceeds the previously reported values for this material23,24,25,26,27,28,29, indicating the high quality of our single crystals. In the temperature range of T = 60–300 K, the ρ(T) data follow the Bloch–Grüneisen (BG) law
$$\rho\left(T\right) = \rho_0 + A\left(\frac{T}{\theta_{\mathrm{D}}}\right)^{k}\int_0^{\theta_{\mathrm{D}}/T} \frac{x^{k}\,\mathrm{d}x}{(e^{x} - 1)(1 - e^{-x})}.$$
(1)
Here, ρ0, A, and θD are the residual resistivity, electron–phonon interaction constant, and Debye temperature, respectively. The red solid line in Fig. 1d represents the fitting of the data with Eq. (1). The fitting yields A = 8.23 ± 0.03 mΩ cm, θD = 310 ± 2 K, and the exponent k = 4.6. The value of k is close to 5 expected for simple metals with dominant electron–phonon scattering. On the other hand, ρb(T) at low temperatures (T < 60 K) follows a power law behavior given by ρb(T) = ρ0 +CTm, as shown in the inset of Fig. 1d. The residual resistivity ρ0 = 14.5 ± 0.5 µΩ cm and the exponent m = 2.5 ± 0.02 were obtained from the fit. While there is electron–phonon scattering, it had been argued that in quasi 1D materials, the electron–electron Umklapp scattering can become the dominant scattering mechanism at low temperatures, when the energy (kBT) is smaller than the inter-chain interaction energy31,32. In this scenario, the exponent m takes a value between 2 and 331,32. Similar behavior was previously reported in TaSe331. Thus, we consider that the electron–electron Umklapp scattering plays the dominant role in scattering at low temperatures in TaSe3.
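As a sanity check on Eq. (1), the BG integral is straightforward to evaluate numerically. The Python sketch below uses a midpoint rule and the fitted values quoted above (ρ0 = 14.5 µΩ cm, A = 8.23 mΩ cm = 8230 µΩ cm, θD = 310 K, k = 4.6); it is an illustrative re-implementation, not the authors' fitting code, and reproduces the right order of magnitude for ρ(300 K):

```python
import math

def bloch_gruneisen(T, rho0=14.5, A=8230.0, theta_D=310.0, k=4.6, n=4000):
    """Bloch-Gruneisen resistivity of Eq. (1), in micro-ohm cm.
    The integral is evaluated with a midpoint rule (midpoints avoid x = 0,
    where the integrand is 0/0 but finite in the limit)."""
    upper = theta_D / T
    h = upper / n
    integral = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        integral += x**k / ((math.exp(x) - 1.0) * (1.0 - math.exp(-x)))
    integral *= h
    return rho0 + A * (T / theta_D) ** k * integral
```

At high temperature the integrand behaves like $x^{k-2}$, so ρ grows roughly linearly in T, consistent with the metallic ρ(T) of Fig. 1d.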
According to Fig. 1a, the flat surface of the as-grown TaSe3 single crystals is normal to the [$10\bar 1$] direction. For probing the magnetic field (H) effect, we apply H normal to the flat surface, i.e., H ∥ [$10\bar 1$]. For easy discussion, the applied field direction is indicated with respect to the Brillouin zone (BZ) and Fermi surface pockets, as shown in Fig. 2a, b (adapted from ref. 22), respectively. Figure 2c shows the temperature dependence of ρb at various magnetic fields up to 14 T. This and the rest of the measurements were conducted using a sample with RRR = 107. In the presence of an applied magnetic field normal to the current (H ⊥ I), the ρb(T) curves maintain metallic behavior at high temperatures, where the resistivity decreases with decreasing temperature until reaching a minimum at Tm. Below this temperature, the resistivity keeps increasing until a plateau-like region is reached at Ti. The onset temperature Tm is identified as the point where ∂ρb(T, H)/∂T = 0, whereas Ti is the point where ∂ρb(T, H)/∂T is minimum. The inset of Fig. 2c shows the magnetic field dependence of these two characteristic temperatures, both increasing with increasing magnetic field. However, the increase in the onset temperature Tm is more drastic than that of Ti.
### Kohler’s scaling law
The magnetic field-induced resistivity upturn and XMR have been frequently observed in topological materials33,34,35,36,37. Several mechanisms are proposed to explain these features, such as field-induced metal-to-insulator transition, electron–hole compensation, topological protection, and so on36,38,39. Recently, it was demonstrated that this type of field-induced resistivity upturn could be explained within the framework of the Kohler’s scaling law without invoking any topological considerations35,40. The Kohler’s scaling law is given by41,42
$${\mathrm{MR}} = \alpha \left( {H{\mathrm{/}}\rho _0} \right)^n$$
(2)
with α and n being constants. Since MR is given by $$\frac{{\rho \left( {T,H} \right)\, - \,\rho (T,0)}}{{\rho (T,0)}}$$, Eq. (2) can be rearranged and written as
$$\rho \left( {T,H} \right) = \rho \left( {T,0} \right) + \alpha \frac{{H^n}}{{\rho \left( {T,0} \right)^{n - 1}}}.$$
(3)
In light of Eq. (3), ρ(T, H) consists of two terms: the temperature dependence of the resistivity at zero field [ρ(T, 0)] and the magnetic-field-induced resistivity [$\Delta\rho = \alpha H^{n}/\rho(T, 0)^{n-1}$]. Since these two terms have opposite temperature dependence, the minimum in the ρ(T, H) curve arises from a competition between the two terms35,40,43. For demonstration, we choose ρb(T) at H = 7 T, as shown by the blue symbols in Fig. 2d. The solid green line in the figure represents a fit of the data to Eq. (3) with α = 1.3 × 10⁻¹⁰ (Ω cm)$^{n}$ T$^{-n}$ and n = 1.95. The purple symbols in Fig. 2d represent the difference in the temperature dependence of the resistivity measured at H = 0 and 7 T [Δρ = ρb(7 T) − ρb(0 T)]. The data was fitted with the second term in Eq. (3), as represented by the solid (magenta) line. We note that Eq. (3) can describe the field-induced resistivity fairly well. Furthermore, the plateau in the ρb(T, H) curves (Fig. 2c) at low temperatures can also be explained through Eq. (3). At low temperatures, ρ(T, 0) = ρ0 becomes very low and practically temperature independent. Therefore, $\rho(T, H) \approx \rho_0 + \alpha H^{n}/\rho_0^{n-1}$ is nearly constant at low temperatures, giving rise to a plateau.
To confirm the Kohler's scaling law, we have measured the magnetic field dependence of ρb(T, H) at fixed temperatures with H ⊥ I. Since the measured ρb(T, H) can contain both the longitudinal (ρyy) and Hall (ρxy) contributions, the longitudinal component was isolated from the Hall component by using the relation ρyy = [ρb(T, +H) + ρb(T, −H)]/2. The MR was then calculated using the standard relation MR = [ρyy(H) − ρyy(0)]/ρyy(0). Figure 2e shows the MR at the indicated temperatures for fields up to 14 T. We observe XMR in this material, which reaches about 7 × 10³% at T = 1.9 K and H = 14 T without showing any sign of saturation. This is comparable to other XMR materials, such as Cd3As244, Na3Bi45, NbP46, TaAs47, WTe233,38, and PtBi2−x37.
The Kohler’s scaling law (Eq. (2)) can describe the motion of electrons in magnetic field for a single band or multiple bands with electron–hole compensation35,41,42. For n = 2, the Kohler’s law [MR = α(H/ρ0)2] can be derived from the two-band model of the electrical resistivity for non-magnetic materials, when the electron and hole carrier concentration is perfectly compensated35. However, Wang et al.35 also argued that MR for an imperfectly compensated system can still obey the Kohler’s law if either or both the mobilities are small. Nevertheless, this law would be violated if α is temperature dependent35. Figure 2f shows the MR at various temperatures plotted against the rescaled magnetic field H/ρ0. Consequently, all the MR curves from T = 1.9–12 K collapse on to a single curve, indicating that the scattering mechanism is the same throughout the relevant temperature and field ranges. This rules out the possibility of a metal-to-insulator transition35,36,48. In addition, the collapse of the MR curves, according to the Kohler’s rule, indicates that the carrier concentration and the mobility ratio of hole-to-electron do not change significantly with temperature49,50. The inset of Fig. 2f shows a fitting of the MR curve at T = 4 K using Eq. (2), yielding α = (1.30 ± 0.01) × 10−10 (Ω cm)n Tn and n = 1.950 ± 0.001. These values are used to fit the ρb(T) data, as shown in Fig. 2d. The value of n depends on the level of carrier compensation. For a system with perfect electron–hole compensation, n should be 235,43. Thus, the value of n = 1.95 for TaSe3 points towards a high degree of electron–hole compensation.
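As a rough consistency check on the Kohler scaling, the fitted α and n together with the reported MR of about 7 × 103% at 14 T imply a residual resistivity of order 10 μΩ cm (a back-of-envelope estimate, not a value quoted in the text):

```python
# Kohler's rule: MR = alpha * (H / rho0)^n, so rho0 = (alpha * H^n / MR)^(1/n)
alpha, n = 1.3e-10, 1.95   # fitted values, alpha in (Ohm cm)^n T^n
H, MR = 14.0, 70.0         # 14 T; 7e3 % expressed as a dimensionless ratio

rho0 = (alpha * H ** n / MR) ** (1 / n)   # residual resistivity, Ohm cm
# rho0 comes out near 1.3e-5 Ohm cm (~13 microOhm cm), a plausible
# low-temperature resistivity for a clean semimetal.
```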
### SdH oscillations
As can be seen in Fig. 2e, ρb exhibits SdH oscillations. The SdH oscillations occur in crystalline solids when the density of states is periodically modulated as a function of magnetic field due to the Landau quantization of the energy states in magnetic field51. One of the most useful aspects of SdH oscillations is that they contain information on band topology, reflected in the Berry phase. A widely used method to extract the Berry phase is to construct the Landau level fan diagram, where the minima in the SdH oscillations of the conductivity are assigned to an integer Landau level index51. We calculated the longitudinal (σyy) and Hall conductivities (σxy) using the following relations:
$$\sigma_{yy} = \frac{\rho_{yy}}{\rho_{yy}^2 + \rho_{xy}^2},\qquad \sigma_{xy} = -\frac{\rho_{xy}}{\rho_{yy}^2 + \rho_{xy}^2}.$$
(4)
Figure 3a shows the longitudinal and Hall conductivities measured at various temperatures for H = 0–14 T. The smooth background of the σyy(H) data was deduced through polynomial fitting, which was then subtracted from the data to obtain the oscillatory component of the conductivity Δσyy. Figure 3b shows σyy(H) vs. H−1 at indicated temperatures. FFT analysis reveals two frequencies with Fα ≈ 97 T and Fβ ≈ 186 T, as shown in Fig. 3c. The FFT frequency (F) and the Fermi surface cross-section (AF) are related by the Onsager relation F = (ħ/2πe)AF. Therefore, Fα and Fβ correspond to Aα = 0.009 Å−2 and Aβ = 0.018 Å−2, respectively. In Fig. 2b, the cyclotron orbits perpendicular to the H direction are depicted by red lines for both electron (yellow–green) and hole (purple) pockets. Note that the two electron pockets are identical22, thus having the same frequency, i.e., Fα. Since the cross-sections are not perfectly circular, it is difficult to accurately estimate the corresponding Fermi wave vectors from the measured cross-section areas.
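The Onsager relation quoted above converts each FFT frequency into an extremal Fermi-surface cross-section. A quick check with standard constants (sketch):

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
e = 1.602176634e-19      # elementary charge, C

def cross_section(F_tesla):
    """Onsager relation: A_F = (2*pi*e/hbar) * F, returned in Angstrom^-2."""
    A_per_m2 = 2 * math.pi * e * F_tesla / hbar   # m^-2
    return A_per_m2 * 1e-20                        # 1 m^-2 = 1e-20 Angstrom^-2

A_alpha = cross_section(97)    # ~0.009 Angstrom^-2, matching the text
A_beta = cross_section(186)    # ~0.018 Angstrom^-2
```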
Figure 3d shows the temperature dependence of the normalized FFT amplitudes for the α and β bands. The gradual damping of the oscillation amplitudes with increasing temperature can be described by the Lifshitz–Kosevich (LK) equation51,52,53
$$R_T = \frac{A'(m^{\ast}/m_0)T}{\sinh[A'(m^{\ast}/m_0)T]}.$$
(5)
Here, RT, m0, and m* are the FFT amplitude, free electron mass, and effective mass, respectively. The parameter A′ is given by $A' = \frac{2\pi^2 k_\mathrm{B} m_0}{e\hbar H_\mathrm{eff}}$, where Heff = 2/(1/H1 + 1/H2), with H1 = 5 T and H2 = 14 T being the lower and upper limits of the magnetic field range in which the FFT analysis was conducted. kB is the Boltzmann constant and ħ the reduced Planck constant. Through fitting the FFT amplitude vs. temperature data with Eq. (5), as shown in Fig. 3d, we obtain effective masses of $m_\alpha^\ast$ = 0.49m0 and $m_\beta^\ast$ = 0.48m0 for the α and β bands, respectively.
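Evaluating Eq. (5) with the quoted field window (H1 = 5 T, H2 = 14 T) and m* ≈ 0.49 m0 shows how steeply the FFT amplitude is damped between 2 and 12 K (a sketch using standard constants):

```python
import math

kB = 1.380649e-23        # Boltzmann constant, J/K
hbar = 1.054571817e-34   # reduced Planck constant, J s
e = 1.602176634e-19      # elementary charge, C
m0 = 9.1093837e-31       # free electron mass, kg

H_eff = 2 / (1 / 5.0 + 1 / 14.0)                            # ~7.37 T
A_prime = 2 * math.pi ** 2 * kB * m0 / (e * hbar * H_eff)   # per Kelvin, for m*/m0 = 1

def R_T(T, mstar_ratio=0.49):
    """Lifshitz-Kosevich thermal damping factor, Eq. (5)."""
    x = A_prime * mstar_ratio * T
    return x / math.sinh(x)

# Amplitude falls monotonically with temperature and tends to 1 as T -> 0
damping = [R_T(T) for T in (2.0, 6.0, 12.0)]
```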
### Berry phase
We have isolated the oscillations corresponding to each frequency (Fα and Fβ) through filtering, and the isolated single-frequency oscillations were used to construct the Landau level fan diagrams shown in Fig. 4a, b. The minima positions in Δσyy(H−1) are assigned integer Landau level indices (N), whereas the maxima positions are assigned N + 1/2. According to the Lifshitz–Onsager relationship, N = F/H + ΦB/2π + δ51,52. Here, ΦB is the Berry phase, and δ depends on the dimensionality of the Fermi surface and the carrier type. For a two-dimensional (2D) Fermi surface δ = 0, whereas δ = +1/8 (−1/8) for the minima (maxima) of a three-dimensional (3D) electron band. Conversely, δ = −1/8 (+1/8) for the minima (maxima) of a 3D hole band. The Onsager relationship implies that the slope of the N(H−1) plot should correspond to the oscillation frequency, while the intercept can be used to calculate ΦB. The slopes of the straight lines were found to be 97.25 ± 0.03 and 186.2 ± 0.01 T for the α and β bands, respectively. These values are in excellent agreement with the oscillation frequencies identified through FFT analysis. The intercepts were found to be 0.56 ± 0.004 and 0.08 ± 0.002 for the α and β bands, respectively. According to band structure calculations22, the α band is 2D (see Fig. 2b). Thus, the corresponding Berry phase is $\Phi_\mathrm{B}^\alpha$ = [0.56 ± 0.004] × 2π ≈ [1.120 ± 0.008]π. This indicates a non-trivial topology for the α band. On the other hand, the β band is predicted to be hole-like with 3D character (see Fig. 2b). With H ∥ [$10\bar{1}$], Fβ corresponds to the maxima of the β Fermi pocket, with δ = +1/8. Thus, we obtain $\Phi_\mathrm{B}^\beta$(3D) = [(0.08 ± 0.002) − 1/8] × 2π = [−0.09 ± 0.003]π ≈ 0. If the disc-shaped pocket (Fig. 2b) is instead treated as 2D (δ = 0), $\Phi_\mathrm{B}^\beta$(2D) = [0.08 ± 0.002] × 2π ≈ [0.160 ± 0.004]π.
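The fan-diagram analysis reduces to a straight-line fit of N against H−1. A synthetic sketch, assuming the α-band values reported above (slope F = 97 T, intercept 0.56, δ = 0), shows how the slope and intercept recover the frequency and Berry phase:

```python
import numpy as np

F_true, b_true = 97.0, 0.56          # slope (oscillation frequency) and intercept

# Lifshitz-Onsager: N = F/H + Phi_B/(2*pi) + delta. For a 2D band (delta = 0)
# the minima sit at 1/H = (N - b_true)/F for integer Landau indices N.
N = np.arange(8, 20)
inv_H = (N - b_true) / F_true        # synthetic minima positions in 1/H

slope, intercept = np.polyfit(inv_H, N, 1)
berry_phase = intercept * 2 * np.pi  # ~1.12*pi: non-trivial topology
```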
The non-trivial Berry phase for the α band and a trivial one for the β band are consistent with first-principles calculations22. Interestingly, an oscillation frequency of 175 T was identified to correspond to a non-trivial band in ref. 30. At present, it is unclear whether the slight frequency difference can make such a dramatic change in the topology of the β band.
### Hall effect and two-band fitting
To further understand the electronic structure of TaSe3, we have investigated the Hall effect at low temperatures. Figure 5a shows the Hall resistivity (ρxy) measured up to H = 14 T at temperatures T = 1.9–8 K. The field dependence of the Hall resistivity [ρxy(H)] is non-linear, changing from positive at lower fields to negative at high fields (H > 7.5 T). This behavior suggests that both types of charge carriers contribute to the Hall effect. The ρxy(H) curves are almost identical over T = 1.9–8 K, indicating that the carrier concentrations and mobilities do not change significantly in this temperature range50. Clear SdH oscillations can be seen for H > 9 T, especially at low temperatures. Since ρxy ≪ ρyy, multi-band analysis of the Hall data should be conducted through the Hall conductivity σxy50,51, which was calculated through Eq. (4). For a system with multi-band transport, σxy can be expressed as
$$\sigma_{xy} = eH\left[\frac{n_\mathrm{h}\mu_\mathrm{h}^2}{1+(\mu_\mathrm{h}H)^2} - \frac{n_\mathrm{e}\mu_\mathrm{e}^2}{1+(\mu_\mathrm{e}H)^2}\right].$$
(6)
Here, nh (ne) and μh (μe) are the concentrations and mobilities of holes (electrons), respectively. Figure 5b shows the magnetic field dependence of the Hall conductivity at T = 2 K, together with the corresponding fit using Eq. (6). The two-band model fits the σxy(H) data at T = 2 K quite well, and Eq. (6) likewise generated good fits for the σxy(H) data over the full range T = 2–8 K. The carrier concentrations and mobilities were obtained from these fits, and their temperature dependence in the range T = 2–8 K is shown in Fig. 5c, d. The hole concentration remains slightly higher than the electron concentration throughout this temperature range, whereas the mobility of the electrons remains higher than that of the holes. From the semiclassical two-band model, the MR of non-magnetic materials with perfectly compensated electron- and hole-type carriers (nh = ne) can be described by MR ≈ μeμhH2 35,54,55. We have used the carrier mobilities obtained from the Hall conductivity data to estimate the MR at various temperatures through this relation. The MR curves estimated from the mobilities are remarkably similar to those obtained from experiment, as demonstrated in the inset of Fig. 5d for T = 2 K. The slight difference between them could be due to the imperfect carrier compensation in TaSe3.
At T = 2 K, the fitting yielded the hole and electron concentrations: nh = 1.25 × 1019 cm−3, ne = 1.12 × 1019 cm−3, and mobilities μh = 3.6 × 103 cm2 V−1 s−1, μe = 7.8 × 103 cm2 V−1 s−1. The relatively low carrier concentrations are consistent with the semimetallic nature of TaSe3. For TaSe3, the relatively lower mobility (≈103 cm2 V−1 s−1) is due to its relatively “heavier” effective mass (m* ≈ 0.49m0) compared to other semimetals4,34,36,50,56,57. In the temperature range of T = 2–8 K, the ratio nh/ne ≈ 1.1, as shown in Fig. 5c. This feature points towards a high degree of carrier compensation, making the Kohler’s scaling law well justified (Fig. 2f).
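Plugging the T = 2 K fit parameters back into Eq. (6) reproduces the qualitative Hall behavior: σxy changes sign near 7 T, consistent with the sign change observed in ρxy around 7.5 T (note that with the sign convention of Eq. (4), a negative σxy corresponds to a positive ρxy). A sketch in SI units:

```python
import numpy as np

e = 1.602176634e-19            # elementary charge, C
n_h, n_e = 1.25e25, 1.12e25    # carrier densities, m^-3 (1e19 cm^-3 = 1e25 m^-3)
mu_h, mu_e = 0.36, 0.78        # mobilities, m^2 V^-1 s^-1 (from the cm^2 values)

def sigma_xy(H):
    """Two-band Hall conductivity, Eq. (6)."""
    hole = n_h * mu_h ** 2 / (1 + (mu_h * H) ** 2)
    elec = n_e * mu_e ** 2 / (1 + (mu_e * H) ** 2)
    return e * H * (hole - elec)

H = np.linspace(0.1, 14, 2000)
s = sigma_xy(H)
crossing = np.where(np.diff(np.sign(s)) != 0)[0]
H_cross = float(H[crossing[0]])   # sign change near ~7 T
```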
To summarize, we have grown single crystals of the chain compound TaSe3 through the chemical vapor transport method. We observed the XMR effect in this material, which reaches up to 7 × 103% at T = 1.9 K for H = 14 T applied normal to the b-axis. The XMR obeys the Kohler’s scaling law, as evident from the collapse of the MR curves measured at different temperatures onto a single curve when plotted as MR = α(H/ρ0)n. Furthermore, both the longitudinal and Hall conductivities exhibit SdH oscillations at low temperatures. An FFT analysis of the SdH oscillations of the electrical conductivity revealed two fundamental frequencies, Fα ≈ 97 T and Fβ ≈ 186 T. The Berry phases extracted through the construction of Landau level fan diagrams are $\Phi_\mathrm{B}^\alpha$ ≈ 1.1π for the α band and $\Phi_\mathrm{B}^\beta$ ≈ 0 (3D treatment) or 0.16π (2D treatment) for the β band. This indicates a non-trivial Berry phase for the α band and a trivial one for the β band. Comparing with band structure calculations22, we found that the non-trivial α band is the 2D electron pocket, whereas the trivial β band represents the hole pocket. An analysis of the Hall conductivity revealed two types of carriers, whose concentrations and mobilities were calculated. The obtained electron and hole concentrations are very close, pointing towards a nearly perfect electron–hole compensation in this material. The XMR likely originates from this nearly perfect carrier compensation.
Compared to other TDSMs, a distinguishing feature of TaSe3 is its quasi 1D crystal structure. As predicted in ref. 22, this unique structure can host various topological phases, such as 3D STI, 3D WTI, and Dirac semimetal phases under different strains. It is thus an ideal material system for studying the structure–topological property relationship. Future investigations into the strain or pressure effects on TaSe3 could offer valuable new insights.
## Methods
### Sample synthesis
Single crystals of TaSe3 were grown through the chemical vapor transport method. High purity (better than 99.9%) powder of Ta and Se with a molar ratio of 1:3.3 were mixed together and pressed into a pellet. The excess Se acts as the transport agent. The pellet was sealed in an evacuated quartz tube, and placed into a horizontal tube furnace. The end of the quartz tube with the pellet (starting material) was placed in the middle of the tube furnace and heated to 700 °C. The other end of the tube furnace was kept open to the atmosphere, which acted as the cold end, and thus creating a temperature gradient necessary for vapor transport. The furnace was maintained at 700 °C for 14 days, followed by a slow cooling to room temperature. Finally, thin 1D-like single crystals with shiny surfaces were obtained.
### Measurements
Single crystal XRD measurements were carried out at room temperature using a Bruker Kappa Apex-II and a PANalytical Empyrean X-ray diffractometer. Electrical resistivity and Hall effect measurements were carried out using the standard four-probe technique in a physical property measurement system (PPMS, Quantum Design) for up to 14 T, and in a temperature range of T = 1.9–305 K. The electrical contacts were made using gold wires attached to the sample through epoxy. Hall effect data were measured in both positive and negative field and then subtracted to eliminate the lead-offset voltage.
https://physics.stackexchange.com/questions/424556/what-does-it-mean-that-standing-waves-oscillate-in-phase
# What does it mean that standing waves oscillate in phase?
What does it mean that all points between two adjacent nodes in a standing wave oscillate in phase? I sort of get what "in phase" means: it means that the peaks and troughs etc. of two waves align. But how can we say that a single standing wave is in phase? Can someone please explain to me what it means that "all points between two adjacent nodes in a standing wave oscillate in phase"? And please, try to do this at the level of a high school sophomore who still hasn't learnt calculus-based physics. Thank you.
It means that all those points go up at the same time and down at the same time.
They do not go equally high up and down; their amplitudes are different. But they move up and down at the same time nevertheless.
For general waves, in-phase means that the points of two waves progress (move) equally. They "follow each other" perfectly. In standing waves, this boils down to the points not progressing, but still "following each other" by rising and falling equally.
Consider a sine function whose amplitude is time dependent. It can be written as $y(x,t)=A(t)\sin(x)$, where $t$ is time, $x$ is the abscissa ($x$-coordinate) and $y$ is the ordinate ($y$-coordinate), which here is also time dependent because the amplitude is. Now choose an abscissa $x_1$ at a time $t_1$, where the value of the function is $y(x_1,t_1)$. Because the sine wave has a periodicity of $2\pi$, you can find another abscissa $x_2$ that gives the same ordinate: mathematically, $y(x_1,t_1) = y(x_2,t_1)$, with the abscissas related by $x_2 = x_1 \pm 2n\pi$. Different abscissas that give the same ordinate at a given time are said to be in phase, so $x_1$ and $x_2$ are in phase. The role of time is that abscissas which are in phase at one instant remain in phase as the ordinate varies, i.e. $y(x_1,t_2) = y(x_2,t_2)$ at any later time $t_2$. Hope this helps!
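This can be checked numerically. Writing the standing wave as $y(x,t) = \cos(\omega t)\sin(x)$ (one simple choice of time-dependent amplitude, $A(t)=\cos(\omega t)$), every point between the adjacent nodes at $x=0$ and $x=\pi$ rises and falls at the same instants, only with different amplitudes. A small sketch:

```python
import numpy as np

omega = 2.0
t = np.linspace(0, 10, 2001)

def y(x):
    """Standing wave y(x, t) = A(t) * sin(x) with A(t) = cos(omega * t)."""
    return np.cos(omega * t) * np.sin(x)

x1, x2 = 0.5, 2.0          # two points between the nodes at x = 0 and x = pi
y1, y2 = y(x1), y(x2)

# "In phase": both displacement histories always share the same sign,
# peaking and crossing zero together; only a fixed amplitude ratio differs.
same_sign = bool(np.all(y1 * y2 >= 0))
ratio = np.sin(x1) / np.sin(x2)   # y1 = ratio * y2 at every instant
```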
Use the PhET Wave on a String program with the settings as shown in the screenshot below.
Run the program and let it settle down for a few minutes when you should see that a standing wave has been formed.
Positions $B$ and $C$ are nodes.
Observe the red “particles” between $B$ and $C$; this is where you will see the particles moving “in phase with one another” but with differing amplitudes.
The red “particles” between $A$ and $B$ will also be moving in phase with one another.
Also note that the motion between $AB$ is $180^\circ$ out of phase with that between $BC$ .
Using the “pause” button or the “Slow Motion” setting might help you visualise what is happening.
https://webwork.maa.org/moodle/mod/forum/discuss.php?d=3996
## WeBWorK Problems
### small number woes
by Zak Zarychta -
Number of replies: 3
Hi,
I cannot get WeBWorK answer checker to correctly process numbers of a small magnitude
I'm authoring questions for a quantum physics module where answers are often of a small magnitude. I have some constants set such as Planck's constant. I have an answer for a question that is 1.441e-29
Setting zeroLevel to allow small numbers seems to have no effect.
Trying
ANS(Real(1.441E-29)->cmp(
tolType => 'absolute',
tolerance => 1.441E-31,
));
still grades 1.441E-29 as incorrect and gives the correct answer as 0. In fact, the numerical example cited in the thread yields the same result.
Any ideas, what I am doing wrong?
### Re: small number woes
by Zak Zarychta -
So with the answer checker I have found that setting the zero level small works, for example:
Context("Numeric");
Context()->flags->set(
zeroLevel => 1E-36,
zeroLevelTol => 1E-38
);
However, it will still not work with units in the question. So
ANS(Real($ans)->cmp()); # works
but
ANS( num_cmp($ans, units=>"kg m s^-1" ) ); # does not work
However, defining $ans as
$ans = NumberWithUnits("1E-30 kg m s^-1");
works with the first answer checker and the zero tolerance set accordingly.
Could anyone elaborate why this might be so?
### Re: small number woes
by Joel Trussell -
I realize that you desire the standard MKS units, but changing to CGS moves you a bit further up the scale (by 10^5). We have some similar problems in electrical engineering. Fortunately we have terahertz to hertz, picofarads to farads, etc., so it is easier.
$zeroLevel = 1E-36;
$zeroLevelTol = 1E-38;
https://www.shaalaa.com/question-bank-solutions/bohr-s-model-hydrogen-atom-calculate-radius-second-bohr-orbit-hydrogen-atom-given-data_3211
HSC Science (General) 12th Board Exam, Maharashtra State Board
# Calculate the Radius of Second Bohr Orbit in Hydrogen Atom from the Given Data - HSC Science (General) 12th Board Exam - Physics
#### Question
Calculate the radius of second Bohr orbit in hydrogen atom from the given data.
Mass of electron $m = 9.1 \times 10^{-31}$ kg
Charge on the electron $e = 1.6 \times 10^{-19}$ C
Planck’s constant $h = 6.63 \times 10^{-34}$ J s
Permittivity of free space $\epsilon_0 = 8.85 \times 10^{-12}$ C²/N m²
#### Solution
$$r_n=\left(\frac{h^2\epsilon_0}{\pi m e^2}\right)n^2$$
$$\therefore r_2=\left(\frac{h^2\epsilon_0}{\pi m e^2}\right)(2)^2$$
$$r_2=\frac{(6.63\times10^{-34})^2\times8.85\times10^{-12}\times(2)^2}{3.14\times9.1\times10^{-31}\times(1.6\times10^{-19})^2}$$
$$=\frac{43.96\times10^{-68}\times8.85\times10^{-12}\times4}{3.14\times9.1\times10^{-31}\times2.56\times10^{-38}}$$
$$=2.127\times10^{-10}\ \mathrm{m} = 2.127\ \text{Å}$$
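The arithmetic can be verified in a few lines, using the constants given in the problem (sketch):

```python
import math

h = 6.63e-34      # Planck's constant, J s
eps0 = 8.85e-12   # permittivity of free space, C^2 N^-1 m^-2
m = 9.1e-31       # electron mass, kg
e = 1.6e-19       # electron charge, C

n = 2
r_n = (h ** 2 * eps0 / (math.pi * m * e ** 2)) * n ** 2
# r_2 evaluates to ~2.13e-10 m (about 2.13 Angstrom); the solution's
# 2.127 Angstrom uses pi = 3.14 and agrees to rounding. This is four
# times the Bohr radius, as expected for n = 2.
```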
#### APPEARS IN
2013-2014 (March) (with solutions)
Question 6.2 | 3.00 marks
https://kb.osu.edu/handle/1811/17989?show=full
Title: THE MICROWAVE SPECTRA OF THE TRIMETHYLAMINE-$SO_{2}$ AND PYRIDINE-$SO_{2}$ DIMERS
Creators: Oh, J. J.; Labarge, M. S.; Matos, J.; Hillig, K. W., II; Kuczkowski, R. L.
Author institution: Department of Chemistry, University of Michigan
Issued: 1989. Identifier: 1989-TF-6 (http://hdl.handle.net/1811/17989). Publisher: Ohio State University.
Abstract: The rotational spectra of TMA-$SO_{2}$ and $C_{5}H_{5}N-SO_{2}$ have been observed in a FTMW spectrometer. Twenty-five a and c dipole transitions have been assigned for the TMA-$SO_{2}$ charge transfer complex. The $C_{3}$ axis of the TMA points toward the sulfur atom and makes an angle of $101^{\circ}$ with the $SO_{2}$ plane. The N-S distance is 2.28(3) Å. The dipole moment of the complex is 4.80(4) D. Forty-one a and c dipole transitions have been assigned for the pyridine-$SO_{2}$ dimer. Two structures fit the rotational constants. The most likely structure is similar to TMA-$SO_{2}$, with the nitrogen end of the pyridine pointing at the sulfur atom. The pyridine and $SO_{2}$ planes are approximately perpendicular and the N-S distance is 2.54(3) Å. The dipole moment is 4.81(3) D. No evidence for internal rotation or tunneling has been observed for either complex.
https://afmagee.github.io/shrinkage-priors/
# A tail of two shrinkage priors
Two priors, both alike in dignity, in fair Bayesian inference, where we lay our scene…
Sometimes, in Bayesian inference, the way we specify prior information takes a form like “I think that the mutation rate is probably on the order of 1e-8 per site per year.” Other times, we want to specify information on the structure of the model. In regression contexts, that can be information about the number of covariates which are non-zero. In Bayesian phylogenetics, Bayesian stochastic search variable selection (BSSVS) is a common form of this sort of prior. We have some number of variables we think might contribute to, say, geographic dispersal, but we doubt they all do. The prior on the coefficients in BSSVS is a mixture which breaks down into two parts. First, a 0/1 component that determines whether the variable has any effect (an indicator variable). Then, a conditional prior for when the coefficient is not 0. BSSVS makes use of what are called spike-and-slab priors, which are closely related to the focus here: shrinkage priors.
### What is a shrinkage prior?
Shrinkage priors are ways of saying “I think this parameter has a particular value, but perhaps it’s actually different.” For regression models, the fixed value is generally 0, which says the covariate has no effect and essentially removes the variable from the model. But unlike the mixture approach that spike-and-slab distributions take, shrinkage priors are fully continuous distributions. Shrinkage priors give up the possibility of setting any parameter to exactly some value (the probability that a continuous random variable $X$ takes any exact value $x$ is 0). But in return, we have a fully-continuous variable, which means we gain a number of convenient properties. Most notably, it makes derivatives (gradients) much easier, so we can use gradient-based sampling algorithms, which are quite efficient.
Like a spike-and-slab mixture, though, a good shrinkage prior should be spiky and have reasonably large tails. The spike is the prior mass which accounts for “the parameter is probably roughly 0 (or other value).” The tails, meanwhile, allow for the variable to take on a relatively wide range of values if it isn’t roughly 0. (We will discuss later how we can rein in the tails.)
Many shrinkage priors can be represented as a scale mixture of normal distributions (meaning a mixture over the scale/variance parameter rather than the location/mean parameter), which can be convenient, both for understanding the distribution and for sampling.
### The Horseshoe
The Horseshoe prior (Carvalho et al, 2009) is a commonly-employed shrinkage prior, used for things from sparse regression to coalescent models. The distribution is characterized by two parameters.
• The location (which we will ignore and assume to be 0)
• The (global) scale parameter, $\sigma$
The Horseshoe does not have a closed form for its probability density function, nor indeed for its cumulative distribution function. Instead, we must always appeal to its representation as a mixture of Normal distributions. Specifically, if $X \sim \text{Horseshoe}(0,\sigma)$, we can instead write the following equivalent hierarchical model:
• $\lambda \sim \text{Cauchy}^+(0,1)$
• $X \mid \lambda \sim \text{Normal}(0,\lambda^2\sigma^2).$
Here Cauchy$^+$ denotes a half-Cauchy (positive-values-only) distribution.
The variable $\lambda$ is called a local scale variable, while in this representation we call $\sigma$ the global scale. This language is used because we generally have not one variable $X$ but a vector $X_1,\dots,X_n$ which are IID Horseshoe, $X_i \sim \text{Horseshoe}(0,\sigma)$. When we write out the mixture distribution, we get $X_i \mid \lambda_i,\sigma \sim \text{Normal}(0,\lambda_i^2\sigma^2)$, with $\lambda_i \sim \text{Cauchy}^+(0,1)$. So all of these IID variables share the global scale, but each has its own local scale.
The Horseshoe is very spiky, and has very fat tails. The mixture-of-Normals representation helps make this clear. A Cauchy distribution has very fat tails, so when you draw a very large $\lambda$ you get a very large variance. These tails mean that the Horseshoe does not have a defined mean or a finite variance (the variance is either undefined or infinite, but in practice which it is makes little difference). There is also a non-negligible probability that $\lambda$ is near 0, in which case the conditional Normal has near-zero variance and an extremely tall density near 0. In fact, the Horseshoe itself has an infinite density at 0, though it is a proper probability distribution and integrates to 1 (isn’t math fun?).
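Because the scale-mixture representation is so simple, drawing IID Horseshoe samples takes only a couple of lines (a sketch; the half-Cauchy is obtained as the absolute value of a standard Cauchy):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_horseshoe(n, sigma=1.0):
    """Draw n IID Horseshoe(0, sigma) samples via the Normal scale mixture."""
    lam = np.abs(rng.standard_cauchy(n))    # local scales: lambda ~ Cauchy+(0, 1)
    return rng.normal(0.0, lam * sigma)     # X | lambda ~ Normal(0, lambda^2 sigma^2)

x = sample_horseshoe(10_000)
# Draws are sharply concentrated near 0 (the spike) with occasional
# enormous values (the fat, Cauchy-driven tails).
```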
The choice of the global scale parameter can be important, but there may not be a lot of information to set it. As a way around that, we can put a prior on the global scale and infer it. This gives us additional wiggle room in case we don’t get the value quite right. A convenient form of prior is the Cauchy$^+$ distribution, in which case we can use a Gibbs sampler to update $\boldsymbol{\lambda}$ and $\sigma$.
### The Bayesian Bridge
The Bayesian Bridge (Polson et al. 2011) is characterized by up to three parameters.
• The location (which we will ignore and assume to be 0)
• The scale parameter, $\sigma$
• The exponent/power parameter, $\alpha$
Unlike the Horseshoe, the Bayesian Bridge has a closed-form probability density. Assuming the distribution is centered at 0, we get $p(x) \propto \text{exp}(-|x/\sigma|^\alpha)$. If we have $\alpha = 2$, then this is the Normal distribution, while at $\alpha = 1$ it is the Laplace (or double-exponential) distribution. When $\alpha$ gets small, however, this distribution gets a decently-sized spike at 0.
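Including the normalizing constant, the density is $p(x) = \frac{\alpha}{2\sigma\Gamma(1/\alpha)}\exp(-|x/\sigma|^\alpha)$; a quick numerical sketch confirms that it integrates to 1 and reduces to the Laplace and Normal densities at $\alpha = 1$ and $\alpha = 2$:

```python
import math
import numpy as np

def bridge_pdf(x, alpha, sigma=1.0):
    """Bayesian Bridge (generalized Normal) density, centered at 0."""
    const = alpha / (2 * sigma * math.gamma(1 / alpha))
    return const * np.exp(-np.abs(x / sigma) ** alpha)

# Crude Riemann-sum check that the density is properly normalized,
# even for small alpha where the tails are heaviest.
x = np.linspace(-400, 400, 800_001)
dx = x[1] - x[0]
integrals = {a: float(np.sum(bridge_pdf(x, a)) * dx) for a in (0.5, 1.0, 2.0)}

laplace_peak = bridge_pdf(0.0, 1.0)   # 1/(2*sigma) = 0.5: the Laplace(scale=sigma) peak
normal_peak = bridge_pdf(0.0, 2.0)    # 1/(sigma*sqrt(pi)): Normal with variance sigma^2/2
```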
While the Bayesian Bridge has a closed form density, inference via MCMC works best when we use a mixture representation. Specifically (like the Horseshoe), the Bayesian Bridge can be written as a scale mixture of Normal distributions. However, the distribution on the local scales $\boldsymbol{\lambda}$ is quite ugly.
As with the Horseshoe, we generally want to infer the global scale parameter. And, like with the Horseshoe, the easy choice is the distribution which allows a convenient Gibbs sampler. In this case, that means putting a Gamma prior on $\phi = \sigma^{-\alpha}$ rather than directly parameterizing $\sigma$. Using a Gamma(shape=$s$, rate=$r$) distribution on $\phi$ implicitly places a prior on $\sigma$ which is proportional to $\alpha \sigma^{-\alpha s - 1} e^{-r\sigma^{-\alpha}}$. In the limit as $r,s \to 0$, this density is proportional to $\sigma^{-1}$, which is the reference prior (Nishimura and Suchard, 2022).
The Bayesian Bridge is also known as the Generalized Normal Distribution, and you can play around with it in R using the package gnorm.
### Which distribution should I use in practice?
To be clear, both the Bayesian Bridge and Horseshoe are workable shrinkage priors. But there are reasons one might favor one over the other. The short version is that the Horseshoe is far spikier and has fatter tails than the Bayesian Bridge. In some ways, this is a point in favor of the Horseshoe. On the other hand, those extreme tails can leave imprints on the posterior distribution when there isn’t a lot of information in the data.
#### Points in favor of the Horseshoe
The Horseshoe is also very easy to draw IID samples from, as the Cauchy distribution is widely available in software (and not that hard to implement by hand if needed). Sampling from the Bayesian Bridge is possible, but implementations of the Generalized Normal Distribution are not particularly common, nor are implementations of exponentially-tilted stable distributions. I think it is also easier to contemplate a $\text{Cauchy}^+$ distribution on the global scale $\sigma$ than a $\text{Gamma}$ on $\sigma^{-\alpha}$.
#### Points in favor of the Bayesian Bridge
When using either the Bayesian Bridge or Horseshoe as a prior, we must consider their behavior in MCMC. The mixing for the Horseshoe’s global scale parameter can be quite painful in practice, regardless of whether you use a Gibbs sampler or Hamiltonian Monte Carlo. The Bayesian Bridge global scale mixes much more rapidly, possibly because the local scales can be analytically integrated out by using the closed-form representation of the density function.
### Who regularizes the regularizers?
As it turns out, like all good things, there is a limit to how fat you might want the tails of a shrinkage prior. The sheer prior mass of the tails can be hard to overcome and can leave posterior distributions that are much wider than they need to be.
Shrinkage distributions (and spike-and-slab distributions) are forms of Bayesian regularization. It turns out that we can regularize our shrinkage priors in order to avoid some of the unwanted tail behavior. Specifically, we can make it so that the conditional distribution on $x_i$ is Normal with variance $$\Big(\frac{1}{\xi^2} + \frac{1}{\sigma^2 \lambda_i^2}\Big)^{-1},$$ where $\xi$ is a parameter known as the slab width. When $\sigma$ or $\lambda_i$ gets large, the variance tends towards $\xi^2$, and for $|x_i| > \xi$ the tails of the distribution look like those of a Normal(0,$\xi^2$). Thus, $\xi$ defines the width within which this regularized shrinkage distribution acts like the standard version.
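The capping behavior of the slab width is easy to see numerically. A minimal sketch (function name mine, illustrating the conditional variance formula above):

```python
import numpy as np

def regularized_sd(lam, sigma=1.0, xi=2.0):
    """Conditional standard deviation of x_i under the regularized
    ('shrunken-shoulder') prior: var = (1/xi^2 + 1/(sigma^2 * lam^2))^(-1)."""
    var = 1.0 / (1.0 / xi**2 + 1.0 / (sigma**2 * lam**2))
    return np.sqrt(var)

# Small local scale: essentially the unregularized sigma * lam.
# Huge local scale: the sd is capped near the slab width xi.
```

For small $\lambda_i$ the standard deviation tracks $\sigma\lambda_i$, while even a $\lambda_i$ of $10^6$ cannot push it past $\xi$.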
Nishimura and Suchard (2022) provide a clever setup for this regularization. They add a fictitious vector $\boldsymbol{z}$ into the model and define $z_i \sim \text{Normal}(x_i,\xi^2)$. Setting all $z_i = 0$ then induces a conditional Normal on $x_i$ with the above regularized variance. By sneaking this in through the likelihood, their approach leaves the Gibbs samplers for $\boldsymbol{\lambda}$ and $\sigma$ intact, which is entirely convenient. One can also use this fake-data formulation directly in software like stan or RevBayes to tamp down on the tails of your shrinkage prior of choice (so long as you can represent it as a scale mixture of normals).
#### Things that go bump in the night
One oddity does crop up when using these regularized shrinkage priors, and that is how to consider the prior itself. The fake data $\boldsymbol{z}$ are treated mathematically as part of the likelihood but are in fact part of the prior. This means that the effective priors on $\boldsymbol{\lambda}$ and $\sigma$ are altered from those which are specified. The effective prior on $\phi = \sigma^{-\alpha}$ can be quite notably different from the specified Gamma.
This discrepancy between the nominal and effective priors is not a serious issue in practice unless one wishes to estimate marginal likelihoods for the model. As Gibbs samplers are already potential problems there (they can't handle the heat), one might as well directly use the regularized variance when setting up the model. Alternatively, one should consider fully embracing the philosophy of Bayesian model averaging and Bayesian regularization. By building a large model which contains all the parameters of interest, and by regularizing that model, there should in general not be a serious need to compare models by marginal likelihoods. Just look at the posterior distributions on parameters!
And, if you really, really want a Bayes Factor, you can generally obtain one without brute-force computing marginal likelihoods. For both Bridges and Horseshoes, testing directional hypotheses with Bayes factors is straightforward (though it may require prior samples). With a Bridge prior you can test any point-null of interest (like, “this parameter is 0” or “the difference between these parameters is 0”) with a Savage-Dickey density ratio (and some kernel density estimation). For the Horseshoe, which has infinite density at 0, testing point nulls at or near 0 will go badly. Though, in general, one can quickly see the amount of evidence there is for a parameter being non-zero by inspecting the posterior distribution. If it’s a spike at 0, then the parameter is effectively 0. If there’s no spike near 0, you’ve got strong evidence for the parameter being non-zero. And if you get a bimodal distribution (which is somewhat common), then there is evidence for a non-zero parameter but it is not overwhelming. We’ll ignore all those posteriors with a spike and a long arm on one side, because I still don’t know what to make of them (very weak evidence?).
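For the Bridge case, the Savage-Dickey ratio really is just two density evaluations. A sketch using kernel density estimates (valid only when the prior density at the null point is finite and nonzero, which rules out the Horseshoe at 0; function name mine):

```python
import numpy as np
from scipy.stats import gaussian_kde

def savage_dickey_bf01(posterior_draws, prior_draws, at=0.0):
    """Savage-Dickey Bayes factor for the point null theta = at:
    BF01 = p(theta = at | data) / p(theta = at), both estimated by KDE."""
    post_density = gaussian_kde(posterior_draws)(at)[0]
    prior_density = gaussian_kde(prior_draws)(at)[0]
    return post_density / prior_density

# BF01 > 1 favors the point null; BF01 < 1 favors the unrestricted model.
```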
### All shrinkage is local
I want to close with one last point about these global-local mixture-of-Normals models: when we apply the right version of them to time-series models, we end up with a property we call local adaptivity (Faulkner and Minin, 2017). Local adaptivity is the ability to capture rapid change in some regions and slower (or no) change in others. The local scales can really help us see why this happens (or doesn't).
The Horseshoe, with its $\text{Cauchy}^+$ local scales, is locally adaptive. The Laplace distribution, a special case of the Bayesian Bridge (namely with $\alpha = 1$), is not. For the Laplace, the distribution on the local scales is that of the square root of an Exponential random variable. There are perhaps two notable differences between these distributions. The $\text{Cauchy}^+$ has some pull towards 0, because that is its mode, and it has a very fat tail. The square-root-of-an-Exponential distribution has a non-zero mode and a relatively mild tail. The Laplace, then, doesn't leave a lot of wiggle room for any local scale to be much bigger or smaller than the others, which means that the overall variability will be determined by the global scale $\sigma$. But the Horseshoe allows some of those local scales to be much, much bigger than the rest. Those can then capture regimes of rapid change while elsewhere the global scale $\sigma$ determines the variability.
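The difference between the two local-scale laws is stark even in a quick simulation. A sketch contrasting half-Cauchy draws (Horseshoe) with square-root-of-Exponential draws (Laplace), with variable names mine:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
horseshoe_scales = np.abs(rng.standard_cauchy(n))  # Cauchy+ local scales
laplace_scales = np.sqrt(rng.exponential(size=n))  # sqrt of an Exponential

# The extreme quantiles tell the story: half-Cauchy draws occasionally land
# in the hundreds, while the Laplace's local scales stay in a narrow band.
print(np.quantile(horseshoe_scales, 0.999), np.quantile(laplace_scales, 0.999))
```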
I am not sure where exactly the Bayesian Bridge shifts from having no local adaptivity to having some. It’s definitely below $\alpha = 1$, and probably somewhere near where the variance (which is $\sigma^2 \, \Gamma(3/\alpha) \, / \, \Gamma(1/\alpha)$) explodes. I am also not sure how shrunken-shoulder regularization plays into this property either. It could make for an interesting investigation.
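Plugging a few values of $\alpha$ into that variance formula shows how quickly it blows up as $\alpha$ shrinks. A sketch:

```python
from scipy.special import gamma

def bridge_variance(sigma=1.0, alpha=1.0):
    """Variance of the Bayesian Bridge: sigma^2 * Gamma(3/alpha) / Gamma(1/alpha)."""
    return sigma**2 * gamma(3.0 / alpha) / gamma(1.0 / alpha)

# alpha = 2 -> 0.5 (Normal with sd 1/sqrt(2)); alpha = 1 -> 2 (Laplace);
# alpha = 0.5 -> 120; alpha = 0.25 -> about 6.65 million.
```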
https://www.controlbooth.com/threads/i-r-video-setup-for-stage-manager.4967/
|
# I.R. Video setup for Stage Manager
#### gafftaper
##### Senior Team
Fight Leukemia
I keep building my list of equipment for the new college theater. Probably won't get some of them, but I'm going to cut the list down once I find out how much money I actually have to spend.
Anyway, after watching that backstage tour video posted in another thread I got the idea of having an infrared video system for the Stage Manager. Does anybody have one? Where do you get one? Is it sickeningly expensive? Don't know anything about it so I thought I would ask what's out there.
#### soundlight
##### Well-Known Member
You just get a black-and-white CCTV camera that works with IR sources, then pour a sh*tload of IR light onto your stage and hook everything up. IR usually comes from friggin' huge arrays of IR LEDs.
#### soundlight
##### Well-Known Member
In general, I promote the concept of separate cameras and IR sources. You usually need much bigger IR arrays than are on the cameras for a theater.
#### Van
##### CBMod
CB Mods
Yeah, what they said. The cool thing about CCDs (the little things inside video cameras that "see") is that they take near-infrared and infrared light and convert it to visible white light on a TV screen. Great way to test if the remote control for your TV is working or not.
I just saw an ad for Fry's this weekend; they had a wireless IP security camera for really cheap. I thought it might be worth trying in our stages as a way to monitor stage activity, and for me to get work done while techless techs are happening. Sorry, long-winded again.
#### gafftaper
Thanks. It hadn't occurred to me that security websites would be full of options. DUH! I'm thinking about just having the ability to see when the crew is clear from a dark set change or when the actors are in place. Nothing fancy, don't need to see any greater detail than when the shapes stop moving. Since it's a black box with a 17 foot high grid, I can mount the camera wherever I want to get the best angle for the show. I'm thinking I probably wouldn't need a huge I.R. array since the throw wouldn't be too far. The site Avkid posted has some little LED I.R. lights for $240 each. One or two of those over the playing surface should probably be enough to see if people are on or off stage yet. Looks like several decent camera options for under $300... I can probably do the whole package for under $1,000. That's way less than expected.

#### Footer

##### Senior Team

Premium Member

I have in the past hooked up a standard off-the-shelf camera and put it in "night vision" mode to get a cheap infrared camera when needed. It works in a pinch. I have worked in theatres that have a very good infrared system, and they are a great thing to have around.

#### fosstech

##### Active Member

A stage light with really saturated red+blue gels would make a much cheaper infrared illuminator. We use a Selecon Pacific 90 degree leko for our infrared illuminator. It's the only Selecon product we own. It dumps a ton of IR onto the stage, and was way cheaper than $700. The reason we used the Pacific was because of the IR mirror... it doesn't burn the gels. Some of you may say, "How will that work? The dichroic IR mirror takes out the IR!" Well, it takes most of it out. I looked up the efficiency of the mirror in the Pacific. Even with the mirror, the intensity of IR light emitted from the Pacific is more than the intensity of visible light. So get rid of the visible light via the gels (you might even be able to get a dichroic filter that does this), and you have a bright and cheap IR illuminator.
Don't know if it would burn the gel or not, but a S4 with one of the new 90 degree lens tubes might work as well, and would be cheaper than the Pacific.
#### icewolf08
##### CBMod
CB Mods
It really doesn't take much IR energy to get an image even on some of the cheapie IR cameras. We don't bother with an illuminator at my theatre and the ambient IR energy from run lights and such gives us a clear enough picture. Of course this may differ from venue to venue. What would be a great solution for an illuminator would be if someone made a dichroic filter that only passed IR as it would probably be cheaper than buying a dedicated illuminator, and you would never run into burn out issues like you would with gel.
#### SHARYNF
##### Well-Known Member
The night vision option on camcorders is really quite good; all you need to do is run the camera without a tape in it. If you compare the quality against a security camera, it is typically quite a bit better.
Sharyn
#### avkid
##### Not a New User
You have to be careful with camcorders; some turn off to save power after a set period of time.
#### gafftaper
I've been doing a little research on dichroics that restrict visible light while allowing I.R. to pass. Turns out it's a product called a "cold mirror" and they are out there. They don't seem to be listed on Rosco, Gam, or Apollo's sites (although I've sent e-mails to all three)... but they are out there on other scientific and manufacturing websites. That mirror inside a Selecon that allows the heat to escape out the top while projecting all the visible light forward is a cold mirror at a 45 degree angle. What if you put that in an S4 pattern slot and reflected the visible light back into the instrument while allowing the I.R. to shoot out the barrel... cool idea. I'm working on getting size and price info. One of these with a 50 degree... or even a new 70/90 degree lens, and boom, all the I.R. you'll ever need at a fraction of the price. Throw in a couple-hundred-dollar camera and some cable... you might be able to do an infrared monitor system for around $500.

#### SHARYNF

##### Well-Known Member

For the same money you could also look at a night vision system, couple it to a camera, and not have to deal with any of the heat or special filter issues. With the military action and the fall of the Soviet Union, night vision performance that was classified not all that long ago has become pretty inexpensive. I use some of them for wildlife and boating applications, and the performance can be excellent. While I have not personally tested every camcorder out there with night vision, of all the ones I have (Sony/Canon/Panasonic), every one of them overrides the turn-off feature if you remove the tape and shut the compartment completely. I have many groups that use the camcorders this way for live event production shoots, some of which last for days.

Sharyn

#### gafftaper

##### Senior Team

Just got a message back from Gam that they do make dichroics that allow IR but block visible light... pricing to come.

Thanks for the night vision ideas as well. I'm worried that in my tight little black box I may have too good of control over ambient light and there just may not be enough to get a decent image. I'll have to get a hold of somebody's camera and give it a try before I spend the money on an IR system.

#### gafftaper

##### Senior Team

I thought I would keep updating my research. After exchanging e-mails with several home security places, I found a really good-looking I.R. camera. I'm far from an expert, but it's made by Sony so you know it's got to be at least mediocre quality. It comes in an all black-and-white model or a http://www.spyville.com/inccdcolweat.html (color when lights are on, B&W in dark). Unlike many other I.R. cameras it has 24 true I.R. LEDs built in, so they are invisible. Many cameras have a very visible red LED... that sort of goes up into the I.R. range. The sales people tell me it has a 30-50 foot range. Cost is $170/$210.

Looks like a really solid solution for my black box. Hope that info helps others who are interested.
#### jmabray
##### Active Member
I bought a JWIN camera and monitor off ebay for 70 bucks a couple of years ago and it worked like a dream. I ran two R80 Gelled scoops at 2% and it was as bright as day on the monitor and you couldn't see anything on stage.
#### scarlco
##### Member
You'll find that just about any b&w camera will work as an infrared camera - if it doesn't, it's usually an IR filter just behind the lens which can easily be removed. You really don't have to spend extra bucks to get a special IR camera.
As for illumination, it really doesn't take much to emit plenty of light for IR video. Check out the pricing on LED emitter panels - they last longer, run cooler, and will cost less over the course of their life. Too much IR light will wash out your video... be careful.
We've got about 20-ish channels of IR here, and they're all standard Sony b&w cams. They do a great job.
https://www.lessonplanet.com/teachers/high-frequency-words-tex-and-the-big-bad-t-rex
|
# High Frequency Words: Tex and the Big, Bad T-Rex
In this high frequency words learning exercise, learners circle the correct word from a pair of choices to complete each of 5 sentences.
http://math.stackexchange.com/questions/176258/convergence-of-harmonic-functions
|
# Convergence of harmonic functions.
I am looking for a proof of the following theorem. Does anyone know where I can find one?

If $\Omega$ is open and connected and $(u_k)$ is a uniformly bounded sequence of harmonic functions on $\Omega$, then there exists a subsequence that converges uniformly on every compact subset of $\Omega$ to a harmonic function $u:\Omega \to \mathbb R$.

In case you think it's not hard, I look forward to hints as well.

I hope the statement is true. Thanks.
You can add a constant to make them all $\ge 1$, and apply Harnack's inequality to get equicontinuity on compact subsets... then Arzela-Ascoli. – user31373 Jul 28 '12 at 19:18
@LeonidKovalev: Sir, I was thinking of using the derivative bound for harmonic functions, but how do I set things up so that I can use it? – Theorem Jul 28 '12 at 19:36
That works too. If $z\in\Omega$, then there exists $r>0$ such that the disk $D(z,r)$ is contained in $\Omega$, and you have an upper bound on $|\nabla u_k|$ in $D(z,r/2)$. Hence the family is locally uniformly Lipschitz, which implies equicontinuity on compact subsets as well. – user31373 Jul 28 '12 at 20:33
See Theorem 2.6 on page 35 here, and the comment below.
You can also consult this book:
D.H. Armitage, S.J. Gardiner, Classical Potential Theory, Springer, London, 2000.
http://dynref.engr.illinois.edu/rvn.html
|
# Notation
## Mathematical objects
Each entry lists the example, its meaning, and the LaTeX source.

- $P$: Points and positions are denoted by capital italic letters. LaTeX: `$P$`
- $(4, 5, -2)$: Coordinates of a position are given as a tuple, so "$P$ is at $(4, 5, -2)$" is the same as saying that $P$ has coordinates $x = 4$, $y = 5$, $z = -2$. Note the distinction from vector components with square brackets. LaTeX: `$(4, 5, -2)$`
- $\boldsymbol{v}$: Vectors in typeset material are in bold font. LaTeX: `$\boldsymbol{v}$`
- $\vec{v}$: Vectors in handwriting use an over-arrow. LaTeX: `$\vec{v}$`
- $\|\boldsymbol{v}\|$, $v$: Magnitude uses double bars or a plain letter, so $v = \|\boldsymbol{v}\| = \sqrt{v_x^2 + v_y^2 + v_z^2}$. LaTeX: `$\|\boldsymbol{v}\|$`, `$v$`
- $\hat{\boldsymbol{v}}$: Unit vectors use an over-hat, so $\hat{\boldsymbol{v}} = \frac{\boldsymbol{v}}{\|\boldsymbol{v}\|}$. LaTeX: `$\hat{\boldsymbol{v}}$`
- $\hat{\boldsymbol{\imath}}$, $\hat{\boldsymbol{\jmath}}$, $\hat{\boldsymbol{k}}$: Cartesian basis vectors, so we write $\boldsymbol{v} = 3\hat{\boldsymbol{\imath}} + \hat{\boldsymbol{\jmath}} + 7\hat{\boldsymbol{k}}$. LaTeX: `$\hat{\boldsymbol{\imath}}$`, `$\hat{\boldsymbol{\jmath}}$`, `$\hat{\boldsymbol{k}}$`
- $[3, 1, 7]$: Vector components use square brackets, so we write $[\boldsymbol{v}]_R = [3, 1, 7] = 3\hat{\boldsymbol{\imath}} + \hat{\boldsymbol{\jmath}} + 7\hat{\boldsymbol{k}}$. If the basis is clear then we will write $\boldsymbol{v} = [3, 1, 7]$. LaTeX: `$[3, 1, 7]$`
- $[\boldsymbol{v}]_R$: Vector components in basis $R$. Standard basis names are $R$ for Rectangular (Cartesian), $P$ for polar, $C$ for cylindrical, $S$ for spherical. LaTeX: `$[\boldsymbol{v}]_R$`
- $v_x, v_y, v_z$: Vector components are in non-bold with subscripts, so $\boldsymbol{v} = [v_x, v_y, v_z] = v_x\,\hat{\boldsymbol{\imath}} + v_y\,\hat{\boldsymbol{\jmath}} + v_z\,\hat{\boldsymbol{k}}$. LaTeX: `$v_x, v_y, v_z$`
- $v$ versus $v_x$: Magnitude (positive) is the plain letter $v$, while the signed component is $v_x$. LaTeX: `$v$` versus `$v_x$`
- $\hat{\boldsymbol{e}}_r, \hat{\boldsymbol{e}}_\theta$: Polar basis vectors. Maybe we should change this to $\hat{\boldsymbol{r}}, \hat{\boldsymbol\theta}$? LaTeX: `$\hat{e}_r$`, `$\hat{e}_\theta$`
- $\boldsymbol{r}$, $\boldsymbol{r}_P$, $\boldsymbol{r}_{OP}$, $\overrightarrow{OP}$: Position vector of point $P$ from origin $O$. The origin and/or point can be omitted if obvious from context. LaTeX: `$\boldsymbol{r}$`, `$\boldsymbol{r}_P$`, `$\boldsymbol{r}_{OP}$`, `$\overrightarrow{OP}$`
- $\boldsymbol{\rm A}$: Matrices are in upright (roman) bold. LaTeX: `$\boldsymbol{\rm A}$`
- $A_{ij}$: Matrix components are in italic non-bold font. LaTeX: `$A_{ij}$`
- $4\rm\ kg/m^2$, $4\rm\ kg\,m^{-2}$: Units are in roman (upright) font, have a space between the number and units, and have a space between unit symbols. See the NIST conventions for more details. LaTeX: `$4\rm\ kg/m^2$`, `$4\rm\ kg\ m^{-2}$`
- $x = 4t^2$: To make formulas dimensionally correct we use one of the following forms: (1) "$x = 4t^2$, where $t$ is in seconds and $x$ is in meters", (2) "$x = a t^2$ where $a = 4\rm\ m/s^2$", or (3) "$x = 4 (t/{\rm s})^2\rm\ m$" (using quantity calculus). LaTeX: `$x = 4t^2$`
## Diagram elements
Each entry lists the element, its meaning, and the LaTeX source.

- $\mathcal{B}_1$: Body number 1. Use numbers 1, 2, 3 for bodies. LaTeX: `$\mathcal{B}_1$`
- $m_1, \omega_1, \alpha_1$: Mass, angular velocity, and angular acceleration of body $\mathcal{B}_1$. Use subscript numbers for quantities associated with bodies. LaTeX: `$m_1, \omega_1, \alpha_1$`
- $P, Q$: Points $P$ and $Q$. Use italic capital letters for points. LaTeX: `$P, Q$`
- $\boldsymbol{r}_P, \boldsymbol{v}_P, \boldsymbol{a}_P$: Position, velocity, and acceleration vectors of point $P$. Use subscript capital italic letters for quantities associated with points. LaTeX: `$\boldsymbol{r}_P, \boldsymbol{v}_P, \boldsymbol{a}_P$`
- $I_{1,P,z}$: Moment of inertia of body $\mathcal{B}_1$ about point $P$ around the $z$ axis. Any of the subscripts can be omitted if obvious from context, although at least the point should normally be included. LaTeX: `$I_{1,P,z}$`
## Color scheme for diagrams
When possible, use a pale yellow (or tan) background with a light gray (or light green) square grid, like traditional engineering paper. A blank white background can also be used.
- Black: Coordinate axes and objects.
- Gray: Measurements, angles, other notes.
- Blue: Position vectors.
- Green: Velocities and angular velocities.
- Cyan: Accelerations and angular accelerations.
- Red: Forces.
- Purple: Moments.
Example diagram showing colored elements.
https://trac-hacks.org/ticket/11367?cversion=1&cnum_hist=34
|
Opened 3 years ago
Closed 2 years ago
# Restrict projects to a subset of trac users
Reported by: endquote
Owned by: falkb
Priority: normal
Component: SimpleMultiProjectPlugin
Severity: normal
Release: 1.0
### Description
Is there any way to hide certain projects from certain users?
At my company we have many projects and staff members have access to all of them. However we might have a consultant help out with one project and they should only see the one they are working on.
### comment:1 follow-up: ↓ 2 Changed 3 years ago by falkb
Nice idea. Maybe another config field on <tracinstance>/admin/projects/simplemultiproject/<id> listing a set of users or groups, respectively, to those the project will be available.
### comment:2 in reply to: ↑ 1 Changed 3 years ago by falkb
> Nice idea. Maybe another config field on <tracinstance>/admin/projects/simplemultiproject/<id> listing a set of users or groups, respectively, to those the project will be available.
Josh, please, describe what "available" would mean to you.
### comment:3 Changed 3 years ago by endquote
The ideal configuration setup for my use case would be:
• All projects are available to all users by default
• To restrict a user to a subset of projects, provide the username and list of projects somewhere
I've noticed a few plugins for managing users/groups already, perhaps it could be integrated with one of those.
For me, "available" means that the user only sees the projects they have access to in the project dropdown in the new ticket form, only their projects are returned in searches and queries, only their projects appear on the timeline/roadmap, etc.
This would be a big win for my setup, so if any progress is made and you need a tester, let me know.
### comment:4 Changed 3 years ago by falkb
It sounds to me like it's just a matter of testing the currently logged-in user against the list of restricted users for a project at certain source code lines. I suppose it's not even necessary to use other plugins for that.
### comment:5 Changed 3 years ago by falkb
• Status changed from new to assigned
### comment:6 Changed 3 years ago by endquote
Agreed, I think the two things that need doing are:
• Define a list of users and the projects they have access to -- if this were a config-file or a set of trac-admin commands, that would be fine. Maybe a UI could come later, but that's less important.
• Whenever a project list is displayed, check that list against the currently logged in user to filter the list.
Is there any thought as to when this might make it on the roadmap? Thanks for your consideration!
(This is the same person as before, I just made an account.)
### comment:7 Changed 3 years ago by falkb
Josh, I also have "closing of projects to get them out of the lists" on my roadmap. I put that off for some months now (because of missing time (as always ;) ), but the threshold for "must happen now" is almost reached. This ticket gives another trigger to start, it's somehow related to "closing of projects" because it also aims to hide stuff from the lists.
### comment:9 follow-up: ↓ 10 Changed 3 years ago by endquote
Wanted to check in on this... any chance of an update before the end of the year? Is there anything I can do to assist?
### comment:10 in reply to: ↑ 9 Changed 3 years ago by falkb
Wanted to check in on this... any chance of an update before the end of the year? Is there anything I can do to assist?
I've started on it recently, and extended the db and GUI a bit already, and then I stalled again due a lack of time. I think I can solve it during December. If the db extension is thought to be stable enough, I could commit it as a first draft.
### comment:11 Changed 3 years ago by endquote
Good news, thank you :)
### comment:12 Changed 3 years ago by falkb
In 13474:
started 2 new features:
• close of projects (see #11377)
• user restriction for projects (see #11367)
implemented
• db extension
• upgrade from plugin version 4 to 5 (two new fields 'restrict' (integer) and 'closed' (text) to table 'smp_project')
### comment:13 Changed 3 years ago by falkb
I'm going to filter out all projects which are not allowed for the current user, similar to the filtering-out of closed projects (see comment:ticket:11377:3). The current username is in req.authname and the list of allowed users is in column 5 of the smp_project table.
The idea is to extend get_all_projects_but_closed() in model.py somehow this way:
======= model.py =======
-    def get_all_projects_but_closed(self):
+    def get_all_projects_but_closed(self, req):
         all_projects = self.get_all_projects()  # get them all unfiltered
         # now filter out
         if all_projects:
             for project in list(all_projects):
                 project_name = project[1]
                 project_info = self.get_project_info(project_name)
                 if project_info:
                     if project_info[4] > 0:
                         # column 4 of table smp_project tells if project is closed
                         all_projects.remove(project)
+                    else:
+                        # column 5 of table smp_project returns the allowed users
+                        restricted_users = project_info[5]
+                        if restricted_users:
+                            user_list = [users.strip() for users in restricted_users.split()]
+                            if (req.authname not in user_list):  # current browser user not allowed?
+                                all_projects.remove(project)
         # return what remains after filtering
         return all_projects
This would just mean removing such projects from all displayed lists and selection comboboxes, i.e. hiding them. Nevertheless, by manually typing the right URL into the browser address bar, a user would still be able to access the hidden projects. At present, I'm not sure if this approach is a good idea, and how far "restrict" should actually go. Josh, any idea?
### comment:14 Changed 3 years ago by endquote
The approach of extending the closed-project functionality to also include the restricted-project logic makes sense to me, and does most of the work of keeping folks out of projects that they shouldn't be in.
However it would be good if typing/guessing URLs wasn't a way around it. For example someone might have access to a project, so they know about it, and then that access might get revoked, and their old bookmarks should no longer work.
Maybe there is a plugin hook that is invoked before the page is rendered, where the current user can be checked against the current project, and an error shown if access is denied?
### comment:15 Changed 3 years ago by falkb
In 13478:
• more work for user restriction of projects, timeline may work already (see #11367)
• renamed get_all_projects_but_closed() to get_all_projects_filtered_by_conditions() since it also checks the user-restrictions now
### comment:16 Changed 3 years ago by falkb
In 13479:
see #11367: on roadmap page: hide project content from users who are not listed in the 'restricted to users' list (if such a list is not empty; empty means 'no restriction')
### comment:17 Changed 3 years ago by falkb
• Keywords testing added; planned removed
Josh, I have a feeling I have done all necessary work for this feature. Could you try it and report back?
### comment:18 follow-up: ↓ 23 Changed 3 years ago by endquote
I was able to get this working. If I restrict a user from a project, they do not see that project in the dropdowns in the ticket editor.
However, that user can still see a ticket in a restricted project if they go to the ticket URL directly. These tickets also appear in any of the reports under "view tickets" or when searched for in a custom query. Ideally there would be some low-level hook -- whenever tickets are queried from any page, the results would be filtered so that only tickets from projects that the current user has access to would be returned.
Active tickets for closed projects show up in searches as well -- not sure if that's the correct behavior.
Last edited 3 years ago by endquote
### comment:19 Changed 3 years ago by endquote
Also, thanks for working on this!
### comment:20 Changed 3 years ago by anonymous
• Keywords planned added; testing removed
yes, you're right, I missed that case ... coming soon...
### comment:21 Changed 3 years ago by endquote
Another thought... maybe block email notifications from going to users that aren't on a project? Say a ticket is owned or cc'd by someone, and then they are removed from the project. It would be nice if they didn't get email notifications.
### comment:22 Changed 3 years ago by falkb
In 13484:
see #11367: check permissions of user for the appropriate project on access to versions, milestones, roadmap and tickets
### comment:23 in reply to: ↑ 18 ; follow-up: ↓ 27 Changed 3 years ago by falkb
However, that user can still see a ticket in a restricted project if they go to the ticket URL directly...
works now.
Another thought... maybe block email notifications...
Uhm... no idea at present how to prevent that... *thinking*
### comment:24 Changed 3 years ago by falkb
In 13485:
• some minor restructuring for check of restricted users list (see #11367)
• added trial to get user groups from the permission system (disabled by default) (see #10863)
### comment:25 Changed 3 years ago by falkb
In 13486:
implemented group support for user-restriction of projects via trac-admin permission add or /admin/general/perm (see #10863, see #11367)
### comment:26 Changed 3 years ago by falkb
In 13487:
inversion feature added for "Restrict to users" (see #10863, see #11367)
• if the list in the user-restriction text field starts with '!' (separated by comma!), the meaning is inverted, thus means "Forbidden to users" then.
• e.g. "john,paul,george,ringo" restricts to these 4 users for a certain project, while "!,bob,harry" excludes them from a certain project
### comment:27 in reply to: ↑ 23 ; follow-up: ↓ 33 Changed 3 years ago by falkb
Another thought... maybe block email notifications (for certain users excluded from a project)...
Uhm... no idea at present how to prevent that... *thinking*
Hi Josh, do you use the Announcer plugin? If yes, maybe there's a chance to block emails easily. Have a look at some source code of QuietPlugin. You can see that plugin suppresses all email notifications for a recipient list depending on whether Quiet mode is on or off (toggled by a click on a special page link). If I were able to find out whether the "distribute" event comes from a ticket change, and if yes, which ticket number it is, then I could check whether I must suppress the email distribution, instead of looking at the quiet mode. Then I could reuse and adapt the QuietEmailDistributor source code. hasienda, can you tell me how it's possible to check in distribute(self, transport, recipients, event) what the ticket number is?
### comment:28 follow-up: ↓ 29 Changed 3 years ago by endquote
I'm not using AnnouncerPlugin, but I'll take a look at it. I'd say that suppressing restricted tickets from search results and keeping direct URLs to them from working is far more important. If I need to remove a user from a project, I could also batch modify their tickets to remove them and suppress notifications that way. Thanks again!
### comment:29 in reply to: ↑ 28 ; follow-ups: ↓ 30 ↓ 31 Changed 3 years ago by falkb
I'd say that suppressing restricted tickets from search results
That's the next thing I need to find out.
and keeping direct URLs to them from working
This should work already now. Let me know if not.
If I need to remove a user from a project, I could also batch modify their tickets to remove them and suppress notifications that way.
Yeah, this is also a good idea.
Thanks again!
You're welcome. And thank you too for having an eye on it from a user's point of view. I think this restriction feature is a topic which comes up every now and again.
Last edited 3 years ago by falkb
### comment:30 in reply to: ↑ 29 Changed 3 years ago by endquote
and keeping direct URLs to them from working
This should work already now. Let me know if not.
Ah, yes, this does work. Cool!
### comment:31 in reply to: ↑ 29 Changed 3 years ago by falkb
• Keywords testing added; planned removed
I'd say that suppressing restricted tickets from search results
That's the next thing I need to find out.
Please update to [13492], it's implemented now together with ticket access restriction in all queries, reports and search results. I've introduced a new plugin class ProjectTicketsPolicy to get it done, which you need to activate now.
To get it activated:
1. go to Admin-->General-->Plugin-->SimpleMultiProjectPlugin-->ProjectTicketsPolicy and set its checkbox on
2. add ProjectTicketsPolicy to the permission_policies option in your trac.ini:
[trac]
permission_policies = ProjectTicketsPolicy, ... all the other ones ...
Now it seems everything (but the email thingy) has been done. I'm still waiting for the feedback of hasienda.
That's why I have a good feeling about this ticket #11367. :-) So I set it back to state "testing", and if nothing bad happens, I'll be off for holiday now. Please, test and report back how it works for you or if you feel something is still missing. A Merry Christmas and a Happy New Year to you all!
### comment:32 Changed 3 years ago by endquote
This is great! A restricted user is no longer seeing tickets they shouldn't in search results, and direct links to them return an error.
I think that's all I needed, though blocking notifications would be a bonus.
Happy holidays!
### comment:33 in reply to: ↑ 27 ; follow-up: ↓ 35 Changed 3 years ago by hasienda
hasienda, can you tell me how it's possible to check in distribute(self, transport, recipients, event) what the ticket number is?
It boils down to the definition of event (announcer.api.AnnouncementEvent) objects in AnnouncerPlugin. In 'announcer/producers.py' you'll see, how such an event is constructed i.e. by TicketChangeProducer. You'll find, that event.target is a Ticket object, if realm == 'ticket'.
### comment:34 Changed 3 years ago by falkb
• the new table column 'restrict' introduced a new bug #11461 because that is a special SQL keyword in some databases. I fixed that in [13549].
• model.py needed a bigger rework to fix intermittent "Cannot operate on a closed cursor" errors
• Another little issue with data==None in roadmap.py has been fixed with [13551].
Version 1, edited 3 years ago by falkb
### comment:35 in reply to: ↑ 33 Changed 3 years ago by falkb
hasienda, can you tell me how it's possible to check in distribute(self, transport, recipients, event) what the ticket number is?
It boils down to the definition of event (announcer.api.AnnouncementEvent) objects in AnnouncerPlugin. In 'announcer/producers.py' you'll see, how such an event is constructed i.e. by TicketChangeProducer. You'll find, that event.target is a Ticket object, if realm == 'ticket'.
This little patch could be the one which filters out email recipients excluded from the current ticket's project. hasienda, could you make a review of it, please?:
• ## simplemultiprojectplugin/trunk/simplemultiproject/ticket.py

from operator import itemgetter

try:
    from announcer.distributors.mail import EmailDistributor
except ImportError:
    # define empty class
    class EmailDistributor(Component):
        pass

try:
    from trac.web.chrome import add_script_data
except ImportError:
    # Backported from 0.12
    def get_permission_actions(self):
        return


class ProjectQuietEmailDistributor(EmailDistributor):
    """Specializes Announcer's email distributor to honor quiet mode."""

    def distribute(self, transport, recipients, event):
        if event and event.realm == "ticket" and event.target:
            # some email recipients may not be in the restricted users list
            # of the ticket's project; they should not get email
            recipients = self._filter_email_recipients(event.target, recipients)
        EmailDistributor.distribute(self, transport, recipients, event)

    def _filter_email_recipients(self, ticket, recipients):
        project_name = self.__SmpModel.get_ticket_project(ticket.id)
        if project_name and project_name[0]:
            project_info = self.__SmpModel.get_project_info(project_name[0])
            if project_info:
                for reci in list(recipients):
                    if self.__SmpModel.is_not_in_restricted_users(reci, project_info):
                        recipients.remove(reci)
        return recipients
This code is just a draft and I want to know if the API to AnnouncerPlugin is right. And is there a trac.ini setting where I must mention ProjectQuietEmailDistributor?
Last edited 3 years ago by falkb
### comment:36 Changed 3 years ago by endquote
If I do a query for a project that I don't have access to, I still see its tickets.
http://trac/query?project=QCC2 will show all tickets for a project to anyone who asks. However they cannot click through to view the details of a ticket -- a permission error appears.
### comment:37 Changed 3 years ago by falkb
Hi Josh! Currently, a message "no permission" is shown if a user is not in the list of user-restriction. This bloke has suggested a patch to me for full hiding of such projects without any message. What do you think is better?
### comment:38 Changed 3 years ago by endquote
I would also prefer if users without access to a project never saw anything about the project -- they shouldn't even know it exists.
But if they do go to a project specific URL, they should get an error, or perhaps get redirected back to the home page. If an error, the message shouldn't be "you don't have permission", it should just be more of a 404 type of thing. A good example is Github -- if you navigate to a project you don't have permission to view (likely because you're not logged in), you just get a 404 error and no clue that the project exists.
### comment:39 follow-up: ↓ 41 Changed 2 years ago by aikidoguy@…
I would also like to see some kind of feature in this direction. I was imagining modifying the Genshi template for the "Available Projects" list (which currently really means "Existing Projects")... I don't know how to accomplish this. However, I imagine something like the current authenticated user is checked in the Trac db to see if they have WIKI_VIEW and if not, then the "Available Projects" list would not have that project link.
### comment:40 Changed 2 years ago by aikidoguy@…
Or better yet, instead of WIKI_VIEW, have a new permission called "PROJECT_VIEW"...
### comment:41 in reply to: ↑ 39 Changed 2 years ago by falkb
I would also like to see some kind of feature in this direction. I was imagining modifying the Genshi template for the "Available Projects" list (which currently really means "Existing Projects")...
I'm not sure if you mean you want to rename from "Existing Projects" to "Available Projects", am I right? If yes, why do you want to rename it?
I don't know how to accomplish this. However, I imagine something like the current authenticated user is checked in the Trac db to see if they have WIKI_VIEW and if not, then the "Available Projects" list would not have that project link.
Currently, one can only generally switch off access to the project list by PROJECT_SETTINGS_VIEW permission. I'm gonna check if it's possible to show a list for only the projects which are set visible to the user; actually that shouldn't be a problem. Patch welcome during my current lack of time.
### comment:42 Changed 2 years ago by aikidoguy@…
Sorry for my noise and misunderstandings. If I understand correctly, "SimpleMultiProjectPlugin" is a way to manage multiple projects within 1 Trac environment. My scenario is when I have multiple Trac environments that I need to manage. I will pursue this further on the trac-users mailing list. Thank you kindly!
### comment:43 Changed 2 years ago by falkb
endquote, is this ticket actually sufficiently resolved for you?
### comment:44 Changed 2 years ago by endquote
Yes, this seems like it could be closed to me.
### comment:45 Changed 2 years ago by falkb
• Resolution set to fixed
• Status changed from assigned to closed
### comment:46 Changed 2 years ago by falkb
• Keywords testing removed
|
2017-01-16 16:05:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3017902672290802, "perplexity": 3686.2950431367594}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00205-ip-10-171-10-70.ec2.internal.warc.gz"}
|
http://mathhelpforum.com/pre-calculus/24907-slope.html
|
1. ## slope
How do I translate $\frac{11}{5}$ in a graph, when divided I get $2\frac{1}{5}$
does this mean that I start at $(0,\frac{11}{5})$ or at $2$ then work my way up the slope?
BTW, the slope is $-\frac{12}{5}$
grrrrrrr fractions
Many thanks in advance!!!
2. ## No, Never Mind, Never Mind!!!!
Doh'
I was only asked to find the slope not to plot it!!!!
SORRY TO ALL!!!!
|
2018-02-21 08:21:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7187574505805969, "perplexity": 1888.0529349753099}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813571.24/warc/CC-MAIN-20180221063956-20180221083956-00739.warc.gz"}
|
https://couryes.com/%E7%BB%9F%E8%AE%A1%E4%BB%A3%E5%86%99%E5%9B%9E%E5%BD%92%E5%88%86%E6%9E%90%E4%BD%9C%E4%B8%9A%E4%BB%A3%E5%86%99regression-analysis%E4%BB%A3%E8%80%83st-503/
|
## Statistics Assignment Writing | Regression Analysis | ST 503
July 18, 2022
couryes-lab™ supports you throughout your studies abroad. It has established a reputation for reliable, high-quality and original Statistics writing services in Regression Analysis. Our experts are extremely experienced in Regression Analysis, so assignments of all kinds in this area go without saying.
• Statistical Inference
• Statistical Computing
• (Generalized) Linear Models
• Statistical Machine Learning
• Longitudinal Data Analysis
• Foundations of Data Science
couryes™ also offers guaranteed-grade full-course services.
## Statistics Assignment Writing | Regression Analysis | Exact Inferences: Confidence Intervals
To interpret the estimate and its standard error, you should have a mental conversation with yourself, saying something like this:
How to think about the estimate and its standard error
Hmmm, the estimated slope is shown in the output as 1.6199, and the standard error is shown in the output as $0.1326$. So the actual slope is most likely in the range $1.6199 \pm 2(0.1326)$, or roughly $1.62 \pm 0.27$. AHA! The true slope is most likely a positive number! So the $X$ variable has a positive relation to $Y$!
We used $2.0$ rather than $1.96$ as a multiplier of the standard error because the result is only approximate anyway, so why not? We might as well simplify things by using another approximation, $2.0$ instead of 1.96. It just makes life easier. And it works well in practice, so we generally recommend that you follow the advice given by the above mental conversation.
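Plugging the estimate and standard error quoted above into both multipliers shows how little the simplification matters in practice (plain arithmetic, nothing beyond the numbers in the text):

```python
# Interval arithmetic from the text: point estimate 1.6199 with standard
# error 0.1326, comparing the rough multiplier 2.0 against 1.96.
est, se = 1.6199, 0.1326
rough = (est - 2.0 * se, est + 2.0 * se)
exact = (est - 1.96 * se, est + 1.96 * se)
print(tuple(round(v, 4) for v in rough))  # -> (1.3547, 1.8851)
print(tuple(round(v, 4) for v in exact))  # slightly narrower interval
```

Either way the interval excludes zero, which is what drives the conclusion that the slope is positive.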
But there are precise, mathematically exact results that you can use in the case where the data are produced by the classical model. The theory is mathematically deep, but you probably have seen it before, to one degree or another. It involves “Student’s $T$ distribution,” which is ubiquitous in statistics. In a nutshell, the issue revolves around how to deal with the estimate $\hat{\sigma}$ of $\sigma$ in the standard error formula. After all, as shown above, the first interval formula involving $1.96$ and $\sigma$ is exact; the only reason for calling the second interval formula “approximate” is because of the substitution of $\hat{\sigma}$ for $\sigma$. The effect of using $\hat{\sigma}$ rather than $\sigma$ can be precisely, exactly, quantified. A mathematical theorem states that if the classical regression model produces the real data, then the additional variability incurred when you use $\hat{\sigma}$ rather than $\sigma$ is precisely accounted for by using the $T$ (Student’s T) distribution rather than the $\mathrm{Z}$ (standard normal) distribution.
## Statistics Assignment Writing | Regression Analysis | Practical Interpretation of the Confidence Interval
We now discuss the practical interpretation of the confidence interval for the slope parameter. As with everything in regression, these interpretations involve conditional distributions.
If the linearity assumption is true, then the parameter $\beta_{1}$ is the difference between the means of the conditional distributions of $Y$ for cases where the $X$ variable differs by one unit. Specifically:
$$\mathrm{E}(Y \mid x+1)-\mathrm{E}(Y \mid x)=\left\{\beta_{0}+\beta_{1}(x+1)\right\}-\left(\beta_{0}+\beta_{1} x\right)=\beta_{0}+\beta_{1} x+\beta_{1}-\beta_{0}-\beta_{1} x=\beta_{1}$$
Thus, the mean of the distribution of potentially observable $Y$ when $X=x+1$ is precisely $\beta_{1}$ higher than the mean of the distribution of potentially observable $Y$ when $X=x$. In particular, the mean of the distribution of Cost when Widgets $=1,001$ is exactly $\beta_{1}$ higher than the mean of the distribution of Cost when Widgets $=1,000$. And it does not matter which two values $(x+1, x)$ that you compare: The mean of the distribution of Cost when Widgets $=1,601$ is exactly $\beta_{1}$ higher than the mean of the distribution of Cost when Widgets $=1,600$.
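The claim that the one-unit difference is the same wherever it is taken can be checked numerically (the intercept and slope values below are made up for illustration, not from any real fit):

```python
# Under the linear model, E(Y | X = x) = beta0 + beta1 * x, so the
# difference in conditional means for any one-unit step in x is beta1,
# regardless of where the comparison is made.
beta0, beta1 = 2.0, 1.6199          # illustrative values only

def mean_y(x):
    return beta0 + beta1 * x        # E(Y | X = x)

for x in (1000, 1600):
    step = mean_y(x + 1) - mean_y(x)
    assert abs(step - beta1) < 1e-6  # same beta1 at Widgets=1000 and 1600
```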
Here and throughout the book, we will refer to $\beta_{1}$ as a measure of the effect of $X$ on $Y$. In general, the word effect has the following meaning:
The meaning of the phrase "$X$ has an effect on $Y$"
When the conditional distribution $p\left(y \mid X=x_{1}\right)$ differs from $p\left(y \mid X=x_{2}\right)$, for some specific values $x_{1}$ and $x_{2}$ of the variable $X$, then $X$ has an effect on $Y$.
|
2023-03-22 17:03:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8493552207946777, "perplexity": 716.7851686048737}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943845.78/warc/CC-MAIN-20230322145537-20230322175537-00706.warc.gz"}
|
https://tex.stackexchange.com/questions/402707/tikz-graph-error
|
# Tikz graph error
I'm trying to plot the curve of
$\frac{1}{x - 1}$
My code is as follows:
\documentclass[11pt]{article}
\usepackage{amsmath}
\usepackage{color}
\usepackage[left=1.5in,right=1.5in,top=0.5in,bottom=0.5in,
footskip=0in]{geometry}
\usepackage{tikz}
\usetikzlibrary{decorations.pathreplacing}
\usepackage{pgfplots}
\begin{document}
\begin{tikzpicture}
\draw [red] (0,0) plot [domain=0.5:4] (\x,1/(\x-1));
\end{tikzpicture}
\end{document}
This throws an error.
If I write this:
\begin{tikzpicture}
\draw [red] (0,0) plot [domain=0.5:4] (\x,1/x);
\end{tikzpicture}
It's fine. It can handle this.
The error message is:
Package tikz Error: Giving up on this path. Did you forget a semicolon?. ... plot [domain=0.5:4] (\x,1/(\x-1))
Any idea what's going on?
The issue is the double brackets. You should put the expression within {}, and then have the (\x-1) within the braces, so the eventual \begin{tikzpicture} input should look like:
\begin{tikzpicture}
\draw [red] (0,0) plot [domain=0.5:4] (\x,{1/(\x-1)});
\end{tikzpicture}
|
2019-08-25 00:57:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9458411931991577, "perplexity": 10668.948074936905}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027322160.92/warc/CC-MAIN-20190825000550-20190825022550-00301.warc.gz"}
|
https://undergroundmathematics.org/calculus-of-powers/r8543/interactive
|
Review question
# Can we show these two cubic curves touch?
Ref: R8543
## Interactive graph
A curve is given by the equation $\begin{equation*} y = ax^3 - 6ax^2 + (12a + 12)x - (8a + 16), \label{eqi:1}\tag{*} \end{equation*}$ where $a$ is a real number. Show that this curve touches the curve with equation $\begin{equation*} y = x^3 \label{eqi:2}\tag{**} \end{equation*}$
at $(2,8)$.
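One way to sketch the requested verification (this working is not part of the original question text): show that the two curves agree in both value and gradient at $x=2$, for every value of $a$.

```latex
\begin{align*}
y(2)  &= 8a - 24a + 2(12a+12) - (8a+16) = 8 = 2^3,\\
y'(2) &= \left[\,3ax^2 - 12ax + (12a+12)\,\right]_{x=2}
       = 12a - 24a + 12a + 12 = 12 = 3\cdot 2^2.
\end{align*}
```

Since the $a$ terms cancel in both lines, the curves meet at $(2,8)$ with equal gradients there, i.e. they touch, whatever $a$ is.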
You might like to use this applet to explore the behaviour of the curve $\eqref{eqi:1}$ as $a$ varies. The blue curve is $y=x^3$ $\eqref{eqi:2}$.
You can also zoom in or out if that helps to see the behaviour of the curves.
As $a$ varies, you might notice the following.
• The curves always touch at $(2,8)$ as the question suggests.
• What happens to the red curve when $a=0$?
• Are there any values of $a$ for which the curves do not intersect again?
|
2018-01-17 23:30:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28293195366859436, "perplexity": 387.76892888169283}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887024.1/warc/CC-MAIN-20180117232418-20180118012418-00485.warc.gz"}
|
http://ncerthelp.blogspot.com/2013/06/choose-correct-option-magnetic-field.html
|
# Choose the correct option. The magnetic field inside a long straight solenoid-carrying current (a) is zero
Q. No 3: Choose the correct option.
The magnetic field inside a long straight solenoid-carrying current
(a) is zero
(b) decreases as we move towards its end
(c) increases as we move towards its end
(d) is the same at all points
Ans: (d) The magnetic field inside a long straight current-carrying solenoid is uniform which is represented by parallel lines. Hence it is same at all points.
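For reference (not stated in the original answer), the standard result for the field inside an ideal long solenoid is

```latex
B = \mu_0 \, n \, I
```

where $n$ is the number of turns per unit length and $I$ the current. Since neither quantity depends on position inside the solenoid, the field is the same at all interior points, consistent with option (d).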
https://www.c-sharpcorner.com/interview-question/what-is-q-service-deferred-and-promises
# What is $q service, deferred and promises?

When we want to send multiple requests/functions in parallel or asynchronously, we can use the $q service.

For example:

    $q.all([request1, request2]).then(function (response) {
        result1 = response[0];
        result2 = response[1];
    });

(Note that $q.all takes an array of promises.)

Promises are provided by the $q service. A promise is a placeholder for the value we will get in return from the requests/functions we have passed to the $q service.

A deferred object exposes the result of a promise. There are methods we can use to settle the promise, i.e. resolve() and reject(). If there is an error in producing the promise's value, we can call the reject() method.
https://mathhothouse.me/category/applications-of-maths/
## Category Archives: applications of maths
### Why study geometry? An answer from Prof. Gangsong Leng
Reference:
Geometric Inequalities, Vol 12, Mathematical Olympiad Series, Gangsong Leng, translated by Yongming Liu, East China Normal University Press, World Scientific.
“God is always doing geometry”, said Plato. But, the deep investigation and extensive attention to geometric inequalities as an independent field is a matter of modern times.
Many geometric inequalities are not only typical examples of mathematical beauty but also tools for applications as well. The well known Brunn-Minkowski’s inequality is such an example. “It is like a large octopus, whose tentacles stretches out into almost every field of mathematics. It has not only relation with advanced mathematics such as the Hodge index theorem in algebraic geometry, but also plays an important role in applied subjects such as stereology, statistical mechanics and information theory.”
🙂 🙂 🙂
### Christiane Rousseau: AMS 2018 Bertrand Russell
http://www.ams.org/news?news_id=3821
Cheers to Prof. Christiane Rousseau and her team !
### Math is fun: website
https://colleenyoung.wordpress.com/2017/11/05/math-is-fun/
With thanks and regards to Colleen Young.
### Birthday Probability Problems: IITJEE Advanced Mathematics
In the following problems, each year is assumed to be consisting of 365 days (no leap year):
1. What is the least number of people in a room such that it is more likely than not that at least two people will share the same birthday?
2. You are in a conference. What is the least number of people in the conference (besides you) such that it is more likely than not that there is at least another person having the same birthday as yours?
3. A theatre owner announces that the first person in the queue having the same birthday as the one who has already purchased a ticket will be given a free entry. Where (which position in the queue) should one stand to maximize the chance of earning a free entry?
I will put up the solutions on this blog tomorrow. First, you need to make a whole-hearted attempt.
Nalin Pithwa.
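Once you have made your attempt, problem 1 can be checked numerically. Here is a short Python sketch of the standard computation via the probability that all birthdays are distinct:

```python
def p_all_distinct(n):
    """Probability that n people have pairwise distinct birthdays (365-day year)."""
    p = 1.0
    for k in range(n):
        p *= (365 - k) / 365
    return p

# Problem 1: least n for which a shared birthday is more likely than not
n = 1
while p_all_distinct(n) > 0.5:
    n += 1
```

The loop stops at the well-known answer of 23, since the all-distinct probability first drops below 1/2 between 22 and 23 people.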
### The power of the unseen, the abstract: applications of mathematics
Applications of math are everywhere…anywhere we see, use, test/taste, touch, etc…
I have made a quick compilation of some such examples below:
1. Crystallography
2. Coding Theory (Error Correction) (the stuff like Hamming codes, parity check codes; used in 3G, 4G etc.) Used in data storage also. Bar codes, QR codes, etc.
3. Medicine: MRI, cancer detection, Tomography,etc.
4. Image processing: JPEG2000; Digital enhancement etc.
5. Regulating traffic: use of probability theory and queuing theory
6. Improving performance in sports
7. Betting and bidding; including spectrum auction using John Nash’s game theory.
8. Robotics
9. Space Exploration
10. Wireless communications including cellular telephony. (You can Google search this; for example, Fourier Series is used in Digital Signal Processing (DSP). Even some concepts of convergence of a series are necessary!) Actually, this is a digital communications system, and each component of it requires heavy use of mathematical machinery: as the information-bearing signal is passed from source to sink, it undergoes several steps one by one, like source coding, encryption (like AES, RSA or ECC), error control coding, and modulation/transmission via a physical channel. On the receiver or sink side, the "opposite" steps are carried out. This is generally taught in Electrical Engineering. You can Google search these things.
11. DNA Analysis
12. Exploring oceans (example, with unmanned underwater vehicles)
13. Packing (physical and electronic)
14. Aircraft designing
15. Pattern identification
16. Weather forecasting.
17. GPS also uses math. It uses physics also. Perhaps, just to satisfy your curiosity, GPS uses special relativity.
18. Computer Networks: of course, they use queuing theory. Long back, the TCP/IP slow-start algorithm was designed and developed by Van Jacobson. (You can Google search all this, but the stuff is arcane right now due to your current education level.)
19. Architecture, of course, uses geometry. For example, Golden ratio.
20. Analyzing fluid flows.
21. Designing contact lenses for the eyes. Including coloured contact lenses to enhance beauty or for fashion.
22. Artificial Intelligence and Machine Intelligence.
23. Internet Security.
24. Astronomy, of course. Who can ever forget this? Get yourself a nice telescope and get hooked. You can also use the Stellarium.org freeware to learn to identify stars, planets, and constellations.
25. Analyzing chaos and fractals: the classic movie "Jurassic Park" drew on fractal geometry. The dinos were, of course, simulations!
26. Forensics
27. Combinatorial optimization; the travelling salesman problem.
28. Computational Biology
We will try to look a bit deeper into these applications in later blogs. And, yes, before I forget: Ramanujan's algorithm to compute $\pi$ up to a million digits is used to test the efficacy and efficiency of supercomputers. Of course, there will be other testing procedures for supercomputers as well.
There will be several more. Kindly share your views.
-Nalin Pithwa.
https://stats.stackexchange.com/questions/22419/identifying-instrumental-variables-for-structural-model
# Identifying instrumental variables for structural model
This question is about finding valid instrumental variables.
Variables are:
• the response variable for the structural equation, $y$
• the jointly endogenous RHS variable in the structural equation, $m$
• two control variables ($x_1$ and $x_2$) that will always appear on the RHS of the structural equation
• four potential instrumental variables, denoted by ($z_1$, $z_2$, $z_3$, $z_4$)
The problem, of course, is that you have no idea whether the potential IVs are admissible. Your mission in life is to identify those instrumental variables for which the exclusion restrictions are valid, and to estimate the causal effect of $m$ on $y$ in a linear regression of the form:
$$y=x_1\alpha_1+x_2\alpha_2+m\beta+u$$
I don't have any background in this area; is this question trivial or not?
## 2 Answers
Take a look at Austin Nichols' causal inference with observational data notes for a nice outline of endogeneity testing for IV. The handout is aimed at Stata users, but most of the tests should be available in other packages. Christopher Baum's IV notes are pretty good as well. They cover weak instruments and testing when you have more instruments than endogenous variables (over-identified case).
Try this one: http://en.wikipedia.org/wiki/Sargan_test. There is no way to conclusively determine the exogeneity of a given instrument, but the test might provide some information.
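As a complement to the answers above, here is a small self-contained sketch of what IV estimation buys you on synthetic data, using plain Python. With one endogenous regressor and one instrument, 2SLS reduces to the classic IV ratio; the controls $x_1, x_2$ and the instrument-selection problem are omitted for brevity, and the instrument $z_1$ is assumed valid by construction:

```python
import random

random.seed(0)
n = 20_000
beta_true = 2.0

z1 = [random.gauss(0, 1) for _ in range(n)]      # instrument, valid by construction
u  = [random.gauss(0, 1) for _ in range(n)]      # unobserved structural error
m  = [0.8 * z + e + 0.5 * random.gauss(0, 1)     # endogenous regressor:
      for z, e in zip(z1, u)]                    #   correlated with u
y  = [beta_true * mi + e for mi, e in zip(m, u)]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - w_) * (w - mb) for x, w, w_ in zip(a, b, [ma] * len(a))) / len(a)

def cov(a, b):  # sample covariance (population normalization)
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (w - mb) for x, w in zip(a, b)) / len(a)

# Naive OLS slope of y on m is biased upward because cov(m, u) > 0
beta_ols = cov(m, y) / cov(m, m)

# With one endogenous regressor and one instrument, 2SLS reduces to the
# IV ratio cov(z1, y) / cov(z1, m), which is consistent for beta_true
beta_iv = cov(z1, y) / cov(z1, m)
```

On this data the OLS slope is noticeably above 2.0 while the IV estimate sits close to it, illustrating why valid instruments matter.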
http://comments.gmane.org/gmane.editors.lyx.general/63920
26 May 19:06 2010
How do I manage to put a "lightning" in the end of my proofs-of-contradiction?
If I use \lightning the symbol doesn't show up entirely. Only a small "apostroph" appears. And in the
exported PDF it's the same.
Regards,
iustifico
26 May 19:31 2010
iustifico <iustifico <at> gmail.com> writes:
>
> How do I manage to put a "lightning" in the end of my proofs-of-contradiction?
> If I use \lightning the symbol doesn't show up entirely. Only a small
"apostroph" appears. And in the
> exported PDF it's the same.
>
Which package are you loading to define \lightning? Best thing might be to post
a minimal sample document that shows the problem.
/Paul
26 May 20:31 2010
Here is a minimal sample document with the appropriate pdf.
Regards,
iustifico
Attachment (minimalexamplelightning.lyx): application/octet-stream, 1004 bytes
Am 26.05.2010 um 19:31 schrieb Paul Rubin:
> iustifico <iustifico <at> gmail.com> writes:
>
>>
>> How do I manage to put a "lightning" in the end of my proofs-of-contradiction?
>> If I use \lightning the symbol doesn't show up entirely. Only a small
> "apostroph" appears. And in the
>> exported PDF it's the same.
>>
>
> Which package are you loading to define \lightning? Best thing might be to post
> a minimal sample document that shows the problem.
>
> /Paul
>
>
26 May 21:12 2010
On 05/26/2010 02:31 PM, iustifico wrote:
> Here is a minimal sample document with the appropriate pdf.
> Regards,
> iustifico
>
>
You need to load, in the preamble, a package that defines the symbol.
\usepackage{stmaryrd} works for me; \usepackage{MnSymbol} should also
work (I couldn't test it because I don't have that package installed).
You can also use \usepackage{wasysym} and put \lightning in ERT rather
than in a math inset.
I can see where this would be a bit confusing, since in a math inset LyX
recognizes the macro and displays the correct screen glyph. If you put
\lightning in ERT and do not load a valid package, LaTeX complains about
an undefined macro, but in a math inset it apparently recognizes the
macro but maps it to the wrong glyph.
/Paul
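For readers landing on this thread later: a minimal working preamble along the lines Paul describes (assuming the stmaryrd package is installed) would be:

```latex
\documentclass{article}
\usepackage{stmaryrd} % defines \lightning as a math-mode symbol
\begin{document}
Assume $\sqrt{2}=p/q$ with $p,q$ coprime; then $p$ and $q$ are both
even~$\lightning$
\end{document}
```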
27 May 00:23 2010
It worked, thank you very much!
Regards,
iustifico
Am 26.05.2010 um 21:12 schrieb Paul A. Rubin:
> On 05/26/2010 02:31 PM, iustifico wrote:
>> Here is a minimal sample document with the appropriate pdf.
>> Regards,
>> iustifico
>>
> You need to load, in the preamble, a package that defines the symbol. \usepackage{stmaryrd} works for me;
\usepackage{MnSymbol} should also work (I couldn't test it because I don't have that package
installed). You can also use \usepackage{wasysym} and put \lightning in ERT rather than in a math inset.
>
> I can see where this would be a bit confusing, since in a math inset LyX recognizes the macro and displays the
correct screen glyph. If you put \lightning in ERT and do not load a valid package, LaTeX complains about an
undefined macro, but in a math inset it apparently recognizes the macro but maps it to the wrong glyph.
>
> /Paul
http://math.stackexchange.com/questions/548224/some-questions-on-abelian-category
# Some questions on abelian category
Let $f: C \longrightarrow D$ be a morphism in an abelian category $\mathfrak{A}$ with kernel and cokernel both zero. How can I show that it is an isomorphism? I am not able to find its inverse.
Let $g: C \longrightarrow D$ be a morphism in the abelian category $\mathfrak{A}$. Let $i: K \hookrightarrow C$ be the kernel of $g$. How can I prove that the induced map $\text{coker}\ i \rightarrow D$ is a monic?
I don't want to use Mitchell's embedding theorem.
What definition of abelian category are you using? – Zhen Lin Nov 1 '13 at 18:54
An abelian category is an additive category such that every morphism has kernel and cokernel, every monic is kernel of its cokernel, every epic is cokernel of its kernel. This is the definition given by Grothendieck in Tohoku. – A.G Nov 1 '13 at 19:29
Suppose $f : C \to D$ is a morphism with zero kernel. Then $f$ must be monic: for, given any $c_0, c_1$ such that $f \circ c_0 = f \circ c_1$, we have $f \circ (c_0 - c_1) = 0$; but $\ker f = 0$, so $c_0 - c_1 = 0$, so $c_0 = c_1$. Dually, if a morphism has zero cokernel, then it must be epic. $\DeclareMathOperator{\coker}{coker}$
Next, suppose $f : C \to D$ is monic and epic. Then $f = \coker \ker f$. But then $f \circ \ker f = 0$, so $\ker f = 0$. Hence $f : C \to D$ is an isomorphism (because $\coker 0$ is always an isomorphism).
Finally, let $f : C \to D$ be any morphism. Let $\ker f : K \to C$ be the kernel, let $\coker \ker f : C \to I$ be its cokernel, and let $g : I \to D$ be the unique morphism such that $g \circ \coker \ker f = f$. Suppose $g \circ x = 0$ for some $x : X \to I$. Consider $\coker x : I \to Y$. Then there is a unique $y : Y \to D$ such that $y \circ \coker x = g$. The composite $\coker x \circ \coker \ker f : C \to Y$ is an epimorphism, hence $$\coker x \circ \coker \ker f = \coker h$$ for some $h : Z \to C$. Thus, $$f \circ h = g \circ \coker \ker f \circ h = y \circ \coker x \circ \coker \ker f \circ h = y \circ \coker h \circ h = 0$$ and therefore there is a unique $l : Z \to K$ such that $\ker f \circ l = h$. Now, $$\coker \ker f \circ h = \coker \ker f \circ \ker f \circ l = 0$$ so there is a unique $m : Y \to I$ such that $m \circ \coker h = \coker \ker f$. But $$m \circ \coker h = m \circ \coker x \circ \coker \ker f = \coker \ker f$$ and $\coker \ker f$ is epic, so $m \circ \coker x = \mathrm{id}_I$. But then $\coker x : I \to Y$ is (split) monic, hence, is an isomorphism. But that implies $x = 0$. Hence, $\ker g = 0$, and therefore $g : I \to D$ is indeed monic.
Let $f$ be a morphism with kernel and cokernel $0$, i.e. $f$ is monic and epic, hence a monic cokernel. Now observe that in an arbitrary linear category, any monic cokernel is an isomorphism (if $p$ is the cokernel of $i$, and monic, then $pi=0$ shows that $i=0$, and the cokernel of $0$ is an isomorphism).
https://codereview.stackexchange.com/questions/140392/slicing-stack-exchange-data-dump-xml-files-into-small-bites
# Slicing Stack Exchange data dump XML files into small bites
I was posed with a challenge when trying to load XML files from Stack Exchange into a SQL Server database; some of these XML files are extremely large (largest one being a whopping 67 GB in a single XML file), so much so that the database just cannot handle them without throwing a System.OutOfMemoryException.
After some testing, I found out the optimal XML size for the database to handle efficiently (at least on my database) is ~20 MB. So I wrote this Python script to slice up any file larger than 20 MB into more-or-less equal files < 20 MB each.
It works pretty well, but my code looks very procedural, and since I am very inexperienced with Python I am not certain how it could be improved. I have added documentation throughout to make the script easy to follow, hopefully.
I am primarily interested in improving the performance, although all improvement suggestions are appreciated. Note that I did not use an XML module due to the very simple structure of these XML files, that is why I decided to just parse them as regular text. I used UTF-8 encoding for the output files to match the input.
The script prints this summary at the end:
Path: D:\Downloads\stackexchange\stackoverflow.com\Badges
Size: 2016042158 bytes
17207172 total lines, split into 101 files = 170369 lines per input_file.
Execution time: 0:01:16.416938
#!/usr/bin/python
# -*- coding: utf-8 -*-
import os
import math
from datetime import datetime
__author__ = 'https://github.com/Phrancis'
'''
The purpose of this script is to split up very large XML files from the Stack Exchange
public data dump into small files which can then efficiently be loaded into a Microsoft SQL Server
database for further processing.
Testing with SQL Server on the author's database has proven that XML file sizes of ~20 MB
have reasonable performance and processing time. Larger files have led to System.OutOfMemoryException
on the database. The size of output files can be adjusted by changing the value of
the SQL_SERVER_XML_SIZE_LIMIT variable.
NOTE: This script was made using Python 3.5 and is not compatible with Python 2.x.
This script assumes the format of the Stack Exchange XML files to be as following,
and will not work correctly with differently-formatted XML.
<?xml version="1.0" encoding="utf-8"?>
<root_name>
<row Id="1" ... />
<row Id="2" ... />
<row Id="3" ... />
...
<row Id="176685" ... />
</root_name>
'''
# Clock to measure how long the script takes to execute
start = datetime.now()
SQL_SERVER_XML_SIZE_LIMIT = 20000000
# TODO Make this script iterate through all subdirectories instead of targeting just one
# get input_file size
file_size_bytes = os.path.getsize(file_path + file_name + '.xml')
# no splitting needed if XML input_file already fits within SQL Server limit
if file_size_bytes <= SQL_SERVER_XML_SIZE_LIMIT:
print('input_file size is less than 2 GB, no splitting needed.')
print('Path:', file_path + file_name + '.xml')
print('Size:', file_size_bytes, '/', SQL_SERVER_XML_SIZE_LIMIT)
else:
num_split_files_needed = math.ceil(file_size_bytes / SQL_SERVER_XML_SIZE_LIMIT)
input_file = open(file_path + file_name + '.xml', 'r', encoding='utf-8')
# get XML version, opening and closing root nodes,
# and count the lines in order to determine how many lines to write to each split output input_file
num_lines_in_file = 2
for line in input_file:
root_close = line
num_lines_in_file += 1
num_lines_per_split_file = math.ceil(num_lines_in_file / num_split_files_needed)
# BEGIN SLICING OF INPUT FILE INTO SMALLER OUTPUT FILES
# return stream to start of input_file
input_file.seek(0)
# skip top 2 lines of input_file as they contain the xml_var and root_open
for current_file_num in range(1, num_split_files_needed + 1):
with open(file_path + file_name + str(current_file_num) + '.xml', 'w+b') as output_file:
print('Writing to:', file_path + file_name + str(current_file_num) + '.xml')
output_file.write(xml_ver.encode())
output_file.write(root_open.encode())
# start writing lines from the input to the output file
output_line_num = 1
for line in input_file:
# write lines until we reach the num_lines_per_split_file or the end of the input_file
if output_line_num <= num_lines_per_split_file and line != root_close:
output_file.write(line.encode())
output_line_num += 1
else:
break
# write the footer as the last line in the file
output_file.write(root_close.encode())
# move on to the next output file
current_file_num += 1
# Clean up and print results
input_file.close()
print('Path:', file_path + file_name)
print('Size:', file_size_bytes, 'bytes')
print(num_lines_in_file, 'total lines, split into', num_split_files_needed, 'files =', num_lines_per_split_file, 'lines per input_file.')
print('Execution time:', datetime.now() - start)
I think most comments are just going to be about intricacies of Python; the general structure and comments are great.
The only other comment would be that you can probably open the file in binary mode, not do any text encoding conversion and save some processing time that way.
I don't quite get why you have to go through the whole file just to get the closing XML line ... it should be pretty clear what that line is going to be considering the opening XML line? I'd probably change the whole part to just read from the file, accumulate as much as possible and then open the next file instead of going through the file twice.
• Since you're using Python 3, #!/usr/bin/python isn't guaranteed to be it on many distributions, so I'd say using #!/usr/bin/env python3 is a bit safer - that also deals with the binary being in another location.
• The last bit after # Clean up and print results should also be indented no? All the variables are only set in the else branch. I'd actually put a sys.exit() in the if part and not have the indentation at all actually.
• Take a look at os.path for path manipulation, those functions are portable and a bit more structured than concatenating strings.
• I'd use with with the input_file too.
• The standard Python interpreter won't extract common subexpressions, so you'll have to and probably should do that yourself - the encode calls and the same occurrences of paths shouldn't be recomputed all the time.
• Getting the last item from an iterator is actually a question on StackOverflow, c.f. https://stackoverflow.com/a/2138894/2769043 - perhaps use that.
• The comparison line != root_close could probably be replaced with a check for the line number compared to the line number of the last line and that should be much faster than string comparison.
Something like this, still same general approach though.
#!/usr/bin/env python3
...
# TODO Make this script iterate through all subdirectories instead of targeting just one
full_file_path = file_path + file_name + '.xml'
# get input_file size
file_size_bytes = os.path.getsize(full_file_path)
# no splitting needed if XML input_file already fits within SQL Server limit
if file_size_bytes <= SQL_SERVER_XML_SIZE_LIMIT:
print('input_file size is less than 2 GB, no splitting needed.')
print('Path:', full_file_path)
print('Size:', file_size_bytes, '/', SQL_SERVER_XML_SIZE_LIMIT)
sys.exit()
num_split_files_needed = math.ceil(file_size_bytes / SQL_SERVER_XML_SIZE_LIMIT)
with open(full_file_path, 'rb') as input_file:
# get XML version, opening and closing root nodes,
# and count the lines in order to determine how many lines to write to each split output input_file
pointer = input_file.tell()
num_lines_in_file = 2
for root_close in input_file:
num_lines_in_file += 1
num_lines_per_split_file = math.ceil(num_lines_in_file / num_split_files_needed)
# BEGIN SLICING OF INPUT FILE INTO SMALLER OUTPUT FILES
# return stream to start of input
input_file.seek(pointer)
for current_file_num in range(1, num_split_files_needed + 1):
full_current_file_path = file_path + file_name + str(current_file_num) + '.xml'
with open(full_current_file_path, 'w+b') as output_file:
print('Writing to:', full_current_file_path)
output_file.write(opening)
# start writing lines from the input to the output file
output_line_num = 1
for line in input_file:
# write lines until we reach the num_lines_per_split_file or the end of the input_file
if output_line_num > num_lines_per_split_file or line == root_close:
break
output_file.write(line)
output_line_num += 1
# write the footer as the last line in the file
output_file.write(root_close)
# move on to the next output file
current_file_num += 1
# Clean up and print results
print('Path:', file_path + file_name)
print('Size:', file_size_bytes, 'bytes')
print(num_lines_in_file, 'total lines, split into', num_split_files_needed, 'files =', num_lines_per_split_file, 'lines per input_file.')
print('Execution time:', datetime.now() - start)
• There is a problem in the refactored code, this line with open(full_current_file_path, 'w+b') as output_file would overwrite data from the input file, my version uses a different file name with added str(current_file_num) (this will now also get a code comment :) – Phrancis Sep 4 '16 at 14:45
• I don't quite understand that comment, full_current_file_path is set in each iteration based on the current_file_num as well, I just extracted the duplicated code for it from the given code, so the semantics shouldn't have changed? – ferada Sep 5 '16 at 16:50
Consider actually using an XML module, specifically etree's iterparse(), which is designed for large XML files: it reads iteratively, element by element, without loading the whole document into memory at once. Additionally, this way you do not treat the XML as a text file by writing in headers and closing tags yourself; the dedicated .append() and .write() methods are used, with encoding handled for you.
#!/usr/bin/python
import os
import math
from datetime import datetime
import xml.etree.ElementTree as et
# Clock to measure how long the script takes to execute
start = datetime.now()
SQL_SERVER_XML_SIZE_LIMIT = 20000000
file_size_bytes = os.path.getsize(file_path + file_name + '.xml')
# no splitting needed if XML input_file already fits within SQL Server limit
if file_size_bytes <= SQL_SERVER_XML_SIZE_LIMIT:
print('input_file size is less than 2 GB, no splitting needed.')
print('Path:', file_path + file_name + '.xml')
print('Size:', file_size_bytes, '/', SQL_SERVER_XML_SIZE_LIMIT)
else:
num_split_files_needed = math.ceil(file_size_bytes / SQL_SERVER_XML_SIZE_LIMIT)
# determine how many lines to write to each split output input_file
num_lines_in_file = 2
with open(file_path + file_name + '.xml', 'r', encoding='utf-8') as f:
for line in f:
num_lines_in_file += 1
num_lines_per_split_file = math.ceil(num_lines_in_file / num_split_files_needed)
# ITERATIVELY READ LINES IN XML FILE
i = 0; current_file_num = 0
root = et.Element('root')
for (ev, el) in et.iterparse(file_path + file_name + '.xml'):
i += 1
if el.tag == 'row':
root.append(el)
# LINES PER FILE (BY MULTIPLE OF num_lines_per_split_file)
if i % num_lines_per_split_file == 0:
current_file_num += 1
tree_out = et.ElementTree(root)
tree_out.write(file_path + file_name + str(current_file_num) + '.xml',
encoding='utf-8', xml_declaration=True)
root = et.Element('root')
# REMAINING LINES (AFTER LAST MULTIPLE OF num_lines_per_split_file)
# (5 = XML DECL, ROOT OPEN/CLOSE + num_lines start + i start = 1 + 2 + 1 + 1)
if i == num_lines_in_file - 5:
current_file_num += 1
tree_out = et.ElementTree(root)
tree_out.write(file_path + file_name + str(current_file_num) + '.xml',
encoding='utf-8', xml_declaration=True)
# Clean up and print results
print('Path:', file_path + file_name)
print('Size:', file_size_bytes, 'bytes')
print(num_lines_in_file, 'total lines, split into', num_split_files_needed, 'files =',
num_lines_per_split_file, 'lines per input_file.')
print('Execution time:', datetime.now() - start)
• Isn't the issue that the database can't load the XML? Then it doesn't matter much whether the Python script uses an iterative approach - it's unlikely to be faster than not parsing the content at all. – ferada Sep 5 '16 at 16:55
• @ferada - I do not understand your comment. This approach is an alternative to yours and the OP's, using an xml module to split the large XML. I often advise not treating XML as a text file, like writing in tags, as you may disrupt well-formed DOM structures, entities, whitespace, even encoding. Just writing in the header string doesn't make it utf-8 encoded. – Parfait Sep 5 '16 at 17:40
• Of course, I'm assuming @Phrancis made sure that the files are actually formatted in a way that makes the pure binary approach viable, so nothing split across lines etc. – ferada Sep 5 '16 at 17:53
https://blog.juliosong.com/linguistics/mathematics/category-theory-notes-8/
|
Fong & Spivak refer to category, functor, and natural transformation as the “big three” of category theory in their newly published textbook An Invitation to Applied Category Theory: Seven Sketches in Compositionality (henceforth Seven Sketches). In the previous seven posts of this series I’ve only written about categories. In this post I’m finally touching on functors.
The definition of a functor is straightforward. It’s just an arrow between two categories. Unlike arrows between objects (i.e., “arrows” when the word is used alone), functors map two types of data at once: objects and (inter-object) arrows. This is because at the functorial level the internal structures of the source and target categories are visible—recall that at the category-internal level the source and target objects are opaque.
## Functor defined
I mentioned the axiomatic definition of categories in my Aug 28 post. A category consists of a collection of objects and a collection of arrows (aka. morphisms), where each object is associated with an identity arrow and every pair of composable arrows actually compose, with the composition obeying associativity and the unit law. A functor maps all these data and their structures from one category to another. In Awodey’s words,
A functor $F\colon \mathbb{C}\rightarrow \mathbb{D}$ between categories $\mathbb{C}$ and $\mathbb{D}$ is a mapping of objects to objects and arrows to arrows, in such a way that
(a) $F(f\colon A\rightarrow B) = F(f)\colon F(A) \rightarrow F(B),$
(b) $F(1_A) = 1_{F(A)},$
(c) $F(g\circ f) = F(g)\circ F(f).$ (Category Theory, pp. 8–9)
So, a functor sort of creates an “image” of its source inside its target. Specifically, it may distort or collapse the source category but may not break connectivity.
## A functor is two mappings
Functoriality is a highly compact notion. When we say $F\colon\mathbb{C}\rightarrow\mathbb{D}$ is a functor, we mean that $F$ maps all the above-specified data from $\mathbb{C}$ to $\mathbb{D}$. In other words, $F$ is not a single mapping but a “bundle” of mappings. In Mac Lane’s words:
[A] functor $T$ … consists of two suitably related functions: The object function $T$ … and the arrow function (also written $T$) …. (CWM, p. 13)
Mathematicians habitually write the object and the arrow functions both with the same letter (e.g., $T$). By contrast, the two functions have separate notations in some applied areas such as functional programming. Take Haskell for example. Its type class Functor is defined as follows:
class Functor f where
  -- one method
  fmap :: (a -> b) -> f a -> f b

-- two laws
fmap id == id
fmap (f . g) == fmap f . fmap g
As the definition shows, the Functor class in Haskell has a single method fmap that maps a function a -> b to another function f a -> f b, and this mapping must satisfy two laws: (i) it must preserve identity; (ii) it must preserve composition.
Comparing the definition of the Haskell Functor with that of the mathematical functor (see above), we can easily find that the two are essentially the same: fmap is just the arrow function, and f is the categorical object function. As such, the Haskell Functor class itself is merely half of a mathematical functor, while the other half is defined as a method of the class. For those who want to learn more about the categorical nature of Haskell I recommend Bartosz Milewski’s 2018 book Category Theory for Programmers.
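For readers coming from outside Haskell, the two laws can be spot-checked in plain Python, where a curried map over lists plays the role of fmap. This is an illustrative sketch of my own, not part of the original post:

```python
# The built-in list type is a functor in spirit: "list" is the object
# mapping, and fmap below is the arrow mapping (a curried map).
def fmap(f):
    return lambda xs: [f(x) for x in xs]

identity = lambda x: x
compose = lambda f, g: lambda x: f(g(x))

xs = [1, 2, 3]
f = lambda x: x + 1
g = lambda x: 2 * x

# Law 1: fmap id == id, checked pointwise on a sample list.
assert fmap(identity)(xs) == identity(xs)
# Law 2: fmap (f . g) == fmap f . fmap g, likewise checked pointwise.
assert fmap(compose(f, g))(xs) == compose(fmap(f), fmap(g))(xs)
```

Python can only test the laws on sample values, of course; Haskell's type system at least guarantees the shape `(a -> b) -> f a -> f b`, while the two laws remain the programmer's obligation in both languages.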
## Functor jectivity
Above I cited Mac Lane’s statement that a functor consists of two functions. An important property of functions is jectivity: in set theory a function can be injective (one-to-one), surjective (onto), or bijective (a one-to-one correspondence).
The jectivity-based classification also makes sense in category theory—and at different abstraction levels. At the category-internal level, arrows are classified into monomorphisms, epimorphisms, and isomorphisms (see this blog post for an introduction); and at the functorial level, functors are classified as full, faithful, and so on. Here I won’t comment on arrow jectivity but will focus on functor jectivity, because it is only the latter that I used to find confusing.
Since a functor consists of two functions, it can be given a jectivity class in two dimensions—one based on objects and the other based on arrows. Moreover, the arrow dimension further involves two subdimensions: that of the overall arrow collection of the category and that of the arrow collection between each pair of objects (i.e., the hom-set).
A note for beginners: Textbooks usually focus on the hom-set-based classification, which defines full, faithful, and fully faithful functors. However, since you will sooner or later encounter classifications in the other (sub)dimensions, it’s better to learn the whole picture from the beginning so that you won’t experience unnecessary confusion like I did. As far as I know, Awodey’s textbook (p. 148) has the most complete introduction to the various jectivity classes for functors.
### 1. Jectivity based on hom-set
A functor is
• full if its hom-set mapping is surjective,
• faithful if its hom-set mapping is injective, and
• full and faithful (aka. fully faithful) if its hom-set mapping is bijective.
These full/faithful terms are widely taught mainly because they are closely related to another important categorical concept: the subcategory. A subcategory is related to its “supercategory” via an inclusion functor, which is automatically faithful (because the subcategory is just part of the supercategory) and in addition may also be full (when the “part” keeps hom-sets intact); in the latter case we have a full subcategory.
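To make these hom-set-based definitions concrete, here is a toy check written in Python. The two categories, their objects, and the arrow names are all invented for illustration:

```python
# Hom-set data for a functor F between two tiny categories:
# hom maps a (source, target) object pair to a set of arrow names.
hom_C = {("A", "B"): {"f", "g"}}
hom_D = {("FA", "FB"): {"h"}}

obj_map = {"A": "FA", "B": "FB"}
arr_map = {"f": "h", "g": "h"}  # F collapses f and g onto the same arrow

def faithful(hom_src, arr_map):
    # Injective on every hom-set: distinct arrows keep distinct images.
    return all(len({arr_map[a] for a in arrows}) == len(arrows)
               for arrows in hom_src.values())

def full(hom_src, hom_dst, obj_map, arr_map):
    # Surjective on every hom-set: each target hom-set is hit entirely.
    return all({arr_map[a] for a in arrows} == hom_dst[(obj_map[x], obj_map[y])]
               for (x, y), arrows in hom_src.items())

print(faithful(hom_C, arr_map))              # False: f and g are merged
print(full(hom_C, hom_D, obj_map, arr_map))  # True: h is in the image
```

This particular functor is full but not faithful: it reaches all of hom(FA, FB) but merges two distinct arrows.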
### 2. Jectivity based on object collection
A functor is
• injective on objects if its object mapping is injective,
• surjective on objects if its object mapping is surjective, and
• bijective on objects if its object mapping is bijective.
Taking isomorphic objects into account, we can also define the following “essentially” versions of the above terms—a functor is
• essentially injective on objects if its object mapping is injective up to isomorphism (i.e., $F(A)\cong F(B)$ entails $A\cong B$),
• essentially surjective on objects if every object of the target category is isomorphic to $F(A)$ for some object $A$ of the source category, and
• essentially bijective on objects if it is both of the above.
Two places I’ve seen these “essentially” notions1 are the definition of categorical embedding (full + faithful + essentially injective on objects) and that of categorical equivalence (full + faithful + essentially surjective on objects). I do have notes on both, but I’ll leave them to future posts.
### 3. Jectivity based on arrow collection
I haven’t met functor classes in this subdimension in my limited experience but list them anyway for completeness. A functor is
• injective on arrows if its overall arrow mapping is injective,
• surjective on arrows if its overall arrow mapping is surjective, and
• bijective on arrows if its overall arrow mapping is bijective.
I haven’t met any “essentially” versions of these terms either. In my experience when essentially injective/surjective/bijective is used alone the intended reading is always “… on objects.”
In sum, functor jectivity is a lot more complex than arrow jectivity, because a functor is really a bundle of mappings. I use the following picture to illustrate the above-mentioned classes:
## Takeaway
• A functor consists of two suitably related functions, one on objects and the other on arrows.
• In Haskell the Functor type class merely corresponds to the object part of a categorical functor, while the arrow part is implemented as a method fmap on the Functor class.
• Functors can be classified based on jectivity in different (sub)dimensions, but the most useful subclasses for beginners to know are the hom-set-arrow-based “full/faithful” series and the isomorphism-supported-object-based “essentially” series.
1. Since we can't predict that essentially means “up to isomorphism” here, these essentially-terms are essentially idiomatic; see my Aug 31 post.
|
2021-09-20 08:45:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9347326755523682, "perplexity": 1128.3149608756532}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057033.33/warc/CC-MAIN-20210920070754-20210920100754-00034.warc.gz"}
|
https://itprospt.com/num/8333510/carbonic-anhydrase-is-found-in-high-concentration-in-a-leucocytes
|
# Carbonic anhydrase is found in high concentration in
(a) Leucocytes
(b) Blood plasma
(c) Erythrocytes
(d) Lymphocytes
|
2022-09-26 03:56:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7093327641487122, "perplexity": 4593.276735311634}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00445.warc.gz"}
|
https://mathematica.stackexchange.com/questions/linked/128?sort=hot&pagesize=50
|
906 views
### Why does the order of arguments in Show influence the result? [duplicate]
I am wondering why there is a difference in the results from evaluating the two expressions, which differ only in the order of the arguments given to Show. When I ...
895 views
### Plot graphs together Using ListPlot [duplicate]
How can I plot two graphs together generated from ListPlot or ListLinePlot command, both t and s have the same number of elements. I thought Show would solve the problem however it does not, it just ...
509 views
### Multiple plots with insets within Show [duplicate]
I wonder why I cannot see the second inset by using this code ...
435 views
### Show does not combine the plots [duplicate]
I have the following problem: because it's not possible to plot complex numbers (or is it?) I created my own "function": ...
237 views
### Not seeing line defined in an Epilog option [duplicate]
In the following code, the line defined in Epilog option of my 2nd parametric plot doesn't show up in the content pane of my ...
198 views
### Show command missing red dots from one of the function [duplicate]
I am facing an issue when trying to use Show command to show multiple graphs within one graph. Show command only shows red dot from one of the graph. ...
112 views
### Strange behavior with Show command [duplicate]
I enter this: ...
136 views
### Not getting the range I want when plotting a function with Show [duplicate]
I have a problem with plotting a piecewise defined function with "Show". My code is ...
96 views
### Why will points in the Epilog “get lost” when combining two plots using Show? [duplicate]
I need to mark four points in a single graph, math does it separately, but when I put together both graphs, two points of a line disappear, I really do not know why. ...
107 views
### Plot with Epilog doesn't show up in Show [duplicate]
I start with: With[{r = 0.3, q = 10}, ans = NSolve[r u (1 - u/q) - u^2/(1 + u^2) == 0, u] ] Then: ...
45 views
### Show — autorange possible? [duplicate]
Given the following simple code: ...
38 views
### PlotRange -> All for Show [duplicate]
I want to fit a gaussian function to a data below: ...
31 views
### Need help in plotting surface with Show[] and Plot3D[] [duplicate]
I have the following surface to plot $$(1-\frac{z^2}{4})^{16}=x^2+y^2$$ I decided to express $x$ as $x=x(y,z)$, and met difficulty when combining the 2 plots. Here is what I've done. As you see I get ...
30 views
### Show[cp1,cp2, …] but PlotLabel in cp2 does not show up [duplicate]
I am visualizing the extrema of $f(x,y)=x^3+3y^2$ constrained to the level curve $g(x,y)=xy=-4$, found by using the Lagrange Multiplier Method. ...
27 views
### The Show function consistently only shows the first graph [duplicate]
The whole purpose of the Show function is to be able to compare multiple graphs, but mine just plots the first one. I've cleared all, and Cleared Global multiple times--and redefined data1. This isn't ...
26 views
Writing: ...
24 views
### How to avoid box frame being vanished when combining a graph and Plot3D? [duplicate]
I've combined a graph and a Plot3D as below:- ...
24 views
### Increasing length of an arrow in 3D plot [duplicate]
Initially, I have: a = Plot3D[x^2, {x, 0, 2}, {y, 0, 2}, PlotRange -> {{0, 2}, {0, 2}, {0, 5}}]; Show[a, Graphics3D[Arrow[{{0, 1, 0}, {0, 1, 4}}]]] To ...
21 views
### PlotRange Interval [duplicate]
I use the following code to show a Histogram with a PDF additionally ...
23k views
### Plotting two functions in one graph, with different value ranges [closed]
Plot[2x, {x,0,4}]; Plot[x^2, {x,4,8}]; How do I merge these two graphs into one?
761 views
### How can I make sure that 3D plots have the exact same orientation and viewpoint?
Edit - I made the example data much smaller, so it's not so much to download. I am trying to make animations of electronic orbitals, using functions like the ones listed here. In order to make an ...
578 views
### How to combine option settings in multiple plots such as Epilog/Prolog?
Suppose you have a program that produces plots. Later a user wishes to combine some of them with Show. The options in the first plot will override the options in ...
199 views
...
1k views
### Fitting data to plot using errors as weights [closed]
I have some enzyme kinetics data that I want to fit to the Michaelis-Menten curve in order to obtain the kinetic parameters of the enzyme. Is it meaningful, statistically and functionally speaking, ...
420 views
### Show function doesn't plot two graphics [duplicate]
I've got a following code snippet: ...
181 views
### Show is displaying points instead of markers when using Epilog after the first item is listed
I am trying to show a number of plots, which are indexed, and which all have the same marker type except when the error is under a certain threshold. Any plot on its own looks as expected, but when I ...
87 views
### Does combining a plot and a region with Show override the region's options?
A few simple shapes: ...
73 views
### LinearModelFit gets wrong result [closed]
The plot of data and the line that LinearModelFit returned obviously don't match. What went wrong? ...
90 views
### Epilog: Only one point appears in graph [duplicate]
I'm trying to graph something for a paper and I want to highlight two points in two function. I am using this code: ...
|
2019-12-16 12:58:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8080913424491882, "perplexity": 2257.685857394431}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540565544.86/warc/CC-MAIN-20191216121204-20191216145204-00545.warc.gz"}
|
https://math.stackexchange.com/questions/1797475/problem-with-combination-and-permutation
|
# problem with Combination and Permutation
Four married couples have bought 8 seats in a row for a concert. In how many ways can they be seated:
a) If each couple is to sit together?
(8)(1)(6)(1)(4)(1)(2)(1)
b) If all men sit together?
n-k+1=8-4+1=5
so, (5)(4! 4!)=5! 4!
c) If no man sits next to his wife? (This is a non-trivial question.)
8*6*6*4*4*2*2*1=18432 ways
Could you please check it for me?
• a) and b) are correct. For c), I suggest using Inclusion/Exclusion. – André Nicolas May 24 '16 at 2:59
• Subtract $a$ from $8!$. – A---B May 24 '16 at 3:56
• @FaraadArmwood In the first question, there are $4!$ ways of arranging the couples and $2!$ ways of arranging each couple internally, so there are $4!2^4$ arrangements in which each couple sits together. The OP opted to choose a person to sit in the leftmost seat, sit that person's partner in the next seat, sit one of the remaining people in the leftmost open seat, sit that person's partner in the next seat, and so forth. For the second question, the block of four men can be treated as one object. Place the block among the four women, then arrange the men and arrange the women. – N. F. Taussig May 24 '16 at 9:36
a) and b) are correct.
For c), we will use Inclusion/Exclusion. There are $8!$ arrangements without restriction. From this we need to subtract the number of bad arrangements, where at least one couple are next to each other.
First we count the number of arrangements where Couple A are together. Tie them together with rope. There are then $7$ objects to be arranged. This can be done in $7!$ ways. Now untie Couple A. They can occupy $2$ different positions, for a total of $2\cdot 7!$. Multiply by $\binom{4}{1}$ for the number of ways to choose a couple. Our first estimate of the number of bad arrangements is $\binom{4}{1}\cdot 2\cdot 7!$.
However, this grossly overcounts the number of bad arrangements, for it double-counts, among others, the arrangements where Couple A and B are both next to each other. An analysis similar to the previous one shows that there are $2^2\cdot 6!$ arrangements in which Couple A and B are together. Thus our adjusted count for the number of bad arrangements is $\binom{4}{1}\cdot 2\cdot 7!-\binom{4}{2}\cdot 2^2\cdot 6!$.
However, we have subtracted too much, for we have subtracted one too many times, for example, the arrangements where couples A, B, C are together. Adjusting for this gives the adjusted count $\binom{4}{1}\cdot 2\cdot 7!-\binom{4}{2}\cdot 2^2\cdot 6!+\binom{4}{3}\cdot 2^3\cdot 5!$.
A final adjustment needs to be made: we must subtract the $\binom{4}{4}\cdot 2^4\cdot 4!$ arrangements in which all couples are together. This gives us the total number of bad arrangements, and now we are nearly finished.
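The inclusion-exclusion count above is easy to confirm by brute force. The following sketch (mine, not part of the answer) enumerates all $8!$ seatings in Python and compares the result with the PIE formula:

```python
from itertools import permutations
from math import comb, factorial

# Label people 0..7 so that person p's spouse is p ^ 1 (pairs 0-1, 2-3, ...).
brute = sum(
    all(seat[i] ^ 1 != seat[i + 1] for i in range(7))
    for seat in permutations(range(8))
)

# Inclusion-exclusion: k couples tied together can be chosen in C(4, k)
# ways, arranged internally in 2^k ways, leaving (8 - k)! objects to order.
pie = sum((-1) ** k * comb(4, k) * 2 ** k * factorial(8 - k) for k in range(5))

print(brute, pie)  # both print 13824
```

Both counts agree on 13824, which in particular differs from the 18432 proposed in the question.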
a) If each couple is to sit together?
$$4!~2!^4 = 8\cdot 6\cdot 4\cdot 2$$ $\checkmark$ LHS counts ways to arrange 4 couples, then ways to arrange partners in each couple. RHS counts ways to select a person, their partner, another person, their partner, and so on. Either way is okay (but see part c).
b) If all men sit together?
$$5!~4!$$
$\checkmark$ Count ways to arrange the women and a unit, and men within the unit.
c) If no man sits next to his wife? (This is a non-trivial question.)
It is as easy as PIE. Use the principle of inclusion and exclusion. Count all the ways to arrange everybody, exclude ... , include ..., exclude ... , and finally include ways to select all four couples and arrange such that they each sit together.
$$8! - \underline{\qquad}+\underline{\qquad}-\underline{\qquad}+\binom{4}{4}4!~2!^4$$
Completion should be easy, but is left to you to do.
|
2021-01-17 13:18:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.741439163684845, "perplexity": 197.27749859381743}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703512342.19/warc/CC-MAIN-20210117112618-20210117142618-00451.warc.gz"}
|
https://editorial-app.appspot.com/workflows/recent
|
# Editorial Workflows
## Stream with VLC
Stream the currently-selected URL in VLC media player for iOS. If no text is selected, the entire document contents will be passed.
## Dropbox Sync
Manually triggers Editorial’s Dropbox sync action and calls the native Bleep notification sound when the sync has completed.
## Define Selection
Retrieves the definition of the selected term using the native system dictionary, displays native definition dialog, and then speaks the term.
## Share WorkFlow
Opens the system share sheet locally with the contents of the system clipboard, URL-encoded. Designed primarily to AirDrop Editorial Workflows between devices.
## - List
Formats the selected text as an unordered (-) markdown list.
Transforms the selected text into a markdown-formatted hyperlink using the contents of the clipboard.
## Copy Schemed MD Document URL
Copies the url schemed bookmark url of the current file to the clipboard.
## Bold Selection
Wraps the selected text with double (*) asterisks.
## Search Drafts by Selected Tag
Opens a tag-filtered search by the current selection in Drafts via x-callback-url.
## Set Tot Dot
Pick a Dot (1-7) in Tot Pocket to replace with the selected text. If no text is selected, the entire document will be passed.
## Search Workflow Directory
Searches the Editorial Workflow Directory by the selected text in the in-app browser.
## OPEN
Opens the selected URL in a new Safari tab. Best paired with a keyboard shortcut.
## Tweetbot Selection
Opens the selected text in Tweetbot for iOS’ compose window. Upon install, you’ll need to enter your own Twitter @ name as the Account variable in the Tweetbot action. (Best used with a handy keyboard shortcut.)
## Clear Clipboard
Clears the system clipboard by copying nothing.
This action makes a link in Markdown from a selected URL. The link’s text is the Title of the page fetched from the URL. For example: http://dobyfriday.com Becomes: [Do By Friday](http://dobyfriday.com)
## List URLs...
Shows a list of URLs in the current document. Selecting one of the URLs opens it in the browser panel.
## New OmniFocus Project
This Workflow accepts TaskPaper text with «placeholder» tokens and prompts for you to enter final values, then creates a project in OmniFocus 2.14 for iOS using those tokens. For example, given this input: - «project_name» @parallel(false) @due(«due») - This task needs to be done at least 1 week before «project_name» is due @due(«due» -1w) - This task needs to be done at least 2 days before «project_name» is due @due(«due» -2d) You'll be prompted to enter values for «project_name» and «due». If you enter "Phonewave 1.2" and "next Thursday", it will create a new "Phonewave 1.2" project in OmniFocus that is due next Thursday, and has two tasks already filled with due dates of this Thursday and next Tuesday.
## Share...
Shows the iOS share sheet with the selected text as input. If nothing is selected, the entire document is shared. In a Markdown document, the text can optionally be converted to HTML first.
## Convert to PDF
Converts a Markdown or plain text document to PDF. The result is shown in a browser tab.
Replaces the selected word(s) with a Markdown link, pointing to the first Google result. E.g. when you select "Markdown", it is replaced with "[Markdown](http://daringfireball.net/projects/markdown)".
## TP Focus on Tag...
TP Focus on Tag … this workflow is based on the original "Focus on Tag…"* with two improvements for the use with TaskPaper documents. 1. added a step at the beginning to expand the entire document before looking for @tags to find all of them. the original "Focus on Tags…" would only find @tags in the currently filtered text. 2. adjusted the last step to show TaskPaper project headlines (ending with ":") in addition to @tags. this helps to keep an overview on the structure of the document. enjoy! ||| tomas jay @therealtomasjay * what the original "Focus on Tag…" does: After picking a tag from the list of tags in the current document, only tasks that contain this tag are shown, everything else is hidden (folded).
## Visual Find & Replace
This workflow uses the UI module to allow you to do a find & replace throughout your selected text or the entire document if no text is selected. The UI was designed to work on both the iPhone and the iPad. Options include: * Text or Regular Expression * Case-sensitive searches * Preview panel to make sure you're replacing what you want to replace (especially helpful for regex). This was adapted from the built-in Diff with Clipboard workflow. If you have any suggestions, find any bugs, or want to see my other workflows for Editorial, please visit http://sweetnessoffreedom.wordpress.com/projects
Very simple workflow to collapse all the Headings (single #) in the current document. The format to match is that the # must be at the beginning of a line followed by a single space.
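The heading format described above can be matched with a one-line regular expression; a minimal sketch in Python (an illustration, not the workflow's actual code):

```python
import re

# A top-level heading: '#' at the start of a line, followed by a single space.
# re.MULTILINE makes '^' match at the start of every line.
HEADING_RE = re.compile(r"^# .*$", re.MULTILINE)

text = "# Intro\nsome text\n## Subsection\n# Outro\n"
headings = HEADING_RE.findall(text)
print(headings)  # ['# Intro', '# Outro']  ('## Subsection' does not match)
```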
## Markdown (Scene)
This workflow creates a Markdown document with a file name and Markdown front matter generated from UI form fields. It builds an output file name and front-matter title from fields specific to the kind of scene document a screenwriter would create.
## Custom Actions Pack
This is a collection of several unrelated custom actions.
## Working Copy
Workflow for transferring files from Working Copy to Editorial and back again. When called with input, it will pick the filename from the first line and write the other lines into this file, which lets Working Copy create a new file with predefined content. The filename contains a unique identifier for remembering where it belongs. To avoid overwriting files in Editorial by mistake, the previous contents of files are put in WorkingCopy.bak When this workflow is called without any input the contents of the current file is written back to Working Copy. Change askcommit variable to 0 if you just want to save and not be asked to commit. To install a new version of this workflow delete or rename this one and perform Edit in Editorial from Working Copy.
## Complete and Duplicate at End
This is a TaskPaper workflow to automatically complete a task, entering today’s date as the date of completion, and duplicate the completed task as an incomplete task at the end of the list. It is intended for those following some of Mark Forster’s time management systems.
## FTP_Client
This is a custom UI for Editorial which will present a fully featured FTP client in a popup window over the editor. After downloading you will need to tap the Info button next to the workflow, go to the Edit Workflow page, tap on the action block to expand it, and fill in your FTP login credentials in the variables below. Once you've done that just run it from the workflow menu and you should be set.
## Diff with Clipboard
Shows a diff that compares the selected document with text in the clipboard. The diff is shown in the Preview panel.
Modified version of Clean Up Completed Tasks workflow. Intended to be run every now and then on a Main.taskpaper file to empty the archive by filtering lines tagged as @done and prepending them to an Archive.taskpaper. The lack of disclosure triangles in Editorial means having to look at that eventually unwieldy archive, but I don't always want to just delete completed tasks. Thanks to @scottzero for the original workflow this is based on. Note: all I did was change the filenames (and paths) in the proper actions and then change the line filtering to @done.
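The filtering step described above amounts to splitting the document into @done lines and everything else; a minimal sketch in Python (an assumed helper, not the workflow's exact code):

```python
# Split a TaskPaper document into lines tagged @done and the remaining lines,
# so the @done lines can be prepended to an archive file.
def split_done(text):
    done, rest = [], []
    for line in text.splitlines():
        (done if "@done" in line else rest).append(line)
    return done, rest

doc = "- buy milk @done(2024-01-02)\n- write report\n- call Bob @done"
done, rest = split_done(doc)
print(done)  # ['- buy milk @done(2024-01-02)', '- call Bob @done']
print(rest)  # ['- write report']
```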
## Markdown(Note)
This is a work flow that creates a markdown file with a name and front matter based on date and time.
## p
Wraps the selected text in a `<p>` HTML tag.
## Markdown(Scene)
This workflow creates a Markdown document for scene-based content that aids in the creation of a story. It builds a file name and front matter from a custom UI with the following fields: Scene, Setting, Location, and Environment titles. It also populates the document with the following story-history headers: Previously, Objective, Obstacle, and Up Next.
## Create Evernote String
This workflow presents the user with a list of his or her Evernote notebooks and another list of his or her tags and, based on selections from the lists, builds a string that the user can paste into the subject line of an email addressed to the user's Evernote account, so that Evernote will assign a notebook and tags to the incoming note. Before it can be used, the user must paste in a developer token in the locations marked in the Python segments. Workflow assembled by Jay Fogleman (fogleman2@gmail.com) from many pieces produced by others.
## Markdown(OpCallNotes)
This is a workflow that creates a Markdown file with a file name and front matter based on the date, in a call-note format.
## Markdown(Journal)
This is a workflow that creates a Markdown file with a file name and front matter that is based on the date the file was created.
## 2 Clow Cards
Picks 2 random Clow cards.
## Email without BCC
Sends a copy of the same email to multiple recipients individually, without BCC. What this is good for: with BCC, it is too often possible for recipients to see that you were BCC'd; this workflow automates sending the same email to multiple addresses separately. How to use: install the template http://www.editorial-workflows.com/workflow/5831758394163200/_RPQRLdd0FA, then fill in the fields however you wish. Features: improved typo and mistake filters. Any extra commas, newlines, and spaces between recipients are OK. The workflow scans email addresses until an invalid one is found. Valid email addresses are of the form X@X.X. Any extra spaces and newlines in the subject section before any text are ignored.
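The recipient cleanup described above (tolerating stray commas, spaces, and newlines, then scanning until an invalid address) can be sketched in Python; the exact regex is an assumption based on the stated "X@X.X" rule, not the workflow's actual code:

```python
import re
from itertools import takewhile

# "X@X.X": something without spaces or '@', an '@', then a domain with a dot.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

raw = "a@example.com, , b@test.org\n\nnot-an-email"
# Extra commas, newlines, and spaces between recipients are tolerated.
recipients = [r.strip() for r in re.split(r"[,\n]+", raw) if r.strip()]
# Scan addresses until the first invalid one, as described above.
valid = list(takewhile(EMAIL_RE.match, recipients))
print(valid)  # ['a@example.com', 'b@test.org']
```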
## Compose Email
A template to compose emails. Goes with http://www.editorial-workflows.com/workflow/5891835859828736/fXGW4koHTGs (Email without BCC)
## Preserve Line Breaks (regex)
Inserts two spaces before each line break, but only if it’s not *already* preceded by two spaces. (The other version of this didn’t work for me.)
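The rule above translates directly into a substitution with a negative lookbehind; a minimal sketch in Python (an illustration, not the workflow's actual code):

```python
import re

# Add two trailing spaces before a newline, but only when they are not
# already there: '(?<! {2})' rejects newlines already preceded by two spaces.
def preserve_breaks(text):
    return re.sub(r"(?<! {2})\n", "  \n", text)

print(repr(preserve_breaks("one\ntwo  \nthree\n")))  # 'one  \ntwo  \nthree  \n'
```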
## Select ->
Select forward one word at a time
## Print...
Prints the current document as plain text or HTML (converted from Markdown).
## DictionaryGVector_words
Dictionary access and translation GUI. The dictionary is based on the low-level NoSQL database GVector; key-value search uses a Python map. The dictionary can manage a few GB of data, up to 1 million records.
## Convert to Unordered List
Converts the selected lines to an unordered (bulleted) list.
## Get Contact
I modified @jollipixel’s “Get Contact” workflow to copy the outputted name & contact info to the clipboard, to paste where needed. Original description by jollipixel: Get Contact by @jollipixel. It's meant to be used as a subworkflow. Prompts for a name (could be a partial name) and returns the contact from your Contacts as a string. Will present a list if multiple matches are found.
Finds all web URLs in the current document and generates a list of Markdown links, using the titles of the web pages (as it would be shown in a browser).
## Filter by @due
Searches the current folder for @due, @start, and @started tags with date attributes less than or equal to today. Found results are displayed as clickable links. Dates use the format YYYY-MM-DD and do not include times. Note: I adapted a good deal of the original global search code from Ole Zorn. ::: Macdrifter ::: v0.9
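The tag-and-date test described above can be sketched in Python; the helper name and regex are assumptions, not the workflow's actual code:

```python
import re
from datetime import date

# @due/@start/@started with a YYYY-MM-DD attribute ('started' before 'start'
# in the alternation so the longer tag name wins).
TAG_RE = re.compile(r"@(?:due|started|start)\((\d{4})-(\d{2})-(\d{2})\)")

def is_actionable(line, today=None):
    """True if the line carries a matching tag dated on or before today."""
    today = today or date.today()
    m = TAG_RE.search(line)
    return bool(m) and date(*map(int, m.groups())) <= today

print(is_actionable("- pay rent @due(2024-01-01)", today=date(2024, 1, 15)))  # True
print(is_actionable("- trip @start(2024-02-01)", today=date(2024, 1, 15)))    # False
```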
Takes a URL you’ve previously copied to Clipboard and creates an HTML link around the selected text.
## Preserve Line Breaks
Preserve Line Breaks was inspired by Brett Terpstra's Mac Service of the same name. To preserve a line break in iOS, Markdown syntax requires that you type three spaces. But in iOS, typing two spaces generates a period, which drives me nuts. This workflow takes the text of a document, adds three spaces to the end of each line as required by Markdown, and then replaces the original text with the proper Markdown syntax.
## Archive @done
Moves @done tasks in a TaskPaper document to the "Archive" project (created if necessary). If a task is in a project, a @project(name) tag is appended automatically. Note that this ignores indentation and doesn't handle sub-projects though.
## Project to Omnifocus (simple)
Modified from Omnigroup's New OmniFocus Project: http://people.omnigroup.com/kc/editorial-template-project-workflow.html Creates a new simple project without asking for placeholder values. (The placeholders were removed, so a project can be sent straight to OmniFocus without being prompted that a variable has no value.)
## Detectar Textos Bíblicos
Detects Bible text references.
## Editorial Backup
Fixed an issue with "ZIP does not support timestamps before 1980". Enjoy! Saves or restores a backup of all Editorial workflows, snippets, bookmarks, and local documents as a zip file in Dropbox (this requires the Dropbox account to be linked). Please note: If you want to restore a backup on a different device, you first have to download the backup file (just tap on it in the document list). This is required because Editorial doesn't sync zip files by default. Restoring a backup will *overwrite* all existing workflows, snippets, and bookmarks, so it's possible that you'll lose data this way. The best way to avoid any data loss is to create a backup before restoring anything.
## Roll (OL)
Follows Sidekick dice specification format, but only supports items needed for Open Legend -- no modifiers, exploding dice, keeping high or low rolls pre-explosion.
## OF Templates
This Workflow accepts TaskPaper text with «placeholder» tokens and prompts for you to enter final values, then creates a project in OmniFocus 2.14 for iOS using those tokens. For example, given this input: - «project_name» @parallel(false) @due(«due») - This task needs to be done at least 1 week before «project_name» is due @due(«due» -1w) - This task needs to be done at least 2 days before «project_name» is due @due(«due» -2d) You'll be prompted to enter values for «project_name» and «due». If you enter "Phonewave 1.2" and "next Thursday", it will create a new "Phonewave 1.2" project in OmniFocus that is due next Thursday, and has two tasks already filled with due dates of this Thursday and next Tuesday.
## Scrape Glassdoor
Scrapes two pages of the Glassdoor website.
## Paperify Preview
Preliminary work. Renders a Markdown document similarly to what it would look like when converted with Paperify web. Includes custom MathJax commands. The LaTeX text is rendered via the MathJax JavaScript library, so an internet connection is currently required for the workflow to work properly.
## Focus on Tag...
After picking a tag from the list of tags in the current document, only tasks that contain this tag are shown, everything else is hidden (folded).
## Get Quote
Type a few words from a quote and an author's last name to pull the quote from GoodReads. If no text is selected, it will use the current line. If there are no results on GoodReads, it will run a Google Search on the In-App Browser. I like to use the workflow inline by using the abbreviation "gqt". You can also select text and send it to the workflow using a shortcut key via a keyboard or manually tapping it.
## New OmniFocus Project
This Workflow accepts TaskPaper text with «placeholder» tokens and prompts for you to enter final values, then creates a project in OmniFocus 2.14 for iOS using those tokens. For example, given this input: - «project_name» @parallel(false) @due(«due») - This task needs to be done at least 1 week before «project_name» is due @due(«due» -1w) - This task needs to be done at least 2 days before «project_name» is due @due(«due» -2d) You'll be prompted to enter values for «project_name» and «due». If you enter "Phonewave 1.2" and "next Thursday", it will create a new "Phonewave 1.2" project in OmniFocus that is due next Thursday, and has two tasks already filled with due dates of this Thursday and next Tuesday.
## Process Observer
Strip everything but the content of the Wrestling Observer Newsletter from the new website.
## Editorial Backup
Saves or restores a backup of all Editorial workflows, snippets, bookmarks, and local documents as a zip file in Dropbox (this requires the Dropbox account to be linked). Please note: If you want to restore a backup on a different device, you first have to download the backup file (just tap on it in the document list). This is required because Editorial doesn't sync zip files by default. Restoring a backup will *overwrite* all existing workflows, snippets, and bookmarks, so it's possible that you'll lose data this way. The best way to avoid any data loss is to create a backup before restoring anything.
## Paste from iBooks
Paste text from iBooks, removes annoying Excerpt From message.
## Insert current date
Insert current date
## Move cursor to the end of document
Move cursor to the end of document
## Insert current time
Insert current time
## Insert Image...
Saves an image from the camera roll as a JPEG file in the relative 'resources' directory and inserts a Markdown image reference, using the current time as the file name. You can change the label 'yypE' to anything you want.
## Paste as...
Pastes the contents of the clipboard as a Markdown block quote, code block, or regular paragraph.
## back to drafts
Paired with the connected Drafts Actions, this workflow allows you to send a Draft here for editing, then send edited text back to the original Draft, overwriting it. Editorial Roundtrip: https://actions.getdrafts.com/a/1Q9 Editorial Return: https://actions.getdrafts.com/a/1Q0
https://www.intel.com/content/www/us/en/docs/programmable/683432/21-4/tcl_pkg_sta_ver_1.0_cmd_report_datasheet.html
## Intel® Quartus® Prime Pro Edition User Guide: Scripting
ID 683432
Date 12/13/2021
Public
Tcl Package and Version: Belongs to ::quartus::sta

Syntax: report_datasheet [-h | -help] [-long_help] [-append] [-expand_bus] [-file ] [-panel_name ] [-stdout]

Arguments:

- -h | -help: Short help
- -long_help: Long help with examples and possible return values
- -append: If output is sent to a file, this option appends the result to that file. Otherwise, the file will be overwritten. This option is not supported for HTML files.
- -expand_bus: If set, buses are reported as individual ports
- -file: Sends the results to an ASCII or HTML file, depending on the extension
- -panel_name: Sends the results to the panel and specifies the name of the new panel
- -stdout: Sends output to stdout, via messages. You only need to use this option if you have selected another output format, such as a file, and would also like to receive messages.

Description: This function creates a datasheet report which summarizes the timing characteristics of the design as a whole. It reports setup (tsu), hold (th), clock-to-output (tco), minimum clock-to-output (mintco), output enable (tzx), minimum output enable (mintzx), output disable (txz), minimum output disable (mintxz), propagation delay (tpd), and minimum propagation delay (mintpd) times. These delays are reported for each clock or port for which they are relevant. If there are multiple paths for a clock (for example, multiplexed clocks), then the maximum delay is reported for tsu, th, tco, tzx, txz, and tpd, and the minimum delay is reported for mintco, mintzx, mintxz, and mintpd. The datasheet can be output to the Tcl console ("-stdout", the default), a file ("-file"), or a report panel ("-panel_name"). Additionally, if the "-file" option is used, the "-append" option can specify that new data should be written to the end of the specified file.
Example Usage:

```tcl
project_open proj1
create_timing_netlist
read_sdc
update_timing_netlist

# Report the datasheet to a report panel
report_datasheet -panel_name Datasheet

# Report the datasheet to a file
report_datasheet -file file1.txt
```

Return Value:

| Code Name | Code | String Return |
| --- | --- | --- |
| TCL_OK | 0 | INFO: Operation successful |
| TCL_ERROR | 1 | ERROR: Timing netlist does not exist. Use create_timing_netlist to create a timing netlist. |
| TCL_ERROR | 1 | ERROR: Report database is not open |
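For batch use, the documented commands can also be generated programmatically; a minimal Python sketch that only builds the Tcl script text (running it via `quartus_sta -t` is an assumption about the usual invocation, and is not done here):

```python
# Build a Tcl script following the report_datasheet syntax documented above.
def datasheet_script(project, out_file):
    lines = [
        f"project_open {project}",
        "create_timing_netlist",
        "read_sdc",
        "update_timing_netlist",
        f"report_datasheet -file {out_file}",
    ]
    return "\n".join(lines) + "\n"

print(datasheet_script("proj1", "file1.txt"))
```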
http://www.statemaster.com/encyclopedia/Dual-space
## Dual space
In mathematics, any vector space V has a corresponding dual vector space (or just dual space for short) consisting of all linear functionals on V. Dual vector spaces defined on finite-dimensional vector spaces can be used for defining tensors, which are studied in tensor algebra. When applied to vector spaces of functions (which are typically infinite-dimensional), dual spaces are employed for defining and studying concepts like measures, distributions, and Hilbert spaces. Consequently, the dual space is an important concept in the study of functional analysis.
There are two types of dual spaces: the algebraic dual space and the continuous dual space. The algebraic dual space is defined for all vector spaces. When defined for a topological vector space, there is a subspace of the algebraic dual space, corresponding to the continuous linear functionals, which constitutes the continuous dual space.
Given any vector space V over some field F, we define the dual space V* to be the set of all linear functionals on V, i.e., scalar-valued linear maps on V (in this context, a "scalar" is a member of the base field F). V* itself becomes a vector space over F under the following definitions of addition and scalar multiplication:
$(\phi + \psi)(x) = \phi(x) + \psi(x),$
$(a\phi)(x) = a\,\phi(x),$
for all φ, ψ in V*, a in F, and x in V. In the language of tensors, elements of V are sometimes called contravariant vectors, and elements of V* covariant vectors, covectors, or one-forms.
### The finite dimensional case
If V is finite-dimensional, then V* has the same dimension as V; if {e_1, ..., e_n} is a basis for V, then the associated dual basis {e^1, ..., e^n} of V* is given by
$\mathbf{e}^i(\mathbf{e}_j) = \begin{cases} 1, & \text{if } i = j \\ 0, & \text{if } i \ne j \end{cases}$
In the case of R², the basis is B = {e_1 = (1,0), e_2 = (0,1)}. Then e^1 and e^2 are one-forms (functions that map a vector to a scalar) such that e^1(e_1) = 1, e^1(e_2) = 0, e^2(e_1) = 0, and e^2(e_2) = 1. (Note: the superscript here is an index, not an exponent.)
Concretely, if we interpret R^n as the space of columns of n real numbers, its dual space is typically written as the space of rows of n real numbers. Such a row acts on R^n as a linear functional by ordinary matrix multiplication.
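The row-acting-on-column picture can be made concrete with a few lines of Python (an illustration using plain lists):

```python
# A covector (row) acts on a vector (column) by the ordinary dot product.
def apply(row, col):
    return sum(r * c for r, c in zip(row, col))

e1, e2 = [1, 0], [0, 1]   # basis of R^2
d1, d2 = [1, 0], [0, 1]   # dual basis: d^i(e_j) = 1 exactly when i = j
print(apply(d1, e1), apply(d1, e2))  # 1 0
print(apply(d2, e1), apply(d2, e2))  # 0 1
print(apply([2, 3], [5, 7]))         # 2*5 + 3*7 = 31
```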
If V consists of the space of geometrical vectors (arrows) in the plane, then the elements of the dual V* can be intuitively represented as collections of parallel lines. Such a collection of lines can be applied to a vector to yield a number in the following way: one counts how many of the lines the vector crosses.
### The infinite dimensional case
If V is not finite-dimensional but has a basis[1] e_α indexed by an infinite set A, then the same construction as in the finite-dimensional case yields linearly independent elements e^α (α ∈ A) of the dual space, but they will not form a basis.
Consider, for instance, the space R^∞, whose elements are those sequences of real numbers which have only finitely many non-zero entries; it has a basis indexed by the natural numbers N: for i ∈ N, e_i is the sequence which is zero apart from the ith term, which is one. The dual space of R^∞ is R^N, the space of all sequences of real numbers: such a sequence (a_n) is applied to an element (x_n) of R^∞ to give the number ∑_n a_n x_n, which is a finite sum because there are only finitely many nonzero x_n. The dimension of R^∞ is countably infinite, whereas R^N does not have a countable basis.
This observation generalizes to any[1] infinite-dimensional vector space V over any field F: a choice of basis {e_α : α ∈ A} identifies V with the space (F^A)_0 of functions f : A → F such that f_α = f(α) is nonzero for only finitely many α ∈ A, where such a function f is identified with the vector
$\sum_{\alpha\in A} f_\alpha \mathbf{e}_\alpha$
in V (the sum is finite by the assumption on f, and any v ∈ V may be written in this way by the definition of a basis).
The dual space of V may then be identified with the space F^A of all functions from A to F: a linear functional T on V is uniquely determined by the values θ_α = T(e_α) it takes on the basis of V, and any function θ : A → F (with θ(α) = θ_α) defines a linear functional T on V by
$T\biggl(\sum_{\alpha\in A} f_\alpha \mathbf{e}_\alpha\biggr) = \sum_{\alpha\in A} \theta_\alpha f_\alpha.$
Again, the sum is finite because f_α is nonzero for only finitely many α.
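The pairing above can be modelled with finitely supported dictionaries; a small Python illustration (with the index set A taken to be the integers):

```python
# v in V: a finitely supported coefficient function, stored as a dict
# alpha -> f_alpha. T in V*: determined by arbitrary values theta(alpha).
def pair(theta, f):
    # The sum is finite because f has finite support.
    return sum(theta(alpha) * f_alpha for alpha, f_alpha in f.items())

f = {2: 1.0, 5: -3.0}        # v = 1*e_2 - 3*e_5, zero elsewhere
theta = lambda alpha: alpha  # a functional with infinitely many nonzero values
print(pair(theta, f))        # 2*1.0 + 5*(-3.0) = -13.0
```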
Note that (F^A)_0 may be identified (essentially by definition) with the direct sum of infinitely many copies of F (viewed as a 1-dimensional vector space over itself) indexed by A, i.e., there are linear isomorphisms
$V \cong (\mathbb{F}^A)_0 \cong \bigoplus_{\alpha\in A} \mathbb{F}.$
On the other hand, F^A is (again by definition) the direct product of infinitely many copies of F indexed by A, and so the identification
$V^*\cong \biggl(\bigoplus_{\alpha\in A}\mathbb{F}\biggr)^*\cong \prod_{\alpha\in A}\mathbb{F}^*\cong\prod_{\alpha\in A}\mathbb{F}\cong \mathbb{F}^A$
is a special case of a general result relating direct sums (of modules) to direct products.
Thus if the basis is infinite, then there are always more vectors in the dual space than in the original vector space. This is in marked contrast to the case of the continuous dual space, discussed below, which may be isomorphic to the original vector space even if the latter is infinite-dimensional.
### Bilinear products and dual spaces
As we saw above, if V is finite-dimensional, then V is isomorphic to V*, but the isomorphism is not natural and depends on the basis of V we started out with. In fact, any isomorphism Φ from V to V* defines a unique non-degenerate bilinear form on V by
$\langle v, w \rangle = (\Phi(v))(w),$
and conversely every such non-degenerate bilinear form on a finite-dimensional space gives rise to an isomorphism from V to V*.
### Injection into the double-dual
There is a natural homomorphism Ψ from V into the double dual V**, defined by (Ψ(v))(φ) = φ(v) for all v in V, φ in V*. This map Ψ is always injective[1]; it is an isomorphism if and only if V is finite-dimensional. (Infinite-dimensional Hilbert spaces are not a counterexample to this, as they are isomorphic to their continuous duals, not to their algebraic duals.)
### Transpose of a linear map
If $f: V \to W$ is a linear map, we may define its transpose (or dual) $f^*: W^* \to V^*$ by
$f^*(\varphi) = \varphi \circ f,$
where φ is an element of W*. In that case, f*(φ) is also known as the pullback of φ by f.
The assignment $f \mapsto f^*$ produces an injective linear map between the space of linear operators from V to W and the space of linear operators from W* to V*; this homomorphism is an isomorphism if and only if W is finite-dimensional. If V = W then the space of linear maps is actually an algebra under composition of maps, and the assignment is then an antihomomorphism of algebras, meaning that (fg)* = g*f*. In the language of category theory, taking the dual of vector spaces and the transpose of linear maps is therefore a contravariant functor from the category of vector spaces over F to itself. Note that one can identify (f*)* with f using the natural injection into the double dual.
If the linear map f is represented by the matrix A with respect to two bases of V and W, then f* is represented by the transpose matrix $A^{\mathrm{T}}$ with respect to the dual bases of W* and V*, hence the name. Alternatively, as f is represented by A acting on the left on column vectors, f* is represented by the same matrix acting on the right on row vectors. These points of view are related by the canonical inner product on $\mathbb{R}^n$, which identifies the space of column vectors with the dual space of row vectors.
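This matrix description of the transpose can be checked numerically. The following illustrative sketch (the matrix and vectors are arbitrary choices, not from the source) verifies that $\varphi(f(v)) = (f^*(\varphi))(v)$ when $f^*$ acts via the transposed matrix:

```python
import numpy as np

# f: R^3 -> R^2 is represented by A; a functional phi on R^2 by a vector.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])        # f(v) = A @ v
phi = np.array([4.0, -1.0])            # phi(w) = phi . w

v = np.array([1.0, 1.0, 2.0])

# f*(phi) = phi o f is represented by the transpose A.T acting on phi.
lhs = phi @ (A @ v)          # phi(f(v))
rhs = (A.T @ phi) @ v        # (f*(phi))(v)
```

Both sides agree, which is exactly the statement that the transpose matrix represents the dual map.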
## Continuous dual space
See main article: Continuous dual space
When dealing with topological vector spaces, one is typically only interested in the continuous linear functionals from the space into the base field. This gives rise to the notion of the "continuous dual space", which is a linear subspace of the algebraic dual space V*, denoted V ′. For any finite-dimensional normed vector space or topological vector space, such as Euclidean n-space, the continuous dual and the algebraic dual coincide. This is however false for any infinite-dimensional normed space. In topological contexts V* may sometimes denote just the continuous dual space, and the continuous dual may simply be called the dual.
The continuous dual V ′ of a normed vector space V (e.g., a Banach space or a Hilbert space) forms a normed vector space. The norm ||φ|| of a continuous linear functional on V is defined by
$\|\varphi\| = \sup \{\, |\varphi(x)| : \|x\| \le 1 \,\}$
This turns the continuous dual into a normed vector space, indeed into a Banach space so long as the underlying field is complete (a condition often included in the definition of a normed vector space). In other words, the dual of a normed space over a complete field is necessarily complete.
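For Euclidean $\mathbb{R}^n$ this dual norm is easy to compute: a functional $\varphi(x) = a \cdot x$ has $\|\varphi\| = \|a\|$ by the Cauchy–Schwarz inequality, with the supremum attained at $x = a/\|a\|$. A rough numerical sketch (illustrative data, random unit vectors):

```python
import numpy as np

# For phi(x) = a . x on Euclidean R^2, the dual norm
# sup{ |phi(x)| : ||x|| <= 1 } equals ||a|| (here 5.0).
rng = np.random.default_rng(0)
a = np.array([3.0, 4.0])
exact = np.linalg.norm(a)           # 5.0

# Crude check: sample many unit vectors and take the max of |phi|.
xs = rng.normal(size=(10000, 2))
xs /= np.linalg.norm(xs, axis=1, keepdims=True)
estimate = np.max(np.abs(xs @ a))   # approaches 5.0 from below
```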
### Examples
Let $1 < p < \infty$ be a real number and consider the Banach space $\ell^p$ of all sequences $\mathbf{a} = (a_n)$ for which
$\|\mathbf{a}\|_p = \left( \sum_{n=0}^\infty |a_n|^p \right)^{1/p}$
is finite. Define the number $q$ by $1/p + 1/q = 1$. Then the continuous dual of $\ell^p$ is naturally identified with $\ell^q$: given an element $\varphi \in (\ell^p)'$, the corresponding element of $\ell^q$ is the sequence $(\varphi(e_n))$, where $e_n$ denotes the sequence whose $n$-th term is 1 and all others are zero. Conversely, given an element $\mathbf{a} = (a_n) \in \ell^q$, the corresponding continuous linear functional $\varphi$ on $\ell^p$ is defined by $\varphi(\mathbf{b}) = \sum_n a_n b_n$ for all $\mathbf{b} = (b_n) \in \ell^p$ (see Hölder's inequality).
In a similar manner, the continuous dual of $\ell^1$ is naturally identified with $\ell^\infty$ (the space of bounded sequences). Furthermore, the continuous duals of the Banach spaces $c$ (consisting of all convergent sequences, with the supremum norm) and $c_0$ (the sequences converging to zero) are both naturally identified with $\ell^1$.
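The pairing and the bound from Hölder's inequality can be sanity-checked on truncated sequences; the data below is purely illustrative:

```python
import numpy as np

# Hölder's inequality: |sum a_n b_n| <= ||a||_q * ||b||_p for 1/p + 1/q = 1,
# checked on finite truncations of sequences (illustrative values).
p = 3.0
q = p / (p - 1.0)                            # q = 3/2, the conjugate exponent

b = np.array([1.0, -0.5, 0.25, 0.125])       # element of l^p (truncated)
a = np.array([0.3, 0.2, -0.1, 0.05])         # the functional's sequence in l^q

pairing = np.sum(a * b)                      # phi(b) = sum a_n b_n
bound = (np.sum(np.abs(a) ** q) ** (1 / q)) * (np.sum(np.abs(b) ** p) ** (1 / p))
```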
### Further properties
If V is a Hilbert space, then its continuous dual is a Hilbert space which is anti-isomorphic to V. This is the content of the Riesz representation theorem, and gives rise to the bra-ket notation used by physicists in the mathematical formulation of quantum mechanics.
In analogy with the case of the algebraic double dual, there is always a naturally defined injective continuous linear operator Ψ : V → V ′′ from V into its continuous double dual V ′′. This map is in fact an isometry, meaning ||Ψ(x)|| = ||x|| for all x in V. Spaces for which the map Ψ is a bijection are called reflexive.
The continuous dual can be used to define a new topology on V, called the weak topology.
If the dual of V is separable, then so is the space V itself. The converse is not true: the space $\ell^1$ is separable, but its dual is $\ell^\infty$, which is not separable.
## Notes
1. ^ a b c Several assertions in this article require the axiom of choice for their justification. The axiom of choice is needed to show that an arbitrary vector space has a basis: in particular it is needed to show that $\mathbb{R}^{\mathbb{N}}$ has a basis. It is also needed to show that the dual of an infinite-dimensional vector space V is nonempty, and hence that the natural map from V to its double dual is injective.
https://labs.tib.eu/arxiv/?author=S.-J.%20Lee
• This work is on the Physics of the B Factories. Part A of this book contains a brief description of the SLAC and KEK B Factories as well as their detectors, BaBar and Belle, and data taking related issues. Part B discusses tools and methods used by the experiments in order to obtain results. The results themselves can be found in Part C. Please note that version 3 on the archive is the auxiliary version of the Physics of the B Factories book. This uses the notation alpha, beta, gamma for the angles of the Unitarity Triangle. The nominal version uses the notation phi_1, phi_2 and phi_3. Please cite this work as Eur. Phys. J. C74 (2014) 3026.
• ### Sondheimer Oscillation as a Fingerprint of Surface Dirac Fermions(1105.0731)
Topological states of matter challenge the paradigm of symmetry breaking, characterized by gapless boundary modes and protected by the topological property of the ground state. Recently, angle-resolved photoemission spectroscopy (ARPES) has revealed that semiconductors of Bi$_{2}$Se$_{3}$ and Bi$_{2}$Te$_{3}$ belong to such a class of materials. Here, we present indisputable evidence for the existence of gapless surface Dirac fermions from transport in Bi$_{2}$Te$_{3}$. We observe Sondheimer oscillation in magnetoresistance (MR). This oscillation originates from the quantization of motion due to the confinement of electrons within the surface layer. Based on Sondheimer's transport theory, we determine the thickness of the surface state from the oscillation data. In addition, we uncover the topological nature of the surface state, fitting consistently both the non-oscillatory part of MR and the Hall resistance. The side-jump contribution turns out to dominate around 1 T in Hall resistance while the Berry-curvature effect dominates in 3 T $\sim$ 4 T.
http://en.wikisource.org/wiki/Popular_Science_Monthly/Volume_18/January_1881/Examination_of_Thermometers_at_the_Yale_Observatory
# Popular Science Monthly/Volume 18/January 1881/Examination of Thermometers at the Yale Observatory
(1881) Examination of Thermometers at the Yale Observatory By Leonard Waldo
ONE of the most useful institutions to science in England is the Kew Observatory of the Royal Society, whose principal work for the last quarter of a century has been to furnish accurate comparisons of thermometers sent there by physicists, meteorologists, physicians, and instrument-makers. The recognized benefits accruing to the scientific world from this well-known and widely popular service at Kew have caused the managing board of the Winchester Observatory of Yale College to organize a service having the same ends in view under the direction of the observatory. Although this work is but fairly commenced, yet it has met with most gratifying success, and there have been so many inquiries as to the methods and scope of this service that the writer has ventured upon a description suitable for the pages of the "Monthly," with the hope that in this form it may the more readily come to the notice of the meteorologists and physicians who are the most likely to be benefited by it.
The work at the Yale Observatory divides itself into two parts—the establishing of the standard thermometers with which thermometers sent to the observatory are to be compared, and the-work of comparing thermometers. The investigation of the standards themselves is by far the most tedious of the two; and as the methods used in studying the observatory standards are also the methods used, with greater or less detail, in investigating the higher grades of thermometers sent to the observatory, the methods will be briefly outlined.
It will be necessary to recall some of the fundamental principles of thermometry, however, in order to properly comprehend the methods of procedure in the case of standards:
1. Glass mercurial thermometers slowly increase their freezing-point readings as their age increases after the heating they undergo in filling with mercury in their manufacture.
2. The readings of the boiling-points are also increased, but in a much less degree—perhaps not more than one fifth as much as the freezing-point.
3. Whenever the thermometer is heated at all, the freezing-point is lowered, and the amount of this depression is very nearly proportional to the square of the temperature to which the thermometer is heated.
4. It follows from 3, that if a thermometer is kept at the ordinary temperatures, the freezing-point of water will be indicated by a lower scale reading than if the thermometer is kept at a low temperature. Now, if we suppose the thermometer has been kept at the freezing-point of water for a period of several days, and that the progressive change which takes place in the first years after manufacture has ceased, the freezing-point which is then determined is called the permanent freezing-point, and is the zero of the Centigrade scale, or 32° of the Fahrenheit scale. If we heat the thermometer to the boiling-point of water, and then immediately cool it and immerse it in melting ice, we shall obtain another point on the thermometer scale which we may call the temporary freezing-point, because it will gradually approach the permanent freezing-point, and after a few months, if the instrument is not again heated, it will finally coincide with it. The difference between the permanent and temporary freezing-points is usually about three fourths of a degree Fahrenheit, and, so far as now known, remains constant for the same thermometer.
5. The boiling-point of water at the level of the sea, and with a barometric pressure of 760 mm = 29·922 inches in the latitude of 45°, is the second point in the thermometer scale to be fixed. To do this the thermometer is exposed to the steam of pure water, and, from the observed height of the barometer, the known elevation and latitude of the place of observation, the true boiling-point is computed from the observed one, and the 100° C. or 212° F. is thus fixed.
6. Having thus located the freezing and boiling points of a standard thermometer, the intermediate points are to be fixed by dividing the scale so that at every part the length of 1° shall measure an equal volume of mercury. At least, this has been the usual procedure, and for ordinary standards perhaps it is the most convenient. For standards to be used in the highest class of work it would be better to graduate the distance between 0° and 100° C. into one hundred equal parts, and then allow the observer to accurately determine the value of the corrections at each degree. Indeed, it is preferable for many researches that the whole scale be simply a millimetre one, and care only be taken to have the millimetre graduation extremely accurate.

Fig. 1.
The dividing of the tube so that an equal volume of mercury may occupy the same number of degrees at the various parts of the tube is called the calibration of a thermometer, and on the perfection of this work, if it is attempted at all, largely depends the value of the thermometer. As Pernet has remarked, the labor of determining the errors of a thermometer is much increased by having to determine the errors the maker has introduced in the imperfect calibration of its scale. In observations not requiring an accuracy beyond 0·1° F., it might be quite safely left to the skill of a reputable maker to free the instrument from errors of this kind. It is accomplished by detaching a small portion of the column and measuring its length at different, and usually consecutive, parts of the tube. Obviously from these results may be computed the value of 1° at successive parts of the thermometer scale, in terms of the dividing engine used by the maker.
The precision attained in the calibration of standards when the greatest care is exercised is surprising: thus, in the three Kew standards of the Yale Observatory, the maximum sum of the errors depending on imperfect calibration is very nearly 0·01° in each of them.
Supposing that several thermometers, by different and equally skillful makers, have been prepared with the greatest care, it is found in comparing them that they differ sensibly among themselves, owing to the difference in the glass used in their construction, their varying sensitiveness to the slight changes caused by the circulation of the water in which they are immersed, and a variety of less obvious causes. It becomes necessary, therefore, that some definite construction of the mercurial standard thermometer should be adopted, and the standard chosen by the Yale Observatory is defined upon the certificates issued with standards compared, as follows:
"The theoretical mercurial standard thermometer to which this instrument has been referred, is graduated by equal volumes upon a glass stem of the same dimensions and chemical constitution as the Kew standards 578 and 584. The permanent freezing-point is determined, by an exposure of not less than forty-eight hours to melting ice, supposing the temperature of the standard has not been greater than 25º Cent. $=$ 77º Fahr. during the preceding six months. The boiling-point is determined from the temperature of the steam of pure water at a barometric pressure of 760 mm. $=$ 29·922 inches (reduced to 0º Cent.) at the level of the sea and in the latitude of 45º."
This standard has its 0º and 100º C. identical with the standard of the International Commission of Weights and Measures, and the physicists generally have agreed upon the pressure and latitude given as the most advisable. It is practically coincident with a pressure of 29·905 inches in the latitude of London, and at the sea-level—the conditions under which the 212° point of the English standard is determined under act of Parliament.
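As a quick sanity check of the pressure figures quoted above (a sketch using the modern definition of the inch; the article's last digit reflects the rounding and conversion conventions of its day):

```python
# Check the article's stated equivalence "760 mm = 29.922 inches" of mercury,
# using the modern definition 1 inch = 25.4 mm.
MM_PER_INCH = 25.4
pressure_mm = 760.0
pressure_in = pressure_mm / MM_PER_INCH   # about 29.9213 inches
```

The modern conversion gives 29·921 inches, within a thousandth of an inch of the figure the observatory used.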
A description of the Kew standards referred to is given in the accompanying table:
| Maker | Maker's number | How graduated | Length of 1° | Smallest graduations | Length of tube | Shape of bulb |
|---|---|---|---|---|---|---|
| Kew Observatory | 585 | −34° to +275° C. | 1·73 mm. | 1° | 618 mm. | Cylindrical |
| Kew Observatory | 578 | −9° to +105° C. | 3·46 mm. | 0·5° | 455 mm. | Cylindrical |
| Kew Observatory | 584 | +14° to +220° F. | 1·87 mm. | 1° | 455 mm. | Cylindrical |
The tubes of which the Kew standards are made are about twelve years old, and belong to the series purchased by the Royal Society and deposited at Kew to be used as standards.
The space between the outer and inner tanks is filled with cotton-wool which has been picked with the fingers until it has as little body as possible. The object of this wool is to prevent currents of air, which would otherwise cause a much greater conduction of heat to or from the body of water in the inner tank.
The determination of the freezing-points of standards is accomplished by the apparatus shown in Fig. 2, where a a' is a tinned-iron cylindrical vessel 9 × 9 inches, inclosing a smaller one 7½ × 5 inches. A strainer allows the water from the finely crushed ice or snow to escape into the open space b', and the space between the outer and inner vessels is filled with cotton-wool. Close-fitting covers prevent currents of air from the outside, and when in use each thermometer is fitted to a cork which is imbedded partly in the ice.
One boiling-point apparatus is constructed after Regnault's plan, and consists essentially of a brass stand (Fig. 3) supporting a water-tank w w', 6 inches in diameter and 3·5 inches deep, upon which in turn rests a brass section of double tubing having an inside diameter of 5 and an outside diameter of 6 inches. This section, which extends upward 3·1 inches, has three open tubes, each 0·7 inch in diameter (v v'), let into its outer wall. At the place which would be occupied by the fourth there is a small manometer-tube m, with a stopcock s, by which the difference of the pressure of the steam inside and the air outside may be noted. Any one of a series of four brass double cylinders, ranging in height from three to twelve inches, may be fitted to this first section by a telescope-joint at will. Each of these double cylinders has perforations at its top for the insertion of thermometers. Around the top of the inner cylinder there is a series of ten holes, each three fourths of an inch in diameter, to allow the steam to pass from the inner chamber to the outer, and thus through the vents v v' to make its escape. When in use, the tank w w' is filled with pure water, taking the precaution to put several feet of brass ribbon in the bottom to equalize the boiling; and the heat is communicated by means of the Bunsen burner b'. The thermometers are suspended as at t t', with their bulbs at h.
Another boiling-point apparatus, to be used for very long thermometers, and where it is desirable to take the greatest care in the boiling-point determination, is made entirely of glass. The thermometer is completely immersed in steam, and the readings are made with the cathetometer by looking through the glass and steam which surround the thermometer. A standard barometer, wrapped in cotton-wool and cloth to prevent rapid change, in the temperature of its mercury, and made by James Green, is hung on the same level as the boiling-point apparatus, and the thermometers are read alternately with the barometer. The cathetometer is used for reading thermometers in both the boiling and freezing-point apparatus.
For the calibration of tubes, two microscopes have been mounted so that the position of the two ends of a short mercury-column may be read at the same time by means of eye-piece micrometers. The observatory is having built a comparator especially for this work, which will soon be mounted in its place.

Fig. 3.
By far the most valuable apparatus in connection with this work is the collection of foreign standards which have been obtained to represent the work of foreign observatories. This collection comprises seventeen standards of the highest class, eight working standards, and forty-five comparison thermometers. The makers comprise noted artists of Europe, and among them are the Kew Observatory; Baudin, Fastré, Tonnelot, and Alvergniat, of Paris; Fuess, and Greiner & Geissler, of Berlin; and Casella, of London.
The comparison of the important standards was undertaken by the Kew Observatory in England, the Seewarte at Hamburg, and the Imperial Commission of Weights and Measures, under Dr. Foerster, at Berlin. There can be little doubt, therefore, that the observatory of Yale College possesses an accurate copy of the standard thermometers now in use in the prominent observatories in Europe.
It is the object of the observatory to make this service as widely popular as possible; and it particularly desires to be useful to the physicians, meteorologists, and the commercial manufacturers who have occasion to use fairly accurate thermometers. The testing of illuminating oils, the manufacture of spirits and ethers, and the numerous operations of the chemical laboratory, require thermometers of considerable accuracy, and for the benefit of persons using such the observatory has issued a circular which will be mailed on application.
Thermometers may be sent by mail or express, directed to the Winchester Observatory, New Haven, Connecticut. If they are sent by mail (and nothing larger than a clinical thermometer should be), they ought to be packed in a wooden box, in tissue-paper. In whatever manner they are sent, a little care taken in packing them in soft paper will materially lessen the risk of accident. Ordinary thermometers are returned to the senders, with certificates stating their deviation from the true mercurial standard for every ten degrees, within a few days from the date of their reception. Standards require from a week to a month for their investigation, depending upon the degree of precision desired in the final certificates.
The official circular of the observatory contains detailed information relating to the supervision of hospital thermometers and the facilities offered to makers. There is no good reason why any maker should not furnish, with any thermometer sold, a certificate stating the errors of that particular instrument. That the service will be a popular one is shown by the fact that already about five hundred thermometers have been sent to the observatory for verification, and not the least benefit will be that the errors of every thermometer issued with a certificate will be on file at the observatory, and this will be of particular value in cases where a uniformity of data is desired, as in the case of the United States Signal Service, or the observations made by isolated meteorologists in different parts of the country.
https://axiomsofchoice.org/hilbert_space_mean_value
Hilbert space mean value
Set
context $V$…Hilbert space
definiendum $\overline{\cdot}_{-}:\mathrm{Observable}(V)\times V\to\mathbb R$
definiendum $\overline{A}_{\psi}:=\frac{\langle \psi | A\ \psi \rangle}{\Vert \psi \Vert^2}$
Discussion
One can rewrite this in many ways using:
• $\langle \psi | A\ \psi \rangle=\langle A \rangle_\psi$
• $\Vert \psi \Vert^2=\langle \psi | \psi \rangle=\langle 1 \rangle_\psi$
For any vector $\psi$ we further have:
• $\Delta_\psi A = \left(\overline{\left(A-\overline A\right)^2}\right)^\frac{1}{2} = \left(\overline{A^2}-\overline{A}^2\right)^\frac{1}{2}=\frac{\Vert(A-\overline A)\psi\Vert}{\Vert\psi\Vert}$ is called the (non-negative) mean fluctuation.
• $\gamma=\overline{(A-\overline{A})(B-\overline{B})}/(\Delta A\cdot \Delta B)=(\overline{AB}-\overline{A}\overline{B})/(\Delta A\cdot \Delta B)$ is called the correlation coefficient.
Theorems
$AB=BA\implies \gamma\in [-1,1]$.
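A small numerical sketch of these definitions (plain Python; the observable $\sigma_z$ and the unnormalized state below are arbitrary choices for illustration, not part of the definition): compute $\overline{A}_{\psi}$ and $\Delta_\psi A$ for a 2×2 Hermitian matrix.

```python
# Mean value and fluctuation of a Hermitian observable, following the
# definitions above. Pure-Python 2x2 complex linear algebra, no libraries.

def matvec(A, v):
    # A is a 2x2 list of complex numbers, v a length-2 list.
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

def inner(u, v):
    # <u|v> with conjugation in the first slot (physics convention).
    return u[0].conjugate() * v[0] + u[1].conjugate() * v[1]

def mean(A, psi):
    # \overline{A}_psi = <psi|A psi> / ||psi||^2
    return (inner(psi, matvec(A, psi)) / inner(psi, psi)).real

def fluctuation(A, psi):
    # Delta_psi A = ||(A - mean) psi|| / ||psi||
    m = mean(A, psi)
    Av = matvec(A, psi)
    r = [Av[0] - m * psi[0], Av[1] - m * psi[1]]
    return (inner(r, r).real / inner(psi, psi).real) ** 0.5

# Pauli sigma_z as the observable, and an unnormalized equal superposition.
sz = [[1 + 0j, 0j], [0j, -1 + 0j]]
psi = [1 + 0j, 1 + 0j]  # ||psi||^2 = 2; the definition divides this out

print(mean(sz, psi))         # 0.0: eigenvalues +1 and -1 with equal weight
print(fluctuation(sz, psi))  # 1.0: maximal fluctuation for sigma_z
```

Note that normalizing $\psi$ is unnecessary precisely because the definition divides by $\Vert\psi\Vert^2$.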
https://codereview.stackexchange.com/questions/172637/tax-calculator-using-java-8-and-bigdecimal
# Tax Calculator using Java 8 and BigDecimal
I am learning Java 8 and BigDecimal. The following exercise involved writing a tax calculator that takes a salary and calculates the tax payable based on the tax band into which the salary falls. The test passes as the code stands, but if I remove the call to
setScale(0, BigDecimal.ROUND_UP)
then the rounding does not occur and the tests fail like this:
    TaxCalculator.calculateTax > 712.50

    java.lang.AssertionError:
    Expected: is <713>
         but: was <712.50>
I don't understand that because the 2nd parameter to the divide call specifies that rounding should be performed.
Also, Can anyone suggest improvements to the way this class is written please? I'm just trying to nail down best practices with respect to the Streams API. Am I using an appropriate way to calculate a percentage in Java?
    import org.junit.Before;
    import org.junit.Test;

    import java.math.BigDecimal;
    import java.util.HashMap;
    import java.util.TreeMap;

    import static org.hamcrest.CoreMatchers.is;
    import static org.junit.Assert.assertThat;

    public class TaxCalculatorTest {

        private TaxCalculator taxCalculator;
        private HashMap<BigDecimal, BigDecimal> testData = new HashMap<>(); // <salary, taxPayable>
        private TreeMap<BigDecimal, BigDecimal> taxBands = new TreeMap<>(); // uses natural ordering of keys
        // Could have used LinkedHashMap to maintain order but that relies on entries being inserted in the correct order

        @Before
        public void setup() {
            testData.put(new BigDecimal("5000"), new BigDecimal("225"));
            testData.put(new BigDecimal("11000"), new BigDecimal("413"));
            testData.put(new BigDecimal("19000"), new BigDecimal("713"));
            testData.put(new BigDecimal("27000"), new BigDecimal("905"));

            taxBands.put(new BigDecimal("10000"), new BigDecimal("4.5"));
            taxBands.put(new BigDecimal("35000"), new BigDecimal("3.35")); // TreeMap fixes the ordering
            taxBands.put(new BigDecimal("20000"), new BigDecimal("3.75")); // This entry will be inserted before the previous one

            taxCalculator = new TaxCalculator(taxBands);
        }

        @Test
        public void calculateTax() {
            testData.forEach((salary, taxPayable) -> {
                assertThat(taxCalculator.calculateTax(salary), is(taxPayable));
            });
        }
    }
    import java.math.BigDecimal;
    import java.util.HashMap;
    import java.util.TreeMap;

    class TaxCalculator {

        private static final BigDecimal NO_TAX = new BigDecimal("0");

        /*
         * scale > the number of digits to the right of the decimal point
         *
         * precision > the number of significant digits
         *
         * BigDecimal instances are immutable
         *
         */
        private BigDecimal ONE_HUNDRED = new BigDecimal("100");

        private final TreeMap<BigDecimal, BigDecimal> taxBands;

        TaxCalculator(TreeMap<BigDecimal, BigDecimal> taxBands) {
            this.taxBands = taxBands;
        }

        BigDecimal calculateTax(BigDecimal salary) {
            BigDecimal taxPayable = salary.multiply(getTaxRate(salary))
                    .divide(ONE_HUNDRED, BigDecimal.ROUND_UP)
                    .setScale(0, BigDecimal.ROUND_UP);
            System.out.println("TaxCalculator.calculateTax > " + taxPayable);
            return taxPayable;
        }

        private BigDecimal getTaxRate(BigDecimal salary) {
            return taxBands.entrySet().stream()
                    .filter((entry) -> salary.compareTo(entry.getKey()) < 0)
                    .map(HashMap.Entry::getValue)
                    .findFirst().orElse(NO_TAX);
        }
    }
1. The rounding doesn't work the way you expect for division. From the documentation:
Returns a BigDecimal whose value is (this / divisor), and whose scale is this.scale().
The product of the salary and the tax rate has two digits after the decimal point. So does the quotient.
2. The getTaxRate method is unnecessarily long and complicated. You're trying to find, in a TreeMap, the entry with the largest key that is less than the given value. That's exactly what the lowerEntry method does.
3. Printing something inside the calculateTax method is a bad idea. It makes the code less reusable (what if I want to just calculate the tax without any logging?). And violates the single responsibility principle (it calculates the tax and prints something to standard output).
    /*
     * scale > the number of digits to the right of the decimal point
     *
     * precision > the number of significant digits
     *
     * BigDecimal instances are immutable
     *
     */
4. This comment is even worse than useless. It clutters the code with irrelevant information. The documentation for the BigDecimal class belongs to the BigDecimal class, not to a TaxCalculator.
There are several other comments in your code that repeat parts of the TreeMap class documentation. They're also gibberish. Removing all these comments will make the code cleaner.
5. Adding doc comments for your TaxCalculator class and its public methods is a good idea. They should describe what the class represents and what the pre- and post-conditions of each method are (for instance, what happens if I pass a negative salary? In your case, it's garbage in, garbage out. Document that; it's not obvious from the method signature itself).
6. You might want to add proper input validation (for instance, you can throw the exception if the salary is negative).
7. You can also use the valueOf static factory method to convert integers to BigDecimal's: to me, BigDecimal.valueOf(100) looks better than new BigDecimal("100"). It can also reuse frequent values.
• It doesn't change the principle, but I think the purpose of getTaxRate(BigDecimal) is to find the smallest key in the map that is greater than salary. – Stingy Aug 16 '17 at 21:31
• You don't need to create a new BigDecimal instance to represent zero, because an instance of zero already exists: BigDecimal.ZERO
• taxBands is declared as a TreeMap, which is a class, and your TaxCalculator constructor likewise accepts a TreeMap, but actually, you only need taxBands to exhibit a specific behavior defined in an interface, namely SortedMap (or, if you follow kraskevich's suggestion in point 2 of his answer, a NavigableMap), but you don't depend on the specific implementation of that interface, so I'd suggest declaring the taxBands as a SortedMap or NavigableMap, depending on which functionality you need.
    TaxCalculator(TreeMap<BigDecimal, BigDecimal> taxBands) {
        this.taxBands = taxBands; // this is dangerous
    }
The constructor is passed a mutable object which the newly created TaxCalculator now references, so your TaxCalculator does not have exclusive control over taxBands, because if the TreeMap referenced by taxBands is modified from outside your TaxCalculator object, this modification will be reflected in the TaxCalculator. It would be safer to do this:
this.taxBands = new TreeMap<>(taxBands);
This also has the effect that the constructor could now accept any Map<BigDecimal, BigDecimal>, and not only a TreeMap, SortedMap, NavigableMap or whatever.
• This line is confusing:
.map(HashMap.Entry::getValue)
The Entry interface you want is defined in the interface Map, not in the class HashMap. Apparently, it still compiles and references the correct interface, but it would be less ambiguous to write:
.map(Map.Entry::getValue)
Interestingly, the following does not compile, even if taxBands is declared as a TreeMap:
.map(TreeMap.Entry::getValue)
The reason for this is that, now, the code doesn't refer to the interface Map.Entry, but to a package-private class named Entry in TreeMap, and since you are not inside the package java.util, you are denied access to java.util.TreeMap.Entry. Actually, even if TreeMap.Entry were public, I don't think the code above would compile, because even TreeMap.entrySet() only returns a Set<Map.Entry>, and a Map.Entry cannot be assigned to a TreeMap.Entry (only the other way round were possible, since TreeMap.Entry implements Map.Entry).
http://mathhelpforum.com/calculus/25442-integral-notation.html
1. ## Integral Notation
One of the things that bothered me for a long time was the $\displaystyle \bold{d}x$ appearing at the end of the integral. What is it for? And why was it put there? People told me that it is there to show what variable we are integrating, but that is often clear even without it. Behold! I have dreamt a dream and I have a revelation! Behold! It was Riemann, he told me the answer. Behold! It was the Riemann-Stieltjes Integral.
I want to explain what it is, it will make the notion of why we put a $\displaystyle \bold{d}x$ in the integral a lot more clearer.
First let us begin with a simpler question. What is a Riemann Integral? If you have taken a basic course in analysis you would know there are two ways to define it: the classical Riemann definition, which is also touched on a little in a Calculus course, and another one (which gives exactly the same thing) developed by Darboux. Since Riemann's definition is more elementary (but not as neat) let us use that definition.
Definition: Let $\displaystyle f$ be a bounded function on a closed interval $\displaystyle [a,b]$. We say that $\displaystyle f$ is integrable on this interval when there exists a real number $\displaystyle I$ such that: for any $\displaystyle \epsilon > 0$ there exists a $\displaystyle \delta >0$ so that for any partition $\displaystyle P= \{ a=x_0 < x_1< ... < x_{n-1}<x_n=b \}$ satisfying $\displaystyle \text{mesh}(P) = \max_{1\leq k\leq n} \ \{x_{k} - x_{k-1} \} < \delta$ we have that $\displaystyle \left| I - \sum_{k=1}^n f(t_k)(x_k-x_{k-1}) \right| < \epsilon$ where $\displaystyle t_k$ is any point chosen in the $\displaystyle [x_{k-1},x_k]$ sub-interval.
Basically the definition is saying that we can make the finite sums (approximating areas) as close as we want to the true value which we call $\displaystyle I$* as long as the partition $\displaystyle P$ of the interval is fine (or thin) enough. And note how much freedom we have, it says for any partition and there are infinitely many, and it says any point in sub-interval and again there are infinitely many. So there is so much freedom with these finite Riemann sums.
So we know that if $\displaystyle f(x) = x \mbox{ on }[0,1]$ then to show that $\displaystyle \int_0^1 f = \frac{1}{2}$ we need to show that the number $\displaystyle I = \frac{1}{2}$ is this number we need to satisfy the definition given above.
Note, that is what the fundamental theorem of Calculus is doing. Instead of going through all that difficult definition, it says that if we can find the anti-derivative it is that value $\displaystyle I$ that we are looking for.
If you think the Riemann integral definition is complicated, just look at the Riemann-Stieltjes integral definition. Now the Riemann-Stieltjes integral is more general: it is an integral with respect to another function. Before stating the definition there is just one technical detail.
Definition: We say $\displaystyle g:[a,b]\mapsto \mathbb{R}$ is of bounded variation when there exists a constant $\displaystyle M>0$ so that for any partition $\displaystyle P = \{ a=x_0<...<x_n = b\}$ we have that $\displaystyle \sum_{k=1}^n |g(x_k)-g(x_{k-1})| \leq M$.
Now we can state the definition (which might look monstrous in the beginning).
Definition: Let $\displaystyle f$ be a bounded function on $\displaystyle [a,b]$ and $\displaystyle g$ be of bounded variation on $\displaystyle [a,b]$. We say $\displaystyle f$ is Riemann-Stieltjes integrable with respect to $\displaystyle g$ when there exists a real number $\displaystyle I$ such that: for any $\displaystyle \epsilon > 0$ there exists $\displaystyle \delta > 0$ so that for any partition $\displaystyle P = \{a = x_0<x_1<...<x_n = b\}$ satisfying $\displaystyle \text{mesh}(P) < \delta$ we have that $\displaystyle \left| I - \sum_{k=1}^n f(t_k)[g(x_{k}) - g(x_{k-1})] \right| < \epsilon$ where $\displaystyle t_k$ are any points in the $\displaystyle [x_{k-1},x_k]$ sub-interval. We call this distinguished number $\displaystyle I$ the Riemann-Stieltjes integral of $\displaystyle f$ on $\displaystyle [a,b]$ with respect to $\displaystyle g$, and write $\displaystyle I = \int_a^b f \bold{d}g$.
Now why is this a generalization? Because if $\displaystyle g(x) = x$ then it is the standard Riemann integral! And that means with respect to $\displaystyle x$ we would write $\displaystyle \int_a^b f \bold{d}x$. And that is where the $\displaystyle \bold{d}x$ comes from.
In fact it turns out that if $\displaystyle f$ is continuous and $\displaystyle g$ is smooth (continuously differentiable), then $\displaystyle g$ is of bounded variation and:
$\displaystyle \int_a^b f \bold{d}g = \int_a^b fg'$. Where the RHS is the standard Riemann Integral.
So not only does this explain the $\displaystyle \bold{d}x$ part, it also explains the differential of a function.
For example,
$\displaystyle \int_0^\pi \sin x d(x^2+x) = \int_0^{\pi} \sin x (2x+1) dx$.
By the Riemann-Stieltjes Integral.
Maybe you find that interesting, that is why I posted it.
*) It can easily be shown that if $\displaystyle I_1,I_2$ are any possible real values for the Riemann integral then $\displaystyle I_1 = I_2$. Meaning there is only one such possible value $\displaystyle I$, and we define it to be the integral $\displaystyle \int_a^b f$.
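The worked example $\int_0^\pi \sin x \,d(x^2+x) = \int_0^{\pi} \sin x\,(2x+1)\, dx$ is easy to check numerically. A minimal sketch (plain Python; the partition size is an arbitrary choice): compute the left-endpoint Riemann-Stieltjes sum directly, and compare it with the ordinary Riemann sum of $\sin x\,(2x+1)$.

```python
import math

def riemann_stieltjes(f, g, a, b, n=100000):
    # Left-endpoint Riemann-Stieltjes sum: sum of f(t_k) * (g(x_k) - g(x_{k-1}))
    # over a uniform partition of [a, b], with t_k = x_{k-1}.
    h = (b - a) / n
    total = 0.0
    for k in range(1, n + 1):
        x_prev = a + (k - 1) * h
        x_k = a + k * h
        total += f(x_prev) * (g(x_k) - g(x_prev))
    return total

# The example from the post: integral of sin x with respect to g(x) = x^2 + x.
lhs = riemann_stieltjes(math.sin, lambda x: x * x + x, 0.0, math.pi)
# The equivalent ordinary Riemann integral of sin x * (2x + 1), i.e. g(x) = x.
rhs = riemann_stieltjes(lambda x: math.sin(x) * (2 * x + 1), lambda x: x, 0.0, math.pi)

print(lhs, rhs)  # both approach the exact value 2*pi + 2
```

The exact value $2\pi + 2$ follows from integration by parts: $\int_0^\pi x\sin x\,dx = \pi$ and $\int_0^\pi \sin x\,dx = 2$.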
2. If you would like to explore more of the topics you have introduced, there are two classics that are standards: INTRODUCTION TO THE THEORY OF INTEGRATION by T. H. Hildebrandt and THEORY OF THE INTEGRAL by Stanislaw Saks. I think that Hildebrandt is still the best there is on Riemann and Riemann-Stieltjes integrals. He also has a good discussion of content of a set, which was the subject of another one of your postings.
If you are interested in modern work in integration theory I would suggest Robert McLeod's book THE GENERALIZED RIEMANN INTEGRAL. It explores a relatively new definition of the integral that makes any derivative, F'(x), integrable on [a,b] to F(b)-F(a). That is not true of earlier attempts. That book is an MAA publication: Carus #20.
3. Thank you. I do like theory of integration, one of my favorite in an analysis course but since I am doing so much stuff it seems I have to wait until I have time to explore integration in more detail.
How is Stieltjes pronounced?
4. Originally Posted by ThePerfectHacker
How is Stieltjes pronounced?
Thomas Joannes Stieltjes (pronounced 'sti:ltʃəs).
http://wikien4.appspot.com/wiki/Location%E2%80%93scale_family
# Location–scale family

In probability theory, especially in mathematical statistics, a location–scale family is a family of probability distributions parametrized by a location parameter and a non-negative scale parameter. For any random variable ${\displaystyle X}$ whose probability distribution function belongs to such a family, the distribution function of ${\displaystyle Y{\stackrel {d}{=}}a+bX}$ also belongs to the family (where ${\displaystyle {\stackrel {d}{=}}}$ means "equal in distribution", that is, "has the same distribution as"). Moreover, if ${\displaystyle X}$ and ${\displaystyle Y}$ are two random variables whose distribution functions are members of the family, and assuming
1. existence of the first two moments and
2. ${\displaystyle X}$ has zero mean and unit variance,
then ${\displaystyle Y}$ can be written as ${\displaystyle Y{\stackrel {d}{=}}\mu _{Y}+\sigma _{Y}X}$, where ${\displaystyle \mu _{Y}}$ and ${\displaystyle \sigma _{Y}}$ are the mean and standard deviation of ${\displaystyle Y}$.
In other words, a class ${\displaystyle \Omega }$ of probability distributions is a location–scale family if for all cumulative distribution functions ${\displaystyle F\in \Omega }$ and any real numbers ${\displaystyle a\in \mathbb {R} }$ and ${\displaystyle b>0}$, the distribution function ${\displaystyle G(x)=F(a+bx)}$ is also a member of ${\displaystyle \Omega }$.
• If ${\displaystyle X}$ has a cumulative distribution function ${\displaystyle F_{X}(x)=P(X\leq x)}$, then ${\displaystyle Y{=}a+bX}$ has a cumulative distribution function ${\displaystyle F_{Y}(y)=F_{X}\left({\frac {y-a}{b}}\right)}$.
• If ${\displaystyle X}$ is a discrete random variable with probability mass function ${\displaystyle p_{X}(x)=P(X=x)}$, then ${\displaystyle Y{=}a+bX}$ is a discrete random variable with probability mass function ${\displaystyle p_{Y}(y)=p_{X}\left({\frac {y-a}{b}}\right)}$.
• If ${\displaystyle X}$ is a continuous random variable with probability density function ${\displaystyle f_{X}(x)}$, then ${\displaystyle Y{=}a+bX}$ is a continuous random variable with probability density function ${\displaystyle f_{Y}(y)={\frac {1}{b}}f_{X}\left({\frac {y-a}{b}}\right)}$.
In decision theory, if all alternative distributions available to a decision-maker are in the same location–scale family, and the first two moments are finite, then a two-moment decision model can apply, and decision-making can be framed in terms of the means and the variances of the distributions.[1][2][3]
## Examples
Often, location–scale families are restricted to those where all members have the same functional form. Most location–scale families are univariate, though not all. Well-known families in which the functional form of the distribution is consistent throughout the family include the normal, Cauchy, uniform, logistic, and Laplace distributions.
## Converting a single distribution to a location–scale family

The following shows how to implement a location–scale family in a statistical package or programming environment where only functions for the "standard" version of a distribution are available. It is designed for R but should generalize to any language and library.

The example here is of the Student's t-distribution, which is normally provided in R only in its standard form, with a single degrees of freedom parameter df. The versions below with _ls appended show how to generalize this to a generalized Student's t-distribution with an arbitrary location parameter mu and scale parameter sigma.

Probability density function (PDF): dt_ls(x, df, mu, sigma) = 1/sigma * dt((x - mu)/sigma, df)
Cumulative distribution function (CDF): pt_ls(x, df, mu, sigma) = pt((x - mu)/sigma, df)
Quantile function (inverse CDF): qt_ls(prob, df, mu, sigma) = qt(prob, df)*sigma + mu
Generate a random variate: rt_ls(df, mu, sigma) = rt(df)*sigma + mu

Note that the generalized functions do not have standard deviation sigma, since the standard t distribution does not have a standard deviation of 1.
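The same recipe works in any language with only a "standard" density available. A quick plain-Python sketch (the standard normal is used in place of the t density purely because it needs no special functions; the name `dnorm_ls` is invented by analogy with the R convention above):

```python
import math

def dnorm(z):
    # Standard normal density: location 0, scale 1.
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def dnorm_ls(x, mu, sigma):
    # Location-scale rule for densities: f_Y(y) = (1/b) * f_X((y - a)/b),
    # here with a = mu and b = sigma.
    return dnorm((x - mu) / sigma) / sigma

# Sanity check: the shifted and scaled density still integrates to 1.
mu, sigma = 3.0, 2.0
a, b, n = mu - 10 * sigma, mu + 10 * sigma, 20000
h = (b - a) / n
area = sum(dnorm_ls(a + k * h, mu, sigma) for k in range(n + 1)) * h
print(area)  # close to 1
```

The factor 1/sigma is what keeps the total probability at 1 after stretching the axis; forgetting it is the classic mistake when hand-rolling a location–scale density.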
## References
1. ^ Meyer, Jack (1987). "Two-Moment Decision Models and Expected Utility Maximization". American Economic Review. 77 (3): 421–430. JSTOR 1804104.
2. ^ Mayshar, J. (1978). "A Note on Feldstein's Criticism of Mean-Variance Analysis". Review of Economic Studies. 45 (1): 197–199. JSTOR 2297094.
3. ^ Sinn, H.-W. (1983). Economic Decisions under Uncertainty (Second English ed.). North-Holland.
https://pyramid.readthedocs.io/en/latest/api/testing.html
# pyramid.testing¶
setUp(registry=None, request=None, hook_zca=True, autocommit=True, settings=None, package=None)[source]
Set Pyramid registry and request thread locals for the duration of a single unit test.
Use this function in the setUp method of a unittest test case which directly or indirectly uses:
If you use the get_current_* functions (or call Pyramid code that uses these functions) without calling setUp, pyramid.threadlocal.get_current_registry() will return a global application registry, which may cause unit tests to not be isolated with respect to registrations they perform.
If the registry argument is None, a new empty application registry will be created (an instance of the pyramid.registry.Registry class). If the registry argument is not None, the value passed in should be an instance of the pyramid.registry.Registry class or a suitable testing analogue.
After setUp is finished, the registry returned by the pyramid.threadlocal.get_current_registry() function will be the passed (or constructed) registry until pyramid.testing.tearDown() is called (or pyramid.testing.setUp() is called again).
If the hook_zca argument is True, setUp will attempt to perform the operation zope.component.getSiteManager.sethook( pyramid.threadlocal.get_current_registry), which will cause the Zope Component Architecture global API (e.g. zope.component.getSiteManager(), zope.component.getAdapter(), and so on) to use the registry constructed by setUp as the value it returns from zope.component.getSiteManager(). If the zope.component package cannot be imported, or if hook_zca is False, the hook will not be set.
If settings is not None, it must be a dictionary representing the values passed to a Configurator as its settings= argument.
If package is None it will be set to the caller's package. The package setting in the pyramid.config.Configurator will affect any relative imports made via pyramid.config.Configurator.include() or pyramid.config.Configurator.maybe_dotted().
This function returns an instance of the pyramid.config.Configurator class, which can be used for further configuration to set up an environment suitable for a unit or integration test. The registry attribute attached to the Configurator instance represents the 'current' application registry; the same registry will be returned by pyramid.threadlocal.get_current_registry() during the execution of the test.
tearDown(unhook_zca=True)[source]
Undo the effects of pyramid.testing.setUp(). Use this function in the tearDown method of a unit test that uses pyramid.testing.setUp() in its setUp method.
If the unhook_zca argument is True (the default), call zope.component.getSiteManager.reset(). This undoes the action of pyramid.testing.setUp() when called with the argument hook_zca=True. If zope.component cannot be imported, unhook_zca is set to False.
testConfig(registry=None, request=None, hook_zca=True, autocommit=True, settings=None)[source]
Returns a context manager for test set up.
This context manager calls pyramid.testing.setUp() when entering and pyramid.testing.tearDown() when exiting.
All arguments are passed directly to pyramid.testing.setUp(). If the ZCA is hooked, it will always be un-hooked in tearDown.
This context manager allows you to write test code like this:
    with testConfig() as config:
        config.add_route('bar', '/bar/{id}')
        req = DummyRequest()
        resp = myview(req)
cleanUp(*arg, **kw)[source]
An alias for pyramid.testing.setUp().
class DummyResource(__name__=None, __parent__=None, __provides__=None, **kw)[source]
A dummy Pyramid resource object.
clone(__name__=<object object>, __parent__=<object object>, **kw)[source]
Create a clone of the resource object. If __name__ or __parent__ arguments are passed, use these values to override the existing __name__ or __parent__ of the resource. If any extra keyword args are passed in via the kw argument, use these keywords to add to or override existing resource keywords (attributes).
items()[source]
Return the items set by __setitem__
keys()[source]
Return the keys set by __setitem__
values()[source]
Return the values set by __setitem__
class DummyRequest(params=None, environ=None, headers=None, path='/', cookies=None, post=None, accept=None, **kw)[source]
A DummyRequest object (incompletely) imitates a request object.
The params, environ, headers, path, and cookies arguments correspond to their WebOb equivalents.
The post argument, if passed, populates the request's POST attribute, but not params, in order to allow testing that the app accepts data for a given view only from POST requests. This argument also sets self.method to "POST".
Extra keyword arguments are assigned as attributes of the request itself.
Note that DummyRequest does not have complete fidelity with a "real" request. For example, by default, the DummyRequest GET and POST attributes are of type dict, unlike a normal Request's GET and POST, which are of type MultiDict. If your code uses the features of MultiDict, you should either use a real pyramid.request.Request or adapt your DummyRequest by replacing the attributes with MultiDict instances.
Other similar incompatibilities exist. If you need all the features of a Request, use the pyramid.request.Request class itself rather than this class while writing tests.
request_iface = <InterfaceClass pyramid.interfaces.IRequest>
class DummyTemplateRenderer(string_response='')[source]
An instance of this class is returned from pyramid.config.Configurator.testing_add_renderer(). It has a helper function (assert_) that makes it possible to make an assertion which compares data passed to the renderer by the view function against expected key/value pairs.
assert_(**kw)[source]
Accept an arbitrary set of assertion key/value pairs. For each assertion key/value pair assert that the renderer (eg. pyramid.renderers.render_to_response()) received the key with a value that equals the asserted value. If the renderer did not receive the key at all, or the value received by the renderer doesn't match the assertion value, raise an AssertionError.
https://cs.stackexchange.com/questions/140167/determine-eulerian-or-hamiltonian
determine Eulerian or Hamiltonian
I am a beginner in graph theory and just found this question in a book after completing a few topics, and I was wondering how you approach these questions. For Eulerian, I can say that the graph has a vertex of odd degree and hence is not Eulerian, but how can I determine whether they are Hamiltonian or not?
• What is the question? May 11 at 10:59
• @Nathaniel I have edited the title May 11 at 11:02
• @Nathaniel the last sentence seems to be a question. How can Aragorn determine whether a graph is Hamiltonian or not. May 11 at 11:03
That's a very good question, and the easy answer is that checking whether a graph is Eulerian is much simpler than checking whether a graph is Hamiltonian. You're diving head-first into the field of complexity theory and the famous question P vs NP. (Further reading: What is the definition of P, NP, NP-complete and NP-hard.)
As you said, a connected graph is Eulerian if and only if all of its vertices have even degree.
For checking if a graph is Hamiltonian, I could give you a "certificate" (or "witness") if it indeed was Hamiltonian. However, there is no anti-certificate, i.e., no certificate showing that the graph is non-Hamiltonian; checking whether a graph is not Hamiltonian is a co-NP-complete problem.
In fact, we believe that any certificate for non-Hamiltonian-ness needs to be exponentially large.
To answer the question:
1. Try every permutation of vertices, and if one of the permutations is a cycle, then the graph is Hamiltonian. If so, you get a certificate.
2. If no permutation was a cycle, the graph is not Hamiltonian. You cannot convince your friends that the graph is non-Hamiltonian without trying all permutations*.
(Ps, using algorithmic techniques (DP) you don't have to try every permutation, but still exponentially many.)
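The brute-force check from step 1 can be sketched in a few lines (a toy illustration, not an efficient algorithm; vertices are assumed to be labelled 0…n-1):

```python
from itertools import permutations

def is_hamiltonian(n, edges):
    """Brute force: try every cyclic ordering of the n vertices.

    A successful ordering is the certificate; a "no" answer is only
    reached after exhausting all (n-1)! orderings.
    """
    adj = [[False] * n for _ in range(n)]
    for u, v in edges:
        adj[u][v] = adj[v][u] = True
    # Fixing vertex 0 in the first position skips rotations of the same cycle.
    for perm in permutations(range(1, n)):
        cycle = (0,) + perm
        if all(adj[cycle[i]][cycle[(i + 1) % n]] for i in range(n)):
            return True
    return False
```

For the 4-cycle this returns True; for the star with three leaves it returns False, but only after trying every ordering.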
• That means all the graphs here have a cycle and are hamiltonian, I don't need to go through a lot of permutations just to determine if those graphs are hamiltonian. May 11 at 11:21
• Incorrect, you need a cycle through all vertices. What is the permutation of the first graph? May 11 at 11:22
• 10c2 is the permutation May 11 at 11:26
Indeed, for Eulerian graphs there is a simple characterization, whereas for Hamiltonian graphs one can easily show that a graph is Hamiltonian (by drawing the cycle) but there is no uniform technique to demonstrate the contrary.
For larger graphs it is simply too much work to test every traversal, so we hope for clever ad hoc shortcuts.
As an example, consider your graph on the right. Let us color the top three nodes red and the bottom four nodes green. While there are edges between red nodes, there are none between green nodes, so in any Hamiltonian cycle every green node must lie between two red nodes. But there are only three red nodes, not enough to separate the four green nodes. Hence the right graph is not Hamiltonian.
One can generalize this to the following theorem: (see our friends at math.SE)
Let $G$ be a graph. If there exists a set of $k$ nodes in $G$ such that removing these nodes leaves more than $k$ components, then $G$ is not Hamiltonian.
The graph to the left can also be handled in that way (or, directly, the graph is bipartite and the two partitions are not of equal size).
The middle graph, the Petersen graph, is not Hamiltonian either. It seems to need a different approach. Google is your friend.
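The theorem above can also be checked mechanically on small graphs. A small sketch (the function names are mine):

```python
from itertools import combinations

def count_components(vertices, edges):
    """Count connected components with a simple union-find."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(v) for v in vertices})

def has_disconnecting_set(vertices, edges, k):
    """True if removing some k vertices leaves more than k components,
    which by the theorem above proves the graph is not Hamiltonian."""
    for removed in combinations(vertices, k):
        rest = [v for v in vertices if v not in removed]
        kept = [(u, v) for u, v in edges if u in rest and v in rest]
        if count_components(rest, kept) > k:
            return True
    return False
```

Removing the centre of the star with three leaves leaves three isolated vertices, so already $k = 1$ certifies non-Hamiltonicity there.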
http://math.stackexchange.com/questions/52876/two-different-adjoints-of-exterior-derivative-on-manifolds-with-boundary-in-th
# Two “different” adjoints of exterior derivative on manifolds with boundary in the $L^2$-setting
The following problem appears in the setting of $L^2$-differential forms on manifolds with boundary. An abstracted operator-theoretic problem is given below.
Suppose $M$ is a smooth Riemannian manifold with boundary. We have an inner product and the Hodge star on differential forms.
$\langle \omega , \eta \rangle = \int_M \omega \wedge \star \eta$
We have the Cartan derivative $d$ as an unbounded operator, with its domain being suitably regular differential forms. For $\omega, \eta$ smooth and compactly supported we now have
$\langle d\omega,\eta \rangle = \langle \omega, \delta \eta \rangle$
with $\delta := \star d \star$. Now, using algebraic properties of the exterior derivative, we can show more generally for $\omega \in dom(d)$ and $\eta \in dom(\delta)$
$\int_M d\omega\wedge\star\eta = \int_M \omega\wedge\star \delta\eta + \int_{\partial M} \operatorname{Tr}\omega \wedge \star \operatorname{Tr}\eta$
so in general a trace term appears. In particular, $\delta$ is not the hermitian adjoint of $d$. We can fix this and define the unbounded operator $d^\ast$ with the same action as $\delta$ but smaller domain, namely $dom(d^\ast) = \{ \eta \in L^2\Lambda \mid \operatorname{Tr}\star\eta = 0 \}$
Assuming I have not committed any serious fallacies, now to my problem.
This difference between $d^\ast$ and $\delta$ does not appear in the setting of manifolds without boundary, and literature on this and on the $L^2$ exterior calculus is not as ubiquitous as literature on the fully smooth setting without boundary. However, both $d^\ast$ and $\delta$ are used.
So, whereas $d^\ast$ is the adjoint, what is $\delta$?
I am interested in understanding this from a purely operator-theoretic point of view (i.e. functional analysis). Of particular interest to me is whether the adjoint of an unbounded linear operator with respect to a pairing may be extended in a canonical way such that "defects" which disturb the adjointness relation appear (just as the trace terms above).
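A one-dimensional toy version of the boundary defect (added for illustration, not part of the original post): take $d = \frac{d}{dx}$ acting on functions in $L^2[a,b]$. For smooth $f, g$, integration by parts gives

$$\int_a^b f'\, g \,dx = -\int_a^b f\, g' \,dx + \big[f g\big]_a^b ,$$

so the formal adjoint $-\frac{d}{dx}$ becomes the Hilbert-space adjoint only after restricting its domain so that the boundary term vanishes; this is precisely the role played by the trace condition in $dom(d^\ast)$ above.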
Already in very simple situations such issues can be examined: on $L^2[a,b]$ the Laplacian on test functions supported strictly in the interior has the obvious continuum of distinct self-adjoint extensions, depending on choices of boundary conditions to make the extension genuinely self-adjoint (as opposed to having leftover integration-by-parts terms as above), and these extensions are mutually incompatible. Perhaps this trivial (very accessible) example suggests something useful to you...? – paul garrett Jul 21 '11 at 15:23
I agree the essence of the problem can be formulated within a functional analytic setting without the analysis on manifolds. This should be more accessible. – shuhalo Jul 21 '11 at 20:13
https://www.inspirient.com/docs/en/Technical_Deep_dives/Data_Quality_Assessment.html
This section describes in detail the checks carried out by Inspirient to assess the quality of datasets. It also defines the metrics used in the process, with a particular emphasis on metrics applicable to survey data.
# Data Quality Assessment Checks
The following three checks are available for assessing the quality of survey datasets:
## Survey Duration Anomalies
Detecting cases with abnormally short or long duration can help identify survey quality issues for investigation. Abnormal case durations are detected using the generalized ESD (Extreme Studentized Deviate) test on the specified duration variable. The analysis output is a list of the Top (high-value) and Bottom (low-value) case duration outliers.
Input requirements:
• A survey duration variable

Generated output:
• Histogram chart with outlier detection
• Top and Bottom duration outliers list
## Straightliner Indicator
The straightliner indicator utilizes a score, between 0 and 1, which defines the degree to which a case is considered a straightliner.
For each interview, the straightliner score is derived by comparing the variation of the observed responses for that interview against the expected variation, which is calculated from the responses of all interviews in the survey.
In order to derive a measure of response variation, batteries of questions with matching Likert-scale domains are grouped, and the relative frequency distribution is then calculated for each domain group to produce…
1. The mean and variance for each domain for every interview, i.e., the observed domain response variance for any given interview, and
2. The mean and variance for each domain across all interviews, i.e., the expected domain response variance for the entire survey
From the observed and expected variance, a variance distance measure can be easily calculated for each interview. This distance measure is then normalized to derive the survey response quality indicator, also named Case Divergence Score in the output generated by Inspirient’s Automated Analysis.
Input requirements:
• A case ID variable
• Multiple response variables with the same point-scale

Generated output:
• Bar chart of the case divergence classifications
• Detailed report of the divergence result for each case in a survey as a Microsoft Excel file
## Interviewer Effect Indicator
The interviewer effect indicator provides a data-driven estimation of the trustworthiness of each interviewer for any given survey. Currently, the indicator combines two factors: Degree of survey response deviation from the expected response distribution and the degree of interview duration deviation from the expected time to complete the survey. The survey response deviation score is calculated by locating outliers in expected vs. actual frequency distributions of survey response variables for each interviewer – the interviewers with the most deviation across interviews are surfaced to the top and could indicate foul-play.
The interview duration deviation score is calculated as the average survey duration per interviewer – the interviewers with survey durations significantly quicker than average are surfaced to the top and could indicate foul-play.
Input requirements:
• An interviewer ID variable
• A survey duration column
• At least one response variable

Generated output:
• Top 10 list of interviewers with largest interviewer effect score
• Detailed report of the interviewer effect score calculation for each survey case available as a Microsoft Excel file and JSON file
# Scoring Methods
This section provides a deep-dive into the following quality assessment indicators:
## Straightliner Score
The straightliner score, ranging between 0 and 1, defines the degree to which a case is considered a straightliner.
The algorithmic steps of the analysis are as follows:
1. Group survey response variables with equal Likert scale domains. For example, a survey may contain 20 response variables with a 3-point scale, which we can call Domain #1, 10 response columns with a 5-point scale, which we can call Domain #2, and 5 response columns with a 7-point Likert scale, which we can call Domain #3.
2. For each interview, i(1…n), calculate the frequency distribution for each survey response domain, d(1…m). For example, the domain frequency distribution for i1 might look like the following:
3. Now that the domain frequency distributions for each interview have been calculated, the overall domain frequency distributions are calculated for the survey, i.e., across all interviews, to derive an expected domain distribution. For example, a survey with 10 interviews might have the following overall domain frequency distributions:
4. For each interview, i(1…n), the distance between the observed frequency distribution and the expected frequency distribution is derived by calculating the normalized residual, i.e., the difference between the relative variance of the observed frequency distribution, rvobs, and the relative variance of the expected frequency distribution, rvexp. For each domain, d(1…m), this calculation can be broken down into three sub-steps:
i. First, calculate the relative variance rv (also known as the index of dispersion) of the observed relative frequencies and the expected relative frequencies using the formula:
$$rv = \frac{\text{variance}}{\text{mean}}$$
Thus, for i1, the observed relative variance of domain d1 is:
$$rv_{obs(i=1,\, d=1)} = \frac{0.031}{0.333} = 0.093$$
And, the expected relative variance of domain d1 is:
$$rv_{exp(d=1)} = \frac{0.023}{0.333} = 0.07$$
ii. Then, calculate the residual r to derive a distance measure between the observed and actual relative frequency distributions for interview i and domain d:
$$r_{(i, d)} = rv_{obs(i, d)} - rv_{exp(d)}$$
Thus,
$$r_{(i=1, d=1)} = 0.023$$
iii. Finally, normalize the residual for a given domain, to a range between [-1, +1], where [-1, 0) indicates that the observed relative frequency distribution has lower variation than expected, while, a positive value (0, +1] indicates a greater variation than expected. A score of 0 indicates that the observed relative frequency distribution matches the expected distribution. The following calculation is used to normalize the residual:
$$\tilde{r}_{(i, d)} = \begin{cases} \frac{r_{(i, d)}}{rv_{exp(d)}}, & \text{if } r_{(i, d)} < 0 \\ \frac{r_{(i, d)}}{1 - rv_{exp(d)}}, & \text{otherwise} \end{cases}$$
Thus,
$$\tilde{r}_{(i=1, d=1)} = \frac{0.023}{1 - 0.07} = 0.024$$
i.e., the observed relative frequency distribution of the responses for domain d1 of interview i1 shows slightly more variation than expected.
5. For a given interview i, repeat steps 2 to 4 for each domain, d(1…m), to produce a set of normalized residuals, R, i.e., one normalized residual for each domain. A weighted average of R is used to derive a divergence score, sdiv, for a given interview, i.e., survey case. The weights applied are the normalized response counts for each domain, e.g., for the example given above, the domain weightings would be: d1 = 0.57, d2 = 0.29, and d3 = 0.14.
6. Finally, repeat step 5 for each interview, i(1…n), to produce a set of divergence scores, Sdiv, i.e., one for each interview.
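Sub-steps (i)–(iii) above can be sketched in code as follows (an illustrative sketch only; the function names are mine and not part of Inspirient's product):

```python
def relative_variance(freqs):
    """Index of dispersion (variance / mean) of a relative frequency
    distribution, using the population variance."""
    m = sum(freqs) / len(freqs)
    var = sum((f - m) ** 2 for f in freqs) / len(freqs)
    return var / m

def normalized_residual(observed, expected):
    """Normalized residual in [-1, +1] comparing the observed domain
    frequency distribution of one interview against the expected one."""
    rv_obs = relative_variance(observed)
    rv_exp = relative_variance(expected)
    r = rv_obs - rv_exp
    if r < 0:
        return r / rv_exp        # lower variation than expected
    return r / (1 - rv_exp)      # greater variation than expected
```

A perfect straightliner on a 3-point scale (all responses identical, frequency vector [1, 0, 0]) against a uniform expectation scores 2/3; an interview whose distribution matches the expectation scores 0.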
## Interviewer Effect Score
The interviewer effect score provides a data-driven estimation of the trustworthiness of each interviewer for any given survey. Currently, the indicator combines two factors: Degree of survey response deviation from the expected response distribution and the degree of interview duration deviation from the expected time to complete the survey.
To calculate the degree of survey response deviation for each interviewer, carry out the following steps:
1. Calculate the contingency tables for all survey response variables by interviewer ID variable and perform a Chi-square test
2. For each contingency table, calculate the Chi-square residuals, i.e., the difference between the actual and expected frequency distribution of the interviewer ID and response variables
3. Locate anomalies by searching for outlier residuals (threshold defined in Haberman 1973)
4. For each anomaly, sum the absolute residual values by interviewer ID to get total survey response deviation for each interviewer.
5. The survey response deviation score for each interviewer should be normalized between 0 and 1 so that it can be combined with other scores later on
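Steps 1–2 amount to computing a residual per contingency-table cell. A minimal sketch (note: this uses the plain standardized residual (obs − exp)/sqrt(exp); Haberman 1973 defines a further-adjusted residual, and the outlier threshold is omitted here):

```python
def chi_square_residuals(table):
    """Standardized residuals for a contingency table given as a list of
    rows (interviewers) by columns (response values)."""
    total = sum(sum(row) for row in table)
    row_sums = [sum(row) for row in table]
    col_sums = [sum(col) for col in zip(*table)]
    res = []
    for i, row in enumerate(table):
        out = []
        for j, obs in enumerate(row):
            exp = row_sums[i] * col_sums[j] / total  # expected frequency
            out.append((obs - exp) / exp ** 0.5)
        res.append(out)
    return res
```

A uniform table yields all-zero residuals, while an interviewer whose responses pile up in one column produces large positive residuals in that column's cells.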
To calculate the degree of survey duration deviation for each interviewer, carry out the following steps:
1. For each interview, calculate the average survey duration by interviewer ID
2. Calculate the global average survey duration and then set all interviewers with an average survey duration greater than the global average to zero to penalize only the interviewers that are faster than average
3. The survey duration deviation score for each interviewer should be normalized between 0 and 1 so that it can be combined with other scores later on
Finally, the interviewer effect indicator for each interviewer is calculated by taking the equally weighted average of the survey response deviation and survey duration deviation scores.
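The duration-deviation steps and the final combination can be sketched as follows (illustrative only; the function names are mine):

```python
def duration_deviation_scores(durations_by_interviewer):
    """Score each interviewer by how much faster than the global average
    their interviews are; slower-than-average interviewers score 0."""
    n = sum(len(v) for v in durations_by_interviewer.values())
    global_avg = sum(sum(v) for v in durations_by_interviewer.values()) / n
    dev = {k: max(global_avg - sum(v) / len(v), 0.0)
           for k, v in durations_by_interviewer.items()}
    top = max(dev.values()) or 1.0   # avoid division by zero if all are 0
    return {k: d / top for k, d in dev.items()}

def interviewer_effect(response_dev, duration_dev):
    """Equally weighted average of the two normalized scores."""
    return 0.5 * (response_dev + duration_dev)
```

An interviewer averaging 10 minutes against a 20-minute global average gets the maximal duration score of 1.0, while one slower than average gets 0.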
http://encyclopedia.kids.net.au/page/di/Dividend
## Encyclopedia > Dividend
Article Content
# Dividend
A dividend is the distribution of profits to a company's shareholders.
Earnings that are not retained by a company may be distributed. The distribution may be in the form of a cash or stock dividend. By paying a cash dividend, a company reduces the financial resources available to it by the amount of the distribution.
Alternatively, in the case of a stock dividend, there would be more shares in circulation for the same amount of shareholder equity.
The amount of the dividend is determined every year at the company's annual general meeting, and declared as either a cash amount or a percentage of the company's profit. The dividend is the same for all shares of a given class (e.g. preferred shares). Once declared, a dividend becomes a liability of the firm.
When a share is sold shortly before the dividend is to be paid, the seller rather than the buyer is entitled to the dividend. At the point at which the buyer of a share would no longer be entitled to the upcoming dividend, the share is said to go ex-dividend. This is usually a few days before the dividend is to be paid, depending on the rules of the stock exchange. When a share goes ex-dividend, its price will generally fall by the amount of the dividend.
The dividend is calculated mainly on the basis of the company's unappropriated profit and its business prospects for the coming year. It is then proposed by the Executive Board and the Supervisory Board to the annual general meeting. At most companies, however, the amount of the dividend remains constant. This helps to reassure investors, especially during phases when earnings are low, and sends the message that the company is optimistic with respect to its future performance.
Some companies have dividend-reinvestment plans. These plans allow shareholders to use dividends to systematically buy small amounts of stock often at no commission. Dividends are not yet paid in gold certificates although this idea has been discussed by mining companies such as Goldcorp.
An alternative to dividends is a stock buyback. In a buyback the company buys back stock, thereby increasing the value of the stock left outstanding. In recent years this alternative has become more popular in the United States because investors generally have to pay more in income tax on dividends than they would in capital gains tax on an increase in the value of their stock.
All Wikipedia text is available under the terms of the GNU Free Documentation License
http://www.math.gatech.edu/seminars-and-colloquia-by-series?series_tid=37
Seminars and Colloquia by Series
Friday, April 21, 2017 - 15:00 , Location: Skiles 254 , Adrian P. Bustamante , Georgia Tech , Organizer:
A classical theorem of Arnold and Moser shows that in analytic families of maps close to a rotation we can find maps which are smoothly conjugate to rotations. This is one of the first examples of KAM theory. We aim to present an efficient numerical algorithm, and its implementation, which approximates the conjugations given by the theorem.
Friday, April 7, 2017 - 15:05 , Location: Skiles 254 , Prof. Rafael de la Llave , School of Math, Georgia Tech , Organizer: Jiaqi Yang
It is well known that periodic orbits give all the information about dynamical systems, at least for expanding maps, for which the periodic orbits are dense. This turns out to be true in dimensions 1 and 2, and false in dimension 4 or higher. We will present a proof that two $C^\infty$ expanding maps of the circle which are topologically equivalent are $C^\infty$ conjugate if and only if the derivatives of the return maps at periodic orbits are the same.
Friday, March 31, 2017 - 15:05 , Location: Skiles 254 , Lei Zhang , School of Mathematics, GT , Organizer: Jiaqi Yang
In this talk, we will give an introduction to the variational approach to dynamical systems. Specifically, we will discuss twist maps and prove the classical result that an area-preserving twist map has Birkhoff periodic orbits for each rational rotation number.
Friday, March 10, 2017 - 15:00 , Location: Skiles 254 , Rafael de la Llave , GT Math , Organizer: Rafael de la Llave
A classical theorem of Arnold and Moser shows that in analytic families of maps close to a rotation we can find maps which are smoothly conjugate to rotations. This is one of the first examples of KAM theory. We aim to present a self-contained version of Moser's proof and also to present some efficient numerical algorithms.
Friday, March 3, 2017 - 15:05 , Location: Skiles 254 , Lu Xu , School of Mathematics, Jilin University , Organizer: Jiaqi Yang
My talk is about quasi-periodic motions in multi-scaled Hamiltonian systems. It consists of four parts. First, I will introduce results on integrable Hamiltonian systems, since our focus is on nearly-integrable Hamiltonian systems. The second part gives the definition of a nearly-integrable Hamiltonian system and the classical KAM theorem. After that, I will introduce the Poincaré problem and some interesting results related to it. In the last part, which is also the main part, I will talk about the definition and background of nearly-integrable Hamiltonian systems, and then the persistence of lower-dimensional tori on a resonant surface, which is our recent result. I will also briefly introduce the technical ingredients of our work.
Friday, February 24, 2017 - 15:05 , Location: Skiles 254 , Simon Berman , School of Physics , Organizer: Jiaqi Yang
In a high harmonic generation (HHG) experiment, an intense laser pulse is sent through an atomic gas, and some of that light is converted to very high harmonics through the interaction with the gas. The spectrum of the emitted light has a particular, nearly universal shape. In this seminar, I will describe my efforts to derive a classical reduced Hamiltonian model to capture this phenomenon. Beginning with a parent Hamiltonian that yields the equations of motion for a large collection of atoms interacting self-consistently with the full electromagnetic field (Lorentz force law + Maxwell's equations), I will follow a sequence of reductions that lead to a reduced Hamiltonian which is computationally tractable yet should still retain the essential physics. I will conclude by pointing out some of the still-unresolved issues with the model, and if there's time I will discuss the results of some preliminary numerical simulations.
Friday, January 20, 2017 - 03:05 , Location: Skiles 254 , Álex Haro , Univ. of Barcelona , Organizer:
We will design a method to compute invariant tori in Hamiltonian systems through the computation of invariant tori for time-$T$ maps. We will also consider isoenergetic cases (i.e., fixing the energy).
Friday, November 11, 2016 - 15:05 , Location: Skiles 170 , Adrián P. Bustamante , Georgia Tech , Organizer:
In this talk I am going to present a way to study numerically the complex domains of invariant tori for the dissipative standard map. The numerical approach is based on a Nash-Moser method. This is work in progress jointly with R. Calleja.
Friday, November 4, 2016 - 15:05 , Location: Skiles 170 , Adrián P. Bustamante , Georgia Tech , Organizer:
In the first part of the talks we are going to present a way to study numerically the complex domains of invariant tori for the standard map. The numerical method is based on Padé approximants. For this part we are going to follow the work of C. Falcolini and R. de la Llave. In the second part we are going to present how the numerical method, developed earlier, can be used to study the complex domains of analyticity of invariant KAM tori for the dissipative standard map. This part is work in progress jointly with R. Calleja (continuation of last talk).
Friday, October 28, 2016 - 15:05 , Location: Skiles 170 , Adrián P. Bustamante , Georgia Tech , Organizer:
In the first part of the talk(s) we are going to present a way to study numerically the complex domains of invariant tori for the standard map. The numerical method is based on Padé approximants. For this part we are going to follow the work of C. Falcolini and R. de la Llave. In the second part we are going to present how the numerical method, developed earlier, can be used to study the complex domains of analyticity of invariant KAM tori for the dissipative standard map. This part is work in progress jointly with R. Calleja (continuation of last talk).