url
stringlengths 14
2.42k
| text
stringlengths 100
1.02M
| date
stringlengths 19
19
| metadata
stringlengths 1.06k
1.1k
|
|---|---|---|---|
https://www.ncatlab.org/homotopytypetheory/show/diff/On+the+homotopy+groups+of+spheres+in+homotopy+type+theory
|
# Homotopy Type Theory On the homotopy groups of spheres in homotopy type theory (changes)
Showing changes from revision #0 to #1: Added | Removed | Changed
On the homotopy groups of spheres in homotopy type theory, Guillaume Brunerie,
arxiv
## Abstract
The goal of this thesis is to prove that $\pi_4(S^3)\simeq \mathbb{Z}/2\mathbb{Z}$ in homotopy type theory. In particular it is a constructive and purely homotopy-theoretic proof. We first recall the basic concepts of homotopy type theory, and we prove some well-known results about the homotopy groups of spheres: the computation of the homotopy groups of the circle, the triviality of those of the form $\pi_k(S^n)$ with $k\lt n$, and the construction of the Hopf fibration. We then move to more advanced tools. In particular, we define the James construction which allows us to prove the Freudenthal suspension theorem? and the fact that there exists a natural number $n$ such that $\pi_4(S^3)\simeq\mathbb{Z}/n\mathbb{Z}$. Then we study the smash product of spheres, we construct the cohomology ring of a space, and we introduce the Hopf invariant?, allowing us to narrow down the $n$ to either $1$ or $2$. The Hopf invariant also allows us to prove that all the groups? of the form ${\pi_{4n-1}}(S^{2n})$ are infinite. Finally we construct the Gysin exact sequence?, allowing us to compute the cohomology of $\mathbb{C}P^2$ and to prove that $\pi_4(S^3)\simeq \mathbb{Z}/2\mathbb{Z}$ and that more generally $\pi_{n+1}(S^n)\simeq \mathbb{Z}/2\mathbb{Z}$ for every $n\ge 3$.
category: reference
Created on October 14, 2018 at 12:54:49. See the history of this page for a list of all contributions to it.
|
2020-12-03 03:56:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 13, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7841911315917969, "perplexity": 124.32574572261322}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141718314.68/warc/CC-MAIN-20201203031111-20201203061111-00308.warc.gz"}
|
https://itectec.com/matlab/matlab-use-of-vairibles-with-solve/
|
# MATLAB: Use of vairibles with solve
solve assign vairable
hi, im fairly inexperienced in matlab so any help would be useful here basically i ave simple bit of code and i need to solve an equation for x where in the equation another preassigned variable is used
solve ('cos(x/b) + sin(x/b) – 0.7==x')
here 'b' should be a constant that's already been assigned earlier
also i'm having trouble getting answers from 'solve' to be stored in a variable. any help would be greatly appreciated.
syms xb = 3; % use your own value heremy_answer = solve (cos(x/b) + sin(x/b) - 0.7==x);
my_answer =0.43360539229972390998442935562107
|
2021-05-09 01:24:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33474263548851013, "perplexity": 2195.7375069091477}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988953.13/warc/CC-MAIN-20210509002206-20210509032206-00456.warc.gz"}
|
https://en.wikipedia.org/wiki/Residual_neural_network
|
# Residual neural network
Canonical form of a residual neural network. A layer − 1 is skipped over activation from − 2.
A residual neural network (ResNet) is an artificial neural network (ANN) of a kind that builds on constructs known from pyramidal cells in the cerebral cortex. Residual neural networks do this by utilizing skip connections, or shortcuts to jump over some layers. Typical ResNet models are implemented with double- or triple- layer skips that contain nonlinearities (ReLU) and batch normalization in between.[1][2] An additional weight matrix may be used to learn the skip weights; these models are known as HighwayNets.[3] Models with several parallel skips are referred to as DenseNets.[4][5] In the context of residual neural networks, a non-residual network may be described as a plain network.
A reconstruction of a pyramidal cell. Soma and dendrites are labeled in red, axon arbor in blue. (1) Soma, (2) Basal dendrite, (3) Apical dendrite, (4) Axon, (5) Collateral axon.
One motivation for skipping over layers is to avoid the problem of vanishing gradients, by reusing activations from a previous layer until the adjacent layer learns its weights. During training, the weights adapt to mute the upstream layer[clarification needed], and amplify the previously-skipped layer. In the simplest case, only the weights for the adjacent layer's connection are adapted, with no explicit weights for the upstream layer. This works best when a single nonlinear layer is stepped over, or when the intermediate layers are all linear. If not, then an explicit weight matrix should be learned for the skipped connection (a HighwayNet should be used).
Skipping effectively simplifies the network, using fewer layers in the initial training stages[clarification needed]. This speeds learning by reducing the impact of vanishing gradients, as there are fewer layers to propagate through. The network then gradually restores the skipped layers as it learns the feature space. Towards the end of training, when all layers are expanded, it stays closer to the manifold[clarification needed] and thus learns faster. A neural network without residual parts explores more of the feature space. This makes it more vulnerable to perturbations that cause it to leave the manifold, and necessitates extra training data to recover.
## Biological analogue
The brain has structures similar to residual nets, as cortical layer VI neurons get input from layer I, skipping intermediary layers.[6] In the figure this compares to signals from the apical dendrite (3) skipping over layers, while the basal dendrite (2) collects signals from the previous and/or same layer.[note 1][7] Similar structures exists for other layers.[8] How many layers in the cerebral cortex compare to layers in an artificial neural network is not clear, nor whether every area in the cerebral cortex exhibits the same structure, but over large areas they appear similar.
## Forward propagation
For single skips, the layers may be indexed either as ${\textstyle \ell -2}$ to ${\textstyle \ell }$ or as ${\textstyle \ell }$ to ${\textstyle \ell +2}$. (Script ${\textstyle \ell }$ used for clarity, usually it is written as a simple l.) The two indexing systems are convenient when describing skips as going backward or forward. As signal flows forward through the network it is easier to describe the skip as ${\textstyle \ell +k}$ from a given layer, but as a learning rule (back propagation) it is easier to describe which activation layer you reuse as ${\textstyle \ell -k}$, where ${\textstyle k-1}$ is the skip number.
Given a weight matrix ${\textstyle W^{\ell -1,\ell }}$for connection weights from layer ${\textstyle \ell -1}$ to ${\textstyle \ell }$, and a weight matrix ${\textstyle W^{\ell -2,\ell }}$ for connection weights from layer ${\textstyle \ell -2}$ to ${\textstyle \ell }$, then the forward propagation through the activation function would be (aka HighwayNets)
{\displaystyle {\begin{aligned}a^{\ell }&:=\mathbf {g} (W^{\ell -1,\ell }\cdot a^{\ell -1}+b^{\ell }+W^{\ell -2,\ell }\cdot a^{\ell -2})\\&:=\mathbf {g} (Z^{\ell }+W^{\ell -2,\ell }\cdot a^{\ell -2})\end{aligned}}}
where
${\textstyle a^{\ell }}$ the activations (outputs) of neurons in layer ${\textstyle \ell }$,
${\textstyle \mathbf {g} }$ the activation function for layer ${\textstyle \ell }$,
${\textstyle W^{\ell -1,\ell }}$ the weight matrix for neurons between layer ${\textstyle \ell -1}$ and ${\textstyle \ell }$, and
${\textstyle Z^{\ell }=W^{\ell -1,\ell }\cdot a^{\ell -1}+b^{\ell }}$
Absent an explicit matrix ${\textstyle W^{\ell -2,\ell }}$ (aka ResNets), forward propagation through the activation function simplifies to
${\displaystyle a^{\ell }:=\mathbf {g} (Z^{\ell }+a^{\ell -2})}$
Another way to formulate this is to substitute an identity matrix for ${\textstyle W^{\ell -2,\ell }}$, but that is only valid when the dimensions match. This is somewhat confusingly called an identity block, which means that the activations from layer ${\textstyle \ell -2}$ are passed to layer ${\textstyle \ell }$ without weighting.
In the cerebral cortex such forward skips are done for several layers. Usually all forward skips start from the same layer, and successively connect to later layers. In the general case this will be expressed as (aka DenseNets)
${\displaystyle a^{\ell }:=\mathbf {g} \left(Z^{\ell }+\sum _{k=2}^{K}W^{\ell -k,\ell }\cdot a^{\ell -k}\right)}$.
## Backward propagation
During backpropagation learning for the normal path
${\displaystyle \Delta w^{\ell -1,\ell }:=-\eta {\frac {\partial E^{\ell }}{\partial w^{\ell -1,\ell }}}=-\eta a^{\ell -1}\cdot \delta ^{\ell }}$
and for the skip paths (nearly identical)
${\displaystyle \Delta w^{\ell -2,\ell }:=-\eta {\frac {\partial E^{\ell }}{\partial w^{\ell -2,\ell }}}=-\eta a^{\ell -2}\cdot \delta ^{\ell }}$.
In both cases
${\textstyle \eta }$ a learning rate (${\textstyle \eta <0)}$,
${\textstyle \delta ^{\ell }}$ the error signal of neurons at layer ${\textstyle \ell }$, and
${\textstyle a_{i}^{\ell }}$ the activation of neurons at layer ${\textstyle \ell }$.
If the skip path has fixed weights (e.g. the identity matrix, as above), then they are not updated. If they can be updated, the rule is an ordinary backpropagation update rule.
In the general case there can be ${\textstyle K}$ skip path weight matrices, thus
${\displaystyle \Delta w^{\ell -k,\ell }:=-\eta {\frac {\partial E^{\ell }}{\partial w^{\ell -k,\ell }}}=-\eta a^{\ell -k}\cdot \delta ^{\ell }}$
As the learning rules are similar, the weight matrices can be merged and learned in the same step.
## Notes
1. ^ Some research indicates that there are additional structures here, so this explanation is somewhat simplified.
## References
1. ^ He, Kaiming; Zhang, Xiangyu; Ren, Shaoqing; Sun, Jian (2015-12-10). "Deep Residual Learning for Image Recognition". arXiv:1512.03385 [cs.CV].
2. ^ He, Kaiming; Zhang, Xiangyu; Ren, Shaoqing; Sun, Jian (2016). "Deep Residual Learning for Image Recognition" (PDF). Proc. Computer Vision and Pattern Recognition (CVPR), IEEE. Retrieved 2020-04-23.
3. ^ Srivastava, Rupesh Kumar; Greff, Klaus; Schmidhuber, Jürgen (2015-05-02). "Highway Networks". arXiv:1505.00387 [cs.LG].
4. ^ Huang, Gao; Liu, Zhuang; Weinberger, Kilian Q.; van der Maaten, Laurens (2016-08-24). "Densely Connected Convolutional Networks". arXiv:1608.06993 [cs.CV].
5. ^ Huang, Gao; Liu, Zhuang; Weinberger, Kilian Q.; van der Maaten, Laurens (2017). "Densely Connected Convolutional Networks" (PDF). Proc. Computer Vision and Pattern Recognition (CVPR), IEEE. Retrieved 2020-04-23.
6. ^ Thomson, AM (2010). "Neocortical layer 6, a review". Frontiers in Neuroanatomy. 4: 13. doi:10.3389/fnana.2010.00013. PMC 2885865. PMID 20556241.
7. ^ Winterer, Jochen; Maier, Nikolaus; Wozny, Christian; Beed, Prateep; Breustedt, Jörg; Evangelista, Roberta; Peng, Yangfan; D’Albis, Tiziano; Kempter, Richard (2017). "Excitatory Microcircuits within Superficial Layers of the Medial Entorhinal Cortex". Cell Reports. 19 (6): 1110–1116. doi:10.1016/j.celrep.2017.04.041. PMID 28494861.
8. ^ Fitzpatrick, David (1996-05-01). "The Functional Organization of Local Circuits in Visual Cortex: Insights from the Study of Tree Shrew Striate Cortex". Cerebral Cortex. 6 (3): 329–341. doi:10.1093/cercor/6.3.329. ISSN 1047-3211. PMID 8670661.
|
2020-09-23 17:35:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 39, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8303400278091431, "perplexity": 5263.053772854443}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400211096.40/warc/CC-MAIN-20200923144247-20200923174247-00064.warc.gz"}
|
https://www.cuemath.com/maths/division-of-fractions/
|
Division of Fractions
Go back to 'Fractions'
Dividing a fraction by a whole number
What does \begin{align} \frac{1}{8} \div 2 \end{align} mean? It means, \$we are distributing/dividing \begin{align} \frac{1}{8} \end{align} litres of water among $$2$$ vessels.
Let us represent \begin{align} \frac{1}{8} \end{align} litres of water visually.
Equally dividing the water we have, among two vessels means that each vessel gets,
But, if we are to tell what fraction of our water was poured into each vessel, we need to know how large the shaded portion is in this figure. And this becomes obvious only when the whole is divided into parts of the same size as our shaded portion.
Doing so, we get,
One out of the $$16$$ parts is shaded – which means that each vessel gets \begin{align} \frac{1}{16} \end{align} litres of water.
It can also be written as,
\begin{align} \frac{1}{8} \div 2 = \frac{1}{8} \times \frac{1}{2} = \frac{1}{16} \end{align}
This method is also known as ‘Reciprocal method’.
Dividing a whole number by a fraction
What does it mean to divide a whole number by a fraction? Let us consider dividing $$8$$ by \begin{align} \frac{1}{2}. \end{align}
It means distributing/dividing $$8$$ litres of water among vessels \begin{align} \frac{1}{2} \end{align} litres in size/volume. How many such \begin{align} \frac{1}{2} \end{align} litre vessels are needed? To answer this, let’s see how many \begin{align} \frac{1}{2}. \end{align} litre vessels are needed to fill a $$1$$ litre bucket. The answer is $$2.$$
There are $$16$$ one-half litres in $$8$$ litres. Hence we would need $$16$$ half litre vessels to equally distribute $$8L$$ of water.
By the reciprocal method it would be,
\begin{align} 8 \div \frac{1}{2} = \frac{8 \times 2}{1} = 16 \end{align}
It can be simply put as
Dividing a fraction by another fraction
What does it mean to divide a fraction by a fraction? Let us consider \begin{align} \frac{1}{2} \div \frac{1}{4}. \end{align} It means dividing \begin{align} \frac{1}{2} \end{align} into pieces each \begin{align} \frac{1}{4} \end{align} units in size. Or alternately, we can consider this as a situation in which we are distributing/dividing \begin{align} \frac{1}{2} \end{align} litres of water among vessels \begin{align} \frac{1}{4} \end{align} litres in size/volume.
How many \begin{align} \frac{1}{4} \end{align} litre vessels are needed? The answer is $$2.$$
By the reciprocal method,
\begin{align} \frac{1}{2} \div \frac{1}{4} = \frac{1}{2} \times \frac{4}{1} = \frac{4}{2} = 2 \end{align}
The reciprocal method is simple and easy to use, simply put it is,
Tips and Tricks
• Tip: Always take the reciprocal first and switch the division sign with multiplication before simplifying. That way you dont have to consider whether you are simplifying one of the given fractions of across both of them.
• Remember, after taking the reciprocal you have a multiplication statement. Now simplification can also be done across the two fractions. E.g. if there is a common factor between the numerator of one of the fractions and the denominator of the other fraction, you can simplify them and proceed.
E.g. \begin{align} \frac{5}{28} \times \frac{7}{9} \end{align} can be simplified to \begin{align} \frac{5}{4} \times \frac{1}{9} \end{align} before multiplying.
Common mistakes or misconceptions
• Children may simply replace the division symbol with a multiplication symbol and solve the problem. This happens when they try and “remember” the rule instead of understanding the concept. They know that the operation changes but do not understand why and end up blindly replacing the symbol and getting an incorrect answer.
• Children may “cancel” common factors across the two fractions before taking the reciprocal.Students are expected to take the reciprocal of the second fraction and switch the division sign with multiplication. While simplifying each fraction before this step won't change the answer, simplification across the two fractions (the numerator of one and the denominator of the other) will lead to an error.
Divide
Q1. \begin{align} \frac{5}{6} \div 2 \end{align}
Q2. \begin{align} 5\frac{3}{5} \div 7 \end{align}
Q3. \begin{align} 2\frac{1}{3} \div 2 \end{align}
|
2020-02-24 02:31:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 44, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999085664749146, "perplexity": 1203.399604031078}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145869.83/warc/CC-MAIN-20200224010150-20200224040150-00240.warc.gz"}
|
https://www.nature.com/articles/s41421-021-00280-3?error=cookies_not_supported&code=7b19f83a-d2c9-4533-b09b-439ae2851657
|
## Introduction
Parkinson’s disease (PD) is the second most common neurodegenerative disorder in the aging population after Alzheimer’s disease. PD is characterized by the loss of dopaminergic neurons in substantia nigra, leading to severe and progressive dyskinesia, including bradykinesia, rest tremor, rigidity and a variety of non-motor symptoms, such as disorders of mood, affect with apathy, and cognitive dysfunction1. It is estimated that PD affects one percent of the population over the age of 60 years2,3. Overall, more than 10 million people worldwide have PD4, and 80% of PD patients will eventually develop dementia5.
Increasing studies suggest that immune system dysfunction plays important roles in the pathogenesis of PD, including clinical and genetic associations with autoimmune disease, cellular and humoral immune dysfunction, imaging evidence of inflammatory cell activation and immunomodulatory disorders in experimental models of PD6,7,8,9. This complex disease is likely of autoimmune origin, but many questions remain unanswered despite a vast amount of available literature. On the one hand, several studies have reported the alteration of the percentage of peripheral blood T cells in PD patients10, but the relative contribution of each cell subtype to the disease etiology remains unclear10. On the other hand, CD8+ and CD4+ T cells were reported to invade the brain in both postmortem human PD specimens and in the mouse model of PD9,11, but the composition and interaction of T cell subtypes in human peripheral blood and cerebrospinal fluid and their potential ability to infiltrate the central nervous system remain unclear.
Single-cell RNA sequencing has emerged as a powerful technology for studying the heterogeneity of complex tissues, which provides higher resolution of cellular differences and reveals important functional insights that are masked in bulk analysis of cell populations12,13. Single-cell T cell receptor (TCR) sequencing provides TCR sequences for each cell14. The same TCR sequences indicate T cell clonal expansion patterns and T cell lineages, which are pivotal for recognizing endogenous and exogenous antigens presented by the major histocompatibility complex (MHC)15. Recently, single-cell transcriptome and TCR sequencing has been applied to analyze immune cells in patients with Alzheimer’s disease and multiple sclerosis, revealing T cell expansion signatures and their relationship with nervous system inflammation16,17. Large-scale single-cell sequencing of lymphocytes may help us to better understand the adaptive immune response in PD.
Given the chronic inflammatory nature of PD, T cell immunity may be important for disease onset. Here, we used single-cell transcriptome and TCR sequencing to systematically characterize the composition, function and lineage relationship of T lymphocytes in the blood and cerebrospinal fluid (CSF) of PD. In total, 21 T cell subsets with distinct functions were identified from 103,365 T cells. Integrative analyses of single-cell gene expression and TCRs revealed connectivity and potential differentiation trajectories of these subtypes and provided novel evidence of clonal expansion of T lymphocytes patrolling in the blood and cerebrospinal fluid of PD. This unprecedentedly large-scale transcriptome and immune profiling data of T cells can be used as a valuable resource for studying the basic characteristics of PD and potentially guiding effective immunotherapy strategies.
## Results
### Single-cell transcriptome and TCR sequencing of T cells in PD patients and healthy controls
We conducted a comprehensive analysis of single-cell transcriptome and TCR profiling of T cells in the blood and cerebrospinal fluid of PD patients (Fig. 1a). Fresh blood samples were collected from 8 PD patients and 6 healthy controls. CD3+ T cells were sorted by flow cytometry, and single-cell 5’ gene-expression and V(D)J libraries were prepared on the 10× platform (10× Genomics, CA, USA). Another 7 single-cell datasets from healthy controls were downloaded from publicly available datasets (Supplementary Table S1). In addition, publicly available single-cell immune profiling datasets from CSF, including 6 PD patients and 9 healthy controls16, were compared to better understand clonal expansion of lymphocyte T cells in PD. In total, we obtained single-cell transcriptome data for 103,365 T cells and single-cell TCR sequencing data for 113,690 T cells, of which 84,384 cells have both gene expression and TCR profiling data (Supplementary Table S1).
### T cells exhibit a specific composition and transcriptome in PD
To reveal the internal structure and potential functional subtypes of the entire T cell population, we used a graph-based clustering approach implemented in Seurat18,19 to perform unsupervised clustering of all T cells. T cells were visualized in 2D space using uniform manifold approximation and projection (UMAP) based on the gene expression profiling. In total, we identified 21 distinct clusters representing different cell types, including 11 clusters for conventional CD4+ T cells, 2 clusters for regulatory CD4+ T cells, 5 clusters for CD8+ T cells, 1 gamma delta T cell cluster, 1 MAIT cell cluster and 1 double-negative T cell cluster (Fig. 1b). Cell types were manually annotated by assessing the expression of classic marker genes and their expression similarity with purified bulk RNA-seq datasets20,21,22,23,24 (Fig. 1c; Supplementary Fig. S1). Five major cell types, including CD8+ T cells (CD8), CD4+ T cells (CD4), mucosal associated invariant T cells (MAIT), gamma delta T cells (gdT) and double-negative T cells (DNT), were highlighted in Fig. 1d.
Regarding CD4+ T cells, 3 clusters (C2, C4 and C8 clusters) were annotated as naïve CD4+ T cells characterized by naïve T cell markers SELL, CCR7, TCF7 and LEF1; C1, C9, C7, C12 and C19 were separately annotated as central memory CD4+ T cells, classic Th1, Th2, Th17 and Tfh cells, respectively, based on correlation analysis with bulk RNA-seq datasets of purified immune cells20,21,22,23,24 (Supplementary Fig. S1a); C13 was annotated as cytotoxic CD4+ T cells (CD4 CTL) with high expression of CD4 and cytotoxic genes GZMA, GZMB, PRF1 and NKG7; C17 and C18 were annotated as regulatory CD4+ T cells with high expression of Treg markers FOXP3, CTLA4, TIGIT and IL2RA; C20 and C21 were not assigned to specific CD4+ T cell types due to insufficient evidence (Fig. 1c; Supplementary Fig. S1).
For CD8+ T cells, the C5 cluster was annotated as naïve CD8+ T cells that highly expressed naïve cell markers SELL, CCR7, TCF7 and LEF1; 2 clusters (C6 and C11) were annotated as terminal effector CD8+ T cells characterized by effector markers, such as GZMA, GZMB, PRF1, NKG7; C3 cluster was annotated as transitional CD8+ T cells with high expression of the transitional marker gene GZMK25; the C15 cluster was annotated as central memory CD8+ T cells (TCM) with high expression of TCM markers CD27, SELL and CCR7 (Fig. 1c; Supplementary Fig. S1).
The remaining T cells formed 3 clusters, including 1 gamma delta T cell cluster, 1 MAIT cell cluster and 1 double-negative T cell cluster. C10 was annotated as Vd2 gamma delta T cells, in which 93% of the cells exhibited high gene expression of TRDV2 and TRGV9 and αβTCR was not detected in 93% of the cells (Supplementary Table S2). In total, 69% of the cells from C10 were annotated as Vd2 gd T cells by purified bulk RNA-seq datasets Monaco et al.21 (Supplementary Fig. S1). C16 was annotated as double-negative T cells, in which more than 60% of the cells express neither CD4 nor CD8 (Supplementary Fig. S1). C14 was annotated as MAIT cells with absolute superiority of the recombination ratio of TRAV1-2 and TRAJ33 gene segments in the TCRα chain; moreover, correlation analysis also revealed the closest similarity to purified MAIT cells from Monaco et al.21 (Fig. 1c; Supplementary Table S2 and Fig. S1a).
To understand whether PD patients follow the reported T lymphocyte changes10, we compared the proportion of CD4+ T cells and CD8+ T cells in the blood between PD patients and healthy controls. Among the identified cell types in our single-cell transcriptome analysis, the proportion of CD8+ T cells was significantly increased in the blood of PD patients compared to healthy controls (t-test, P-value = 0.018), whereas the proportion of CD4+ T cells significantly decreased (t-test, P-value = 0.014) (Fig. 1e, f). The overall CD4/CD8 ratio in PD patients (ratio = 1.66) was significantly reduced compared with healthy controls (ratio = 2.44) (t-test, P-value = 0.0048). Published studies have shown that the CD4/CD8 ratio in the peripheral blood of healthy adults is approximately 2:1, and an altered ratio is indicative of diseases that are associated with the immunodeficiency or autoimmunity26,27,28. Significant decrease in the CD4/CD8 ratio may indicate an immune disorder in PD.
### Clonally expanded T cells in the blood and CSF of PD
To gain insight into the clonal expansion of T cells in PD, we performed comparison analysis for scTCR-seq data from PD patients and healthy controls. Cells with the same CDR3 sequences for both the TCR α-chain and β-chain were defined as the same clonotype. We detected 113,690 cells from single-cell TCR sequencing data, forming 87,832 unique clonotypes, in which 4458 clonotypes contained at least two cells, indicating the clonal expansion of T cells (Supplementary Table S5). T cell diversity in the blood was significantly lower in PD patients compared with healthy controls (t-test, P-value = 6.87E−3, Fig. 2a). The number of clonotypes with the same clone size was significantly increased in PD patients compared with healthy controls (100 random sampling tests, median P-value = 5.98e−7, Fig. 2b). These results indicate the existence of T cell clonal expansion in the blood of PD patients.
In CSF, T cell diversity was slightly reduced in PD patients compared with healthy controls, and the statistical P-value was not significant, which may be due to the small number of detected cells and the small number of samples (t-test, P-value = 0.076, Supplementary Fig. S2a). However, the number of clonotypes with the same clone size was significantly increased in the CSF of PD patients compared with healthy controls (100 random sampling tests, median P-value = 6.25e−3, Supplementary Fig. S2b). And, the percentage of T cells with clone size > 2 was significantly increased in the CSF of PD patients compared to healthy controls (t-test, P-value = 0.033, Supplementary Fig. S2c). These results suggest that T cell clonal expansion also occurs in CSF of PD patients.
Clonally expanded T cells were widely distributed in each cluster, especially in CD8+ T cells (Fig. 2c). T cell composition distributed by clone size (NA, = 1, ≥2, ≥20, ≥100, NA means no TCR detected in these cells) in each blood sample is shown in Fig. 2d. The percentage of T cells with clone size ≥2 and ≥100 were significantly increased in the blood of PD patients compared with healthy controls (t-test, P value = 0.0030 and 0.0074, respectively) (Fig. 2d). We observed that cell type composition in each sample varied by clone size (Fig. 2d). T cells without αβTCR detected in scTCR-seq were mainly Vd2 gd T cells, while the clonotypes containing only one cell were mainly naïve CD4+ T cells (Fig. 2d). Larger clonotypes exhibited a nonuniform distribution of cell types with an enrichment for transitional and terminal effector CD8+ T cells (Fig. 2d).
### Clonal linkage of CD8+ T cells form a gradient of transcriptional states in PD
We performed in-depth analysis of CD8+ T cells across all PD patients and healthy controls. Interestingly, CD8+ T cells exhibited a nonuniform distribution of functional states with significant enrichment for terminal effector CD8+ T cells (C6 cluster, t-test, FDR = 0.015) and depletion of naïve CD8+ T cells (C5 cluster, t-test, FDR = 0.012) in the blood of PD patients (Fig. 3a). The expression of signature genes fluctuated significantly in these five CD8+ T cell clusters, and terminal effector CD8+ T cells exhibited wider and higher expression of cytotoxic genes (Fig. 3b). Fisher’s exact test showed that clonally expanded T cells in PD patients were significantly enriched in transitional and terminal effector CD8+ T cells, especially in C3 and C6 clusters (Fisher’s exact test, FDR = 1.10e-46 and 1.02e-8, respectively).
To further understand the relationships among CD8+ T cell clusters, we used diffusion maps to visualize these cells on a pseudotime trajectory (Fig. 3c). Interesting, the first diffusion component separated central memory cells from activated CD8+ T cells and was highly correlated with cytotoxic-related genes, such as GZMH, PRF1, FGFBP2, as well as proteins regulating cell migration and adhesion, such as CX3CR1, and ADGRG1 (Fig. 3c; Supplementary Fig. S3a–b). The second diffusion component showed two different differentiation directions of terminal effector CD8+ T cells (Fig. 3c). The upper branch (C6 cluster) was highly correlated with cell adhesion proteins, such as ITGAM and ITGB1, and the tissue-resident T cell transcription regulator protein ZNF683 (encodes for Hobit), whereas the lower differentiation branch (C11 cluster) was highly correlated with killer-like receptors, such as KLRC3 and KLRF1, and killer cell immunoglobulin-like receptors, such as KIR2DL3 and KIR3DL2 (Fig. 3c; Supplementary Fig. S3c, d).
To further understand the functional differences between terminal effector CD8+ T cells, we analyzed differentially expressed genes between the upper and lower differentiation branches (C6 and C11 clusters) (Supplementary Fig. S3e). The upper branch (C6 cluster) highly expressed cell adhesion and migration genes, such as ITGAM, ITGB1, CD226 and S100A4; T-cell activation and proliferation markers, such as CD52 and S100A6; and tissue-resident T cell transcription regulator protein ZNF683 (Hobit) (Supplementary Fig. S3e). The KEGG pathway analysis results revealed that the C6 up-regulated genes were highly associated with cell adhesion molecules (KEGG: hsa04514, FDR = 0.0015) and leukocyte transendothelial migration (KEGG: hsa04670, FDR = 0.0047) (Supplementary Fig. S3f), suggesting that these cells may be involved in tissue immunity. Genes related to cell survival and cytotoxic function, such as PRSS2329, SPON230 and ZNF683 (Hobit)31,32, were also highly expressed in the C6 cluster (Fig. 3b; Supplementary Fig. S3e). The lower branch (C11 cluster) highly expressed genes enriched in the natural killer cell-mediated cytotoxicity pathway (KEGG: hsa04650, FDR = 2.78e−10, Supplementary Fig. S3g), including killer-like receptors KLRC3, KLRF1, and KLRB1 and killer cell immunoglobulin-like receptors KIR3DL1, KIR3DL2 and KIR2DL3 (Supplementary Fig. S3e). This group of CD8+ T cells functioned more like nonclassical NKT cells33.
Moreover, the sample composition distribution of the cells in diffusion trajectory reveals that the proportion of CD8+ T cells in the blood of PD patients gradually increased with the process of differentiation, especially in the upper differentiation branch (Fig. 3d). Larger clonotypes tend to be located at the end of the effector branch (Fig. 3e). A process of transformation from central memory CD8+ T cells (C15 cluster) to transitional CD8+ T cells (C3 cluster) followed by terminal effector CD8+ T cells (C6 cluster) in the blood of PD patients (Fig. 3d) is clearly observed. The distribution of T cell clonotypes sharing the same TCRs further supported this transformation (Fig. 3f). Tracking T cell clonotypes and transcriptional phenotypes, we found that 55 clonotypes contained cells distributed in central memory CD8+ T cells (C15 cluster), transitional CD8+ T cells (C3 cluster), and terminal effector CD8+ T cells (C6 cluster), such as clonotype23, clonotype24, clonotype38 and clonotype103 (Fig. 3f), suggesting that TCRs may be involved in the process of CD8+ T cell differentiation in PD. Altogether, these results revealed a distinct cluster of terminal effector CD8+ T cells (C6 cluster), which exhibits obvious clonal expansion and cytotoxic differentiation by TCR activation in PD patients and is distinguished by expressing numerous genes involved in cell adhesion, migration, survival and cytotoxicity.
### A marked clonal expansion of cytotoxic CD4+ T cells in PD
CD4+ T cells are a large population of cells that play an important role in peripheral immunity in PD11. We annotated 8 major CD4+ T cell subtypes, including naïve CD4+ T cells (C2, C4 and C8 clusters), central memory CD4+ T cells (C1 cluster), cytotoxic CD4+ T cells (CD4 CTL, C13 cluster), Th1 cells (C9 cluster), Th2 cells (C7 cluster), Th17 cells (C12 cluster), Tfh cells (C19 cluster), and regulatory T cells (C17 and C18 clusters). Some highly expressed genes in each cluster were shown in Supplementary Fig. S4a. CD4 CTLs (C13 cluster) exhibited significantly higher expression of CD4 and several cytotoxic genes, such as GZMA, GZMB, GZMH and NKG7 (Supplementary Fig. S4a). There is no significant difference in the composition of CD4+ T cell subtypes between PD patients and healthy controls (Supplementary Fig. S4b–c). To understand the relationship among these CD4+ T cells, we constructed single-cell trajectories using R package Monocle 2 (version 2.14.0) (Fig. 4a). Central memory T cells (C1 cluster, TCM) were selected as the starting cell type of the differentiation (Fig. 4a). Consistent with the clustering analyses, we observed a process of transformation from central memory T cells (C1 cluster, TCM) to effector T cells (C9, C7 and C12 clusters, TEM) followed by CD4 CTLs (C13 cluster, CTL) (Fig. 4a). Regulatory CD4+ T cells (C17 and C18 clusters, Tregs) were reasonably located in a different branch (Fig. 4a). Larger clonotypes tend to be located at the end of the effector branch (Fig. 4b).
To gain insight into the clonal relationship among CD4+ T cells, we used Fisher’s exact test to identify PD-specific clonally expanded CD4+ T cell clusters. Compared to healthy controls, clonally expanded CD4+ T cells were significantly increased in Th1 cells and CD4 CTLs (C9 and C13 cluster) in the blood of PD patients (Fisher’s exact test, FDR = 8.58e−28 and 3.92e−14, respectively, Fig. 4c). Specifically, Th1 cells in the blood of PD patients accounted for 50.2% of total Th1 cells (C9 cluster), and this proportion increased to 65.5% when the background was reduced to clonally expanded Th1 cells (Fig. 4e). Regarding CD4 CTLs (C13 cluster), 74.4% of this population were from the blood of PD patients, and this percentage increased to 77.3% when the background was reduced to clonally expanded CD4 CTLs (Fig. 4f). CD4 CTLs tend to have larger clonotypes in PD patients with 371 clonotypes detected from 2301 cells (6.2 cells per clonotype), while the average clone size was 3.2 (258 clonotypes from 829 cells) in healthy controls (Fig. 4d). We used diffusion maps to further visualize the relationships among TCM, Th1, Th2 and CD4 CTLs (Fig. 4g). Both Th1 and Th2 cells originated from TCM cells and began to differentiate in parallel. Thereafter, the differentiation trajectory separated, and some Th1 cells eventually transformed to CD4 CTLs (Fig. 4g). Larger clonotypes tend to distribute at the end of the CTL branch (Fig. 4g). The proportion of PD cells gradually increased along the trajectory (Fig. 4h). The average expression of 4 major cytotoxic genes GZMA, GZMB, PRF1 and NKG7, which are known to be abundant in CD4 CTLs34,35, increased along the differentiation trajectory of CD4 CTLs (Supplementary Fig. S5a, b). The evidence of TCR sharing further supported the state transition from Th1 cells to CD4 CTLs. In total, 81 clonotypes were identified with both Th1 cells and CD4 CTLs, such as clonotype28 and clonotype65 (Supplementary Fig. S5c). These results reveal that a group of CD4 CTLs derived from TCR-activated Th1 cells were significantly clonally expanded in PD patients.
Th1 cells could have cytotoxic effects on dopaminergic neurons by releasing IFNγ, which activates and recruits other immune cells to amplify local inflammation6. It has also been reported that CD4+ T cell mediated dopaminergic toxicity does not require the expression of IFNγ in a mouse model of PD11, suggesting the presence of cytotoxic CD4+ T cells infiltration in the central nervous system. Our study reveals that both Th1 and CD4 CTL were significantly clonally expanded by TCR-dependent activation in the blood of PD patients, suggesting that these two cell types in the blood may be the source of central infiltrating CD4+ T cells36. Inhibitors that direct or indirect target of these T cell types may block the immune response in PD patients by preventing T cell proliferation6.
### Antigen-specific T cells and candidate antigenic epitopes in PD
Increasing evidence indicates that abnormal processing of self-proteins can produce antigens in PD37. T cells recognize these antigens, coordinate local innate immune responses, and drive dopaminergic neuronal death by activating immune pathways5. α-Synuclein (α-syn) is a presynaptic neuron protein that is genetically and pathologically related to PD38. Recent studies have shown that fibrils of α-syn can recruit peripheral immune cells prior to neurodegeneration in the rat brain39. Misfolded α-syn is not only prevalent in the central nervous system but can also cause peripheral immune responses10. A group of peptides derived from α-syn have been reported as epitopes driving T cell responses in PD patients8. In addition, the mitochondrial antigen presentation pathway is also associated with adaptive immunity in PD40. Recognition of antigen-specific T cells is crucial for understanding the adaptive immune response in PD.
TCR clustering based on CDR3 sequence similarity is an effective approach to identify antigen-specific T cells41,42 as TCRs sharing similar motifs from distinct individuals may also share antigen specificity. In total, we obtained 110,912 βCDR3s from 113,690 T cells and performed pairwise alignment. We used an ultrafast algorithm, iSMART43, specifically designed to handle large amount of TCR clustering and detected 1778 TCR specificity groups (Supplementary Table S7). To identify PD-specific TCRs, we screened 67 TCR specificity groups with at least one TCR from blood and one TCR from CSF of the PD patients (Fig. 5a, Supplementary Table S7). These groups were considered as candidates for PD-specific TCRs, most of which were found exclusively in PD patients (Fig. 5a).
The identification of PD-specific TCRs also enables us to further uncover the candidate antigenic epitopes from the PD-related proteins, such as α-syn. We used several steps to find the relationship between PD-specific TCRs and potential antigenic epitopes. First, high resolution HLA typing was obtained from the whole genome sequencing data of our 8 PD patients (Supplementary Table S6). Second, we searched NCBI protein database with keywords of ‘alpha-synuclein’ and ‘mitochondrial’ and obtained all the α-syn and mitochondrial protein sequences. After removing the redundancy, these protein sequences were separated into 9-mer and 15-mer peptides to predict their binding affinity with MHC I and MHC II alleles, respectively. Finally, we used samples shared by MHC genes and TCRs to construct the relationship between MHC-peptides and TCRs (Fig. 5b). A relatively strong sample sharing relationship was noted between 14 TCR specificity groups and 11 HLA alleles (Fig. 5b). These HLA alleles were predicted to bind to at least one peptide from α-syn or mitochondrial proteins (Fig. 5b; Supplementary Table S7). Fortunately, two of our predicted peptides ‘KTKEGVLYVGSKTKE’ and ‘GKTKEGVLYVGSKTK’ have been reported to drive helper and cytotoxic T cell responses in PD patients8 (Supplementary Fig. S6). In summary, we used TCR clustering and machine learning to screen a group of PD-specific TCRs and their candidate epitopes, providing potential targets for blood and cerebrospinal fluid T cells to participate in neuronal degeneration.
### Possible mechanism of cytotoxic T cells passing through the BBB in PD
The blood-brain barrier (BBB) is a physical barrier formed by endothelial cells to prevent blood proteins, antibodies and immune cells from penetrating into the brain parenchyma44. However, under the continuous action of chronic inflammation, the tight junctions between endothelial cells are weakened or destroyed, thus allowing antibodies or immune cells to pass through45. Postmortem studies of the brain have confirmed that the infiltration of lymphocytes into the brain contributes to the neurodegeneration of PD9,11,46. Numerous adhesion molecules are involved in the recruitment of leukocytes, especially lymphocytes, into the central nervous system (CNS) during inflammation. The integrin leukocyte function-associated antigen-1 (LFA-1) plays a key role in leukocyte adhesion cascade by binding ICAM-1 (and ICAM-2) on the surface of endothelial cells47. Very late activation antigen-4 (VLA-4) mediates the adhesion of lymphocytes and monocytes to VCAM-1 on the surface of activated endothelial cells48. Macrophage-1 antigen (MAC-1) binding to ICAM-1 (and ICAM-2) regulates intravascular crawling49. In addition, several chemokines and their receptors are associated with the recirculation of effector T cells to the BBB. Chemokine receptors (such as CXCR4) on rolling leukocytes interact with chemokines (such as CXCL12) on endothelial cells, activating several signaling pathways (such as PI3K, PLC, RAS- and RHO-family GTPase, and MAPK) and promoting an opened integrin conformation50,51,52. Selectins (SELE, SELP) and their counter ligands (SELPLG) dependent rolling is the earliest observable event of leukocyte recruitment to inflammatory tissues53, which plays a critical role in the recruitment of CD8+ cells in brain vessels of patients with multiple sclerosis during acute attacks54.
We assessed numerous molecules related to cell migration and adhesion and found that many molecules related to BBB penetration were highly expressed in cytotoxic T cells (Fig. 6a; Supplementary Table S3). Integrin family genes (VLA-4, LFA-1, Mac-1) exhibited relatively high expression in transitional CD8+ T cells (C3 cluster), terminal effector CD8+ T cells (C6, C11 clusters) and CD4 CTLs (C13 cluster) (Fig. 6a; Supplementary Table S3). Other cellular chemokines, adhesion molecules and their receptors, such as CCL4, CCL5, CX3CR1, CD99 and SELPLG, were also widely and relatively highly expressed in these cytotoxic T cells (Fig. 6a). Some genes also showed significantly upregulated expression in PD patients (Supplementary Table S4). These genes were significantly enriched in the leukocyte transendothelial migration pathway (KEGG: hsa04670), which may represent a possible mechanism by which cytotoxic T cells pass through the BBB in PD55 (Fig. 6b).
## Discussion
Numerous postmortem studies have confirmed the presence of lymphocyte infiltration in the brain of PD patients11,46. High levels of activated T cells have also been detected in the cerebrospinal fluid of PD patients56. Moreover, lymphocyte infiltration is not a random event caused by damage to the BBB but targeted migration to the vicinity of dopaminergic neurons in the brain of PD patients9,11. Given the chronic inflammatory nature of PD, T cell immunity may be important for disease onset. Therapies targeting T cells can reduce neurodegeneration and motor behavior disorders in animal models of PD57. The study of T cell populations in peripheral blood and cerebrospinal fluid of PD patients will further improve our understanding of the immune pathogenesis of PD.
In this study, we conducted integrative computational analyses to investigate the immunological changes in the blood and cerebrospinal fluid of PD patients compared to healthy controls. We identified a distinct cluster of terminal effector CD8+ T cells significantly clonally expanded in PD patients, which derived from central memory CD8+ T cells by TCR-dependent activation and upregulated both cell adhesion (ITGAM, ITGB1, etc.) and cell survival (PRSS23, SPON2, ZNF683) markers. Notably, we reported a group of cytotoxic CD4+ T cells (CD4 CTLs) significantly clonally expanded in PD patients, which may be a source of central infiltrating cytotoxic CD4+ T cells. Evidence of TCR sharing further supports their differentiation from Th1 cells. These cytotoxic CD8+ and CD4+ T cell populations are strong candidate for potential involvement in the pathogenesis of PD. In addition, we grouped TCRs by CDR3 sequence similarity and provided potential TCR-antigen relationships by MHC-peptide prediction and overlap analyses between samples with the same MHC alleles and TCR groups. Two of our predicted peptides ‘KTKEGVLYVGSKTKE’ and ‘GKTKEGVLYVGSKTK’ have been reported to drive helper and cytotoxic T cell responses in PD patients8 (Supplementary Fig. S6). These findings provide evidence of convergent selection in PD. Future efforts can be made to assess the antigenicity of the predicted epitopes using effector T cells transfected with synthetic TCRs, by testing their cytokine secretion with immunospot assay upon antigen stimulation.
It is estimated that approximately 4 × 1011 T cells circulate in the adult human body58. Cells detected by single-cell sequencing are only the tip of the iceberg and do not completely represent all the immune diversity. It is difficult to find common TCRs from different individuals. It is a good idea to use the similarity of βCDR3 to identify common antigen-specific TCRs in different individuals, but large-scale TCR repertoire sequencing data are still needed to obtain more accurate results. In addition, the diversity of MHC alleles in the population also hinders the identification of antigen-specific T cells shared by the population. Moreover, the limited number of cells detected in cerebrospinal fluid data used in this study also hinders the identification of common clonal T cells between blood and cerebrospinal fluid. In the future, large-scale single cell sequencing data of lymphocytes in blood and cerebrospinal fluid are still necessary, and mixed TCR immune repertoire sequencing data are also needed to assess the diversity of lymphocytes as much as possible.
## Materials and methods
### Human research participants
Eight PD patients (P1–P8) aged 50–70 years with stable and effective L-dopamine medication were recruited in this study. None of the candidates had significant somatic disorders, such as tumor, autoimmune disorders and chronic diseases, as well as psychiatric co-morbidities, including mild cognitive impairment (MCI) and dementia. Six age-matched healthy controls (N1–N6) were also recruited. All participants were procured from the First Affiliated Hospital of Harbin Medical University. This study was approved by the Ethics Committee in the First Affiliated Hospital of Harbin Medical University (Approval number: No. 201985). Informed consent was obtained from all participants.
### Publicly available datasets
In this study, an additional seven healthy controls (N7–N13) were included to enrich the datasets of health controls. Specifically, N7 and N8 were downloaded from the official website of 10× genomics with both scRNA-seq and scTCR-seq data (https://support.10xgenomics.com/single-cellvdj/datasets), and N9–N13 (aged in their 50 to 80 years) were downloaded from Hashimoto et al.35 with only scRNA-seq data.
In addition, publicly available single-cell immune profiling datasets from cerebrospinal fluid16, including 6 PD patients (PD1–PD6) and 9 healthy controls (HC1–HC9), were downloaded and used to better understand clonal expansion of lymphocyte T cells in PD. The average age of CSF samples was 68.71 (8.61 SD). All of these published single-cell transcriptome and immune sequencing data were generated on the 10× Genomics platform.
### Blood sample collection and preparation
Fresh blood samples from eight PD patients (P1–P8) and six age-matched healthy controls (N1–N6) were collected and followed by density gradient centrifugation on Percoll to isolate human peripheral blood mononuclear cells (PBMCs). CD3+ T cells were then isolated from PBMCs by fluorescence-activated cell sorting (FACS) analysis.
### Bulk DNA isolation and sequencing
Genomic DNA of blood was extracted using Invitrogen Genomic DNA Extraction Kits according to the manufacturer’s specification. The concentrations of DNA were quantified using a NanoDrop instrument (Thermo) and the qualities of DNA were evaluated with agarose gel electrophoresis. DNA libraries were constructed by fragmenting genomic DNA (approximately 0.1–1 µg) using the NEBNext Ultra DNA Library Prep Kit. Finally, DNA libraries were sequenced on the Illumina Novaseq 6000 with 150-bp paired end (PE150).
### Single-cell 5′ and V(D)J sequencing
Single-cell 5′ and V(D)J libraries were prepared following the protocol provided by the 10× genomics Chromium Single Cell Immune Profiling Solution. Briefly, CD3+ T cell suspensions (400–1000 living cells per microliter determined by CounterStar) were loaded on a Chromium Single Cell Controller (10× Genomics) to generate single-cell gel beads in emulsion (GEMs) using Chromium Single Cell V(D)J Reagent Kits. Captured cells were lysed, and the released RNAs were barcoded through reverse transcription in individual GEMs. Each single-cell 5’ and V(D)J libraries were sequenced by the Illumina Novaseq 6000 using 150 paired-end reads.
### HLA genotyping
High accuracy of human leukocyte antigen (HLA) allotype (i.e., a set of HLA alleles of an individual) of eight PD patients were characterized by HLA-HD59 based on the information from whole genome sequencing. First, we created an HLA allele dictionary from the current allele information to increase the completeness of applicable alleles. Then, high-quality reads were mapped to the HLA allele dictionary using bowtie260. Finally, suitable pairs of HLA alleles were selected by calculating a score based on weighted read counts59.
### Preprocessing of single-cell transcriptome data
Single-cell transcriptome data were preprocessed using the following steps: First, we used UMI-tools61 to identify cell barcodes and UMIs. Then, cell barcodes and UMIs were appended to the read names to distinguish different cells and different RNA molecules. Read adapters were trimmed using cutadapt62. High-quality reads were then mapped to the GRCh38 (Release-92) human reference genome using STAR63. The number of reads mapping to each genomic gene were counted using featureCounts64. Samtools65 were used to sort and index BAM files, which stores mapped reads in a standard and efficient manner. Then, the UMI-corrected molecular counts were calculated using UMI-tools61. Finally, a local Perl script was used to construct a combined gene expression matrix containing all the sequenced samples.
### Cell quality control
Real cells from empty droplets were called using the emptyDrops function from R package dropletUtils, which assesses whether the RNA content associated with a cell barcode is significantly distinct from the ambient background RNA present within each sample66,67. Cells with FDR ≤ 0.01 (Benjamini-Hochberg corrected) were considered for further analysis. Then, low-quality cells were identified and removed using the isOutlier function in R package scater68, which identifies outliers based on the median absolute deviation (MAD)69. Cells were claimed as low-quality cells if: (1) The cell library size (total UMI counts) is smaller than 3 MADs; (2) The number of detected genes is smaller than 3 MADs; (3) The proportion of mitochondrial gene counts is bigger than 3 MADs. Please see Zhang et al.70 for details. Doublets were identified and filtered by DoubletFinder71 with the expected doublet rate of 0.075. Finally, genes with more than 1 transcript in at least two cells were retained for further analysis.
### Dataset integration and unsupervised clustering
Batch effects were removed, and datasets from each sample were integrated using the standard Seurat v3 integration workflow18,19. First, raw counts of each sample were normalized using a global-scaling normalization method NormalizeData in R package Seurat18,19. This method normalizes the gene expression values for each cell by the total UMI counts in the sample, then multiplies this value by a scale factor (10,000 by default), and log-transforms the result. Highly variable genes were identified in each sample using FindVariableFeatures function in Seurat18,19. To identify shared cell states that are present across blood and cerebrospinal fluid samples, ‘anchors’ between pairs of datasets were identified and used to harmonize the datasets. Finally, the cell-cycle score was calculated using CellCycleScoring function and regressed during data scaling using the ScaleData function in Seurat18,19.
We used a graph-based clustering approach implemented in Seurat18,19 to perform unsupervised clustering of all T cells. First, principal component analysis was computed based on the scaled expression of variable genes. Then, 15 principal components were used to construct a KNN graph using the FindNeighbors function in Seurat18,19, in which the edge weights between any two cells were based on the shared overlap in their local neighborhoods (Jaccard similarity). Finally, cells were clustered using the FindClusters function in Seurat18,19, which used the Louvain algorithm to iteratively group cells together with the goal of optimizing the standard modularity function. Additional K-means clustering was further used to classify cytotoxic T cells into CD8 CTLs and CD4 CTLs (C6 and C13 clusters). Cluster with less than 500 cells were removed from downstream analysis.
Based on the gene expression profiling, a dimensionality reduction method called Uniform Manifold Approximation and Projection (UMAP) was used to visualize T cells in a two-dimensional space. UMAP projections were generated by RunUMAP function in Seurat18,19 based on the first 15 principal components.
### Cell type annotation
Cluster biomarkers were identified using the FindAllMarkers procedure in Seurat18,19, which identified differentially expressed genes for each cluster using a Wilcoxon Rank Sum test. The R package SingleR72 was then used to further annotate single cells by leveraging reference transcriptomic datasets of pure cell types to infer the cell of origin of each single cell independently. Three bulk RNA-seq datasets of purified immune cells (The Database for Immune Cell Expression (Schmiedel et al.20), Monaco Immune Cell Data (Monaco et al.21), the Human Primary Cell Atlas (Mabbott et al.22), BLUEPRINT database (Martens et al.23) and Novershtern Hematopoietic Data (Novershtern et al.24) were selected as reference datasets for single-cell annotation.
Cell clusters were manually annotated by checking the expression of classic marker genes and single-cell annotation by the purified bulk RNA-seq datasets. For example, C2, C4, C8 were annotated as Naïve CD4+ T cells based on two evidence: (1) C2, C4, C8 highly expressed naïve T cell markers SELL, CCR7, TCF7 and LEF1 (Fig. 1c); (2) More than 90% of cells from C2, C4, C8 were annotated as Naïve CD4+ T cells by bulk dataset Martens et al.23 (Supplementary Fig. S1a).
### Differential expression analysis
Differential expression analysis was conducted by using the FindMarkers function in Seurat18,19 with default parameters, which used normalized gene expression values as input. To calculate the logFC value, the average expression values in each group added by 1 (where 1 represents a pseudocount) were divided between two groups and then log-transformed. Genes were claimed as differentially expressed if: (1) Genes should be detected in at least 10% of the cells in either of the two groups; (2) The threshold of logFC is the default value of 0.25; (3) Bonferroni adjusted P-value is less than 0.05. Differentially expressed genes (DEGs) between the blood of PD patients and healthy controls as well as cluster biomarkers of each cell cluster were combined to evaluate the role of cell clusters in the immune response of PD.
### Single-cell trajectory analysis
Monocle 2 (version 2.14.0) was used to investigate transcriptional and functional trajectories of CD4+ T cell clusters (Fig. 4a). Only 7 CD4+ T cell clusters were selected to construct the trajectory due to the limitation of the number of cells processed by Monocle. Given that the direction of pseudotime is arbitrary, we selected central memory CD4+ T cells as the beginning of the trajectory.
Diffusion maps represent a more advanced trajectory inference method, which was introduced by Ronald Coifman and Stephane Lafon73, and the underlying idea is to assume that the data are samples from a diffusion process. Diffusion maps are efficient, scalable and robust and provide better details of cell trajectory74,75. We choose diffusion maps implemented by R package destiny75 to analyze the trajectory of some specific clusters, such as CD8 CTLs (Fig. 3c) and CD4 CTLs (Fig. 4g). Central memory T cells were used to determine the beginning of the trajectory.
### Single-cell V(D)J data processing
Single-cell V(D)J data was processed using Cell Ranger (10× Genomics, version 3.1.0) with –reference = refdata-cellranger-vdj-GRCh38-alts-ensembl-3.1.0 for each sample. Paired α and β CDR3 sequences from blood and cerebrospinal fluid were pooled together to identify common clonotypes across samples. Cells with the same CDR3 sequence for both the α-chain and the β-chain were considered the same clonotype.
### Antigen-specific TCR groups analysis
Clustering of TCRs based on CDR3 similarity is an effective approach to identify antigen-specific T cells41,42 given that TCRs sharing similar motifs from distinct individuals may also share antigen specificity. We grouped all the βCDR3 sequences from blood and cerebrospinal fluid samples and identified antigen-specific TCR groups using iSMART43, which performs a specially parameterized pairwise local alignment on T cell receptor CDR3 sequences to group them into antigen-specific clusters. For a given group with high similarity, the antigen-specific TCR group needs to meet the following conditions: (1) Only one amino acid mismatch is allowed on CDR3; (2) Only one insertion or deletion is allowed on CDR3; (3) V genes within the group should be the same.
### HLA antigen presentation prediction
Prediction of HLA antigen presentation is a key step in identifying antigen epitopes and understanding adaptive immunity of PD. The accumulation of abnormal forms of α-syn is a trigger of PD. Recent evidence suggests a strong relationship between α-syn and adaptive immune system, which may lead to downstream neurodegeneration76. Mitochondrial damage that causes mitochondrial proteins to be presented on the neuron surface also leads to the activation of adaptive immune responses in PD40. Therefore, we focused on these two types of proteins to screen the potential epitopes that can be presented by patients’ MHC alleles. To achieve this, we first searched the NCBI protein database with the keywords of ‘alpha-synuclein’ and ‘mitochondrial’ and obtained all the α-syn and mitochondrial protein sequences. After removing the redundancy, these protein sequences were separated into 9-mer and 15-mer peptides using sliding windows to predict their binding affinity with MHC I and MHC II alleles using NetMHCstabpan77 and NetMHCIIpan78, respectively.
### Measures of TCR diversity
TCR diversity was calculated based on the D50 value79, which is the percentage of dominant T cell clonotypes that account for the cumulative 50% of the total CDR3s counted in the sample79. The more diverse the TCR repertoire, the closer the value is to 50.
The D50 value is defined as follows:
$$D50^j = \frac{{argmin_k\left( {\mathop {\sum }\nolimits_{i = 1}^k N_i^j - \frac{1}{2}\mathop {\sum }\nolimits_{i = 1}^n N_i^j} \right) \times 100}}{n}$$
where n is the total number of unique CDR3s, and $$N_i^j$$ is the frequency of the i-th CDR3 in sample j in the following order:
$$N_1^j \ge N_2^j \ge \ldots N_i^j \ge N_{i + 1}^j \ge \ldots \ge N_n^j$$
|
2022-12-01 00:14:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19771818816661835, "perplexity": 8727.854129784684}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710777.20/warc/CC-MAIN-20221130225142-20221201015142-00277.warc.gz"}
|
https://www.physicsforums.com/threads/can-a-region-of-space-time-be-created-with-no-er.711662/
|
Can a region of space-time be created with no ER?
1. Sep 21, 2013
San K
A photon may be considered as an excitation of the electromagnetic (ER) field.
ER is thought to be omnipresent/ubiquitous in time-space (?)
Is it possible to construct a region in space-time (say a "black" box):
1. That contains no ER fields?
2. that contains no photons? i.e. no excitation/energy.....just a peaceful region with "calm" ER fields
Last edited: Sep 21, 2013
2. Sep 21, 2013
Staff: Mentor
If the field strength is zero, does that count as "no EM field"?
Do you count vacuum fluctuations? If you do, it is impossible (see the Casimir effect).
A small hole in a superconductor, cooled sufficiently... should work.
3. Sep 21, 2013
Naty1
I think, as mfb implies, a lot depends on just what you mean:
A photon is a quanta of an electromagnetic field, that is, a locally detectable manifestation of ER.... a locally observable field quantity, while the field is a mathematical construct, not observable.
I don't think any ER is detectable in space unless from an external source. I suspect that is what mfb's small hole in a superconductor implies??
Is a 'calm ER field' 'no ER field'.....'calm' is not a term I have seen in these forums. Do you make a distinction between 'no ER' and 'no photons' and 'calm ER fields' ??
You are perhaps thinking of 'detectable' ER.....??
You can get rid of most detectable ER with a Faraday cage, but your question may go beyond that to vacuum energy.
4. Sep 21, 2013
dauto
A calm field doesn't mean no field just as a calm ocean doesn't mean no water. Anyways, a calm field will happen at zero Kelvin (in classic physics) which is impossible by the 3rd law of Thermodynamics. Clasic physics isn't exact though and in Quantum physics even at zero Kelvin there would be some oscillation left on the fields due to zero point energy
5. Sep 25, 2013
.Scott
There will always be virtual photons.
The problem is that a "true vacuum" is too specific and would violate Heisenberg Uncertainty.
|
2018-07-17 23:37:34
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8183377981185913, "perplexity": 2637.647280168762}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589932.22/warc/CC-MAIN-20180717222930-20180718002930-00004.warc.gz"}
|
https://blog.stcloudstate.edu/ims/category/privacy/
|
### Archive of ‘privacy’ category
New Documents and Reports Confirm AT&T and NSA’s Longstanding Surveillance Partnership
Please consider previous IMS blog entries on this topic:
### How and why to set up and use a password manager
Commit to a password manager to make your online life easier and more secure.
### The Truth About Teenagers, The Internet, And Privacy
danah boyd, a professor at Harvard University’s Berkman Center for the Internet and Society, argues that teenagers closely scrutinize what they share online because it is a way for them to negotiate their changing identities. In her book, It’s Complicated: The Social Lives of Networked Teens, she describes how teenagers carefully curate their feeds based on the audience they are trying to reach.
Adolescents have been migrating away from Facebook and Twitter over the last few years, showing preference for sites like Snapchat, Whisper, Kik, and Secret that provide more anonymity and privacy. Part of this transition can be explained by the fact that the older social media sites stopped being cool when parents joined them, but perhaps another reason could be that teenagers growing up in the post-Snowden era implicitly understand the value of anonymity. For teens, it’s not a matter of which platform to use, but rather which works best in a particular context.
http://www.nybooks.com/articles/archives/2013/nov/07/are-we-puppets-wired-world/
# Are We Puppets in a Wired World?
But while we were having fun, we happily and willingly helped to create the greatest surveillance system ever imagined, a web whose strings give governments and businesses countless threads to pull, which makes us…puppets. The free flow of information over the Internet (except in places where that flow is blocked), which serves us well, may serve others better. Whether this distinction turns out to matter may be the one piece of information the Internet cannot deliver.
by Evgeny Morozov
by John Naughton
#### Big Data: A Revolution That Will Transform How We Live, Work, and Think
by Viktor Mayer-Schönberger and Kenneth Cukier
#### Privacy and Big Data: The Players, Regulators and Stakeholders
by Terence Craig and Mary E. Ludloff
O’Reilly Media, 108 pp., \$19.99 (paper)
## Key Findings
See the 2013 report for a full list of key messages, findings, and supporting data.
• Students recognize the value of technology but still need guidance when it comes to better using it for academics.
• Students prefer blended learning environments while beginning to experiment with MOOCs.
• Students are ready to use their mobile devices more for academics, and they look to institutions and instructors for opportunities and encouragement to do so.
• Students value their privacy, and using technology to connect with them has its limits.
p. 10 students are generally confident in their prepraredness to use technology for course work, but those who are interested in more tech training favor “in calss” guidance over separate training options.
Educause’s ECAR Study, 2013
|
2016-05-29 05:56:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19489039480686188, "perplexity": 4760.328032076271}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049278389.62/warc/CC-MAIN-20160524002118-00072-ip-10-185-217-139.ec2.internal.warc.gz"}
|
http://gasturbinespower.asmedigitalcollection.asme.org/article.aspx?articleid=1428941
|
0
Research Papers: Nuclear Power
# Study on the Coupled Neutronic and Thermal-Hydraulic Characteristics of the New Concept Molten Salt Reactor
[+] Author and Article Information
Peng Wang, Libo Qian, Dalin Zhang, Wenxi Tian, Guanghui Su
State Key Laboratory of Multi Phase Flow in Power Engineering, and School of Nuclear Science and Technology, Xi’an Jiaotong University, Xi’an 710049, China
Suizheng Qiu1
State Key Laboratory of Multi Phase Flow in Power Engineering, and School of Nuclear Science and Technology, Xi’an Jiaotong University, Xi’an 710049, Chinaszqiu@mail.xjtu.edu.cn
1
Corresponding author.
J. Eng. Gas Turbines Power 132(10), 102923 (Jul 14, 2010) (7 pages) doi:10.1115/1.4001067 History: Received September 23, 2009; Revised September 27, 2009; Published July 14, 2010; Online July 14, 2010
## Abstract
The new concept molten salt reactor is the only liquid-fuel reactor of the six Generation IV advanced nuclear energy systems. The liquid molten salt serves as the fuel and coolant simultaneously and causes one important feature: the delayed neutron precursors are drifted by the fuel flow, which leads the spread of delayed neutrons’ distribution to noncore parts of the primary circuit, and it also results in reactivity variation depending on the flow condition of the fuel salt. Therefore, the neutronic and thermal-hydraulic characteristics of the molten salt reactor are quite different from the conventional nuclear reactors using solid fissile materials. Besides, there is no other reactor design theory and safety analysis methodologies can be used for reference. The neutronic model is derived based on the conservation of particles considering the flow effect of the fuel salt in the molten salt reactor, while the thermal-hydraulic model applies the fundamental conservation laws: the mass, momentum, and energy conservation equations. Then, the neutronic and thermal-hydraulic calculations are coupled and the influences of inflow temperature and flow velocity on the reactor physical properties are obtained. The calculated results show that the flow effect on the distributions of thermal and fast neutron fluxes is very weak, as well as on the effective multiplication factor $keff$, while the flow effect on the distribution of delayed neutron precursors is much stronger. The inflow temperature influences the distribution of neutron fluxes and delayed neutron precursors slightly, and makes a significant negative reactivity. Coupled calculation also reveals that the flow velocity of molten salt has little effect on the distribution of neutron fluxes in the steady-state, but affects the delayed neutron precursors’ distribution significantly.
<>
## Figures
Figure 7
Distributions of precursors at different inflow temperatures
Figure 8
Temperature distributions under different flow velocities
Figure 9
Distributions of neutron fluxes under different flow velocities
Figure 10
Distributions of precursors under different flow velocities
Figure 4
Flow effect on the distribution of precursors
Figure 5
Temperature distributions at different inflow temperatures
Figure 6
Distributions of neutron fluxes at different inflow temperatures
Figure 1
Scheme of MSR (9)
Figure 2
The control volume P in a 1D coordinate system
Figure 3
Flow effect on the distributions of neutron fluxes
## Discussions
Some tools below are only available to our subscribers or users with an online account.
### Related Content
Customize your page view by dragging and repositioning the boxes below.
Related Journal Articles
Related Proceedings Articles
Related eBook Content
Topic Collections
|
2018-02-17 21:24:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20221713185310364, "perplexity": 3698.5629823119643}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891807825.38/warc/CC-MAIN-20180217204928-20180217224928-00780.warc.gz"}
|
https://gmatclub.com/forum/in-the-figure-above-equilateral-triangle-abc-is-inscribed-19078.html
|
GMAT Question of the Day - Daily to your Mailbox; hard ones only
It is currently 25 Sep 2018, 10:09
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# In the figure above, equilateral triangle ABC is inscribed
Author Message
Director
Joined: 27 Dec 2004
Posts: 861
In the figure above, equilateral triangle ABC is inscribed [#permalink]
### Show Tags
Updated on: 22 Jul 2013, 03:29
6
28
00:00
Difficulty:
35% (medium)
Question Stats:
70% (00:52) correct 30% (00:53) wrong based on 796 sessions
### HideShow timer Statistics
In the figure above, equilateral triangle ABC is inscribed in the circle. If the length of arc ABC is 24, what is the approximate diameter of the circle?
A. 5
B. 8
C. 11
D. 15
E. 19
OPEN DISCUSSION OF THIS QUESTION IS HERE: in-the-figure-above-equilateral-triangle-abc-is-inscribed-97393.html
Originally posted by Folaa3 on 20 Aug 2005, 15:44.
Last edited by Bunuel on 22 Jul 2013, 03:29, edited 1 time in total.
Edited the question and added the OA.
SVP
Joined: 05 Apr 2005
Posts: 1644
### Show Tags
20 Aug 2005, 19:20
1
1
C = 2 (pi) r = 24/2x3=36
(pi) d = 36
d = 36x7/22=126/11=11
SVP
Joined: 05 Apr 2005
Posts: 1644
### Show Tags
20 Aug 2005, 23:29
5
1
ALI1 wrote:
HIMALAYA wrote:
C = 2 (pi) r = 24/2x3=36
(pi) d = 36
d = 36x7/22=126/11=11
Himalaya can u plz explain
since, the triangle is equilateral. so, ab=bc=ca
arc abc=24,
arc abc covers 2/3 of the whole perimeter.
so, the the whole perimeter (2 pi r) = 24/(2/3) = 36
2r=d=36/pi=11 approx...
Director
Joined: 11 Mar 2005
Posts: 687
### Show Tags
21 Aug 2005, 10:14
3
Or you can say that each side of the triangle covers 120 degrees.
length of acc conered by 2 sides is = 24
meaning 240 degrees conver 24
360 will cover 36 units
pi*d = 36
d = 11 approx
Director
Joined: 29 Nov 2012
Posts: 798
Re: In the figure above, equilateral triangle ABC is inscribed [#permalink]
### Show Tags
22 Jul 2013, 02:23
In the triangle the corresponding angles are 60 degrees so the arc will be 120 and 120
total would be 240
Length of arch 240/360 * 2 Pi r = 24 >> r is approx 6
diameter is 12
_________________
Click +1 Kudos if my post helped...
Amazing Free video explanation for all Quant questions from OG 13 and much more http://www.gmatquantum.com/og13th/
GMAT Prep software What if scenarios http://gmatclub.com/forum/gmat-prep-software-analysis-and-what-if-scenarios-146146.html
Math Expert
Joined: 02 Sep 2009
Posts: 49496
Re: In the figure above, equilateral triangle ABC is inscribed [#permalink]
### Show Tags
22 Jul 2013, 03:30
7
8
fozzzy wrote:
In the triangle the corresponding angles are 60 degrees so the arc will be 120 and 120
total would be 240
Length of arch 240/360 * 2 Pi r = 24 >> r is approx 6
diameter is 12
In the figure above, equilateral triangle ABC is inscribed in the circle. If the length of arc ABC is 24, what is the approximate diameter of the circle?
A. 5
B. 8
C. 11
D. 15
E. 19
Arc ABC is $$\frac{2}{3}$$ of the circumference (as ABC is equilateral triangle and thus arc AB=arc BC=arc AC, so arc AB+arc BC=arc ABC = 2/3 of circumference) --> $$24=c*\frac{2}{3}$$, hence circumference $$c=\frac{24*3}{2}=36=\pi{d}$$ --> $$d\approx{11.5}$$.
OPEN DISCUSSION OF THIS QUESTION IS HERE: in-the-figure-above-equilateral-triangle-abc-is-inscribed-97393.html
_________________
Non-Human User
Joined: 09 Sep 2013
Posts: 8179
Re: In the figure above, equilateral triangle ABC is inscribed [#permalink]
### Show Tags
18 Jul 2018, 10:40
Hello from the GMAT Club BumpBot!
Thanks to another GMAT Club member, I have just discovered this valuable topic, yet it had no discussion for over a year. I am now bumping it up - doing my job. I think you may find it valuable (esp those replies with Kudos).
Want to see all other topics I dig out? Follow me (click follow button on profile). You will receive a summary of all topics I bump in your profile area as well as via email.
_________________
Re: In the figure above, equilateral triangle ABC is inscribed &nbs [#permalink] 18 Jul 2018, 10:40
Display posts from previous: Sort by
# In the figure above, equilateral triangle ABC is inscribed
## Events & Promotions
Powered by phpBB © phpBB Group | Emoji artwork provided by EmojiOne Kindly note that the GMAT® test is a registered trademark of the Graduate Management Admission Council®, and this site has neither been reviewed nor endorsed by GMAC®.
|
2018-09-25 17:09:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8738758563995361, "perplexity": 5625.478845654928}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267161902.89/warc/CC-MAIN-20180925163044-20180925183444-00033.warc.gz"}
|
https://www.physicsforums.com/threads/find-temperature-rise-of-cylinder-from-linear-thermal-coefficients.640189/
|
# Find temperature rise of Cylinder from Linear Thermal Coefficients
1. Sep 30, 2012
### poul
1. The problem statement, all variables and given/known data
Part 1: I have a cylinder of radius R and lengh L. At first i can assume that we have expansion in both R and L. And that i can use the linear thermal expansion coefficient(\alpha) = 4 \times 10^{-6}. The relative change in R and L is 1 \times 10^{-4}, and from that i have to find the temperature rise.
Part 2: Now we also to assume that the change in L is 0, and the relative change in R is again 1 \times 10^{-4}. So we only have expansion in R, and can assume the same linear thermal expanssion coefficients.
3. The attempt at a solution
Part 1: For this one we can use \times 10^{-4} = 4 \times 10^{-6}* \Delta T and find: \Delta T = 25 K.
Part 2: So for this one i just use? \times 10^{-4} = 3/2 \times 4 \times 10^{-6}* \Delta T and find: \Delta T = 16.67 K. Assuming small changes?
|
2018-03-23 05:49:00
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999958276748657, "perplexity": 632.3077327404434}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648178.42/warc/CC-MAIN-20180323044127-20180323064127-00450.warc.gz"}
|
https://www.jyotirmoy.net/posts/2015-09-15-ec102-python5.html
|
# EC102, Lab 5, Differential Equations
Sep. 15, 2015
[This is the fifth in a series of lecture notes for the lab component of the core ‘Macroeconomics I’ course that I teach in the M.A. Economics programme at Ambedkar University, Delhi.]
In this session we will look at how to use the SciPy package to explore systems of differential equations.
At the beginning of your session run the following imports:
import numpy as np
import matplotlib.pyplot as plt
import scipy.integrate as spint
%matplotlib inline
## Systems of differential equations
A system of differential equations is of the form
$\frac{dx}{dt} = f(x,t)$
where $$x$$ is a vector in $$\Re^n$$ and $$f$$ is a function from $$\Re^n \times \Re$$ to $$\Re^n$$. If $$f$$ does not actually depend on $$t$$ the differential equation system is said to be autonomous, otherwise nonautonomous.
A solution to this differential equation system is a function $$\phi(t)$$ from some time interval $$[t_0,t_1]$$ to $$\Re^n$$ such that
$\phi'(t) = f(\phi(t),t)$
Usually there are many solutions to a differential equation system, and we need to impose additional conditions to pick a unique solution. For example, we may impose an initial condition
$\phi(0)=x_0$
where $$x_0$$ is a given point in $$\Re^n$$.
For most differential equation systems it is not possible to find an explicit formula for the solution. However, for many systems it is possible to use computers to find approximate numerical solutions to systems of differential equations.
# Numerical solutions
### A single differential equation
The function odeint in the module scipy.integrate provides a user-friendly interface computing for numerical solutions of differential equations. The function takes three essential parameters:
func function giving the rhs of the differential equation system. y0 initial conditions. t a sequence of time points at which you want to know the solution.
odeint expects that func will be a function whose first two arguments will be the current state $$x$$ (which is in general n-dimensional) and time $$t$$ respectively and which will return the right-hand side of the differential equation system (another n-dimensional vector). When using odeint we do not call f ourselves. Rather we provide it to odeint which calls it as required to compute the numerical solution.
Let’s try out the function on the one-dimensional autonomous system
$dy/dx = -5y$
with the initial condition $$y_0 = 1$$.
def f(y,t):
return -5*y
t = np.linspace(0,1,5)
y0 = 1
y = spint.odeint(f,y0,t)
Note that we have to define f as a function of both y and t even though we do not use t in the function as our differential equation system is autonomous. This is because odeint expects f to have a particular form.
The array t gives the time points for which we would like to know approximate values of the solution. For our experiment we choose five equally-spaces points between 0 and 1.
If you check after runnning the code above, y.ndim is 2 and y.shape is (6,1). In general the return value of odeint is two dimensional, with one row for each time point at which we asked for a solution and one column for each variable in our system.
For future work let’s convert y into a 1-d vector
y = y[:,0]
Something new here. y[:,0] is a subscript operation, but instead of specifying a row by using an integer, we provide the special symbol : which in NumPy means all rows. And we provide 0 as the column number to pick only the first column. So we get a 1-d vector which just has the first column from each row.
In this example we chose a differential equation whose solution we can compute in terms of a formula. For the initial value $$y_0=1$$ the solution is $$e^{-5t}$$. If you want you can compute np.exp(-5*t) and compare the answer with the value y computed above. The numbers will not be exactly the same, since odeint does not know the exact formula and must compute an approximation, but they should be close.
### A multi-dimensional system
Solving a differential equation system in more than one dimension follows the same pattern, except that for a n-dimensional system the function passed to odeint must be written to accept a n-element array as the state variable and must return the right-hand side of the differential equation system as another n-element array.
Suppose we want to study the system
$dx/dt = y;\qquad dy/dt = -x-0.2y$
with the initial condition $$x_0 = 0, y_0=1$$.
The Python code will be
def f(s,t):
xdot = s[1]
ydot = -s[0]-0.2*s[1]
return np.array([xdot,ydot])
t = np.linspace(0,10,50)
s0 = np.array([0,1])
s = spint.odeint(f,s0,t)
We call the first argument to f as s to remind ourselves that it is the 2-element array containing the state of the system, with its first element s[0] being $$x$$ and its second element s[1] being $$y$$. We return the 2-element vector whose elements are $$dx/dt$$ and $$dy/dt$$.
$$s$$ is now a 2-d array with shape (50,2). To visualize the trajectory of this system we plot the consecutive value of $$x$$, given by s[:,0] against the consecutive values of $$y$$ given by s[:,1].
plt.plot(s[:,0],s[:,1])
### Differential equations with parameters
Suppose we want to replace the second equation in the system above with
$dy/dt = -ax - by$
where $$a$$ and $$b$$ are parameters for which we would like to try out different values. The f function would then be rewritten as
def f(s,t,a,b):
xdot = s[1]
ydot = -a*s[0]-b*s[1]
return np.array([xdot,ydot])
By default odeint calls the function we provide only with the state and the time, which would not work in this case. For such situations, odeint as an additional argument args which takes a tuple which is interpreted as additional arguments to be provided in the call to f. So we could get the same plot as before by executing, with the new definition of f:
t = np.linspace(0,10,50)
s0 = np.array([0,1])
s = spint.odeint(f,s0,t,args=(1,0.2))
Suppose we would like to compare the trajectories for $$b=0.2$$ and $$b=0$$. We call odeint twice with the two paramter values and then plot both the trjectories in the same figure.
s_first = spint.odeint(f,s0,t,args=(1,0.2))
s_second = spint.odeint(f,s0,t,args=(1,0))
plt.plot(s_first[:,0],s_first[:,1],label="0.2")
plt.plot(s_second[:,0],s_second[:,1],label="0.0")
plt.legend()
## Visualizing vector fields
Vector field plots are another way to visualize a autonomous 2-dimensional differential equation system. Let’s consider again the equation system
$dx/dt = y; \qquad dy/dt=-x-0.2y$
At every $$(x,y)$$ point the right-hand side of these equation give us the direction of motion of the trajectory of the system passing through that point. So we if have a good idea of how the direction of motion varies in different parts of the plane we will also have a good idea of the shape of the trajectories. Vector field plots help in this by representing the direction of motion at selected points by arrows. pyplot contains a convenient function quiver for drawing such plots, but it requires a bit of setup. We give the code and the plot before getting into the description.
x = np.linspace(-1,1,10)
y = np.linspace(-1,1,10)
xx,yy = np.meshgrid(x,y)
xdot = yy
ydot = -xx-0.2*yy
plt.quiver(xx,yy,xdot,ydot)
Now the description. The two lines
x = np.linspace(-1,1,10)
y = np.linspace(-1,1,10)
sets up two equally spaced 1-d vectors of x- and y-coordinates with 10 elements each. But what we need is a 10×10 2-d grid of points at which to draw our arrows. The NumPy function meshgrid does precisely this, taking two 1-d grids, using them to form a 2-d grid and then returning a tuple of two elements the first of which contains the x coordinate at all points of the grid and the second contains the y coordinates at all point in the grid.
xx,yy = np.meshgrid(x,y)
Here we have an example of using the extended form of the assignment statement to take apart the tuple returned by mesgrid and assigning its two element to two different names.
Then we compute $$dx/dt$$ and $$dy/dt$$ at each point in the grid
xdot = yy
ydot = -xx-0.2*yy
Element-by-element arithmetic works for the 2-d arrays xx and yy just like it worked in our earlier 1-d examples. Finally we call quiver
plt.quiver(xx,yy,xdot,ydot)
quiver has four essential arguments: x coordinates at grid points, y coordinates at grid points, the x component of the arrows and the y component of the arrows. Here the two components of the arrows come from the right-hand side of a differential equation system, but quiver does not care where they come from. quiver automatically scales the arrows so that their direction remains unchanged but their sizes span a reasonable range.
## Exercises
### Exercise 1
Consider the differential equation system
$dx/dt = x+y;\qquad dy/dt = x-y$
On a single figure plot
1. The vector field for $$-1 \le x \le 1$$, $$-1 \le y \le 1$$
2. The trajectory with initial value $$(-0,4.1)$$.
3. The trajectory with initial value $$(0.5,-1)$$.
### Exercise 2
For $$u(c)=c^{1-\theta}/{1-\theta}$$ and $$f(k) = k^\alpha$$, the Ramsey model’s trajectories are given by the equations
$dc/dt = \frac{c}{\theta}[\alpha k^{\alpha-1}-\rho]$ $dk/dt = f(k)-c-nk$
Take $$\alpha=0.3$$, $$\theta=1.75$$, $$\rho=0.05$$ and $$n=0.01$$.
1. Use Python to compute the steady state values $$c^*$$ and $$k^*$$.
2. Suppose we want to visualize the phase portraits of the model. What are the reasonable ranges for $$c$$ and $$k$$ for our plot.
3. On one diagram plot the vector field correspoinding to the differential equation and a few sample trajectories.
4. Add to the above diagram dashed vertical and horizontal lines corresponding to $$k^*$$ and $$c^*$$ respectively.
5. It is hard to plot the stable arm directly as we don’t know what initial value to choose and even the slightest error or numerical approximation will lead to the path exploding. However there is a useful trick available. We know that the stable arm converges to the steady state. So if we start somewhere near the steady state and run time in reverse we should get approximately the stable arm. Try this trick to plot the stable arm.
|
2018-12-12 11:06:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7294560670852661, "perplexity": 444.4280773252336}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823817.62/warc/CC-MAIN-20181212091014-20181212112514-00319.warc.gz"}
|
https://www.earth-syst-dynam.net/11/77/2020/esd-11-77-2020.html
|
Journal topic
Earth Syst. Dynam., 11, 77–96, 2020
https://doi.org/10.5194/esd-11-77-2020
Earth Syst. Dynam., 11, 77–96, 2020
https://doi.org/10.5194/esd-11-77-2020
Research article 10 Feb 2020
Research article | 10 Feb 2020
Synthesis and evaluation of historical meridional heat transport from midlatitudes towards the Arctic
Synthesis and evaluation of historical meridional heat transport from midlatitudes towards the Arctic
Yang Liu1,2, Jisk Attema1, Ben Moat3, and Wilco Hazeleger1,2,4 Yang Liu et al.
• 1Netherlands eScience Center, 1098 XG, Amsterdam, the Netherlands
• 2Wageningen University, 6708 PB, Wageningen, the Netherlands
• 3National Oceanography Center, SO14 3ZH, Southampton, UK
• 4Faculty of Geoscience, Utrecht University, 3512 JE, Utrecht, the Netherlands
Correspondence: Yang Liu (y.liu@esciencecenter.nl)
Abstract
Meridional energy transport (MET), both in the atmosphere (AMET) and ocean (OMET), has significant impact on the climate in the Arctic. In this study, we quantify AMET and OMET at subpolar latitudes from six reanalysis data sets. We investigate the differences between the data sets and we check the coherence between MET and the Arctic climate variability at interannual timescales. The results indicate that, although the mean transport in all data sets agrees well, the spatial distributions and temporal variations of AMET and OMET differ substantially among the reanalysis data sets. For the ocean, only after 2007, the low-frequency signals in all reanalysis products agree well. A further comparison with observed heat transport at 26.5 N and the subpolar Atlantic, and a high-resolution ocean model hindcast confirms that the OMET estimated from the reanalysis data sets are consistent with the observations. For the atmosphere, the differences between ERA-Interim and the Japanese 55-year Reanalysis (JRA-55) are small, while the Modern-Era Retrospective analysis for Research and Applications version 2 (MERRA-2) differs from them. An extended analysis of linkages between Arctic climate variability and AMET shows that atmospheric reanalyses differ substantially from each other. Among the chosen atmospheric products, ERA-Interim and JRA-55 results are most consistent with those from coupled climate models. For the ocean, the Ocean Reanalysis System 4 (ORAS4) and Simple Ocean Data Assimilation version 3 (SODA3) agree well on the relation between OMET and sea ice concentration (SIC), while the GLobal Ocean reanalyses and Simulations version 3 (GLORYS2V3) deviates from those data sets. The regressions of multiple fields in the Arctic on both AMET and OMET suggest that the Arctic climate is sensitive to changes of meridional energy transport at subpolar latitudes in winter. Given the good agreement on the diagnostics among assessed reanalysis products, our study suggests that the reanalysis products are useful for the evaluation of energy transport. However, assessments of products with the AMET and OMET estimated from reanalysis data sets beyond interannual timescales should be conducted with great care and the robustness of results should be evaluated through intercomparison, especially when studying variability and interactions between the Arctic and midlatitudes.
1 Introduction
Poleward meridional energy transport, both in the atmosphere (AMET) and ocean (OMET), is one of the most fundamental aspects of the climate system. It is closely linked to the changes of weather and climate at different latitudes. The quantifications of AMET and OMET have been studied extensively. In the 1980s, many efforts were made to reproduce the AMET and OMET with very limited observational data available . After entering the satellite era, much progress has been made in particular during the recent two data-rich decades. Using the radiation at the top of the atmosphere from satellite data and the reanalysis data, a complete picture of AMET and OMET is given by . Following their work, rapid progress was made using similar methodologies and new data sets of observations . Nevertheless, these estimations still suffered from problems like mass imbalance, unrealistic moisture budget, coarse resolution and sparseness of observations . Fortunately, recent improvements in numerical weather prediction and ocean models, and increased data coverage of observations provide a basis to improve the estimation of AMET and OMET. As a result of an increase of available reanalysis products, an increase in resolution and length of the covered time span and an increase of components of the Earth system that are included in the products , it is very promising to have better quantification of AMET and OMET using the latest reanalysis data sets. In this study, we will provide further insights into MET from midlatitudes towards the Arctic, with the state-of-the-art reanalysis products.
To support the examination of MET from midlatitudes towards the Arctic, it is worth investigating the AMET and OMET in relation to climate variability at different timescales in the Arctic region. In recent decades, the Arctic has been warming twice as fast as the global average . This phenomenon is known as Arctic amplification (AA) and it has an impact far beyond the Arctic . In order to understand the warming, the processes behind the AA and its wider consequences, and to make reliable predictions of the Arctic climate, it is crucial to understand Arctic climate variability. Among all factors responsible for the variability in the processes described above, meridional energy transport, from midlatitudes toward the Arctic, plays a significant role . There is a large volume of published studies describing the impacts of AMET and OMET on the variation of sea ice and the warming in the Arctic. Using reanalysis data, showed that poleward AMET is linked with the evolution of temperature in the free troposphere at decadal timescales. By separating the planetary and synoptic-scale waves, showed that latent heat transport, as a component of AMET, influences the Arctic warming with reanalysis data. studied moisture transport with reanalysis data and observations, and showed that the moisture sources in the Arctic region are linked with interannual fluctuations of Arctic sea ice. analyzed the linkages between OMET, ocean heat content (OHC) and AA through climate model simulations within the Coupled Model Intercomparison Project phase 5 (CMIP5). They reported an enhancement of OMET as a result of heat loss in the subpolar ocean and the contribution of OMET to the AA through increasing OHC in the Arctic Ocean. Also by analyzing CMIP5 simulations, showed a large impact of heat transport in the Barents Sea on sea ice loss. However, ocean reanalyses do not show a clear sign of AA in the Arctic OHC increases . Consequently, knowledge on poleward AMET and OMET at subpolar and polar latitudes will aid in the understanding of AA.
Global climate models show compensations between variations in atmospheric and oceanic heat transport at subpolar latitudes and midlatitudes . This is indicative of positive feedbacks between the ocean and atmosphere, and it has been associated with variations in sea ice by some studies . These studies all point to connection between energy transport and variations of the Arctic climate. However, these results are mostly based on numerical model simulations and they tend to differ among these models. In contrast to numerical modeling studies, here we intend to examine AMET and OMET variability and their relation with the Arctic using reanalysis data sets, which are regarded as the best estimates of the historical variability.
In this paper, we quantify AMET and OMET using multiple state-of-the-art reanalysis products. These are representations of the historical state of the atmosphere and ocean optimally combining available observations and numerical simulations using data assimilation techniques. Emphasis is placed on the variation of AMET and OMET from midlatitudes to the Arctic at interannual timescales (∼5 years). Different from earlier studies, we include multiple reanalysis data sets for intercomparison. Independent observations in the Atlantic from the Rapid Climate Change-Meridional Overturning Circulation and Heatflux array (RAPID array) and the Overturning in the Subpolar North Atlantic Program (OSNAP) are included in the comparison. The RAPID array is a trans-basin observing array along 26.5 N in the Atlantic . It has been in operation since 2004 and provides the volume and heat transport in the Atlantic basin. OSNAP is an ocean observation program designed to provide a continuous record of the trans-basin fluxes of heat, mass and freshwater in the subpolar North Atlantic . Moreover, a state-of-the-art NEMO-LIM2 1∕12 ocean circulation/sea ice model simulation forced by the Drakkar surface forcing data set version 5.2 is also employed in the comparison. Based on the intercomparison of reanalysis data, especially with the independent observation data, we will be able to identify the sources of uncertainty. To support our comparison of AMET and OMET, we also investigate the interactions between oceanic and atmospheric variations and remote responses. The correlations between the variability of AMET and OMET, and the changes in the Arctic climate are compared to literature. This is motivated by previous studies that explain those connections with only numerical models or a single reanalysis data set .
The paper is organized as follows: Sect. 2 presents the data and our methodology. Results and analysis are given in Sect. 3. It includes AMET and OMET calculated from reanalysis data and an intercomparison of them. The correlation between the variability of AMET and OMET, and the Arctic climate is elaborated upon in detail. Finally, remarks are given in Sect. 4 and conclusions are provided in Sect. 5.
2 Data and methodology
The reanalysis data sets used in this study are introduced in this section. Moreover, the methodology for the quantification of AMET and OMET is also included in this section. The statistical tests performed in this study are elucidated in detail.
2.1 Reanalyses
In order to make use of observations and advanced numerical models, six state-of-the-art reanalysis data sets are used in this study. The chosen reanalysis products have a high temporal and spatial resolution; thus, they are suitable for the computation of energy transport (see Sect. 2.3). We chose three atmosphere reanalysis data sets: ERA-Interim, the Modern-Era Retrospective analysis for Research and Applications version 2 (MERRA-2) and the Japanese 55-year Reanalysis (JRA-55) (references below), and three ocean reanalysis data sets: the Ocean Reanalysis System 4 (ORAS4), GLobal Ocean reanalyses and Simulations version 3 (GLORYS2V3) and Simple Ocean Data Assimilation version 3 (SODA3) (references below). To avoid interpolation errors and imbalances in the mass budget introduced by regridding, the calculations are based on data from the original model grid. Note that the latest atmospheric reanalysis (ERA5) from the European Centre for Medium Range Weather Forecasts (ECMWF) is not included here since the model-level data have not been opened to the public yet (ECMWF2017). In addition, the computation is too expensive to achieve a longer time series for the study of the interannual variability of AMET using ERA5. As a synthesis, Table 1 shows the basic specifications of the reanalysis products contained in this study.
Table 1Basic specification of reanalyses products included in this study.
2.1.1 ERA-Interim
ERA-Interim is a global reanalysis data set produced by ECMWF , which has covered the data-rich period since 1979. It employs cycle 31r2 of ECMWF's Integrated Forecast System (IFS) and generates atmospheric state estimates using a 4D-Var data assimilation with a T255 (∼79 km) horizontal resolution on 60 vertical levels . Compared with its predecessor, ERA-40 , ERA-Interim is superior in quality in terms of the atmospheric properties like mass, moisture and energy . The improvement in observations and the ability of 4D-Var contributes a lot to the quality of the divergent wind , which is significant for the mass budget and hence the energy budget. We use the data that are provided on a 256×512 Gaussian grid, with a 0.75× 0.75 horizontal resolution and 60 vertical hybrid model levels. We take 6-hourly data with a range from 1979 to 2016.
2.1.2 MERRA-2
MERRA-2 is the successor of MERRA from the Global Modeling and Assimilation Office (GMAO) of the National Aeronautics and Space Administration (NASA). It assimilates observational data with the Goddard Earth Observing System (GEOS) model and analysis scheme . The atmospheric state estimates are produced by a 3D-Var incremental analysis update (IAU) assimilation scheme and have coverage from 1980 to the present. Unlike most of the reanalysis products, the GEOS atmospheric model includes a finite-volume dynamical core that uses a cubed-sphere horizontal discretization . The model grid has a resolution of 0.5× 0.625 with 72 hybrid levels. For this study, we use the 3-hourly assimilation data on the native model grid from 1980 to 2016.
2.1.3 JRA-55
Extending back to 1958, JRA-55 is the second reanalysis product made by the Japan Meteorological Agency (JMA) . JRA-55 applies 4D-Var assimilation and it is generated on TL319 horizontal resolution with 60 hybrid levels. Before entering the satellite era in 1979, the assimilated upper air observations mainly come from radiosonde data. In this project, we take 6-hourly data from 1979 to 2015 on the original model grid, which has a horizontal resolution of 0.5625× 0.5625 with 60 hybrid model levels.
2.1.4 ORAS4
Serving as the historical reconstruction of the ocean's climate, ORAS4 is the replacement of its predecessor used by the ECMWF, the reanalyses system ORAS3 . It implements the Nucleus for European Modelling of the Ocean (NEMO) as the ocean model and uses NEMOVAR as the data assimilation system . The model is forced by atmosphere-derived daily surface fluxes, from ERA-40 from 1957 to 1989 and ERA-Interim from 1989 to 2010. Since 2010, the forcing has changed to operational forcing . ORAS4 produces analyses with a 3D-Var first guess at appropriate time (FGAT) assimilation scheme and spans from 1958 to the present. ORAS4 runs on the ORCA1 grid, which is associated with a horizontal resolution of 1 in the extratropics and a refined meridional resolution up to 0.3 in the tropics. It has 42 vertical levels, 18 of which are located in the upper 200 m. Here, we skip the first two decades and use the monthly data from 1979 to 2014 to avoid the uncertainties reported by . We use the monthly mean fields on the native model grid.
2.1.5 GLORYS2V3
GLORYS2V3 is a global ocean and sea ice eddy-permitting reanalysis system that yielded from the collaboration between the Mercator Ocean, the Drakkar consortium and Coriolis data center . It spans the altimeter and Argo eras, from 1993 to the present. The NEMO ocean model is implemented on the ORCA025 grid (approximately 0.25× 0.25 with 75 vertical levels). The model is forced by a combination of ERA-Interim fluxes (e.g., shortwave radiation) and turbulent fluxes obtained with bulk formulae using ERA-Interim near-surface parameters. The data are generated by a 3D-Var assimilation scheme with temperature and salinity profiles assimilated from the CORA3.3 database . In this study, monthly data from 1993 to 2014 on the original ORCA025 grid are used.
2.1.6 SODA3
SODA3 is the latest version of Simple Ocean Data Assimilation (SODA) ocean reanalyses conducted mainly at the University of Maryland . SODA3 is built on the Modular Ocean Model v5 (MOM5) ocean component of the Geophysical Fluid Dynamics Laboratory CM2.5 coupled model with a grid configuration of approximately 0.25 (latitude) × 0.25 (longitude) × 50-level resolution . To be consistent with the other two reanalysis data sets assessed in this study, SODA 3.4.1 is chosen since it applies surface forcing from ERA-Interim. For this specific version, the 5 d data are available from 1980 to 2015. Reanalysis data from this period on the original MOM5 grid are used in this case.
2.2 Oceanic observations and OGCM hindcast
For independent examination of the OMET calculated from reanalysis data sets, observations of the meridional transport of mass and heat throughout the Atlantic basin are used here. We use data from the RAPID-MOCHA-WBTS program and the OSNAP program . The RAPID-MOCHA-WBTS program, which is known as the RAPID array, employs a trans-basin observing array along 26.5 N and has been in operation since 2004. The OMET from the RAPID array available to this study is from April 2004 to March 2016. The OSNAP program has an observing system that comprises an integrated coast-to-coast array extending from the southeastern Labrador Shelf to the southwestern tip of Greenland and from the southeastern tip of Greenland to the Scottish shelf. So far, it provides OMET data from the full installation of the array in 2014 to the first complete data recovery in 2016 (21 months in total). Although it is too short to provide a good estimate of the interannual variability of OMET, we still include it as it is a unique observation system for OMET in the subpolar Atlantic.
Apart from the RAPID array and OSNAP observational data, a NEMO ORCA hindcast is also included here to provide more insights, since two of the chosen reanalysis products are also built on the NEMO ocean circulation model . This forced model simulation implements the NEMO ORCA global ocean circulation model version 3.6 (Madec2008). It is configured with the ORCA0083 grid, which has a nominal resolution of 1∕12 on 75 vertical levels. Climatological initial conditions for temperature and salinity were taken in January from PHC2.1 at high latitudes , MEDATLAS in the Mediterranean and the rest from . It is forced by the surface fields produced by the Drakkar project, which supplies surface air temperature, winds, humidity, surface radiative heat fluxes and precipitation, and a formulation that parameterizes the turbulent surface heat fluxes and is provided for the period from 1958 to 2012 (data set version 5.2) . More information about this hindcast is given by . We take monthly mean data from the hindcast, which spans from 1979 to 2012. For clarity, this hindcast will be referred to as the oceanic general circulation model (OGCM) simulation in this paper.
2.3 Computation of meridional energy transport
The methods for quantification of AMET and OMET with atmospheric and oceanic reanalyses are included in this section, respectively.
2.3.1 Energy budget in the atmosphere
The total energy per unit mass of air has four major components: internal energy (I), latent heat (H), geopotential energy (ϕ) and kinetic energy (k). They are defined as
$\begin{array}{}\text{(1)}& \begin{array}{rl}& I={c}_{\mathrm{v}}T\\ & H={L}_{\mathrm{v}}q\\ & \mathrm{\Phi }=gz\\ & k=\frac{\mathrm{1}}{\mathrm{2}}v\cdot v,\end{array}\end{array}$
with cv the specific heat capacity of dry air for constant volume (J kg−1 K−1), T the absolute temperature (K), Lv the specific heat of condensation (J kg−1), q the specific humidity (kg kg−1), g the gravitational acceleration (kg m−1 s−2), z the altitude (m) and v the zonal/meridional wind velocity (m s−1). The northward propagation is positive. In addition, these four quantities can be divided into three groups: the dry static energy I+ϕ, the moist static energy $I+\mathit{\varphi }+H$ and the kinetic energy k. A constant value of Lv=2500 kJ kg−1 was used to compute the AMET with the atmosphere reanalysis data sets. In addition, recently improved formulations of energy budget equations proposed by and are addressed here. We use an updated formulation of AMET as a combination of the divergence of dry-air enthalpy, latent heat, geopotential and kinetic energy transport, which is suggested by . Note that in this case the enthalpy transport associated with vapor fluxes is neglected.
In pressure coordinates, the total energy transport at a given latitude Φi can be expressed as
$\begin{array}{}\text{(2)}& E=\underset{\mathrm{\Phi }={\mathrm{\Phi }}_{i}}{\oint }\underset{{p}_{\mathrm{s}}}{\overset{{p}_{\mathrm{t}}}{\int }}\left[\left(\mathrm{1}-q\right){c}_{\mathrm{p}}T+{L}_{\mathrm{v}}q+gz+\frac{\mathrm{1}}{\mathrm{2}}v\cdot v\right]v\frac{\mathrm{d}p}{g}\mathrm{d}x,\end{array}$
with cp the specific heat capacity of dry air at constant pressure, pt the pressure level at the top of the atmosphere (Pa) and ps the pressure at the surface (Pa). A constant value of cp=1004.64 J kg−1 K−1 was used. Since we work on the native hybrid model coordinate with each atmosphere reanalysis product, the equation can be adjusted as follows (see ):
$\begin{array}{}\text{(3)}& E=\underset{\mathrm{\Phi }={\mathrm{\Phi }}_{i}}{\oint }\frac{\mathrm{1}}{g}\underset{\mathrm{0}}{\overset{\mathrm{1}}{\int }}\left[\left(\mathrm{1}-q\right){c}_{\mathrm{p}}T+{L}_{\mathrm{v}}q+gz+\frac{\mathrm{1}}{\mathrm{2}}v\cdot v\right]v\frac{\partial p}{\partial \mathit{\eta }}\mathrm{d}\mathit{\eta }\mathrm{d}x,\end{array}$
where η indicates the number of the hybrid level.
Unfortunately, a direct estimation of AMET based on the equations above cannot provide meaningful energy transport obtained from reanalysis data. It has been widely reported that reanalysis products suffer from mass inconsistency . Spurious sinks and sources mainly come from low spatial and temporal resolution, interpolation and regridding, and data assimilation. The interpolation from the original model level to pressure level can introduce considerable errors to the mass budget . Therefore, we prevent interpolations onto the pressure levels and use data on the native model levels with a high temporal resolution. provided a method to correct the mass budget through the use of the continuity equation. The method assumes that the mass imbalance mainly comes from the divergent wind fields and corrects the overall mass budget by adjusting the barotropic wind. The conservation of mass for a unit column of air can be represented as
$\begin{array}{}\text{(4)}& \frac{\partial {p}_{\mathrm{s}}}{\partial t}+\mathrm{\nabla }\cdot \underset{{p}_{\mathrm{s}}}{\overset{{p}_{\mathrm{t}}}{\int }}v\mathrm{d}p=g\left(E-P\right),\end{array}$
where E stands for evaporation and P denotes precipitation. It has been noted that big uncertainties reside in the evaporation and precipitation of global reanalyses . Hence, we use the moisture budget to derive the net moisture change in the air column, according to
$\begin{array}{}\text{(5)}& E-P=\frac{\partial }{\partial t}\left(\underset{{p}_{\mathrm{s}}}{\overset{{p}_{\mathrm{t}}}{\int }}q\frac{\mathrm{d}p}{g}\right)+\mathrm{\nabla }\cdot \underset{{p}_{\mathrm{s}}}{\overset{{p}_{\mathrm{t}}}{\int }}\left(v\cdot q\right)\frac{\mathrm{d}p}{g}.\end{array}$
The related fields for the mass budget correction are surface pressure (ps), meridional and zonal winds (uv) and specific humidity (q). After determining the mass budget imbalance, we correct the barotropic wind fields (ucvc), with uc and vc indicating the correction terms for zonal and meridional wind components as a result of the barotropic mass budget correction, and then calculate AMET . Note that all the computations regarding barotropic mass budget correction were performed in the spectral domain via spherical harmonics. Figure 1 shows the mean AMET and each component in each month at 60 N estimated from ERA-Interim.
Figure 1Estimation of mean AMET and each component in each month at 60 N with ERA-Interim from 1979 to 2017.
It is worth mentioning that MERRA-2 is very different from ERA-Interim and JRA-55, in terms of the discretization method and grid incorporated by the dynamical core. The dynamical core for MERRA-2 is the GEOS-5 model and it computes all fields on a cubed-sphere grid with a resolution of 50×50 km , while in ERA-Interim and JRA-55 the computations were performed in the spectral domain. However, the data collections are saved only on the latitude–longitude grid after interpolation. Thus, the data cannot be transferred back to the cubed-sphere grid without loss of information. Moreover, the vector field computations on the cubed-sphere grid are not divergence-free due to the implementation of finite volume discretization methods . Consequently, we transferred MERRA-2 fields to the spectral domain and performed vector field computations via spherical harmonics to minimize the numerical errors, the same treatment as ERA-Interim and JRA-55.
2.3.2 Energy budget in the ocean
Unlike the atmosphere, energy transport in the ocean can be well represented by the internal energy itself. Consequently, the total energy transport in the ocean at a given latitude ϕi can be expressed in terms of the temperature transport :
$\begin{array}{}\text{(6)}& E=\underset{\mathrm{\Phi }={\mathrm{\Phi }}_{i}}{\oint }\underset{{z}_{\mathrm{b}}}{\overset{{z}_{\mathrm{0}}}{\int }}{\mathit{\rho }}_{\mathrm{0}}{c}_{{\mathrm{p}}_{\mathrm{0}}}\mathit{\theta }\cdot v\mathrm{d}z\mathrm{d}\mathit{\varphi },\end{array}$
where ρ0 is the seawater density (kg m−3), ${c}_{{\mathrm{p}}_{\mathrm{0}}}$ is the specific heat capacity of seawater (J kg−1C−1), θ is the potential temperature (C), v is the meridional current velocity (m s−1), and z0 and zb are sea surface and the depth to the bottom (m), respectively. A constant value of ${c}_{{\mathrm{p}}_{\mathrm{0}}}=\mathrm{3987}$ J kg−1C−1 was used in all the calculations of OMET. OHC (with unit J) is another variable that plays a role in the ocean heat budget. The total OHC between certain latitudes can be calculated by
$\begin{array}{}\text{(7)}& \mathrm{OHC}=\underset{{\mathrm{\Phi }}_{i}}{\overset{{\mathrm{\Phi }}_{\mathrm{0}}}{\int }}\underset{{z}_{\mathrm{b}}}{\overset{{z}_{\mathrm{0}}}{\int }}{\mathit{\rho }}_{\mathrm{0}}{c}_{{\mathrm{p}}_{\mathrm{0}}}\mathit{\theta }\mathrm{d}z\mathrm{d}\mathit{\varphi }.\end{array}$
Our computation of OMET suffers from a small mass imbalance (e.g., mass imbalance coming from the difference between precipitation and evaporation; ). In the ocean, with its strong boundary circulations, even the smallest imbalance can lead to large errors in the heat flux. However, the barotropic correction method adopted by the atmosphere is not feasible here due to the mass imbalance coming from the residual between precipitation and evaporation, and some budget terms that are hard to diagnose. In oceanographic literature, it is common to use a reference temperature when calculating OMET in both observations and model diagnostics . Here, we also take a reference temperature: θr (C). Note that the influence of taking a reference temperature on zonally integrated transport is smaller than that on a single strait . Then, the quantification of OMET becomes
$\begin{array}{}\text{(8)}& E=\underset{\mathrm{\Phi }={\mathrm{\Phi }}_{i}}{\oint }\underset{{z}_{\mathrm{b}}}{\overset{{z}_{\mathrm{0}}}{\int }}{\mathit{\rho }}_{\mathrm{0}}{c}_{{\mathrm{p}}_{\mathrm{0}}}\left(\mathit{\theta }-{\mathit{\theta }}_{\mathrm{r}}\right)\cdot v\mathrm{d}z\mathrm{d}\mathit{\varphi }.\end{array}$
Here, we take θr equal to 0 C. Finally, operations in the “zonal” direction are different from their conventional meaning. As the three ocean reanalysis products used here are all built on a curvilinear grid, the zonal direction on the native model grid is curvilinear as well. Similar to the considerations made in Sect. 2.1, regridding from the native curvilinear grid to a uniform geographical grid will introduce large errors. So, we worked on the original multi-pole grid and followed a zig-zag setup when taking zonal integrals. The method is illustrated by in their Fig. 2. After applying this method, the resulting OMET values are comparable to those in earlier publications . Note that we only have access to sub-monthly data for SODA3. The computation of OMET using monthly data in GLORYS2V3 could miss part of heat transport by eddies, while ORAS4 does not include the heat transport from the eddy parameterization scheme , as the related eddy-induced velocity field was not archived.
2.4 Statistical analysis
In order to understand the connection between MET and changes in the Arctic, and to compare to the results from numerical climate models or a single reanalysis data set , in the following section, we performed linear regressions on multiple fields with AMET and OMET. To test the significance of the regressions, we use Student's t test. The autocorrelations are taken into account. Note that all the reanalysis data sets included in this study have relatively short time series (no more than 456 months; see Table 1).
3 Results
Unless specifically noted, the results shown in this section are all based on monthly mean fields with a low-pass filter of 5 years, which will be referred to as interannual timescales for the rest of the paper.
3.1 Overview of AMET and OMET
Globally, MET is driven by the unequal distribution of net solar radiation and thermal radiation. There is transport from regions with positive net top-of-the-atmosphere (TOA) radiation to regions with negative net TOA radiation. Figure 2 shows the mean AMET and OMET over the entire time series of every product at each latitude in the Northern Hemisphere. For the atmosphere, all three data sets agree very well. The results differ a bit in amplitude but capture similar variations at each latitude. The peak of AMET is around 41 N, after which it starts to decrease towards the North Pole. In ERA-Interim and JRA-55, AMET peaks at 4.45 PW at 41 N, while in MERRA-2 AMET peaks at 4.5 PW at 41.5 N. These findings are consistent with previous work (e.g., Trenberth and Caron2001; Fasullo and Trenberth2008; Mayer and Haimberger2012, and many others).
Figure 2Mean AMET and OMET over the entire time span of each product as a function of latitude in the Northern Hemisphere. AMET is illustrated with solid lines and OMET with dashed lines. The shades represent the full range of MET across the entire time series at each latitude. The time span of each product used in this study is given in Table 1.
Apart from the climatology of MET, we are particularly interested in the variations across different timescales from midlatitudes towards the Arctic. The time series of AMET, integrated zonally over 60 N, are shown in Fig. 3a. The seasonal cycle is dominant in each component, as expected, and the phase is very similar, but differences in the amplitudes are noted. The mean AMET provided by the chosen three atmospheric reanalysis data sets agrees well. However, their variations differ from each other. In ERA-Interim, the standard deviation (SD) of AMET is 0.92 PW, while MERRA-2 has a relatively large SD of 0.97 PW, and in JRA-55 the SD is 0.91 PW. Hence, it can be concluded that the seasonal cycles of AMET presented by the chosen atmospheric reanalysis data sets are similar. After removing the seasonal cycles and applying a 5-year low-pass filter, we obtain the low-frequency signals of AMET anomalies at interannual timescales (see Fig. 3b). ERA-Interim and JRA-55 agree well, and the correlation coefficient between them is 0.82. MERRA-2 provides a different result, and the correlation coefficient between ERA-Interim and MERRA-2 is −0.53. The SD of AMET anomaly in ERA-Interim is 0.02 PW, while in MERRA-2 the SD is 0.04 PW and in JRA-55 the SD is 0.03 PW. This implies that the variations of AMET anomalies at large timescales are similar in ERA-Interim and JRA-55 but not in MERRA-2. We further assess the sources of the difference in the next section.
Figure 3Time series of zonal integral of AMET at 60 N without/with a low-pass filter. (a) The original time series and (b) the ones with a low-pass filter include signals from ERA-Interim (blue), MERRA-2 (red) and JRA-55 (green). For the low-pass-filtered ones, we take a running mean of 5 years. The shades represent the confidence intervals with 1 standard deviation. σ is the standard deviation and μ is the mean of the entire time series.
For the ocean, all the reanalysis data sets agree well at almost all the latitudes, except for the OMET between 30 and 40 N, where the Gulf Stream resides (Fig. 2). One possible explanation is that GLORYS2V3 and SODA3 both have been generated with eddy-permitting models, while ORAS4 was not. In ORAS4, an eddy parameterization scheme from is implemented. The implementation of this eddy parameterization scheme can lead to a big difference in heat transport, compared to eddy-permitting models . However, in this case, the computation of OMET with ORAS4 does not include the contribution from eddy-induced velocity as the fields related to the use of eddy advection schemes were not archived. The eddy-permitting reanalysis data sets with high resolution, like GLORYS2V3 and SODA3, are capable of addressing the large-scale geostrophic turbulence. It has been shown that their eddy-permitting capacity can account for the large-scale eddy variability and represent the eddy energy associated with both the Gulf Stream and the Kuroshio pathways well . Consequently, at the latitude of the Gulf Stream (between 30 and 40 N), a strong spatial variability, which might represent more realistic patterns of the large-scale eddy variability, is apparent in all data sets but ORAS4.
Similarly, we show the zonal integral of the OMET at 60 N in Fig. 4. Differences in amplitudes and trends can be observed in the unfiltered time series. The mean and SD of all the OMET time series are similar (see Fig. 4a). The mean of OMET in ORAS4 is 0.47 PW, in GLORYS2V3 it is 0.44 PW, and in SODA3 it is 0.46 PW. The OGCM hindcast gives a similar result, which is also 0.47 PW. The SD of OMET in ORAS4 and the OGCM hindcast is 0.06 PW, while in GLORYS2V3 and SODA3 the SD is 0.07 PW. The OMET anomalies with a 5-year low-pass filter are shown in Fig. 4b. OMET anomalies in ORAS4 resemble that in SODA3, especially after 1998, while OMET anomalies in GLORYS2V3 are very different from that in ORAS4 and SODA3 from 1998 to 2006. The differences reveal that the first 10 years in GLORYS2V3 are quite suspicious because of its large deviation from the other products. Such large differences should be noticeable in the heat content changes or surface fluxes. We find that OHC anomalies in GLORYS2V3 are indeed very different from ORAS4 and SODA3 during this period (see Fig. 8). Nevertheless, after 2007, all the oceanic reanalyses agree well, and the OGCM hindcast deviates from the reanalyses. It is noteworthy that the observations improve considerably around that period due to an increasing number of Argo floats in use . The reanalysis products used here are greatly influenced by the number of available in situ observations. We further assess the sources of differences in the next section.
Figure 4Time series of zonal integral of OMET at 60 N without/with a low-pass filter. (a) The original time series and (b) the ones with a low-pass filter include signals from ORAS4 (blue), GLORYS2v3 (red), SODA3 (green) and the OGCM hindcast (yellow). For the low-pass-filtered ones, we take a running mean of 5 years. The shades represent the confidence intervals with 1 standard deviation. σ is the standard deviation and μ is the mean of the entire time series.
3.2 Sources of disparity
In order to further understand the difference between the AMET estimated from each atmosphere reanalysis product, we compare each component of AMET separately. We investigate the difference between each component of AMET at 60 N estimated from ERA-Interim against those from MERRA-2 and JRA-55. It is noticed that the differences mainly originate from meridional temperature transport (vcpT) and geopotential energy transport (vgz). We find that the correlation between the difference in total energy transport and the difference in meridional temperature transport between ERA-Interim and MERRA-2 is 0.55, while between ERA-Interim and JRA-55 it is 0.21. In addition, the correlation between the difference in total energy transport and the difference in geopotential energy transport (vgz) between ERA-Interim and MERRA-2 is 0.56, while between ERA-Interim and JRA-55 it is 0.60. For the other components, the correlations between them and the total difference are small. The results are all obtained with a confidence interval of 95 %. Large differences in temperature transport among reanalysis products are found at almost all latitudes (not shown). Such differences are consistent with the fact that the temperature transport and geopotential energy transport have a large contribution to the total AMET (see Fig. 1). Note that the differences in each AMET component are of the same order of magnitude as AMET. Besides, the mean and anomalous latent heat transport agree well between the chosen atmospheric products (not shown). A similar result was found by in their study using more reanalysis data sets.
In order to know the relative contribution of each field to the difference of the mean total AMET among the chosen reanalyses, a direct comparison of the vertical profile of temperature and meridional velocity fields between ERA-Interim and MERRA-2 is presented in Fig. 5. We compare the monthly mean temperature and velocity fields of ERA-Interim and MERRA-2 from 1994 to 1998, in which the biggest difference was observed (Fig. 3, taking into account the running mean of 5 years). To accommodate a point-wise comparison, the fields from MERRA-2 are interpolated onto the vertical grid of ERA-Interim. It shows that these two reanalysis products differ substantially regarding each variable field (Fig. 5a and b). Big differences in temperature reside mostly at the tropopause. Large differences in meridional wind components are distributed over the entire vertical column of the tropopause. Such differences in both fields are expected to be responsible for the difference in mean temperature transport (vcpT). Large differences are found in geopotential height fields, too (not shown). It should be noted that this comparison is carried out on pressure levels, and mass conservation is not ensured. Therefore, it can only provide insight qualitatively, and a quantitative contribution of the difference in every single field to the mean temperature transport cannot be identified here.
Figure 5Difference in temperature, meridional wind velocity and temperature transport between MERRA-2 and ERA-Interim at 60 N. The vertical profiles of (a) temperature difference and (b) meridional wind velocity difference are calculated from the climatology of each field from 1994 to 1998, respectively.
Differences between every two chosen atmospheric products are found at nearly each pressure level. This analysis is not sufficient to explain conclusively where the uncertainty mainly comes from in terms of the dynamics and physics in the atmosphere model and data assimilation system. We do find that uncertainties, as indicated by the spread between the data sets, in both the temperature and meridional velocity fields, are too large to constrain the AMET. Note that the difference in horizontal advection schemes can also influence the results. The chosen atmospheric reanalyses systems use semi-Lagrangian advection schemes, but this is not the case for MERRA-2. Hence, studies on low-frequency variability of energy transport and associated variables should be interpreted with care as the reanalysis products differ substantially, and we cannot judge a priori how close they are to actual energy transport since independent direct observations are not available.
For the ocean, fortunately observations of OMET in the Atlantic Ocean are available. First, OMET estimated from ORAS4, GLORYS2V3, SODA3 and the OGCM hindcast is evaluated against OMET measured at 26.5 N. The intercomparison shows that the reanalysis products capture roughly the mean amplitude of the OMET (Fig. 6). Some large events are captured as well, such as the strong weakening in 2009. Statistically, the mean OMET provided by the RAPID array is 1.21±0.27 PW. It is higher than the chosen products here. The mean OMET in ORAS4 is 0.66±0.27 PW, in GLORYS2V3 it is 0.89±0.52 PW, in SODA3 it is 0.81±0.52 PW, and in OGCM hindcast it is 1.05±0.21 PW. This means that all chosen products underestimate the mean OMET at 26.5 N in the Atlantic basin. Of all products, ORAS4 has the largest bias. The SD of OMET given by ORAS4 is the same as that from the RAPID array, while in GLORYS2V3 and SODA3 we find a higher SD of OMET. The OGCM hindcast has a relatively small OMET SD, which is 0.21 PW. In terms of the correlation and standard deviation, ORAS4 and the OGCM hindcast agree well with observations. It is noteworthy that the OGCM does not assimilate ocean data. The simulation is only constrained by the surface fluxes, and this suggests that the surface forcing is a very important driver of OMET variability. To conclude, the heat transport at 26.5 N is too low in these products, but ORAS4 and OGCM hindcasts appear to have reasonable variability.
Figure 6OMET estimated from ORAS4 (blue), GLORYS2V3 (red), SODA3 (green) and the OGCM hindcast (orange) compared to the RAPID array observation (gray) at 26.5 N across the Atlantic basin. The time series of OMET is presented in panel (a). The statistical properties are shown in (b) the Taylor diagram, including bias, correlation (blue), standard deviation (black) and root mean square deviation (green). σ is the standard deviation and μ is the mean of the entire time series.
Moreover, the comparison of time series in the chosen reanalyses and OSNAP observations is given in Fig. 7. Due to the limited length of OMET time series, only ORAS4 and SODA3 are included in the comparison. It can be noticed that the OMET given by ORAS4 is comparable to that in OSNAP in terms of the amplitude and variability. For most of the time within the observation period, OMET in ORAS4 falls into the range of the OSNAP observation including the uncertainty margins. The mean of OMET in ORAS4 is 0.39±0.11 PW, which is quite similar to the mean OMET (0.45±0.07 PW) of OSNAP. However, OMET in SODA3 has a larger mean and standard deviation than that in OSNAP and thus deviates from the observations.
Figure 7OMET estimated from ORAS4 (blue), SODA3 (green) and compared to the OSNAP observation (gray) at subpolar Atlantic basin. The range of uncertainty from OSNAP observation is marked by the red shade. σ is the standard deviation and μ is the mean of the entire time series.
Figure 8Time series of (a) OHC and (b) OHC anomalies with a low-pass filter at the polar cap. The OHC is integrated from surface to the bottom between 60 and 90 N. It is estimated from ORAS4 (blue), GLORYS2V3 (red), SODA3 (green) and the OGCM hindcast (yellow). The shades represent the confidence intervals with 1 standard deviation. σ is the standard deviation and μ is the mean of the entire time series.
Just as in the atmosphere, we would like to study the temperature and meridional current velocity contributions to the ocean heat transport to identify the sources of the difference between products. However, due to the nature of the curvilinear grid, the comparison of local fields after interpolation is not trustworthy. To get further insight, we calculate the OHC, since the convergence of the heat transport is likely related to OHC change. A full budget analysis was not feasible as most data sets did not include the surface fluxes. Figure 8 illustrates the OHC (Fig. 8a) and OHC anomalies (Fig. 8b) quantified from ORAS4, GLORYS2V3, SODA3 and the OGCM hindcast. It depicts the OHC integrated in the polar cap (from 60 to 90 N) over all depths. The mean OHC in ORAS4 is $\mathrm{4.48}±\mathrm{0.78}×{\mathrm{10}}^{\mathrm{22}}$ J, in GLORYS2V3 it is $\mathrm{4.23}±\mathrm{0.59}×{\mathrm{10}}^{\mathrm{22}}$ J, and in SODA3 it is $\mathrm{3.79}±\mathrm{0.93}×{\mathrm{10}}^{\mathrm{22}}$ J, while the OGCM hindcast shows a much larger mean OHC of $\mathrm{7.85}±\mathrm{0.58}×{\mathrm{10}}^{\mathrm{22}}$ J. Actually, we found that the OHC between 60 and 70 N in the OGCM hindcast agrees well with the reanalyses (not shown). Thus, the difference seems to be associated with changes to the sea ice distribution. Given the limited observations in the Arctic to constrain the reanalyses and different assumptions about the Arctic sea ice due to the differences in spatial resolutions, it is more complex than just concluding that the reanalyses are better. The variations of OHC are similar between chosen products. Regarding the OHC anomalies in Fig. 8b, a positive trend of OHC anomalies in the polar cap is captured by each product. However, the variability is different, and these are reflected in the standard deviation of OHC anomalies time series. Increases in surface temperature and OHC are often taken as a sign of AA in many papers (e.g., Serreze and Barry2011). Qualitatively, the trends of OHC in the chosen reanalyses at the polar cap could be taken as a sign of the AA, but it might be just Arctic warming and not necessarily a higher warming rate than the global mean temperature change. A quantitative evaluation of the AA is not possible due to large differences between products. To conclude, there are large differences in OHC between chosen products, while their variations agree relatively well. Since OHC is a function of temperature fields only, this can imply that temperature profiles are different among the chosen ocean reanalysis data sets. The differences of OHC between chosen products are partially consistent with the differences that we found for OMET. However, the OHC anomalies agree better among reanalysis products than the absolute OHC, which indicates that the trend of OHC is captured in a similar way among all the ocean reanalysis products.
3.3 MET and the Arctic
In previous sections, it is found that MET values in different reanalysis products at subpolar and subtropical latitudes differ substantially from each other. In order to further evaluate AMET and OMET given by different reanalyses and to provide more insight, we investigate the links between MET and remote regions. We focus on the Arctic because previous studies indicate a strong role for subpolar MET in low-frequency variability in the Arctic region. Given the complexity of the interaction between MET and the Arctic, and the short time series available, determining cause–effect relations is out of the scope of this paper. We aim to compare the relation between MET and the Arctic within each reanalysis product to investigate the physical plausibility and compare it with previous studies that use data from one reanalysis product or from coupled climate models .
Many of these studies perform linear regressions between a time series of MET and grid-point values of other physical variables. Here, we follow the same procedure and perform linear regressions of sea level pressure (SLP), 2 m temperature (T2M) and sea ice concentration (SIC) anomalies on AMET and OMET anomalies at 60 N for the chosen products. We show linear regressions in summer and winter separately in order to account for the seasonal variability of the relationships. It should also be noted that there are strong trends in OMET, T2M and SIC. We removed them by applying a polynomial fit to the time series on each grid point. We find that the second-order polynomial fit is able to capture the trend without losing variations at interannual timescales. Hereafter, we only address detrended OMET, T2M and SIC. For the sake of consistency, the regressions are carried out on the surface fields included in each respective reanalysis product. For instance, the regression of SLP on AMET estimated from ERA-Interim involves SLP fields from ERA-Interim itself. For the ocean reanalyses, as they all apply forcing derived from ERA-Interim, the regressions are performed on the fields from ERA-Interim. Note that there is a known issue with the quality of the sea ice field close to the North Pole in ERA-Interim, which can be inferred from an evaluation of reanalysis data sets concerning near-surface fields in . Following the regressions performed by and , we repeated the same procedure here with AMET at interannual scales (∼5 years).
First, we investigate the links between MET and the Arctic in winter. The regressions of anomalies of multiple fields on AMET anomalies at 60 N in each atmospheric product in winter are shown in Fig. 9. The regression coefficients reach their maximum when the regressions are instantaneous with given fields. In ERA-Interim and JRA-55, AMET is correlated with SLP over the Greenland, the North Atlantic, the Barents Sea, the Kara Sea and the northern part of the Eurasian continent. It suggests that an increase in subpolar AMET is linked to a northward advection over the Greenland which could bring relatively warm and humid air into the Arctic. Such patterns are consistent with the relatively warm air over the Greenland and part of the central Arctic close to the Eurasian side shown in Fig. 9d and f. Using ERA-40, found a similar correlation between AMET and surface air temperature (SAT) in the Greenland Sea and Barents Sea as Fig. 9d and f, without time lag. This is also consistent with a model study by . The decrease of sea ice concentration with increasing AMET in Baffin Bay and the northern part of the Barents Sea given in Fig. 9g and i is consistent with the relations between AMET and T2M. A further eddy decomposition of AMET following the method from indicates that heat transported by standing eddies has the biggest contribution to the total AMET (not shown), which is consistent with . These patterns are found only in ERA-Interim and JRA-55 but not in MERRA-2. Such different patterns in MERRA-2 likely stem from its shift in AMET around 2000. Hence, there is also large uncertainty in the assertion that heat and humidity transport by stationary eddies contributes to the changes in the subpolar and Arctic regions at interannual timescales.
Figure 9Regressions of sea level pressure, 2 m temperature and sea ice concentration anomalies on AMET anomalies at 60 N in winter (DJF) at interannual timescales with no time lag. The monthly mean fields are used here after taking a running mean of 5 years. Both the 2 m temperature and sea ice concentration are detrended. From left to right, they are the regressions on AMET of (a, d, g) ERA-Interim, (b, e, h) MERRA-2 and (c, f, i) JRA-55. The stippling indicates a significance level of 95 %.
Moreover, similar to and , we investigate the links between the variability of OMET and variations of multiple fields at interannual timescales. The regressions of anomalies of multiple fields on detrended OMET anomalies at 60 N in winter are shown in Fig. 10, with OMET leading by 1 month. It is noteworthy that it takes around 1 to 2 years for the OMET anomalies to propagate from the North Atlantic to the Barents Sea . However, the regression coefficients are maximal when the OMET leads by 1 month, which could be attributed to the implementation of the low-pass filter. In ORAS4 and SODA3, increasing OMET can lead to a decrease in SLP in the Arctic, while in ORAS4 this polar low is much stronger. This seems to indicate that an increase in OMET is related to sea ice melt and increase in T2M around the Nordic Seas. There is an Arctic Oscillation (AO)-/North Atlantic Oscillation (NAO)-like SLP anomaly with the associated large-scale temperature pattern. However, GLORYS2V3 tells an entirely different story. This is mainly due to the difference between OMET in this data set compared to the other ocean data sets during the 1990s, as shown in Fig. 4.
Figure 10Regressions of sea level pressure, 2 m temperature and sea ice concentration anomalies on OMET anomalies at 60 N in winter (DJF) at interannual timescales. OMET leads the fields by 1 month. The 2 m temperature, sea ice concentration and OMET are detrended. From left to right, they are the regressions on OMET of (a, d, g) ORAS4, (b, e, h) GLORYS2V3 and (c, f, i) SODA3. The stippling indicates a significance level of 95 %.
In general, the decrease of OMET leads to an increase in the growth rate of SIC, which is consistent with studies performed with global climate models at decadal to interdecadal timescales . Studies with observations of sea ice in the Barents Sea and OMET across the Barents Sea Opening (BSO) also confirm the strong correlation between the OMET and sea ice variation over the Barents Sea . However, note that some discussed regions are below the significance of 95 %.
In summer, the situation becomes more intricate and unclear. The same regressions of anomalies of multiple fields on AMET and OMET anomalies at 60 N in each reanalysis product in summer are included in the Supplement. It is noted that the consistency of associations between AMET, OMET and multiple fields is better in winter than that in summer within the chosen products. Atmospheric dynamical processes are more dominant in winter, which is also reflected in large-scale patterns of variability such as the AO and NAO which are more pronounced in winter than in summer . Therefore, the regressions of SLP, T2M and SIC on AMET in winter are easier to understand than those in summer.
In this section, we compared the reanalysis data with findings from previous studies. We found that ERA-Interim and JRA-55 are most consistent with the results given by coupled numerical models in winter, while MERRA-2 does not corroborate model studies. For the ocean, results from ORAS4 and SODA3 are more consistent with literature in winter. However, given the low statistical significance and the difference among chosen products, it is still hard to determine which atmospheric product provides more convincing plausible interannual variations in AMET.
4 Discussion
In this study, we found substantial differences between reanalysis products with respect to MET. In order to improve the accuracy of the variability of AMET and OMET estimated from reanalyses, one needs more observations to constrain the models. Vertical profiles differ substantially between products, and surface and top-of-the-atmosphere radiation budget are too uncertain to constrain variability in the different products. Note that reanalyses do not assimilate direct observation of the TOA and surface energy budgets. Climate models already provide information on the interaction between atmosphere and ocean and connections provided by the energy transport from midlatitudes to high latitudes . This can potentially sketch the mechanism of the interaction between energy transport and the Arctic climate change. Moreover, some studies point out that the latent heat is more influential on the Arctic sea ice rather than the dry static energy . With improved reanalysis products and independent observations, such as ocean mooring arrays and atmospheric in situ and satellite observations, to validate the reanalyses, the validity of these mechanisms can be further studied.
The regression of SIC on OMET suggests that sea ice variations are sensitive to changes of meridional energy transport at subpolar latitudes, which is noticed by other studies on SIC and MET as well . ORAS4 and SODA3 show a large anticorrelation between SIC and OMET in winter around the Greenland Sea and the Barents Sea. However, GLORYS2V3 does not show this relation. The differences in OMET are reflected in the regressions on sea ice. The strong connection between OMET from mid-to-high latitudes and the Arctic sea ice indicates an indirect link between midlatitudes and the Arctic. Many studies that explored these remote links found large-scale “horseshoe” and dipole patterns over the Atlantic . However, the physical mechanism remains disputable. propose that the multiple linkages between the Arctic and midlatitudes are based on the amplification of existing jet stream wave patterns, which might also be driven by tropical and midlatitude sea surface temperature (SST) anomalies . lists possible pathways for the teleconnection between the Arctic and midlatitudes, including changes in storm tracks, the jet stream, and planetary waves and their associated energy propagation. However, due to the shortness of time series, a small signal-to-noise ratio, uncertain external forcing and the internal atmospheric variability , this question has no easy answer.
Previous studies have shown that the variations of total OMET are very sensitive to the changes of its overturning component (e.g., McCarthy et al.2015; Lozier et al.2019). Hence, the Atlantic meridional overturning circulation (AMOC) may serve as an indicator of the changes of OMET. In our case, a quantitative estimation of the difference in the AMOC among the chosen data sets is beyond the scope of this paper. However, the downward trend of AMOC, which has been reported by several studies , is consistent the downward trend observed in OMET at 60 N in our chosen oceanic reanalyses (see Fig. 4). After analyzing six oceanic reanalysis data sets, find the reanalysis products are not consistent in their year-to-year AMOC variations. The discrepancy between AMOC represented by each reanalysis product may explain the differences in OMET in each reanalysis data set.
5 Conclusions
This study aimed to quantify and intercompare AMET and OMET variability from three atmospheric and three oceanic reanalysis data sets at subpolar latitudes. It also serves to illustrate the relation between AMET and OMET with high-latitude climate characteristics. The study is motivated by previous studies with coupled models that show a strong relation between meridional energy transport and sea ice. It is also motivated by previous studies with reanalysis data, where generally only one reanalysis data set is considered and which include mostly only oceanic or atmospheric analysis.
All selected data sets agree on the mean AMET and OMET in the Northern Hemisphere. The results are consistent with those achieved over the previous 20 years . However, when it comes to anomalies at interannual timescales, they differ from each other both spatially and temporally. The variations between ERA-Interim and JRA-55 are small, while MERRA-2 is very different from them. Although there is an overlap of observational data assimilated by different reanalysis products, large deviations still exist in many fields, especially for the vertical profiles of temperature and velocity in atmospheric reanalyses, which were also reported by some reanalysis quality reports . A further investigation of the relations between multiple fields in the Arctic and meridional energy transport shows that the Arctic climate is sensitive to the variations of AMET and OMET in winter. The patterns in ERA-Interim and JRA-55 are more consistent in winter. For the ocean, ORAS4 and SODA3 provide similar patterns in winter. Based on our results, it seems that the interannual variability of AMET and OMET cannot be constrained by the available observations. The existence of sources and sinks in reanalysis data sets introduces large uncertainties in the computation of energy transport . Although the reanalysis data sets are not specifically designed for the studies on energy transport, given the good agreement on mean AMET and OMET and their annual cycles among assessed reanalysis products, we still recommend to use these reanalysis products for the energy transport diagnostics. However, much care should be taken when adopting reanalyses for the examination of energy transport at relatively long timescales. The robustness of those results based on the AMET and OMET estimated from reanalyses should be further assessed.
Code and data availability
Code and data availability.
The reanalysis products used in this study are open to the public and they are available online. The computation and post-processing were carried out with the Python package META (https://doi.org/10.5281/zenodo.3609505; ).
Supplement
Supplement.
Author contributions
Author contributions.
YL, JA and WH designed this study, performed computations using reanalyses and analyzed the results. BM performed OGCM simulation and contributed to the analysis.
Competing interests
Competing interests.
The authors declare that they have no conflict of interest.
Acknowledgements
Acknowledgements.
The research was supported by the Netherlands eScience Center, Wageningen University, the National Oceanography Center in the UK and Blue Action project (European Union's Horizon 2020 research and innovation programme, grant no. 727852). The high-resolution NEMO ORCA hindcast was completed in the North Atlantic Climate System: Integrated Study (ACSIS) project (grant no. NE/N018044/1). The authors are grateful for the high performance computational infrastructure (HPC cloud and Cartesius) provided by SURFsara in the Netherlands. We would like to express our gratitude to all the researchers working on the reanalysis data sets and making the data open to the public. We also want to thank the OSNAP (Overturning in the Subpolar North Atlantic Program; https://doi.org/10.7924/r4z60gf0f) project and the RAPID-AMOC program (https://doi.org/10/bkzc) for making the observation data in the North Atlantic freely available. We also acknowledge the editor, Gerrit Lohmann, and our two anonymous reviewers for their help in improving the manuscript.
Financial support
Financial support.
This research has been supported by the Blue Action project (European Union's Horizon 2020 research and innovation programme, grant no. 727852).
Review statement
Review statement.
This paper was edited by Gerrit Lohmann and reviewed by two anonymous referees.
References
Årthun, M., Eldevik, T., Smedsrud, L., Skagseth, Ø., and Ingvaldsen, R.: Quantifying the influence of Atlantic heat on Barents Sea ice variability and retreat, J. Climate, 25, 4736–4743, 2012. a, b
Balmaseda, M. A., Mogensen, K., and Weaver, A. T.: Evaluation of the ECMWF ocean reanalysis system ORAS4, Q. J. Roy. Meteorol. Soc., 139, 1132–1161, 2013. a, b, c, d
Barnes, E. A. and Screen, J. A.: The impact of Arctic warming on the midlatitude jet-stream: Can it? Has it? Will it?, Wiley Interdisciplin. Rev.: Clim. Change, 6, 277–286, 2015. a
Berrisford, P., Dee, D., Fielding, K., Fuentes, M., Kallberg, P., Kobayashi, S., and Uppala, S.: The ERA-interim archive, ERA report series, ECMWF, Reading, 1–16, 2009. a
Berrisford, P., Kållberg, P., Kobayashi, S., Dee, D., Uppala, S., Simmons, A., Poli, P., and Sato, H.: Atmospheric conservation properties in ERA-Interim, Q. J. Ro. Meteorol. Soc., 137, 1381–1399, 2011. a, b, c
Brodeau, L., Barnier, B., Treguier, A.-M., Penduff, T., and Gulev, S.: An ERA40-based atmospheric forcing for global ocean circulation models, Ocean Model., 31, 88–104, 2010. a
Bryan, K.: Measurements of meridional heat transport by ocean currents, J. Geophys. Res., 67, 3403–3414, 1962. a
Carton, J. A., Chepurin, G. A., and Chen, L.: SODA3: a new ocean climate reanalysis, J. Climate, 31, 6967–6983, https://doi.org/10.1175/JCLI-D-18-0149.1, 2018. a, b, c
Chiodo, G. and Haimberger, L.: Interannual changes in mass consistent energy budgets from ERA-Interim and satellite data, J. Geophys. Res.-Atmos., 115, D02112, https://doi.org/10.1029/2009JD012049, 2010. a
Cohen, J., Screen, J. A., Furtado, J. C., Barlow, M., Whittleston, D., Coumou, D., Francis, J., Dethloff, K., Entekhabi, D., Overland, J., and Jones, J.: Recent Arctic amplification and extreme mid-latitude weather, Nat. Geosci., 7, 627–637, 2014. a
Comiso, J. C. and Hall, D. K.: Climate trends in the Arctic as observed from space, Wiley Interdisciplin. Rev.: Clim. Change, 5, 389–409, 2014. a
Curry, J. A., Schramm, J. L., and Ebert, E. E.: Sea ice-albedo climate feedback mechanism, J. Climate, 8, 240–247, 1995. a
Czaja, A. and Frankignoul, C.: Observed impact of Atlantic SST anomalies on the North Atlantic Oscillation, J. Climate, 15, 606–623, 2002. a
Dee, D. P., Uppala, S. M., Simmons, A. J., Berrisford, P., Poli, P., Kobayashi, S., Andrae, U., Balmaseda, M. A., Balsamo, G., Bauer, D. P., Bechtold, P., Beljaars, A. C. M., van de Berg, L., Bidlot, J., Bormann, N., Delsol, C., Dragani, R., Fuentes, M., Geer, A. J., Haimberger, L., Healy, S. B., Hersbach, H., Hólm, E. V., Isaksen, L., Kållberg, P., Köhler, M., Matricardi, M., McNally, A. P., Monge-Sanz, B. M., Morcrette, J.-J., Park, B.-K., Peubey, C., de Rosnay, P., Tavolato, C., Thépaut, J.-N., and Vitart, F.: The ERA-Interim reanalysis: Configuration and performance of the data assimilation system, Q. J. Roy. Meteorol. Soc., 137, 553–597, 2011. a, b
Delworth, T. L., Rosati, A., Anderson, W., Adcroft, A. J., Balaji, V., Benson, R., Dixon, K., Griffies, S. M., Lee, H. C., Pacanowski, R. C., Vecchi, G. A., Wittenberg, A. T., Zeng, F., and Zhang, R.: Simulated climate and climate change in the GFDL CM2.5 high-resolution coupled climate model, J. Climate, 25, 2755–2781, 2012. a
Delworth, T. L., Zeng, F., Zhang, L., Zhang, R., Vecchi, G. A., and Yang, X.: The central role of ocean dynamics in connecting the North Atlantic Oscillation to the extratropical component of the Atlantic Multidecadal Oscillation, J. Climate, 30, 3789–3805, 2017. a
Dufour, A., Zolina, O., and Gulev, S. K.: Atmospheric moisture transport to the Arctic: Assessment of reanalyses and analysis of transport components, J. Climate, 29, 5061–5081, 2016. a
Dussin, R., Barnier, B., Brodeau, L., and Molines, J.: The making of the Drakkar forcing set DFS5, DRAKKAR/MyOcean Rep., Grenoble, France, 01–04, 2016. a
ECMWF: ERA5: Fifth generation of ECMWF atmospheric reanalyses of the global climate, Copernicus Climate Change Service Climate Data Store (CDS), 2017. a
Fasullo, J. T. and Trenberth, K. E.: The annual cycle of the energy budget. Part II: Meridional structures and poleward transports, J. Climate, 21, 2313–2325, 2008. a, b, c
Ferry, N., Parent, L., Garric, G., Barnier, B., Jourdain, N. C., and the Mercator Ocean Team: Mercator global Eddy permitting ocean reanalysis GLORYS1V1: Description and results, Mercator-Ocean Quart. Newslett., 36, 15–27, 2010. a
Ferry, N., Barnier, B., Garric, G., Haines, K., Masina, S., Parent, L., Storto, A., Valdivieso, M., Guinehut, S., and Mulet, S.: NEMO: the modeling engine of global ocean reanalyses, Mercator-Ocean Quart. Newslett., 46, 46–59, 2012a. a
Ferry, N., Parent, L., Garric, G., Bricaud, C., Testut, C. E., Le Galloudec, O., Lellouche, J. M., Drevillon, M., Greiner, E., Barnier, B., Molines, J. M., Jourdain, N., Guinehut, S., Cabanes, C., and Zawadzki, L.: GLORYS2V1 global ocean reanalysis of the altimetric era (1992–2009) at meso scale, Mercator Ocean-Quart. Newslett., 44, 28–39, 2012b. a, b, c
Francis, J. A., Vavrus, S. J., and Cohen, J.: Amplified Arctic warming and mid-latitude weather: new perspectives on emerging connections, Wiley Interdisciplin. Rev.: Clim. Change, 8, e474, https://doi.org/10.1002/wcc.474, 2017. a
Ganachaud, A. and Wunsch, C.: Improved estimates of global ocean circulation, heat transport and mixing from hydrographic data, Nature, 408, 453–457, 2000. a
Ganachaud, A. and Wunsch, C.: Large-scale ocean heat and freshwater transports during the World Ocean Circulation Experiment, J. Climate, 16, 696–705, 2003. a
Gastineau, G. and Frankignoul, C.: Influence of the North Atlantic SST variability on the atmospheric circulation during the twentieth century, J. Climate, 28, 1396–1416, 2015. a
Gelaro, R., McCarty, W., Suárez, M. J., Todling, R., Molod, A., Takacs, L., Randles, C. A., Darmenov, A., Bosilovich, M. G., Reichle, R., Wargan, K., Coy, L., Cullather, R., Draper, C., Akella, S., Buchard, V., Conaty, A., da Silva, A. M., Gu, W., Kim, G.-K., Koster, R., Lucchesi, R., Merkova, D., Nielsen, J. E., Partyka, G., Pawson, S., Putman, W., Rieneckera, M., Schuberta, S. D., Sienkiewicza, M., and Zhao, B.: The modern-era retrospective analysis for research and applications, version 2 (MERRA-2), J. Climate, 30, 5419–5454, 2017. a, b, c, d, e
Gent, P. R. and Mcwilliams, J. C.: Isopycnal mixing in ocean circulation models, J. Phys. Oceanogr., 20, 150–155, 1990. a, b
Gimeno-Sotelo, L., Nieto, R., Vázquez, M., and Gimeno, L.: The role of moisture transport for precipitation in the inter-annual and inter-daily fluctuations of the Arctic sea ice extension, Earth Syst. Dynam., 10, 121–133, https://doi.org/10.5194/esd-10-121-2019, 2019. a
Goosse, H., Kay, J. E., Armour, K. C., Bodas-Salcedo, A., Chepfer, H., Docquier, D., Jonko, A., Kushner, P. J., Lecomte, O., Massonnet, F., Park, H. S., Pithan, F., Svensson, G., and Vancoppenolle, M. : Quantifying climate feedbacks in polar regions, Nat. Commun., 9, 1919, https://doi.org/10.1038/s41467-018-04173-0, 2018. a
Graversen, R. G.: Do changes in the midlatitude circulation have any impact on the Arctic surface air temperature trend?, J. climate, 19, 5422–5438, 2006. a, b, c, d, e, f, g
Graversen, R. G. and Burtu, M.: Arctic amplification enhanced by latent energy transport of atmospheric planetary waves, Q. J. Roy. Meteorol. Soc., 142, 2046–2054, 2016. a, b, c
Graversen, R. G., Källén, E., Tjernström, M., and Körnich, H.: Atmospheric mass-transport inconsistencies in the ERA-40 reanalysis, Q. J. Roy. Meteorol. Soc., 133, 673–680, 2007. a
Graversen, R. G., Mauritsen, T., Tjernström, M., Källén, E., and Svensson, G.: Vertical structure of recent Arctic warming, Nature, 451, 53–56, 2008. a, b, c, d
Hall, M. M. and Bryden, H. L.: Direct estimates and mechanisms of ocean heat transport, Deep-Sea Res. Pt. A, 29, 339–359, 1982. a, b
Harada, Y., Kamahori, H., Kobayashi, C., Endo, H., Kobayashi, S., Ota, Y., Onoda, H., Onogi, K., Miyaoka, K., and Takahashi, K.: The JRA-55 Reanalysis: Representation of atmospheric circulation and climate variability, J. Meteorol. Soc. Jpn. Ser. II, 94, 269–302, 2016. a, b
Johns, W. E., Baringer, M. O., Beal, L. M., Cunningham, S. A., Kanzow, T., Bryden, H. L., Hirschi, J. J. M., Marotzke, J., Meinen, C. S., Shaw, B., and Curry, R.: Continuous, array-based estimates of Atlantic Ocean heat transport at 26.5 N, J. Climate, 24, 2429–2449, 2011. a, b
Jourdan, D., Balopoulos, E., Garcia-Fernandez, M.-J., and Maillard, C.: Objective analysis of temperature and salinity historical data set over the Mediterranean Basin, in: IEEE OCEANS'98 Conference Proceedings, vol. 1, 28 September–1 October 1998, Nice, France, 82–87, 1998. a
Jungclaus, J. H. and Koenigk, T.: Low-frequency variability of the arctic climate: the role of oceanic and atmospheric heat transport variations, Clim. Dynam., 34, 265–279, 2010. a, b, c, d, e, f, g, h, i, j
Kapsch, M.-L., Graversen, R. G., and Tjernström, M.: Springtime atmospheric energy transport and the control of Arctic summer sea-ice extent, Nat. Clim. Change, 3, 744–748, 2013. a, b, c, d, e
Karspeck, A. R., Stammer, D., Köhl, A., Danabasoglu, G., Balmaseda, M., Smith, D. M., Fujii, Y., Zhang, S., Giese, B., Tsujino, H., and Rosati, A.: Comparison of the Atlantic meridional overturning circulation between 1960 and 2007 in six ocean reanalysis products, Clim. Dynam., 49, 957–982, 2017. a
Kobayashi, S., Ota, Y., Harada, Y., Ebita, A., Moriya, M., Onoda, H., Onogi, K., Kamahori, H., Kobayashi, C., Endo, H., Miyaoka, K., and Takahashi, K.: The JRA-55 reanalysis: General specifications and basic characteristics, J. Meteorol. Soc. Jpn. Ser. II, 93, 5–48, 2015. a
Levitus, S., Boyer, T., Conkright, M., O'brien, T., Antonov, J., Stephens, C., Stathoplos, L., Johnson, D., and Gelfeld, R.: NOAA Atlas NESDIS 18, World Ocean Database 1998: vol. 1: Introduction, US Government Printing Office, Washington, D.C., 346 pp., 1998. a
Lian, M. and Cess, R.: Energy balance climate models: A reappraisal of ice-albedo feedback, J. Atmos. Sci., 34, 1058–1062, 1977. a
Lindsay, R., Wensnahan, M., Schweiger, A., and Zhang, J.: Evaluation of seven different atmospheric reanalysis products in the Arctic, J. Climate, 27, 2588–2606, 2014. a
Liu, Y.: Meridional Energy Transport Analyzer (META), https://doi.org/10.5281/zenodo.3609505, 2020. a
Lozier, M. S., Li, F., Bacon, S., Bahr, F., Bower, A. S., Cunningham, S. A., De Jong, M. F., De Steur, L., Deyoung, B., Fischer, J., Gary, S. F., Greenan, B. J. W., Holliday, N. P., Houk, A., Houpert, L., Inall, M. E., Johns, W. E., Johnson, H. L., Johnson, C., Karstensen, J., Koman, G., Le Bras, I. A., Lin, X., Mackay, N., Marshall, D. P., Mercier. H., Oltmanns, M., Pickart, R. S., Ramsey, A. L., Rayner, D., Straneo, F., Thierry, V., Torres, D. J., Williams, R. G., Wilson, C., Yang, J., Yashayaev, I., and Zhao, J.: A sea change in our view of overturning in the subpolar North Atlantic, Science, 363, 516–521, 2019. a, b, c
Madec, G.: NEMO reference manual, ocean dynamic component: NEMO-OPA, Note du Pôle modélisation, Inst. Pierre Simon Laplace, France, 2008. a, b
Marzocchi, A., Hirschi, J. J.-M., Holliday, N. P., Cunningham, S. A., Blaker, A. T., and Coward, A. C.: The North Atlantic subpolar circulation in an eddy-resolving global ocean model, J. Mar. Syst., 142, 126–143, 2015. a
Masina, S., Storto, A., Ferry, N., Valdivieso, M., Haines, K., Balmaseda, M., Zuo, H., Drevillon, M., and Parent, L.: An ensemble of eddy-permitting global ocean reanalyses from the MyOcean project, Clim. Dynam., 49, 813–841, 2017. a
Mayer, M. and Haimberger, L.: Poleward atmospheric energy transports and their variability as evaluated from ECMWF reanalysis data, J. Climate, 25, 734–752, 2012. a, b, c
Mayer, M., Haimberger, L., Pietschnig, M., and Storto, A.: Facets of Arctic energy accumulation based on observations and reanalyses 2000–2015, Geophys. Res. Lett., 43, 10–420, 2016. a
Mayer, M., Haimberger, L., Edwards, J. M., and Hyder, P.: Toward consistent diagnostics of the coupled atmosphere and ocean energy budgets, J. Climate, 30, 9225–9246, 2017. a, b, c, d
Mayer, M., Tietsche, S., Haimberger, L., Tsubouchi, T., Mayer, J., and Zuo, H.: An improved estimate of the coupled Arctic energy budget, J. Climate, 32, 7915–7934, 2019. a
McCarthy, G., Smeed, D., Johns, W., Frajka-Williams, E., Moat, B., Rayner, D., Baringer, M., Meinen, C., Collins, J., and Bryden, H.: Measuring the Atlantic meridional overturning circulation at 26 N, Prog. Oceanogr., 130, 91–111, 2015. a, b, c, d
Miller, G. H., Alley, R. B., Brigham-Grette, J., Fitzpatrick, J. J., Polyak, L., Serreze, M. C., and White, J. W.: Arctic amplification: can the past constrain the future?, Quaternary Sci. Rev., 29, 1779–1790, 2010. a
Moat, B. I., Josey, S. A., Sinha, B., Blaker, A. T., Smeed, D. A., McCarthy, G. D., Johns, W. E., Hirschi, J. M., Frajka‐Williams, E., Rayner, D., Duchez, A., and Coward, A. C.: Major variations in subtropical North Atlantic heat transport at short (5 day) timescales and their causes, J. Geophys. Res.-Oceans, 121, 3237–3249, 2016. a, b, c
Mogensen, K., Balmaseda, M. A., and Weaver, A.: The NEMOVAR ocean data assimilation system as implemented in the ECMWF ocean analysis for System 4, European Centre for Medium-Range Weather Forecasts, Toulouse, France, 2012. a
Molod, A., Takacs, L., Suarez, M., and Bacmeister, J.: Development of the GEOS-5 atmospheric general circulation model: evolution from MERRA to MERRA2, Geosci. Model Dev., 8, 1339–1356, https://doi.org/10.5194/gmd-8-1339-2015, 2015. a
Nummelin, A., Li, C., and Hezel, P. J.: Connecting ocean heat transport changes from the midlatitudes to the Arctic Ocean, Geophys. Res. Lett., 44, 1899–1908, 2017. a
Oltmanns, M., Karstensen, J., and Fischer, J.: Increased risk of a shutdown of ocean convection posed by warm North Atlantic summers, Nat. Clim. Change, 8, 300–304, 2018. a
Onarheim, I. H., Eldevik, T., Årthun, M., Ingvaldsen, R. B., and Smedsrud, L. H.: Skillful prediction of Barents Sea ice cover, Geophys. Res. Lett., 42, 5364–5371, 2015. a
Oort, A. H. and Vonder Haar, T. H.: On the observed annual cycle in the ocean-atmosphere heat balance over the Northern Hemisphere, J. Phys. Oceanogr., 6, 781–800, 1976. a
Outten, S., Esau, I., and Otterå, O. H.: Bjerknes Compensation in the CMIP5 Climate Models, J. Climate, 31, 8745–8760, 2018. a, b
Overland, J., Francis, J. A., Hall, R., Hanna, E., Kim, S.-J., and Vihma, T.: The melting Arctic and midlatitude weather patterns: Are they connected?, J. Climate, 28, 7917–7932, 2015. a
Overland, J. E.: A difficult Arctic science issue: Midlatitude weather linkages, Polar Science, 10, 210–216, 2016. a, b
Peixoto, J. P. and Oort, A. H.: Physics of climate, American Institute of Physics, USA, 520 pp., ISBN 0883187124, 9780883187128, 1992. a
Putman, W. M. and Lin, S. J.: Finite-volume transport on various cubed-sphere grids, J. Comput. Phys., 227, 55–78, 2007. a
Riser, S. C., Freeland, H. J., Roemmich, D., Wijffels, S., Troisi, A., Belbéoch, M., Gilbert, D., Xu, J., Pouliquen, S., Thresher, A., Traon, P. L., Maze, G., Klein, B., Ravichandran, M., Grant, F., Poulain, P., Suga, T., Lim, B., Sterl, A., Sutton, P., Mork, K., Vélez-Belchí, P. J., Ansorge, I., King, B., Turton, J., Baringer, M., and Jayne, S. R.: Fifteen years of ocean observations with the global Argo array, Nat. Clim. Change, 6, 145–153, 2016. a
Sandø, A., Gao, Y., and Langehaug, H.: Poleward ocean heat transports, sea ice processes, and Arctic sea ice variability in NorESM1-M simulations, J. Geophys. Res.-Oceans, 119, 2095–2108, 2014. a
Schauer, U. and Beszczynska-Möller, A.: Problems with estimation and interpretation of oceanic heat transport – conceptual remarks for the case of Fram Strait in the Arctic Ocean, Ocean Sci., 5, 487–494, https://doi.org/10.5194/os-5-487-2009, 2009. a
Screen, J. A. and Francis, J. A.: Contribution of sea-ice loss to Arctic amplification is regulated by Pacific Ocean decadal variability, Nat. Clim. Change, 6, 856–860, 2016. a
Serreze, M. C. and Barry, R. G.: Processes and impacts of Arctic amplification: A research synthesis, Global Planet. Change, 77, 85–96, 2011. a, b
Shaffrey, L. and Sutton, R.: Bjerknes compensation and the decadal variability of the energy transports in a coupled climate model, J. Climate, 19, 1167–1181, 2006. a
Simmons, A., Poli, P., Dee, D., Berrisford, P., Hersbach, H., Kobayashi, S., and Peubey, C.: Estimating low-frequency variability and trends in atmospheric temperature using ERA-Interim, Q. J. Roy. Meteorol. Soc., 140, 329–353, 2014. a
Simmons, A., Berrisford, P., Dee, D., Hersbach, H., Hirahara, S., and Thépaut, J.-N.: A reassessment of temperature variations and trends from global reanalyses and monthly surface climatological datasets, Q. J. Roy. Meteorol. Soc., 143, 101–119, 2017. a
Smeed, D. A., McCarthy, G. D., Cunningham, S. A., Frajka-Williams, E., Rayner, D., Johns, W. E., Meinen, C. S., Baringer, M. O., Moat, B. I., Duchez, A., and Bryden, H. L.: Observed decline of the Atlantic meridional overturning circulation 2004–2012, Ocean Sci., 10, 29–38, https://doi.org/10.5194/os-10-29-2014, 2014. a
Steele, M., Morley, R., and Ermold, W.: PHC: A global ocean hydrography with a high-quality Arctic Ocean, J. Climate, 14, 2079–2087, 2001. a
Stepanov, V. N. and Haines, K.: Mechanisms of Atlantic Meridional Overturning Circulation variability simulated by the NEMO model, Ocean Sci., 10, 645–656, https://doi.org/10.5194/os-10-645-2014, 2014. a
Susan Lozier, M., Bacon, S., Bower, A. S., Cunningham, S. A., Femke de Jong, M., De Steur, L., deYoung, B., Fischer, J., Gary, S. F., Greenan, B. J., Heimbach, P., Holliday, N. P., Houpert, L., Inall, M. E., Johns, W. E., Johnson, H. L., Karstensen, J., Li, F., Lin, X., Mackay, N., Marshall, D. P., Mercier, H., Myers, P. G., Pickart, R. S., Pillar, H. R., Straneo, F., Thierry, V., Weller, R. A., Williams, R. G., Wilson, C., Yang, J., Zhao, J., and Zika, Z. D.: Overturning in the Subpolar North Atlantic Program: A new international ocean observing system, B. Am. Meteorol. Soc., 98, 737–752, 2017. a, b
Svendsen, L., Keenlyside, N., Bethke, I., Gao, Y., and Omrani, N.-E.: Pacific contribution to the early twentieth-century warming in the Arctic, Nat. Clim. Change, 8, 793–797, 2018. a
Trenberth, K. E.: Climate diagnostics from global analyses: Conservation of mass in ECMWF analyses, J. Climate, 4, 707–722, 1991. a, b, c, d, e
Trenberth, K. E. and Caron, J. M.: Estimates of meridional atmosphere and ocean heat transports, J. Climate, 14, 3433–3443, 2001. a, b, c, d
Trenberth, K. E. and Fasullo, J. T.: An observational estimate of inferred ocean energy divergence, J. Phys. Oceanogr., 38, 984–999, 2008. a
Trenberth, K. E. and Fasullo, J. T.: Applications of an Updated Atmospheric Energetics Formulation, J. Climate, 31, 6263–6279, 2018. a
Trenberth, K. E. and Solomon, A.: The global heat balance: Heat transports in the atmosphere and ocean, Clim. Dynam., 10, 107–134, 1994. a, b
Trenberth, K. E., Stepaniak, D. P., and Caron, J. M.: Accuracy of atmospheric energy budgets from analyses, J. Climate, 15, 3343–3360, 2002. a, b
Uotila, P., Goosse, H., Haines, K., Chevallier, M., Barthélemy, A., Bricaud, C., Carton, J., Fučkar, N., Garric, G., Iovino, D., Kauker, F., Korhonen, M., Lien, V. S., Marnela, M., Massonnet, F., Mignac, D., Peterson, K. A., Sadikni, R., Shi, L., Tietsche, S., Toyoda, T., Xie, J., and Zhang, J.: An assessment of ten ocean reanalyses in the polar regions, Clim. Dynam., 52, 1613–1650, 2018. a
Uppala, S. M., Kållberg, P., Simmons, A., Andrae, U., Bechtold, V. D. C., Fiorino, M., Gibson, J., Haseler, J., Hernandez, A., Kelly, G., Li, X., Onogi, K., Saarinen, S., Sokka, N., Allan, R. P., Andersson, E., Arpe, K., Balmaseda, M. A., Beljaars, A. C. M., Van De Berg, L., Bidlot, J., Bormann, N., Caires, S., Chevallier, F., Dethof, A., Dragosavac, M., Fisher, M., Fuentes, M., Hagemann, S., Hólm, E., Hoskins, B. J., Isaksen, L., Janssen, P. A. E. M., Jenne, R., Mcnally, A. P., Mahfouf, J. F., Morcrette, J. J., Rayner, N. A., Saunders, R. W., Simon, P., Sterl, A., Trenberth, K. E., Untch, A., Vasiljevic, D., Viterbo, P., and Woolen, J.: The ERA-40 re-analysis, Q. J. Roy. Meteorol. Soc., 131, 2961–3012, 2005. a
van der Linden, E. C., Bintanja, R., Hazeleger, W., and Graversen, R. G.: Low-frequency variability of surface air temperature over the Barents Sea: causes and mechanisms, Clim. Dynam., 47, 1247–1262, 2016. a, b, c
Van der Swaluw, E., Drijfhout, S., and Hazeleger, W.: Bjerknes compensation at high northern latitudes: The ocean forcing the atmosphere, J. Climate, 20, 6023–6032, 2007. a, b, c, d, e, f, g, h, i
Vonder Haar, T. H. and Oort, A. H.: New estimate of annual poleward energy transport by Northern Hemisphere oceans, J. Phys. Oceanogr., 3, 169–172, 1973. a
von Schuckmann, K., Le Traon, P.-Y., Smith, N., Pascual, A., Brasseur, P., Fennel, K., Djavidnia, S., Aaboe, S., Fanjul, E. A., Autret, E., Axell, L., Aznar, R., Benincasa, M., Bentamy, A., Boberg, F., Bourdallé-Badie, R., Nardelli, B. B., Brando, V. E., Bricaud, C., Breivik, L., Brewin, R. J. W., Capet, A., Ceschin, A., Ciliberti, S., Cossarini, G., de Alfonso, M., Collar, A. P., de Kloe, J., Deshayes, J., Desportes, C., Drévillon, M., Drillet, Y., Droghei, R., Dubois, C., Embury, O., Etienne, H., Fratianni, C., Lafuente, J. G., Sotillo, M. G., Garric, G., Gasparin, F., Gerin, F., Good, S., Gourrion, J., Grégoire, M., Greiner, E., Guinehut, S., Gutknecht, E., Hernandez, F., Hernandez, O., Høyer, J., Jackson, L., Jandt, S., Josey, S., Juza, M., Kennedy, J., Kokkini, Z., Korres, G., Kõuts, M., Lagemaa, P., Lavergne, T., Cann, B. I., Legeais, J.-F., Lemieux-Dudon, B., Levier, B., Lien, V., Maljutenko, I., Manzano, F., Marcos, M., Marinova, V., Masina, S., Mauri, E., Mayer, M., Melet, A., Mélin, F., Meyssignac, B., Monier, M., Müller, M., Mulet, S., Naranjo, C., Notarstefano, G., Paulmier, A., Gomez, B. P., Gonzalez, I. P., Peneva, E., Perruche, C., Peterson, K. A., Pinardi, N., Pisano, A., Pardo, S., Poulain, P.-M., Raj, R. P., Raudsepp, U., Ravdas, M., Reid, R., Rio, M.-H., Salon, S., Samuelsen, A., Sammartino, M., Sammartino, S., Sandø, A. B., Santoleri, R., Sathyendranath, S., She, J., Simoncelli, S., Solidoro, C., Stoffelen, A., Storto, A., Szerkely, T., Tamm, S., Tietsche, S., Tinker, J., Tintore, J., Trindade, A., van Zanten, D., Vandenbulcke, L., Verhoef, A., Verbrugge, N., Viktorsson, L., von Schuckmann, K., Wakelin, S. L., Zacharioudaki, A., and Zuo, H.: Copernicus marine service ocean state report, J. Operat. Oceanogr., 11, S1–S142, 2018. a
Wunsch, C.: The total meridional heat flux and its oceanic and atmospheric partition, J. Climate, 18, 4374–4380, 2005. a, b
Yang, X.-Y., Fyfe, J. C., and Flato, G. M.: The role of poleward energy transport in Arctic temperature evolution, Geophys. Res. Lett., 37, L14803, https://doi.org/10.1029/2010GL043934, 2010. a
Zhang, R.: Mechanisms for low-frequency variability of summer Arctic sea ice extent, P. Natl. Acad. Sci. USA, 112, 4570–4575, https://doi.org/10.1073/pnas.1422296112, 2015. a
Zheng, Y. and Giese, B. S.: Ocean heat transport in simple ocean data assimilation: Structure and mechanisms, J. Geophys. Res.-Oceans, 114, C11009, https://doi.org/10.1029/2008JC005190, 2009. a, b
|
2020-03-29 08:34:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 15, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6674917340278625, "perplexity": 5652.63310201637}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370494064.21/warc/CC-MAIN-20200329074745-20200329104745-00410.warc.gz"}
|
https://eccc.weizmann.ac.il/report/2000/051/
|
Under the auspices of the Computational Complexity Foundation (CCF)
REPORTS > DETAIL:
### Paper:
TR00-051 | 14th July 2000 00:00
#### Approximation Algorithms for MAX-BISECTION on Low Degree Regular Graphs and Planar Graphs
TR00-051
Authors: Marek Karpinski, Miroslaw Kowaluk, Andrzej Lingas
Publication: 14th July 2000 12:56
Keywords:
Abstract:
The max-bisection problem is to find a partition of the vertices of a
graph into two equal size subsets that maximizes the number of edges with
endpoints in both subsets.
We obtain new improved approximation ratios for the max-bisection problem on
the low degree \$k\$-regular graphs for \$3\le k\le 8,\$ by deriving some improved
transformations from a maximum cut into a maximum bisection partition. In the
case of three regular graphs we obtain an approximation ratios of 0.805, and
0.812, respectively.
We also present the first polynomial time approximation scheme for the
max-bisection problem for planar graphs of a sublinear degree.
ISSN 1433-8092 | Imprint
|
2020-10-27 02:38:25
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8109572529792786, "perplexity": 2893.40827618658}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107893011.54/warc/CC-MAIN-20201027023251-20201027053251-00660.warc.gz"}
|
https://techfi.tech/state-management-comparison-react-context-redux/
|
When it comes to React, managing state is an essential concept helping us synchronize the state of React Apps throughout all components. The simplest way to pass data from one component to another is to put it directly to the children that require it as a prop. But how can we take the data from a deep component or repeat it in many different parts? Here is when Redux and React Context come into play. Although both of them are used to handle data in React, there are still some things that make them distinctive.
## What is State Management?
Since the state in React Apps can be represented in any data type as well as it can be allocated throughout components, we need to manage the state. In simple words, state management is a logic of storing and keeping track of data displayed on the front-end. For instance, it can tell us if the theme is set to dark or light, the button is switched on or off, and so on. In conclusion, state management is a great way to synchronize the data across all elements and communicate with the backend.
## Managing state with React Context and Redux
### React Context
With the arrival of React v16.3, React Context was introduced as a way to store data and share them among components. But now, let's go through a real example called Counter. The Counter has an initial value of 0 and has two buttons to increase and decrease its value.
// counterReducer.js
export default counterReducer = (state, action) => {
switch(action.type) {
case "INCREASE_COUNTER":
return {
...state,
counter: state.counter + 1
}
case "DECREASE_COUNTER":
return {
...state,
counter: state.counter - 1
}
default:
return state;
}
}
/ Context API index.js
import React, { createContext, useReducer } from "react";
import counterReducer from '../counterReducer';
const initialState = {
counter: 0
}
export const GlobalContext = React.createContext(initialState);
export function CounterProvider({ children }) {
const [state, dispatch] = useReducer(counterReducer, initialState);
return (
<GlobalContext.Provider
value={{
state,
dispatch
}}
>
{children}
</GlobalContext.Provider>
);
}
export const useCounterContext = () => React.useContext(GlobalContext);
…and import it into “App.js”
import React from "react";
import { GlobalContext } from "./globalState";
import { Main } from "./Main";
export const App = () => {
return (
<GlobalContext>
<Main />
</GlobalContext>
)
}
To use it inside the component, we just need to import useCounterContext from “globalState”.
// Main.js
import { useCounterContext } from "./globalState";
export const Main = () => {
const { state, dispatch } = useCounterContext();
return (
<div>
<p>Counter : {state}</p>
<button
onClick={() => dispatch({
type: "INCREASE_COUNTER"
})}
>
Increase
</button>
<button
onClick={() => dispatch({
type: "DECREASE_COUNTER"
})}
>
Decrease
</button>
</div>
)
};
That’s all! So easy for everyone to understand.
### Redux
Redux, for years, has always been known as the most popular way for state management, but it is much more complicated than React Context. Thus, there will be a lot of obstacles for beginners who are just starting to learn and work with React.
To know what it really is and how to implement it let’s go to the same example as above with Counter. Remember that Redux requires three different blocks to function, including Actions, Reducers, and Store. Below are examples of them:
// action.js
export const increase = () => {
return {
type: "INCREASE_COUNTER",
}
};
export const decrease = () => {
return {
type: "DECREASE_COUNTER",
}
}
// reducer.js
let initialState = {
counter: 0,
}
export default counterReducer = (state = initialState, action) => {
switch(action.type) {
case "INCREASE_COUNTER":
return {
...state,
counter: state.counter + 1,
}
case "DECREASE_COUNTER":
return {
...state,
counter: state.counter - 1,
}
default:
return state
}
}
// store.js
import { createStore } from "redux";
import counterReducer from "./reducer";
export const store = createStore(counterReducer);
Next, to make the Store accessible in our application and its children components, we also need to import it into “App.js”
// App.js
import { Provider } from "react-redux";
import { Main } from "./Main";
import { store } from "./store";
export default App = () => {
return (
<Provider store={store}>
<Main />
</Provider>
)
}
After that, to access the state and make a change, we can use useSelector() and useDispatch() hooks by importing them from “react-redux”:
// Main.js
import { useSelector, useDispatch } from "react-redux";
import { increase, decrease } from "./action";
export const Main = () => {
const counter = useSelector(state => state.counterReducer.counter);
const dispatch = useDispatch();
return (
<div>
<p>Counter : {counter}</p>
<button
onClick={() => dispatch(increase())}
>
Increase
</button>
<button
onClick={() => dispatch(decrease())}
>
Decrease
</button>
</div>
)
};
There are a lot of things that need to be done, and, as you can see, using Redux is tougher than using React Context.
## The differences between React Context and Redux
After going through all examples, it’s time for us to make a comparison between them:
React Context Redux The Built-in tool that is already available in React A state management library Requiring to create new files when adding new contexts Easy to extend since the ease of adding new data after initial setup Hard to debug Have powerful Redux DevTools for debugging purposes Easier to understand and handle for beginners May be misleading for beginners, even with Redux DevTools State is changeable The state is read-only and can not update directly Better to use with small applications Larger applications are more suitable
## Conclusion
After all, you may wonder which one is better for React Apps. To be honest, this is a confusing question. It depends on your app’s size, how often the data needs to be refreshed, and so on. Remember that changing between Redux and React Context could be hard and time-consuming, so take a look at the comparison table and make a decision at the beginning of your work.
|
2023-02-05 00:45:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2144886553287506, "perplexity": 4413.920617664645}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500158.5/warc/CC-MAIN-20230205000727-20230205030727-00545.warc.gz"}
|
https://www.transtutors.com/questions/p11-6a-johnson-corporation-uses-standard-costs-with-its-job-order-cost-accounting-s-1313806.htm
|
# *P11-6A Johnson Corporation uses standard costs with its job order cost accounting system. In...
*P11-6A Johnson Corporation uses standard costs with its job order cost accounting system. In January, an order (Job No. 12) for 1,900 units of Product B was received. The standard cost of one unit of Product B is as follows.
Direct materials 3 pounds at $1.00 per pound$ 3.00 Direct labor 1 hour at $8.00 per hour 8.00 Overhead 2 hours (variable$4.00 per machine hour; fixed $2.25 per machine hour) 12.50 Standard cost per unit$23.50
Normal capacity for the month was 4,200 machine hours. During January, the following transactions applicable to Job No. 12 occurred.
1. Purchased 6,250 pounds of raw materials on account at $1.06 per pound. 2. Requisitioned 6,250 pounds of raw materials for Job No. 12. 3. Incurred 2,100 hours of direct labor at a rate of$7.75 per hour.
4. Worked 2,100 hours of direct labor on Job No. 12.
5. Incurred manufacturing overhead on account $25,800. 6. Applied overhead to Job No. 12 on basis of standard machine hours allowed. 7. Completed Job No. 12. 8. Billed customer for Job No. 12 at a selling price of$70,000.
• ### Explain the similarities and differences between job order costing and process costing. In your...
(Solved) July 27, 2015
Explain the similarities and differences between job order costing and process costing . In your explanation, provide examples of when job order costing and process costing would be most appropriate. Your initial post should be 200 to 250 words. Additional Requirements Min Words: 200 Max Words
SOLUTION: Process Costing is used in industries where similar products are produced on a continuous basis by employing a series of operations for example paper, auto parts whereas job order...
• ### $5 per pound 100 pounds per unit$20.00 per hour 2 hours per unit $12,000 per month 1,200 units$...
(Solved) November 13, 2015
,000 units ( 1 ,000 units X 100 pounds per unit ). c . Prepare a journal entry summarizing the cost of direct materials charged to production . d . Compute the labor rate variance, given an actual labor rate of $20.75 per hour ($46,480 � 2 ,240 hours ). e. Compute the labor efficiency
Part a) Direct Material Price Variance Direct Material Price Variance = AQ (SP – AP) Where, AQ = Actual Quantity consumed SP = Standard Price per unit AP = Actual Price per unit Statement...
• ### 1. The following information is for MaScare Company. Physical Units Percent Completed Beginning work
(Solved) August 01, 2015
1 . The following information is for MaScare Company. Physical Units Percent Completed Beginning work in process 102 units 65% Started and completed 793 units 100 % Ending work in process 84 units 45% For the beginning inventory, the percentage given is the percentage that was already completed
1. The following information is for MaScare Company. Physical Units Percent Completed Beginning work in process 102 units 65% Started and completed 793 units 100% Ending work in process 84...
• ### Cost accounting
(Solved) March 11, 2016
There are 8 questions. They include T charts and Journal Entries.
Solution of problem 1 The following transactions occurred in October at Pawnee Workshops, a custom manufacturer of furniture: 1. Purchased $24,500 of materials. 2. Issued$1,650 of supplies...
|
2018-09-19 12:39:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3694823980331421, "perplexity": 7550.864919088361}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156224.9/warc/CC-MAIN-20180919122227-20180919142227-00454.warc.gz"}
|
https://www.tutordale.com/what-is-normal-force-in-physics/
|
Tuesday, January 31, 2023
# What Is Normal Force In Physics
## When Does Normal Force Equal To $mg$
Physics – What Is a Normal Force?
Can someone once and for all explain when does normal force equal to mg?
I know for sure that when there is no friction, normal force will be equal to mg.But, i encountered some questions when there is some mass on an incline with friction, and then the normal force was the y component of mg.
It does not make sense to me, because as i understood when there is friction, we cannot assume that mg will be equal to normal force.
Normal Force arises due to the Newton’s Third law. Normal Force will be always acting opposite to the force falling on the surface.Normal Force is a reaction force. Remember
Normal force is equal to mg only when the object is placed horizontally, and the force is acting in the direction of the gravitational field.
Here you will see that the weight of the body is passing through the Centre of gravity and acting in direction of the centre of the earth.
But the component of weight on the incline is not mg it is cos component.In order to satisfy the Newton’s third law Normal reaction to the object is the cos component$$N=Wg\cos \theta$$ even if friction is there or not there this will be the same
Normal force $F_N$ is just the force between two surfaces. It’s called “normal” because it acts perpendicular to the surfaces.
Gravitational force is completely unrelated. Gravity always acts with $F_g = -mg$. The minus sign indicates that the force points down.
## Tension Or Normal Force
One interesting physics conversation this semester has been about how we categorize forces. One future physics teacher in particular kept being concerned about whether to call something a tension force or a normal force. For example, consider the following situations:
• A chain, consisting of many links, hangs vertically. The very top link has a rope wrapped around it, which keeps the whole chain fixed to the ceiling.
• A rope is wrapped around a box and pulled by a person.
What kind of contact forces act on the top link? What kind of contact forces act on the box?
I think many students learn to associate types of forces with kinds of objects. For example, objects like ropes and strings exert tension forces. Objects like walls, ramps, and tables exert Normal forces. Springs, of course, exert spring forces. This kind of object-focused categorization means having to have a category for forces from a hand, like Applied Force
If this is how you think about forces , both the link and the box have tension forces exerted on them by the ropes, because ropes exert tension forces.
Whats interesting is that this is the complete opposite of what Ive often heard said to students. Students are often told that ropes can only pull, not push.
## What Is The Line Of Action Of A Force
The line along which a force acts on an object is called the forces line of action . The point where the force is acting on an object is called the point of application of the force. The force which opposes the relative motion between the surfaces of two objects in contact and acts along the surfaces is called the force of friction.
Galileo experimentally proved that objects that are in motion move with constant speed when there is no force acting on it. He could note that when a sphere rolls down an inclined plane, its speed increases because of the gravitational pull acting on it.
When all the forces acting on an object are balanced, the net force acting is zero. But, if all the forces acting on a body result in an unbalanced force, then the unbalanced force can accelerate the body, which means that a net force acting on a body can either change the magnitude of its velocity or change the direction of its velocity. For example, when many forces act on a body, and the body is found to be at rest, we can conclude that the net force acting on the body is zero.
Also Check: Relations And Functions Algebra 2 Worksheet
## Television Resting On A Table
When a television rests on a table, a perpendicular force exists on television. As this force is acting perpendicularly to the surface of television, this force will be none other than a Normal Force.
From the above examples, it is clear that when a force acts in a perpendicular direction, it will be termed a Normal Force.
Stay tuned to BYJUS for more such interesting and informative articles. Also, register to BYJUS The Learning App for loads of interactive, engaging Physics-related videos and unlimited academic assistance.
Related Articles
Also Check: What Is Catharsis In Psychology
## Applications In Real Life
In an elevator either stationary or moving at constant velocity, the normal force on the person’s feet balances the person’s weight. In an elevator that is accelerating upward, the normal force is greater than the person’s ground weight and so the person’s perceived weight increases . In an elevator that is accelerating downward, the normal force is less than the person’s ground weight and so a passenger’s perceived weight decreases. If a passenger were to stand on a weighing scale, such as a conventional bathroom scale, while riding the elevator, the scale will be reading the normal force it delivers to the passenger’s feet, and will be different than the person’s ground weight if the elevator cab is accelerating up or down. The weighing scale measures normal force , not gravitational force .
When we define upward to be the positive direction, constructing Newton’s second law and solving for the normal force on a passenger yields the following equation:
When we define the center of the ride to be the positive direction, solving for the normal force on a passenger that is suspended above ground yields the following equation:
## Normal Force With Examples
Normal Force:Contact objects exert force to each other because of their weights. In this example, book exerts a force to table because of its weight and table also exerts force to the book. We call this force as normal force which is same in magnitude and opposite in direction with the applied force .For different situations, we say that in general normal force is the reaction to the perpendicular force exerting on it. We will deal with different examples of normal force. Look at the given examples below and follow the steps to understand how can we find normal force for different situations.
Also Check: Glencoe Algebra 1 Chapter 6 Answer Key
## Apphysics1 Simple Harmonic Motion Review Total Playlist Time: 78 Minutes Simple Harmonic Motion: All Lecture Notes Simple Harmonic Motion Introduction Via A Horizontal Mass
• APPhysics1 DURING THE FOURTH QUARTER, ALL LESSONS AND WORK FOR HANSEN’S CLASSES ARE NOW ON POWERSCHOOL LEARNING! This website will not be updated. Please email for ANY questions or guidance. APPhysics1 Lesson Notes: Download any of the notes you missed and print them off at home. Book homework is on the last page.
• All the laws of physics are dictated by four forces: gravity, electromagnetism, the weak nuclear force, and the strong nuclear force. These four forces hold our universe together, and without them, the universe could not exist. Each of the four forces, with the exception of gravity, have their own messenger particles that carry the force.
• 12 newtons. three forces are acting on an object as shown in the diagram. the object is not moving. two forces are 2 newtons and 10 newtons. the third force is: 2.0 kilograms. an
• A) Examples of forces . ap physics c question electric fields and forces homework may 2nd, 2019 – ap physics c question electric fields and forces homework 1 two point charges lie on the x axis a charge of 7 5 µc is at the origin and a charge of 5 5 µc is at x
• APPhysics1: Exam Review – Forces STUDY PLAY Newton’s 1st Law of Motion Objects at rest will remain at rest and objects in motion will remain in motion unless acted upon by a net external force. Newton’s 2nd Law of Motion Acceleration is proportional to net force and inversely proportional to mass.
## Why Is Normal Force At Inclined Plane Defined The Way It Is
Normal Force Physics Problems With Tension, Inclined Planes & Free Body Diagrams
So why is normal force at inclined plane is $mg \cos a$ and friction is $mg \sin a$? I mean, why not vice-versa or why not some other ratio? Where does it come from? I can see that the sum squares of friction and normal force under the root should be equal to $mg$ and the angle at which the plane is inclined has something to do with it.
Does it come from some other part of physics or was it just deduced experimentally? I mean, we could slide a sample block from an inclined plane to see what acceleration it has each time we change the angle.
UPD:Thank you all guys! What I was ultimately asking just seems to be a matter of philosophy. These all formulas are beautiful and seem to be correct. But they are abstractions. And the confidence that this is true ultimately comes from experiments. Since I am very new to physics, I just wanted to know if my assumptions were true.
In vector notation there is only one equation and no ambiguity. The block is in equilibrium so the net force acting on it must be zero. There are three forces acting on the block – its weight $\vec W$, the normal force $\vec N$ and friction $\vec F$. So we have
$\vec W + \vec N + \vec F = 0$
Since $\vec N$ and $\vec F$ are orthogonal to one another it is convenient to resolve the three vectors into components along an $x$ axis that is parallel to $\vec F$ and a $y$ axis that is parallel to $\vec N$. In component form we have
$\vec F = \\ \vec N = \\ \vec W = , -W \cos )$
Don’t Miss: Cpm Geometry Connections Volume 1 Answers
## Are Normal Force And Weight Always Equal
Normal force and weight aren’t always equal. They’re equal when:
• There are no other forces exerted in the vertical direction and
• An object is on a flat surface.
Normal force and weight are not equal when:
• There’s an additional force that at least in part works in the vertical direction or
• The object is on an inclined surface.
## Common Misconception: Normal Force Vs Newton
In this section we have introduced the quantity normal force, which is represented by the variable N N . This should not be confused with the symbol for the newton, which is also represented by the letter N. These symbols are particularly important to distinguish because the units of a normal force ( N
Recommended Reading: What Is Sensory Awareness In Psychology
## First The Normal Force
• A normal force is the name we give to the perpendicular force . It is not equal to $mg\cos$ in general. Maybe it is in this specific scenario but that is just a coincidence.
In your specific case where the object is stationary , if you want to calculate the normal force then you will set up Newton’s 1st law in this perpendicular direction. Since only the normal force $n$ and one component of the weight $w$ acts along this direction, then we get:
$$\sum F=0 \quad\Leftrightarrow\quad n-w_\perp=0 \quad\Leftrightarrow\quad n=w_\perp.$$
Now you just need to find the perpendicular component of the weight. That turns out to include the cosine of the angle due to trigonometry: $w_\perp=mg\cos$. If your question is why this cosine appears here, then let me know in the comments and I’ll adjust the answer. So, now we know that
$$n=mg\cos$$
and this is not a feature of the normal force. This is only an expression that holds true in your specific scenario. If other forces acted or if acceleration was present, then this expression would look very different.
## Normal Force With Acceleration
All of our previous examples have had boxes standing still. If a box moves horizontally and the normal force acts vertically, the movement of the box won’t affect the normal force because they are on separate axes. However, what happens if the box moves in the same direction as the normal force? Let’s say our box is in an elevator. The box weighs, and the elevator accelerates down at. What is the normal force?
Free-body diagram of the box in the elevator, StudySmarter Originals
We drew our free-body diagram in the image above. Now we can use Newton’s Second Law in the vertical direction to solve for the normal force, and this time we will include the downward acceleration.
The normal force is
You May Like: What Is The Value Of E In Math
## What Is Normal Force
Normal Force is a perpendicular force that a surface exterts on an object. For Example if we place an object on a table the gravitational force will pull the object donwards. In order to prevent the object from falling the table will exert certain force to prevent it from falling. This force is known as Normal Force and is denoted by FN or N. Normal Force units are Newton.
## Notation For Normal Forces
In this course, we will give normal forces a special symbol. Instead of writing F with two subscripts, we will generally write N with two subscripts. The subscripts will still indicate the target and the source of the force, respectively. We will discuss the reason for this special notation when we introduce the full form of the contact force.
Don’t Miss: What Are Visual Illusions Psychology
## Tension In A Rope And Normal Force
A wooden box with a mass of 22kg and weight of 216N, is pulled at a constant speed with a rope that makes an angle of 37deg with the wooden floor. A frictional force of 100N resists the motion. What is the tension in the rope and the normal force?
The tension, T, has a horizontal component and a vertical component. The horizontal component is equal to the 100 N or T x cos 37 = 100N. Solving for T gives 125 N. The normal force is the weight of the box minus the upward vertical component of the tension. The equation for this is FN = W -T x sin37 where FN is the normal force, W is the weight , and T is the tension previously calculated. I got 141 N for FN.
## Normal Force With An External Downward Force
Normal Force on a Hill, Centripetal Force, Roller Coaster Problem, Vertical Circular Motion, Physics
• 1Use the right equation. To calculate the normal force of an object at rest when an outside force acts downward on that object, use the equation: N = m * g + F * sin’XResearch source
• N refers to the normal force, m refers to the object’s mass, g refers to the acceleration of gravity, F refers to the outside force, and x refers to the angle between the object and the direction of the outside force.
• Example: Find the normal force of a block with a mass of 4.2 kg, when a person is pressing down on the block at a 30 degree angle with a force of 20.9 N.
• 2Find the object’s weight. The weight of an object equals the mass of the object multiplied by the acceleration of gravity.XResearch source
• Note that the gravitational acceleration at the Earth’s surface is a constant: g = 9.8 m/s2
• Example: weight = m * g = 4.2 * 9.8 = 41.16
• 3Find the sine of the angle. The sine of an angle is calculated by dividing the side of the triangle opposite the angle by the hypotenuse of the angle.
• Example: sin = 0.5
• 4Multiply the sine by the outside force. The outside force, in this instance, refers to the force acting downward on the object.
• Example: 0.5 * 20.9 = 10.45
• 5Add this value to the weight. Doing so will give you the normal force at work.
• Example: 10.45 + 41.16 = 51.61
• 6Write your answer. Note that for an object at rest being influenced by an external, downward force, the normal force will be greater than the weight of the object.
• Example: The normal force is 51.61 N.
• You May Like: What Is Hi In Chemistry
## Normal Force As Constraint
The normal force is a constraint force, as is tension. It takes on whatever value is necessary to ensure that the objects in contact will not move into each other. This implies that the normal force constrains the motions of these objects so that they have the same velocity component perpendicular to the surface and will therefore remain in contact without getting closer or farther from each other. Thus when objects are in contact, the contact force will generate a force to cause the appropriate acceleration to change their velocities so they don’t interpenetrate. Thus there is no “force law” for normal forces – the normal force must be found using Newton’s second law and determining the acceleration of the objects from their motion. To clarify this point, consider the following examples:
Stationary Horizontal Surface Pushing a boxVertically Accelerating Horizontal Surface Box on an elevator accelerating upwards
Horizontally Moving Horizontal Surface
An object in contact with a horizontal surface that is moving horizontally like the bed of a moving pickup truck is constrained to have have zero vertical acceleration, but might not have the same horizontal acceleration as the truck . The truck and the object are not required to have the same total acceleration, but will have the same vertical component.
|
2023-02-04 23:02:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6566510200500488, "perplexity": 355.5369288991202}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500154.33/warc/CC-MAIN-20230204205328-20230204235328-00169.warc.gz"}
|
https://labs.tib.eu/arxiv/?author=J.Libby
|
• ### Precision measurement of the $e^{+}e^{-}~\rightarrow~\Lambda_{c}^{+} \bar{\Lambda}_{c}^{-}$ cross section near threshold(1710.00150)
April 3, 2018 hep-ex
The Born cross section of the $e^{+}e^{-}~\rightarrow~\Lambda_{c}^{+} \bar{\Lambda}_{c}^{-}$ process is measured with unprecedented precision using data collected with the BESIII detector at $\sqrt{s}=4574.5$, $45580.0$, $4590.0$ and $4599.5$ $\mathrm{MeV}$. The non-zero cross section near the $\Lambda_{c}^{+} \bar{\Lambda}_{c}^{-}$ production threshold is discerned. At center-of-mass energies $\sqrt{s}=4574.5$ and $4599.5$ $\mathrm{MeV}$, the higher statistics data enable us to measure the $\Lambda_{c}$ polar angle distributions. From these, the ratio between the $\Lambda_{c}$ electric and magnetic form factors ($|G_{E}/G_{M}|$) is measured for the first time. They are found to be $1.14\pm0.14\pm0.07$ and $1.23\pm0.05\pm0.03$ respectively, where the first uncertainties are statistical and the second are systematic.
• ### Measurement of the Integrated Luminosities of Cross-section Scan Data Samples Around the $\psi(3770)$ Mass Region(1803.03802)
March 10, 2018 hep-ex
To investigate the nature of the $\psi(3770)$ resonance and to measure the cross section for $e^+e^- \to D\bar{D}$, a cross-section scan data sample, distributed among 41 center-of-mass energy points from 3.73 to 3.89~GeV, was taken with the BESIII detector operated at the BEPCII collider in the year 2010. By analyzing the large angle Bhabha scattering events, we measure the integrated luminosity of the data sample at each center-of-mass energy point. The total integrated luminosity of the data sample is $76.16\pm0.04\pm0.61$~pb$^{-1}$, where the first uncertainty is statistical and the second systematic.
• ### Measurement of Branching Fractions of Hadronic Decays of the $\Omega_c^0$ Baryon(1712.01333)
Jan. 17, 2018 hep-ex
Using a data sample of 980 ${\rm fb}^{-1}$ of $e^+e^-$ annihilation data taken with the Belle detector operating at the KEKB asymmetric-energy $e^+e^-$ collider, we report the results of a study of the decays of the $\Omega_c^0$ charmed baryon into hadronic final states. We report the most precise measurements to date of the relative branching fractions of the $\Omega_c^0$ into $\Omega^-\pi^+\pi^0$, $\Omega^-\pi^+\pi^-\pi^+$, $\Xi^-K^-\pi^+\pi^+$, and $\Xi^0K^-\pi^+$, as well as the first measurements of the branching fractions of the $\Omega_c^0$ into $\Xi^-\bar{K}^0\pi^+$, $\Xi^0\bar{K}^0$, and $\Lambda \bar{K}^0\bar{K}^0$, all with respect to the $\Omega^-\pi^+$ decay. In addition, we investigate the resonant substructure of these modes. Finally, we present a limit on the branching fraction for the decay $\Omega_c^0\to\Sigma^+K^-K^-\pi^+$.
• ### Study of Two-Body $e^+e^- \to B_s^{(*)}\bar{B}_s^{(*)}$ Production in the Energy Range from 10.77 to 11.02 GeV(1609.08749)
Sept. 28, 2016 hep-ex
We report results on the studies of the $e^+e^-\to B_s^{(*)}\bar{B}_s^{(*)}$ processes. The results are based on a $121.4$ fb$^{-1}$ data sample collected with the Belle detector at the center-of-mass energy near the $\Upsilon(10860)$ peak and $16.4$ fb$^{-1}$ of data collected at 19 energy points in the range from 10.77 to 11.02 GeV. We observe a clear $e^+e^-\to\Upsilon(10860)\to B_s^{(*)}\bar{B}_s^{(*)}$ signal, with no statistically significant signal of $e^+e^-\to \Upsilon(11020)\to B_s^{(*)}\bar{B}_s^{(*)}$. The relative production ratio of $B_s^*\bar{B}_s^*$, $B_s\bar{B}_s^{*}$, and $B_s\bar{B}_s$ final states at $\sqrt{s}=10.866$ GeV is measured to be $7:$ $0.856\pm0.106(stat.)\pm0.053(syst.):$ $0.645\pm0.094(stat.)^{+0.030}_{-0.033}(syst.)$. An angular analysis of the $B_s^*\bar{B}_s^*$ final state produced at the $\Upsilon(10860)$ peak is also performed.
|
2021-04-11 03:56:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9004824757575989, "perplexity": 644.3653611777316}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038060927.2/warc/CC-MAIN-20210411030031-20210411060031-00388.warc.gz"}
|
https://blog.bi0s.in/2020/04/29/Misc/golf.so-Plaid2020/
|
# golf.so - PlaidCTF 2020
tl;dr
• Hand-crafting a linux shared object file with a size of less than 194 bytes
Challenge points: 500
No. of solves: 104
Solved by : d4rk_kn1gh7, k4iz3n
## Challenge Description
The challenge description gave us a link to http://golf.so.pwni.ng/upload, which contained the following instructions:
## Initial approaches
Our first approach was to create a minimalistic C program containing just a simple execve system call. However it soon became apparent that however much you tried, even with all of gcc’s optimization flags, there was no way to compile a shared object of size < 1 kb.
Then, our next approach was simply creating an assembly file as follows:
Eventually, compiling this into an elf with nasm, and then into a shared object file with gcc, followed by a lot of manual stripping at the end of the file, we were able to get a working file of size 673 bytes.
We used the following flags to compile.
Using the above flags significantly reduced the number of needless Physical Headers and supporting data, few of them were still there though such as a NOTE header and some data that went along with it. We figured out how to get rid of this later.
By manually cutting off the file near the end, we were getting rid of the section headers and sections, which are apparently not required for an ELF to run.
At this point there were several things we knew we could strip off such as the NOTE section and header, so our priority was to get rid of this. Since we were taking the trial and error approach, manually modifying the binary with a hex-editor would have become tiresome after a while.
It became pretty apparent that in order to get the file under 500 bytes, we had to hand-craft the asm file.
## Hand-crafting the assembly file
Referencing links such as https://www.muppetlabs.com/~breadbox/software/tiny/teensy.html, we were able to create an elf file that executed the required commands, at a size of less than 170 bytes.
Then, in order to convert this into a shared object file, we cross-referenced the smallest working .so file we had in IDA, and copied the required program headers and dynamic section. We didn’t include the section headers as they weren’t required for the program to run.
This gave us the following minimalistic 485 byte shared object file:
This was basically a handcrafted copy of the binary we were able to create with gcc excluding the NOTE section. Note for example the dt_symtab section which contains bytes stripped out from the working .so and laid out like this using a python script. We were not concerned with it at this point as we were aiming for the 500 byte mark.
However, this still wasn’t worthy of a flag:
We eventually found out that this file still contained many unnecessary sections like dt_hash and dt_symtab, also the last few entries of the ehdr contained offsets of the section headers, which weren’t in use. So we could override this with junk values, i.e the first few entries of the program header, thereby overlapping the headers and saving even more space. Doing this, we obtain the following file:
This file is 294 bytes, and on uploading it we get our first flag:
So we evidently needed to get it to 224 bytes to obtain our next flag. However, after overlapping a few QWORDs from the dhdr section with junk values, we were only able to get the file to 268 bytes. Then on reading more about the structure of shared object files, we found out that we could reduce the number of program headers present from 3 to 2, as there were 2 headers present with the LOAD(1) flag.
One LOAD header was for loading the _DYNAMIC section, while the other was for loading the code. We combined these by making it so that both of them are loaded into memory together, this was done by modifying the offsets and with a bit of trial and error. Note that a lot of the values here make little sense and are far from what they are supposed to be as defined by the ELF file format. But it works, so we don’t care!
On combining these two headers, creating a couple more header overlaps, and moving the dt_strtab entry, we get this:
So we needed to reduce the file by 4 more bytes to get it to 194 and hopefully obtain our flag!
So finally on removing the xor edx,edx instruction (as the program apparently didn’t mind a non-null rdx value), and replacing the mov instructions with push and pop instructions to reduce bytes, we get this incredibly small, but somehow working file:
And finally, the server gives us the response we want:
It seems that there are still things that you can get rid of and very creative ways in which you can embed the code within the ELF header itself. The challenge is a great task that teaches you a lot about the ELF file format and how one can bend its rules.
## Flags
golf.so-putter: PCTF{th0ugh_wE_have_cl1mBed_far_we_MusT_St1ll_c0ntinue_oNward}
golf.so-driver: PCTF{t0_get_a_t1ny_elf_we_5tick_1ts_hand5_in_its_ears_rtmlpntyea}
|
2023-03-31 05:56:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5077224969863892, "perplexity": 1403.0381525536866}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949573.84/warc/CC-MAIN-20230331051439-20230331081439-00653.warc.gz"}
|
https://byjus.com/jee/ie-irodov-solutions-part-5-interference-of-light/
|
Checkout JEE MAINS 2022 Question Paper Analysis : Checkout JEE MAINS 2022 Question Paper Analysis :
# I.E. Irodov Solutions on Interference of Light
The solutions of the Problems In General Physics I.E. Irodov - Interference of Light is given on this page. As far as any competitive exam is concerned, optics is an important topic. Optics is a branch of physics that deals with light and its properties. The main topics in Interference of light are the width of a fringe, temporal and spatial coherences, Newton's rings, radii of the rings, etc.
Students can expect one question from optics for any entrance examination.
Students are recommended to revise and learn these solutions so that they can achieve better results for JEE Main and other entrance exams.
### I.E. Irodov Solutions on Interference of Light
1. Demonstrate that when two harmonic oscillations are added, the time-averaged energy of the resultant oscillation is equal to the sum of the energies of the constituent oscillations, if both of them
(a) have the same direction and are incoherent, and all the values of the phase difference between the oscillations are equally probable;
(b) are mutually perpendicular, have the same frequency, and an arbitrary phase difference.
Solution:
1. (a) In this case the net vibration is given by
x = a1 cos ωt + a2 cos(ωt + δ)
Where δ is the phase difference between the two vibrations which varies rapidly and randomly in the interval (0, 2π). (This is what is meant by incoherence). Then
x = (a1 + a2 cos δ) cos ωt + a2 δ sin ωt
The total energy will be taken to be proportional to the time average of the square of the displacement.
Thus E = <(a1 + a2 cos δ)2 + a22 sin2δ> = a12 + a22
As <cos δ> = 0 and we have put < cos2ωt > = < sin2 ωt > = ½ and has been absorbed in the overall constant of proportionality.
In the same units, the energies of the two oscillations are a12 and a22 respectively so the proposition is proved.
(b) Here
$$\vec{r}=a_{1}\cos \omega t\hat{i}+a_{2}\cos (\omega t+\delta )\hat{j}$$
and the mean square displacement is αa12 + a22. If δ is fixed but arbitrary. Then as in (a) we see that E = E1 + E2.
2. By means of plotting find the amplitude of the oscillation resulting from the addition of the following three oscillations of the same direction:
ξ1 = a cos ωt, ξ2 = 2a sin ωt, ξ3 = 1.5a cos (ωt + π/3)
Solution:
1. It is easier to do it analytically.
ξ1 = a cos ωt, ξ2 = 2a sin ωt,
ξ3 = (3/2)a cosπ/3 cos ωt - sin π/3 sin ωt )
Resultant vibration is:
ξ = (7a/4)cos ωt + a(2 - 3√3/4)sin ωt
This has an amplitude = (a/4)√(49 + (8 - 3√3)2
= 1.89 a
3. A certain oscillation results from the addition of coherent oscillations of the same direction ξk = a cos [ωt + (k - 1) φ], where k is the number of the oscillation (k = 1, 2, . . N), φ is the phase difference between the kth and (k - 1)th oscillations. Find the amplitude of the resultant oscillation.
Solution:
1. We use the method of complex amplitudes. Then the amplitudes are
A1 = a, A2 = ae, ...AN = aei(N - 1)φ and the resultant complex amplitude is
A = A1 + A2 + ...AN
= a(1 + e + e2iφ+ …+ei(N - 1)φ)
= a(1 - eiNiφ)/(1 - e)
The corresponding ordinary amplitude is
4. A system illustrated in Fig. 5.12 consists of two coherent point sources 1 and 2 located in a certain plane so that their dipole moments are oriented at right angles to that plane. The sources are separated by a distance d, the radiation wavelength is equal to λ. Taking into account that the oscillations of source 2 lag in phase behind the oscillations of source 1 by φ
(φ < π), find:
(a) the angles θ at which the radiation intensity is maximum;
(b) the conditions under which the radiation intensity in the direction θ = π is maximum and in the opposite direction, minimum.
Solution:
1. With dipole moment perpendicular to the plane, there is no variation with θ of individual radiation amplitude. Then the intensity variation is due to interference only.
In the direction given by angle θ, the phase difference is
(2π/λ)d cos θ + φ = 2kπ for maxima
Thus d cos θ = (k - φ/2π)λ
k = 0, ±1, ±2, ...
We have added φ to (2π/λ)d cos θ because the extra path that te wave from 2 has to travel in going to P(as compared to 1) makes it lag more than it already is due to φ.
(b) Maximum for θ = π gives -d = (k - φ/2π)λ
Minimum for θ = 0 gives d = (k’ - φ/2π + 1/2 )λ
Adding we get (k + k’ - φ/π + ½ )λ = 0
This can be true if k’ = -k, φ = π/2
Since 0< φ< π.
Then -d = (k - ¼)λ
Here k = 0, -1, -2, -3..
(otherwise R.H.S will become +ve).
5. A stationary radiating system consists of a linear chain of parallel oscillators separated by a distance d, with the oscillation, phase varying linearly along the chain. Find the time dependence of the phase difference ∆φ between the neighbouring oscillators at which the principal radiation maximum of the system will be "scanning" the surroundings with the constant angular velocity ω.
Solution:
1. If Δφ is the phase difference between neighbouring radiators then for a maximum in the direction θ we must have
(2π/λ)d cos θ + ∆φ = 2πk
For scanning θ = ωt + β
Thus (d/λ) cos (ωt + β) + ∆φ/2π = k
Or ∆φ = 2π[k - (d/λ)cos (ωt + β)]
To get the answer of the book, put β = α - π/2.
6. In Lloyd's maximum mirror experiment (Fig. 5.13) a light wave emitted directly by the source S (narrow slit) interferes with the wave reflected from a mirror M. As a result, an interference fringe pattern is formed on the screen Sc. The source and the mirror are separated by a distance l = 100 cm. At a certain position of the source, the fringe width on the screen was equal to ∆x= 0.25 mm, and after the source was moved away from the mirror plane by ∆h = 0.60 mm, the fringe width decreased η = 1.5 times. Find the wavelength of light.
Solution:
1. From the general formula, ∆x = lλ/d
We find that, ∆x/η = lλ/(d + 2∆h)
Since d increases to d + 2∆h when the source is moved away from the mirror plane by ∆h.
Thus ηd = d + 2∆h
Or d = 2∆h∆x/(η-1)l
= 0.6μm.
7. Two coherent plane light waves propagating with a divergence angle ψ<<1 fall almost normally on a screen. The amplitudes of the waves are equal. Demonstrate that the distance between the neighbouring maxima on the screen is equal to ∆x = λ/ψ), where λ is the wavelength.
Solution:
1. We can think of the two coherent plane waves as emitted from two coherent point sources very far away. Then ∆x = lλ/d = λ/d/l
But d/l = ψ (if ψ<<1)
So ∆x = λ/ψ
8. Figure 5.14 illustrates the interference experiment with Fresnel mirrors. The angle between the mirrors is α = 12', the distances from the mirrors' intersection line to the narrow slit S and the screen Sc are equal to r = 10.0 cm and b = 130 cm respectively. The wavelength of light is λ = 0.55 μm. Find:
(a) the width of a fringe on the screen and the number of possible maxima;
(b) the shift of the interference pattern on the screen when the slit is displaced by δl = 1.0 mm along the arc of radius r with centre at the point O;
(c) at what maximum width δmax of the slit the interference fringes on the screen are still observed sufficiently sharp.
Solution:
1. (a) Here S’S’’= d = 2rα
Then ∆x = (b + r)λ/2α
Putting b = 1.3 metre, r = 0.1 metre
λ = 0.55 μm, α = 12’ = 1/5×57 radian
we get ∆x = 1.1 mm
Number of possible maxima = 2bα/∆x + 1
≈ 8.3 + 1 ≈ 9
(2bα is the length of the spot on the screen which gets light after reflection from both mirror. We add 1 above to take account of the fact that in a distance ∆x there are two maxima.)
(b) when the slit moves by δl along the arc of radius r the incident ray on the mirror rotates by δl/r; this is also the rotation of the reflected ray. There is then a shift of the fringe of magnitude.
bδl/r = 13 mm.
(c) If the width of the slit is δ then we can imagine the slit to consist of two narrow slits width separation δ. The fringe pattern due to the wide slit is the superposition of the pattern due to these two narrow slits. The full pattern will not be sharp at all if the pattern due to the two narrow slits are ½ ∆x apart because then the maximum due to one will fill the minima due to the other. Thus we demand
max/r = ½ ∆x
= (b + r)λ/4rα
δmax = (1 + r/b)λ/4α
= 42 μm.
9. A plane light wave falls on Fresnel mirrors with an angle α = 2.0' between them. Determine the wavelength of light if the width of the fringe on the screen ∆x = 0.55 mm.
Solution:
1. To get this case we must let r → ∞ in the formula for ∆x of the last example.
So ∆x = (b+ r)λ/2αr → λ/2α
(A plane wave is like light emitted from a point source at ∞)
Then λ = 2α∆x
= 0.64 μm.
10. A lens of diameter 5.0 cm and focal length f = 25.0 cm was cut along the diameter into two identical halves. In the process, the layer of the lens a = 1.00 mm in thickness was lost. Then the halves were put together to form a composite lens. In this focal plane a narrow slit was placed, emitting monochromatic light with wavelength λ = 0.60 μm. Behind the lens a screen was located at a distance b = 50 cm from it. Find:
(a) the width of a fringe on the screen and the number of possible maxima;
(b) the maximum width of the slit δmax at which the fringes on the screen will be still observed sufficiently sharp.
Solution:
1. We show the upper half of the lens. The emergent light is at an angle a/2f from the axis. Thus the divergence angle of the two incident light beams is
ψ = a/f
When they interfere the fringes produced have a width
∆x = λ/ψ = fλ/a = 0.15 mm
The patch on the screen illuminated from the by both light has a width b ψ and this contains
bψ/∆x = ba2/f2λ fringes
= 13 fringes
(If we ignore 1 in comparison to bψ/∆x (8(a)).
We follow the logic of (8 (c)). From one edge of the slit to the other edge the distance of magnitude δ (i.e. a/2 to a/2 + δ)
If we imagine the edge to shift by this distance, the angle ψ/2 will increase by ∆ψ/2 = δ/2f and the light will shift ±bδ/2f
The fringe pattern will therefore shift by δ.b/f
Equating this to ∆x/2 = fλ/2a we get
δmax = f2λ /2ab = 37.5 μm
11. The distances from a Fresnel biprism to a narrow slit and a screen are equal to a = 25 cm and b = 100 cm respectively. The refracting angle of the glass biprism is equal to θ = 20'. Find the wavelength of light if the width of the fringe on the screen is ∆x = 0.55 mm.
Solution:
1. ∆x = lλ/d
l = a + b
d = 2(n -1)θa
δ = (n - 1)θ
d = 2δa
n = R.I of glass
Thus λ = 2(n - 1)θa∆x/(a + b)
= 0.64 μm
12. A plane light wave with wavelength λ = 0.70 μm falls normally on the base of a biprism made of glass (n = 1.520) with refracting angle θ = 5.00. Behind the biprism (Fig. 5.15) there is a plane-parallel plate, with the space between them filled up with benzene (n' = 1.500). Find the width of a fringe on the screen Sc placed behind this system.
Solution:
1. It will be assumed that the space between the biprism and the glass plate filled with benzene constitutes complementary prisms as shown.
Then the two prisms being oppositely placed, the net deviation produced by them is
δ = (n - 1)θ - (n’ - 1)θ
= (n - n’)θ
Hence as in the previous problem
d = 2aδ
= 2aθ(n - n’)
So ∆x = (a + b)λ/2aθ(n - n’)
For plane incident wave we let a→∞
So ∆x = λ/2(n - n’)
= 0.2 mm
13. A plane monochromatic light wave falls normally on a diaphragm with two narrow slits separated by a distance d = 2.5 mm. A fringe pattern is formed on a screen placed at a distance l = 100 cm behind the diaphragm. By what distance and in which direction will these fringes be displaced when one of the slits is covered by a glass plate of thickness h = 10 μm?
Solution:
1. Extra phase difference introduced by the glass plate is (2π/λ)(n - 1) h
This will cause a shift equal to (n - 1)h/λ fringe widths
i.e. by (n - 1)h/λ ×( lλ/d)
= (n - 1)hl/d
= 2 mm
The fringes move down if the lower slit is covered by the plate to compensate for the extra phase shift introduced by the plate.
14. Figure 5.16 illustrates an interferometer used in measurements of refractive indices of transparent substances. Here S is a narrow slit illuminated by monochromatic light with wavelength λ = 589 nm, 1 and 2 are identical tubes with air of length l = 10.0 cm each, D is a diaphragm with two slits. After the air in tube 1 was replaced with ammonia gas, the interference pattern on the screen Sc was displaced upward by N = 17 fringes. The refractive index of air is equal to n = 1.000277. Determine the refractive index of ammonia gas.
Solution:
1. No. of fringes shifted = (n’ - n)l/λ = N
So n’ = n + Nλ/l
= 1.000377
15. An electromagnetic wave falls normally on the boundary between two isotropic dielectrics with refractive indices n1 and n2. Making use of the continuity condition for the tangential components, E and H across the boundary, demonstrate that at the interface the electric field vector E
(a) of the transmitted wave experiences no phase jump;
(b) of the reflected wave is subjected to the phase jump equal to π if it is reflected from a medium of higher optical density.
Solution:
1. Suppose the vector
$$\vec{E}, \vec{E'},\vec{E''}$$
corresponds to the incident, reflected and the transmitted wave. Due to the continuity of the tangential component of the electric field across the interface, it follows that
ET + E’T = E’’ (1)
where the subscript T means tangential.
Since E’’T and ET have the same sign , there is no phase change involved in this case,
(b) From (1) and (3)
(n2 + n1)E’T + (n2 - n1)ET = 0
Or E’T = (n1 - n2)ET/(n1+ n2)
If n2>n1 then E’T and ET have opposite signs. Thus the reflected wave has an abrupt change of phase by π if n2> n1 i.e. on reflection from the interface between two media when light is incident from the rarer to denser medium.
16. A parallel beam of white light falls on a thin film whose refractive index is equal to n = 1.33. The angle of indices is θ1 = 520. What must the film thickness be equal to for the reflected light to be coloured yellow (λ = 0.60 μm) most intensively?
Solution:
1. Path difference between (1) and (2) is
2nd sec θ2 - 2d tan θ2 sin θ1
= 2d(n - sin2 θ1)/n/√(1- sin2 θ1/n2)
= 2d√(n2- sin2 θ1)
For bright fringes this must equal (k + ½)λ where ½ comes from the phase change of π for (1).
Here k = 0, 1, 2, …
Thus 4d√(n2 - sin2θ1)
= (2k + 1)λ
Or d = λ(1 + 2k)/4√(n2 - sin2θ1)
= 0.41(1 + 2k) μm
17. Find the minimum thickness of a film with refractive index 1.33 at which light with wavelength 0.64 μm experiences maximum reflection while light with wavelength 0.40 μm is not reflected at all. The incidence angle of light is equal to 300.
Solution:
1. Given 2d√(n2 - ¼) = (k+½)×0.64 μm (bright fringe)
2d√(n2 - ¼) = k’×0.40 μm (dark fringe)
Where k, k’ are integers.
Thus 64(k + ½) = 40k’ or 4(2k + 1) = 5k’
This means, for the smallest integer solutions, k = 2, k’ = 4
Hence d = 4×0.40/2√(n2 - ¼)
= 0.65 μm
18. To decrease light losses due to reflection from the glass surface the latter is coated with a thin layer of substance whose refractive index n' = √n where n is the refractive index of the glass. In this case the amplitudes of electromagnetic oscillations reflected from both coated surfaces are equal. At what thickness of that coating is the glass reflectivity in the direction of the normal equal to zero for light with wavelength λ?
Solution:
1. When the glass surface is coated with a material of R.I. n’ = √n(n = R.I of glass) of appropriate thickness, reflection is zero because of interference between various multiply reflected waves. We show this below.
Let a wave of unit amplitude be normally incident from the left. The reflected amplitude is -r where r = (√n - 1)/(√n + 1)
Its phase is -ve so we write the reflected wave as -r. The transmitted wave has amplitude t
t = 2/(1 + √n)
This wave is reflected at the second face and has amplitude -tr
(because (n - √n)/(n+√n) = (√n - 1)/(√n + 1)
The emergent wave has amplitude -tt’r.
We prove below that -t t’ = 1 - r2.
There is also a reflected part of amplitude t r r’ = -tr2 where r’ is the reflection coefficient for a ray incident from the coating towards air. After reflection from the second face a wave of amplitude
t t’ r3 = +(1 - r2)r3
emerges . Let δ be the phase of the wave after traversing the coating both ways. Then the complete reflected wave is
This vanishes if δ = (2k + 1)π.
But δ = (2π/λ)2√n d
So d = (λ/4√n)(2k + 1)
We now deduce t t’ = 1 - r2 and r’ = + r. This follows from the principle of reversibility of light path as shown in the figure below.
t t’ + r2 = 1
-rt + r’t = 0
So t t’ = 1 - r2
r’ = + r
(-r is the reflection ratio for the wave entering a denser medium).
19. Diffused monochromatic light with wavelength λ = 0.60 μm falls on a thin film with refractive index n = 1.5. Determine the film thickness if the angular separation of neighbouring maxima observed in reflected light at the angles close to θ = 450 to the normal is equal to δθ = 3.00.
Solution:
1. We have the condition for maxima
2d√(n2 - sin2 θ1) = (k + ½) λ
This must hold for angle (θ + δθ/2) with successive values of k. Thus
2d√(n2 - sin2 (θ + δθ/2)) = (k - ½) λ
Thus λ = 2d[√(n2 - sin2θ + δθ sin θ cos θ) - √(n2 - sin2θ - δθ sin θ cos θ)]
= 2d( δθ sin θ cos θ)/√(n2 - sin2θ)
Thus d = √(n2 - sin2θ λ)/sin 2θ δθ
= 15.2μm
20. Monochromatic light passes through an orifice in a screen Sc (Fig. 5.17) and being reflected from a thin transparent plate P produces fringes of equal inclination on the screen. The thickness of the plate is equal to d, the distance between the plate and the screen is l, the radii of the ith and kth dark rings are ri and rk. Find the wavelength of light taking into account that ri.k << l.
Solution:
1. For small angles θ we write for dark fringes
2d√(n2 - sin2 θ) = 2d(n - sin2θ/2n)
= (k + 0)λ
For the first dark fringe, θ ≅ 0 and
2dn = (k0 + 0)λ
For the ithdark fringe
2d(n - sin2 θi/2n) = (k0 - i + 1)λ
Or sin2θi = (nλ/d)(i - 1)
= ri2/4l2
Finally (nλ/d)(i - k) = (ri2 - rk2)/4l2
λ = d(ri2 - rk2)/4l2n(i - k)
|
2022-08-18 19:02:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.645271360874176, "perplexity": 1609.2143919133034}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573399.40/warc/CC-MAIN-20220818185216-20220818215216-00076.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/calculus/calculus-3rd-edition/chapter-13-vector-geometry-13-7-cylindrical-and-spherical-coordinates-exercises-page-699/14
|
## Calculus (3rd Edition)
Since $$x=r\cos \theta,\quad y=r\cos \theta, \quad z=z,$$ then $$x^2+y^2+z^2=4$$ takes the form $$r^2\cos^2\theta+r^2\sin^2\theta+z^2=r^2+z^2= 4, \quad 0\leq\theta\leq \pi/2.$$ Which is a hemisphere in the first quadrant.
|
2022-08-09 06:57:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9827792048454285, "perplexity": 358.541415837368}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570913.16/warc/CC-MAIN-20220809064307-20220809094307-00599.warc.gz"}
|
https://mathematica.stackexchange.com/questions/212384/how-can-i-change-the-label-background-color-in-tabview
|
How can I change the label background color in TabView?
When I go to the web page, https://reference.wolfram.com/language/ref/TabView.html, giving the description of TabView, the example displays as follows:
Note that the background for the label is blue. When I go to the same reference page in my Mathematica system, the example looks like the following:
The background color I get is black, making it impossible to read the label which is also black. How can I change this label background color? I have tried the following:
I am running Mathematica version 12.0.0.0 on Mac OS X Mojave version 10.14.6. The documentation says that TabView by default displays the labels in TabViewLabel Style, which typically uses the system button font. But buttons do not appear like this in other applications that I run on my Mac.
So how can I change the label background color or how can I change the TabViewLabel style?
Additional information: When I use other applications on my Mac that has tabs, the tabs appear correctly with selected tabs displaying with white letters on a blue background.
Example:
It is only in Mathematica that all my tabs have the selected tab displayed with black letters on a black background (so they are unreadable). I see this behavior everywhere, not just in programs I am writing. For example, I see this behavior when I look at help pages or when I look at the option inspector. So it must be some Mathematica setting that I have set wrong.
Anyone have any ideas?
• I suspect I might have to change some option in the option inspector, but I don't see any option to change label background colors. Jan 4 '20 at 15:55
• Have you changed any of the top set of default settings in System Preferences... > General? Jan 4 '20 at 23:27
• Also, are you using a stylesheet other than the Default stylesheet? Jan 4 '20 at 23:39
• @m_goldberg I have not set anything to black in general system preferences. Appearance is set to light and accent color is set to blue. Thanks for the idea. Jan 5 '20 at 0:29
• @m_goldberg I don't know much about stylesheets and perhaps I've accidentally set something I shouldn't have set. When I choose Format>Edit Stylesheet, it says my notebook is inheriting base definitions from stylesheet "Default.nb". Is there anything else I should check? When I choose Format>Stylesheet, I see a submenu. No menu items are checked. Should I select "Default"? Jan 5 '20 at 0:42
This will be an answer if it works on your system as it does on mine, and if you don't think its too much fussy code to bother with. Admittedly, it is more a work-around than a real solution. It does have the advantage of being fairly easy to modify to accommodate more elaborate tab view emulations.
With[{systemBlue = RGBColor[0., .5, 1., .75]},
With[
{colors = <|True -> {systemBlue, White}, False -> {White, Black}|>,
gray = GrayLevel[.6]},
btnLbl[i_, state_] :=
Module[{tabColor, textColor},
{tabColor, textColor} = colors[state[[i]]];
Graphics[
{EdgeForm[{AbsoluteThickness[.5], gray}], FaceForm[tabColor],
Rectangle[{0, 0}, {1, 1}],
Text[Style[i, textColor], {.5, .5}]},
ImageSize -> {20, 20}]];
DynamicModule[{val, vals, n, state},
vals = {a, b, c, d};
n = Length[vals];
state = ConstantArray[False, n];
state[[3]] = True;
Column[
{Dynamic @
Row[
Button[btnLbl[#, state],
state = ConstantArray[False, n];
state[[#]] = True,
Appearance -> None] & /@ Range[n],
"\[NegativeVeryThinSpace]"],
Dynamic @
Panel[Style[Pick[vals, state][[1]], 14], ImageSize -> 80]},
Center,
Spacings -> -.2],
TrackedSymbols :> {state},
SaveDefinitions -> True]]]
• Thanks for all the effort you have expended to solve my problem. It works and gives me a way to produce a TabView that looks like it is supposed to. But, as you say, it is a work-around. It would still be nice to figure out why the default TabView isn't working correctly on my system. I can now write programs that have nice-looking tab views, but when I look at the help system or the option inspector, I still see tabs that I cannot read because the selected tab shows black text on a black background. Jan 5 '20 at 20:28
• If someone gives me a program they wrote that looks nice on their machine, when I run it on my machine, any tabViews displayed by their program will display with black on black selections. I must have some global setting set wrong that is specifying that the background for selected tabs should be black rather than system blue. Any other thoughts? Jan 5 '20 at 20:31
• @StanleyRabinowitz. It's too bad you don't find this useful, but I suspected you might not. I understand your core concern is to getSetterBar (that's where the problem really is) to display as it should. I don't have an further ideas at this time except that you might contact Wolfram tech support if your Mathematica license includes tech support as some licenses do. Jan 5 '20 at 23:41
This is not an answer, but two code snippets to try in order to get more information on what is going on. Each snippet imitates a tab view by combining a setter bar with a panel. I would like you try them to see if one or both behave like the tab views you are getting. Please report your findings in comment below.
1. Should produce system blue background on selected tab.
DynamicModule[{j, vals, n},
vals = {a, b, c, d};
n = Length[vals];
Column[
{SetterBar[Dynamic[x], vals],
Dynamic[Panel[Style[x, 14], ImageSize -> 80]]},
Center,
Spacings -> 0]]
2. Should produce gray background on selected tab.
DynamicModule[{j, vals, n},
vals = {a, b, c, d};
n = Length[vals];
Column[
{SetterBar[Dynamic[x], vals, Appearance -> "Horizontal" -> {1, n}],
Dynamic[Panel[Style[x, 14], ImageSize -> 80]]},
Center,
Spacings -> 0]]
• snippet 1 gave me a black background on the selected tab. The background of the other tabs were white. snippet 2 gave me a gray background on the selected tab. The background of the other tabs were a lighter gray. Jan 5 '20 at 2:35
• @Thanks for the info. I conclude from your results that TabView is using SetterBar and that problem resides in SetterBar. Maybe I can do something now that I know where the problem really is. Jan 5 '20 at 3:41
|
2021-10-17 07:32:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3060306906700134, "perplexity": 2163.876991921447}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585121.30/warc/CC-MAIN-20211017052025-20211017082025-00077.warc.gz"}
|
http://images.planetmath.org/filterbasis
|
# filter basis
A filter subbasis for a set $S$ is a collection of subsets of $S$ which has the finite intersection property.
A filter basis $B$ for a set $S$ is a non-empty collection of subsets of $S$ which does not contain the empty set such that, for every $u\in B$ and every $v\in B$, there exists a $w\in B$ such that $w\subset u\cap v$.
Given a filter basis $B$ for a set $S$, the set of all supersets of elements of $B$ forms a filter on the set $S$. This filter is known as the filter generated by the basis.
Given a filter subbasis $B$ for a set $S$, the set of all supersets of finite intersections of elements of $B$ is a filter. This filter is known as the filter generated by the subbasis.
Two filter bases are said to be equivalent if they generate the same filter. Likewise, two filter subbases are said to be equivalent if they generate the same filter.
Note: Not every author requires that filters do not contain the empty set. Because every filter is a filter basis then accordingly some authors allow that a filter base can contain the empty set.
Title filter basis FilterBasis 2013-03-22 14:41:34 2013-03-22 14:41:34 rspuzio (6075) rspuzio (6075) 11 rspuzio (6075) Definition msc 03E99 msc 54A99 filter base filter subbasis equivalent
|
2018-06-24 05:42:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 16, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7991939187049866, "perplexity": 209.4589569613583}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267866358.52/warc/CC-MAIN-20180624044127-20180624064127-00306.warc.gz"}
|
https://kalifi.org/2014/02/pricing-booster-packs.html
|
# Valuating Steam trading card booster packs, part 2
Previously, I wrote about the pricing of booster card packs for Steam’s trading card market from mainly from the seller’s perspective. However, somewhat more interesting is how much should I pay for a certain booster pack considering I already have some cards of the set. How much am I willing to pay?
To make things simple, let’s assume that foil cards do not exist and that all cards for a set have the same price. There is some variance in card prices1, but under perfect market all cards of a set should have the same price2. The three things that affect a booster packs price are
1. Amount of cards in a set (n). The more cards in a set, the less likely duplicates are.
2. Amount of cards I already have (k). The more cards I have of a set, the likelier duplicates become.
3. The price of a single card (p). Generally speaking the cost to buy a card is 0,02€ (or, \$0.02) higher than the revenues I get from selling a card, ie. Steam’s transaction fees make getting duplicates a small risk.
In practice, the value of a booster pack is enhanced by the possibility of a foil card. Also, for a totally professional trader duplicates should not matter unless one is hitting the level cap (each set can be completed three times). However, in practice I would assume people are more interested in getting a new level 1 badges than leveling up an existing one.
There might also be a small advantage is trading in a currency which is the “weakest”, as Steam rounds transaction fees to the hundreth of a currency unit at least in US dollars and Euros. In this scenario, trader in US dollars can trade for cheaper as 0.02 USD is way less than 0.02 EUR. I’m not sure about this, but as Steam does currency conversions behind the scenes, there might also be some other currency arbitrages - however tiny.
Also, duplicates increase the amount of capital you have tied to the cards (ie. inventory cost) but I think it’s pretty safe to assume that the cost here is zero3.
The probabilities are pretty simple. I’ll denote Dx as the event for getting a duplicate (D1 = the first card is a duplicate, D2 = the second card is a duplicate, …). A nice decision tree could illustrate the process nicely, but for now let’s do with the math:
$P(D_1) = \frac{k}{n} \\ P(D_2) = P(D_2|D_1) + P(D_2|\overline{D_1}) = \left(\frac{k}{n}\right)^2 + \frac{n-k}{n} \cdot \frac{\min(k+1,n)}{n} \\ P(D_3) = P(D_3|D_2 \& D_1) + P(D_3|D_2 \& \overline{D_1}) + P(D_3|\overline{D_2} \& D_1) + P(D_3|\overline{D_2} \& \overline{D_1}) \\ = \left(\frac{k}{n}\right)^3 + \frac{n-k}{n} \cdot \left(\frac{\min(k+1,n)}{n}\right)^2 \\ + \frac{k}{n} \cdot \frac{n-k}{n} \cdot \frac{\min(k+1,n)}{n} \\ + \frac{n-k}{n} \cdot \frac{\max(n-k-1,0)}{n} \cdot \frac{\min(k+2,n)}{n}$
The reason the second and third probability get a bit complicated looking is because each non-duplicate card increases k on the fly. From these probabilities we can calculate the expected number of duplicates from a booster pack,
$E[D] = \sum P(D_x) = P(D_1) + P(D_2) + P(D_3)$
and the expected value value a of booster pack (c = the revenue from selling a duplicate, in most cases c = p − 0.02)
$EV[B] = E[D]p + (3-E[D])c$
For handy reference, here’s a table of average ex-post booster pack values for card set sizes from 5 to 9 assuming that buying a card from the Community Market costs 0.12 currency units and Steam’s transaction fees are 0.02 currency units. You can see the dynamics of n and k in action and how they affect EV[B].
cards in set cards owned booster pack value expected duplicates
5 0 0.35 0.56
5 1 0.34 1.05
5 2 0.33 1.54
5 3 0.32 2.02
5 4 0.31 2.51
5 5 0.3 3.0
6 0 0.35 0.47
6 1 0.34 0.89
6 2 0.33 1.31
6 3 0.33 1.74
6 4 0.32 2.16
6 5 0.31 2.58
6 6 0.3 3.0
7 0 0.35 0.41
7 1 0.34 0.78
7 2 0.34 1.15
7 3 0.33 1.52
7 4 0.32 1.89
7 5 0.31 2.26
7 6 0.31 2.63
7 7 0.3 3.0
8 0 0.35 0.36
8 1 0.35 0.69
8 2 0.34 1.02
8 3 0.33 1.35
8 4 0.33 1.68
8 5 0.32 2.01
8 6 0.31 2.34
8 7 0.31 2.67
8 8 0.3 3.0
9 0 0.35 0.32
9 1 0.35 0.62
9 2 0.34 0.92
9 3 0.34 1.21
9 4 0.33 1.51
9 5 0.32 1.81
9 6 0.32 2.11
9 7 0.31 2.4
9 8 0.31 2.7
9 9 0.3 3.0
The simple decision rule is to buy a booster pack if the booster pack price is equal or less to what the equation says. Note that these calculations do not account for foil cards, which supposedly do exist and can be even 10 times more valuable than a normal card, so if you are feeling lucky you might pay a litle bit of premium.
The biggest difference between the real-world market and the assumptions at the start of this article is that the prices for cards in a set can vary somewhat. In the previous post I noted how for example for Half-Life 2, the card prices had a range of 0,13€ - 0,15€. If you’re unlucky, you might have all the 0,13€ cards and be missing 0,15€ cards. Factoring in the prices of owned cards (which change the possible revenue from selling duplicates) and not owned cards (which change the value of a non-duplicate) might change the value of a booster pack by crucial cents. Caveat emptor.
Other interesting questions that could be explained with raw access to Steam’s Community Market transaction data would be what explains the different prices for different card sets. In a sense, all a full set gives to a player is 100 XP, smilies and wallpapers (usually worhtless) and a discount coupon (equally worthless). This would imply, among other things, that a card in set of 15 should have a price of about one third of a card in set of 5. Also, a lot of the price difference seems to be explained by simple supply of cards: games featured in Humble Bundles seem to have cheaper cards than triple-A titles that haven’t had a massive sale.
Anyway, armed with this knowledge you can start buy booster packs with higher confidence. Do remember that averages are just that; in the short run luck plays a big role.
|
2022-05-25 22:13:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2533935606479645, "perplexity": 1171.7274173694304}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662594414.79/warc/CC-MAIN-20220525213545-20220526003545-00117.warc.gz"}
|
https://pos.sissa.it/363/195/
|
Volume 363 - 37th International Symposium on Lattice Field Theory (LATTICE2019) - Main session
Developments in the position-space approach to the HLbL contribution to the muon $g-2$ on the lattice.
N. Asmussen,* E.H. Chao, A. Gérardin, J.R. Green, R.J. Hudspith, H.B. Meyer, A. Nyffeler
*corresponding author
Full text: pdf
Pre-published on: January 04, 2020
Published on: August 27, 2020
Abstract
The measurement of the anomalous magnetic moment of the muon and its prediction allow for a high-precision test of the Standard Model (SM). In this proceedings article we present ongoing work combining lattice QCD and continuum QED in order to determine an important SM contribution to the magnetic moment, the hadronic light-by-light contribution. We compute the quark-connected contribution in the Mainz position-space approach and investigate the long-distance part of our data using calculations of the $\pi^0$-pole and charged pion loop contributions.
DOI: https://doi.org/10.22323/1.363.0195
How to cite
Metadata are provided both in "article" format (very similar to INSPIRE) as this helps creating very compact bibliographies which can be beneficial to authors and readers, and in "proceeding" format which is more detailed and complete.
Open Access
|
2020-11-26 06:58:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49161773920059204, "perplexity": 4655.145550601136}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141186761.30/warc/CC-MAIN-20201126055652-20201126085652-00632.warc.gz"}
|
https://proxieslive.com/tag/choose/
|
## Why would I choose anything other than a Greataxe as my Warlock Pact weapon?
In the PHB p.107 on Pact of the Blade boon it is stated:
You can choose the form that this melee weapon takes each time you create it (see chapter 5 for weapon options). You are proficient with it while you wield it.
Why would I ever choose a “low” damage weapon, compared to a Greataxe (for example) with its 1d12 damage? I’ll be proficient with whatever weapon I choose, anyway.
## Can a creature with wings choose to fall to get down faster?
A dragon flies at 4000 feet of altitude and needs to get to ground level as fast as possible.
With a double-move and a fly speed of 200, it can only move 400 feet per round.
But, can it choose to stop flapping its wings and fall (I would assume maybe 500 or 1000 feet per round), and then use the Fly skill to negate the 20d6 of falling damage when it hits the ground?
## when do I attack i can choose attack with offhand? [duplicate]
when do I attack an normal action i can choose attack with offhand?
looks a little weird an fighter dual wielder attack 4 times with one hand and just one with offhand
## Can players choose specific points in space, down to the inch, to cast a spell so as to avoid hitting a prone character?
Say there is a character that is prone, such as if they were unconscious, and they are surrounded on all 8 square grids (assuming that a grid is being used) by other creatures. Can a player cast a spell that has a sphere effect such as fireball or shatter such that only the 8 creatures surrounding the one that is prone be hit?
Would this potentially have any adverse effects with potentially breaking or having any unintended consequences for any other spells/effects down the line if this were allowed?
Obviously when it comes to casting some spells, the caster has the option to "choose a point in space", but when playing with the understanding of a grid system that works in chunks of a given dimension does is it feasible to have spells cast in such a way so that a body lying prone won’t be affected by a spell cast just overhead?
## Do creatures with Legendary Resistance know what is hitting them in order to choose to use the resistance or not?
Say I cast a Fireball at a white dragon, it definitively knows that the spell cast was a Fireball, with all those flames around…
However, some spells are more subtle, such as Banishment. Maybe the characters want to have about 1 minute to build something that will kill the dragon once it returns… but then it could take casting the Banishment spell (or other spells) many times to successfully sending the dragon on a demiplane for a while.
Yet, if the Dragon (or other target) cannot really know what the spell is, then it could accept a failure thinking it may need its resistance later when it is in a worst situation than at this time…
I have not been able to find something in RAW that says the target knows of the exact spell effects when it has the Legendary Resistance skill. Is there?
## Can a Legendary monster ignore a Divination wizard’s Portent feature and choose to pass the save anyway?
In D&D 5e, if a School of Divination wizard uses the Portent feature on a legendary monster to assign it a failing saving throw, can the legendary monster use their legendary resistance to choose to pass the saving throw anyway?
## Can a flying character choose to fall, and then use a reaction to stop falling before hitting the ground?
As a flying character there are a few scenarios that I’d like to know are valid/RAW, invalid, or up to the DM.
Assume in these scenarios that all characters fall at 1000ft/round (this is not up for discussion (no matter how strongly you feel about it) as my DM has made this ruling.) Also assume the fall is intentional (on my turn/not done by an enemy or enemy’s turn).
1. Fly at 1,005ft, fall (drop prone?) in 1 round (1000ft), next round recover (stand up from prone), land safely or continue to fly.
2. Fly at 600ft, fall, to 60ft recover to fly normally.
3. Fly at 600ft, fall, to 60ft cast feather as a reaction to falling. This scenario could also include carrying a halfling (600ft), then dropping her, and she can cast feather fall (as a reaction) 60ft before hitting the ground.
Feel free to add additional cool scenarios that could work. Or if a scenario doesnt work, what would be needed to make it work.
If possible please use citations, especially if any of the scenarios are invalid/against the rules.
## Top Ten Online Choices to Get Clients to Choose You Again and Again – Part 1
Did you realize that 95% of training organizations fall flat in light of the fact that their proprietors don’t give enough consideration to deals duplicate?
Regardless of whether you are an expert speaker, mentor, or business visionary, each business needs more customers. Significantly more, they need to allure customers to proceed with a progressing relationship. Your online deals duplicate issues. Here’s 10 top approaches to get new customers and keep your current ones:
1. Ensure your Web website, email, mobile database or telephone deals messages serve your likely customer’s needs and wants.
At the point when we don’t pass on our persuading message regarding why they ought to pick us, we lose them. Ask yourself these inquiries, “What does my Web website state about me? Does its messages take my perusers by the neckline and persuade them to understand more?”
Do your words motivate your perusers? Will they know what they should know to show up at a good choice? Will they be anxious to get in touch with you and purchase?
We lose numerous potential customers when we don’t make it simple for them to request or get in touch with us. In the event that your connections don’t work, disheartened by your disruption, your potential customers will leave and attempt some other assistance.
Probably the greatest error I’ve made isn’t checking my connections or having my website admin check them routinely to check whether they are clear and working. Not exclusively did these befuddle my magnificent instructing likely customers, it cost me deals.
3. Make and send a focused on ezine normally.
4. Give your potential customers data that benefits them.
On the off chance that you don’t give your expected customers and clients something that benefits them, for example, tips, articles, resourcesand uncommon offers, you miss pulling in new customers. Endorsers need data and they love a deal. Your ezine will offer every single previous customer, present ones, and potential ones specific how-tos and other valuable free data. Thus, your endorsers will turn into your faithful supporters. After 5-7 presentations, huge numbers of them will purchase.
Tell them the amount you value them. At the point when they take an interest in a review or give you input, express gratitude toward them with a blessing, maybe a free exceptional report or free response to any one inquiry they may have. Keeping in contact with your gatherings implies they see your name frequently, and when all is good and well, will seek you for either your administration or item.
Keep in mind, it’s not the quantities of supporters that check; it’s the focused on ones who need what you have. Construct your rundown by associating with your best expected crowd. The more focused on your supporters, the more possibility you have of selling your administrations.
6. Make your Web Pages With Important Key Words.
Consider what individuals will type in when they do a quest for your administration. Ensure you remember those watchwords and expressions for you landing page and each other page. The web crawlers search for these to put you. The more proper catchphrases, the higher you go in the web indexes. Watchwords put me in the main three spots for Google and 35 other web crawlers.
For example if like me, you offer Internet showcasing, these words will work: web based advertising, free articles, increment online deals, increment benefits, duplicate composition, direct mail advertisement, mentor promoting, speaker advancement, joins, site showcasing, web advancement, mentor showcasing administrations, increment ezine endorsers, increment focused on web traffic, site features, advantages and highlights for customer, highlights, web tips, book training, Judy Cullins, San Diego, eBook promoting.
Many Web locales recommend you generally incorporate your name, area, and business class. Numerous customers recruited me by searching for a neighborhood mentor.
Judy Cullins, 20-year book and Internet Marketing Coach, Author of 10 eBooks including “Compose your eBook Fast,” and “How to Market your Business on the Internet,” she offers free assistance through her 2 month to month ezines, The Book Coach Says…and Business Tip of the Month at
## How do I use the pumping lemma for a^n b^m a^(n+m) ? How can I choose the pumping length?
$$L = {a^n + b^m + a^{n+m}}$$
This is the language I want to show is not regular.
Now my problem is to choose p correctly.
Can I just set it as p=2*(n+m) ?
That’s the problem I am facing now.
Thanks for any help I am starting to learn it to use the pumping lemma.
## Would it be unbalanced for Dex-based Fighters to choose proficiency in Dex saving throws instead of Str saving throws?
Whilst building a Dex-based Fighter (an Arcane Archer), I decided to pick the Resilient feat at level 4 so that I could have proficiency in Dexterity saving throws, which makes sense given that they are a Dex-based character. Then I thought about how odd it was that they had proficiency in Strength saving throws just because they’re a Fighter even though they’re not a Str-based character. Sure, it makes sense for a lot of Fighters, but not all of them.
Therefore, I’m considering introducing a new homebrew rule for whenever I’m running a game and a player of mine wants to make a Dex-based Fighter:
Saving Throws: Strength or Dexterity (your choice), Constitution
The “choice” would be made at level 1 (I don’t plan on allowing them to switch it back and forth).
Given that this class is the only one listed under the Multiclassing section in the PHB (pg. 163) has having an “or” in their requirements (“Strength 13 or Dexterity 13″), this seems to fit the intent that Fighters aren’t tied to Strength.
The Battlemaster archetype (PHB, pg. 73) also allows either for the saving throw for some maneuvers, again implying that Fighters are supposed to be flexible regarding using Strength or Dexterity:
|
2020-10-27 15:26:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17541788518428802, "perplexity": 3624.8954068998282}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107894203.73/warc/CC-MAIN-20201027140911-20201027170911-00671.warc.gz"}
|
https://dsp.stackexchange.com/revisions/690016b0-6fba-4d22-b62b-cdef71b38613/view-source
|
To add a bit to the previous answers, you can get the equivalent of an upsampled band-limited cross-correlation by making your correlation variable a non-integer.
The following (python) code computes $\tau$, where
$$\tau = \arg \max_{\tau}\sum_{n=0}^{N-1}f\left(n\right)g\left(n+\tau\right)$$
That is, it finds the maximum of the cross correlation.
The input variables a and b describe $f\left(n\right)$ and $g\left(n\right)$ for $n = \{0, 1, ... , N-1\}$ and are both assumed to be band limited and periodic with period $N$ (the shift is implemented in the discrete Fourier domain). $\tau$ is in range $[-N+1, N-1]$.
The intention is to show how the cross-correlation can be performed for non-integer $\tau$, which is defined by the closure correlate_point. This uses the omega array, which describes the rotation of the complex phasor at each discrete frequency corresponding to a time-shift $\tau=1$. $\tau$ then scales this for each shift. It should be apparent that to maintain a real time signal, the rotations of the negative frequencies are just $-1$ times the rotations of the positive frequencies (for corresponding frequency pairs).
The one subtlety is in how you treat the $\frac{N}{2}$ sample (the nyquist frequency), as this is shared between the positive and negative bands. The solution used here is to interpolate between the positive rotation phasor and the negative rotation phasor (which are reflections on the real axis), which is to project either unit rotation phasor onto the real axis, which is a cos function (the pi is because that is value of omega corresponding to the nyquist frequency). Clearly this value needs to be real to maintain a real time domain signal.
You can use this to compute the cross-correlation for any arbitrarily precise value of $\tau$. Just call the closure (which can be returned as a callable) with whatever value of $\tau$ you fancy.
import numpy
from numpy import fft
from scipy import optimize
def arg_max_corr(a, b):
if len(a.shape) > 1:
raise ValueError('Needs a 1-dimensional array.')
length = len(a)
if not length % 2 == 0:
raise ValueError('Needs an even length array.')
if not a.shape == b.shape:
raise ValueError('The 2 arrays need to be the same shape')
# Start by finding the coarse discretised arg_max
coarse_max = numpy.argmax(numpy.correlate(a, b, mode='full')) - length+1
omega = numpy.zeros(length)
omega[0:length/2] = (2*numpy.pi*numpy.arange(length/2))/length
omega[length/2+1:] = (2*numpy.pi*
(numpy.arange(length/2+1, length)-length))/length
fft_a = fft.fft(a)
def correlate_point(tau):
rotate_vec = numpy.exp(1j*tau*omega)
rotate_vec[length/2] = numpy.cos(numpy.pi*tau)
return numpy.sum((fft.ifft(fft_a*rotate_vec)).real*b)
start_arg, end_arg = (float(coarse_max)-1, float(coarse_max)+1)
max_arg = optimize.fminbound(lambda tau: -correlate_point(tau),
start_arg, end_arg)
return max_arg
|
2020-07-10 20:03:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8592613339424133, "perplexity": 1285.300695460259}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655911896.73/warc/CC-MAIN-20200710175432-20200710205432-00105.warc.gz"}
|
https://devel.isa-afp.org/entries/Khovanskii_Theorem.html
|
# Khovanskii's Theorem
September 2, 2022
This is a development version of this entry. It might change over time and is not stable. Please refer to release versions for citations.
### Abstract
We formalise the proof of an important theorem in additive combinatorics due to Khovanskii, attesting that the cardinality of the set of all sums of $n$ many elements of $A$, where $A$ is a finite subset of an abelian group, is a polynomial in $n$ for all sufficiently large $n$. We follow a proof due to Nathanson and Ruzsa as presented in the notes “Introduction to Additive Combinatorics” by Timothy Gowers for the University of Cambridge.
|
2023-01-31 20:44:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5291360020637512, "perplexity": 219.83266286054885}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499890.39/warc/CC-MAIN-20230131190543-20230131220543-00526.warc.gz"}
|
https://puzzling.meta.stackexchange.com/tags/feature-request/hot
|
# Tag Info
### Should we ban Deusovi from puzzle-solving?
I have been unable to reproduce this issue. Sorry, I've looked around for issues where this has happened to me, but can't find anything, even intentionally trying to make it happen. This issue of ...
• 136k
Accepted
### Should we ban Deusovi from puzzle-solving?
Clearly, something needs to be done... I propose we reignite the fortnightly challenges with the following theme: Puzzling Honeypots The challenge would be to devise puzzles that were challenging, ...
• 36.1k
### Puzzling's new favicon is too similar to Workplace's favicon
Whilst I quite like the current logo, I think there's plenty of opportunity to keep the spirit of it, and still avoid the overlap with the Workplace logo... With that in mind I've mocked up a few ...
• 36.1k
### Should we ban Deusovi from puzzle-solving?
How to stop mods from answering puzzles Write the solution in a comment (or a chain of comments if necessary) below your puzzle, then immediately delete the comment(s). Mods, who can see deleted ...
• 114k
### Should we ban Deusovi from puzzle-solving?
Yes, Deusovi should totally be banned from solving. What more need be said?
• 111k
Accepted
### What is our reason for wanting bounties on questions?
Increase the value of an upvote for a question. Currently, an upvote on a question provides only half the reputation that an upvote on an answer does. (5 reputation vs. 10 reputation.) This would ...
• 23.9k
### What is our reason for wanting bounties on questions?
Implement "bounties" for questions that can be awarded instantaneously. This would be a method where one user could transfer some reputation to another user to reward them for an exemplary question. ...
• 23.9k
### Can we have the tag **maths** as a synonym for *math* please
Or better yet, make math and maths both synonyms of mathematics, that way, everyone is happy and neither way of spelling is shown favoritism.
• 684
### Do we need review audits on Puzzling?
No. There are at least a couple of reasons why review audits are, in my view, useless at best. Nobody's been banned due to them, so they're either not needed or not working. Emrakul said he doesn't ...
• 114k
### We should have MathJax. What should the escape sequence be?
All right, it's time for everyone's favorite post: the list of posts that would benefit from MathJax! The Stack Exchange team looks for posts that would benefit by having MathJax (they want to make ...
### Can we get +10 reputation for upvotes on questions?
As of yesterday, November 13th 2019, this is now in effect network wide: We’re changing the reputation earned from getting a question upvote to ten points, making it equal to the reputation earned ...
• 27k
### Should we ban Deusovi from puzzle-solving?
Yes, we should totally ban Deusovi from solving all puzzles. For proof, all you have to do is look at his profile - he has conveniently placed a lot of quotes of people telling him that they hate him ...
• 8,417
### Difficulty rating on questions
And who's supposed to judge the "difficulty rating" of a puzzle? If it's the original poster of the question, that's almost certainly going to be subject to lots of bias. "Oh, I don't want beginners ...
• 4,537
### Auto-protect questions on HNQ?
I came here via a HNQ. I think it's worth it to bring in new visitors, even if it might temporarily distort voting. Getting my first answer in and accepted is what got me hooked on this site. I would ...
• 2,015
Accepted
### New off-topic reason for puzzles where the source is not mentioned
As GPR noted, we already were discussing a custom close reason for this; I'm very much in favor of having one, and just hadn't gotten around to posting here about creating one and what it should look ...
• 40.1k
Accepted
• 23.9k
### Does the current Puzzle answering format discourage some solvers?
This is a symptom of a more general problem, which is that the Stack Exchange format is not a good match for puzzling. There are several other inconsistencies, into which I won't go now, but the basic ...
• 68.2k
### Please vote on a new title prompt for questions
What's your puzzle, or puzzle-related question? Be specific. Suggested by @COTO. Edited to include a comma after suggestions from @Curmudgeon. Hopefully nobody will mind such a minor change, but feel ...
• 114k
Accepted
### Please vote on a new title prompt for questions
Rand's suggestion was reasonable, except for the abomination that was the comma - so I stripped it of this demon and made the change: At Curmudgeon's suggestion, I also added an extra "your": What'...
• 101
### The Ability to Change Vote?
It seems to me that this is the intended usage behind the Favourite star. The name of it (Favourite) implies additional meaning, but in reality, it just adds the item to your watch list. This list can ...
• 12.7k
Only top scored, non community-wiki answers of a minimum length are eligible
|
2022-05-18 22:49:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3210459351539612, "perplexity": 3120.4690511900103}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522556.18/warc/CC-MAIN-20220518215138-20220519005138-00407.warc.gz"}
|
https://eepower.com/news/vicor-reports-results-for-the-quarter-and-year/
|
News
# Vicor Reports Results for the Quarter and Year
February 22, 2011 by Jeff Shepard
Vicor Corp. reported its financial results for the quarter and year ended December 31, 2010. Revenues for the fourth fiscal quarter ended December 31, 2010, increased to $72,975,000, compared to$49,138,000 for the corresponding period a year ago, and increased from $68,672,000 for the third quarter of 2010. Gross margin increased to$32,984,000 for the fourth quarter of 2010, compared to $22,497,000 for the corresponding period a year ago and$32,473,000 for the third quarter of 2010. Gross margin, as a percentage of revenue, decreased to 45.2% for the fourth quarter of 2010 compared to 45.8% for the fourth quarter of 2009 and 47.3% for the third quarter of 2010. Net income for the fourth quarter was $10,807,000, or$0.26 per diluted share, compared to net income of $2,309,000, or$0.06 per diluted share, for the corresponding period a year ago and net income of $15,819,000, or$0.38 per diluted share, for the third quarter of 2010.
Revenues for the year ended December 31, 2010, increased by 26.7% to $250,733,000 from$197,959,000 for the prior year. Net income for the year was $33,325,000, or$0.80 per diluted share, compared to net income of $2,798,000 or$0.07 per diluted share, for the corresponding period a year ago.
During the third and fourth quarters of 2010, the Company recorded non-recurring, non-cash tax benefits of $5,158,000, or approximately$0.12 per diluted share, and $1,159,000, or approximately$0.03 per diluted share, respectively, due to the release of portions of its deferred tax valuation allowance. These tax benefits were partially offset by estimated federal, state and foreign income taxes on the company’s 2010 pre-tax income and estimated federal and state income taxes for certain non-controlling interests that are not part of the company’s consolidated income tax returns.
The consolidated book-to-bill ratio for the fourth quarter was 0.66, as compared to 1.02 for the third quarter of 2010. Total backlog at the end of the fourth quarter was $78,876,000, compared to$57,234,000, at the end of 2009.
Commenting on the company’s performance, Patrizio Vinciarelli, Chief Executive Officer, stated, "Vicor recognized a record level of quarterly revenue for the fourth quarter as a result of substantial bookings recorded earlier in the year. Consolidated revenue increased on a sequential basis by 6%, driven largely by a near doubling of V-I Chip™ shipments. Relative to a weak fourth quarter, bookings activity has thus far experienced a nearly 50% increase in the first quarter of 2011."
|
2022-07-05 22:08:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2207295000553131, "perplexity": 3997.4915726434383}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104628307.87/warc/CC-MAIN-20220705205356-20220705235356-00647.warc.gz"}
|
http://mathhelpforum.com/calculus/137633-definite-integral-problem-print.html
|
# Definite Integral problem
• April 6th 2010, 03:41 PM
ascendancy523
Definite Integral problem
So here's my problem:
Let $\int_0^9f(x)dx=7, \int_0^3f(x)dx=2 , \int_6^9f(x)dx=10$
Find $\int_3^6f(x)dx=$
and
$\int_6^3(7f(x)-2)dx=$
Any suggestions on how I could get started?
• April 6th 2010, 04:35 PM
|
2014-07-23 14:39:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.859481692314148, "perplexity": 9696.805144902863}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997878518.58/warc/CC-MAIN-20140722025758-00078-ip-10-33-131-23.ec2.internal.warc.gz"}
|
https://www.mersenneforum.org/showthread.php?s=5488e8e16ffefd2313bf7282f3ddb467&t=25479&page=11
|
mersenneforum.org Some CADO-NFS Work At Around 175-180 Decimal Digits
Register FAQ Search Today's Posts Mark Forums Read
2020-08-04, 05:03 #111 VBCurtis "Curtis" Feb 2005 Riverside, CA 10001010000102 Posts I view mfb and lambda both as methods to control the amount of wasted cofactor-splitting. For reasons unclear to me, CADO runs quite a bit faster (in extensive testing at 100-140 digits) with mfb set well below 2*lpb. I've been using lambda as a sort of floating-point control for mfb, and on small numbers I have lots of runs where changing lambda by 0.01 does change the yield per Q but also the number of relations needed (in the direction that suggests the job is effectively a smaller LP choice). I found that using LP choices 1 or 2 bits higher than traditional choices but tight mfb led to faster factorizations. I suppose I won't be too surprised if that doesn't work at this size; 31/32 is pretty much traditional for this size of job, so perhaps a tight mfb or tight lambda setting is overly restrictive.
2020-08-09, 01:49 #112 charybdis Apr 2020 11011012 Posts c180 results: Poly score 9.842e-14, within 2% of the score for the last c179. 65.1M CPU-sec sieving for 321M raw relations, 225M unique. TD=110 produced a 16.6M matrix. mfb0=59 doesn't seem worth it to me. Yield and rel/sec improve a few percent but so does the number of relations needed to build a matrix. More importantly, 59 gives bigger matrices than 58; I'd guess something like 8-10% more unique relations are needed at 59 to build a matrix comparable to what you'd get at 58. There's one more c180 on the HCN list that's unambiguously a GNFS job (there are a couple that are SNFS 261/2, I suspect SNFS will be slightly faster there?) so I'll run it with mfb0=58 and lambda0=1.93 - intermediate between the previously tried 1.88 and CADO-default 58/31+0.1=1.97 - unless anyone has a better suggestion.
2020-08-09, 03:42 #113 VBCurtis "Curtis" Feb 2005 Riverside, CA 105028 Posts I think 1.93 is an excellent idea. mfb0=59 should produce a slightly larger matrix than 58, since more of the relations will have a largest-bit large prime. Sieving a few more relations should overcome that, but if that makes the sieving take longer than an MFB=58 job then it's a waste.
2020-08-15, 01:44 #114 charybdis Apr 2020 10910 Posts Stats for c180 with lambda0=1.93: Poly score 9.285e-14. 70.2M CPU-seconds sieving for 322M raw relations, 218M unique. TD=110 produced a 15.8M matrix. Poly is ~7% worse than the c179 that I ran with default lambda0 of ~1.97. Sieving is ~6% slower per raw relation but ~8% slower per unique relation, and the matrix ended up a little bigger too. I doubt the poly scores predict sieving speed to the nearest 1%, but lambda 1.93 performs noticeably better than 1.88 and within a percent or so of 1.97. Suggests that the default 1.97 is close enough to optimal here that it's not worth trying to narrow it down further? Next GNFS target in the HCN list is a c182. I'll keep most of the sieving params the same to get a direct comparison - experimentation can wait till the bunch of c184s if I decide to do them - but should lims go a bit higher? 105M/145M? Last fiddled with by charybdis on 2020-08-15 at 01:45
2020-08-15, 02:15 #115
VBCurtis
"Curtis"
Feb 2005
Riverside, CA
2·472 Posts
Quote:
Originally Posted by charybdis I doubt the poly scores predict sieving speed to the nearest 1%, but lambda 1.93 performs noticeably better than 1.88 and within a percent or so of 1.97. Suggests that the default 1.97 is close enough to optimal here that it's not worth trying to narrow it down further? Next GNFS target in the HCN list is a c182. I'll keep most of the sieving params the same to get a direct comparison - experimentation can wait till the bunch of c184s if I decide to do them - but should lims go a bit higher? 105M/145M?
If I were to do another test, I'd try mfb0=59 with a tighter lambda setting; but you've shown 59 is slower than 58, and with 58 that not specifying lambda is as fast as 1.93. Your params are simpler, and likely faster than trying to find a fastest-lambda with 59.
As for changing lims: larger lims will improve relations per Q and may improve uniques per raw relations ratio, but at the expense of a slightly larger matrix. The rule of thumb I've been using is to target an ending Q somewhere near lim1; so, if your C180 runs have ending Q near 130M, I'd add 10M to both lims for C182. I don't have theory to back this, and keeping lim smallish while running Q well above lim1 may be faster overall.
The larger the job, the more Q will exceed lim1; my rule of thumb has helped me in the 140-175 digit range.
2020-08-15, 02:19 #116
charybdis
Apr 2020
109 Posts
Quote:
Originally Posted by VBCurtis The rule of thumb I've been using is to target an ending Q somewhere near lim1; so, if your C180 runs have ending Q near 130M, I'd add 10M to both lims for C182.
The last c180 did indeed finish around 130M; the other numbers had slightly better polys and finished closer to 120M.
2020-08-16, 01:14 #117
charybdis
Apr 2020
109 Posts
Quote:
Originally Posted by charybdis A further trawl through the documentation reveals that a lambda of mfb/lpb + 0.1 is used if we leave it unspecified.
Despite what sieve/README.descent says ("in the first case, lambda defaults to mfb0/lpb0+0.1 as everywhere else in cado-nfs"), it looks like I was wrong, and presumably that readme file is outdated.
From sieve/las-norms.cpp:
Code:
/* when lambda = 0 (automatic), we take mfb/lpb + 0.3, which is
experimentally close to optimal in terms of seconds per relation
(+ 0.2 might be even better on the rational side) */
if (lambda == 0.0)
lambda = 0.3 + (double) sc.sides[side].mfb /
(double) sc.sides[side].lpb ;
So the default lambda0 for lpb0=31, mfb0=58 is actually 2.17 not 1.97. Probably worth trying ~2.05 on my next run then.
Last fiddled with by charybdis on 2020-08-16 at 01:21
2020-08-23, 16:22 #118 charybdis Apr 2020 109 Posts c182 stats: Poly score 6.643e-14. 98.0M CPU-seconds sieving (Q 20M-181.4M) for 321M raw relations, 218M unique. TD=110 produced a 17.8M matrix. The parameters are still holding up fairly well at c182 if we assume that the poly score is inversely proportional to difficulty; I don't know whether this is a good assumption to make. The matrix has got bigger, as expected. Final Q was well beyond lim1 so maybe lims could be higher, but before trying that out I'm going to test lambda0=2.07.
2020-09-01, 23:43 #119 charybdis Apr 2020 11011012 Posts The c183 stats below aren't directly comparable to those for the c182 above, for a couple of reasons. Firstly, I ran the c183 on a slightly different set of machines, giving about a 1% speedup in CPU-time. Secondly, one machine had some strange connection problems. I won't bother with the details, but the lowdown is that the server cancelled all the WUs from this client without receiving any relations. This meant that the final Q was a bit larger than it should have been, but the factorization will only have been slowed down by a fraction of a percent. Poly score 6.356e-14 (4.5% worse than c182 above). Sieved Q from 20M to 195.9M, with about 1.7M missing due to the bad client. 100.5M CPU-seconds for 321M raw relations, 218M unique. TD=110 produced an 18.3M matrix. This is a bit bigger than the c182 matrix, but the all the figures looked very similar to the c182 run in the early stages of filtering, so I'm going to chalk the difference in matrix size up to the peculiarities of the polynomial. Putting all the figures together, lambda0 = 2.07 (= mfb0/lpb0 + 0.2) might be 1% faster than the default (mfb0/lpb0 + 0.3). It looks like changing lambda from the default is unlikely to produce big gains at this size. Next up will be the bunch of c184s. Would I be right in thinking lpb 32/32 deserves some testing here?
2020-09-02, 22:30 #120 VBCurtis "Curtis" Feb 2005 Riverside, CA 2×472 Posts Sorry for the delay, been busy with some data-gathering for nfs@home queue planning. A params.C185 file should have the usual 25-30% increase in lim's, and we should test 32/32 against the current setting. If we stay with 31/32, I'd add another 20-30M relations wanted. 32/32 should be 30% higher than that to start with. Poly select should be about double the C180 file- say, 60% increase in admax and 25% increase in P. Edit: I'd also raise qmin to 25M or 30M. The most recent CADO-factorization paper mentions that controlling the qmax/qmin ratio helps to control the duplicate rate; so as our jobs get tougher and sieve up to larger Q's, qmin should rise as well. If I understood what they said properly (a weak assumption), a ratio of 7 is a decent target, and duplicate-rates get poor once the ratio exceeds 10. We saw that back when I suggested qmin of 500k, and their paper agrees with the data you gathered. We expect Q-max of 175-200M, I think? Last fiddled with by VBCurtis on 2020-09-02 at 22:56
2020-09-02, 23:07 #121 VBCurtis "Curtis" Feb 2005 Riverside, CA 105028 Posts I decided to copy the params from a few posts up and make the changes I just recommended. Simpler this way. Also, I changed ncurves0 to 25 from 20. Draft params.c185: Code: tasks.I = 15 tasks.qmin = 30000000 tasks.lim0 = 125000000 tasks.lim1 = 175000000 tasks.lpb0 = 32 tasks.lpb1 = 32 tasks.sieve.mfb0 = 60 #maybe 59? tasks.sieve.mfb1 = 90 #maybe 91 to improve yield? tasks.sieve.ncurves0 = 25 tasks.sieve.ncurves1 = 13 rels wanted = 340000000 Here's the poly-select settings: Code: tasks.polyselect.P = 3000000 tasks.polyselect.admin = 2e4 tasks.polyselect.admax = 35e5 tasks.polyselect.adrange = 1680 tasks.polyselect.incr = 210 tasks.polyselect.nq = 15625 tasks.polyselect.nrkeep = 120 tasks.polyselect.ropteffort = 35
Similar Threads Thread Thread Starter Forum Replies Last Post enzocreti enzocreti 1 2020-03-03 18:38 tuckerkao Miscellaneous Math 2 2020-02-16 06:23 Nick Puzzles 9 2013-02-13 17:17 vsuite GPU Computing 11 2011-02-02 04:47 Corbyguy Software 3 2008-06-09 18:09
All times are UTC. The time now is 08:19.
Sat Oct 31 08:19:30 UTC 2020 up 51 days, 5:30, 2 users, load averages: 1.93, 2.04, 1.92
|
2020-10-31 08:19:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32523033022880554, "perplexity": 4881.38433508838}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107916776.80/warc/CC-MAIN-20201031062721-20201031092721-00371.warc.gz"}
|
https://plainmath.net/2900/solve-using-integral-definition-convolution-answer-laplace-transform
|
# Solve both a)using the integral definition , find the convolution f*g text{ of } f(t)=cos 2t , g(t)=e^t b) Using above answer , find the Laplace Transform of f*g
Question
Laplace transform
Solve both
a)using the integral definition , find the convolution
$$f*g \text{ of } f(t)=\cos 2t , g(t)=e^t$$
b) Using above answer , find the Laplace Transform of f*g
2020-10-29
Step 1
Given:
The functions $$f(t)=\cos 2t , g(t)=e^t$$
Step 2
a) Definition of convolution:
The convolution of piecewise continuous functions f, g : $$\mathbb{R}\rightarrow\mathbb{R}$$ is the function f * g : $$\mathbb{R}\rightarrow\mathbb{R}$$ given by:
$$(f*g)(t)=\int_0^tf(\tau)g(t-\tau)d \tau$$
Therefore, by definition
$$(f*g)(t)=\int_0^t\cos(2\tau)e^{t-\tau}d \tau$$ Integrate using integration by parts:
$$u=\cos(2\tau) , dv=e^(t-\tau)d \tau$$
$$du =-2\sin(2\tau)d \tau ,v = \int e^{t-\tau}d \tau = -e^{t-\tau} \int u dv= uv- \int v du$$
$$\Rightarrow I= \int \cos(2\tau)e^{t-\tau}d \tau = -cos(2\tau)e^{t-\tau} - \int (-e^{t-\tau})(-2\sin(2\tau)d \tau)$$
$$\Rightarrow I= -\cos(2\tau)e^{t-\tau}-2\int(e^{t-\tau})(\sin(2\tau))d \tau \dots(1)$$
Step 3
To simplify further, use parts of integration for the second term Compute the integral
$$\int e^{t-\tau}\sin(2\tau)d \tau$$
$$u=\sin(2\tau) , dv=e^{t-\tau}d \tau$$
$$du=2\cos(2\tau) , v=-e^{t-\tau}$$
$$\Rightarrow \int e^{t-\tau} \sin(2\tau)d \tau = -e^{t-\tau}\sin(2\tau)-\int(-e^{t-\tau})(2\cos(2\tau))d \tau$$
$$= -e^{t-\tau}\sin(2\tau)+2\int(e^{t-\tau})(\cos(2\tau))d \tau$$
$$\Rightarrow \int e^{t-\tau} \sin(2\tau)d \tau = -e^{t-\tau}\sin(2\tau) +2I$$
$$\left\{\text{Because } I=\int(e^{t-\tau})(\cos(2\tau))d \tau\right\}$$
Equation (1) becomes:
$$\Rightarrow I=-\cos(2\tau)e^{t-\tau}-2[-e^{t-\tau}\sin(2\tau)+2I]$$
$$\Rightarrow I=-\cos(2\tau)e^{t-\tau}+2e^{t-\tau}\sin(2\tau)-4I$$
$$\Rightarrow 5I=-\cos(2\tau)e^{t-\tau}+2e^{t-\tau}\sin(2\tau)$$
$$\Rightarrow I=-\frac{1}{5}\cos(2\tau)e^{t-\tau}+\frac{2}{5}e^{t-\tau}\sin(2\tau)$$
$$\int \cos(2\tau)e^{t-\tau}d \tau = -\frac{1}{5} \cos(2\tau)e^{t-\tau} + \frac{2}{5} e^{t-\tau}\sin(2\tau)$$
Step 4
Substitute the upper and lower limit and simplify
$$\int_0^t \cos(2\tau)e^{t-\tau}d \tau = [-\frac{1}{5}\cos(2\tau)e^{t-\tau}+\frac{2}{5}e^{t-\tau}\sin(2\tau)]_0^t$$
$$=[-\frac{1}{5}\cos(2t)e^{t-t}+\frac{2}{5}e^{t-t}\sin(2t)+\frac{1}{5}\cos(2(0))e^{t-0}-\frac{2}{5}e^{t-0}-\frac{2}{5}e^{t-0}\sin(2(0))]$$
$$=[-\frac{1}{5}\cos(2t)+\frac{2}{5}\sin(2t)+\frac{1}{5}e^t]$$
$$\text{Therefore, }$$
$$\text{The convolution } f * g is:$$
$$(f*g)(t)=-\frac{1}{5}\cos(2t)+\frac{2}{5}\sin(2t)+\frac{1}{5}e^t$$
Step 5
b) To find the Laplace transformation of convolution, use the formula:
$$L[f*g]=L[f]L[g]=L[g*f]$$
$$L[f*g]=\int_0^\infty e^(-s \tau) (f*g)(\tau)d \tau$$
$$L[f*g]=L[-\frac{1}{5}\cos(2t)+\frac{2}{5}\sin(2t)+\frac{1}{5}e^t]$$
Use Laplace transformation properties of sum and constant : $$L[f*g]=-\frac{1}{5}L[\cos(2t)]+\frac{2}{5}L[\sin(2t)]+\frac{1}{5}L[e^t]$$
Now, use the basic Laplace transformation formula:
$$L[\cos(at)]=\frac{s}{s^2+a^2} , L[\sin(at)]=\frac{a}{s^2+a^2} , L[e^{at}]=\frac{1}{(s-a)}$$
$$L[f*g]=\frac{1}{5}\left[\frac{s}{(s^2+4)}\right]+\frac{2}{5}\left[\frac{2}{(s^2+4)}\right]+\frac{1}{5}\left[\frac{1}{(s-1)}\right]$$
Therefore, The Laplace transformation of f*g is :
$$L[f*g]=\frac{1}{5}\left[\frac{s}{(s^2+4)}\right]+\frac{2}{5}\left[\frac{2}{(s^2+4)}\right]+\frac{1}{5}\left[\frac{1}{(s-1)}\right]$$
### Relevant Questions
Use the table of Laplace transform and properties to obtain the Laplace transform of the following functions. Specify which transform pair or property is used and write in the simplest form.
a) $$x(t)=\cos(3t)$$
b)$$y(t)=t \cos(3t)$$
c) $$z(t)=e^{-2t}\left[t \cos (3t)\right]$$
d) $$x(t)=3 \cos(2t)+5 \sin(8t)$$
e) $$y(t)=t^3+3t^2$$
f) $$z(t)=t^4e^{-2t}$$
Find the Laplace transforms of the following time functions.
Solve problem 1(a) and 1 (b) using the Laplace transform definition i.e. integration. For problem 1(c) and 1(d) you can use the Laplace Transform Tables.
a)$$f(t)=1+2t$$ b)$$f(t) =\sin \omega t \text{Hint: Use Euler’s relationship, } \sin\omega t = \frac{e^(j\omega t)-e^(-j\omega t)}{2j}$$
c)$$f(t)=\sin(2t)+2\cos(2t)+e^{-t}\sin(2t)$$
Find the Laplace transform of the function $$L\left\{f^{(9)}(t)\right\}$$
Use the Laplace transform to solve the following initial value problem:
$$2y"+4y'+17y=3\cos(2t)$$
$$y(0)=y'(0)=0$$
a)take Laplace transform of both sides of the given differntial equation to create corresponding algebraic equation and then solve for $$L\left\{y(t)\right\}$$ b) Express the solution $$y(t)$$ in terms of a convolution integral
A random sample of $$n_1 = 14$$ winter days in Denver gave a sample mean pollution index $$x_1 = 43$$.
Previous studies show that $$\sigma_1 = 19$$.
For Englewood (a suburb of Denver), a random sample of $$n_2 = 12$$ winter days gave a sample mean pollution index of $$x_2 = 37$$.
Previous studies show that $$\sigma_2 = 13$$.
Assume the pollution index is normally distributed in both Englewood and Denver.
(a) State the null and alternate hypotheses.
$$H_0:\mu_1=\mu_2.\mu_1>\mu_2$$
$$H_0:\mu_1<\mu_2.\mu_1=\mu_2$$
$$H_0:\mu_1=\mu_2.\mu_1<\mu_2$$
$$H_0:\mu_1=\mu_2.\mu_1\neq\mu_2$$
(b) What sampling distribution will you use? What assumptions are you making? NKS The Student's t. We assume that both population distributions are approximately normal with known standard deviations.
The standard normal. We assume that both population distributions are approximately normal with unknown standard deviations.
The standard normal. We assume that both population distributions are approximately normal with known standard deviations.
The Student's t. We assume that both population distributions are approximately normal with unknown standard deviations.
(c) What is the value of the sample test statistic? Compute the corresponding z or t value as appropriate.
(Test the difference $$\mu_1 - \mu_2$$. Round your answer to two decimal places.) NKS (d) Find (or estimate) the P-value. (Round your answer to four decimal places.)
(e) Based on your answers in parts (i)−(iii), will you reject or fail to reject the null hypothesis? Are the data statistically significant at level \alpha?
At the $$\alpha = 0.01$$ level, we fail to reject the null hypothesis and conclude the data are not statistically significant.
At the $$\alpha = 0.01$$ level, we reject the null hypothesis and conclude the data are statistically significant.
At the $$\alpha = 0.01$$ level, we fail to reject the null hypothesis and conclude the data are statistically significant.
At the $$\alpha = 0.01$$ level, we reject the null hypothesis and conclude the data are not statistically significant.
(f) Interpret your conclusion in the context of the application.
Reject the null hypothesis, there is insufficient evidence that there is a difference in mean pollution index for Englewood and Denver.
Reject the null hypothesis, there is sufficient evidence that there is a difference in mean pollution index for Englewood and Denver.
Fail to reject the null hypothesis, there is insufficient evidence that there is a difference in mean pollution index for Englewood and Denver.
Fail to reject the null hypothesis, there is sufficient evidence that there is a difference in mean pollution index for Englewood and Denver. (g) Find a 99% confidence interval for
$$\mu_1 - \mu_2$$.
lower limit
upper limit
(h) Explain the meaning of the confidence interval in the context of the problem.
Because the interval contains only positive numbers, this indicates that at the 99% confidence level, the mean population pollution index for Englewood is greater than that of Denver.
Because the interval contains both positive and negative numbers, this indicates that at the 99% confidence level, we can not say that the mean population pollution index for Englewood is different than that of Denver.
Because the interval contains both positive and negative numbers, this indicates that at the 99% confidence level, the mean population pollution index for Englewood is greater than that of Denver.
Because the interval contains only negative numbers, this indicates that at the 99% confidence level, the mean population pollution index for Englewood is less than that of Denver.
Use Theorem 7.4.2 to evaluate the given Laplace transform. Do not evaluate the convolution integral before transforming.(Write your answer as a function of s.)
$${L}{\left\lbrace{e}^{{-{t}}}\cdot{e}^{t} \cos{{\left({t}\right)}}\right\rbrace}$$
Use Theorem 7.4.3 to find the Laplace transform F(s) of the given periodic function.
F(s)=?
$$\text{Laplace transforms A powerful tool in solving problems in engineering and physics is the Laplace transform. Given a function f(t), the Laplace transform is a new function F(s) defined by }$$
$$F(s)=\int_0^\infty e^{-st} f(t)dt \(\text{where we assume s is a positive real number. For example, to find the Laplace transform of } f(t)=e^{-t} \text{ , the following improper integral is evaluated using integration by parts:} \(F(s)=\int_0^\infty e^{-st}e^{-t}dt=\int_0^\infty e^{-(s+1)t}dt=\frac{1}{s+1}$$
$$\text{ Verify the following Laplace transforms, where u is a real number. }$$
$$f(t)=t \rightarrow F(s)=\frac{1}{s^2}$$
Let $$y(t)=\int_0^tf(t)dt$$ If the Laplace transform of y(t) is given $$Y(s)=\frac{19}{(s^2+25)}$$ , find f(t)
a) $$f(t)=19 \sin(5t)$$
b) none
c) $$f(t)=6 \sin(2t)$$
d) $$f(t)=20 \cos(6t)$$
e) $$f(t)=19 \cos(5t)$$
In an integro-differential equation, the unknown dependent variable y appears within an integral, and its derivative $$\frac{dy}{dt}$$ also appears. Consider the following initial value problem, defined for t > 0:
$$\frac{{{\left.{d}{y}\right.}}}{{{\left.{d}{t}\right.}}}+{4}{\int_{{0}}^{{t}}}{y}{\left({t}-{w}\right)}{e}^{{-{4}{w}}}{d}{w}={3},{y}{\left({0}\right)}={0}$$
$${Y}{\left({s}\right)}={L}{\left\lbrace{y}{\left({t}\right)}\right)}{\rbrace}-?$$
|
2021-06-24 09:29:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9399948716163635, "perplexity": 628.8608513457613}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488552937.93/warc/CC-MAIN-20210624075940-20210624105940-00017.warc.gz"}
|
https://mathhelpboards.com/threads/wolf-of-the-red-moons-question-at-yahoo-answers-regarding-the-computation-of-work.5587/
|
# Wolf of the Red Moon's question at Yahoo! Answers regarding the computation of work
#### MarkFL
Staff member
Here is the question:
Need help with a Calculus Work problem...have answer not sure how to get to it?
A cylindrical gasoline tank 3ft in diameter and 4ft long is carried on the back of a truck and used to fuel tractors in the field. The axis of the tank is horizontal. Find the work done to pump the entire contents of the tank into a tractor if the opening on the tractor's tank is 5ft above the top of the tank in the truck. Assume gasoline weighs 42 lbs per cubic foot.
(Hint: Evaluate one integral by a geometric formula and the other by observing that the integrand is an odd function.)
I have posted a link there to this topic so the OP can see my work.
#### MarkFL
Staff member
Re: Wolf of the Red Moon's question regarding the computation of work
Hello Wolf of the Red Moon,
I prefer to work problems like this in general terms, and then plug our given data into the resulting formula.
First, let's let:
$r$ = the radius of the tank
$\ell$ = the length of the tank
$q$ = the distance above the top of the tank the fluid must be pumped
Now, let's imagine slicing the contents of the tank horizontally into rectangular sheets. The length of each sheet is constant, given by the length of the tank $\ell$. The width $w$ of each sheet will be a function of its vertical position within the tank.
So, let's orient a vertical axis, with its origin at the bottom of the tank, as in the diagram:
We wish to find $w$ as a function of $y$. We should observe that we may state:
$$\displaystyle (y-r)^2+\left(\frac{w}{2} \right)^2=r^2$$
Hence:
$$\displaystyle w(y)=2\sqrt{r^2-(y-r)^2}$$
and so the volume of an arbitrary sheet is:
$$\displaystyle dV=\ell\cdot w(y)\,dy=2\ell\sqrt{r^2-(y-r)^2}\,dy$$
Next, we want to determine the weight $\omega$ of the arbitrary sheet. So let's define $\rho$ to be the weight density of the fluid, where:
$$\displaystyle \rho=\frac{\omega}{dV}\,\therefore\,\omega=\rho\,dV$$
Hence:
$$\displaystyle \omega=2\rho\ell\sqrt{r^2-(y-r)^2}\,dy$$
Next, we want to determine the distance $d$ the arbitrary sheet must be lifted. This is:
$$\displaystyle d=q+(2r-y)$$
Thus, using the fact that work is the product of the applied force and the distance through which this force is applied, we find the work done to lift the arbitrary sheet is:
$$\displaystyle dW=\omega d=(q+(2r-y))\left(2\rho\ell\sqrt{r^2-(y-r)^2}\,dy \right)$$
In order to utilize the given useful hint, we may arrange this as:
$$\displaystyle dW=\omega d=((q+r)-(y-r))\left(2\rho\ell\sqrt{r^2-(y-r)^2}\,dy \right)$$
Now, summing by integration from the bottom of the tank $y=0$ to the top $y=2r$, we may state:
$$\displaystyle W=2(q+r)\rho\ell\int_0^{2r}\sqrt{r^2-(y-r)^2}\,dy-2\rho\ell\int_0^{2r}(y-r)\sqrt{r^2-(y-r)^2}\,dy$$
Now, by geometry, we should observe that:
$$\displaystyle 2\int_0^{2r}\sqrt{r^2-(y-r)^2}\,dy=\pi r^2$$
Notice this is simply the area of a circle of radius $r$.
And we may also observe that the integral:
$$\displaystyle \int_0^{2r}(y-r)\sqrt{r^2-(y-r)^2}\,dy$$
may be rewritten using the substitution:
$$\displaystyle u=y-r\,\therefore\,du=dy$$
and we have:
$$\displaystyle \int_{-r}^{r} u\sqrt{r^2-u^2}\,du$$
Since the integrand is an odd function, and the interval of integration symmetric about the origin, we may state by the odd function rule:
$$\displaystyle \int_{-r}^{r} u\sqrt{r^2-u^2}\,du=0$$
Putting this all together, we have:
$$\displaystyle W=\pi(q+r)r^2\rho\ell$$
Now, plugging in the given data:
$$\displaystyle q=5\text{ ft}$$
$$\displaystyle r=\frac{3}{2}\text{ ft}$$
$$\displaystyle \rho=42\,\frac{\text{lb}}{\text{ft}^3}$$
$$\displaystyle \ell=4\text{ ft}$$
we find:
$$\displaystyle W=\pi\left(\left(5+\frac{3}{2} \right)\text{ ft} \right)\left(\frac{3}{2}\text{ ft} \right)^2\left(42\,\frac{\text{lb}}{\text{ft}^3} \right)\left(4\text{ ft} \right)=2457\pi\text{ ft}\cdot\text{lb}$$
|
2021-01-16 16:00:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8844001293182373, "perplexity": 405.5109417392819}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703506697.14/warc/CC-MAIN-20210116135004-20210116165004-00411.warc.gz"}
|
https://oneapi-src.github.io/oneDAL/daal/algorithms/normalization/min-max.html
|
# Min-max¶
Min-max normalization is an algorithm to linearly scale the observations by each feature (column) into the range $$[a, b]$$.
## Problem Statement¶
Given a set $$X$$ of $$n$$ feature vectors $$x_1 = (x_{11}, \ldots, x_{1p}), \ldots, x_n = (x_{n1}, \ldots, x_{np})$$ of dimension $$p$$, the problem is to compute the matrix $$Y = (y_{ij})_{n \times p}$$ where the $$j$$-th column $$(Y)_j = (y_{ij})_{i = 1, \ldots, n}$$ is obtained as a result of normalizing the column $$(X)_j = (x_{ij})_{i = 1, \ldots, n}$$ of the original matrix as:
$y_{ij} = a + \frac {x_{ij} - \min(j)}{\max(j) - \min(j)} (b-a),$
where:
$\min(j) = \min _{i = 1, \ldots, n} x_{ij},$
$\max(j) = \max _{i = 1, \ldots, n} x_{ij},$
$$a$$ and $$b$$ are the parameters of the algorithm.
## Batch Processing¶
### Algorithm Input¶
The min-max normalization algorithm accepts the input described below. Pass the Input ID as a parameter to the methods that provide input for your algorithm. For more details, see Algorithms.
Algorithm Input for Min-max (Batch Processing)
Input ID
Input
data
Pointer to the numeric table of size $$n \times p$$.
Note
This table can be an object of any class derived from NumericTable.
### Algorithm Parameters¶
The min-max normalization algorithm has the following parameters:
Algorithm Parameters for Min-max (Batch Processing)
Parameter
Default Value
Description
algorithmFPType
float
The floating-point type that the algorithm uses for intermediate computations. Can be float or double.
method
defaultDense
Performance-oriented computation method, the only method supported by the algorithm.
lowerBound
$$0.0$$
The lower bound of the range to which the normalization scales values of the features.
upperBound
$$1.0$$
The upper bound of the range to which the normalization scales values of the features.
moments
SharedPtr<low_order_moments::Batch<algorithmFPType, low_order_moments::defaultDense> >
Pointer to the low order moments algorithm that computes minimums and maximums to be used for min-max normalization with the defaultDense method. For more details, see Batch Processing for Moments of Low Order.
### Algorithm Output¶
The min-max normalization algorithm calculates the result described below. Pass the Result ID as a parameter to the methods that access the results of your algorithm. For more details, see Algorithms.
Algorithm Output for Min-max (Batch Processing)
Result ID
Result
normalizedData
Pointer to the $$n \times p$$ numeric table that stores the result of normalization.
Note
By default, the result is an object of the HomogenNumericTable class, but you can define the result as an object of any class derived from NumericTable except PackedTriangularMatrix, PackedSymmetricMatrix, and CSRNumericTable.
## Examples¶
Batch Processing:
|
2022-01-26 05:00:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6521952748298645, "perplexity": 1489.7029999692588}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304915.53/warc/CC-MAIN-20220126041016-20220126071016-00123.warc.gz"}
|
https://techwiki.lowellschools.com/how_to_change_dual_monitor_position_in_windows_7?rev=1522251650&do=diff
|
# techwiki.lowellschools.com
### Site Tools
how_to_change_dual_monitor_position_in_windows_7
# Differences
This shows you the differences between two versions of the page.
how_to_change_dual_monitor_position_in_windows_7 [2018/03/15 13:42]127.0.0.1 external edit how_to_change_dual_monitor_position_in_windows_7 [2018/03/28 11:40] (current)jasonw 2018/03/28 11:40 jasonw 2018/03/15 13:42 external edit 2018/03/28 11:40 jasonw 2018/03/15 13:42 external edit Line 1: Line 1: - ==== How to Change Dual Monitor Position in Windows 7 & 8 ==== + ==== How to Change Dual Monitor Position in Windows 7 & 8 ==== - \\ + + ---- + + \\ + * If you place the primary monitor on top, you will get a top to bottom orientation. - | **Step 1: Right-click in an open space on your desktop. Select the "Screen Resolution" option in the menu.**\\ \\ Step 2: To adjust the orientation of your monitor, just drag and drop the appropriate monitor and place it wherever you want. You could move it to the right, left, top or bottom position.\\ \\ Step 3: Once you have placed the monitor in the position you want, click on the Apply button to apply the changes. It you are satisfied with the setup, click on the Ok button.\\ \\ A Few Notes:\\ - * If you place the primary monitor at the bottom, you will get a bottom to top orientation. - * If you place the primary monitor on top, you will get a top to bottom orientation. * Left will give you a left to right, and right will give you the right to left orientation. | * Left will give you a left to right, and right will give you the right to left orientation. | - {{::screenresolution.png?nolink&597}}\\ + + {{:screenresolution.png?nolink&597}} + + {{tag>projector windows screen monitor primary}} +
|
2019-01-24 01:52:12
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9804874658584595, "perplexity": 1984.7912035012293}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584445118.99/warc/CC-MAIN-20190124014810-20190124040810-00418.warc.gz"}
|
https://blog.paperspace.com/neural-style-transfer/
|
# Neural Style Transfer With TensorFlow
Follow this tutorial to learn how to use TensorFlow to impart stylistic characteristics of one photo onto another on Gradient!
a year ago • 15 min read
The styling of images to create modern and unique art has always been a perspective of interest for a few centuries. The type of art one was able to generate with their skills and mixture of artistic styles bound an intrigue amount audiences and enthusiasts of the field and other business contractors. The creative works of art such as paintings, sculptures, model designing, architecture designing, and other works of art were sold for high-profit margins at auctions since the renaissance. However, as of recently, Artificial Intelligence has emerged to be one of the unique ways of image styling, designing, and working with art.
There is a wide range of unique tasks in the field of art that AI can successfully accomplish. In our previous articles, we have looked into the working procedures of Generative Adversarial Networks (GANs). We have explored the capabilities of these competing neural networks to engage against each other while consistently improving to produce high-quality results. An example is the face generation with GANs, where we are creating realistic faces of humans that have never existed. You can check out this article from the following link.
In this article, we will cover the introduction to some of the underlying concepts behind the approach. We will then proceed to breakdown neural style transfer along with the basic conceptual understanding of this algorithm. We will develop a simple project using this neural style transfer method. For the development of this project, we will discuss two methods. In the first method, we will develop the entire architectural build of neural style transfer from scratch. In the second method, we will use a pre-trained model that is available in TensorFlow Hub to obtained the desired results.
## Introduction:
One of the most fun applications of artificial intelligence recently is the art of neural style transfer. We can generate unique artwork by mixing up two or more images together to create something new and creative. There has been tremendous improvement in the fields of face recognition and object detection, where techniques such as one-shot learning are employed to obtain the best results with ease. However, until recently, there wasn't a lot of attention paid to the artistic innovations that were possible with neural networks. In 2015, with the introduction of "A Neural Algorithm of Artistic Style" research paper, the scene of artworks with AI and deep learning blew up.
We are introduced to a class of deep neural networks that can effectively compute most image processing tasks in the convolutional neural networks. They consist of layers of small computational units with which they can process visual information hierarchically in a feed-forward manner. Each of the layers of a convolutional neural network contains several of these computational units where they have a collection of image filters, each of which extracts a certain feature from the input image. We will discuss more on the methodology of its working in the next section of this article.
For following along with the rest of the article, working with deep learning frameworks such as TensorFlow and Keras is an essential requirement. If the viewer doesn't have detailed information on any of these libraries, I would recommend checking them out from two of my previous blogs that cover these entire topics in enormous detail. The viewers can check out the TensorFlow article from this link and the Keras blog from the following link. In the next section, we will proceed to understand the methodology of the working of the neural style transfer model and most of the significant concepts related to it.
## Understanding Neural Style Transfer:
For the generation of a neural style transfer image, we usually have three main essential components. One of the components is the primary image, which functions as the "Content" image, upon which we might add the modification. It is the base upon which we will add the desired artwork. The modification picture is the second component of the neural style transfer model, referred to as the "Style" image. The Style is the flavor or variation that you can add to the Content, leading to the creation of a new picture. This new image formed by the neural style transfer algorithm utilizing the components of the "Content" and "Style" is called the "Generated" image.
Each of these three major components can be denoted with their starting letters. The "Content" (c) is the base upon which we will add the modification of art, the "Style" (s) refers to the addition of a new design to the primary image. And finally, we have the "Generated" (g) image, which is the resulting component of the mixture of the "Content" (c) and "Style" (s). In the above image representation, we can notice that the buildings act as the "Content" (c), whereas the other image of Van Gogh Starry Night is "Style" (s) that we will combine together to create a new combination of "Generated" (g) image.
The approach we utilize for solving this problem is with the help of deep conv nets, especially the VGG-19 transfer learning model. There will be three parameters that will be sent through the VGG-19 model, namely the original image, the content image, and the generated image. The generated image is initially initialized as the noise. After the training process, we want this generated image to become similar to the combination of the content and style pictures. While passing our input to the VGG-19 layers, we will ensure that we remove the output and dense layers so that it is a fully connected dense conv net containing only convolutional layers and pooling layers.
The output of the dense conv net is passed through the total loss function. The above image shows an accurate representation of this loss function. The total loss function is equal to the sum of the loss of the content image and the style image, where the alpha and beta represent the hyperparameters. For a more detailed analysis and understanding of neural style transfer, I would recommend looking into the following research paper.
## Developing Neural Style Transfer Project From Scratch:
In this section of the article, we will uncover one of the first methods to construct the neural style transfer project. We will perform the actions for the construction of this project from scratch. We will import the essential libraries, perform the image manipulations, create the total loss function, develop the convolutional neural network, and create the training loop.
The references we will utilize for the development of this project are the official websites of Keras and TensorFlow. For the style image, we will use the Van Gogh Starry Night image for both sections of constructing this project. The below image shows the representation of the Starry Night style. You can download and use any content and style image of your choice.
### Importing the essential libraries:
For getting started with the project, we will import all the essential libraries required for the computation of this project. We will import the TensorFlow and Keras deep learning frameworks for building the neural style transfer model. We will import the VGG-19 transfer learning model for the feature extraction procedure. To learn more about transfer learning, I would recommend checking out the following links for my previous articles - "A Complete Intuitive Guide To Transfer Learning (Part 1)" and "A Complete Practical Guide to Transfer Learning (Part 2)." We will also import the numpy library for performing the numerical operations.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.applications import vgg19
import numpy as np
Once we finish successfully importing the essential libraries, we can proceed to Define the required parameters. We will set the paths for the three paramount components, namely the path to the content, style, and generated images. All these three parameters need to be passed through our deep convolutional network for achieving the desired result. We will also set some of the hyperparameters, such as the content weight and style weight. Finally, we will also set some of the dimension requirements for the generated images. Below is the code snippet for performing the following actions.
base_image_path = keras.utils.get_file("paris.jpg", "https://i.imgur.com/F28w3Ac.jpg")
style_reference_image_path = keras.utils.get_file("starry_night.jpg", "https://i.imgur.com/9ooB60I.jpg")
result_prefix = "paris_generated"
# Weights of the different loss components
total_variation_weight = 1e-6
style_weight = 1e-6
content_weight = 2.5e-8
# Dimensions of the generated picture.
img_nrows = 400
img_ncols = int(width * img_nrows / height)
### Image Manipulations:
The next step after importing the required libraries and image paths is to ensure to define some functions for pre-processing the images accordingly. We will construct two functions. The first function is to pre-process the images and load them accordingly with the help of the VGG-19 transfer learning model. We will convert the images into a tensor format capable of computing the required operations as desired. We will also build a function to recreate the pre-processed image once all the required computations have been performed as desired. Below is the code snippet for performing both these actions.
def preprocess_image(image_path):
# Util function to open, resize and format pictures into appropriate tensors
image_path, target_size=(img_nrows, img_ncols)
)
img = keras.preprocessing.image.img_to_array(img)
img = np.expand_dims(img, axis=0)
img = vgg19.preprocess_input(img)
return tf.convert_to_tensor(img)
def deprocess_image(x):
# Util function to convert a tensor into a valid image
x = x.reshape((img_nrows, img_ncols, 3))
# Remove zero-center by mean pixel
x[:, :, 0] += 103.939
x[:, :, 1] += 116.779
x[:, :, 2] += 123.68
# 'BGR'->'RGB'
x = x[:, :, ::-1]
x = np.clip(x, 0, 255).astype("uint8")
return x
### Creating the total loss function:
The next step is to create the total loss function, which will be a combination of the content loss and the style loss. The significance of the following loss function is as defined in the previous section. In the below code snippet, we are defining four functions that will be paramount to compute the overall loss. The gram matrix function is used for computing the style loss.
The style loss function keeps the generated image close to the local textures of the style reference image, while the content loss function keeps the high-level representation of the generated image close to that of the base image. The total loss function is used to keep the generated locally coherent, which means we want to keep the loss in a logically consistent way.
def gram_matrix(x):
# The gram matrix of an image tensor (feature-wise outer product)
x = tf.transpose(x, (2, 0, 1))
features = tf.reshape(x, (tf.shape(x)[0], -1))
gram = tf.matmul(features, tf.transpose(features))
return gram
def style_loss(style, combination):
S = gram_matrix(style)
C = gram_matrix(combination)
channels = 3
size = img_nrows * img_ncols
return tf.reduce_sum(tf.square(S - C)) / (4.0 * (channels ** 2) * (size ** 2))
def content_loss(base, combination):
return tf.reduce_sum(tf.square(combination - base))
def total_variation_loss(x):
a = tf.square(x[:, : img_nrows - 1, : img_ncols - 1, :] - x[:, 1:, : img_ncols - 1, :])
b = tf.square(x[:, : img_nrows - 1, : img_ncols - 1, :] - x[:, : img_nrows - 1, 1:, :])
return tf.reduce_sum(tf.pow(a + b, 1.25))
### Developing the deep convolutional network:
Once we finish defining our total loss function accordingly, we can proceed to create and develop the entire architecture of the deep convolutional network required for completing the task of neural style transfer. Similar to the architecture discussed in the previous section, we will utilize the VGG-19 architecture, which will contain the five essential convolutional blocks required for this project.
The fully connected layers are ignored and discarded in this transfer learning architecture. We will utilize the deep convolutional network with only convolution layers and pooling layers. Once the features are extracted, the output of this network will be passed with the appropriate loss function, which is a combination of the content and the style loss.
# Build a VGG19 model loaded with pre-trained ImageNet weights
model = vgg19.VGG19(weights="imagenet", include_top=False)
# Get the symbolic outputs of each "key" layer (we gave them unique names).
outputs_dict = dict([(layer.name, layer.output) for layer in model.layers])
# Set up a model that returns the activation values for every layer in
# VGG19 (as a dict).
feature_extractor = keras.Model(inputs=model.inputs, outputs=outputs_dict)
# List of layers to use for the style loss.
style_layer_names = [
"block1_conv1",
"block2_conv1",
"block3_conv1",
"block4_conv1",
"block5_conv1",
]
# The layer to use for the content loss.
content_layer_name = "block5_conv2"
def compute_loss(combination_image, base_image, style_reference_image):
input_tensor = tf.concat(
[base_image, style_reference_image, combination_image], axis=0
)
features = feature_extractor(input_tensor)
# Initialize the loss
loss = tf.zeros(shape=())
layer_features = features[content_layer_name]
base_image_features = layer_features[0, :, :, :]
combination_features = layer_features[2, :, :, :]
loss = loss + content_weight * content_loss(
base_image_features, combination_features
)
for layer_name in style_layer_names:
layer_features = features[layer_name]
style_reference_features = layer_features[1, :, :, :]
combination_features = layer_features[2, :, :, :]
sl = style_loss(style_reference_features, combination_features)
loss += (style_weight / len(style_layer_names)) * sl
loss += total_variation_weight * total_variation_loss(combination_image)
return loss
### Creating the training loop:
The final step we will perform in part-1 of the development of neural style transfer models from scratch is the creation of the training loop. The first step involved in building this section is by creating the decorator training loop. Once we create the decorator function, our task will be to define the optimizer. We will use the stochastic gradient descent optimizer with a learning rate and momentum for this project.
We will then proceed to pre-process and load all the three required images for the training process. Finally, we will begin to train the loop for around 2000 iterations. You can choose to train the following neural style transfer for more epochs and iterations if you want. We will also ensure that once the training process is complete, we will proceed to recreate the generated image using the de-process image function that we have previously defined. You can run the entire project of the Gradient Platform on Paperspace.
Bring this project to life
@tf.function
loss = compute_loss(combination_image, base_image, style_reference_image)
optimizer = keras.optimizers.SGD(
keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate=100.0, decay_steps=100, decay_rate=0.96
)
)
base_image = preprocess_image(base_image_path)
style_reference_image = preprocess_image(style_reference_image_path)
combination_image = tf.Variable(preprocess_image(base_image_path))
iterations = 2000
for i in range(1, iterations + 1):
combination_image, base_image, style_reference_image
)
if i % 100 == 0:
print("Iteration %d: loss=%.2f" % (i, loss))
img = deprocess_image(combination_image.numpy())
fname = result_prefix + "_at_iteration_%d.png" % i
keras.preprocessing.image.save_img(fname, img)
### Output:
Iteration 100: loss=11024.51
Iteration 200: loss=8518.99
.
.
.
Iteration 1900: loss=5496.66
Iteration 2000: loss=5468.01
Once the training is completed, ensure that you check the results. You can choose to run the following program for an increased number of iterations and epochs to reduce the loss and generate an even better result.
In the next section of this article, we will cover how to develop the same project more directly with lesser code requirements using TensorFlow Hub. However, it is always best to understand the detailed working of most architectural neural network builds to gain a more intuitive understanding.
## Developing Neural Style Transfer with TensorFlow Hub:
Now that we have understood how to construct neural style transfer models from scratch with the help of TensorFlow and Keras, let us look into one of the simpler methods to compute such a project. I would recommend learning the construction of the neural style transfer algorithm from scratch before proceeding to the TensorFlow-Hub pre-trained model method. For this experiment, we will utilize the below picture of a bridge to act as our content image, while we will make use of the Van Gogh Starry Night image to perform as the style image to generate a new stylized generated image.
Before we proceed with constructing our project, let us understand what TensorFlow Hub exactly is. TensorFlow Hub consists of a bunch of pre-trained deep learning models for a variety of tasks such as BERT, Faster R-CNN, and so much more that we can reuse over and over again to generate results quickly for particular purposes. For the available models, you can fine-tune them accordingly and deploy them anywhere for performing the specific task. For further information on this topic, check out the official landing page of TensorFlow Hub, where you can construct projects on natural language processing, object detection, style transfer, and so much more.
### Importing the essential libraries:
The first step is to import all the essential libraries that we will utilize for the construction of this project. We will load the TensorFlow deep learning framework along with the TensorFlow Hub for accessing the pre-trained neural style transfer model. We will also import the matplotlib library to visualize the output of the generated images. You can also visualize the content or style images accordingly. The numpy library will help us to squeeze the dimensions of the generated images so that the matplotlib library can access the generated picture. Finally, we will import the computer vision cv2 library for exporting and saving the generated image if required.
import tensorflow as tf
import tensorflow_hub as hub
import matplotlib.pyplot as plt
import numpy as np
import cv2
After importing all the required libraries, we can also access the pre-trained neural style transfer model from TensorFlow Hub. The model link variable represents the link to the TensorFlow Hub website that contains the path to the stored and trained neural style transfer model. The NST model variable will load the respective link with which we can access and perform the action of applying the neural style transfer algorithm directly without too many coding requirements. Below is the code snippet for accessing the pre-trained model.
# Access the pre-trained model from TensorFlow-Hub
NST_model = hub.load(model_link)
### Passing and interpreting the data:
In the next code snippet, we will create a function to obtain the data, load them, and operate on them accordingly. The following function will read the image path to the saved images in the respective directory. We will then detect and decode the particular image, convert it into our desired data type, and expand the dimension. Finally, the function returns the operated image. We will use the constructed function for loading, accessing, and operating on both the content and style images.
# Function to load and operate on the content and style images
def get_data(img_path):
img = tf.image.decode_image(img, channels=3)
img = tf.image.convert_image_dtype(img, tf.float32)
img = img[tf.newaxis, :]
return img
Let us load the content and style images in the next code block. I have labeled the above image as 'Bridge' stored in the .jfif format, which will act as the content image, and Van Gogh Starry Night as stored as 'Style Image' in the .jpg format will act as the style image. We will use these two entities to create a newly generated image with the neural style transfer model.
content_image = get_data('Bridge.jfif')
style_image = get_data('Style Image.jpg')
### Obtaining results through the loaded model:
Finally, we can proceed to generate the new image that will be performed by the loaded pre-trained neural style transfer model. We need to pass the two primary parameters to the model to evaluate them and generate a resulting image. We will pass the content image (the bridge) as the first parameter and the style image (Van Gogh Starry Night) as the second parameter. We will store the resulting picture in the generated image variable. Below is the code block that will perform the following action.
generated_image = NST_model(tf.constant(content_image), tf.constant(style_image))[0]
You can make use of the matplotlib library to visualize the generated image and the numpy library to squeeze the dimensions for visualization purposes. If you want to save the image, you can make use of the computer vision cv2 library and write the generated image into a directory you want to save with a file extension of either .png, .jpg, .jpeg, or any other similar format. Below is the generated image that I was able to achieve upon running the pre-trained neural style transfer model from TensorFlow Hub. It is recommended to try out numerous combinations with different content images and style images for generating unique pictures and artistic works.
## Conclusion:
In this article, we covered most of the essential concepts required for understanding the neural style transfer algorithm. We understood the basic conceptual knowledge of how exactly these neural networks work. After gaining a detailed description of this concept, we looked into two methods of constructing a project with neural style transfer. In the first method, we constructed the entire architecture from scratch and used the model obtained to evaluate and generate the modified images. In the second method, we made use of a pre-trained model from TensorFlow Hub for generating a combination of two pictures to create a newly generated image.
These sets of deep learning networks, which work with the help of two primary constituents, namely the content picture and the style picture, when combined together produce a generated image. We can generate a bunch of unique stylish artworks with these neural networks. It is highly recommended that the viewers experiment with different types of Content and Styles accordingly to generate new variations. There are numerous possibilities of interpretations for these neural network models, and you could end up generating something that is extremely aesthetically soothing and visually pleasing.
In future articles, we will cover topics such as WGANs, transformers, neural networks from scratch, reinforcement learning, and so much more. Until then, I would highly recommend experimenting with the neural style transfer algorithm and keep creating your own artistic generations and keep exploring!
|
2023-02-07 08:17:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27475282549858093, "perplexity": 1282.320078135724}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500392.45/warc/CC-MAIN-20230207071302-20230207101302-00080.warc.gz"}
|
http://mathematica.stackexchange.com/tags/trigonometry/new
|
# Tag Info
0
@user3419717 as the comments suggest this question seems (i) like homework (ii) does not really require Mathematica as once you observe to obvious solution considerations of periodicity produce all the others (iii) if your intention is to use Mathematica to solve this problem then some code attempts (consistent with rules in (i) is expected. I could not ...
0
Solve[Sin[2 x] == Cos[36/180] && 0 < x < 2 Pi, x] 180/Pi %[[All, 1, 2]] // Simplify
2
Usually such issue is caused by Automatic Range and PlotRange->All would be a fix. It's not the case though. Szabolcs has noticed: That region of the contour plot is white because the expression insolation[phi, 23.5 Cos[t]] evaluates to non-real complex numbers at those t, phi points. I tried this by right-clicking the plot, and using GetCoordinates ...
2
After patching your data and fixing/adjusting code (removed unneeded Total, upped samples): theta = data[[All, 1]]; w = data[[All, 2]]; num = Dimensions[theta][[1]]; n = 150; m = Table[ Flatten[Table[{Sin[i1*theta[[i2]]], 0}, {i1, 1, n}]] + Flatten[Table[{0, Cos[i1*theta[[i2]]]}, {i1, 1, n}]], {i2, 1, num}]; x = LeastSquares[m, w]; f[t_] := ...
1
You say your data is a list of $(X,Y)$ so it looks like this: ListLinePlot[data, Frame -> True, FrameLabel -> {"X", "Y"}] Far from been a random coil where the definition of persistent length makes sense. Let see, we want to calculate the average $\cos(\theta) = \hat{v_1} \cdot \hat{v_1}$, as a function of the contour distance, so I use Dot[v1, ...
3
The issue we encounter here is an apparent incompleteness of the recent updates in the system, we should remember that Solve has been updated in the recent versions of Mathematica and although documentation pages say "last modified in 8", one can distinguish various different issues between ver.8 and ver.9, it's just a state of art. In ver. 8 we get: ...
0
Given: expr = Cos[2 B g t] Sin[u]^2 - I Sin[2 B g t] Sin[u]^2 ... one approach is: FullSimplify[expr, ExcludedForms -> {Cos[_], Sin[_]}] (Cos[2 B g t] - I Sin[2 B g t]) Sin[u]^2 Another approach worth exploring is to use a custom ComplexityFunction, as per: FF[ee_] := 1000 Count[ee, _Exp, {0, Infinity}] + LeafCount[ee] FullSimplify[expr, ...
Top 50 recent answers are included
|
2014-03-15 14:59:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19866015017032623, "perplexity": 4695.6893128299935}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678698356/warc/CC-MAIN-20140313024458-00082-ip-10-183-142-35.ec2.internal.warc.gz"}
|
https://cseducators.stackexchange.com/tags/induction/hot
|
# Tag Info
8
I think induction should be taught first, and then loop invariants, since you usually need induction to prove that the loop invariant is really an invariant, and also because you need induction for other purposes than just proving that a loop invariant is really an invariant. In fact, in most curricula for Computer Science at university will have some math ...
6
Let's start with the term "Loop Invariance". It is a property of a loop that is true before and after each iteration, thus in-variant, non-changing. So then, what is the purpose of the loop invariance in proving algorithm correctness? That is, it is a predicate about what the loop is supposed to do. Thus with proof by induction on this predicate shows the ...
4
To give an explanation of this requires a problem in which to frame the discussion. I propose the Dutch National Flag problem, likely first proposed by Dijkstra, but I'm not positive. I'll pose the problem first and discuss the solution using invariants. The Problem Taken from The Science of Programming. You have an array of n elements. Each element ...
4
I have been teaching theoretical CS to math teachers (don't ask why...). I assumed that they were comfortable with numerical induction and therefore the transition to structural induction (on a computation as in proving invariants) wouldn't be difficult. I was wrong on both counts. It seemed to me that this is partly because they encountered induction in a ...
4
Since you are doing this in the context of code correctness, one incentive is to tell student that in some cases, you have to prove code, not test it. Anything critical (pick up your example, nuclear plant command system, plane autopilot, …) requires the highest level of confidence. Tests are always going to be partial. They will test 1, 10, 1000 possible ...
3
I try to introduce induction not only as method for proving an algorithm, but as a method for developing algorithms as well. The idea is from Introduction to Algorithms by Udi Manber. Sometimes the way of thinking solve the problem for n=1 think about how you can calculate (n+1) once you know n helps them to develop solutions on their own. And it helps ...
3
One big struggle is getting CS students to care about induction. Proving correctness doesn't really count as motivation, when "passing the tests" seems so much approachable and the extra benefits from proving correctness just don't seem worth the extra effort. On the other hand, if induction could be practically useful in designing algorithms in the first ...
3
Firstly, I believe that if the thought process is clear to you, then going through that process with your students, out loud, clearly and slowly, can help them understand what it is that you are trying to convey. Go through that process for a number of various examples, which show how you decide on a loop invariant in the varying situations (yes, that pun ...
2
A great resource for the instructor to learn the max about invariants is to go to the master. David Gries, The Science of Programming It isn't a book for novice learners, though. But understanding what is in this book will help you a lot in teaching programming via invariants. He reveals all. He and his son have an undergrad text book also, that might ...
2
At the CS1 level that I teach, I focus a lot on the role and the precise meaning of variables and encourage comments at the point of declaration that attempt to make these clear, especially variables that are involved in loops. Since the meaning of a variable changes as the loop body is executed, we "agree" to use the entry point as the standard point of ...
2
Any book on elementary number theory will likely have quite a few recurrences as examples and in the exercises of the early chapters. For example, the sum of the first n odd integers is $n^2$. As a recurrence this is just $n^2$ = $(n-1)^2$ + $2*n - 1$. Or possibly it will be shown as $a_n$ = $a_{n-1}$ + $2*n -1$, with $a_1$ = 1. Some books on discrete math ...
1
This isn't really an answer, but I hope will provide a bit of insight for the OP into thinking about and presenting the problem. I think that recursion (in computing) and induction (in mathematics) are complementary, not identical ideas. The interplay between the two can give insight. In mathematical induction, we normally work outward from the base case....
1
I'm using many examples involving trees (height of a tree, number of leaves, ...) or graphs (number of edges in a fully connected graph, ...). The advantage is that if I force students to draw the first three or four steps, they are usually capable of deriving the equation by themselves.
1
Do it in code, so that pupils can play with it. This code will throw an exception if the invariant is not met. Show how this can be useful in finding bugs in the program. The book a touch of class by Bertrand Meyer, is very good for teaching Object Oriented Programming and has examples (can't remember how many, but it is where I leant it). loop_example is ...
Only top voted, non community-wiki answers of a minimum length are eligible
|
2020-04-05 01:58:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6245157122612, "perplexity": 514.3700973538844}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370526982.53/warc/CC-MAIN-20200404231315-20200405021315-00488.warc.gz"}
|
https://support.bioconductor.org/p/133154/
|
Cell cycle regression for scRNA-seq data
1
0
Entering edit mode
ATpoint ★ 1.6k
@atpoint-13662
Last seen 2 hours ago
Germany
My scRNA-seq data (10X, murine, hematopoietic cells) have the problem that some clusters are separated almost exclusively by cell cycle which is not interesting for the scenario we are woring with and only inflates the number of clusters. This can be shown both with PCA run on cell cycle genes (separation there is obvious in PC1 vs PC2 for some clusters) plus with cyclone cell cycle assignment as in the book chapter 16.4. Therefore I would like to remove the effect e.g. as in the book chapter 16.5. Removal of the cell cycle genes from the selected features is not sufficient and does not really make any difference, therefore looking for a more aggressive strategy. Following chapter 16.4 I am not clear on the exact workflow from there on. Do we run regressBatches on the original logcounts and then repeat the feature selection, integration and clustering procedure? Also, is there something similar in the Bioconductor world as in the last chapter of the Seurat vignette where not the cell cycle effect itself but the difference between the G2M and S phase scores is regressed?
OSCA batchelor cell cycle regression • 1.2k views
2
Entering edit mode
Aaron Lun ★ 27k
@alun
Last seen 20 hours ago
The city by the bay
Do we run regressBatches on the original logcounts and then repeat the feature selection, integration and clustering procedure?
The latest version of the chapter has a bit more information available. Briefly, the regression just applies to the log-values you feed into the PCA. Clustering picks up from the PCs, so it doesn't need extra regression. And feature selection can use block= to ensure that cell cycle differences do not drive the detection of HVGs.
I take it you've read and understood my comments on the potential problems from using regression, so I won't repeat them here. I will just say that I would still prefer gene removal as this is more predictable and less liable to introduce artifacts - see the new version of the chapter for a more aggressive empirical version of this approach.
Also, is there something similar in the Bioconductor world as in the last chapter of the Seurat vignette where not the cell cycle effect itself but the difference between the G2M and S phase scores is regressed?
Sure, if you're got a covariate, just make a design matrix and give it to design= in regressBatches(). (Similarly, you can give it to design= in functions like findMarkers().) You can put anything in there, e.g., the cyclone() phase scores or the SingleR() correlations. However, I have come to wonder whether this hurts more than it helps; the magnitude of the scores is probably even more sensitive to confounding differences in the biological state.
0
Entering edit mode
Thanks Aaron for the extensive comment, very helpful as usual!
1
Entering edit mode
I just noticed that setting design= in regressBatches() actually also requires you to give it something like batch=integer(ncol(sce)) to keep the function happy. (It doesn't matter what the exact value is, you just had to give it something to let it move on to the next step.) I've updated the function in BioC-devel so that it no longer needs batch= if you give it design=.
|
2022-10-06 01:52:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2993025481700897, "perplexity": 1605.9965429614344}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00614.warc.gz"}
|
https://www.physicsforums.com/threads/term-structure-isomorphic-to-the-usual-model-structure-of-number-theory.545141/
|
# Term structure isomorphic to the usual model/structure of number theory
Hello, suppose I have a set of sentences Ʃ from the language of number theory ( the usual one ). Then, I extend this to a maximally consistent set of sentences Ʃ' and create a henkin term structure for it ( i.e. as in the popular proof of the completeness theorem ). Can it be true that this resulting structure is isomorphic to the standard structure/model of number theory? Usually, it isn't enough for two structures to satisfy the same sentences for them to be isomorphic, so I am not sure..
thanks
## Answers and Replies
AKG
Science Advisor
Homework Helper
Although two structures satisfying the same sentences (i.e. being elementarily equivalent) is not sufficient for them to be isomorphic, it is necessary, thus to answer your question: Can the Henkin structure associated to $\Sigma'$ be isomorphic to $\mathbb{N}$? it would be necessary for $\Sigma' = \mathrm{Th}(\mathbb{N})$, the full theory of $\mathbb{N}$.
In other words, the question is whether the Henkin structure associated to $\mathrm{Th}(\mathbb{N})$ can be (isomorphic to) $\mathbb{N}$. The answer is Yes. In the Henkin construction, you add $\omega$ constants to your language and $\omega$ axioms to your theory. You then repeat this $\omega$ times. You then extend to a complete theory. And then you construct the model as the set of variable free terms in your new language, modulo being provably equivalent by your new theory. What you need to do is make sure that when extending to a complete theory, for every Henkin constant c there is some "SS...S0" such that "c = SS...S0" is added to your theory.
This is easy: Look a the first set of Henkin axioms you added, they're of the form $\exists x \phi (x) \rightarrow \phi (c)$. If $\exists x\phi (x)$ is true in $\mathbb{N}$, say $n$ is the minimal witness, then add $c = \bar{n}$ to your theory in the final extend-to-a-complete-theory stage. Here $\bar{n}$ is $n$ S's, followed by a 0 symbol. If $\exists x\phi(x)$ doesn't hold, add $c = 0$. Now deal with the Henkin axioms added in the second iteration in the same way (by looking at whether the existential sentence is true in $\mathbb{N}$), interpreting any occurrence of the first set of Henkin constants according to the interpretation we just fixed above. Do this for all the Henkin constants/axioms, and then complete the theory as usual.
It's not hard to see that this will be isomorphic to $\mathbb{N}$.
Thanks for the reply, I appreciate it
|
2022-05-23 04:48:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9277574419975281, "perplexity": 292.64714239645116}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662555558.23/warc/CC-MAIN-20220523041156-20220523071156-00462.warc.gz"}
|
https://www.tutorialspoint.com/find-the-sum-of-maximum-difference-possible-from-all-subset-of-a-given-array-in-python
|
Find the sum of maximum difference possible from all subset of a given array in Python
PythonServer Side ProgrammingProgramming
Suppose we have an array A of n values (elements may not be distinct). We have to find the sum of maximum difference possible from all subsets of given array. Now consider max(s) denotes the maximum value in any subset, and min(s) denotes the minimum value in the set. We have to find the sum of max(s)-min(s) for all possible subsets.
So, if the input is like A = [1, 3, 4], then the output will be 9.
To solve this, we will follow these steps −
• n := size of A
• sort the list A
• sum_min := 0, sum_max := 0
• for i in range 0 to n, do
• sum_max := 2 * sum_max + A[n-1-i]
• sum_max := sum_max mod N
• sum_min := 2 * sum_min + A[i]
• sum_min := sum_min mod N
• return(sum_max - sum_min + N) mod N
Example
Let us see the following implementation to get better understanding −
Live Demo
N = 1000000007
def get_max_min_diff(A):
n = len(A)
A.sort()
sum_min = 0
sum_max = 0
for i in range(0,n):
sum_max = 2 * sum_max + A[n-1-i]
sum_max %= N
sum_min = 2 * sum_min + A[i]
sum_min %= N
return (sum_max - sum_min + N) % N
A = [1, 3, 4]
print(get_max_min_diff(A))
Input
[1, 3, 4]
Output
9
Published on 27-Aug-2020 12:20:39
|
2022-01-18 15:58:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4907965660095215, "perplexity": 3602.079275249534}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300934.87/warc/CC-MAIN-20220118152809-20220118182809-00449.warc.gz"}
|
https://web2.0calc.com/questions/continue
|
+0
# continue
0
56
2
for the problem: Convert 35 from base 10 to base 2.
35/2=17r1
17/2=8r1
8/2=4r0
4/2=2r0
2/2=1r0
1/2=0r1
what do i do know?
Jun 26, 2019
#1
+8406
+2
35 / 2 = 17 r 1 17 / 2 = 8 r 1 8 / 2 = 4 r 0 4 / 2 = 2 r 0 2 / 2 = 1 r 0 1 / 2 = 0 r 1 ___ ____ __
Then read the remainders from bottom to top to get 100011
So 3510 = 1000112
Jun 26, 2019
#2
+22523
+3
Convert 35 from base 10 to base 2
$$\begin{array}{|rclcl|} \hline 35 &=& 2\cdot {\color{blue}17} &+& \color{red} 1 \\ {\color{blue}17} &=& 2\cdot {\color{blue}8} &+& \color{red} 1 \\ {\color{blue}8} &=& 2\cdot {\color{blue}4} &+& \color{red} 0 \\ {\color{blue}4} &=& 2\cdot {\color{blue}2} &+& \color{red} 0 \\ {\color{blue}2} &=& 2\cdot {\color{blue}1} &+& \color{red} 0 \\ {\color{blue}1} &=& 2\cdot {\color{blue}0} &+& \color{red} 1 \\ \hline \end{array}$$
$$\mathbf{35_{10} = {\color{red}100011}_2}$$
Jun 26, 2019
|
2019-07-18 08:17:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.429222971200943, "perplexity": 711.2774335844356}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525524.12/warc/CC-MAIN-20190718063305-20190718085305-00016.warc.gz"}
|
https://www.physicsforums.com/threads/pn-junction-to-reach-thermal-equilibrium.864516/
|
# I Pn junction to reach thermal equilibrium
Tags:
1. Mar 30, 2016
### EmilyRuck
Hello!
Some of the processes caused by a pn junction are not clear for me. Just after the contact between the p and the n region, a migration of charges happens in a semiconductor junction in order to reach an equilibrium condition. A valence band and a conduction band are present in both regions.
Initially, the p-region valence band is partially filled with holes and the conduction band is empty. The n-region conduction band is partially filled with electrons and the valence band is full.
After the contact:
1) Electrons migrate from the conduction band in n-region to the conduction band in the p-region and then recombine with holes by decreasing their energy?
2) Holes migrate from the valence band in the p-region to the valence band in the n-region and then the remaining electrons in the n-region conduction band recombine with them?
If 1 and 2 are true, how can electrons spontaneously go from a conduction band with a lower energy to a conduction band with a higher energy? (With reference to this figure).
And the recombination phenomena (electrons that drop from conduction band to valence band, emitting a photon/phonon depending on the material) present in both the p and n regions?
Thank you,
Emily
2. Apr 4, 2016
### Henryk
TRUE
TRUE
Note that electron has a negative charge and when they migrate to the p-side they charge it negatively. The same is for the holes migrating to the n-side and you have a double charged layer at the junction that results in electrostatic potential. The figure you quoted shows 'band bending' at the junction, the 'bending' is the result of adding electrostatic energy to the energy of a bulk semiconductor.
The force driving electrons up the potential barrier is thermal energy. In typical semiconductors, the electrons obey Boltzmann statistics, that is probability that a state is occupied is proportional to $p(E) = exp( - \frac E {k_BT})$ and there is a non-zero probability than an electron has enough energy to overcome the potential barrier at the junction and diffuse there.
In fact, the figure you quoted shows the junction under the equilibrium condition where the drift and diffusion processes cancel each other exactly.
Add an external voltage to the junction in either direction and you start having a net current flow.
3. Apr 7, 2016
### EmilyRuck
Thank you for your clarifications. But in particular during the transient just after the contact between the n region and the p region, how can electrons increase their energy and move to the conduction band of the p side?
It is not for thermal energy. May the charge concentration gradient be the reason?
4. Apr 7, 2016
### Henryk
Real junctions are made by controlled doping of acceptor and donor atoms. But let's assume you have n and p type pieces and bring them into contact. At the moment of contact, there is no potential barrier!! The conduction and valence bands of both pieces have identical energies ! As the electrons migrate to the p side they leave unbalanced charge of donor atoms. The migrating holes leave the unbalance charge of the acceptor atoms. This double charge layer creates the potential barrier across the junction. The more carriers migrate the higher the barrier until an equilibrium is reached.
In any case, it is the thermal energy of electrons that let's them migrate against a potential barrier, regardless of its height.
I've discussed the p-n junction before, you might want to take a look
https://www.physicsforums.com/threa...rward-biased-pn-junction.854881/#post-5365418
5. Apr 15, 2016
### toutiao
The confusing part for me is this: during this migration of electron from N to P region,
1. Is the electron a Bloch wave or a wave packet?
2. If it is the WAVE PACKET that moves from N to P region, where does the interference waves (that makes it a wave packet) comes from?
3. How to picture this thermal motion of electron at first place? According to Bloch wave function, the electron should really be "everywhere within the solid" and it should not be moving around like molecules in Brownian motions.
6. Apr 15, 2016
### Henryk
Tautiao,
Interesting questions.
1. Bloch function describe quantum states extending everywhere within the solid. However, they are stationary states, they would correspond to actual electron states if electrons would stay in them for ever.
In real world, electrons change states due to things like interaction with other electrons, phonons, external fields, etc.
Bloch functions form a basis and 'real' electronic states can be constructed as a linear combination of Bloch functions.
For the description of electron dynamics in solids semiclassical transport theory is used.
In semiclassical transport theory, an electron is wave packet, that is a linear combination of Bloch states $\phi(k)$ of the form
$$\psi = \sum_k A(k) \phi(k)$$
The function $A(k)$ is localized in the $k$ space, that is, it has a maximum near the average value $k$ and width $\Delta k$
Thus, it is also localized in real space with $\Delta r \approx. \frac 1 {\Delta k}$ (that answers your third question)
The velocity of an electron is the group velocity of the wave packet $v = \frac 1 \hbar \nabla _k \mathcal E(k)$
So, the answer to your second question, the interference comes from nothing, it is just decomposition of a wave packet into plane waves much the same as a radar pulse can be Fourier decomposed as a superposition of plane electromagnetic waves.
Additionally, when an external field is applied, the $k$ vector changes at the rate given by $\hbar \dot k = q E$
To complete the picture, one can also assign an effective mass as $\frac 1 {m_{eff} }= \frac 1{\hbar^2} \nabla ^2_k \mathcal E(k)$
I hope I answered your questions.
7. Apr 16, 2016
### toutiao
Henryk,
After reading your reply, I checked out a solid state physics book and browse through the chapter regarding semiclassical transport theory. Based on my understanding, when external fields or phonons involved, the format of the Schrodinger's Equation changes correpondingly since we need to add the terms of electric field E and lattice defraction. In this case, the solution to the Schrodinger's Equation is no longer Block Function, but a wave packet instead, where the "basis" of the wave packet are Bloch functions with difinitive K. Adding those basis together (interference), we get a wave packet with momentum approximately K ("near the average value k and width Delta k) and position at approximate r (almost localized), without violating the uncertainty principle. Is this what you were explaining to me?
I have to confess that the derivation in the book is somewhat difficult to follow. Thanks for your prompt reply. I'll read more solid state physics books and hopefully things will clear up further. Looks like most semiconductor physics books don't go deep enough to unveil a clear picture of electron motion in crystal lattice.
You have a nice weekend.
8. Apr 17, 2016
### Henryk
Pretty much yes
Know someone interested in this topic? Share this thread via Reddit, Google+, Twitter, or Facebook
Have something to add?
Draft saved Draft deleted
|
2017-10-17 21:19:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6022064685821533, "perplexity": 685.0586060460132}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822488.34/warc/CC-MAIN-20171017200905-20171017220905-00421.warc.gz"}
|
https://keplerlounge.com/information-theory/2021/04/26/incompressible-integers.html
|
## Almost all positive integers are incompressible:
Given that $$\mathbb{N}$$ is countable and the space of binary strings of finite length $$\{0,1\}^*$$ is also countable, we may construct a bijection from $$\mathbb{N}$$ to $$\{0,1\}^*$$. It follows that every positive integer has a unique binary encoding.
Moreover, almost all positive integers are incompressible since:
$$\forall n \in \mathbb{N}^* \forall k < n, |\{x \in \{0,1\}^*:K(x) \geq n -k \}| \geq 2^n(1-2^{-k})$$
where $$|x| = n$$, the binary length of $$x$$, which may be understood as the machine-code representation of an integer.
## Proof:
Let’s suppose an integer with binary encoding $$x$$ has an algorithmic complexity $$K(x) \leq n-k$$. Given that the number of binary strings of binary length less than $$n-k$$ is $$2^{n-k} - 1 < 2^{n-k}$$ we have:
$$2^n - 2^{n-k} = 2^n(1-2^{-k})$$
integers with an algorithmic complexity greater than or equal to $$n-k$$.
## Corollary:
For $$n > k \geq 10$$, more than 99.9% of integers have an algorithmic complexity greater than $$n-k$$ since:
$$1-\frac{1}{2^{10}} > 99.9\%$$
From this we may deduce that less than 1% of integers are compressible.
## References:
1. Lance Fortnow. Kolmogorov Complexity. 2000.
2. Lance Fortnow. Kolmogorov Complexity and Computational Complexity. 2003.
|
2021-07-28 00:51:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 3, "x-ck12": 0, "texerror": 0, "math_score": 0.9202454090118408, "perplexity": 334.6228473259029}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153515.0/warc/CC-MAIN-20210727233849-20210728023849-00504.warc.gz"}
|
https://www.physicsforums.com/threads/why-steepest-descent-gives-a-wrong-direction-search.813568/
|
# Why steepest descent gives a wrong direction search?
1. May 12, 2015
### ymhiq
1. The problem statement, all variables and given/known data
I have to minimize the function (x1-1)2+x23+x1x2 by the steepest descent method. The initial point is [1,1]T
2. Relevant equations
3. The attempt at a solution
The gradient of this function is ∇ƒ(x1,x2)=[2(x1-1)-x2 3x22-x1]. This gradient evaluated in the initial point is ∇ƒ(1,1)=[-1 2]. Following the steepest descent method it is mandatory to minimize the function ƒ(x0-α∇ƒ(x0)) in order to find the value of α. So ƒ(x0-α∇ƒ(x0))=-5α+15α2-8α3 and ƒ'(x0-α∇ƒ(x0))=-5+30α-24α2. This function has extreme points in α1=0.95061 and α2=5.094. In order to be a minimum of this curve ƒ''(x0-α∇ƒ(x0))=30-48α has to be positive. This is my problem ƒ''(x0-α∇ƒ(x0) evaluated at both α values is negative so they don´t minimize the direction. So what I am doing wrong?
2. May 13, 2015
### SteamKing
Staff Emeritus
Just inspecting the gradient of the original function f(x1, x2), something doesn't look right.
If you take ∂f / ∂x1, how did you obtain [2(x1-1)-x2], specifically, the ' - x2' part? I'm confused, because there were no negative signs between terms in the original definition of f(x1, x2). A similar question arises in what you show to be ∂f / ∂x2.
3. May 13, 2015
### Ray Vickson
As SteamKing has pointed out, your gradient formula is incorrect, and your initial steepest-descent direction is wrong. However, when you correct these errors, you will obtain a function $\phi(\alpha) = f(x_0 - \alpha \nabla f(x_0))$ that has no stationary points at all. What does that tell you?
4. May 13, 2015
### ymhiq
Oh! Excuse me! You are right! However I made a mistake when I wrote the original problem. Let me write it again. I have to minimize the function ƒ(x1,x2)=(x1-1)2+x23-x1x2. The initial point is [1,1]T.
5. May 13, 2015
### ymhiq
Excuse me all of you. Finally I got the mistake I made solved. It was an incorrect solutions of ƒ'(x0-α∇ƒ(x0))=-5+30α-24α2 .
|
2017-10-20 09:50:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8152903914451599, "perplexity": 1105.0991209041638}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823997.21/warc/CC-MAIN-20171020082720-20171020102720-00745.warc.gz"}
|
http://physics.stackexchange.com/questions/36006/how-could-horrocks-have-measured-the-au?answertab=active
|
# How could Horrocks have measured the AU?
I have always understood that the great historical significance of the transits of Venus, and the reason for the expeditions mounted to observe it, were that, by observing it simultaneously from two distant locations, the absolute distance to the Sun could be measured. But I read in several sources, including some of my texts and two articles in Wikipedia, that Jeremiah Horrocks's observations of the 1639 transit of Venus allowed him to
make an estimate of the distance between the Earth and the Sun, now known as the astronomical unit (AU)
Horrocks's made his observations from a single location, and thus could not have been using parallax (as was done 130 years later) to arrive at his estimate.
Without knowledge or the sizes of both the Sun and Venus, how could he have performed the necessary calculations? Was he simply using the " well-informed guess as to the size of Venus" mentioned in the Wikipedia article. If so, what made it "well-informed"
-
Horrocks did not, in fact directly use his Venus transit observations to measure the AU, but to confirm a relationship that he postulated between planetary radii and orbital distance, using which the AU could calculated simply from an estimate of the radius of Earth.
During the early 17th century it was widely believed that the size of a planet was related in some way to its distance from the Sun. As expressed by Kepler (1618):
Nothing is more in concord with nature than that the order of magnitude should be the the same as the order of the spheres, so that among the six primary planets, Mercury should ... obtain the most narrow sphere; that next to Mercury should be Venus, which is larger, but still smaller than Earth's ...
The significance of Horrocks's (1639) Venus transit observation, combined with earlier observations of Mercury and the contemporary estimates of the relative sizes of the orbits of the Mercury, Venus and Earth, is that it allowed him to be more explicit1:
All planets (with the exception of Mars) are the same angular size when seen from the Sun, this size being 28 seconds of arc.
From this "law"2 (sometimes called "Horrocks Law") arriving at an estimate of the AU is a matter of simple trigonometry, involving only the Earth's radius,3 $R_E$, since if the Earth subtends an angle of $28''$ when viewed from the Sun
$$R_E / 1 \text{AU} = \tan (28''/2)$$
so that
$$1 \text{AU} = (\tan 14'')^{-1}R_E \approx 14,733 R_E$$
(1) Horricks observed an angular size for Venus of $76''$ and used contemporary estimates of the relative Sun-Venus and Sun-Earth distances, taking orbital eccentricities into account, to compute the angular size of Venus as seen from the Sun:
$$76'' \cdot ( 0.98409 \text{AU} - 0.7200 \text{AU}) / 0.7200 \text{AU} \approx 27.876''$$
A similar calculation based on Gassendi's 1632 observation of Mercury (using $20''$ and Kepler's values for Sun-Mecury and Sun-Earth distances) also produced an angular size of about $28''$. This, along with the general beliefs of the time, were apparently enough to convince Horrocks that the angular size from the Sun was $28''$ for all planets — notably Earth, which made his calculation of the AU in terms of $R_E$ possible.
(2) The term is modern. Horricks was more careful: "I do not put forward this conjecture as a certain demonstration, but as a probability".
(3) For which many contemporary estimates were available.
-
|
2013-12-21 18:38:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8004270195960999, "perplexity": 389.4090630921072}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345776439/warc/CC-MAIN-20131218054936-00003-ip-10-33-133-15.ec2.internal.warc.gz"}
|
https://forum.allaboutcircuits.com/threads/us-nuclear-force-still-uses-floppy-disks.138276/
|
US nuclear force still uses floppy disks
OBW0549
Joined Mar 2, 2015
3,566
Heh. I got a chuckle from the article sub-heading "The Floppy Disk - What is it?" I remember when 8" floppies were NEW, and vastly superior to mag tape and punched cards.
#12
Joined Nov 30, 2010
18,222
Last week I tried to use a couple of 1.44 meg plastic floppies and they wouldn't even format.
Not much of a loss, considering I haven't opened the drawer for a floppy in several years.
5 out of 7 hard drives wouldn't register after 5 years on the shelf.
I hope the U.S. Military Forces don't have drawers full of dusty old floppies and expect them to work after 5 or 10 years.
Ron
WBahn
Joined Mar 31, 2012
26,398
I'm not surprised they still have some floppies and drives around. I just recently sold off all that I had. But I'm a little perplexed if they actually use them for normal daily operations. I mean, what data could be on them that wouldn't be better stored on a thumb drive or other modern storage device? I guess you could have protocols to make up for the reliability issues and small capacity of the floppies. It just seems like the people responsible for these protocols would have suggested some improvements over the last 3 decades.
One big part of the reason is that the newer the technology, the more susceptible it is to electromagnetic pulse (EMP). Modern storage devices and electronics are inherently far more susceptible to EMP than older technologies. It is much easier to shield old floppy drives and older computers than modern ones.
wayneh
Joined Sep 9, 2010
17,153
One big part of the reason is that the newer the technology, the more susceptible it is to electromagnetic pulse (EMP). Modern storage devices and electronics are inherently far more susceptible to EMP than older technologies. It is much easier to shield old floppy drives and older computers than modern ones.
That's an argument for punch cards!
Unfortunately I threw my last punch cards - with my FORTRAN program on them - off the roof of my dorm. (That was more-or-less standard practice at the time.)
|
2021-07-24 08:22:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2526254951953888, "perplexity": 2617.8032202208665}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150134.86/warc/CC-MAIN-20210724063259-20210724093259-00322.warc.gz"}
|
https://cboard.cprogramming.com/windows-programming/100333-how-set-up-qt-dev-cplusplus.html
|
# Thread: How to set up Qt in Dev-c++?
1. ## How to set up Qt in Dev-c++?
How to set up Qt to work with Dev-C++?
I have set up these environment variables so far:
QTDIR = D:\Qt\4.3.4\
INCLUDE = D:\Qt\4.3.4\include
PATH is extended with ;D:\Qt\4.3.4\bin;D:\Dev-Cpp\bin
QMAKESPEC = I don't know what to write here???
So far Dev-c++ compiles programs only if include like this:
#include <QApplication.h>
, but .h extension is not needed in Linux and it sould not be needed in Dev-c++ to.
|
2017-07-27 14:54:46
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8269158005714417, "perplexity": 6958.3136256776015}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549428300.23/warc/CC-MAIN-20170727142514-20170727162514-00058.warc.gz"}
|
http://mathhelpforum.com/new-users/222070-integration-parts.html
|
# Math Help - Integration by parts
1. ## Integration by parts
use integration by arts to solve the following 5x cos (4x) dx so I have u = 5x, du = 5, v=1/4sin(4x) and dv =cos (4x) now my first step getting the formula correct and I chose uv-integral vdu step one was (5x) 1/4sin(4x) - integreal 1/4 sin(4x) (5) which lead to 5x/4 sin(4x) - 5/4 integral sin(4x) dx then step three was 5x/4 sin(4x) - 5/4 (-1/4 cos(4x)) +c which lead to 5x/4 sin4x +5/16cos4x+c and a I have two final answers not sure which is correct answer one 5/16(cos(4x)+4xsin(4x))+c or answer two 5(4xsin(4x)+cos4x)/16 +c if someone can clear up the confusion that would be great and also see if my steps to the final abswer are correct please......
2. ## Re: Integration by parts
Originally Posted by Jock
use integration by arts to solve the following 5x cos (4x) dx so I have u = 5x, du = 5, v=1/4sin(4x) and dv =cos (4x)
Have a look at this.
3. ## Re: Integration by parts
That's the answer I got that's for the clarification of my answer
|
2015-10-09 21:18:52
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9205106496810913, "perplexity": 4696.863742930345}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737935292.75/warc/CC-MAIN-20151001221855-00159-ip-10-137-6-227.ec2.internal.warc.gz"}
|
https://scicomp.stackexchange.com/questions/24293/effects-of-lumping-mass-matrix?noredirect=1
|
# Effects of Lumping Mass Matrix
I've recently finished an introductory course on the finite element method from a more mathematical perspective (following Brenner and Scott) and we were introduced to the finite element mass matrix in elliptic problems as the matrix arising from terms without a derivative. For example, a one-dimensional Helmholtz type equation with the form
$$-u(x)'' + au(x) = f(x), \quad 0 < x < 1, \quad a>0\\ u(0) = u(1) = 0$$
has a corresponding weak formulation that requires us to find $u$ such that
$$\int_0^1 u' v'dx + a\int_0^1uv dx = \int_0^1fvdx \quad \forall v \in H^1_0$$
where $H^1_0 = \{v \in H^1 : v(0) = v(1) = 0\}$.
Choosing $S \subset H^1_0$ to be a conforming finite dimensional subset with a basis $\{ \phi_i \}_{i=1}^N$, and saying $u = \sum_{j = 0}^N u_j \phi_j$, we get the linear problem
$$(\pmb{K} + a \pmb{M})U = F$$
where $K_{ij} = \int_0^1 \phi_i' \phi_j' dx$ is the stiffness matrix and $M_{ij} = \int_0^1 \phi_i \phi_j dx$ is the mass matrix. The finite element method typically proceeds by choosing $S$ to be the space of piecewise polynomials for example. This formulation extends naturally to higher dimensions.
From this previous post: How to formulate lumped mass matrix in FEM, there are various ways to lump the mass matrix. For example, by summing the off-diagonal terms: $M_{ii} = \sum M_{ij}$.
My question is what is the justification of this? Is there mathematical reasoning why this should give a consistent method? Is there a way to quantify the error introduced by doing this? I've seen an explanation that justifies mass matrix lumping in the context of mechanics where this assumption implies that the mass of the system is concentrated at discrete points, but how does this generalize to more general elliptic PDE problems?
• It's not consistent, but the error can indeed be controlled (and shown to be dominated by other terms in practice). This paper explains the issue pretty well. – Christian Clason Jun 23 '16 at 16:53
• As @ChristianClason mentioned, it is not consistent. But there are some things that you might impose/want to/from your mass matrices like: symmetry, mass conservation, positiveness... This article discuss a little bit about it. – nicoguaro Jun 23 '16 at 20:23
• Thanks for the references ChristianClason and @nicoguaro. Since we give up consistency, we give up convergence via the Lax Equivalence theorem. Is there a way to show that the finite element approximations approach the exact solution as the mesh size is reduced when using matrix lumping? – pmat Jun 23 '16 at 23:45
• Lax equivalence is used for finite difference methods, not finite element methods. For consistent FE methods, you use Galerkin orthogonality; non-consistent methods instead use the Strang lemma (first or second, depending on the nature of non-consistency). – Christian Clason Jun 24 '16 at 7:57
|
2021-06-15 10:58:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7050461173057556, "perplexity": 374.0168746999049}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487620971.25/warc/CC-MAIN-20210615084235-20210615114235-00298.warc.gz"}
|
https://www.childmedia.net/yk8zsuqv/8c85d4-what-is-the-si-unit-of-heat
|
# what is the si unit of heat
What are the advantages and disadvantages of individual sports and team sports? What is the unit of constant b, expressed in SI base units? is defined through the formula Q = mL. What are the qualifications of a parliamentary candidate? What is the timbre of the song dandansoy? The SI unit of pressure is the pascal (Pa) where 1Pa = 1 Nm-2. What is the SI unit of heat energy? 0 ; The SI unit of heat is joules. Join now. Heat capacity or thermal capacity is a physical property of matter, defined as the amount of heat to be supplied to a given mass of a material to produce a unit change in its temperature. Search for: Home; General Tests. The SI unit of Heat is_____? Heat is form of energy so its SI unit is same as that of energy which is JOULES. Login Student Register Register Institute. Mcq Added by: admin. Add your comments. Q: The gravitational force of attraction between two bodies is _____ the product of their masses. Ans1: Heat capacity(C) is the amount of heat(q) to be supplied to a given mass of a material to produce a unit change in its temperature(T). In SI, unit of quantity of heat is (A) calorie (B) kilocalorie (C) joule (D) non of the above. It's not to say it can't be converted to other units, but the basic unit for it is #"J"#, and more often, #"kJ"#. the sum of kinetic energy of molecules in a sample of a substance. When did organ music become associated with baseball? recent questions recent answers. League of SI Superheroes – Dr. Kelvin. Like this answer? It takes about 1 J to raise a 100-g-apple 1 m. Energy units can be preceded by various factors, including the following: kilo (k=103), Mega (M=106), Giga (G=109), Tera (T=1012), Peta (P=1015), Exa (E=1018). - 14117381 1. 0 ; joules. Celsius (⁰C) = K - 273⁰. Heat flows through material objects through the conduction, with heated particles imparting their energy to neighboring particles. Still in common usage is the atmosphere, which relates to the pressure exerted by the weightof the atmosphere over the surface of the Earth. This lower orbit caused the spacecraft to heat up and disintegrate before crashing onto the surface of the planet. What is the first and second vision of mirza? The standard unit of heat in the International System of Units is the calorie (cal), which is the amount of energy transfer required to raise the temperature of one gram of pure liquid water by one degree Celsius, provided the water temperature is higher than the freezing point and lower than the boiling point. The material on this site can not be reproduced, distributed, transmitted, cached or otherwise used, except with prior written permission of Multiply. The SI system (International System of Units) is the modern metric system of measurement and the dominant system of international commerce and trade.SI units are gradually replacing Imperial and USCS units.. What does it mean when there is no flag flying at the White House? The words “power” and “energy” are often used interchangeably. Search... Post Your Enquiry. In terms of kinetic theory it can be defined as the sum of kinetic energy of molecules in a sample of a substance. What is the SI unit of heat? Why don't libraries smell like bookstores? 1 kilojoule = 1000 joule Heat is also measured in calories and kilocalories. This comic book-style video animation series has been developed to help middle school students learn about the 7 SI base measurement units. In SI units, the specific heat capacity can be defined as the amount of heat in joules required to raise the temperature of 1 gram of a substance by 1 Kelvin. Updated February 09, 2019 The heat current is the rate at which heat is transferred over time. Like 3 Dislike 0 ⚐ Report. Heat capacity or thermal capacity is a physical property of matter, defined as the amount of heat to be supplied to a given mass of a material to produce a unit change in its temperature. As all the energy is represented in Joules (J), therefore, heat is also represented in Joules. SI unit - Joule per kilogram. specific heat formula. a device designed to measure the heat involved in physical and chemical changes. 'Heat' is energy in transit between a warmer body and a cooler body. The SI unit of heat is Joule which is a derived unit and the temperature unit is Kelvin (K) or Celsius. EASY MEDIUM HARD. Heat dissipation can … & WHAT IS SI UNIT OF TEMERATURE ? The SI unit of specific heat capacity is J k g − 1 K − 1. Heat dissipation is a type of heat transfer. Kelvin (K) = ⁰C + 273⁰. In physics, the word "heat" is used for a type of energy; so that would be the joule. The SI unit for energy is the Joule. All Rights Reserved. Usually, 4.184 joules of heat energy is necessary to increase the temperature of a unit weight (say 1 g) of water from 0 degrees to 1 degree Celsius. that would be the joule. Log in. What is the SI unit of heat - 7556385 KyraDaymiel is waiting for your help. Both kelvin and celsius have same units of temerature difference and the difference between both of them lies in the freezing temperatures and lowest temperature that can be found o the earth. It doesn't matter whether it be heat energy, or some other form of energy.The SI unit for energy is the Joule. Everyday Science Online Test 2 Mcqs Preparation Practice Questions. All Rights Reserved. When inflating pneumatic tyres such as on cars and bicycles, the non-SI unit pounds per square inch (psi) is frequently used: 1. WHAT IS THE SI UNIT OF HEAT ? Well, we should know that heat flow is going to be in #"J"#, joules, from General Chemistry. two non SI units of heat. Previously, the non-SI unit of heat (the calorie) has been used since that is how the potential energy in food is reported. Leave a Comment Cancel reply. SI unit of heat is Joule. What is the SI unit of heat energy?…. Like this answer? gaikarpratibha1212 gaikarpratibha1212 14.12.2019 Science Secondary School +5 pts. What is the SI unit of heat capacity of body? In terms of kinetic theory it can be defined as. Solved: What is the amount of heat, in SI units, necessary to melt 1 pound of ice? Which island has been chosen as the venue for a surfing event by the organizers of the 2024 Paris Olympics? Obtain a … Copyright © 2021 Multiply Media, LLC. Why don't libraries smell like bookstores? the thermal energy of the earth itself. Is Betty White close to her stepchildren? The specific latent heat (L) of a material… is a measure of the heat energy (Q) per mass (m) released or absorbed during a phase change. The SI system (International System of Units) is the modern metric system of measurement and the dominant system of international commerce and trade.SI units are gradually replacing Imperial and USCS units.. Heat capacity is measured using a calorimeter. The International System of Units (SI, abbreviated from the French Système international (d'unités)) is the modern form of the metric system.It is the only system of measurement with an official status in nearly every country in the world. What is the WPS button on a wireless router? Add your answer and earn points. What are the qualifications of a parliamentary candidate? Because it is a rate of heat energy over time, the SI unit of heat current is joule per second, or watt (W). In SI units, the specific thermal capacity (c symbol) is the amount of heat in joules needed to raise 1 gram of a substance 1 Kelvin. How long will the footprints on the moon last? 1 atmosphere = 14.7 psi Who is the longest reigning WWE Champion of all time? In this experiment the SI unit of heat will be used: the Joule. Like 3 Dislike 0 ⚐ Report. Related Questions. In obesity …by the consumption of more calories than the body can use. Heat is a form of energy. SI unit of heat is SI unit of heat is SI unit of heat is And The mode of transfer of heat in fluids is called asp 1 See answer dipalidhavan is waiting for your help. For the best answers, search on this site https://shorturl.im/Uo4AN. In physics, the word "heat" is used for a type of energy; so that would be the joule. For most purposes, heat capacity is reported as an intrinsic property, meaning it is a characteristic of a specific substance. 1 si unit of heat is joules. In physics, the word "heat" is used for a type of energy; so that would be the joule.In physics, the word "heat" is used for a type of energy; so that would be the joule.In physics, the word "heat" is used for a type of energy; so that would be the joule.In physics, the word "heat" is used for a type of energy; so that would be the joule. The unit of energy is the same as that of work (As work and energy are the two sides of the same coin). What is the SI unit of heat energy? … The SI unit of Heat is_____?,,,,, Answers: Joule. Tehran University of Iran has created a robot that can understand, speak and translate _____ different languages. laxmidevasani983 laxmidevasani983 as an amount of energy (being transferred), the SI unit of heat is … What is the WPS button on a wireless router? Basic Arithmetic … 0 ; JOULES. Heat dissipation occurs when an object that is hotter than other objects is placed in an environment where the heat of the hotter object is transferred to the colder objects and the surrounding environment. is often just called the "latent heat" of the material. SI unit of heat. What is the first and second vision of mirza? Copyright © 2021 Multiply Media, LLC. Heat energy (or heat) form of energy that transfers among particles in a substance or system by means of kinetic energy of those particles (kinetic theory states that heat is transferred by particles bouncing into each other) Calorie. In SI units, molar heat capacity (symbol: c n) is the amount of heat in joules required to raise 1 mole of a substance 1 Kelvin. While energy in the ability to do work, power is the rate at which energy is transferred, used, or transformed. c=C/m. Ask your question. If your impeached can you run for president again? uses the SI unit joule per kilogram [J/kg]. Units of Latent Heat: Latent heat is measured in Cal g-1 in the C.G.S system and in the SI system the units measured in J kg-1. Heat capacity is a measurable physical quantity equal to the ratio of the heat added to an object to the resulting temperature change. What is the point of view of the story servant girl by estrella d alfon? What is the point of view of the story servant girl by estrella d alfon? Heat capacity is the amount of heat required to raise one gram of material 1 °C under constant pressure. Related Questions. The SI unit for specific heat is joule per kelvin per kilogram (J/K/kg, J/ (kg K), J K−1 kg−1, etc.) Sanskar said: (Thu, Jul 19, 2018 06:54:18 AM IST) Joule is the SI unit of heat. How long will the footprints on the moon last? It can also communicate the specific thermal capacity in the calorie units per gram of grade Celsius. The material on this site can not be reproduced, distributed, transmitted, cached or otherwise used, except with prior written permission of Multiply. Joules can be defined as the amount of energy required to raise the temperature of a given mass by one degree. For a complete detailed account of SI units and their proper usage the chapter in the Heat Exchanger Design Handbook entitled "Conventions and nomenclature for physical quantities, units numbers and mathematics" by Y. Start studying Chemistry SI Units of Temperature & Heat. In physics, the word "heat" is used for a type of energy; so The SI is maintained by the International Bureau of Weights and Measures (BIPM, for Bureau International des Poids et Mesures) in Paris. The average kinetic energy of the molecules of a body determines? Search in site. A related unit is the Watt, which is a unit of power (energy p… SO its SI unit is: Joules/deg Celsius or Joules/Kelvin A. J / K. B. J K. C. J. D. o C. Easy. Comment. Q: The gravitational force of attraction between two bodies is _____ the product of their masses. What is the timbre of the song dandansoy? A) Joule: B) Newton: C) Calorie: D) Kelvin: Answer: A) Joule Explanation: Subject: Physics Exam Prep: Bank Exams. Answer for question: Your name: Answers. Molar specific heat capacity (Cp):-If the gas is held under constant pressure during the heat transfer, then the corresponding molar specific heat capacity is called molar specific heat capacity at constant pressure (Cp). 1 ; View Full Answer the si unit of heat is Jule. Derived quantities are the molar heat capacity, which is the heat capacity per mole of a pure substance, and the specific heat capacity, often simply called specific heat, which is the heat capacity per unit mass of a material. CGS Unit of Energy. Join now. The SI unit of molar specific heat capacity is Jmol –1 K –1 . The SI is maintained by the International Bureau of Weights and Measures (BIPM, for Bureau International des Poids et Mesures) in Paris. More Questions by difficulty. What are the advantages and disadvantages of individual sports and team sports? Who is the longest reigning WWE Champion of all time? Sometimes a volumetric heat capacity is used. It can also be expressed as JkgK. Thus, a kiloJoule (kJ) is 1000 Joules and a MegaJoule (MJ) is 1,000,000 Joules. Of course, enthalpy can have other units. What does it mean when there is no flag flying at the White House? One way to maximize teaching time shorten delays due to transitions and focus on student’s behavior is used to establish _____ in the classroom? Ask Question . the amount of heat needed to cause a unit rise in the temperature of a unit mass of the substance. a week for Ever 00 Gwy no 16000 AnaRosenbohm; I want to Winn $5,000.00 a week for Ever no16000 ; You promised you … October 19, 2020 by Admin. Here you will find the General Science Multiple Choice Questions Mcqs are from Biology, Chemistry, Physics and Atmospheric Studies. A. Watt B. Volt C. Joule D. Newton. It is often just called as the "latent heat" of the material. All Important Abbreviation … Heat Transfer Coefficient Conversion Calculator Answer. What is the SI unit of heat??? geothermal energy. Hence, the SI unit of heat is Joules. The definition given in the original answer is for 'internal energy', not heat. Toggle navigation. This page shows answers for question: what is the S.I unit of heat. Like this answer? The SI composite unit of heat transfer is the kilogram per second cubed kelvin. The SI unit of energy is Joule (the basic energy unit of the metric system) symbolized as J is named in honor of an English physicist James Prescott Joule (1818–1889) for his experiments on the mechanical equivalent of heat. Are you involved in development or open source activities in your personal capacity? It is denoted by C heat capacity = heat energy absorbed Q/ temperature change ∆θ C = Q/∆θThe SI unit Jk-1 HEAT CAPACITY (C) This is the quantity of heat required to raise the temperature of a given mass of a material by one degree Celsius or one Kelvin. Asan Mcqs - Solved Original Papers, General Knowledge, Pakistan Current Affairs MCQs for JOBS Everyday Science Mcqs What is the SI unit of Heat is_____? Heat transfer coefficient is the inverse of thermal insurance, which is used for building materials (R-value) and for clothing insulation. Everyday Science Mcqs Everyday Science Mcqs. Home / Defence & Police / UP Police (UPP) / Constable. Calories can easily be converted to Joules using the equality: 1.000 calorie = 4.184 Joules. SI unit of heat energy is joule (J). Piyush said: (Thu, Jan 26, 2017 07:03:36 PM IST) SI unit of heat is Joule (J). -- Its CGS unit is the erg = 10-7 joule. The SI unit of temperature is the Kelvin. At temperatures close to 0 K, the specific heat capacity c of a particular solid is given by c = b T 3, where T is the temperature and b is a constant, characteristic of the solid. Share 4. In the Kelvin scale, the lowest possible temperature is 0 K. What are the difference between Japanese music and Philippine music? Ans1: Heat capacity(C) is the amount of heat(q) to be supplied to a given mass of a material to produce a unit change in its temperature(T). 1. Molar specific heat … Log in. Get an answer to your question "What is the si unit for heat ..." in Chemistry if there is no answer or all answers are wrong, use a search bar and try to find the answer among similar questions. K. Molar heat capacity is specific heat capacity per unit … Learn vocabulary, terms, and more with flashcards, games, and other study tools. Check Answer and Solution for above question from Physi I Dustin DeCoteau claim all my lotto numbers on the lucky day sweepstakes and the pch sweepstakes; A traveler needs to claim reimbursement; I want to Winn$5,000. Units of Heat Transfer Description Examples Description Heat transfer has the dimension mass per time cubed thermodynamic temperature . Share with your friends. Rate and follow the question.Get Answer key for asked question. Latent Heat is a measure of the heat energy (Q) per mass (m) released or absorbed during a phase change. The SI unit of Heat is_____? Name Email Website. Is Betty White close to her stepchildren? The SI unit of heat is joule and kilojoule. Specific heat capacity is the quantity of heat needed to raise the temperature per unit mass. Heat capacity is a physical property of a substance, which means that it depends on the state and properties of the substance under consideration. In the International System of Units (SI), the… Read More; relation to obesity. So naturally, enthalpy must then be in #\mathbf("J")#. Heat capacity of a substance is the amount of heat required to raise its temperature by 1 degree Celsius or 1 Kelvin. Système Internationale d'Unités, or SI Units, was the outcome. The SI unit of Heat is_____? The SI composite unit of heat transfer is the kilogram per second cubed kelvin . c n = Q/ΔT where Q is heat and ΔT is the change in temperature. 1) True Joule is SI unit of heat 2) False -- heat of reaction is measured in a calorimeter and is dependent upon sample mass 3) True -- 1 cal = 4.184J 4) False Other Heat Units Share with your friends. With the ability to speed up or slow down particles, Dr. Kelvin can measure any temperature. Which of the following is not a waterborne disease? C = q/T So its SI unit will be = J/K Ans2: Heat view the full answer It is defined through the formula Q = mL. On an average day at sea level, this pressure is known as 1 atmosphere, which equates to 101.325 kPa. Answered What is the SI unit of heat??? Notably, heat is a form of energy, and therefore the SI unit of heat is also joules (J) which are defined as the amount of energy needed to raise the temperature of a given mass by one degree. To increase the temperature of one unit weight of water by one degree, we require 4.184 joules of heat. It uses the SI unit joule per kilogram [J/ kg]. Although the Celsius temperature scale is also used, it is considered a derived SI unit and is generally used to measure everyday temperatures. The excess calories are then stored as fat, or adipose tissue. When did organ music become associated with baseball? -- Its SI unit is the Joule. 1 calorie = 4.184 joules 1 kilocalorie = 1000 calories. A) Joule: B) Newton: C) Calorie: D) Kelvin: Answer: A) Joule Explanation: Subject: Physics Exam Prep: Bank Exams. What are the difference between Japanese music and Philippine music? C = q/T So its SI unit will be = J/K Ans2: Heat view the full answer Hence the correct option is B. If your impeached can you run for president again? Add your answer and earn points. 1. Like 3 Dislike 0 ⚐ Report. Heat is a form of energy. Are you involved in development or open source activities in your personal capacity? A. Watt B. Volt C. Joule D. Newton. calorie and British thermal unit. calorimeter. Find right answer with solution and explaination of asked question. … The main cause of this mishap was traced to a thruster calibration table where British imperial units instead of SI units were used. In physics, the word "heat" is used for … Notably, heat is a form of energy, and therefore the SI unit of heat is also Joule (J) which are defined as the amount of energy needed to raise the temperature of … The molar heat capacity is the heat capacity per unit amount (SI unit: mole) of a pure substance, and the specific heat capacity, often called simply specific heat, is the heat capacity per unit mass of a material. Share 1 Add your comments. Answer By Toppr. What is the SI unit of heat energy? The SI units of heat transfer coefficient is watts per squared meter Kelvin (W/m²•K). The SI for heat capacity is joules (J) per kelvin (K). Categories General Science MCQs Post navigation. President again AM IST ) SI unit of heat????... Book-Style video animation series has been chosen as the venue for a event. Used: the joule … specific heat capacity is reported as an intrinsic property, meaning it often... For heat capacity is the longest reigning WWE Champion of all time on an average at. Robot that can understand, speak and translate _____ different languages sanskar said: (,! Body determines obtain a … what is the SI composite unit of b! And team sports ( Q ) per mass ( m ) released or absorbed during a phase change considered derived! Can understand, speak what is the si unit of heat translate _____ different languages, Chemistry, physics Atmospheric! Cooler body by the organizers of the heat involved in development or open source in! And kilojoule by estrella d alfon the joule what are the difference between music! ” and “ energy ” are often used interchangeably and Philippine music dissipation is a measure of the material (! Hence, the word heat '' is used for building materials ( R-value and. Defined through the formula Q = mL of more calories than the body can use House... Base measurement units the conduction, with heated particles imparting their energy to neighboring.... Units instead of SI units, necessary to melt 1 pound of ice [ J/kg ] heat of... A measure of the story servant girl by estrella d alfon the erg = 10-7 joule, kilojoule! You will find the General Science Multiple Choice Questions Mcqs are from Biology, Chemistry, and... Amount of energy ; so that would be the joule is same as that of energy ; so would. In the ability to speed up or slow down particles, Dr. Kelvin can measure any temperature capacity in original... Has created a robot that can understand, speak and translate _____ languages... The average kinetic energy of the heat energy, or transformed slow down particles, Dr. Kelvin measure! [ J/kg ] kilogram per second cubed Kelvin m ) released or absorbed a! / Defence & Police / up Police ( UPP ) / Constable grade Celsius heat joule. Chemistry SI units of heat transfer Description Examples Description heat transfer has dimension... A surfing event by the organizers of the heat current is the erg = 10-7 joule the,... Kg ] d alfon up or slow down particles, Dr. Kelvin measure! The longest reigning WWE Champion of all time mass by one degree, require. Consumption of more calories than the body can use 7 SI base units & heat more ; relation obesity... Per gram of grade Celsius 'internal energy ', not heat 1000 calories heat! ; relation to obesity here you will find the General Science Multiple Choice Questions Mcqs are Biology., in SI base measurement units Chemistry SI units were used your impeached can you run for again. Megajoule ( MJ ) is 1,000,000 Joules ) released or absorbed during a phase change ) is 1,000,000 Joules water. Cubed Kelvin by estrella d alfon joule is the first and second vision of mirza “ energy are. That of energy ; so that would be the joule measured in and! S.I unit of heat required to raise the temperature of one unit weight of water by one degree we... The heat energy ( Q ) per Kelvin ( K ) is defined through the,! Conduction, with heated particles imparting their energy to neighboring particles reigning Champion. Defined as it uses the SI composite unit of heat needed to the... Or absorbed during a phase change warmer body and a cooler body all. Of mirza b, expressed in SI base measurement units the quantity heat... And Atmospheric Studies the WPS button on a wireless router a warmer body and MegaJoule! Is same as that of energy ; so that would be the joule given by... Been developed to help middle school students learn about the 7 SI base units is watts per squared Kelvin! ) SI unit of heat is_____?,,,,,, answers: joule current the... Often just called as the latent heat '' is used for building materials ( R-value ) for. Time cubed thermodynamic temperature in obesity …by the consumption of more calories than the body can use help school... Are then stored as fat, or adipose tissue PM IST ) joule is first! The definition given in the original answer is for 'internal energy ', not heat type... Require 4.184 Joules 1 kilocalorie = 1000 joule heat is form of energy ; so that would be joule...
|
2021-07-29 01:54:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5162560939788818, "perplexity": 1221.2607903652345}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153814.37/warc/CC-MAIN-20210729011903-20210729041903-00508.warc.gz"}
|
https://mammothmemory.net/maths/numbers/standard-form/standard-form-divide.html
|
Standard form divide
To divide in standard form all you need to remember is that
NOTE:
See indices Mammoth memory for further explanation
But the basis of divide in standard form is first look for a simple comparable that you know with indices.
i.e. try simple numbers you know first
Example 1
Work out the following giving your answer in standard form
44800div6.4times10^9
Method 1
Actually carry out the division
44800div6.4times10^9
is the same as
44800div6.4times10times10times10times10times10times10times10times10times10
which is
44800div640,000,000
which is
(448cancel0cancel0)/(6400,000,0cancel0cancel0)
448/(64,000,000)
=0.000007
and to put this in standard form
remember move decimal right and -1 to the power
Answer: is =7times10^-6
Method 2
An alternative way to calculate this is to say what is a simpler calculation, what is
10^2div10^3
=(cancel10timescancel10)/(10timescancel10timescancel10)=1/10
=10^-1
Therefore
44800div6.4times10^9
is the same as
4.48times10^4div6.4times10^9
(4.48times10^4)/(6.4times10^9)
4.48/6.4times10^4/10^9
0.7times10^-5
and in standard form this is
remember move decimal right and -1 to the power
Answer: is 7times10^-6
Example 2
(1.4times10^12)div(3.2times10^4)
Method 1
Actually carry out the division
1.4times10^12=1.4times10times10times10times10times10times10
times10times10times10times10times10times10
=1,400,000,000,000
and
3.2times10^4=3.2times10times10times10times10
=32,000
So the calculation becomes
(1,400,000,000,cancel(000))/(32,cancel(000))
=(1,400,000,000)/32
=43,750,000
and to put this in standard form
remember move decimal place left and +1 to the power
Answer is =4.357times10^7
Method 2
An alternative way to calculate this is to say what is a simpler calculation, what is
10^2div10^3
=(cancel10timescancel10)/(10timescancel10timescancel10)=1/10=10^-1
Therefore:
(1.4times10^12)div(3.2times10^4)
is the same as
(1.4times10^(cancel(\ 12)8))/(3.2timescancel10^cancel(\ 4))
(1.4times10^8)/3.2
=0.4375times10^8
remember move decimal place to right and -1 to the power
and to put this in standard form this becomes
4.375times10^7
Example 3
3.8times10^8div1.9times10^-3
Method 1
Actually carry out the division
3.8times10^8div1.9times10^-3
is the same as
3.8times10times10times10times10times10times10times10times10div1.9/(10times10times10)
which is
380,000,000div0.0019
which is
(380,000,000)/0.0019
=200,000,000,000
and to put this in standard form
remember to move decimal place left and +1 to the power
Answer: is 2times10^11
Method 2
An alternative is to find a similar calculation that you know. In this case
10^2div10^-2
=1000div0.1
=1000/0.1=10,000
or similarly
10^2div10^-2
is
10^2div1/10^2
=10^2/(1/10^2)
=10^2times10^2
=10^4
=10,000
Now to the actual calculation
3.8times10^8div1.9times10^-3
would be
3.8times10^8div1.9/10^3
=(3.8times10^8)/(1.9/10^3)
=(3.8times10^8times10^3)/1.9
=(3.8times10^11)/1.9
=2times10^11
which is already in standard form so the answer is =2times10^11
|
2019-06-17 21:40:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6436198353767395, "perplexity": 3857.4258088362913}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998580.10/warc/CC-MAIN-20190617203228-20190617225228-00211.warc.gz"}
|
https://www.physicsforums.com/threads/matrix-of-a-linear-transformation-abstract.545691/
|
# Matrix of a Linear Transformation (Abstract)
1. Oct 30, 2011
### brydustin
I was taught that the columns of a matrix, T, representing a transformation represent the first vector space's basis set and the rows represent the basis set of the range vector space.
i.e. T(v_k) = t_1,k*w_1 +.... + t_(m,k)*w_m
So v_k would be the k-th basis vector of the first space, V, and the w's are the vector basis set for W (the range space). The coeffiicients (t's) correspond to that specific column.
In other words, a transformation of a single basis (input) element is equal to a linear combination of the range's basis.
This is the convention in Linear Algebra Done Right, wikipedia, and every text i've read.... except recently on on mathematical physics, which has the reverse style (rows act like columns, colms. like rows -- as defined above). Is there a common convention? Or is one of the authors just plain wrong?
2. Oct 30, 2011
### mathwonk
the meaning of the entries in a matrix are pure an arbitrary convention - there is no right or wrong choice. However the most common convention is this: the first column of the matrix for a linear map T represents the coefficients of the image T(e1), of the first basis vector of the source, under the map T, expanded in terms of the basis of the target.
e.g. if T maps R^2 to R^3, and e1= (1,0) and if T(1,0) = (3,4,5), then the first column will have entries 3,4,5. the second column will be the coefficients of T(e2), etc...
|
2019-02-19 15:05:20
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8581225872039795, "perplexity": 765.1683130267446}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247490225.49/warc/CC-MAIN-20190219142524-20190219164524-00077.warc.gz"}
|
https://blog.genglinxiao.com/category/it/os/
|
## SSH Logon takes long time?
I’ve been suffering from this on CentOS 7 for quite some time now but haven’t really have time to dig into it.
Just today, I noticed the line after a successful logon:
Last login: Fri March 27 16:03:23 2016 from gateway.
Aha, now I know where the time has been spent. The SSHd must have taken a long time to figure out the host name of my login IP.
I’ve suspected this before, but in my sshd_config file, the line “UseDNS” was commented out, so I thought it must be something else.
A simple “man sshd_config” revealed that, “UseDNS yes” is actually the default setting:
UseDNS Specifies whether sshd(8) should look up the remote host name and check that the resolved host name for the remote IP address maps back to the very same IP address. The default is “yes”.
So I just add “UseDNS no” in the configuration file and restarted sshd. Problem solved.
## Windows mind set vs. Linux mind set
I wanted to talk about this for a long time, however, the key points were not clear until recently I ran into the pipeline concept in PowerShell.
When you do pipeline in PowerShell, you’re passing not a text stream from the left command to the right command, but rather, you’re passing a .net object.
When I read this from TechNet, I smiled to myself, yes, this is typical Microsoft!
Windows PowerShell provides a new architecture that is based on objects, rather than text.
The cmdlet that receives an object can act directly on its properties and methods without
any conversion or manipulation. Users can refer to properties and methods of the object by
name, rather than calculating the position of the data in the output.
And I said to myself, now I know what I wanted to say.
One key difference between Windows and Linux is, Windows always tries to be smarter, where GNU Linux tries to stay plain and humble. Pipeline in scripting is just one recent example.
Linux has been using text configuration files. Windows came along and said, we need something better. That’s how Windows Registry came about.
Linux has been using pipeline for IPC. Windows came along and said, we need something better. That’s how COM came about.
Linux has been using lock files to prevent processes to start another instance. Windows came along and said, we need something better. So they use Kernel Object instead.
Linux has been using permissions as a basic security measure. Windows came along and said, we need something better. So they do everything using ACLs.
Linux has been using symbolic links. Windows came along and said, we need something better and introduced short cuts.
To be fair, not all of them are bad ideas.
Despite its complexity and awkward configuration, COM gained such popularity that it became the basic foundation of modern Windows. Open registry in any Windows that has been used for a while and chances are the biggest tree is HKLM\Classes\CLSIDS. (One of the key reasons why you Windows becomes slower as you install more and more software).
Kernel Object is much more reliable than lock files. ACLs indeed provide much flexibility in terms of access control. Linux is also doing it now.
However, we all know that Windows have record of being smart in cheap ways and then fail pathetically. (SilverLight is the one that came into my mind as I write this) To me, this idea of passing objects through pipeline looks like just another one that will fail.
Having said that, I have to admit that, this difference between Windows and Linux is also not surprising. Linux is developed by community led by technical experts. Introducing new features always involves extensive discussions between these experts.That’s why Linux has a bad reputation of not listening to its users.
Where as for Windows, most likely, new features are proposed by requirement collection team and developer team, then the list of new features have to go through rounds of prioritization processes. If there are disputes, then there will be escalations and some manager will decide. Once decided, then features still in the list will be implemented. Period.
Now, it is actually surprising that Microsoft actually made some key decisions right, right? 🙂
## What slows down your Windows?
Windows has long been complained of performing slower and slower as being used. This essay tries to explain how and why.
Before we go into details, let’s make it clear that we’re talking about the architecture design of Windows that makes it not performing in certain situation. So this is consistently measurable performance difference. We’re not talking about bad performance because of wrong configuration. Nor are we going to talk about performance of a specific program. A specific instance of slow down, for example, your notepad will performance slower when you have a lot of other programs running, is not what we’re going to talk about.
First of all, there’s no reason Windows should perform worse (or better) if you just spend more time on it. If the installation keeps the same, the size of your computer keeps the same, Windows should perform the same.
However, if one of your running programs or a device driver has memory leak, then it will eat up more and more memory as time pass by. That will slow down your Windows. The unique thing about this type of performance issue is, after a fresh restart, Windows should perform well. In the early days, memory leak was a common issue on Windows. That attributed much to the common belief that you should restart your Windows once in a while to keep better performance. Now, most commonly used program is mature enough to be free of memory leak, Windows should perform just the same as time goes by.
However, as time goes by, you’ll probably keep installing new software on your Windows. This could indeed slow down your computer. The reason is, by installing any non-trivial software, you’re not only copying files to the disk, but also registering COM components to Windows. These registry key/values will be keep in memory. So the more software you install, the less memory your program will be able to get.
As an example, after you install a program that is able to open a new file format, chances are:
• You’ll not be able to see a thumb nail that shows the content of the file in Explorer;
• You’ll see this program listed in the pop-up menu of this file type;
They were made possible using COM technology that depends heavily on Windows Registry.
The problem is, this registry keys/values won’t get cleaned up when you uninstall the software. So Windows registry keeps growing and growing, your programs have lesser and lesser memory to use.
The other factor that would for sure slow down your computer is disk fragmentation. Disk fragmentation affect performance not only when your program does disk IO. If your page file is fragmented, paging operation will be slow. That will cause noticeable sluggish. Also remember that all executable files and dll files become memory map files, so if they are fragmented, your program will be slow not only during start up, but also in running phase.
Bloated registry and disk fragmentation are the two reason that your Windows slows down in the long run. In some of my Windows computers, I actually create separate partitions for page file, for outlook pst file and Windows tmp file. These techniques worked quite well.
## imsc12.ime causes mmc.exe to freeze?
I had suffered from this annoying problem for a long time – Every time I opened the compmgmt.msc file, and tried to check the system log (or application log, or any other log loged by Event Log), I could open the event property windows normally by double click on a log entry. But as soon as I closed the event property window, the whole application got frozen. That is, I can move the main windows, maximize it, minimize or restore it, but the content of the window became blank. Soon after that, the title bar of mmc told me that the application stoped responding.
I couldn’t figure what’s happening until one time, I used SystemInternals’ process explorer – I love this tools – I found that there’s a new thread being spawned when I opened the event property window. Yet when I closed the window, the thread didn’t terminate. Just a desperate move in order to solve this problem, I killed the thread using procexp. Magically, the mmc window restored to its normal state!
I tried to reproduce this scenario, some time it works, other times it failed.
Yesterday, when I got an error, I check the log and got stucked, again! When I fired process explorer and trying to kill the stuck thread, this time, the worst thing happened, process explorer got stuck!
Finally, I would have to solve the whold problem. I launched visual studio, and created a new project, and attach to mmc.exe process. I waited for all the symbols got loaded and paused the process.
There they were! In the thread window, there were four threads, I repeated the resume and pause for some time. Everytime the process pauses at the same thread. So I can assume this thread was busy doing something. The call stack show that this thread is doing somthing with the IME engine – I’m working on an simplified Chinese platform.
There was another thread with a paused flag, I switched to this thread, call stack show it was paused in a WaitForSingleObject function call. It must be the main thread. A peek at The bottom line of the calling stack confirmed this.
So, could it be the IME engine causes this frozen?
I investigated the loaded modules of the process, and the imsc12.ime seemes suspicious. It was the module sit on top of the call stack of the busy thread.
I opened the c:\windows\system32\ folder and found the file. In the security property of the file, I denied everyone from accessing this file. After that, I opened the compmgmt.msc and double clicked on an event. With cautious and anxiety, I closed the window… IT WENT WELL! THE PROBLEM GOT SOLVED!
ps, I searched the net with all the possible keywords combination. It seemed nobody else ever had this problem.
|
2021-01-23 01:26:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3348093330860138, "perplexity": 1950.0749609123648}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703531702.36/warc/CC-MAIN-20210123001629-20210123031629-00793.warc.gz"}
|
https://www.ideals.illinois.edu/handle/2142/16340/browse?type=contributor&value=Aschenbrenner%2C+Matthias
|
We are inviting IDEALS users, both people looking for materials in IDEALS and those who want to deposit their work, to give us feedback on improving this service through an interview. Participants will receive a $20 VISA gift card. Please sign up via webform. # Browse Dissertations and Theses - Mathematics by Contributor "Aschenbrenner, Matthias" • (2017-07-09) The ordered valued differential field$\mathbb{T}_{\log}\$ of logarithmic transseries is conjectured to have good model theoretic properties. This thesis records our progress in this direction and describes a strategy moving ...
application/pdf
PDF (3MB)
|
2019-04-18 17:09:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17738749086856842, "perplexity": 1463.4867139234948}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578517745.15/warc/CC-MAIN-20190418161426-20190418183426-00198.warc.gz"}
|
http://www.jsvcycling.com/2018/11/13/hh-performance.html
|
# The Hodgkin-Huxley Model as a Performance Metric
13 November, 2018
The source code associated with this post is available on GitHub.
In the world of scientific computing, languages like Python, Julia, and even MATLAB have become the de facto standard. It’s understandable why these languages have become so popular too. For one, you don’t have to be a computer scientist to use them well; just about anyone can pick them up and start building useful tools. In addition, they’re all interpreted languages and don’t need to be compiled before being executed making it faster to iterate on.
While these languages have a lot going for them does performance suffer? Performance may not always be the most important metric when selecting a language to use, but as models become more complex performance starts to matter. In this post, I’ll be implementing the well-known Hodgkin-Huxley model in several different programming languages to see how their performance compares.
The Hodgkin-Huxley model is a mathematical model that describes the propagation of action potentials in neurons. It was developed by Alan Lloyd Hodgkin and Andrew Fielding Huxley in 1952 to describe the giant axon in squid. Since then, it has become a fairly standard model in mathematical neuroscience. I won’t be discussing the model itself in much more detail since we’re really only concerned with it’s performance but below are a few of the important equations. I do however highly recommend the Wikipedia page for a brief but thorough discussion on it.
Because this model is a complex derivative, we’re going to solve it numerically using the second-order Runge-Kutta method known as Heun’s method (sometimes referred to as the modified Euler method). In addition, we’ll be simulating it for a 10 seconds with an a constant applied current ($I_{app}$) of 12 nA (that’s nano-amperes if you didn’t know). All other values and equations are pulled from the original model.
All tests are executed on a Linode nanode instance running Debian 9 amd64 (I’m currently working to get MATLAB setup on a nanode instance). The table below briefly tabulates the average performance results across three attempts.
Language Version Run-time Memory usage
C float gcc 6.3.0 0.966 s 20.348 MB
C double gcc 6.3.0 0.919 s 39.901 MB
C++ float g++ 6.3.0 1.092 s 22.254 MB
C++ double g++ 6.3.0 1.273 s 41.723 MB
Julia 1.0.1 6.287 s 222.613 MB
Python 3.5.3 68.954 s 64.082 MB
Now let’s dive a little bit more into the results of each language.
### C
There’s nothing really all that surprising about C’s performance here in general. The combination of performance and low memory usage is something that C developers have raved about for years. After years of using C here and there I too have come to expect a result like this and I wasn’t disappointed. Also of note is that the performance when using double-precision values was actually faster on average than single-precision. Although I didn’t expect double-precision to be significantly slower I most definitely didn’t expect it to be faster. I wonder what the reasoning for this could be. Statistical error? Process scheduling? Aliens? We may never know.
What I did find challenging about implementing the model in C was the plotting. Although the plotting isn’t being taken into account with the performance test, I’m using plotting to visually validate that the model is working correctly. However, unlike Python, Julia, and other common data sciencpe languages, C doesn’t exactly have a plotting library that I can just link my program to and use. Instead, I had to open up a pipe stream to gnuplot and stream in the resulting model before plotting it to a PNG file. While this work file for me and for something simple like this, it’s definitely not as scalable as say Python’s matplotlib. Not to mention that I’m fairly comfortable coding in C and Linux, something that many data scientists and mathematicians can’t say.
To sum it all up, while C scores really highly on performance, it really suffers when it comes to usability. Not to mention that the C ecosystem is still fairly fragmented when it comes to operating system features and compilers.
### C++
Much like the results with C, C++ didn’t really provide any big surprises. It’s ever-so-slightly lower performance and small additional memory overhead are all characteristic of the relationship between C and C++. However, in a real-world situation, the addition of OOP and features added in C++11 and beyond make it a viable alternative for those who can afford the overhead. Notably, here we see that the double-precision version ran in just under 200 ms slower than the single-precision version. Even still, the C++ implementation comes out ahead of many of the other competitors (except for C, obviously).
Again though, I again experienced challenges with plotting and although C++ has slightly better offerings than C, I decided to stick with what I knew worked and ported over the gnuplot code (it wasn’t really a port, more like a Ctrl-C Ctrl-V but same idea).
I really don’t have much else to say about C++ since a lot of what was said about C applies here as well. If you want OOP and fancy “modern” language features but still really care about performance then C++ is the way to go. Otherwise, it’s probably best to stick with the more traditional data science languages.
### Julia
I’ll admit it, I was really expecting a lot from Julia. I’ve heard a lot of people say that it’s the future of data science. Instead of delivering on those expectations though it’s feels a bit like a mixed bag. I can’t really be too harsh on Julia though; this is my first time ever writing a line of code in Julia. I wouldn’t be surprised at all if I implemented the problem in a way that just doesn’t work well in Julia.
I’m not too concerned with the performance itself: 6 seconds is nothing to laugh at. What really concerns me is the memory usage. Although it ran 10x faster, Julia required over 3x as much memory as Python 3 (and almost 11x as much as C required). The Hodgkin-Huxley model is a rather simple model and there’s no reason it should be using that much memory.
In my opinion, Julia is a language that has a lot of potential when it comes to the computational sciences. Since I’ve never before used Julia I’m fully willing to admit that I did something wrong and that the 222 MB statistic can actually be lower. I really hope that this is the case too. Julia seems to be a language that fits nicely between C and Python in terms of performance and usability. It’s very well documented and it looks to have a fairly active community behind it. I’m definitely interested in learning more about it and using it in the future. Hopefully I’ll figure out a way to bring that memory usage down to a more reasonable level.
### Python
I’m not sure what I found more surprising: Julia’s memory usage or Python’s run-time. I mean, I didn’t expect anything near C or C++ level performance, but 10x slower than Julia seems a bit excessive. Python 3 is a language that I tend to feel fairly comfortable proficient in so I don’t believe that I wrote inherently poor code. Could my use of numpy be causing the slow down? In my past experiences with Python 3 as a data science language and using packages like numpy (and pandas in many cases), I found that it was generally faster than standard python lists.
I might have to take another look at the Python 3 implementation and see if there’s a more efficient and “python-ic” way to solving the equation. I know Python is relatively slow, but 68 seconds just seems way too damn slow compared to what I know and my own past experience. I really hope I just made some kind of newbie mistake or something with the Python implementation because otherwise I think I’ll have to re-evaluate my usage of Python for data science.
## Conclusion
Overall, I didn’t really find anything to be mid-boggling in any way. While Julia’s memory usage and Python’s run-time were pretty bad I can’t say I didn’t expect them (well the Python one at least, I didn’t really know what to expect from Julia). C and C++ are still king when it comes to raw performance but they really lack the approachability and easy-of-use that languages like Python 3 and Julia can offer. With the exception of the memory usage, I feel that Julia has the best balance between power and usability of the languages that were tested. Because of this, I’m really interested to learn about it more and maybe even start applying it to my data science needs both with my current research position and in my personal projects.
If I am able to find the time later on, I plan on diving more deeply into my concerns with Julia and Python 3 and figure out of I’m able to solve them. I also would like to test more languages like MATLAB, R, Ruby, Octave, and any others that I can think of between now and then. For now though, this simple performance test will have to suffice.
|
2019-06-20 04:18:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25290820002555847, "perplexity": 820.3919336136579}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999130.98/warc/CC-MAIN-20190620024754-20190620050754-00412.warc.gz"}
|
https://www.aimsciences.org/article/doi/10.3934/mbe.2014.11.761
|
# American Institute of Mathematical Sciences
• Previous Article
Stability and bifurcation analysis of epidemic models with saturated incidence rates: An application to a nonmonotone incidence rate
• MBE Home
• This Issue
• Next Article
On the estimation of sequestered infected erythrocytes in Plasmodium falciparum malaria patients
2014, 11(4): 761-784. doi: 10.3934/mbe.2014.11.761
## A SEIR model for control of infectious diseases with constraints
1 Faculdade de Engenharia da Universidade do Porto, DEEC and ISR-Porto, Rua Dr. Roberto Frias, s/n, 4200-465 Porto, Portugal, Portugal, Portugal
Received April 2013 Revised December 2013 Published March 2014
Optimal control can be of help to test and compare different vaccination strategies of a certain disease. In this paper we propose the introduction of constraints involving state variables on an optimal control problem applied to a compartmental SEIR (Susceptible. Exposed, Infectious and Recovered) model. We study the solution of such problems when mixed state control constraints are used to impose upper bounds on the available vaccines at each instant of time. We also explore the possibility of imposing upper bounds on the number of susceptible individuals with and without limitations on the number of vaccines available. In the case of mere mixed constraints a numerical and analytical study is conducted while in the other two situations only numerical results are presented.
Citation: M. H. A. Biswas, L. T. Paiva, MdR de Pinho. A SEIR model for control of infectious diseases with constraints. Mathematical Biosciences & Engineering, 2014, 11 (4) : 761-784. doi: 10.3934/mbe.2014.11.761
##### References:
[1] F. Brauer and C. Castillo-Chavez, Mathematical Models in Population Biology and Epidemiology, Springer-Verlag. New York, 2001. [2] F. Clarke, Optimization and Nonsmooth Analysis, John Wiley, New York, 1983. [3] F. Clarke, Functional Analysis, Calculus of Variations and Optimal Control, Springer-Verlag, London, 2013. doi: 10.1007/978-1-4471-4820-3. [4] F. Clarke and MdR de Pinho, Optimal control problems with mixed constraints, SIAM J. Control Optim., 48, (2010), 4500-4524. doi: 10.1137/090757642. [5] M. d. R. de Pinho, M. M. Ferreira, U. Ledzewicz and H. Schaettler, A model for cancer chemotherapy with state-space constraints, Nonlinear Analysis, 63 (2005), e2591-e2602. [6] M. d. R. de Pinho, P. Loewen and G. N. Silva, A weak maximum principle for optimal control problems with nonsmooth mixed constraints, Set-Valued and Variational Analysis, 17 (2009), 203-2219. doi: 10.1007/s11228-009-0108-1. [7] E. Demirci, A. Unal and N. Ozalp, A fractional order seir model with density dependent death rate, MdR de Pinho,Hacet. J. Math. Stat., 40 (2011), 287-295. [8] P. Falugi, E. Kerrigan and E. van Wyk, Imperial College London Optimal Control Software User Guide (ICLOCS), Department of Electrical and Electronic Engineering, Imperial College London, London, England, UK, 2010. [9] R. F. Hartl, S. P. Sethi and R. G. Vickson, A survey of the maximum principles for optimal control problems with state constraints, SIAM Review, 37 (1995), 181-218. doi: 10.1137/1037043. [10] M. R. Hestenes, Calculus of Variations and Optimal Control Theory, $2^{nd}$ Edition (405 pages), John Wiley, New York, 1980. [11] H. W. Hethcote, The basic epidemiology models: models, expressions for $R_0$, parameter estimation, and applications, In Mathematical Understanding of Infectious Disease Dynamics (S. Ma and Y. Xia, Eds.), Vol. 16. Chap. 1, pp. 1-61, World Scientific Publishing Co. Pte. Ltd., Singapore, 2008. doi: 10.1142/9789812834836_0001. [12] W. O. Kermack and A. G. McKendrick, Contributions to the mathematical theory of epidemics, Bulletin of Mathematical Biology, 53 (1991), 35-55. [13] H. Maurer and S. Pickenhain, Second order sufficient conditions for optimal control problems with mixed control-state constraints, J. Optim. Theory Appl., 86 (1995), 649-667. doi: 10.1007/BF02192163. [14] Helmut Maurer and H.J. Oberle, Second order sufficient conditions for optimal control problems with free final time: The Riccati approach, SIAM J. Control Optm., 41 (2002), 380-403. doi: 10.1137/S0363012900377419. [15] N. P. Osmolovskii and H. Maurer, Applications to Regular and Bang-Bang Control: Second-Order Necessary And Sufficient Optimality Conditions In Calculus Of Variations And Optimal Control, SIAM Advances in Design and Control, 24, 2012. doi: 10.1137/1.9781611972368. [16] D. S. Naidu, T. Fernando and K. R. Fister, Optimal control in diabetes, Optim. Control Appl. Meth., 32 (2011), 181-184. doi: 10.1002/oca.990. [17] R.M. Neilan and S. Lenhart, An introduction to optimal control with an application in disease modeling, DIMACS Series in Discrete Mathematics, 75 (2010), 67-81. [18] L.T. Paiva, Optimal Control in Constrained and Hybrid Nonlinear Systems, Project Report, 2013, http://paginas.fe.up.pt/~faf/ProjectFCT2009/report.pdf. [19] O. Prosper, O. Saucedo, D. Thompson, G. T. Garcia, X. Wang and C. Castillo-Chavez, Modeling control strategies for concurrent epidemics of seasonal and pandemic H1N1 influenza, Mathematical Biosciences and Engineering, 8 (2011), 141-170. doi: 10.3934/mbe.2011.8.141. [20] P. Shi and L. Dong, Dynamical models for infectious diseases with varying population size and vaccinations, Journal of Applied Mathematics, 2012 (2012), 1-20. doi: 10.1155/2012/824192. [21] H. Schäettler and U. Ledzewicz, Geometric Optimal Control. Theory, Methods and Examples, Springer, New York, 2012. doi: 10.1007/978-1-4614-3834-2. [22] C. Sun and Y. H. Hsieh, Global analysis of an SEIR model with varying population size and vaccination, Applied Mathematical Modelling, 34 (2010), 2685-2697. doi: 10.1016/j.apm.2009.12.005. [23] R. Vinter, Optimal Control, Birkhäuser, Boston, 2000. [24] A. Wächter and L. T. Biegler, On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming, Mathematical Programming, 106 (2006), 25-57. doi: 10.1007/s10107-004-0559-y.
show all references
##### References:
[1] F. Brauer and C. Castillo-Chavez, Mathematical Models in Population Biology and Epidemiology, Springer-Verlag. New York, 2001. [2] F. Clarke, Optimization and Nonsmooth Analysis, John Wiley, New York, 1983. [3] F. Clarke, Functional Analysis, Calculus of Variations and Optimal Control, Springer-Verlag, London, 2013. doi: 10.1007/978-1-4471-4820-3. [4] F. Clarke and MdR de Pinho, Optimal control problems with mixed constraints, SIAM J. Control Optim., 48, (2010), 4500-4524. doi: 10.1137/090757642. [5] M. d. R. de Pinho, M. M. Ferreira, U. Ledzewicz and H. Schaettler, A model for cancer chemotherapy with state-space constraints, Nonlinear Analysis, 63 (2005), e2591-e2602. [6] M. d. R. de Pinho, P. Loewen and G. N. Silva, A weak maximum principle for optimal control problems with nonsmooth mixed constraints, Set-Valued and Variational Analysis, 17 (2009), 203-2219. doi: 10.1007/s11228-009-0108-1. [7] E. Demirci, A. Unal and N. Ozalp, A fractional order seir model with density dependent death rate, MdR de Pinho,Hacet. J. Math. Stat., 40 (2011), 287-295. [8] P. Falugi, E. Kerrigan and E. van Wyk, Imperial College London Optimal Control Software User Guide (ICLOCS), Department of Electrical and Electronic Engineering, Imperial College London, London, England, UK, 2010. [9] R. F. Hartl, S. P. Sethi and R. G. Vickson, A survey of the maximum principles for optimal control problems with state constraints, SIAM Review, 37 (1995), 181-218. doi: 10.1137/1037043. [10] M. R. Hestenes, Calculus of Variations and Optimal Control Theory, $2^{nd}$ Edition (405 pages), John Wiley, New York, 1980. [11] H. W. Hethcote, The basic epidemiology models: models, expressions for $R_0$, parameter estimation, and applications, In Mathematical Understanding of Infectious Disease Dynamics (S. Ma and Y. Xia, Eds.), Vol. 16. Chap. 1, pp. 1-61, World Scientific Publishing Co. Pte. Ltd., Singapore, 2008. doi: 10.1142/9789812834836_0001. [12] W. O. Kermack and A. G. McKendrick, Contributions to the mathematical theory of epidemics, Bulletin of Mathematical Biology, 53 (1991), 35-55. [13] H. Maurer and S. Pickenhain, Second order sufficient conditions for optimal control problems with mixed control-state constraints, J. Optim. Theory Appl., 86 (1995), 649-667. doi: 10.1007/BF02192163. [14] Helmut Maurer and H.J. Oberle, Second order sufficient conditions for optimal control problems with free final time: The Riccati approach, SIAM J. Control Optm., 41 (2002), 380-403. doi: 10.1137/S0363012900377419. [15] N. P. Osmolovskii and H. Maurer, Applications to Regular and Bang-Bang Control: Second-Order Necessary And Sufficient Optimality Conditions In Calculus Of Variations And Optimal Control, SIAM Advances in Design and Control, 24, 2012. doi: 10.1137/1.9781611972368. [16] D. S. Naidu, T. Fernando and K. R. Fister, Optimal control in diabetes, Optim. Control Appl. Meth., 32 (2011), 181-184. doi: 10.1002/oca.990. [17] R.M. Neilan and S. Lenhart, An introduction to optimal control with an application in disease modeling, DIMACS Series in Discrete Mathematics, 75 (2010), 67-81. [18] L.T. Paiva, Optimal Control in Constrained and Hybrid Nonlinear Systems, Project Report, 2013, http://paginas.fe.up.pt/~faf/ProjectFCT2009/report.pdf. [19] O. Prosper, O. Saucedo, D. Thompson, G. T. Garcia, X. Wang and C. Castillo-Chavez, Modeling control strategies for concurrent epidemics of seasonal and pandemic H1N1 influenza, Mathematical Biosciences and Engineering, 8 (2011), 141-170. doi: 10.3934/mbe.2011.8.141. [20] P. Shi and L. Dong, Dynamical models for infectious diseases with varying population size and vaccinations, Journal of Applied Mathematics, 2012 (2012), 1-20. doi: 10.1155/2012/824192. [21] H. Schäettler and U. Ledzewicz, Geometric Optimal Control. Theory, Methods and Examples, Springer, New York, 2012. doi: 10.1007/978-1-4614-3834-2. [22] C. Sun and Y. H. Hsieh, Global analysis of an SEIR model with varying population size and vaccination, Applied Mathematical Modelling, 34 (2010), 2685-2697. doi: 10.1016/j.apm.2009.12.005. [23] R. Vinter, Optimal Control, Birkhäuser, Boston, 2000. [24] A. Wächter and L. T. Biegler, On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming, Mathematical Programming, 106 (2006), 25-57. doi: 10.1007/s10107-004-0559-y.
[1] Md. Haider Ali Biswas, Maria do Rosário de Pinho. A nonsmooth maximum principle for optimal control problems with state and mixed constraints - convex case. Conference Publications, 2011, 2011 (Special) : 174-183. doi: 10.3934/proc.2011.2011.174 [2] Stefan Doboszczak, Manil T. Mohan, Sivaguru S. Sritharan. Pontryagin maximum principle for the optimal control of linearized compressible navier-stokes equations with state constraints. Evolution Equations and Control Theory, 2022, 11 (2) : 347-371. doi: 10.3934/eect.2020110 [3] H. O. Fattorini. The maximum principle for linear infinite dimensional control systems with state constraints. Discrete and Continuous Dynamical Systems, 1995, 1 (1) : 77-101. doi: 10.3934/dcds.1995.1.77 [4] Huaiqiang Yu, Bin Liu. Pontryagin's principle for local solutions of optimal control governed by the 2D Navier-Stokes equations with mixed control-state constraints. Mathematical Control and Related Fields, 2012, 2 (1) : 61-80. doi: 10.3934/mcrf.2012.2.61 [5] Eduardo Casas, Fredi Tröltzsch. Sparse optimal control for the heat equation with mixed control-state constraints. Mathematical Control and Related Fields, 2020, 10 (3) : 471-491. doi: 10.3934/mcrf.2020007 [6] Theodore Tachim-Medjo. Optimal control of a two-phase flow model with state constraints. Mathematical Control and Related Fields, 2016, 6 (2) : 335-362. doi: 10.3934/mcrf.2016006 [7] Maria do Rosário de Pinho, Ilya Shvartsman. Lipschitz continuity of optimal control and Lagrange multipliers in a problem with mixed and pure state constraints. Discrete and Continuous Dynamical Systems, 2011, 29 (2) : 505-522. doi: 10.3934/dcds.2011.29.505 [8] Andrei V. Dmitruk, Nikolai P. Osmolovskii. Proof of the maximum principle for a problem with state constraints by the v-change of time variable. Discrete and Continuous Dynamical Systems - B, 2019, 24 (5) : 2189-2204. doi: 10.3934/dcdsb.2019090 [9] IvÁn Area, FaÏÇal NdaÏrou, Juan J. Nieto, Cristiana J. Silva, Delfim F. M. Torres. Ebola model and optimal control with vaccination constraints. Journal of Industrial and Management Optimization, 2018, 14 (2) : 427-446. doi: 10.3934/jimo.2017054 [10] Vincenzo Basco, Piermarco Cannarsa, Hélène Frankowska. Necessary conditions for infinite horizon optimal control problems with state constraints. Mathematical Control and Related Fields, 2018, 8 (3&4) : 535-555. doi: 10.3934/mcrf.2018022 [11] Luís Tiago Paiva, Fernando A. C. C. Fontes. Adaptive time--mesh refinement in optimal control problems with state constraints. Discrete and Continuous Dynamical Systems, 2015, 35 (9) : 4553-4572. doi: 10.3934/dcds.2015.35.4553 [12] Alexander Tyatyushkin, Tatiana Zarodnyuk. Numerical method for solving optimal control problems with phase constraints. Numerical Algebra, Control and Optimization, 2017, 7 (4) : 481-492. doi: 10.3934/naco.2017030 [13] Mourad Azi, Mohand Ouamer Bibi. Optimal control of a dynamical system with intermediate phase constraints and applications in cash management. Numerical Algebra, Control and Optimization, 2022, 12 (2) : 279-291. doi: 10.3934/naco.2021005 [14] Piermarco Cannarsa, Hélène Frankowska, Elsa M. Marchini. On Bolza optimal control problems with constraints. Discrete and Continuous Dynamical Systems - B, 2009, 11 (3) : 629-653. doi: 10.3934/dcdsb.2009.11.629 [15] Matthias Gerdts, Martin Kunkel. A nonsmooth Newton's method for discretized optimal control problems with state and control constraints. Journal of Industrial and Management Optimization, 2008, 4 (2) : 247-270. doi: 10.3934/jimo.2008.4.247 [16] Andrei V. Dmitruk, Alexander M. Kaganovich. Quadratic order conditions for an extended weak minimum in optimal control problems with intermediate and mixed constraints. Discrete and Continuous Dynamical Systems, 2011, 29 (2) : 523-545. doi: 10.3934/dcds.2011.29.523 [17] Georg Vossen, Torsten Hermanns. On an optimal control problem in laser cutting with mixed finite-/infinite-dimensional constraints. Journal of Industrial and Management Optimization, 2014, 10 (2) : 503-519. doi: 10.3934/jimo.2014.10.503 [18] Mikhail Gusev. On reachability analysis for nonlinear control systems with state constraints. Conference Publications, 2015, 2015 (special) : 579-587. doi: 10.3934/proc.2015.0579 [19] M. Arisawa, P.-L. Lions. Continuity of admissible trajectories for state constraints control problems. Discrete and Continuous Dynamical Systems, 1996, 2 (3) : 297-305. doi: 10.3934/dcds.1996.2.297 [20] Nidhal Gammoudi, Hasnaa Zidani. A differential game control problem with state constraints. Mathematical Control and Related Fields, 2022 doi: 10.3934/mcrf.2022008
2018 Impact Factor: 1.313
|
2022-09-29 23:30:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5824277997016907, "perplexity": 3238.3160424623857}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00228.warc.gz"}
|
https://en.wikipedia.org/wiki/Separable_filter
|
# Separable filter
A separable filter in image processing can be written as product of two more simple filters. Typically a 2-dimensional convolution operation is separated into 2 onedimensional filters. This reduces the cost of computing the operator.
## Examples
1. A twodimensional smoothing filter is separated in this sample:
$\frac{1}{3} \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} * \frac{1}{3} \begin{bmatrix} 1 & 1 & 1 \end{bmatrix} = \frac{1}{9} \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix}$
2. Gaussian blur (smoothing)
$\frac{1}{4} \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix} * \frac{1}{4} \begin{bmatrix} 1 & 2 & 1 \end{bmatrix} = \frac{1}{16} \begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{bmatrix}$
3. Sobel operator (edge detection)
$\mathbf{G_x} = \begin{bmatrix} \quad~ & \quad~ & \quad~ \\[-2.5ex] 1 & 0 & -1 \\ 2 & 0 & -2 \\ 1 & 0 & -1 \end{bmatrix} * A = \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix} * \begin{bmatrix} +1 & 0 & -1 \end{bmatrix} * A$
This works also for Prewitt operator.
|
2015-09-04 05:11:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8834745287895203, "perplexity": 3146.787310962253}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645335509.77/warc/CC-MAIN-20150827031535-00340-ip-10-171-96-226.ec2.internal.warc.gz"}
|
http://physicshelpforum.com/kinematics-dynamics/2003-bullet-impact.html
|
Physics Help Forum Bullet impact
Kinematics and Dynamics Kinematics and Dynamics Physics Help Forum
Apr 15th 2009, 12:13 PM #1 Junior Member Join Date: Feb 2009 Posts: 8 Bullet impact A 12 g bullet is fired into a 100 g block initially at rest on a horizontal surface. After impact, the block slides 7.5 m before coming to rest. If the coefficient of friction between the block and the surface is 0.65, what is the speed of the bullet immediately before impact?
Apr 16th 2009, 10:07 AM #2 Physics Team Join Date: Feb 2009 Location: India Posts: 365 M = mass of the block m = mass of the bullet u = velocity of the bullet before impact U = velocity of block+bullet just after the impact. a = accelleration of the block+bullet after impact k = coefficient of kinetic friction g = accln due to gravity. x = distance travelled by block+bullet (=7.5 m) before comming to rest. The friction force = - k(M+m)g hence acceln of block+bullet = -kg $\displaystyle V^2 = U^2 + 2ax$ $\displaystyle 0 = U^2 -2kgx$ U = sqrt{2kgx} Now we use law of conservation of momentum mu = (M+m)U =(M+m)*sqrt{2kgx} u = (M+m)*sqrt{2kgx}/m
Tags bullet, impact
Thread Tools Display Modes Linear Mode
Similar Physics Forum Discussions Thread Thread Starter Forum Replies Last Post tastylick Kinematics and Dynamics 3 Jan 3rd 2014 10:09 AM fbaker Kinematics and Dynamics 1 Aug 17th 2010 10:34 PM kubombelar Kinematics and Dynamics 1 Nov 10th 2009 01:05 AM rjsci13 Kinematics and Dynamics 2 Sep 28th 2009 07:52 PM bfritz88 Kinematics and Dynamics 0 Dec 5th 2008 02:10 PM
|
2019-03-24 15:20:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6061965823173523, "perplexity": 2588.686087510384}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203462.50/warc/CC-MAIN-20190324145706-20190324171706-00374.warc.gz"}
|
http://mathoverflow.net/questions/121186/jacobian-polynomial/121206
|
# jacobian polynomial
Here is the question which could be quit difficult (but could be not):
Let $C$ be a field of complex numbers and $f \in C[x,y]$ be a polynomial such that there exist $g \in C[x,y]$ and $Jac(f,g) \in C^{*}$ i.e. determinant of Jacobian matrix of polynomials $f$ and $g$ is a nonzero constant. Question: Is it true that $f$ is irreducible?
Any comments are welcome!
-
add comment
## 1 Answer
Edit: The statement in the next paragraph is wrong! I misunderstood the result of Kaliman: it says that given $(f,g)$ as in the question, there is a polynomial automorphism $\phi$ of $\mathbb{C}^2$ such that each fiber of $\phi \circ (f,g): \mathbb{C}^2 \to \mathbb{C}^2$ is irreducible. So I would assume it is still hard to give a positive answer to the question, but clearly what I wrote below is false.
I would assume the question is quite difficult, since a positive answer would imply the Jacobian conjecture by this result of Kaliman.
-
Thanks for such reference! It is a nice paper! – Andriy Regeta Feb 10 '13 at 14:05
add comment
|
2014-04-19 00:04:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9288045167922974, "perplexity": 208.68855672065155}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00470-ip-10-147-4-33.ec2.internal.warc.gz"}
|
https://www.biostars.org/p/123266/
|
liftOver error: not reading the coordinate position correctly?
2
1
Entering edit mode
7.7 years ago
Neal ▴ 60
Hi all,
I am getting an error while running the liftOver tool locally on my Mac OSX
The code I execute is as follows:
~/Downloads/liftOver ./ICBP-summary-Nature.bed ~/Downloads/hg18ToHg19.over.chain.gz ./converted_icbp_coordinates.bed ./unlifted_icbp_coordinates.bed
Reading liftover chains
Mapping coordinates
Expecting integer field 2 line 2146483 of ./ICBP-summary-Nature.bed, got 6.5e+07
So I checked the data on this particular line and the data seems to be fine as follows by running sed as
sed -n '2146843p' ICBP-summary-Nature.bed
and I get the following output for the line
chr16 65819026 65819027 rs3852699
The consequence of this error is that the program breaks at this line number so my remaining snps (~300,000) in the subsequent lines do not get converted.
I thought I should post this query on the UCSC Genome Support Forum here, but the registration seems to have closed?
Thank you for going through my query and I would be grateful for any tips or suggestions on how to resolve this?
Update:
At the request of komal, I am posting a few lines which appear before and after this line. So the following are some of the lines before 2146843(including 2146843) are:
chr16 65787923 65787924 rs3730406
chr16 65790185 65790186 rs11700
chr16 65791635 65791636 rs8058861
chr16 65791780 65791781 rs8059662
chr16 65794844 65794845 rs13336793
chr16 65798783 65798784 rs12051247
chr16 65798928 65798929 rs12051249
chr16 65802113 65802114 rs16957240
chr16 65811920 65811921 rs7193713
chr16 65817068 65817069 rs7196793
chr16 65817164 65817165 rs7196989
chr16 65819026 65819027 rs3852699
And the following are some of the lines afterwards:
chr16 65819026 65819027 rs3852699
chr16 65819348 65819349 rs6499116
chr16 65822861 65822862 rs6499118
chr16 65829359 65829360 rs3852700
chr16 65829878 65829879 rs16957265
chr16 65835167 65835168 rs6499119
chr16 65842341 65842342 rs6499121
chr16 65847342 65847343 rs9940665
chr16 65847656 65847657 rs9931407
chr16 65850656 65850657 rs8064216
chr16 65855664 65855665 rs8053031
SNP hg19 liftOver hg18 • 4.4k views
0
Entering edit mode
Did you check if there is a space(s) instead of a tab between the first and the second field on this line?
0
Entering edit mode
Hei komal, and thanks for the comment. The original file was a CSV file which I converted to a tab delimited file through perl..so I am pretty sure there ought to be a tab between the first and second field on this line...
0
Entering edit mode
Can you show some lines that appear before and after this line?
0
Entering edit mode
Hei komal, I have just updated the post with some of the lines
0
Entering edit mode
I got no error when parsing this bed file to liftover (commandline version). Did you try the web version and see if you are getting the same error? Though, it might not be a good idea if you have a very large bed file.
0
Entering edit mode
Hmm you are correct. I tried it at home and these lines were converted perfectly. The file is the usual size for human snps (~2.5 million snps) so, the web version may not work. Maybe I could try to split the input file into ~1.2 million line chunks and then join the 2 output files (assuming it does not break again)
2
Entering edit mode
7.7 years ago
Neal ▴ 60
Ok so I finally found out what was going wrong.
I misread line 2146483 as 2146843. Hence I could not find the error.
I split the file into two using the linux split command
split -l 1230663 ./Blood_pressure/ICBP-summary-Nature.bed ./Blood_pressure/split_blood_pressure
And that is when I realized my oversight as the 2nd file also gave the same error albeit in a different line number obviously which I fortunately read correctly!
Here is what the erroneous line in the file looked like:
chr16 6.5e+07 65000001 rs2289150
I changed it using sed as follows:
sed -i '' 's/6.5e+07/65000000/' split_blood_pressureab
Then I joined the two files through ´cat´and proceeded as usual with liftOver.
My sincere thanks to komal for taking the time to help me out.
0
Entering edit mode
6.7 years ago
morovatunc ▴ 540
Say I did the process successfully. Is there a way to verify the lifting over correct?
|
2022-08-16 19:30:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49812015891075134, "perplexity": 3654.397657080676}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572515.15/warc/CC-MAIN-20220816181215-20220816211215-00247.warc.gz"}
|
http://mathhelpforum.com/statistics/53135-word-combinations-problem-print.html
|
# word combinations problem
Printable View
• October 11th 2008, 12:49 PM
biggestbernard1
word combinations problem
How many ways can 4 people be chosen from a group of 9? Tell whether the situation is a permutation of combination. Then solve.
• October 11th 2008, 01:08 PM
Jhevon
Quote:
Originally Posted by biggestbernard1
How many ways can 4 people be chosen from a group of 9? Tell whether the situation is a permutation of combination. Then solve.
this is just "9 choose 4", or in other words, ${9 \choose 4} = _9C_4$
|
2016-06-26 09:26:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9135096669197083, "perplexity": 896.8326412249974}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395039.24/warc/CC-MAIN-20160624154955-00171-ip-10-164-35-72.ec2.internal.warc.gz"}
|
http://math.stackexchange.com/questions/3186/einstein-notation-difference-between-vectors-and-scalars
|
# Einstein notation - difference between vectors and scalars
From Wikipedia:
First, we can use Einstein notation in linear algebra to distinguish easily between vectors and covectors: upper indices are used to label components (coordinates) of vectors, while lower indices are used to label components of covectors. However, vectors themselves (not their components) have lower indices, and covectors have upper indices.
I am trying to read the Wikipedia article, but I am constantly getting confused between what represents a vector/covector and what represents a component of one of these. How can I tell?
-
A vector component is always written with 1 upper index $a^i$, while a covector component is written with 1 lower index $a_i$.
In Einstein notation, if the same index variable appear in both upper and lower positions, an implicit summation is applied, i.e.
$$a_i b^i = a_1 b^1 + a_2 b^2 + \dotsb \qquad (*)$$
Now, a vector is constructed from its component as
$$\mathbf a = a^1 \mathbf{\hat e}_1 + a^2 \mathbf{\hat e}_2 + \dotsb$$
where $\mathbf{\hat e}_i$ are the basis vectors. But this takes the form like (*), so if we make basis vectors to take lower indices, we will get
$$\mathbf a = a^i \mathbf{\hat e}_i$$
This is likely what Wikipedia means.
-
Okay, so $a_i b^i$ doesn't represent the product of a vector times a covector, but a sum over elements. That confused me – Casebash Aug 24 '10 at 9:55
@Casebash: Right, $a_i b^i$ as a dot product is just a special case. For instance, we could use $x_i{}^i$ to represent the trace of a 2-tensor. – kennytm Aug 24 '10 at 11:13
|
2016-05-03 00:02:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9538077116012573, "perplexity": 496.35843171455036}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860117914.56/warc/CC-MAIN-20160428161517-00061-ip-10-239-7-51.ec2.internal.warc.gz"}
|
http://de.vroniplag.wikia.com/wiki/Rh/Fragment_196_20
|
# Rh/Fragment 196 20
## < Rh
32.639Seiten in
diesem Wiki
Typus Verschleierung Bearbeiter Graf Isolan Gesichtet
Untersuchte Arbeit:
Seite: 196, Zeilen: 20-23
Quelle: Malevergne und Sornette 2006
Seite(n): 233, Zeilen: 18-22
One could hope for the existence of logical links between some of these measures, such as a vanishing tail dependence parameter implies vanishing asymptotic conditional correlation coefficients. Indeed, this turns out to be wrong and one can construct simple examples for which all possible combinations occur as in example 6.4.10. A priori, one could hope for the existence of logical links between some of these measures, such as a vanishing tail-dependence parameter λ implies vanishing asymptotic conditional correlation coefficients. In fact, this turns out to be wrong and one can construct simple examples for which all possible combinations occur.
Anmerkungen Kein Hinweis auf eine Übernahme. Sichter (Graf Isolan)
|
2017-05-25 22:13:27
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8282874822616577, "perplexity": 3527.237638557165}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608617.6/warc/CC-MAIN-20170525214603-20170525234603-00112.warc.gz"}
|
https://studydaddy.com/question/a-suitcase-measures-24-inches-long-and-18-inches
|
QUESTION
# A suitcase measures 24 inches long and 18 inches
A suitcase measures 24 inches long and 18 inches high. What is the diagonal length of the suitcase to the nearest tenth of a foot?
• @
• 1 order completed
Tutor has posted answer for $10.00. See answer's preview$10.00
****** **** the solution *** attached ********
or
• @
• 5 orders completed
Tutor has posted answer for $10.00. See answer's preview$10.00
*********** Theorem ** a^2 * *** * *** * being *** ********** ** ******** in **** **** **
****
+ 18^2 * *** *** * 324
=
c^2 *** * *** ** *
c
Ans *****
or
• @
Tutor has posted answer for $10.00. See answer's preview$10.00
***** *** ***********************
then
diag = **********************
or
|
2018-04-24 04:36:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2680117189884186, "perplexity": 12552.612382744388}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946564.73/warc/CC-MAIN-20180424041828-20180424061828-00293.warc.gz"}
|
https://jukidibysapixipov.thuoctrigiatruyenbaphuong.com/latex-concrete-habitat-book-15940pu.php
|
Last edited by Arashilabar
Wednesday, July 29, 2020 | History
2 edition of Latex Concrete Habitat found in the catalog.
# Latex Concrete Habitat
## by Albert Knott
Written in English
Subjects:
• Building construction & materials,
• House Plans,
• Do-It-Yourself,
• House & Home,
• Architecture,
• General,
• Design & Construction
• The Physical Object
FormatPaperback
Number of Pages132
ID Numbers
Open LibraryOL9835910M
ISBN 101412039975
ISBN 109781412039970
Latex-modified concrete was developed in the s for bridge deck overlays, and since then o bridges have been successfully rehabilitated. LMC also has been used to renovate roadways, parking decks, marine structures and even major league sports stadiums. Starting out, pour a little latex modified concrete, then broom it over the deck and edges, spreading the mortar mixture. Aggregates need to be removed. Keep the mixture ahead of the pour, but not so far it dries out (2m or 6' +/-) Note: complete application/coverage of mortar is critical to bonding of overlay to existing surface. Concrete.
To make a latex mold, start by applying a layer of latex to your chosen object and letting it dry for 30 minutes. Repeat this process to add layers for small objects and 10 or more layers for larger items. If your object is very large, cover some gauze with latex and Views: K. Latex Concrete Habitat: Albert Knott, George Nez: Latex Concrete Habitat: Albert Knott, George Nez:: Kindle Store. out about each day's deals, from literature and children's books to romance, science fiction, fantasy, and more. 3 of 3 people found the /5(K).
The research project investigates a new construction method for creating thin concrete panels. The faceted formwork includes a substructure of flexible wire mesh providing basic geometry, and a lining made of a latex sheet that determines the panel’s final form. The proposed method aims to reduce the weight of concrete molds and the amount of material used in construction. The three basic polymers used as latex modifiers for concrete or mortar are acrylics, styrene-butadiene rubbers (SBR) and polyvinyl acetates (PVA). Defoamers are incorporated into the polymer emulsions when they are manufactured to inhibit formation of excessive air that would be caused by foam generated during mixture of the mortar or concrete.
You might also like
Kenya in pictures.
Kenya in pictures.
The ancient art of Colima, Mexico
The ancient art of Colima, Mexico
Eighth United Nations Congress on the Prevention of Crime and the Treatment of Offenders, Havana, 27 August-7 September 1990
Eighth United Nations Congress on the Prevention of Crime and the Treatment of Offenders, Havana, 27 August-7 September 1990
Kingsley Plantation
Kingsley Plantation
Regional development and plan evaluation
Regional development and plan evaluation
An Illustrated Guide to Heart Failure
An Illustrated Guide to Heart Failure
Janes fighting ships.
Janes fighting ships.
Translations and Reprints from the Original Sources
Translations and Reprints from the Original Sources
Iron men & iron horses
Iron men & iron horses
Thecho en Mexico =
Thecho en Mexico =
An Electromagnetic Tool for Damping and Fatigue Analysis
An Electromagnetic Tool for Damping and Fatigue Analysis
Estás aquí
Estás aquí
Kurdistan
Kurdistan
Cargo tank cleaning manual.
Cargo tank cleaning manual.
Latex Concrete Habitat Paperback – February 1, by Dr. Albert Knott (Author), Dr. George Nez (Author) out of 5 stars 16 ratings. See all formats and editions Hide other formats and editions. Price New from Used from Paperback "Please retry" $/5(16). This is THE book on latex concrete, I bought it twice, the diagrams were not clear enough on the Kindle. Latex Concrete Habitat. One person found this helpful. Helpful. 0 Comment Report abuse DAN-i-EL. out of 5 stars Great Info and very versatile. Reviewed in the United States on Febru /5. In many war torn and poverty stricken regions, the indigenous architecture has been heavy mud and wattle roofs on thick mud walls. These structures, while cool in the summer, are of very low strength, are maintenance intensive, are time consuming to build, and are largely in massive disrepair. Replacing these mud structures with the light weight roofs of latex concrete produces a permanent. Isbn: - latex concrete habitat - Book information and reviews for ISBN,Latex Concrete Habitat by Albert Knott. Latex concrete habitat by albert knott | Select Fiction Paperbacks: 2 for$20; Pre-Order Harper Lee's Latex Concrete Habitat book Set a Watchman; Spring Totes Special Value: \$ with Purchase; Documentary Sale: Up to 50% Off Download latex concrete habitat e-book - Latex Concrete /5().
George Nez and Dr. Albert Knott developed this technique for low-cost roofing in Africa. They have a book, which I highly recommend, called "Latex Concrete Habitat" (there's a link to it in the introduction above). We used 4 x 5 gallon buckets of acrylic on the kids rooms (33 ft by 12 Latex Concrete Habitat book, so 20 gallons.
indexto LatexConcrete Habitat Chapter Page A. Latex ConcreteRoofs for Resettlementand RefugeeConditions 1 B. Vaulted Latex ConcreteShelters 8 C. Construction of a 20' x20' LCHPRoof 12 D. Selection of Materials 29 E. Stress Analysis 46 F. Placing NewRoofs on ExistingWalls 64 G. Ideasfor Construction of Latex Concrete Habitat 71 • HPShells 71 Definition.
Home» Latex Concrete Habitat» Download Latex Concrete Habitat Epub Download Latex Concrete Habitat Epub Scott Standard Postage Stamp Catalogue Latex Concrete Habitat Edit. Concrete repairs around the home are easier than ever with pre-mixed concrete patching materials and specialized cements for a variety of uses.
Latex cement offers particular advantages over other cement patching compounds, such as increased flexibility and fast drying time.
Latex cement is used similarly to other concrete patching materials. We look at a small latex modified concrete roof built nearly 10 years ago.
How has it held up. Building must be demolished shortly. Felt like I wanted to document this robust little roof. Tags: latex concrete habitat, latex concrete habitat download, latex concrete habitat pdf Related: the-best-guide-to-meditation-victor-npdf adams-chart-of-history-teachers-guide-master-bookspdf donald-winnicott-today-jan-abrampdf pirates-and-privateers-joyce-glasnerpdf journey-to-ethiopia-abigya-g-wubie.
Permaculture Earth Building. An interview in Denver, Colorado with George Nez, international habitat pioneer and developer of a new building concept: "roofs first." Latex Concrete Roof 10 years after construction.
A book on latex concrete technology has recently been produced (Albert Knott and Can you make an adobe plaster type mix to cover an old [PDF] Huck Read latex concrete habitat online/preview - Read the book Latex Concrete Habitat by Albert Knott online or Preview the book, service provided by Openisbn Project.
any change in the output. Finally, the input \LaTeX comes out in the output as LATEX. Thus our source is a mixture of text to be typeset and a couple of LATEX commands \emph and \LaTeX.
The first command changes the input text in a certain way and the second one generates new text. Now call up the file again and add one more sentence given below. A latex concrete roof is almost more of an art project than a roof.
The shape and style of it depends almost entirely on your imagination. If you want to create something truly unique, this is the method for you.
Its advantages are that. Latex Concrete Habitat By Dr. Albert Knott and Dr. George Nez. Tweet. Published: February and are largely in massive disrepair. Replacing these mud structures with the light weight roofs of latex concrete produces a permanent architecture significantly more safe and strong, of very low maintenance, and of remarkably low cost, as the.
Why Latex Modified Concrete Is Used For Cold Climate Bridge Deck Overlays. Updated November In colder climates, bridge decks undergo a freeze-thaw cycle more severe than other types of concrete infrastructure.
This is because they’re exposed to weather conditions on the top and bottom, unlike sidewalks and other concrete flatwork. Mix the compound in a five-gallon bucket. Most compounds will have two components: a liquid latex solution and a bag of dry mortar mix.
Prepare the compound right before you are ready to cover the floor. How much you mix will depend on the size of the room and the height of the : 16K. Latex Modified Concrete Materials Mixers – Continuous Mobile for Latex Modified Concrete Proportioning and Mixing of Latex Modified Concrete Removal of Existing Concrete Overlays Removal of Existing Concrete Overlay, Variable Thickness Curing Application LMC Overlays Method of Measurement.
Get free shipping on qualified Latex-ite Concrete Cement & Masonry or Buy Online Pick Up in Store today in the Building Materials department.
These concrete molds can be used over and over. Landscape Molds. Affordable! A single mold can create hundreds of concrete castings. Long-Lasting!
These concrete molds can be used over and over. Close. Home; Shop Products. Driveway/Walkway Stamps & Tools. Hawaiian Seamless Stone Floppy Stamps: 36″, 24″, 18″, 15″.A.
Conventional Concrete Systems on Bridge Decks B. Role of Latex in Concrete 1. Laboratory Evaluation of Latex Modifi ed Concrete 2. Laboratory Mix Procedure 3.
Specifi cations of the Mix Components 4. Formulations Used in the Mixes 5. Results of Evaluations C. Components of Latex Modifi ed Concrete 1.
Portland Cement W Aggrega 3. tes.Using the “Concrete” fonts The Concrete Roman fonts were designed by Don Knuth for a book called “Concrete Mathematics”, which he wrote with Graham and Patashnik (the Patashnik, of BibTeX fame).Knuth only designed text fonts, since the book used the Euler fonts for mathematics.
|
2021-04-21 10:01:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5504651069641113, "perplexity": 13903.883101839383}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039536858.83/warc/CC-MAIN-20210421100029-20210421130029-00269.warc.gz"}
|
https://tex.stackexchange.com/questions/229901/customized-list-of-elements-using-etoolbox
|
# Customized list of elements using etoolbox
I need to have several lists with access by index (array-like). My minimum non-working example follows:
\documentclass{minimal}
\usepackage{etoolbox}
\newcounter{listAcounter}
\stepcounter{#2}%
\csdef{#1\the#2}{#3}
}
\newcommand\getFromList[2]{%
\csuse{#1#2}
}
\begin{document}
This is element 2 from list A: \getFromList{listA}{2}.
\end{document}
I get from compiling with pdflatex:
! You can't use the letter l' after \the.
<argument> listA\the l
istAcounter
How can I fix this code? Why is this wrong?
• Welcome to TeX.SX! Perhaps \csdef{#1\csuse{the#2}}{#3} will work. – egreg Feb 24 '15 at 14:04
When you type \the#2 you have an already formed token, so this will not be merged into a single command name such as \thelistAcounter. You have to build the name yourself:
\documentclass{article}
\usepackage{etoolbox}
\newcounter{listAcounter}
\stepcounter{#2}%
\csdef{#1\csuse{the#2}}{#3}
}
\newcommand\getFromList[2]{%
\csuse{#1#2}%
}
\begin{document}
This is element 2 from list A: \getFromList{listA}{2}.
\end{document}
Note the % after \csuse{#1#2}.
An easier implementation with expl3, that avoids defining a counter.
\documentclass{article}
\usepackage{xparse}
\ExplSyntaxOn
\NewDocumentCommand{\newList}{m}
{
\seq_new:c { l_kees_list_#1_seq }
}
{
\seq_put_right:cn { l_kees_list_#1_seq } { #2 }
}
\NewDocumentCommand{\getFromList}{mm}
{
\seq_item:cn { l_kees_list_#1_seq } { #2 }
}
\ExplSyntaxOff
\begin{document}
\newList{listA}
`
|
2019-07-21 14:48:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7014182209968567, "perplexity": 10145.418703054655}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195527048.80/warc/CC-MAIN-20190721144008-20190721170008-00494.warc.gz"}
|
https://tex.stackexchange.com/questions/560768/doublespacing-from-the-setspace-package-stretches-matrices
|
# Doublespacing from the setspace package stretches matrices?
I'm writing an article which is doublespaced throughout. Normal math mode looks fine, but in any instance of the pmatrix environment, the parentheses of the matrix become stretched out. Roughly, I have something like
\documentclass{article}
\usepackage{amsmath}
\usepackage{setspace}
\doublespace
\begin{document}
$\begin{pmatrix} a & b \end{pmatrix}$
\end{document}
The parentheses then stretch much wider than the matrix entries a and b and it looks bad to me. How can I make these matrices format normally, while still doublespacing throughout the document?
|
2021-04-21 11:48:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6206681728363037, "perplexity": 2653.8808841706127}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039536858.83/warc/CC-MAIN-20210421100029-20210421130029-00027.warc.gz"}
|
https://support.bioconductor.org/p/9140007/
|
How to normalize Kallisto count matrix collapsed at gene-level with TMM/DESeq2?
1
0
Entering edit mode
@1a34845f
Last seen 7 months ago
Iran
Hi all,
Sorry for the simple question but I'm new to Kallisto and could not reach a definitive conclusion based on other posts.
I have two matrices of expression data (from a previous study) that I want to use for mRNA-miRNA correlation analysis. Based on the information provided by the authors, the mRNA expression matrix is derived using Kallisto. They did not mention the use of Tximport or any other tools but the read counts are already collapsed to gene-level, resulting in a matrix with gene names in the first column (e.g. TP53, A2M), genome-coordination in the second column (e.g. "chr17_7661779_7687550_-") and raw counts in the rest of the table (which are mostly non-integer values such as 1192.75812 and 2546.19874).
My question concerns how to normalize this data for correlation analysis. I have come to the conclusion that the best methods for normalization for correlation analysis are edgeR's TMM and DESeq2 normalization. Based on the other posts, I think the following scenarios would work out but I would appreciate it if someone could confirm this and correct my mistakes:
For edgeR: apparently, edgeR is capable of dealing with non-integer values and I think there is no need for any prior data transformation since the counts are already collapsed to the gene level. so simply using the following codes should prepare the matrices for correlation analysis -->
dgelist <- DGEList(matrix) ##the matrix with gene names as row names and counts in columns
norm-mat <- calcNormFactors(dgelist, method = "TMM")
norm-mat <- cpm(norm-mat, log = TRUE)
For DESeq2: DESeq2 requires integers as input. Others suggest using the function DESeqDataSetFromTximport() but I believe this works for outputs of Tximport, not a matrix of counts. apparently, this function also accounts for gene length bias for results of Kallisto but (correct me if I'm wrong) I don't think that's needed here since the counts are already collapsed to gene level. So the only thing I need to do is to use matrix <- round(matrix), create a DESeq dataset from this, and use the function counts(deseq.dataset, normalized = TRUE).
Q1: Is it OK to input this data as-is for normalization without any prior transformation regarding length and other stuff?
Q2: Does my scenario for edgeR work out? and is it appropriate for correlation analysis?
Q3: Does my scenario for DESeq2 work out? and is it appropriate for correlation analysis?
Thanks in advance for your help
DESeq2 tximport Kallisto TMM edgeR • 367 views
ADD COMMENT
2
Entering edit mode
@mikelove
Last seen 2 days ago
United States
I don't think that's needed here since the counts are already collapsed to gene level.
Even still, the argument of the tximport paper is that differential transcript usage which can lead to changes in the gene length can be corrected using our methods. The gene-level counts would still have this bias for DE at gene level.
But if you want to ignore this DTU effect on gene length, you can just round the counts and provide to DESeqDataSetFromMatrix(). Again, this is if you don't have access to the underlying transcript-level data. Otherwise I'd recommend using tximport.
Re: correlation analysis, I'd recommend vst() in the DESeq2 world. This is for example what WGCNA recommends for what expression values to use coming out of DESeq2. vst will take into account sequencing depth (and gene length if you use the tximport pipeline).
ADD COMMENT
Login before adding your answer.
Traffic: 397 users visited in the last hour
Help About
FAQ
Access RSS
API
Stats
Use of this site constitutes acceptance of our User Agreement and Privacy Policy.
Powered by the version 2.3.6
|
2022-08-18 19:31:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3441884219646454, "perplexity": 1813.432084792513}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573399.40/warc/CC-MAIN-20220818185216-20220818215216-00630.warc.gz"}
|
https://yukwan.cn/summarize/record-a-hard-dp-problem/
|
LeetCode 85. Maximal Rectangle
This problem is a very hard problem, and I spend a lot of time to understand it. Im afraid someday may forgot the solution, so decide to record it.
description
85. Maximal Rectangle
degree of difficulty: $\color{red}{Hard}$
Given a 2D binary matrix filled with 0’s and 1’s, find the largest rectangle containing only 1’s and return its area.
Example:
Input:
[
["1","0","1","0","0"],
["1","0","1","1","1"],
["1","1","1","1","1"],
["1","0","0","1","0"]
]
Output: 6
solution
First of all, why it is a DP problem? and if it is a DP, how we can find the formal of this problem, actually I have no idea, and even after reading the answer, I still very confusing, I really spent a lot of energy to understand it.
It is a DP problem, because, when we using height[i] record the current number of continuous ‘1’ in column i; left[i] record the left most index j which satisfies that for any index k from j to i, height[k] >= height[i], which means that for every j -> i, the height[i] is the minimum one; right[i] record the right most index j which satisfies that for any index k from i to j, height[k] >= height[i], which means that for every i -> j, the height[i] is the minimum one; I misunderstand it for a long time. last the maximum of height[i] * (right[i] - left[i] + 1) be the answer.
Why should we use this definition? because it represent the max size of a rectangle of current height, it can make sure that we are computing a rectangle not another shape.
code
class Solution {
public int maximalRectangle(char[][] matrix) {
if (matrix == null || matrix.length == 0 || matrix[0] == null || matrix[0].length == 0) return 0;
int m = matrix.length, n = matrix[0].length, maxArea = 0;
int[] left = new int[n];
int[] right = new int[n];
int[] height = new int[n];
Arrays.fill(right, n - 1);
for (int i = 0; i < m; i++) {
int rB = n - 1;
for (int j = n - 1; j >= 0; j--) {
if (matrix[i][j] == '1') {
right[j] = Math.min(right[j], rB);
} else {
right[j] = n - 1;
rB = j - 1;
}
}
int lB = 0;
for (int j = 0; j < n; j++) {
if (matrix[i][j] == '1') {
left[j] = Math.max(left[j], lB);
height[j]++;
maxArea = Math.max(maxArea, height[j] * (right[j] - left[j] + 1));
} else {
height[j] = 0;
left[j] = 0;
lB = j + 1;
}
}
}
return maxArea;
}
}
|
2019-02-18 12:25:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5104951858520508, "perplexity": 2328.01508562928}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247486480.6/warc/CC-MAIN-20190218114622-20190218140622-00558.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/algebra/algebra-1/chapter-2-solving-equations-2-1-solving-one-step-equations-lesson-check-page-85/9
|
## Algebra 1
Published by Prentice Hall
# Chapter 2 - Solving Equations - 2-1 Solving One-Step Equations - Lesson Check: 9
#### Answer
x = 11$\frac{1}{3}$ y = 10$\frac{2}{3}$
#### Work Step by Step
$\frac{1}{3}y$ - $\frac{2}{3}x$ = -4 (Equation 1) 5x - 4y = 14 (Equation 2) Multiply equation (1) by 3 y - 2x = -12 (Equation 3) 5x - 4y = 14 Multiply equation (1) by 4 4y - 8x = -48 5x - 4y = 14 Rearrange to make y the subject 4y = 8x - 48 4y= 5x - 14 Therefore: 8x - 48 = 5x - 14 3x = 34 x = 11$\frac{1}{3}$ Substitute x in a previous equation to find y: y - 2x = -4 (Rearranged to =>) y = -12 + 2x y = -12 + 2(11$\frac{1}{3}$) y = 10$\frac{2}{3}$ Proof: $\frac{1}{3}$(10$\frac{2}{3}$) - $\frac{2}{3}$(11$\frac{1}{3}$) = -4 5(11$\frac{1}{3}$)-4(10$\frac{2}{3}$) = 14
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback.
|
2018-06-25 10:22:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5590366721153259, "perplexity": 1417.891251190393}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267867644.88/warc/CC-MAIN-20180625092128-20180625112128-00585.warc.gz"}
|
https://en.m.wikiquote.org/wiki/Quasar
|
# Quasar
active galactic nuclei containing a massive black hole
A quasar (also called quasi-stellar object, abbreviated QSO) is an extremely luminous active galactic nucleus (AGN), in which a supermassive black hole with mass ranging from millions to billions of stellar masses (denoted M) is surrounded by a gaseous accretion disk. Active galactic nuclei are the most luminous persistent sources of electromagnetic radiation in the universe.
An artist’s impression showing the surroundings of a supermassive black hole, typical of that found at the heart of many galaxies.
A growing black hole, called a quasar, can be seen at the center of a faraway galaxy in this artist's concept.
## Quotes
• We seem to live in a remarkably economical X-ray universe, in that the observed cosmic X-ray background (CXRB) is produced with almost the least cosmic effort possible. It is not dominated by luminous obscured quasars thundering out huge amounts of power at z ≈ 2–4 but rather by moderate-luminosity, obscured AGNs at z ≈ 0.5–2.
• William N. Brandt and David M. Alexander: (2010). "Supermassive black-hole growth over cosmic time: Active galaxy demography, physics, and ecology from Chandra surveys". Proceedings of the National Academy of Sciences 107 (16): 7184–7189. ISSN 0027-8424. DOI:10.1073/pnas.0914151107.
• LOFAR is a new European radio interferometer operating at frequencies 15–240 MHz (van Haarlem et al., 2013) and represents a milestone in terms of radio survey speed compared to existing telescopes. The LOFAR Surveys Key Science Project aims to carry out a tiered survey. ... These surveys will open the low-frequency electromagnetic spectrum for exploration, allowing unprecedented studies of the radio population across cosmic time and opening up new parameter space for searches for rare, unusual objects such as high-z radio quasars in a systematic way. Perhaps, one of the most tantalizing prospects are the 21 cm absorption line measurements using LOFAR along sight lines toward z > 6 radio quasars.
• Edwin Retana-Montenegro and Huub Röttgering: (2018). "On the selection of high-z quasars using LOFAR observations". Frontiers in Astronomy and Space Sciences 5. DOI:10.3389/fspas.2018.00005.
• The continuum spectrum of a quasar can often be described, over a broad frequency range, by a power law of the form ${\displaystyle S}$ ${\displaystyle \nu }$ ${\displaystyle \propto }$ ${\displaystyle \nu }$ ${\displaystyle \alpha }$ ... where ${\displaystyle \alpha }$ is the spectral index. ${\displaystyle \alpha }$ = 0 corresponds to a flat spectrum, whereas ${\displaystyle \alpha }$ = 1 describes a spectrum in which the same energy is emitted in every logarithmic frequency interval.
|
2021-09-18 17:06:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 8, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6407355666160583, "perplexity": 3402.9679119091697}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056548.77/warc/CC-MAIN-20210918154248-20210918184248-00243.warc.gz"}
|
https://math.stackexchange.com/questions/4510187/how-to-create-a-function-f-such-that-fx-y-is-high-when-either-x-or-y-i
|
# How to create a function $f$ such that $f(x,y)$ is high when either $x$ or $y$ is?
There are two variables, let's say $$x$$ and $$y$$.
I want to come up with a function $$f:[0,5]\times[0,5]\longrightarrow[0,5]$$ that respects the following rules:
1. If $$x$$ is high (close to the maximum value of 5) and $$y$$ is low (close to $$0$$), $$f(x,y)$$ should results in a high value (close to 5). The same would happen for the reverse ($$x$$ with a low value and $$y$$ with a high value)
2. If $$x$$ is high and $$y$$ is high, $$f(x,y)$$ will also be high; it would be useful that the values resulted would be higher than those from point $$(1)$$.
3. If $$x$$ is low and $$y$$ is low, $$f(x,y)$$ should result in low values.
I'm having troubles starting imagining what mathematical functions would be useful, as I am not even close to being advanced in the concepts of Mathematics. Any ideas would be appreciated, at least in the sense of finding any information that could get me started.
• you defined your function $f :[0,5]\rightarrow [0,5]$, and then you talk about two input variable $x$ and $y$, i guess you meant $f :[0,5]\times [0,5] \rightarrow [0,5]$ ? Aug 11 at 11:47
• that is correct, I'll add the change. Aug 11 at 11:48
• If you have precise value for your function $f$, you could use something like multivariate interpolation. Aug 11 at 11:52
• If you don't know the concept of interpolation you should check out first 1D interpolation like this one. Aug 11 at 11:57
• If $f(x,y)=f(y,x)$, and $f(x,y)$ is linear with respect to $x$ (this was not part of the problem statement, it was added by me to ensure uniqueness of solution), then $f(x,y)=axy+b(x+y)+c$. Let $f(0,0)=0$, $f(5,5)=5$, then $c=0$, $5a+2b=1$. If one wants to set value $f(0,5)=d$, then $b=\frac{d}{5}$, $a=\frac{5-2d}{25}$. At $d=5$ result is formula from my previous comment, but OP can take $f(0,5)=d=4.9$ or something else what needed. Aug 12 at 7:14
You can take, for instance,$$f(x,y)=\frac45\max\{x,y\}+\frac15\min\{x,y\}.$$It has all the properties that you are interested in.
One way to model this could be to use the distance from the line: $$x+y=0$$ through $$(0,0)$$. This line has normal vector: $$\vec n= \begin{pmatrix} 1\\ 1 \end{pmatrix}$$ and the distance of a given point $$(x,y)$$ to this line is proportional to $$t=\vec n\cdot\langle x,y \rangle=x+y$$ so in case we want something like: $$f(5,5)=5\\ f(5,0)=4\\ f(0,0)=0$$ we can connect this to a single dimensional function: $$f(x,y)=g(x+y)$$ where $$g(10)=5\\ g(5)=4\\ g(0)=0$$ A way to achieve this could be to add the extra requirement $$g'(10)=0$$ so that $$f$$ has maximum at $$(x,y)=(5,5)$$ and build $$g(0)=0$$ in: $$g(t)=at^3+bt^2+ct$$ Hence \begin{align} g(10)&=1000a+100b+10c&&=5\\ g(5)&=125a+25b+5c&&=4\\ g'(10)&=300a+20b+c&&=0 \end{align} which can be solved for $$a,b,c$$ to have: $$f(x,y)=g(x+y)=0.002(x+y)^3-0.09(x+y)^2+1.2(x+y)$$
See this link to look at interactive GeoGebra-applet with 3D-plot of this function
ADDENDUM: As can be seen both from the other answer and from comments, there will be (infinitely) many ways to satisfy your requirements, but to point you towards handling the additional requirement stated in your comment below this post, you could simply add a modifier to the above solution which takes the distance to the perpendicular line: $$x-y=0$$ as input. This distance is (similarly) proportional to $$t=x-y$$, and so we need a modifier function $$m(x,y)=h(x-y)=h(t)$$ that satisfies: $$h(0)=0\\ h(5)=m(5,0)\\ h(-5)=m(0,5)$$ so just choose which modification you want at $$(5,0)$$ and $$(0,5)$$ and match for instance a quadratic function as $$h$$: $$h(t)=\alpha t^2+\beta t+\gamma$$ and combine: \begin{align} q(x,y) &=f(x,y)+m(x,y)\\ &=g(x+y)+h(x-y)\\ &=a(x+y)^3+b(x+y)^2+c(x+y)+\alpha(x-y)^2+\beta(x-y)+\gamma \end{align} but be a little careful - if $$h(t)$$ increases too rapidly away from $$h(0)$$ to one side, then $$q$$ may exceed a value of $$5$$.
Here is GeoGebra-applet with example of this technique
• Very interesting and detailed answer. With the risk of going a bit outside the boundaries of the question: is there a way to introduce some kind of bias towards $x$ or $y$? In the sense that $f(x,y)$ should be greater than $f(y,x)$ or the opposite. Aug 16 at 13:06
• @Ionut-AlexandruBaltariu: I added an extra section about this. Generally, you can always combine two one-dimensional functions $g(x+y)$ and $h(x-y)$ to create formula for change in value when you move perpendicular to the two lines $x+y=0$ and $x-y=0$. This gives you a lot of control over what you want to happen. Just be careful, because combinations may escape the max/min values, since we only controlled them in points $(0,0),(5,0),(0,5)$ and $(5,5)$. Aug 17 at 8:41
|
2022-10-07 12:44:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 46, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8572801947593689, "perplexity": 197.11111795120058}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00096.warc.gz"}
|
http://hackage.haskell.org/package/dovin-0.1.0.1/docs/Dovin-Builder.html
|
dovin-0.1.0.1: A proof assistant for Magic: The Gathering puzzles.
Dovin.Builder
Contents
Description
Functions for adding new cards to the board.
withLocation Hand $do withAttributes ["angel", token]$ addCreature (4, 4) "Angel"
Synopsis
# Builders
Each of these terminates a build chain, and will add a card with the specified type to the board.
# Fluid interface
These methods can be chained together to specify different properties of the card to be created.
Perform action as the specified player.
Add an attribute to the created card, as identified by a string. Attributes with that special meaning to Dovin built-ins (such as flying) are defined in Dovin.Attributes.
Helper version of withAttribute for adding multiple attributes at a time.
:: CardMatcher A matcher that must apply to this card for this affect to apply. matchInPlay is a typical value. -> (Card -> CardMatcher) Given the current card, return a matcher that matches cards that this affect applies to. -> (Card -> GameMonad Card) Apply an effect to the given card. -> GameMonad () -> GameMonad ()
|
2019-04-18 16:06:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24404500424861908, "perplexity": 4865.921396849442}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578517682.16/warc/CC-MAIN-20190418141430-20190418163430-00508.warc.gz"}
|
https://www.physicsforums.com/threads/questions-about-the-formula-for-acceleration.772904/
|
# Questions about the formula for acceleration
1. Sep 25, 2014
### Crovati
1. The problem statement, all variables and given/known data
I know that acceleration = change in velocity/change in time. Wouldn’t acceleration therefore also = distance/time2?
I thought this was true until i learned the formula for motion
s=ut*1/2at2
where
s = distance and u = initial velocity
Here, if you re-arrange the formula (and assuming that initial velocity =0), a= 2s/t2
So which of these formulas are right?
And if i were to create a graph where the slope can help find the acceleration, should i graph 2*distance vs t2 or just distance vs t2?
2. Sep 25, 2014
### haruspex
Average acceleration is $\Delta v / \Delta t$, and average velocity is $\Delta s / \Delta t$. But $\Delta v$ is not average velocity, so you cannot combine those two equations.
The SUVAT formula you quote is only valid for constant acceleration.
Last edited: Sep 25, 2014
3. Sep 25, 2014
### rcgldr
Assume constant acceleration, using va for v average:
v1 = v0 + a Δt
va = 1/2 (v0 + v1)
s1 = s0 + va Δt = s0 + 1/2 (v0 + v1) Δt = s0 + 1/2 (v0 + (v0 + a Δt)) Δt = s0 + v0 Δt + 1/2 a Δt^2
4. Sep 25, 2014
### theOrange
Acceleration IS equal to distance/time2
distance = meters (or m)
time = seconds (s)
The units for acceleration is: m/s2
5. Sep 25, 2014
### nasu
No, the fact that is has the same units does not mean that they are the same.
Work is not torque even though both are measured in N*m.
Acceleration is a measure of change in speed. If the speed is constant, you have no acceleration even if there is some distance traveled in some time.
|
2018-03-22 14:23:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7267267107963562, "perplexity": 1746.8695186788589}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647885.78/warc/CC-MAIN-20180322131741-20180322151741-00717.warc.gz"}
|
https://www.futurelearn.com/info/courses/tackling-environmental-challenges/0/steps/150695
|
# The Human Development Index
This article explores some other ways to measure development, focusing on the UN's Human Development Index.
Gross Domestic Product (GDP) measures the production of goods and services within a country. GDP tells us about the economic development of a country, it shows us how the economy is changing and how much money is being spent.
But it doesn’t tell us about people’s quality of life, equality across the nation or education.
## Gross Domestic Product
GDP doesn’t take into consideration other factors such as services that happen where money doesn’t change hands, such as caring for elderly or young relatives.
It also doesn’t take equality into account – for example in terms of income the rich may be getting richer, producing more goods and spending more money, but this could mask other parts of the population that are getting poorer.
Not all spending and production is good either – GDP can go up during a war because so much spending goes towards arms and ammunition. There is also evidence to suggest that individuals who earn more money do not get happier the more they earn. Similarly, people within a country may not get any happier as the country produces more goods!
What is equality? Equality is ensuring that every individual has an equal opportunity to make the most of their lives and talents.
Equity is different – equal treatment will not necessarily guarantee equal results. For example, someone from a developing country may need more help getting to the same level of education as someone from a developed country. Providing this person with more assistance to achieve this level of equality is creating equity.
So in what other ways can development be measured?
## What is The Human Development Index?
The Human Development Index (HDI) has been developed by the United Nations to measure a broader degree of development than just economic growth. It was created to show that it is the people and their capabilities that should illustrate the development of a country, not just its economic performance, and it gives a more holistic view of development.
The HDI has three dimensions:
1. A long and healthy life – indicated by life expectancy at birth
2. Knowledge – indicated by expected years of schooling and average years of schooling
3. A decent standard of living – indicated by GNI per capita (PPP $) Each of these dimensions is given a score, and these are combined to give an overall HDI score, and then these are ranked. In 2020, Norway was top of the table, with a life expectancy of 82.4 years, 18.1 years of expected schooling, 12.9 years of average schooling, and a GNI per capita of$ 66,494. In contrast, Niger was bottom of the table, with a life expectancy of 62.4 years, 6.5 years of expected schooling, 2.1 years of average schooling, and a GNI per capita of $1,201. This illustrates how unequal the opportunities are for two people, one born in Norway and the other in Niger. What is PPP$? Across all the countries that are assessed in the HDI there are many different price levels and currencies. In order to be able to compare them the data needs to be converted into a common currency. PPP stands for Purchasing Power Parity and it is measured in dollars. One PPP dollar has the same purchasing power in the domestic economy of a country as USD \$ 1 has in the United States, according to 2011 international prices. This is the same value that is used to measure GDP and GNI.
Have a look at the HDI table. Can you see where countries have high GNI but are ranked lower than others with better life expectancy?
## Happiness as an indicator of development
There are other ways to measure development. For example, the World Happiness Report is an annual publication of the United Nations Sustainable Development Network. It bases its rankings on surveys from the Gallup World Poll where samples of each population assess their own wellbeing.
The report explores factors that may contribute to these assessments, such as GDP per capita, social support, healthy life expectancy, freedom to make life choices, generosity and freedom from corruption.
According to the 2020 report, which averages data from 2017-2019, Finland is the happiest nation, and Afghanistan the least happy. You can explore the Happiness Report and see how the country rankings compare to the HDI. The latest report for 2021 focuses on the impact of the Covid-19 pandemic.
To summarise, development is striving to achieve decent living standards and good quality of life in a fair, equal and environmentally sustainable world.
|
2022-12-01 13:01:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19434861838817596, "perplexity": 1823.0793138632998}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710813.48/warc/CC-MAIN-20221201121601-20221201151601-00689.warc.gz"}
|
http://mathhelpforum.com/differential-geometry/150900-norm-matrix-algebra.html
|
# Thread: Norm in Matrix algebra
1. ## Norm in Matrix algebra
let $A\in M_n(\mathbb{C})$, show that
$\|A\|^2=\max\{\lambda: \det(\lambda-A^*A)=0\}=\max\{\lambda: \det(\lambda-AA^*)=0\}$
2. Originally Posted by Mauritzvdworm
let $A\in M_n(\mathbb{C})$, show that
$\|A\|^2=\max\{\lambda: \det(\lambda-A^*A)=0\}=\max\{\lambda: \det(\lambda-AA^*)=0\}$
The proof consists of two steps. (1) $\|A\|^2 = \|A^*A\|$; (2) A*A is a positive definite matrix and so its norm is equal to its largest eigenvalue. Those two steps together show that $\|A\|^2=\max\{\lambda: \det(\lambda-A^*A)=0\}$.
For the last part, use the fact that $\|A\| = \|A^*\|$ to deduce that $\|A\|^2=\max\{\lambda: \det(\lambda-AA^*)=0\}$.
|
2016-09-30 23:48:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9792605638504028, "perplexity": 347.6671703864381}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738662430.97/warc/CC-MAIN-20160924173742-00084-ip-10-143-35-109.ec2.internal.warc.gz"}
|
https://drogist-voorschoten.nl/geoff-mcfetridge-bxcgs/dividing-radical-expressions-825546
|
0, we! Which indicates the degree of the time, the index of each radical best experience each other solve! Radical from the denominator you want to attend yet radicals to be the same final expression to start, the. To find the right school ( 9.4.3 ) – multiply radicals with the same index, it is indicating cube... Not involve a radical in the denominator a simplified radical expression over another expression... Free radical equation calculator - solve radical equations step-by-step this website uses cookies ensure. Give the quotient Property of radical expressions with the same and that the denominator is a fourth root and represent! Are divisible by x. x squared divided by x is just x. x divided by x is just x. divided. Combine two radicals other trademarks and copyrights are the Property of radical expressions '', please here! Numerator as denominator solution: in this Example, the numerator and.! Section 3.4: multiply and divide radical expressions insides ) my question is really on,. Radical is the answer tests, quizzes, and more with flashcards, games and! } { 4 \sqrt { 15 } } { 4 \sqrt { 10 } } / { square.. ( 2x ) 2 ( 5y ) 4 32 fraction with radicals as the conjugate figure... This Property can be used the other way around to split a radical in denominator. Try refreshing the Page, or contact customer support expression by an appropriate fraction that is equivalent one... First step is to see if there is no denominator to rationalize the denominator -- then... Dividing the radical symbol, which is a very standard thing in math divided by another square root, first. Dividing rationalizing Higher Indices Et cetera above, we use the quotient Property of radical expressions again for easy.! A quotient instead of a product variables and exponents and personalized coaching to you... Order to move on Now we divide the denominator is a fourth root this second case, we used... Learn more, visit our Earning Credit Page everything is simplified and the resulting term is multiplied to left... No radical in it, we have used the quotient Property of radical expressions combine two radicals into one another... Expressions a common way of dividing the radical from the stuff given above, dividing radical expressions have 10... Period ©u 32f0 t1u2 j 9Kxu Vt8a5 sS8onfet8w 4a Ir 8e3 CLlLfCj dividing radical expressions 14 9 index of each radical of! An2.5: I can... you only need to use it to solve these radical expressions dividing radical expressions thousands. Move on final answer is its conjugate this Property ‘ in reverse ’ simplify... A BS in Biological Sciences than 2. result will not involve radical! Root divided by x is just x. x squared divided by x is just x. x divided... Public or Private college of their respective owners, quizzes, and more with,... – add and Subtract radical expressions '' and thousands of other math skills, tidy and extremely useful a.... Given fraction by its conjugate vice versa ) the answer are the of., divide the numerator and denominator can be simplified by division students moving around the room and Working to! Have used the quotient Property of radical expressions with numerical radicands ( maximum index of each radical is the and., games, and more with flashcards, games, and personalized coaching to help you succeed root by. A fraction with no radical in the denominator Page 150 Example 8 want to attend?... We give the quotient rule to divide radical expressions on these 8 Station Cards:! First step is to rationalize the fraction add radicals that have different index or radicand more flashcards... 13 6 # 14 9 index of both radicals is 3 13 4 # 6 simplify of we. With flashcards, games, and b represent nonnegative numbers, where b ≠ 0, then have... College you want to attend yet students could use this tool for dividing radical expressions reference or checking. When multiplied to the left of the exponent properties has an MS in Chemistry and a BS in Biological.... Dealing with a quotient is the Difference Between Blended Learning & Distance Learning and copyrights are the Property of expressions. The fraction quizzes and exams multiplied to the -- make a radical into two if 's. Index of each radical is the superscript number to the Community 2 this video looks at multiplying and dividing expressions! Will not involve a radical in it, we have the denominator by the square root - radical... Higher Indices Et cetera Rationaldenominator 1 … improve your math knowledge with free questions in divide! Out the radical symbol, which makes simplifying easy } { 4 \sqrt { 15 } }, the. Website uses cookies to ensure you get a whole number world are able to understand each.! Quotient rule numerator by, we use the quotient Property of radical expressions, we must multiply the entire by... Denominators are equal to 0 and radical expressions worksheet, students simplify given radical ''. After having gone through the stuff given above, we use the quotient Property radical... Expressions with the same index, we have to multiply the numerator,. Particular quiz is a fourth root expression can not have a radical into two if there are any terms can. * when dividing radicals, we have this particular quiz is a square root seen multiple times,... Cancel out the radical then is and is Now, we have one rational expression by! ) and divide under the radicals to be 2, or contact support. Higher Indices Et cetera 3 and it really just comes out of the radicals the... Be simplified by division dealing with more complicated expressions involving radicals, we use the quotient Property their! And it really just comes out of the radical simplifying, you can use the quotient dividing radical expressions... Look into the next step is to create a fraction that is equivalent to one video. Or more operations to simplify roots of fractions with free questions in divide expressions! Get a whole number expression divided by another square root for both numerator and.... Log in or sign up to add this lesson you must have the denominator is not zero student sheet. Per the quotient Property of radical expressions Cookie Policy odd, and coaching. Click here rule and how to divide radical expressions more with flashcards games! Indicating the cube root of 27 two radical expressions Page 150 Example 8 multiply! Value greater than 2. index, we use the quotient rule for dividing radical. Number to the denominator, you can use the quotient rule for radicals Example 8 a rational... Start, identify the index of each radical is the same index and radicands addthem. Sign -- and then we have to rationalize the fraction needed will the. College and save thousands off your degree 9.4.3 ) – add and radical... Case, the fraction needed will be the same procedure as multiplying radicals found for - dividing expressions. Look into some examples problems based on the above concepts the quotients of two.... Anyone form High school Page to learn more, visit our Earning Credit Page 3 3.4... Particular quiz is a series of math problems progress by passing quizzes and exams from denominator! N'T have to rationalize the denominator is not zero Algebra video tutorial explains how to radical. Division problem rationalize, the next Example problem on divide radical expressions ( rationalizing denominator... Physio Margaret River, Hot Deal Website, Banyan Tree Residences Floor Plan, Chan Express Menu, Numbers 11-20 Worksheets Pdf, Sau Cafe Hours, Dunkin Donuts Calories, Jungle Cactus Soil, " /> 0, we! Which indicates the degree of the time, the index of each radical best experience each other solve! Radical from the denominator you want to attend yet radicals to be the same final expression to start, the. To find the right school ( 9.4.3 ) – multiply radicals with the same index, it is indicating cube... Not involve a radical in the denominator a simplified radical expression over another expression... Free radical equation calculator - solve radical equations step-by-step this website uses cookies ensure. Give the quotient Property of radical expressions with the same and that the denominator is a fourth root and represent! Are divisible by x. x squared divided by x is just x. x divided by x is just x. divided. Combine two radicals other trademarks and copyrights are the Property of radical expressions '', please here! Numerator as denominator solution: in this Example, the numerator and.! Section 3.4: multiply and divide radical expressions insides ) my question is really on,. Radical is the answer tests, quizzes, and more with flashcards, games and! } { 4 \sqrt { 15 } } { 4 \sqrt { 10 } } / { square.. ( 2x ) 2 ( 5y ) 4 32 fraction with radicals as the conjugate figure... This Property can be used the other way around to split a radical in denominator. Try refreshing the Page, or contact customer support expression by an appropriate fraction that is equivalent one... First step is to see if there is no denominator to rationalize the denominator -- then... Dividing the radical symbol, which is a very standard thing in math divided by another square root, first. Dividing rationalizing Higher Indices Et cetera above, we use the quotient Property of radical expressions again for easy.! A quotient instead of a product variables and exponents and personalized coaching to you... Order to move on Now we divide the denominator is a fourth root this second case, we used... Learn more, visit our Earning Credit Page everything is simplified and the resulting term is multiplied to left... No radical in it, we have used the quotient Property of radical expressions combine two radicals into one another... Expressions a common way of dividing the radical from the stuff given above, dividing radical expressions have 10... Period ©u 32f0 t1u2 j 9Kxu Vt8a5 sS8onfet8w 4a Ir 8e3 CLlLfCj dividing radical expressions 14 9 index of each radical of! An2.5: I can... you only need to use it to solve these radical expressions dividing radical expressions thousands. Move on final answer is its conjugate this Property ‘ in reverse ’ simplify... A BS in Biological Sciences than 2. result will not involve radical! Root divided by x is just x. x squared divided by x is just x. x divided... Public or Private college of their respective owners, quizzes, and more with,... – add and Subtract radical expressions '' and thousands of other math skills, tidy and extremely useful a.... Given fraction by its conjugate vice versa ) the answer are the of., divide the numerator and denominator can be simplified by division students moving around the room and Working to! Have used the quotient Property of radical expressions with numerical radicands ( maximum index of each radical is the and., games, and more with flashcards, games, and personalized coaching to help you succeed root by. A fraction with no radical in the denominator Page 150 Example 8 want to attend?... We give the quotient rule to divide radical expressions on these 8 Station Cards:! First step is to rationalize the fraction add radicals that have different index or radicand more flashcards... 13 6 # 14 9 index of both radicals is 3 13 4 # 6 simplify of we. With flashcards, games, and b represent nonnegative numbers, where b ≠ 0, then have... College you want to attend yet students could use this tool for dividing radical expressions reference or checking. When multiplied to the left of the exponent properties has an MS in Chemistry and a BS in Biological.... Dealing with a quotient is the Difference Between Blended Learning & Distance Learning and copyrights are the Property of expressions. The fraction quizzes and exams multiplied to the -- make a radical into two if 's. Index of each radical is the superscript number to the Community 2 this video looks at multiplying and dividing expressions! Will not involve a radical in it, we have the denominator by the square root - radical... Higher Indices Et cetera Rationaldenominator 1 … improve your math knowledge with free questions in divide! Out the radical symbol, which makes simplifying easy } { 4 \sqrt { 15 } }, the. Website uses cookies to ensure you get a whole number world are able to understand each.! Quotient rule numerator by, we use the quotient Property of radical expressions, we must multiply the entire by... Denominators are equal to 0 and radical expressions worksheet, students simplify given radical ''. After having gone through the stuff given above, we use the quotient Property radical... Expressions with the same index, we have to multiply the numerator,. Particular quiz is a fourth root expression can not have a radical into two if there are any terms can. * when dividing radicals, we have this particular quiz is a square root seen multiple times,... Cancel out the radical then is and is Now, we have one rational expression by! ) and divide under the radicals to be 2, or contact support. Higher Indices Et cetera 3 and it really just comes out of the radicals the... Be simplified by division dealing with more complicated expressions involving radicals, we use the quotient Property their! And it really just comes out of the radical simplifying, you can use the quotient dividing radical expressions... Look into the next step is to create a fraction that is equivalent to one video. Or more operations to simplify roots of fractions with free questions in divide expressions! Get a whole number expression divided by another square root for both numerator and.... Log in or sign up to add this lesson you must have the denominator is not zero student sheet. Per the quotient Property of radical expressions Cookie Policy odd, and coaching. Click here rule and how to divide radical expressions more with flashcards games! Indicating the cube root of 27 two radical expressions Page 150 Example 8 multiply! Value greater than 2. index, we use the quotient rule for dividing radical. Number to the denominator, you can use the quotient rule for radicals Example 8 a rational... Start, identify the index of each radical is the same index and radicands addthem. Sign -- and then we have to rationalize the fraction needed will the. College and save thousands off your degree 9.4.3 ) – add and radical... Case, the fraction needed will be the same procedure as multiplying radicals found for - dividing expressions. Look into some examples problems based on the above concepts the quotients of two.... Anyone form High school Page to learn more, visit our Earning Credit Page 3 3.4... Particular quiz is a series of math problems progress by passing quizzes and exams from denominator! N'T have to rationalize the denominator is not zero Algebra video tutorial explains how to radical. Division problem rationalize, the next Example problem on divide radical expressions ( rationalizing denominator... Physio Margaret River, Hot Deal Website, Banyan Tree Residences Floor Plan, Chan Express Menu, Numbers 11-20 Worksheets Pdf, Sau Cafe Hours, Dunkin Donuts Calories, Jungle Cactus Soil, " />
(9.4.2) – Add and subtract radical expressions. Dividing Radicals: When dividing radicals (with the same index), divide under the radical, and then divide in front of the radical (divide any values multiplied times the radicals). Since the denominator has a radical, we have to rationalize the fraction. Dividing Radical Expressions (Rationalizing the Denominator) To divide radical expressions with the same index, we use the quotient rule for radicals. After any simplifying, you need to make sure that there is no radical in the denominator. Ensure that the index of each radical is the same and that the denominator is not zero. As you become more familiar with dividing and simplifying radical expressions, make sure you continue to pay attention to the roots of the radicals that you are dividing. G O XAfl wlv ur di 2g Uh2tWsF jrZe csse 2r8v kezdT.R 8 bM fa CdNeh 7wZiQtchS tI Pnsf gi4nDi6tye T DARljgReOb0rHad a2 Y.5 Worksheet by Kuta Software LLC Kuta Software - Infinite Algebra 2 Name___________________________________ just create an account. If n is odd, and b ≠ 0, then. Improve your math knowledge with free questions in "Divide radical expressions" and thousands of other math skills. Here are the steps to dividing radical expressions. courses that prepare you to earn To divide radical expressions, we have to take separate roots for both numerator and denominator. This video looks at multiplying and dividing radical expressions (square roots). This is done by determining a term that when multiplied to the denominator will cancel out the radical. There’s nothing we can do about that. Radical Expressions App is neat, tidy and extremely useful a app. How Do I Use Study.com's Assign Lesson Feature? Quiz & Worksheet - Dividing Radical Expressions | Study.com #117518 Select a subject to preview related courses: In order to divide more complex radical expressions, we must not only divide but make sure that there is not a radical in the denominator. Free radical equation calculator - solve radical equations step-by-step This website uses cookies to ensure you get the best experience. Speaking math correctly ensures that mathematicians all around the world are able to understand each other. In this second case, the numerator is a square root and the denominator is a fourth root. Multiplying and dividing radical expressions worksheet with answers Collection. "63xy3 "7y 27. Anyone can earn (9.4.3) – Multiply radicals with multiple terms. but my question is really on multiplying, i think. If you would like to review the concepts behind radical expressions further, check out our fun lesson Dividing Radical Expressions. Ensure that the index of each radical is the same and that the denominator is not zero. © copyright 2003-2020 Study.com. "3 4x2 "3 x 30. Log in here for access. - Definition, Equations & Graphs, Parallelograms: Definition, Properties, and Proof Theorems, Addition Property of Equality: Definition & Example, Undefined Terms of Geometry: Concepts & Significance, Arithmetic Sequence: Formula & Definition, How to Solve 'And' & 'Or' Compound Inequalities, How to Divide Polynomials with Long Division, Deciding on a Method to Solve Quadratic Equations, High School Algebra I: Homework Help Resource, NY Regents Exam - Integrated Algebra: Help and Review, NY Regents Exam - Integrated Algebra: Tutoring Solution, Precalculus Algebra for Teachers: Professional Development, Algebra Connections: Online Textbook Help, McDougal Littell Algebra 1: Online Textbook Help, Prentice Hall Pre-Algebra: Online Textbook Help, OSAT Advanced Mathematics (CEOE) (111): Practice & Study Guide, AP EAMCET E & AM (Engineering, Agriculture & Medical) Study Guide, BITSAT Exam - Math: Study Guide & Test Prep, Math 99: Essentials of Algebra and Statistics. Well, what if you are dealing with a quotient instead of a product? As a member, you'll also get unlimited access to over 83,000 And like we've seen multiple times before, these rational expressions aren't defined when their denominators are equal to 0. Big Idea The main idea of this lesson is that students compare dividing radicals by hand without rationalizing and realize why rationalizing came about and how it works. Start studying Dividing radicals assignments. Get students moving around the room and working together to work with rational exponents and radical expressions on these 8 Station Cards. Improve your math knowledge with free questions in "Divide radical expressions" and thousands of other math skills. lessons in math, English, science, history, and more. Did you know… We have over 220 college We actually have one rational expression divided by another rational expression. Visit the Algebra I: High School page to learn more. Since there is no denominator to rationalize, the problem is finished. Example 1: to simplify $(\sqrt{2}-1)(\sqrt{2}+1)$ type (r2 - 1)(r2 + 1) . The quotient rule states that a radical involving a quotient is equal to the quotients of two radicals. Example 2. The only thing you can do is match the radicals with the same index and radicands and addthem together. Dividing Radical Expressions (Rationalizing the Denominator) To divide radical expressions with the same index, we use the quotient rule for radicals. Divide Radical Expressions. A quotient is the answer to a division problem. Since we can't leave the expression with a radical in the denominator, we need to rationalize it, or find something that when multiplied by the denominator will remove the radical. Conjugates & Dividing by Radicals. The quotient property of radicals requires the indices of the radicals to be the same. Categorizing Memory: Study.com Academy Early Release, Plans for a Common Core Standards Open Resource, How to Become an Auto Finance Manager: Career Roadmap, How to Become a College Professor: Education and Career Roadmap, PeopleSoft Developer Career Info for Becoming a PeopleSoft Programmer or Developer, Exercise Physiology Technician Job Info and Requirements for Becoming an Exercise Physiologist Technician, Scheduling Coordinator Job Description and Info About Becoming a Scheduling Coordinator, High School Algebra: Solving Math Word Problems, High School Algebra: Calculations, Ratios, Percent & Proportions, High School Algebra: Exponents and Exponential Expressions, High School Algebra: Properties of Exponents, High School Algebra: Algebraic Expressions and Equations, High School Algebra: Algebraic Distribution, High School Algebra: Properties of Functions, High School Algebra: Working With Inequalities, High School Algebra: Graphing and Factoring Quadratic Equations, High School Algebra: Properties of Polynomial Functions, High School Algebra: Rational Expressions, High School Algebra: Matrices and Absolute Value, High School Algebra: Data, Statistics, and Probability, Math Review for Teachers: Study Guide & Help, CAHSEE Math Exam: Test Prep & Study Guide, Praxis Core Academic Skills for Educators - Mathematics (5732): Study Guide & Practice, Common Core Math Grade 8 - Expressions & Equations: Standards, Common Core Math Grade 8 - Functions: Standards, High School Precalculus: Tutoring Solution, High School Precalculus: Homework Help Resource, Solving and Graphing Two-Variable Inequalities, How to Reduce Fractions: Terms & Overview, Quiz & Worksheet - The Golden Ratio in Math, Quiz & Worksheet - Lines with Positive Slope, Quiz & Worksheet - Properties of the Orthocenter, Quiz & Worksheet - Writing & Using Ratios, CPA Subtest IV - Regulation (REG): Study Guide & Practice, CPA Subtest III - Financial Accounting & Reporting (FAR): Study Guide & Practice, ANCC Family Nurse Practitioner: Study Guide & Practice, Pre-Civil War Sectional Conflict (1850-1861), Roles & Responsibilities of Teachers in Distance Learning. Could somebody SHOW to me how to get the answer to this problem. Most of the time, the fraction needed will be the same as the radical in the denominator. We give the Quotient Property of Radical Expressions again for easy reference. Credit Page determine if the numerator and denominator, everything is simplified and the denominator will cancel out radical... An equivalent expression is called rationalizing the denominator19 Difference Between Blended Learning & Learning! Rational expressions are n't defined when their denominators are equal to the denominator simplified. The -- make dividing radical expressions radical create a fraction with radicals if the numerator and denominator by Now we! Respective owners the best experience can ’ t add radicals that have different index radicand! Denominator containing a radical by a whole number simplifying, you need use... 13 6 # 14 9 index of each radical only need to know more about divide! Get students moving around the room and Working together to work with exponents..., divide the coefficients ( outsides ) and divide under the radicals to be hard without the radical from stuff. Radical by a whole number the denominator19, I think 've seen multiple times before, these rational expressions n't! For this dividing radical expressions on these 8 Station Cards please click here worksheet, students simplify radical! Other math skills to find the right school necessary exponent determining an equivalent expression is called the... One rational expression for easy reference we have used the quotient Property of radical expressions ( and rationalize radicals... Complicated expressions involving radicals, we must multiply the numerator and denominator can multiplied. Or sign up to add this lesson will describe the quotient rule to help solve them 3 it... Have Example 10: divide out front and divide radical expressions to simplify a fraction with in. Exponents and radical expressions ( rationalizing the denominator of mc001-1.jpg by to the... Used to combine two radicals radical then is and is Now, use. Both the numerator and denominator can be simplified by division be the same procedure as multiplying radicals this case the. Go over how to use it to solve these radical expressions '' we divide the numerator and denominator by,. Add and Subtract radical expressions ( and rationalize for radicals with an n value greater than.! University students could use this Property ‘ in reverse ’ to simplify roots of fractions sS8onfet8w 4a 8e3. Example, the problem is finished 5y ) 4 32 and b ≠ 0, b > 0, we! Which indicates the degree of the time, the index of each radical best experience each other solve! Radical from the denominator you want to attend yet radicals to be the same final expression to start, the. To find the right school ( 9.4.3 ) – multiply radicals with the same index, it is indicating cube... Not involve a radical in the denominator a simplified radical expression over another expression... Free radical equation calculator - solve radical equations step-by-step this website uses cookies ensure. Give the quotient Property of radical expressions with the same and that the denominator is a fourth root and represent! Are divisible by x. x squared divided by x is just x. x divided by x is just x. divided. Combine two radicals other trademarks and copyrights are the Property of radical expressions '', please here! Numerator as denominator solution: in this Example, the numerator and.! Section 3.4: multiply and divide radical expressions insides ) my question is really on,. Radical is the answer tests, quizzes, and more with flashcards, games and! } { 4 \sqrt { 15 } } { 4 \sqrt { 10 } } / { square.. ( 2x ) 2 ( 5y ) 4 32 fraction with radicals as the conjugate figure... This Property can be used the other way around to split a radical in denominator. Try refreshing the Page, or contact customer support expression by an appropriate fraction that is equivalent one... First step is to see if there is no denominator to rationalize the denominator -- then... Dividing the radical symbol, which is a very standard thing in math divided by another square root, first. Dividing rationalizing Higher Indices Et cetera above, we use the quotient Property of radical expressions again for easy.! A quotient instead of a product variables and exponents and personalized coaching to you... Order to move on Now we divide the denominator is a fourth root this second case, we used... Learn more, visit our Earning Credit Page everything is simplified and the resulting term is multiplied to left... No radical in it, we have used the quotient Property of radical expressions combine two radicals into one another... Expressions a common way of dividing the radical from the stuff given above, dividing radical expressions have 10... Period ©u 32f0 t1u2 j 9Kxu Vt8a5 sS8onfet8w 4a Ir 8e3 CLlLfCj dividing radical expressions 14 9 index of each radical of! An2.5: I can... you only need to use it to solve these radical expressions dividing radical expressions thousands. Move on final answer is its conjugate this Property ‘ in reverse ’ simplify... A BS in Biological Sciences than 2. result will not involve radical! Root divided by x is just x. x squared divided by x is just x. x divided... Public or Private college of their respective owners, quizzes, and more with,... – add and Subtract radical expressions '' and thousands of other math skills, tidy and extremely useful a.... Given fraction by its conjugate vice versa ) the answer are the of., divide the numerator and denominator can be simplified by division students moving around the room and Working to! Have used the quotient Property of radical expressions with numerical radicands ( maximum index of each radical is the and., games, and more with flashcards, games, and personalized coaching to help you succeed root by. A fraction with no radical in the denominator Page 150 Example 8 want to attend?... We give the quotient rule to divide radical expressions on these 8 Station Cards:! First step is to rationalize the fraction add radicals that have different index or radicand more flashcards... 13 6 # 14 9 index of both radicals is 3 13 4 # 6 simplify of we. With flashcards, games, and b represent nonnegative numbers, where b ≠ 0, then have... College you want to attend yet students could use this tool for dividing radical expressions reference or checking. When multiplied to the left of the exponent properties has an MS in Chemistry and a BS in Biological.... Dealing with a quotient is the Difference Between Blended Learning & Distance Learning and copyrights are the Property of expressions. The fraction quizzes and exams multiplied to the -- make a radical into two if 's. Index of each radical is the superscript number to the Community 2 this video looks at multiplying and dividing expressions! Will not involve a radical in it, we have the denominator by the square root - radical... Higher Indices Et cetera Rationaldenominator 1 … improve your math knowledge with free questions in divide! Out the radical symbol, which makes simplifying easy } { 4 \sqrt { 15 } }, the. Website uses cookies to ensure you get a whole number world are able to understand each.! Quotient rule numerator by, we use the quotient Property of radical expressions, we must multiply the entire by... Denominators are equal to 0 and radical expressions worksheet, students simplify given radical ''. After having gone through the stuff given above, we use the quotient Property radical... Expressions with the same index, we have to multiply the numerator,. Particular quiz is a fourth root expression can not have a radical into two if there are any terms can. * when dividing radicals, we have this particular quiz is a square root seen multiple times,... Cancel out the radical then is and is Now, we have one rational expression by! ) and divide under the radicals to be 2, or contact support. Higher Indices Et cetera 3 and it really just comes out of the radicals the... Be simplified by division dealing with more complicated expressions involving radicals, we use the quotient Property their! And it really just comes out of the radical simplifying, you can use the quotient dividing radical expressions... Look into the next step is to create a fraction that is equivalent to one video. Or more operations to simplify roots of fractions with free questions in divide expressions! Get a whole number expression divided by another square root for both numerator and.... Log in or sign up to add this lesson you must have the denominator is not zero student sheet. Per the quotient Property of radical expressions Cookie Policy odd, and coaching. Click here rule and how to divide radical expressions more with flashcards games! Indicating the cube root of 27 two radical expressions Page 150 Example 8 multiply! Value greater than 2. index, we use the quotient rule for dividing radical. Number to the denominator, you can use the quotient rule for radicals Example 8 a rational... Start, identify the index of each radical is the same index and radicands addthem. Sign -- and then we have to rationalize the fraction needed will the. College and save thousands off your degree 9.4.3 ) – add and radical... Case, the fraction needed will be the same procedure as multiplying radicals found for - dividing expressions. Look into some examples problems based on the above concepts the quotients of two.... Anyone form High school Page to learn more, visit our Earning Credit Page 3 3.4... Particular quiz is a series of math problems progress by passing quizzes and exams from denominator! N'T have to rationalize the denominator is not zero Algebra video tutorial explains how to radical. Division problem rationalize, the next Example problem on divide radical expressions ( rationalizing denominator...
|
2021-04-21 08:37:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7427446246147156, "perplexity": 1695.9749136677497}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039526421.82/warc/CC-MAIN-20210421065303-20210421095303-00273.warc.gz"}
|
https://cstheory.stackexchange.com/questions/153/computational-query-complexity-of-sq-learning
|
# Computational query complexity of SQ-learning
It is known that for PAC learning, there are natural concept classes (e.g. subsets of decision lists) for which there are polynomial gaps between the sample complexity needed for information theoretic learning by a computationally unbounded learner, and the sample complexity needed by a polynomial-time learner. (see, e.g. http://portal.acm.org/citation.cfm?id=267489&dl=GUIDE or http://portal.acm.org/citation.cfm?id=301437)
These results seem to depend on encoding a secret in particular examples, however, and so don't naturally translate into the SQ-model of learning, where the learner just gets to query statistical properties of the distribution.
Is it known whether there exist concept classes for which information-theoretic learning in the SQ model is possible with O(f(n)) queries, but computationally efficient learning is only possible with Omega(g(n)) queries for g(n) >> f(n)?
## 2 Answers
I've asked (myself) this question a while ago. At least for learning with respect to a specific distribution there is a fairly simple example of a concept class that is information theoretically SQ-learnable but is NP-hard to SQ learn. Let \phi a binary encoding of a SAT instance and y be its lexicographically first satisfying assignment (or 0^n is the instance is unsatisfiable). Now let f(\phi) be a function that over one half of the domain is the MAJ(\phi) and over the second half of the domain equals PAR(y). Here MAJ is the majority function over variables which are set to 1 in the string \phi and PAR(y) is the parity function over variables which are set to 1 in the string y. Let F be the class of functions obtained in this way. To SQ learn F over the uniform distribution U one only needs to learn majorities (which is easy) to find \phi and then find y. On the other hand, it is fairly easy to reduce SAT to SQ learning of F (to any accuracy noticeably greater than 3/4) over the uniform distribution. The reason for this, naturally, is that parities are essentially "invisible" to SQs and hence it is necessary to solve SAT to learn F.
This is a nice question. The power of the statistical query model is precisely the ability to prove unconditional lower bounds for learning with SQ -- for example, parity is not learnable with a polynomial number statistical queries.
I am not aware of results of the form you ask, but perhaps we are missing something obvious...
|
2021-04-12 11:57:01
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8090056777000427, "perplexity": 694.8353676719886}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038067400.24/warc/CC-MAIN-20210412113508-20210412143508-00490.warc.gz"}
|
http://www.madebymark.com/madebymark/2013/2/1/achievement-requires-doing-something.html
|
# Achievement Requires Doing Something
Let's discuss it. Let's outline a plan. Let's collaborate. Let's have a vision session. Let's write our mission statement. Let's build mindshare. Let's identify all the stakeholders. Let's write some guidelines. Let's establish a policy. Let's write a creative brief. Let's have a meeting.
Doing something actually requires doing something! It means tackling the hard work of making something happen. It's much easier and much safer to sit around and have intellectual conversations, to gather large databases, to invest in technical infrastructure -- and never actually implement anything.
|
2013-12-06 14:39:53
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9418162703514099, "perplexity": 4572.699714427714}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163051984/warc/CC-MAIN-20131204131731-00011-ip-10-33-133-15.ec2.internal.warc.gz"}
|
https://proofwiki.org/wiki/Definition:Simple_Group
|
# Definition:Simple Group
## Definition
A group $G$ is simple if and only if it has only $G$ and the trivial group as normal subgroups.
That is, if the composition length of $G$ is $1$.
## Also see
• Results about simple groups can be found here.
|
2021-04-17 18:28:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8920994997024536, "perplexity": 450.9623560219679}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038461619.53/warc/CC-MAIN-20210417162353-20210417192353-00207.warc.gz"}
|
https://www.greencarcongress.com/2021/07/20210717-mit.html
|
## MIT study finds materials risk exposure increases significantly with vehicle electrification
##### 17 July 2021
Modern automobiles are built with more than 2,000 different compounds comprising 76 different elements. Now, a study by a team from MIT’s Materials Systems Laboratory, with support from Ford, provides insight into how electrification is changing vehicle composition and how that change is driving supply risk vulnerability.
The study, published in Environmental Science & Technology, provides the first comprehensive, high-resolution (elemental- and compound-level) snapshot of material use in both conventional and hybrid electric vehicles (HEVs) using a consistent methodology.
The team analyzed part-level data of material use for seven current year models, ranging from internal combustion engine vehicles (ICEV) to plug-in hybrid vehicles (PHEVs), all provided by Ford. The researchers devised a metric of vulnerability, referred to as exposure, which captures economic importance and susceptibility to price changes.
Among the findings were that exposure increases from $874 per vehicle for ICEV passenger vehicles to$2,344 per vehicle for SUV PHEVs. A shift to a fully PHEV fleet would double automaker exposure, adding approximately \$1 billion per year of supply risk to a hypothetical fleet of a million vehicles.
The increase in exposure is largely not only due to the increased use of battery elements such as cobalt, graphite, and nickel but also some more commonly used materials, most notably copper.
Resources
• Karan Bhuwalka, Frank R. Field, Robert D. De Kleine, Hyung Chul Kim, Timothy J. Wallington, and Randolph E. Kirchain (2021) “Characterizing the Changes in Material Use due to Vehicle Electrification” Environmental Science & Technology doi: 10.1021/acs.est.1c00970
|
2023-02-07 12:22:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2723963260650635, "perplexity": 9154.491715636252}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500456.61/warc/CC-MAIN-20230207102930-20230207132930-00562.warc.gz"}
|
http://www.physicsforums.com/showthread.php?s=0b4f1ce1592b8e654851fb556282bf6b&p=4294635
|
## Acceleration, bicyclists and ski racing
So I'm a 195# cyclist, coasting down a hill. I pass the 120 # cyclist who is also coasting. I'm not sure why. Ignoring wind resistance and friction - which I believe aren't significant here - shouldn't we accelerate at the same rate? I believe it has to do with the fact that I'm carrying more momentum, but I can't quantify it. It's the same deal as downhill skiers, they tend to be larger because with more weight, they carry more momentum into the flats. Any thoughts?
PhysOrg.com physics news on PhysOrg.com >> Study provides better understanding of water's freezing behavior at nanoscale>> Soft matter offers new ways to study how ordered materials arrange themselves>> Making quantum encryption practical
Your problem is with "ignoring wind resistance and friction". They certainly are important! If there were no wind resistance and friction, you would both go at the same rate. You go faster because there is wind resistance and friction.
Thanks, Wind resistance and friction definitely have an impact, but with me being the larger rider with a presumably larger cross section, I would think I have more wind resistance than the smaller rider. The friction of my bike's bearings and tires on the road, has got to be negligible. I think it has to do with momentum, P=MV, but I can't reconcile F=MA.
## Acceleration, bicyclists and ski racing
Quote by pheadden Thanks, Wind resistance and friction definitely have an impact, but with me being the larger rider with a presumably larger cross section, I would think I have more wind resistance than the smaller rider. The friction of my bike's bearings and tires on the road, has got to be negligible. I think it has to do with momentum, P=MV, but I can't reconcile F=MA.
Yes, being larger you presumably have a larger cross-section. But assume that you have two riders of exactly the same shape, but Rider B is scaled up relative to rider A by a factor X. Rider B's mass will be X^3 times larger than Rider A, so the force pulling him downhill will be X^3 times larger. But his cross-sectional area will be only X^2 times larger, so the wind resistance holding him back will only increase by X^2. So the net acceleration of Rider A will be:
$$a_A = \frac{F_G - F_W}{M}$$
while the net acceleration of Rider B will be:
$$a_B = \frac{X^3 F_G - X^2 F_W}{X^3M} = \frac{F_G - \frac{F_W}{X}}{M}$$
So Rider B will accelerate faster.
Keep in mind that if there were no air resistance and friction, you would basically keep accelerating downhill and never hit a top speed.
Thanks, but my experience is that the heavier rider, Rider A, passes rider B while coasting down hill, even if they start at the same velocity, v0. PHyzguy's analysis indicates the opposite should occur. I believe Newton's 2nd law of motion indicates that in a vacuum (no friction/resistance), a bowling ball and a feather will accelerate at the same rate. Thus if they are both dropped from a building, they will hit the ground at the same time. So I'm still confused.
No, you've missed it. Rider B is the bigger, heavier rider. He is X times larger in scale and X^3 times heavier than rider A. X is a number greater than 1. For the case you gave with Rider A at 120 pounds, and Rider B at 195 pounds, X would be about 1.17.
|
2013-05-22 23:53:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.584808886051178, "perplexity": 1022.951769183881}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702525329/warc/CC-MAIN-20130516110845-00097-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://brilliant.org/practice/complementary-and-supplementary-angles/
|
×
Back to all chapters
Angles and Lines
Forgive us for being obtuse, but this is a cute concept, and we think it’s right for you.
Complementary and Supplementary Angles
Given that $$\color{green}{\text{(green angle)} = 31^\circ},$$ which of the following is complementary to that $$\color{green}{\text{green angle}}$$?
Which pair of angles are supplementary?
Suppose $$\angle A = 65^\circ$$, $$\angle B\,$$ is complementary to $$\angle A,$$ and $$\angle C$$ is supplementary to $$\angle B.$$ What is the measure of $$\angle C ?$$
If $$\angle A=5 ^\circ$$ and $$\angle B$$ is the supplement of $$\angle A,$$ then $$\angle A + \angle B$$ forms a(n) $$\text{__________}.$$
$$\quad \text{A)}$$ acute angle
$$\quad \text{B)}$$ right angle
$$\quad \text{C)}$$ obtuse angle
$$\quad \text{D)}$$ straight line
$$\quad \text{E)}$$ none of the above
Find the measure of $$\angle Z.$$
×
|
2017-06-24 06:55:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.551201581954956, "perplexity": 1112.1002789947693}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320227.27/warc/CC-MAIN-20170624064634-20170624084634-00126.warc.gz"}
|
https://math.stackexchange.com/questions/624849/weak-differentiability-of-log-log-function
|
# weak differentiability of log log function
I want to understand why the following function has a weak derivative in two or three dimensions: $w(x) = \ln |\ln|x|| , x \in B_{1/2}(0)$. Can I say that if I have a strong derivative (except for the point 0), then it is shown? If yes, why is the dimension important? I computed the following derivative $\nabla w(x) = \frac{x}{(\ln|x|) |x|^2}$. Is it right?
Thanks a lot!
The definition of a weak derivative is: Let $w \in L^1_{\text{loc}}$. Then $u \in L^1_{\text{loc}}(\Omega)$ is called partial weak derivative if $\forall \phi \in D(\Omega)$:
$\int_\Omega w \partial_i \phi = -\int_\Omega \partial_i u \phi$. If this holds for all partial derivatives, $u$ is called weak derivative.
• Did you verified the conditions $w,\nabla w\in L^1_{loc}(B_{1/2}(0))$? – Tomás Jan 2 '14 at 13:43
• You can use spherial coordinates. – Tomás Jan 2 '14 at 13:59
• Yes, your derivative is right (if I do not calculate it wrong). Before answer your last questions, I will ask your another questions: can you please write in your post the definition of weak derivative? – Tomás Jan 2 '14 at 14:11
• Nice. Now, answering your last question, the only thing that remains to verify is the integral identity, but because the set where $w$ is not differentiable is a set with zero measure, then, the integral identity is true. – Tomás Jan 2 '14 at 14:56
• The definition should say $\int_\Omega w \partial_i \phi = -\int_\Omega u \phi$, because $u$ is already thought of as a derivative. – Post No Bulls Jan 4 '14 at 16:41
The double log has a weak derivative in all dimensions $\ge 2$. The restriction to $n=2,3$ is not necessary.
Depends on what you mean by strong derivative (there is such a term for differentiation of $L^p$ functions); I think you meant the pointwise derivatives, in which case the answer is no. Having a pointwise derivative, even at every point, is not enough to conclude that the weak derivative exists.
I will outline a different approach: write $w$ as an $L^1$ limit of functions $w_n$ for which the $x_i$ partial derivative, denoted $u_{i,n}$, exists, and converges in $L^1$ as $n\to\infty$. Then you can pass to the limit on both sides in $$\int_\Omega w_n \partial_i \phi = -\int_\Omega u_{i,n} \phi$$ A natural choice for $w_n$ is truncation $w_n=\min(w,n)$. To use it, you need to know that Lipschitz functions have weak derivatives. But the convergence part is straightforward.
Otherwise, you can use $w_n=\eta_n w$ where $\eta_n(x)=\eta(2^n x)$ and $\eta$ is a smooth function on $\mathbb R^d$ that vanishes when $|x|<1/2$ and is $1$ for $|x|>1$. Then there are longer estimates for establishing convergence of derivatives in $L^1$.
|
2021-02-27 07:20:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9235923886299133, "perplexity": 215.7295078545049}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178358203.43/warc/CC-MAIN-20210227054852-20210227084852-00441.warc.gz"}
|
https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/256/2/g/c/
|
# Properties
Label 256.2.g.c Level $256$ Weight $2$ Character orbit 256.g Analytic conductor $2.044$ Analytic rank $0$ Dimension $8$ CM no Inner twists $2$
# Related objects
## Newspace parameters
Level: $$N$$ $$=$$ $$256 = 2^{8}$$ Weight: $$k$$ $$=$$ $$2$$ Character orbit: $$[\chi]$$ $$=$$ 256.g (of order $$8$$, degree $$4$$, not minimal)
## Newform invariants
Self dual: no Analytic conductor: $$2.04417029174$$ Analytic rank: $$0$$ Dimension: $$8$$ Relative dimension: $$2$$ over $$\Q(\zeta_{8})$$ Coefficient field: 8.0.18939904.2 Defining polynomial: $$x^{8} - 4 x^{7} + 14 x^{6} - 28 x^{5} + 43 x^{4} - 44 x^{3} + 30 x^{2} - 12 x + 2$$ Coefficient ring: $$\Z[a_1, \ldots, a_{5}]$$ Coefficient ring index: $$2^{3}$$ Twist minimal: no (minimal twist has level 32) Sato-Tate group: $\mathrm{SU}(2)[C_{8}]$
## $q$-expansion
Coefficients of the $$q$$-expansion are expressed in terms of a basis $$1,\beta_1,\ldots,\beta_{7}$$ for the coefficient ring described below. We also show the integral $$q$$-expansion of the trace form.
$$f(q)$$ $$=$$ $$q + ( -1 + \beta_{1} + \beta_{6} - \beta_{7} ) q^{3} + ( \beta_{4} - \beta_{7} ) q^{5} + ( 1 + \beta_{1} + \beta_{3} + 2 \beta_{6} + \beta_{7} ) q^{7} + ( 1 - \beta_{1} - \beta_{2} + \beta_{3} - \beta_{5} - \beta_{6} + \beta_{7} ) q^{9} +O(q^{10})$$ $$q + ( -1 + \beta_{1} + \beta_{6} - \beta_{7} ) q^{3} + ( \beta_{4} - \beta_{7} ) q^{5} + ( 1 + \beta_{1} + \beta_{3} + 2 \beta_{6} + \beta_{7} ) q^{7} + ( 1 - \beta_{1} - \beta_{2} + \beta_{3} - \beta_{5} - \beta_{6} + \beta_{7} ) q^{9} + ( \beta_{2} + 2 \beta_{4} + \beta_{6} + \beta_{7} ) q^{11} + ( -2 \beta_{5} - \beta_{6} - \beta_{7} ) q^{13} + ( -1 - \beta_{3} + \beta_{4} - \beta_{5} - \beta_{6} ) q^{15} + ( -2 + \beta_{1} + \beta_{2} - \beta_{3} + 3 \beta_{4} - \beta_{5} - \beta_{6} - 2 \beta_{7} ) q^{17} + ( 1 - \beta_{2} + \beta_{3} - \beta_{4} - \beta_{5} + \beta_{7} ) q^{19} + ( 1 - 2 \beta_{2} + \beta_{4} - \beta_{6} + 3 \beta_{7} ) q^{21} + ( 1 + \beta_{2} - \beta_{4} + \beta_{5} + \beta_{6} - \beta_{7} ) q^{23} + ( -1 + 3 \beta_{6} - \beta_{7} ) q^{25} + ( 2 - \beta_{1} - 3 \beta_{4} + \beta_{5} - \beta_{6} + 3 \beta_{7} ) q^{27} + ( 2 \beta_{2} + 2 \beta_{3} + 3 \beta_{6} - \beta_{7} ) q^{29} + ( -4 + 2 \beta_{4} + 2 \beta_{6} ) q^{31} + ( -2 + \beta_{1} - \beta_{2} - \beta_{3} - 2 \beta_{4} + \beta_{5} - 2 \beta_{6} ) q^{33} + ( 2 - \beta_{2} - \beta_{3} + 2 \beta_{4} - \beta_{7} ) q^{35} + ( 2 - 2 \beta_{1} - 2 \beta_{3} - 3 \beta_{4} + 2 \beta_{5} - 2 \beta_{6} + 3 \beta_{7} ) q^{37} + ( -1 - \beta_{2} - \beta_{4} + \beta_{5} - 5 \beta_{6} - \beta_{7} ) q^{39} + ( 1 + 2 \beta_{2} - 2 \beta_{4} + 2 \beta_{5} + 2 \beta_{6} - \beta_{7} ) q^{41} + ( -2 + \beta_{2} - 2 \beta_{4} - \beta_{6} - 3 \beta_{7} ) q^{43} + ( 1 - \beta_{4} + 2 \beta_{5} ) q^{45} + ( -\beta_{1} - \beta_{2} - \beta_{3} - \beta_{4} - \beta_{5} - \beta_{6} - 2 \beta_{7} ) q^{47} + ( 2 + 2 \beta_{3} - 2 \beta_{4} + 2 \beta_{5} + 2 \beta_{6} + 3 \beta_{7} ) q^{49} + ( -2 + 2 \beta_{4} - 4 \beta_{6} - 4 \beta_{7} ) q^{51} + ( -2 + 2 \beta_{1} + 2 \beta_{2} - \beta_{4} + 2 \beta_{5} + 2 \beta_{6} - 3 \beta_{7} ) q^{53} + ( 3 - \beta_{1} + \beta_{3} - \beta_{7} ) q^{55} + ( 3 - \beta_{1} - \beta_{2} - \beta_{3} - \beta_{4} + \beta_{5} - 3 \beta_{6} + 3 \beta_{7} ) q^{57} + ( -2 - \beta_{1} - \beta_{3} + 3 \beta_{4} + \beta_{5} + 2 \beta_{6} - 3 \beta_{7} ) q^{59} + ( -4 + 2 \beta_{1} - 2 \beta_{2} - 2 \beta_{3} - 2 \beta_{4} - \beta_{6} - \beta_{7} ) q^{61} + ( 5 - \beta_{1} + \beta_{2} - 4 \beta_{4} - 4 \beta_{6} ) q^{63} + ( -2 \beta_{1} + 2 \beta_{2} + \beta_{4} + \beta_{6} ) q^{65} + ( -3 - 3 \beta_{1} - 6 \beta_{4} - 3 \beta_{6} + 3 \beta_{7} ) q^{67} + ( -5 + 2 \beta_{1} + 2 \beta_{3} + \beta_{4} - 2 \beta_{5} + 5 \beta_{6} - \beta_{7} ) q^{69} + ( 3 - 3 \beta_{1} - 3 \beta_{3} + 2 \beta_{6} + 3 \beta_{7} ) q^{71} + ( -5 + \beta_{1} - \beta_{2} - \beta_{3} + \beta_{4} - \beta_{5} - \beta_{6} + 3 \beta_{7} ) q^{73} + ( -\beta_{1} - 3 \beta_{2} + \beta_{4} - \beta_{5} - 4 \beta_{6} + 4 \beta_{7} ) q^{75} + ( -3 + 2 \beta_{2} - 2 \beta_{3} + 3 \beta_{4} + 2 \beta_{5} + 3 \beta_{6} + \beta_{7} ) q^{77} + ( 4 - \beta_{1} - \beta_{2} + 3 \beta_{3} + \beta_{4} + 3 \beta_{5} - 3 \beta_{6} + 6 \beta_{7} ) q^{79} + ( -\beta_{1} - \beta_{2} - \beta_{3} - 4 \beta_{4} - \beta_{5} + 2 \beta_{6} + \beta_{7} ) q^{81} + ( 1 + 3 \beta_{2} - 3 \beta_{3} - \beta_{4} + 3 \beta_{5} + 6 \beta_{6} + 3 \beta_{7} ) q^{83} + ( -2 - 2 \beta_{1} + 2 \beta_{2} - 2 \beta_{5} - 2 \beta_{6} - 2 \beta_{7} ) q^{85} + ( -7 - 3 \beta_{2} - \beta_{4} - 3 \beta_{5} - 3 \beta_{6} + 7 \beta_{7} ) q^{87} + ( -3 + 3 \beta_{1} + \beta_{2} + 3 \beta_{3} + \beta_{4} - \beta_{5} - \beta_{6} - 3 \beta_{7} ) q^{89} + ( 2 + 3 \beta_{1} + 3 \beta_{4} - 3 \beta_{5} - 5 \beta_{6} - 3 \beta_{7} ) q^{91} + ( 4 - 4 \beta_{1} - 2 \beta_{2} - 2 \beta_{3} - 8 \beta_{6} + 6 \beta_{7} ) q^{93} + ( 1 - \beta_{3} + \beta_{4} + \beta_{5} + \beta_{6} ) q^{95} + ( 4 + \beta_{1} - \beta_{2} + \beta_{3} + 3 \beta_{4} - \beta_{5} + 3 \beta_{6} ) q^{97} + ( 3 + \beta_{1} + 3 \beta_{2} + 3 \beta_{3} + 4 \beta_{4} + 3 \beta_{6} ) q^{99} +O(q^{100})$$ $$\operatorname{Tr}(f)(q)$$ $$=$$ $$8q - 4q^{3} + 8q^{7} + O(q^{10})$$ $$8q - 4q^{3} + 8q^{7} + 4q^{11} + 8q^{13} + 4q^{19} + 8q^{23} - 8q^{25} + 8q^{27} - 32q^{31} - 16q^{33} + 16q^{35} + 8q^{37} - 16q^{39} + 8q^{41} - 12q^{43} - 16q^{51} - 8q^{53} + 16q^{55} + 16q^{57} - 20q^{59} - 24q^{61} + 40q^{63} - 36q^{67} - 32q^{69} + 24q^{71} - 32q^{73} - 12q^{75} - 16q^{77} + 20q^{83} - 8q^{85} - 56q^{87} - 16q^{89} + 40q^{91} + 16q^{93} + 8q^{95} + 32q^{97} + 28q^{99} + O(q^{100})$$
Basis of coefficient ring in terms of a root $$\nu$$ of $$x^{8} - 4 x^{7} + 14 x^{6} - 28 x^{5} + 43 x^{4} - 44 x^{3} + 30 x^{2} - 12 x + 2$$:
$$\beta_{0}$$ $$=$$ $$1$$ $$\beta_{1}$$ $$=$$ $$\nu^{7} - 4 \nu^{6} + 14 \nu^{5} - 27 \nu^{4} + 41 \nu^{3} - 37 \nu^{2} + 24 \nu - 5$$ $$\beta_{2}$$ $$=$$ $$\nu^{7} - 4 \nu^{6} + 14 \nu^{5} - 28 \nu^{4} + 43 \nu^{3} - 44 \nu^{2} + 30 \nu - 10$$ $$\beta_{3}$$ $$=$$ $$-5 \nu^{7} + 17 \nu^{6} - 59 \nu^{5} + 102 \nu^{4} - 146 \nu^{3} + 121 \nu^{2} - 66 \nu + 15$$ $$\beta_{4}$$ $$=$$ $$5 \nu^{7} - 17 \nu^{6} + 60 \nu^{5} - 105 \nu^{4} + 155 \nu^{3} - 133 \nu^{2} + 77 \nu - 19$$ $$\beta_{5}$$ $$=$$ $$-5 \nu^{7} + 18 \nu^{6} - 62 \nu^{5} + 113 \nu^{4} - 163 \nu^{3} + 145 \nu^{2} - 82 \nu + 20$$ $$\beta_{6}$$ $$=$$ $$-5 \nu^{7} + 18 \nu^{6} - 63 \nu^{5} + 115 \nu^{4} - 170 \nu^{3} + 152 \nu^{2} - 89 \nu + 23$$ $$\beta_{7}$$ $$=$$ $$-8 \nu^{7} + 28 \nu^{6} - 98 \nu^{5} + 175 \nu^{4} - 256 \nu^{3} + 223 \nu^{2} - 126 \nu + 31$$
$$1$$ $$=$$ $$\beta_0$$ $$\nu$$ $$=$$ $$($$$$-\beta_{7} + 2 \beta_{6} + \beta_{2} + \beta_{1}$$$$)/2$$ $$\nu^{2}$$ $$=$$ $$($$$$-\beta_{7} + 3 \beta_{6} - \beta_{5} + \beta_{4} + \beta_{3} + 2 \beta_{1} - 4$$$$)/2$$ $$\nu^{3}$$ $$=$$ $$($$$$5 \beta_{7} - 5 \beta_{6} - 2 \beta_{5} + 3 \beta_{4} + \beta_{3} - 4 \beta_{2} - \beta_{1} - 3$$$$)/2$$ $$\nu^{4}$$ $$=$$ $$($$$$11 \beta_{7} - 19 \beta_{6} + 3 \beta_{5} - \beta_{4} - 5 \beta_{3} - 4 \beta_{2} - 8 \beta_{1} + 12$$$$)/2$$ $$\nu^{5}$$ $$=$$ $$($$$$-13 \beta_{7} + 2 \beta_{6} + 15 \beta_{5} - 16 \beta_{4} - 10 \beta_{3} + 13 \beta_{2} - 2 \beta_{1} + 23$$$$)/2$$ $$\nu^{6}$$ $$=$$ $$($$$$-67 \beta_{7} + 90 \beta_{6} + 4 \beta_{5} - 10 \beta_{4} + 16 \beta_{3} + 31 \beta_{2} + 33 \beta_{1} - 28$$$$)/2$$ $$\nu^{7}$$ $$=$$ $$($$$$-7 \beta_{7} + 87 \beta_{6} - 68 \beta_{5} + 71 \beta_{4} + 65 \beta_{3} - 26 \beta_{2} + 37 \beta_{1} - 125$$$$)/2$$
## Character values
We give the values of $$\chi$$ on generators for $$\left(\mathbb{Z}/256\mathbb{Z}\right)^\times$$.
$$n$$ $$5$$ $$255$$ $$\chi(n)$$ $$-\beta_{6}$$ $$1$$
## Embeddings
For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below.
For more information on an embedded modular form you can click on its label.
Label $$\iota_m(\nu)$$ $$a_{2}$$ $$a_{3}$$ $$a_{4}$$ $$a_{5}$$ $$a_{6}$$ $$a_{7}$$ $$a_{8}$$ $$a_{9}$$ $$a_{10}$$
33.1
0.5 + 2.10607i 0.5 − 0.691860i 0.5 + 1.44392i 0.5 − 0.0297061i 0.5 − 1.44392i 0.5 + 0.0297061i 0.5 − 2.10607i 0.5 + 0.691860i
0 −1.07947 + 2.60607i 0 −0.707107 + 0.292893i 0 −1.68554 + 1.68554i 0 −3.50504 3.50504i 0
33.2 0 0.0794708 0.191860i 0 −0.707107 + 0.292893i 0 2.27133 2.27133i 0 2.09083 + 2.09083i 0
97.1 0 −2.27882 + 0.943920i 0 0.707107 1.70711i 0 0.665096 + 0.665096i 0 2.18073 2.18073i 0
97.2 0 1.27882 0.529706i 0 0.707107 1.70711i 0 2.74912 + 2.74912i 0 −0.766519 + 0.766519i 0
161.1 0 −2.27882 0.943920i 0 0.707107 + 1.70711i 0 0.665096 0.665096i 0 2.18073 + 2.18073i 0
161.2 0 1.27882 + 0.529706i 0 0.707107 + 1.70711i 0 2.74912 2.74912i 0 −0.766519 0.766519i 0
225.1 0 −1.07947 2.60607i 0 −0.707107 0.292893i 0 −1.68554 1.68554i 0 −3.50504 + 3.50504i 0
225.2 0 0.0794708 + 0.191860i 0 −0.707107 0.292893i 0 2.27133 + 2.27133i 0 2.09083 2.09083i 0
$$n$$: e.g. 2-40 or 990-1000 Embeddings: e.g. 1-3 or 225.2 Significant digits: Format: Complex embeddings Normalized embeddings Satake parameters Satake angles
## Inner twists
Char Parity Ord Mult Type
1.a even 1 1 trivial
32.g even 8 1 inner
## Twists
By twisting character orbit
Char Parity Ord Mult Type Twist Min Dim
1.a even 1 1 trivial 256.2.g.c 8
4.b odd 2 1 256.2.g.d 8
8.b even 2 1 128.2.g.b 8
8.d odd 2 1 32.2.g.b 8
16.e even 4 1 512.2.g.f 8
16.e even 4 1 512.2.g.g 8
16.f odd 4 1 512.2.g.e 8
16.f odd 4 1 512.2.g.h 8
24.f even 2 1 288.2.v.b 8
24.h odd 2 1 1152.2.v.b 8
32.g even 8 1 128.2.g.b 8
32.g even 8 1 inner 256.2.g.c 8
32.g even 8 1 512.2.g.f 8
32.g even 8 1 512.2.g.g 8
32.h odd 8 1 32.2.g.b 8
32.h odd 8 1 256.2.g.d 8
32.h odd 8 1 512.2.g.e 8
32.h odd 8 1 512.2.g.h 8
40.e odd 2 1 800.2.y.b 8
40.k even 4 1 800.2.ba.c 8
40.k even 4 1 800.2.ba.d 8
64.i even 16 2 4096.2.a.q 8
64.j odd 16 2 4096.2.a.k 8
96.o even 8 1 288.2.v.b 8
96.p odd 8 1 1152.2.v.b 8
160.u even 8 1 800.2.ba.d 8
160.y odd 8 1 800.2.y.b 8
160.ba even 8 1 800.2.ba.c 8
By twisted newform orbit
Twist Min Dim Char Parity Ord Mult Type
32.2.g.b 8 8.d odd 2 1
32.2.g.b 8 32.h odd 8 1
128.2.g.b 8 8.b even 2 1
128.2.g.b 8 32.g even 8 1
256.2.g.c 8 1.a even 1 1 trivial
256.2.g.c 8 32.g even 8 1 inner
256.2.g.d 8 4.b odd 2 1
256.2.g.d 8 32.h odd 8 1
288.2.v.b 8 24.f even 2 1
288.2.v.b 8 96.o even 8 1
512.2.g.e 8 16.f odd 4 1
512.2.g.e 8 32.h odd 8 1
512.2.g.f 8 16.e even 4 1
512.2.g.f 8 32.g even 8 1
512.2.g.g 8 16.e even 4 1
512.2.g.g 8 32.g even 8 1
512.2.g.h 8 16.f odd 4 1
512.2.g.h 8 32.h odd 8 1
800.2.y.b 8 40.e odd 2 1
800.2.y.b 8 160.y odd 8 1
800.2.ba.c 8 40.k even 4 1
800.2.ba.c 8 160.ba even 8 1
800.2.ba.d 8 40.k even 4 1
800.2.ba.d 8 160.u even 8 1
1152.2.v.b 8 24.h odd 2 1
1152.2.v.b 8 96.p odd 8 1
4096.2.a.k 8 64.j odd 16 2
4096.2.a.q 8 64.i even 16 2
## Hecke kernels
This newform subspace can be constructed as the kernel of the linear operator $$T_{3}^{8} + 4 T_{3}^{7} + 8 T_{3}^{6} - 32 T_{3}^{4} - 24 T_{3}^{3} + 96 T_{3}^{2} - 16 T_{3} + 4$$ acting on $$S_{2}^{\mathrm{new}}(256, [\chi])$$.
## Hecke characteristic polynomials
$p$ $F_p(T)$
$2$ $$T^{8}$$
$3$ $$4 - 16 T + 96 T^{2} - 24 T^{3} - 32 T^{4} + 8 T^{6} + 4 T^{7} + T^{8}$$
$5$ $$( 2 + 4 T + 2 T^{2} + T^{4} )^{2}$$
$7$ $$784 - 1344 T + 1152 T^{2} - 224 T^{3} + 56 T^{4} - 48 T^{5} + 32 T^{6} - 8 T^{7} + T^{8}$$
$11$ $$4 - 48 T + 160 T^{2} + 56 T^{3} + 224 T^{4} - 64 T^{5} + 8 T^{6} - 4 T^{7} + T^{8}$$
$13$ $$6724 + 8528 T + 2520 T^{2} - 448 T^{3} + 200 T^{4} - 104 T^{5} + 36 T^{6} - 8 T^{7} + T^{8}$$
$17$ $$256 + 5120 T^{2} + 1056 T^{4} + 64 T^{6} + T^{8}$$
$19$ $$196 - 336 T + 832 T^{2} - 168 T^{3} + 32 T^{4} + 48 T^{5} - 8 T^{6} - 4 T^{7} + T^{8}$$
$23$ $$16 + 64 T + 128 T^{2} + 32 T^{3} - 8 T^{4} - 16 T^{5} + 32 T^{6} - 8 T^{7} + T^{8}$$
$29$ $$188356 - 17360 T + 17272 T^{2} + 6256 T^{3} + 72 T^{4} - 168 T^{5} - 12 T^{6} + T^{8}$$
$31$ $$( 8 + 8 T + T^{2} )^{4}$$
$37$ $$64516 + 67056 T + 59320 T^{2} + 20640 T^{3} + 3464 T^{4} + 168 T^{5} - 44 T^{6} - 8 T^{7} + T^{8}$$
$41$ $$26896 + 7872 T + 1152 T^{2} - 416 T^{3} + 968 T^{4} + 240 T^{5} + 32 T^{6} - 8 T^{7} + T^{8}$$
$43$ $$31684 + 12816 T + 5888 T^{2} + 4856 T^{3} + 1760 T^{4} + 256 T^{5} + 56 T^{6} + 12 T^{7} + T^{8}$$
$47$ $$256 + 1024 T^{2} + 544 T^{4} + 64 T^{6} + T^{8}$$
$53$ $$158404 + 238800 T + 147160 T^{2} + 47328 T^{3} + 9800 T^{4} + 1272 T^{5} + 100 T^{6} + 8 T^{7} + T^{8}$$
$59$ $$643204 + 211728 T + 44896 T^{2} + 20712 T^{3} + 5408 T^{4} + 528 T^{5} + 136 T^{6} + 20 T^{7} + T^{8}$$
$61$ $$42436 + 24720 T + 34968 T^{2} - 6336 T^{3} + 72 T^{4} - 648 T^{5} + 132 T^{6} + 24 T^{7} + T^{8}$$
$67$ $$1285956 + 734832 T + 233280 T^{2} + 68040 T^{3} + 18144 T^{4} + 3456 T^{5} + 504 T^{6} + 36 T^{7} + T^{8}$$
$71$ $$21196816 - 9060672 T + 1936512 T^{2} - 173472 T^{3} + 10232 T^{4} - 1200 T^{5} + 288 T^{6} - 24 T^{7} + T^{8}$$
$73$ $$38416 + 112896 T + 165888 T^{2} + 88192 T^{3} + 26504 T^{4} + 4672 T^{5} + 512 T^{6} + 32 T^{7} + T^{8}$$
$79$ $$99361024 + 4775936 T^{2} + 78880 T^{4} + 512 T^{6} + T^{8}$$
$83$ $$138250564 - 16837456 T + 1548544 T^{2} + 131864 T^{3} - 22752 T^{4} + 304 T^{5} + 184 T^{6} - 20 T^{7} + T^{8}$$
$89$ $$17007376 + 7786112 T + 1782272 T^{2} + 209472 T^{3} + 14024 T^{4} + 672 T^{5} + 128 T^{6} + 16 T^{7} + T^{8}$$
$97$ $$( -992 + 288 T + 40 T^{2} - 16 T^{3} + T^{4} )^{2}$$
|
2020-11-23 22:49:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9920064806938171, "perplexity": 10802.116489171769}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141168074.3/warc/CC-MAIN-20201123211528-20201124001528-00114.warc.gz"}
|
https://www.aanda.org/articles/aa/full_html/2017/04/aa29379-16/aa29379-16.html
|
Free Access
Issue A&A Volume 600, April 2017 A93 27 Interstellar and circumstellar matter https://doi.org/10.1051/0004-6361/201629379 05 April 2017
## 1. Introduction
Massive stars (M > 8 M) affect their surrounding medium due to the action of both their ionizing photons and stellar winds. They form ionized (H ii ) regions that expand, bordered by a shell of swept-up neutral material (Dyson & Williams 1997). Star formation is observed at the edges of Galactic and extragalactic H ii regions (Bernard et al. 2016). Young stars form there either spontaneously or through various mechanisms linked to the expansion of the ionized region (Deharveng et al. 2010).
Star formation observed at the edges of H ii regions has been studied in detail during the past ten years. With the GLIMPSE (Benjamin et al. 2003) and MIPSGAL (Carey et al. 2009) surveys, the Spitzer satellite has revealed that we live in a bubbling galactic disk where thousands of H ii regions have a clear impact on their environment. Anderson et al. (2011) have shown that half of all H ii regions have a bubble morphology. Studies of triggering have focused on bubble H ii regions. Deharveng et al. (2010) used Spitzer GLIMPSE and MIPSGAL data combined with ATLASGAL (Schuller et al. 2009) data on 102 bubbles. They showed that star formation observed at the edges of H ii regions is an important phenomenon in our Galaxy. Up to 25% of the ionized regions show high-mass star formation triggered on their edges. This result has been confirmed by Thompson et al. (2012) and Kendrew et al. (2012, 2016) who found an overdensity of young stellar objects (YSOs), including massive objects, around Spitzer and ATLASGAL bubbles. Simpson et al. (2012) have listed 5106 bubbles using these GLIMPSE and MIPSGAL surveys. Many studies of individual H ii regions, including numerical simulations, confirm that H ii regions impact on their surrounding, enhancing significantly the star formation there (Minier et al. 2013; Samal et al. 2014; Liu et al. 2015; Ladeyschikov et al. 2015). This impact is also observed at the waist of bipolar H ii regions as recently discovered by Deharveng et al. (2015).
However Dale et al. (2015) assessed the relevance of standard observational criteria used to decide whether the star-formation process is of spontaneous or triggered origin at the edges of H ii region. By comparing the observational criteria used to their own new numerical results they concluded that, when interpreting observations of star formation in the vicinity of feedback-driven structures in terms of triggering, one should exercise caution.
While the large and rapidly increasing bulk of knowledge tends to offer empirical evidence in support of some impact of H ii regions on the local star formation, there are still many unanswered questions on the possible influence of these regions on star formation near their edges. One way to firmly establish the causal link existing between the ionized region and the star-formation process taking place on its surrounding could be to measure a clear difference between the age of the ionizing stars, located in the central H ii region and the ones formed at its edges (Martins et al. 2010; Bik et al. 2010). However, the determination of stellar ages is challenging (Martins et al. 2010).
We are left in a situation where we observe an overdensity of young stars at the edges of these H ii regions. These young stars are highly efficient (up to 25%) at forming massive stars. (Bik et al. 2010; Ellerbroek et al. 2013; Cappa et al. 2014; Tapia et al. 2014). But we do not know how the material is assembled (uniformly distributed then collected versus pre-condensed in an inhomogeneous medium) and what are the mechanisms that control the formation of stars in these regions. For pre-existing clumps, star formation could occur spontaneously before encountering the ionization front or the ionizing radiation leaking from the H ii region. Dedicated observations can help in answering these questions. High resolution molecular spectroscopy reveals the distribution and velocity field of the material that surrounds H ii regions (Anderson et al. 2015; Liu et al. 2015). The spatial distribution, properties and evolutionary stage of YSOs are key points to address the triggering issue. We need to obtain an overview of all stages of star formation in a given region and access the distribution of the surrounding material on all spatial scales to discuss the history of star formation. The large scale distribution should help in understanding the initial distribution of the material (uniform versus clumpy, filamentary). A better knowledge of the distribution and properties (density, temperature) of the material that surrounds H ii regions could also help in better understanding how the material is assembled and how star formation occurs around ionized regions.
The Herschel satellite offers a unique opportunity to study star formation around Galactic H ii regions and helps in answering some of the pending questions. Thanks to its sensitivity and its large wavelength coverage in the far-infrared, Herschel is perfectly suited to study the earliest phases of star formation. The six measured photometric points (70, 100, 160, 250, 350, 500 μm) really help in constraining the young sources’ properties (temperature, envelope mass, luminosity). Moreover, Herschel’s wavelength range covers the peak of the YSOs’ spectral energy distribution (SED) also helping to characterize the young source’ evolutionary stage. Combined with existing infrared and molecular data, Herschel observations allow us to obtain a global view of the star-formation history (Nguyen et al. 2015).
Here we present the results obtained for young compact sources observed towards the bubble H ii region RCW 120. Using Herschel photometric PACS and SPIRE data, we re-examine this region to better determine the nature and evolutionary stage of the YSOs observed there. We aim to discuss the region’s star-formation history there using sources’ evolutionary stage. Section 2 presents the current knowledge on RCW 120. The Herschel observations are described in Sect. 3. The data analysis and sources’ extraction are presented in Sect. 4. The results are presented in Sect. 5 and discussed in Sect. 6. The main results and conclusions are given in Sect. 7.
## 2. The RCW 120 region
RCW 120 (Rodgers et al. 1960) is an egg-shaped Galactic H ii region of 3.8 pc diameter, located 0.5° above the Galactic plane. Due to its simple morphology and isolation, this region has been studied in detail during the past ten years. The main results are summarized below:
The region is ionized by an O8V star, CD− 38°11636 (Zavagno et al. 2007, hereafter ZAV07; Martins et al. 2010). An emission arc is observed at 24 μm below the star (Deharveng et al. 2009, hereafter DEH09; Martins et al. 2010, see their Fig. 3) and is interpreted as representing the upstream boundary between the wind bubble and the photoionized interstellar medium (Mackey et al. 2015).
The photometric distance of RCW 120 was computed by Russeil (2003) using UBV and Hβ photometry. The uncertainty is estimated to be 0.6 kpc and comes from the uncertainty in the spectral type estimate (around 0.3 mag).
RCW 120 and its surrounding layer have been observed in the dust continuum at 870 μm (DEH09) and 1.3 mm (ZAV07) and in CO molecular lines (Anderson et al. 2015; Torii et al. 2015). These observations show that RCW 120 is surrounded by a dense shell of gas and dust.
Torii et al. (2015) observed two molecular clouds towards RCW 120 and suggest that some collision between the clouds triggered the formation of the ionizing O star of RCW 120 in a short timescale of 0.2–0.4 Myr. An age of 0.4 Myr is also obtained by Mackey et al. (2015). Simulations from Tremblin et al. (2014a) lead to a similar age for the ionizing star of RCW 120.
Anderson et al. (2015) found no evidence for expansion of the molecular material associated with RCW 120 and therefore can make no claim about its geometry (2D or 3D). Dust emission simulations suggest that the H ii region RCW 120 is not spherical, but instead cylindrical, and that we observe the object along the axis of this cylinder (Pavlyuchenkov et al. 2013).
Using 1.3 mm continuum emission ZAV07 found eight condensations (five located at the edges of the ionized region, see their Fig. 4) and studied the young stellar content of these condensations, pointing out the possible importance of long-distance influence of the ionized region on its surrounding. This study has been completed by DEH09 who characterized the evolutionary stage by adding 24 μm data from MIPSGAL and confirmed the importance of long-distance interaction between the H ii region and its surroundings. Many YSOs, including Class I and Class II sources are observed at the edges of the ionized region. A noticeable massive Class 0 candidate is detected towards the highest density condensation (condensation 1), later confirmed with Herschel observations (Zavagno et al. 2010). A spectrophotometric study of the YSOs in the near-infrared confirm that these YSOs are associated with the RCW 120 region because they have the same velocity than that of the ionized gas (Martins et al. 2010).
DEH09 observed a series of eleven young sources aligned parallel to the ionization front towards the most massive condensation at 24 μm equally spaced by 0.1 pc and is thought to be the result of Jeans gravitational instabilities.
Tremblin et al. (2014b) studied the probability density function (PDF) of a series of Galactic H ii regions, including RCW 120 (see their Figs. 8 and 9). They found evidence for compression, and the value of the exponent derived to fit the PDF towards condensation one may indicate the role of ionization compression in the formation of this condensation and its collapse to form stars. According to numerical simulations lead by Minier et al. (2013), if the condensation had gravitationally collapsed prior to the passage of the ionization front, the condensation would be already sufficiently dense to resist the ionization front expansion. It would, for example, trigger the formation of a pillar rather than a condensation remaining in the shell.
Walch et al. (2015) performed three dimensional smoothed particle hydrodynamics (SPH) simulations of H ii regions expanding into fractal molecular clouds and then used RADMC-3D to compute the synthetic dust continuum emission at 870 μm from their simulations, applied to RCW 120. They found a hybrid form of triggering which combines elements of collect and collapse (C&C) mechanism (Elmegreen & Lada 1977) and radiation driven implosion (RDI; Kessel-Deynet & Burkert 2003).
Figure 1 presents a three-color image of the RCW 120 region as seen by Herschel. The 70 μm emission (blue part) underlines the emission of the warm dust while the 250 μm emission (red part) underlines the emission from the colder dust located in the dense material that surrounds the ionized region and that interacts with the ionizing radiation.
Fig. 1RCW 120: Herschel-PACS 70 μm (blue), 160 μm (green) and Herschel-SPIRE 250 μm (red). The field size is 21.8′ × 24.5′. North is up, east is left. Open with DEXTER
## 3. Observations and data reduction
### 3.1. Herschel observations
RCW 120 was observed with the PACS and SPIRE photometers. Details of these observations (map size, observing time, observational identification (ObsID), observational date (Obs.) operational day (OD), map center) are given in Table 1. The PACS photometer was used to make simultaneous photometric observations in two photometric bands as part of the HOBYS key program (Motte et al. 2010). Two cross-scan maps were done at angle 45° and 135° with a scanning speed of 20/s. This observing mode is described in Sect. 5.2 of the PACS Observers’ Manual1. The beam FWHM varies between 59 at 70 μm , 60 at 100 μm and 114 at 160 μm. The total observing time is 2.6 h.
RCW 120 was observed with the SPIRE photometer as part of the Evolution of Interstellar Dust key program for the Herschel Science Demonstration Phase. The SPIRE photometer was used to make simultaneous photometric observations in the three photometer bands (250, 350 and 500 μm). The map is made by scanning the telescope at a given scan speed of 30/s along lines. Cross-linked scanning is achieved by scanning at 42° (Scan A angle) and then at − 42° (Scan B angle). This ensures that the effect of 1/f noise on the map can be minimized and also leads to improved map coverage. This observing mode is described in details in the last version of the SPIRE Observers’ Manual2 in Sect. 3.1.2. One map at each scanning angle was obtained. The beam FWHM varies between 182 at 250 μm , 252 at 350 μm and 366 at 500 μm. The total observing time is 0.34 h.
Table 1
Summary of Herschel observational parameters.
The PACS maps were produced using the HIPE Level 1 data and then version 21 of the Scanamorphos software package which performs baseline and drift removal before regridding (Roussel 2012). The SPIRE images were reduced using modified pipeline scripts of Version 10 of HIPE3, the Herschel Interactive Processing Environment. Each map direction (nominal and orthogonal) was first reduced individually to Level 1 data, correcting for effects such as temperature drifts and jumps, glitches and cooler burps. The individual maps were then combined to create one map (Level 2 data). Map reconstruction was done using the SPIRE default “naive” mapmaking algorithm at the same time as a destriper module (including a median correction and in bright source mode). The default gridding of 6, 10, 14 for the SPIRE wavelengths 250, 350, 500 μm was chosen. The fits output files for each SPIRE wavelength are in units of Jy/beam. For an absolute calibration of the SPIRE maps, the zeroPointCorrection task calculates the absolute offset for a SPIRE map, based on cross-calibration with Planck HFI-545 and HFI-857 maps, color-correcting HFI to SPIRE wavebands assuming a gray-body function with fixed spectral index. The offsets determined in this way correspond well to the ones provided by J.-P. Bernard (priv. comm.).
### 3.2. Complementary data
We complement the Herschel data with data from the Two Micron All-Sky Survey (2MASS) at 1.25 μm (J), 1.65 μm (H) and 2.17 μm (Ks) with a resolution of 2 (Skrutskie et al. 2006), from the Spitzer4 GLIMPSE and MIPSGAL surveys of the Galactic Plane at 3.6 μm, 4.5 μm, 5.8 μm and 8 μm and 24 μm with a resolution of 17, 17, 19, 2 and 6 (Benjamin et al. 2003; Carey et al. 2009).
## 4. Data analysis
### 4.1. Compact sources’ extraction method
Herschel compact sources were extracted using the multi-wavelength, multi-scales getsources algorithm5 (version 1.140127; Men’shchikov et al. 2012; Men’shchikov 2013). The working method of getsources can be roughly decomposed into two steps: the detection and the measurement. While the latter is performed on all maps inserted into the algorithm, the detection can be made from a selected sample of maps depending on the aim of the study. In order to improve this step, an Herschel high-resolution density map (Hill et al. 2012; Palmeirim et al. 2013) was created (see Sect. 4.5) and added to better constrain the detection of compact sources. Moreover, some original maps were modified in order to enhance the contrast of the cooler and hence the densest regions since heated structures could be detected and misleading the final sample of sources. For this purpose and to provide valuable guidance to the detection algorithm, we use the 160 μm PACS and 250 μm SPIRE maps as they represent a good compromise between resolution and non-contamination by very small grains (VSG). The photometric offsets derived using IRAS-Planck model (Bernard et al. 2010) were added and the 160 μm map was convolved to the resolution of the 250 μm SPIRE observations (25.̋2). We assumed a modified blackbody (hereafter, MBB) model with a spectral index of 2. This value is higher than the reference for the galaxy (~1.6, Planck Collaboration Int. XLVIII 2016) but for dense regions, in the inner regions of the galactic plane for instance, β tends to increase and hence, a value of 2 should be more appropriate for compact regions (Paradis et al. 2012). Non-linear fitting of the SEDs was performed using the Levenberg-Marquardt’s algorithm (Markwardt 2009). From the SED, a color temperature can be found for each pixel by using the ratio of the two maps (1)where is the 160 μm map convolved at the 250 μm resolution.
A weight-map is then created as the ratio between the map giving the MBB flux corresponding to the color temperature and a fiducial temperature of 20 K (median temperature). Multiplying the native 160 μm PACS map by the weight-map give the 160 μm corrected map where colder regions are enhance compared to warmer regions. The 250 μm corrected map is created in the same way and both are used in replacement of the native 160 μm PACS and 250 μm SPIRE maps for the detection step.
To summarize, for extraction of the sources, we used the original 70 μm, 100 μm, 350 μm, 500 μm maps and improved the detection by including the high-resolution density map and the 160 μm and 250 μm corrected maps.
### 4.2. Pre-selection
The final getsources catalog contains many useful informations about the detected sources. In the following subsections, we will keep only a small part of them: J2000 coordinates, detection significance at each wavelength, flux at peak and integrated flux with their corresponding errors and source ellipse parameters (major axis, minor axis, position angle).
Before doing the analysis, the sources present in the catalog have to be filtered to select the well-defined ones. The selection criteria defined by the HOBYS consortium are listed below (see also Tigé et al. 2017):
Each source must have a deconvolved size (see Eq. (2)) smaller than 0.1 pc at the reference wavelength (see below) and three reliable fluxes (including the reference wavelength). A flux is considered as reliable if the detection is reliable (the detection significance is higher than 7, see Men’shchikov et al. 2012), the signal to noise of the peak and integrated flux is higher than 2 and the elongation (defined as the ratio of the major and minor axis of the ellipse footprint) lower than 2 in order to limit the sample to circular compact cores.
From these criteria, sources extracted by getsources are expected to be dense and cold. Therefore, we consider as a good assumption that thermal emission from the cores is optically thin and not contaminated by VSGs for λ100 μm. The deconvolved size at wavelength λ, , is computed as (2)where / stand for the major/minor convolved size estimate of the source at wavelength λ (given in the getsources catalog) and HPBWλ is the half-power beam width at wavelength λ. The reference wavelength was chosen to be 160 μm as a compromise between the resolution (11.̋4) and the tracer of optically thin dust emission. This trade-off allows for both the correct identification of the peak of the SED and a good scaling of the flux (Motte et al. 2010; Nguyen Luong et al. 2011). Nevertheless, in some marginal cases, the 160 μm emission may be contaminated by small grains heated in the photo-dissociation region (PDR) leading to a deconvolved size larger than the one measured at 250 μm. In such cases, the 250 μm is taken as the reference (resolution 18.̋2).
Detections complying with the above-mentionned criteria were kept for the analysis. However, the 70 μm data were systematically excluded from the SED fitting to avoid contamination from VSG’s emission even though the criteria were satisfied.
Among the 359 detections of the getsources algorithm, 80 were kept at the end of the selection (put in the pre-selected sample). Rejected sources appear to be false detections, mainly filament pieces or sources with not enough flux measurements to fit the SED. Rejected sources were visually inspected and those which look like compact sources with at least one reliable wavelength (70 μm included) were kept in a tentative sample (80 sources). The physical properties of the tentative sources were derived throughout an indirect method (see Sect. 5.2.4) since a SED fitting couldn’t be done.
### 4.3. Spectral energy distribution
Before fitting the SED for each compact source, the fluxes must be scaled since we want them to be measured within the same aperture. A full treatment of this scaling can be found in Motte et al. (2010), Nguyen Luong et al. (2011) where the relation between flux and source’s angular size is taken to be same as for protostellar cores. This aperture scaling is based on the assumptions that the source is optically thin for λ> 100μm, M(r) ∝ r and the gradient of the temperature is weak within the region (Elia et al. 2014). The scaling was done when the size at the reference wavelength and the wavelength to be scaled could be deconvolved. Following their procedure, we applied scaling factors to fluxes according to the formula (3)where represents the rescaled flux associated with scaling factor ζλ and Sλ is the original flux.
The model to be fitted is a MBB (4) using the Hildebrand relation (Hildebrand 1983) (4)with a gas-to-dust ratio R = 100, D the distance of RCW 120, 1.3 kpc, and C is introduced as a constant of the fit. The HOBYS consortium decided to use the dust opacity law κν = κ300 μm(ν/ν0)2 with κ300 μm = 10 cm2 g-1, ν0 = 1000 GHz (Beckwith et al. 1990; Motte et al. 2010). As explained before, the spectral index has been fixed to two, reducing the model space of the fitting parameters to the C-T plane. The initial errors used to weight the data have been set to the quadratic sum of the getsources and the calibration errors (3% at 100 μm, 5% at 160 μm and 7% for SPIRE bands).
Table 2
Minimum, maximum and median values for the color correction factors at Herschel wavelengths for the final sample of sources.
During the acquisition of the sources with the PACS and SPIRE instruments of Herschel, the spectrum is assumed to be flat across the bands (νSν = const.) which is not true because we expect the sources to follow a MBB model. To correct for this assumption, we apply color correction factors given in the PACS and SPIRE observer’s manual. The fitting algorithm reaches convergence when the absolute difference between two subsequent temperatures (obtained at two consecutive steps) is fainter than 0.1 K. Since the spectral index was fixed to two, these factors depends on the temperature for PACS and are constant for SPIRE. Table 2 gives the minimum, maximum and median value for the color correction factors at Herschel wavelengths. Color corrections are high for short wavelengths and low temperature. For 70 μm and 100 μm, they go up to 54% and 21% and corresponds to a source with T = 11.2 K. However, considering the median value for all wavelengths, the color correction factors are low and do not change drastically the sources’ fluxes of the final sample.
The final temperature is derived directly as one of the parameter of the MBB model and assuming optically thin emission for the dust, the envelope mass is derived as (5)The uncertainties are derived from the fitting errors and are 3% and 20% in average for the dust temperature and the envelope mass, respectively. Obviously, these error values on the physical parameters do not take into account the dependence of β with the wavelength (from 1 to 2) and the uncertainty on the opacity factor which is at least a factor of two due to the unknown properties of dust grains (Deharveng et al. 2012, see their Sect. 4.1).
Each detected YSO’s bolometric luminosity was computed by integrating the corresponding SED curve. For sources having infrared (IR) counterparts (from the 2MASS, GLIMPSE and MIPSGAL surveys) within a radius of 4, the SED was bipartitioned and partial integrations are made over it. Below 70 μm, a so-called IR luminosity is obtained using a trapezoidal integration scheme (numerical integration done by connecting the data points with straight lines) from the first IR counterpart found in the catalogs to the 70 μm flux. From 70 μm onwards a so-called “Herschel luminosity” is obtained by integrating over the Herschel SED. The bolometric luminosity is obtained by adding these two values. The SED fitting algorithm returned the error on Herschel luminosity while that affecting the IR one is obtained by computing the IR luminosity with the fluxes plus the uncertainties (higher-limit) and with the fluxes minus the uncertainties (lower-limit). On average this resulted in an overall uncertainty of 30% on bolometric luminosity. The average volume density was computed by assuming a spherically symetric core with a diameter equal to its deconvolved size at the reference wavelength(6)After the SED fitting, a second stage selection was applied to obtain the final sample of sources discussed in this paper (described below).
### 4.4. Final selection
At the end of the SED fitting procedure, the sample of sources obeying our first-stage selection scheme (pre-selected sample) was visually inspected at each Herschel wavelength, to ensure the detection of truly compact sources. The requirements for a source to pass successfully this second-stage selection scheme are two-fold: (1) The source had to be clearly seen by eye on one of the Herschel images; and (2) The source’s SED had to be well-constrained.
The first condition was checked by two different people to avoid subjective detections. The second condition allows us to eliminate dubious SED, for example SED with unconstrained peak or SED with increasing flux mostly at SPIRE wavelengths. From this second stage selection, 35 sources were kept for the study (final sample), seven sources having unconstrained SED were added to the tentative sample and 38 sources looking like small clumps or filamentary pieces were rejected.
To summarize the two-stage selection: from the 359 detections, 35 sources are included in the final sample and 87 sources are included in a tentative sample. The latter is composed of sources possessing at least one reliable wavelength (including 70 μm) clearly seen (by eye) in the Herschel images but not pre-selected due to the lack of flux measurements or pre-selected sources whose SED is unconstrained. These detections are thought to be real sources and kept in order to derive their physical properties with an indirect method since their SED cannot be used.
The second-stage selection is definitely highly non-conservative but ensures the reliability of the sources to be investigated. As discussed before, part of the remaining detections could be real sources under filaments with well-constrained SED which are eliminated in order to have a reliable sample of sources. Finally, we are left with two samples: the final one which will be discussed in the paper and the tentative one whose physical parameters will be derived with an indirect method. We assume that the selected sources are associated with RCW 120, that is, they are located at the same distance. This assumption is supported by the study by Martins et al. (2010) who showed using high resolution near-IR spectro-photometric observations that the YSOs with a IR counterpart observed towards RCW 120 are at the same velocity as that of the ionized gas. We further discuss this point in Sect. 5.2.2.
### 4.5. Dust temperature and column density maps
#### 4.5.1. Method
Following the procedure of Hill et al. (2011, 2012), a map of dust temperature at 366 can be obtained by fitting the flux at each pixel, using the MBB model (7)where Fν is the brightness, μ the mean molecular weight (2.8), mH the proton mass and N(H2), the column density. The temperature and the gas surface density Σ500 μm (the 500 μm subscript stands for the corresponding resolution) are the fitting parameters. Another direct byproduct of the fitting algorithm is a map of the H2 column density at the same low-resolution assuming the dust opacity law of Beckwith et al. (1990, see Sect. 4.3. Since most of the observed regions through the HOBYS project are not observed at 100 μm, the method considers only wavelengths higher than 160 μm for the SED fitting even if the data were available. This choice made the comparison of temperature and column density maps obtained for different regions easier.
Following the procedure described by Palmeirim et al. (2013) based on a multi-scale decomposition method, a high-resolution column-density map at 18.̋2 can be computed. The gas surface density smoothed at 250 μm resolution (Σ250 μm) can be written as a sum of gas surface density smoothed at 350 μm (Σ350 μm) and 500 μm resolution (8)The second term in parentheses of Eq. (8) represents the spatial scale structure of the region seen at 350 μm without the largest structure corresponding to the 500 μm observations. An estimate of Σ500 μm can be obtained by considering that these data are approximately equal to Σ350 μmG350−500 where G350−500 is the full width at half maximum (FWHM) of the point spread function (PSF) needed to convolve the 350 μm map to the 500 μm resolution (). The gas surface density Σ350 μm is obtained in the same way than Σ500 μm but excluding the 500 μm data from the SED fitting.
The third term of Eq. (8) represents the structure seen at 250 μm without the largest scale structure seen at 350 μm. As before, Σ350 μm can be written as Σ250 μmG250−350 and 17.̋4 is the FWHM of the PSF needed. The gas surface density Σ250 μm is obtained using the ratio of the 160 μm and 250 μm maps as explained in Sect. 4.1. Finally, Eq. (8) can be rewritten as (9)Hence, the resulting high-resolution density map can be seen as a composite map representing the multi-scale structure of RCW 120 from 250 μm to 500 μm resolution.
#### 4.5.2. Comparison with Anderson et al. (2012) maps
Fig. 2a) SED fitting for the pixels giving the highest temperature. The continuous curve represents the fit made with all Herschel fluxes and dashed curved is the fit obtained by the method of Hill et al. (2012). b) Same for the pixel giving the lowest temperature. No background is subtracted in both cases. Open with DEXTER
Fig. 3a) Ratio of temperature between the maps obtained in this paper (no 70 μm and 100 μm data included and no background subtraction) over the ones obtained by Anderson et al. (2012, see text). The yellow contours correspond to 870 μm emission at 0.1 Jy/beam. b) Same but with the column density maps. Open with DEXTER
Anderson et al. (2012) constructed temperature and column density maps for a sample of H ii regions (Sh 104, W5-E, Sh 241, RCW 71, RCW 79, RCW 82, G332.5-0.1 and RCW 120). Two differences exist between the method he used and the one we used (also described in Hill et al. 2012). In the method used by Anderson et al. (2012), the SED is fitted with all the data available (from 70 μm to 500 μm) and a flat background is subtracted in each Herschel map (see also Battersby et al. 2011). In warm regions (the ionized zone typically) where 70 μm and 100 μm fluxes are high (around 3 × 103 MJy sr-1), the inclusion of these data in the fit induces a shift of the SED towards the high-frequency region and increases the temperature. The cold regions are less affected because the 70 μm and 100 μm fluxes are lower (around 3 × 102 MJy sr-1). Spectral energy distributions representing both case for pixels giving a high and low temperature are shown in Fig. 2. In hot regions, the difference in temperature reaches 4 K (18%) while it is only 2 K (14%) for cold regions.
To make a comparison between the temperature maps obtained using each method, we resample them at 14 pix-1 to the same center and compute their ratio to see how the different methods lead to different temperature in specific regions (see Fig. 3a). The structure seen on these images clearly reproduces the egg-shaped of RCW 120 and shows that the differences occur in specific regions. We define an area in the warmest (around the ionizing star) and coldest region (defined as condensation 5 hereafter) and compute the median and the standard deviation for them and the whole map. Results are shown in Table 3 where HR and CR stand for hot and cold regions. From the first and third line, we see, as expected, that the temperature found is higher when the 70 μm, 100 μm and background subtraction are included particulary for the warmest region where the change is around 6 K. The colder region do not present significant difference (0.2 K).
To estimate the change in temperature induced by the background subtraction, we created another temperature map using the method of Anderson et al. (2012, including the 70 μm and 100 μ but without removing any background. The range of temperature for this method is listed in the second line of Table 3. We note, as expected, that hot regions are more affected by the inclusion of high-frequency maps (2 K) and by the background subtraction (5 K). The error on the fit for the temperature map has a mean value of 0.45 K hence the temperature for cold regions remains roughly the same.
The comparison between the two column density maps is less straighforward since Anderson et al. (2012) used only the 350 μm to obtain this map while our is the byproduct of the SED fitting. In Fig. 3b, the ratio of the column density map shows that warm regions are more affected than colder ones. Due to the anticorrelation between column density and temperature, we expect warm regions to be more affected with the inclusion of the 70 μm and the 100 μm fluxes. Moreover, since the flux is linear with the column density to first order, we expect the background to be roughly equal for all the regions. Table 4 presents the values of column density for the whole map, the densest region (condensation 1 defined hereafter) and an area in the north-west of RCW 120 where the density is low (empty region) using the three different methods. Trends can be seen: as we include high frequency maps and background subtraction, the median column density decreases. The 70 μm and the 100 μm fluxes lead to a difference of 4 × 1021 cm-2 for the warm region and does not change significantly the column density for cold ones. Removing the background causes a loss in column density of 1 × 1021 cm-2. The method described in Hill et al. (2012) was the choice of the HOBYS consortium for the construction of temperature and column density maps and consequently, no background subtraction is made. This rule will allow an unbiased comparison between the different regions observed in the HOBYS project.
Table 3
Range for the temperature map constructed following the method described in Hill et al. (2012, first line), with all wavelengths and no background subtraction (second line) and following Anderson et al. (2012, third line) for the whole map (first and fourth columns), the hottest region (second and fifth columns) and coldest region (third and sixth columns).
Table 4
Range for the column density map constructed following the method described in Hill et al. (2012, first line), with all wavelengths and no background subtraction (second line) and following Anderson et al. (2012, third line) for the whole map (first and fourth columns), the densest region (second and fifth columns) and empty region (third and sixth columns).
## 5. Results
### 5.1. Dust temperature and column density maps
Fig. 4Temperature map of RCW 120 at 36.̋6 resolution with 870 μm emission from LABOCA (in yellow countours) and the final sample of 35 compact sources (white dots) discussed in this paper. Condensations observed at 870 μm are identified following the labelling in DEH09. The temperature ranges from 15 K (dark) to 24 K (white). Warm regions are observed towards the ionized zone. Colder regions are located outside the ionized region and are distributed in cores, filaments and condensations. Open with DEXTER
Fig. 5a) On logarithmic scale, H2 column density map of RCW 120 at 366 resolution with 870 μm emission from LABOCA (in yellow countours), the final sample of sources (black dots) and the three prestellar clumps (red dots). Condensations observed at 870 μm are identified following the labelling in DEH09. The density values range from 7 × 1021 cm-2 to 4 × 1023 cm-2. b) High resolution H2 column density map of RCW 120 at 182 resolution (in red) and Hα emission (in blue) from the SuperCOSMOS Hα Survey. The column density values range from 7 × 1021 cm-2 to 9.4 × 1023 cm-2. Open with DEXTER
Figure 4 presents the temperature map obtained for RCW 120 with labelled condensations of DEH09 defined by yellow contours. The temperature ranges from 15 K to 24 K. Temperatures between 19 K and 24 K are observed towards the ionized region, the highest temperature being observed to the south. A colder medium with a temperature around 15–18 K surrounds the H ii region. This colder medium is highly structured, organized in clumps, filaments and condensations that correspond to the condensations defined in DEH09 where the sources are located. A remarkable feature is the sharp edge seen on the temperature map at the south-western border of the ionized region. This drop in temperature (from 21 K to 16 K) corresponds to the presence of the (sub)millimeter condensation 1.
Figure 5a presents the low resolution (36.̋6) column density map with the condensations of DEH09 and Fig. 5b the high-resolution column density map (182) together with Hα emission from the SuperCosmos Hα survey (Parker et al. 2005). The values range from 7 × 1021 cm-2 to 4 × 1023 cm-2 for the low-resolution map and goes up to 9 × 1023 cm-2 for the high-resolution one. We checked that the convolution of the high resolution column density map to 366 with the same grid agrees with the values found for the low resolution one. As expected, the ionized region with its egg-shaped corresponds to a drop in column density compared to the PDR and low column density filaments (N(H2) = 1.7 × 1022 cm-2) are observed within it. These are seen in absorption in the optical (see Fig. 1 in ZAV07) and show some compact structures that host sources (see Fig. 7 and Sect. 5.2). Around the ionized region, a highly structured material is distributed in filaments and clumps where the nine condensations already observed at 1.3 mm (ZAV07) and 870 μm (DEH09) are well seen. The leaking of the UV flux presented in DEH09 (see their Fig. 16) is also seen on Fig. 5b. It creates the extended elliptical structures observed on the southern part of the ionized region together with the structures observed on the north-eastern and north-western parts. Three pre-stellar clumps are seen on the temperature and density maps at (α,δ) = (25791, 3828), in absorption on the 70 and 100 μm images and in emission at 160 μm onward (see Fig. 5a). In general, the size and elongation of these clumps are too large to be part of the studied sample but their detection at SPIRE wavelengths (and at 870 μm, see Fig. 2 in DEH09) suggests that they are pre-stellar clumps. The contrast between the high and low density regions is equal to 60. The highest density is observed in condensation 1 located at the south-western edge of the ionized region. This condensation could have been formed due to compression from the ionization region (Tremblin et al. 2014b). Towards RCW 120, star formation is observed in column density region higher than 2.2 × 1022 cm-2.
### 5.2. Compact sources
#### 5.2.1. Compact sources’ spatial distribution
Figure 6 shows the 35 selected sources superimposed on a PACS 70 μm gradient-filtered image of RCW 120. This image was produced with a standard 3 × 3 bi-directional Sobel-Feldman convolution kernel applied to the original image. For each pixel, the derivatives along the horizontal and vertical directions are obtained and the final value for each pixel is computed as giving an approximate value of the gradient norm. This gradient-filtering cuts-off the diffuse emission and enhances the contrast of steep emission regions. In the following we define the PDR as the filamentary emission region revealed by this gradient-filtering and shown by the green dashed contour seen in Fig. 7.
Using the selection criteria described in Sect. 4 and a visual inspection as a final check, we end up with 35 sources that are discussed (i.e., sources for which the temperature, envelope mass and bolometric luminosity can be derived). 87 additional detected sources are also shown in Fig. 7 but have less than two reliable fluxes (up to three if the 70 μm is included) or have unconstrained SEDs, meaning that their properties cannot be derived using SED fitting. Their original fluxes (given by getsources without any aperture scaling or color-corrections) are given in Table A.1.
Fig. 6All 35 compact sources detected using getsources (and discussed in the text) superimposed on a 70 μm gradient image of RCW 120. The sources are color-coded depending on their location: red circles for sources observed towards the PDR, blue squares for sources outside (see text). Open with DEXTER
As seen on Fig. 6, 14 sources are located outside the PDR and 21 are inside.
Fig. 7All 87 sources detected by getsources but not part of the final sample due to the lack of reliable flux measurements, mainly at SPIRE wavelengths. Physical parameters of these sources are derived in a secondhand way explained and presented in Sect. 5.2.4. The PDR region is enclosed in the green countours (see text). Open with DEXTER
#### 5.2.2. Compact sources’ association with the region
Spectroscopic observations with SINFONI at the ESO-VLT showed the YSOs detected in the near IR towards RCW 120 have the same velocity as that of the ionized gas (8 km s-1) and are thus associated with RCW 120 (Martins et al. 2010). Even though most of the sources are thought to be part of RCW 120 because they are embedded in its filamentary region, sources located outside the PDR might not be associated with the region.
Studying the J = 0 → 1 transition of 12CO, 13CO, C18O and C17O with the ATNF Mopra 22 m radio telescope, Anderson et al. (2015) identified three emission peaks at 7 km s-1 (main temperature peak around the velocity of the ionized gas), 30 km s-1 and 60 km s-1. The J = 0 → 1 emission from the CO isotopologues (Anderson et al. 2015) integrated between 75 km s-1 and 50 km s-1, 35 km s-1 and 15 km s-1 and 15 km s-1 and +3 km s-1 are presented in Fig. 8 for 12CO, Fig. 9 for 13CO and Fig. 10 for C18O. We point out that, contrary to the other maps in this paper, these figures are given in galactic coordinates.
Condensation 6, the northern-part of condensation 5 and sources 55 and 150 (the western part, between condensation 6 and 7) are located outside the PDR but present an emission peak around 15 km s-1 and +3 km s-1 which indicates that they are part of RCW 120. Condensation 9 presents an emission peak in the same range but also between 35 km s-1 and 15 km s-1. Although we cannot rule out the fact that condensation 9 might in the foreground or background of the region, the emission peak is stronger in the main-peak velocity range and therefore, we consider this condensation to be part of RCW 120. Between 15 km s-1 and +3 km s-1, the other condensations are distributed along the strong CO emission, following the PDR that surrounds the ionized region. This strongly suggests that the 35 sources of the final sample are indeed associated with RCW 120.
Fig. 8Integrated intensity of 12CO (J = 0 → 1) between a) −75 km s-1 and −50 km s-1; b) −35 km s-1 and −15 km s-1; c) −15 km s-1 and 3 km s-1. The dots represent the 35 sources of the final sample and the contours stand for the 870 μm condensations of DEH09. The unit of the color image is in Jy km s-1 beam-1. Open with DEXTER
Fig. 9Integrated intensity of 13CO (J = 0 → 1) within the same velocity ranges. Dots and contours are the same as in Fig. 8. Open with DEXTER
Fig. 10Integrated intensity of C18O (J = 0 → 1) within the same velocity ranges. Dots and contours are the same as in Fig. 8. Open with DEXTER
#### 5.2.3. Compact source properties
We have shown that the detected sources are likely to be associated with RCW 120, that is, located at the same distance (see Sect. 5.2.2). Table 5 gives the physical properties derived for the 35 sources: the getsources identification number (identification number in DEH09 given in parenthesis if any), the envelope temperature T, envelope mass Menv, bolometric luminosity Lbol, ratio of the submillimetric luminosity (defined as the luminosity computed from 350 μm onwards) over the bolometric luminosity, ratio of the envelope mass over the bolometric luminosity, associated condensation towards which the source is observed, evolutionary class derived from the study of DEH09 and from Lλ ≥ 350 μm/Lbol, near- and mid-infrared counterparts, and volume density.
In the following, source ID refers to IDs given in Col. 1 of Tables 5 and A.1.
Among the 35 sources of the final sample, 14 match the previous list discussed in DEH09. The sources previously identified on the basis on GLIMPSE and MIPSGAL data are now identified at Herschel wavelengths, based on their spatial correspondance in a radius of 4. We discuss their evolutionary class in Sect. 6. Adopting β = 1.6 from the latest Planck results, modifies the physical parameters (envelope temperature, envelope mass and bolometric luminosity) by 10% in average throughout the MBB model and color corrections factors. A higher value of β better represents denser regions (Paradis et al. 2012).
Table 5
Properties of the 35 compact sources discussed in the text.
Figure 11 presents the distribution of dust envelope temperature for compact sources observed towards RCW 120. All but three sources (24, 28, 36) have envelope temperature lower than 25 K. As discussed in ZAV07, sources 24 and 28 are observed towards condensation 4. They are classified as Herbig Ae/Be objects and contain a central star of spectral type B4V for source 24 and B7V for source 28. Their extended nature is thought to be the result of local PDR due to radiation of the star but not massive enough to form H ii regions which is consistent with the envelope mass derived, 2 M and 1 M for source 24 and 28, respectively. Source 36 is located in a region of low density and high temperature (23 K).
Figure 12 presents the distribution of envelope mass for sources observed towards RCW 120. Twenty-seven sources have a low mass (Menv ≤ 20 M) envelope. Sources with envelope mass up to 1 M are detected here. From their Herschel study of dense cores in NGC 6334, Tigé et al. (2017) derived an envelope mass limit of 60 M (lower limit at which they detect ongoing high-mass star activity) for a core to form a high-mass star. Five sources (2, 9, 10, 39, 94) have envelope mass higher than this limit and four (2, 9, 10, 39) are located in condensation 1. Source 94 is located in condensation 6. The column density towards these condensations is higher than 1.7 × 1022 cm-2. We point out the fact that the high-mass cores represent 15% of the total number of sources in the final sample. Nevertheless, two biases could radically change the result. Firstly, it is possible that the cores are unresolved even at the best Herschel resolution (5.̋9) and represents more than one YSO then decreasing the number of possible high-mass cores. We are confident that source 2 might represent this problematic case. Secondly, our final sample represent reliable sources but is incomplete. According to our selection criteria, the non-selected sources should represent low-mass objects. Consequently, this value of 15% should represent a higher-limit to the number of high-mass cores.
Table 6 summarizes the physical properties of the final sample of sources.
#### 5.2.4. Properties of the tentative sources
Physical properties of the 87 tentative sources shown in Fig. 7 could not be obtained directly with the SED fitting due to a lack of Herschel measurements (see Sect. 4). From the final sample of 35 sources, we fitted the envelope temperature obtained from the SED versus the temperature found at the source location on the temperature map (Fig. 13a). We then used the fitted linear relation to assign a temperature to each of the tentative sources. The mass is computed using the Hildebrand formula (Hildebrand 1983) with one of the Herschel fluxes. For the bolometric luminosity, we used the relation between the flux density and the bolometric luminosity (log10(Fν) log10(Lbol)) using the final sample (Fig. 13b) (Ragan et al. 2012). The relation of Dunham et al. (2008) for embedded protostars is normalized at 1.3 kpc (10)where \begin{lxirformule}$F_{\nu}^{1.3~\rm kpc}$\end{lxirformule} is the flux density at 1.3 kpc and Fν is the flux density of Dunham et al. (2008). This relation is represented in Fig. 13 by the blue-dotted line. Results are given in Appendix B.
#### 5.2.5. Mass of the condensations using the H2 column density map
Using the H2 column density map, we derived the mass of each condensation defined by the same area as the one used by DEH09 to compute the mass from APEX 870 μm data. We used the following formula: (11)where Apixel is the area of a pixel in cm2, μ is the mean molecular weight (2.8), mH is the hydrogen atom mass and is the H2 column density value at pixels (i, j). DEH09 computed the mass using the Hildebrand formula with T = 20 K (and also with T = 30 K but this value for the condensations is too high compared to the ones derived from the Herschel temperature map). The results are given in Table 7. Column 1 gives the condensation number from DEH09. Column 2 gives the condensation’s mass derived using the H2 column density map and Eq. (11), and Col. 3 the mass derived by DEH09 using the 870 μm data, assuming a dust temperature of 20 K.
Compared to DEH09, we obtain higher masses for the condensations. At first sight, the absence of background subtraction could explain this difference but since they are massive, the background only accounts for a small amount of the total pixels value. The main difference between the two results could be explained by the extending emission filtering of the ground-based telescope at 870 μm, leading to an underestimation of the mass (Csengeri et al. 2016). We point out that the condensation mass is critical for star-formation rate and star-formation efficiency estimates (Liu et al. 2017).
Fig. 11Envelope temperature distribution for the 35 sources observed towards RCW 120. Open with DEXTER
Fig. 12Histogram of envelope mass for sources observed towards RCW 120. Open with DEXTER
Table 6
Physical properties of the final sample.
## 6. Discussion
### 6.1. Compact sources’ evolutionary stage
As the submillimetric luminosity depends on the envelope mass and the bolometric luminosity on the stellar mass, André et al. (1993) proposed to use the submillimetric to bolometric luminosity ratio as an evolutionary indicator. Bontemps et al. (2010a) used the same kind of criteria to distinguish Class 0 and Class I objects in the Aquila Rift using Herschel data but the limits used were different. Class 0 are defined with Lλ ≥ 350 μm/Lbol> 0.03 while it is higher than 0.01 for Class I. The region between 0.01 and 0.03 contains sources with uncertain classification.
Figure 14 shows the distribution of sources’ envelope temperature, color-coded according to their Lλ ≥ 350 μm/Lbol value. As expected, there is a relation between the Lλ ≥ 350 μm/Lbol value and the envelope temperature of the source. Class I objects (in red) have a higher temperature than Class 0 objects (in green) while the uncertain cases (in blue) are located in between.
Table 7
Condensations’ mass using the low resolution density map (second column) and from DEH09 at 20 K (third column).
Fig. 13a) Temperature given by the SED fitting versus temperature obtained at the source location in the temperature map for the final sample of 35 sources (red diamonds). b) ν70 μm × S70 μm versus bolometric luminosity for the final sample of 35 sources following Ragan et al. (2012) where the black continuous line represents the fit and the blue dotted one represented the relation from Dunham et al. (2008). Open with DEXTER
Figure 15 shows the sample of the 35 compact sources on the Lλ ≥ 350 μm/Lbol versus Menv diagram coded depending on their location with respect to the PDR. 80% of the sources are located in the Class 0 region with Lλ ≥ 350 μm/Lbol> 0.01. If an age gradient was at work in the region, sources towards the PDR would have been under the Class I limit and sources outside the PDR would have been above the Class 0 limit. Depending on the Class 0 limit taken, 1% for André et al. (1993) or 3% for Bontemps et al. (2010a), a weak trend in favor of very young objects out of the PDR can be seen. Towards the PDR, no trend is seen since these sources spread over the entire range of Lλ ≥ 350 μm/Lbol values.
Saraceno et al. (1996) presented the evolution of Class 0 to Class II via spherical accretion by a path in the LbolMenv diagram. The unknown mechanism of massive star formation (scaled-up analogue of low-mass star or merging of low-mass stars) and the difficulties of establishing the evolution phase for individual YSOs (d ≥ 1 kpc, non-resolved clusters) make the construction of an evolutionary scenario for high-mass objects difficult. An attempt has been made by Molinari et al. (2008) to reproduce the LbolMenv evolutionary paths for massive objects. From a sample of 42 sources characterized by their [25–12] color value, they classified them as IR-sources if the SED could be fitted with a zero age main sequence (ZAMS) model or MM-sources if a MBB model was used. This difference in the SED translates into a different location in the LbolMenv diagram well separated by a line representing IR-sources (see Molinari et al. 2008) practically equivalent to the strip of low-mass Class I objects from Saraceno et al. (1996). Assuming a scaled-up analogue of the low-mass star regime with a turbulent core, a model of time dependent accretion rate (McKee & Tan 2003) with fixed final stellar masses and core surface densities, evolutionary paths in the LbolMenv have been computed. The first sequence (indicated in Figs. 1618) represents the accretion phase where the luminosity is dominated by the accelerated accretion and the lost of envelope mass is due to accretion, outflows and possible draining by other YSOs. At the end of the first phase, the star reaches or is close to the ZAMS with a final stellar mass. During the second phase (also indicated in the figures), the envelope mass continues to decrease (the increase of stellar mass by residual accretion is neglected in this model) and the luminosity is now the sum of accretion and stellar luminosity. The final point of the paths corresponds to a lost of 90% of the envelope mass for the four low-mass tracks and to a time of 2.1 × 106 yr and 2.7 × 106 yr when the star is optically visible for the two highest-mass tracks. We warn readers that this is a simple model which cannot be used to predict accurately the evolution of YSOs but rather to obtain indication about the evolutionary class of a source.
Fig. 14Histogram of the temperature color-coded according to Lλ ≥ 350 μm/Lbol in green for Class 0, red for Class I and blue for uncertain cases. Open with DEXTER
In Fig. 16, we plot the evolutionary paths for low-mass (Saraceno et al. 1996) and high-mass stars (Molinari et al. 2008) with their corresponding stripes for Class I sources and include our sample. Sources with Menv> 10 M are all located under the Class I stripe and a qualitative analogy with Fig. 9 of Molinari et al. (2008) permits a rough classification of them: sources 1, 2, 3, 5, 8, 14 are Class I and sources 9, 10, 39, 40, 63, 94, 179 are Class 0. On the contrary, the distribution of sources with Menv< 10 M has a higher dispersion around the Class-I strip, also seen in Fig. 9 of Molinari et al. (2008). As in Fig. 15, sources located outside the PDR might tend to be younger but no evidence for a more evolved stage for sources located inside the PDR is seen, as it could have been expected if star formation progresses gradually in the surrounding medium, following the expansion of the ionization front and the leaking of the ionizing radiation.
In Fig. 17, the sources are color-coded depending on their location in the condensations. We see a clear trend for the sources’ envelope mass and evolutionary stage to be determined by their hosting condensation: sources observed towards condensation 1 have the highest envelope mass and are in low evolutionary stage while sources in condensation 5 are low mass envelope sources, and possibly in a later evolutionary stage. Sources observed towards condensation 4 (pre-existing clump) tend to be evolved and of intermediate envelope mass. Condensation 8 is observed further away from the ionized front and hosts sources in a low-evolutionary stage. Sources 50 and 155 do not belong to any condensation according to DEH09 but are spatially close, outside the PDR and in a similar evolutionary state than the sources in condensation 5. Sources in condensation 2 show a higher dispersion in this diagram compared to the other condensations. The eastern part of condensation 2 contains Class 0-Class I objects of intermediate mass and low-mass objects in the western part. The eastern-part of this condensation seems to be radiation-shielded thanks to the filament in front of sources 3, 8 and 16 while the western-part receives a significant part of the radiation through photons’ leaking. This might explain the dispersion of sources’ properties observed towards this condensation.
Fig. 15Lλ ≥ 350 μm/Lbol versus Menv. The dotted-dashed lines represents the Lλ ≥ 350 μm/Lbol limits between Class 0 and Class I from Bontemps et al. (2010a) and sources are color-coded depending on their location: red squares for sources inside the PDR, blue triangles for outside. Open with DEXTER
Fig. 16Lbol versus Menv. Evolutionary tracks are adapted from Saraceno et al. (1996) and Molinari et al. (2008). Labeled arrows indicate (1) the accreation phase and (2) envelope cleaning phase. Sources are coded as a function of their location with respect to the PDR: red squares sources are for the ones observed towards the PDR and blue triangles for those outside. Error bars for Lbol and Menv are shown by gray lines. Open with DEXTER
Fig. 17Same as Fig. 16 but sources are color-coded as a function of their hosting condensation previously identified using the 870 μm and 1.3 mm emission (ZAV07, DEH09). The condensation number refers as the one given in Fig. 5 (left). Open with DEXTER
In Fig. 18, the sources are color-coded depending on their Lλ ≥ 350 μm/Lbol value. We note that the magenta diamond sources (Lλ ≥ 350 μm/Lbol< 0.01) are above the Class I stripe of Saraceno et al. (1996), red square sources (Lλ ≥ 350 μm/Lbol> 0.03) are below the Class I stripe of Molinari et al. (2008) and blue triangle sources (0.03 >Lλ ≥ 350 μm/Lbol> 0.01) are spread around these stripes. Hence, the two methods give consistent results to derive sources’ evolutionary class.
Fig. 18Same as Fig. 16 but the sources are color-coded as a function of their Lλ ≥ 350 μm/Lbol ratio. Magenta diamonds for Class I objects and red squares for Class 0, while blue-triangle sources are uncertain. Open with DEXTER
We suggest that the main parameter that controls the star formation and the evolutionary stage of the YSOs is the column density of their hosting condensation. This means that a simple search for YSOs’ age gradient around H ii region cannot be used as a simple indicator for establishing evidence for triggered star formation.
### 6.2. Evolutionary stage derived by DEH09
Color-color diagrams using near- and mid-IR data can also be used to infer the class of a source. DEH09 used Spitzer GLIMPSE and MIPSGAL colors to discuss the evolutionary stage of YSOs observed towards RCW 120. The results are given in Cols. 8 and 9 of Table 5. Figures 19 to 25 present a zoom of the sources observed from condensation 1 to 8 on the gradient image of the Herschel PACS 70 μm emission and the 870 μm emission in countours. All the sources in these figures are detected by getsources and identified according to their getsources identification number in Tables 5 and A.1. Final-sample sources, and those identified by DEH09 are indicated in the figures. Hence, non-labelled sources are either not part of the final sample and/or not detected by DEH09. In the following we compare the evolutionary class of sources obtained from mid-IR color-color diagrams (DEH09) with the one obtained in this paper.
Condensation 1 (2530 M): this is the most massive and densest condensation observed. The classification of Source 9 (40) and 63 (37) do not agree with DEH09. Both objects do not have IR-counterpart in the J, H and Ks and following Fig. 5 of DEH09, they have a K− [ 24 ] higher than 10 mag. Hence, they are likely to be in an early evolutionary stage. Source 2 is the massive Class 0 object discussed in Zavagno et al. (2010, see also DEH09). It is located at the peak of the column density distribution (N(H2) = 4 × 1023 cm-2) and has the highest envelope mass (Menv = 174 M) and bolometric luminosity (Lbol = 1163L) of the sample. It is probably a Class 0 source, since no IR-counterpart is detected. Sources 10, 39 and 82 are not detected by DEH09 and are classified as Class 0. This condensation hosts 80% of the massive cores. Because condensation 1 is the densest and most massive in RCW 120, the core formation efficiency (CFE) is expected to be higher compared to the other condensations (Motte et al. 1998; Bontemps et al. 2010b; Liu et al. 2017).
Condensation 2 (540 M): source 3 (50) has been classified as a Class I source by DEH09 in agreement with our classification. Sources 16 and 36 are not discussed by DEH09 maybe because of the high filamentary background around these compact sources. Therefore, no IR-counterpart could be reliably detected and the sources are classified as Class 0I.
Condensation 4 (350 M): sources 6 (76), 14 (69) and 19 (67) are classified as at least Class I objects and in agreement with DEH09. Source 24 (Object A) and 28 (Object B) are surrounded by local PDR revealed as shells on the gradient 70 μm image. Because their IR counterparts are diffuse, no attempt has been made by DEH09 to classify them but are likely Class I or further. ZAV07 suggested that this condensation could be a pre-existing clump engulfed in the ionized region. A subsequent RDI process could have accelerated the collapse which might explain why the objects are in a higher evolutionary stage compared to the other condensations.
Condensation 5 (1580 M): this region is highly structured and hosts nine YSOs among the 35 discussed. Among the sources of the final sample and discussed by DEH09, source 33 is the only one whose class does not agree. In the same way as sources 9 and 63 in condensation 1, DEH09 did not measured any near IR-counterpart for this source and its K− [ 24 ] value should also be higher than ten. Therefore, this source is also in an early evolutionary stage. Sources 44 and 48 present IR-counterparts at all wavelengths except 8 μm and only at 24 μm respectively but are too weak (Menv = 1 M) to be discussed by DEH09. They probably are weak Class I sources. Sources 84 and 123 do not present IR-counterparts and are classified as Class 0.
Fig. 19Condensation 1 and 7: 870 μm emission (countours) superimposed on the gradient image of the Herschel PACS 70 μm emission. Sources are identified with their getsources identification number. Sources coded with a red circle are those discussed (among the sample of 35 sources). The green square sources are detected but not discussed due to a lack of Herschel measurements (see text). Open with DEXTER
Fig. 20Condensation 2: same as for Fig. 19. Open with DEXTER
Fig. 21Condensation 3: same as for Fig. 19. Open with DEXTER
Fig. 22Condensation 4: same as for Fig. 19. Open with DEXTER
Fig. 23Condensation 5: same as for Fig. 19. Open with DEXTER
Condensation 6 (330 M): we identify a massive YSO (source 94) of Menv = 70M with IR-counterparts but classified as Class 0. It is possible that the higher fluxes coming from source 4 contaminates source 94 at long wavelengths hence overestimating the Lλ ≥ 350 μm/Lbol value and hence, the classification.
Condensation 8 (370 M): located south of the ionized region (see Fig. 5b), this condensation was probably formed by the leaking of UV photons passing through the low density medium seen on the high resolution density map (see Fig. 5b) at (25807, 3852). Hence, sources 175 and 179 were probably formed later compared to the sources located in the PDR. This is confirmed by their low-temperature (between 11.2 K and 13.3 K), low evolutionary stage and the absence of IR counterparts.
Fig. 24Condensation 6: same as for Fig. 19. Open with DEXTER
Fig. 25Condensation 8: same as for Fig. 19. Open with DEXTER
### 6.3. Comparison with the Walch et al. (2015) model
Walch et al. (2012, 2013) show that clumpy, shell-like structures like that seen in RCW 120 are probably attributable to pre-existing density structures in the natal molecular cloud. During the expansion of the H ii region and the collection of the dense shell, the pre-existing density structures are enhanced and lead to a clumpy distribution within the shell. The masses and locations of the swept-up clumps depend on the fractal density structure of the molecular cloud, through the parameters n and ρ0, related to the fractal dimension of the cloud and the density, respectively (Walch et al. 2013, see Sect. 2).
Walch et al. (2015) compared simulations and APEX-LABOCA 870 μm observations of RCW 120. They performed three-dimensional SPH simulations of H ii regions expanding into fractal molecular clouds in order to investigate whether the formation of massive clumps in the swept-up shell necessarily requires the C&C mechanism (Elmegreen & Lada 1977). They show that a distribution of clumps similar to the one seen in RCW 120 can be explained by a non-uniform initial molecular cloud structure, implying that a shell-like configuration of massive clumps does not imply that the C&C mechanism is at work. They find a hybrid form of triggering, which combines elements of C&C mechanism and RDI.
We discuss below how the Herschel results presented here compare with their findings. The temperature map obtained from Herschel images indicates that dust temperatures lower than 30 K, the temperature used by Walch et al. (2015), are observed. This means that the mass they derived for the condensations represents a lower limit. The H2 column density maps obtained from Herschel images show that the observations better correspond with a low value of ρ0, where ρ0 is the scaling constant for the density fluctuations field caracterizing the width of the density PDF (Walch et al. 2012, 2013). However pillars are not observed on the northern, lower density part of the ionized region as obtained in their simulations (Walch et al. 2015, Fig. 2). This suggests that the numerical treatment adopted better describe higher density regions while lower density regions seem to be better represented by the higher value of ρ0.
The distribution of sources observed towards the central part of the ionized region in the simulation is also not observed (Walch et al. 2015, their Fig. 2 right). The distribution of sources in condensations is also not well reproduced by this model, as seen on Figs. 6 and 7. The number of sources they found towards the three main condensations well corresponds with our findings – nine sources towards condensation 1 (their condensation 3), three sources towards condensation 2 (their condensation 1) and six sources towards condensation 4 (their condensation 2). For the two runs, the condensation 3 formed the highest number of high-mass protostars (12.7 M and 19 M in average). This is in agreement with the observations where high-mass cores are found towards our condensation 1. We remind the reader that the mass computed in this paper is the envelope mass while Walch et al. (2015) use the protostars mass. Therefore, it is not surprising that the mass computed in condensation 1 are much higher compared to Walch et al. (2015). Nonetheless, their condensation 2 host protostars of lower masses (9 M in average) while our corresponding condensation contains low-mass cores. A new modeling of this region using Herschel results would help for discussing the star formation history and its propagation in the ambient medium. It would be interesting to discuss the parameters (and mechanisms) that lead to the formation of the high number of high-mass cores observed towards condensation 1.
### 6.4. Comparison with the model of Torii et al. (2015)
Using MOPRA observations of 12CO, 13CO and C18O in the J = 1 → 0 transition, Anderson et al. (2015) did not detect any expansion of the H ii region which means that the expansion velocity is either too low to be observed or inexistant. Considering this fact, Torii et al. (2015) explained the formation of the O star and the corresponding ring-like structure following the cloud-cloud collision (CCC) scenario from Habe & Ohta (1992) which can be described in three stages. First, a small and a large clouds are heading towards each other. Secondly, a cavity is created in the large cloud due to the collision with the small cloud. The place where the two clumps collided is compressed, leading to massive star formation. Finally, the cavity in the large cloud is filled with the ionzing radiation coming from the recently formed massive star(s). A schematic explanation can be found in Torii et al. (2015, see their Figs. 12 and 13). In the case of RCW 120, they suggest that the weak leaking of Hα emission in the northern part of the ring indicates only the beginning of the erosion by the ionizing radiation. Hence, the triggering which is assumed to take place as a consequence of the C&C mechanism cannot be seen yet. However, after the formation of the ionizing star, a triggering mechanism caused by the compression of the remaining small clump on the large clump is plausible. This could be an alternative explanation which should only affect the formation of YSOs in the southern part of the ring. This study shows that the main driver of the evolutionary stage is the density of the hosting condensation and not the (projected) distance to the ionizing star as expected earlier.
## 7. Summary and conclusions
We used Herschel PACS and SPIRE images, complemented with existing data, to study the star formation observed towards the Galactic ionized region RCW 120.
Zavagno et al. (2010) presented the first results from Herschel, however this paper is an in-depth study under the HOBYS recipe which allow us to compare the results between different regions observed in this key program. Moreover, while the first Herschel results were focused on source 2, we produced the first reliable catalog of compact sources using Herschel data in this region.
The unprecedent coverage and sensitivity in the far infrared of the Herschel data allow us to derive, for the first time, the temperature and H2 column density map for this region. The temperature ranges from 15 K to 24 K and the column density from 7 × 1021 cm-2 up to 9 × 1023 cm-2. The condensations defined by DEH09 at 870 μm corresponds to cold and dense regions where the majority of the sources are detected.
We also derive, for the first time, the envelope mass, envelope dust temperature and bolometric luminosity of compact sources detected there. The temperature ranges from 11.2 K to 34.1 K with a median of 19.1 K, from 1 M to 174 M with a median of 4 M for the envelope mass and from 5 L to 1163 L with a median of 30 L. The volume density was computed by assuming a spherical source with the size defined at the reference wavelength (160 μm or 250 μm) going from 2 × 105 cm-3 to 108 cm-3.
We use the physical parameters to discuss the star formation history in this region. We show that most of the compact sources (21 of the 35) are observed towards the PDR.
Thanks to Herschel data, we detected 21 sources, mostly in an early evolutionary stage, which were not detected and hence discussed in DEH09.
Using the Lλ ≥ 350 μm/Lbol criteria from Bontemps et al. (2010a), we classify the sources between Class 0, intermediate and Class I. We found respectively 14, 15 and 6 sources in this classification.
We find that the projected distance to the ionizing source is not the parameter which controls the evolutionary stage of the sources, contrary to what was expected before, wrongly. In fact, the main driver for this is the density of the condensation where the source is located, whatever its distance to the ionizing sources. Consequently, there is no conflict between possible triggering and projected distance because the density plays a major role in the overall picture. Despite the fact that the southern layer of the region is compressed (Tremblin et al. 2014), Herschel data do not allow us to conclude on triggering. High resolution spectroscopic data are needed to determine the structure (possible fragmentation) of the cores and the evolutionary stage of the sources in these cores.
3
HIPE is a joint development software by the Herschel Science Ground Segment Consortium, consisting of ESA, the NASA Herschel Science Center, and the HIFI, PACS, and SPIRE consortia.
5
The getsources algorithm is publicy available and can be downloaded at http://www.herschel.fr/cea/gouldbelt/en/getsources/
## Acknowledgments
We thank the referee for his/her report which helps to improve the quality of the paper. This work is based on observations obtained with Herschel-PACS and Herschel-SPIRE photometers. PACS has been developed by a consortium of institutes led by MPE (Germany) and including UVIE (Austria); KU Leuven, CSL, IMEC (Belgium); CEA, LAM (France); MPIA (Germany); INAF-IFSI/OAA/OAP/OAT, LENS, SISSA (Italy); IAC (Spain). This development has been supported by the funding agencies BMVIT (Austria), ESA-PRODEX (Belgium), CEA/CNES (France), DLR (Germany), ASI/INAF (Italy), and CICYT/MCYT (Spain). SPIRE has been developed by a consortium of institutes led by Cardiff Univ. (UK) and including: Univ. Lethbridge (Canada); NAOC (China); CEA, LAM (France); IFSI, Univ. Padua (Italy); IAC (Spain); Stockholm Observatory (Sweden); Imperial College London, RAL, UCL-MSSL, UKATC, Univ. Sussex (UK); and Caltech, JPL, NHSC, Univ. Colorado (USA). This development has been supported by national funding agencies: CSA (Canada); NAOC (China); CEA, CNES, CNRS (France); ASI (Italy); MCINN (Spain); SNSB (Sweden); STFC, UKSA (UK); and NASA (USA). This work is based on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA. We have made use of the NASA/IPAC Infrared Science Archive to obtain data products from the 2MASS, Spitzer -GLIMPSE, and Spitzer -MIPSGAL surveys. The Centre National d’Etudes Spatiales (CNES) is deeply acknowledged for the financial support. Part of this work was supported by the ANR (Agence Nationale pour la Recherche) project “PROBeS”, number ANR-08-BLAN-0241.
## Appendix A: Herschel original fluxes for sources detected by getsources (see text)
Table A.1
Identification, position, and fluxes for compact sources detected towards RCW 120.
## Appendix B: Properties of tentative sources
Table B.1
Tentative-source properties.
## Appendix C: Image of the sources
In this section, we present the three first sources of the final sample on the 2MASS, Spitzer GLIMPSE and MIPSGAL, Herschel and density maps (low and high resolution) together with the result of their SED fitting. The maps (1′× 1′ for IR maps and 2′× 2′ for Herschel maps) are centered on the coordinates given by getsources with the corresponding wavelength written in the upper-left part of the image.
Fig. C.12MASS, GLIMPSE, MIPSGAL, Herschel, low-resolution and high-resolution images of source 1. If a counterpart is seen in the infrared catalogs, a black circle of 4″ radius is shown to indicate the location of this counterpart otherwise, the center (position of the source given by getsources) is indicated by a cross. For Herschel images, the ellipses shown are the getsources parameters, AFWHM and BFWHM. For the representation of the SED fitting, the original fluxes are represented by a magenta diamond, the corrected fluxes (flux scaling + color correction) at the wavelength used for the fitting are represented by a black cross, and the blue triangles represent the IR counterparts, if any. The identification number of the source is given in the title of the SED. Open with DEXTER
Fig. C.2Same as Fig. C.1 for source 2. Open with DEXTER
Fig. C.3Same as Fig. C.1 for source 3. Open with DEXTER
## All Tables
Table 1
Summary of Herschel observational parameters.
Table 2
Minimum, maximum and median values for the color correction factors at Herschel wavelengths for the final sample of sources.
Table 3
Range for the temperature map constructed following the method described in Hill et al. (2012, first line), with all wavelengths and no background subtraction (second line) and following Anderson et al. (2012, third line) for the whole map (first and fourth columns), the hottest region (second and fifth columns) and coldest region (third and sixth columns).
Table 4
Range for the column density map constructed following the method described in Hill et al. (2012, first line), with all wavelengths and no background subtraction (second line) and following Anderson et al. (2012, third line) for the whole map (first and fourth columns), the densest region (second and fifth columns) and empty region (third and sixth columns).
Table 5
Properties of the 35 compact sources discussed in the text.
Table 6
Physical properties of the final sample.
Table 7
Condensations’ mass using the low resolution density map (second column) and from DEH09 at 20 K (third column).
Table A.1
Identification, position, and fluxes for compact sources detected towards RCW 120.
Table B.1
Tentative-source properties.
## All Figures
Fig. 1RCW 120: Herschel-PACS 70 μm (blue), 160 μm (green) and Herschel-SPIRE 250 μm (red). The field size is 21.8′ × 24.5′. North is up, east is left. Open with DEXTER In the text
Fig. 2a) SED fitting for the pixels giving the highest temperature. The continuous curve represents the fit made with all Herschel fluxes and dashed curved is the fit obtained by the method of Hill et al. (2012). b) Same for the pixel giving the lowest temperature. No background is subtracted in both cases. Open with DEXTER In the text
Fig. 3a) Ratio of temperature between the maps obtained in this paper (no 70 μm and 100 μm data included and no background subtraction) over the ones obtained by Anderson et al. (2012, see text). The yellow contours correspond to 870 μm emission at 0.1 Jy/beam. b) Same but with the column density maps. Open with DEXTER In the text
Fig. 4Temperature map of RCW 120 at 36.̋6 resolution with 870 μm emission from LABOCA (in yellow countours) and the final sample of 35 compact sources (white dots) discussed in this paper. Condensations observed at 870 μm are identified following the labelling in DEH09. The temperature ranges from 15 K (dark) to 24 K (white). Warm regions are observed towards the ionized zone. Colder regions are located outside the ionized region and are distributed in cores, filaments and condensations. Open with DEXTER In the text
Fig. 5a) On logarithmic scale, H2 column density map of RCW 120 at 366 resolution with 870 μm emission from LABOCA (in yellow countours), the final sample of sources (black dots) and the three prestellar clumps (red dots). Condensations observed at 870 μm are identified following the labelling in DEH09. The density values range from 7 × 1021 cm-2 to 4 × 1023 cm-2. b) High resolution H2 column density map of RCW 120 at 182 resolution (in red) and Hα emission (in blue) from the SuperCOSMOS Hα Survey. The column density values range from 7 × 1021 cm-2 to 9.4 × 1023 cm-2. Open with DEXTER In the text
Fig. 6All 35 compact sources detected using getsources (and discussed in the text) superimposed on a 70 μm gradient image of RCW 120. The sources are color-coded depending on their location: red circles for sources observed towards the PDR, blue squares for sources outside (see text). Open with DEXTER In the text
Fig. 7All 87 sources detected by getsources but not part of the final sample due to the lack of reliable flux measurements, mainly at SPIRE wavelengths. Physical parameters of these sources are derived in a secondhand way explained and presented in Sect. 5.2.4. The PDR region is enclosed in the green countours (see text). Open with DEXTER In the text
Fig. 8Integrated intensity of 12CO (J = 0 → 1) between a) −75 km s-1 and −50 km s-1; b) −35 km s-1 and −15 km s-1; c) −15 km s-1 and 3 km s-1. The dots represent the 35 sources of the final sample and the contours stand for the 870 μm condensations of DEH09. The unit of the color image is in Jy km s-1 beam-1. Open with DEXTER In the text
Fig. 9Integrated intensity of 13CO (J = 0 → 1) within the same velocity ranges. Dots and contours are the same as in Fig. 8. Open with DEXTER In the text
Fig. 10Integrated intensity of C18O (J = 0 → 1) within the same velocity ranges. Dots and contours are the same as in Fig. 8. Open with DEXTER In the text
Fig. 11Envelope temperature distribution for the 35 sources observed towards RCW 120. Open with DEXTER In the text
Fig. 12Histogram of envelope mass for sources observed towards RCW 120. Open with DEXTER In the text
Fig. 13a) Temperature given by the SED fitting versus temperature obtained at the source location in the temperature map for the final sample of 35 sources (red diamonds). b) ν70 μm × S70 μm versus bolometric luminosity for the final sample of 35 sources following Ragan et al. (2012) where the black continuous line represents the fit and the blue dotted one represented the relation from Dunham et al. (2008). Open with DEXTER In the text
Fig. 14Histogram of the temperature color-coded according to Lλ ≥ 350 μm/Lbol in green for Class 0, red for Class I and blue for uncertain cases. Open with DEXTER In the text
Fig. 15Lλ ≥ 350 μm/Lbol versus Menv. The dotted-dashed lines represents the Lλ ≥ 350 μm/Lbol limits between Class 0 and Class I from Bontemps et al. (2010a) and sources are color-coded depending on their location: red squares for sources inside the PDR, blue triangles for outside. Open with DEXTER In the text
Fig. 16Lbol versus Menv. Evolutionary tracks are adapted from Saraceno et al. (1996) and Molinari et al. (2008). Labeled arrows indicate (1) the accreation phase and (2) envelope cleaning phase. Sources are coded as a function of their location with respect to the PDR: red squares sources are for the ones observed towards the PDR and blue triangles for those outside. Error bars for Lbol and Menv are shown by gray lines. Open with DEXTER In the text
Fig. 17Same as Fig. 16 but sources are color-coded as a function of their hosting condensation previously identified using the 870 μm and 1.3 mm emission (ZAV07, DEH09). The condensation number refers as the one given in Fig. 5 (left). Open with DEXTER In the text
Fig. 18Same as Fig. 16 but the sources are color-coded as a function of their Lλ ≥ 350 μm/Lbol ratio. Magenta diamonds for Class I objects and red squares for Class 0, while blue-triangle sources are uncertain. Open with DEXTER In the text
Fig. 19Condensation 1 and 7: 870 μm emission (countours) superimposed on the gradient image of the Herschel PACS 70 μm emission. Sources are identified with their getsources identification number. Sources coded with a red circle are those discussed (among the sample of 35 sources). The green square sources are detected but not discussed due to a lack of Herschel measurements (see text). Open with DEXTER In the text
Fig. 20Condensation 2: same as for Fig. 19. Open with DEXTER In the text
Fig. 21Condensation 3: same as for Fig. 19. Open with DEXTER In the text
Fig. 22Condensation 4: same as for Fig. 19. Open with DEXTER In the text
Fig. 23Condensation 5: same as for Fig. 19. Open with DEXTER In the text
Fig. 24Condensation 6: same as for Fig. 19. Open with DEXTER In the text
Fig. 25Condensation 8: same as for Fig. 19. Open with DEXTER In the text
Fig. C.12MASS, GLIMPSE, MIPSGAL, Herschel, low-resolution and high-resolution images of source 1. If a counterpart is seen in the infrared catalogs, a black circle of 4″ radius is shown to indicate the location of this counterpart otherwise, the center (position of the source given by getsources) is indicated by a cross. For Herschel images, the ellipses shown are the getsources parameters, AFWHM and BFWHM. For the representation of the SED fitting, the original fluxes are represented by a magenta diamond, the corrected fluxes (flux scaling + color correction) at the wavelength used for the fitting are represented by a black cross, and the blue triangles represent the IR counterparts, if any. The identification number of the source is given in the title of the SED. Open with DEXTER In the text
Fig. C.2Same as Fig. C.1 for source 2. Open with DEXTER In the text
Fig. C.3Same as Fig. C.1 for source 3. Open with DEXTER In the text
Current usage metrics show cumulative count of Article Views (full-text article views including HTML views, PDF and ePub downloads, according to the available data) and Abstracts Views on Vision4Press platform.
Data correspond to usage on the plateform after 2015. The current usage metrics is available 48-96 hours after online publication and is updated daily on week days.
|
2019-04-18 16:39:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6004858613014221, "perplexity": 1948.7551053750926}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578517745.15/warc/CC-MAIN-20190418161426-20190418183426-00432.warc.gz"}
|
https://www.r-bloggers.com/2018/10/le-monde-puzzle-1066-3/
|
Want to share your content on R-bloggers? click here if you have a blog, or here if you don't.
A purely (?) algorithmic Le Monde mathematical puzzle
For the table below, what is the minimal number of steps required to reach equal entries when each step consists in adding ones to three entries sitting in a L, such as (7,11,12) or (5,6,10)? Same question for the inner table of four in yellow.
For the inner table, this is straightforward as there are four possible L’s, three equations like 6+n⁶=7+n⁷, and two degrees of freedom leading to a unique entry of N=13 (impossible!) or 16 (feasible). Hence adding 10 L’s. For the entire table, summing up all entries after completion leads to 16N, which is also equal to 1+3+3+…+16+M, where M is the number of added L’s, itself equal to 138+3O, if O denotes the number of ones added. Hence M can only take the values 18, 21, … It took me quite a while to R code an approach to complete the table into 16 18’s, as my versions of simulated annealing did not seem to converge. In the end, I used a simplified version where the table was completed by multinomial draws, like a M(17;3⁻¹,3⁻¹,3⁻¹) for the upper left corner, corresponding to random draws of one of the 36 available L’s, which should be used 50 times in total, and then augmented or reduced of one L depending on the value at a randomly selected entry. Leading to the result
> aneal(grid=c(1,3,3:13,15,15,16),maxT=1e5)
[1] 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18
The R code is quite simple-minded if a wee bit long, using a preliminary definition of the 36 L’s as a 26×3 matrix named allels:
aneal=function(gri,horz=1e3){
numbels=rep(0,36)
sumz=rep(0,16)
while (sum(numbels)<50){ #50 L's used in toto
i=sample(1:16,1)
if (sum(numbels[apply(i==allels,1,max)==1])<18-gri[i]){
rez=18-gri[i]-sum(numbels[apply(i==allels,1,max)==1])#leftover
indz=(1:36)[apply(i==allels,1,max)==1]
indz=indz[numbels[indz]==0]#empty L's
indz=sample(rep(indz,2),rez,rep=TRUE)
for (j in indz) numbels[j]=numbels[j]+1
}}
t=1
while (t<horz){ i=sample(1:16,1)
indz=(1:36)[apply(i==allels,1,max)==1]
sumz=sum(numbels[indz])+gri[i]
if (sumz>18){#remove one L
subz=indz[numbels[indz]>0]#used local L's
j=sample(rep(subz,2),1)
numbels[j]=numbels[j]-1}
j=sample(rep(indz,2),1)
numbels[j]=numbels[j]+1}
for (i in 1:16)#check constraints
sumz[i]=sum(numbels[apply(i==allels,1,max)==1])+gri[i]
if ((min(sumz)==18)&(max(sumz)==18)) break()
t=t+1}
print(sumz)}
|
2021-10-23 07:17:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6762151718139648, "perplexity": 1049.0797825377051}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585653.49/warc/CC-MAIN-20211023064718-20211023094718-00256.warc.gz"}
|
http://mathhelpforum.com/geometry/188900-angles-cyclic-quadrilateral-semi-circle.html
|
# Math Help - Angles in cyclic quadrilateral in semi-circle
1. ## Angles in cyclic quadrilateral in semi-circle
Given a cyclic quadrilateral ABCD in a semi-circle, (let AD be the diameter),let AB=BC=1. Draw the line DB.
Prove that angle ADB= angle BDC
This was taken as obvious in a solution I read, but it is not at all obvious to me. Help in understanding this result would be appreciated.
2. ## Re: Angles in cyclic quadrilateral in semi-circle
Hello, I-Think!
$\text{Given a cyclic quadrilateral }ABCD\text{ in a semicircle with diamter AD.}$
$\text{Let }AB = BC = 1.\;\text{ Draw the line segment }DB.$
$\text{Prove that: }\:\angle ADB \,=\, \angle BDC$
Didn't they explain why it is "obvious"?
Angles $ADB$ and $BDC$ are inscribed angles.
. . They are measured by one-half their intercepted arcs.
We are told that chords $AB$ and $BC$ are equal.
. . Hence, their respective arcs are equal: . $\overline{AB} \:=\:\overline{BC}$
Therefore: . $\angle ADB \,=\,\angle BDC$
|
2015-11-26 00:57:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6026222705841064, "perplexity": 1871.0604847878117}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398446250.47/warc/CC-MAIN-20151124205406-00083-ip-10-71-132-137.ec2.internal.warc.gz"}
|
https://zbmath.org/?q=an:0707.11072
|
## Partitions without small parts.(English)Zbl 0707.11072
Number theory. Vol. I. Elementary and analytic, Proc. Conf., Budapest/Hung. 1987, Colloq. Math. Soc. János Bolyai 51, 9-33 (1990).
Let $$r(n,m)$$ denote the number of partitions of $$n$$ into parts $$\ge m$$. The present authors show that the asymptotic formula $r(n,m)=p(n)\left(\frac{C}{2\sqrt{n}}\right)^{m-1}(m-1)!\{1+O(m^2/\sqrt{n})\} \tag{*}$ holds uniformly for $$1\le m\le n^{1/4}$$; here $$C=\pi \sqrt{2/3}$$ and $$p(n)=r(n,1)$$ is the number of (unrestricted) partitions of $$n$$. Their proof is based on the representation $$r(n,m)=D^{(m-1)}(p(n))$$ where $$D^{(m-1)}$$ is a difference operator reflecting the relation $\sum_{n\geq 1}r(n,m)x^ n=\prod^{m-1}_{k-1}(1-x^ k)\sum_{n\geq 1}p(n)x^ n.$ Here $$p(n)$$ may be replaced by the Hardy-Ramanujan formula $p(n)=\frac{C^3}{2\pi \sqrt{2}}F'(C^2(n-1/24)) + \text{error}$ where $$F(x)=\exp (\sqrt{x})/\sqrt{x}$$, hence a simple extension of the mean value theorem gives for $$m\le \sqrt{n}$$ $r(n,m)=(m-1)!\frac{C^{2m+1}}{2\pi \sqrt{2}}F^{(m)}(t) + \text{error}$ with $$C^2(n- m^2/2)\le t\le C^2n$$. Evaluation of $$F^{(m)}(t)$$ in terms of Bessel polynomials and some straightforward estimates yield (*) uniformly in $$m\le n^{1/4}$$.
As an application of (*) an asymptotic formula of P. Erdős and M. Szalay [Studies in Pure Math., Memory of P. Turán, 187–212 (1983; Zbl 0523.10029)] for the number of nonpractical partitions of $$n$$ is considerably sharpened. (A partition $$n=n_1+\cdots+n_k$$ of $$n$$ is said to be practical if each $$b\in \{1,\ldots,n\}$$ is representable as a subsum $$b=\sum^k_{i=1}a_i n_i$$ with $$a_i\in \{0,1\}$$.)
[For the entire collection see Zbl 0694.00005.]
### MSC:
11P82 Analytic theory of partitions
### Citations:
Zbl 0694.00005; Zbl 0523.10029
|
2022-06-24 23:21:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9389982223510742, "perplexity": 364.7867929032522}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103033816.0/warc/CC-MAIN-20220624213908-20220625003908-00422.warc.gz"}
|
https://pinakimondal.org/2021/12/
|
# Lüroth’s theorem (a “constructive” proof)
Lüroth’s theorem (Lüroth 1876 for $$k = \mathbb{C}$$, Steinitz 1910 in general). If $$k \subseteq K$$ are fields such that $$k \subseteq K \subseteq k(x)$$, where $$x$$ is an indeterminate over $$k$$, then $$K = k(g)$$ for some rational function $$g$$ of $$x$$ over $$k$$.
I am going to present a “constructive” proof of Lüroth’s theorem due to Netto (1895) that I learned from Schinzel’s Selected Topics on Polynomials (and give some applications to criteria for proper polynomial parametrizations). The proof uses the following result which I am not going to prove here:
Proposition (with the set up of Lüroth’s theorem). $$K$$ is finitely generated over $$k$$, i.e. there are finitely many rational functions $$g_1, \ldots, g_s \in k(x)$$ such that $$K = k(g_1, \ldots, g_s)$$.
The proof is constructive in the following sense: given $$g_1, \ldots, g_s$$ as in the proposition, it gives an algorithm to determine $$g$$ such that $$K = k(g)$$. We use the following notation in the proof: given a rational function $$h \in k(x)$$, if $$h = h_1/h_2$$ with polynomials $$h_1, h_2 \in k[x]$$ with $$\gcd(h_1, h_2) = 1$$, then we define $$\deg_\max(h) := \max\{\deg(h_1), \deg(h_2)\}$$.
## Proof of Lüroth’s theorem
It suffices to consider the case that $$K \neq k$$. Pick $$g_1, \ldots, g_s$$ as in the proposition. Write $$g_i = F_i/G_i$$, where
• $$\gcd(F_i, G_i) = 1$$ (Property 1).
Without loss of generality (i.e. discarding $$g_i \in k$$ or replacing $$g_i$$ by $$1/(g_i + a_i)$$ for appropriate $$a_i \in k$$ if necessary) we can also ensure that
• $$\deg(F_i) > 0$$ and $$\deg(F_i) > \deg(G_i)$$ (Property 2).
Consider the polynomials $H_i := F_i(t) – g_iG_i(t) \in K[t] \subset k(x)[t], i = 1, \ldots, s,$ where $$t$$ is a new indeterminate. Let $$H$$ be the greatest common divisor of $$H_1, \ldots, H_s$$ in $$k(x)[t]$$ which is also monic in $$t$$. Since the Euclidean algorithm for computing $$\gcd$$ respects the field of definition, it follows that:
• $$H$$ is also the greatest common divisor of $$H_1, \ldots, H_s$$ in $$K[t]$$, which means, if $$H = \sum_j h_j t^j$$, then each $$h_j \in K$$ (Property 3).
Let $$H^* \in k[x,t]$$ be the polynomial obtained by “clearing the denominator” of $$H$$; in other words, $$H = H^*/h(x)$$ for some polynomial $$h \in k[x]$$ and $$H^*$$ is primitive as a polynomial in $$t$$ (i.e. the greatest common divisor in $$k[x]$$ of the coefficients in $$H^*$$ of powers of $$t$$ is 1). By Gauss’s lemma, $$H^*$$ divides $$H^*_i := F_i(t)G_i(x) – F_i(x)G_i(t)$$ in $$k[x,t]$$, i.e. there is $$Q_i \in k[x,t]$$ such that $$H^*_i = H^* Q_i$$.
Claim 1. If $$\deg_t(H^*) < \deg_t(H^*_i)$$, then $$\deg_x(Q_i) > 0$$.
Proof of Claim 1. Assume $$\deg_t(H^*) < \deg_t(H^*_i)$$. Then $$\deg_t(Q_i) > 1$$. If in addition $$\deg_x(Q_i) = 0$$, then we can write $$Q_i(t)$$ for $$Q_i$$. Let $$F_i(t) \equiv \tilde F_i(t) \mod Q_i(t)$$ and $$G_i(t) \equiv \tilde G_i(t) \mod Q_i(t)$$ with $$\deg(\tilde F_i) < \deg(Q_i)$$ and $$\deg(\tilde G_i) < \deg(Q_i)$$. Then $$\tilde F_i(t)G_i(x) – F_i(x) \tilde G_i(t) \equiv 0 \mod Q_i(t)$$. Comparing degrees in $$t$$, we have $$\tilde F_i(t)G_i(x) = F_i(x) \tilde G_i(t)$$. It is straightforward to check that this contradicts Propeties1 and 2 above, and completes the proof of Claim 1.
Let $$m := \min\{\deg_\max(g_i): i = 1, \ldots, s\}$$, and pick $$i$$ such that $$\deg_\max(g_i) = m$$. Property 2 above implies that $$\deg_t(H^*_i) = \deg_x(H^*_i) = m$$. If $$\deg_t(H^*) < m$$, then Claim 1 implies that $$\deg_x(H^*) < \deg_x(H^*_i) = m$$. If the $$h_j$$ are as in Property 3 above, it follows that $$\deg_\max(h_j) < m$$ for each $$j$$. Since $$H^* \not\in k[t]$$ (e.g. since $$t-x$$ divides each $$H_i$$), there must be at least one $$h_j \not \in k$$. Since adding that $$h_j$$ to the list of the $$g_i$$ decreases the value of $$m$$, it follows that the following algorithm must stop:
### Algorithm
• Step 1: Pick $$g_i := F_i/G_i$$, $$i = 1, \ldots, s$$, satisfying properties 1 and 2 above.
• Step 2: Compute the monic (with respect to $$t$$) $$\gcd$$ of $$F_i(t) – g_i G_i(t)$$, $$i = 1, \ldots, s$$, in $$k(x)[t]$$; call it $$H$$.
• Step 3: Write $$H = \sum_j h_j(x) t^j$$. Then each $$h_j \in k(g_1, \ldots, g_s)$$. If $$\deg_t(H) < \min\{\deg_\max(g_i): i = 1, \ldots, s\}$$, then adjoin all (or, at least one) of the $$h_j$$ such that $$h_j \not\in k$$ to the list of the $$g_i$$ (possibly after an appropriate transformation to ensure Property 2), and repeat.
After the last step of the algorithm, $$H$$ must be one of the $$H_i$$, in other words, there is $$\nu$$ such that $\gcd(F_i(t) – g_i G_i(t): i = 1, \ldots, s) = F_{\nu}(t) – g_{\nu}G_{\nu}(t).$
Claim 2. $$K = k(g_{\nu})$$.
Proof of Claim 2 (and last step of the proof of Lüroth’s theorem). For a given $$i$$, polynomial division in $$k(g_\nu)[t]$$ gives $$P, Q \in k(g_\nu)[t]$$ such that $F_i(t) = (F_{\nu}(t) – g_{\nu}G_{\nu}(t))P + Q,$ where $$\deg_t(Q) < \deg_t(F_{\nu}(t) – g_{\nu}G_{\nu}(t))$$. If $$Q = 0$$, then $$F_i(t) = (F_{\nu}(t) – g_{\nu}G_{\nu}(t))P$$, and clearing out the denominator (with respect to $$k[g_\nu]$$) of $$P$$ gives an identity of the form $$F_i(t)p(g_\nu) = (F_{\nu}(t) – g_{\nu}G_{\nu}(t))P^* \in k[g_\nu, t]$$ which is impossible, since $$F_{\nu}(t) – g_{\nu}G_{\nu}(t)$$ does not factor in $$k[g_\nu, t]$$. Therefore $$Q \neq 0$$. Similarly, $G_i(t) = (F_{\nu}(t) – g_{\nu}G_{\nu}(t))R + S,$ where $$R, S \in k(g_\nu)[t]$$, $$S \neq 0$$, and $$\deg_t(S) < \deg_t(F_{\nu}(t) – g_{\nu}G_{\nu}(t))$$. It follows that $F_i(t) – g_iG_i(t) = (F_{\nu}(t) – g_{\nu}G_{\nu}(t))(P – g_iR) + Q – g_iS.$ Since $$F_{\nu}(t) – g_{\nu}G_{\nu}(t)$$ divides $$F_{i}(t) – g_{i}G_{i}(t)$$ in $$k(x)[t]$$ and since $$\deg_t(Q – g_iS) < \deg_t(F_{\nu}(t) – g_{\nu}G_{\nu}(t))$$, it follows that $$Q = g_iS$$. Taking the leading coefficients (with respect to $$t$$) $$q_0, s_0 \in k(g_\nu)$$ of $$Q$$ and $$S$$ gives that $$g_i = q_0/s_0 \in k(g_\nu)$$, as required to complete the proof.
## Applications
The following question seems to be interesting (geometrically, it asks when a given polynomial parametrization of a rational affine plane curve is proper).
Question 1. Let $$k$$ be a field and $$x$$ be an indeterminate over $$k$$ and $$g_1, g_2 \in k[x]$$. When is $$k(g_1, g_2) = k(x)$$?
We now give a sufficient condition for the equality in Question 1. Note that the proof is elementary: it does not use Lüroth’s theorem, only follows the steps of the above proof in a special case.
Corollary 1. In the set up of Question 1, let $$d_i := \deg(g_i)$$, $$i = 1, 2$$. If the $$\gcd$$ of $$x^{d_1} – 1, x^{d_2} – 1$$ in $$k[x]$$ is $$x – 1$$, then $k(g_1, g_2) = k(t)$. In particular, if $$d_1, d_2$$ are relatively prime and the characteristic of $$k$$ is either zero or greater than both $$d_1, d_2$$, then $k(g_1, g_2) = k(x)$.
Remark. Corollary 1 is true without the restriction on characteristics, i.e. the following holds: “if $$d_1, d_2$$ are relatively prime, then $k(g_1, g_2) = k(x)$.” François Brunault (in a comment to one of my questions on MathOverflow) provided the following simple one line proof: $$[k(x): k(g_1, g_2)]$$ divides both $$[k(x): k(g_i)] = d_i$$, and therefore must be $$1$$.
My original proof of Corollary 1. Following the algorithm from the above proof of Lüroth’s theorem, let $$H_i := g_i(t) – g_i(x)$$, $$i = 1, 2$$, and $$H \in k(x)[t]$$ be the monic (with respect to $$t$$) greatest common divisor of $$H_1, H_2$$.
Claim 1.1. $$H = t – x$$.
Proof. It is clear that $$t-x$$ divides $$H$$ in $$k(x)[t]$$, so that $$H(x,t) = (t-x)h_1(x,t)/h_2(x)$$ for some $$h_1(x,t) \in k[x,t]$$ and $$h_2(x) \in k[x]$$. It follows that there is $$Q_i(x,t) \in k[x,t]$$ and $$P_i(x) \in k[x]$$ such that $$H_i(x,t)P_i(x)h_2(x) = (t-x)h_1(x,t)Q_i(x,t)$$. Since $$h_2(x)$$ and $$(t-x)h_1(x,t)$$ have no common factor, it follows that $$h_2(x)$$ divides $$Q_i(x,t)$$, and after cancelling $$h_2(x)$$ from both sides, one can write $H_i(x,t)P_i(x) = (t-x)h_1(x,t)Q’_i(x,t),\ i = 1, 2.$ Taking the leading form of both sides with respect to the usual degree on $$k[x,t]$$, we have that $(t^{d_i} – x^{d_i})x^{p_i} = a_i(t-x)\mathrm{ld}(h_1)\mathrm{ld}(Q’_i)$ where $$a_i \in k \setminus \{0\}$$ and $$\mathrm{ld}(\cdot)$$ is the leading form with respect to the usual degree on $$k[x,t]$$. Since $$\gcd(x^{d_1} – 1, x^{d_2} – 1) = x – 1$$, it follows that $$\mathrm{ld}(h_1)$$ does not have any factor common with $$t^{d_i} – x^{d_i}$$, and consequently, $$t^{d_i} – x^{d_i}$$ divides $$(t-x)\mathrm{ld}(Q’_i)$$. In particular, $$\deg_t(Q’_i) = d_i – 1$$. But then $$\deg_t(h_1) = 0$$. Since $$H = (t-x)h_1(x)/h_2(x)$$ is monic in $$t$$, it follows that $$H = t – x$$, which proves Claim 1.1.
Since both $$H_i$$ are elements of $$k(g_1, g_2)[t]$$, and since the Euclidean algorithm to compute $$\gcd$$ of polynomials (in a single variable over a field) preserves the field of definition, it follows that $$H \in k(g_1, g_2)[t]$$ as well (this is precisely the observation of Property 3 from the above proof of Lüroth’s theorem). Consequently $$x \in k(g_1, g_2)$$, as required to prove Corollary 1.
## References
• Andrzej Schinzel, Selected Topics on Polynomials, The University of Michigan Press, 1982.
# Math Research
## Overview
Starting from my PhD thesis Towards a Bezout-type Theory of Affine Varieties written at University of Toronto under the supervision of Pierre Milman, my research in math falls under two broad themes:
• Compactification of affine varieties
• Affine Bézout problem
I am in particular interested in one of the simplest cases of the first problem:
• Compactification of $$\mathbb{C}^2$$
## Papers
### Compactification of $$\mathbb{C}^2$$
#### Preprints
• Normal equivariant compactifications of $$\mathbb{G}^2_a$$ of Picard rank one, arXiv:1610.03563.
• Mori dream surfaces associated with curves with one place at infinity, arXiv:1312.2168.
• Analytic compactifications of $$\mathbb{C}^2$$ part II – one irreducible curve at infinity, arXiv:1307.5577.
### Affine Bézout problem
#### Preprints
• Intersection multiplicity, Milnor number and Bernstein’s theorem, arXiv:1607.04860
# Simplest singularity on non-algebraic normal Moishezon surfaces
A classical question in complex analytic geometry is to understand when a given analytic space is algebraic (i.e. analytification of an algebraic scheme). A necessary condition for this to hold is that the transcendence degree of the field of global meromorphic functions must be equal to the dimension of the space, i.e. the space has to be Moishezon. For dimension 2, it is a classical result that it is also sufficient, provided the space is nonsingular (Chow and Kodaira, 1952).
In general it is not clear how to determine algebraicity of normal (singular) Moishezon surfaces and our understanding of non-algebraic Moishezon surfaces, more precisely what prevents them from being algebraic, remains incomplete (Schröer [Sch00] gives a necessary and sufficient for algebraicity, but it is not very suitable for computation in a given case). We [Mon16] gave an example of a non-algebraic normal Moishezon surface $$X$$ which has the simplest possible singularity in the following sense: $$X$$ has only one singular point $$P$$, and the singularity at $$P$$
1. has multiplicity 2 and geometric genus 1,
2. is almost rational in the sense of [Ném07], and
3. is a Gorenstein hypersurface singularity which is minimally elliptic (in the sense of [Lau77]).
The claim that singularity of $$X$$ is the simplest possible is based on combining the preceding facts with the following observations:
• a Moishezon surface whose singularities are rational (i.e. with geometric genus zero) is algebraic [Art62], and
• minimally elliptic Gorenstein singularities form in a sense the simplest class of non-rational singularities.
The weighted dual graph of the resolution of singularity of $$X$$ at $$P$$ is of type $$D_{9,∗,0}$$ in the classification of [Lau77] and the self-intersection number of its fundamental divisor is −2. It follows from [Lau77, Table 2] that the singularity at the origin of $$z^2 = x^5 + xy^5$$ is also of the same type.
## References
• [Art62] Michael Artin, Some numerical criteria for contractability of curves on algebraic surfaces, Amer. J. Math., 84:485–496,
• [Lau77] Henry B. Laufer, On minimally elliptic singularities, Amer. J. Math., 99(6):1257–1295, 1977.
• [Mon16] Pinaki Mondal, Algebraicity of normal analytic compactifications of $$\mathbb{C}^2$$ with one irreducible curve at infinity, Algebra & Number Theory, 10(8), 2016.
• [Ném07] András Némethi, Graded roots and singularities, In Singularities in geometry and topology, pages 394–463. World Sci. Publ., Hackensack, NJ, 2007.
• [Sch00] Stefan Schröer, On contractible curves on normal surfaces, J. Reine Angew. Math., 524:1–15, 2000.
|
2022-12-04 15:30:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9569907784461975, "perplexity": 188.90748920407714}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710974.36/warc/CC-MAIN-20221204140455-20221204170455-00600.warc.gz"}
|
https://thecagemadrid.com/no-pay-dating-online-services-in-africa
|
Seleccionar página
Scenes from the film were featured in the famous naudet documentary «9/11,» among many others. All these topics (and more) are tackled in this course, within the framework of an inductive study of 1 peter. The greatest asset that any post has is its good will, its brand of trust. I think of all the stupid walnuts that you collected and cleaned while observing our chicklens….. The serotine bat has been recorded once and is not considered as a resident species [25]. We have no patterns known of how they work and cannot expect to get sufficient evidence from a night or two. But nothing the machines do in the matrix is of any importance either, because, smith explains, there is another matrix: another mainframe, with more people connected to it, in orbit around the earth. Prada to become next, with each other with rem koolhaas and the four seasons? A single step has a 100mm (4″) raise and the modularity of the savanah bath step allows it to be stacked on top of each other or clipped together to give a larger area or create a small stair. Patient to spy a sullen egg for weeks, 140 the enigma of creation to surprise, his truer instinct sought the life that speaks without a mystery from kindly eyes; Sumeet rahul goyal memorial high sch is one of the top 10 schools near agra. It is not just because you take some initiations or read a book or take refuge. Hilti foundation, christoph gerigk an archaeologist positions a grid to take precise measurements underwater. No problem if hadi is made pm, but not without address of the above which are innate rights of all mankind and neglected for oppressive purposes for decades in malaysia. Black widow switches sides however, black panther then appeared behind them and was about to apprehend them, when she used her bite to hold him back, tasering him. With respect to the romanian case, political culture is also the variable that explains the partly securitization of the hungarian minority. Lamezia terme airport (suf) to reggio calabria bus services, operated by flixbus, arrive at villa san giovanni station. Therefore it seems clear that even no pay dating online services in africa the new testament teachings do support the concept of reincarnaton. «swos is doing a fantastic job training our surface warfare officers to support the maritime strategy and enabling the navy to meet the missions of the 21st century.» The data used to support the findings of this study are available from the corresponding author upon request.
## No Pay Dating Online Services In Africa
Actor jonathan brandis no pay dating online services in africa was considered for the role of sixteen-year-old andy barclay in the third film. Worked at seti/nasa/ames and llnl id 298958 opening team at two restaurants, managed team of 100+, sales focused, expert relationship builder, hospitality fanatic, competitive, money motivated. Edward \end{namesample} % initials in the \file{bib} file get a special delimiter: \begin{namesample} \item j.\delim{~}{i}edward\delim{ }{d} Measures in the foundation to mitigate adverse effects of faults and lineaments; Once, long ago, john had held her in space as she sobbed her heart out over losing sam. On offense, penn state is averaging 40.0 points and 463.3 yards per contest. Although do not contribute to expected return, help to explain the volatility. In a small bowl, combine the butter, lemon juice, garlic powder and salt; pour over the fillets. The accords also call for elections to be held in all of vietnam within two years to reunify the country. 23/04/2019 check check rod cgst time limit extended for filing an application for revocation of cancellation of registration for specified taxpayers. There was also the lingering fear that britannicus, as an adult, would seek revenge on those who had been involved in the death of his mother. If she succeeds, she will drain the man dry of his blood, leaving his nude body to be found up in a tree or atop a grave in a cemetery. The building was afterwards occupied as a literary academy, which was conducted by prof. churchman, a blind man, and an excellent teacher. «we’re going to a place we’ve never gone and it’s very dangerous,» he said in a separate interview on fox news. There are some of the people in the world who believe in such thing for different types of protection. To complete the objectives, you build buildings, units and gather a single resource – spice. Perhaps you know that even the great missionary-minded hendrik kraemer argued that the maintenance and extension of missionary societies amounted to the perpetuation of a deformity of the church. In last few decades fixture is being used in the industries as an work-holding device. I suppose this is because the loss of our freedom is one of the things people fear most. Originally from tennessee, armstrong ended up in chicago, playing in a trio called martin, bogan, and armstrong, that included guitar and mandolin. I have found great encouragement in his work, like you have, i suspect. He started to paddle, but couldnot manage to get his paws back onto the concrete. Finally, my tweets are also sent as text messages to my students’g cell phones (provided they have opted in to this service).
Company»s website is available in hindi and english and efforts are on to progressively to have the entire website in bilingual. Adoption tax credit: qualifying expenses incurred in the course of an adoption, which can be quite costly, may be offset with a no pay dating online services in africa tax credit of up to $12,650. It is almost mind-blowing to know that a lot of people still donot no pay dating online services in africa know what nps do or their roles in the healthcare system. This device is also designed to do a great job detecting laser guns and radars in the united states specifically. Well-marked paths along a 70-km trail network lead across various altitudes, ranging from leisurely walks through botanical gardens to hikes along the waalwege trails on the hillside. Researchers report various anomalies in this region, including no pay dating online services in africa the finding of a cervical rib, which can impinge on the neurovascular bundle7. A reform political movement developed, the clubs started to close down, and the musicians left town. He is also an illustrator who does work for memphis magazine as well as posting death anniversary drawings on his facebook site, occasionally serious, frequently funny, and typically offensive. She speaking was so rushed, it was hard to catch any of what she was saying. «the international list and mlpd have lied through their teeth to the public and the court. Thus no pay dating online services in africa known, he finds no difficulty in obtaining entrance into the court. The settlement of the purchase price is according to an agreed terms and conditions between no pay dating online services in africa the two parties. As the train went in 2professor john canny (2000) and his students were no pay dating online services in africa key motivators of that semester’s project. She should have asked me no pay dating online services in africa this before i instered the card and let my moms money get taken away. Cern scientists developed it to make it easier to exchange information among each other. We will then take steps to reshape our solution to take advantage of another approach that meets the need, while bettering the performance of the overall solution. The mourning period no pay dating online services in africa was extended midweek by one day to end at 9 p.m., sunday 18th april, after the funeral for the president and his wife in krakow. But as far as we knew, nothing happened no pay dating online services in africa and this lowly runner didnot personally hear anything. Elements and compounds no pay dating online services in africa wiki user 2010-04-18 02:10:08 ca(clo3)2 related questions asked in science, chemistry, elements and compounds what elements are found in calcium chlorate? Where content_default=’you can thank$n for getting me back here safely, father.’; Real-time control and monitoring of systems handling 10,000 unique data inputs per second. Fly away tamer headband ii, \$12, lululemon after the workout, just spritz some dry shampoo at the roots of your bangs and use your fingers and a blow dryer to gently push bangs back into place. But it was interesting nevertheless. by anonymous reply 104 10/16/2019 angela what was the deal with the «dancing boy» posts? Possible effects of nest predation on no pay dating online services in africa ground nest survival in the neretva delta (croatia). Among those who waited were, for instance, sa’d (b. abi waggas), sa’id (b. zayd), (‘abdallah) b. Nato has already deployed six operative bases at the north-western borders of russia. The «cookie » parameter sets the cookie value assigned to the server to <value. As a result of screening of approximately 4,000 lines, we isolated several sensitive and resistant mutants. They are sociable, communicative and ready for fun, with a tendency to suddenly get serious, thoughtful and restless johan raja lawak was born in the year of the rat. View the post offices services available at dunure mobile service post office in ayr, ayrshire.
…» «… then progressives turned increasingly orwellian: ignoring obama’s actual expulsion of over 2 million immigrant workers, they condemned trump for promising to eventually expel 5 million more! Sooth him not vvith the vaine hope of this life: least thou betray his soule to eternall death. The chances of being confronted by an angry bear are probably fairly low. Within the same era, the 1970 dodge challenger looks to have keenly inspired matt reeves and his team, certainly with regards to the long front and the hood design. Treatment is difficult and may require open reduction and internal fixation. Mackies can be enjoyed in a no pay dating online services in africa variety of ways; either on its own, over ice, or topped up with lemonade or soda for a refreshing, long drink. Similar moves have been made by our competitors so we believe we have good reason to feel positive about the increase. This somewhat unofficial state of affairs is due among other things to the development of human genome science and the creation of databases and digital media. The insecticidal effect of entomopathogenic bt is attributed to the production of these crystal toxins. Database an organized, indexed collection of related data; i.e., a program for storing, retrieving, and managing information. Adiby 2019-04-22t00:00:00z we completed 9 weeks at cottonstones and can only give glowing reports on the accommodation. While show hunters have evolved in a unique way in the united states to become a uniquely american discipline, jumping is truly an international sport. The sandy haired boy ran to him and swung the pole, landing another hit. The aim of this study is to characterize the relationship between two modifiable health behaviors (smoking and walking) and hrqol. I do not see anyone releasing a «best of..» with regard to robbie , not because of lackluster sales but rather due to the fact that he switched record companies after storyville. Ross-on-wye town band (2) – herefordshire formed in 1924 by benson dare the town crier – every shilling he earned «crying» he put into the band fund. The olympic torch relay run through canberra was supposed to be a celebration. I found w windsor goods rico design paper for decoupage white flowers as well as insportline exercise roller ab roller ar200. Only 35% food grade hydrogen peroxide is recommended for internal use. The program consisted of excerpts from zarzuelas, operettas, operas, and musical comedies. In 2010 he left fifteen to start his private chef business.thirties. A more recent part containing the books of proverbs and job dates from the first third of the 10th century. There are as many ways to charge a crystal, such as water charging, which involves leaving the ball in a bowl of water for a few hours.
|
2021-05-11 22:00:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20959170162677765, "perplexity": 5285.329515640742}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243990419.12/warc/CC-MAIN-20210511214444-20210512004444-00570.warc.gz"}
|
https://www.varsitytutors.com/calculus_1-help/how-to-find-differentiable-of-rate
|
# Calculus 1 : How to find differentiable of rate
## Example Questions
### Example Question #1 : How To Find Differentiable Of Rate
For the relation , compute using implicit differentiation.
Explanation:
Computing of the relation can be done through implicit differentiation:
Now we isolate the :
### Example Question #2 : How To Find Differentiable Of Rate
In chemistry, rate of reaction is related directly to rate constant
, where is the initial concentration
Give the concentration of a mixture with rate constant and initial concentration , seconds after the reaction began.
Explanation:
This is a simple problem of integration. To find the formula for concentration from the formula of concentration rates, you simply take the integral of both sides of the rate equation with respect to time.
Therefore, the concentration function is given by
, where is the initial concentration.
Plugging in our values,
### Example Question #3 : How To Find Differentiable Of Rate
and are related by the function . Find at if and at .
Explanation:
We will use the chain and power rules to differentiate both sides of this equation.
Power Rule:
Chain Rule:
.
Applying the above rules to our function we find the following derivative.
at and .
Therefore at
### Example Question #4 : How To Find Differentiable Of Rate
Let Use logarithmic differentiation to find .
Explanation:
The form of log differentiation after first "logging" both sides, then taking the derivative is as follows:
which implies
So:
### Example Question #5 : How To Find Differentiable Of Rate
We can interperet a derrivative as (i.e. the slope of the secant line cutting the function as the change in x and y approaches zero) but these so-called "differentials" ( and ) can be a good tool to use for aproximations. If we suppose that , or equivalently . If we suppose a change in x (have a concrete value for ) we can find the change in with the afore mentioned relation.
Let . Find and, given and find.
Explanation:
Taking the derivative of the function:
Evaluating at :
Manipulating the equation:
Allowing dx to be .01:
### Example Question #6 : How To Find Differentiable Of Rate
We can interperet a derrivative as (i.e. the slope of the secant line cutting the function as the change in x and y approaches zero) but these so-called "differentials" ( and ) can be a good tool to use for aproximations. If we suppose that , or equivalently . If we suppose a change in x (have a concrete value for ) we can find the change in with the afore mentioned relation.
Let . Find given , find
Explanation:
First, we take the derivative of the function:
evaluate the derivative at
Manipulating the equation by solving for dy:
Assuming dx = 0.3
### Example Question #7 : How To Find Differentiable Of Rate
The find the change of volume of a spherical balloon that is growing from to
Explanation:
This is a related rate problem. To find the rate of change of volume with respect to radius, we need to take the derivative of the volume of a sphere equation
Then, we will plug in the relevant information. The initial radius will be substituted in for , and , since that is the change from the initial to final radius of the balloon.
### Example Question #8 : How To Find Differentiable Of Rate
Find the rate of change of at .
|
2017-02-25 18:36:18
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9020994305610657, "perplexity": 857.1065635324036}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171807.25/warc/CC-MAIN-20170219104611-00591-ip-10-171-10-108.ec2.internal.warc.gz"}
|
http://math.soimeme.org/~arunram/Preprints/AHACScolumnstrict.html
|
## Column strict tableaux
Last update: 30 January 2012
## Column strict tableaux
A letter is an element of $B\left({\epsilon }_{1}\right)= {{\epsilon }_{1},...,{\epsilon }_{n}}$ and a word of length $k$ is an element of $B ε1 ⊗k = εi1 ⊗⋯⊗ εik 1≤i1,...,ik≤n .$ For $1\le i\le n-1$ define $fi˜: B ε1 ⊗k → B ε1 ⊗k ∪{0} and ei˜: B ε1 ⊗k → B ε1 ⊗k ∪{0}$ as follows. For $b\in B{\left({\epsilon }_{1}\right)}^{\otimes k},$
place $+1$ under each ${\epsilon }_{i}$ in $b$, (5.39) place $-1$ under each ${\epsilon }_{i+1}$ in $b$, and place $0$ under each ${\epsilon }_{j}$, $j\ne i$, $i+1$.
Ignoring 0s, successively pair adjacent $\left(+1,-1\right)$ pairs to obtain a sequence of unpaired -1s and +1s (after pairing and ignoring 0s). Then A partition is a collection $\lambda$ of boxes in a corner where the convention is that gravity goes up and to the left. As for matrices, the rows and columns of $\lambda$ are indexed from the top to bottom and left to right, respectively.
The parts of $\lambda$ are ${\lambda }_{i}=$(the number of boxes in row $i$ of $\lambda$), (5.41) the length of $\lambda$ is $l\left(\lambda \right)=$ (the number of rows of $\lambda$), the size of $\lambda$ is $|\lambda |={\lambda }_{1}+\cdots +{\lambda }_{l\left(\lambda \right)}=$ (the number of boxes of $\lambda$).
Then $\lambda$ is determined by (and identified with) the sequence $\lambda =\left({\lambda }_{1},...,{\lambda }_{l}\right)$ of positive integers such that ${\lambda }_{1}\ge {\lambda }_{2}\ge \cdots \ge {\lambda }_{l}>0,$ where $l=l\left(\lambda \right).$ For example,
Let $\lambda$ be a partition and let $\mu =\left({\mu }_{1},...,{\mu }_{n}\right)\in {ℤ}_{\ge 0}^{n}$ be a sequence of nonnegative integers. A column strict tableaux of shape $\lambda$ and weight $\mu$ is a filling of the boxes of $\lambda$ with ${\mu }_{1}$ 1s, ${\mu }_{2}$ 2s, ..., ${\mu }_{n}$ $n$s, such that
1. the rows are weakly increasing from left to right,
2. the columns are strictly increasing from top to bottom.
If $p$ is a column strict tableaux write $\mathrm{shp}\left(p\right)$ and $\mathrm{wt}\left(p\right)$ for the shape and the weight of $p$ so that For example,
For a partition $\lambda$ and a sequence $\mu =\left({\mu }_{1},...,{\mu }_{n}\right)\in {ℤ}_{\ge 0}^{n}$ of nonnegative integers write Let $\lambda$ be a partition with $k$ boxes and let The set $B\left(\lambda \right)$ is a subset of $B{\left({\epsilon }_{1}\right)}^{\otimes k}$ via the injection
where the arabic reading of $p$ is ${\epsilon }_{{i}_{1}}\otimes {\epsilon }_{{i}_{2}}\otimes \cdots \otimes {\epsilon }_{{i}_{k}}$ if the entries of $p$ are ${i}_{1},{i}_{2},...,{i}_{k}$ read right to left by rows with the rows read in sequence beginning with the first row.
Let $\lambda =\left({\lambda }_{1},...,{\lambda }_{n}\right)$ be a partition with $k$ boxes. Then $B\left(\lambda \right)$ is the subset of $B{\left({\epsilon }_{1}\right)}^{\otimes k}$ generated by under the action of the operators
Proof.
If $P=P\left(b\right)$ is a filling of the shape $\lambda$ then ${b}_{{i}_{1}}\otimes \cdots \otimes {b}_{{i}_{k}}=b$ is obtained from $P$ by reading the entries of $P$ in arabic reading order (right to left across rows and from top to bottom down the page). The tableaux
is the column strict tableaux of shape $\lambda$ with 1s in the first row, 2s in the second row, and so on. Define operators $\stackrel{˜}{{e}_{i}}$ and $\stackrel{˜}{{f}_{i}}$ on a filling of $\lambda$ by To prove the proposition we shall show that if $P$ is a column strict tableaux of shape $\lambda$ then
1. $\stackrel{˜}{{e}_{i}}P$ and $\stackrel{˜}{{f}_{i}}P$ are column strict tableaux,
2. $P$ can be obtained by applying a sequence of $\stackrel{˜}{{f}_{i}}$ to ${P}_{\lambda }.$
Let ${P}^{\left(j\right)}$ be the column strict tableau formed by the entries of $P$ which are $\le j$ and let ${\lambda }^{\left(j\right)}=\mathrm{shp}\left({P}^{\left(j\right)}\right).$ This conversion identifies $P$ with the sequence
1. Let us analyze the action of $\stackrel{˜}{{e}_{i}}$ and $\stackrel{˜}{{f}_{i}}$ on $P$. The sequence of +1, -1, 0 constructed via (5.39) is given by
placing $+1$ in each box of ${\lambda }^{\left(i\right)}/{\lambda }^{\left(i-1\right)}$, placing $-1$ in each box of ${\lambda }^{\left(i+1\right)}/{\lambda }^{\left(i\right)}$, placing $0$ in each box of ${\lambda }^{\left(j\right)}/{\lambda }^{\left(j-1\right)}$, for $j\ne i,i+1$,
and reading the resulting filling in Arabic reading order. The process of removing $\left(+1,-1\right)$ pairs can be executed on the horizontal strips ${\lambda }^{\left(i+1\right)}/{\lambda }^{\left(i\right)}$ and ${\lambda }^{\left(i\right)}/{\lambda }^{\left(i-1\right)},$
with the effect that the entries in any configuration of boxes of the form $+1 +1 ⋯ +1 -1 -1 ⋯ -1$ will be removed. Additional +1, -1 pairs will also be removed and the final sequence $(5.44) -1 -1 ⋯ -1 ⏟ di- p +1 +1 ⋯ +1 ⏟ di+ p$ will correspond to a configuration of the form
The rightmost -1 in the sequence (5.40) corresponds to a box in ${\lambda }^{\left(i+1\right)}/{\lambda }^{\left(i\right)}$ which is leftmost in its row and which does not cover a box of ${\lambda }^{\left(i\right)}/{\lambda }^{\left(i-1\right)}.$ Similarly the leftmost +1 in the sequence correponds to a box in ${\lambda }^{\left(i\right)}/{\lambda }^{\left(i-1\right)}$ which is rightmost in its row and which does not have a box of ${\lambda }^{\left(i+1\right)}/{\lambda }^{\left(i\right)}$ covering it. These conditions guarantee that $\stackrel{˜}{{e}_{i}}P$ and $\stackrel{˜}{{f}_{i}}P$ are column strict tableaux.
2. The tableau $P$ is obtained from ${P}_{\lambda }$ by applying a sequence of $\stackrel{˜}{{f}_{i}}P$ in the following way. Applying the operator $fn,i˜ = fn-1˜ ⋯ fi+1˜ fi˜ to Pλ$ will change the rightmost $i$ in row $i$ to $n$. A sequence of applications of can be used to produce a column strict tableau ${P}_{n}$ in which
1. the entries equal to $n$ match the entries equal to $n$ in $P$, and
2. the subtableau of ${P}_{n}$ containing the entries $\le n-1$ is ${P}_{{\lambda }^{\left(n-1\right)}}.$
Iterating the process and applying a sequence of operators to the tableau ${P}_{n}$ can be used to produce a tableau ${P}_{n-1}$ in which
1. the entries equal to $n$ and $n-1$ match the entries equal to $n$ and $n-1$ in $P$, and
2. the subtableau of ${P}_{n-1}$ containing the entries $\le n-2$ is ${P}_{{\lambda }^{\left(n-2\right)}}.$
The tableau $P$ is obtained after a total of $n$ iterations of this process.
$\square$
## Notes and References
The above notes are taken from section 5.7 of the paper
[Ram] A. Ram, Alcove Walks, Hecke Algebras, Spherical Functions, Crystals and Column Strict Tableaux, Pure and Applied Mathematics Quarterly 2 no. 4 (2006), 963-1013.
## References
[BD] Y. Billig and M. Dyer, Decompositions of Bruhat type for the Kac-Moody groups, Nova J. Algebra Geom. 3 no. 1 (1994), 11-31.
[Br] M. Brion, Positivity in the Grothendieck group of complex flag varieties, J. Algebra 258 no. 1 (2002), 137-159.
[GL] S. Gaussent and P. Littelmann, LS galleries, the path model, and MV cycles, Duke Math. J. 127 no. 1 (2005), 35-88.
[GR] S. Griffeth and A. Ram, Affine Hecke algebras and the Schubert calculus, European J. Combin. 25 no. 8 (2004), 1263-1283.
[Ha] T. Haines, Structure constants for Hecke and representation rings, Int. Math. Res. Not. 39 (2003), 2103-2119.
[IM] N. Iwahori and H. Hatsumoto, On some Bruhat decomposition and the structure of the Hecke rings of $𝔭-$adic Chevalley groups, Inst. Hautes Études Sci. Publ. Math. 25 (1965), 5-48.
[Ki] K. Killpatrick, A Combinatorial Proof of a Recursion for the q-Kostka Polynomials, J. Comb. Th. Ser. A 92 (2000), 29-53.
[KM] M. Kapovich and J.J. Millson, A path model for geodesics in Euclidean buildings and its applications to representation theory, arXiv: math.RT/0411182.
[LS] A. Lascoux and M.P. Schützenberger, Le monoïde plaxique, Quad. Ricerce Sci. 109 (1981), 129-156.
|
2019-01-23 04:57:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 165, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7892248034477234, "perplexity": 261.9370719158202}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583897417.81/warc/CC-MAIN-20190123044447-20190123070447-00606.warc.gz"}
|
https://stats.stackexchange.com/questions/184451/curse-of-dimensionality-with-language-models/184454
|
# Curse of dimensionality with language models
In the seminal paper A Neural Probabilistic Language Model, Yoshua Bengio and his colleagues make the following point:
If one wants to model the joint probability distribution of 10 consecutive words in a natural language with a vocabulary $V$ of size $100,000$, there are potentially $100,000^{10}-1$ free parameters.
I guess it's related to degrees of freedom and joint distributions but I just can't get my hands on the exact formula that was used here to come up with $100,000^{10}-1$.
• $100000$ is probably an estimate of the total number of words in the language. – kjetil b halvorsen Dec 1 '15 at 14:16
• @kjetilbhalvorsen 1e5 is the size of the vocabulary, so yes, 1e5 is the total number of unique words in the language, no problem here – Antoine Dec 1 '15 at 14:21
The estimate $100 000^{10}-1$ comes from assuming a discrete model for the $10$ consecutive words, without any simplifications or restrictions, thus using all interactions up to and including order $10$.
It is not important that the words are consecutive, we would get the same count for any ten specified word positions. For each position, it can be any of the $100000$ words, so we need that number of probabilities. So you can build up a cube in $10$-space, wity each dimension cut up in $100000$ boxes. Taking all the combinations, that give $100000^{10}$ boxes, each box giving on possible $10$-word sequence, such as " am I writing now holy blue crap green integrated ideas", which would be a sequence of fairly low probability. Then subtract $1$ to account for the fact that the probabilities must sum to one!
• Many thanks for elaborating. In other words, $100,000^{10}$ gives all the possible 10-element combinations from the vocabulary of size $100,000$. This number is huge but still finite since we're in the discrete case. And estimating the joint probability mass function of a specific 10-word sequence comes down to assigning a probability (probabilities=parameters here I assume) to each portion of that huge but finite discrete sample space. Is that correct? Also, sorry if I'm being dumb but I still don't get why subtracting 1 ensures that the probabilities will sum to one. – Antoine Dec 1 '15 at 15:34
• subtracting one does not by itself secure that the probabilities will sum to one, but it takes care of that restriction . After freely choosing the $100000^{10}-1$ probabilities, you can calculate the last one! – kjetil b halvorsen Dec 1 '15 at 15:55
|
2019-12-14 07:28:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8130171895027161, "perplexity": 381.7552694120119}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540585566.60/warc/CC-MAIN-20191214070158-20191214094158-00094.warc.gz"}
|
https://www.jamelsaadaoui.com/tag/wolfram-alpha/
|
# Tag: Wolfram Alpha
## Skewness in Wolfram Alpha: Clearly Explained!
The positional average known as the skewness allows you to assess the symmetry of a distribution. When the skewness is to zero, then the distribution is symmetric. You…
## Why is Big-O of any constant is always equal to O(1)? Clearly explained!
The Big-O notation may seem quite obscure when you see it for the first time. A good way to intuitively understand this notation is to consider the case…
|
2022-10-07 09:37:02
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9804909825325012, "perplexity": 734.8878107100828}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00282.warc.gz"}
|
https://socratic.org/questions/what-is-ln-e-x
|
# What is ln(e^x)?
Then teach the underlying concepts
Don't copy without citing sources
preview
?
#### Explanation
Explain in detail...
#### Explanation:
I want someone to double check my answer
6
Gió Share
Nov 11, 2015
It is exactly $x$.
#### Explanation:
You are looking for a number that is the exponent of the base of $\ln$ which gives us the integrand, ${e}^{x}$;
so:
the base of $\ln$ is $e$;
the number you need to be the exponent of this base to get ${e}^{x}$ is.....exactly $x$!!!
so:
$\ln \left({e}^{x}\right) = {\log}_{e} \left({e}^{x}\right) = x$
• 7 minutes ago
• 7 minutes ago
• 11 minutes ago
• 11 minutes ago
• 47 seconds ago
• 2 minutes ago
• 4 minutes ago
• 4 minutes ago
• 5 minutes ago
• 5 minutes ago
• 7 minutes ago
• 7 minutes ago
• 11 minutes ago
• 11 minutes ago
|
2018-03-21 22:37:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 8, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7485678195953369, "perplexity": 5733.234306034533}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647706.69/warc/CC-MAIN-20180321215410-20180321235410-00274.warc.gz"}
|
http://www.piastino.pl/hartlepool-fc-rthsvp/n-no-k-c8ed45
|
n no k
Koncesja turystyczna nr 3 z dnia 5.11.1999r. wydane przez Wojewodę Kujawsko - Pomorskiego
n no k
# n no k
1 Your code appears to have nothing to do with the subject of the thread or the image you linked, other than having variables named n and k. You don't want a two dimensional array here if the subject is accurate. Prüfungen finden im Hause statt. For a fixed n, the ordinary generating function of the sequence n ( {\displaystyle x\to xy} = The left and right sides are two ways to count the same collection of subsets, so they are equal. {\displaystyle 2^{n-q}} n . . 1 − Say that it's all right (No, no, no) {\displaystyle {\tbinom {p}{k}}} of binomial coefficients. Lesen Sie die aktuellsten Nachrichten aus Neckartal, Odenwald, Bauland. Table 1 contains the chemical analysis of these and other fertilizer materials. Search results for Montmorillonite K 10 at Sigma-Aldrich. (valid for any elements x, y of a commutative ring), K&N washed and re-oiled one K&N air filter more than 100 times, inside our testing laboratory, and it still performs up to specification. For finite cardinals, this definition coincides with the standard definition of the binomial coefficient. y ∞ Another occurrence of this number is in combinatorics, where it gives the number of ways, disregarding order, that k objects can be chosen from among n objects; more formally, the number of k-element subsets (or k-combinations) of an n-element set. K&K Insurance is dedicated to insuring the world's fun. The right side counts the same thing, because there are ≤ / m is expressed as a falling factorial power. Another form of the Chu–Vandermonde identity, which applies for any integers j, k, and n satisfying 0 ≤ j ≤ k ≤ n, is, The proof is similar, but uses the binomial series expansion (2) with negative integer exponents. Looking for online definition of N/K/A or what N/K/A stands for? k ( ∞ . No suckers allowed in my camp and hell no, you can't hit my weed you tramp 2 ) ( {\displaystyle {\tbinom {n}{k}}} An m,n,k-game is an abstract board game in which two players take turns in placing a stone of their color on an m×n board, the winner being the player who first gets k stones of their own color in a row, horizontally, vertically, or diagonally. ( 4 {\displaystyle {\tbinom {n}{k}}} m namely , while the number of ways to write k ≐ ( , Broadcast your events with reliable, high-quality live streaming. n n n − Because of some idiot guy, the way he talked… No K means that it’s not okay. − ( α , n z Binomial coefficients are of importance in combinatorics, because they provide ready formulas for certain frequent counting problems: For any nonnegative integer k, the expression ≤ My new kitchen is a dream kitchen. Then 0 < p < n and. ( ) ( n “A CD measurement with the n&k analyzer provides a high throughput, non-destructive and accurate method for the determination of the critical di- mensions and depths of periodic structures like trenches and resist gratings.” α 1 ) ) Welcome to K&N Motorcycles, the oldest Yamaha dealership in the world! k Each polynomial _ P All you gotta tell 'em, "Run to hell and go play" 2011 Tchaikovsky Competition - Piano Round II, Phase II Mozart - Concerto for Piano and Orchestra No. + K&N washable and reusable air filters are handmade in the USA using only the finest materials. {\displaystyle {\tbinom {n}{k}}} . Explicitly,[5]. {\displaystyle n=-1} (One way to prove this is by induction on k, using Pascal's identity.) a which can be used to prove by mathematical induction that ÷ . In mathematics, the binomial coefficients are the positive integers that occur as coefficients in the binomial theorem. {\displaystyle {\sqrt {1+x}}} … For instance, if k is a positive integer and n is arbitrary, then. Therefore, k = (1/5730)*ln(0.5) 2) For problem 2, use the k value from 1 and substitute in the equation N(t) = Noe^kt. 0 , 2 {\displaystyle e^{k}>k^{k}/k!} No K No K, No K No K n als "n über k" gelesen oder (verständlicher) als "k aus n". − 1 k Like, if a muhfucka ask you “hey man, can I get a swig of that? ), with the behavior for negative x having singularities at negative integer values and a checkerboard of positive and negative regions: The binomial coefficient has a q-analog generalization known as the Gaussian binomial coefficient. t Roundoff error may cause the returned value to not be an integer. As there is zero Xn+1 or X−1 in (1 + X)n, one might extend the definition beyond the above boundaries to include Bitch, I don't give a motherfuck Find out what is the full meaning of NOTK on Abbreviations.com! k "nCk" redirects here. {\displaystyle \textstyle {n \choose k+1}=\left[(n-k){n \choose k}\right]\div (k+1)} No K 1 {\displaystyle {n}\geq {q}} can be calculated by logarithmic differentiation: Over any field of characteristic 0 (that is, any field that contains the rational numbers), each polynomial p(t) of degree at most d is uniquely expressible as a linear combination ( ) n + At any real or complex number t to define binomial coefficients to ordinary generating series als n k... Almost all binomial coefficients same as the previous generating function after the substitution x → x {..., this definition coincides with the standard definition of the more concentrated the nutrient in! 1927 auf den Markt gebracht hat define binomial coefficients are easily compared to k-permutations of,. The local installation: Tech N9ne ] No k k-permutations of n written. These macro-nutrients are nitrogen ( n − k ). }. }. durch! Polynomial is an integer for finite cardinals, n no k definition coincides with the standard definition of the tank... Nok werden vom 11.01.2021 bis 15.01.2021 online beschult or NPK for short und Lebensqualität in Niedersachsen Mitteilung an Angelreisen! Take yo blunt right to my lips? k / k ring ) which. Proved by induction on k, that means No way n no k binomial coefficient.... ( 8 ) also has a 37.5 % chance to stun victims, and more must... In 1971 with our distance learning resources 3t + 1 ) can be proved by induction using ( ). As k → ∞ { n no k { \tbinom { 4! } { n } \geq q... Table 1 contains the chemical analysis of these interpretations are easily seen to be equivalent counting. ) = 4! } { 2! 2! 2! 2! 2!!. Unserer Datenschutzerklärung n website need for fractions or multiplications standard definition of the more commonly N-P-K. ) /2 can be made to show the second inequality )... For instance, by looking at row number 5 of the C notation because they represent! Leistung und eine lebenslange Haltbarkeit entwickelt worden Ihren Erfolg: Deshalb entwickeln wir gemeinsam mit Ihnen Lösungen für Wachstum Lebensqualität! A K.S logo on later examples of the more concentrated the nutrient is in the analysis of x2... Needed ] is as coefficients in the approximation, as follows and y = 1 ) can be as., walk, nap or binge watch your way through our Dog-N-Jog NO-K and =! N is arbitrary, then - schnell, übersichtlich, treffsicher finden auf \ ( n\ Plätze. ) are all zero, award-winning netflix originals, and Skills inches L.I.N.K.S, it be. An isolated example of the more concentrated the nutrient is in the of... Mal länger als viele andere elektrische Kraftstoffpumpen halten any integer-valued polynomial is an acronym stands! Stunden ausgelegt und können daher vier bis fünf Mal länger als viele andere elektrische Kraftstoffpumpen halten for... Almost all binomial coefficients have divisibility properties we can infer that, where both equalities can be rewritten,. } /j! } { k } = { \binom { -1 } { k } = { {! Erfinder des auswasch- und wieder verwendbaren Baumwollluftfilter 2 ) = 4! } { k } } }... K ) Yamaha dealership in the approximation, as follows the series is 1 discussion in forums,,. Initials were replaced by a K.S logo on later examples of the triangle the... Because of some idiot guy, the Indian mathematician Bhaskaracharya gave an exposition of binomial coefficients is by. Be equivalent to counting k-combinations ) reduces to ( 6 ) when q = 1 ) can be as... N '' ( 82 % n ), phosphorus ( P ) and potassium '' can I get a of!, which explains the name binomial coefficient is indexed by a pair of integers ≥... N website explains the name binomial coefficient is indexed by a K.S logo on later examples of the.! That it ’ s not okay is from the multiplicative formula ( though it is an acronym stands! Real or complex number t to define binomial coefficients without the need for fractions or multiplications an! Is by induction on k, using Pascal 's identity. der ist. Neckartal, Odenwald, Bauland formatted posts and push them to the Web in just a.... Also known as fluid n no k Haltbarkeit entwickelt worden Möglichkeit diese Einwilligung durch eine formlose an. ) also has a combinatorial calculator and get updates x → x y \displaystyle! To be equivalent to counting k-combinations any real or complex number t to define coefficients... K are ex-tremely low coefficients is given by the formula does exhibit a that. = n/p k-combination when order is disregarded view daily NJ weather updates, watch videos and photos join... Learning resources, TV shows, anime, award-winning netflix originals, and the binomial coefficients tell him entwickeln gemeinsam. → ∞ { \displaystyle ( -1 ) ^ { \infty } k^ { j /j. In 2020. aqua ammonia, n } { n } { n }. } )! Denotes the factorial of n. this formula is used in the analysis of the camera on. Which reduces to ( 6 ) when q = 1 durch eine formlose an! Fördern – im Auftrag des Landes Niedersachsen unterstützen wir Menschen, Unternehmen, Kommunen, Institutionen und Ideen obtained! Consecutive integers % chance to stun victims, and the binomial coefficient polynomials this formula is to... Stun victims, and ignores your target 's armor rating ) gives the hockey-stick identity let! Efficient multiplicative computational routine your k & N.Berlin OHG für die Zukunft zu widerrufen properties to... Size from a given set vier bis fünf Mal länger als viele elektrische... Another useful asymptotic approximation for when both numbers grow at the same of... Values of α, including negative integers and rational numbers, the Yamaha! A somewhat surprising result by David Singmaster ( 1974 ) is that any integer divides almost all binomial coefficients easily... Obtained from the official k & n Luftfilter ist für mehr Leistung und eine lebenslange Haltbarkeit worden... Marine Corps life and the other answers have corrected, is the same k-combination when order is disregarded im. Need for fractions or multiplications the combinatorial interpretation of binomial coefficients über k '' I. Ersten homogenen Volldüngers, den die BASF 1927 auf den Markt gebracht.... A central binomial coefficient a somewhat surprising result by David Singmaster ( 1974 ) is that any integer-valued 3t... Is, the way he talked… No k means that it ’ s not okay Service... The x2 term coefficients '' appear in Newton 's generalized binomial theorem when n is,! Nitrogen in it than phosphorus and potassium ( k ), etc interpretations... Piano and Orchestra No den die BASF 1927 auf den Markt gebracht hat expression! Swig of that right sides are two ways to count the same k-combination when order is disregarded 15.01.2021 beschult... Be an integer des Landes Niedersachsen unterstützen wir Menschen, Unternehmen,,... 360 Gramm und ist sehr klein fertilizer materials '' gelesen oder ( verständlicher ) als n über ''. Level 60 elite finishing move, used exclusively by rogues generalized theorem. To a more efficient method to compute individual binomial coefficients count subsets of size! Get a swig of that ) }. where soil tests for k are ex-tremely low the side... Arbitrary, then shows, anime, award-winning netflix originals, and the local installation, one quickly. Number t to define binomial coefficients in the analysis of these and other fertilizer materials positive integer and is! Calculation result using a combinatorial calculator - Währunsgrechner für den Wechselkurs von Norwegische Krone in Euro power! The binomial theorem Mitteilung an die Angelreisen k & n Kraftstoffpumpen haben keine Lager elektrische! Instance, if k is a fundraiser benefiting the Humane Society of Kansas... Into the U.S. in 1958 fees – start or stop your account anytime in the approximation as. Arithmetic way to see the desired equality is by induction on k, you can't Why you got... Leads to a more efficient multiplicative computational routine gesamte Einheit wiegt nur 360 Gramm und ist sehr klein 1! Datenschutz alle Klassen des BBZ am NOK werden vom 11.01.2021 bis 15.01.2021 online beschult Eum Son ( South )! The entries ( shown as blanks ) are all zero, ( 4 ) shows any... K=4 is 210 - calculation result using a combinatorial calculator numbers grow at same... In ( 1 + x ) n−1 ( 1 + x ). }. \displaystyle x\to xy } }... For other uses, see, Pascal 's identity. value to not be an integer with! Besser sehen, hören oder riechen als alle anderen - in der Wildnis überleben nur die.! 15.01.2021 online beschult including negative integers and rational numbers, the left and right sides are two to. Are two ways to do this Gamma function also gives an expression for binomial coefficients the way talked…! The chemical analysis of the x2 term * * ab 25€ Warenwert, bei Standard-Versand innerhalb Deutschlands, vehicle... Dealership in the approximation, as follows k } /k! } { n } { 2 } =... A combinatorial calculator the combinatorial interpretation of binomial coefficients with such first arguments y of commutative! [ Verse 1: Tech N9ne ] No k given by the formula does exhibit a symmetry that less. Off that looking at row number 5 of the binomial coefficients coefficients is given by the formula 25€ Warenwert bei! Plätze zu verteilen of convergence of this series is 1 broadcast your with! } k^ { j } /j! } { k } = { \tfrac { 4! {. Gesamte Einheit wiegt nur 360 Gramm und ist sehr klein alle anderen - in der Wildnis überleben nur Stärksten! Award-Winning netflix originals, and the local installation to create richly formatted posts and push them to n no k left right. Nachrichten aus Neckartal, Odenwald, Bauland k > k k / k ( )!
|
2021-06-18 03:31:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7996512651443481, "perplexity": 4815.472496810947}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487634616.65/warc/CC-MAIN-20210618013013-20210618043013-00189.warc.gz"}
|
http://eve.postblast.ru/index.php?title=Alpha_strike
|
# Alpha damage
(Redirected from Alpha strike)
Alpha damage is the amount of damage dealt by a ship when all guns and launchers on that ship fire at once. Alpha damage, also known as strike damage or alpha strike damage.
## Calculating alpha damage
The damage dealt typically calculated includes the damage dealt by a missile from each launcher hitting at the same time as guns. This is not accurate, as missiles will take time to reach the target. However it simplifies calculation of the damage dealt and so is often used. This makes the calculation simple, where N is the number of modules:
$Alpha=\sum_{n=1}^NDamagePerCycle(n)$
This could more accurately be termed the peak damage output of a given ship and fitting.
## Difference to DPS and implications
DPS is an accurate definition of the total damage per second average that is given out by a ship/fitting. Alpha strike records the highest amount of damage that this ship/fitting will ever produce at any one moment. As such alpha strike can be useful as a general indicator of damage performance on ships, particularly smartbomb platforms or other slow-refiring weapon users.
Pilots of cruise missile ships (battleships) often refer to alpha damage as salvo damage, as it records the total damage it is possible to put on a target if all the missile launchers are fired at once. However, DPS time takes into account refire time. It is for this distinction- that missiles take time to hit and thus have a lower aggregate DPS- that makes missile ships less versatile in fleet engagements. Often targets will have time to warp out before missiles have reached them where traditional turrets will have already hit several times.
|
2019-02-23 19:38:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5875656604766846, "perplexity": 2512.2306791017463}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550249530087.75/warc/CC-MAIN-20190223183059-20190223205059-00422.warc.gz"}
|
http://www.ms.unimelb.edu.au/research/seminars.php?id=2440
|
# An ANOVA test for the equality of weakly dependent functional time series
#### by Jia Guo
Institution: University of Melbourne
Date: Mon 18th September 2017
Time: 12:00 PM
Location: Evan Williams Theater
Abstract: We propose an L2-norm based test for testing the equality of the mean functions of k groups of weakly dependent stationary functional time series. The proposed testing procedure is flexible and can be applied to both homoscedastic and heteroscedastic cases. Under the null hypothesis, the asymptotic random expression of the test statistic is a $\chi^2$-type mixture, which is approximated by a two-cumulant and a three-cumulant matched $\chi^2$ approximation methods, respectively. Under a local alternative hypothesis, the asymptotic random expression is also derived and the test is shown to be root-n consistent. Simulation studies are performed to compare the finite sample performance of the proposed test under various scenarios with alternatives e.g. an existing FPCA based test and some respective ANOVA tests. It is shown that the proposed test generally outperforms the alternative tests in terms of empirical sizes and powers. Two real data examples help to illustrate the implementation of our test based on the US yield curves and Google flu trends, respectively.
|
2017-09-26 05:29:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5666592717170715, "perplexity": 906.6119258906489}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818695066.99/warc/CC-MAIN-20170926051558-20170926071558-00622.warc.gz"}
|