| url | text | date | metadata |
|---|---|---|---|
http://thermalfluidscentral.org/encyclopedia/index.php?title=External_Natural_Convection_from_Heated_Vertical_Plate&diff=4685&oldid=4632
|
# External Natural Convection from Heated Vertical Plate
### 6.2.2 External Natural Convection from Heated Vertical Plate
For external natural convection near a vertical flat plate as shown in Fig. 6.1, the boundary layer assumptions can be applied to simplify the generalized governing equations. The boundary layer treatment for natural convection is very similar to that for forced convection, discussed in Chapter 4. The difference between the natural convection problem shown in Fig. 6.1 and forced convection over a flat plate is that in natural convection the free-stream velocity outside the velocity boundary layer is zero. In addition, the pressure outside the boundary layer is hydrostatic for the case of natural convection, instead of being externally imposed as in the case of forced convection.
For 2-D external convection of an incompressible fluid as shown in Fig. 6.1, the continuity equation becomes
$\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}=0 \qquad \qquad(1)$
If one assumes that the fluid is a single component, so that the natural convection is driven by the density difference induced by the temperature gradient, eq. (10) from Generalized Governing Equations becomes:
$\rho \frac{D\mathbf{V}}{Dt}=\left( -\nabla p+{{\rho }_{\infty }}\mathbf{g} \right)-{{\rho }_{\infty }}\mathbf{g}\beta (T-{{T}_{\infty }})+\nabla \cdot (\mu \nabla \mathbf{V}) \qquad \qquad(2)$
Applying the boundary layer assumption and assuming steady flow with constant thermophysical properties, the momentum equation becomes:
$u\frac{\partial u}{\partial x}+v\frac{\partial u}{\partial y}=-\frac{1}{{{\rho }_{\infty }}}\frac{\partial p}{\partial x}-g+g\beta (T-{{T}_{\infty }})+\nu \frac{{{\partial }^{2}}u}{\partial {{y}^{2}}} \qquad \qquad(3)$
The pressure in the boundary layer, p, is independent of y ($\partial p/\partial y=0$) and equals that outside the boundary layer at the same longitudinal position, ${p_{\infty}}$, i.e.,
$\frac{\partial p}{\partial x}=\frac{dp}{dx}=\frac{d{{p}_{\infty }}}{dx}$
The hydrostatic pressure, ${p_{\infty}}$, is dictated by the density and the longitudinal position:
$\frac{d{{p}_{\infty }}}{dx}=-{{\rho }_{\infty }}g$
Substituting the above two equations into eq. (3), the hydrostatic pressure gradient cancels the gravitational term, since $-\frac{1}{\rho_\infty}\frac{dp_\infty}{dx} - g = g - g = 0$, and the momentum equation becomes:
$u\frac{\partial u}{\partial x}+v\frac{\partial u}{\partial y}=\nu \frac{{{\partial }^{2}}u}{\partial {{y}^{2}}}+g\beta (T-{{T}_{\infty }}) \qquad \qquad(4)$
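As a quick sanity check on this step, here is a minimal SymPy sketch (not part of the original text; the symbol names are illustrative) confirming that the pressure and gravity terms of eq. (3) reduce to the buoyancy term of eq. (4) once the hydrostatic gradient is substituted:

```python
import sympy as sp

# Symbols: gravitational acceleration g, thermal expansion coefficient beta,
# ambient density rho_inf, local and ambient temperatures T and T_inf.
g, beta, rho_inf, T, T_inf = sp.symbols('g beta rho_inf T T_inf', positive=True)

dp_inf_dx = -rho_inf * g  # hydrostatic pressure gradient: dp_inf/dx = -rho_inf*g

# Pressure and body-force terms on the right-hand side of eq. (3):
rhs_terms = -dp_inf_dx / rho_inf - g + g * beta * (T - T_inf)

print(sp.simplify(rhs_terms))  # beta*g*(T - T_inf): only the buoyancy term survives
```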
After applying the boundary layer assumption and assuming the viscous dissipation is negligible, the energy equation becomes:
$u\frac{\partial T}{\partial x}+v\frac{\partial T}{\partial y}=\alpha \frac{{{\partial }^{2}}T}{\partial {{y}^{2}}} \qquad \qquad(5)$
At the heated wall, the no-slip and impermeability conditions yield the following boundary condition for the momentum equation:
$u=v=0,\text{ at }y=0 \qquad \qquad(6)$
The temperature at the heated wall is specified, i.e.,
$T={{T}_{w}},\text{ at }y=0 \qquad \qquad(7)$
Since the quiescent fluid far away from the heated plate is not disturbed by the presence of the plate, the velocity far from the flat plate should be zero:
$u=v=0,\text{ }y\to \infty \qquad \qquad(8)$
Also, the temperature of the fluid outside the thermal boundary layer is not affected by the heated wall:
$T={{T}_{\infty }},\text{ }y\to \infty \qquad \qquad(9)$
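To get a feel for the magnitude of the driving term in eq. (4), a short sketch follows (the temperatures below are assumed, illustrative values for air; for an ideal gas, $\beta \approx 1/T_{film}$):

```python
# Order-of-magnitude evaluation of the buoyancy term g*beta*(T_w - T_inf) in eq. (4),
# for air near a wall at an assumed 350 K in assumed 300 K surroundings.
g = 9.81                      # gravitational acceleration, m/s^2
T_w, T_inf = 350.0, 300.0     # wall and ambient temperatures, K
T_film = 0.5 * (T_w + T_inf)  # film temperature, K
beta = 1.0 / T_film           # ideal-gas volumetric expansion coefficient, 1/K

buoyancy = g * beta * (T_w - T_inf)
print(f"g*beta*(T_w - T_inf) = {buoyancy:.2f} m/s^2")  # ~1.51 m/s^2
```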
|
2020-07-15 05:29:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 14, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7709643840789795, "perplexity": 853.4722643375825}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657155816.86/warc/CC-MAIN-20200715035109-20200715065109-00358.warc.gz"}
|
https://study.com/academy/answer/jesse-has-just-learned-that-she-won-1-million-in-her-state-lottery-she-has-the-choice-of-receiving-a-lump-sum-payment-of-312-950-or-50-000-per-year-for-the-next-20-years-jesse-can-invest-the-lu.html
|
## Lottery Payments:
Lottery payments are usually not paid out as advertised: winners receive either a series of annual payments totaling the advertised amount, or a lump sum that is much smaller than the advertised jackpot.
**The $50,000 payments are better.**

We can calculate the present value of the payments over the 20 years and compare it with the given lump-sum payment:

{eq}PV = Payment \times \dfrac{1-(1+r)^{-n}}{r} {/eq}

Here:

- Present value (PV) is the value of the $50,000 payments.
- Payment = $50,000
- r (rate) = 6% or 0.06. This is the same rate as the investment rate.
- n = 20

Substituting the values, we have:

{eq}Present\:Value = \$50,000 \times \dfrac{1-(1+0.06)^{-20}}{0.06} {/eq}

{eq}Present\:Value = \$50,000 \times 11.46992122 {/eq}
{eq}Present\:Value = \$573,496.06 {/eq}
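The same arithmetic in a short Python sketch (variable names are illustrative, not from the original solution):

```python
# Present value of a 20-year, $50,000-per-year annuity at 6%,
# compared against the $312,950 lump-sum option.
payment, r, n = 50_000.0, 0.06, 20
lump_sum = 312_950.0

annuity_factor = (1 - (1 + r) ** -n) / r  # ~11.46992122
pv_payments = payment * annuity_factor    # ~$573,496.06

print(f"PV of payments: ${pv_payments:,.2f}")
print(f"Lump sum:       ${lump_sum:,.2f}")
print("Take the payments" if pv_payments > lump_sum else "Take the lump sum")
```

Since $573,496.06 > $312,950, the annual payments are the better choice at a 6% discount rate.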
|
2019-11-19 08:02:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9699658751487732, "perplexity": 5767.314203923783}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670036.23/warc/CC-MAIN-20191119070311-20191119094311-00515.warc.gz"}
|
http://ingenerius.ru/no-sine-no-email-only-chat-7570.html
|
No sine no email only chat
Help me with anything related to the timer functions: start, stop, reset, and get_delay_tricks. You need to use Olin's idea of determining the zero crossings of the signals to get the time delay. Then plug the time delay into Scott's equation to get the phase delay. I'll leave it up to you to implement each function, since they should either be trivial to implement or you should already have something similar written.
If using a CCP module of a PIC16 or PIC18, for example, you can only capture on one edge polarity anyway. Is calculating the phase shift not related to the ADC reading? Where should I implement this function? I am selecting the channel using a multiplexer and reading the voltage [email protected] I didn't know about the demodulator. I am using an ATmega32-A microcontroller and an external AD7798 ADC to read the voltage of both signals. There will be a small phase difference between the two signals. But capture/compare modules require a digital input.
They don't work well with analog, since the threshold is not configurable and may vary.
If your time delay is $t$, and the period of the sine wave is $T$, then $$\frac{t}{T} = \frac{\phi}{360^\circ}$$ This will give the phase ($\phi$) in degrees.
If $t$ is negative, the output lags the input; if $t$ is positive, the output leads the input.
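In code, that conversion is a one-liner; a minimal Python sketch (the function name and example values are assumed, not from the thread):

```python
def phase_degrees(t: float, T: float) -> float:
    """Phase shift in degrees from a measured time delay t and signal period T."""
    return 360.0 * t / T

# Example: a 0.5 ms delay on a 50 Hz signal (T = 20 ms) is a 9-degree shift.
print(phase_degrees(0.5e-3, 20e-3))  # 9.0
```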
My program defines the following functions: `void main(void)`, `void init(void)`, `unsigned int timer_phase(void)`, `interrupt [TIM1_COMPA] void timer1_compa_isr(void)`, `void StartTimer1(void)`, `void StopTimer1(void)`, `void ResetTimer1(void)`, and `unsigned int get_timer_ticks(void)`. When I run this code I do not get any errors, but I am not able to enter any command from the HyperTerminal.
When I comment out this whole function, I am able to get output and to enter commands from the HyperTerminal.
In fact, the most useful way to represent angles in a micro is to use the full range of whatever the most convenient unsigned integer is to represent a full circle.
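For instance, a hedged sketch of that convention (hypothetical helpers, not from the thread): with a 16-bit unsigned integer, 0..65535 maps onto 0..360 degrees, and modular arithmetic gives the same wraparound behavior as uint16 overflow in C:

```python
# Binary angle measurement: one full circle is 2**16 counts of a uint16.
FULL_CIRCLE = 1 << 16

def degrees_to_u16(deg: float) -> int:
    return round(deg / 360.0 * FULL_CIRCLE) % FULL_CIRCLE

def u16_to_degrees(ticks: int) -> float:
    return ticks * 360.0 / FULL_CIRCLE

a = degrees_to_u16(350.0)
b = degrees_to_u16(20.0)
diff = (b - a) % FULL_CIRCLE   # wraps exactly like uint16 subtraction in C
print(u16_to_degrees(diff))    # ~30.0 degrees
```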
|
2019-09-19 17:18:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2361607551574707, "perplexity": 1528.7310440711453}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573561.45/warc/CC-MAIN-20190919163337-20190919185337-00083.warc.gz"}
|
https://mathhelpboards.com/threads/convert-polar-equation-r-1-1-sin-theta-to-rectangular-equation.25364/
|
# Convert polar equation r = 1/(1+sin(theta)) to rectangular equation
#### Elissa89
##### Member
The equation is:
r = 1/(1+sin(theta))
I know the answer is supposed to be:
x^2+y^2=(1-y)^2
I can't figure out the steps to get to the answer.
#### skeeter
##### Well-known member
MHB Math Helper
The equation is:
r = 1/(1+sin(theta))
I know the answer is supposed to be:
x^2+y^2=(1-y)^2
I can't figure out the steps to get to the answer.
$r = \dfrac{1}{1+\sin{\theta}}$
divide both sides by $r$ ...
$1 = \dfrac{1}{r+r\sin{\theta}} \implies r+r\sin{\theta} = 1$
$r + y = 1$
$r = 1 - y$
can you finish?
#### Elissa89
##### Member
$r = \dfrac{1}{1+\sin{\theta}}$
divide both sides by $r$ ...
$1 = \dfrac{1}{r+r\sin{\theta}} \implies r+r\sin{\theta} = 1$
$r + y = 1$
$r = 1 - y$
can you finish?
Yes Thanks
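The remaining step, for completeness: since $r^2 = x^2 + y^2$, squaring $r = 1 - y$ gives

$$x^2 + y^2 = (1 - y)^2$$

which is the answer stated above.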
|
2020-07-10 02:48:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8003166913986206, "perplexity": 6526.861130195109}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655902496.52/warc/CC-MAIN-20200710015901-20200710045901-00037.warc.gz"}
|
https://erikgahner.dk/2022/open-and-reproducible-research-glossary/
|
# Open and Reproducible Research Glossary
The Framework for Open and Reproducible Research Training (FORRT) has launched a new glossary. You can find a paywalled paper introducing the glossary here.
I did not find the glossary easy to skim through so I decided to download the glossary from GitHub and make my own table with the 261 entries. Here it is:
Title Definition
Abstract Bias The tendency to report only significant results in the abstract, while reporting non-significant results within the main body of the manuscript (not reporting non-significant results altogether would constitute selective reporting). The consequence of abstract bias is that studies reporting non-significant results may not be captured with standard meta-analytic search procedures (which rely on information in the title, abstract and keywords) and thus biasing the results of meta-analyses.
Academic Impact The contribution that a research output (e.g., published manuscript) makes in shifting understanding and advancing scientific theory, method, and application, across and within disciplines. Impact can also refer to the degree to which an output or research programme influences change outside of academia, e.g. societal and economic impact (cf. ESRC: https://esrc.ukri.org/research/impact-toolkit/what-is-impact/).
Accessibility Accessibility refers to the ease of access and re-use of materials (e.g., data, code, outputs, publications) for academic purposes, particularly the ease of access is afforded to people with a chronic illness, disability and/or neurodivergence. These groups face numerous financial, legal and/or technical barriers within research, including (but not limited to) the acquisition of appropriately formatted materials and physical access to spaces. Accessibility also encompasses structural concerns about diversity, equity, inclusion, and representation (Pownall et al., 2021). Interfaces, events and spaces should be designed with accessibility in mind to ensure full participation, such as by ensuring that web-based images are colorblind friendly and have alternative text, or by using live captions at events (Brown et al., 2018; Pollet & Bond, 2021; World Wide Web Consortium, 2021).
Ad hominem bias From Latin meaning “to the person”; Judgment of an argument or piece of work influenced by the characteristics of the person who forwarded it, not the characteristics of the argument itself. Ad hominem bias can be negative, as when work from a competitor or target of personal animosity is viewed more critically than the quality of the work merits, or positive, as when work from a friend benefits from overly favorable evaluation.
Adversarial collaboration A collaboration where two or more researchers with opposing or contradictory theoretical views —and likely diverging predictions about study results— work together on one project. The aim is to minimise biases and methodological weaknesses as well as to establish a shared base of facts for which competing theories must account.
Adversarial (collaborative) commentary A commentary in which the original authors of a work and critics of said work collaborate to draft a consensus statement. The aim is to draft a commentary that is free of ad hominem attacks and communicates a common understanding or at least identifies where both parties agree and disagree. In doing so, it provides a clear take-home message and path forward, rather than leaving the reader to decide between opposing views conveyed in separate commentaries.
Affiliation bias This bias occurs when one’s opinions or judgements about the quality of research are influenced by the affiliation of the author(s). When publishing manuscripts, a potential example of an affiliation bias could be when editors prefer to publish work from prestigious institutions (Tvina et al., 2019).
Aleatoric uncertainty Variability in outcomes due to unknowable or inherently random factors. The stochastic component of outcome uncertainty that cannot be reduced through additional sources of information. For example, when flipping a coin, uncertainty about whether it will land on heads or tails.
Altmetrics Departing from traditional citation measures, altmetrics (short for “alternative metrics”) provide an assessment of the attention and broader impact of research work based on diverse sources such as social media (e.g. Twitter), digital news media, number of preprint downloads, etc. Altmetrics have been criticized in that sensational claims usually receive more attention than serious research (Ali, 2021).
AMNESIA AMNESIA is a free anonymization tool to remove identifying information from data. After uploading a dataset that contains personal data, the original dataset is transformed by the tool, resulting in a dataset that is anonymized regarding personal and sensitive data.
Analytic Flexibility Analytic flexibility is a type of researcher degrees of freedom (Simmons, Nelson, & Simonsohn, 2011) that refers specifically to the large number of choices made during data preprocessing and statistical analysis. “[T]he range of analysis outcomes across different acceptable analysis methods” (Carp, 2012, p. 1). Analytic flexibility can be problematic, as this variability in analytic strategies can translate into variability in research outcomes, particularly when several strategies are applied, but not transparently reported (Masur, 2021).
Anonymity Anonymising data refers to removing, generalising, aggregating or distorting any information which may potentially identify participants, peer-reviewers, and authors, among others. Data should be anonymised so that participants are not personally identifiable. The most basic level of anonymisation is to replace participants’ names with pseudonyms (fake names) and remove references to specific places. Anonymity is particularly important for open data and data may not be made open for anonymity concerns. Anonymity and open data has been discussed within qualitative research which often focuses on personal experiences and opinions, and in quantitative research that includes participants from clinical populations.
ARRIVE Guidelines The ARRIVE guidelines (Animal Research: Reporting of In Vivo Experiments) are a checklist-based set of reporting guidelines developed to improve reporting standards, and enhance replicability, within living (i.e. in vivo) animal research. The second generation ARRIVE guidelines, ARRIVE 2.0, were released in 2020. In these new guidelines, the clarity has been improved, items have been prioritised and new information has been added with an accompanying “Explanation” and “Elaboration” document to provide a rationale for each item and a recommended set to add context to the study being described.
Article Processing Charge (APC) An article (sometimes author) processing charge (APC) is a fee charged to authors by a publisher in exchange for publishing and hosting an open access article. APCs are often intended to compensate for a potential loss of revenue the journal may experience when moving from traditional publication models, such as subscription services or pay-per-view, to open access. While some journals charge only about US$300, APCs vary widely, from US$1000 (Advances in Methods and Practice in Psychological Science) or less to over US\$10,000 (Nature). While some publishers offer waivers for researchers from certain regions of the world or who lack funds, some APCs have been criticized for being disproportionate compared to actual processing and hosting costs (Grossmann & Brembs, 2021) and for creating possible inequities with regard to which scientists can afford to make their works freely available (Smith et al. 2020).
Authorship Authorship assigns credit for research outputs (e.g. manuscripts, data, and software) and accountability for content (McNutt et al. 2018; Patience et al. 2019). Conventions differ across disciplines, cultures, and even research groups, in their expectations of what efforts earn authorship, what the order of authorship signifies (if anything), how much accountability for the research the corresponding author assumes, and the extent to which authors are accountable for aspects of the work that they did not personally conduct.
Auxiliary Hypothesis All theories contain assumptions about the nature of constructs and how they can be measured. However, not all predictions are derived from theories and assumptions can sometimes be drawn from other premises. Additional assumptions that are made to deduce a prediction and tested by making links to observable data. These auxiliary hypotheses are sometimes invoked to explain why a replication attempt has failed.
Bayes Factor A continuous statistical measure for model selection used in Bayesian inference, describing the relative evidence for one model over another, regardless of whether the models are correct. Bayes factors (BF) range from 0 to infinity, indicating the relative strength of the evidence, and where 1 is a neutral point of no evidence. In contrast to p-values, Bayes factors allow for 3 types of conclusions: a) evidence for the alternative hypothesis, b) evidence for the null hypothesis, and c) no sufficient evidence for either. Thus, BF are typically expressed as BF10 for evidence regarding the alternative compared to the null hypothesis, and as BF01 for evidence regarding the null compared to the alternative hypothesis.
Bayesian Inference A method of statistical inference based upon Bayes’ theorem, which makes use of epistemological (un)certainty using the mathematical language of probability. Bayesian inference is based on allocating (and reallocating, based on newly-observed data or evidence) credibility across possibilities. Two existing approaches to Bayesian inference include “Bayes factors” (BF) and Bayesian parameter estimation.
Bayesian Parameter Estimation A Bayesian approach to estimating parameter values by updating a prior belief about model parameters (i.e., prior distribution) with new evidence (i.e., observed data) via a likelihood function, resulting in a posterior distribution. The posterior distribution may be summarised in a number of ways including: point estimates (mean/mode/median of a posterior probability distribution), intervals of defined boundaries, and intervals of defined mass (typically referred to as a credible interval). In turn, a posterior distribution may become a prior distribution in a subsequent estimation. A posterior distribution can also be sampled using Monte-Carlo Markov Chain methods which can be used to determine complex model uncertainties (e.g. Foreman-Mackey et al., 2013).
BIDS data structure The Brain Imaging Data Structure (BIDS) describes a simple and easy-to-adopt way of organizing neuroimaging, electrophysiological, and behavioral data (i.e., file formats, folder structures). BIDS is a community effort developed by the community for the community and was inspired by the format used internally by the OpenfMRI repository known as OpenNeuro. Having initially been developed for fMRI data, the BIDS data structure has been extended for many other measures, such as EEG (Pernet et al., 2019).
BIZARRE This acronym refers to Barren, Institutional, Zoo, and other Rare Rearing Environments (BIZARRE). Most research for chimpanzees is conducted on this specific sample. This limits the generalizability of a large number of research findings in the chimpanzee population. The BIZARRE has been argued to reflect the universal concept of what is a chimpanzee (see also WEIRD, which has been argued to be a universal concept for what is a human).
Bottom-up approach (to Open Scholarship) Within academic culture, an approach focusing on the intrinsic interest of academics to improve the quality of research and research culture, for instance by making it supportive, collaborative, creative and inclusive. Usually indicates leadership from early-career researchers acting as the changemakers driving shifts and change in scientific methodology through enthusiasm and innovation, compared to a “top-down” approach initiated by more senior researchers “Bottom-up approaches take into account the specific local circumstances of the case itself, often using empirical data, lived experience, personal accounts, and circumstances as the starting point for developing policy solutions.”
Bracketing Interviews Bracketing interviews are commonly used within qualitative approaches. During these interviews researchers explore their personal subjectivities and assumptions surrounding their ongoing research. This allows researchers to be aware of their own interests and helps them to become both more reflective and critical about their research, considering how their own experiences may impact the research process. Bracketing interviews can also be subject to qualitative analysis.
Bropenscience A tongue-in-cheek expression intended to raise awareness of the lack of diverse voices in open science (Bahlai, Bartlett, Burgio et al. 2019; Onie, 2020), in addition to the presence of behavior and communication styles that can be toxic or exclusionary. Importantly, not all bros are men; rather, they are individuals who demonstrate rigid thinking, lack self-awareness, and tend towards hostility, unkindness, and exclusion (Pownall et al., 2021; Whitaker & Guest, 2020). They generally belong to dominant groups who benefit from structural privileges. To address #bropenscience, researchers should examine and address structural inequalities within academic systems and institutions.
CARKing Critiquing After the Results are Known (CARKing) refers to presenting a criticism of a design as one that you would have made in advance of the results being known. It usually forms a reaction or criticism to unwelcome or unfavourable results, results whether the critic is conscious of this fact or not.
Center for Open Science (COS) A non-profit technology organization based in Charlottesville, Virginia with the mission “to increase openness, integrity, and reproducibility of research.” Among other resources, the COS hosts the Open Science Framework (OSF) and the Open Scholarship Knowledge Base.
Citation bias A biased selection of papers or authors cited and included in the references section. When citation bias is present, it is often in a way which would benefit the author(s) or reviewers, over-represents statistically significant studies, or reflects pervasive gender or racial biases (Brooks, 1985; Jannot et al., 2013; Zurn et al., 2020). One proposed solution is the use of Citation Diversity Statements, in which authors reflect on their citation practices and identify biases which may have emerged (Zurn et al., 2020).
Citation Diversity Statement A current effort trying to increase awareness and mitigate the citation bias in relation to gender and race is the Citation Diversity Statement, a short paragraph where “the authors consider their own bias and quantify the equitability of their reference lists. It states: (i) the importance of citation diversity, (ii) the percentage breakdown (or other diversity indicators) of citations in the paper, (iii) the method by which percentages were assessed and its limitations, and (iv) a commitment to improving equitable practices in science” (Zurn et al., 2020, p. 669).
Citizen Science Citizen science refers to projects that actively involve the general public in the scientific endeavour, with the goal of democratizing science. Citizen scientists can be involved in all stages of research, acting as collaborators, contributors or project leaders. An example of a major citizen science project involved individuals identifying astronomical bodies (Lintott, 2008).
CKAN The Comprehensive Knowledge Archive Network (CKAN) is an open-source data platform and free software that aims to provide tools to streamline publishing and data sharing. CKAN supports governments, research institutions and other organizations in managing and publishing large amounts of data.
Co-production An approach to research where stakeholders who are not traditionally involved in the research process are empowered to collaborate, either at the start of the project or throughout the research lifecycle. For example, co-produced health research may involve health professionals and patients, while co-produced education research may involve teaching staff and pupils/students. This is motivated by principles such as respecting and valuing the experiences of non-researchers, addressing power dynamics, and building mutually beneficial relationships.
COAR Community Framework for Good Practices in Repositories A framework which identifies best practices for scientific repositories and evaluation criteria for these practices. Its flexible and multidimensional approach means that it can be applied to different types of repositories, including those which host publications or data, across geographical and thematic contexts.
Code review The process of checking another researcher’s programming (specifically, computer source code) including but not limited to statistical code and data modelling. This process is designed to detect and resolve mistakes, thereby improving code quality. In practice, a modern peer review process may take place via a hosted online repository such as GitHub, GitLab or SourceForge.Related terms: Reproducibility; Version control
Codebook A codebook is a high-level summary that describes the contents, structure, nature and layout of a data set. A well-documented codebook contains information intended to be complete and self-explanatory for each variable in a data file, such as the wording and coding of the item, and the underlying construct. It provides transparency to researchers who may be unfamiliar with the data but wish to reproduce analyses or reuse the data.
Collaborative Replication and Education Project (CREP) The Collaborative Replication and Education Project (CREP) is an initiative designed to organize and structure replication efforts of highly-cited empirical studies in psychology to satisfy the dual needs for more high-quality direct replications and more training in empirical research techniques for psychology students. CREP aims to address the need for replications of highly cited studies, and to provide training, support and professional growth opportunities for academics completing replication projects.
Committee on Best Practices in Data Analysis and Sharing (COBIDAS) The Organization for Human Brain Mapping (OHBM) neuroimaging community has developed a guideline for best practices in neuroimaging data acquisition, analysis, reporting, and sharing of both data and analysis code. It contains eight elements that should be included when writing up or submitting a manuscript in order to improve reporting methods and the resulting neuroimages in order to optimize transparency and reproducibility.
Communality The common ownership of scientific results and methods and the consequent imperative to share both freely. Communality is based on the fact that every scientific finding is seen as a product of the effort of a number of agents. This norm is followed when scientists openly share their new findings with colleagues.
Community Projects Collaborative projects that involve researchers from different career levels, disciplines, institutions or countries. Projects may have different goals including peer support and learning, conducting research, teaching and education. They can be short-term (e.g., conference events or hackathons) or long-term (e.g., journal clubs or consortium-led research projects). Collaborative culture and community building are key to achieving project goals.
Compendium A collection of files prepared by a researcher to support a report or publication that include the data, metadata, programming code, software dependencies, licenses, and other instructions necessary for another researcher to independently reproduce the findings presented in the report or publication.
Computational reproducibility Ability to recreate the same results as the original study (including tables, figures, and quantitative findings), using the same input data, computational methods, and conditions of analysis. The availability of code and data facilitates computational reproducibility, as does preparation of these materials (annotating data, delineating software versions used, sharing computational environments, etc). Ideally, computational reproducibility should be achievable by another second researcher (or the original researcher, at a future time), using only a set of files and written instructions. Also referred to as analytic reproducibility (LeBel et al., 2018).
Conceptual replication A replication attempt whereby the primary effect of interest is the same but tested in a different sample and captured in a different way to that originally reported (i.e., using different operationalisations, data processing and statistical approaches and/or different constructs; LeBel et al., 2018). The purpose of a conceptual replication is often to explore what conditions limit the extent to which an effect can be observed and generalised (e.g., only within certain contexts, with certain samples, using certain measurement approaches) towards evaluating and advancing theory (Hüffmeier et al., 2016).
Confirmation bias The tendency to seek out, interpret, favor and recall information in a way that supports one’s prior values, beliefs, expectations, or hypothesis.
Confirmatory analyses Part of the confirmatory-exploratory distinction (Wagenmakers et al., 2012), where confirmatory analyses refer to analyses that were set a priori and test existent hypotheses. The lack of this distinction within published research findings has been suggested to explain replicability issues and is suggested to be overcome through study preregistration which clearly distinguishes confirmatory from exploratory analyses. Other researchers have questioned these terms and recommended a replacement with ‘discovery-oriented’ and ‘theory-testing research’ (Oberauer & Lewandowsky, 2019; see also Szollosi & Donkin, 2019).
Conflict of interest A conflict of interest (COI, also ‘competing interest’) is a financial or non-financial relationship, activity or other interest that might compromise objectivity or professional judgement on the part of an author, reviewer, editor, or editorial staff. The Principles of Transparency and Best Practice in Scholarly Publishing by the Committee on Publication Ethics (COPE), the Directory of Open Access Journals (DOAJ), the Open Access Scholarly Publishers Association (OASPA), and the World Association of Medical Editors (WAME) states that journals should have policies on publication ethics, including policies on COI (DOAJ, 2018). COIs should be made transparent so that readers can properly evaluate research and assess for potential or actual bias(es). Outside publishing, academic presenters, panel members and educators should also declare COIs. Purposeful failure to disclose a COI may be considered a form of misconduct.
Consortium authorship Only the name of the consortium or organization appears in the author column, and the individuals’ names do not appear in the literature: For example, ‘FORRT’ as an author. This can be seen in the products of collaborative projects with a very large number of collaborators and/or contributors. Depending on the journal policy, individual researchers may be recorded as one of the authors of the product in literature databases such as ORCID and Scopus. Consortium authorship can also be termed group, corporate, organisation/organization or collective authorship (e.g. https://www.bmj.com/about-bmj/resources-authors/article-submission/authorship-contributorship), or collaborative authorship (e.g. https://support.jmir.org/hc/en-us/articles/115001449591-What-is-a-group-author-collaborative-author-and-does-it-need-an-ORCID)
Constraints on Generality (COG) A statement that explicitly identifies and justifies the target population, and conditions, for the reported findings. Researchers should be explicit about potential boundary conditions for their generalisations (Simons et al., 2017). Researchers should provide detailed descriptions of the sampled population and/or contextual factors that might have affected the results such that future replication attempts can take these factors into account (Brandt et al., 2014). Conditions not explicitly listed are assumed not to have theoretical relevance to the replicability of the effect.
Construct validity When used in the context of measurement and testing, construct validity refers to the degree to which a test measures what it claims to be measuring. In fields that study hypothetical unobservable entities, construct validation is essentially theory testing, because it involves determining whether an objective measure (a questionnaire, lab task, etc.) is a valid representation of a hypothetical construct (i.e., conforms to a theory).
Content validity The degree to which a measurement includes all aspects of the concept that the researcher claims to measure; “A qualitative type of validity where the domain of the concept is made clear and the analyst judges whether the measures fully represent the domain” (Bollen, 1989, p.185). It is a component of construct validity and can be established using both quantitative and qualitative methods, often involving expert assessment.
Contribution A formal addition or activity in a research context. Contribution and contributor statements, including acknowledgments sections in journal articles, are attached to research products to better classify and recognize the variety of labor beyond “authorship” that any intellectual pursuit requires. Contribution is an evolving “source of data for understanding the relationship between authorship and knowledge production.” (Lariviere et al., p.430). In open source software development, a contribution may count as changes committed onto a project’s software repository following a peer-review (known technically as a pull request). An example of an open-source project accepting contributions is NumPy (Harris et al., 2020).
Corrigendum A corrigendum (pl. corrigenda, Latin: ‘to correct’) documents one or multiple errors within a published work that do not alter the central claim or conclusions and thus does not rise to the standard of requiring a retraction of the work. Corrigenda are typically available alongside the original work to aid transparency. Some publishers refer to this document as an erratum (pl. errata, Latin: ‘error’), while others draw a distinction between the two (corrigenda as author-errors and errata as publisher-errors).
Creative Commons (CC) license A set of free and easy-to-use copyright licences that define the rights of the authors and users of open data and materials in a standardized way. CC licenses enable authors or creators to share copyright-law-protected work with the public and come in different varieties with more or less clauses. For example, the Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) allows you to share and adapt the material, under the condition that you; give credit to the original creators, indicate if changes were made, and share under the same license as the original, and you cannot use the material for commercial purposes.
Creative destruction approach to replication Replication efforts should seek not just to support or question the original findings, but also to replace them with revised, stronger theories with greater explanatory power. This approach therefore involves ‘pruning’ existing theories, comparing all the alternative theories, and making replication efforts more generative and engaged in theory-building (Tierney et al. 2020, 2021).
Credibility revolution The problems and the solutions resulting from a growing distrust in scientific findings, following concerns about the credibility of scientific claims (e.g., low replicability). The term has been proposed as a more positive alternative to the term replicability crisis, and includes the many solutions to improve the credibility of research, such as preregistration, transparency, and replication.
CRediT The Contributor Roles Taxonomy (CRediT; https://casrai.org/credit/) is a high-level taxonomy used to indicate the roles typically adopted by contributors to scientific scholarly output. There are currently 14 roles that describe each contributor’s specific contribution to the scholarly output. They can be assigned multiple times to different authors and one author can also be assigned multiple roles. CRediT includes the following roles: Conceptualization, Data curation, Formal Analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing. A description of the different roles can be found in the work of Brand et al., (2015).
Criterion validity The degree to which a measure corresponds to other valid measures of the same concept. Criterion validity is usually established by calculating regression coefficients or bivariate correlations estimating the direction and strength of relation between test measure and criterion measure. It is often confused with construct validity although it differs from it in intent (merely predictive rather than theoretical) and interest (predicting an observable outcome rather than a latent construct). Unreliability in either test or criterion scores usually diminishes criterion validity. Also called criterion-related or concrete validity.
Crowdsourced Research Crowdsourced research is a model of the social organisation of research as a large-scale collaboration in which one or more research projects are conducted by multiple teams in an independent yet coordinated manner. Crowdsourced research aims at achieving efficiency and scalability gains by pooling resources, promoting transparency and social inclusion, as well as increasing the rigor, reliability, and trustworthiness by enhancing statistical power and mutual social vetting. It stands in contrast to the traditional model of academic research production, which is dominated by the independent work of individual or small groups of researchers (‘small science’). Examples of crowdsourced research include so-called ‘many labs replication’ studies (Klein et al., 2018), ‘many analysts, one dataset’ studies (Silberzahn et al., 2018), distributive collaborative networks (Moshontz et al., 2018) and open collaborative writing projects such as Massively Open Online Papers (MOOPs) (Himmelstein et al., 2019; Tennant et al., 2019). Alternatively, crowdsourced research can refer to the use of a large number of research “crowdworkers” in data collection hired through online labor markets like Amazon Mechanical Turk or Prolific, for example in content analysis (Benoit et al., 2016; Lind et al., 2017) or experimental research (Peer et al., 2017). Crowdsourced research that is both open for participation and open through shared intermediate outputs has been referred to as crowd science (Franzoni & Sauermann, 2014).
Cultural taxation The additional labor expected or demanded of members of underrepresented or marginalized minority groups, particularly scholars of color. This labor often comes from service roles providing ethnic, cultural, or gender representation and diversity. These roles can be formal or informal, and are generally unrewarded or uncompensated. Such labor includes providing expertise on matters of diversity, educating members of majority groups, acting as a liaison to minority communities, and formal and informal roles as mentor and support system for minority students.
Cumulative science Goal of any empirical science, it is the pursuit of “the construction of a cumulative base of knowledge upon which the future of the science may be built” (Curran, 2009, p. 1). The idea that science will create more complete and accurate theories as a function of the amount of evidence and data that has been collected. Cumulative science develops in gradual and incremental steps, as opposed to one abrupt discovery. While revolutionary science occurs scarcely, cumulative science is the most common form of science.
Data Access and Research Transparency (DA-RT) Data Access and Research Transparency (DA-RT) is an initiative aimed at increasing data access and research transparency in the social sciences. It is a multi-epistemic and multi-method initiative, created in 2014 by the Council of the American Political Science Association (APSA), to bolster the rigor of empirical social inquiry. In addition to other activities, DA-RT developed the Journal Editors’ Transparency Statement (JETS), which requires subscribing journals to (a) making relevant data publicly available if the study is published, (b) following a strict data citation policy, (c) transparently describing the analytical procedures and, if possible, providing public access to analytical code, and (d) updating their journal style guides, codes of ethics to include improved data access and research transparency requirements.
Data management plan (DMP) A structured document that describes the process of data acquisition, analysis, management and storage during a research project. It also describes data ownership and how the data will be preserved and shared during and upon completion of a project. Data management templates also provide guidance on how to make research data FAIR and where possible, openly available.
Data sharing collection of practices, technologies, cultural elements and legal frameworks that are relevant to the practice of making data used for scholarly research available to other investigators. Gollwitzer et al. (2020) describe two types of data sharing: Type 1: Data that is necessary to reproduce the findings of a published research article. Type 2: data that have been collected in a research project but have not (or only partly) been analysed or reported after the completion of the project and are hence typically shared under a specified embargo period.
Data visualisation Graphical representation of data or information. Data visualisation takes advantage of humans’ well-developed visual processing capacity to convey insight and communicate key information. Data visualisations often display the raw data, descriptive statistics, and/or inferential statistics.
Decolonisation Coloniality can be described as the naturalisation of concepts such as imperialism, capitalism, and nationalism. Together these concepts can be thought of as a matrix of power (and power relations) that can be traced to the colonial period. Decoloniality seeks to break down and decentralize those power relations, with the aim to understand their persistence and to reconstruct the norms and values of a given domain. In an academic setting, decolonisation refers to the rethinking of the lens through which we teach, research, and co-exist, so that the lens generalises beyond Western-centred and colonial perspectives. Decolonising academia involves reconstructing the historical and cultural frameworks being used, redistributing a sense of belonging in universities, and empowering and including voices and knowledge types that have historically been excluded from academia. This is done when people engage with their past, present, and future whilst holding a perspective that is separate from the socially dominant perspective. Also, by including, not rejecting, an individuals’ internalised norms and taboos from the specific colony.
Demarcation criterion A criterion for distinguishing science from non-science which aims to indicate an optimal way for knowledge of the world to grow. In a Popperian approach, the demarcation criterion was falsifiability and the application of a falsificationist attitude. Alternative approaches include that of Kuhn, who believed that the criterion was puzzle solving with the aim of understanding nature, and Lakatos, who argued that science is marked by working within a progressive research programme.
Direct replication As ‘direct replication’ does not have a widely-agreed technical meaning nor there is no clear cut distinction between a direct and conceptual replication, below we list several contributions towards a consensus. Rather than debating the ‘exactness’ of a replication, it is more helpful to discuss the relevant differences between a replication and its target, and their implications for the reliability and generality of the target’s results.
Diversity Diversity refers to between-person (i.e., interindividual) variation in humans, e.g. ability, age, beliefs, cognition, country, disability, ethnicity, gender, language, race, religion or sexual orientation. Diversity can refer to diversity of researchers (who do the research), the diversity of participant samples (who is included in the study), and diversity of perspectives (the views and beliefs researchers bring into their work; Syed & Kathawalla, 2020).
DOI (digital object identifier) Digital Object Identifiers (DOI) are alpha-numeric strings that can be assigned to any entity, including: publications (including preprints), materials, datasets, and feature films – the use of DOIs is not restricted to just scholarly or academic material. DOIs “provides a system for persistent and actionable identification and interoperable exchange of managed information on digital networks.” (https://doi.org/hb.html). There are many different DOI registration agencies that operate DOIs, but the two that researchers would most likely encounter are Crossref and Datacite.
DORA The San Francisco Declaration on Research Assessment (DORA) is a global initiative aiming to reduce dependence on journal-based metrics (e.g. journal impact factor and citation counts) and, instead, promote a culture which emphasises the intrinsic value of research. The DORA declaration targets research funders, publishers, research institutes and researchers and signing it represents a commitment to aligning research practices and procedures with the declaration’s principles.
Double-blind peer review Evaluation of research products by qualified experts where both the author(s) and reviewer(s) are anonymous to each other. “This approach conceals the identity of the authors and their affiliations from reviewers and would, in theory, remove biases of professional reputation, gender, race, and institutional affiliation, allowing the reviewer to avoid bias and to focus on the manuscript’s merit alone.” (Tvina et al., 2019, 1082). Like all types of peer-review, double-blind peer review is not without flaws. Anonymity can be difficult, if not impossible, to achieve for certain researchers working in a niche area.
Double consciousness An identity confusion, as the individual feels like they have two distinct identities. One is to assimilate to the dominant culture at university when the individual is with colleagues and professors, while the other is when the individual is with their families. This continuous shift may cause a lack of certainty about the individual’s identity and a belief that the individual does not fully belong anywhere. This lack of belonging can lead to poor social integration within the academic culture that can manifest in less opportunities and more mental health issues in the individual (Rubin, 2021; Rubin et al., 2019).
Early career researchers (ECRs) A label given to researchers who “range from senior doctoral students to postdoctoral workers who may have up to 10 years postdoctoral education; the latter group may therefore include early career or junior academics” (Eley et al., 2012, p. 3). What specifically (e.g. age, time since PhD inclusive or exclusive of career breaks and leave, title, funding awarded) constitutes an ECR can vary across funding bodies, academic organisations, and countries.
Economic and societal impact The contribution a research item makes to the broader economy and society. It also captures the benefits of research to individuals, organisations, and/or nations.
Embargo Period Applied to Open Scholarship, in academic publishing, the period of time after an article has been published and before it can be made available as Open Access. If an author decides to self-archive their article (e.g., in an Open Access repository) they need to observe any embargo period a publisher might have in place. Embargo periods vary from instantaneous up to 48 months, with 6 and 12 months being common (Laakso & Björk, 2013). Embargo periods may also apply to pre-registrations, materials, and data, when authors decide to only make these available to the public after a certain period of time, for instance upon publication or even later when they have additional publication plans and want to avoid being scooped (Klein et al., 2018).
Epistemic uncertainty Systematic uncertainty due to limited data, measurement precision, model or process specification, or lack of knowledge. That is, uncertainty due to lack of knowledge that could, in theory, be reduced through conducting additional research to increase understanding. Such uncertainty is said to be personal, since knowledge differs across scientists, and temporary since it can change as new data become available.
Epistemology Alongside ethics, logic, and metaphysics, epistemology is one of the four main branches of philosophy. Epistemology is largely concerned with nature, origin, and scope of knowledge, as well as the rationality of beliefs.
Equity Different individuals have different starting positions (cf. “opportunity gaps”) and needs. Whereas equal treatment focuses on treating all individuals equally, equitable treatment aims to level the playing field by actively increasing opportunities for under-represented minorities. Equitable treatment aims to attain equality through “fairness”: taking into account different needs for support for different individuals, instead of focusing merely on the needs of the majority.
Equivalence Testing Equivalence tests statistically assess the null hypothesis that a given effect exceeds a minimum criterion to be considered meaningful. Thus, rejection of the null hypothesis provides evidence of a lack of (meaningful) effect. Based upon frequentist statistics, equivalence tests work by specifying equivalence bounds: a lower and upper value that reflect the smallest effect size of interest. Two one-sided t-tests are then conducted against each of these equivalence bounds to assess whether effects that are deemed meaningful can be rejected (see Schuirmann, 1972; Lakens et al., 2018; 2020).
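As an illustrative sketch (not drawn from the sources cited above), the TOST logic can be expressed in a few lines of Python; the simulated data, the equivalence bounds of +/- 0.5 raw units, and the seed are all invented for demonstration:

```python
# A minimal TOST sketch: test whether two group means are equivalent
# within invented bounds of +/- 0.5 (all data simulated for illustration).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
a = rng.normal(0.1, 1.0, 50)   # simulated group A
b = rng.normal(0.0, 1.0, 50)   # simulated group B

low, high = -0.5, 0.5          # smallest effect sizes of interest

# One-sided test against each bound: shifting group A by a bound turns
# "difference > low" / "difference < high" into ordinary one-sided t-tests.
_, p_low = stats.ttest_ind(a - low, b, alternative='greater')
_, p_high = stats.ttest_ind(a - high, b, alternative='less')

# Equivalence is declared only if both one-sided tests are significant;
# the TOST p-value is the larger of the two.
print(f"TOST p = {max(p_low, p_high):.3f}")
```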
Error detection Broadly refers to examining research data and manuscripts for mistakes or inconsistencies in reporting. Commonly discussed approaches include: checking inconsistencies in descriptive statistics (e.g. summary statistics that are not possible given the sample size and measure characteristics; Brown & Heathers, 2017; Heathers et al., 2018), inconsistencies in reported statistics (e.g. p-values that do not match the reported F statistics and accompanying degrees of freedom; Epskamp & Nuijten, 2016; Nuijten et al., 2016), and image manipulation (Bik et al., 2016). Error detection is one motivation for data and analysis code to be openly available, so that peer review can confirm a manuscript’s findings, or if already published, the record can be corrected. Detected errors can result in corrections or retractions of published articles, though these actions are often delayed, long after erroneous findings have influenced further research.
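A minimal sketch of one such check, recomputing a p-value from a reported F statistic and its degrees of freedom; the "reported" values below are invented, and this shows only the general idea behind tools such as statcheck, not their implementation:

```python
# Recompute a p-value from hypothetical reported statistics and flag
# inconsistencies (all values invented for illustration).
from scipy import stats

F, df1, df2 = 4.53, 1, 38      # hypothetical reported F(1, 38)
reported_p = 0.04              # hypothetical reported p-value

recomputed_p = stats.f.sf(F, df1, df2)   # P(F' >= F) under the null
print(f"recomputed p = {recomputed_p:.4f}")

# A crude tolerance for rounding of the reported test statistic.
if abs(recomputed_p - reported_p) > 0.005:
    print("Possible reporting inconsistency: check the original statistics.")
```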
Evidence Synthesis A research method which aims to draw general conclusions about a research question on a certain topic, phenomenon or effect by reviewing research outcomes and information from a range of different sources. Information subject to synthesis can be extracted from both qualitative and quantitative studies. The method used to synthesise the gathered information can be qualitative (narrative synthesis), quantitative (meta-analysis) or mixed (meta-synthesis, systematic mapping). Evidence synthesis has many applications and is often used in the context of healthcare and public policy, as well as in the understanding and advancement of specific research fields.
Exploratory data analysis Exploratory Data Analysis (EDA) is a well-established statistical tradition that provides conceptual and computational tools for discovering patterns in data to foster hypothesis development and refinement. These tools and attitudes complement the use of hypothesis tests used in confirmatory data analysis (CDA). Even when well-specified theories are held, EDA helps one interpret the results of CDA and may reveal unexpected or misleading patterns in the data.
External Validity Whether the findings of a scientific study can be generalized to other contexts outside the study context (different measures, settings, people, places, and times). Statistically, threats to external validity may reflect interactions whereby the effect of one factor (the independent variable) depends on another factor (a confounding variable). External validity may also be limited by the study design (e.g., an artificial laboratory setting or a non-representative sample).
Face validity A subjective judgement of how suitable a measure appears to be on the surface, that is, how well a measure is operationalized. For example, judging whether questionnaire items should relate to a construct of interest at face value. Face validity is related to construct validity, but since it is subjective/informal, it is considered an easy but weak form of validity.
FAIR principles Describes making scholarly materials Findable, Accessible, Interoperable and Reusable (FAIR). ‘Findable’ and ‘Accessible’ are concerned with where materials are stored (e.g. in data repositories), while ‘Interoperable’ and ‘Reusable’ focus on the importance of data formats and how such formats might change in the future.
Feminist psychology With a particular focus on gender and sexuality, feminist psychology is inherently concerned with representation, diversity, inclusion, accessibility, and equality. Feminist psychology initially grew out of a concern for representing the lived experiences of girls and women, but has since evolved into a more nuanced, intersectional and comprehensive concern for all aspects of equality (e.g., Eagly & Riger, 2014). Feminist psychologists have advocated for more rigorous consideration of equality, diversity, and inclusion within Open Science spaces (Pownall et al., 2021).
First-last-author-emphasis norm (FLAE) An authorship system that assigns the order of authorship depending on the contributions of a given author while simultaneously valuing the first and last position of the authorship order most. According to this system, the two main authors are indicated as the first and last author – the order of the authors between the first and last position is determined by contribution in a descending order.
FORRT Framework of Open Reproducible Research and Teaching. It aims to provide a pedagogical infrastructure designed to recognize and support the teaching and mentoring of open and reproducible research in tandem with prototypical subject matters in higher education. FORRT strives to be an effective, evolving, and community-driven organization raising awareness of the pedagogical implications of open and reproducible science and its associated challenges (i.e., curricular reform, epistemological uncertainty, methods of education). FORRT also advocates for the opening of teaching and mentoring materials as a means to facilitate access, discovery, and learning to those who otherwise would be educationally disenfranchised.
Free Our Knowledge Platform A collective action platform aiming to support the open science movement by obtaining pledges from researchers that they will implement certain research practices (e.g., pre-registration, pre-prints). Pledges remain anonymous until a sufficient number of people have pledged, at which point the names of those who pledged are released. The initiative is a grassroots movement instigated by early career researchers.
G*Power Free to use statistical software for performing power analyses. The user specifies the desired statistical test (e.g. t-test, regression, ANOVA), and three of the following: the number of groups/observations, effect size, significance level, or power, in order to calculate the unspecified aspect.
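G*Power itself is a standalone GUI program; the sketch below shows the same kind of a priori power calculation in Python using statsmodels, with an assumed effect size of d = 0.5, an alpha of .05, and a desired power of .80 (all illustrative inputs):

```python
# A priori power analysis for an independent-samples t-test; the inputs
# (d = 0.5, alpha = .05, power = .80) are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5, alpha=0.05, power=0.80, alternative='two-sided'
)
print(f"Required sample size per group: {n_per_group:.1f}")  # about 64
```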
Gaming (the system) Adopting questionable research practices (QRPs, e.g., salami slicing of an academic paper) that would align with academic incentive structures that benefit the academic (e.g. in prestige, hiring, or promotion) regardless of whether they support the process of scholarship. If systems rely on metrics to determine an outcome (e.g. academic credit) those metrics can be subject to intentional manipulation (Naudet et al., 2018) or “gamed”. Where promotions, hiring, and tenure are based on flawed metrics they may disfavor openness, rigor, and transparent work (Naudet et al., 2018) – for example favoring “quantity over quality” – and exacerbate existing inequalities.
Garden of forking paths The typically invisible decision tree traversed during operationalization and statistical analysis, given that ‘there is a one-to-many mapping from scientific to statistical hypotheses’ (Gelman and Loken, 2013, p. 6). In other words, even in the absence of p-hacking or fishing expeditions, and even when the research hypothesis was posited ahead of time, there can be a plethora of statistical results that appear to be supported by theory given the data. “The problem is there can be a large number of potential comparisons when the details of data analysis are highly contingent on data, without the researcher having to perform any conscious procedure of fishing or examining multiple p-values” (Gelman and Loken, 2013, p. 1). The term highlights the uncertainty ensuing from idiosyncratic analytical and statistical choices in mapping theory to test, and contrasts intentional (and unethical) questionable research practices (e.g. p-hacking and fishing expeditions) with unintentional practices that can have the same effect despite no intent to corrupt the results. The garden of forking paths thus refers to decisions during the scientific process that inflate the false-positive rate as a consequence of the many paths that could have been taken (had other decisions been made).
General Data Protection Regulation (GDPR) A legal framework of seven principles implemented across the European Union (EU) that aims to safeguard individuals’ information. The framework seeks to commission citizens with control over their personal data, whilst regulating the parties involved in storing and processing these data. This set of legislation dictates the free movement of individuals’ personal information both within and outside the EU and must be considered by researchers when designing and running studies.
Generalizability Generalizability refers to how applicable a study’s results are to broader groups of people, settings, or situations beyond those studied, and how the findings relate to this wider context (Frey, 2018; Kukull & Ganguli, 2012).
Gift (or Guest) Authorship The inclusion in an article’s author list of individuals who do not meet the criteria for authorship. As authorship is associated with benefits including peer recognition and financial rewards, there are incentives for inclusion as an author on published research. Gifting authorship, or extending authorship credit to an individual who does not merit such recognition, can be intended to help the gift recipient, repay favors (including reciprocal gift authorship), maintain personal and professional relationships, and enhance chances of publication. Gift authorship is widely considered an unethical practice.
Git A software package for tracking changes in a local set of files (local version control), initially developed by Linus Torvalds. In general, it is used by programmers to track and develop computer source code within a set directory, folder or a file system. Git can access remote repository hosting services (e.g. GitHub) for remote version control that enables collaborative software development by uploading contributions from a local system. This process found its way into the scientific process to enable open data, open code and reproducible analyses.
Goodhart’s Law A term coined by economist Charles Goodhart to refer to the observation that measuring something inherently changes user behaviour. In relation to examination performance, Strathern (1997) stated that “when a measure becomes a target, it ceases to be a good measure” (p. 308). Applied to open scholarship, and the structure of incentives in academia, Goodhart’s Law predicts that metrics of scientific evaluation will likely be abused and exploited, as evidenced by Muller (2019).
H-index Hirsch’s index, abbreviated as H-index, intends to measure both productivity and research impact by combining the number of publications and the number of citations to these publications. Hirsch (2005) defined the index as “the number of papers with citation number ≥ h” (p. 16569). That is, the greatest number such that an author (or journal) has published at least that many papers that have been cited at least that many times. The index is perceived as superior to measures that only count, for instance, citations or publications, but it has been criticised as a tool for researcher assessment (e.g. Wendl, 2007).
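Hirsch’s definition translates directly into a few lines of code; in this sketch the citation counts are invented for illustration:

```python
# Largest h such that at least h papers have at least h citations each.
def h_index(citations):
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# 3: at least 3 papers have >= 3 citations, but not 4 papers with >= 4.
print(h_index([25, 8, 5, 3, 3, 1]))
```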
Hackathon An organized event where experts, designers, or researchers collaborate for a relatively short amount of time to work intensively on a project or problem. The term is originally borrowed from computer programmer and software development events whose goal is to create a fully fledged product (resources, research, software, hardware) by the end of the event, which can last several hours to several days.
HARKing A questionable research practice termed ‘Hypothesizing After the Results are Known’ (HARKing). “HARKing is defined as presenting a post hoc hypothesis (i.e., one based on or informed by one’s results) in a research report as if it was, in fact, a priori” (Kerr, 1998, p. 196). For example, performing subgroup analyses, finding an effect in one subgroup, and writing the introduction with a ‘hypothesis’ that matches these results.
Hidden Moderators Contextual conditions that can, unbeknownst to researchers, make the results of a replication attempt deviate from those of the original study. Hidden moderators are sometimes invoked to explain (away) failed replications. Also called hidden assumptions.
Hypothesis A hypothesis is an unproven statement about the connection between variables (Glass & Hall, 2008) and can be based on prior experiences, scientific knowledge, preliminary observations, theory and/or logic. In scientific testing, a hypothesis can usually be formulated with a direction (e.g. there will be a positive correlation) or without one (e.g. there will be a correlation). Popper (1959) posits that hypotheses must be falsifiable, that is, it must be conceivably possible to prove the hypothesis false. However, hypothesis testing based on falsification has been argued to be vague, as it is contingent on many other untested assumptions in the hypothesis (i.e., auxiliary hypotheses). Longino (1990, 1992) argued that ontological heterogeneity should be valued more than ontological simplicity in the biological sciences, which suggests we should investigate differences between and within biological organisms.
i10-index A research metric created by Google Scholar that represents the number of publications a researcher has with at least 10 citations.
Ideological bias The idea that pre-existing opinions about the quality of research can depend on the ideological views of the author(s). One of the many biases in the peer review process, it predicts that favourable opinions of a piece of research are more likely when its authors are friends, collaborators, or scientists who share an editor’s or reviewer’s political viewpoints (Tvina et al., 2019). This can lead to a variety of conflicts of interest that undermine diverse perspectives, for example: speeding or delaying peer review, or influencing the chances of an individual being invited to present their research, thus promoting their work.
Incentive structure The set of evaluation and reward mechanisms (explicit and implicit) for scientists and their work. Incentivised areas within the broader structure include hiring and promotion practices, track record for awarding funding, and prestige indicators such as publication in journals with high impact factors, invited presentations, editorships, and awards. It is commonly believed that these criteria are often misaligned with the telos of science, and therefore do not promote rigorous scientific output. Initiatives like DORA aim to reduce the field’s dependency on evaluation criteria such as journal impact factors in favor of assessments based on the intrinsic quality of research outputs.
Inclusion Inclusion, or inclusivity, refers to a sense of welcome and respect within a given collaborative project or environment (such as academia). Whereas diversity simply indicates a wide range of backgrounds, perspectives, and experiences, efforts to increase inclusion go further to promote engagement and equal valuation among diverse individuals who might otherwise be marginalized. Increasing inclusivity often involves minimising the impact of, or even removing, systemic barriers to accessibility and engagement.
Induction “Reasoning by drawing a conclusion not guaranteed by the premises; for example, by inferring a general rule from a limited number of observations. Popper believed that there was no such logical process; we may guess general rules but such guesses are not rendered even more probable by any number of observations. By contrast, Bayesians inductively work out the increase in probability of a hypothesis that follows from the observations.” (Dienes, 2008, p. 164)
Interaction Fallacy A statistical error in which a formal test is not conducted to assess the difference between a significant and non-significant correlation (or other measures, such as Odds Ratio). This fallacy occurs when a significant and non-significant correlation coefficient are assumed to represent a statistically significant difference but the comparison itself is not explicitly tested.
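The missing formal test can be carried out with Fisher’s r-to-z transformation; in this sketch the two correlations and sample sizes are invented, and the ‘significant versus non-significant’ pair turns out not to differ significantly from each other:

```python
# Test the difference between two independent correlations
# (values invented for illustration).
import numpy as np
from scipy import stats

r1, n1 = 0.45, 80   # 'significant' correlation in group 1
r2, n2 = 0.20, 75   # 'non-significant' correlation in group 2

z1, z2 = np.arctanh(r1), np.arctanh(r2)       # Fisher r-to-z transform
se = np.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))     # SE of the difference z1 - z2
z = (z1 - z2) / se
p = 2 * stats.norm.sf(abs(z))                 # two-sided p-value
print(f"z = {z:.2f}, p = {p:.3f}")            # here p > .05: no reliable difference
```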
Interlocking An analysis at the core of intersectionality to analyse power, inequality and exclusion, as efforts to reform academic culture cannot be completed by investigating only one avenue in isolation (e.g. race, gender or ability) but by considering all the systems of exclusion. In contrast to intersectionality (which refers to the individual having multiple social identities), interlocking is usually used to describe the systems that combine to serve as oppressive measures toward the individual based on these identities.
Internal Validity An indicator of the extent to which a study’s findings are representative of the true effect in the population of interest and not due to research confounds, such as methodological shortcomings. In other words, whether the observed evidence or covariation between the independent (predictor) and dependent (criterion) variables can be taken as a bona fide relationship and not a spurious effect owing to uncontrolled aspects of the study’s set up. Since it involves the quality of the study itself, internal validity is a priority for scientific research.
Intersectionality A term which derives from Black feminist thought and broadly describes how social identities exist within ‘interlocking systems of oppression’ and structures of (in)equalities (Crenshaw, 1989). Intersectionality offers a perspective on the way multiple forms of inequality operate together to compound or exacerbate each other. Multiple concurrent forms of identity can have a multiplicative effect and are not merely the sum of the component elements. One implication is that identity cannot be adequately understood through examining a single axis (e.g., race, gender, sexual orientation, class) at a time in isolation, but requires simultaneous consideration of overlapping forms of identity.
JabRef An open-source, cross-platform citation and reference management tool that is available free of charge. It allows editing BibTeX files, importing data from online scientific databases, and managing and searching BibTeX files.
Jamovi Free and open source software for data analysis based on the R language. The software has a graphical user interface and provides the R code underlying its analyses. Jamovi supports computational reproducibility by saving the data, code, analyses, and results in a single file.
JASP Named after Sir Harold Jeffreys, JASP stands for Jeffreys’s Amazing Statistics Program. It is a free and open source software for data analysis. JASP relies on a graphical user interface and offers both null hypothesis tests and their Bayesian counterparts. JASP supports computational reproducibility by saving the data, code, analyses, and results in a single file.
Journal Impact Factor™ The mean number of citations to research articles in that journal over the preceding two years. It is a proprietary and opaque calculation marketed by Clarivate™. Journal Impact Factors are not associated with the content quality or the peer review process.
JSON file JavaScript Object Notation (JSON) is a data format for structured data that can be used to represent attribute-value pairs. Values can themselves contain further JSON notation (i.e., nested information). JSON files are formally encoded as strings of text and are thus human-readable. Beyond storing information, this feature makes them suitable for annotating other content. For example, JSON files are used in the Brain Imaging Data Structure (BIDS) to describe dataset metadata in a standardized format (dataset_description.json).
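A minimal sketch of writing and re-reading such a file in Python; the field values below are invented and only loosely modelled on a BIDS dataset_description.json:

```python
# Write and re-read a small JSON metadata file (contents illustrative).
import json

metadata = {
    "Name": "Example dataset",                       # attribute-value pair
    "Authors": ["A. Researcher", "B. Researcher"],   # nested value (an array)
}

with open("dataset_description.json", "w") as f:
    json.dump(metadata, f, indent=2)                 # human-readable text

with open("dataset_description.json") as f:
    print(json.load(f)["Name"])                      # Example dataset
```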
Knowledge acquisition The process by which the mind decodes or extracts, stores, and relates new information to existing information in long term memory. Given the complex structure and nature of knowledge, this process is studied in the philosophical field of epistemology, as well as the psychological field of learning and memory.
Likelihood function A statistical model of the data used in frequentist and Bayesian analyses, defined up to a constant of proportionality. A likelihood function represents the likelihood of different parameter values for a distribution given the observed data. Given that probability distributions have unknown population parameters, the likelihood function indicates how well the sample data summarise these parameters. As such, the likelihood function gives an idea of the goodness of fit of a model to the sample data for a given set of values of the unknown population parameters.
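A worked sketch with invented data: after observing 7 successes in 10 binary trials, the binomial likelihood function over the unknown success probability peaks at the sample proportion:

```python
# Binomial likelihood L(theta) = P(7 successes in 10 trials | theta).
import numpy as np
from scipy import stats

k, n = 7, 10
thetas = np.linspace(0, 1, 101)
likelihood = stats.binom.pmf(k, n, thetas)

# The likelihood is maximised at theta = k/n = 0.7: the parameter value
# under which the observed data are most probable.
print(thetas[np.argmax(likelihood)])  # 0.7
```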
Likelihood Principle The notion that all information relevant to inference contained in data is provided by the likelihood. The principle suggests that the likelihood function can be used to compare the plausibility of various parameter values. While Bayesians and likelihood theorists subscribe to the likelihood principle, Neyman-Pearson theorists do not, as significance tests violate the likelihood principle because they take into account information not in the likelihood.
Literature Review Researchers often review research records on a given topic to better understand effects and phenomena of interest before embarking on a new research project, to understand how theory links to evidence or to investigate common themes and directions of existing study results and claims. Different types of reviews can be conducted depending on the research question and literature scope. To determine the scope and key concepts in a given field, researchers may want to conduct a scoping literature review. Systematic reviews aim to access and review all available records for the most accurate and unbiased representation of existing literature. Non-systematic or focused literature reviews synthesise information from a selection of studies relevant to the research question although they are uncommon due to susceptibility to biases (e.g. researcher bias; Siddaway et al., 2019).
Manel Portmanteau for ‘male panel’, usually referring to speaker panels at conferences composed entirely of (usually Caucasian) men. Typically discussed in the context of gender disparities in academia (e.g., women being less likely to be recognised as experts by their peers and, subsequently, having fewer opportunities for career development).
Many authors Large-scale collaborative projects involving tens or hundreds of authors from different institutions, as opposed to research carried out by small teams of authors. This approach has become increasingly common in psychology and other sciences in recent years, following earlier trends observed in, for example, high-energy physics and biomedical research in the 1990s. These large international scientific consortia work on a research project to bring together a broader range of expertise and collaborate to produce manuscripts.
Many Labs A crowdsourcing initiative led by the Open Science Collaboration (2015) whereby several hundred separate research groups from various universities run replication studies of published effects. This initiative is also known as “Many Labs I” and was subsequently followed by a “Many Labs II” project that assessed variation in replication results across samples and settings. Similar projects include ManyBabies, EEGManyLabs, and the Psychological Science Accelerator.
Massive Open Online Courses (MOOCs) Exclusively online courses which are accessible to any learner at any time, are typically free to access (while not necessarily openly licensed), and provide video-based instructions and downloadable data sets and exercises. The “massive” aspect describes the high volume of students that can access the course at any one time due to their flexibility, low or no cost, and online nature of the materials.
Massively Open Online Papers (MOOPs) Unlike the traditional collaborative article, a MOOP follows an open participatory and dynamic model that is not restricted by a predetermined list of contributors.
Matthew effect (in science) Named for the ‘rich get richer; poor get poorer’ paraphrase of the Gospel of Matthew. Eminent scientists and early-career researchers with a prestigious fellowship are disproportionately attributed greater levels of credit and funding for their contributions to science while relatively unknown or early-career researchers without a prestigious fellowship tend to get disproportionately little credit for comparable contributions. The impact is a substantial cumulative advantage that results from modest initial comparative advantages (and vice versa).
Meta-analysis A meta-analysis is a statistical synthesis of results from a series of studies examining the same phenomenon. A variety of meta-analytic approaches exist, including random or fixed effects models or meta-regressions, which allow for an examination of moderator effects. By aggregating data from multiple studies, a meta-analysis could provide a more precise estimate for a phenomenon (e.g. type of treatment) than individual studies. Results are usually visualized in a forest plot. Meta-analyses can also help examine heterogeneity across study results. Meta-analyses are often carried out in conjunction with systematic reviews and similarly require a systematic search and screening of studies. Publication bias is also commonly examined in the context of a meta-analysis and is typically visually presented via a funnel plot.
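As a toy sketch of the fixed-effect case, studies can be pooled with inverse-variance weights; the effect sizes and standard errors below are invented for illustration:

```python
# Fixed-effect meta-analysis via inverse-variance weighting (toy data).
import numpy as np

effects = np.array([0.30, 0.45, 0.12, 0.25])   # per-study effect sizes
ses     = np.array([0.15, 0.20, 0.10, 0.12])   # per-study standard errors

w = 1 / ses**2                                 # inverse-variance weights
pooled = np.sum(w * effects) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```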
Meta-science or Meta-research The scientific study of science itself with the aim to describe, explain, evaluate and/or improve scientific practices. Meta-science typically investigates scientific methods, analyses, the reporting and evaluation of data, the reproducibility and replicability of research results, and research incentives.
Metadata Structured data that describes and synthesises other data. Metadata can help find, organize, and understand data. Examples of metadata include creator, title, contributors, keywords, and tags, as well as any information necessary to verify and understand the results and conclusions of a study, such as a codebook of data labels and descriptions, the sample, and the data collection process.
Model (computational) Computational models aim to mathematically translate the phenomena under study to better understand, communicate and predict complex behaviours.
Model (philosophy) The process by which a verbal description is formalised to remove ambiguity, while also constraining the dimensions a theory can span. The model is thus data derived. “Many scientific models are representational models: they represent a selected part or aspect of the world, which is the model’s target system” (Frigg & Hartman, 2020).
Model (statistical) A mathematical representation of observed data that aims to reflect the population under study, allowing for a better understanding of the phenomenon of interest, identification of relationships among variables, and predictions about future instances. A classic example is the application of the chi-square test to understand the relationship between smoking and cancer (Doll & Hill, 1954).
Multi-Analyst Studies In typical empirical studies, a single researcher or research team conducts the analysis, which creates uncertainty about the extent to which the choice of analysis influences the results. In multi-analyst studies, two or more researchers independently analyse the same research question or hypothesis on the same dataset. According to Aczel and colleagues (2021), a multi-analyst approach may be beneficial in increasing our confidence in a particular finding; uncovering the impact of analytical preferences across research teams; and highlighting the variability in such analytical approaches.
Multiplicity Potential inflation of Type I error rates (incorrectly rejecting the null hypothesis) because of multiple statistical testing, for example, multiple outcomes, multiple follow-up time points, or multiple subgroup analyses. To overcome issues with multiplicity, researchers will often apply controlling procedures (e.g., Bonferroni, Holm-Bonferroni; Tukey) that correct the alpha value to control for inflated Type I errors. However, by controlling for Type I errors, one can increase the possibility of Type II errors (i.e., incorrectly accepting the null hypothesis).
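A sketch of applying one such controlling procedure in Python, assuming statsmodels is available; the four p-values are invented for illustration:

```python
# Bonferroni correction of several p-values (inputs illustrative).
from statsmodels.stats.multitest import multipletests

pvals = [0.01, 0.04, 0.03, 0.20]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method='bonferroni')
print(p_adj)    # each p multiplied by the number of tests (capped at 1.0)
print(reject)   # only tests whose corrected p stays below .05 are rejected
```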
Multiverse analysis Multiverse analyses are based on all potentially equally justifiable data processing and statistical analysis pipelines that can be employed to test a single hypothesis. In a data multiverse analysis, a single set of raw data is processed into a multiverse of data sets by applying all possible combinations of justifiable preprocessing choices. Model multiverse analyses apply equally justifiable statistical models to the same data to answer the same hypothesis. The statistical analysis is then conducted on all data sets in the multiverse and all results are reported, which promotes transparency and illustrates the robustness of results against different data processing (data multiverse) or statistical (model multiverse) pipelines. Multiverse analysis differs from specification curve analysis with regard to the graphical displays (a histogram and tile plot rather than a specification curve plot).
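A toy sketch of a data multiverse with two justifiable preprocessing choices (an outlier cutoff and a log transformation), giving four specifications of the same group comparison; all data are simulated for illustration:

```python
# Run the same test under every combination of two preprocessing choices.
import itertools
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.lognormal(0.2, 0.5, 100)   # simulated group 1
y = rng.lognormal(0.0, 0.5, 100)   # simulated group 2

cutoffs = [np.inf, 3.0]            # keep all values vs. drop values >= 3
transforms = [("raw", lambda v: v), ("log", np.log)]

for cutoff, (name, fn) in itertools.product(cutoffs, transforms):
    p = stats.ttest_ind(fn(x[x < cutoff]), fn(y[y < cutoff])).pvalue
    print(f"cutoff={cutoff}, transform={name}, p={p:.3f}")
# Reporting all four results shows how robust the finding is to these choices.
```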
Name Ambiguity Problem An attribution issue arising from two related problems: authors may use multiple names or monikers to publish work, and multiple authors in a single field may share full names. This makes accurate identification of authors on names and specialisms alone a difficult task. This can be addressed through the creation and use of unique digital identifiers that act akin to digital fingerprints such as ORCID.
Named entity-based Text Anonymization for Open Science (NETANOS) A free, open-source anonymisation software that identifies and modifies named entities (e.g. persons, locations, times, dates). Its key feature is that it preserves critical context needed for secondary analyses. The aim is to assist researchers in sharing their raw text data, while adhering to research ethics.
Non-Intervention, Reproducible, and Open Systematic Reviews (NIRO-SR) A comprehensive set of tools to facilitate the development, preregistration and dissemination of systematic literature reviews for non-intervention research. Part A represents detailed guidelines for creating and preregistering a systematic review protocol in the context of non-intervention research whilst preparing for transparency. Part B represents guidelines for writing up the completed systematic review, with a focus on enhancing reproducibility.
Null Hypothesis Significance Testing (NHST) A frequentist approach to inference used to test the probability of an observed effect against the null hypothesis of no effect/relationship (Pernet, 2015). Such a conclusion is arrived at through use of an index called the p-value. Specifically, researchers will conclude an effect is present when an a priori alpha threshold, set by the researchers, is satisfied; this determines the acceptable level of uncertainty and is closely related to Type I error.
Objectivity The idea that scientific claims, methods, results and scientists themselves should remain value-free and unbiased, and thus not be affected by cultural, political, racial or religious bias as well as any personal interests (Merton, 1942).
Ontology (Artificial Intelligence) A set of axioms in a subject area that help classify and explain the nature of the entities under study and the relationships between them.
Open access “Free availability of scholarship on the public internet, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of these research articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself” (BOAI, 2002). Different methods of achieving open access (OA) are often referred to by color, including Green Open Access (when the work is openly accessible from a public repository), Gold Open Access (when the work is immediately openly accessible upon publication via a journal website), and Platinum (or Diamond) Open Access (a subset of Gold OA in which all works in the journal are immediately accessible after publication from the journal website without the authors needing to pay an article processing charge [APC]).
Open Code Making computer code (e.g., programming, analysis code, stimuli generation) freely and publicly available in order to make research methodology and analysis transparent and allow for reproducibility and collaboration. Code can be made available via open code websites, such as GitHub, the Open Science Framework, and Codeshare (to name a few), enabling others to evaluate and correct errors and re-use and modify the code for subsequent research.
Open Data Open data refers to data that are freely available and readily accessible for use by others without restriction: “Open data and content can be freely used, modified, and shared by anyone for any purpose” (https://opendefinition.org/). Open data are subject to the requirement to attribute and share alike, thus it is important to consider appropriate Open Licenses. Sensitive or time-sensitive datasets can be embargoed or shared with more selective access options to ensure data integrity is upheld.
Open Educational Resources (OER) Commons OER Commons (with OER standing for open educational resources) is a freely accessible online library allowing teachers to create, share and remix educational resources. The goal of the OER movement is to stimulate “collaborative teaching and learning” (https://www.oercommons.org/about) and provide high-quality educational resources that are accessible for everyone.
Open Educational Resources (OERs) Learning materials that can be modified and enhanced because their creators have given others permission to do so. The individuals or organizations that create OERs—which can include materials such as presentation slides, podcasts, syllabi, images, lesson plans, lecture videos, maps, worksheets, and even entire textbooks—waive some (if not all) of the copyright associated with their works, typically via legal tools like Creative Commons licenses, so others can freely access, reuse, translate, and modify them.
Open Material An author’s public sharing of materials that were used in a study, “such as survey items, stimulus materials, and experiment programs” (Kidwell et al., 2016, p. 3). Digitally-shareable materials are posted on open access repositories, which makes them publicly available and accessible. Depending on licensing, the material can be reused by other authors for their own studies. Components that are not digitally-shareable (e.g. biological materials, equipment) must be described in sufficient detail to allow reproducibility.
Open Peer Review A scholarly review mechanism providing disclosure of any combination of author and referee identities, as well as peer-review reports and editorial decision letters, to one another or publicly at any point during or after the peer review or publication process. It may also refer to the removal of restrictions on who can participate in peer review and the platforms for doing so. Note that ‘open peer review’ has been used interchangeably to refer to any, or all, of the above practices.
Open Scholarship Knowledge Base The Open Scholarship Knowledge Base (OSKB) is a collaborative initiative to share knowledge on the what, why and how of open scholarship to make this knowledge easy to find and apply. Information is curated and created by the community. The OSKB is a community under the Center for Open Science (COS).
Open Scholarship ‘Open scholarship’ is often used synonymously with ‘open science’, but extends to all disciplines, drawing in those which might not traditionally identify as science-based. It reflects the idea that knowledge of all kinds should be openly shared, transparent, rigorous, reproducible, replicable, accumulative, and inclusive (allowing for all knowledge systems). Open scholarship includes all scholarly activities that are not solely limited to research such as teaching and pedagogy.
Open Science Framework A free and open source platform for researchers to organize and share their research project and to encourage collaboration. Often used as an open repository for research code, data and materials, preprints and preregistrations, while managing a more efficient workflow. Created and maintained by the Center for Open Science.
Open Science An umbrella term reflecting the idea that scientific knowledge of all kinds, where appropriate, should be openly accessible, transparent, rigorous, reproducible, replicable, accumulative, and inclusive, all of which are considered fundamental features of the scientific endeavour. Open science consists of principles and behaviors that promote transparent, credible, reproducible, and accessible science. Open science has six major aspects: open data, open methodology, open source, open access, open peer review, and open educational resources.
Open Source software A type of computer software in which source code is released under a license that permits others to use, change, and distribute the software to anyone and for any purpose. Open source is more than openly accessible: the distribution terms of open-source software must comply with 10 specific criteria (see: https://opensource.org/osd).
Open washing Open washing, termed after “greenwashing”, refers to the act of claiming openness to secure perceptions of rigor or prestige associated with open practices. It has been used to characterise the marketing strategy of software companies that have the appearance of open-source and open-licensing, while engaging in proprietary practices. Open washing is a growing concern for those adopting open science practices as their actions are undermined by misleading uses of the practices, and actions designed to facilitate progressive developments are reduced to ‘ticking the box’ without clear quality control.
OpenNeuro A free platform where researchers can freely and openly share, browse, download and re-use brain imaging data (e.g., MRI, MEG, EEG, iEEG, ECoG, ASL, and PET data).
Optional Stopping The practice of (repeatedly) analyzing data during the data collection process and deciding to stop data collection if a statistical criterion (e.g. a p-value or Bayes factor) reaches a specified threshold. If appropriate methodological precautions are taken to control the Type I error rate, this can be an efficient analysis procedure (e.g. Lakens, 2014). However, without transparent reporting or appropriate error control, the Type I error rate can increase greatly, and optional stopping can then be considered a Questionable Research Practice (QRP) or a form of p-hacking.
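A simulation sketch of the uncorrected case: with a true null hypothesis, peeking at the data every 10 observations and stopping at the first p < .05 pushes the false-positive rate well above the nominal 5% (the simulation settings are illustrative choices):

```python
# Simulate repeated interim testing without any error control.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, n_max, step = 2000, 100, 10
false_positives = 0

for _ in range(n_sims):
    data = rng.normal(0, 1, n_max)          # the null hypothesis is true
    for n in range(step, n_max + 1, step):  # peek after every 10 observations
        if stats.ttest_1samp(data[:n], 0).pvalue < 0.05:
            false_positives += 1            # stop at the first 'significant' peek
            break

print(f"empirical Type I error rate: {false_positives / n_sims:.3f}")  # >> .05
```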
ORCID (Open Researcher and Contributor ID) An organisation that provides a registry of persistent unique identifiers (ORCID iDs) for researchers and scholars, allowing these users to link their digital research documents and other contributions to their ORCID record. This avoids the name ambiguity problem in scholarly communication. ORCID iDs provide unique, persistent identifiers connecting researchers and their scholarly work. It is free to register for an ORCID iD at https://orcid.org/register.
Overlay Journal Open access electronic journals that collect and curate articles available from other sources (typically preprint servers, such as arXiv). Article curation may include (post-publication) peer review or editorial selection. Overlay journals do not publish novel material; rather, they organize and collate articles available in existing repositories.
P-curve P-curve is a tool for identifying potential publication bias that makes use of the distribution of significant p-values in a series of independent findings. The deviation from the expected right-skewed distribution can be used to assess the existence and degree of publication bias: if the curve is right-skewed, there are many low, highly significant p-values, reflecting an underlying true effect. If the curve is left-skewed, there are many barely significant results just under the 0.05 threshold. This suggests that the studies lack evidential value and may be underpinned by questionable research practices (QRPs; e.g., p-hacking). When no true effect is present (a true null hypothesis) and p-values are reported without bias, the p-curve should be a flat, horizontal line, reflecting the uniform distribution of p-values under the null.
P-hacking Exploiting techniques that may artificially increase the likelihood of obtaining a statistically significant result by meeting the standard statistical significance criterion (typically α = .05). For example, performing multiple analyses and reporting only those at p < .05, selectively removing data until p < .05, selecting variables for use in analyses based on whether those parameters are statistically significant.
p-value A statistic used to evaluate the outcome of a hypothesis test in Null Hypothesis Significance Testing (NHST). It refers to the probability of observing an effect, or more extreme effect, assuming the null hypothesis is true (Lakens, 2021b). The American Statistical Association’s statement on p-values (Wasserstein & Lazar, 2016) notes that p-values are not an indicator of the truth of the null hypothesis and instead defines p-values in this way: “Informally, a p-value is the probability under a specified statistical model that a statistical summary of the data (e.g., the sample mean difference between two compared groups) would be equal to or more extreme than its observed value” (p. 131).
Papermill An organization that is engaged in scientific misconduct wherein multiple papers are produced by falsifying or fabricating data, e.g. by editing figures or numerical data or plagiarizing written text. Papermills are “alleged to offer products ranging from research data through to ghostwritten fraudulent or fabricated manuscripts and submission services” (Byrne & Christopher, 2020, p. 583). A papermill relies on the fast production and dissemination of multiple allegedly new papers. These are often not detected in the scientific publishing process and are therefore either never found, or retracted if discovered (e.g. through plagiarism software).
Paradata Data that are captured about the characteristics and context of primary data collected from an individual, distinct from metadata. Paradata can be used to investigate a respondent’s interaction with a survey or an experiment at a micro-level. They are most easily collected during computer-mediated surveys, but are not limited to them. Examples include response times to survey questions, repeated patterns of responses such as choosing the same answer for all questions, contextual characteristics of the participant such as injuries that prevent good performance on tasks, and the number of premature responses to stimuli in an experiment. Paradata have been used for the investigation and adjustment of measurement and sampling errors.
PARKing PARKing (preregistering after results are known) is the practice whereby researchers complete an experiment (possibly with unlimited re-experimentation) before preregistering it. This practice invalidates the purpose of preregistration, and is a QRP (or even scientific misconduct) that seeks only the credibility of having been preregistered.
Participatory Research Participatory research refers to incorporating the views of people from relevant communities in the entire research process to achieve shared goals between researchers and the communities. This approach takes a collaborative stance that seeks to reduce the power imbalance between the researcher and those researched through a “systematic cocreation of new knowledge” (Andersson, 2018).
Patient and Public Involvement (PPI) Active research collaboration with the population of interest, as opposed to conducting research “about” them. Researchers can incorporate the lived experience and expertise of patients and the public at all stages of the research process. For example, patients can help to develop a set of research questions, review the suitability of a study design, approve plain English summaries for grant/ethics applications and dissemination, collect and analyse data, and assist with writing up a project for publication. This is becoming highly recommended and even required by funders (Boivin et al., 2018).
Paywall A technological barrier that permits access to information only to individuals who have paid – either personally, or via an organisation – a designated fee or subscription.
PCI (Peer Community In) PCI is a non-profit organisation that creates communities of researchers who review and recommend unpublished preprints based upon high-quality peer review from at least two researchers in their field. These preprints are then assigned a DOI, similarly to a journal article. PCI was developed to establish a free, transparent and public scientific publication system based on the review and recommendation of preprints.
PCI Registered Reports An initiative launched in 2021 dedicated to receiving, reviewing, and recommending Registered Reports (RRs) across the full spectrum of science, technology, engineering, and mathematics (STEM), medicine, social sciences, and humanities. Peer Community In (PCI) RRs are overseen by a ‘Recommender’ (equivalent to an Action Editor) and reviewed by at least two experts in the relevant field. It provides free and transparent pre-study (Stage 1) and post-study (Stage 2) reviews across research fields. A network of PCI RR-friendly journals endorse the PCI RR review criteria and commit to accepting, without further peer review, RRs that receive a positive final recommendation from PCI RR.
Plan S Plan S is an initiative, launched in September 2018 by cOAlition S, a consortium of research funding organisations, which aims to accelerate the transition to full and immediate Open Access. Participating funders require recipients of research grants to publish their research in compliant Open Access journals or platforms, or make their work openly and immediately available in an Open Access repository, from 2021 onwards. cOAlition S funders have committed to not financially supporting ‘hybrid’ Open Access publication fees in subscription venues. However, authors can comply with Plan S by publishing Open Access in a subscription journal under a “transformative arrangement”, as further described in the implementation guidance. The “S” in Plan S stands for shock.
Positionality Map A reflexive tool for practicing explicit positionality in critical qualitative research. The map is to be used “as a flexible starting point to guide researchers to reflect and be reflexive about their social location. The map involves three tiers: the identification of social identities (Tier 1), how these positions impact our life (Tier 2), and details that may be tied to the particularities of our social identity (Tier 3).” (Jacobson and Mustafa 2019, p. 1). The aim of the map is “for researchers to be able to better identify and understand their social locations and how they may pose challenges and aspects of ease within the qualitative research process.”
Positionality The contextualization of both the research environment and the researcher, to define the boundaries within which the research was produced (Jaraf, 2018). Positionality is typically centred and celebrated in qualitative research, but there have been recent calls for it to be used in quantitative research as well. Positionality statements, whereby a researcher outlines their background and ‘position’ within and towards the research, have been suggested as one method of recognising and centring researcher bias.
Post Hoc Post hoc is borrowed from Latin, meaning “after this”. In statistics, post hoc (or post hoc analysis) refers to the testing of hypotheses not specified prior to data analysis. In frequentist statistics, the procedure differs based on whether the analysis was planned or post hoc, for example by applying more stringent error control to the latter. In contrast, Bayesian and likelihood approaches do not differ as a function of when the hypothesis was specified.
Post Publication Peer Review Peer review that takes place after research has been published. It is typically posted on a dedicated platform (e.g., PubPeer). It is distinct from the traditional commentary which is published in the same journal and which is itself usually peer reviewed.
Posterior distribution A way to summarize one’s updated knowledge in Bayesian inference, balancing prior knowledge with observed data. In statistical terms, posterior distributions are proportional to the product of the likelihood function and the prior. A posterior probability distribution captures (un)certainty about a given parameter value.
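A conjugate Beta-Binomial sketch with invented numbers: a Beta(2, 2) prior over a proportion is updated with 14 successes in 20 trials, yielding a Beta posterior in closed form:

```python
# Posterior is proportional to likelihood x prior; for the Beta-Binomial
# conjugate pair this is Beta(a + k, b + n - k) in closed form.
from scipy import stats

a, b = 2, 2          # prior Beta(2, 2): mild preference for values near 0.5
k, n = 14, 20        # observed data: 14 successes in 20 trials

posterior = stats.beta(a + k, b + n - k)            # Beta(16, 8)
print(f"posterior mean = {posterior.mean():.3f}")   # about 0.667
print("95% credible interval:", posterior.interval(0.95))
```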
Predatory Publishing Predatory (sometimes “vanity”) publishing describes a range of business practices in which publishers seek to profit, primarily by collecting article processing charges (APCs), from publishing scientific works without necessarily providing legitimate quality checks (e.g., peer review) or editorial services. In its most extreme form, predatory publishers will publish any work, so long as the charges are paid. Other less extreme strategies, such as sending out large numbers of unsolicited requests for editing or publishing in fee-driven special issues, have also been characterised as predatory (Crosetto, 2021).
PREPARE Guidelines The PREPARE guidelines and checklist (Planning Research and Experimental Procedures on Animals: Recommendations for Excellence) aim to help the planning of animal research, support adherence to the 3Rs (Replacement, Reduction or Refinement), and facilitate the reproducibility of animal research.
Preprint A publicly available version of any type of scientific manuscript/research output preceding formal publication, considered a form of Green Open Access. Preprints are usually hosted on a repository (e.g. arXiv) that facilitates dissemination by sharing research results more quickly than through traditional publication. Preprint repositories typically provide persistent identifiers (e.g. DOIs) to preprints. Preprints can be published at any point during the research cycle, but are most commonly published upon submission (i.e., before peer-review). Accepted and peer-reviewed versions of articles are also often uploaded to preprint servers, and are called postprints.
Preregistration Pledge A campaign from the Free Our Knowledge project, described as a “collective action in support of open and reproducible research practices”, which asks researchers to commit to preregistering at least one study in the next two years (https://freeourknowledge.org/about/). The project is a grassroots movement initiated by early career researchers (ECRs).
Preregistration The practice of publishing the plan for a study, including research questions/hypotheses, research design, data analysis before the data has been collected or examined. It is also possible to preregister secondary data analyses (Merten & Krypotos, 2019). A preregistration document is time-stamped and typically registered with an independent party (e.g., a repository) so that it can be publicly shared with others (possibly after an embargo period). Preregistration provides a transparent documentation of what was planned at a certain time point, and allows third parties to assess what changes may have occurred afterwards. The more detailed a preregistration is, the better third parties can assess these changes and with that the validity of the performed analyses. Preregistration aims to clearly distinguish confirmatory from exploratory research.
Prior distribution Beliefs held by researchers about the parameters in a statistical model before further evidence is taken into account. A ‘prior’ is expressed as a probability distribution and can be determined in a number of ways (e.g., previous research, subjective assessment, principles such as maximising entropy given constraints), and is typically combined with the likelihood function using Bayes’ theorem to obtain a posterior distribution.
PRO (peer review openness) initiative The agreement made by several academics that they will not provide a peer review of a manuscript unless certain conditions are met. Specifically, the manuscript authors should ensure the data and materials will be made publicly available (or give a justification as to why they are not freely available or shared), provide documentation detailing how to interpret and run any files or code, and detail where these files can be located via the manuscript itself.
Pseudonymisation A technique that involves replacing or removing any information that could identify research subjects, whilst retaining the ability to re-identify them through the combination of a code number and the stored identifiers. The process comprises the following steps: removal of all identifiers from the research dataset; attribution of a specific identifier (pseudonym) to each participant, used to label each research record; and maintenance of a cipher that links the code number to the participant in a document kept physically separate from the dataset. Pseudonymisation is typically a minimum requirement of ethics committees when conducting research, especially on human participants or involving confidential information, in order to uphold data privacy.
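A minimal sketch of the steps described above, with invented records: direct identifiers are stripped from the dataset and replaced with codes, while the linking cipher is kept separately:

```python
# Replace names with pseudonyms and keep the linking cipher apart.
records = [{"name": "Ada Example", "score": 41},
           {"name": "Ben Example", "score": 37}]

cipher = {}                                   # to be stored separately
for i, rec in enumerate(records, start=1):
    code = f"P{i:03d}"
    cipher[code] = rec.pop("name")            # remove the direct identifier
    rec["id"] = code

print(records)  # [{'score': 41, 'id': 'P001'}, {'score': 37, 'id': 'P002'}]
# 'cipher' maps codes back to participants and must be kept physically separate.
```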
Pseudoreplication A lack of statistical independence in the data that artificially inflates the number of samples (i.e. replicates), for instance when more than one data point is collected from the same experimental unit (e.g. a participant or a crop). Numerous methods can overcome this, such as averaging across replicates (e.g., taking the mean RT for a participant) or implementing mixed effects models with a random effects structure that accounts for the pseudoreplication (e.g., specifying each individual RT as belonging to the same subject). Note that the former option is associated with a loss of information and statistical power.
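A sketch of the averaging fix mentioned above, assuming pandas is available and using an invented trial-level data frame:

```python
# Collapse repeated trials to one mean per participant before analysis.
import pandas as pd

trials = pd.DataFrame({
    "participant": ["p1", "p1", "p1", "p2", "p2", "p2"],
    "rt":          [512, 498, 530, 610, 595, 640],
})

# One value per experimental unit restores statistical independence,
# at the cost of discarding trial-level information and power.
print(trials.groupby("participant")["rt"].mean())
```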
Psychometric meta-analysis Psychometric meta-analyses aim to correct for attenuation of the effect sizes of interest due to measurement error and other artifacts by using procedures based on psychometric principles, e.g. reliability of the measures. These procedures should be implemented before using the synthesised effect sizes in correlational or experimental meta-analysis, as making these corrections tends to lead to larger and less variable effect sizes.
Public Trust in Science Trust in the knowledge, guidelines and recommendations that have been produced or provided by scientists for the benefit of civil society (Hendriks et al., 2016). These may also refer to trust in scientifically based recommendations on public health (e.g., universal health-care, stem cell research, federal funds for women’s reproductive rights, preventive measures against contagious diseases, and vaccination), climate change, economic policies (e.g., welfare, inequality- and poverty-control) and their intersections. The trust a member of the public has in science has been shown to be influenced by a vast number of factors such as age (Anderson et al., 2012), gender (Von Roten, 2004), rejection of scientific norms (Lewandowsky & Oberauer, 2021), political ideology (Azevedo & Jost, 2021; Brewer & Ley, 2012; Leiserowitz et al., 2010), right-wing authoritarianism and social dominance (Kerr & Wilson, 2021), education (Bak, 2001; Hayes & Tariq, 2000), income (Anderson et al., 2012), science knowledge (Evans & Durant, 1995; Nisbet et al., 2002), social media use (Huber et al., 2019), and religiosity (Azevedo, 2021; Brewer & Ley, 2013; Liu & Priest, 2009).
Publication bias (File Drawer Problem) The failure to publish results based on the “direction or strength of the study findings” (Dickersin & Min, 1993, p. 135). The bias arises when the evaluation of a study’s publishability disproportionately hinges on the outcome of the study, often with the inclination that novel and significant results are more worth publishing than replications and null results. The bias typically materialises as a disproportionate number of significant findings and inflated effect sizes. This process leads to a published scientific literature that is not representative of the full extent of all research, and that specifically underrepresents null findings. Such findings, in turn, land in the so-called “file drawer”, where they are never published and have no findable documentation.
Publish or Perish An aphorism describing the pressure researchers feel to publish academic manuscripts, often in high-prestige academic journals, in order to have a successful academic career. This pressure to publish a high quantity of manuscripts can come at the expense of their quality. The institutional pressure is exacerbated by hiring procedures and funding decisions that focus strongly on the number and impact of publications.
PubPeer A website that allows users to post anonymous peer reviews of research that has been published (i.e. post-publication peer review).
Python An interpreted general-purpose programming language, intended to be user-friendly and easily readable, originally created by Guido van Rossum in 1991. Python has an extensive library of additional features with accessible documentation for tasks ranging from data analysis to experiment creation. It is a popular programming language in data science, machine learning and web development. Similar to R Markdown, Python can be presented in an interactive online format called a Jupyter notebook, combining code, data, and text.
Qualitative research Research which uses non-numerical data, such as textual responses, images, videos or other artefacts, to explore in-depth concepts, theories, or experiences. There are a wide range of qualitative approaches, from micro-detailed exploration of language or focusing on personal subjective experiences, to those which explore macro-level social experiences and opinions.
Quantitative research Quantitative research encompasses a diverse range of methods to systematically investigate a range of phenomena via the use of numerical data which can be analysed with statistics.
Questionable Measurement Practices (QMP) Decisions researchers make that raise doubts about the validity of measures used in a study, and ultimately the study’s final conclusions (Flake & Fried, 2020). Issues arise from a lack of transparency in reporting measurement practices, a failure to address construct validity, negligence, ignorance, or deliberate misrepresentation of information.
Questionable Research Practices or Questionable Reporting Practices (QRPs) A range of activities that intentionally or unintentionally distort data in favour of a researcher’s own hypotheses – or omissions in reporting such practices – including: selective inclusion of data, hypothesising after the results are known (HARKing), and p-hacking. Popularized by John et al. (2012).
R R is a free, open-source programming language and software environment that can be used to conduct statistical analyses and plot data. R was created by Ross Ihaka and Robert Gentleman at the University of Auckland. R enables authors to share reproducible analysis scripts, which increases the transparency of a study. Often, R is used in conjunction with an integrated development environment (IDE) that simplifies working with the language, such as RStudio, Visual Studio Code, or Tinn-R.
Red Teams An approach that integrates external criticism by colleagues and peers into the research process. Red teams are based on the idea that research that is more critically and widely evaluated is more reliable. The term originates from a military practice: One group (the red team) attacks something, and another group (the blue team) defends it. The practice has been applied to open science, by giving a red team (designated critical individuals) financial incentives to find errors in or identify improvements to the materials or content of a research project (in the materials, code, writing, etc.; Coles et al., 2020).
Reflexivity The process of reflexivity refers to critically considering the knowledge that we produce through research, how it is produced, and our own role as researchers in producing this knowledge. There are different forms of reflexivity: personal reflexivity, whereby researchers consider the impact of their own personal experiences, and functional reflexivity, whereby researchers consider the way in which research tools and methods may have impacted knowledge production. Reflexivity aims to bring attention to underlying factors which may impact the research process, including the development of research questions, data collection, and analysis.
Registered Report A scientific publishing format that includes an initial round of peer review of the background and methods (study design, measurement, and analysis plan); sufficiently high-quality manuscripts receive in-principle acceptance (IPA) at this stage. Typically, this stage 1 review occurs before data collection; however, secondary data analyses are also possible in this publishing format. Following data analysis and the write-up of the results and discussion sections, the stage 2 review assesses whether the authors sufficiently followed their study plan and reported deviations from it (and remains indifferent to the results). This shifts the focus of the review to the study’s proposed research question and methodology and away from the perceived interest in the study’s results.
Registry of Research Data Repositories A global registry of research data repositories from different academic disciplines. It includes repositories that enable permanent storage of, description via metadata and access to, data sets by researchers, funding bodies, publishers, and scholarly institutions.
Reliability The extent to which repeated measurements lead to the same results. In psychometrics, reliability refers to the extent to which respondents have similar scores when they take a questionnaire on multiple occasions. Notably, reliability does not imply validity. Furthermore, additional types of reliability besides internal consistency exist, including test-retest reliability, parallel forms reliability and interrater reliability.
Repeatability Synonymous with test-retest reliability. It refers to the agreement between the results of successive measurements of the same measure. Repeatability requires the same experimental tools, the same observer, the same measuring instrument administered under the same conditions, the same location, repetition over a short period of time, and the same objectives (Joint Committee for Guidelines in Metrology, 2008).
Replicability An umbrella term, used differently across fields, covering concepts of: direct and conceptual replication, computational reproducibility/replicability, generalizability analysis and robustness analyses. Some of the definitions used previously include: a different team arriving at the same results using the original author’s artifacts (Barba 2018); a study arriving at the same conclusion after collecting new data (Claerbout and Karrenbach, 1992); as well as studies for which any outcome would be considered diagnostic evidence about a claim from prior research (Nosek & Errington, 2020).
Replication Markets A replication market is an environment where users bet on the replicability of certain effects. Forecasters are incentivized to make accurate predictions and the top successful forecasters receive monetary compensation or contributorship for their bets. The rationale behind a replication market is that it leverages the collective wisdom of the scientific community to predict which effect will most likely replicate, thus encouraging researchers to channel their limited resources to replicating these effects.
RepliCATs project Collaborative Assessment for Trustworthy Science. The repliCATS project’s aim is to crowdsource predictions about the reliability and replicability of published research in eight social science fields: business research, criminology, economics, education, political science, psychology, public administration, and sociology.
Reporting Guideline A reporting guideline is a “checklist, flow diagram, or structured text to guide authors in reporting a specific type of research, developed using explicit methodology.” (EQUATOR Network, n.d.). Reporting guidelines provide the minimum guidance required to ensure that research findings can be appropriately interpreted, appraised, synthesized and replicated. Their use often differs per scientific journal or publisher.
Repository An online archive for the storage of digital objects including research outputs, manuscripts, analysis code and/or data. Examples include preprint servers such as bioRxiv, MetaArXiv, PsyArXiv, institutional research repositories, as well as data repositories that collect and store datasets including zenodo.org, PsychData, and code repositories such as Github, or more general repositories for all kinds of research data, such as the Open Science Framework (OSF). Digital objects stored in repositories are typically described through metadata which enables discovery across different storage locations.
ReproducibiliTea A grassroots initiative that helps researchers create local journal clubs at their universities to discuss a range of topics relating to open research and scholarship. Each meeting usually centres around a specific paper that discusses, for example, reproducibility, research practice, research quality, social justice and inclusion, and ideas for improving science.
Reproducibility crisis (aka Replicability or replication crisis) The finding, and related shift in academic culture and thinking, that a large proportion of scientific studies published across disciplines do not replicate (e.g. Open Science Collaboration, 2015). This is considered to be due to a lack of quality and integrity of research and publication practices, such as publication bias, QRPs and a lack of transparency, leading to an inflated rate of false positive results. Others have described this process as a ‘Credibility revolution’ towards improving these practices.
Reproducibility Network A reproducibility network is a consortium of open research working groups, often peer-led. The groups operate on a wheel-and-spoke model across a particular country, in which the network connects local cross-disciplinary researchers, groups, and institutions with a central steering group, who also connect with external stakeholders in the research ecosystem. The goals of reproducibility networks include; advocating for greater awareness, promoting training activities, and disseminating best-practices at grassroots, institutional, and research ecosystem levels. Such networks exist in the UK, Germany, Switzerland, Slovakia, and Australia (as of March 2021).
Reproducibility A minimum standard on a spectrum of activities (“reproducibility spectrum”) for assessing the value or accuracy of scientific claims based on the original methods, data, and code. For instance, where the original researcher’s data and computer codes are used to regenerate the results (Barba, 2018), often referred to as computational reproducibility. Reproducibility does not guarantee the quality, correctness, or validity of the published results (Peng, 2011). In some fields, this meaning is, instead, associated with the term “replicability” or ‘repeatability’.
Research Contribution Metric (p) A type of semantometric measure assessing the similarity of publications connected in a citation network. This method uses a simple formula to assess authors’ contributions: the contribution of publication p can be estimated based on the semantic distance from the publications cited by p to the publications citing p.
Research Cycle Describes the circular process of conducting scientific research, with “researchers working at various stages of inquiry, from more tentative and exploratory investigations to the testing of more definitive and well-supported claims” (Lieberman, 2020, p. 42). The cycle includes literature research and hypothesis generation, data collection and analysis, as well as dissemination of results (e.g. through publication in peer-reviewed journals), which again informs theory and new hypotheses/research.
Research Data Management Research Data Management (RDM) is a broad concept that includes processes undertaken to create organized, documented, accessible, and reusable quality research data. Adequate research data management provides many benefits including, but not limited to, reduced likelihood of data loss, greater visibility and collaborations due to data sharing, demonstration of research integrity and accountability.
Research integrity Research integrity is defined by a set of good research practices based on fundamental principles: honesty, reliability, respect and accountability (ALLEA, 2017). Good research practices —which are based on fundamental principles of research integrity and should guide researchers in their work as well as in their engagement with the practical, ethical and intellectual challenges inherent in research— refer to areas such as: research environment (e.g., research institutions and organisations promote awareness and ensure a prevailing culture of research integrity), training, supervision and mentoring (e.g., Research institutions and organisations develop appropriate and adequate training in ethics and research integrity to ensure that all concerned are made aware of the relevant codes and regulations), research procedures (e.g., researchers report their results in a way that is compatible with the standards of the discipline and, where applicable, can be verified and reproduced), safeguards (e.g., researchers have due regard for the health, safety and welfare of the community, of collaborators and others connected with their research), data practices and management (e.g., researchers, research institutions and organisations provide transparency about how to access or make use of their data and research materials), collaborative working, publication and dissemination (e.g., authors and publishers consider negative results to be as valid as positive findings for publication and dissemination), reviewing, evaluating and editing (e.g., researchers review and evaluate submissions for publication, funding, appointment, promotion or reward in a transparent and justifiable manner).
Research Protocol A detailed document prepared before conducting a study, often written as part of ethics and funding applications. The protocol should include information relating to the background, rationale and aims of the study, as well as hypotheses which reflect the researchers’ expectations. The protocol should also provide a “recipe” for conducting the study, including methodological details and clear analysis plans. Best practice guidelines for creating a study protocol should be used for specific methodologies and fields. It is possible to publicly share research protocols to attract new collaborators or facilitate efficient collaboration across labs (e.g. https://www.protocols.io/). In medical and educational fields, protocols are often a separate article type suitable for publication in journals. Where protocol sharing or publication is not common practice, researchers can choose preregistration.
Research workflow The process of conducting research from conceptualisation to dissemination. A typical workflow may look like the following: Starting with conceptualisation to identify a research question and design a study. After study design, researchers need to gain ethical approval (if necessary) and may decide to preregister the final version. Researchers then collect and analyse their data. Finally, the process ends with dissemination; moving between pre-print and post-print stages as the manuscript is submitted to a journal.
Researcher degrees of freedom Refers to the flexibility often inherent in the scientific process, from hypothesis generation, designing and conducting a research study, to processing the data and analyzing, interpreting and reporting results. Due to a lack of precisely defined theories and/or empirical evidence, multiple decisions are often equally justifiable. The term is sometimes used to refer to the opportunistic (ab)use of this flexibility to achieve desired results —e.g., when including or excluding certain data— although, technically, the term is not inherently value-laden.
Responsible Research and Innovation An approach that considers societal implications and expectations, relating to research and innovation, with the aim to foster inclusivity and sustainability. It accounts for the fact that scientific endeavours are not isolated from their wider effects and that research is motivated by factors beyond the pursuit of knowledge. As such, many parties are important in fostering responsible research, including funding bodies, research teams, stakeholders, activists, and members of the public.
Reverse p-hacking Exploiting researcher degrees of freedom during statistical analysis in order to increase the likelihood of accepting the null hypothesis (for instance, p > .05).
RIOT Science Club The RIOT Science Club is a multi-site seminar series that raises awareness and provides training in Reproducible, Interpretable, Open & Transparent science practices. It provides regular talks, workshops and conferences, all of which are openly available and rewatchable on the respective location’s websites and Youtube.
Robustness (analyses) The persistence of support for a hypothesis under perturbations of the methodological/analytical pipeline. In other words, applying different methods or analysis pipelines to examine whether the same conclusion is supported under different analytical conditions.
Salami slicing A questionable research/reporting practice strategy, often done post hoc, to increase the number of publishable manuscripts by ‘slicing’ up the data from a single study – one example of a method of ‘gaming the system’ of academic incentives. For instance, this may involve publishing multiple studies based on a single dataset, or publishing multiple studies from different data collection sites without transparently stating where the data originally derives from. Such practices distort the literature, and particularly meta-analyses, because it is unclear that the findings were obtained from the same dataset, thereby concealing the dependencies across the separately published papers.
Scooping The act of reporting or publishing a novel finding prior to another researcher/team. Survey-based research indicates that fear of being scooped is an important fear-related barrier for data sharing in psychology, and agent-based models suggest that competition for priority harms scientific reliability (Tiokhin et al. 2021).
Semantometrics A class of metrics for evaluating research using full publication text to measure semantic similarity of publications and highlighting an article’s contribution to the progress of scholarly discussion. It is an extension of tools such as bibliometrics, webometrics, and altmetrics.
Sensitive research Research that poses a threat to those who are or have been involved in it, including the researchers, the participants, and the wider society. This threat can be physical danger (e.g. suicide) or a negative emotional response (e.g. depression) in those who are involved in the research process. For instance, in research conducted on victims of suicide, the researcher might be emotionally traumatised by the descriptions of the suicidal behaviours; likewise, the communication might make the victims re-experience the traumatic memories, leading to negative psychological responses.
Sequence-determines-credit approach (SDC) An authorship system that assigns authorship order based on the contribution of each author. The names of the authors are listed according to their contribution in descending order with the most contributing author first and the least contributing author last.
Sherpa Romeo An online resource that collects and presents open access policies from publishers, from across the world, providing summaries of individual journal’s copyright and open access archiving policies.
Single-blind peer review Evaluation of research products by qualified experts where the reviewer(s) knows the identity of the author(s), but the reviewer(s) remains anonymous to the author(s).
Slow science Adopting Open Scholarship practices leads to a longer research process overall, with more focus on transparency, reproducibility, replicability and quality over the quantity of outputs. Slow Science opposes publish-or-perish culture and describes an academic system that allows the time and resources to produce fewer, higher-quality and transparent outputs, for instance prioritising researcher time towards collecting more data, reading the literature, thinking about how findings fit the literature, and documenting and sharing research materials, instead of running additional studies.
Social class Social class is usually measured using both objective and subjective measurements, as recommended by the American Psychological Association (American Psychological Association, Task Force on Socioeconomic Status, 2007). Unlike the conventional concept, which considers only one factor, either education or income (i.e., economic variables), an individual’s social class is considered to be a combination of their education, income, occupational prestige, subjective social status, and self-identified social class. Social class is partly a cultural variable; it is relatively stable and likely to change only slowly over the years. Social class can have important implications for academic outcomes. An individual may have a high socio-economic status yet identify as working class. Working class students tend to have different life circumstances and often more restrictive commitments than middle-class students, which make their integration with other students more difficult (Rubin, 2021). The lack of time and money is obstructive to their social experience at university. Working class students are more likely to work to support themselves, resulting in less time for academic activities and for socializing with other students, as well as less money to purchase items linked to social experiences (e.g. food).
Social integration Social integration is a multi-dimensional construct. In an academic context, social integration is related to the quantity and quality of social interactions with staff and students, as well as the sense of connection and belonging to the university and the people within the institution. More specifically, social support, trust, and connectedness are all variables that contribute to social integration. Social integration has important implications for academic outcomes and mental wellbeing (Evans & Rubin, 2021). Working class students are less likely to integrate with other students, since they have differing social and economic backgrounds and less disposable income, and are thus not able to access as many educational and fiscal opportunities as others. In turn, this can lead to poor mental health and feelings of ostracism (Rubin, 2021).
Society for Open, Reliable, and Transparent Ecology and Evolutionary biology (SORTEE) SORTEE (https://www.sortee.org/) is an international society with the aim of improving the transparency and reliability of research results in the fields of ecology, evolution, and related disciplines through cultural and institutional changes. SORTEE was launched in December 2020 and is open to anyone interested in improving research in these disciplines, regardless of experience. The society is international in scope, membership, and objectives. As of May 2021, SORTEE comprised over 600 members.
Society for the Improvement of Psychological Science (SIPS) A membership society founded to further promote improved methods and practices in the psychological research field. The society aims to complete its mission statement by enhancing the training of psychological researchers; by promoting research cultures that are more conducive to better quality research; by quantifying and empirically assessing the impact of such reforms; and by leading outreach events within and outside psychology to better the current state of research norms.
Specification Curve Analysis An analytic approach that consists of identifying, calculating, visualising and interpreting results (through inferential statistics) for all reasonable specifications for a particular research question (see Simonsohn et al. 2015). Specification curve analysis helps make transparent the influence of presumably arbitrary decisions during the scientific progress (e.g., experimental design, construct operationalization, statistical models or several of these) made by a researcher by comprehensively reporting all non-redundant, sensible tests of the research question. Voracek et al. (2019) suggest that SCA differs from multiverse analysis with regards to the graphical displays (a specification curve plot rather than a histogram and tile plot) and the use of inferential statistics to interpret findings.
Statistical Assumptions Analytical approaches and models assume certain characteristics of one’s data (e.g., statistical independence, random sampling, normality, equal variance). Before running an analysis, these assumptions should be checked, since their violation can change the results and conclusions of a study. Good practice in open and reproducible science is to report assumption testing: which assumptions were verified, the results of such checks, and any corrections applied.
Statistical power Statistical power is the long-run probability that a statistical test correctly rejects the null hypothesis if the alternative hypothesis is true. It ranges from 0 to 1, but is often expressed as a percentage. Power can be estimated using the significance criterion (alpha), effect size, and sample size used for a specific analysis technique. There are two main applications of statistical power: a priori power analysis, where the researcher asks “given an effect size, how many participants would I need for X% power?”, and sensitivity power analysis, which asks “given a known sample size, what effect size could I detect with X% power?”. Both are sketched below.
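A minimal sketch of both questions for an independent-samples t-test, assuming the Python statsmodels package is available:

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# A priori: sample size per group for 80% power to detect d = 0.5 at alpha = .05
n_per_group = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
print(f"A priori: n per group = {n_per_group:.1f}")       # roughly 64

# Sensitivity: smallest effect detectable with 80% power given n = 50 per group
detectable_d = analysis.solve_power(nobs1=50, power=0.8, alpha=0.05)
print(f"Sensitivity: detectable d = {detectable_d:.2f}")  # roughly 0.57
```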
Statistical significance A property of a result using Null Hypothesis Significance Testing (NHST) that, given a significance level, is deemed unlikely to have occurred given the null hypothesis. Tenny and Abdelgawad (2017) defined it as “a measure of the probability of obtaining your data or more extreme data assuming the null hypothesis is true, compared to a pre-selected acceptable level of uncertainty regarding the true answer” (p. 1). Conventions for determining the threshold vary between applications and disciplines but ultimately depend on the considerations of the researcher about an appropriate error margin. The American Statistical Association’s statement (Wasserstein & Lazar, 2016) notes that “Researchers often wish to turn a p-value into a statement about the truth of a null hypothesis, or about the probability that random chance produced the observed data. The p-value is neither. It is a statement about data in relation to a specified hypothetical explanation, and is not a statement about the explanation itself” (p. 131).
Statistical validity The extent to which conclusions from a statistical test are accurate and reflective of the true effect found in nature. In other words, whether a relationship exists between two variables and can be accurately detected with the conducted analyses. Threats to statistical validity include low power, violations of assumptions, and unreliable measures, all of which affect the reliability and generality of the conclusions.
STRANGE The STRANGE “framework” is a proposal and series of questions to help animal behaviour researchers consider sampling biases when planning, performing and interpreting research with animals. STRANGE is an acronym highlighting several possible sources of sampling bias in animal research, such as the animals’ Social background; Trappability and self-selection; Rearing history; Acclimation and habituation; Natural changes in responsiveness; Genetic make-up, and Experience.
StudySwap A free online platform through which researchers post brief descriptions of research projects or resources that are available for use (“haves”) or that they require and another researcher may have (“needs”). StudySwap is a crowdsourcing approach to research which can ensure that fewer research resources go unused and more researchers have access to the resources they need.
Systematic Review A form of literature review and evidence synthesis. A systematic review will usually include a thorough, repeatable (reproducible) search strategy including key terms and databases in order to find relevant literature on a given topic or research question. Systematic reviewers follow a process of screening the papers found through their search, until they have filtered down to a set of papers that fit their predefined inclusion criteria. These papers can then be synthesised in a written review which may optionally include statistical synthesis in the form of a meta-analysis as well. A systematic review should follow a standard set of guidelines to ensure that bias is kept to a minimum for example PRISMA (Moher et al., 2009; Page et al., 2021), Cochrane Systematic Reviews (Higgins et al., 2019), or NIRO-SR (Topor et al., 2021).
Tenzing tenzing is an online webapp and R package that helps researchers to track and report the contributions of each team member using the CRediT taxonomy in an efficient way. Team members of a research project can indicate their contributions to each CRediT role using an online spreadsheet template, and provide any additional authors’ information (e.g., name, affiliation, order in publication, email address, and ORCID iD). Upon writing the manuscript, tenzing can automatically create a list of contributors belonging to each CRediT role to be included in the contributions section and create the manuscript’s title page.
The Troubling Trio Described as a combination of low statistical power, a surprising result, and a p-value only slightly lower than .05.
Theory building The process of creating and developing a statement of concepts and their interrelationships to show how and/or why a phenomenon occurs. Theory building leads to theory testing.
Theory A theory is a unifying explanation or description of a process or phenomenon, which is amenable to repeated testing and verifiable through scientific investigation, using various experiments led by several independent researchers. Theories may be rejected or deemed an unsatisfactory explanation of a phenomenon after rigorous testing of a new hypothesis that explains the phenomena better or seems to contradict them but is more generalisable to a wider array of findings.
Transparency Checklist The transparency checklist is a consensus-based, comprehensive checklist that contains 36 items covering preregistration; methods; results and discussion; and data, code and materials availability. A shortened 12-item version of the checklist is also available. Checklist responses can be submitted alongside a manuscript for review. While the checklist can also serve educational purposes, it mainly aims to support researchers in identifying concrete actions that can increase the transparency of their research, while a disclosed checklist can help readers and reviewers gain critical information about different aspects of the transparency of the submitted research.
Transparency Having one’s actions open and accessible for external evaluation. Transparency pertains to researchers being honest about theoretical, methodological, and analytical decisions made throughout the research cycle. Transparency can be usefully differentiated into “scientifically relevant transparency” and “socially relevant transparency”. While the former has been the focus of early Open Science discourses, the latter is needed to provide scientific information in ways that are relevant to decision makers and members of the public (Elliott & Resnik, 2019).
Triple-blind peer review Evaluation of research products by qualified experts where the author(s) are anonymous to both the reviewer(s) and editor(s). “Blinding of the authors and their affiliations to both editors and reviewers. This approach aims to eliminate institutional, personal, and gender biases” (Tvina et al., 2019, p. 1082).
TRUST Principles A set of guiding principles that consider Transparency, Responsibility, User focus, Sustainability, and Technology (TRUST) as the essential components for assessing, developing, and sustaining the trustworthiness of digital data repositories (especially those that store research data). They are complementary to the FAIR Data Principles.
Type I error “Incorrect rejection of a null hypothesis” (Simmons et al., 2011, p. 1359), i.e. finding evidence against the null hypothesis that there is no effect when the evidence actually favours retaining it (for example, a judge imprisoning an innocent person): concluding that there is a significant effect and rejecting the null hypothesis when the findings actually occurred by chance.
Type II error A false negative result occurs when the alternative hypothesis is true in the population but the null hypothesis is accepted as part of the analysis (Hartgerink et al., 2017). That is, finding a non-significant statistical result when there is a true effect (for example, a judge passing an innocent verdict on a guilty person). False negatives are less likely to be the subject of replications than positive results (Fiedler et al., 2012), and remain an unresolved issue in scientific research (Hartgerink et al., 2017).
Type M error A Type M error occurs when a researcher concludes that the magnitude of an observed effect is lower or higher than it really is. For example, a Type M error occurs when a researcher claims that an effect of small magnitude was observed when in truth the effect is large, or vice versa.
Type S error A Type S error occurs when a researcher concludes that an observed effect has the opposite sign to the real one. For example, a Type S error occurs when a researcher claims that a positive effect was observed when in reality it is negative, or vice versa. The simulation sketch below illustrates both Type M and Type S errors.
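A minimal simulation sketch (all numbers hypothetical, in the spirit of Gelman and Carlin's design-analysis approach, assuming numpy and scipy are available) showing that, in an underpowered design, statistically significant estimates exaggerate the true effect (Type M) and occasionally carry the wrong sign (Type S):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect = 0.2        # small true effect
se = 0.5                 # large standard error, i.e. low power
n_sims = 100_000

# Each simulated study yields one normally distributed estimate
estimates = rng.normal(true_effect, se, n_sims)
p_values = 2 * stats.norm.sf(np.abs(estimates) / se)
significant = estimates[p_values < 0.05]

type_s_rate = np.mean(significant < 0)                      # wrong sign
exaggeration = np.mean(np.abs(significant)) / true_effect   # Type M
print(f"Type S rate among significant results: {type_s_rate:.2%}")
print(f"Exaggeration ratio (Type M): {exaggeration:.1f}x")
```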
Under-representation Not all voices, perspectives, and members of the community are adequately represented. Under-representation typically occurs when the voices or perspectives of one group dominate, resulting in the marginalization of another. This often affects groups who are a minority in relation to certain personal characteristics.
Universal design for learning (UDL) A framework for improving learning and optimising teaching based upon scientific insights of how humans learn. It aims to make learning inclusive and transformative for all people in which the focus is on catering to the differing needs of different students. It is often regarded as an evidence-based and scientifically valid framework to guide educational practice, consisting of three key principles: engagement, representation, and action and expression. In addition, UDL is included in the Higher Education Opportunity Act of 2008 (Edyburn, 2010).
Validity Validity refers to the application of statistical principles to arrive at well-founded —i.e., likely corresponding accurately to the real world— concepts, conclusions or measurement. In psychometrics, validity refers to the extent to which something measures what it intends to or claims to measure. Under this generic term, there are different types of validity (e.g., internal validity, construct validity, face validity, criterion validity, diagnostic validity, discriminant validity, concurrent validity, convergent validity, predictive validity, external validity).
Version control The practice of managing and recording changes to digital resources (e.g. files, websites, programmes, etc.) over time so that you can recall specific versions later. Version control systems are designed to record the history of changes (who, what and when), and help to avoid human errors (e.g. working on the wrong version). For example, the Git version control system is a widely used software tool that originally helped software developers to version control shared code and is now used across many scientific disciplines to manage and share files.
Webometrics Webometrics involves the study of online content. Webometrics focuses on the numbers and types of hyperlinks between different online sites. Such approaches have been considered as a type of altmetrics. “The study of the quantitative aspects of the construction and use of information resources, structures and technologies on the Web drawing on bibliometric and informetric approaches” (Björneborn & Ingwersen, 2004).
WEIRD This acronym refers to Western, Educated, Industrialized, Rich and Democratic societies. Most research is conducted on, and conducted by, relatively homogeneous samples from WEIRD societies. This limits the generalizability of a large number of research findings, particularly given that WEIRD people are often psychological outliers. It has been argued that “WEIRD psychology” started to evolve culturally as a result of societal changes and religious beliefs in the Middle Ages in Europe. Critics of this term suggest it presents a binary view of the global population and erases variation that exists both between and within societies, and that other aspects of diversity are not captured.
Z-Curve A statistical approach mainly used to obtain the ‘Estimated Replication Rate’ (ERR) and ‘Expected Discovery Rate’ (EDR) for a set of reported studies. Calculating a z-curve for a set of statistically significant studies involves converting reported p-values to z-scores, fitting a finite mixture model to the distribution of z-scores, and estimating mean power based on the mixture model. Z-curve analysis can be performed in R through a dedicated package – https://cran.r-project.org/web/packages/zcurve/index.html.
Zenodo An open science repository where researchers can deposit research papers, reports, data sets, research software, and any other research-related digital artifacts. Zenodo creates a persistent digital object identifier (DOI) for each submission to make it citable. This platform was developed under the European OpenAIRE program and operated by CERN.
https://supplementpolice.com/l-dopa/
# L-DOPA – Dopamine Amino Acid Health Benefits & Side Effects Guide?
## What Is L-DOPA?
L-DOPA is an amino acid and a hormone that is synthesized endogenously by both animals and plants. In human beings, it is biosynthesized from the amino acid L-tyrosine.
L-DOPA is a precursor for the synthesis of neurotransmitters such as dopamine, epinephrine, and norepinephrine, which are collectively referred to as catecholamines. The amino acid is also involved in the release of a neurotrophic factor in the brain and the central nervous system.
L-DOPA is a psychoactive compound that can be employed in the management of different diseases, including Parkinsonism. The compound can be manufactured, and it is sold in its pure form as a psychoactive agent.
The compound is also used clinically in the treatment of dopamine-responsive dystonia. L-DOPA also helps in the management of depression and anxiety. Individuals who are administered with the compound encounter improved concentration and confidence.
L-DOPA is found in different sources, including Mucuna pruriens. The plant contains a high content of the compound in its seeds and leaves. Other sources include the seeds of Tamarindus indica, which contain about 3.78 percent L-DOPA. The compound can also be extracted from the seeds of Canavalia gladiata, which contain about 4.22 percent L-DOPA.
## Uses And Health Benefits Of L-Dopa
### Benefits For The Mind
L-DOPA promotes the release of the hormones that combine to provide a euphoric feeling. As a result, it is used in the management of conditions such as anxiety, stress, and depression. The compound has been shown to improve symptoms that are associated with depression, including lethargy and even chronic pain.
L-DOPA also helps in the management of anxiety and depression. Depressed individuals acquire a calm feeling that is accompanied by energy and motivation once they are put on L-DOPA.
The compound also has a neuroprotective ability. Through its antioxidant property, it protects neurons and brain cells from damage by harmful free radicals, and as a result enhances brain tissue integrity and organ function.
The amino acid is also capable of suppressing the symptoms that are associated with nervous system disorders. It does this by keeping the level of the neurotransmitter dopamine well balanced in the body. Many disorders of the nervous system result from a lack of dopamine.
Complications due to lack of sufficient dopamine in the body are commonly seen among older people. Symptoms that present with the disorders include impaired motor control, together with various cognitive problems. L-DOPA has been shown to improve the quality of life of patients who suffer from the lack of dopamine.
#### Management Of Parkinsonism And Dopamine-Responsive Dystonia
The ability of L-DOPA to cross the protective blood-brain barrier makes it useful in the management of Parkinsonism and dopamine-responsive dystonia. The nervous system disorders occur due to the lack of dopamine. When L-DOPA is administered, it crosses the blood-brain barrier into the brain where it is converted to dopamine.
As a result, the concentration of the neurotransmitter is increased in the brain. The treatment of Parkinsonism using L-DOPA was demonstrated clinically by George Cotzias and his coworkers. He explained that once the compound enters the brain, it gets converted to dopamine.
The conversion is carried out by the enzyme aromatic L-amino acid decarboxylase. The enzyme is also known as DOPA decarboxylase. Vitamin B6 is required to facilitate the reaction. The vitamin is occasionally administered alongside L-DOPA in patients having Parkinsonism.
### L-DOPA Benefits For The Body
L-DOPA has antioxidant properties. As a result, it protects body organs from damage by free radicals. The compound can also get rid of aging signs, such as wrinkles. L-DOPA reduces the visible signs of aging by stimulating the release of growth hormone in the body. The growth hormone promotes the development of healthy skin and the building of a lean muscle.
L-DOPA has also been shown to increase the metabolic process in the body. The increased metabolic process contributes to the breakdown of fats to provide the body with vibrant energy. The ability of the amino acid to burn fats and increase the energy levels in the body makes it an ideal partner for bodybuilding.
### L-DOPA Benefits For The Reproductive System
The amino acid has been shown to work on different tissues in the body. L-DOPA has been referred to as one of the best tonics for both men and women. It is an aphrodisiac, and it has been shown to improve sexual health and libido in individuals taking it.
The compound has also been shown to promote normal fertility, including healthy sperm and ova. L-DOPA supports proper functioning of the reproductive organs, and it promotes appropriate secretions of the genital organs. Men given L-DOPA supplements experience increased production of the hormone testosterone. The sperm is also protected against damage by free radicals.
## Risks And Side Effects Of L-DOPA
The most common side effects of L-DOPA are nausea and vomiting. They can be reduced by taking the medication with food. However, proteins reduce the absorption of L-DOPA.
Other less common side effects include abnormal thinking, in which the patient holds false beliefs that cannot be changed by facts. The patient may also present with agitation, anxiety, confusion, dizziness, and a false sense of well-being.
Side effects that are related to the gastrointestinal tract include difficulty in swallowing, excessive watering of the mouth, and indigestion problems. Other less common adverse effects include blurred vision, double vision, arrhythmias, and dilated pupils.
## Top Products Containing L-DOPA
### L-DOPA 98% (60 Capsules) Mucuna Extract
The product from Health Solutions is said to increase the energy levels in the body. It is also claimed to help reduce cellulite, as well as the levels of fat present in the body. Other claimed benefits include improved bone density and a decreased risk of osteoporosis. The product has an overall rating of 4.0 stars out of a possible 5 stars.
## L-DOPA Review Summary
L-DOPA is a natural supplement that has various health benefits. The compound enhances mood and increases the energy levels of an individual. It also helps bodybuilders and strength training athletes. The side effects that arise from its use can be minimized by taking small doses as directed by a healthcare provider.
http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=tvp&paperid=697&option_lang=eng
Teor. Veroyatnost. i Primenen., 1967, Volume 12, Issue 1, Pages 161–166 (Mi tvp697)
Short Communications
The Use of the Integral Functional in Some Methods of Control in the Presence of Small Random Perturbations
A. P. Cherenkov
Moscow
Abstract: The problems described in [1] and [2] (parametric control and control by means of a single switching) are considered with the difference that the characterizing function is replaced by the characterizing functional of the integral type. In both cases we prove the existence of the minimum of the variance of some functional and evaluate this minimum. The minimizing functions are also determined. Necessary and sufficient conditions for this minimum to be equal to zero are given.
Full text: PDF file (355 kB)
English version:
Theory of Probability and its Applications, 1967, 12:1, 139–143
Citation: A. P. Cherenkov, “The Use of the Integral Functional in Some Methods of Control in the Presence of Small Random Perturbations”, Teor. Veroyatnost. i Primenen., 12:1 (1967), 161–166; Theory Probab. Appl., 12:1 (1967), 139–143
Citation in format AMSBIB
\Bibitem{Che67}
\by A.~P.~Cherenkov
\paper The Use of the Integral Functional in Some Methods of Control in the Presence of Small Random Perturbations
\jour Teor. Veroyatnost. i Primenen.
\yr 1967
\vol 12
\issue 1
\pages 161--166
\mathnet{http://mi.mathnet.ru/tvp697}
\mathscinet{http://www.ams.org/mathscinet-getitem?mr=210498}
\zmath{https://zbmath.org/?q=an:0183.19404|0147.16102}
\transl
\jour Theory Probab. Appl.
\yr 1967
\vol 12
\issue 1
\pages 139--143
\crossref{https://doi.org/10.1137/1112018}
https://www.tutorialspoint.com/binary-representation-of-next-greater-number-with-same-number-of-1-s-and-0-s-in-c-program
# Binary representation of next greater number with same number of 1’s and 0’s in C Program?
Suppose we have a binary string that represents a number n. We have to find the binary representation of the smallest number that is greater than n and has the same number of 0s and 1s. So if the number is 1011 (11 in decimal), then the output will be 1101 (13 in decimal). This problem can be solved using the next-permutation technique. Let us see the algorithm to get the idea.
## Algorithm
nextBin(bin) −
Begin
len := length of bin
for i in range len-2, down to 0, do
if bin[i] is 0 and bin[i+1] is 1, then
exchange bin[i] and bin[i+1]
break
end if
done
if no such i was found, then there is no greater number, return
otherwise j := i + 2, k := len – 1
while j < k, do
if bin[j] is 1 and bin[k] is 0, then
exchange bin[j] and bin[k]
increase j by 1 and decrease k by 1
else if bin[j] is 0, then
break
else
increase j by 1
end if
done
return bin
End
## Example
#include <iostream>
#include <string>
#include <utility>
using namespace std;
string nextBinary(string bin) {
   int len = bin.size();
   int i;
   // find the rightmost "01" pair and swap it to "10"
   for (i = len - 2; i >= 0; i--) {
      if (bin[i] == '0' && bin[i+1] == '1') {
         swap(bin[i], bin[i+1]);
         break;
      }
   }
   if (i < 0)   // no "01" pair: bin is already the largest arrangement
      return "No greater number is present";
   // sort the suffix after the swapped pair into ascending order
   // (0s before 1s) so the result is the smallest greater number
   int j = i + 2, k = len - 1;
   while (j < k) {
      if (bin[j] == '1' && bin[k] == '0') {
         swap(bin[j], bin[k]);
         j++;
         k--;
      } else if (bin[j] == '0') {
         break;   // remaining suffix is already sorted
      } else {
         j++;
      }
   }
   return bin;
}
int main() {
   string bin = "1011";
   cout << "Binary value of next greater number = " << nextBinary(bin);
   return 0;
}
## Output
Binary value of next greater number = 1101
https://en.wikibooks.org/wiki/Communication_Systems/Amplitude_Modulation
# Communication Systems/Amplitude Modulation
Amplitude modulation is one of the earliest radio modulation techniques. The receivers used to listen to AM-DSB-C are perhaps the simplest receivers of any radio modulation technique; which may be why that version of amplitude modulation is still widely used today. By the end of this module, you will know the most popular versions of amplitude modulation, some popular AM modulation circuits, and some popular AM demodulation circuits.
## Amplitude Modulation
Amplitude modulation (AM) occurs when the amplitude of a carrier wave is modulated, to correspond to a source signal. In AM, we have an equation that looks like this:
${\displaystyle A_{signal}(t)=A(t)\sin(\omega t)}$
We can also see that the phase of this wave is irrelevant, and does not change (so we don't even include it in the equation).
AM Double-Sideband (AM-DSB for short) can be broken into two different, distinct types: Carrier, and Suppressed Carrier varieties (AM-DSB-C and AM-DSB-SC, for short, respectively). This page will talk about both varieties, and will discuss the similarities and differences of each.
### Characteristics
#### Modulation Index
Amplitude modulation requires a high frequency constant carrier and a low frequency modulation signal.
A wave carrier is of the form ${\displaystyle e_{c}=E_{c}\cos \left({\omega _{c}t}\right)}$
A wave modulation signal is of the form ${\displaystyle e_{m}=E_{m}\cos \left({\omega _{m}t}\right)}$
Notice that the amplitude of the high frequency carrier takes on the shape of the lower frequency modulation signal, forming what is called a modulation envelope.
The modulation index is defined as the ratio of the modulation signal amplitude to the carrier amplitude.
${\displaystyle m_{am}={\frac {E_{m}}{E_{c}}}}$ where ${\displaystyle 0\leq m_{am}\leq 1}$
The overall signal can be described by:
${\displaystyle e_{am}=\left({E_{c}+E_{m}\cos \left({\omega _{m}t}\right)}\right)\cos \left({\omega _{c}t}\right)}$
More commonly, the carrier amplitude is normalized to one and the am equation is written as:
${\displaystyle e_{am}=\left({1+m_{am}\cos \left({\omega _{m}t}\right)}\right)\cos \left({\omega _{c}t}\right)}$
In most literature this expression is simply written as:
${\displaystyle e=\left({1+m\cos \omega _{m}t}\right)\cos \omega _{c}t}$
If the modulation index is zero (${\displaystyle m_{am}=0}$) the signal is simply a constant amplitude carrier.
If the modulation index is 1 (${\displaystyle m_{am}=1}$), the resultant waveform has maximum or 100% amplitude modulation.
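As a concrete illustration, here is a minimal numpy sketch (not from the original text) that builds the normalized AM waveform and reads the modulation index back from the envelope extremes; the carrier frequency, tone frequency, and the value of m are arbitrary illustrative choices.

```python
import numpy as np

# Illustrative values: fc = 100 kHz carrier, fm = 1 kHz tone, m = 0.5
fc, fm, m = 100e3, 1e3, 0.5
t = np.arange(0, 2e-3, 1/(20*fc))            # 2 ms at 20 samples per carrier cycle

envelope = 1 + m*np.cos(2*np.pi*fm*t)        # modulation envelope
e_am = envelope * np.cos(2*np.pi*fc*t)       # full AM waveform

# Reading the modulation index back from the envelope extremes:
E_max, E_min = envelope.max(), envelope.min()
print((E_max - E_min) / (E_max + E_min))     # ~0.5
```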
#### Sidebands
Expanding the normalized AM equation (written here with sines; choosing sine or cosine only shifts the phase):
${\displaystyle e=\left({1+m\sin \omega _{m}t}\right)\sin \omega _{c}t}$
we obtain:
${\displaystyle e=\sin \omega _{c}t+{\frac {m}{2}}\cos \left({\omega _{c}-\omega _{m}}\right)t-{\frac {m}{2}}\cos \left({\omega _{c}+\omega _{m}}\right)t}$
where:
${\displaystyle \sin \omega _{c}t}$ represents the carrier
${\displaystyle {\frac {m}{2}}\cos \left({\omega _{c}-\omega _{m}}\right)t}$ represents the lower sideband
${\displaystyle {\frac {m}{2}}\cos \left({\omega _{c}+\omega _{m}}\right)t}$ represents the upper sideband
The sidebands are centered on the carrier frequency. They are the sum and difference frequencies of the carrier and modulation signals. In the above example, they are just single frequencies, but normally the baseband modulation signal is a range of frequencies and hence two bands are formed.
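The sideband amplitudes are easy to verify numerically: an FFT of the normalized AM signal shows a carrier line of amplitude 1 and two sideband lines of amplitude m/2 at fc ± fm. A minimal sketch (the sample rate and frequencies are illustrative assumptions, not values from the text):

```python
import numpy as np

fs = 1_000_000                                # sample rate, Hz
t = np.arange(0, 0.01, 1/fs)                  # 10 ms window (integer cycles)
fc, fm, m = 100e3, 1e3, 0.8
e = (1 + m*np.sin(2*np.pi*fm*t)) * np.sin(2*np.pi*fc*t)

f = np.fft.rfftfreq(len(t), 1/fs)
spec = np.abs(np.fft.rfft(e)) / (len(t)/2)    # single-sided amplitude spectrum
for target in (fc - fm, fc, fc + fm):
    k = np.argmin(np.abs(f - target))
    print(f"{f[k]/1e3:7.1f} kHz  amplitude {spec[k]:.2f}")
# prints ~0.40, 1.00, 0.40: the two sidebands at m/2 around the carrier
```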
### AM Modulator
The standard amplitude modulation equation is:
${\displaystyle e_{am}=\left({1+m\sin \omega _{m}t}\right)\sin \omega _{c}t}$
From this we notice that AM involves a multiplication process. There are several ways to perform this function electronically. The simplest method uses a switch.
#### Switching Modulators
Switching modulators can be placed into two categories: unipolar and bipolar.
##### Bipolar Switching Modulator
The bipolar switch is the easiest to visualize. Note that an AM waveform appears to consist of a low frequency dc-offset signal whose polarity reverses at the carrier rate.
The AM signal can be created by multiplying a dc modulation signal by ±1.
However, since the square wave contains lots of harmonics, the resulting multiplication will contain lots of extraneous frequencies. Mathematically, the spectrum of the square wave signal (given by the Fourier Transform) is of the form:
${\displaystyle F\left\{{f\left(t\right)}\right\}=\sum \limits _{n=1}^{\infty }{\frac {4}{n\pi }}\sin \left({\frac {n\pi }{2}}\right)\cos \left({\frac {n2\pi t}{T}}\right)}$
This seems complicated but, if the square wave switching function has a 50% duty cycle, this simplifies to:
${\displaystyle F\left\{{f\left(t\right)}\right\}={\frac {4}{\pi }}\sum \limits _{n=1,3,5...}^{\infty }{\frac {1}{n}}\cos \left({\frac {n2\pi t}{T}}\right)}$
This tells us that the square wave is actually composed of a series of cosines (phase shifted sines) at odd multiples of the fundamental switching frequency. Therefore, using this signal to multiply the baseband signal results in AM signals being generated at each of the odd harmonics of the switching (carrier) frequencies. Since the amplitude of the harmonics decreases rapidly, this technique is practical for only the first few harmonics, and produces an enormous amount of unwanted signals (noise).
A band pass filter can be used to select any one of the AM signals. The number of different output frequencies can be significantly reduced if the multiplier accepts sinewaves at the carrier input.
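As a numerical check of the bipolar switching idea, the sketch below multiplies a dc-offset tone by a ±1 square wave and reads the spectrum at the first few odd switching harmonics; all frequencies and the modulation depth are illustrative assumptions. The ideal square wave's higher harmonics alias slightly at a finite sample rate, so the measured values are only close to 4/(nπ).

```python
import numpy as np
from scipy.signal import square

fs, fc, fm = 1_000_000, 50e3, 1e3
t = np.arange(0, 0.01, 1/fs)
baseband = 1 + 0.5*np.sin(2*np.pi*fm*t)       # dc offset keeps the carrier term
switched = baseband * square(2*np.pi*fc*t)    # multiply by +/-1 at the carrier rate

f = np.fft.rfftfreq(len(t), 1/fs)
spec = np.abs(np.fft.rfft(switched)) / (len(t)/2)
for n in (1, 3, 5):                           # odd harmonics of the switching rate
    k = np.argmin(np.abs(f - n*fc))
    print(f"n={n}: measured {spec[k]:.2f}, ideal 4/(n*pi) = {4/(n*np.pi):.2f}")
```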
Removing the DC component from the input eliminates the carrier signal and creates DSBSC modulation.
Physically this is done by reversing the signal leads:
The process of reversing the polarity of a signal is easily accomplished by placing two switch pairs in the output of a differential amplifier. The MC1496 balanced modulator is an example of such a device.
##### Unipolar Switching Modulator
As previously mentioned, an AM signal can be created by multiplying a dc modulation signal by 0 & 1.
The spectrum of this signal is defined by:
${\displaystyle F\left\{{f\left(t\right)}\right\}={\frac {1}{2}}+\sum \limits _{n=1}^{\infty }{{\frac {2}{n\pi }}\sin \left({\frac {n\pi }{2}}\right)}\cos \left({\frac {n2\pi t}{T}}\right)}$
Physically this is done by turning the modulation signal on and off at the carrier rate:
A high amplitude carrier can be used to turn a diode on and off. A dc bias is placed on the modulation signal to make certain that only the carrier (not the modulation signal) can reverse bias the diode.
It may not seem obvious, but the output of this circuit contains a series of AM signals. A bandpass filter is needed to extract the desired one. Normally it is the 1st or 3rd harmonic of the fundamental. (The 1st harmonic is the fundamental.)
##### Collector Modulator
The diode switching modulator is incapable of producing high power signals since it is a passive device. A transistor can be used to overcome this limitation. A collector modulator is used for high level modulation.
##### Square Law Modulator
The voltage-current relationship of a diode is nonlinear near the knee and is of the form:
${\displaystyle i\left(t\right)=av\left(t\right)+bv^{2}\left(t\right)}$
The coefficients a and b are constants associated with the particular diode.
Amplitude modulation occurs if the diode is kept in the square law region when signals combine.
Let the injected signals be of the form:
${\displaystyle k={\rm {dc}}\;{\rm {bias}}}$
${\displaystyle e_{m}=E_{m}\sin \omega _{m}t={\rm {modulation}}\;{\rm {signal}}}$
${\displaystyle e_{c}=E_{c}\sin \omega _{c}t={\rm {carrier}}\;{\rm {signal}}}$
The voltage applied across the diode and resistor is given by:
${\displaystyle v\left(t\right)=k+e_{m}+e_{c}}$
The current in the diode and hence in the resistor is given by:
${\displaystyle i\left(t\right)=a\left({k+e_{m}+e_{c}}\right)+b\left({k+e_{m}+e_{c}}\right)^{2}}$
Which expands to:
${\displaystyle i\left(t\right)=\underbrace {k\left({a+bk}\right)} _{\rm {dc}}+\underbrace {\left({a+2bk}\right)e_{m}} _{{\rm {original}}\;{\rm {modulating}}\;{\rm {signal}}}+\underbrace {\left({a+2bk}\right)e_{c}} _{\rm {carrier}}+\underbrace {2be_{m}e_{c}} _{{\rm {2}}\;{\rm {sidebands}}}+\underbrace {be_{m}^{2}} _{{\rm {2}}\;{\rm {x}}\;{\rm {modulating}}\;{\rm {frequency}}}+\underbrace {be_{c}^{2}} _{{\rm {2}}\;{\rm {x}}\;{\rm {carrier}}\;{\rm {frequency}}}}$
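The bookkeeping in this expansion is easy to get wrong, so here is a short sympy check (our own sketch, not part of the text) confirming that the cross term 2b·e_m·e_c is exactly the two sidebands at ω_c ± ω_m:

```python
import sympy as sp

a, b, k, Em, Ec, wm, wc, t = sp.symbols('a b k E_m E_c omega_m omega_c t')
em = Em*sp.sin(wm*t)                          # modulation signal
ec = Ec*sp.sin(wc*t)                          # carrier signal
i = sp.expand(a*(k + em + ec) + b*(k + em + ec)**2)   # full diode current

# 2*b*em*ec should equal b*Em*Ec*[cos((wc-wm)t) - cos((wc+wm)t)]:
sidebands = b*Em*Ec*(sp.cos(wc*t - wm*t) - sp.cos(wc*t + wm*t))
print(sp.expand(sp.expand_trig(sp.expand(2*b*em*ec) - sidebands)))   # -> 0
```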
#### Modulation Index Measurement
It is sometimes difficult to determine the modulation index, particularly for complex signals. However, it is relatively easy to determine it by critical observation.
There are two practical methods to derive the modulation index:
1. From the time-domain display of the AM wave, using the maxima and minima of the envelope.
2. By the trapezoidal method.
The trapezoidal oscilloscope display can be used to determine the modulation index.
AM modulation index: ${\displaystyle m={\frac {E_{\max }-E_{\min }}{E_{\max }+E_{\min }}}}$
The trapezoidal display makes it possible to quickly recognize certain types of problems, which would reduce the AM signal quality.
The highest authorized carrier power for AM broadcast in the US is 50 kilowatts, although directional stations are permitted 52.65 kilowatts to compensate for losses in the phasing system. The ERP can be much higher.
#### C-QUAM
The basic idea behind the C-QUAM modulator is actually quite simple. The output stage is an ordinary AM modulator; however, the carrier signal has been replaced by an amplitude-limited vector modulator. Therefore, the limiter output is really a phase-modulated signal.
A standard AM receiver will detect the amplitude variations as L+R. A stereo receiver will also detect the phase variations to extract L-R. It will then process these signals to separate the left and right channels.
To enable the stereo decoder, a 25 Hz pilot tone is added to the L-R channel.
The most common receivers in use today are the superheterodyne type. They consist of:
• Antenna
• RF amplifier
• Local Oscillator and Mixer
• IF Section
• Detector and Amplifier
The need for these subsystems can be seen when one considers the much simpler and inadequate TRF or tuned radio frequency amplifier.
#### TRF Amplifier
It is possible to design an RF amplifier to accept only a narrow range of frequencies, such as one radio station on the AM band.
By adjusting the center frequency of the tuned circuit, all other input signals can be excluded.
The AM band ranges from about 500 kHz to 1600 kHz. Each station requires 10 kHz of this spectrum, although the baseband signal is only 5 kHz.
Recall that for a tuned circuit: ${\displaystyle Q={\frac {f_{c}}{B}}}$. The center or resonant frequency in an RLC network is most often adjusted by varying the capacitor value. However, the Q remains approximately constant as the center frequency is adjusted. This suggests that the bandwidth varies as the circuit is tuned.
For example, the Q required at the lower end of the AM band to select only one radio station would be approximately:
${\displaystyle Q={\frac {f_{c}}{B}}={\frac {500\;kHz}{10\;kHz}}=50}$
As the tuned circuit is adjusted to the higher end of the AM band, the resulting bandwidth is:
${\displaystyle B={\frac {f_{c}}{Q}}={\frac {1600\;kHz}{50}}=32\;kHz}$
A bandwidth this high could conceivably pass three adjacent stations, thus making meaningful reception impossible.
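A quick recalculation of the tuned-circuit bandwidth across the band (using the figures from the two examples above; the loop is just a sanity check, not from the text):

```python
Q = 500e3 / 10e3                  # Q = 50, fixed by the low end of the band
for fc in (500e3, 1000e3, 1600e3):
    print(f"fc = {fc/1e3:6.0f} kHz -> B = fc/Q = {fc/Q/1e3:4.1f} kHz")
# 10.0, 20.0 and 32.0 kHz: at the top of the band roughly three
# adjacent 10 kHz channels fall inside the passband
```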
To prevent this, the incoming RF signal is heterodyned to a fixed IF or intermediate frequency and passed through a constant bandwidth circuit.
The RF amplifier boosts the RF signal into the mixer. It has broad tuning and amplifies not just one RF station, but many of them simultaneously. It also amplifies any input noise and even contributes some of its own.
The other mixer input is a high frequency sine wave created by a local oscillator. In AM receivers, it is always 455 kHz above the desired station carrier frequency. An ideal mixer will combine the incoming carrier with the local oscillator to create sum and difference frequencies.
A real mixer combines two signals and creates a host of new frequencies:
• A dc level
• The original two frequencies
• The sum and difference of the two input frequencies
• Harmonics of the two input frequencies
• Sums and differences of all of the harmonics
Since the RF amplifier passes several radio stations at once, the mixer output can be very complex. However, the only signal of real interest is the difference between the desired station carrier frequency and the local oscillator frequency. This difference frequency, also called the IF (intermediate frequency), will always be 455 kHz. By passing this through a 10 kHz BPF (band pass filter) centered at 455 kHz, the bulk of the unwanted signals can be eliminated.
##### Local Oscillator Frequency
Since the mixer generates sum and difference frequencies, it is possible to generate the 455 kHz IF signal if the local oscillator is either above or below the IF. The inevitable question is which is preferable.
Case I: The local oscillator is above the incoming station frequency. This would require that the oscillator tune from (500 + 455) kHz to (1600 + 455) kHz, or approximately 1 to 2 MHz. It is normally the capacitor in a tuned RLC circuit which is varied to adjust the center frequency, while the inductor is left fixed.
Since ${\displaystyle f_{c}={\frac {1}{2\pi {\sqrt {LC}}}}}$
solving for C we obtain ${\displaystyle C={\frac {1}{L\left({2\pi f_{c}}\right)^{2}}}}$
When the tuning frequency is a maximum, the tuning capacitor is a minimum and vice versa. Since we know the range of frequencies to be created, we can deduce the range of capacitance required.
${\displaystyle {\frac {C_{\max }}{C_{\min }}}={\frac {L\left({2\pi f_{\max }}\right)^{2}}{L\left({2\pi f_{\min }}\right)^{2}}}=\left({\frac {2\;MHz}{1\;MHz}}\right)^{2}=4}$
Making a capacitor with a 4:1 value change is well within the realm of possibility.
Case II: The local oscillator is below the incoming station frequency. This would require that the oscillator tune from (500 - 455) kHz to (1600 - 455) kHz, or approximately 45 kHz to 1145 kHz, in which case:
${\displaystyle {\frac {C_{\max }}{C_{\min }}}=\left({\frac {1145\;kHz}{45\;kHz}}\right)^{2}\approx 648}$
It is not practical to make a tunable capacitor with this type of range. Therefore the local oscillator in a standard AM receiver is above the radio band.
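Since C is proportional to 1/fc², the required capacitor range for either case follows directly from the tuning range; a two-line check (the helper name is ours, and the text's "4" comes from rounding the tuning range to 1-2 MHz):

```python
def cap_ratio(f_lo, f_hi):
    # C is proportional to 1/fc^2, so Cmax/Cmin = (f_hi/f_lo)^2
    return (f_hi / f_lo)**2

print(cap_ratio(500e3 + 455e3, 1600e3 + 455e3))   # LO above: ~4.6 (about 4:1)
print(cap_ratio(500e3 - 455e3, 1600e3 - 455e3))   # LO below: ~648
```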
##### Image Frequency
Just as there are two oscillator frequencies, which can create the same IF, two different station frequencies can create the IF. The undesired station frequency is known as the image frequency.
If any circuit in the radio front end exhibits non-linearities, there is a possibility that other combinations may create the intermediate frequency. Once the image frequency is in the mixer, there is no way to remove it since it is now heterodyned into the same IF band as the desired station.
## AM Demodulation
#### AM Detection
There are two basic types of AM detection, coherent and non-coherent. Of these two, the non-coherent is the simpler method.
• Non-coherent detection does not rely on regenerating the carrier signal. The information or modulation envelope can be removed or detected by a diode followed by an audio filter.
• Coherent detection relies on regenerating the carrier and mixing it with the AM signal. This creates sum and difference frequencies. The difference frequency corresponds to the original modulation signal.
Both of these detection techniques have certain drawbacks. Consequently, most radio receivers use a combination of both.
##### Envelope Detector
When trying to demodulate an AM signal, it seems like good sense that only the amplitude of the signal needs to be examined. By only examining the amplitude of the signal at any given time, we can remove the carrier signal from our considerations, and we can examine the original signal. Luckily, we have a tool in our toolbox that we can use to examine the amplitude of a signal: The Envelope Detector.
An envelope detector is simply a half wave rectifier followed by a low pass filter. In the case of commercial AM radio receivers, the detector is placed after the IF section. The carrier at this point is 455 kHz while the maximum envelope frequency is only 5 kHz. Since the ripple component is nearly 100 times the frequency of the highest baseband signal, it does not pass through any subsequent audio amplifiers.
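A software model of the envelope detector makes the point concrete: rectify, then low-pass well below the carrier. This is a minimal sketch with illustrative numbers (455 kHz carrier, 5 kHz tone), assuming scipy is available.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs, fc, fm, m = 10_000_000, 455e3, 5e3, 0.5   # illustrative values
t = np.arange(0, 0.002, 1/fs)
am = (1 + m*np.sin(2*np.pi*fm*t)) * np.sin(2*np.pi*fc*t)

rectified = np.clip(am, 0, None)              # ideal half-wave rectifier (the diode)
sos = butter(4, 10e3, fs=fs, output='sos')    # audio low-pass, far below 455 kHz
audio = sosfiltfilt(sos, rectified)
# audio now tracks (1 + m*sin(wm*t))/pi -- the envelope scaled by the
# mean of a half-wave rectified sine -- with the carrier ripple removed
```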
An AM signal where the carrier frequency is only 10 times the envelope frequency would have considerable ripple:
##### Synchronous Detector
In a synchronous or coherent detector, the incoming AM signal is mixed with the original carrier frequency.
If you think this looks suspiciously like a mixer, you are absolutely right! A synchronous detector is one where the difference frequency between the two inputs is zero Hz. Or, in other words, the two input frequencies are the same. Let's check the math.
Recall that the AM input is mathematically defined by:
${\displaystyle e_{am}=\underbrace {\sin \omega _{c}t} _{\rm {Carrier}}+\underbrace {{\frac {m}{2}}\cos \left({\omega _{c}-\omega _{m}}\right)t} _{{\rm {Lower}}\;{\rm {Sideband}}}-\underbrace {{\frac {m}{2}}\cos \left({\omega _{c}+\omega _{m}}\right)t} _{{\rm {Upper}}\;{\rm {Sideband}}}}$
At the multiplier output, we obtain:
${\displaystyle {\rm {mixer}}\;{\rm {out=}}e_{am}\times \sin \omega _{c}t=\underbrace {{\frac {1}{2}}+{\frac {m}{2}}\sin \omega _{m}t} _{{\rm {dc}}\;{\rm {term}}\;{\rm {plus}}\;{\rm {the}}\;{\rm {original}}\;{\rm {modulation}}\;{\rm {signal}}}\underbrace {-{\frac {1}{2}}\cos 2\omega _{c}t+{\frac {m}{4}}\sin \left({2\omega _{c}-\omega _{m}}\right)t-{\frac {m}{4}}\sin \left({2\omega _{c}+\omega _{m}}\right)t} _{{\rm {AM}}\;{\rm {signal}}\;{\rm {centered}}\;{\rm {at}}\;{\rm {2}}\;{\rm {times}}\;{\rm {the}}\;{\rm {carrier}}\;{\rm {frequency}}}}$
The dc term can be blocked and the high frequency components filtered off, leaving only the original modulation signal.
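A numerical version of the same argument (illustrative frequencies; the low-pass filter stands in for the audio filter):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs, fc, fm, m = 1_000_000, 100e3, 1e3, 0.5
t = np.arange(0, 0.02, 1/fs)
am = (1 + m*np.sin(2*np.pi*fm*t)) * np.sin(2*np.pi*fc*t)

mixer_out = am * np.sin(2*np.pi*fc*t)         # multiply by a synchronous carrier
sos = butter(4, 5e3, fs=fs, output='sos')
demod = sosfiltfilt(sos, mixer_out)           # keeps 1/2 + (m/2)sin(wm t)
print(demod.max() - demod.min())              # ~0.5: peak-to-peak of (m/2)sin(wm t)
```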
This technique has one serious drawback. The problem is how to create the exact carrier frequency. If the frequency is not exact, the entire baseband signal will be shifted by the difference. A shift of only 50 Hz will make the human voice unrecognizable. It is possible to use a PLL (phase locked loop), but making one tunable for the entire AM band is not trivial.
As a result, most radio receivers use an oscillator to create a fixed intermediate frequency. This is then followed by an envelope detector or a fixed frequency PLL.
##### Squaring Detector
The squaring detector is also a synchronous or coherent detector. It avoids the problem of having to recreate the carrier by simply squaring the input signal. It essentially uses the AM signal itself as a sort of wideband carrier.
The output of the multiplier is the square of the input AM signal:
${\displaystyle {\left({e_{am}}\right)^{2}=\left({\sin \omega _{c}t+{\frac {m}{2}}\cos \left({\omega _{c}-\omega _{m}}\right)t-{\frac {m}{2}}\cos \left({\omega _{c}+\omega _{m}}\right)t}\right)^{2}}}$
Since the input is being multiplied by the ${\displaystyle {\sin \omega _{c}t}}$ component, one of the resulting difference terms is the original modulation signal. The principal difficulty with this approach is trying to create a linear, high frequency multiplier.
## AM-DSBSC
AM-DSB-SC is characterized by the following transmission equation:
${\displaystyle v(t)=As(t)\cos(2\pi f_{c}t)}$
It is important to notice that s(t) can contain a negative value. AM-DSB-SC requires a coherent receiver, because the modulation data can go negative, and therefore the receiver needs to know that the signal is negative (and not just phase shifted). AM-DSB-SC systems are very susceptible to frequency shifting and phase shifting on the receiving end. In this equation, A is the transmission amplitude.
Double side band suppressed carrier modulation is simply AM without the broadcast carrier. Recall that the AM signal is defined by:
${\displaystyle e_{am}=\left({1+m\sin \omega _{m}t}\right)\sin \omega _{c}t=\sin \omega _{c}t+{\frac {m}{2}}\cos \left({\omega _{c}-\omega _{m}}\right)t-{\frac {m}{2}}\cos \left({\omega _{c}+\omega _{m}}\right)t}$
The carrier term in the spectrum can be eliminated by removing the dc offset from the modulating signal:
${\displaystyle e_{DSBSC}=m\sin \omega _{m}t\sin \omega _{c}t={\frac {m}{2}}\cos \left({\omega _{c}-\omega _{m}}\right)t-{\frac {m}{2}}\cos \left({\omega _{c}+\omega _{m}}\right)t}$
### Double Balanced Ring Modulator
One of the circuits which is capable of creating DSBSC is the double balanced ring modulator.
If the carrier is large enough to cause the diodes to switch states, then the circuit acts like a diode switching modulator:
The modulation signal is inverted at the carrier rate. This is essentially multiplication by ±1. Since the transformers cannot pass dc, there is no term which when multiplied can create an output carrier. Since the diodes will switch equally well on either cycle, the modulation signal is effectively being multiplied by a 50% duty cycle square wave creating numerous DSBSC signals, each centered at an odd multiple of the carrier frequency. Bandpass filters are used to extract the frequency of interest.
Some IC balanced modulators use this technique, but use transistors instead of diodes to perform the switching.
### Push Pull Square Law Balanced Modulator
This circuit uses the same principles as the diode square law modulator. Since dc cannot pass through the transformer, it would be expected that there would be no output signal at the carrier frequency.
The drain current vs. gate-source voltage is of the form:
${\displaystyle i_{d}=i_{0}+av_{gs}+bv_{gs}^{2}}$
The net drain current in the output transformer is given by:
${\displaystyle i_{net}=i_{d1}-i_{d2}=i_{0}+av_{gs1}+bv_{gs1}^{2}-\left({i_{0}+av_{gs2}+bv_{gs2}^{2}}\right)}$
${\displaystyle i_{net}=a\left({v_{gs1}-v_{gs2}}\right)+b\left({v_{gs1}+v_{gs2}}\right)\left({v_{gs1}-v_{gs2}}\right)}$
By applying KVL around the gate loops we obtain:
${\displaystyle v_{gs1}={\frac {e_{m}}{2}}+e_{c}\quad \quad \quad \quad v_{gs2}=-{\frac {e_{m}}{2}}+e_{c}}$
Putting it all together we obtain:
${\displaystyle i_{net}=a\left({{\frac {e_{m}}{2}}+e_{c}+{\frac {e_{m}}{2}}-e_{c}}\right)+b\left({{\frac {e_{m}}{2}}+e_{c}-{\frac {e_{m}}{2}}+e_{c}}\right)\left({{\frac {e_{m}}{2}}+e_{c}+{\frac {e_{m}}{2}}-e_{c}}\right)}$
${\displaystyle i_{net}=ae_{m}+2be_{c}e_{m}}$
From this we note that the first term is the originating modulation signal and can easily be filtered off by a high pass filter. The second term is of the form:
${\displaystyle \sin \omega _{m}t\sin \omega _{c}t={\frac {1}{2}}\cos \left({\omega _{c}-\omega _{m}}\right)t-{\frac {1}{2}}\cos \left({\omega _{c}+\omega _{m}}\right)t}$
which is AM DSBSC.
## AM-DSB-C
In contrast to AM-DSB-SC is AM-DSB-C, which is characterized by the following equation:
${\displaystyle v(t)=A[s(t)+c]\cos(2\pi f_{c}t)}$
Where c is a positive term representing the carrier. If the term ${\displaystyle [s(t)+c]}$ is always non-negative, we can receive the AM-DSB-C signal non-coherently, using a simple envelope detector to remove the cosine term. The +c term is simply a constant DC signal and can be removed by using a blocking capacitor.
It is important to note that in AM-DSB-C systems, a large amount of power is wasted in transmission by sending a "boosted" carrier frequency. Since the carrier contains no information, it is considered to be wasted energy. The advantage of this method is that it greatly simplifies the receiver design, since there is no need to generate a coherent carrier signal at the receiver. For this reason, this is the transmission method used in conventional AM radio.
AM-DSB-SC and AM-DSB-C both suffer in terms of bandwidth from the fact that they both send two identical (but reversed) frequency "lobes", or bands. These bands (the upper band and the lower band) are exact mirror images of each other, and therefore contain identical information. Why can't we just cut one of them out, and save some bandwidth? The answer is that we can cut out one of the bands, but it isn't always a good idea. The technique of cutting out one of the sidebands is called Amplitude Modulation Single-Side-Band (AM-SSB). AM-SSB has a number of problems, but also some good aspects. A compromise between AM-SSB and the two AM-DSB methods is called Amplitude Modulation Vestigial-Side-Band (AM-VSB), which uses less bandwidth than the AM-DSB methods, but more than AM-SSB.
### Transmitter
A typical AM-DSB-C transmitter looks like this:
c cos(...)
| |
Signal ---->(+)---->(X)----> AM-DSB-C
which is a little more complicated than an AM-DSB-SC transmitter.
An AM-DSB-C receiver is very simple:
AM-DSB-C ---->|Envelope Detector|---->|Capacitor|----> Signal
The capacitor blocks the DC component, and effectively removes the +c term.
## AM-SSB
To send an AM-SSB signal, we need to remove one of the sidebands from an AM-DSB signal. This means that we need to pass the AM-DSB signal through a filter, to remove one of the sidebands. The filter, however, needs to be a very high order filter, because we need to have a very aggressive roll-off. One sideband needs to pass the filter almost completely unchanged, and the other sideband needs to be stopped completely at the filter.
To demodulate an AM-SSB signal, we need to perform the following steps:
1. Low-pass filter, to remove noise
2. Modulate the signal again by the carrier frequency
3. Pass through another filter, to remove high-frequency components
4. Amplify the signal, because the previous steps have attenuated it significantly.
AM-SSB is most efficient in terms of bandwidth, but there is a significant added cost involved in terms of more complicated hardware to send and receive this signal. For this reason, AM-SSB is rarely seen as being cost effective.
Single sideband is a form of AM with the carrier and one sideband removed. In normal AM broadcast, the transmitter is rated in terms of the carrier power. SSB transmitters attempt to eliminate the carrier and one of the sidebands. Therefore, transmitters are rated in PEP (peak envelope power).
${\displaystyle PEP={\frac {\left({{\rm {peak}}\;{\rm {envelope}}\;{\rm {voltage}}}\right)^{2}}{2R_{L}}}}$
With normal voice signals, an SSB transmitter outputs 1/4 to 1/3 PEP.
There are numerous variations of SSB:
• SSB - Single sideband - amateur radio
• SSSC - Single sideband suppressed carrier - a small pilot carrier is transmitted
• ISB - Independent sideband - two separate sidebands with a suppressed carrier. Used in radio telephony.
• VSB - Vestigial sideband - a partial sideband. Used in broadcast TV.
• ACSSB - Amplitude companded SSB
There are several advantages of using SSB:
• More efficient spectrum utilization
• Less subject to selective fading
• More power can be placed in the intelligence signal
• 10 to 12 dB noise reduction due to bandwidth limiting
### Filter Method
The simplest way to create SSB is to generate DSBSC and then use a bandpass filter to extract one of the sidebands.
This technique can be used at relatively low carrier frequencies. At high frequencies, the Q of the filter becomes unacceptably high. The required Q necessary to filter off one of the sidebands can be approximated by:
${\displaystyle Q\approx {\frac {f_{c}{\sqrt {S}}}{4\Delta f}}}$
where:
${\displaystyle f_{c}={\rm {carrier}}\;{\rm {frequency}}}$
${\displaystyle \Delta f={\rm {sideband}}\;{\rm {separation}}}$
${\displaystyle S={\rm {sideband}}\;{\rm {suppression}}\;{\rm {(not}}\;{\rm {in}}\;{\rm {dB)}}}$
Several types of filters are used to suppress unwanted sidebands:
• LC - Maximum Q = 200
• Ceramic - Maximum Q = 2000
• Mechanical - Maximum Q = 10,000
• Crystal - Maximum Q = 50,000
In order to reduce the demands placed upon the filter, a double heterodyne technique can be used.
The first local oscillator has a relatively low frequency thus enabling the removal of one of the sidebands produced by the first mixer. The signal is then heterodyned a second time, creating another pair of sidebands. However, this time they are separated by a sufficiently large gap that one can be removed by the band limited power amplifier or antenna matching network.
Example
Observe the spectral distribution under the following conditions:
• Audio baseband = 100 Hz to 5 kHz
• LO1 = 100 kHz
• LO2 = 50 MHz
The spectral output of the first mixer is:
If the desired sideband suppression is 80 dB, the Q required to filter off one of the sidebands is approximately:
${\displaystyle S=\log ^{-1}\left({\frac {80}{20}}\right)=10^{4}}$
${\displaystyle Q\approx {\frac {f_{c}{\sqrt {S}}}{4\Delta f}}={\frac {100\times 10^{3}{\sqrt {10^{4}}}}{4\times 200}}=12500}$
It is evident that a crystal filter would be needed to remove the unwanted sideband.
After the filter, only one sideband is left. In this example, we’ll retain the USB. The spectrum after the second mixer is:
The Q required to suppress one of the side bands by 80 dB is approximately:
${\displaystyle Q\approx {\frac {f_{c}{\sqrt {S}}}{4\Delta f}}={\frac {50\times 10^{6}{\sqrt {10^{4}}}}{4\times 200.2\times 10^{3}}}=6244}$
Thus, we note that the required Q drops by half.
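Both Q calculations can be packaged in a small helper for reuse (the function is our own sketch; the numbers are the ones from the example above):

```python
import math

def ssb_filter_q(fc, delta_f, suppression_db):
    """Approximate Q to suppress the unwanted sideband: Q = fc*sqrt(S)/(4*delta_f)."""
    S = 10**(suppression_db / 20)      # sideband suppression as a ratio (not dB)
    return fc * math.sqrt(S) / (4 * delta_f)

print(ssb_filter_q(100e3, 200, 80))      # first mixer:  12500
print(ssb_filter_q(50e6, 200.2e3, 80))   # second mixer: ~6244
```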
This SSB filter technique is used in radiotelephone applications.
### Phase Shift Method
The output from the top mixer is given by:
${\displaystyle \sin \omega _{m}t\sin \omega _{c}t={\frac {1}{2}}\cos \left({\omega _{c}-\omega _{m}}\right)t-{\frac {1}{2}}\cos \left({\omega _{c}+\omega _{m}}\right)t}$
The output from the bottom mixer is given by:
${\displaystyle \cos \omega _{m}t\cos \omega _{c}t={\frac {1}{2}}\cos \left({\omega _{c}-\omega _{m}}\right)t+{\frac {1}{2}}\cos \left({\omega _{c}+\omega _{m}}\right)t}$
The output of the summer is:
${\displaystyle \cos \left({\omega _{c}-\omega _{m}}\right)t}$
which corresponds to the lower sideband.
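For a single tone the two 90° phase shifts are just sine/cosine pairs, so the cancellation can be checked numerically (illustrative frequencies, not from the text):

```python
import numpy as np

fs, fc, fm = 1_000_000, 100e3, 1e3
t = np.arange(0, 0.01, 1/fs)

top    = np.sin(2*np.pi*fm*t) * np.sin(2*np.pi*fc*t)   # both inputs shifted 90 deg
bottom = np.cos(2*np.pi*fm*t) * np.cos(2*np.pi*fc*t)   # unshifted path
ssb = top + bottom                                     # = cos((wc - wm)t)

f = np.fft.rfftfreq(len(t), 1/fs)
spec = np.abs(np.fft.rfft(ssb)) / (len(t)/2)
print(f[spec.argmax()])        # 99000.0 -> only the lower sideband remains
```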
The major difficulty with this technique is the need to provide a constant 90° phase shift over the entire input audio band. To overcome this obstacle, the Weaver or "third" method uses an audio subcarrier, which is phase shifted.
### Weaver Method
The Weaver or ‘third’ method places the baseband signal on a low frequency quadrature carrier.
This has the advantage of not requiring a broadband phase shifter; however, the use of four mixers makes it awkward, so it is seldom used.
### SSB Transmitter
AM-SSB transmitters are a little more complicated:
cos(...)
|
Signal ---->(X)---->|Low-Pass Filter|----> AM-SSB
The filter must be of very high order, for the reasons explained above.
An AM-SSB receiver is a little bit complicated as well:
cos(...)
|
AM-SSB ---->(X)---->|Low-Pass Filter|---->|Amplifier|----> Signal
This filter doesn't need to be of as high an order as the transmitter's.
These receivers require extremely stable oscillators, good adjacent channel selectivity, and typically use a double conversion technique. Envelope detectors cannot be used since the envelope varies at twice the frequency of the AM envelope.
Stable oscillators are needed since the detected signal is proportional to the difference between the untransmitted carrier and the instantaneous side band. A small shift of 50 Hz makes the received signal unusable.
SSB receivers typically use fixed frequency tuning rather than continuous tuning as found on most radios. Crystal oscillators are often used to select the fixed frequency channels.
## AM-VSB
AM-VSB is a compromise between AM-SSB and AM-DSB. To make an AM-VSB signal, we pass an AM-DSB signal through a filter that removes one sideband, but the trick is that we use a low-order filter, so that some of the filtered sideband still exists. This filtered part of the sideband is called the "vestige" of the sideband, hence the name "Vestigial Side Band".
AM-VSB signals can then be demodulated in a similar manner to AM-SSB. When we remodulate the input signal, the two vestiges (the positive and negative mirrors of each other) overlap and add up to the original, unfiltered value.
AM-VSB is less expensive to implement than AM-SSB because we can use lower-order filters.
### Transmitter
Here we will talk about an AM-VSB transmitter circuit.
http://blog.hoeja.de/tag/paradox.html
|
# PHP Bashing
let's do a little php bashing on my way :) found this posted by 0xq1
% php -a
Interactive shell
php > if ((true == "true") && ("true" == 0) && (0 == false)) echo "wtf!";
wtf!
anybody want to try to explain what is going on?
# simcard routing (bug?)
i got an interesting bug with a mobile phone from my mum. In fact, it is not a bug from the phone itself. Imagine, you are upgrading your phone and want to stay with your number. but you get a new one as well. so we have the following setup ...
# geometric progression modified
During a night out, a friend brought up an interesting topic: if you have a fixed length and you always walk the half which is left, you will never get to the end, but it's trending towards it. In fact it is written like this: \$\sum\limits_{i=1 ...
http://www.getnetusa.com/Tennessee/another-name-for-bias-is-systematic-error.html
|
# another name for bias is systematic error
Random error can be caused by unpredictable fluctuations in the readings of a measurement apparatus, or in the experimenter's interpretation of the instrumental reading; these fluctuations may be in part due … (retrieved Sep 28, 2016 from Explorable.com: https://explorable.com/systematic-error).
This means the systematic error is 1 volt, and all measurements shown by this voltmeter will be a volt higher than the true value. If your goal, however, is to look at the difference in means, then the difference is 100, as opposed to 50. A precise estimate will have narrow confidence levels around the point estimate. The precision is limited by the random errors. Bias, on the other hand, cannot be measured using statistics, because it comes from the research process itself. (See also: An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements.)
If a systematic error cannot be eliminated, potentially by resetting the instrument immediately before the experiment, then it needs to be allowed for by subtracting its (possibly time-varying) value from the readings. Random errors can be estimated by comparing multiple measurements, and reduced by averaging multiple measurements.
A systematic error (an estimate of which is known as a measurement bias) is associated with the fact that a measured value contains an offset; the error may be proportional (e.g. a percentage) to the actual value of the measured quantity, or even to the value of a different quantity (the reading of a ruler can be affected by environmental …). The precision of a measurement is how close a number of measurements of the same quantity agree with each other. A systematic error may usually be determined by repeating the measurements. In measurement theory, "bias" (or "systematic error") is a difference between the expectation of a measurement and the true underlying value (Porta, A Dictionary of Epidemiology, 5th ed.).
On estimator bias: the bias depends on the (unknown but true) distribution $\theta$, making it a function of the true distribution. Whence, as $n\to\infty$, $\frac{n}{n-1}\hat{v}\to\hat{v}$, so $\hat{v}$ becomes asymptotically unbiased. This is unavoidable in the world of probability because, as long as your survey is not a census (collecting responses from every member of the population), you cannot be certain that …
Most professional researchers throw terms like response bias or nonresponse error around the boardroom without a full comprehension of their meaning. Selection bias is usually the most malignant type of bias because it's so hard to identify. Systematic errors in experimental observations usually come from the measuring instruments.
https://physics.stackexchange.com/tags/string/new
|
# Tag Info
0
The wave speed can be related to the tension and the mass per unit length of the string by the following equation: $$v = \sqrt{\frac{T}{\mu}}$$ Here, $T$ is the tension, $\mu$ the mass per unit length and $v$ the speed of waves on the string. For a derivation of this equation refer to this or to any first year Physics textbook (e.g. Halliday & Resnick)...
1
To answer to a real world problem assumptions have to made. In this case an assumption is made, the thread is massless and inextensible, which leads to a solution, for an instant of time the thread exerts an infinite force on the mass, the mass undergoes an infinite acceleration, the speed of the mass becomes zero instantaneously, which is not the experience ...
1
If the thread is inextensible and massless, the body will immediately come to a stop the moment the thread becomes taut. Since this happens instantaneously, the force experienced by the string will be infinite for that small moment. However, realistically no string satisfies the above conditions. The threads will have some elasticity as well as a limiting ...
-1
According to @weeeeliam's answer to the following question: there is a circle of rope that rotates at a uniform angular velocity $ω$; without gravity, the density of the rope is $ρ$ and the radius of the rope circle is $R$. What is the formula for calculating the tension of the rope section?...
1
Consider an infinitesimally small section of the of the string $d\theta$. The following diagram illustrates this: The tension is of the same magnitude throughout the rope, and it acts perpendicular to the vector from the center of the string to the point of action. From this diagram, you can tell that only the x-components to the left matter, since the y-...
0
An impulse acting at point A gives the 2m mass its velocity, and the center of mass a parallel velocity of (2/3)V. Since the impulse is not acting in line with the center of mass, it also provide an impulsive torque which causes the system to rotate around the center of mass. Working in the center of mass system is valid and gives a tension which is the ...
3
A perfectly elastic object, when deformed, will immediately return to its undeformed shape when the deforming forces are removed. In so doing, it will dissipate zero energy: all the work done in deforming it is returned when it is allowed to relax again. This is true whether the object is stiff (like a cube of steel) or flexible (like a cube of rubber). A ...
1
Welcome to Physics StackExchange! If both you and the sledge are on a frictionless surface, and the rope is massless, then yes, your application of Newton's third law is correct, you will accelerate towards the sledge at 1.6 $m/s^2$. If you are on rough ground (but the sledge is on a frictionless surface), then the ground exerts the same magnitude of ...
1
D'Alembert principle is nothing but a prescription of a type of constraints -- also known as ideal constraints -- such that the equation of motion can be written in the form of Euler-Lagrange equations, by using an arbitrary system of coordinates obtained by "solving the constraint equations". The definition requires that, at fixed time, the total (...
0
The author of the excellent book where I found this problem (The Lazy Universe) explains in another part of the book: A surprisingly tricky example is the case of a sliding block which is pushed across a table-top by a force, say, pushed by your finger (we ignore friction). The displacement of the block is anywhere on the surface whereas the ...
1
Whether the tension is a constraint, or not, depends how you model the problem. Method 1: consider "the two masses plus the rope" as one body, and use just one coordinate to measure its position. Obviously the "single body" changes shape as it moves, and one mass moves up and the other moves down, but that doesn't affect the general principle of calculating ...
0
Gravity is pulling the weights down, the cable provides a directly opposing restraint, to keep the weights from accelerating downwards at 9.8 meters per second per second. The actual work is done by using the heavier weight's gravitational potential.
1
What is the constraint enforce by the tension? How does it show up in your generalized coordinates? As I noted in the comments it is usual to chose a set of generalized coordiantes with a single position for each rope (and the location of the other end found by calulating from there); this form has the constraints built-in, so that there is no way to ...
1
To solve it in the ground plane you need to separate out in your mind the rotation about a common centre of mass and the movement of the centre of mass. You might then need to think about the special case of such motion in which one of the masses stops moving from time to time in a particular reference frame. Googling 'cycloid' might also shed some light. ...
1
This is a complex question. The pure vibration of a string at a single frequency takes on a sine wave pattern. The plucking of a string deforms the string in a non-sine shape, so the disturbance is a linear combination of sine and cosine (a $\pi/2$ phase shift of sine) of multiple frequencies which are determined by properties of the string. The string acts ...
0
[Refer to the diagram] When the man pulls the rope downward with a force $T$, the rope pulls him up with the force $T$ (Newton's 3rd law). Also, this force gets transferred via the rope to the man, pulling him up with a force of $T$. So the total force acting on him is $2T-mg$, but if he is at rest or moving with constant velocity then $T = mg/2$.
0
This is a simple analysis; Take an object, and hang it from a pulley with two ropes coming to the object.What is the force on the pulley? The weight of the object, you could measure with a proper scale. This means that , all other things being equal, the tension on each rope is half the weight of the object, by construction. Now if it is a girl being the ...
2
The velocity of propagation of a wave in a string ( v ) is proportional to the square root of the force of tension of the string ( T ) and inversely proportional to the square root of the linear density ( $\mu$ ) of the string: $$v=(T/\mu)^{1/2}$$ Once the velocity of propagation is known, the fundamental harmonic frequency is given by: $$f={\frac {v}{2L}}={\frac {1}{2L}}\left({\frac {T}{\mu }}\right)^{1/2}$$
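A quick numerical example of these two formulas (all values are illustrative assumptions):

```python
import math

T = 100.0    # string tension, N
mu = 0.005   # linear density, kg/m
L = 0.65     # vibrating length, m

v = math.sqrt(T / mu)    # wave speed on the string
f1 = v / (2 * L)         # fundamental harmonic frequency
print(v, f1)             # ~141.4 m/s, ~108.8 Hz
```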
https://math.stackexchange.com/questions/2210250/finite-dimensional-subalgebra-of-mathbfbh-is-a-von-neumann-algebra
|
Finite-dimensional subalgebra of $\mathbf{B}(H)$ is a von Neumann algebra
Let $H$ be a complex Hilbert-space and $\mathbf{B}(H)$ the space of bounded linear operators $H \to H$.
The first example of a von Neumann algebra in the book I'm reading is: any finite-dimensional unital $*$-subalgebra $A$ of $\mathbf{B}(H)$.
I've tried to prove this. My idea was maybe a bit of an overkill: Let $P_A : \mathbf{B}(H) \to A$ be the projection onto $A$. By Kaplansky's Theorem, the image of an ultraweakly continuous unital $*$-homomorphism $M \to \mathbf{B}(K)$ (for $M$ a von Neumann algebra and $K$ a Hilbert-space) is a von Neumann algebra.
So I need to show that for any ultraweakly continuous functional $\varphi : A \to \mathbf{C}$ the composition $\varphi \circ P_A$ is ultraweakly continuous. This would follow if $\Vert P_A \Vert \leq 1$, which I wasn't able to show.
Indeed, if $x_i \in (\mathbf{B}(H))_1$ is a converging net $x_i \to x$ in the strong operator topology, then $\Vert (P_A x_i - P_Ax) \xi \Vert \leq \Vert (x_i - x) \xi \Vert \to 0$ for all $\xi \in H$. Thus $\varphi \circ P_A (x_i) \to \varphi\circ P_A (x)$ as $\varphi$ is strongly continuous on $(V)_1$.
I have two questions: Is it true that $\Vert P_A \Vert \leq 1$ for any finite-dimensional $*$-subalgebra? And I feel like there should be a more direct way to prove the fact than using Kaplansky. How can I do it?
Thanks!
In particular, any finite-dimensional subspace (algebra structure is irrelevant here), say $A$, of $\mathcal{B}(H)$ is closed in any standard operator topology, e.g. weak topology.
You could also appeal to the classification of finite-dimensional C$^\ast$-algebras as well: They are all of the form $$\mathbb{M}_{n_1}(\mathbb{C}) \oplus \mathbb{M}_{n_2}(\mathbb{C}) \oplus \mathbb{M}_{n_3}(\mathbb{C}) \oplus \ldots \oplus \mathbb{M}_{n_k}(\mathbb{C})$$ with $k$ being some finite positive integer. Being a finite direct summand of von Neumann algebras, it is a von Neumann algebra itself (strong/weak-operator convergence occurs componentwise).
http://fgarciasanchez.es/thesisrocio/node54.html
|
### 2 Comparison of the constrained Monte Carlo method at low temperature with the Lagrange multiplier method
At this stage of our study we decided to check and compare the results of the effective anisotropy constants obtained with the constrained Monte Carlo method with those obtained with the Lagrange multiplier method for . For this purpose we have simulated a set of thin films. This set is made up of thin layers of different thicknesses from with total spins to with total spins. Here is the number of atomic layers of the system, consequently the films have different ratios of surface to volume number of spins . All thin films have a sc structure with periodic boundary conditions in 2D, uniaxial anisotropy in the bulk with the easy axis of the system parallel to the Z axis and a Néel surface anisotropy constant . The bulk anisotropy constant, exchange constant, and the saturation magnetization value are presented in Tab. 4.1.
First we obtain the effective anisotropy constant at by the Lagrange multiplier method for each of the thin films that make up the set (for more details about this method see section 2.4). After that, we extract the effective anisotropy constants of the same systems with the constrained Monte Carlo method at (note that due to the limitation of the method it is impossible to perform CMC simulations at ). Finally, we compare the effective uniaxial anisotropy constants obtained by both methods. In the case of the Lagrange multiplier method the effective uniaxial anisotropy constant ( ) is calculated from the expression for the energy barrier of the system; in the CMC method, from the restoring torque curve.
The results of this test are plotted in Fig. 4.3: the square symbols represent the data obtained by the Lagrange multiplier method and the circular ones the data obtained by the CMC method. The value of the effective uniaxial anisotropy constant is normalized by the value of the macroscopic volume anisotropy constant at (). As we can see, the data show total agreement between both methods at low temperature. The results show a linear behavior of the effective anisotropy as a function of the ratio, as predicted by the formula:
(74)
at the bulk anisotropy is recovered, as we can see in Fig. 4.3.
Rocio Yanes
https://ttxi.gq/blog/so-someone-stole-the-miki-site
|
# Site stealers.
##### 2017-05-15 13:01:13 +0000, 1 year and 5 months ago
That’s right.
I’m not gonna’ name names but someone basically took this, changed the content a bit and re-uploaded it without permission (it’s copyrighted).
It might not seem like a big deal, but when you spend months working on a site with multiple other people and then someone just takes it? Yeah, it's a bit of a fuck you.
At the bottom I got a one liner saying:
Credits to twentytwoo for the website template
Granted they at least credited me, but that’s not really the point. I had several issues with this:
• Apparently the guy got paid $300 to make the bot, and steal my site (up for debate)
• The source code is privated, so they just pulled some wget -r on it
• I don’t even know what they were using it for initially
• It’s not a template
• I made that site for Miki, and all I get from someone who steals stuff is “credibility” or “exposure” from some people I don’t even know.
Maybe I'm overreacting, but yeah, it kind of pissed me off a bit. I think most people kind of get annoyed when you take their work and re-purpose it without asking them first. Hell, I would've been fine with them using it if they just asked first.
I don’t really want to start drama about it all, just an apology or whatever.
EDIT: Fuck Hifumi.
Return?
https://math.stackexchange.com/questions/2153663/set-relations-relation-that-is-symmetric-and-transitive-but-not-reflexive
|
# set relations (relation that is symmetric and transitive but not reflexive)
My discrete book says that the set $A = \{4,5,6\}$ and the relation $R = \{(4,4),(5,5),(4,5),(5,4)\}$ is symmetric and transitive but not reflexive.
I was wondering how this is possible, because if a set $A$ is symmetric, doesn't it also need to include $(5,6),(6,5),(4,6),(6,4)$?
Also, if it's transitive, doesn't it have to include $(4,5),(5,6),(4,6)$? I thought the definition of a transitive relation was that if $(x \mathrel{R} y) \land (y \mathrel{R} z)$ then $(x \mathrel{R} z)$.
A set can't be symmetric; a relation can be. (By the way, it's possible for a set to be "transitive", but that doesn't mean the same thing as a transitive relation: a transitive set is a set $x$ such that if $z \in y$ and $y \in x$, then $z \in x$.)
A relation on a set need not involve every member of the set. For example, the relation on $\mathbb{N}$ given by "is a prime divisor of" doesn't touch $1$ at all: $1$ is not related to anything and nothing is related to it. In your example, $6$ is not related to anything by $R$, and nothing is related to $6$ by $R$.
"Symmetric" just means that if $a \sim b$, then $b \sim a$. Note that it doesn't tell us about any elements of $A$ we haven't seen before: from the mere knowledge that $4 \sim 5$, we can't use symmetry to deduce that anything is related to $6$. Similarly transitivity.
• I do get now how it is symmetric, thanks. But how is this relation transitive? Wouldn't having (4,4) and (5,5) in it make it reflexive? – user384262 Feb 20 '17 at 21:43
• No, it doesn't make it reflexive. Reflexivity means "for every $x$, $x \sim x$", which is clearly false because it's false for $6$. One counterexample is enough to break the theorem. It's transitive because whenever $a \sim b$ and $b \sim c$, we have $a \sim c$. (This property doesn't tell us anything about what relates to or is related to by $6$, since it's vacuously satisfied in the case that $a$, $b$ or $c$ is $6$.) – Patrick Stevens Feb 20 '17 at 21:44
• Ah that makes so much sense now, thank you! – user384262 Feb 20 '17 at 21:47
• @PatrickStevens, the set $\{(1,0),(0,0),(1,1),1,0\}$ is reflexive in $\{0,1\}$... we can define for arbitrary sets, not merely for relations!!! – mle Feb 21 '17 at 1:11
• I never said "reflexive" couldn't be defined for arbitrary sets ;) – Patrick Stevens Feb 21 '17 at 6:30
Def.: let $A, R$ be sets; we define $$\operatorname{dom}(R):=\{x \mid \exists y : (x,y) \in R\} \\ \operatorname{cod}(R):=\{x \mid \exists y : (y,x)\in R\} \\ \operatorname{field}(R):=\operatorname{dom}(R)\cup \operatorname{cod}(R) \\ R \text{ is reflexive in }A \text{ if } \,\forall x \in A:(x,x)\in R \\ R \text{ is symmetric in }A \text{ if } \,\forall x,y \in A:(x,y)\in R \to (y,x)\in R \\ R \text{ is transitive in }A \text{ if } \,\forall x,y,z\in A:(x,y)\in R \wedge (y,z)\in R \to (x,z)\in R \\ R \text{ is reflexive if } R \text{ is reflexive in } \operatorname{field}(R) \\ R \text{ is symmetric if } R \text{ is symmetric in } \operatorname{field}(R)\\ R \text{ is transitive if } R \text{ is transitive in } \operatorname{field}(R)$$
Now, we have $A:=\{4,5,6\}$ and $R:=\{(4,4),(5,5),(4,5),(5,4)\}$, therefore:
• $R$ is not reflexive in $A$ because $(6,6) \notin R$
• $R$ is symmetric in $A$
• $R$ is transitive in $A$
([$R$ is symmetric in $A$ $\wedge$ $R$ is transitive in $A$] $\to$ [$R$ is reflexive in $A$] is false in general, and this is a counterexample.)
But:
• $\operatorname{dom}(R)= \operatorname{cod}(R)= \operatorname{field}(R)=\{4,5\}$
• $R$ is reflexive (in $\operatorname{field}(R)$)
• $R$ is symmetric (in $\operatorname{field}(R)$)
• $R$ is transitive (in $\operatorname{field}(R)$)
([$R$ is symmetric $\wedge$ $R$ is transitive] $\to$ [$R$ is reflexive] is true, but the converse is false in general; an example is $R' := \{(1,1),(0,1),(0,0)\}$, which is reflexive on its field but not symmetric.)
Why is $R$ symmetric and transitive in $A$? Take for example $4, 6 \in A$: the implications $(4,6) \in R \to (6,4) \in R$ and $(4,6) \in R \wedge (6,4) \in R \to (4,4) \in R$ are vacuously true by the definition of "$\to$", since $(4,6) \notin R$; the cases involving $(5,6)$, $(6,5)$, etc. are handled similarly.
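To see all three properties at once, here is a small brute-force check in Python (my own addition, not part of the original answers):

```
# Brute-force check of reflexivity, symmetry and transitivity
# for the example relation above.
A = {4, 5, 6}
R = {(4, 4), (5, 5), (4, 5), (5, 4)}

reflexive = all((x, x) in R for x in A)
symmetric = all((y, x) in R for (x, y) in R)
transitive = all((x, w) in R
                 for (x, y) in R
                 for (z, w) in R
                 if y == z)

print(reflexive, symmetric, transitive)  # False True True
```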
|
2019-11-16 21:32:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8184067010879517, "perplexity": 299.2283789845076}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668765.35/warc/CC-MAIN-20191116204950-20191116232950-00005.warc.gz"}
|
https://proofwiki.org/wiki/Expectation_of_Beta_Distribution/Proof_1
|
# Expectation of Beta Distribution/Proof 1
## Theorem
Let $X \sim \BetaDist \alpha \beta$ for some $\alpha, \beta > 0$, where $\operatorname{Beta}$ denotes the beta distribution.
Then:
$\expect X = \dfrac \alpha {\alpha + \beta}$
## Proof
From the definition of the beta distribution, $X$ has probability density function:
$\map {f_X} x = \dfrac {x^{\alpha - 1} \paren {1 - x}^{\beta - 1} } {\map \Beta {\alpha, \beta} }$
From the definition of the expected value of a continuous random variable:
$\displaystyle \expect X = \int_0^1 x \, \map {f_X} x \rd x$
So:
$\ds \expect X = \frac 1 {\map \Beta {\alpha, \beta} } \int_0^1 x^\alpha \paren {1 - x}^{\beta - 1} \rd x$
$\ds \qquad = \frac {\map \Beta {\alpha + 1, \beta} } {\map \Beta {\alpha, \beta} }$ (Definition 1 of Beta Function)
$\ds \qquad = \frac {\map \Gamma {\alpha + 1} \, \map \Gamma \beta} {\map \Gamma {\alpha + \beta + 1} } \cdot \frac {\map \Gamma {\alpha + \beta} } {\map \Gamma \alpha \, \map \Gamma \beta}$ (Definition 3 of Beta Function)
$\ds \qquad = \frac \alpha {\alpha + \beta} \cdot \frac {\map \Gamma \alpha \, \map \Gamma \beta \, \map \Gamma {\alpha + \beta} } {\map \Gamma \alpha \, \map \Gamma \beta \, \map \Gamma {\alpha + \beta} }$ (Gamma Difference Equation)
$\ds \qquad = \frac \alpha {\alpha + \beta}$
$\blacksquare$
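As a quick numerical sanity check (my addition, not part of the ProofWiki page; the parameter values are arbitrary):

```
# Monte Carlo check that E[X] = alpha / (alpha + beta)
import numpy as np

alpha, beta = 2.0, 5.0
rng = np.random.default_rng(seed=0)
samples = rng.beta(alpha, beta, size=1_000_000)

print(samples.mean())          # approximately 0.2857
print(alpha / (alpha + beta))  # exactly 2/7 = 0.2857...
```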
|
2021-09-17 19:08:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9975905418395996, "perplexity": 378.3810068206252}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055775.1/warc/CC-MAIN-20210917181500-20210917211500-00399.warc.gz"}
|
https://www.tutorialspoint.com/statistics/arithmetic_mode.htm
|
# Statistics - Arithmetic Mode
Arithmetic Mode refers to the most frequently occurring value in the data set. In other words, the modal value has the highest frequency associated with it. It is denoted by the symbol ${M_o}$ or Mode.
We're going to discuss methods to compute the Arithmetic Mode for three types of series:
## Individual Data Series
When data is given on an individual basis. Following is an example of an individual series:

Items: 5, 10, 20, 30, 40, 50, 60, 70
## Discrete Data Series
When data is given along with frequencies. Following is an example of a discrete series:

| Items | Frequency |
| --- | --- |
| 5 | 2 |
| 10 | 5 |
| 20 | 1 |
| 30 | 3 |
| 40 | 12 |
| 50 | 0 |
| 60 | 5 |
| 70 | 7 |
## Continuous Data Series
When data is given based on ranges along with their frequencies. Following is an example of a continuous series:

| Items | Frequency |
| --- | --- |
| 0-5 | 2 |
| 5-10 | 5 |
| 10-20 | 1 |
| 20-30 | 3 |
| 30-40 | 12 |
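A short sketch of how the mode can be computed for these series in Python (my addition; the grouped-data interpolation formula below is the standard one, and the numbers reuse the example tables above):

```
from collections import Counter

# Individual series: the mode is simply the most frequent value.
items = [5, 10, 20, 30, 40, 50, 60, 70]
print(Counter(items).most_common(1))  # every value occurs once: no clear mode

# Discrete series: pick the item with the highest frequency.
freq = {5: 2, 10: 5, 20: 1, 30: 3, 40: 12, 50: 0, 60: 5, 70: 7}
print(max(freq, key=freq.get))  # 40

# Continuous series: find the modal class, then interpolate with
# Mo = L + (f1 - f0) / (2*f1 - f0 - f2) * h
L, h = 30, 10          # modal class 30-40 (highest frequency, 12)
f0, f1, f2 = 3, 12, 0  # frequencies before, at, and after the modal class
print(L + (f1 - f0) / (2 * f1 - f0 - f2) * h)  # about 34.29
```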
|
2022-05-17 12:18:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17233110964298248, "perplexity": 1841.6451458662384}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662517245.1/warc/CC-MAIN-20220517095022-20220517125022-00089.warc.gz"}
|
https://core.ac.uk/display/141535689
|
## Cubical-like geometry of quasi-median graphs and applications to geometric group theory
### Abstract
The class of quasi-median graphs is a generalisation of median graphs, or equivalently of CAT(0) cube complexes. The purpose of this thesis is to introduce these graphs in geometric group theory. In the first part of our work, we extend the definition of hyperplanes from CAT(0) cube complexes, and we show that the geometry of a quasi-median graph essentially reduces to the combinatorics of its hyperplanes. In the second part, we exploit the specific structure of the hyperplanes to state combination results. The main idea is that if a group acts in a suitable way on a quasi-median graph so that clique-stabilisers satisfy some non-positively curved property $\mathcal{P}$, then the whole group must satisfy $\mathcal{P}$ as well. The properties we are interested in are mainly (relative) hyperbolicity, (equivariant) $\ell^p$-compressions, CAT(0)-ness and cubicality. In the third part, we apply our general criteria to several classes of groups, including graph products, Guba and Sapir's diagram products, some wreath products, and some graphs of groups. Graph products are our most natural examples, where the link between the group and its quasi-median graph is particularly strong and explicit; in particular, we are able to determine precisely when a graph product is relatively hyperbolic.

Comment: PhD Thesis, 257 pages. Comments are welcome.
Topics: Mathematics - Group Theory, Mathematics - Combinatorics, Mathematics - Metric Geometry, 20F65, 05C25, 20E22, 20F67, 20F69
Year: 2017
OAI identifier: oai:arXiv.org:1712.01618
|
2020-01-19 09:08:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4526655673980713, "perplexity": 583.2160680892503}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250594333.5/warc/CC-MAIN-20200119064802-20200119092802-00405.warc.gz"}
|
https://docs.rs/lazy-regex/latest/lazy_regex/
|
# Crate lazy_regex
Use the regex! macro to build regexes:
• they’re checked at compile time
• they’re wrapped in once_cell lazy static initializers so that they’re compiled only once
• they can hold flags as suffix: let case_insensitive_regex = regex!("ab*"i);
• regex creation is less verbose
This macro returns references to normal instances of regex::Regex so all the usual features are available.
You may also use shortcut macros for testing a match, replacing with concise closures, or capturing groups as substrings in some common situations; see the Macros section below.
Some structs of the regex crate are reexported to ease dependency management.
## Build Regexes
```
use lazy_regex::regex;

// build a simple regex
let r = regex!("sa+$");
assert_eq!(r.is_match("Saa"), false);

// build a regex with flag(s)
let r = regex!("sa+$"i);
assert_eq!(r.is_match("Saa"), true);

// you can use a raw literal
let r = regex!(r#"^"+$"#);
assert_eq!(r.is_match("\"\""), true);

// or a raw literal with flag(s)
let r = regex!(r#"^\s*("[a-t]*"\s*)+$"#i);
assert_eq!(r.is_match(r#" "Aristote" "Platon" "#), true);

// there's no problem using the multiline definition syntax
let r = regex!(r#"(?x)
    (?P<name>\w+)
    -
    (?P<version>[0-9.]+)
"#);
assert_eq!(r.find("This is lazy_regex-2.2!").unwrap().as_str(), "lazy_regex-2.2");
// (look at the regex_captures! macro to easily extract the groups)

// this line wouldn't compile because the regex is invalid:
// let r = regex!("(unclosed");
```
Supported regex flags: i, m, s, x, U.
## Test a match
```
use lazy_regex::regex_is_match;

let b = regex_is_match!("[ab]+", "car");
assert_eq!(b, true);
```
doc: regex_is_match!
## Extract a value
```
use lazy_regex::regex_find;

let f_word = regex_find!(r#"\bf\w+\b"#, "The fox jumps.");
assert_eq!(f_word, Some("fox"));
```
doc: regex_find!
## Capture
```
use lazy_regex::regex_captures;

let (_, letter) = regex_captures!("([a-z])[0-9]+"i, "form A42").unwrap();
assert_eq!(letter, "A");

let (whole, name, version) = regex_captures!(
    r#"(\w+)-([0-9.]+)"#,      // a literal regex
    "This is lazy_regex-2.0!", // any expression
).unwrap();
assert_eq!(whole, "lazy_regex-2.0");
assert_eq!(name, "lazy_regex");
assert_eq!(version, "2.0");
```
There’s no limit to the size of the tuple. It’s checked at compile time to ensure you have the right number of capturing groups.
You receive "" for optional groups with no value.
doc: regex_captures!
## Replace with captured groups
```
use lazy_regex::regex_replace_all;

let text = "Foo8 fuu3";
let text = regex_replace_all!(
    r#"\bf(\w+)(\d)"#i,
    text,
    |_, name, digit| format!("F<{}>{}", name, digit),
);
assert_eq!(text, "F<oo>8 F<uu>3");
```
The number of arguments given to the closure is checked at compilation time to match the number of groups in the regular expression.
## Shared lazy static
When a regular expression is used in several functions, you sometimes don’t want to repeat it but have a shared static instance.
The regex! macro, while being backed by a lazy static regex, returns a reference.
If you want to have a shared lazy static regex, use the lazy_regex! macro:
```
use lazy_regex::*;

pub static GLOBAL_REX: Lazy<Regex> = lazy_regex!("^ab+$"i);
```
Like for the other macros, the regex is static, checked at compile time, and lazily built at first use.
doc: lazy_regex!
## Macros

- `lazy_regex!`: Return an instance of `once_cell::sync::Lazy<regex::Regex>` that you can use in a public static declaration.
- `regex!`: Return a lazy static Regex checked at compilation time and built at first use.
- `regex_captures!`: Extract captured groups as a tuple of &str.
- `regex_find!`: Extract the leftmost match of the regex in the second argument, as a &str.
- `regex_is_match!`: Test whether an expression matches a lazy static regular expression (the regex is checked at compile time).
- `regex_replace!`: Replaces the leftmost match in the second argument with the value returned by the closure given as third argument.
- `regex_replace_all!`: Replaces all non-overlapping matches in the second argument with the value returned by the closure given as third argument.

## Structs

- `Captures`: Represents a group of captured strings for a single match.
- `Lazy`: A value which is initialized on the first access.
- `Regex`: A compiled regular expression for matching Unicode strings.
- `RegexBuilder`: A configurable builder for a regular expression.
|
2021-12-07 22:35:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4509677588939667, "perplexity": 11734.22716061176}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363418.83/warc/CC-MAIN-20211207201422-20211207231422-00545.warc.gz"}
|
http://www.ck12.org/chemistry/Electrons/lesson/Electrons-Chemistry-Intermediate/r5/
|
# Electrons
#### What causes a power outage?
In a power outage all your electrical equipment suddenly stops working. The radio was on just a minute ago and now it is silent. What happened? Somewhere between a power generator and your electrical device was an interruption. Power stopped flowing through the wires and into your radio. That “power” turns out to be electrons that move through the wires and cause an electrical current to flow.
### Is There Anything Inside an Atom?
As the nineteenth century began to draw to a close, the concept of atoms was well-established. We could determine the mass of different atoms and had some good ideas about the atomic composition of many compounds. Dalton’s atomic theory held that atoms were indivisible, so scientists did not ask questions about what was inside the atom – it was solid and could not be broken down further. But then things began to change.
#### The Electron
In 1897, English physicist J.J. Thomson (1856-1940) experimented with a device called a cathode ray tube, in which an electric current was passed through gases at low pressure. A cathode ray tube consists of a sealed glass tube fitted at both ends with metal disks called electrodes. The electrodes are then connected to a source of electricity. One electrode, called the anode , becomes positively charged while the other electrode, called the cathode , becomes negatively charged. A glowing beam (the cathode ray) travels from the cathode to the anode.
Earlier investigations by Sir William Crookes and others had been carried out to determine the nature of the cathode ray. Thomson modified and extended these experiments in an effort to learn about these mysterious rays. He discovered two things, which supported the hypothesis that the cathode ray consisted of a stream of particles.
• When an object was placed between the cathode and the opposite end of the tube, it cast a shadow on the glass.
• A cathode ray tube was constructed with a small metal rail between the two electrodes. Attached to the rail was a paddle wheel capable of rotating along the rail. Upon starting up the cathode ray tube, the wheel rotated from the cathode towards the anode. This proved that the cathode ray was made of particles which must have mass. Crookes had first observed this phenomenon and attributed it to pressure by these particles on the wheel. Thomson correctly surmised that these particles were producing heat, which caused the wheel to turn.
In order to determine if the cathode ray consisted of charged particles, Thomson used magnets and charged plates to deflect the cathode ray. He observed that cathode rays were deflected by a magnetic field in the same manner as a wire carrying an electric current, which was known to be negatively charged. In addition, the cathode ray was deflected away from a negatively charged metal plate and towards a positively charged plate.
Thomson knew that opposite charges attract one another, while like charges repel one another. Together, the results of the cathode ray tube experiments showed that cathode rays are actually streams of tiny negatively charged particles moving at very high speeds. While Thomson originally called these particles corpuscles, they were later named electrons.
Thomson conducted further experiments, which allowed him to calculate the charge-to-mass ratio $\left(\frac{e}{m_e}\right)$ of the electron. In units of coulombs to grams, this value is $1.8 \times 10^8$ coulombs/gram. He found that this value was a constant and did not depend on the gas used in the cathode ray tube or on the metal used as the electrodes. He concluded that electrons were negatively charged subatomic particles present in atoms of all elements.
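A quick back-of-the-envelope check with modern constants (my addition, not part of the original lesson):

```
# Charge-to-mass ratio of the electron from modern values
e = 1.602e-19    # electron charge in coulombs
m_e = 9.109e-28  # electron mass in grams
print(e / m_e)   # about 1.76e8 C/g, consistent with the ~1.8e8 quoted above
```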
Watch a video of a cathode ray tube experiment:
#### Summary
• Cathode rays are deflected by a magnetic field.
• The rays are deflected away from a negatively charged electrical field and toward a positively charged field.
• The charge/mass ratio for the electron is $1.8 \times 10^8$ coulombs/gram.
#### Practice
1. How old was Thomson when he enrolled in college?
2. What was his academic position at Cambridge?
3. Where and when did he announce his discovery of the electron?
4. What was he awarded in 1906?
#### Review
1. What is electric power made up of?
2. Whose work did Thomson repeat and revise?
3. What experiment did Thomson perform that showed cathode rays to be particles?
4. How did he show that these particles had a charge on them?
5. Was the charge positive or negative?
|
2014-09-02 01:56:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 1, "texerror": 0, "math_score": 0.48451223969459534, "perplexity": 929.227833323448}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535921318.10/warc/CC-MAIN-20140909050430-00249-ip-10-180-136-8.ec2.internal.warc.gz"}
|
https://math.stackexchange.com/questions/2142213/pole-becomes-essential-singularity-when-lifting-by-exponential
|
# pole becomes essential singularity when lifting by exponential
Let $z_0$ be a pole of function $f(z)$. Prove that $z_0$ is an essential singularity of $e^{f(z)}$.
I already know that $e^{f(z)}$ can be unbounded as $z\to z_0$, so $z_0$ must be either a pole or an essential singularity of $e^{f(z)}$. But when I assume $z_0$ is a pole, I can't find a contradiction.
Namely, I want to find a $z'$ in a neighborhood of $z_0$ such that $f(z')$ is purely imaginary, so that $|e^{f(z')}|=1$, contradicting the fact that $e^{f(z)}$ takes a neighborhood of $z_0$ to a neighborhood of $\infty$. Is my approach correct? Thanks for any help.
## 1 Answer
An idea: if $\;z_0\;$ is a pole of $\;f\;$ , then in some neighborhood of it we have a Laurent series for the function:
$$f(z)=\sum_{n=-k}^\infty a_n(z-z_0)^n\;,\;\;a_{-k}\neq0\implies\text{using the series for the exponential around}\;\;z_0:$$
$$e^{f(z)}=e^{a_{-k}(z-z_0)^{-k}+\ldots}=1+a_{-k}(z-z_0)^{-k}+\frac{\left(a_{-k}(z-z_0)^{-k}\right)^2}2+\frac{\left(a_{-k}(z-z_0)^{-k}\right)^3}6+\ldots$$
and we get infinite negative powers in the above development of $\;e^{f(z)}\;$ as powers of $\;z-z_0\implies z_0\;$ is an essential singularity.
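A concrete special case may help (my addition): take $f(z) = 1/z$, which has a simple pole at $z_0 = 0$. Then

$$e^{1/z} = \sum_{n=0}^\infty \frac{1}{n!\,z^n} = 1 + \frac{1}{z} + \frac{1}{2!\,z^2} + \cdots,$$

which has infinitely many negative powers of $z$, so $0$ is indeed an essential singularity of $e^{1/z}$.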
|
2019-11-21 17:11:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9974790811538696, "perplexity": 107.5283643975399}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670921.20/warc/CC-MAIN-20191121153204-20191121181204-00254.warc.gz"}
|
https://en.wikipedia.org/wiki/Ruppeiner_geometry
|
# Ruppeiner geometry
Ruppeiner geometry is thermodynamic geometry (a type of information geometry) using the language of Riemannian geometry to study thermodynamics. George Ruppeiner proposed it in 1979. He claimed that thermodynamic systems can be represented by Riemannian geometry, and that statistical properties can be derived from the model.
This geometrical model is based on the inclusion of the theory of fluctuations into the axioms of equilibrium thermodynamics; namely, there exist equilibrium states which can be represented by points on a two-dimensional surface (manifold), and the distance between these equilibrium states is related to the fluctuation between them. This concept is associated with probabilities, i.e. the less probable a fluctuation between states, the further apart they are. This can be recognized if one considers the metric tensor gij in the distance formula (line element) between the two equilibrium states
${\displaystyle ds^{2}=g_{ij}^{R}dx^{i}dx^{j},\,}$
where the matrix of coefficients gij is the symmetric metric tensor which is called a Ruppeiner metric, defined as a negative Hessian of the entropy function
${\displaystyle g_{ij}^{R}=-\partial _{i}\partial _{j}S(U,N^{a})}$
where U is the internal energy (mass) of the system and Na refers to the extensive parameters of the system. Mathematically, the Ruppeiner geometry is one particular type of information geometry and it is similar to the Fisher-Rao metric used in mathematical statistics.
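To illustrate the definition only (my addition; the entropy below is a deliberately simple toy function, not a physical equation of state), the components $g^R_{ij} = -\partial_i \partial_j S$ can be computed symbolically:

```
# Toy illustration of g^R_ij = -d_i d_j S(U, N) with SymPy
import sympy as sp

U, N = sp.symbols('U N', positive=True)
S = N * sp.log(U / N)            # toy entropy, NOT a physical model
g_R = -sp.hessian(S, (U, N))     # negative Hessian = Ruppeiner metric
print(sp.simplify(g_R))          # Matrix([[N/U**2, -1/U], [-1/U, 1/N]])
```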
The Ruppeiner metric can be understood as the thermodynamic limit (large systems limit) of the more general Fisher information metric.[1] For small systems (systems where fluctuations are large), the Ruppeiner metric may not exist, as second derivatives of the entropy are not guaranteed to be non-negative.
The Ruppeiner metric is conformally related to the Weinhold metric via
${\displaystyle ds_{R}^{2}={\frac {1}{T}}ds_{W}^{2}\,}$
where T is the temperature of the system under consideration. The conformal relation is easily proved by writing down the first law of thermodynamics (dU = TdS + ...) in differential form and performing a few manipulations. The Weinhold geometry is also considered a thermodynamic geometry. It is defined as the Hessian of the internal energy with respect to entropy and other extensive parameters.
${\displaystyle g_{ij}^{W}=\partial _{i}\partial _{j}U(S,N^{a})}$
It has long been observed that the Ruppeiner metric is flat for systems with noninteracting underlying statistical mechanics, such as the ideal gas. Curvature singularities signal critical behaviors. In addition, it has been applied to a number of statistical systems, including the van der Waals gas. Recently the anyon gas has been studied using this approach.
## Application to black hole systems
In the last five years or so, this geometry has been applied to black hole thermodynamics, with some physically relevant results. The most physically significant case is for the Kerr black hole in higher dimensions, where the curvature singularity signals thermodynamic instability, as found earlier by conventional methods.
The entropy of a black hole is given by the well-known Bekenstein-Hawking formula
${\displaystyle S={\frac {k_{B}c^{3}A}{4G\hbar }}}$
where ${\displaystyle k_{B}}$ is Boltzmann's constant, ${\displaystyle c}$ the speed of light, ${\displaystyle G}$ Newton's constant and ${\displaystyle A}$ is the area of the event horizon of the black hole. Calculating the Ruppeiner geometry of the black hole's entropy is, in principle, straightforward, but it is important that the entropy should be written in terms of extensive parameters,
${\displaystyle S=S(M,N^{a})}$
where ${\displaystyle M}$ is the ADM mass of the black hole and ${\displaystyle N^{a}}$ are the conserved charges, with ${\displaystyle a}$ running from 1 to n. The signature of the metric reflects the sign of the hole's specific heat. For a Reissner-Nordström black hole, the Ruppeiner metric has a Lorentzian signature, which corresponds to the negative heat capacity it possesses, while for the BTZ black hole we have a Euclidean signature. This calculation cannot be done for the Schwarzschild black hole, because its entropy is
${\displaystyle S=S(M)}$
which renders the metric degenerate.
## References
1. ^ Gavin E. Crooks, "Measuring thermodynamic length" (2007), ArXiv 0706.0559
|
2016-09-28 09:19:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 14, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8221662640571594, "perplexity": 331.8714487954536}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661327.59/warc/CC-MAIN-20160924173741-00045-ip-10-143-35-109.ec2.internal.warc.gz"}
|
https://kerodon.net/tag/013D
|
# Kerodon
Theorem 3.5.2.1. The geometric realization functor
$\operatorname{Set_{\Delta }}\rightarrow \operatorname{Set}\quad \quad X \mapsto |X|$
preserves finite limits. In particular, for every diagram of simplicial sets $X \rightarrow Z \leftarrow Y$, the induced map $| X \times _{Z} Y | \rightarrow |X| \times _{|Z|} |Y|$ is a bijection.
Proof of Theorem 3.5.2.1. Let $U: \operatorname{Top}\rightarrow \operatorname{Set}$ denote the forgetful functor. We wish to show that the composite functor
$\operatorname{Set_{\Delta }}\xrightarrow { | \bullet | } \operatorname{Top}\xrightarrow {U} \operatorname{Set}$
preserves finite limits. By virtue of Remark 3.5.2.7, we can write this composite functor as a filtered colimit of functors of the form $X \mapsto |X|_{S}$, where $S$ ranges over all finite subsets of the unit interval $[0,1]$ which contain $0$ and $1$. It will therefore suffice to show that each of the functors $X \mapsto |X|_{S}$ preserves finite limits. Using Proposition 3.5.2.9, we see that $X \mapsto |X|_{S}$ can be identified with the evaluation functor $X \mapsto X_{m}$, where $m$ is chosen so that there is an isomorphism of linearly ordered sets $[m] \simeq \pi _0( [0,1] \setminus S)$. $\square$
|
2020-02-26 19:07:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9837173819541931, "perplexity": 87.6956690093829}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146485.15/warc/CC-MAIN-20200226181001-20200226211001-00043.warc.gz"}
|
http://openturns.github.io/openturns/latest/user_manual/response_surface/_generated/openturns.AdaptiveStrategy.html
|
class AdaptiveStrategy(*args)
Base class for the construction of the truncated multivariate orthogonal basis.
Available constructors:
AdaptiveStrategy(orthogonalBasis, dimension)

AdaptiveStrategy(adaptiveStrategyImplementation)

Parameters:
- orthogonalBasis : OrthogonalBasis. An OrthogonalBasis.
- dimension : positive int. Number of terms of the basis. This first usage has the same implementation as the second with a FixedStrategy.
- adaptiveStrategyImplementation : AdaptiveStrategyImplementation. Adaptive strategy implementation which is a FixedStrategy, SequentialStrategy or a CleaningStrategy.
Notes
A strategy must be chosen for the selection of the different terms of the multivariate basis in which the response surface by functional chaos is expressed. The selected terms are regrouped in a finite subset of the full (infinite) basis.
There are three different strategies available: FixedStrategy, SequentialStrategy and CleaningStrategy.
These strategies are conceived in such a way as to be adaptable to other orthogonal expansions (other than polynomial). For the moment, their implementations are only useful for the polynomial chaos expansion.
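A minimal sketch of constructing each strategy (my addition, based on the constructors documented above; the basis and dimensions are arbitrary):

>>> import openturns as ot
>>> basis = ot.OrthogonalProductPolynomialFactory([ot.HermiteFactory()])
>>> fixed = ot.FixedStrategy(basis, 10)
>>> sequential = ot.SequentialStrategy(basis, 10)
>>> cleaning = ot.CleaningStrategy(basis, 100)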
Methods
- computeInitialBasis(): Compute initial basis for the approximation.
- getBasis(): Accessor to the underlying orthogonal basis.
- getClassName(): Accessor to the object’s name.
- getId(): Accessor to the object’s id.
- getImplementation(*args): Accessor to the underlying implementation.
- getMaximumDimension(): Accessor to the maximum dimension of the orthogonal basis.
- getName(): Accessor to the object’s name.
- getPsi(): Accessor to the orthogonal polynomials of the basis.
- setMaximumDimension(maximumDimension): Accessor to the maximum dimension of the orthogonal basis.
- setName(name): Accessor to the object’s name.
- updateBasis(alpha_k, residual, relativeError): Update the basis for the next iteration of approximation.
__init__(*args)
Initialize self. See help(type(self)) for accurate signature.
computeInitialBasis()
Compute initial basis for the approximation.
getBasis()
Accessor to the underlying orthogonal basis.
Returns: basis : OrthogonalBasis. Orthogonal basis on which the adaptive strategy is based.
getClassName()
Accessor to the object’s name.
Returns: class_name : str The object class name (object.__class__.__name__).
getId()
Accessor to the object’s id.
Returns: id : int Internal unique identifier.
getImplementation(*args)
Accessor to the underlying implementation.
Returns: impl : Implementation The implementation class.
getMaximumDimension()
Accessor to the maximum dimension of the orthogonal basis.
Returns: P : integer Maximum dimension of the truncated basis.
getName()
Accessor to the object’s name.
Returns: name : str The name of the object.
getPsi()
Accessor to the orthogonal polynomials of the basis.
Returns: polynomials : list of polynomials Sequence of analytical polynomials.
Notes
The method computeInitialBasis() must be applied first.
Examples
>>> import openturns as ot
>>> productBasis = ot.OrthogonalProductPolynomialFactory([ot.HermiteFactory()])
>>> adaptiveStrategy = ot.FixedStrategy(productBasis, 3)
>>> adaptiveStrategy.computeInitialBasis()
>>> print(adaptiveStrategy.getPsi())
[1,x0,-0.707107 + 0.707107 * x0^2]
setMaximumDimension(maximumDimension)
Accessor to the maximum dimension of the orthogonal basis.
Parameters: P : integer Maximum dimension of the truncated basis.
setName(name)
Accessor to the object’s name.
Parameters: name : str The name of the object.
updateBasis(alpha_k, residual, relativeError)
Update the basis for the next iteration of approximation.
Notes
No changes are made to the basis in the fixed strategy.
|
2019-02-21 20:00:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4905817210674286, "perplexity": 3606.909929190953}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247508363.74/warc/CC-MAIN-20190221193026-20190221215026-00166.warc.gz"}
|
https://brilliant.org/problems/skydiver/
|
# Skydiver
Calculus Level 1
Two forces act on a parachutist. One is $$mg,$$ the attraction by the earth, where $$m$$ is the mass of the person plus equipment and $$g=9.8 \text{ m/sec}^2$$ is the acceleration of gravity. The other force is the air resistance ("drag"), which is assumed to be proportional to the square of the velocity $$v(t)$$.
Using Newton's second law of motion (mass $$\times$$ acceleration = net force applied), set up an ordinary differential equation for $$v(t).$$
Let $$k$$ denote the drag coefficient.
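For reference, one standard way to set this up (my sketch of the expected model, not part of the original problem statement): taking downward as positive,

$$m \frac{dv}{dt} = mg - k v^2 ,$$

and setting $dv/dt = 0$ gives the terminal velocity $v_{\text{term}} = \sqrt{mg/k}$.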
|
2017-07-24 04:53:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9441474080085754, "perplexity": 294.8107165175426}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424721.74/warc/CC-MAIN-20170724042246-20170724062246-00569.warc.gz"}
|
https://www.mathworks.com/help/signal/ref/rlevinson.html?nocookie=true
|
# rlevinson
Reverse Levinson-Durbin recursion
## Syntax
r = rlevinson(a,efinal)
[r,u] = rlevinson(a,efinal)
[r,u,k] = rlevinson(a,efinal)
[r,u,k,e] = rlevinson(a,efinal)
## Description
The reverse Levinson-Durbin recursion implements the step-down algorithm for solving the following symmetric Toeplitz system of linear equations for r, where r = [r(1) … r(p+1)] and r(i)* denotes the complex conjugate of r(i).
$\left[\begin{array}{cccc}r\left(1\right)& r{\left(2\right)}^{\ast }& \cdots & r{\left(p\right)}^{\ast }\\ r\left(2\right)& r\left(1\right)& \cdots & r{\left(p-1\right)}^{\ast }\\ \vdots & \ddots & \ddots & \vdots \\ r\left(p\right)& \cdots & r\left(2\right)& r\left(1\right)\end{array}\right]\left[\begin{array}{c}a\left(2\right)\\ a\left(3\right)\\ \vdots \\ a\left(p+1\right)\end{array}\right]=\left[\begin{array}{c}-r\left(2\right)\\ -r\left(3\right)\\ \vdots \\ -r\left(p+1\right)\end{array}\right]$
r = rlevinson(a,efinal) solves the above system of equations for r given vector a, where a = [1 a(2) … a(p+1)]. In linear prediction applications, r represents the autocorrelation sequence of the input to the prediction error filter, where r(1) is the zero-lag element. The figure below shows the typical filter of this type, where H(z) is the optimal linear predictor, x(n) is the input signal, $\stackrel{^}{x}\left(n\right)$ is the predicted signal, and e(n) is the prediction error.
Input vector a represents the polynomial coefficients of this prediction error filter in descending powers of z.
$A\left(z\right)=1+a\left(2\right){z}^{-1}+\cdots +a\left(p+1\right){z}^{-p}$
The filter must be minimum phase to generate a valid autocorrelation sequence. efinal is the scalar prediction error power, which is equal to the variance of the prediction error signal, $\sigma^2(e)$.
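To make the linear system above concrete, here is a small numeric sketch in Python/SciPy rather than MATLAB (my addition; the autocorrelation values are made up):

```
# Solve the symmetric Toeplitz system R * [a(2), a(3)]' = -[r(2), r(3)]'
import numpy as np
from scipy.linalg import solve_toeplitz

r = np.array([3.0, 1.5, 0.5])            # r(1), r(2), r(3)
a_tail = solve_toeplitz(r[:-1], -r[1:])  # first column of R is [r(1), r(2)]
a = np.concatenate(([1.0], a_tail))      # a = [1, a(2), a(3)]
print(a)
```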
[r,u] = rlevinson(a,efinal) returns upper triangular matrix U from the UDU* decomposition
${R}^{-1}=U{E}^{-1}{U}^{\ast }$
where
$R=\left[\begin{array}{cccc}r\left(1\right)& r{\left(2\right)}^{\ast }& \cdots & r{\left(p\right)}^{\ast }\\ r\left(2\right)& r\left(1\right)& \cdots & r{\left(p-1\right)}^{\ast }\\ ⋮& \ddots & \ddots & ⋮\\ r\left(p\right)& \cdots & r\left(2\right)& r\left(1\right)\end{array}\right]$
and E is a diagonal matrix with elements returned in output e (see below). This decomposition permits the efficient evaluation of the inverse of the autocorrelation matrix, $R^{-1}$.
Output matrix u contains the prediction filter polynomial, a, from each iteration of the reverse Levinson-Durbin recursion
$U=\left[\begin{array}{cccc}{a}_{1}{\left(1\right)}^{\ast }& {a}_{2}{\left(2\right)}^{\ast }& \cdots & {a}_{p+1}{\left(p+1\right)}^{\ast }\\ 0& {a}_{2}{\left(1\right)}^{\ast }& \ddots & {a}_{p+1}{\left(p\right)}^{\ast }\\ 0& 0& \ddots & {a}_{p+1}{\left(p-1\right)}^{\ast }\\ ⋮& \ddots & \ddots & ⋮\\ 0& \cdots & 0& {a}_{p+1}{\left(1\right)}^{\ast }\end{array}\right]$
where ai(j) is the jth coefficient of the ith order prediction filter polynomial (i.e., step i in the recursion). For example, the 5th order prediction filter polynomial is
```a5 = u(5:-1:1,5)'
```
Note that u(p+1:-1:1,p+1)' is the input polynomial coefficient vector a.
[r,u,k] = rlevinson(a,efinal) returns a vector k of length (p+1) containing the reflection coefficients. The reflection coefficients are the conjugates of the values in the first row of u.
```k = conj(u(1,2:end))
```
[r,u,k,e] = rlevinson(a,efinal) returns a vector of length p+1 containing the prediction errors from each iteration of the reverse Levinson-Durbin recursion: e(1) is the prediction error from the first-order model, e(2) is the prediction error from the second-order model, and so on.
These prediction error values form the diagonal of the matrix E in the UDU* decomposition of $R^{-1}$.
${R}^{-1}=U{E}^{-1}{U}^{\ast }$
## References
[1] Kay, S.M., Modern Spectral Estimation: Theory and Application, Prentice-Hall, Englewood Cliffs, NJ, 1988.
|
2015-03-02 23:11:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 7, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9554840326309204, "perplexity": 921.3209680731379}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463093.76/warc/CC-MAIN-20150226074103-00231-ip-10-28-5-156.ec2.internal.warc.gz"}
|
https://tianrunhe.wordpress.com/2012/07/10/get-all-groups-of-strings-that-are-anagrams-anagrams/
|
## Get all groups of strings that are anagrams (Anagrams)
Given an array of strings, return all groups of strings that are anagrams.
Note: All inputs will be in lower-case.
Thoughts:
Anagrams have the same character counts mapping: “ate”, “eat” and “tea” have the same mapping: {‘a’:1, ‘e’:1, ‘t’:1}. So for each string in the group, we treat it as a mapping between chars and ints. If there are $n > 1$ identical mappings, we know there are $n$ anagrams. Therefore our algorithm works this way: in the 1st scan of the group, we build two hash-tables: for each string, we connect it with its character counts mapping (for later quick reference); for each character counts mapping, we accumulate its count. In the 2nd scan of the group, for each string, we use the 1st hash-table to quickly find its character counts mapping. When we get the mapping, we plug it into the second hash-table and fetch its count; if the count is greater than 1, we know this string is a member of an anagram group, so we add it to the solution array. Hence it's an $O(n)$ algorithm.
Code (Java):
```
public class Solution {
public ArrayList<String> anagrams(String[] strs) {
HashMap<String, HashMap<Character, Integer>> strCharsMap
= new HashMap<String, HashMap<Character, Integer>>();
HashMap<HashMap<Character, Integer>, Integer> charsCountMap
= new HashMap<HashMap<Character, Integer>, Integer>();
for(String s : strs) {
HashMap<Character, Integer> map =
new HashMap<Character, Integer>();
for(int i = 0; i < s.length(); ++i) {
char c = s.charAt(i);
map.put(c, map.get(c) == null ? 1 : map.get(c)+1);
}
strCharsMap.put(s, map);
charsCountMap.put(map, charsCountMap.get(map) == null ?
1 : charsCountMap.get(map) + 1);
}
ArrayList<String> sol = new ArrayList<String>();
for(String s : strs) {
if(charsCountMap.get(strCharsMap.get(s)) > 1)
sol.add(s); // s belongs to a group of anagrams
}
return sol;
}
}
```

Code (C++):

```
class Solution {
public:
vector<string> anagrams(vector<string> &strs) {
map<string, map<char, int> > strCharsMap;
map<map<char, int>, int> charsCountMap;
for(int i = 0; i < strs.size(); ++i) {
string s = strs[i];
map<char, int> map;
for(int j = 0; j < s.size(); ++j) {
char c = s[j];
map[c] += 1;
}
strCharsMap[s] = map;
charsCountMap[map] += 1;
}
vector<string> sol;
for(int i = 0; i < strs.size(); ++i) {
if(charsCountMap[strCharsMap[strs[i]]] > 1)
sol.push_back(strs[i]);
}
return sol;
}
};
```
|
2018-01-22 06:26:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3052402436733246, "perplexity": 11069.904268019505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891105.83/warc/CC-MAIN-20180122054202-20180122074202-00665.warc.gz"}
|
http://math.stackexchange.com/questions/291681/help-solving-summation-series-of-a-recursive-function?answertab=active
|
# Help solving summation series of a recursive function
Yesterday in class, we were analyzing the Karatsuba multiplication algorithm and how it applies to recurrence equations. Time ran short, and I feel I missed how to solve the final summation.
First, we defined the recurrence equation as
$$T(n) = 3T \left(\frac{n}{2}\right) + 4n$$
and applied a recurrence tree such like
$$T(n) = 3T \left(\frac{n}{2}\right) + 4n \Rightarrow 4n \cdot \left(\frac{3}{2}\right)^0$$
$$T\left(\frac{n}{2}\right) = 3T \left (\frac{n}{4} \right) + 4\left(\frac{n}{2}\right) \Rightarrow 4n \cdot \left(\frac{3}{2}\right)^1$$
$$T\left(\frac{n}{4}\right) = 3T \left (\frac{n}{8} \right) + 4\left(\frac{n}{4}\right) \Rightarrow 4n \cdot \left(\frac{3}{2}\right)^2$$
$$T\left(\frac{n}{8}\right) = 3T \left (\frac{n}{16} \right) + 4\left(\frac{n}{8}\right) \Rightarrow 4n \cdot \left(\frac{3}{2}\right)^3$$
Because the denominator doubles at each level, the tree has $\log_2 n$ levels, and we defined the summation as
$$\sum_{x=0}^{\log_2 n} 4n \cdot \left(\frac{3}{2}\right)^x$$
Time was running short, so several steps were skipped, and the final solution was given as
$$9\cdot 3^{\log_2 n} = 9n^{\log_2 3} = 9n^{1.58} = O(n^{1.58})$$
based on the properties
$$a^{\lg b} = b^{\lg a}\quad \text{and}\quad \log_2 3 \approx 1.58$$
I've tried applying the summation formula
$$\sum_{x=0}^{n}r^x = \frac{r^{n+1}-1}{r-1}$$
with this result, and end up with
$$\sum_{x=0}^{n}r^x = \frac{r^{n+1}-1}{r-1} = \sum_{x=0}^{log_2 n} 4n \cdot \left(\frac{3}{2}\right)^x$$
$$= 4\left(n\cdot \frac{\frac{3}{2}^{lg_2n+1}-1}{\frac{3}{2}-1}\right) = 4\left(n \cdot \frac{\frac{3}{2}^{log_2n+1}-1}{\frac{1}{2}}\right) = 2\left(n \cdot \frac{3}{2}^{log_2n+1}+1\right)$$
$$=2n \cdot 3^{log_2n+1} + 2$$
which is very different than the solution given. Where did I go wrong?
-
The crucial mistake is that you passed somehow from $$(\frac32)^{(\log_2 n) +1}$$ to $$3^{(\log_2 n )+ 1}.$$ Using the rough formula that $$\sum_{x=0}^n r^x = O(r^n) \qquad \qquad\text{if r>1},$$ you should get $$O(n (\frac32)^{\log_2 n})=O(n e^{(\ln n) \frac{\ln \frac32}{\ln 2}}) = O(n^{1+\frac{\ln \frac32}{\ln 2}}).$$ Since $$1 + \frac{\ln \frac32}{\ln 2} = \frac{\ln 3}{\ln 2} = \log_2 3 = 1.58496...$$ this is the same as the result you got in class.
This recurrence has the nice property that we can compute explicit values for $T(n)$ the same way as was done here, for example.
Let $$n = \sum_{k=0}^{\lfloor \log_2 n \rfloor} d_k 2^k$$ be the binary digit representation of $n.$ It is not difficult to see that with $T(0)=0$ we have $$T(n) = 4 \sum_{j=0}^{\lfloor \log_2 n \rfloor} 3^j \sum_{k=j}^{\lfloor \log_2 n \rfloor} d_k 2^{k-j} = 4 \sum_{j=0}^{\lfloor \log_2 n \rfloor} \left( \frac{3}{2} \right)^j \sum_{k=j}^{\lfloor \log_2 n \rfloor} d_k 2^k.$$ Now for an upper bound consider $n$ consisting only of one digits, giving $$T(n) \le 4 \sum_{j=0}^{\lfloor \log_2 n \rfloor} \left( \frac{3}{2} \right)^j \sum_{k=j}^{\lfloor \log_2 n \rfloor} 2^k = 4 \sum_{j=0}^{\lfloor \log_2 n \rfloor} \left( \frac{3}{2} \right)^j \left( 2^{\lfloor \log_2 n \rfloor + 1} - 2^j\right)$$ which is $$4 \left( 2^{\lfloor \log_2 n \rfloor + 1} \frac{(3/2)^{\lfloor \log_2 n \rfloor + 1}-1}{3/2-1} - \sum_{j=0}^{\lfloor \log_2 n \rfloor}3^j \right) = 4 \left(2 \left( 3^{\lfloor \log_2 n \rfloor + 1} - 2^{\lfloor \log_2 n \rfloor + 1}\right) - \frac{3^{\lfloor \log_2 n \rfloor + 1}-1}{3-1} \right) = 2\times 3^{\lfloor \log_2 n \rfloor + 2} - 2^{\lfloor \log_2 n \rfloor + 4} + 2.$$ For a lower bound, take all digits zero except the leading one, getting $$T(n) \ge 4 \sum_{j=0}^{\lfloor \log_2 n \rfloor} \left( \frac{3}{2} \right)^j 2^{\lfloor \log_2 n \rfloor} = 2^{\lfloor \log_2 n \rfloor + 2} \frac{(3/2)^{\lfloor \log_2 n \rfloor + 1}-1}{3/2-1} = 4\times 3^{\lfloor \log_2 n \rfloor + 1} - 2^{\lfloor \log_2 n \rfloor + 3} .$$ The lower bound and the upper bound taken together show that $$T(n) \in \Theta\left(3^{\lfloor \log_2 n \rfloor}\right) = \Theta\left(2^{\log_2 3 \lfloor \log_2 n \rfloor} \right) = \Theta\left(n^{\log_2 3}\right).$$
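As a quick empirical cross-check of the asymptotics (my addition, not part of the original answer):

```
# T(n) = 3 T(n/2) + 4 n with T(1) = 0; the ratio T(n) / n^(log2 3)
# should settle toward a constant as n grows.
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    if n <= 1:
        return 0
    return 3 * T(n // 2) + 4 * n

for k in (10, 15, 20):
    n = 2 ** k
    print(n, T(n) / n ** math.log2(3))  # tends to 8 for powers of two
```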
|
2015-04-25 09:33:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9546605944633484, "perplexity": 272.5106476936702}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246648209.18/warc/CC-MAIN-20150417045728-00027-ip-10-235-10-82.ec2.internal.warc.gz"}
|
http://math.stackexchange.com/questions?page=2&sort=votes
|
All Questions
72k views
What is the intuitive relationship between SVD and PCA
Singular value decomposition (SVD) and principal component analysis (PCA) are two eigenvalue methods used to reduce a high-dimensional dataset into fewer dimensions while retaining important ...
21k views
How can a piece of A4 paper be folded in exactly three equal parts?
This is something that always annoys me when putting an A4 letter in a oblong envelope: one has to estimate where to put the creases when folding the letter. I normally start from the bottom and on ...
12k views
Stopping the “Will I need this for the test” question [closed]
I am a college professor in the American education system and find that the major concern of my students is trying to determine the specific techniques or problems which I will ask on the exam. This ...
8k views
In (relatively) simple words: What is an inverse limit?
I am a set theorist in my orientation, and while I did take a few courses that brushed upon categorical and algebraic constructions, one has always eluded me. The inverse limit. I tried to ask one of ...
21k views
Why can a Venn diagram for 4+ sets not be constructed using circles?
This page gives a few examples of Venn diagrams for 4 sets. Some examples: Thinking about it for a little, it is impossible to partition the plane into the $16$ segments required for a complete ...
10k views
Mental Calculations
This is the famous picture "Mental Arithmetic. In the Public School of S. Rachinsky." by the Russian artist Nikolay Bogdanov-Belsky. The problem presented on a blackboard requires computing the ...
6k views
How to show $e^{e^{e^{79}}}$ is not an integer
In this question, I needed to assume in my answer that $e^{e^{e^{79}}}$ is not an integer. Is there some standard result in number theory that applies to situations like this? Much later addendum: ...
5k views
Why does this matrix give the derivative of a function?
I happened to stumble upon the following matrix: $$A = \begin{bmatrix} a & 1 \\ 0 & a \end{bmatrix}$$ And after trying a bunch of different examples, I noticed the ...
8k views
Are there any open mathematical puzzles?
Are there any (mathematical) puzzles that are still unresolved? I only mean questions that are accessible to and understandable by the complete layman and which have not been solved, despite serious ...
18k views
Examples of mathematical discoveries which were kept as a secret
There could be several personal, social, philosophical and even political reasons to keep a mathematical discovery as a secret. For example it is completely expected that if some mathematician find ...
25k views
Is $0.999999999… = 1$?
I'm told by smart people that $0.999999999\ldots = 1$, and I believe them, but is there a proof that explains why this is?
182k views
How many sides does a circle have?
My son is in 2nd grade. His math teacher gave the class a quiz, and one question was this: If a triangle has 3 sides, and a rectangle has 4 sides, how many sides does a circle have? My first ...
5k views
What does $2^x$ really mean when $x$ is not an integer?
We all know that $2^5$ means $2\times 2\times 2\times 2\times 2 = 32$, but what does $2^\pi$ mean? How is it possible to calculate that without using a calculator? I am really curious about this, so ...
15k views
How to put 9 pigs into 4 pens so that there are an odd number of pigs in each pen?
So I'm tutoring at the library and an elementary or pre K student shows me a sheet with one problem on it: Put 9 pigs into 4 pens so that there are an odd number of pigs in each pen. I tried to ...
45k views
Best book ever on Number Theory
Which is the single best book for Number Theory that everyone who loves Mathematics should read?
7k views
A 1,400 years old approximation to the sine function by Mahabhaskariya of Bhaskara I
The approximation $$\sin(x) \simeq \frac{16 (\pi -x) x}{5 \pi ^2-4 (\pi -x) x}\qquad (0\leq x\leq\pi)$$ was proposed by Mahabhaskariya of Bhaskara I, a seventh-century Indian mathematician. I ...
6k views
How discontinuous can a derivative be?
There is a well-known result in elementary analysis due to Darboux which says if $f$ is a differentiable function then $f'$ satisfies the intermediate value property. To my knowledge, not many ...
6k views
Symmetry of function defined by integral
Define a function $f(\alpha, \beta)$, $\alpha \in (-1,1)$, $\beta \in (-1,1)$ as $$f(\alpha, \beta) = \int_0^{\infty} dx \: \frac{x^{\alpha}}{1+2 x \cos{(\pi \beta)} + x^2}$$ One can use, for ...
9k views
Is 2048 the highest power of 2 with all even digits (base ten)?
I have a friend who turned 32 recently. She has an obsessive compulsive disdain for odd numbers, so I pointed out that being 32 was pretty good since not only is it even, it also has no odd factors. ...
|
2015-12-02 02:11:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7741166353225708, "perplexity": 568.0567428984523}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398525032.0/warc/CC-MAIN-20151124205525-00037-ip-10-71-132-137.ec2.internal.warc.gz"}
|
https://tex.stackexchange.com/questions/285238/avoid-figure-above-section-title/285242
|
# Avoid figure above section title [duplicate]
I am generally happy with where LaTeX puts the various figures in my document, except when it puts them on top of the title of its own section. This is particularly annoying if the section starts on a new page. How can you avoid that?
I know that you can use the option \begin{figure}[b] to make LaTeX put the figure at the bottom of a page, but firstly this often leads to all figures being clumped together at the end of the section and secondly I don't mind if the figures are on top of a page unless they come before the section title. I am sure there is an easy answer for that, right?
## marked as duplicate by egreg, user31729, user13907, lockstep, Herr K. Dec 30 '15 at 19:15
• How about \begin{figure}[h]? Without an MWE it might be difficult to help you directly, but you should have a look at Figure/table positioning for general solutions. – Runar Dec 30 '15 at 15:25
• Well, I don't need the figure to be at this position. If I used \begin{figure}[h] I'd need to do this on most of my figures, but I prefer that they are positioned wherever they fit best, except before the section title. I was hoping that there is a global option/package that avoids this kind of positioning. – Dietmar Haba Dec 30 '15 at 15:32
• add \usepackage{flafter} then latex will never float images backwards to before their point in the source. – David Carlisle Dec 30 '15 at 15:37
• Thanks for your quick answers. While \usepackage{flafter} again does more than I wanted, it works fine for me and does the job. Thanks a lot! – Dietmar Haba Dec 30 '15 at 15:46
Using \usepackage{flafter} prevents figures from being floated back to before the point where they are defined, which did the job in my case.
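For reference, a minimal sketch of a document using the fix (the `example-image` placeholder graphic is assumed to be available, as shipped with the `mwe` package):

```latex
\documentclass{article}
\usepackage{graphicx}
\usepackage{flafter}% floats may no longer drift back before their point in the source
\begin{document}
\section{A section title}
\begin{figure}
  \centering
  \includegraphics[width=.5\linewidth]{example-image}% placeholder graphic
  \caption{With flafter loaded, this figure cannot float above the section title.}
\end{figure}
\end{document}
```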
|
2019-06-26 08:24:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8319011330604553, "perplexity": 959.3640107257086}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000231.40/warc/CC-MAIN-20190626073946-20190626095946-00340.warc.gz"}
|
https://mathleadershipcorps.com/question/every-surd-is-an-irrational-number-but-every-irrational-number-need-not-be-a-surd-justify-your-a-19201748-25/
|
## Every surd is an irrational number but every irrational number need not be a surd, justify your answer
Question
Every surd is an irrational number but every irrational number need not be a surd, justify your answer
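A sketch of the usual justification, assuming the standard definition of a surd as an irrational $n$-th root of a rational number:

$$\sqrt{2}\ \text{is a surd, hence irrational;}\qquad \pi\ \text{is irrational, yet}\ \pi \neq \sqrt[n]{a}\ \text{for every rational}\ a,$$

because every $n$-th root of a rational number is algebraic, while $\pi$ (like $e$) is transcendental. So every surd is irrational, but irrational numbers such as $\pi$ and $e$ are not surds.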
|
2021-12-07 21:55:07
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8769758343696594, "perplexity": 2142.6334268248343}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363418.83/warc/CC-MAIN-20211207201422-20211207231422-00169.warc.gz"}
|
https://deheerenkamer.com/mq69x3/97d113-square-shape-definition
|
A shape can be defined as the form of an object: its outline, outer boundary, or outer surface. In geometry, a square is a regular quadrilateral: it has four sides of equal length and four corners that are all right angles (each being 360°/4 = 90°), and its opposite sides are parallel. Equivalent characterizations include: a rectangle with two adjacent equal sides; a quadrilateral with four equal sides and four right angles; a parallelogram with one right angle and two adjacent equal sides; and a quadrilateral whose diagonals are equal and are the perpendicular bisectors of each other (i.e., a rhombus with equal diagonals).
The square is a highly symmetric object. Its symmetry group is the dihedral group Dih4 of order 8, which contains dihedral subgroups such as d4 (the symmetry of a rectangle) and p4 (the symmetry of a rhombus), and cyclic subgroups Z4 and Z2. Among quadrilaterals, the square encloses the largest area for a given perimeter and has the least perimeter enclosing a given area; a square centered at the origin can also be described by the equation "x2 or y2, whichever is larger, equals 1". Squares behave differently in non-Euclidean geometry: in spherical geometry the angles of a square are larger than right angles, while in hyperbolic geometry squares with right angles do not exist and a square's angles are smaller than right angles. The crossed square, a faceting of the rectangle, is sometimes likened to a bow tie or butterfly. A "squircle", a portmanteau of the words "square" and "circle", is an intermediate shape based on the superellipse. In three dimensions, a square prism, the shape of many milk and juice boxes around us, has 6 faces, 12 edges, and 8 corners.
In everyday, non-geometric usage, "square" also names an open area in a town or city (a town centre thick with churches and cafe-lined squares; a house in one of Pimlico's prettiest garden squares), a square-shaped piece of something (a cake cut in squares; the squares around the dates on a calendar), or simply a square-shaped thing (a square table, which seats fewer people than a round one in the same space; a square face, with straight sides and a slightly angular jawline).
|
2021-07-27 12:15:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3174456059932709, "perplexity": 2474.4071911709766}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153391.5/warc/CC-MAIN-20210727103626-20210727133626-00560.warc.gz"}
|
https://proofwiki.org/wiki/Category:Transitive_Classes
|
# Category:Transitive Classes
This category contains results about Transitive Classes.
Definitions specific to this category can be found in Definitions/Transitive Classes.
Let $A$ denote a class, which can be either a set or a proper class.
Then $A$ is transitive if and only if every element of $A$ is also a subclass of $A$.
That is, $A$ is transitive if and only if:
$x \in A \implies x \subseteq A$
or:
$\forall x: \forall y: (x \in y \land y \in A \implies x \in A)$
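For example, the von Neumann natural number $2$ is transitive:

$$2 = \{0, 1\} = \{\varnothing, \{\varnothing\}\}, \qquad 0 = \varnothing \subseteq 2, \qquad 1 = \{\varnothing\} \subseteq 2$$

By contrast, $\{1\} = \{\{\varnothing\}\}$ is not transitive, since $1 \in \{1\}$ but $1 \not\subseteq \{1\}$.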
## Subcategories
This category has the following 8 subcategories, out of 8 total.
## Pages in category "Transitive Classes"
The following 18 pages are in this category, out of 18 total.
|
2023-03-31 09:46:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.861431360244751, "perplexity": 594.1412405731197}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949598.87/warc/CC-MAIN-20230331082653-20230331112653-00190.warc.gz"}
|
https://www.physicsforums.com/threads/rigorous-definition-of-derivative.569424/
|
# Rigorous definition of derivative
1. Jan 21, 2012
### Chsoviz0716
Hi, I have a question about the definition of derivative.
As far as I know, for a real-valued function f defined on a subset of R, the derivative of f at x is the limit of
(f(x+h)-f(x))/h as h → 0.
And if it exists, f is said to be differentiable at x.
What if I define f : Q → R as follows,
f(x) = sin(x) (the domain of f is all rational numbers.)
Then it satisfies every condition to be a differentiable function, thus it's a differentiable function on its domain.
Is this a valid argument?
Or what about f : [0,1]U[2,3] → R defined as follows,
f(x) = 1 when x belongs to the first closed interval
2 when x belongs to the second closed interval
The values f'(1) and f'(2) seem to exist. But do they really?
And one small question: when you have f'(x) = ∞, it implies f isn't defined at x. Is it even possible to have such an equality? If f(x) isn't defined, how can f'(x) be calculated in the first place, when the definition of f'(x) requires the existence of f(x)?
Thank you in advance.
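As a quick numerical illustration of the first example (a sketch only: the point and the increments are rational, and the difference quotient settles toward cos(1) as expected):

```python
from fractions import Fraction
import math

x = Fraction(1)                      # a rational point of the domain Q
for k in range(1, 6):
    h = Fraction(1, 10**k)           # rational increments shrinking toward 0
    dq = (math.sin(x + h) - math.sin(x)) / float(h)
    print(f"h = 10^-{k}: difference quotient = {dq:.10f}")

print(f"cos(1)      = {math.cos(1):.10f}")
```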
2. Jan 21, 2012
### JG89
As far as I know, differentiability is defined for maps whose domain is an open interval (a,b), or a union of such intervals.
f'(1) and f'(2) are considered one-sided derivatives. For example, for f'(1) the difference quotient $\frac{f(1+h) - f(1)}{h}$ cannot be formed if h > 0, so we would calculate f'(1) by evaluating the limit $\lim_{h \rightarrow 0^{-}} \frac{f(1+h) - f(1)}{h}$.
However, more formally, if $f: S \rightarrow \mathbb{R}$ is a map where $S \subset \mathbb{R}$ and S isn't necessarily an open interval, we say that f is differentiable if we can find a differentiable map $g: T \rightarrow \mathbb{R}$ such that:
1) $S \subset T$
2) $g = f$ on $S \cap T$
3) T is an open interval.
In this case we say that we have extended f to a map g.
Saying that $f'(x) = \infty$ for some number x doesn't really make sense. What you are really trying to say is that $\lim_{x \rightarrow a} f'(x) = \infty$ for some number a.
For example, look at the graph of $f(x) = \frac{1}{x}$. If you look at the graph, you can see that as x approaches 0, f'(x) will grow without bound in magnitude, since the tangent lines become increasingly steep. To algebraically verify this, just note that $\frac{d}{dx} (\frac{1}{x}) = \frac{-1}{x^2}$ and that upon letting $x \rightarrow 0$ we see that $\frac{d}{dx} (\frac{1}{x})$ becomes infinite in magnitude.
3. Jan 21, 2012
### pwsnafu
Consider $f(x) = x^{1/3}$ at $x=0$.
4. Jan 21, 2012
### Chsoviz0716
Thank you very much!!
There's one thing I'm still not sure about.
If it says f'(a) exists, can I take it that f is defined on some open interval that includes a?
Or does it cover the case where 'a' can be an end point of some closed interval such as [a,b] on which f is defined?
5. Jan 21, 2012
### JG89
Well, consider the map $f: [a,b] \rightarrow \mathbb{R}$ where $f(x) = x^2$. One would surely say that f'(a) exists; however, f isn't defined on an open interval containing 'a'. One could of course extend f to make this possible, but you aren't asking about the extended map, you're asking about f, and we have defined it so that its domain is [a,b]!
6. Jan 21, 2012
### Fredrik
Staff Emeritus
The statement "f is differentiable on [a,b]" is sometimes defined to mean that the domain of f is an open set that contains [a,b], and for some ε>0, f is differentiable on (a-ε,b+ε). I think I saw something like this in "Introduction to smooth manifolds", by John M. Lee.
"f is differentiable" usually means that "the limit that defines f'(x) exists for all x in the domain of f". It only makes sense to talk about that limit when x is an interior point of the domain, so if we are to use phrases like "f is differentiable" without fancy definitions like the one above, the domain must be an open set.
7. Jan 21, 2012
### jgens
As a general note, it is possible to define the derivative of mappings between normed vector spaces using bounded linear operators. I think that this is generally introduced in functional analysis type courses so I am guessing it is not what the OP is looking for.
This is a fairly common interpretation. Another interpretation just requires the left- or right-hand limits to exist at the end points.
8. Jan 22, 2012
### Tarantinism
You only need an accumulation point to define a limit. So, why not on the set of rational numbers?
The only problem... maybe it's useless, at least for physical purposes. But you can take such limits for a rational-to-real function.
9. Jan 22, 2012
### Fredrik
Staff Emeritus
Because the limit we're talking about is the limit of
$$\frac{f(x+h)-f(x)}{h}$$ as $h\to 0$, and the numerator (specifically "f(x)") makes sense only when x is in the domain.
10. Jan 22, 2012
### Tarantinism
Of course, so let x be the rational (an accumulation point) and let h run over a neighborhood of x;
you only need an accumulation point to define the limit.
11. Jan 22, 2012
### Studiot
Why have you chosen this and what does your book say about continuity?
12. Jan 22, 2012
### Tarantinism
The weird examples from mathematicians... :D
Just to ask whether it is differentiable.
I think yes: you can take the derivative on the whole domain (all points are accumulation points, so you can take the limit); the limits exist, so they define the derivative.
This would not be possible if the domain were the set of integers: they are not accumulation points.
13. Jan 22, 2012
### Fredrik
Staff Emeritus
An accumulation point of what? I would have guessed that you meant an accumulation point of the domain of f, but then you agreed that x needs to be a member of the domain.
$f(x)\to y$ as $x\to a$ if for all $\varepsilon>0$, there's a $\delta>0$ such that for all x, $0<|x-a|<\delta\ \Rightarrow\ |f(x)-y|<\varepsilon$. Note that the entire punctured interval $(a-\delta,a+\delta)\setminus\{a\}$ is mapped into $\{z\in\mathbb R:|z-y|<\varepsilon\}$, so f must be defined on that entire punctured interval. This is why this definition doesn't work for functions defined on e.g. $[0,1]\cap\mathbb Q$.
To define f'(x), x must be an interior point of an open subset of the domain of f. It certainly isn't enough that x is an accumulation point of the domain, if that's what you meant.
14. Jan 22, 2012
### Studiot
Is f(x) = {sin(x) such that x is rational} a continuous function?
What about all the non-rational x, for which sin(x) is not defined?
15. Jan 22, 2012
### Fredrik
Staff Emeritus
I assume that you're asking about the function $f:\mathbb Q\to\mathbb R$ defined by f(x)=sin x for all $x\in\mathbb Q$. Since it's not defined on any open subset of $\mathbb R$, it can't satisfy the definition of "continuous at x" for any x.
So it's not continuous with respect to the topology of $\mathbb R$. It is however continuous with respect to the topology of $\mathbb Q$.
16. Jan 22, 2012
### Studiot
I'm just trying to clarify why the OP specified a sine function with lots of 'gaps' in it, by restricting the domain to Q.
For instance, if we consider an epsilon-delta argument, should the values of epsilon and delta be restricted to rational numbers, and if so, why?
17. Jan 22, 2012
### Tarantinism
Why not? You can take a limit as well. You are too restrictive with your real-analysis definition.
18. Jan 23, 2012
### Fredrik
Staff Emeritus
I already answered that. If x isn't in the domain, "f(x)" doesn't make sense, and we're talking about the limit of
$$\frac{f(x+h)-f(x)}{h}$$ as $h\to 0$.
19. Jan 23, 2012
### Studiot
Not only x but also h?
20. Jan 23, 2012
### Tarantinism
Ah, of course. But on the intersection of the domain with the accumulation points of the domain (any rational, in this case). It's not rigorously necessary to have an interior point to take the limit in a topological sense, right?
|
2018-08-18 06:35:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8899930119514465, "perplexity": 465.12304999349124}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221213405.46/warc/CC-MAIN-20180818060150-20180818080150-00073.warc.gz"}
|
http://math.stackexchange.com/questions/235860/example-of-infinite-group-with-infinitely-many-simple-subgroups
|
# Example of infinite group with infinitely many simple subgroups
What is an example of an infinite group with a composition series and infinitely many simple subgroups?
-
One example is the direct sum of all the finite simple groups (more precisely, pick one for each isomorphism class).
Another (perhaps less cheat-y) one is the group of permutations of $\mathbb N$, which contains all the alternating groups $A_n$ as subgroups.
-
The first doesn't quite work (composition series are generally finite), but the second is fine. – Jack Schmidt Nov 12 '12 at 19:16
Why is the composition series in the second case finite? Also, to be clear, do you mean all permutations of $\mathbb{N}$ or only the finitely supported ones? (Or does it matter?) – Jason DeVito Nov 12 '12 at 20:35
@JasonDeVito: only matters a little. both have composition series of finite length, but different lengths. Alt(finitary) <= Sym(finitary) <= Sym(all N) or so, I believe. Scott's Group Theory textbook has a nice description of composition series of symmetric groups of infinite sets. – Jack Schmidt Nov 12 '12 at 21:12
I see - I think the part I was missing was that $A_{\text{finitary}}$ is also simple. – Jason DeVito Nov 12 '12 at 21:15
|
2016-02-07 06:47:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8914626836776733, "perplexity": 532.5445192798543}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701148558.5/warc/CC-MAIN-20160205193908-00304-ip-10-236-182-209.ec2.internal.warc.gz"}
|
https://physics.stackexchange.com/questions/247757/grounding-spherical-shells
|
# Grounding spherical shells
A spherical shell $A$ with radius $a$ and charge $Q_1$ sits inside a spherical shell $B$ with radius $b$ and charge $Q_2$.
Now $A$ is grounded. Since no net force acts on $Q_1$, I expected all of it to be neutralized, so that in the end the charge on $A$ would be zero.
But when we set the potential on $A$ to zero (since it is grounded), we find that $A$ carries the charge $-aQ_2/b$. I know that's the right answer, and that what I said earlier, that the charge should be zero, is wrong.
My doubt: grounding should remove the excess charge, which is free to move, so how did a negative charge end up there? Please explain what is happening inside.
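A sketch of the calculation behind that value, with $q_A$ the final charge on $A$ and $k = 1/4\pi\varepsilon_0$ (standard electrostatics, nothing beyond superposition of shell potentials):

$$V_A = \frac{k q_A}{a} + \frac{k Q_2}{b} = 0 \quad\Longrightarrow\quad q_A = -\frac{a}{b} Q_2$$

The outer shell holds the whole interior region, including shell $A$, at the potential $kQ_2/b$; the only way the grounded inner shell can sit at zero potential is to draw the compensating negative charge $-aQ_2/b$ up from ground.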
|
2021-10-16 18:13:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 12, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8843398690223694, "perplexity": 241.2474229371803}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323584913.24/warc/CC-MAIN-20211016170013-20211016200013-00374.warc.gz"}
|
http://reactionwheel.net/2009/03/supply-of-what.html
|
Supply of What?
Why are online CPMs so low?
Why is the reader of an article on NYTimes.com worth a third of the exact same person reading the exact same article on paper?
Dumb question. Everyone knows that CPMs are low because there is an excess of advertising inventory: supply and demand.
This can’t be true.
Let's say you had a newspaper, like in the old days, and you sold a quarter page ad for a $25 CPM. And let's say that advertisers thought that was a fair price and, despite not being able to measure it very well, thought it provided the right ROI. You had 20,000 readers in your little city, so you made $500 off that ad space every day. Now let's say a competitor comes into your city with another newspaper, one that remedies your woeful sports coverage. Your competitor takes half your readership and sells ads just like you.
What has happened to inventory here? There are twice as many newspapers, so has it doubled?
No. Even though there are now twice as many ad spaces, they each get half the number of impressions. So the inventory, the total number of impressions, stays the same. Of course it does! Unless the readers are reading more news in total, the ad inventory has to stay the same: the inventory is the consumers’ attention.
(Of course, if CPMs stay the same, your newspaper now makes half as much because it has lost half its readership, but that’s a completely different problem.)
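In numbers, using only the figures from the example above (a sketch, nothing more):

```python
def daily_ad_revenue(readers: int, cpm_dollars: float) -> float:
    # CPM is the price an advertiser pays per 1,000 impressions.
    return readers / 1000 * cpm_dollars

print(daily_ad_revenue(20_000, 25.0))  # 500.0 -- before the competitor arrives
print(daily_ad_revenue(10_000, 25.0))  # 250.0 -- after losing half the readers
# Two papers with 10,000 readers each still deliver 20,000 impressions in total:
# the inventory (the consumers' attention) is unchanged.
```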
The same is true online. There is not appreciably more inventory in the world than there was ten years ago. In fact, in a world where we collectively spend less time with media, inventory is contracting. Online inventory is increasing because the amount of time spent online is increasing. But the supply of inventory per user is constant, and demand per user should be constant, so supply and demand should stay matched. Plummeting CPMs are not a supply and demand problem.
So, then, why are online CPMs so low? If it’s not supply and demand, what is it? The answer has to be either: (1) the market for online ad inventory is fubar, or (2) online ads just don’t work very well. I suspect it’s a bit of both, but this whole supply and demand argument has just got to go.
1. Jerry, I’d have to completely disagree, primarily because the concepts of communication and media have crossed over.
Take sports:
You might have spent X time reading about sports in your local paper and Y time (where Y is at least 3-5 times larger than X) talking with your friends in a cafe, phoning them up, writing them letters and sending them faxes about sports.
Before the Internet, communications were not ad-supported at all; now, with the Internet, they are.
Also, even with the media component, demand that wasn’t met before (because of the cost to publish and distribute) is now being met, so people are reading more about sports than the one or two articles they did read in the local papers.
So the number 1 factor in all of this is that communications is now trying to be ad-supported (I count web e-mail, forums and social networks in this). And playing a lesser role is that more demand for media time is able to be met as well.
The net-net is a huge rise in ad-supported page impressions. For instance, from 2005 to 2007 I think ad impressions on the Internet tripled (source: AdRelevance). The audience and usage didn't triple but the number of ad-supported pages did. And they tripled because of just a few sites (MySpace, Facebook and Youtube). The other sites grew too, suggesting that more ad inventory *was* created in aggregate.
p.s. How’s that theory on mortgage lead prices holding up? :)
2. Interesting hypothesis… I wonder if the report showing our media consumption being flat to down includes substitutes, like IM substituting for phone calls.
I think your point is that if there are more impressions and we aren’t spending more time with media, then that means we are spending less time with each ad. But isn’t that the same as a newspaper selling 16 ads per page instead of 4? Of course the CPM goes down.
I still think my basic question–why is there a difference between the CPM for a reader of an online NY Times article and the CPM for a reader of the same article offline?–stands. If your answer is that they’re selling ten ads next to the online article versus two offline, then I buy that, but revenue per user per article should still be the same… unless the reason is something other than supply and demand.
What have mortgage lead prices done? I haven’t seen recent data. I am willing to admit that I was wrong that financial services companies would continue to advertise. I did not, 18 months ago, foresee the historic collapse of the international financial system. I am willing–if I was wrong–to take a back seat in prognostication ability to those who saw that coming, all three of them.
3. This is very interesting, Jerry.
Fundamentally, I agree with you that an impression is in essence an impression and should be valued the same as all other impressions. However…
…media outlets have done a very good sales job over the years of convincing buyers that some impressions are better than others and hence deserving of higher value (for which the manifestation here is higher CPM).
Generally the “better” impression argument comes down to context (i.e., I have a better environment in which to grab the end user’s attention). I suspect that media buyers inherently, and perhaps unconsciously, still value traditional media higher than electronic media, maybe because of the argument made by the content owners, maybe because of the perception of relative scarcity (it cost money to print the nth paper).
Broadcast tv has used another variant of this argument — to wit: since we can deliver more eyeballs at one sitting than any of our cable competitors, you the advertiser should pay us a higher CPM for that instantaneous audience aggregation…hence broadcast tv CPM’s have always outpaced cable CPM’s, even these days as the ratings move towards one another.
It will be interesting to compare CPMs on Hulu and other over-the-top television plays, as those services scale to see whether these CPM’s move towards the CPM’s for the same content delivered over the air.
|
2017-03-23 02:13:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3306694030761719, "perplexity": 2067.35747864491}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218186608.9/warc/CC-MAIN-20170322212946-00470-ip-10-233-31-227.ec2.internal.warc.gz"}
|
http://uncyclopedia.wikia.com/index.php?title=uncyclopedia:thisisabugdonteditthispage&oldid=5422363
|
Hot indiscriminate sex
After four award winning dissertations on molecular theory, thirteen publications authored on non-Newtonian calculus, and positions of professor emeritus granted by all eight Ivy League institutions, I have at long last come to one definitive conclusion about the state of affairs in the world today; what is direly needed on a global scale is a far greater frequency of hot, indiscriminate sex.
Ideally, this sex should be with me.
Overview
The world is wrought with injustice; millions live in poverty while the elite classes live in perfect luxury. Racism, sexism, disease and hatred are a central issue in every culture in the world. I could go on about the trials and tribulations of humanity, but this has been done extensively by countless others, and to a degree that I could not match with my primarily scientific and mathematic academic expertise. However, what I can say with confidence is that instead of sitting here typing as I am now, I strongly feel I should be fiercely pounding my meatstick into the sloppy snatch of the hot blonde that was in line in front of me at the supermarket today. This by no means limits my desire to the woman I came upon at the Kroger cash register; in fact, I could probably go for a rough tumble in the hay with several of the readers now in the process of reading my piece. Considering the tingling sensation currently overwhelming this writer, I can now hypothesize with near certainty that I’ll be darned if I’m not getting it on hot and heavy with a bowlegged skank ho from downtown the second I finish typing the last sentence of this entry.
The love question
It is not uncommon to hear now, in particular from those of the young generation, the lamentation of “Where is the love?”. One could analyze the social background and environmental ramifications that give rise to this hypothetical question oft posed by young adults. However, analysis alone has never brought about progress, and put simply, the answer to the question is not as far away as may be thought. Yes, the truth is that the love is in my pants, between my legs. The love is hot, long, tubular in shape, and eager to penetrate the cooter of the next woman I see walking down the street from my 5th floor apartment room.
...No, forgive me. I meant the NEXT woman I see walking down the street.
..Oh, pardon me, my hand must have slipped as I typed, as I meant the NEXT woman I see. The next one will be the one I indiscriminately choose.
...Oh, YES. Now that is the woman I would be willing to bend over and give a hot beef injection without asking so much as her name, up against the dumpster behind the Korean-run pizza shop. No words, no protection, no strings attached. Just me giving it to her deep and hard, and pulling out in a timely manner for a victorious spray of my horde of albino children.
The significance of “relationships”
Far too many people of our modern society mistakenly search for love and fulfillment in so-called “meaningful relationships”. As I sit here typing this I struggle to not burst out into raucous laughter just pondering the concept. Therefore, I would like to challenge the established notion of what a meaningful relationship is. To begin with, we must define “meaningful” as used in this instance. Is a relationship “meaningful” when one partner offers continuous emotional support to the other? Is a relationship “meaningful” when one remains faithful to their partner in thought, word and action? Is a relationship “meaningful” when a couple has developed an unspoken trust? Nay, meaningful is my penis plunging in and out of a random woman’s vagina, jabbing her kidneys like an Everlast punching bag and berating her as a nut-gobbling crack whore. The meaningfulness then reaches its pinnacle as the crack whore screams “Harder, daddy, harder. I am about to cum.” However, I am not one to allow a dirty slut the enjoyment of an orgasm, so this is when I put my load in her eye and send her out the door with a sheet of paper towel. At this point, the meaningful relationship has come to its conclusion.
Equation
I offer the following equation to demonstrate my interpretation of hot, indiscriminate sex.
$\frac{(17(Random+Whore)^2(\frac{WetVagina}{3})^7-1.125(LustForMe))}{MyPenis}=1.264*10^9(Explosive)+.7625(Orgasm)$
For the reader’s deeper understanding of hot, indiscriminate sex, I present a few gems from my vast history of the practice.
Redhead with an Iron Maiden tattoo
Hot Sex Index:
One night in a particular bar I frequent downtown, I had imbibed my share of bourbon and decided to give in and break the proverbial seal. Upon entering the men’s restroom, who other was passed out there in front of the urinal than the red-haired woman that had just finished her 7th shot of tequila moments ago. Seizing upon the opportunity, I awakened her with a quick blast of my urine and then emptied the rest into the nearby urinal so as to keep the floor dry. She regained consciousness and I led her into the stall, where she went with little resistance. The frenzied rabbit-like anal sex that then ensued would have earned her 4 rating had she not released the foulest gas emission I have ever had the displeasure to smell, just as I climaxed.
Chinese woman at Ching Chong Palace
Hot Sex Index:
While dining on my General Tso’s Chicken at Ching Chong Palace, I noticed my waitress had a certain look in her eye when she took my order; a decidedly slanted look. I took this as a definite sign that she was after my sucky-sucky stick, and called her for a second order. I asked her if they serve dog, to which she replied no. I then asked her if she would like to be served doggy style. I was swiftly escorted to the duck cage room in the back of the establishment where I gave her a one-man Rape of Nanking to a chorus of incessant quacks.
Blonde college girl from the library
Hot Sex Index:
Perusing academic journals at the library as usual on a Wednesday night, I was approached by a young woman in a college sweatshirt who apparently recognized me. She asked me if I was the author of Modern Advances in Mathematical Theory and I answered in the affirmative; “Hell yeah, bitch.” Needless to say, that night I taught her the equation where my hardened cock equals spasmodic moans of ecstasy for her.
Conclusion
In conclusion, it can be said that the relationship between myself and the female population is an unending mutual desire for hot, indiscriminate sex. Should we meet at an upcoming symposium or strip club visit, I ask that you please act as your desires drive you, and engage in a world-rocking night of lovemaking with me. Thank you.
|
2015-09-02 00:45:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23638318479061127, "perplexity": 3211.2129353351643}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645235537.60/warc/CC-MAIN-20150827031355-00099-ip-10-171-96-226.ec2.internal.warc.gz"}
|
https://fr.mathworks.com/help/phased/ref/coincidence.html
|
# coincidence
Coincidence algorithm
## Syntax
`x = coincidence(res,div,maxval)`
`x = coincidence(res,div,maxval,tol)`
## Description
`x = coincidence(res,div,maxval)` returns the scalar `x` that is less than or equal to `maxval` and is congruent to each remainder in `res` for the corresponding divisor in `div`. `x` satisfies `mod(x,div) = res`. In other words, dividing `x` by each element of `div` leaves as remainder the corresponding element of `res`.
`x = coincidence(res,div,maxval,tol)` also specifies the tolerance. In practice, there may be no value that satisfies all constraints in `res` and `div` exactly. In that case, `coincidence` identifies a set of candidates that approximately satisfy the constraints and are within an interval of width 2 × `tol` centered at the candidates' median. The function then returns the median as `x`.
## Examples
Find a number smaller than `1000` that has a remainder of `2` when divided by `9`, a remainder of `3` when divided by `10.4`, and a remainder of `6.3` when divided by `11`.
There is no number that satisfies the constraints exactly, so specify a tolerance of `1`. `coincidence` identifies a set of numbers that approximately satisfy the constraints and lie within an interval of width $2 \times tol = 2$ centered at their median. The function then outputs the median.
```
tol = 1;
x = coincidence([2 3 6.3],[9 10.4 11],1000,tol)
```
```
x = 127.8000
```
Increase the tolerance to `2`.
```
tol = 2;
x = coincidence([2 3 6.3],[9 10.4 11],1000,tol)
```
```
x = 74
```
Specify a tolerance of `3.3`. Any tolerance larger than this value results in the same answer.
```
tol = 3.3;
x = coincidence([2 3 6.3],[9 10.4 11],1000,tol)
```
```
x = 3
```
In a staggered pulse repetition frequency (PRF) radar system, the first PRF corresponds to `70` range bins and the second PRF corresponds to `85` range bins. The target is detected at bin `47` for the first PRF and bin `12` for the second PRF. Assuming each range bin is `50` meters, compute the target range from these two measurements. Assume the farthest target can be `50` km away.
```
idx = coincidence([47 12],[70 85],50e3/50);
r = 50*idx
```
```
r = 30350
```
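For intuition, here is a brute-force sketch of the exact, integer-valued case in Python (my own illustration, not MathWorks code; it returns the smallest qualifying value and ignores the tolerance handling described above):

```python
def coincidence_int(res, div, maxval):
    # Smallest integer x <= maxval with x % d == r for every (r, d) pair.
    for x in range(int(maxval) + 1):
        if all(x % d == r for r, d in zip(res, div)):
            return x
    return None  # no value <= maxval satisfies all congruences

idx = coincidence_int([47, 12], [70, 85], 50e3 / 50)
print(idx, 50 * idx)  # 607 30350 -- matching the radar example above
```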
## Input Arguments
`res`: Remainder array, specified as a row vector of nonnegative numbers. `res` must have the same number of elements as `div`.
Data Types: `single` | `double`
`div`: Divisor array, specified as a row vector of positive integers. `div` must have the same number of elements as `res`.
Data Types: `single` | `double`
`maxval`: Upper bound, specified as a positive scalar.
Data Types: `single` | `double`
`tol`: Tolerance, specified as a nonnegative scalar.
Data Types: `single` | `double`
## Output Arguments
`x`: Congruent value, returned as a scalar.
## Version History
Introduced in R2021a
|
2022-08-13 12:38:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9459307789802551, "perplexity": 3068.2429632990015}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571950.76/warc/CC-MAIN-20220813111851-20220813141851-00457.warc.gz"}
|
http://www.cse.unsw.edu.au/~teachadmin/tc/meetings/2007/05/minutes.html
|
### CSE Teaching Committee, (Rough) Minutes for Meeting, 25 May 2007
Note: These minutes are based on JohnS's recollection of the discussion. As noted in the September 2007 minutes, he lost his notes from the meeting, and so doesn't have the fine details, such as attendance, start-time, end-time, and the precise discussion points.
1. Previous minutes and issues arising from them - John Shepherd
No issues were noted.
2. Changes to Masters programs - Eric Martin
EricM presented a significant revision of the CSE Masters programs. The idea is to dispense with separate degrees for "professional up-skilling" (the old 8685) and "retraining" (the old 8682). The rationale was that there were a range of incoming backgrounds that were not catered for well by the coarse division into 8685 and 8682 (e.g. some students with an undergraduate computing degree, who would normally have been channeled into 8685, turned out to have a relatively weak computing background, which meant that they struggled in 8685). Also, there were problems with students choosing degrees on the basis of their length, rather than what was appropriate for their background.
There will be a single degree, with students entering at different levels, depending on their backgrounds. The Masters degree would be 16 courses (2 years full-time); there would also be a 12-course Graduate Diploma and a 4-course Graduate Certificate. Students without an undergraduate computing degree would be channeled into either the Certificate or Diploma. Students with a solid undergraduate computing background could get sufficient credit to complete the Masters with around 8 courses (similar to the old 8685). Advanced standing would be based entirely on exemption exams (students would need to prove that they had the relevant background knowledge, not simply show the presence of an apparently relevant subject on a transcript).
The previous postgraduate degrees allowed articulation from a Diploma to a Masters only if the lower degree was not conferred. Under the new scheme, articulation with full credit is simplified. As long as a student has completed the lower degree with a WAM of 65 or better, they can enrol in the higher degree with full credit.
The final change is in the course groupings. The old Group A, B, C, D, which corresponded roughly to introductory (A), intermediate (B,C) and advanced (D), have been replaced by groupings based on the length of the pre-requisite chain. This eliminates anomalies in the old scheme, where some Group B/C courses were introductory in nature, and some Group A courses were more advanced.
3. New Course Proposal Engineering Decision Structures - Arthur Ramer
ArthurR proposed a new course to be included in the new Faculty masters program, but also made available in the CSE postgrad coursework programs. It is similar in nature to the old Advanced Decision Theory course that used to be offered in CSE.
4. Year 0 and ENGGxxxx Fundamentals of Computing - Maurice Pagnucco
MauriceP gave a brief outline of an introductory-level IT literacy course that CSE could offer as a component of the new Faculty-based DipSET program (which is designed to bring intending engineering students up to the requisite level in their HSC engineering requirements, if they don't have the right background from HSC, but are keen to study Engineering). Since this course will be bundled in with our existing GENE8000 offering, and since the numbers are likely to be small, there is no significant additional teaching load generated.
5. Review of Engineering Week 2006
MauriceP described what they'd done in 06s2 Engineering Week and talked about the comments from the Eng Faculty survey on student satisfaction with the week. He proposed some changes for 07s2 to deal with student comments that they would have preferred more "hands-on" activities.
6. CATEI and end-of-session course evaluations - Bill Wilson
BillW (after considerable personal effort) has set up the online CATEI system so that all courses will be surveyed with a Form A Course Evaluation, and all lecturers will be given a Form B teaching Evaluation.
7. UNSW change to 12-week semesters - John Shepherd
JohnS noted the change to 12-week semesters and also that the Faculty was going to provide web interface where people could record their proposed course changes, to simplify the task of pushing any revisions through the relevant faculty committees.
8. Heads-up: Summer session 2007-2008 (proximity to 07s2) - John Shepherd
JohnS noted that summer session this year starts *before* marks from 07s2 are finalised. This is clearly a problem for students who are waiting on an 07s2 mark to decide whether to enrol in a summer session course.
9. Heads-up: Review of COMP1911/1921/2911 - Richard Buckland
RichardB will present some comments on and ideas about how to improve the new foundations stream (COMP1911, COMP1921, COMP2911) at an upcoming meeting.
Eventually, there will be financial and accreditation pressure to ensure that we thoroughly document how our courses lead to students acquiring the graduate attributes specified on the UNSW, Engineering and (eventually) CSE web-sites.
None that I recall.
Postponed to a future meeting:
• Future directions in CSE curricula
(Computer forensics? Games development? Service-oriented computing?)
School of Computer Science & Engineering
The University of New South Wales
Sydney 2052, AUSTRALIA
http://torbydahl.blogspot.com/
## Monday, March 12, 2018
### MCTS - Further Improvements
There is a rich literature on how to improve MCTS beyond the different confidence bound values presented in my previous blog post on MCTS Confidence Bounds.
## Single Player Games
Most MCTS algorithms have been developed to search two-player games such as Go. When applying MCTS principles to single-player games, a number of issues have to be considered.
Some work discusses the option of discounting the reward value, using a discount factor, $\gamma$, during a backup. This is considered important in two-player games, where the opponent may be unpredictable [2], but less so for single-player games, where action selection and state transitions are more deterministic.
The more deterministic environment in single-player games also makes it more attractive to choose a child based on the maximum simulation reward for each child, rather than the mean reward. This requires each node to record the maximum reward as well as the mean. In CADIAPLAYER [2] the mean is still used during simulation, but the max is used for the final selection of an action to play.
The CADIAPLAYER also adds the complete search path to the tree, rather than a single-node extension, for searches that lead to optimal solutions.
## Recursive Searches
Instead of doing a large number of simulations and selecting the final action based on the observed statistics, algorithms such as Nested Monte-Carlo Tree Search (NMCS) [9] make further searches starting from the best node identified in the previous search. This increases the proportion of searches around the most promising path, and on some problems it reports better results than plain MCTS.
Another recursive solution is Nested Rollout Policy Adaptation [10], which uses a policy to guide the child selection in the typically random rollout phase. It also updates the given policy using a gradient descent step after each recursive call.
## Combining MCTS with Learning
MCTS can be effectively combined with RL algorithms for learning policies or value functions. Due to their ability to compress information and generalise over states and actions, such algorithms can provide global, long-term representations that complement the detailed, uncompressed, focused and short-term representation of MCTS.
A typical setup, used by AlphaZero [8], Expert Iteration [1] and Dyna-2 [7], is for MCTS to provide state, policy and/or reward samples for supervised training of long-term policies or value functions, while the policy and/or value functions provide heuristics to guide the action selection in MCTS.
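To make that loop concrete, here is a minimal sketch of the two directions, assuming an AlphaZero-style PUCT selection rule; the function names, the `c_puct` value and the `(total_reward, visit_count)` data layout are illustrative assumptions rather than details taken from the cited papers.

```python
import math

def puct_select(children, priors, c_puct=1.5):
    """Select a child using a learned policy prior to bias exploration
    (AlphaZero-style selection; the exact constant is an assumption).
    children: list of (total_reward, visit_count); priors: list of floats."""
    n_parent = sum(n for _, n in children)
    def score(j):
        w, n = children[j]
        q = w / n if n else 0.0  # mean reward of child j so far
        return q + c_puct * priors[j] * math.sqrt(n_parent) / (1 + n)
    return max(range(len(children)), key=score)

def policy_targets(children, temperature=1.0):
    """Turn search visit counts into a distribution that can serve as a
    supervised training target for the policy network (MCTS -> learning)."""
    counts = [n ** (1 / temperature) for _, n in children]
    total = sum(counts)
    return [c / total for c in counts]
```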
## References
[1] Thomas Anthony and David Barber (2017) Thinking Fast and Slow with Deep Learning and Tree Search, NIPS 2017, pp5360-5370.
[2] Yngvi Bjornsson and Hilmar Finnsson (2009) CADIAPLAYER: A Simulation-Based General Game Player,
[3] B. Bouzy and B. Helmstetter (2004) Advances in Computer Games, vol 135. Springer.
[4] B. Brügmann (1993) Monte Carlo Go, technical report.
[5] Remi Coulom (2006) Efficient selectivity and backup operators in Monte-Carlo tree search, in the proceedings of the 5th International Conference on Computers and Games, pp72-83.
[6] Alan Fern and Paul Lewis (2011) Ensemble Monte-Carlo planning: an empirical study, in the Proceedings of the Twenty-First International Conference on International Conference on Automated Planning and Scheduling, pp58-65.
[7] David Silver, Richard S. Sutton and Martin Müller (2008) Sample-based learning and search with permanent and transient memories, in the proceedings of the 25th international conference on Machine learning (ICML), New York, USA, pp968-975.
[8] David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, Demis Hassabis (2017) Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm, arXiv:1712.01815 [cs.AI].
[9] Tristan Cazenave (2009) Nested Monte-Carlo Search, in the Proceedings of the 21st International Joint Conference on Artifical intelligence (IJCAI), pp456-461.
[10] Christopher D. Rosin (2011) Nested Rollout Policy Adaptation for Monte Carlo Tree Search, in the Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence (IJCAI), pp649-654.
### MCTS - Confidence Bounds
Most Monte-Carlo Tree Search algorithms use some version of a confidence bound on the value of a node to decide where to focus the search, i.e., which node to expand next.
## Upper Confidence Bounds for Trees (UCT)
UCT is one family of MCTS algorithms. In particular, UCT algorithms use the maximum UCT value to choose between the children of a node. The UCT value is defined in Equation \eqref{eq:uct}.
$$UCT = \bar{X_j}+2C_p\sqrt{\frac{2\ln n}{n_j} } \tag{1}\label{eq:uct}$$
where $\bar{X_j}$ is the mean reward from branch $j$, $n$ is the number of times the current (parent) node has been visited and $n_j$ is the number of times branch $j$ has been visited. $C_p$ is a constant exploration term greater than 0 which decides the amount of exploration done. An $n_j = 0$ implies a $UCT$ value of $\infty$, so unvisited branches will always be explored at least once.
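As a concrete illustration of Equation \eqref{eq:uct}, here is a minimal child-selection sketch; the per-branch `(mean_reward, visit_count)` representation and the default value of $C_p$ are assumptions made for the example.

```python
import math

def uct_value(mean_reward, n_parent, n_child, c_p=0.5):
    """UCT value from Equation (1); an unvisited branch (n_child == 0)
    gets infinity, so it is always explored at least once."""
    if n_child == 0:
        return math.inf
    return mean_reward + 2 * c_p * math.sqrt(2 * math.log(n_parent) / n_child)

def select_branch(stats):
    """stats: list of (mean_reward, visit_count), one entry per branch."""
    n_parent = sum(n for _, n in stats)
    return max(range(len(stats)),
               key=lambda j: uct_value(stats[j][0], n_parent, stats[j][1]))
```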
Given enough resources, UCT converges to Minimax and provides the optimal solution.
The $\sqrt{\frac{2\ln n}{n_j} }$ element of Equation \eqref{eq:uct} describes the upper confidence bound or $UCB1$.
## Improving on $UCB1$
It is possible to improve theoretically on $UCB1$, making tighter bounds, by replacing the $UCB1$ element with the expression in Equation \eqref{eq:ucb_tuned}.
$$UCB_{Tuned} = \sqrt{\frac{\ln n}{n_j} \min\left\{\frac{1}{4},V_j(n_j)\right\}} \tag{2}\label{eq:ucb_tuned}$$
where $V_j(s)$ is given in Equation \eqref{eq:var}.
$$V_j(s) = \left(\frac{1}{s}\sum_{\tau=1}^s X_{j,\tau}^2\right) - \bar{X}_{j,s}^2 + \sqrt{\frac{2 \ln t}{s}} \tag{3}\label{eq:var}$$
Another alternative for improvement on the bounds is to use a Bayesian framework where the algorithm maximised $B_i$ as defined in Equation \eqref{eq:Bayes}.
$$B_i = \mu_i + \sqrt{\frac{2\ln N}{n_i} } \sigma_i \tag{4}\label{eq:Bayes}$$
where $\mu_i$ is the mean of an extremum (minimax) distribution $P_i$ and $\sigma_i$ is the square root of the variance of $P_i$. The Bayesian bounds have been shown to be tighter than $UCB1$ but are also slower to calculate.
## Handling Poor Early Estimates
It has also been suggested that it is a good idea to delay following the UCT values until the sample sizes behind them are large enough for the values to be meaningful. This is especially relevant when alternative policies are available, e.g., heuristics or a separately learned policy. This approach is called Progressive Bias.
Initialising a node's statistics with something other than zeros is, in general, called Search Seeding. It has been found that such priors work best when provided by a function approximator, as done when combining MCTS with learning.
An effective amelioration of this issue was suggested in a post on the computer-go mailing list by Brian Lee, one of the contributors to the MiniGo implementation of the AlphaGo algorithm. When expanding a node, the values of its children are set to that of their parent node, with appropriate mechanisms for handling the fact that these values do not imply an increase in the number of visits to the parent node.
## Avoiding Disappearing Node Selection Probabilities
A problem with using UCT estimates as the foundation of a probability distribution for node selection is that it can be difficult to ensure exploration at the lower levels of the tree, as the combined probabilities across the levels become increasingly small. First Play Urgency introduces a constant probability of exploring untried nodes at least once.
## References
[1] Thomas Anthony and David Barber (2017) Thinking Fast and Slow with Deep Learning and Tree Search, NIPS 2017, pp5360-5370.
[2] Yngvi Bjornsson and Hilmar Finnsson (2009) CADIAPLAYER: A Simulation-Based General Game Player,
[3] B. Bouzy and B. Helmstetter (2004) Advances in Computer Games, vol 135. Springer.
[4] B. Brügmann (1993) Monte Carlo Go, technical report.
[5] Remi Coulom (2006) Efficient selectivity and backup operators in Monte-Carlo tree search, in the proceedings of the 5th International Conference on Computers and Games, pp72-83.
[6] Alan Fern and Paul Lewis (2011) Ensemble Monte-Carlo planning: an empirical study, in the Proceedings of the Twenty-First International Conference on International Conference on Automated Planning and Scheduling, pp58-65.
[7] David Silver, Richard S. Sutton and Martin Müller (2008) Sample-based learning and search with permanent and transient memories, in the proceedings of the 25th international conference on Machine learning (ICML), New York, USA, pp968-975.
[8] David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, Demis Hassabis (2017) Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm, arXiv:1712.01815 [cs.AI].
[9] Tristan Cazenave (2009) Nested Monte-Carlo Search, in the Proceedings of the 21st International Joint Conference on Artifical intelligence (IJCAI), pp456-461.
[10] Christopher D. Rosin (2011) Nested Rollout Policy Adaptation for Monte Carlo Tree Search, in the Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence (IJCAI), pp649-654.
## Friday, March 02, 2018
### Monte Carlo Tree Search (MCTS) - Core Concepts
This is an introduction/summary of MCTS, mainly for myself, to learn about the approach. MCTS is, of course, an integral part of AlphaZero [8], but it is also used in other successful algorithms such as Expert Iteration [1].
## Concepts and Properties
Fundamentally, MCTS is an algorithm for searching state-action spaces for optimal action sequences, and in that respect it is very similar to Reinforcement Learning (RL). MCTS represents states as nodes and actions as edges. The underlying assumption is that the true value of actions can be estimated through repeated sampling (Monte Carlo) and that the estimated values can be used to guide the search and make it approximate a best-first search.
MCTS is an aheuristic search algorithm in that it performs a focused search without requiring domain-specific heuristics. This means MCTS can be applied effectively to problems where such heuristics are difficult to specify, e.g., the game of Go. It is possible, however, to use MCTS with domain-specific heuristics in order to further improve its performance.
The problem state to be explored becomes the root of the tree and the tree is built incrementally by performing a number of simulations. The tree is used to estimate the state values and as the tree grows, the estimated values become increasingly accurate.
A simulation is a sequence of actions starting from the root and ending when a given termination condition is met. A simulation may contain four phases:
1. Selection (or the tree phase)
2. Expansion
3. Rollout (or playout)
4. Backup (or backpropagation)
The selection identifies the most promising leaf node in the current tree by traversing it according to the tree policy until a leaf is reached.
During rollouts, actions are selected according to a default policy, commonly according to a flat random distribution, until a termination state is reached. The value of this state, e.g., reward, is then recorded and used to update the estimated value of the nodes in the tree. The algorithm does not calculate the value of other nodes visited during the rollout.
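Putting the four phases together, here is a minimal single-player sketch; the four environment callbacks (`actions`, `step`, `is_terminal`, `reward`) are hypothetical hooks, and the whole function is an illustration of the phase structure rather than the implementation of any cited system.

```python
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = []
        self.untried = []        # legal actions not yet expanded
        self.visits = 0
        self.total_reward = 0.0

def uct(child):
    if child.visits == 0:
        return math.inf
    mean = child.total_reward / child.visits
    return mean + math.sqrt(2 * math.log(child.parent.visits) / child.visits)

def mcts(root_state, actions, step, is_terminal, reward, n_sims=1000):
    """actions(s) -> list, step(s, a) -> s', is_terminal(s) -> bool, reward(s) -> float."""
    root = Node(root_state)
    root.untried = list(actions(root_state))
    for _ in range(n_sims):
        node = root
        # 1. Selection: follow the tree policy (here UCT) to a leaf
        while not node.untried and node.children:
            node = max(node.children, key=uct)
        # 2. Expansion: add a single new child node, if possible
        if node.untried:
            a = node.untried.pop()
            child = Node(step(node.state, a), parent=node)
            child.untried = [] if is_terminal(child.state) else list(actions(child.state))
            node.children.append(child)
            node = child
        # 3. Rollout: flat random default policy to a terminal state
        s = node.state
        while not is_terminal(s):
            s = step(s, random.choice(actions(s)))
        r = reward(s)
        # 4. Backup: update the statistics along the path to the root
        while node is not None:
            node.visits += 1
            node.total_reward += r
            node = node.parent
    return max(root.children, key=lambda c: c.visits)  # robust child
```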
The algorithm typically does not complete the search but stops after a given number of simulations, or when some other limit on the computational budget is reached. When the algorithm is done searching, it returns the best child, i.e., the action that has the highest value. The best child can be defined according to one of the following criteria (a sketch of these follows the list):
1. Max child - The child with the highest value
2. Robust child - The child with the highest visit count
3. Max-robust child - The child with both 1. and 2. If no such child exists, keep searching until it does
4. Secure child - Select the child according to a maximum lower confidence bound
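A sketch of the four criteria, assuming each child records `visits` and `total_reward`; the lower-confidence-bound form used for the secure child is one common choice and an assumption here.

```python
import math

def best_child(children, criterion="robust", c=1.0):
    """children: objects with .visits and .total_reward attributes."""
    def mean(ch):
        return ch.total_reward / ch.visits if ch.visits else float("-inf")
    if criterion == "max":         # 1. highest value
        return max(children, key=mean)
    if criterion == "robust":      # 2. highest visit count
        return max(children, key=lambda ch: ch.visits)
    if criterion == "max-robust":  # 3. both; None signals 'keep searching'
        cand = max(children, key=lambda ch: ch.visits)
        return cand if mean(cand) == max(mean(ch) for ch in children) else None
    if criterion == "secure":      # 4. maximum lower confidence bound
        n = sum(ch.visits for ch in children)
        return max(children, key=lambda ch: mean(ch)
                   - c * math.sqrt(math.log(n) / max(ch.visits, 1)))
    raise ValueError(criterion)
```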
When using MCTS to solve a problem or play a game, the action returned would be taken and another search would be initiated with the resulting state as the root of a new search tree.
### Selection
Selection is done within the search tree based on the tree policy. In order to try all available actions while, at the same time, narrowing down on the most promising ones, the tree policy must select actions in a way that combines exploration and exploitation.
A range of values can be used to select a child. The most common is probably Upper Confidence Bounds for Trees ($UCT$), as presented below. This value requires each node in the tree to record two values: $n$, the number of times it has been selected, and $\bar{X_j}$, the average reward from the simulations that selected it.
The $UCT$ values, normalised by their sum, can act as a probability distribution for selection, potentially with softmax selection to control the level of exploration/exploitation or a Boltzmann temperature factor to vary this level over time.
In progressive pruning, the standard deviation of the reward is also recorded so that statistical comparisons of child nodes can be done. Nodes that are significantly inferior to other nodes are then pruned from the search tree.
With many or continuous actions, it is possible to let a node represent many actions and to only split the node into separate nodes once it has been visited enough times that meaningful values for the new branches can be estimated. This is called progressive widening [5].
When a number of actions are similar, it can be beneficial to treat them in a single branch of the search tree, thus reducing the branching factor. Childs, Brodeur & Kocsis [1] introduced the concept of a move group to achieve this.
### Backup
During the backup phase, the MCTS algorithm records the values used by the tree policy so that the next simulation can make use of the information gained during the current simulation.
If new information is backed up immediately, the algorithm is anytime, indicating that it can be stopped after any simulation and still return a meaningful best action based on the statistics gathered so far.
Some algorithms, such as Gobble [4], back up the final value to any occurrence of a move within the search tree independent of the order/sequence of moves it occurs in. This is called the all-moves-as-first heuristic and works when the order of moves does not significantly affect the value of a move.
## References
[1] Thomas Anthony and David Barber (2017) Thinking Fast and Slow with Deep Learning and Tree Search, NIPS 2017, pp5360-5370.
[2] Yngvi Bjornsson and Hilmar Finnsson (2009) CADIAPLAYER: A Simulation-Based General Game Player,
[3] B. Bouzy and B. Helmstetter (2004) Advances in Computer Games, vol 135. Springer.
[4] B. Brügmann (1993) Monte Carlo Go, technical report.
[5] Remi Coulom (2006) Efficient selectivity and backup operators in Monte-Carlo tree search, in the proceedings of the 5th International Conference on Computers and Games, pp72-83.
[6] Alan Fern and Paul Lewis (2011) Ensemble Monte-Carlo planning: an empirical study, in the Proceedings of the Twenty-First International Conference on International Conference on Automated Planning and Scheduling, pp58-65.
[7] David Silver, Richard S. Sutton and Martin Müller (2008) Sample-based learning and search with permanent and transient memories, in the proceedings of the 25th international conference on Machine learning (ICML), New York, USA, pp968-975.
[8] David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, Demis Hassabis (2017) Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm, arXiv:1712.01815 [cs.AI].
[9] Tristan Cazenave (2009) Nested Monte-Carlo Search, in the Proceedings of the 21st International Joint Conference on Artifical intelligence (IJCAI), pp456-461.
[10] Christopher D. Rosin (2011) Nested Rollout Policy Adaptation for Monte Carlo Tree Search, in the Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence (IJCAI), pp649-654.
## Wednesday, November 23, 2016
### Hierarchical Reinforcement Learning: A Literature Summary
This is a quick summary of current work on hierarchical reinforcement learning (RL) aimed at students choosing to do hierarchical RL projects under my supervision.
The most common formalisation of hierarchical RL, in terms of semi-MDPs, was given by Sutton, Precup and Singh.
There is also a summary of this area by Barto and Mahadevan.
In 2015, Pierre-Luc Bacon, Jean Harb and Doina Precup published an article entitled 'The Option-Critic Architecture', describing an algorithm for automatically sub-dividing and solving an RL problem.
## Wednesday, October 26, 2016
### Spatio-Temporal Data from Reinforcement Learning
Applying RL algorithms in spatial POMDP domains produces spatio-temporal data that must be analysed and organised in order to produce effective control policies.
There has recently been a great amount of progress in analysing cortical representations of space and time in terms of place cells and grid cells. This work has the potential to inform the area of RL in terms of efficient encoding and reuse of spatial data.
The overlap between RL and the neuroscience of mapping local space is particularly interesting as RL can produce raw spatio-temporal data from local sensors. This provides us with an opportunity to analyse, explore and identify the computational and behavioural principles that enable efficient learning of spatial behaviours.
### Neuroscience - Mapping local space
A great introduction to this work is available through the lectures from the three Nobel prize winners in this area: John O'Keefe, May-Britt Moser and Edvard Moser.
There is also a TED talk from 2011 on this subject by Neil Burgess from UCL (in O'Keefe's group) entitled 'How your brain tells you where you are'. Burgess also has a range of more general papers on spatial cognition.
A brief colloquial presentation of this research entitled 'Discovering grid cells' is available from the Kavli Institute of Systems Neuroscience's Centre for Neural Computation.
There was also a nice review article in the Annual Review of Neuroscience entitled 'Place Cells, Grid Cells, and the Brain's Spatial Representation System', Vol. 31:69-89, 2008, by Edvard I. Moser, Emilio Kropff and May-Britt Moser.
There was also a Hippocampus special issue on grid cells in 2008, edited by Michael E. Hasselmo, Edvard I. Moser and May-Britt Moser.
Recently there was another summary article in Nature Reviews Neuroscience entitled 'Grid cells and cortical representation', Vol. 15:466–481, 2014, by Edvard I. Moser, Yasser Roudi, Menno P. Witter, Clifford Kentros, Tobias Bonhoeffer and May-Britt Moser.
Further relevant work has recently been presented in an article entitled 'Grid Cells and Place Cells: An Integrated View of their Navigational and Memory Function' in Trends in Neurosciences, Vol. 38(12):763–775, 2015, by Honi Sanders, César Rennó-Costa, Marco Idiart and John Lisman.
A more general introduction to
### Computational Approaches
There is a review article on computational approaches to these issues entitled 'Place Cells, Grid Cells, Attractors, and Remapping' in Neural Plasticity, Vol. 2011, 2011 by Kathryn J. Jeffery.
Other relevant articles:
• 'Impact of temporal coding of presynaptic entorhinal cortex grid cells on the formation of hippocampal place fields' in Neural Networks, 21(2-3):303-310, 2008, by Colin Molter and Yoko Yamaguchi.
• 'An integrated model of autonomous topological spatial cognition' in Autonomous Robots, 40(8):1379–1402, 2016, by Hakan Karaoğuz and Işıl Bozma.
• In 2003, in a paper entitled 'Subsymbolic action planning for mobile robots: Do plans need to be precise?', John Pisokas and Ulrich Nehmzow used the topology-preserving properties of self-organising maps to create spatial proto-maps that supported sub-symbolic action planning in a mobile robot.
• A paper entitled 'Emergence of multimodal action representations from neural network self-organization' by German I. Parisi, Jun Tani, Cornelius Weber and Stefan Wermter includes an interesting section called 'A self-organizing spatiotemporal hierarchy' which addresses the automated structuring of spatio-temporal data.
## Monday, April 11, 2016
### A short bibliography BBAI and hierarchical RL with SOMs
This bibliography is meant for anyone who joins my research group to work on hierarchical reinforcement learning algorithms or related areas.
### My publications
• Georgios Pierris and Torbjørn S. Dahl, Learning Robot Control based on a Computational Model of Infant Cognition. In the IEEE Transactions on Cognitive and Developmental Systems, accepted for publication, 2016.
• Georgios Pierris and Torbjørn S. Dahl, Humanoid Tactile Gesture Production using a Hierarchical SOM-based Encoding. In the IEEE Transactions on Autonomous Mental Development, 6(2):153-167, 2014.
• Georgios Pierris and Torbjørn S. Dahl, A Developmental Perspective on Humanoid Skill Learning using a Hierarchical SOM-based Encoding. In the Proceedings of the 2014 International Joint Conference on Neural Networks (IJCNN'14), pp708-715, Beijing, China, July 6-11, 2014.
• Torbjørn S. Dahl, Hierarchical Traces for Reduced NSM Memory Requirements. In the Proceedings of the BCS SGAI International Conference on Artificial Intelligence, pp165-178, Cambridge, UK, December 14-16, 2010.
### Relevant papers
• Daan Wierstra, Alexander Forster, Jan Peters and Jurgen Schmidhuber, Recurrent Policy Gradients. In Logic Journal of IGPL, 18:620-634, 2010. [pdf from IDSIA]
• Andrew G. Barto and Sridhar Mahadevan, Recent advances in hierarchical reinforcement learning, Discrete Event Dynamic Systems, 13(4):341-379, 2003. [pdf from Citeseer]
• Harold H. Chaput, Benjamin Kuipers and Risto Miikkulainen, Constructivist learning: A neural implementation of the schema mechanism. In the Proceedings of the Workshop on Self-Organizing Maps (WSOM03), Kitakyushu, Japan, 2003. [pdf from Citeseer]
• Leslie B. Cohen, Harold H. Chaput and Cara H. Cashon, A constructivist model of infant cognition, Cognitive Development, 17:1323–1343, 2002 [pdf from ResearchGate]
• Richard S. Sutton, Doina Precup and Satinder Singh, Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. In Artificial Intelligence, 112:181–211, 1999. [pdf from the University of Alberta]
• Pattie Maes, How to do the right thing, Connection Science Journal, 1:291-323, 1989. [pdf from Citeseer]
• Rodney A. Brooks, A Robust Layered Control System for a Mobile Robot, IEEE Journal of Robotics and Automation, 2(1):14-23, 1986. [pdf of MIT AI Memo 864]
### Books
• Joaquin M. Fuster, Cortex and mind: Unifying cognition, Oxford University Press, 2003. [pdf from ResearchGate]
• Richard S. Sutton and Andrew G. Barto, Reinforcement learning: An introduction, MIT Press, 1998. [pdf of unfinished 2nd edition]
• G. L. Drescher, Made-up minds, MIT Press, 1991 [pdf of MIT dissertation] - An actual constructivist architecture.
## Wednesday, January 06, 2016
### Sequence Similarity for Hidden State Estimation
Little work has been done on comparing long- and short-term memory (LTM and STM) traces in the context of hidden state estimation in POMDPs. Belief-state algorithms use a probabilistic step-by-step approach which should be optimal, but it doesn't scale well and has an unrealistic requirement for knowledge of the state space underlying the observations.
The instance-based Nearest Sequence Memory (NSM) algorithm performs remarkably well without any knowledge of the underlying state space. Instead it compares previously observed sequences of observations and actions in LTM with a recently observed sequence in STM to estimate the underlying state. The NSM algorithm uses a count of matching observation-action records as a metric for sequence proximity.
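A minimal sketch of that match-count metric, assuming traces are lists of (observation, action) records; the function name and trace layout are illustrative.

```python
def nsm_proximity(ltm, t, stm):
    """Count consecutive matching (observation, action) records, comparing
    backwards from position t in the long-term trace against the end of
    the short-term trace. Larger counts mean closer sequence proximity."""
    n = 0
    while (n < len(stm) and n <= t
           and ltm[t - n] == stm[len(stm) - 1 - n]):
        n += 1
    return n

# e.g. nsm_proximity([('a', 1), ('b', 2), ('c', 3)], 2, [('b', 2), ('c', 3)]) == 2
```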
In problems where certain observation-actions are particularly salient, e.g., passing through a 'door' in Sutton's world, or picking up a passenger in the Taxi problem, a simple match count is not a particularly good sequence proximity metric and, as a result, I have recently been casting around for other work on such metrics.
https://www.cheenta.com/ordered-pairs-prmo-2019-problem-18/
# Ordered Pairs | PRMO-2019 | Problem 18
Try this beautiful problem from PRMO, 2019, Problem 18 based on Ordered Pairs.
## Ordered Pairs | PRMO | Problem-18
How many ordered pairs $(a, b)$ of positive integers with $a < b$ and $100 \leq a$, $b \leq 1000$ satisfy $gcd (a, b) : lcm (a, b) = 1 : 495$ ?
• $20$
• $91$
• $13$
• $23$
### Key Concepts
Number theory
Ordered Pairs
LCM
Answer: $20$
PRMO-2019, Problem 18
Pre College Mathematics
## Try with Hints
At first we assume that $a = xp$
$b = xq$
where $p$ & $q$ are co-prime
Therefore, since $gcd(a,b) = x$ and $lcm(a,b) = xpq$,
$\frac{lcm(a,b)}{gcd(a,b)} =\frac{495}{1}$
$\Rightarrow pq=495$
Can you now finish the problem ..........
Therefore we can say that
$pq = 5 \times 9 \times 11$, with $p < q$ and $p$, $q$ co-prime.
The possible pairs $(p, q)$ are $(1, 495)$, $(5, 99)$, $(9, 55)$ and $(11, 45)$.
When $(p, q) = (1, 495)$: $b = 495x \leq 1000$ forces $x \leq 2$, so $a = x < 100$; no solution.
When $(p, q) = (5, 99)$: $a \geq 100$ forces $x \geq 20$, but then $b = 99x \geq 1980 > 1000$; no solution.
When $(p, q) = (9, 55)$: $x = 12$ to $x = 18$; 7 solutions.
When $(p, q) = (11, 45)$: $x = 10$ to $x = 22$; 13 solutions.
Can you finish the problem........
Therefore, total solutions $= 13 + 7 = 20$
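The count can be confirmed with a quick brute-force check over the stated range (a simple verification sketch, not part of the original solution):

```python
from math import gcd

count = 0
for a in range(100, 1001):
    for b in range(a + 1, 1001):     # a < b, both in [100, 1000]
        g = gcd(a, b)
        if (a * b) // g == 495 * g:  # lcm = 495 * gcd, i.e. gcd : lcm = 1 : 495
            count += 1
print(count)  # 20
```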
https://www.e-medida.es/0tpwyir8/c5a768-pf3-electron-geometry
Phosphorus trifluoride is the name of PF3; it is a gas known for its toxicity. In the PF3 molecule, the central phosphorus atom is surrounded by three fluorine atoms, each of which needs one electron to complete its octet. From Lewis theory, one P atom and three F atoms contribute 26 valence electrons in total, which gives four electron groups on the central P atom: three P-F bonding pairs and one lone pair. The molar mass of PF3 is 30.97 (P) + 3 × 19.00 (F) ≈ 88.0 g/mol.

The valence shell electron pair repulsion (VSEPR) theory is a model used to predict 3-D molecular geometry based on the number of valence-shell electron pairs among the atoms in a molecule or ion. It determines the electron-pair arrangement that maximizes the distances between the valence-shell electron pairs. If the central atom also contains one or more pairs of non-bonding electrons, these additional regions of negative charge behave much like those associated with the bonded atoms; this is why the presence of unbonded lone-pair electrons gives a molecular geometry that differs from the electron geometry. For example, a linear AX2 compound has its two X atoms 180° from one another, while from an electron-group-geometry perspective GeF2 is trigonal planar even though its real shape is dictated by the positions of the atoms alone.

For PF3, the four electron groups give a tetrahedral electron geometry ("electronic domain geometry"), while the molecular geometry is trigonal pyramidal. The lone pair on the phosphorus pushes the P-F bonding electrons away from itself, resulting in an F-P-F bond angle of 97.8°, appreciably smaller than the ideal tetrahedral angle of 109.5°. (In OPF3, the lone pair is replaced with a P-O bond, which occupies less space than the lone pair in PF3.) Because of the lone pair, PF3 is polar: eg = tetrahedral, mg = trigonal pyramidal, polar.

Related examples: PCl3 also has a trigonal pyramidal molecular geometry, with partial charge distribution on the phosphorus; the electronegativity of phosphorus is 2.19 and that of chlorine 3.16, a difference of 0.97, which is significant enough to make the molecule polar. PF5 is trigonal bipyramidal with symmetric charge distribution, and AlBr3 is trigonal planar with symmetric charge distribution around the central atom, so both are nonpolar. A molecule with six electron groups, one of them a lone pair, has an octahedral electron geometry but a distorted square pyramidal molecular shape, since the lone pair pushes the bonded atoms away from itself. In the azide ion, N3−, 3 nitrogens and a negative charge give 16 electrons in total: a central nitrogen double-bonded to two separate nitrogens (completing the central atom's octet), giving a linear ion.
https://eccc.weizmann.ac.il/keyword/18194/
Reports tagged with symmetric Boolean functions:
TR13-032 | 26th February 2013
Mark Bun, Justin Thaler
#### Dual Lower Bounds for Approximate Degree and Markov-Bernstein Inequalities
Revisions: 2
The $\epsilon$-approximate degree of a Boolean function $f: \{-1, 1\}^n \to \{-1, 1\}$ is the minimum degree of a real polynomial that approximates $f$ to within $\epsilon$ in the $\ell_\infty$ norm. We prove several lower bounds on this important complexity measure by explicitly constructing solutions to the dual of an ...
TR19-138 | 6th October 2019
Srikanth Srinivasan, Utkarsh Tripathi, S Venkitesh
#### On the Probabilistic Degrees of Symmetric Boolean functions
The probabilistic degree of a Boolean function $f:\{0,1\}^n\rightarrow \{0,1\}$ is defined to be the smallest $d$ such that there is a random polynomial $\mathbf{P}$ of degree at most $d$ that agrees with $f$ at each point with high probability. Introduced by Razborov (1987), upper and lower bounds on probabilistic degrees ...
https://stackoverflow.com/questions/32126627/how-to-find-index-of-a-given-fibonacci-number
# How to find index of a given Fibonacci number
I tried to use the following formula
$n = \bigg\lfloor \log_\varphi \left(F\cdot\sqrt{5} + \frac{1}{2} \right)\bigg\rfloor$
to find the index of a Fibonacci number ($F(0) = 1, F(1) = 1,...$) in a programming question, and all the smaller test cases passed, but some cases in which F was close to 10^18 failed. I did some dry runs and found that if F = 99194853094755497 (the 82nd Fibonacci number), the value of n according to the above formula is 81. I coded this in Python and C++, which can be found here and here respectively. I want to know whether the formula works for every value of F or has some limitations?
Note: After doing some more tests, I found out that the code gives correct answers up to the 52nd Fibonacci number.
Update: The question has t test cases; that's why I used a for loop. The given number F might not necessarily be a Fibonacci number. For example, if F = 6, then it lies between the two Fibonacci numbers 5 and 8. The index of 5 in the Fibonacci sequence is 4, so the answer is 4.
• Unless there's a problem with floating point arithmetic, (which could be the case), this looks like a better question for our sister site, math.stackexchange.com because it's more about math than programming. – Everyone_Else Aug 20 '15 at 19:25
• It works for every value of `F` mathematically, but floating point errors can cause problems practically. Any reason against using the O(n) dp fibonacci solution? – yizzlez Aug 20 '15 at 19:28
• @Someone_Else I posted this here because the problem can be both due to floating point limitations in computer programming(due to which I posted it here) or in the formula(then, I should post it at math.stackexchange.com). – Shubham Aug 20 '15 at 19:29
• @awesomeyi No problem with O(n) dp solution. In fact my final correct submission of the question was using O(n) method but I wanted to know what might be wrong in this. – Shubham Aug 20 '15 at 19:31
• So your question is, why your implementation yields 81 instead of 82? That wasn't really clear to me. – Falko Aug 20 '15 at 19:58
The formula works just fine:
``````import math
n = 99194853094755497
print(math.log(n * math.sqrt(5) + 0.5) / math.log(1.61803398875) - 1)
``````
Output:
``````82.0
``````
• Using `int(...)` for rounding off to an integer might cause trouble if the floating point result is very close to `82.0`. Numerical issues might cause it to be slightly larger, even though mathematically it would be smaller.
• The answer should be 82 – Shubham Aug 20 '15 at 19:54
• You're right. I mixed something up, but updated my answer. – Falko Aug 20 '15 at 19:55
• The question actually demands to calculate the index of t fibonacci numbers taken as input. That's why I used loop to input 't' numbers. – Shubham Aug 20 '15 at 20:03
• Oh, I see! My bad. – Falko Aug 20 '15 at 20:12
• Sorry for the inconvenience caused. I have updated my question now. – Shubham Aug 20 '15 at 20:14
I think your formula is causing a stack overflow because the number is too large to hold in int.
• The greatest value of F is 10^18(question constraints) and because of this I declared it long long integer in C++. I don't think that is the case here but again I am not sure. – Shubham Aug 20 '15 at 19:36
• If C++ has a double equivilant, could you try using that? – Everyone_Else Aug 20 '15 at 19:37
• @rottenbanana, stack overflows occur when there are too many function calls on the stack. For example, calling a function on each iteration in an infinite loop would cause a stack overflow. Although a number being too large to hold in it's type is a valid possibility for why the problem could occur, it wouldn't be called a stack overflow. (Sorry, but semantics matter and calling that problem a stack overflow is incorrect.) – Everyone_Else Aug 20 '15 at 19:41
• @Someone_Else What do you mean by "If C++ has a double equivalent"? – Shubham Aug 20 '15 at 19:45
• @Shubham Sorry, that was poorly phrased on my part. If you defined F as an int or float, have you tried defining it as a double? – Everyone_Else Aug 20 '15 at 19:52
F = 99194853094755497 is the 84th Fibonacci number and hence the index for it is 83. Use the script below to get the correct index (an integer instead of a float).
``````import math

n = 99194853094755497
eps = 10**-10
phi = (1 + math.sqrt(5)) / 2  # golden ratio
fibonacci_index = int(round(math.log(n * math.sqrt(5) + eps) / math.log(phi)))
``````
Additional info and code: see https://github.com/gvavvari/Python/tree/master/Fibonacci_index for more detailed documentation of the implementation.
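Since the floating-point formula can be off by one near 10^18, an exact alternative is plain integer iteration; below is a minimal sketch using the question's $F(0) = 1, F(1) = 1$ indexing (this helper is illustrative, not taken from any answer above).

```python
def fib_index_at_most(F):
    """Index i (with F(0) = F(1) = 1) of the largest Fibonacci number <= F.
    Pure integer arithmetic, so it stays exact even for F near 10**18."""
    if F < 1:
        raise ValueError("F must be a positive integer")
    a, b, i = 1, 1, 1  # invariant: a = fib(i - 1), b = fib(i)
    while a + b <= F:
        a, b = b, a + b
        i += 1
    return i

# fib_index_at_most(6) == 4; fib_index_at_most(99194853094755497) == 82
```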
https://everything.explained.today/Natural_units/
# Natural units explained
In physics, natural units are physical units of measurement based only on universal physical constants. For example, the elementary charge is a natural unit of electric charge, and the speed of light is a natural unit of speed. A purely natural system of units has all of its units defined such that the numerical values of the selected physical constants, in terms of these units, are exactly 1. These constants may then be omitted from mathematical expressions of physical laws, and while this has the apparent advantage of simplicity, it may entail a loss of clarity due to the loss of information for dimensional analysis. It precludes the interpretation of an expression in terms of fundamental physical constants, such as $\hbar$ and $c$, unless it is known which units (in dimensionful units) the expression is supposed to have. In this case, the reinsertion of the correct powers of $\hbar$, $c$, etc., can be uniquely determined.[1] [2]
## Systems of natural units
### Planck units
See main article: Planck units.
| Quantity | Expression | Name |
|---|---|---|
| Length (L) | $l_P=\sqrt{\dfrac{\hbar G}{c^3}}$ | Planck length |
| Mass (M) | $m_P=\sqrt{\dfrac{\hbar c}{G}}$ | Planck mass |
| Time (T) | $t_P=\sqrt{\dfrac{\hbar G}{c^5}}$ | Planck time |
| Temperature (Θ) | $T_P=\sqrt{\dfrac{\hbar c^5}{G k_B^2}}$ | Planck temperature |
The Planck unit system uses the following constants to have numeric value 1 in terms of the resulting units: $c = \hbar = G = k_B = 1$, where $c$ is the speed of light, $\hbar$ is the reduced Planck constant, $G$ is the gravitational constant, and $k_B$ is the Boltzmann constant.
Planck units are a system of natural units that is not defined in terms of properties of any prototype, physical object, or even elementary particle. They only refer to the basic structure of the laws of physics: $c$ and $G$ are part of the structure of spacetime in general relativity, and $\hbar$ captures the relationship between energy and frequency which is at the foundation of quantum mechanics. This makes Planck units particularly useful and common in theories of quantum gravity, including string theory.
Planck units may be considered "more natural" even than other natural unit systems discussed below, as Planck units are not based on any arbitrarily chosen prototype object or particle. For example, some other systems use the mass of an electron as a parameter to be normalized. But the electron is just one of 16 known massive elementary particles, all with different masses, and there is no compelling reason, within fundamental physics, to emphasize the electron mass over some other elementary particle's mass.
Planck considered only the units based on the universal constants $c$, $G$, $h$, and $k_B$ to arrive at natural units for length, time, mass, and temperature, but no electromagnetic units.[3] The Planck system of units is now understood to use the reduced Planck constant, $\hbar$, in place of the Planck constant, $h$.[4]
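As a numerical illustration, the metric values of the Planck units can be computed directly from the defining expressions above (a sketch using CODATA 2018 constants):

```python
import math

c = 2.99792458e8        # speed of light, m/s (exact)
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.054571817e-34  # reduced Planck constant, J s
k_B = 1.380649e-23      # Boltzmann constant, J/K (exact)

print(math.sqrt(hbar * G / c**3))             # Planck length      ~1.616e-35 m
print(math.sqrt(hbar * c / G))                # Planck mass        ~2.176e-8 kg
print(math.sqrt(hbar * G / c**5))             # Planck time        ~5.391e-44 s
print(math.sqrt(hbar * c**5 / (G * k_B**2)))  # Planck temperature ~1.417e32 K
```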
### Stoney units
See main article: Stoney units.
| Quantity | Expression |
|---|---|
| Length (L) | $l_S=\sqrt{\dfrac{G k_e e^2}{c^4}}$ |
| Mass (M) | $m_S=\sqrt{\dfrac{k_e e^2}{G}}$ |
| Time (T) | $t_S=\sqrt{\dfrac{G k_e e^2}{c^6}}$ |
| Electric charge (Q) | $q_S=e$ |

The Stoney unit system uses the following constants to have numeric value 1 in terms of the resulting units: $c = G = k_e = e = 1$, where $c$ is the speed of light, $G$ is the gravitational constant, $k_e$ is the Coulomb constant, and $e$ is the elementary charge.
George Johnstone Stoney's unit system preceded that of Planck. He presented the idea in a lecture entitled "On the Physical Units of Nature" delivered to the British Association in 1874.[5] Stoney units did not consider the Planck constant, which was discovered only after Stoney's proposal.
Stoney units are rarely used in modern physics for calculations, but they are of historical interest.
### Atomic units
See main article: Hartree atomic units.
| Quantity | Expression |
|---|---|
| Length (L) | $l_A=\dfrac{4\pi\varepsilon_0\hbar^2}{m_e e^2}$ |
| Mass (M) | $m_A=m_e$ |
| Time (T) | $t_A=\dfrac{(4\pi\varepsilon_0)^2\hbar^3}{m_e e^4}$ |
| Electric charge (Q) | $q_A=e$ |

The Hartree atomic unit system uses the following constants to have numeric value 1 in terms of the resulting units: $e = m_e = \hbar = \dfrac{1}{4\pi\varepsilon_0} = 1$.

Coulomb's constant, $k_e$, is generally expressed as $\dfrac{1}{4\pi\varepsilon_0}$ when working with this system.
These units are designed to simplify atomic and molecular physics and chemistry, especially the hydrogen atom, and are widely used in these fields. The Hartree units were first proposed by Douglas Hartree.
The units are designed especially to characterize the behavior of an electron in the ground state of a hydrogen atom. For example, in Hartree atomic units, in the Bohr model of the hydrogen atom an electron in the ground state has orbital radius (the Bohr radius) $a_0 = 1$, orbital velocity $v = 1$, angular momentum $L = 1$, ionization energy $E = \tfrac{1}{2}$, etc.
The unit of energy is called the Hartree energy in the Hartree system. The speed of light is relatively large in Hartree atomic units ($c = 1/\alpha \approx 137$), since an electron in hydrogen tends to move much slower than the speed of light. The gravitational constant is extremely small in atomic units ($G \approx 10^{-45}$), which is due to the gravitational force between two electrons being far weaker than the Coulomb force between them.
A less commonly used, closely related system is the system of Rydberg atomic units, in which $e^2/2$, $2m_e$, $\hbar$ and $k_e$ are used as the normalized constants.[6]
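The quantities above are easy to verify numerically from SI constants (a sketch using CODATA 2018 values):

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
m_e = 9.1093837015e-31   # electron mass, kg
e = 1.602176634e-19      # elementary charge, C (exact)
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
c = 2.99792458e8         # speed of light, m/s (exact)

a_0 = 4 * math.pi * eps0 * hbar**2 / (m_e * e**2)       # Bohr radius ~5.29e-11 m
E_h = m_e * e**4 / ((4 * math.pi * eps0)**2 * hbar**2)  # Hartree energy ~4.36e-18 J
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)          # fine-structure constant
print(a_0, E_h, 1 / alpha)  # 1/alpha ~137.036, the speed of light in atomic units
```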
### Natural units (particle and atomic physics)
| Quantity | Expression |
|---|---|
| Length (L) | $\dfrac{\hbar}{m_e c}$ [7] |
| Mass (M) | $m_e$ [8] |
| Time (T) | $\dfrac{\hbar}{m_e c^2}$ [9] |
| Electric charge (Q) | $\sqrt{\varepsilon_0\hbar c}$ |

The natural unit system, used only in the fields of particle and atomic physics, uses the following constants to have numeric value 1 in terms of the resulting units: $c = m_e = \hbar = \varepsilon_0 = 1$, where $c$ is the speed of light, $m_e$ is the electron mass, $\hbar$ is the reduced Planck constant, and $\varepsilon_0$ is the vacuum permittivity.

The vacuum permittivity $\varepsilon_0$ is implicitly normalized, as is evident from the physicists' expression for the fine-structure constant, written $\alpha = \dfrac{e^2}{4\pi}$, which may be compared to the same expression in SI: $\alpha = \dfrac{e^2}{4\pi\varepsilon_0\hbar c}$.
### Quantum chromodynamics units
| Quantity | Expression |
|---|---|
| Length (L) | $l_{QCD}=\dfrac{\hbar}{m_p c}$ |
| Mass (M) | $m_{QCD}=m_p$ |
| Time (T) | $t_{QCD}=\dfrac{\hbar}{m_p c^2}$ |
| Electric charge (Q) | $q_{QCD}=e$ (original); $q_{QCD}=\dfrac{e}{\sqrt{4\pi\alpha}}$ (rationalized); $q_{QCD}=\dfrac{e}{\sqrt{\alpha}}$ (non-rationalized) |

If rationalized, then $\varepsilon_0$ is 1; if not, $4\pi\varepsilon_0$ is 1 (in the original QCD units, $e$ is 1 instead).
The electron rest mass is replaced with that of the proton. Strong units, also called quantum chromodynamics (QCD) units, are "convenient for work in QCD and nuclear physics, where quantum mechanics and relativity are omnipresent and the proton is an object of central interest".[10]
### Geometrized units
See main article: Geometrized unit system.
The geometrized unit system, used in general relativity, is an incompletely defined system. In this system, the base physical units are chosen so that the speed of light and the gravitational constant are set equal to unity. Other units may be treated however desired. Planck units and Stoney units are examples of geometrized unit systems.
### Summary table
| Quantity / Symbol | Planck | Stoney | Hartree | Rydberg |
|---|---|---|---|---|
| Defining constants | $c$, $G$, $\hbar$, $k_B$ | $c$, $G$, $e$, $k_e$ | $e$, $m_e$, $\hbar$, $k_e$ | $\frac{e^2}{2}$, $2m_e$, $\hbar$, $k_e$ |
| Speed of light $c$ | $1$ | $1$ | $\frac{1}{\alpha}$ | $\frac{2}{\alpha}$ |
| Reduced Planck constant $\hbar = \frac{h}{2\pi}$ | $1$ | $\frac{1}{\alpha}$ | $1$ | $1$ |
| Elementary charge $e$ | – | $1$ | $1$ | $\sqrt{2}$ |
| Gravitational constant $G$ | $1$ | $1$ | $G$ | $G$ |
| Boltzmann constant $k_B$ | $1$ | – | – | – |
| Electron rest mass $m_e$ | $m_e$ | $m_e$ | $1$ | $\frac{1}{2}$ |

where:
• $\alpha$ is the fine-structure constant, $\alpha \approx 0.007297$,
• a dash (–) indicates where the system is not sufficient to express the quantity.
http://e.biohackers.net/Euler%27s_formula
# Euler's formula
Euler's formula (오일러 공식), named after Leonhard Euler, is a mathematical formula in complex analysis that establishes the fundamental relationship between the trigonometric functions and the complex exponential function. (https://en.wikipedia.org/wiki/Euler%27s_formula)
$$e^{ix} = \cos x + i\sin x,$$
$$e^{i \pi} + 1 = 0$$
Raising $e$ to the power $ia$ corresponds to a rotation by $a$ radians in the complex plane; rotating by $\pi$ gives $-1$.
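Both identities are easy to check numerically; a short sketch using Python's standard library:

```python
import cmath
import math

# Euler's identity: e^{i*pi} + 1 should vanish (up to rounding).
print(abs(cmath.exp(1j * math.pi) + 1) < 1e-12)  # True

# e^{i*a} rotates the point 1+0j by a radians in the complex plane.
a = math.pi / 2
print(cmath.exp(1j * a))  # approximately 0 + 1j
```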
https://www.gradesaver.com/textbooks/math/algebra/college-algebra-10th-edition/chapter-2-section-2-2-graphs-of-equations-in-two-variables-intercepts-symmetry-2-2-assess-your-understanding-page-164/1
## College Algebra (10th Edition)
$x=-6$
Solve the equation: $2(x+3)-1=-7$

First, we add 1 to both sides: $2(x+3)-1+1=-7+1$, so $2(x+3)=-6$.
Now we divide both sides by 2: $2(x+3)/2=-6/2$, so $x+3=-3$.
Finally, we subtract 3 from both sides: $x+3-3=-3-3$, so $x=-6$.
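The solution can also be checked mechanically; a one-line SymPy verification (an illustrative tool choice, not part of the textbook):

```python
from sympy import Eq, solve, symbols

x = symbols('x')
print(solve(Eq(2*(x + 3) - 1, -7), x))  # [-6]
```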
http://www.perlmonks.org/?node=59446
### Generating PDF
on Feb 19, 2001 at 23:51 UTC
Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:
Is there an easy way to generate PDF files on a Unix system using Perl and a CGI interface? I'd like to be able to read records from a database and generate a work order, outputting it to the PDF format, which can then be e-mailed to vendors. Is there a module for this? Or some other pre-existing code? Thanks.
Re: Generating PDF
by clintp (Curate) on Feb 20, 2001 at 01:48 UTC
Okay, I've actually USED PDF::Create that others are suggesting and it's okay. In fact, this short program took some existing text and overlaid a grid on top of it to make nice column rules that the data lacked.
```perl
#!/usr/bin/perl -w
use strict;
use PDF::Create;

my $font;
my $height=792;
my $width=612;
my $left=20;
my $topmargin=20;
my $pointsize=8;
my $spacing=$pointsize+3;

sub addtext {
    my($page,$fh)=@_;
    my $l=0;
    my $hpos=$height-$topmargin;
    while(<$fh>) {
        $page->stringl($font,$pointsize,$left,$hpos-=$spacing,$_);
        last if ++$l > 66;
    }
}

sub boxpage {
    my($page)=@_;
    $page->line($left, $topmargin, $left, $height-$topmargin);
    $page->line($left, $height-$topmargin, $width-$left, $height-$topmargin);
    $page->line($width-$left, $height-$topmargin, $width-$left, $topmargin);
    $page->line($width-$left, $topmargin, $left, $topmargin);
    $page->line($left+42, $topmargin, $left+42, $height-$topmargin);
    $page->line($left+81, $topmargin, $left+81, $height-$topmargin);
    $page->line($left+145, $topmargin, $left+145, $height-$topmargin);
    $page->line($left+183, $topmargin, $left+183, $height-$topmargin);
}

my $pdf=new PDF::Create('filename' => 'outfile.pdf',
                        'Version'  => '1.2',
                        'Author'   => 'Clinton Pierce',
                        'Title'    => 'Test Report');
my $root=$pdf->new_page('MediaBox' => [ 0, 0, 612, 792 ]);
$font=$pdf->font('Subtype'  => 'Type1',
                 'Encoding' => 'WinAnsiEncoding',
                 'BaseFont' => 'Courier');
open(FH,$0) || die;
while(not eof(FH)) {
    my $page=$root->new_page();
    boxpage($page);
    addtext($page, \*FH);
}
$pdf->close;
```

Now the caveat is, there's not a whole lot PDF::Create can do. No colors. No bitmaps. Not much more than lines, circles and other polygons. No width control on the lines either. And PDF::Create hasn't been updated in a long, long time, and I got a bounce from the e-mail address of the maintainer. But I got enough out of it for my needs...

For anybody coming across this thread much much later, please note PDF::Create is being updated again, with the most recent version 1.02 being released on 10 Jul 2008. CPAN Search - PDF::Create

Re: Generating PDF
by Trinary (Pilgrim) on Feb 19, 2001 at 23:57 UTC

A search on CPAN turned up PDF::Create, which looks to me like your best bet. I have no idea how mature/usable this module is, but at the very least it looks like a good place to start. Frankly, the README looks a little confusing, but then again I know little about the internals of PDFs. Maybe a Postscript generator might be better; as far as I know the two formats are interchangeable through external converters (ps2pdf), and there seem to be a fair amount of decent Postscript modules out there.

Trinary

Re: Generating PDF
by Hot Pastrami (Monk) on Feb 19, 2001 at 23:56 UTC

Check out this module on CPAN: Text::PDF

Hot Pastrami

Although the original post is a few years old, I feel compelled to respond to it. Recently I had to develop a few dynamic PDFs from data in a database. After looking at the available options (including writing out the PDF by hand - yuck), I found PDF::API2 to be the most useful. The biggest drawback I found to PDF::API2 was that the perldoc left much to be desired. Although it very much has the capabilities to go beyond "simple text", figuring out how to draw curves or use barcodes from the perldoc was difficult at best. If you use PDF::API2, I recommend using http://pdfapi2.sourceforge.net/twiki/ for documentation. Other than the documentation drawback, expect to spend about half a day learning the API. Once you get the hang of it, it's pretty easy and well done.

Re: Generating PDF
by jeroenes (Priest) on Feb 20, 2001 at 11:09 UTC

Instead of directly writing PDFs, you also might want to consider writing a TeX/LaTeX file, which can easily be converted to a PDF with LaTeX. If you're on gn*x, I would check out the tetex package (find it on freshmeat.net). The advantage of such an approach is that you can have quite high-level control (i.e., just write your text). However, learning about all the handy packages can take quite a while. A sample perl script might be something like this:

```perl
# get the data first
$a = <DATA>;
chomp $a;
$b = <DATA>;
chomp $b;

open TEX, ">file.tex" or die "Could not open file.tex";
print TEX <<'_END';
\documentclass{article}
\begin{document}
\title{TeX-report}
\author{Me}
\maketitle
\section{Some data}
_END
print TEX "I have $a and $b. That's all for now.\n";
print TEX <<'_END';
\section{Some table}
\begin{table}
\caption{A table}
\begin{tabular}{|l|c|c|}
\hline
_END
while(<DATA>){
    chomp;
    print TEX join('&', split).'\\\\ \hline'."\n";
}
print TEX <<'_END';
\end{tabular}
\end{table}
\end{document}
_END
system("latex file.tex");
system("latex file.tex");
system("dvipdf file") if -e "file.dvi";

__DATA__
129873
12315
1 8 2
1 3 100
8 3 4
```
Here I'm also using the dvipdf package (freshmeat.net), but you could use 'dvips file -o' and 'ps2pdf file.ps' as well. The Perl code is checked. I'm using TeX daily, so you can always /msg me with further questions.
Hope this helps,
Jeroen
"We are not alone"(FZ)
Another option might be to generate XSL:FO, which can be converted into PDF by e.g. FOP (c/o http://xml.apache.org/); this is an XML dialect, so you can actually just output XML and use an XSL:Transform stylesheet to convert the same XML into HTML, WML, or XSL:FO (for conversion to PDF)... Big points for Laziness :-)
Re: Generating PDF
by foogod (Friar) on Feb 20, 2001 at 06:16 UTC
I completed a very similar project with a relational flat file database system, and used a third party program called PDFever (www.pdefever.de) that allowed much more flexibility, and integrated very well with my perl program.
Also the creator of the program (Zhigang Li) is very helpful with PDF troubleshooting.
For a hacker such as myself programming for PDF (which IMHO is very obscure), it was a relief to find a program that allowed me lots of flexibility and required very little time to learn the PDF aspect.
The Perl::CreatePDF mod is good, but I found the flexibility was lacking, and there was no support. (Just my 2 cents)
Re: Generating PDF
by boo_radley (Parson) on Feb 19, 2001 at 23:57 UTC
Magic 8 ball says "yes".
Specifically, the PDF::Create module may be what you're after.
Re: Generating PDF
by Beatnik (Parson) on Apr 30, 2001 at 19:08 UTC
Image::Magick also has PDF support but it's not superb (it is, after all, an image manipulation program). It basically rasterizes the layers, outputting pages with a single image, breaking the search features & fancy layers.
Greetz
Beatnik
... Quidquid perl dictum sit, altum viditur.
http://www.reedbeta.com/blog/2012/05/26/quadrilateral-interpolation-part-1/
# Quadrilateral Interpolation, Part 1
May 26, 2012 · GPU, Practical · 22 comments
In computer graphics we build models out of triangles, and we interpolate texture coordinates (and other vertex attributes) across surfaces using a method appropriate for triangles: linear interpolation, which allows each triangle in 3D model space to be mapped to an arbitrary triangle in texture space. While linear interpolation works well most of the time, there are situations in which it doesn’t suit our needs—for example, when mapping a square texture to a quadrilateral: using linear interpolation on each of the quad’s two triangles produces an ugly seam. In this article series, I’m going to talk about interpolation methods that allow arbitrary convex quadrilaterals to be texture-mapped without a seam along the diagonal.
First of all, what’s the problem with quads and the usual linear UV interpolation? Let me illustrate, with the help of this brick from CgTextures:
Linear interpolation allows for arbitrary affine transforms to be applied to a texture image. This includes any combination of translation, scaling, rotation, and shearing:
As you can see, these transforms work perfectly well on a quad; you can’t see the seam between the two triangles. However, if I move one of the quad’s verts so that it’s no longer a parallelogram, you can see the seam:
This happened because the triangles are still congruent in UV space (each covers half the texture, as before), but they are no longer congruent in model space. The affine transforms for the two triangles are no longer equal; although the UV mapping is still continuous along the seam, its derivatives (the tangent and bitangent vectors) are discontinuous there, resulting in ugliness.
Ordinarily, when building irregularly-shaped geometry like this, you wouldn’t assign UVs this way. For example, a level designer creating an irregularly-shaped wall piece would apply a single UV projection to the whole wall, giving all the triangles the same affine transform. This implies that not all of the texture is seen: it’s clipped and cropped so the shape of the mesh in UV space matches its shape in model space, preventing seams.
But what if we really do want to get a texture onto an arbitrary (convex) quadrilateral, without cropping out part of it?
Affine transforms allow arbitrary triangle-to-triangle mappings: you can create an affine mapping between any two triangles, no matter how different their shapes. This is just what happens when you apply a texture to a triangle: by setting up UVs, you implicitly create an affine map between model space and UV space. When rendering, the rasterizer evaluates this mapping to find the appropriate texture sample point for each pixel.
Geometrically, as long as the quad remains a parallelogram, the affine transforms for its two triangles are equal and you can’t see the seam. But when the quad isn’t a parallelogram, affine transforms and linear interpolation cannot smoothly map the whole texture to the quad.
To solve this problem, we must leave the world of linear interpolation and affine transforms behind! There are more-sophisticated interpolation methods that can help here, each with its own pros and cons. In this article I’m going to talk about one in particular, called projective interpolation. Later articles in this series will cover alternative methods.
## Projective Interpolation
Just as linear interpolation is based on affine transforms, projective interpolation is based on the family of projective transforms. These transforms are very familiar in 3D graphics: they’re exactly the same ones used to map a 3D scene onto a 2D image, simulating perspective! But how can this help us with interpolation?
The intuition is that if you have a 3D scene consisting of a single quad, as you move the camera around and look at it from different positions, its projected shape on the 2D screen will be, in general, a different quad. In fact, it turns out you can map any convex quad to any other convex quad this way, by finding an appropriate camera setup.
Moreover, we know how to interpolate UVs in such a way that a 3D quad doesn’t show a seam when it’s projected to the 2D screen; such perspective-correct interpolation is done all the time. This suggests that we should be able to texture-map a quad without a seam by using the same math used for perspective-correct interpolation. And indeed this works:
The entire texture is now warped to the irregular shape of the quad, with no visible seam!
However, this image is a little odd: it doesn’t really look like a 2D quad anymore. It actually looks a lot like a wall in a 3D engine, with the camera turned to the side so that the wall recedes into the distance. That’s the nature of projective interpolation. Because it uses the same math that’s involved in 3D-to-2D perspective, this method gives results that tend to look like a 3D scene, even though the quad is completely 2D.
With that caveat in mind, here’s how you implement projective interpolation.
It’s well-known that to do perspective-correct interpolation for a triangle, you must calculate u/z, v/z, and 1/z at each vertex, interpolate those linearly in screen space, then calculate u = (u/z) / (1/z) and v = (v/z) / (1/z) at each pixel. GPU rasterizers do this automatically, behind the scenes, for every interpolated attribute. We use the same idea here: our vertex shader will output uq, vq, and q, the GPU will interpolate those quantities linearly in model space, then we’ll divide by ‘q’ at each pixel. Here, ‘q’ is a per-vertex value that plays the role of 1/z. However, this ‘q’ will be determined by the shape of the quadrilateral. It’s a made-up “depth” chosen to give the right projective transform to eliminate the seam.
The vertex shader and pixel shader for projective interpolation will look something like this:
```hlsl
float4x4 g_matLocalToClip;
Texture2D g_texColor;
SamplerState g_ss;

struct VertexData
{
    float3 pos : POSITION;
    float3 uvq : TEXCOORD0;
};

void Vs(
    VertexData vtx,
    out float3 uvq : TEXCOORD0,
    out float4 posClip : SV_Position)
{
    posClip = mul(float4(vtx.pos, 1.0), g_matLocalToClip);
    uvq = vtx.uvq;
}

void Ps(
    float3 uvq : TEXCOORD0,
    out half4 o_rgba : SV_Target)
{
    o_rgba = g_texColor.Sample(g_ss, uvq.xy / uvq.z);
}
```
Here, the important parts are: (a) the UVs are float3 instead of the usual float2, with ‘q’ in the third component; and (b) the pixel shader divides by ‘q’ before sampling the texture. The uvq values are precomputed and stored in the vertex data, so the vertex shader just passes them through.
The real trick here is how to calculate the right ‘q’ value for each vertex of the quad. This is fairly subtle—at least, it took me awhile to work it out!—and I’ll spare you the derivation. To find the ‘q’s, first find the intersection point of the two diagonals of the quad (e.g., intersect one diagonal with the plane defined by the other diagonal and the quad’s normal vector), and calculate the distances from this point to each of the four vertices. I’ll call those distances d0…d3:
Then, each ‘q’ is computed using the ‘d’s for that vertex and the opposite one, as follows: $uvq_i = \mathtt{float3}(u_i, v_i, 1) \times \frac{d_i + d_{i+2}}{d_{i+2}} \qquad (i = 0 \ldots 3)$ Store those values in your vertex data, and you’ll have projective interpolation!
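As an illustration of that recipe, here is a CPU-side sketch of the vertex setup (my own Python, not code from the article; `projective_uvq` is a hypothetical helper). It intersects the diagonals of a 2D quad, measures $d_0 \ldots d_3$, and scales each vertex's UV by $(d_i + d_{i+2})/d_{i+2}$:

```python
import numpy as np

def projective_uvq(quad, uvs):
    """quad: 4 vertices in order; uvs: the matching (u, v) pairs."""
    p0, p1, p2, p3 = [np.asarray(p, float) for p in quad]
    # Solve p0 + s*(p2 - p0) = p1 + t*(p3 - p1) for the diagonal intersection
    A = np.column_stack((p2 - p0, -(p3 - p1)))
    s, t = np.linalg.solve(A, p1 - p0)
    center = p0 + s * (p2 - p0)
    # Distances from the intersection point to the four vertices
    d = [np.linalg.norm(np.asarray(p, float) - center) for p in quad]
    uvq = []
    for i, (u, v) in enumerate(uvs):
        q = (d[i] + d[(i + 2) % 4]) / d[(i + 2) % 4]
        uvq.append((u * q, v * q, q))  # float3(u, v, 1) * q
    return uvq

# Example: a convex, non-parallelogram quad mapped to the full texture
print(projective_uvq([(0, 0), (2, 0), (2.5, 1.5), (0, 1)],
                     [(0, 0), (1, 0), (1, 1), (0, 1)]))
```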
Projective interpolation does the job we set out to do—it maps a texture smoothly onto an arbitrary convex quad. However, there are some potentially-problematic oddities with this method. As we saw above, it can generate results that look 3D even when they're not supposed to. This is related to how projective interpolation alters the spacing of points along a line nonuniformly, as can be seen by applying the interpolation to a grid:
The vertical grid lines, which are evenly spaced in the texture, are no longer evenly spaced after interpolation; they’re closer together at one end of the quad and farther apart at the other. Again, this is a consequence of “perspective” scaling things down when they’re “farther” from the camera. Unfortunately, this nonuniform spacing is completely dependent on the shape of the quad, and won’t generally match when two quads share an edge:
This is a lot like the original problem we were trying to solve: two adjoining triangles with different shapes would have different affine transforms, producing a seam. Here, two adjoining quads with different shapes have different projective transforms, producing a seam. If you’re trying to use this in a situation where you have multiple quads that need to join smoothly, this problem is pretty much a deal-breaker for projective interpolation.
In future installments of this series, I’ll talk about alternatives to projective interpolation that can also smoothly map a texture onto a quad, but with different features and caveats.
## 22 comments on “Quadrilateral Interpolation, Part 1”
May 29, 2012
Thanks for the nice idea, Nathan!
I’m trying to use quadrilateral interpolation for trail rendering but, obviously, this approach would fail as one will notice clearly C0-discontinuities along the quads edges instead of C1-discontinuities along the quads edges and diagonals. Which parameterization will solve this issue?
May 30, 2012
Peter, for trail rendering I think bilinear interpolation may work better. I’m planning to cover that in the next article in this series.
May 31, 2012
Unfortunately, it is not enough. If a character sabers and you have 20 cm tessellated polygons – you'll see zig-zags very clearly.
Thus, I am super-interested in continuation.
June 1, 2012
Hmm, perhaps what you really want is some sort of spline, like a quadratic Bezier that would let you set matching tangents at each edge? It’s an interesting problem! IIRC, in Infamous 2 our trail rendering just tessellated very finely – like 10 polygons per frame or something ridiculous like that. :) It’s easy, but it would be nice to figure out how to do that in the shader instead of by adding geometry!
June 2, 2012
I don’t think that a Bezier spline would solve the problem, as in points where it does not match the edge of the polygon you won’t map value 1 to the edge, as it is mapped to the spline (if I understood what you meant).
Of course, overtesselation solved the problem, but obviously we want to keep polycount fixed :)
December 2, 2012
Can’t one just use the keyword “noperspective” in the shader?
http://msdn.microsoft.com/en-us/library/windows/desktop/bb509668(v=vs.85).aspx
December 2, 2012
araon, no, that does somewhat the opposite of what I’m trying to do. It causes attributes on 3D geometry to be interpolated as if it were 2D, smashed flat to the screen. Here, I’m trying to interpolate UVs on a 2D quad and in this article I used a method that makes it appear 3D.
December 2, 2012
I see, have overseen that your quad is pure 2d. Now it makes sense. Sorry for the stupid post. a.
April 16, 2013
I’ve just noticed that your Quadrilateral Interpolation can be implemented without the shader! Just use “glTexCoord4f” function and pass the coordinates as glTexCoord4f(u,v,0,q), which works even on traditional graphics hardware!
May 2, 2013
Fei Yang, nice that you tried it with OpenGL. For me this was obvious, since the shader did exactly what homogeneous coordinates do – that's why there is a glTexCoord4f in OpenGL :) I already had thoughts about whether there could be a Z in glTexCoord4f(u,v,Z,q) as a correction for the 'perspective distortion' within the 2D quads… but I did not try it without shaders.
May 8, 2013
Nathan, the article is great. It helps. I am really interested in the derivation of your formula; would you like to tell me how you got the method?
May 9, 2013
Hi Nathan, I have implemented your method in OpenGL, but the result is incorrect. Is the formula applicable to OpenGL?
May 9, 2013
Hi recond, I’ll try to write up something on the derivation of the formula but it may take me a few days as I’m quite busy at the moment. The basic idea is to reduce it to a one-dimensional problem. You can imagine pivoting the quad about its diagonal in 3D homogeneous space; that lets you set the q-values for two opposite corners while leaving the other two corners fixed. Do this for both diagonals, and the only fixed point is the point where they meet.
As for OpenGL, as far as I know it should be perfectly applicable. As Fei Yang pointed out, OpenGL even supports this in the fixed-function mode, using glTexCoord4f.
October 7, 2013
nice idea how to calculate ‘q’. I’m looking for non-perspective solution, did you try to figure out formula for ‘q’ in this case? Maybe i have to use ‘r’ and use it in shader like this: o_rgba = g_texColor.Sample(g_ss, float2(uvrq.x / uvrq.r, uvrq.y / uvrq.q)); but i’m still failing. Any help please?
October 7, 2013
Hi quas, I’m not sure what you mean by “non-perspective solution”; can you clarify? But using different divisors for U and V is an interesting idea; I’m not sure what that would look like.
October 7, 2013
by non-perspective I mean something like this: http://i.stack.imgur.com/V2KCQ.jpg , but this trapezoid I can solve; the problem starts when I'd like to texture a general (convex) quadrilateral where the texture is simply uniformly distributed along each edge. Hope I described my problem better (English is not my native language).
October 7, 2013
Ahh, I see. Yes, that’s what bilinear interpolation does (different thing from bilinear texture filtering). I’m not sure how to implement bilinear UV interpolation in a shader, but it would be interesting to figure out. Paul Heckbert’s master’s thesis has some more about this, although not in a shader-ready form.
October 7, 2013
yes, actually writing my own bilinear uv interpolator is a good idea which I didn't come up with; I'll try. Thanks for the given direction and the pdf link.
July 21, 2014
Why is q calculated like that?
August 3, 2014
Hey!
This is just what I’ve been looking for, but I’m having a hard time trying to figure out ui and vi given the world coordinates for my point (px, py) and the 4 corners. Any leads for that would be much appreciated!
August 4, 2014
Bruno, they’re just the usual UVs that you always use for texture mapping. If you want to map the whole texture to the quad, they’d be (0, 0), (1, 0), (1, 1) and (0, 1). You can of course map some other region as well.
fakenerd, q is calculated like that because that’s what’s necessary to produce the desired effect. :) The derivation is just a bunch of nasty algebra, but it can be reduced to a 1D problem along each diagonal, which is why the solution takes the form it does.
August 5, 2014
Nathan,
I was looking to do the interpolation myself for some reason, but that’s something the shader gives already :) I got it working now, thanks a bunch!
http://math.stackexchange.com/questions/607300/isomorphic-finite-abelian-groups
# Isomorphic finite abelian groups
Let $G$ and $H$ be finite abelian groups. Show that if for any natural number $n$ the groups $G$ and $H$ have the same number of elements of order $n$, then $G$ and $H$ are isomorphic.
I know that for infinite groups this doesn't work: $\Bbb Z_{27}$
It seems to me that I can use the structure theorem for finitely generated abelian groups.
It is possible that this simple fact, but I would ask to write a proof .
Yes, use the theorem of finitely generated Abelian groups. – Berci Dec 15 '13 at 2:07
I understand that you are not very good with English, but you should at least write everything you want to write. The second line is not even finished... – tomasz Dec 15 '13 at 2:22
Since $G$ and $H$ are a direct product of cyclic groups of prime power order (fundamental theorem of finite Abelian groups), we just need to prove that if the number of elements of order $n$ are the same for $G$ and $H$, then both correspond to the same direct product.
Suppose $p^k$ divides both $|G|$ and $|H|$. The number of elements of order $p^k$ is $N\cdot \phi(p^k)$ where $N$ is the number of cyclic groups in the direct product of order $p^j$ for $j \ge k$. We can then easily determine $N$ for every $p^k$, and thus the whole direct product.
Since the information given is enough to completely determine $G$ and $H$, they must be isomorphic.
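A brute-force illustration of the counting argument (not part of the proof): the hypothetical helper below tabulates element orders, and already $\Bbb Z_4$ versus $\Bbb Z_2 \times \Bbb Z_2$ shows how the order profile separates non-isomorphic groups of the same size.

```python
from itertools import product
from math import gcd, lcm  # math.lcm needs Python 3.9+

def order_counts(moduli):
    """Histogram of element orders in Z_{m1} x ... x Z_{mk}."""
    counts = {}
    for elem in product(*(range(m) for m in moduli)):
        # order of x in Z_m is m // gcd(x, m); take the lcm over components
        o = lcm(*(m // gcd(x, m) for x, m in zip(elem, moduli)))
        counts[o] = counts.get(o, 0) + 1
    return counts

print(order_counts([4]))     # {1: 1, 4: 2, 2: 1}
print(order_counts([2, 2]))  # {1: 1, 2: 3}
```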
https://pymbs.readthedocs.io/en/latest/reference/Loads.html
In order to model internal as well as external forces and torques, the concept of LoadElements has been introduced. In the following, all currently available LoadElements are described.

## Loads
class PyMbs.Input.MbsSystem.AddLoad(world)
Class that provides functions to create load elements
CmpForce(symbol, CS1, CS2, name=None, CSref=None)
Use addLoad.CmpForce to add a vectorial force, acting between two coordinate systems. The force, specified with respect to the parent or reference frame, acts in positive direction on the parent coordinate system (CS1) and in negative direction on the child coordinate system.
Parameters:
- CS1 (Coordinate System, Body or MbsSystem) – Reference to parent coordinate system / parent frame.
- CS2 (Coordinate System, Body or MbsSystem) – Reference to child coordinate system / child frame.
- symbol (Expression as returned by addInput, addExpression or addSensor) – Symbol representing a three-dimensional vector variable whose components are interpreted as force values in x, y, and z-direction. The direction of x, y and z is given by the parent frame (CS1) or by the reference frame (CSref).
- CSref (Coordinate System, Body or MbsSystem) – Reference to reference coordinate system / reference frame.
- name (string) – A name may be assigned to each force. If no name is given, then a name like load_1 is generated automatically. The name is used for code generation only, i.e. the symbols connected with this force will contain the name.

Returns: Reference to the generated LoadElement
Return type: LoadElement
CmpTorque(symbol, CS1, CS2, CSref=None, name=None)
Use addLoad.CmpTorque to add a vectorial torque, acting between two coordinate systems. The torque, specified with respect to the parent or reference frame, acts in positive direction on the parent coordinate system (CS1) and in negative direction on the child coordinate system.
Parameters:
- CS1 (Coordinate System, Body or MbsSystem) – Reference to parent coordinate system / parent frame.
- CS2 (Coordinate System, Body or MbsSystem) – Reference to child coordinate system / child frame.
- symbol (Expression as returned by addInput, addExpression or addSensor) – Symbol representing a three-dimensional vector variable whose components are interpreted as torque values around the x, y, and z-axis. The direction of x, y and z is given by the parent frame (CS1) or by the reference frame (CSref).
- CSref (Coordinate System, Body or MbsSystem) – Reference to reference coordinate system / reference frame.
- name (string) – A name may be assigned to each torque. If no name is given, then a name like load_1 is generated automatically. The name is used for code generation only, i.e. the symbols connected with this torque will contain the name.

Returns: Reference to the generated LoadElement
Return type: LoadElement
Joint(symbol, joint, name=None)
Use addLoad.Joint to add a load acting on a joint. In case of a translational joint, the load represents a force; in case of a rotational joint, it represents a torque.

Parameters:
- joint (Joint) – Reference to joint.
- symbol (Expression as returned by addInput, addExpression or addSensor) – Symbol representing a scalar; a force or a torque depending on whether it is a translational or rotational joint.
- name (string) – A name may be assigned to each load. If no name is given, then a name like load_1 is generated automatically. The name is used for code generation only, i.e. the symbols connected with this load will contain the name.

Returns: Reference to the generated LoadElement
Return type: LoadElement
PtPForce(symbol, CS1, CS2, name=None)
Use addLoad.PtPForce to add a scalar force, acting between two coordinate systems along a connecting line. A positive force means that the coordinate systems are pushed apart.
Parameters:
- CS1 (Coordinate System, Body or MbsSystem) – Reference to parent coordinate system / parent frame.
- CS2 (Coordinate System, Body or MbsSystem) – Reference to child coordinate system / child frame.
- symbol (Expression as returned by addInput, addExpression or addSensor) – Symbol representing a scalar variable whose value is taken as the force between the two coordinate systems.
- name (string) – A name may be assigned to each force. If no name is given, then a name like load_1 is generated automatically. The name is used for code generation only, i.e. the symbols connected with this force will contain the name.

Returns: Reference to the generated LoadElement
Return type: LoadElement
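A minimal usage sketch tying these calls together. The `PtPForce` call follows the signature documented above; the model-building helpers around it (`MbsSystem([0, 0, -1])`, `addParam`, `addBody`, `addJoint`, `addInput`) are assumptions modelled on typical PyMbs examples and may need adjusting to your version:

```python
# Sketch only: helper names/signatures outside addLoad.PtPForce are assumed.
from PyMbs.Input import MbsSystem

world = MbsSystem([0, 0, -1])          # gravity direction (assumed constructor)
m = world.addParam('m', 1.0)           # mass parameter (assumed helper)
body = world.addBody(mass=m)           # a rigid body (assumed helper)
world.addJoint(world, body, 'TransZ')  # translational joint (assumed helper)

F = world.addInput('F')                # scalar force input (assumed helper)
# Documented behaviour: a positive F pushes the two frames apart.
world.addLoad.PtPForce(F, world, body, name='lift_force')
```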
https://gamedev.stackexchange.com/questions/155000/how-to-hold-on-final-frame-of-animation-in-godot/155208
# How to Hold on Final Frame of Animation in Godot
I am learning Godot 3.0 and I have a question concerning animation.
I have a player object whose node structure is KinematicBody2D > Sprite > AnimationPlayer. In the AnimationPlayer, I have ascend and descend animations that I want to play until their final frames, at which point I want to hold on those frames.
For now, I initiate my animations using the following function, which looks at my player's state attribute:
```gdscript
func animate():
    var animation_player = $Sprite.get_node("AnimationPlayer")
    # image matches orientation
    if orientation == "LEFT":
        $Sprite.flip_h = true
    else:
        $Sprite.flip_h = false
    # animation depends on state
    if state == "idle":
        if not ["idle"].has(animation_player.current_animation):
            animation_player.play("idle")
    if state == "run":
        if not ["run"].has(animation_player.current_animation):
            animation_player.play("run")
    if state == "ascend":
        if not ["ascend"].has(animation_player.current_animation):
            animation_player.play("ascend")
    if state == "descend":
        if not ["descend"].has(animation_player.current_animation):
            animation_player.play("descend")
```
I have disabled looping for my ascend and descend animations and yet they still loop. Probably because when their animation finishes, the idle animation starts and is then immediately changed back to ascend or descend since the player is still in this state.
So my question is: How do I go about holding on the final frame of certain animations?
I have looked into the animation_finished() signal of the AnimationPlayer, but I can't figure out how to use it properly.
All I can think to do is use a Call Func track on the ascend and descend animations, calling a custom function at the end of these animations which switches to a different, single-frame but looped animation (ascend_static or descend_static). But surely there is a better way?
• Where do you call animate and change the state? – skrx Mar 6 '18 at 15:26
• I have a _process_physics() method that first calls a custom input_processing() method (in which states are changed), then calls animate(), and then does physics and positioning. Some of these should probably be in _process() which I don't currently use. Any restructuring advice is appreciated. – GoldenGremlin Mar 6 '18 at 17:16
• I also change states with function calls at the end of nonlooping animations. – GoldenGremlin Mar 6 '18 at 17:19
I think the problem is that you call your animate function in every loop, which constantly restarts the finished animation. If you didn't check whether or not the animation is running, your animation would restart over and over again.

I think it is better to put the animate call into the setter function of state, which is called whenever the variable is changed. To do this, just add the following line to your script:
var state = "idle" setget state_change
And the function:
```gdscript
func state_change(new_state):
    state = new_state
    animate(state)
```
The state_change function is only called when the state changes and therefore the animate function is only called once (you also don't have to check for a running animation anymore). Since you probably want to loop your idle animation, you have to make sure that this animation is set to loop.
• Wow, I never thought to create a setter function for my state variable. Brilliant. Thanks. – GoldenGremlin Mar 9 '18 at 2:43
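For completeness, the `animation_finished` signal mentioned in the question can also be made to hold the last frame; a sketch for Godot 3.x (my own, not from the answer above) that stops playback without resetting the position:

```gdscript
func _ready():
    var animation_player = $Sprite.get_node("AnimationPlayer")
    animation_player.connect("animation_finished", self, "_on_animation_finished")

func _on_animation_finished(anim_name):
    if anim_name in ["ascend", "descend"]:
        # stop(false) keeps the current playback position,
        # so the sprite freezes on the animation's final frame
        $Sprite.get_node("AnimationPlayer").stop(false)
```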
https://answers.opencv.org/users/6828/siralfrednobel/?sort=recent
Most basic FaceDetection possible (asked 2014-01-28 14:36:47 -0500)

I am trying to find the most basic way of detecting if a face is present in a video stream from a webcam. However, I am new to OpenCV and am struggling to get it working in an embedded project I am building. Due to lack of serious processing power, I don't want to display the image at all. No drawn boxes, no x,y of faces, just a count of the number of faces, output to a file.

I'm having trouble trying to figure out the minimum amount of code I can use to detect when a face is present and then simply return the number of faces back to the command line into a file. It seems like I could take the facedetect.cpp line 'void detectAndDraw' and change it to 'int detectAndDraw' and have it return that int, but since I'm trying to write it in Python instead of C++, I'm having trouble figuring out what I do and do not need from the example.

This is what I have so far, based on some older code I found, where (path) will be /dev/video1:

```python
import cv2

def detect(path):
    img = cv2.imread(path)
    cascade = cv2.CascadeClassifier("/galileo/opencv/haarcascade_frontalface_alt.xml")
    rects = cascade.detectMultiScale(img, 1.3, 4, cv2.cv.CV_HAAR_SCALE_IMAGE, (20,20))
    if len(rects) == 0:
        return [], img
    rects[:, 2:] += rects[:, :2]
    return rects
```

I think I messed this up though, since I just want to return a count of the faces, not the contents of the rects object. Any helpful pointers, or a pointer to a minimalist example of face counting in Python I may have missed, would be greatly appreciated.
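A minimal sketch of what the poster describes: grab one frame, count faces, and write only the count to a file (assuming device index 1 corresponds to /dev/video1, and reusing the poster's cascade path):

```python
import cv2

cascade = cv2.CascadeClassifier("/galileo/opencv/haarcascade_frontalface_alt.xml")
cap = cv2.VideoCapture(1)  # /dev/video1 (assumed mapping)

ret, img = cap.read()
if ret:
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, 1.3, 4)
    with open("face_count.txt", "w") as f:
        f.write(str(len(faces)))  # just the number of faces, nothing drawn
cap.release()
```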
http://www.whxb.pku.edu.cn/EN/10.3866/PKU.WHXB201903060
Acta Physico-Chimica Sinica ›› 2019, Vol. 35 ›› Issue (12): 1382-1390.
• Article •
### Effect of Binder Conformity on the Electrochemical Behavior of Graphite Anodes with Different Particle Shapes
Shah Rahim^1,2, Alam Naveed^1,2, A. Razzaq Amir^1,2, Cheng YANG^1,2, Yujie CHEN^1,2, Jiapeng HU^1,2, Xiaohui ZHAO^1,2,*, Yang PENG^1,2, Zhao DENG^1,2,*

1 Soochow Institute for Energy and Materials Innovations, College of Energy, Soochow University, Suzhou 215006, Jiangsu Province, P. R. China
2 Provincial Key Laboratory for Advanced Carbon Materials and Wearable Energy Technologies, Soochow University, Suzhou 215006, Jiangsu Province, P. R. China
• Received:2019-03-26 Accepted:2019-05-13 Published:2019-05-20
• Contact: Xiaohui ZHAO, Zhao DENG; E-mail: zhaoxh@suda.edu.cn; zdeng@suda.edu.cn
• Supported by:
the National Natural Science Foundation of China(21701118);the National Natural Science Foundation of China(21805201);the Natural Science Foundation of Jiangsu Province, China(BK20161209);the Natural Science Foundation of Jiangsu Province, China(BK20160323);the Natural Science Foundation of Jiangsu Province, China(BK20170341);the Postdoctoral Science Foundation of China(2017M611899);the Postdoctoral Science Foundation of China(2018T110544);the Key Technology Initiative of Suzhou Municipal Science and Technology Bureau, China(SYG201748)
Abstract:
As an important component in electrodes, the choice of an appropriate binder is significant when fabricating lithium-ion batteries (LIBs) with good cycle stability and rate capability, which are used in numerous applications, especially portable electronics and eco-friendly electric vehicles (EVs). Semi-crystalline poly(vinylidene fluoride) (PVDF), which is a traditional and widely used binder, cannot efficiently accommodate the volume changes observed in the anode during the charge-discharge process while binding all the components in the electrode together, which results in increased internal cell resistance, detachment of the electrode components, and capacity fading. Herein, we have investigated a highly polar and elastomeric polyacrylonitrile-butadiene (NBR) rubber for use as a binder in LIBs, which can accommodate graphite particles of different shapes compared to semi-crystalline PVDF. Prior to our electrochemical tests, NBR was analyzed using thermogravimetric analysis (TGA) and X-ray diffraction (XRD), showing good thermal stability and an amorphous morphology. NBR is more conformable to irregular surfaces, which results in the formation of a homogeneous passivation layer on both spherical and flaky graphite particles to effectively suppress any electrolyte side reactions, further allowing more uniform and fast Li ion diffusion at the electrolyte/electrolyte interface. As a result, the electrochemical performance of both spherical and flaky shape graphite electrodes was significantly improved in terms of their first cycle Coulombic efficiency (CE) and cycle stability. With comparative specific capacity, the first cycle CE of the NBR-based spherical and flaky graphite electrodes were 87.0% and 85.5%, compared to 85.3% and 82.6% observed for their corresponding PVDF-based electrodes, respectively. After 1000 discharge-charge cycles at 1C, the capacity retention of the NBR-based graphite electrodes was significantly higher than that of PVDF-based electrodes. This was attributed to the good stability of the solid electrolyte interphase (SEI) formed on the graphite electrodes and the high stretching ability of the elastomeric NBR binder, which help to accommodate the repeated volume fluctuation of graphite observed during long-term charge-discharge cycling. Electrochemical impedance spectroscopy (EIS) and microscopic analysis (SEM and TEM) were carried out to investigate the formation and evolution of the SEI layers formed on the spherical and flaky graphite electrodes. The results show that thin, homogeneous, and stable SEI layers are formed on the surface of both spherical and flaky graphite electrodes prepared using the NBR binder. When compared to the PVDF-based graphite electrodes, the graphite electrodes constructed using NBR showed decreased resistance in the SEI layer and faster charge transfer, thus enhancing the electrode kinetics for Li ion intercalation/deintercalation. Our study shows that the electrochemical performance of spherical and flaky graphite electrodes prepared using the NBR binder is significantly improved, demonstrating that NBR is a promising binder for these electrodes in LIBs.
MSC2000:
• O646
http://math.stackexchange.com/questions/79608/prove-that-the-numerator-of-h-p-1-in-reduced-form-is-a-multiple-of-p-for
# Prove that the numerator of $H_{p-1}$ in reduced form is a multiple of $p$ for $p$ an odd prime
Prove that for any odd prime $p$ $$H_{p-1}=1+\frac{1}{2} + \cdots + \frac{1}{p-1}$$ contains a multiple of $p$ in the numerator when written in reduced form, i.e. $\frac{a}{b}$ where $\mathrm{gcd}(a,b)=1$.
I understand you're simply quoting a math problem, but without writing your own thoughts anywhere around it, or otherwise framing it in some way (e.g. blockquotes), what you've literally posted is a command to us - not polite at all. Biting the hand that feeds and so on. – anon Nov 6 '11 at 20:10
Thanks for the acceptance (within one minute of posting the answer!), but see this thread on meta meta.math.stackexchange.com/questions/2553/… . I think the software allows un-accepting an answer and choosing later which one (if any) to accept. – zyx Nov 6 '11 at 20:36
## 3 Answers
Because $p$ is odd, the indices in the sum can be grouped into $(p-1)/2$ pairs $\{ i , p-i \}$, and in each pair the sum $\frac{1}{i}+\frac{1}{p-i}=\frac{p}{i(p-i)}$ has numerator divisible by $p$ while its denominator is coprime to $p$.
Stronger statements mod $p^2$ and $p^3$ are known as Wolstenholme's congruences.
http://en.wikipedia.org/wiki/Wolstenholme%27s_theorem
The operation of reciprocation in $\mathbb{Z}/p \mathbb{Z}$ is a one-to-one map, thus reciprocals of all positive integers $1,2,\ldots,p-1$ would be permutations thereof. The sum of permutated numbers is the same as the sum of ordered numbers, i.e. $$\sum_{k=1}^{p-1} \frac{1}{k} \equiv \sum_{i=1}^{p-1} i \equiv p \cdot \frac{p-1}{2} \equiv 0 \mod p$$
The assumption that $p$ is odd is not used in the first sentence, but it is necessary for the last step. – zyx Nov 6 '11 at 20:45
@zyx The first sentence only uses that $p$ is prime, but the last equality follows for odd primes only. This was implied in my answer. Thanks for spelling this out. – Sasha Nov 6 '11 at 20:53
The answer is of course correct, the point was only that since the conclusion is false for $p=2$ while the first sentence is true for all $p$, it is interesting to "localize" where the difficulty with $2$ occurs (or at least, that question came to mind while reading the answer). Every line of the proof works for all $p$, except that $p(p-1)/2$ is no longer divisible by $p$ when $p=2$. – zyx Nov 7 '11 at 6:24
Write it as $$H_{p-1} = \frac{\frac{(p-1)!}{1} + \dots + \frac{(p-1)!}{p-1}}{(p-1)!}.$$ Then the denominator is not divisible by $p$, so it is enough to prove that the numerator is. Now $\mathbb{Z}_p$ is a field so we have actually $$\frac{(p-1)!}{1} + \dots + \frac{(p-1)!}{p-1} = (p-1)!(1 + \dots + (p-1)) = 1 + \dots + (p-1) = \frac{p(p-1)}{2}$$ which is $0$.
Here we have repeatedly used the fact that $\mathbb{Z}_p \setminus \{0\}$ is a group under multiplication. Taking inverses or multiplying by a group element are bijective operations.
@zyx: Thanks. I was writing the answer a bit hastily, and Wilson's theorem just appeared to me for some reason. Edited. – J. J. Nov 6 '11 at 20:53
https://mathhelpboards.com/threads/solution-of-pde-derivatives.24300/
# [SOLVED]Solution of PDE - Derivatives
#### mathmari
Hey!!
I want to verify that $$w(x,t)=\frac{1}{2c}\int_0^t\int_{c(t-\tau)-x}^{x+c(t-\tau)}f(y,\tau)dyd\tau$$ is the solution of the problem $$w_{tt}=c^2w_{xx}+f(x,t) , \ \ x>0, t>0 \\ w(x,0)=w_t(x,0)=0, \ \ x>0 \\ w(0,t)=0 , \ \ t\geq 0$$ For that we have to take the partial derivatives of $w$. But how can we do that in this case, for example with respect to $t$, which appears in the limits of both integrals? Could you give me a hint?
#### Klaas van Aarsen
Hey mathmari !!
How about defining $g(x,t,\tau) = \int_{c(t-\tau)-x}^{x+c(t-\tau)}f(y,\tau)dy$, and then differentiating one integral at a time using Leibniz's integral rule?
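Before differentiating in general, the formula can be sanity-checked on a concrete right-hand side; a short SymPy sketch with the illustrative choice $f(y,\tau)=y\tau$ (my choice, not from the thread), for which the double integral evaluates to $w = xt^3/6$ and the PDE holds:

```python
import sympy as sp

x, t, y, tau, c = sp.symbols('x t y tau c', positive=True)
f = y * tau  # illustrative right-hand side

w = sp.integrate(sp.integrate(f, (y, c*(t - tau) - x, x + c*(t - tau))),
                 (tau, 0, t)) / (2 * c)
print(sp.simplify(w))  # x*t**3/6

residual = sp.diff(w, t, 2) - c**2 * sp.diff(w, x, 2) - f.subs({y: x, tau: t})
print(sp.simplify(residual))  # 0, i.e. w_tt = c^2 w_xx + f for this f
```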
#### mathmari
How about defining $g(x,t,\tau) = \int_{c(t-\tau)-x}^{x+c(t-\tau)}f(y,\tau)dy$, and then differentiating one integral at a time using Leibniz's integral rule?
We have the following partial derivatives, or not?
\begin{align*}w_{t}&=\frac{\partial}{\partial{t}}\left [\frac{1}{2c}\int_0^tg(x,t,\tau )d\tau\right ]=\frac{1}{2c}g(x,t,t)+\frac{1}{2c}\int_0^tg_t(x,t,\tau )d\tau\\ & =\frac{1}{2c}\int_{-x}^xf(y,\tau)dy+\frac{1}{2c}\int_0^tg_t(x,t,\tau )d\tau\end{align*}
\begin{align*}w_{tt}&=\frac{1}{2c}\int_{-x}^xf_t(y,\tau)dy+\frac{1}{2c}\frac{\partial}{\partial{t}}\int_0^tg_t(x,t,\tau )d\tau\\ & =\frac{1}{2c}\int_{-x}^xf_t(y,\tau)dy+\frac{1}{2c}g_t(x,t,t)+\frac{1}{2c}\int_0^tg_{tt}(x,t,\tau )d\tau\end{align*}
\begin{align*}&w_x=\frac{\partial}{\partial{x}}\left [\frac{1}{2c}\int_0^tg(x,t,\tau )d\tau\right ]=\frac{1}{2c}\int_0^tg_x(x,t,\tau )d\tau \\ &w_{xx}=\frac{\partial}{\partial{x}}\left [\frac{1}{2c}\int_0^tg_x(x,t,\tau )d\tau\right ]=\frac{1}{2c}\int_0^tg_{xx}(x,t,\tau )d\tau \end{align*}
Is everything correct so far? Now we have to calculate the derivatives of $g$, right?
#### Klaas van Aarsen
##### MHB Seeker
Staff member
I think it should be $f(y,t)$ and $f_t(y,t)$, shouldn't it?
Otherwise I believe it's all correct.
#### mathmari
##### Well-known member
MHB Site Helper
I think it should be $f(y,t)$ and $f_t(y,t)$, shouldn't it?
Otherwise I believe it's all correct.
Ah ok!!
We have the following partial derivatives of $g$, or not?
\begin{align*}g_t&=\frac{\partial}{\partial{t}}\int_{c(t-\tau)-x}^{x+c(t-\tau)}f(y,\tau )dy\\ & =c\cdot f(x+c(t-\tau),\tau)-c\cdot f(c(t-\tau)-x, \tau)+\int_{c(t-\tau)-x}^{x+c(t-\tau)}f_t(y,\tau )dy\end{align*}
\begin{align*}g_{tt}&=c\cdot f_x(x+c(t-\tau),\tau)\cdot \frac{d}{dt}[x+c(t-\tau)]-c\cdot f_x(c(t-\tau)-x, \tau)\cdot \frac{d}{dt}[c(t-\tau)-x]+\frac{\partial}{\partial{t}}\int_{c(t-\tau)-x}^{x+c(t-\tau)}f_t(y,\tau )dy\\ & =c^2\cdot f_x(x+c(t-\tau),\tau)-c^2\cdot f_x(c(t-\tau)-x, \tau)+f_t(x+c(t-\tau),\tau )\cdot c-f(c(t-\tau)-x, \tau)\cdot c+\int_{c(t-\tau)-x}^{x+c(t-\tau)}f_t(y,\tau)dy\end{align*}
\begin{align*}g_x&=\frac{\partial}{\partial{x}}\int_{c(t-\tau)-x}^{x+c(t-\tau)}f(y,\tau )dy\\ & = f(x+c(t-\tau),\tau)-(-1)\cdot f(c(t-\tau)-x, \tau)+\int_{c(t-\tau)-x}^{x+c(t-\tau)}f_x(y,\tau )dy\\ & = f(x+c(t-\tau),\tau)+ f(c(t-\tau)-x, \tau)+\int_{c(t-\tau)-x}^{x+c(t-\tau)}f_x(y,\tau )dy\end{align*}
\begin{align*}g_{xx}&= f_x(x+c(t-\tau),\tau)+ f_x(c(t-\tau)-x, \tau)\cdot (-1)+\frac{\partial}{\partial{x}}\int_{c(t-\tau)-x}^{x+c(t-\tau)}f_x(y,\tau )dy\\ & = f_x(x+c(t-\tau),\tau)- f_x(c(t-\tau)-x, \tau)+f_x(x+c(t-\tau),\tau )-f_x(c(t-\tau)-x, \tau)+\int_{c(t-\tau)-x}^{x+c(t-\tau)}f_{xx}(y,\tau )dy \\ & = 2f_x(x+c(t-\tau),\tau)- 2f_x(c(t-\tau)-x, \tau)+\int_{c(t-\tau)-x}^{x+c(t-\tau)}f_{xx}(y,\tau )dy\end{align*}
#### mathmari
##### Well-known member
MHB Site Helper
I must have made some mistakes... because if these derivatives were correct, we would get
\begin{align*}w_{xx}&=\frac{1}{2c}\int_0^tg_{xx}(x,t,\tau)d\tau\\ & =\frac{1}{2c}\int_0^t\left [2f_x(x+c(t-\tau),\tau)- 2f_x(c(t-\tau)-x, \tau)+\int_{c(t-\tau)-x}^{x+c(t-\tau)}f_{xx}(y,\tau )dy\right ]d\tau\end{align*}
\begin{align*}w_{tt}&=\frac{1}{2c}\int_{-x}^xf_t(y,t)dy+\frac{1}{2c}g_t(x,t,t)+\frac{1}{2c}\int_0^tg_{tt}(x,t,\tau )d\tau \\ & = \frac{1}{2c}\int_{-x}^xf_t(y,t)dy+\frac{1}{2c}\left [c\cdot f(x,t)-c\cdot f(-x, t)+\int_{-x}^{x}f_t(y,t )dy\right ]+\frac{1}{2c}\int_0^t\left [c^2\cdot f_x(x+c(t-\tau),\tau)-c^2\cdot f_x(c(t-\tau)-x, \tau)+f_t(x+c(t-\tau),\tau )\cdot c-f(c(t-\tau)-x, \tau)\cdot c+\int_{c(t-\tau)-x}^{x+c(t-\tau)}f_t(y,\tau)dy\right ]d\tau \\ & = \frac{1}{c}\int_{-x}^xf_t(y,t)dy+\frac{f(x,t)-f(-x,t)}{2}+\frac{1}{2c}\int_0^t\left [c^2\cdot f_x(x+c(t-\tau),\tau)-c^2\cdot f_x(c(t-\tau)-x, \tau)+f_t(x+c(t-\tau),\tau )\cdot c-f(c(t-\tau)-x, \tau)\cdot c+\int_{c(t-\tau)-x}^{x+c(t-\tau)}f_t(y,\tau)dy\right ]d\tau\end{align*}
But these don't satisfy the problem, do they?
#### mathmari
##### Well-known member
MHB Site Helper
I am trying it again.
I have done the following:
\begin{equation*}w(x,t)=\frac{1}{2c}\int_0^tg(x,t,\tau )d\tau \end{equation*}
\begin{align*}w_t&=\frac{1}{2c}g(x,t,t)+\frac{1}{2c}\int_0^tg_t(x,t,\tau )d\tau \\
& =\frac{1}{2c}g(x,t,t)+\frac{1}{2c}\int_0^t\frac{\partial}{\partial{t}}\int_{c(t-\tau)-x}^{x+c(t-\tau)}f(y,\tau )dyd\tau\\ & = \frac{1}{2c}\int_{-x}^xf(y,t)dy+\frac{1}{2c}\int_0^t\left [f(x+c(t-\tau ),\tau)\cdot c-f(c(t-\tau )-x,\tau)\cdot c+\int_{c(t-\tau)-x}^{x+c(t-\tau)}f_t(y,\tau )dy\right ]d\tau \\ & = \frac{1}{2c}\int_{-x}^xf(y,t)dy+\frac{1}{2}\int_0^tf(x+c(t-\tau ),\tau)d\tau-\frac{1}{2}\int_0^tf(c(t-\tau )-x,\tau)d\tau+\frac{1}{2c}\int_0^t\int_{c(t-\tau)-x}^{x+c(t-\tau)}f_t(y,\tau )dyd\tau\end{align*}
Is the first derivative of $w$ with respect to $t$ correct?
#### Klaas van Aarsen
##### MHB Seeker
Staff member
I am trying it again.
I have done the following:
\begin{equation*}w(x,t)=\frac{1}{2c}\int_0^tg(x,t,\tau )d\tau \end{equation*}
\begin{align*}w_t&=\frac{1}{2c}g(x,t,t)+\frac{1}{2c}\int_0^tg_t(x,t,\tau )d\tau \\
& =\frac{1}{2c}g(x,t,t)+\frac{1}{2c}\int_0^t\frac{\partial}{\partial{t}}\int_{c(t-\tau)-x}^{x+c(t-\tau)}f(y,\tau )dyd\tau\\ & = \frac{1}{2c}\int_{-x}^xf(y,t)dy+\frac{1}{2c}\int_0^t\left [f(x+c(t-\tau ),\tau)\cdot c-f(c(t-\tau )-x,\tau)\cdot c+\int_{c(t-\tau)-x}^{x+c(t-\tau)}f_t(y,\tau )dy\right ]d\tau \\ & = \frac{1}{2c}\int_{-x}^xf(y,t)dy+\frac{1}{2}\int_0^tf(x+c(t-\tau ),\tau)d\tau-\frac{1}{2}\int_0^tf(c(t-\tau )-x,\tau)d\tau+\frac{1}{2c}\int_0^t\int_{c(t-\tau)-x}^{x+c(t-\tau)}f_t(y,\tau )dyd\tau\end{align*}
Is the first derivative of $w$ with respect to $t$ correct?
I think this is correct yes.
And we can simplify it a bit more, since $f(y,\tau)$ does not depend on $t$.
Therefore $\frac{\partial}{\partial t} f(y,\tau)=0$.
#### mathmari
##### Well-known member
MHB Site Helper
I think this is correct yes.
And we can simplify it a bit more, since $f(y,\tau)$ does not depend on $t$.
Therefore $\frac{\partial}{\partial t} f(y,\tau)=0$.
Ok! In the next step we have \begin{align*}w_{tt}&= \frac{1}{2c}\int_{-x}^xf_t(y,t)dy+\frac{1}{2}f(x,t)+\frac{1}{2}\int_0^t\frac{\partial}{\partial{t}}f(x+c(t-\tau ),\tau)d\tau-\frac{1}{2}f(-x,t)-\frac{1}{2}\int_0^t\frac{\partial}{\partial{t}}f(c(t-\tau )-x,\tau)d\tau\end{align*} Is the following correct? \begin{equation*}\frac{\partial}{\partial{t}}f(x+c(t-\tau ),\tau)=f_x(x+c(t-\tau ),\tau)\cdot \frac{d(x+c(t-\tau))}{dt}+f_t(x+c(t-\tau ),\tau)\cdot \frac{d\tau}{dt}\end{equation*} Or do we not use the chain rule here?
#### mathmari
##### Well-known member
MHB Site Helper
If this is correct, then we get \begin{align*}w_{tt}&= \frac{1}{2c}\int_{-x}^xf_t(y,t)dy+\frac{1}{2}f(x,t)+\frac{c}{2}\int_0^tf_x(x+c(t-\tau ),\tau)d\tau-\frac{1}{2}f(-x,t)-\frac{c}{2}\int_0^tf_x(c(t-\tau )-x,\tau)d\tau\end{align*}
The second derivative with respect to $x$ (if I have made no mistakes) is \begin{align*}w_{xx}&=\frac{1}{2c}\int_0^tf_x(x+c(t-\tau ),\tau)d\tau-\frac{1}{2c}\int_0^tf_x(c(t-\tau )-x,\tau)d\tau\end{align*}
In $w_{tt}$, is it correct that we get $f(x,t)$ once and $f(-x,t)$ once?
#### Klaas van Aarsen
##### MHB Seeker
Staff member
Is the following correct? \begin{equation*}\frac{\partial}{\partial{t}}f(x+c(t-\tau ),\tau)=f_x(x+c(t-\tau ),\tau)\cdot \frac{d(x+c(t-\tau))}{dt}+f_t(x+c(t-\tau ),\tau)\cdot \frac{d\tau}{dt}\end{equation*} Or do we not use the chain rule here?
I believe that it should be:
\begin{align*}\frac{\partial}{\partial{t}}f(x+c(t-\tau ),\tau)&=f_x(x+c(t-\tau ),\tau)\cdot \frac{\partial(x+c(t-\tau))}{\partial t}+f_t(x+c(t-\tau ),\tau)\cdot \frac{\partial\tau}{\partial t} \\
&=f_x(x+c(t-\tau ),\tau)\cdot c
\end{align*}
shouldn't it?
#### mathmari
##### Well-known member
MHB Site Helper
I believe that it should be:
\begin{align*}\frac{\partial}{\partial{t}}f(x+c(t-\tau ),\tau)&=f_x(x+c(t-\tau ),\tau)\cdot \frac{\partial(x+c(t-\tau))}{\partial t}+f_t(x+c(t-\tau ),\tau)\cdot \frac{\partial\tau}{\partial t} \\
&=f_x(x+c(t-\tau ),\tau)\cdot c
\end{align*}
shouldn't it?
Ah, ok, with partial derivatives rather than total derivatives. But the result is the same as mine!
I am not really sure about the terms $f(x,t)$ and $f(-x,t)$, and also about the term $\frac{1}{2c}\int_{-x}^xf_t(y,t)dy$.
If that integral were $0$, and if we had $f(x,t)$ twice instead of $f(x,t)$ once and $f(-x,t)$ once, then $w$ would satisfy the problem, wouldn't it?
Do we maybe use the fact that in the problem statement $x>0$?
#### mathmari
##### Well-known member
MHB Site Helper
Are maybe the limits of the integrals wrong? These limits should describe the following set: [figure omitted: the region of integration]
Are the limits that I used in the integral for $w(x,t)$ correct?
#### Klaas van Aarsen
##### MHB Seeker
Staff member
I believe you've already found that it should be:
$$w(x,t)=\frac{1}{2c}\int_0^{t-\frac{x}{c}}\int_{c(t-\tau)-x}^{x+c(t-\tau)}f(y,\tau)dyd\tau+\frac{1}{2c}\int_{t-\frac{x}{c}}^t\int_{x-c(t-\tau)}^{x+c(t-\tau)}f(y,\tau)dyd\tau$$
It also means that some of the steps are not correct yet.
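This corrected formula can be checked symbolically; a minimal SymPy sketch (the source term $f(y,\tau) = y^2$ is a hypothetical choice, and the check covers the PDE and the boundary condition in the region $x < ct$, not the initial conditions):

```python
import sympy as sp

x, t, y, tau, c = sp.symbols('x t y tau c', positive=True)
f = y**2  # hypothetical source term, chosen so the two tau-ranges differ

w = (sp.integrate(f, (y, c*(t - tau) - x, x + c*(t - tau)), (tau, 0, t - x/c))
     + sp.integrate(f, (y, x - c*(t - tau), x + c*(t - tau)), (tau, t - x/c, t))) / (2*c)

pde_residual = sp.diff(w, t, 2) - c**2*sp.diff(w, x, 2) - f.subs({y: x, tau: t})
print(sp.simplify(pde_residual))   # 0, i.e. w_tt = c^2 w_xx + f
print(sp.simplify(w.subs(x, 0)))   # 0, i.e. w(0, t) = 0
```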
|
2021-06-19 22:31:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9926578998565674, "perplexity": 9790.408942013733}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487649731.59/warc/CC-MAIN-20210619203250-20210619233250-00314.warc.gz"}
|
http://wangchaofeng.com/parseNote.php?note=notes_programming_ruby_rails_spring.note
|
Spring
# Intro
Spring is a Rails application preloader. It speeds up development by keeping your application running in the background so you don't need to boot it every time you run a test, rake task or migration.
Spring makes extensive use of Process.fork, so it won't be able to provide a speed-up on platforms that don't support forking (Windows, JRuby).
## Installation
# This generates a bin/spring executable, and inserts a small snippet of code into relevant existing executables, e.g. bin/rake, bin/rails.
bundle exec spring binstub --all
bin/spring status
bin/spring stop
|
2018-12-12 16:44:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19106203317642212, "perplexity": 7501.019166389995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824059.7/warc/CC-MAIN-20181212155747-20181212181247-00274.warc.gz"}
|
https://dagfans.org/paper/2018/04/04/23-SPECTRE-3-SPECTRE-VS-BITCOIN-OVERVIEW.html
|
2018.04.04
# 3. SPECTRE VS BITCOIN – Overview
SPECTRE adopts many of Bitcoin's solution features. In particular, miners create blocks, which are batches of transactions. A valid block must contain a solution to the PoW puzzle (Bitcoin, for example, uses PoW based on partial SHA256 collisions). The block creation rate, denoted $\lambda$, is kept constant by the protocol through occasional readjustments of the PoW difficulty; we elaborate on this mechanism in SPECTRE in Appendix D. The size of a block is limited to some $B$ KB.
(Translator's note: the "partial collisions" here mean that the generated hash value is not required to match in every bit; matching in part suffices.)
Bitcoin's throughput can be increased by increasing the block size limit (which in turn increases $D$) and/or the block creation rate $\lambda$. Alas, it is well established that the security threshold of Nakamoto Consensus deteriorates as $D \cdot \lambda$ increases:
### Theorem 2. [Bitcoin is not scalable] The security threshold of the Bitcoin protocol goes to zero as $D \cdot \lambda$ increases.
The proof of this theorem appears in various forms in previous works; see [18], [15], [7]. To maintain a high security threshold, Bitcoin suppresses its throughput by keeping $\lambda$ low – 1/600 blocks per second. This large safety margin is needed because $\lambda$ (and $B$) are decided once and for all at the inception of the protocol. Consequently, even when the network is healthy and $D$ is low, Bitcoin suffers from a low throughput – 3 to 7 transactions per second – and slow confirmation times – tens of minutes. In contrast, SPECTRE's throughput can be increased without deteriorating the security threshold:
### Theorem 3. [SPECTRE is scalable] For any $D \cdot \lambda$, the security threshold of SPECTRE is 50%.
Therefore, in the context of the Distributed Algorithms literature, SPECTRE falls into the partially synchronous setup, as it remains secure for any value of $D$. Theorem 3 is proven rigorously in Appendix E.
Of course, $\lambda$ cannot be increased indefinitely, or else the network will be flooded with messages (blocks) and become congested. Theorem 3 "lives" in the theoretical framework (specified in Section 2), which does not model the limits on nodes' bandwidth and network capacity. Practically, these barriers still allow a throughput of thousands of transactions per second, by setting $\lambda = 10$ and $B = 100$, for instance. For further discussion refer to Appendices B and D.
Asymptotically, SPECTRE's confirmation times are in $\mathcal{O}\left(\frac{\ln(1/\epsilon)}{\lambda(1-2\alpha)}+\frac{D}{1-2\alpha}\right)$. In practice, this allows for confirmation times of mere seconds under normal network conditions. When running RobustTxO, each node in SPECTRE uses its own upper bound on the recent $D$ in the network. This bound affects only its own operation: underestimating $D$ will result in premature acceptance of transactions, while overestimating it by far will delay acceptance unnecessarily (by a time linear in the difference). Importantly, in case of network hiccups and long network delays, the node can switch, in its local client, to a more conservative bound on $D$ without coordinating this with other nodes.
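As a rough numeric illustration of this bound (a sketch only: the parameter values are hypothetical and the constants hidden by the O-notation are set to 1, so the output is not a figure from the paper):

```python
import math

def confirmation_time_bound(eps, lam, alpha, D, k1=1.0, k2=1.0):
    # O(ln(1/eps)/(lam*(1 - 2*alpha)) + D/(1 - 2*alpha)); k1 and k2 stand in
    # for the constants hidden by the O-notation.
    return k1*math.log(1/eps)/(lam*(1 - 2*alpha)) + k2*D/(1 - 2*alpha)

# e.g. lambda = 10 blocks/s, a 25% attacker, D = 5 s, acceptance error 1e-6:
print(confirmation_time_bound(eps=1e-6, lam=10, alpha=0.25, D=5))  # ~12.8 s
```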
|
2020-10-30 20:42:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6756701469421387, "perplexity": 2573.7129262935327}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107911229.96/warc/CC-MAIN-20201030182757-20201030212757-00049.warc.gz"}
|
https://apluspal.com/tag/fm-reciprocal-transformation/
|
# FM Reciprocal Transformation
## 3.5 Introduction to Data Transformations
### Linearization
• So far we have only looked at methods of analysing linear associations and not non-linear associations. Luckily, linearization provides a convenient way of transforming non-linear associations into linear ones so that they can be analysed using the same methods.
• Linearization works by applying a transformation of some form to either the explanatory and/or response variables datasets. In Further Maths, you will only deal with situations requiring one of the datasets to be transformed at a time.
• Keep in mind that the formula for the linearised model must include the transformation (e.g. the formula for a model in which the explanatory variable has undergone a square transformation will be of the form y = a + bx^2).
### Square Transformation
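As a quick illustration of the square transformation (a sketch with synthetic data; the numbers and the use of numpy are illustrative assumptions, not from these notes), squaring the explanatory variable turns a y = a + bx^2 association into a straight line that a least-squares fit recovers:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 2.0 + 3.0*x**2               # synthetic data following y = a + b*x^2

b, a = np.polyfit(x**2, y, 1)    # regress y on the transformed variable x^2
print(a, b)                      # recovers a = 2.0, b = 3.0
```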
|
2021-06-16 20:56:56
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8368908166885376, "perplexity": 1179.1961145811185}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487626008.14/warc/CC-MAIN-20210616190205-20210616220205-00440.warc.gz"}
|
https://www.abragas.com.br/8aa65p7/d955f5-strong-acid-weak-base-titration-required-practical
|
## Strong Acid Weak Base Titration Required Practical
### 26 January 2021, at 3:11
This required practical investigates how the pH of a solution of a weak acid (ethanoic acid) changes as a strong base (sodium hydroxide solution) is added.

### Titration basics

A titration is a controlled chemical reaction between two different solutions. A solution of known concentration (the titrant) is placed in a burette and administered to the solution being analysed (the analyte). The equivalence point is reached when equal moles of acid and base have reacted; it is determined by the stoichiometry of the reaction. The endpoint is simply the colour change of the indicator, so the two are not exactly the same. For applications requiring precise measurement of pH a pH meter is used, while pH indicator papers (e.g. hydrion papers) suffice when only rough knowledge of the pH is needed.

The strengths of the acid and base determine the pH at the equivalence point:

- A strong acid with a strong base gives a neutral (pH = 7) solution.
- A strong acid with a weak base gives an acidic (pH < 7) solution, because the conjugate acid produced during the titration reacts with water to produce hydronium ($H_3O^+$) ions.
- A weak acid with a strong base gives a basic (pH > 7) solution, because the conjugate base produced during the titration reacts with water to produce hydroxide ions.

### Titration of a weak acid with a strong base

Here the titrant is the strong base, the analyte is the weak acid, and protons are transferred directly from the weak acid to the hydroxide ion. For acetic (ethanoic) acid:

$HC_2H_3O_2 + OH^- \rightarrow H_2O + C_2H_3O_2^-$

The acetate ion formed then reacts with water, $C_2H_3O_2^- + H_2O \rightleftharpoons HC_2H_3O_2 + OH^-$, which is why the equivalence point lies above pH 7. Several characteristics are seen in all titration curves of a weak acid with a strong base:

- The initial pH, before any base is added, is higher (less acidic) than in the titration of a strong acid.
- After small additions of base the pH changes only slowly and gradually, because the solution now contains both the weak acid HA and its conjugate base A− and acts as a buffer. In this region the Henderson–Hasselbalch equation applies, $pH = pK_a + \log\frac{[A^-]}{[HA]}$, valid while the ratio $[A^-]/[HA]$ lies between about 0.10 and 10 (a range of concentration ratios of approximately 100, or about two pH units).
- At the half-neutralization point $[A^-] = [HA]$, their ratio is 1 and $\log(1) = 0$, so $pH = pK_a$.
- At the equivalence point all of the acid HA has been converted to A−, the equilibrium $A^- + H_2O \rightleftharpoons HA + OH^-$ produces hydroxide, and the pH is greater than 7. The steep portion of the curve prior to the equivalence point is short.
- Beyond the equivalence point the added base overcomes the buffer's capacity and the curve becomes typical of a titration of, for example, NaOH and HCl.

### Worked examples

Unknown concentration: a 25.00 mL HCl sample requires 40.00 mL of 0.450 M NaOH to reach the equivalence point. Moles of NaOH = 0.450 mol/L × 0.0400 L = 0.018 mol; the balanced equation is 1:1, so there are 0.018 mol of HCl and its molarity is 0.018 mol / 0.025 L = 0.72 M.

Locating the equivalence point: titrating 15 mL of 0.15 M CH3COOH (2.25 mmol) with 0.1 M NaOH requires 2.25 mmol / 0.1 M = 22.5 mL of NaOH, so the equivalence point occurs after the addition of 22.5 mL.

Following the curve: titrate 25 mL of 0.3 M HF ($K_a = 6.6 \times 10^{-4}$, i.e. 7.50 mmol of HF) with 0.3 M NaOH.

- After 10 mL of NaOH (3.0 mmol OH−): 4.5 mmol HF and 3.0 mmol F− remain in 35 mL, so [HF] = 0.1287 M and [F−] = 0.0857 M, and $pH = pK_a + \log(0.0857/0.1287) \approx 3.0$.
- After 12.50 mL: the half-neutralization point (3.75 mmol each of HF and F− in 37.50 mL, both 0.1 M), so $pH = pK_a \approx 3.18$.
- After 25 mL: the equivalence point. All the HF has become F−, which acts as a base with $K_b = \frac{1.0 \times 10^{-14}}{6.6 \times 10^{-4}} = 1.515 \times 10^{-11}$. Solving $\frac{x^2}{0.15 - x} = 1.515 \times 10^{-11}$ gives $x = [OH^-] \approx 1.51 \times 10^{-6}$ M (the negative root of the quadratic is rejected because a concentration cannot be negative), so the pH is slightly above 7, at about 8.2.
- After 26 mL: 7.8 mmol of OH− has been added, so 0.3 mmol is in excess in a total volume of 51 mL; [OH−] = 0.00588 M and pH = 14 − pOH ≈ 11.8.

### Indicators

An indicator is itself a weak acid (or weak base) whose dissociated and undissociated forms have different colours: $HIn \rightleftharpoons H^+ + In^-$. Adding acid shifts this equilibrium to the left; adding base shifts it to the right. For methyl orange ($K_a = 1.6 \times 10^{-4}$, $pK_a = 3.8$) the HIn form is red and the In− form is yellow; the two forms are present at equal concentrations when pH = 3.8, and the colour change occurs over the pH range of approximately 3–4. The transition range may shift slightly depending on the concentration of the indicator and on the temperature, and because judging the colour is subjective, indicator readings are somewhat imprecise. Sometimes a blend of different indicators is used to achieve several smooth colour changes over a wide range of pH values.

Choose the indicator whose transition range brackets the pH expected at the equivalence point. For a weak acid titrated with a strong base the equivalence point is basic, so phenolphthalein, which is clear in acidic solutions and changes colour in a basic pH range, is a good choice; for a strong acid titrated with a weak base, methyl orange would be a good choice.

### Polyprotic acids

Monoprotic acids donate one proton per molecule; mineral-acid examples include hydrochloric acid (HCl) and nitric acid (HNO3), while for organic acids the term usually indicates a single carboxylic acid group (monocarboxylic acids). Polyprotic (polybasic) acids can donate more than one proton, each donation proceeding with its own $K_a$; each successive loss is less favourable, and all of the conjugate bases are present in solution. Diprotic oxalic acid (also called ethanedioic acid) titrated with sodium hydroxide is neutralized stepwise, so its titration curve shows two distinct neutralization points. Triprotic orthophosphoric acid (H3PO4, usually just called phosphoric acid) can successively lose all three protons to yield H2PO4−, then HPO42−, and finally the phosphate ion PO43−; citric acid can likewise lose three protons to form the citrate ion. MES, an abbreviation for 2-(N-morpholino)ethanesulfonic acid, is another example of a weak acid, with $pK_a = 6.27$.

### Method

1. Rinse a burette with 0.1 mol dm−3 NaOH and then fill it with the alkali.
2. Pipette 25.0 cm3 of ethanoic acid into a 100 cm3 beaker.
3. Add the NaOH from the burette in small portions, measuring the pH after each addition.

Centres may choose to use other weak acid/strong base combinations, or strong acid/weak base combinations. This practical corresponds to AQA required practical 9 (use acid–base indicators in titrations of weak/strong acids with weak/strong alkalis) and builds on required practical 1 (make up a volumetric solution and carry out a simple acid–base titration using a burette, pipette and a suitable indicator).
|
2021-09-25 13:40:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6153871417045593, "perplexity": 3611.285577874076}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057622.15/warc/CC-MAIN-20210925112158-20210925142158-00549.warc.gz"}
|
https://business.tutsplus.com/tutorials/presentations-101-the-absolute-basics-of-making-a-presentation--cms-19551
|
# Presentations 101: The Absolute Basics of Making a Presentation

This post is part of a series called Presentation Fundamentals.

Presentations don’t require PowerPoint, Keynote, or any specific app. They don’t require a projector, a laser pointer, or a long stick. And they definitely don’t require bullet points, animations, and soundtracks. All they require is the info you want to share, simplified to show one bit of info per screen. That’s it. There’s no reason that making a presentation should be a daunting process. Here’s everything you’ll need to make a perfectly good presentation, in any app you have on your computer.

## The Presentation Basics

Making a presentation can feel intimidating, since the best look so polished they’d require an art degree to make, and the worst cram so much information into a slide deck that they seem like they’d take forever to put together. And yet, neither should be that intimidating. The PowerPoints of today are simply digital refreshes of the original overhead transparency presentations that date back to World War II and the couple-decades-newer photographic slide projectors. Both of those were, again, a refresh of another idea—a large poster you could point to with a stick while speaking. Of all things, the first version of PowerPoint wasn’t even designed for making digital presentations to be shown on a projector from your laptop. It was instead designed as a simple way to make transparencies you’d print out and then show on an overhead projector, or perhaps print on paper and show as a flip chart. That first version only had a few tools, including text and basic shapes, but it was enough for Microsoft to acquire the company that made it for $14 million.
PowerPoint and Keynote of today have far more features than that early presentations app that started it all, but the basics of a presentation haven’t changed. All you really need for a presentation is a clear, full-screen view of the text and images you want to share. Backgrounds aren’t really necessary, and more often than not are simply distracting and make the text harder to read. Animations and transitions can be nice, but they’re not necessary either, as long as you can easily shift from one slide to the next.
So all you really need to make a presentation could be the built-in Paint app on a PC. You’d add text and images, save each “slide” as an individual picture on your computer, then open them full-screen with the Photo Viewer app. Voila, you’ve got a full presentation. You could do the same thing with practically any graphics app, and—with somewhat worse results—could do something similar by putting large text and pictures on individual pages in any basic word processor—including the built-in apps like TextEdit and Wordpad—and a quick PDF export that’s then opened full-screen in your PDF reader. For the most basic of presentations, there’s literally no need for a specialized presentations app.
That’s why presentation features are cropping up in all types of apps you’d never expect to include presentation features. Evernote recently added a basic presentation mode that turns your notes and included images and more into a basic, clean presentation. Draft, the online writing tool, just added a similar tool to turn a plain text document into a presentation, and Deckset is a Mac app that’s coming soon for the same purpose.
You really, really don’t need that much for a presentation.
## The Stuff You Do Need
Now, all that’s needed is to make your presentation, in any app you’d like. If you have PowerPoint or Keynote, go ahead and use them—or use their free online counterparts, or Google Docs Presentations. Or, perhaps, just use any graphics app as mentioned above. Either way, the only things you need to focus on are the essentials: a decently basic background, the images and other graphics you’ll include to support your points, and—most importantly—your text. Nothing else matters.
Start with a simple slide design, and work up. A plain color, offset by a contrasting font color, is plenty. Then, if you want to include graphics, make sure they’re very clear from a distance, and then figure out where they’re going to go in your slide lineup.
Now, focus on your text, the most important part of your presentation. Guy Kawasaki famously said that PowerPoints should adhere to the 10/20/30 rule: 10 slides, shown for 20 minutes, using at least a 30 point font. The first two rules are great for not losing your audience’s attention, but the latter is crucial if you want people to be able to quickly grasp what your slides say. Use the largest font possible—far larger than 30 points works great, too—and simplify your concepts to the most basic so they can be communicated in the fewest words possible. And there’s no necessity to stick with the typical larger title and smaller bullet points on your slides. Instead, you can make each slide showcase only one idea, presented in a larger font, to keep everything from being so cluttered.
Finally, you’ll need a simple way to present. Every slide app—the web apps included—lets you take your presentation full-screen in a tap, typically via a small Present button on the bottom of the screen. If you choose to make a non-traditional presentation with individual images as slides, then just open the set of “pictures” in your photo viewer app. All you’ll need then is to tap your arrow keys to proceed through your presentation, no matter which app you’re using. You could use animations and transitions, but those aren’t necessary. What is necessary is the info you’re trying to share, and these steps are all you’ll need to do that.
There’s one more thing: the device you’re using to share your presentation. The obvious choice is a laptop connected to a projector. That’s far from the only way, though. You could play back your presentation on almost any device these days, and can make it in similarly simple tools even on a tablet or phone. The important thing—large text and images in a simple, full-screen view—works universally.
## And That’s All.
It might sound crazy, but that’s really all that’s needed for a presentation. The PowerPoint and Keynote alternatives, and even their own web apps, aren’t nearly as fancy and don’t include all the snazzy animations, charting and diagramming tools, and more that you’d perhaps expect. But then, all of that isn’t needed for a presentation.
https://plainmath.net/17096/solve-the-system-of-equations-2x-plus-3y-equal-55x%E2%88%924y-equal-2
Question
# Solve the system of equations 2x+3y=5, 5x−4y=2
Systems of equations
Solve the system of equations $$2x+3y=5$$
$$5x-4y=2$$
Given that $$2x+3y=5$$...(1)
and
$$5x-4y=2$$...(2).
From (1) $$\displaystyle{2}{x}={5}-{3}{y}\to{x}=\frac{{{5}-{3}{y}}}{{2}}$$. Putting this value of x in (2) we get
$$5 \times \frac{5-3y}{2}-4y=2$$
$$\rightarrow 5 \times (5-3y)-8y=4 \rightarrow 25-15y-8y=4 \rightarrow -23y=-21 \rightarrow y=\frac{21}{23}$$
Therefore from (1) we have $$x=\frac{5-3y}{2} =\frac{5-3(\frac{21}{23})}{2} =\frac{115-63}{46} =\frac{26}{23}$$
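A quick numerical cross-check of this result (a throwaway sketch using numpy, not part of the original solution):

```python
import numpy as np

# coefficients of 2x + 3y = 5 and 5x - 4y = 2
A = np.array([[2.0, 3.0],
              [5.0, -4.0]])
b = np.array([5.0, 2.0])

x, y = np.linalg.solve(A, b)
print(x, y)              # 1.1304..., 0.9130...
print(26 / 23, 21 / 23)  # the exact fractions above, for comparison
```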
http://mathlake.com/Rational-Numbers-To-Standard-Form
# Rational Numbers To Standard Form
Expressing a rational number in its standard form means there is no common factor, other than 1, in its numerator and denominator, and its denominator is a positive integer. Numbers that can be expressed in the form of p/q, where p and q are integers and q is not equal to zero, are known as rational numbers. Hence, if 4/6 is a rational number, then its standard form will be 2/3, since 2/3 cannot be simplified any further.
In maths, we reduce fractions to express them in standard form. A fraction has a numerator part and a denominator part, and it is in standard form when the numerator and denominator are co-prime. All integers and fractions are rational numbers. It is easy to perform addition and subtraction of rational numbers once they have been reduced to standard form. The standard form of a rational number helps us to determine its value in a more specific way: 20/25 can be expressed as 4/5, 10/20 can be expressed as 1/2, and so on.
## How to Identify if a Given Rational Number is in Standard Form?
1. Whenever we have a rational number, we first find the H.C.F. of its numerator and denominator. If it is 1, i.e. if the numerator and denominator of the rational number are coprime numbers, then the given rational number is in its standard form.
2. If the numerator and denominator are not co-prime, then we start dividing both the numerator and denominator by their common factors. We keep dividing the numerator and denominator by common factors until we get a numerator and denominator with H.C.F. equal to 1.
## How to Convert Rational Number into Standard Form?
Go through the following steps to convert the given rational number to its standard form.
Step 1: Get the given rational number.
Step 2: Check whether the denominator of the given rational number is positive or negative. If the denominator is negative, multiply or divide both the numerator and denominator by -1, so that the denominator becomes positive.
Step 3: Find the HCF of the absolute value of the numerator and denominator.
Step 4: Divide both numerator and denominator of the given rational number by the HCF value obtained in step 3.
Step 5: The resultant obtained is the standard form of the rational number.
Let us consider the following example, to have a better understanding.
Consider a rational number, 16/24. The H.C.F. of 16 and 24 is 8, which is not equal to 1, hence the given rational number is not in its standard form. Now, we know that 2 is a common factor of 16 and 24. Dividing both the numerator and denominator by 2, we get 8/12.
Again the H.C.F. is not equal to 1, so we divide by 2 again. On dividing the numerator and denominator by 2 we get 4/6. Still, we find that the H.C.F. is not equal to 1, so we again divide both the numerator and denominator by 2 and finally obtain 2/3. The H.C.F. of 2 and 3 is 1, i.e. 2 and 3 are co-prime. Hence, the rational number obtained now has numerator and denominator with H.C.F. equal to 1.
Therefore, the standard form of 16/24 = 2/3
In this way, we can convert the given rational number into its standard form.
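The whole procedure boils down to dividing the numerator and denominator by their H.C.F. after making the denominator positive. Here is a minimal sketch in Python (the function name standard_form is ours, not from the article), using math.gcd for the H.C.F.:

```python
from math import gcd

def standard_form(p, q):
    """Reduce the rational number p/q to its standard form."""
    if q == 0:
        raise ValueError("denominator must be non-zero")
    if q < 0:              # step 2: make the denominator positive
        p, q = -p, -q
    h = gcd(abs(p), q)     # step 3: H.C.F. of the absolute values
    return p // h, q // h  # step 4: divide both parts by the H.C.F.

print(standard_form(16, 24))   # (2, 3)
print(standard_form(27, -72))  # (-3, 8)
```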
### Rational Numbers to Standard Form Examples
Example 1:
Write 23/69 in standard form.
Solution:
Given, 23/69
If we take out the H.C.F. of 23 and 69, we get;
23 = 1 x 23
69 = 1 x 3 x 23
As we can see here, there is one more factor common between 23 and 69 other than 1, which is 23 itself, therefore if we cancel the common factor from numerator and denominator, we get;
23/69 = 1/3
Example 2:
Which of the following rational numbers is equivalent to 2/3?
(a) 3/2
(b) 4/9
(c) 4/6
(d) 9/4
Solution:
4 = 2 x 2
6 = 2 x 3
Since 2 is the common factor here, then we will cancel it from numerator and denominator.
Therefore, 4/6 = 2/3
## Frequently Asked Questions on Rational Numbers in Standard Form
### What is meant by rational numbers in standard form?
A rational number is said to be in standard form if the numerator and denominator have no common factor other than 1 and the denominator is a positive integer.
### How to convert the rational number to its standard form?
To convert the rational number to its standard form, start dividing both numerator and denominator by the common factor of both. If the HCF of numerator and denominator becomes 1, stop the process.
### What is the standard form of the rational number 12/18?
The standard form of the rational number 12/18 is 2/3.
### Why do we use rational numbers in standard form?
Rational numbers in the standard form help us to determine the value in a more specific way; for example, 3/15 is expressed as 1/5, and so on.
### What is the standard form of 27/-72?
The standard form of the rational number 27/-72 is -3/8.
https://www.tutorialspoint.com/program-to-count-number-of-points-that-lie-on-a-line-in-python
# Program to count number of points that lie on a line in Python
Suppose we have a list of coordinates. Each coordinate has two values x and y, representing a point on the Cartesian plane. We have to find the maximum number of points that lie on the same straight line.
So, if the input is like coordinates = [[6, 2],[8, 3],[10, 4],[1, 1],[2, 2],[6, 6],[7, 7]], then the output will be 4, as the points [1, 1], [2, 2], [6, 6], [7, 7] lie on the same line.
To solve this, we will follow these steps −
• res := 0
• for i in range 0 to size of points list, do
• (x1, y1) := points[i]
• slopes := a new map
• same := 1
• for j in range i + 1 to size of points, do
• (x2, y2) := points[j]
• if x1 is same as x2 and y1 is same as y2 (identical points), then
• same := same + 1
• otherwise when x2 is same as x1 (vertical line), then
• slopes[inf] := 1 + (slopes[inf] if it exists, otherwise 0)
• otherwise,
• slope := (y2 - y1) / (x2 - x1)
• slopes[slope] := 1 + (slopes[slope] if it exists, otherwise 0)
• if slopes is not empty, then
• res := maximum of res and (same + maximum of list of all values of slopes)
• return res
## Example
Let us see the following implementation to get a better understanding −
```python
class Solution:
    def solve(self, points):
        res = 0
        for i in range(len(points)):
            x1, y1 = points[i][0], points[i][1]
            slopes = {}
            same = 1
            for j in range(i + 1, len(points)):
                x2, y2 = points[j][0], points[j][1]
                # identical points must be handled first, otherwise they
                # would fall into the vertical-line branch below
                if x1 == x2 and y1 == y2:
                    same += 1
                elif x2 == x1:
                    # vertical line: use infinity as the slope key
                    slopes[float("inf")] = slopes.get(float("inf"), 0) + 1
                else:
                    slope = (y2 - y1) / (x2 - x1)
                    slopes[slope] = slopes.get(slope, 0) + 1
            if slopes:
                res = max(res, same + max(slopes.values()))
        return res

ob = Solution()
coordinates = [[6, 2], [8, 3], [10, 4], [1, 1], [2, 2], [6, 6], [7, 7]]
print(ob.solve(coordinates))
```
## Input
[[6, 2],[8, 3],[10, 4],[1, 1],[2, 2],[6, 6],[7, 7]]
## Output
4
http://math-mprf.org/journal/articles/id1130/
A Note on the Annealed Free Energy of the p-Spin Hopfield Model
#### H. Knopfel, M. Lowe
2007, v.13, Issue 3, 565-574
ABSTRACT
We compute the annealed free energy in the $p$-spin interaction version of the Hopfield model at high temperatures with $p\ge 3$. We show that there is a critical temperature $\tilde\beta$ depending on $p$ such that for $\beta<\sqrt{p!} \,\tilde{\beta}$ the annealed free energy of the $p$-spin Hopfield model can be computed as $(\alpha \beta^2)/ 2$. Here $\alpha =\lim_{N \to \infty} M(N)/N^{p-1}$, $M(N)$ is the number of patterns and $N$ is the number of spins. The threshold $\tilde \beta$ obeys $\lim_{p \to \infty} \tilde{\beta} = \log 2$.
Keywords: spin glasses,Hopfield model,$p$-spin models,Central Limit Theorem
https://sinepost.wordpress.com/2012/08/19/bouncing-off-the-walls/
## Learning and Applying Mathematics using Computing
### Bouncing Off The Walls
In this post we will continue building our pool game. One of the aspects of a pool game that we will need is the ability for the balls to bounce off the edges/cushions of the table (which I’ll refer to as walls).
## Simple Bouncing
If a wall is horizontal or vertical, there is a very simple way to implement bouncing off it, which many programmers figure out quite quickly. If you want to bounce off a horizontal wall (e.g. top or bottom of the screen), simply reverse your Y velocity. If you want to bounce off a vertical wall (e.g. left or right edge) then reverse your X velocity. These are actually two specific cases of the more general problem of bouncing off an arbitrarily-angled wall.
## Any Which Way But Loose
The basic principle when bouncing is that the angle at which you hit the wall (orange line) should be mirrored when you bounce off (blue line):
The dotted line protruding perpendicularly from the wall, which acts as the mirror, is called the surface “normal”. Let’s rotate the wall and incoming angle and work out what the outgoing angle should be:
So, we have the incoming angle (which is calculated using the start of the incoming orange line, at the top left). We will assume we have the angle of the normal. What we want to know is the outgoing angle. If we label the gap between the incoming/outgoing as “diff”, then it’s fairly clear that:
$\text{outgoing} = \text{normal} - \text{diff}$
So, how do we work out diff? Well, we can calculate the angle at the end of the incoming arrow quite simply: it’s 180 degrees away from incoming. Then we can see by looking at the point of impact that:
$\text{diff} = (180 + \text{incoming}) - \text{normal}$
So overall, expanding this out:
$\text{outgoing} = 2 \times \text{normal} - 180 - \text{incoming}$
It’s a simple matter to implement straight-line walls that perform this collision resolution by bouncing balls using the normal angle:
```java
for (Wall w : (List<Wall>)getObjects(Wall.class))
{
    // (the collision test against this wall was lost from the original post)
    {
        double angle = Math.toDegrees(Math.atan2(vy, vx));
        int normalAngle = w.getNormalAngle((int)newX, (int)newY, b.getRadius());
        // mirror the incoming angle about the surface normal
        angle = 2 * normalAngle - 180 - angle;
        // lose a little speed on each bounce
        double mag = 0.9 * Math.hypot(vx, vy);
        // rebuild the velocity from the outgoing angle (reconstructed;
        // the original snippet ends after computing mag)
        vx = mag * Math.cos(Math.toRadians(angle));
        vy = mag * Math.sin(Math.toRadians(angle));
    }
}
```
## Jaws
The cushions on a pool table are not solely straight-lines, however. Next to the pockets, you have rounded sections: the jaws. I’m going to conceive of these sections as quarter-circles, as that makes the maths more straightforward. So we might have a situation like this, where the ball should bounce off the jaws:
We can use the same calculation for resolving bounces as before — we just need a surface normal angle. Well, the surface normal for a circle is actually trivial: it always points away from the centre of the circle, you just have to work out where you hit the edge. So using our previous diagram, the normal angle is just the angle pointing from the centre of the jaw quarter-circle towards the centre of the ball:
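In code, finding that normal is a single atan2 call. A small illustrative sketch (Python here for brevity; the function name and the jaw/ball coordinate variables are made up for the example):

```python
import math

def jaw_bounce_angle(incoming_deg, jaw_cx, jaw_cy, ball_cx, ball_cy):
    """Reflect an incoming direction (in degrees) off a circular jaw.
    The surface normal points from the jaw-circle centre to the ball centre."""
    normal_deg = math.degrees(math.atan2(ball_cy - jaw_cy, ball_cx - jaw_cx))
    # same mirror formula as for straight walls
    return 2 * normal_deg - 180 - incoming_deg

# ball at (1, 0) touching a jaw centred at (0, 0), travelling at 45 degrees:
print(jaw_bounce_angle(45, 0, 0, 1, 0) % 360)   # 135.0
```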
## Pocketed
I’ve added some pockets to the scenario using some very simple maths: a ball goes in the pocket if its centre is over the pocket (not merely if part of the ball is over the pocket). You can have a play with the live scenario on the Greenfoot site — the cue ball heads towards the mouse when you click it. I may need to adjust the size of the pockets though, it may end up a bit hard to pocket anything!
https://www.homebuiltairplanes.com/forums/members/bigshu.103559/recent-content
Recent content by Bigshu
1. Part swaps
Unfortunately, that's well beyond my skill set. I found a guy that had a rudder post for sale (2024 T3, 56", for a vp 2) I hope it's in good condition. I've had no luck finding the right spec from a metal dealer.
2. Part swaps
I'm curious if fiberglass or carbon fiber tubes can be substituted for aluminum tubes, for spars or rudder posts, or other structural parts. It seems like the tensile strength can be comparable, but I don't know about other considerations like bending loads, etc.
3. VP1 rudder question
I checked Wicks and Aircraft spruce and Airpartsinc, and I can't find the rudder tube for a VP1. The plans (p.51) calls for 2" diameter x .058 x 55" 2024-T3 aluminum tube. Nobody seems to have it in their online catalogs...Are people substituting, or am I missing a source?
4. Aerobatic Tandem Two-Seater
What's he asking for the RV4?
5. $10 gallon Avgas
100% with you, for what you do with your aircraft. I'm just thinking we'll see an alternative to leaded fuel sooner rather than later. It might even be electricity!
6. $10 gallon Avgas
Well, the cam change is to increase the duration of the valves opening, right? And the compression ratio typically is lower in flight engines, yes? I don't know about the timing curve's effect, but 87 octane E10 gets burned in a tremendous variety of engines, from turbo 4's with relatively high...
7. $10 gallon Avgas
And Premium E10 is rated at 93 octane.
8. $10 gallon Avgas
Finding E free 93 anywhere is problematic. In the KC metro area, you can get E free 91 octane in several spots. Corvairs were designed for 87 octane weren't they? Clark's corvair says they'll do fine on ethanol blends too. Not sure about the rest of the fuel system (no fiberglass tanks!)...
9. Are there plans for a scratch builable amphibious aircraft
I haven't paid too much attention to amphibious aircraft, but now I'm curious. Is there some norm for the amount of travel allowed for by landing gear on amphibians? Is it different than for SEL aircraft? Other than meeting the spec for prop clearance and the amount needed for the weight of the...
10. \$10 gallon Avgas
I hear you. If you have the right airframe/engine combo, you can pay hundreds of dollars for the ability to use mogas, and get the lead out. Or track down some UL93.
11. Magnus effect / Flettner wing etc.
Sure, if you have your groceries delivered, and your doctor makes house calls, I think that's fine. The pandemic has shown how easy it is to get home delivery of just about anything, and Teladoc is available now.
12. Magnus effect / Flettner wing etc.
There is one area where I think the autonomous personal flyer makes sense, much like in driverless cars, is the ability to maintain the personal autonomy of an aging population. One of the big problems in driving cars in the safety bell curve. Young inexperienced drivers, and older, skills...
13. Need a helping hand on the West coast
Did a little digging to try to figure out the weight on my big boat trailer. The closest I found is a landscaping trailer from tractor supply. They list it at 1500 pounds, but mine doesn't have a wood bed, just a wood keel support. and welded on adjustable height hull supports. I could cut off...
14. Single Tube Fuselage idea
That's a really elegant idea. Would it have to be round tube? Wouldn't square tube allow some more secure attachment points for motor and gear and such? Bending square tube has its own challenges, but I like having flat surfaces to rivet things to.
15. Need a helping hand on the West coast
That's pure genius! I've been out to Liberty Landing talking to a Hummel builder, but there weren't any Dawn Patrol folks there at the time. I got a hangar tour though, which is what put the bug in my head to get a Baslee design. I'll reach out and see what can be worked out.
http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=tm&paperid=3928&option_lang=eng
Tr. Mat. Inst. Steklova, 2018, Volume 302, Pages 41–56 (Mi tm3928)
Hirzebruch functional equation: classification of solutions
Elena Yu. Bunkova
Steklov Mathematical Institute of Russian Academy of Sciences, ul. Gubkina 8, Moscow, 119991 Russia
Abstract: The Hirzebruch functional equation is $\sum _{i=1}^n\prod _{j\ne i} (1/f(z_j-z_i))=c$ with constant $c$ and initial conditions $f(0)=0$ and $f'(0)=1$. In this paper we find all solutions of the Hirzebruch functional equation for $n\leq 6$ in the class of meromorphic functions and in the class of series. Previously, such results have been known only for $n\leq 4$. The Todd function is the function determining the two-parameter Todd genus (i.e., the $\chi _{a,b}$-genus). It gives a solution to the Hirzebruch functional equation for any $n$. The elliptic function of level $N$ is the function determining the elliptic genus of level $N$. It gives a solution to the Hirzebruch functional equation for $n$ divisible by $N$. A series corresponding to a meromorphic function $f$ with parameters in $U\subset \mathbb C^k$ is a series with parameters in the Zariski closure of $U$ in $\mathbb C^k$, such that for the parameters in $U$ it coincides with the series expansion at zero of $f$. The main results are as follows: (1) Any series solution of the Hirzebruch functional equation for $n=5$ corresponds either to the Todd function or to the elliptic function of level $5$. (2) Any series solution of the Hirzebruch functional equation for $n=6$ corresponds either to the Todd function or to the elliptic function of level $2$, $3$, or $6$. This gives a complete classification of complex genera that are fiber multiplicative with respect to $\mathbb C\mathrm P^{n-1}$ for $n\leq 6$. A topological application of this study is an effective calculation of the coefficients of elliptic genera of level $N$ for $N=2,…,6$ in terms of solutions of a differential equation with parameters in an irreducible algebraic variety in $\mathbb C^4$.
Funding: This work is supported by the Russian Science Foundation under grant 14-50-00005.
DOI: https://doi.org/10.1134/S0371968518030032
English version:
Proceedings of the Steklov Institute of Mathematics, 2018, 302, 33–47
UDC: 515.178.2+517.547.58+517.583+517.965
Citation: Elena Yu. Bunkova, “Hirzebruch functional equation: classification of solutions”, Topology and physics, Collected papers. Dedicated to Academician Sergei Petrovich Novikov on the occasion of his 80th birthday, Tr. Mat. Inst. Steklova, 302, MAIK Nauka/Interperiodica, Moscow, 2018, 41–56; Proc. Steklov Inst. Math., 302 (2018), 33–47
This publication is cited in the following articles:
1. V. M. Buchstaber, “Cobordisms, manifolds with torus action, and functional equations”, Proc. Steklov Inst. Math., 302 (2018), 48–87
2. M. Atiyah, J. Kouneiher, “Todd Function as Weak Analytic Function”, Int. J. Geom. Methods Mod. Phys., 16:6 (2019), 1950091
http://wiki.cmci.info/blogtng/blogtop?btng%5Bpost%5D%5Btags%5D=Algorithm
# Bioimage Analysis Wiki
# CMCI weblog
## Marriage Matching Algorithm
Consider that we have probes 1 and 2 labeling some molecule as dots, and for each cell we have several signals for each probe, e.g. 3 dots for probe 1 and 2 dots for probe 2. Then we have an assignment problem: we want to pair them up, leaving one probe 1 dot unpaired (in my case I know that a probe 1 dot and a probe 2 dot sit on the same chromosome). What would be the algorithm?
In a broader sense this is a combinatorial optimization problem, but I thought there could be a simpler way of doing it; the following are some possibilities.
One way is to construct a cost function, such as the "sum of distances between paired dots", calculate the cost function for all possible combinations, and choose the combination with the lowest sum (a brute-force sketch of this idea follows below).
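As a concrete illustration of that idea, here is a brute-force sketch in Python (the function and variable names are ours; it assumes probe 1 has at least as many dots as probe 2):

```python
import math
from itertools import permutations

def best_pairing(probe1, probe2):
    """Pair each probe-2 dot with a distinct probe-1 dot so that the sum of
    pairwise distances is minimal; surplus probe-1 dots stay unpaired."""
    best_cost, best_pairs = float("inf"), None
    # enumerate every ordered choice of len(probe2) distinct probe-1 dots
    for perm in permutations(range(len(probe1)), len(probe2)):
        cost = sum(math.hypot(probe1[i][0] - probe2[j][0],
                              probe1[i][1] - probe2[j][1])
                   for j, i in enumerate(perm))
        if cost < best_cost:
            best_cost = cost
            best_pairs = [(i, j) for j, i in enumerate(perm)]
    return best_pairs  # list of (probe-1 index, probe-2 index) pairs

print(best_pairing([(0, 0), (5, 5), (9, 1)], [(1, 1), (8, 2)]))
```

For larger dot counts this explodes combinatorially; the same assignment problem is solved in polynomial time by scipy.optimize.linear_sum_assignment (a Hungarian-style solver).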
I found another way of doing this, the so-called "stable marriage algorithm". In this case, equal numbers of boys and girls are matched.
In Javascript it should be as follows:
```javascript
function make_matches(gb, bg){
  // gb: girls' preference lists, bg: boys' preference lists (1-based ids)
  var N = gb.length;
  var boy = [], girl = [], position = [], rank = [];
  var b, g, r, s, t;
  // rank[g][b] = how girl g ranks boy b (lower is better);
  // the sentinel rank[g][0] = N+1 means "unmatched" loses to anyone
  for (g = 1; g <= N; g++){
    rank[g] = [N+1];
    for (r = 1; r <= N; r++){
      b = gb[g-1][r-1];
      rank[g][b] = r;
    }
    boy[g] = 0;
  }
  // girl[b][r] = the r-th choice of boy b; position[b] = how far down
  // his list boy b has proposed so far
  for (b = 1; b <= N; b++){
    girl[b] = [0];
    for (r = 1; r <= N; r++){
      girl[b][r] = bg[b-1][r-1];
    }
    position[b] = 0;
  }
  // each boy proposes down his list; a girl trades up whenever she
  // prefers the new proposer, and the displaced boy keeps proposing
  for (b = 1; b <= N; b++){
    s = b;
    while (s != 0){
      g = girl[s][++position[s]];
      if (rank[g][s] < rank[g][boy[g]]){
        t = boy[g]; boy[g] = s; s = t;
      }
    }
  }
  return boy; // boy[g] = the boy matched to girl g
}
```
The only problem is in applying this to a case with unequal numbers of boys and girls. For such a case, either modify the above algorithm, or add dummies with very low rankings to equalize the two populations and then use the algorithm as above.
## Bleach Correction (2)
As I tested further through the different methods
1. Phair's method (or simple ratio)
2. Exponential decay fitting (or "exponential ratio", as I call it)
3. Histogram stretching,
none of these methods is really satisfactory, so I kept searching and tested something not used often, called "Histogram Matching", and it seems to work better (a minimal numpy sketch follows below). Bleached images are better blurred a bit first. I might implement it as another method for bleaching correction.
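For reference, the core of histogram matching is only a few lines. A minimal sketch with numpy, assuming plain 2D grayscale arrays (no blurring or stack handling):

```python
import numpy as np

def match_histogram(source, reference):
    """Remap the grey levels of `source` so that its histogram
    approximates that of `reference` (CDF matching)."""
    s_values, s_idx, s_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    r_values, r_counts = np.unique(reference.ravel(), return_counts=True)
    # normalised cumulative distribution functions of both images
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    # map each source grey level to the reference level with the closest CDF
    matched = np.interp(s_cdf, r_cdf, r_values)
    return matched[s_idx].reshape(source.shape)
```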
The following are links related to this method, which I referred to.
Histogram equalization theory (by R. Fisher, S. Perkins, A. Walker and E. Wolfart @ Image Processing Learning Resources).
I also referred to "Digital Image Processing using Matlab", but it was too simply formulated (the Matlab function that should be used is indicated, but the treatment is too short).
Paper: "A statistical approach for intensity loss compensation of confocal microscopy images", Gopinath et al. (2007) J. of Microscopy (Link). This paper tries to correct acquisition bleaching while the image itself is changing a lot (the sample seems to be internalizing a cell surface receptor, so the signal goes from diffuse to dotty). In the case of sequences with fewer changes in signal shape, the problem is simpler and more straightforward, but such a dynamic version should already be in our sights. (By the way, I am always amazed by work that Luby-Phelps is involved in. When I found the name on this paper, I was amazed again...)
Point Operations
CMPUT 206, Instructor: NilanjanRay
(powerpoint slides in PDF)
Java library: the source for histogram matching is on the Burger & Burge website, chapter 5.
"Digital Image Processing: An Algorithmic Introduction Using Java"
## Photobleaching Correction -3D time series
There are several IJ tools available for 2D time-series bleaching correction, but seemingly none for 3D:
2D-t tools:
• Phair's double normalization method. Dependent on ratio, similar to the above, but can specify reference area (if my understanding is correct).
• proposes two ways,
1. conceptually similar to the above two: estimate the correction ratio, but by fitting an exponential decay curve and using the decay parameter:
$I_c(t) = I(t) / e^{-\tau t}$
2. use “enhance contrast”: frame-wise histogram stretching.
Among these, the exponential decay method is theoretically clean (but in practice, time series are not theoretical...).
In any case, there should be a 3D-t bleach correction tool (and I need it NOW). I might make a quick solution using two methods, one based on the ratio to the first frame and the other on exponential fitting.
By the way, bleach-corrected images basically cannot be used for intensity quantification (FRAP, on the other hand, corrects for bleaching after measuring the raw image). If you are analyzing shapes or positions, there is no problem with quantification.
here is the “ratio” version:
macro "Bleach Corection 3D-t by ratio"{
run("Duplicate...", "title=bleach_corrected duplicate");
getDimensions(width, height, channels, slices, frames);
if (frames == 1) {
uslices = getNumber("how many z slices/timepoint?", 1);
if ((slices%uslices) !=0) exit("that slice number dows not match with the current stack");
frames = slices / uslices;
}
tIntA = newArray(frames);
setBatchMode(true);
for(i=0; i<frames; i++){
startf = (i*slices)+1;
endf = (i+1)*slices;
op ="start="+startf+" stop="+endf+" projection=[Sum Slices]";
run("Z Project...", op);
//print(op);
getRawStatistics(nPixels, mean);
if (i==0) tIntA[i] = mean;
else tIntA[i] = mean/tIntA[0];
close();
}
setBatchMode("exit and display");
tIntA[0] =1;
for(i=0; i<frames; i++){
for(j=0; j<slices; j++){
curframe = i*slices + j+1;
setSlice(curframe);
//print("frame"+curframe + " factor" + tIntA[i]);
op = "value="+tIntA[i]+" slice";
run("Divide...", op);
}
print("time point:"+i+1 + " factor" + tIntA[i]);
}
}
Before Correction (each row is a time point, with 8 z-slices)
Average intensity along the stack slices. The 5 peaks correspond to the 5 time points.
After Correction
https://quizlet.com/12623019/cardiopulmonary-pharmacology-drug-actions-flash-cards/
# Cardiopulmonary Pharmacology: Drug Actions
cardiopulmonary pharm ppt, pgs 1-5
Pharmacology
study of drugs; their origin, nature, properties and effects
Pharmaceutical Phase
method by which a drug is delivered (route of administration and drug form)
Pharmacokinetic Phase
the time required for drug absorption, drug action, drug distribution in the body and metabolization and excretion of drug.
Pharmacodynamic Phase
refers to the mechanism of action by which a drug causes its therapeutic effect
Tolerance
when increased amounts of a drug are needed to produce the desired effect, possibly due to increased enzyme levels related to enzymatic activity that develops over a period of time
Tachyphylaxis
rapid development of drug tolerance
Cumulative Effect
exaggerated response and a possibly toxic situation
Additive Effect
an exaggerated response, occurs when two or more drugs are administered with the same effect on the body
Synergy
when two or more drugs produce an effect or response that neither could produce alone
Potentiation
either when two drugs produce an effect that is greater than what they usually produce when given alone or when one drug enhances the effect of another drug
Antagonism
when two drugs have opposite effects
https://codereview.stackexchange.com/questions/248045/vba-sync-code-feedback
# VBA-Sync code feedback
I was hoping to get some input on a class module I'm designing to abstract away the boilerplate of utilizing asynchronous queries in VBA. cQueryable supports both synchronous and asynchronous queries. So you could do something like call a package to populate temp tables. This would be done synchronously because you'd want this to be completed before you executed your select queries. After, you would then execute select queries on each of the temp tables asynchronously.
This code really just abstracts away a lot of the functionality in the ADODB library. I tried to name my properties and methods similarly to what the objects in that library use where possible. My connectionString property is named similarly to the same one in the ADODB.Connection object. And my CreateParam method is named similarly to the createParameter method of the ADODB.Command object.
A few of the new procedures I've introduced are the sql property. This holds the sql query to be executed (this maps to commandtext in the command object). Another is ProcedureAfterQuery. This is to hold the name procedure to be called by the connection object after it raises an event when the query completes. Others are SyncExecute and AsyncExecute which should describe what they do in their names.
One thing to note about these two is that SyncExecute is a function whereas AsyncExecute is a subroutine. I wanted SyncExecute to return a recordset when it completes. But I wanted AsyncExecute to be a sub because I didn't want to imply that it returned anything. I use similar (but different) code to do this, so I think I violate the DRY principle. I could consolidate the two to call one shared subroutine; that shared procedure would then be more complicated, but the code would at least be shared. I don't have a preference one way or another.
Although CreateParam is similar to the CreateParameter method of the command object, there are two differences. One is that the order of the arguments is different. This is mainly because the size and direction parameters are listed as optional parameters with default values. Their default values can just be used when the value is numeric, but size must be specified if the value is a string. So in certain situations size is optional, whereas in others it's required, and the query will fail if it isn't provided.
Another thing I didn't consider (or test) is that I've read ADODB can be used essentially anywhere a driver can be provided. So this could be used on Excel workbooks, perhaps text files, and other sources rather than just databases, and the synchronous and asynchronous queries might work there as well. But that's not what I set out to design or test.
I'd appreciate constructive criticism.
```vb
VERSION 1.0 CLASS
BEGIN
  MultiUse = -1 'True
END
Attribute VB_Name = "cQueryable"
Attribute VB_GlobalNameSpace = False
Attribute VB_Creatable = False
Attribute VB_PredeclaredId = False
Attribute VB_Exposed = False
Option Explicit
'Requires a reference to the Microsoft ActiveX Data Objects 6.1 Library (or equivalent)
'NOTE: the member declarations were lost in extraction; they are reconstructed from usage
Private mConn As ADODB.Connection
Private mSyncConn As ADODB.Connection
Private WithEvents mASyncConn As ADODB.Connection
Attribute mASyncConn.VB_VarHelpID = -1
Private mComm As ADODB.Command
Private mSql As String
Private mProcedureAfterQuery As String
Private mAsync As Boolean
Private mConnectionString As String
Private Const mSyncExecute As Long = -1

Private Sub Class_Initialize()
    'reconstructed: the connection and command objects must be instantiated here
    Set mConn = New ADODB.Connection
    Set mComm = New ADODB.Command
End Sub

Public Property Let Sql(value As String)
    mSql = value
End Property

Public Property Get Sql() As String
    Sql = mSql
End Property

Public Property Let ConnectionString(value As String)
    mConnectionString = value
End Property

Public Property Get ConnectionString() As String
    ConnectionString = mConnectionString
End Property

Public Property Let procedureAfterQuery(value As String)
    mProcedureAfterQuery = value
End Property

Public Property Get procedureAfterQuery() As String
    procedureAfterQuery = mProcedureAfterQuery
End Property

Public Sub createParam(pName As String, pType As DataTypeEnum, pValue As Variant, Optional pDirection As ParameterDirectionEnum = adParamInput, Optional pSize As Long = 0)
    Dim pm As ADODB.Parameter
    With mComm
        Set pm = .CreateParameter(name:=pName, Type:=pType, direction:=pDirection, value:=pValue, size:=pSize)
        .Parameters.Append pm
    End With
End Sub

Public Function SyncExecute() As ADODB.Recordset
    Set mSyncConn = mConn
    If connectionSuccessful Then
        With mComm
            .CommandText = mSql
            Set .ActiveConnection = mSyncConn
            Set SyncExecute = .Execute(Options:=mSyncExecute)
        End With
    End If
End Function

Public Sub AsyncExecute()
    Set mASyncConn = mConn
    If connectionSuccessful Then
        With mComm
            .CommandText = mSql
            Set .ActiveConnection = mASyncConn
            'reconstructed: the asynchronous execute call was lost in extraction
            .Execute Options:=adAsyncExecute
        End With
    End If
End Sub

Private Function connectionSuccessful() As Boolean
    'the If conditions were lost in extraction; guarding on the
    'connection state is a reasonable reconstruction
    If mConn.State = adStateClosed Then
        mConn.ConnectionString = mConnectionString
    End If
    On Error GoTo errHandler
    If mConn.State = adStateClosed Then
        mConn.Open
    End If
    On Error GoTo 0
    connectionSuccessful = True
    Exit Function
errHandler:
    Debug.Print "Error: Connection unsuccessful"
    connectionSuccessful = False
End Function

Private Sub mASyncConn_ExecuteComplete(ByVal RecordsAffected As Long, ByVal pError As ADODB.Error, adStatus As ADODB.EventStatusEnum, ByVal pCommand As ADODB.Command, ByVal pRecordset As ADODB.Recordset, ByVal pConnection As ADODB.Connection)
    'reconstructed signature: this is the standard ADODB ExecuteComplete event handler
    If mProcedureAfterQuery <> "" Then
        Call Application.Run(mProcedureAfterQuery, pRecordset)
    End If
End Sub
```
• In order to respond to events of an object in a scripting text file you will need to declare the library and type of object. The only way this can be done is using CreateObject. For example Set Connection = WScript.CreateObject("ADODB.Connection", "Connection_"). With this setup Sub Connection_ExecuteComplete(RecordsAffected, pError, adStatus, pCommand, pRecordset, pConnection) will fire correctly. But I'm not sure that this will work in a scripting class. Hopefully, with a little luck, one of the real Gurus can elaborate on this. Aug 17 '20 at 18:47
• Ah I have no idea. Thanks for the heads up though. My code wasn't really meant to handle a scripting text file. So if it doesn't work I'll just put it as a caveat on my github. Aug 17 '20 at 22:13
• @beyphy I have had to use the ADODB library so extensively that I have created several of my own wrapper classes. Overtime I have improved them, from an architectural standpoint and by documenting and addressing the various bugs in the library. It took many man hours to get to the point I have it at now, so I figure it is worth sharing with others so they don't have to do the same. Saying that, here is the github link to the code: github.com/rickmanalexander/ADODBDataAccessAPI. Aug 18 '20 at 2:28
## Private Function connectionSuccessful() As Boolean
The name suggests that you are testing whether the Connection has already been opened, when in fact the function is used to open the Connection and test whether that succeeded.
Private Function OpenConnection() As Boolean
This name tells you that you are opening a Connection. Since the return type is Boolean, it is natural to assume that the function will return True only if the Connection was successful.
Having the error handler swallow errors and print a message to the Immediate Window is counterproductive. As a developer, I don't instinctively look to the Immediate Window for error messages. As a user, I will notify the developer of the error message that was raised down the line, not at the point of impact. Considering that your code uses callback procedures, there is no guarantee that an error will ever be raised. The only thing that is certain is that there are going to be problems somewhere down the line.
You should definitely raise a custom error if mConnectionString is not set. A custom error message for the failed connection is not necessary (if you remove the error handler) because an ADODB error will be thrown at the point where this procedure was called.
## Public Sub AsyncExecute()
Consider raising an error if the callback procedure is not set.
## Private Sub Class_Terminate()
This method should be used to close the connection.
## mConn, mASyncConn, and mSyncConn
There is no need to use three different Connection variables; you are doing more work and obfuscating the code. Using a variable such as AsyncMode As Boolean will give you the same feedback and simplify the code, making it easier to read.
## Naming Conventions
Having value and execute lower case changes the case for all other variables and properties with the same names. For this reason, I use Pascal Case for all my variables that do not have some sort of prefix.
Mathieu Guindon's Factories: Parameterized Object Initialization
## Other possible improvements
A public event would allow you to use cQueryable in other custom classes.
Public Event AsyncExecuteComplete(pRecordset As Recordset)
The ability to chain queries together seems like a natural fit.
Public Function NextQuery(Queryable As cQueryable) As cQueryable
    Set NextQuery = Queryable
    Set mQueryable = Queryable
End Function
This will allow you to run multiple queries in order without the need of multiple callback.
CreateTempQuery.NextQuery(FillTempTableQuery).NextQuery(UpdateEmployeesTableQuery)
• Thanks! For the note on the connection variables, I need at least two (the async and the non-async one) to prevent the event from firing each time an execution happens. So I thought to use three to keep things consistent. I thought about potentially creating a method with a paramArray() that accepted multiple SQL queries. I believe the connection is closed when the object gets dereferenced. One issue I think happens is that, if there's an exception, the connection stays open. This makes sense because the object is still referenced. So perhaps I should add error handling to address that case. Aug 18 '20 at 14:47
• @beyphy I would still prefer to let the event fire and do nothing if AsyncMode = False. You should rename procedureAfterQuery to procedureAfterAsyncQuery if you don't want it to run after execution has completed. Aug 18 '20 at 15:31
https://www.vedantu.com/question-answer/in-a-metre-bridge-experiment-null-point-is-class-12-physics-cbse-5fd7c19ecd67a76506ea646a
Question
# In a metre bridge experiment a null point is obtained at 40 cm from one end of the wire when resistance X is balanced against another resistance Y. If X < Y, then the new position of the null point from the same end, if one decides to balance a resistance of 3X against Y, will be close to:
a. $80cm$  b. $75cm$  c. $67cm$  d. $50cm$
Hint: A meter bridge is an instrument that works on the Wheatstone bridge principle. It is used to find the unknown resistance of a conductor. Find the relation between the two resistances $X$ and $Y$ with the formula, where $l$ is the balanced length. Then find the new balanced length when the resistance is replaced with $3X$.
A meter bridge is used to measure an unknown electrical resistance by balancing the two legs of a bridge circuit, one of which includes the unknown component. It works on the Wheatstone bridge principle.
The meter bridge consists of a wire one meter long with uniform resistance, and it has two gaps, in one of which the unknown resistance is connected.
A high resistance is connected in series with the galvanometer to protect it from high currents. The galvanometer is connected between the two resistors.
One end of the jockey is connected to the junction of the two resistors, and the other end is used to find the balancing length. The meter bridge, galvanometer, one-meter wire, and jockey together are used to find the unknown resistance.
A null point is obtained at 40 cm from the end where $X$ is connected, so we can write:
$\Rightarrow \dfrac{X}{l} = \dfrac{Y}{{100 - l}}$
$\Rightarrow \dfrac{X}{{40}} = \dfrac{Y}{{60}}$
Since $\left( {100 - 40 = 60} \right)$
$\Rightarrow \dfrac{{3X}}{2} = Y$
Now, for the new position of the null point, let $X = 2k$ so that $Y = 3k$ and $3X = 6k$. Then
$\Rightarrow \dfrac{R}{S} = \dfrac{l}{{100 - l}}$
$\Rightarrow \dfrac{{6k}}{l} = \dfrac{{3k}}{{100 - l}}$
$\Rightarrow 600 - 6l = 3l \Rightarrow l = \dfrac{600}{9} \approx 67$
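A throwaway numerical check of that last step (not part of the original solution):

```python
# the balance condition 6k/l = 3k/(100 - l) reduces to 6*(100 - l) = 3*l
l = 600 / 9
print(round(l, 1))   # 66.7, i.e. closest to option (c) 67 cm
```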
Hence, the correct answer is option (C).
Note: If a semiconductor is placed in the left gap of a meter bridge and it is heated, then the balancing length is shifted to the right. A meter bridge is an instrument that works on the Wheatstone bridge principle. It is also used to compare two resistors and to determine the specific resistance of a resistor.
https://www.parabola.unsw.edu.au/2010-2019/volume-52-2016/issue-2/article/problems-section-problems-1501-1510
|
# Problems Section: Problems 1501 - 1510
Q1501 Find the sum of the sum of the sum of the digits for the number $2016^{2016}$.
|
2018-07-20 06:44:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5784340500831604, "perplexity": 403.9936062069164}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591543.63/warc/CC-MAIN-20180720061052-20180720081052-00554.warc.gz"}
|
https://wiki.cugc.org.uk/wiki/Polar,_Performance,_and_Water_Ballast
|
# Polar, Performance, and Water Ballast
Have you ever wondered what differentiates the 'high-performance' gliders from the normal two-seaters, apart from the price tag and the rather fragile appearance? What are the ridiculously long wings of a Duo-Discus good for? Or more importantly, after you have paid the launch fee, what can you do to stay in the air for longer? The answers to these questions require an understanding of the performance metrics of the glider.
You might have heard more experienced pilots talking about 'polars', or you might have seen the convex curve itself, which can be confusing at first. You might also have seen people adding water to, or dumping it out of, their gliders. Building on the knowledge of glider performance, we can have a closer look at how these tools help cross-country pilots to fly faster and further.
There is a wealth of text, published or online, discussing the topics mentioned above. However, some of these are rather scattered pieces of discussions on the forums, or they can be written in another system of conventions than what is adopted in Cambridge. Some are sloppy about their assumptions and approximations, and some dive straight into the calculus making it impossible to follow. This work aims to present the derivations of the governing equations and the polar functions in a clear and detailed manner, and summarises the implications for those who would rather not follow the mathematics.
1. Consider the forces acting on the glider in unaccelerated flight: lift, drag, and glide ratio.
2. Aerodynamic coefficients: definitions and meanings.
3. Relationship between lift and drag.
4. General method of solution, and assumptions necessary to simplify it.
5. Analytical form of the glide polar.
6. Implications of the polar: minimum sink speed, and best glide.
7. Implications of the polar: water ballast
8. Implications of the polar: headwind and sinking air.
9. The non-dimensional polar and recommended readings.
## Glider in Unaccelerated Flight in Still Air
We start by considering a glider in unaccelerated flight in still air. We shall assume the following:
1. The aeroplane in question is a glider, i.e. it creates no thrust.
2. The flight is unaccelerated, i.e. the glider is flying straight and level without changing its airspeed.
3. The air is still, i.e. there is no macroscopic movement of air in forms such as wind, thermals, ridge lift, etc.
4. The air is homogeneous in its thermodynamic properties, especially, it has a uniform density $$\rho$$.
### Governing equations from a force perspective
Hopefully you already understand how a glider can remain airborne, but just in case you are confused, consider an unpowered glider in unaccelerated flight in still air: three forces act on the glider, namely:
1. Gravity (weight), pointing vertically downwards.
2. Lift, pointing upwards and perpendicular to the flight path.
3. Drag, pointing backwards and along the flight path.
By Newton's first law, in order for the glider to stay unaccelerated, these three forces must balance. Imagine the glider is flying horizontally. If this is the case, then the lift force must point vertically upwards. We then have a drag force pointing horizontally backwards with no force balancing it, because the other two are both in the vertical direction.
Therefore, the only way for the forces to balance is that, the glider cannot be flying in the horizontal direction. The flight path must be at an angle to horizontal. We shall denote this angle as $$\theta$$. By experience, a glider in unaccelerated flight in still air keeps descending, rather than climbing. Therefore, we know the flight path is inclined downwards. We shall define this direction as positive $$\theta$$.
With this made clear, the gravity ($$W$$) can be decomposed into two components, one to balance the lift ($$L$$), and one to balance the drag($$D$$). The following relationship holds:
$W \sin(\theta) = D$ $W \cos(\theta) = L$
Dividing these two expressions, $$W$$ can be eliminated, giving:
$\frac{L}{D} = \frac{1}{\tan(\theta)}$
The quantity $$\frac{L}{D}$$ is referred to as the Lift-to-Drag Ratio.
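To put numbers on this, here is a minimal Python sketch (the L/D values are illustrative, not from any particular glider):

```python
import math

def glide_angle_deg(l_over_d):
    """Glide-path angle below the horizontal, from L/D = 1/tan(theta)."""
    return math.degrees(math.atan(1.0 / l_over_d))

for ld in (30, 40, 50):
    print(ld, round(glide_angle_deg(ld), 2))
# 30 -> 1.91 deg, 40 -> 1.43 deg, 50 -> 1.15 deg: glide paths are very shallow
```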
### Governing equations from an energy perspective
An alternative way to think about this is from an energy perspective. Because the drag force wants to slow down the glider and take its kinetic energy away, the glider must keep descending, so that it releases its gravitational potential to make up for the loss; otherwise it cannot remain at the same speed. Consider riding a bicycle: if you stop pedalling on level ground, you will gradually slow down and eventually stop, because the drag force steals your kinetic energy away and you have no means of replenishing it. However, if you cycle downhill, you will not stop even if you do not pedal.
Therefore, we conclude that a glider flies downhill. This is in agreement with the conclusion of the previous section. We can borrow the notation and call the slope angle of this imaginary hill $$\theta$$. Geometrically, if we travel for a unit distance on the face of the hill, then in the horizontal direction the distance travelled will be $$\cos(\theta)$$ and in the vertical direction the height drop will be $$\sin(\theta)$$.
From an energy conservation point of view, the following expression holds (it means the energy that the drag force uses up equals to the energy the gravity must provide):
$D \times 1 = W \times \sin(\theta)$
This is the same result as in the last section.
It is worth noting that the energy approach tells us nothing about the lift force directly.
### Glide ratio
The glide ratio is a measurement of the efficiency of the glider. It means 'how many feet can the glider travel forward for every foot of altitude drop?' If a glider has a glide ratio of 50:1 (fifty-to-one), it means the glider is capable of travelling 50 feet forward for every foot of altitude drop, the same thing applies in meters, etc. We want the glider to travel as far as possible while losing minimum altitude, therefore, a larger glide ratio is favourable over a smaller one.
This measurement we can vaguely call 'performance', although strictly speaking this is only one aspect of it. We can say glider A (50:1) has a higher performance than glider B (30:1), for example.
In the last section, we concluded that, for each unit distance travelled on the hill, the horizontal distance covered is $$\cos(\theta)$$ and the vertical distance covered is $$\sin(\theta)$$. Therefore, the glide ratio is:
$\textrm{Glide ratio} = \frac{\cos(\theta)}{\sin(\theta)} = \frac{1}{\tan(\theta)} = \frac{L}{D}$
This is a very important result. It is also worth noting that, up until now, we have made no approximations.
### Glide ratio: influential factors
If you are familiar with calculus you will already have noticed that everything derived above is valid in an instantaneous sense. If you are not, take a minute to appreciate that the scale of time does not play a role in the process explained above: a glider with a glide ratio of 50:1 can travel fifty feet while dropping by one foot, it can also travel for fifty miles while dropping by one mile (slightly more than 5000 ft). If we extend this to the other extreme of the length scale such that the time associated is very small, we can see that the glide ratio is defined for any instant of the flight process.
There is no restriction on the glide ratio changing from one instant to another either; if the glider flew the same at all speeds, there would be no polar to consider. Before proceeding to the more detailed discussions, some of the most important factors are presented here.
The glide ratio is mainly affected by:
1. The aerodynamic design of the glider. The more streamlined, sleek, and aerodynamic-looking a glider is, with long slender wings and smooth gel coating, the more likely it is to have a larger glide ratio.
2. The configuration of the glider. The glide ratio is almost always the highest in the clean configuration, i.e. with nothing sticking out or deployed. Lowered undercarriage, extended brakes and spoilers, deployed and windmilling propellers, opened or lost canopies, attached ropes are things that will reduce the glide ratio. Generally speaking, having the flaps set to other angles than neutral is not good for the glide ratio, but this very much depends on other factors.
3. The way the glider is flown. For a fixed glider mass (which is generally the case with the exception of jettisoning water), the glide ratio is a function of indicated airspeed, giving rise to the polar which is the immediate next topic. Moreover, if the glider is not flown straight (with sideslip), or flown otherwise than normal (e.g. stalled, inverted), the glide ratio can decrease drastically.
## Lift and Drag Coefficients
In the discussions that follow, only the incompressible flow regime is considered. This is justified by the low speed that gliders fly at.
### Definitions
In aerodynamics, the lift coefficient ($$C_L$$) and drag coefficient ($$C_D$$) are defined as follows:
$C_L = \frac{L}{\frac{1}{2} \rho V^2 S}$ $C_D = \frac{D}{\frac{1}{2} \rho V^2 S}$
Where:
• $$\rho$$ is the true density of air.
• $$V$$ is the true airspeed of the aeroplane.
• $$S$$ is the area of the wing (projected onto the ground), a fixed value for a given glider.
• $$\frac{1}{2} \rho V^2$$ is collectively known as the dynamic pressure, or dynamic head.
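As a minimal numerical sketch of these definitions (all the input numbers below are illustrative placeholders, not measurements of any real glider):

```python
RHO = 1.225  # kg/m^3, assumed sea-level standard air density

def coefficients(lift, drag, v, s, rho=RHO):
    """Return (C_L, C_D) given lift and drag [N], airspeed [m/s], wing area [m^2]."""
    q = 0.5 * rho * v**2      # dynamic pressure [Pa]
    return lift / (q * s), drag / (q * s)

cl, cd = coefficients(lift=5000.0, drag=125.0, v=30.0, s=11.0)
print(round(cl, 3), round(cd, 4))  # C_L ~ 0.825, C_D ~ 0.0206
```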
### A comment on the dimension
You should notice that, for both coefficients, the unit of both the numerator and the denominator is the unit of force (Newton in SI units). The denominator comprises $$\frac{1}{2} \rho V^2$$ which has the unit of pressure, and $$S$$ which has the unit of area, so the product yields a force.
Consequently, both $$C_L$$ and $$C_D$$ are non-dimensional. These quantities have no units. Non-dimensional quantities are the language of aerodynamics: they allow us to study the underlying physics without being distracted by how things are measured. A K-21 is heavier than a Junior; therefore, in unaccelerated glide with the same angle-of-attack, the wings of the K-21 produce more lift than the Junior wings. What causes this? Is it because the design of the K-21 is aerodynamically superior? Not necessarily: the K-21 can be flying faster, for instance, or has larger wings. The comparison only becomes meaningful when the lift is non-dimensionalised into the lift coefficient.
### A comment on dynamic pressure
The true air density and the true airspeed always appear together as a compound quantity $$\frac{1}{2} \rho V^2$$ which is referred to as the dynamic pressure or dynamic head.
The density of air is not a constant: it depends on pressure (which most notably depends on altitude) and temperature. This causes a major inconvenience, as we would have to assign a value to it in order to arrive at any numerical results directly useful for flying: you do not check non-dimensional quantities in the cockpit, you read the instruments instead, which tell you the airspeed in knots or the altitude in feet.
To overcome this problem, we notice that density only appears within $$\frac{1}{2} \rho V^2$$. Therefore, we can define an equivalent density $$\rho_e$$ and an equivalent airspeed $$V_e$$ such that:
$\frac{1}{2} \rho V^2 = \frac{1}{2} \rho_e V_e^2$
We can assign a value to $$\rho_e$$ and arrive at a value of $$V_e$$ such that, when used together, they produce the same amount of dynamic head; therefore, the aerodynamic effect is exactly the same.
The most reasonable value to assign to $$\rho_e$$ would be the density of air at some standard conditions. This can then be implemented into some instrument that tells you $$V_e$$ (all this instrument has to do is to measure the dynamic head). So long as all the manuals and polar charts express airspeed in $$V_e$$ assuming the same value of $$\rho_e$$, the change in true air density will not cause these performance guidelines to vary.
In practice, the instrument that tells you $$V_e$$ is the air speed indicator (ASI), and $$V_e$$ is known as indicated airspeed. Based on the discussions above, you should realise that:
1. Indicated airspeed is directly related to the dynamic head.
2. The dynamic head is the only way the true airspeed affects the glider's aerodynamics (short of the glider disintegrating from overspeeding).
3. Therefore, the glider's aerodynamics is affected only by indicated airspeed, not true airspeed (apart from the never-exceed speed).
4. We should tabulate performance figures and draw polar graphs using indicated airspeed.
5. We do not need to adjust the performance tables or polar graphs to compensate for non-standard atmospheric conditions.
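The conversion implied by points 1-5 can be sketched in a few lines of Python (the standard density and the example numbers are assumptions for illustration):

```python
RHO_E = 1.225  # kg/m^3, the standard density assumed to be built into the ASI

def eas_from_tas(v_true, rho):
    """Equivalent (indicated) airspeed giving the same dynamic head as v_true at density rho."""
    return v_true * (rho / RHO_E) ** 0.5

# At an altitude where the density is roughly 1.0 kg/m^3 (illustrative):
print(round(eas_from_tas(v_true=33.0, rho=1.0), 1))  # ~29.8 m/s indicated
```

Note the indicated value is lower than the true airspeed whenever the air is thinner than standard.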
### Components of lift and drag coefficients
To proceed with the discussions, it is necessary to quote these without proof. Indeed, these formulae cannot be proven. There are complicated aerodynamic theories that derive these; however, while the success in doing so is remarkable, the theories themselves rely on restrictive assumptions and extensive modelling, so the derivations cannot really be called proofs. You are advised to understand the following as experimental correlations.
$C_L = C_{L0} + C_1 \alpha$ $C_D = C_{D0} + \frac{k}{\pi A} C_L^2$
It is, however, necessary to explain the physical rationale in detail.
The lift coefficient $$C_L$$ can be decomposed as follows:
1. $$\alpha$$ is the angle-of-attack.
2. $$C_{L0}$$ is the lift coefficient at zero angle-of-attack. This term equals zero if the aerofoil is symmetric, is greater than zero if the aerofoil is cambered, and is smaller than zero if the aerofoil is cambered the wrong way.
3. $$C_1$$ can be thought of as an empirical factor. It is rather close to $$2\pi$$.
4. The lift coefficient increases proportionally with the angle-of-attack up to the point where the wing stalls.
The drag is more complex: the drag on an aeroplane has three components:
1. Friction drag, this is the drag caused by the air sticking onto the glider and trying to slow it down. Imagine flying a glider in honey which is rather sticky. The friction drag coefficient $$C_{DF}$$ is approximately a constant for a given glider.
2. Pressure drag, this is the drag associated with the glider trailing a wake. This is also known as the form drag because it is related to the form of the glider being not fully aerodynamic. You would intuitively think that a Land Rover Discovery has more drag than a Jaguar fastback: the Discovery is not streamlined while the fastback is, and this is what pressure drag is about. The pressure drag coefficient $$C_{DP}$$ is approximately a constant for a given glider, because its form does not change in flight. Were this approximation not to be made, the following derivation could remain unaltered by pretending this variation is a part of the induced drag.
3. Induced drag, this is the drag caused by having lift. There is no free lunch in aerodynamics and wherever you have lift you must have drag, no matter how good your design is. The induced drag coefficient $$C_{DI}$$ takes the following form:
$C_{DI} = \frac{k}{\pi A} C_L^2$ Where $$A$$ is the aspect ratio of the wings (how slender the wings are), and $$k$$ is a factor that depends on the wing design. This drag component increases quadratically with $$C_L$$.
By the explanations above, it should be evident that:
$C_{D0} = C_{DF} + C_{DP}$
## Relationship Between Lift and Drag: the Parabolic Polar
The following relationship between $$C_D$$ and $$C_L$$ is fundamental to the discussions that follow:
$C_D = C_{D0} + \frac{k}{\pi A} C_L^2$
This is a parabolic function. It is this function that is referred to when talking about a 'parabolic polar': the actual (and more useful) polar curve that we shall derive is not a parabola.
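A minimal sketch of this parabola, with assumed illustrative constants (none of these numbers come from the article):

```python
import math

C_D0 = 0.010  # assumed zero-lift drag coefficient
K    = 1.05   # assumed induced-drag factor
AR   = 25.0   # assumed wing aspect ratio

def c_d(c_l):
    """Parabolic drag polar: C_D = C_D0 + (k / (pi A)) * C_L^2."""
    return C_D0 + K / (math.pi * AR) * c_l**2

for cl in (0.3, 0.6, 0.9, 1.2):
    print(cl, round(c_d(cl), 4))  # induced drag grows quadratically with C_L
```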
### A statement of the task that follows
From the relationship presented above, and making use of the following facts or assumptions:
1. Mass of the glider remains constant.
2. Energy is conserved.
3. The air is still.
4. The density of air is uniform and known, or rather and better, we work in the corrected (indicated) airspeed system.
We will be deriving a one-to-one relationship between indicated airspeed and sink rate.
## A General Method of Solution
Before making further assumptions and simplifications, a general method of solution is worth presenting. The algebraic difficulty, as we shall see, is formidable, but the method lends itself nicely to numerical solution.
Re-arranging the definitions of $$C_L$$ and $$C_D$$:
$W \cos(\theta) = C_L \times \frac{1}{2} \rho V^2 S$ $W \sin(\theta) = C_D \times \frac{1}{2} \rho V^2 S$
Squaring the two expressions and adding them, and noticing that $$\cos^2(\theta) + \sin^2(\theta) =1$$, we arrive at:
$W^2 = (C_L^2 + C_D^2) \times (\frac{1}{2} \rho V^2 S)^2$
Or rather, in the more insightful form:
$C_L^2 + C_D^2 = \frac{W^2}{(\frac{1}{2} \rho V^2 S)^2}$
The right hand side of the expression above is a function of indicated airspeed only, because the air density takes a fixed value in the indicated-airspeed system.
Substituting the parabolic relationship between $$C_D$$ and $$C_L$$ into the expression above, we have:
$f(C_L) = g(V)$
Where $$f(x)$$ and $$g(x)$$ are functions that are too cumbersome to typeset. Keep in mind that $$C_{D0}$$ is embedded in $$g(x)$$.
Solving the above (which the author does not believe is possible analytically, though it is straightforward numerically):
$C_L = \frac{W \cos(\theta)}{\frac{1}{2} \rho V^2 S} = h(V)$
In words, a relationship between the lift coefficient and the indicated airspeed can be arrived at.
The above can be further re-arranged, such that:
$\cos(\theta) = \frac{h(V) \rho V^2 S}{2W}$
This is a relationship between the glide slope and the airspeed. From here on, determining the sink rate from the glide slope and airspeed is a trivial geometrical task, so the required relationship between airspeed and sink rate is essentially derived.
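As the text notes, this route lends itself to numerical methods. The sketch below solves $$C_L^2 + C_D^2 = (W / \frac{1}{2}\rho V^2 S)^2$$ for $$C_L$$ by bisection and recovers the sink rate; every constant is an assumed placeholder, not data for a real glider:

```python
import math

W, S, RHO = 3500.0, 11.0, 1.225   # weight [N], wing area [m^2], density [kg/m^3] (assumed)
C_D0, K, AR = 0.010, 1.05, 25.0   # drag-polar constants and aspect ratio (assumed)

def c_d(c_l):
    return C_D0 + K / (math.pi * AR) * c_l**2

def sink_rate(v):
    """Sink rate [m/s] at indicated airspeed v, via the general (no-approximation) method."""
    q_s = 0.5 * RHO * v**2 * S
    rhs = (W / q_s) ** 2
    f = lambda cl: cl**2 + c_d(cl)**2 - rhs   # monotonically increasing in cl for cl > 0
    lo, hi = 0.0, 5.0                         # bracket for bisection
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    cl = 0.5 * (lo + hi)
    cos_theta = min(cl * q_s / W, 1.0)        # clamp against rounding
    return v * math.sin(math.acos(cos_theta))

for v in (22, 26, 30, 38):
    print(v, round(sink_rate(v), 2))
```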
## An Approximate Method of Solution: The Analytical Polar Curve
We shall attempt a derivation of the analytical polar curve again but using a slightly different algebraic approach than what is used in the previous section. We will see that, by adopting this approach, and by making a simple approximation, the algebra becomes simple enough for us to explicitly express the analytical form of the polar equation (an equation relating sink rate to indicated airspeed).
Firstly, the definitions of $$C_L$$ and $$C_D$$ shall be substituted into the parabolic relationship between $$C_L$$ and $$C_D$$, giving:
$\frac{D}{\frac{1}{2} \rho V^2 S} = C_{D0} + \frac{k}{\pi A} \frac{L^2}{\frac{1}{4}\rho^2 V^4 S^2}$
Both sides of the equation above need to be multiplied by $$\frac{1}{4} \rho^2 V^4 S^2$$ (notice that this is the square of $$\frac{1}{2} \rho V^2 S$$), then divided by $$\frac{1}{2} \rho V S$$, giving:
$DV = \frac{1}{2} \rho V^3 S C_{D0} + \frac{2kL^2}{\pi A \rho V S}$
To proceed, the conservation of energy must be invoked. We realise that the kinetic energy of the glider is not changing because the glider is flying unaccelerated. Therefore, the release of gravitational potential, the rate of which equals the power of the gravitational force, must balance the rate at which the mechanical energy of the glider is being dissipated by aerodynamic drag, which is the power of the drag force.
By definition, the power of a force is given by:
$P = F \times V$
In words: the power is the product of the magnitude of the force and the speed of the subject in the direction of the force. By using this relationship, we realise that: the power of the gravitational force is given by $$W \times V_S$$ (weight times the sink rate), and the power of the drag force is given by $$D \times V$$ (drag times the airspeed). This relationship can also be obtained by a geometrical argument using basic trigonometry.
It should be noticed that the above argument is not watertight: this is because $$V$$ is the indicated airspeed which generally differs from the true airspeed, and it is the latter that must be used to calculate the drag power. There are two ways to think around this:
1. You can think of this as an approximation that is being made: true airspeed is being approximated with indicated airspeed. As a result, some systematic error will be introduced into the results.
2. If you can understand the relationship between $$W \times V_S$$ and $$D \times V$$ from a geometrical perspective, you can think the following: because we are working in the indicated system where $$V$$ is the indicated airspeed, the corresponding $$V_S$$ obtained geometrically is the indicated sink rate. It needs to be converted to the true sink rate via the compound quantity $$\frac{1}{2} \rho V^2$$.
By using $$D \times V = W \times V_S$$, the last equation becomes:
$WV_S = \frac{1}{2} \rho V^3 S C_{D0} + \frac{2kL^2}{\pi A \rho V S}$
This expression should be examined in detail. The following quantities are known (either set, from design, or can be measured):
1. $$W$$, weight of the glider, depends on the design, cockpit loading, and amount of water carried, but can be known and usually does not change midway in flight.
2. $$\rho$$, density of air, because we work in the indicated system, this becomes the air density value used in the ASI, which is a fixed number.
3. $$S$$, wing area of the glider, known and stays constant (we shall not consider the effects of deploying flaps, etc. on the performance).
4. $$C_{D0}$$, this depends on the aerodynamic design of the glider.
5. $$k$$, this depends on the wing design of the glider, a highly complex series expansion to obtain a numeric value exists, but for all practical purposes this is a constant.
6. $$\pi$$, 3.1415926...
7. $$A$$, aspect ratio of the wing, depends on the glider design and a known constant.
Therefore, there are three changing quantities in this equation:
1. $$V_S$$, this is the quantity we are interested in, the y.
2. $$V$$, this is the quantity we can control, the x.
3. $$L$$, lift on the glider, what is it?
You should realise that the existence of $$L$$ in the equation above prevents us from obtaining a deterministic relationship between $$V_S$$ and $$V$$, which is the polar equation we desire. $$L$$ can be related to $$W$$ by using $$V_S$$ and $$V$$ and geometrical arguments, but this would complicate the equation and prevent us from arriving at an explicit relationship. In other words, doing so is equivalent to reverting to the method of the last section.
Instead, we shall introduce the following approximation: the weight of the glider equals the lift force acting on the glider. This sounds intuitively true, but there is an error associated with it, whose relative magnitude is given by $$1-\cos{\theta}$$. Fortunately, this error is gracefully small at typical glide angles. If the glide ratio is 30:1, the error is 0.056%, and it becomes even smaller as the glide ratio increases.
We have shown that this is a good approximation. Therefore, we can replace the $$L$$ in the existing equation with $$W$$, and the analytical polar curve is arrived at:
$V_S = \frac{1}{2W} \rho S C_{D0} V^3 + \frac{2kW}{\pi A \rho V S}$
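A minimal sketch evaluating this polar, with the same assumed placeholder constants as before:

```python
import math

W, S, RHO = 3500.0, 11.0, 1.225   # assumed illustrative values
C_D0, K, AR = 0.010, 1.05, 25.0

def v_sink(v):
    """Analytical polar: V_S = (rho S C_D0 / 2W) V^3 + 2 k W / (pi A rho V S)."""
    return (RHO * S * C_D0 / (2 * W)) * v**3 \
         + (2 * K * W) / (math.pi * AR * RHO * v * S)

for v in range(20, 44, 4):
    print(v, round(v_sink(v), 2))  # sink rate [m/s] at each indicated airspeed [m/s]
```

For these placeholder numbers the values agree closely with the general bisection method sketched earlier, which is the expected consequence of the small-angle approximation.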
## Implications of the Analytical Polar Curve: Minimum Sink and Best Glide
### Shape and General Features of the Analytical Polar
The polar equation is a combination of a third order term which is monotonically increasing throughout the domain of definition, and a hyperbolic term which, in the domain of $$V>0$$, decreases monotonically. Therefore, a global minimum of $$V_S$$ is expected. Confusingly, this is conventionally drawn as the highest point of the curve, because the Y-axis ($$V_S$$) is turned upside-down, such that going down means a higher sink rate.
The analytical form of the polar curve applies to the speed range from several knots above stall to $$V_{NE}$$ ($$V_{NE}$$ must be converted to its indicated value). It does not apply close to stall, which is because the aerodynamics of the glider changes considerably before the onset of stall such that the drag ceases to be a parabolic function of the lift.
The analytical polar given above can be plotted by any computer code, or a plot can be found in any gliding textbook. You can also ask an instructor to draw you one.
### Minimum Sink Airspeed
We seek the indicated airspeed that gives the minimum sink rate. This is the indicated airspeed to fly at only if you want to stay in the air for as long as possible on the altitude available. It is useful when thermalling. Flying at this speed may not get you anywhere (in extreme cases you can go backwards rather quickly), so caution and thought are needed.
To find this airspeed, the analytical polar is differentiated to reveal the maximum:
$\frac{d V_S}{dV} = 0$
This gives:
$V_{MS} = (\frac{4kW^2}{3 \rho^2 S^2 C_{D0} \pi A})^{\frac{1}{4}}$
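A quick numerical check of this closed form against a brute-force scan of the polar (same assumed constants as the earlier sketches):

```python
import math

W, S, RHO = 3500.0, 11.0, 1.225   # assumed illustrative values
C_D0, K, AR = 0.010, 1.05, 25.0

v_ms = (4 * K * W**2 / (3 * RHO**2 * S**2 * C_D0 * math.pi * AR)) ** 0.25
print(round(v_ms, 2))  # closed-form minimum-sink speed, ~18.6 m/s here

def v_sink(v):
    return (RHO * S * C_D0 / (2 * W)) * v**3 + (2 * K * W) / (math.pi * AR * RHO * v * S)

best = min((v / 100 for v in range(1500, 4000)), key=v_sink)
print(round(best, 2))  # the scan agrees to its 0.01 m/s resolution
```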
It is common practice to define a quantity called the wing loading, $$\omega = \frac{W}{S}$$, which quantifies how much weight each square metre of wing area carries. With this definition in place, notice that:
$V_{MS} \propto \sqrt{\omega}$
The implication is, the minimum sink airspeed is not fixed: with the glider loaded heavier it will become higher.
It is worth noting that, by flying at this airspeed:
$V_S(V_{MS}) \propto \sqrt{\omega}$
Therefore, by loading the glider heavier, the minimum sink rate possible is also higher. This implies that gliders with low wing loading can make use of weaker thermals with a limited rising speed.
If a glider thermals at the minimum sink airspeed, carrying water ballast will enable the glider to fly faster and likely at a larger radius. This can prove beneficial, as some experienced pilots will attest, but a mathematical proof is not possible in the absence of a model characterising the behaviour of the thermal. Water ballast is usually carried on good thermal days but not on days with marginal conditions. You will sometimes hear pilots say that the water 'doesn't work'; the author's interpretation of this is that, because the thermals are not strong and wide enough, the increase in minimum sink caused by carrying water ballast outweighs the possible benefits, if any.
### Best Glide
The best glide ratio achievable for a given glider in a particular loading and configuration can be determined from the analytical polar. Recall that the glide ratio is given by the horizontal distance covered over the vertical altitude drop. For analytical purposes, it is necessary to make a small angle approximation such that the horizontal distance is approximately given by $$V \times t$$ where $$t$$ is time. The accuracy of this approximation is shown in previous sections.
Using the small angle approximation, the inverse of the glide ratio is given by:
$\frac{V_S}{V} = \frac{\rho}{2 \omega} C_{D0} V^2 + \frac{2 k \omega}{\pi A \rho V^2}$
This compound quantity is to be differentiated with respect to $$V$$ to reveal the minimum:
$\frac{d}{dV}(\frac{V_S}{V}) = \frac{\rho}{\omega} C_{D0} V - \frac{4k \omega}{\pi A \rho V^3} =0$
This yields:
$V_{BG}=(\frac{4k \omega^2}{\pi A \rho^2 C_{D0}})^{\frac{1}{4}}=(3)^{\frac{1}{4}} V_{MS}=1.3161 V_{MS}$
Therefore, the best glide speed is always 31.6% higher than the minimum sink airspeed according to our analysis. Slight discrepancies may arise in reality due to the approximations we have made, mainly the aerodynamic ones.
From the expression for $$V_{BG}$$ given above, it is evident that:
1. $$V_{BG} \propto \sqrt{\omega}$$, such that the best glide speed will increase as the wing loading increases, by means such as using water ballast.
2. Increasing the aspect ratio can reduce the best glide speed.
3. Decreasing $$C_{D0}$$ can increase the best glide speed.
It is also of interest to calculate the best possible performance of the glider, which, by definition, happens at the best glide speed. The algebra proceeds as follows:
$V_{BG}^2 = \sqrt{\frac{4k}{\pi A C_{D0}}} \frac{\omega}{\rho}$
This is to be substituted into:
$(\frac{V_S}{V})_{\text{best}} = \frac{\rho}{2 \omega} C_{D0} V_{BG}^2 + \frac{2 k \omega}{\pi A \rho V_{BG}^2}$
To yield:
$(\frac{V_S}{V})_{\text{best}} = 2 \sqrt{\frac{k C_{D0}}{\pi A}}$
Or, alternatively (to give the large number like 40 or 50 that we are familiar with):
$\text{Best Glide Ratio} = 0.5 \times \sqrt{\frac{\pi A}{k C_{D0}}}$
This is a very important result, as it gives all the factors underpinning the best performance of a glider (in a particular configuration):
1. The wing loading does not change the best performance. Therefore, a 50:1 glider will be 50:1 with a light pilot or a heavy pilot, or with or without water ballast. This is, however, based on our model, and in reality more factors may come into play. For example, if the wing loading is high, then the best glide speed increases accordingly and the change in Reynolds number may have some effect. Alternatively, the different structural deflections of the wings may produce subtle differences in the aerodynamic geometry. Nevertheless, this is the rationale underpinning the use of water ballast: it does not degrade aerodynamic performance.
2. Increasing the aspect ratio of the wing is an effective (and, in fact, easiest) way to improve the best performance, as the best glide ratio scales with $$\sqrt{A}$$. This is the reason why high performance gliders have slender wings.
3. Improving the aerodynamic design, such that $$C_{D0}$$ or $$k$$ is reduced, can improve the best glide ratio, as we would intuitively expect. However, modern advancement in aerodynamics has been agonisingly slow; comparing a fibreglass glider built in the 1980s with a modern one, you realise there is not much potential left to be released. What differences do you spot?
From a geometric point of view, the above solution process is equivalent to finding a ray from the origin that is tangent to the polar curve. You should ask an instructor to demonstrate this to you to reinforce the understanding. This geometric method is useful when more factors are taken into account, such that an analytical solution cannot be obtained easily.
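To see what the best-glide formula implies numerically, here is a minimal sketch (the values of k and C_D0 are assumed placeholders):

```python
import math

K, C_D0 = 1.05, 0.010   # assumed induced-drag factor and zero-lift drag coefficient

for ar in (18, 25, 33):
    best = 0.5 * math.sqrt(math.pi * ar / (K * C_D0))
    print(ar, round(best, 1))
# AR 18 -> ~36.7, AR 25 -> ~43.2, AR 33 -> ~49.7:
# doubling the best glide ratio requires roughly quadrupling the aspect ratio
```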
### Water Ballast
Water ballast has no effect on the glider best performance, but it makes the best glide speed faster, so the pilot can cover a certain amount of cross-country distance faster. This is the first reason for using water ballast.
In fact, the use of water ballast does not change the shape of the polar at all, not only at the best performance point. To see this, please read the next section on the non-dimensional polar. The shape of the polar is dictated only by the best glide speed and the sink rate at best glide, but, as shown previously, both quantities are proportional to $$\sqrt{\omega}$$. Therefore, as the wing loading changes, the polar curve scales about the origin with $$\sqrt{\omega}$$ but keeps its shape. Because the best glide line is a tangent ray to the polar from the origin, and the polar is scaled about that same origin, the slope of the ray (the best performance) is invariant.
The second reason for using water ballast is to improve the performance in headwind and sinking air. This is difficult to prove mathematically as the workings in the next section will show, but geometrically this can easily be demonstrated. Because the polar curve is scaled to be larger, any shift in origin due to headwind and sinking air is comparatively smaller. This makes the new tangent to the polar closer to the best glide line in stationary air, such that the degradation of performance is less.
Conversely, it can be demonstrated graphically that water ballast is detrimental to performance (in terms of covering ground distance) when there is tailwind or rising air. However, gliders are not usually flown downwind for meaningful distances, and when rising air is present, a pilot will attempt to stay in it and soar, rather than moving to another place, so these effects are unimportant.
Experienced pilots sometimes argue that carrying water ballast improves thermalling performance. A mathematical argument cannot be made unless a model exists to characterise the behaviour of a thermal (such models do exist, but their validity is questionable in the author's opinion). It is worth pointing out, very hand-wavingly, that if there is any benefit in carrying water ballast when thermalling, it will come from thermalling at a larger radius, rather than at a higher speed.
### Effects of Headwind
You may have been instructed that you need to fly faster in headwind or sinking air to cover ground efficiently. This section briefly demonstrates the underlying mathematics, but you are encouraged to use the geometric method to prove to yourself that this is indeed the case.
We firstly approximate true airspeed with indicated airspeed. By doing this, the following construction is possible:
$V = V_g + V_w$
Where $$V_g$$ is ground speed and $$V_w$$ is the headwind speed. This expression is to be substituted into the analytical polar. We should also bear in mind that it is the most efficient covering of ground distance that is of interest, therefore:
$\frac{V_S}{V_g} = \frac{\rho C_{D0}}{2 \omega} \frac{V^3}{V-V_w} + \frac{2k \omega}{\pi A \rho}\frac{1}{V(V-V_w)}$
This quantity must be differentiated with respect to $$V$$ to find the optimum. Performing the differentiation and simplifying considerably:
$2V^5 - 3V^4 V_w - 2CV + CV_w = 0$
Where:
$C = V_{BG}^4$
This is a fifth-order polynomial. It is a fact established by Abel (in his 1824 impossibility proof) that a general fifth-order polynomial cannot be solved by radicals. However, we can 'prove' that the solution lies in $$V > V_{BG}$$ by considering the following:
1. Substitute $$V=V_{BG}$$ into the equation above, and show that the value is negative.
2. Differentiate the expression and substitute $$V=V_{BG}$$ into the differentiated expression, and show that the value is positive.
3. Hence, we have a function that is presently negative, but it is increasing, so we would expect a root (the required solution) at a larger $$V$$ than the present $$V$$ which is $$V_{BG}$$
The above arguments are far from watertight: the differentiated expression only gives a positive value if $$V_w \leq \frac{2}{3} V$$. While this is usually the case, the above cannot constitute a proof. A more rigorous analysis of the locations of the maxima and minima is required.
The problem is much simpler if the geometric method is used: to use the geometric method, imagine setting up a ground speed zero which is different from the airspeed zero. The polar is plotted with respect to the airspeed zero but the tangent ray needs to start from the ground speed zero. Because the ground zero is located in $$V>0$$, the tangent is steeper and intersects the polar at a larger $$V$$.
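Since the quintic has no closed-form solution, a numerical root-finder is the practical route; here is a minimal bisection sketch (speeds in arbitrary consistent units, values illustrative):

```python
def speed_to_fly(v_bg, v_w):
    """Root of 2V^5 - 3V^4 Vw - 2CV + CVw = 0 with C = V_BG^4, searched above V_BG."""
    c = v_bg ** 4
    f = lambda v: 2 * v**5 - 3 * v**4 * v_w - 2 * c * v + c * v_w
    lo, hi = v_bg, 4 * v_bg   # f(v_bg) = -2 v_bg^4 v_w < 0, f(4 v_bg) > 0 for modest v_w
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(speed_to_fly(v_bg=24.5, v_w=10.0), 1))  # ~28.2: fly faster than best glide
```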
### Effects of Sinking Air
If the glider is flying in some air that is sinking with a uniform downward speed $$V_{SA}$$, then the polar equation should be adapted into the following form:
$V_S = \frac{\rho C_{D0}}{2 \omega} V^3 + \frac{2k \omega}{\pi A \rho} \frac{1}{V} + V_{SA}$
Using the differentiation method to find the optimum airspeed for covering ground (notice that, because there is no headwind or tailwind, the indicated airspeed is equivalent to ground speed. This is not to say we approximate TAS with IAS, but there is a monotonic relationship between the two which is dictated by the altitude, which is a free variable in our problem.)
$\frac{d}{dV}(\frac{V_S}{V}) = 0$
$\frac{\rho C_{D0}}{\omega} V^4 - V_{SA} V - \frac{4k \omega}{\pi A \rho} = 0$
This equation has the solution of $$V=V_{BG}$$ if $$V_{SA} = 0$$ as expected, but if $$V_{SA} > 0$$, then the solution is $$V>V_{BG}$$. The proof of this is left as an exercise for the reader.
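The quartic is likewise easy to solve numerically; a minimal sketch with the same assumed constants as the earlier ones (wing loading derived from W = 3500 N, S = 11 m^2):

```python
import math

RHO, C_D0, K, AR = 1.225, 0.010, 1.05, 25.0   # assumed placeholder constants
OMEGA = 3500.0 / 11.0                          # wing loading W/S [N/m^2]

def speed_in_sink(v_sa):
    """Root of (rho C_D0/omega) V^4 - V_SA V - 4 k omega/(pi A rho) = 0, by bisection."""
    a = RHO * C_D0 / OMEGA
    b = 4 * K * OMEGA / (math.pi * AR * RHO)
    f = lambda v: a * v**4 - v_sa * v - b
    lo, hi = 10.0, 100.0        # bracket: f(lo) < 0 < f(hi) for these constants
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(speed_in_sink(0.0), 1))  # ~24.5 m/s: recovers V_BG when the air is still
print(round(speed_in_sink(2.0), 1))  # ~39.4 m/s: much faster in 2 m/s sinking air
```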
### Effects on the Minimum Sink
It should be obvious by now that the above adjustments to the polar have no effect on the minimum sink speed: the difference only arises when $$V$$ is divided over to the left side, i.e. a glide ratio is sought after. Physically this makes sense: the minimum sink speed is purely an interaction between the glider and the surrounding air, and if we disregard all relativity to the ground, then the air in which the glider flies can move in whichever possible way (so long as it is not accelerating) and the glider can perform the same macroscopic motion with it without altering the detailed aerodynamics.
## The Non-Dimensional Polar and the Determination of the Polar in Practice
The polar equation can be abstracted into the following form:
$V_S = aV^3 + \frac{b}{V}$
With $$a=\frac{\rho C_{D0}}{2 \omega}$$ and $$b=\frac{2k \omega}{\pi A \rho}$$ (lower-case symbols are used here so as not to clash with the aspect ratio $$A$$).
At best glide, by our calculations from the previous sections, the best glide speed and the corresponding sink rate are given by (notice the change in notation for reading convenience):
$V_i = (\frac{b}{a})^{\frac{1}{4}}$
$V_{Si} = 2(ab^3)^{\frac{1}{4}}$
These can be substituted into the abstract polar equation, such that:
$2 (\frac{V_S}{V_{Si}}) = \frac{aV^3}{(ab^3)^{\frac{1}{4}}} + \frac{b}{(ab^3)^{\frac{1}{4}}V}$
This can be simplified into:
$2\frac{V_S}{V_{Si}} = (\frac{V}{V_i})^3 + (\frac{V_i}{V})$
This is the non-dimensional polar. It tells us that the polar curve is deterministic from only two quantities: the best glide speed, and the sink rate at the best glide speed. These two quantities both depend on the wing loading, so the additional requirement is that they be measured with the same level of wing loading.
Aerodynamic coefficients such as $$C_{D0}$$ are difficult to determine: measuring such quantities requires sophisticated equipment and techniques, including wind-tunnel testing and flight tests. However, the polar can be determined simply by test-flying the glider and plugging the measured best-glide airspeed and sink rate into the non-dimensional polar as coefficients. This is a useful way to determine the polar of a glider for which you may not have a manual.
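A minimal sketch of this procedure (the two 'measured' numbers below are hypothetical placeholders, not real flight-test data):

```python
def make_polar(v_i, v_si):
    """Full polar Vs(V) from the non-dimensional form 2 Vs/Vsi = (V/Vi)^3 + Vi/V."""
    return lambda v: 0.5 * v_si * ((v / v_i) ** 3 + v_i / v)

polar = make_polar(v_i=24.5, v_si=0.55)   # hypothetical best-glide speed and sink [m/s]
for v in (20.0, 24.5, 30.0, 40.0):
    print(v, round(polar(v), 2))          # the ratio polar(v)/v is smallest at v_i
```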
The above, nevertheless, assumes that the parabolic relationship between $$C_L$$ and $$C_D$$ holds true, which is something we have been doing throughout this article. This relationship has its limitations and such limitations lead to most of the deviations from the analytical polar as observed in flight.
|
2020-04-02 08:51:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7692360877990723, "perplexity": 580.493665756107}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506870.41/warc/CC-MAIN-20200402080824-20200402110824-00015.warc.gz"}
|
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4543
|
## Palladium/Copper Bimetallic Catalyzed Decarboxylative C-C Bond Formations
• Redox-neutral decarboxylative coupling reactions have emerged as a powerful strategy for C-C bond formation. However, the existing reaction conditions have limitations: the coupling of aryl halides was restricted to ortho-substituted benzoic acids, and alkenyl halides were not applicable in decarboxylative coupling reactions. Within this thesis, the development of Pd/Cu bimetallic catalyst systems to overcome these limitations is presented. In the first part of the PhD work, a customized bimetallic PdII/CuI catalyst system was successfully developed to facilitate the decarboxylative cross-coupling of non-ortho-substituted aromatic carboxylates with aryl chlorides. The restriction of decarboxylative cross-coupling reactions to ortho-substituted or heterocyclic carboxylate substrates was overcome by holistic optimization of this bimetallic Cu/Pd catalyst system. All kinds of benzoic acids, regardless of their substitution pattern, can now be applied in decarboxylative cross-coupling reactions. This confirms the prediction by DFT studies that the previously observed limitation to certain activated carboxylates is not intrinsic. The catalyst system also shows higher performance in the coupling of ortho-substituted benzoates, giving much higher yields than those previously reported. ortho-Methyl benzoate and ortho-phenyl benzoate, which had never before been converted in decarboxylative coupling reactions, gave reasonable yields. Together, these results further confirm the superiority of the new protocol. In the second part of the PhD work, arylalkene syntheses via two different Pd/Cu bimetallic-catalyzed decarboxylative couplings were developed. This part consists of two projects: 2a) decarboxylative coupling of alkenyl halides; 2b) decarboxylative Mizoroki-Heck coupling of aryl halides with α,β-unsaturated carboxylic acids. In project 2a, widely available, inexpensive, bench-stable aromatic carboxylic acids are used as nucleophile precursors instead of the expensive and sensitive organometallic reagents commonly used in previously reported transition-metal-catalyzed cross-couplings of alkenyl halides. With this protocol, alkenyl halides are used in decarboxylative coupling reactions for the first time, allowing regiospecific synthesis of a broad range of (hetero)arylalkenes in high yields. Unwanted double-bond isomerization, a common side reaction in the alternative Heck reactions, especially in the coupling of cycloalkenes or aliphatic alkenes, did not take place in this decarboxylative coupling reaction. Polysubstituted alkenes that are hard to access with the Heck reaction are also produced in good yields. The reaction can easily be scaled up to gram scale. The synthetic utility of this reaction was also demonstrated by synthesizing an important intermediate of a fungicidal compound in high yield within two steps. In project 2b, a Cu/Pd bimetallic-catalyzed decarboxylative Mizoroki-Heck coupling of aryl halides with α,β-unsaturated carboxylic acids was successfully developed, in which the carboxylate group directs the arylation into its β-position before being tracelessly removed via protodecarboxylation. It opens up a convenient synthesis of unsymmetrical 1,1-disubstituted alkenes from widely available precursors. This reaction features good regioselectivity, which is complementary to that of traditional Heck reactions, and also presents excellent functional group tolerance.
Moreover, a one-pot, 3-step 1,1-diarylethylene synthesis from methyl acrylate was achieved, in which solvent changes or isolation of intermediates are not required. This subproject presents an example of the utility of carboxylic acids in synthesizing valuable compounds that are hard to access via conventional methodologies.
|
2017-03-24 10:19:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5573872923851013, "perplexity": 14123.583018482414}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218187792.74/warc/CC-MAIN-20170322212947-00416-ip-10-233-31-227.ec2.internal.warc.gz"}
|
https://community.wolfram.com/groups/-/m/t/1272436?sortMsg=Recent
|
# Read SEGY data in Mathematica?
Posted 1 year ago
Please, I need code/software to read SEGY data in order to input it to Mathematica.
Posted 1 year ago
Dear Anton, I am working on velocity analysis (stacking velocity) using Mathematica. I applied my code to the traces but it did not work. Of course, I know why it did not work: my data has to be sorted into CMP (common midpoint) gathers. Can the CustomImportExport package do sorting? If not, could you please advise on how to sort my data into CMP gathers? Thank you.
Posted 1 year ago
Hi people! I am trying to understand what some commands do in Mathematica, but I do not understand them yet. I have attached the code to this write-up. I know what each of the terms means, but I need to understand what each command does and how it performs its function. Thank you. Attachments:
Posted 1 year ago
Please see the advanced dedicated post @Kirill Belov kindly contributed: Working with SEGY file format for storing geophysical data http://community.wolfram.com/groups/-/m/t/1283198
Posted 1 year ago
It was a small mistake in the documentation. Try this: data = CustomImport["path/to/file.segy", "SGY"] ArrayPlot[data] ArrayPlot[data["Traces"][]] (* data["Traces"][] returns an array of numbers *) Or you can download the latest version of the package (I updated it several days ago), and the code from the ExampleOfUse will work.
Posted 1 year ago
Thank you Anton. I was able to load the data successfully but the ArrayPlot did not work. Kindly highlight to me how to do the plotting after loading the SEGY data.
Posted 1 year ago
Thank you Anton, I have been able to do that successfully.
Posted 1 year ago
Thanks for finding the bug! I fixed this. In fact, there is a small inaccuracy in the documentation. Now you can use the function ArrayPlot like this: data = CustomImport["path/to/file.segy", "SGY"] ArrayPlot[data] ArrayPlot[data["Traces"]] More examples in the file CustomImportExport.nb
Posted 1 year ago
First of all, download the repository. Second, open and evaluate the Installer.nb notebook. Third, create a new notebook for your work and evaluate Get["CustomImportExport"]. To import a file, evaluate: dataIn = CustomImport["YourFileName.sgy", "SEGY"] You can see and analyze the trace headers: dataIn["TraceHeaders", 1;;-1, {"gx", "gy"}] — in this line we see the coordinates of the geophones. This package uses the SeismicUnix notation for headers. In the next line you can get the values of the first trace: dataIn["Traces", 1] You can plot the data (seismic section or gathers) the way Christopher showed. If you have questions, please write!
Posted 1 year ago
Hi Christopher! I am new to Mathematica; kindly highlight the steps to follow in order to read a SEGY file into Mathematica. Thank you.
Posted 1 year ago
Thank you for sharing this interesting body of work. I installed the package as per the instructions and am exploring the examples. As it happens, the first one I tried, from the file "ExampleOfUse.md", has a minor issue. The code: Get["CustomImportExport"]; SetDirectory[\$CustomImportExportDirectory]; file = FileNameJoin[{"CustomImportExport", "Resources", "MarmousiModel.segy"}]; data = CustomImport[file, "SEGY"]; traces = data["Traces"]; Head[traces] (* check the form of traces *) returns SEGYElement. So the following ArrayPlot fails unless we take the second part of "traces", and then it works very nicely: ArrayPlot[Transpose[traces[[2]]], AspectRatio -> 0.5, ImageSize -> Large, PlotLegends -> Automatic, FrameTicks -> Automatic, PlotLabel -> "Marmousi model \n P-wave velocity section"] (plot label translated from the Russian original).
Posted 1 year ago
Thank you Anton for sharing this code with me. This is exactly what I need to be able to read SEGY data into Mathematica. However, I would appreciate it if you could provide an English version of the step-by-step guide to using it. I am a new user of Mathematica. Thank you once again.
|
2019-06-16 05:37:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35793590545654297, "perplexity": 2303.856731699721}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627997731.69/warc/CC-MAIN-20190616042701-20190616064701-00510.warc.gz"}
|
https://gsebsolutions.com/gseb-solutions-class-8-science-chapter-17/
|
# GSEB Solutions Class 8 Science Chapter 17 Stars and The Solar System
Gujarat Board GSEB Textbook Solutions Class 8 Science Chapter 17 Stars and The Solar System Textbook Questions and Answers, Notes Pdf.
## Gujarat Board Textbook Solutions Class 8 Science Chapter 17 Stars and The Solar System
### Gujarat Board Class 8 Science Stars and The Solar System Textbook Questions and Answers
Question 1.
Which of the following is NOT a member of the solar system?
(a) An asteroid
(b) A satellite
(c) A constellation
(d) A comet.
Answer: (c) A constellation
Question 2.
Which of the following is NOT a planet of the sun?
(a) Sirius
(b) Mercury
(c) Saturn
(d) Earth
Answer: (a) Sirius
Question 3.
Phases of the moon occur because:
(a) we can see only that part of the moon which reflects light towards us.
(b) our distance from the moon keeps changing.
(c) the shadow of the earth covers only a part of the moon’s surface.
(d) the thickness of the moon’s atmosphere is not constant.
Answer: (a) We can see only that part of the moon which reflects light towards us.
Question 4.
Fill in the blanks:
(a) The planet which is farthest from the Sun is ………..
(b) The planet which appears reddish in color is ……….
(c) A group of stars that appear to form a pattern in the sky is known as a ………..
(d) A celestial body that revolves around a planet is known as ………..
(e) Shooting stars are not …………..
(f) Asteroids are found between the orbits of ………… and ………..
Answer:
(a) Neptune
(b) Mars
(c) Constellation
(d) Satellite
(e) meteors
(f) Mars, Jupiter.
Question 5.
Mark the following statements is true or false:
(a) Pole star is a member of the solar system.
(b) Mercury is the smallest planet of the solar system.
(c) Uranus is the farthest planet in the solar system.
(d) INSAT is an artificial satellite.
(e) There are nine planets in the solar system.
(f) Constellation Orion can be seen only with a telescope.
Answer:
(a) False
(b) True
(c) False
(d) True
(e) False
(f) False
Question 6.
Match items in Column A with one or more items in Column B:
Question 7.
In which part of the sky can you find Venus, if it is visible as an evening star?
In the western part of the sky.
Question 8.
Name the largest planet of the solar system.
The largest planet of the solar system is Jupiter.
Question 9.
What is a constellation? Name any two constellations.
A group of stars that appears to form a recognizable pattern is called a constellation. Two constellations are the Great Bear (Ursa Major) and Orion.
Question 10.
Draw sketches to show the relative positions of prominent stars in
(a) Ursa Major
(b) Orion.
Question 11.
Name two objects other than planets which are members of the solar system.
Comets and asteroids.
Question 12.
Explain how you can locate the Pole star with the help of Ursa Major.
The Pole star can be located with the help of the two stars at the end of Ursa Major. Imagine a straight line passing through these stars and extend it towards the north, to about five times the distance between the two stars. The star seen in this direction is the Pole star.
Question 13.
Do all the stars in the sky move? Explain.
No, the stars do not actually move in the sky; they only appear to move from east to west. This is due to the rotation of the earth on which we live, which spins from west to east. The Pole star, however, does not appear to move.
Question 14.
Why is the distance between stars expressed in light years? What do you understand by the statement that a star is eight light-years away from the earth?
Stars are very far away from us and from each other; the distances between stars are many millions of kilometres. The distance between the sun and the earth is about 150,000,000 km, whereas the distance to Alpha Centauri is about 40,000,000,000,000 km. It is not convenient to express such distances in kilometres, so they are expressed in light years. A light year is the distance travelled by light in one year. A star being eight light years away from the earth means that light from that star takes eight years to reach the earth.
Question 15.
The radius of Jupiter is 11 times the radius of the Earth. Calculate the ratio of the volumes of Jupiter and the Earth. How many Earths can Jupiter accommodate?
Let the radius of the Earth = R units.
The volume of the Earth = $$\frac{4}{3}\pi R^3$$ cu. units.
Now, the radius of Jupiter = 11R units.
Volume of Jupiter = $$\frac{4}{3}\pi (11R)^3 = \frac{4}{3}\pi (1331R^3)$$ cu. units.
Now the ratio of the volume of Jupiter to the volume of the Earth
= $$\frac{1331}{1}$$ = 1331 : 1
So 1331 Earths can be accommodated in one Jupiter.
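A one-line check of the cube scaling (a minimal sketch; the ratio 11 comes from the question):

```python
r_ratio = 11                 # radius of Jupiter / radius of Earth, from the question
volume_ratio = r_ratio ** 3  # volume scales with the cube of the radius
print(volume_ratio)          # 1331, so 1331 Earths fit in one Jupiter by volume
```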
Question 16.
Boojho made the following sketch of the solar system (Fig. 17.10). Is the sketch correct? If not, correct it.
|
2022-05-22 17:53:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5552895069122314, "perplexity": 1504.0877832643785}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662545875.39/warc/CC-MAIN-20220522160113-20220522190113-00442.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/algebra/algebra-1/chapter-1-foundations-for-algebra-chapter-review-1-4-properties-of-real-numbers-page-70/51
|
## Algebra 1
$1$
We start with the given expression: $\frac{6+3}{9}$ The order of operations states that first we perform operations inside grouping symbols, such as parentheses, brackets, and fraction bars. Then, we simplify powers. Then, we multiply and divide from left to right. First, we simplify above the fraction bar: $\frac{9}{9}$ Then, we divide: $1$
https://electronics.stackexchange.com/questions?sort=newest
# All Questions
126,576 questions
0answers
2 views
### How to end sending data over I2C by Slave or Master?
Let us say a Slave or Master is sending multiple bytes to the receiver on I2C bus and the number of bytes is not defined before hand. So then how will the sender tell the receiver that it has no more ...
0answers
7 views
### What happens to the waveform when the E3 is enabled in 74LS138 (1 of 8 decoder)?
I tried to understand the function of the 74LS138. So here's the schematic. So what will happen to the rest of the outputs?
0answers
4 views
### Charging battery bank
I have two 8 volt batteries in series to get the 16 volts that I need to run my amp. It works fine, but I need to know how to charge them, since I am running a 12-volt system in my truck.
0answers
8 views
### Nodemcu controlling a button on a pcb?
So i am trying to connect this button with a Nodemcu esp8266 (the push button on the bottom side of the pcb in the picture) Can anyone help me to wire this to a esp8266? Also i measure 3.3v whether ...
0answers
16 views
### Is it possible to use J1939 and CANOpen on the same bus?
Please correct me if i'm wrong. The way I understand it, J1939 is based on CAN2.0b and uses 29 bit identifiers. CANOpen is based on CAN2.0a and uses 11 bit identifiers. If for example, you have a ...
1answer
11 views
### Reading Audio from SD Card and playing on DE1_SoC FPGA board
I am trying to read audio a micro SD card which will be stored in the form of a .wav or .mif file. I currently have an audio controller that takes in mic in as input. I want to change that so that the ...
2answers
19 views
### Is it possible to combine Common and GND on a DC/DC converter?
There is an isolated DC / DC converter TMR1222 12VDC will come to the input, The outputs will be + 12VDC and -12VDC When connecting with dual output, there is a "Common" contact Tell me, is this ...
1answer
16 views
### Few Strip of WS2812 with 3V3
I am trying to drive 8 strips of led WS2812 (5V) with a CPU STM32F030. The output from the CPU is at 3V3. So, I used a 2N3906 connected to 5V with resistor. I've got no problem with that. Now,I want ...
0answers
17 views
### I2C High Speed Multiple Slaves Layout
I am working on a system where I need to have 16 I2C devices on one bus. Due to data rates I need to run at 3.4 Mbit high speed mode. Are there any design/layout guidelines to help deal with a large ...
3answers
57 views
### Is there any reason as to why the schematic symbol of comparators is almost equivalent to that of op amps?
They do similar tasks, but they are almost completely different because comparators output digital signals. In that case, I am confused as to why their schematic symbol is almost entirely the same. Is ...
1answer
20 views
### Reduce Phone charger values
I’m trying to modify the output of an old Motorola Phone charger which is rated 5v 500ma and i want to modify it’s output to 1.2v 150ma to charge a small battery . can this be done ? I can’t open the ...
1answer
18 views
### Backup PWM signal for DC fans
Currently, I have a board that powers 4 DC fans. A PWM signal is received from an external microcontroller to control the fan speeds. Unfortunately, the microcontroller sometimes fails to send the PWM ...
0answers
35 views
### Using the same potentiometer for two 555 timers
simulate this circuit – Schematic created using CircuitLab I have two 555 timers hooked up and I want their duty cycle to be controlled by the same potentiometer. The second one has twice ...
0answers
12 views
### Xhp-2 compatible smt right angle header
Does such a product exist or is there another option? We have a few hundred batteries with jst xhp-2 female connectors and need to connect to pcb we are designing, but we can't use throughhole parts. ...
0answers
21 views
### How do I wire this to a trrs headphone jack [on hold]
I have a pair of Jabra Evolve 80 wired headphones, and the left ear stopped working. I bought a replacement trrs headphone jack, but when I cut off the old jack, there were six wires inside of the ...
0answers
21 views
### Cockcroft-Walton generator [on hold]
I use a reverse-fed transformer for a Cockcroft-Walton circuit. Why doesn't it work? My input is a 2V signal generator, and I want to get 2KV.
0answers
27 views
### Adding LED to anti-audio pop circuit
I have built a slight variation of this circuit listed elsewhere on this site: 5v Electret microphone to PC mute switch pop help I'm using this to add an inline switch to my computer headset - ...
3answers
33 views
### Can I replace a dual section capacitor in a tube amplifier design with two single capacitors?
I was planning on building this schematic, but found it hard to source their 20/20 dual section cap (C3/C4). Could I replace this with two single 20uF caps, or is it a dual-section for a reason?
2answers
33 views
### Single-package DC/DC converters for PoE — good idea?
I'm planning a design where we'll use PoE to power a data acquisition board (essentially, an ADC + microcontroller + ethernet PHY and interface). I look at DC/DC isolated converters to get the 5V or ...
0answers
38 views
### Why is my wiring a fire hazard when we have circuit breakers? [migrated]
A neighbor in my (US) building just found out that his laundry room (and others in the building including mine) are wired incorrectly. When the original washer and dryer were replaced, the dryer ...
2answers
39 views
### How to get an analog signal 0 to 12V DC based on reference 0 to 5V DC?
I have an Arduino witch is creating an analog voltage from 0 to 5V (I used external DAC for it) and I need to use it for the regulation of "big" circuit with 12V DC. So I am trying to get on the "big" ...
0answers
17 views
### Calculating the linear travel per step of an actuator
I am using a linear actuator from Portescape. Part # 20DBM-L, 5V series D1B Bipolar. Datasheet Link, with ST SPIN 220 motor driver datasheet link to drive it, in full step mode. From the datasheet of ...
3answers
70 views
### What benefit are opto-couplers for relay switching providing, if both sides share the same ground?
I wanted to connect some relays to my Raspberry Pi (for 12V, not 230V) and I found the waveshare RPi Relay Board, which has the following schematic: I like that they spend some thoughts on filtering ...
0answers
15 views
### How do I determine MOSFET parameters in LTspice
I am designing a phase locked loop on LTspice (which I'm new to), but have come across a snag in the procedure: Is there a way to determine the output resistance of a mosfet (as the I'm not able to ...
0answers
22 views
### Exponential circuit in Multisim
I have a circuit which has an exponential function as shown in the figure. I tried to put an anti-log opamp with the diode and BJT transistor, but did not get what I wanted. I think the whole problem ...
1answer
28 views
### Diode ROM with decoder and multiplexers
I have this question on my exam I couldn't answer: A diode ROM has been built using a decoder and multiplexers. If A4A3A2A1A0 = 01001, what is D3D2D1D0? Here is the first schematic: The answer is ...
0answers
18 views
### 555 timed power at high to low edge in battery powered circuit
I have a lovely Atmel touch sensor whose output is high only as long as it's touched. I want both off-on and on-off edges of its output to initiate two 3 sec pulses (and the time between edges is > 3 ...
1answer
44 views
### Recommended air flow and temperature in hot-air gun to recover a MLB from MacBook
First I would like to say I've searched and researched, I think enough, for an old topic that discuss this matter but wasn't able to find any specific information. I've a main logic board from a ...
1answer
40 views
### Control current to one of two loads on PCB
I am currently working on a project where 2 batteries powers multiple components on a board. A raspberry pi, 2 thrusters and a winch. Each device is powered via the battery and a buck converter. The ...
1answer
26 views
### How to interface HCSR04 ultrasonic sensor with ATxmega256A3?
I have to interface HCSR04 with ATXMega256. I am using the TCC0 Timer in Capture mode with setting it to capture pulse width, using Event System Channel 0 source: ...
1answer
46 views
### Is there a minimum voltage necessary for solenoid?
For solenoids or electromagnets, is there a minimum voltage required or necessary? Example, 100mm long iron wrapped with insulated copper.
0answers
19 views
### USBC loading voltage problem
We are trying to get voltage from a DC power supply, it’s connected in series by soldering the wires from the DC power supply to a switch. The other end of the switch is a male USBC connected to a ...
2answers
37 views
### Optocouplers driving MOSFETs in an inverter
For many hours I was trying to simulate in LTspice XVII an optocoupler driver for MOSFETs in an inverter. My project is to build a SMPS with linear voltage stabilizer (I didnt put it on the schematic ...
1answer
32 views
### CD4006 Behavioral model in LTSpice
I want to simulate the behavioral model of CD4006 chip in LTspice. It is a CMOS 18-Stage Static Register. Looking at the Datasheet I see the one stage logic diagram that consist of inverting buffers ...
1answer
26 views
### acs71020 Connecting and reading voltage properly
I am using an ACS71020 3.3V I2C version hooked up to ESP32. I have connected the IC as proposed in the datasheet. Rsense used is 1.8k ohm as proposed in the evaluation board user manual for 230-240 ...
1answer
33 views
### Geometric spreading of an impulse antenna
I am new to radar and curious about the geometric spreading in free space. I set up an air-coupled antenna about a half meter above the ground, with a large PEC on the ground. Then I adjusted the ...
1answer
40 views
### Need Help identify ICs in battery protector PCB
Does anyone recognize these ICs? Manufacturer, model # etc. They are part of a lithium-ion battery protector. Many thanks for your help enter image description here
2answers
41 views
### Is the TC (“terminal count”) output of a 4-bit sync counter (74163) reasonably assumed to be glitch-free?
My gut tells me so, but the proof eludes me. Effectively this output is the AND of all four bits. These four bits go from 0 to 1 at very distinct times, assuming the counter is running at a ...
1answer
78 views
### LTspice not getting exactly -3dB point
Looking to answer another question on this site, I did some math and fired up LTspice to check some values, and I am not able to get the -3dB point. I found the transfer function to be H(s) = \cfrac{...
1answer
30 views
### NXP PCA9615 I2C differential buffer - failure on SCL line
I am using the PCA9615 part to extend an I2C bus. The PCA9615 converts I2C to a differential bus. PCA9615 https://www.nxp.com/docs/en/data-sheet/PCA9615.pdf There is a control box and a remote ...
1answer
67 views
### How to enter bootloader on a Embedded Linux System with no keyboard?
I am trying to reflash a microcontroller. An instruction in the manual states that: Power up the owa4x and press the space bar to enter the bootloader prompt Insert the uSD card with the ...
2answers
50 views
### One step-down regulator 24 V to 1V or two step-down regulators with lower steps?
I am trying to design a working principle scheme to step-down the input voltage from 24Vdc ---> different low voltages (1.8 V, 1 V, 1.5 V, 3.3 V with ripples ~ 1%). Could you kindly explain me the ...
0answers
41 views
### Current through resistor in opamp circuit
I have problems calculating the current through the resistors Re, Rr Maybe someone can help me with this, I don't know how to ...
0answers
21 views
### Operate Tablet without Battery while using OTG
First time poster so please correct me if I'm doing something wrong. I have an RCA Voyager III that doesn't seem to charge when I'm using an OTG cable (necessary for our design). I've even stripped a ...
2answers
47 views
### How to connect a few shields correctly?
There are several shields on my PCB: shield on CAN-bus, on RS485, on Ethernet, on ENCODER motor signals. How do I correctly connect all these shields? If I use the star topology, which filters should ...
2answers
52 views
### Equivalent circuit and substitution
Two circuits are equivalent if you provide a certain voltage to the terminals the current through the terminals will be the same and and vice versa; i.e. if you let flow a certain current through the ...
0answers
11 views
### STM32WB55 and transmission of strings over BLE
I have developed a "Medication Device" which will enable elderly people to take their medicines on time. It will memorize all the information such as Name of Medicine and Time to Take Medicine. The ...
1answer
34 views
### TinyFPGA A series board programmer compatibility
Since the programmer writes configurations to the flash of the fpga can I use the TinyFPGA programmer for A series boards on any lcmxo2 chip? Or can I at least use it for the same chip with a ...
3answers
64 views
### Getting a USB PD 3.0 port to output a constant voltage
I am trying to use this AC-DC wall adaptor as a simple constant voltage source to power a compact adjustable DC-DC converter. www.amazon.com/Charger-RAVPower-Adapter-Compatible-MacBook/dp/B07PLR7T1M ...
0answers
46 views
### Phase angle between two sinsoidal currents
I am having trouble solving an example in the book I came up with another solution, anyway the question asked to find the phase angle between two give currents- see attached image what I did I ...
http://www.gradesaver.com/as-i-lay-dying/q-and-a/in-the-section-starting-with-page-85-with-tull-narrating-why-does-vernon-mention-cashs-carving-plugs-for-the-holes-in-the-coffin-74125
# In the section starting with page 85, with Tull narrating, why does Vernon mention Cash's carving plugs for the holes in the coffin?
I don't quite understand.
https://www.gamedev.net/blogs/entry/1396807-note-to-self/
# Note to Self....
Just a quickie before I head off to bed. I managed to get some more time in coding Blitz Blox tonight, and I have to admit that these chain reactions are giving me a headache. They are a logistical nightmare I'm still working to unravel. Still, I'm having eureka moments every now and then.
Anyways, I wanted to post about two things related to TorqueScript. The first is that I made the mistake once again of defining a 'new' object inside of another 'new' definition. I dunno why TS doesn't like this, but it doesn't. I thought I'd be safe because, unlike last time, I wasn't doing two 'new' definitions on the same line. However, I suppose as far as the compiler is concerned, this is one line:
%newReaction = new ScriptObject(){ id = %this.chainReactionId++; effectsNum = 1; effect0 = new t2dParticleEffect() { config = "chainReactionLink"; };};
This caused me some grief to track down because 'id' and 'effectsNum' would be assigned an empty string value instead of the integers I thought they contained.
The second thing that tripped me up bad was the fact that, for some reason Torque likes to add a trailing space to its object IDs. For example, if you were to do this:
%object = new t2dStaticSprite();
echo("'" @ %object @ "'");
the output would look like this (obviously the ID number would vary):
'78977 '
What's important to note here is that blasted trailing space. Why? WHY I say! This caused my Schedule manager to throw up a parse error when I tried doing this:
Schedule.addFunction(false, %this, 250, "chainReaction", %dstBlock SPC %newReaction.id);
Since that last argument would be passed along as "65383 1" (note the extra space), when the Schedule manager pieces together the function call to chainReaction it comes up with "65383, ,1" for the parameters to pass. chainReaction only has two arguments, hence a parse error.
Bah. In case you weren't aware of that stupid trailing space, now you know. And if anyone could tell me why it's there, I would appreciate it.
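If you just need a workaround, trimming the string before building the argument list should do it. This is a sketch on my part, assuming your Torque build ships the stock trim() console function:

// Hypothetical workaround: strip the trailing space Torque appends to object IDs
%cleanBlock = trim(%dstBlock);
Schedule.addFunction(false, %this, 250, "chainReaction", %cleanBlock SPC %newReaction.id);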
Okay I'm done. In between coding and doing some GDNet GDC prepwork, I played some GH II and beat Hard with all 5 stars and got a few more 5 star ratings on Expert as well. It works out nice; getting up to play a song every once in a while keeps me from being chair-bound for a lengthy period of time.
The End
I admit I don't know any TorqueScript, but isn't the following more "correct"?
%newReaction = new ScriptObject()
{
id = %this.chainReactionId++;
effectsNum = 1;
%this.effect0 = new t2dParticleEffect() { config = "chainReactionLink"; };
};
I've replaced the %newReaction.effect0 with %this.effect0, since %newReaction won't exist until the first new is finished.
Or possibly this is even more correct:
%newReaction = new ScriptObject()
{
id = %this.chainReactionId++;
effectsNum = 1;
effect0 = new t2dParticleEffect() { config = "chainReactionLink"; };
};
But as I said, I don't know any TorqueScript.
Curse my sleep-deprived mind! You are correct! I totally screwed up that example. It was indeed supposed to be
%newReaction = new ScriptObject()
{
id = %this.chainReactionId++;
effectsNum = 1;
effect0 = new t2dParticleEffect() { config = "chainReactionLink"; };
};
The reason it's how it is above is because I copy-pasted the proper code, which was
%newReaction = new ScriptObject()
{
id = %this.chainReactionId++;
effectsNum = 1;
};
%newReaction.effect0 = new t2dParticleEffect() { config = "chainReactionLink"; };
So I forgot to delete "%newReaction." when I used that to make my example.
Gah I'm sure someone's already pointed it out over at my GG journal as well. Good eyes my friend, good eyes.
*edits journal post
http://simbad.cds.unistra.fr/simbad/sim-ref?bibcode=1999ApJ...514..818B
1999ApJ...514..818B - Astrophys. J., 514, 818-843 (1999/April-1)
High-velocity clouds: building blocks of the Local Group.
BLITZ L., SPERGEL D.N., TEUBEN P.J., HARTMANN D. and BURTON W.B.
Abstract (from CDS):
We suggest that the high-velocity clouds (HVCs) are large clouds, with typical diameters of 25 kpc, containing 3×10^7 M☉ of neutral gas and 3×10^8 M☉ of dark matter, falling onto the Local Group; altogether the HVCs contain 10^10 M☉ of neutral gas. Our reexamination of the Local Group hypothesis for the HVCs connects their properties to the hierarchical structure formation scenario and to the gas seen in absorption toward quasars. We show that at least one HVC complex (besides the Magellanic Stream) must be extragalactic at a distance of more than 40 kpc from the Galactic center, with a diameter greater than 20 kpc and a mass of more than 10^8 M☉. We discuss a number of other clouds that are positionally associated with the Local Group galaxies, and we show that the entire ensemble of HVCs is inconsistent with a Galactic origin. The observed kinematics imply rather that the HVCs are falling toward the Local Group barycenter. We simulate the dynamical evolution of the Local Group and find that material falling onto the Local Group reproduces the location of two of the three most significant groupings of clouds and the kinematics of the entire cloud ensemble (excluding the Magellanic Stream). We interpret the third grouping (the A, C, and M complexes) as the nearest HVC. It is tidally unstable and is falling onto the Galactic disk. We interpret the more distant HVCs as gas contained within dark matter "minihalos" moving along filaments toward the Local Group. Most poor galaxy groups should contain similar H I clouds bound to the group at large distances from the individual galaxies. We suggest that the HVCs are local analogs of the Lyman limit absorbing clouds observed against distant quasars. Our picture implies that the chemical evolution of the Galactic disk is governed by episodic infall of metal-poor HVC gas that only slowly mixes with the rest of the interstellar medium.
We argue that there is a Galactic fountain in the Milky Way, but that the fountain does not explain the origin of the HVCs. Our analysis of the H I data leads to the detection of a vertical infall of low-velocity gas toward the plane and implies that the H I disk is not in hydrostatic equilibrium. We suggest that the fountain is manifested mainly by relatively local neutral gas with characteristic velocities of 6 km s^-1 rather than 100 km s^-1.
The Local Group infall hypothesis makes a number of testable predictions. The HVCs should have subsolar metallicities. Their Hα emission should be less than that seen from the Magellanic Stream. The clouds should not be seen in absorption against nearby stars. The clouds should be detectable in both emission and absorption around other galaxy groups. We show that current observations are consistent with these predictions and discuss future tests.
https://forums.powershell.org/t/powershell-script-to-run-application-with-commands/6064
# PowerShell script to run application (with commands)
Hi all… I am trying to run an application from C:\Program Files\APP\app.exe with the application's built-in commands. When I run it from the command prompt, I get the result I want, but I would like to use a script that will check other components of the servers along with this one, to avoid running this command manually. I tried both of the scripts below and I am not getting any output… any suggestions, please let me know…
$Output = "C:\Information.txt"
Start-Process -FilePath "C:\Program Files\APP\app.exe" -ArgumentList "query mgmtclass" | Out-File $Output

$Output = "C:\Information.txt"
Start-Process -FilePath "C:\Program Files\APP\app.exe" -PipelineVariable "query mgmtclass" | Out-File $Output
The exe should have a place to capture the install like a log file.
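For what it's worth, here is a sketch of a likely fix (untested, and it assumes app.exe writes its result to standard output). Start-Process does not put the program's output on the pipeline, so piping it to Out-File captures nothing, and -PipelineVariable is not meant for passing arguments. Either invoke the exe directly with the call operator, or let Start-Process redirect stdout to the file:

$Output = "C:\Information.txt"

# Option 1: call the exe directly; its stdout flows down the pipeline
& "C:\Program Files\APP\app.exe" query mgmtclass | Out-File $Output

# Option 2: have Start-Process write stdout to the file itself
Start-Process -FilePath "C:\Program Files\APP\app.exe" -ArgumentList "query mgmtclass" -NoNewWindow -Wait -RedirectStandardOutput $Output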
https://anthony-tan.com/Discriminant-Functions-and-Decision-Boundary/
## Preliminaries#
1. convex definition
2. linear algebra
• vector length
• vector direction
## Discriminant Function in Classification#
The discriminant function, or discriminant model, is the counterpart of the generative model. Here, we have a look at the behavior of the discriminant function in linear classification.1
In the post ‘Least Squares Classification’, we have seen, in a linear classification task, the decision boundary is a line or hyperplane by which we separate two classes. And if our model is based on the decision boundary or, in other words, we separate inputs by a function and a threshold, the model is a discriminant model and the decision boundary is formed by the function and a threshold.
Now, we are going to talk about what the decision boundaries look like in the $$K$$-class problem when $$K=2$$ and when $$K>2$$. To illustrate the boundaries, we only consider a 2D (two-dimensional) input vector $$\mathbf{x}$$, which has only two components.
## Two classes#
The easiest decision boundary comes from 2-dimensional input space which is separated into 2 regions:
whose decision boundary is:
$\mathbf{w}^T\mathbf{x}+w_0=\text{ constant }\tag{1}$
This equation is equal to $$\mathbf{w}^T\mathbf{x}+w_0=0$$ because $$w_0$$ is also a constant, so it can be merged with the r.h.s. constant. Of course, the 1-dimensional input space is easier than 2-dimensional, and its decision boundary is a point.
Let’s go back to the line, and it has the following properties:
1. The vector $$\mathbf{w}$$ always points to a certain region and is perpendicular to the line.
2. $$w_0$$ decides the location of the boundary relative to the origin.
3. The perpendicular distance $$r$$ to the line of a point $$\mathbf{x}$$ can be calculated by $$r=\frac{y(\mathbf{x})}{||\mathbf{w}||}$$ where $$y(\mathbf{x})=\mathbf{w}^T\mathbf{x}+w_0$$
Because these three properties are all basic concepts of a line, we just prove the third point roughly:
proof: Let $$\mathbf{x}_{\perp}$$ be the projection of $$\mathbf{x}$$ on the line.
Using the first property, that $$\mathbf{w}$$ is perpendicular to the line, and the fact that $$\frac{\mathbf{w}}{||\mathbf{w}||}$$ is the unit vector, we write:
$\mathbf{x}=\mathbf{x}_{\perp}+r\frac{\mathbf{w}}{||\mathbf{w}||}\tag{2}$
and we substitute equation (2) to the line function $$y(\mathbf{x})=\mathbf{w}^T\mathbf{x}+w_0$$ :
\begin{aligned} y(\mathbf{x})&=\mathbf{w}^T(\mathbf{x}_{\perp}+r\frac{\mathbf{w}}{||\mathbf{w}||})+w_0\\ &=\mathbf{w}^T\mathbf{x}_{\perp}+\mathbf{w}^Tr\frac{\mathbf{w}}{||\mathbf{w}||}+w_0\\ &=\mathbf{w}^Tr\frac{\mathbf{w}}{||\mathbf{w}||}\\ &=r\frac{||\mathbf{w}||^2}{||\mathbf{w}||}\\ \end{aligned}\tag{3}
Since $$\mathbf{x}_{\perp}$$ lies on the line, $$\mathbf{w}^T\mathbf{x}_{\perp}+w_0=0$$, which is why those two terms vanish in the third step. So we have
$r=\frac{y(\mathbf{x})}{||\mathbf{w}||}\tag{4}$
Q.E.D.
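A minimal numpy sketch of equation (4) (the weights and the point below are made-up values, not from the post):

import numpy as np

# Signed distance of a point x to the boundary y(x) = w^T x + w0 = 0
w = np.array([3.0, 4.0])
w0 = -5.0
x = np.array([2.0, 1.0])

y = w @ x + w0                # y(x) = w^T x + w0
r = y / np.linalg.norm(w)     # equation (4): r = y(x) / ||w||
print(r)                      # 1.0; positive means x lies on the side w points to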
However, augmented vectors $$\mathbf{w}= \begin{bmatrix}w_0&w_1& \cdots&w_d\end{bmatrix}^T$$ and $$\mathbf{x}= \begin{bmatrix}1&x_1& \cdots&x_d\end{bmatrix}^T$$ absorb the $$w_0$$ term of the original boundary equation, so the boundary can instead be treated as a hyperplane passing through the origin of the $$(d+1)$$-dimensional augmented space.
## Multiple Classes#
Things changed when we consider more than 2 classes. Their boundaries become more complicated, and we have 3 different strategies for this problem intuitively:
### 1-versus-the-rest Classifier#
This strategy needs at least $$K-1$$ classifiers(boundaries). Each classifier $$k$$ just decides which side belongs to class $$k$$ and the other side does not belong to $$k$$. So when we have two boundaries, like:
where the region $$R_4$$ is ambiguous: based on the properties of the decision boundary and the definition of classification in the post 'From Linear Regression to Linear Classification', region $$R_4$$ cannot belong to $$\mathcal{C}_1$$ and $$\mathcal{C}_2$$ simultaneously.
So the first strategy can work for some regions, but there are some regions where the input $$\mathbf{x}$$ belongs to more than one class, and some regions where the input $$\mathbf{x}$$ belongs to no class at all (region $$R_3$$ could be such a region).
### 1-versus-1 classifier#
Another kind of multiple-class boundary is the combination of several 1-versus-1 linear decision boundaries. Both sides of a decision boundary belong to a certain class, unlike the 1-versus-the-rest classifier. For a $$K$$-class task, it needs $$K(K-1)/2$$ binary discriminant functions.
However, the contradiction still exists. Region $$R_4$$ belongs to class $$\mathcal{C}_1$$, $$\mathcal{C}_2$$, and $$\mathcal{C}_3$$ simultaneously.
So this is also not good for all situations.
### $$K$$ Linear functions#
We use a set of $$K$$ linear functions: \begin{aligned} y_1(\mathbf{x})&=\mathbf{w}^T_1\mathbf{x}+w_{10}\\ y_2(\mathbf{x})&=\mathbf{w}^T_2\mathbf{x}+w_{20}\\ &\vdots \\ y_K(\mathbf{x})&=\mathbf{w}^T_K\mathbf{x}+w_{K0}\\ \end{aligned}\tag{5}
and an input belongs to $$k$$ when $$y_k(\mathbf{x})>y_j(\mathbf{x})$$ where $$j\in \{1,2,\cdots,K\}$$ that $$j\neq k$$. According to this definition, the decision boundary between class $$k$$ and class $$j$$ is $$y_k(\mathbf{x})=y_j(\mathbf{x})$$ where $$k,j\in\{1,2,\cdots,K\}$$ and $$j\neq k$$. Then a decision hyperplane is defined as:
$(\mathbf{w}_k-\mathbf{w}_j)^T\mathbf{x}+(w_{k0}-w_{j0})=0\tag{6}$
These decision boundaries separate the input space into $$K$$ singly connected, convex regions.
proof: choose two points in the region $$k$$ that $$k\in \{1,2,\cdots,K\}$$. $$\mathbf{x}_A$$ and $$\mathbf{x}_B$$ are two points in the region. An arbitrary point on the line between $$\mathbf{x}_A$$ and $$\mathbf{x}_B$$ can be written as $$\mathbf{x}'=\lambda \mathbf{x}_A + (1-\lambda)\mathbf{x}_B$$ where $$0\leq\lambda\leq1$$. For the linearity of $$y_k(\mathbf{x})$$ we have:
$y_k(\mathbf{x}')=\lambda y_k(\mathbf{x}_A) + (1-\lambda)y_k(\mathbf{x}_B)\tag{7}$
Because $$\mathbf{x}_A$$ and $$\mathbf{x}_B$$ belong to class $$k$$, $$y_k(\mathbf{x}_A)>y_j(\mathbf{x}_A)$$ and $$y_k(\mathbf{x}_B)>y_j(\mathbf{x}_B)$$ where $$j\neq k$$. Then $$y_k(\mathbf{x}')>y_j(\mathbf{x}')$$ and the region of class $$k$$ is convex.
Q.E.D
The last strategy seems good, and what remains is to estimate the parameters of the model. The most famous approaches, which we will study, are: 1. Least squares 2. Fisher's linear discriminant 3. Perceptron algorithm
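To make the $$K$$-linear-functions rule concrete, here is a minimal numpy sketch of equation (5): predict the class whose discriminant $$y_k(\mathbf{x})$$ is largest. The parameters are made-up values for $$K=3$$, $$d=2$$, not from the post.

import numpy as np

# K linear discriminants y_k(x) = w_k^T x + w_k0; predict argmax_k y_k(x)
W = np.array([[ 1.0,  0.0],
              [-1.0,  1.0],
              [ 0.0, -1.0]])     # row k is w_k
w0 = np.array([0.0, 0.5, -0.5])  # intercepts w_k0

def predict(x):
    scores = W @ x + w0          # y_k(x) for every class k
    return int(np.argmax(scores))

print(predict(np.array([2.0, 0.0])))  # 0, i.e. class C_1

Because the prediction depends only on which $$y_k(\mathbf{x})$$ is largest, the regions this rule produces are exactly the singly connected, convex regions proved above.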
## References#
1. Bishop, Christopher M. Pattern recognition and machine learning. springer, 2006.↩︎
https://library.kiwix.org/mathematica.stackexchange.com_en_all_2021-04/A/question/40182.html
## Another difference between Set and SetDelayed. Evaluation shortcut?
A little while ago I wondered why
f[x_] = f[x]
gives an infinite iteration. I ended up discovering the following difference in evaluation between Set and SetDelayed with Evaluate.
count = 0;
ClearAll @ a
a /; (count++ < 20) = {a}
a // OwnValues
count = 0;
a
Output
{{{{{{{{{{{{{{{{{{{{{a}}}}}}}}}}}}}}}}}}}}}
{HoldPattern[a /; count++ < 20] :> {a}}
{a}
and
count = 0;
ClearAll@b
b /; (count++ < 20) := Evaluate@{b}
b // OwnValues
count = 0;
b
Output
{HoldPattern[b /; count++ < 20] :> {b}}
{{{{{{{{{{{{{{{{{{{{b}}}}}}}}}}}}}}}}}}}}
Can somebody explain the difference? Can we say that there is an evaluation shortcut at work here?
Related
This is a follow up question: Strange results of definitions using OwnValues
Why x = x doesn't cause an infinite loop, but f[x_] := f[x] does?
Does Set vs. SetDelayed have any effect after the definition was done?
A nice tool I made says SetDelayed is not called in a usual way in the last example – Jacob Akkerboom – 2014-01-10T16:26:30.320
2Ah, I see now. Thanks @Rojo – Mr.Wizard – 2014-01-10T17:13:40.780
2It's another cache Update[] related mystery – Rojo – 2014-01-10T17:17:04.547
@Rojo I noticed the same thing myself. – Mr.Wizard – 2014-01-10T17:18:20.457
As to the second, there are several symbols that have special ways of being set. Perhaps through UpValues (or perhaps you like upcode better :P). Clearly the XValues are some of those. Perhaps, when they overloaded the SetDelayed versions, they forgot to return Null? – Rojo – 2014-01-10T17:22:38.333
@Rojo Now we are talking ;). I was also thinking that the evaluation of the examples where I set OwnValues involved up code for OwnValues. I am writing a new question about this right now. – Jacob Akkerboom – 2014-01-10T17:27:55.683
I thought I would give a bit more insight into why Update is needed, as pointed out in the other answers. Its documentation says Update may be needed when a change in one symbol changes another via a condition test.
In Jacob's example, setting count = 0 changes the condition test outcome, and thus a or b on the LHS. Consequently, a or b on the RHS is supposed to change. However, RHS a equals the old LHS a, which was undefined because count>=20, and needs Update to be changed. RHS b behaves the same, but was not evaluated in SetDelayed because Evaluate occurs before SetDelayed, so count is unchanged, and RHS b evaluates to LHS b with count<20. If we now reset count=0, evaluating b will return {b}.
To illustrate, I modify the example to separate LHS and RHS. MMA is clever enough to automatically update LHS declared as a variable, so I have to make a function:
count=0;
ClearAll[LHS,RHS];
LHS[]/;(count++<20)={RHS};
RHS=Unevaluated@LHS[];
count=0;
RHS (* Equals LHS[] with count >= 20 *)
(* Tell Wolfram Language about changes affecting RHS which depends on LHS *)
Update@Unevaluated@LHS;
RHS
LHS[]
{{{{{{{{{{{{{{{{{{{{LHS[]}}}}}}}}}}}}}}}}}}}}
Thanks for your answer, this seems to make sense and seems to have the right ingredients, but I will have to look at the details to fully understand and to be able to accept. – Jacob Akkerboom – 2016-08-18T14:39:14.857
@Jacob Thanks for the interesting question. I'm learning stuff I never thought about before. I'm looking at the link on infinite evaluation and trying to piece out the intricacies.. – obsolesced – 2016-08-18T15:28:55.463
Extended comment. Also: If Rojo wants to post an answer, I can delete this
It seems Rojo was right, guessing that it had to do with Update.
count = 0;
ClearAll@a2
a2 /; (Update[Unevaluated@a2]; count++ < 20) = {a2}
a2 // OwnValues
count = 0;
a2
Output
{{{{{{{{{{{{{{{{{{{{{a2}}}}}}}}}}}}}}}}}}}}}
{HoldPattern[a2 /; (Update[Unevaluated[a2]]; count++ < 20)] :> {a2}}
{{{{{{{{{{{{{{{{{{{{a2}}}}}}}}}}}}}}}}}}}}
I think user obsolesced rightly pointed out why there is an additional pair of brackets in the first output. This is because there is already a pair of brackets on the right hand side of Set and {a2} is evaluated rather than a2.
1The number of braces is different because {a2} is output instead of a2. – obsolesced – 2016-08-17T09:39:48.700
Also an extended comment; using Update[] makes the first recursion behave as expected:
count = 0;
ClearAll@a
a /; (count++ < 20) = {a};
count = 0;
Update[]
a
{{{{{{{{{{{{{{{{{{{{a}}}}}}}}}}}}}}}}}}}}
Apparently the LHS condition is affected by the use of Set versus SetDelayed. Certainly worth more exploration, but for me that will have to wait.
http://www.physicsforums.com/showthread.php?p=1757771
## Shorter Stopping Distance for ultralight vehicles?
Quote by ank_gl: short on time; all I am saying is that the weight of the car is not the only deciding factor, the braking capability is too. How fast a tire locks and how heavily it does so determines 'mu'. So force = 'mu' * weight. It's a combination of both.
Newton gives us:
$$F=ma$$
Then we have
$$F=\mu \times mg$$
so
$$\mu \times mg = ma$$
$$\mu \times g = a$$
so to a first order approximation, the acceleration is independent of the mass of the vehicle.
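A quick numeric illustration of this first-order result (the friction coefficient and speed are assumed values, not from the thread):

# Stopping distance from speed v at constant deceleration a = mu * g:
# d = v^2 / (2 * mu * g), with no dependence on vehicle mass
mu = 0.9                 # assumed tire-road friction coefficient
g = 9.81                 # m/s^2
v = 100 * 1000 / 3600    # 100 km/h in m/s
d = v**2 / (2 * mu * g)
print(round(d, 1))       # 43.7 m, the same for any mass at this mu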
Quote by NateTG Newton gives us: $$F=ma$$ Then we have $$F=\mu \times mg$$ so $$\mu \times mg = ma$$ $$\mu \times g = a$$ so to a first order approximation, the acceleration is independent of the mass of the vehicle.
Arg. We're going backwards. For ABS or some kind of non-skidding braking scenario, $$F=\mu \times mg$$, where F is the tire/road force, is not the applicable equation. The thread clearly establishes that for this scenario, stopping distance and vehicle deceleration are dependent on mass in addition to the wheel-brake force.
Quote by mheslep Arg. We're going backwards. For ABS or some kind of non-skidding braking scenario, $$F=\mu \times mg$$ where F is the tire/road force, is not the applicable equation. The thread clearly establishes for this scenario that stopping distance and vehicle deceleration clearly are dependent on mass in addition to the wheel-brake force.
The same arguments work whether the tires are locked or operated at their optimum slip ratio (via ABS or a careful foot). The only difference is that the effective friction coefficients are different in the two cases.
Quote by Stingray The same arguments work whether the tires are locked or operated at their optimum slip ratio (via ABS or a careful foot). The only difference is that the effective friction coefficients are different in the two cases.
The relevant difference is where the work is done: locked tires - work is done on the tires/road surface involving vehicle mass, optimum slip - work is done mostly on the brake pads/disks/wheels and does not involve vehicle mass.
Quote by mheslep The relevant difference is where the work is done: locked tires - work is done on the tires/road surface involving vehicle mass, optimum slip - work is done mostly on the brake pads/disks/wheels and does not involve vehicle mass.
It doesn't matter which components have the most heat transferred to them. ABS (roughly speaking) tries to keep the tires operating at a point where they generate the most longitudinal force. That force is approximately $\mu_s mg$. That's all that's important. This obviously depends on the driver applying enough force on the brake pedal and the various components translating that force to the calipers. Any production car in reasonable working order will be able to reach this limit at least for one stop from highway speeds (repeated tries will eventually overheat things).
Quote by Stingray It doesn't matter which components have the most heat transferred to them...
How do you conclude that? Heat build-up drives the braking power limitations. If one wants more braking power, one needs more thermal mass in the brakes.
In any case, the topic is degree to which vehicle mass may / may not impact stopping distance: mass is a factor in the derivation of stopping distance in ABS / careful foot vehicles.
Quote by mheslep How do you conclude that? Heat build up drives the braking power limitations. If one wants more braking power you need more thermal mass in the brakes. In any case, the topic is degree to which vehicle mass may / may not impact stopping distance: mass is a factor in the derivation of stopping distance in ABS / careful foot vehicles.
The point that has been repeated multiple times is that braking power is not affected by thermal issues for normal vehicles under normal conditions. If you just want to know the shortest distance for a single stop, tires are always the limiting factor.
Overheating is only an issue when considering multiple hard stops from high speeds in a short amount of time. That's what happens when racing or otherwise driving in a very illegal manner on winding roads. I've managed to overheat the brake pads in a couple of cars, but it is honestly very hard to do. It is not what has been discussed here so far, and is not relevant for most road vehicles.
I'm really getting tired of repeating myself in this thread (especially since other people have been saying the same thing). To lowest order, mass is not a factor in a normal ABS stop. If you want to look at it in terms of work, that's proportional to force, which is in turn proportional to mass. The kinetic energy of the car is also proportional to mass, so it cancels out. That's the same for ABS or skidding stops.
Quote by Stingray I'm really getting tired of repeating myself in this thread (especially since other people have been saying the same thing). To lowest order, mass is not a factor in a normal ABS stop. If you want to look at it in terms of work, that's proportional to force, which is in turn proportional to mass. The kinetic energy of the car is also proportional to mass, so it cancels out. That's the same for ABS or skidding stops.
Exactly. Nothing new has been said in this thread in 2 pages.
You know you're saying that a heavier car stops faster than a lighter car.....
Quote by Mech_Engineer: The effects of no ABS can be seen in the graph, where the Lamborghini's braking curve is completely linear all the way from 100 to 0 mph, while the Saleen's fluctuates wildly since the driver has to modulate the pedal to try and make up for the lack of ABS. Even though the Saleen was much faster to 100 mph, it ironically loses the 0-100-0 because the Lamborghini is HEAVIER (more traction available from the same set of tires) and has ABS. The Lamborghini puts down an average of 606 braking hp, versus the Saleen's "paltry" 370 braking hp. So there you have it, a case where being heavier means a shorter stopping distance...
I suspect the engine location in these vehicles also plays a role. In most front-engine cars the braking is proportioned about 70% F, 30% R. By moving the C.O.G. closer to the rear of the car, the rear brakes can actually do something useful and decrease the car's overall chance of skidding.
BTW...some quick and dirty Dynamics
$$N_R=mg(\frac{a-uc}{l})[/itex] [tex]N_F=mg-N_R[/itex] c=height of COG from ground l=wheelbase a=distance between CoG and front axle I stumbled across this thread when trying to learn more about braking. There have been some good things written here, but there has been a lot of total crap as well. Many posters know only enough to be dangerous. The purpose of me creating an account and a post is more for those in the future who (like me) will come across this post. I doubt I will change the mind of many previous posters, but hopefully I will bring some things up they haven’t thought about. Mass – Mass has a significant affect on braking. It does not “cancel out” of equations involving the full braking system. Mass certainly doesn’t create an advantage as Mech Engineer said! Have you ever tried braking in a car loaded with more passengers than seats? Your braking distance obviously INCREASES, and significantly. Tires - Why does the braking distance increase with mass? F=ma. An increase in mass yields a decrease in acceleration given the same force. But doesn’t an increase in the mass of the car increase the force the tire can put on the road? Yes. Does that mean they cancel out? NO. There has been a fatally flawed assumption about this throughout the thread. Many posters have used the old high school physics equation for normal force, F=mu*m*g . This was a basic approximation for something like sliding a block across a desk. It cannot be used for something as complicated as tire compounds and road surfaces. The curve of load vs. traction is NON-linear for a tire. As you increase load on the tire, the grip increases, but less and less with additional load until the tire has reach the max grip and additional load does not increase grip. Because of the shape of the curve, when you take load off of a tire, the grip drops off greater. This phenomenon is illustrated when cornering. When a car is turning, load is transferred from the inside wheels to the outside wheels. The additional grip on the outside tires is LESS than the grip lost by the inside tires, so the overall sum of all four tires deceases with more transfer. Cars with a lower center of gravity have less load transfer and more overall grip. ABS – the antilock braking system tries to prevent tires from skidding because, as was mentioned in the thread, the static grip (tires rotating) is higher than dynamic (skidding). When brakes/wheels lock, the ABS system engages, it releases the brakes allowing the tire to roll, but usually allows the brakes to lock again (and you get the pulsing effect). If a driver was able to apply the brakes at the exact limit of the tires, they would stop shorter than with the ABS system (since the ABS is going over and under the limit). This is probably why the Saleen had a longer stopping distance than the Lambo, the Lambo driver could aggressively stomp on the brakes and hold them there, letting the ABS sorting out the rest. The Saleen driver probably did not have the confidence or skill to effectively brake at absolute limit of the tires. The fact remains that if the driver were able to, the Saleen should have stopped shorter. (assuming same brakes, same tires, and the Saleen with less weight) Recognitions: Gold Member Quote by viperblues450 ...Mass – Mass has a significant affect on braking. It does not “cancel out” of equations involving the full braking system. Mass certainly doesn’t create an advantage as Mech Engineer said! Have you ever tried braking in a car loaded with more passengers than seats? 
Your braking distance obviously INCREASES, and significantly. Tires - Why does the braking distance increase with mass? F=ma. An increase in mass yields a decrease in acceleration given the same force. But doesn’t an increase in the mass of the car increase the force the tire can put on the road? Yes. Does that mean they cancel out? NO. ... As phrased here you are changing the domain of the problem a bit. You make the point that braking changes in the case of a vehicle overload (beyond the design parameters) so that, for instance, the suspension no longer optimally distributes the vehicle load during deceleration. The intent of my OP was to discover whether there is a pay off in braking distance for mass reduction in a given vehicle design, operating inside its design parameters. That is, does vehicle A, mass X have a stopping distance advantage over vehicle B, mass greater than X if both have similar but size appropriate braking systems and tires. Yes absolutely it has an advantage to be lighter. The advantage from the additional weight on the tires is less than the disadvantage from slowing the additional mass. There is no difference between inside and outside the design parameters, the performance is still governered by the same laws of physics. One passenger increases stopping distance by x, two passengers by y (not 2x), three passengers by z, and 40 passengers by even more. Even the weight of the original driver will slightly affect the distance (negligable in reality). You are correct that adding additional weight like passengers that raise the center of gravity with increase load transfer from the rear to the front tires, and with that transfer, the overall tire grip will decrease (due to the tire characteristics I explained before). This is not changing the domain of the problem though, because (in your example) vehicle B, with mass X+Y will have more load transfer than vehcile A with mass X. The suspension NEVER optimally distributes the load under braking because the optimal distribution would be an equal load on all tires. There is nothing inside or outside of design parameters that does this. Also, suspension does not distribute total dynamic load, when a car is accelerating, braking, or turning, the total load transfer is only a funtion of geometry (and total weight), not of the suspension components. What you said though makes me realize that the additional load transfer probably attributes more to the decrease in braking than the nonlinearity tire characteristics. The tires probably behave near the linear region in the longitudinal direction, the nonlinearity shows up much more in the lateral grip. An increase in vehichle weight will increase the size of the brakes needed to max out the tire's grip. Assuming the brakes are never the limiting factor (not usually the case), and assuming the increase in weight will not raise the center or gravity (unlikely unless the weight is place very low), an incease in vehichle weight can cancel itself out. Recognitions: Gold Member Quote by viperblues450 Yes absolutely it has an advantage to be lighter. The advantage from the additional weight on the tires is less than the disadvantage from slowing the additional mass. There is no difference between inside and outside the design parameters, the performance is still governered by the same laws of physics. One passenger increases stopping distance by x, two passengers by y (not 2x), three passengers by z, and 40 passengers by even more. 
Even the weight of the original driver will slightly affect the distance (negligable in reality). So that we don't talk past each other here, can you state your point mathematically? As discussed up thread, the data from various vehicle stopping distances is mixed, it somewhat suggestive that lighter cars have and advantage but its by no means conclusive. You are correct that adding additional weight like passengers that raise the center of gravity with increase load transfer from the rear to the front tires, and with that transfer, the overall tire grip will decrease (due to the tire characteristics I explained before). This is not changing the domain of the problem though, because (in your example) vehicle B, with mass X+Y will have more load transfer than vehcile A with mass X. The suspension NEVER optimally distributes the load under braking because the optimal distribution would be an equal load on all tires. ... I think this confuses optimal with perfect. For instance, the CoG change could be nearly eliminated with a perfectly rigid carriage, but that discards other vehicle desirable characteristics. Blog Entries: 2 Recognitions: Gold Member Science Advisor Viperblues, I have to reply to this post just because you are completely misinterpreting what has been argued over in the past 4 pages. My argument can be summed up as such: Quote by Mech_Engineer ...There isn't any fundamental reason an ultra-light car can stop faster than a heavy one, as long as the brakes and tires on each car are sized appropriately. It could be argued that it is easier and cheaper to make a light car stop quickly, but that's about it (and it's easier and cheaper to do most anything performance-based in a lightweight car). What you're trying to argue is not what this thread is about, period. Quote by viperblues450 Mass – Mass has a significant affect on braking. It does not “cancel out” of equations involving the full braking system. Mass certainly doesn’t create an advantage as Mech Engineer said! Have you ever tried braking in a car loaded with more passengers than seats? Your braking distance obviously INCREASES, and significantly. Your argument is not addressing the fundamental issue that is being argued in this thread. The original poster asked a very simple question- Quote by mheslep In several discussions of these [ultra-light] vehicles I have seen and heard mention of the supposed additional safety benefit of shorter stopping distances, but I have not found any elaboration on why this is so, implying I fear that I missing something obvious. The answer is of course that it takes more than mass to determine how quickly a vehicle can stop. The primary factors that will determine how quickly a vehicle can stop are the friction between the road surface and the tire (tire compound) and the power dissipation capacity of the brakes. This has been repeated over and over for 5 pages now. Adding more mass to a vehicle without changing its braking capacity will of course make it stop more slowly, and that topic has also already been covered in this very thread by me; in the very first page: Quote by Mech_Engineer ...what this proves is that increasing the weight while keeping the same brakes means the vehicle will take longer to stop. This is because brakes have an associated "power rating," which can be thought of in terms of horsepower or watts. 
Since the brakes at maximum clamping force can only convert a specific amount of kinetic energy per second to heat, having more weight means more kinetic energy, which in turn means it takes longer to convert all of the kinetic energy to heat.

That's not what's being argued here. Given two cars that have been designed by two different manufacturers, the lighter one will not automatically be able to stop more quickly than the heavier one. Heavier cars tend to have higher-capacity brakes, and as such they will tend to be able to stop as quickly as lighter cars. This is especially true in sports cars, which I covered in extreme detail.

Quote by viperblues450: Tires - Why does the braking distance increase with mass? F=ma. An increase in mass yields a decrease in acceleration given the same force. But doesn't an increase in the mass of the car increase the force the tire can put on the road? Yes. Does that mean they cancel out? NO.

Actually, it does to a first-order approximation, and the VERY simplified math was presented on page 4 by NateTG.

Quote by NateTG: Newton gives us:

$$F=ma$$

Then we have

$$F=\mu \times mg$$

so

$$\mu \times mg = ma$$

$$\mu \times g = a$$

so, to a first-order approximation, the acceleration is independent of the mass of the vehicle.
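To make the first-order cancellation concrete, here is a minimal Python sketch (not from the thread; the friction coefficient is an illustrative assumption). It models purely friction-limited braking, where the brakes can always saturate the tires' grip:

```python
# Friction-limited braking model: F = mu * m * g, so a = mu * g and the
# stopping distance d = v0^2 / (2 * mu * g) is independent of mass.
MU = 0.8   # assumed tire-road friction coefficient (illustrative)
G = 9.81   # gravitational acceleration, m/s^2

def stopping_distance(v0_mps, mass_kg):
    """Distance to stop from speed v0; mass cancels out of the result."""
    braking_force = MU * mass_kg * G    # grip grows with load...
    decel = braking_force / mass_kg     # ...but so does inertia: a = mu * g
    return v0_mps ** 2 / (2 * decel)

for mass in (1000, 2000):               # light car vs. heavy car, kg
    print(mass, "kg ->", round(stopping_distance(27.8, mass), 1), "m")
# Both masses give ~49.2 m from 100 km/h. Real tires deviate because grip
# grows sub-linearly with load, which is the thread's second-order point.
```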
The dynamics of vehicle braking are indeed quite complex, but your vehement argument is completely missing the point of this thread. Ironically, your argument assumes that a car with more people in it will ALWAYS stop more slowly than one with fewer people in it, which isn't true either.
If a car is carrying 4 people, and the brakes were sized appropriately during the vehicle's design phase to take this extra weight into account (read: the tires can still lock up in a full stop and the ABS system engages), the car will stop very nearly as quickly as the same car with only one person in it. Any difference in stopping distance will not have to do with the increased weight; it will instead probably be due to minute shifts in the vehicle's center of gravity or weight distribution. If we assume the vehicle's brakes can always lock the tires (properly sized brakes for the vehicle's estimated operating weight, and the vehicle is not overloaded), the extra momentum from any extra weight is offset by the fact that there is more frictional force available to decelerate that extra weight as well.
|
2013-05-22 08:02:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5372375845909119, "perplexity": 914.1141728654112}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00041-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://math.stackexchange.com/questions/2136926/subway-graphs
|
# Subway & Graphs
In the city there is a subway. You can get from any station to any other one. How can I prove that we can close one of the stations (we may pick which one; trains will not be able to pass through it) and still get from any station to any other one?
• It isn't generally true. If the stations were, say, arranged in a line, then removing one in the middle disconnects the graph. – lulu Feb 9 '17 at 17:12
• We can choose the bottom one. – idliketodothis Feb 9 '17 at 17:15
• Oh, you get to pick which one to close? That's very different. I suggest editing the post to make that clear. – lulu Feb 9 '17 at 17:17
## 1 Answer
We want to show that there is some stop that can be removed without disconnecting the graph. To do it, choose a stop $s_0$ at random. Then, for a stop $s$, define $d(s)$ to be the length of the shortest path (in terms of the number of stops) from $s_0$ to $s$. Now let $s^*$ be a stop such that $d(s^*)$ is maximal. We claim that you can always remove $s^*$ without disconnecting the graph.

To see this, note that, for $s\neq s^*$, the shortest path from $s_0$ to $s$ cannot go through $s^*$, or it would have length greater than the maximum. Thus, after deleting $s^*$, there is still a path from $s_0$ to $s$. As any stop can reach $s_0$, any stop can reach any other and we are done.

Remark: this shows that there are at least two stops which can be deleted without disconnecting the graph (well, assuming there are at least two stops on the map, anyway). To see that, run through the method once to yield $s^*$, then do it again starting from $s^*$. Considering the case where the stops are arranged in a line, we see that this result cannot, in general, be improved.
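To illustrate the argument (this sketch is not part of the original answer; it assumes an undirected adjacency-list graph), one can BFS from an arbitrary stop, delete a farthest stop, and check that the remaining stops stay connected:

```python
from collections import deque

def bfs_distances(graph, s0, skip=None):
    """Shortest-path distances (in stops) from s0, optionally skipping one node."""
    dist = {s0: 0}
    q = deque([s0])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v != skip and v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def removable_stop(graph, s0):
    dist = bfs_distances(graph, s0)
    s_star = max(dist, key=dist.get)            # a farthest stop from s0
    reach = bfs_distances(graph, s0, skip=s_star)
    assert set(reach) == set(graph) - {s_star}  # still connected without s_star
    return s_star

line = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}   # stops arranged in a line
print(removable_stop(line, 1))                  # prints 4, an endpoint
```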
• I thought the remark showed that this procedure can be applied repeatedly, without compromising the connectedness of the graph obtained by removing the maximum distance nodes selected at each iteration. So, I'm not sure I understand in which sense the result cannot be improved. (Of course, one cannot remove a node from an empty graph, but you covered that.) – Fabio Somenzi Feb 11 '17 at 21:52
• @FabioSomenzi Right. It's clear you can iterate this till the graph is empty, or a point. – BrianO Feb 11 '17 at 21:56
• @FabioSomenzi The remark was intended to demonstrate that in any connected graph with at least two nodes there are at least two points which can be deleted without disconnecting the graph. That is optimal, as in a straight line graph there are exactly two. I was not speaking about iterating the process on smaller and smaller graphs. – lulu Feb 11 '17 at 22:07
• @lulu Thanks for the detailed response. Here's how I see it. The theorem can equivalently be stated as follows: Every connected graph with $n$ nodes contains connected subgraphs with $m$ nodes for all $m$ such that $0 < m \leq n$. In the case of five nodes in a straight line from left to right, I can remove the two leftmost and the two rightmost nodes and be left with a connected graph. The proof relies on nodes at maximum distance from $s_0$, of which there may be just one. For the line graph no more than two, but for some other graphs, up to $n-1$. – Fabio Somenzi Feb 12 '17 at 0:16
• @FabioSomenzi Yes. The argument can, I think, be tightened slightly to show that the two guaranteed points can be removed simultaneously. – lulu Feb 12 '17 at 10:55
|
2019-06-16 20:45:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7352995276451111, "perplexity": 221.7928096294749}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998298.91/warc/CC-MAIN-20190616202813-20190616224813-00422.warc.gz"}
|
http://www.gradesaver.com/textbooks/math/other-math/basic-college-mathematics-9th-edition/chapter-3-adding-and-subtracting-fractions-3-3-adding-and-subtracting-unlike-fractions-3-3-exercises-page-221/40
|
## Basic College Mathematics (9th Edition)
$\displaystyle \frac{3}{16}$ of herd left to vaccinate
To find the fraction left to vaccinate, we subtract the amounts already vaccinated ($3/16$ and $1/4$) from the amount that needs to be vaccinated ($5/8$), using the least common denominator, which is 16:

$\displaystyle \frac{5}{8}-\frac{3}{16}-\frac{1}{4} = \frac{5\times 2}{8\times 2}-\frac{3}{16}-\frac{1\times 4}{4\times 4} = \frac{10}{16}-\frac{3}{16}-\frac{4}{16} = \frac{10-3-4}{16} = \frac{3}{16}$ of herd left to vaccinate
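The same subtraction can be double-checked with Python's exact rational arithmetic (an illustrative check, not part of the textbook solution):

```python
from fractions import Fraction

left = Fraction(5, 8) - Fraction(3, 16) - Fraction(1, 4)
print(left)  # 3/16 of the herd left to vaccinate
```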
|
2018-04-19 16:16:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7808918356895447, "perplexity": 1652.059327705466}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125936981.24/warc/CC-MAIN-20180419150012-20180419170012-00267.warc.gz"}
|
https://brilliant.org/discussions/thread/one-dimensional-kinematics-motion-along-a-straight/
|
# One-Dimensional Kinematics: Motion Along a Straight Line
The First Step: Choosing Coordinates
Before beginning a problem in kinematics, you must set up your coordinate system. In one-dimensional kinematics, this is simply an x-axis and the direction of the motion is usually the positive-x direction. Though displacement, velocity, and acceleration are all vector quantities, in the one-dimensional case they can all be treated as scalar quantities with positive or negative values to indicate their direction.
The positive and negative values of these quantities are determined by the choice of how you align the coordinate system.
Velocity:
Velocity represents the rate of change of displacement over a given amount of time. Displacement in one dimension is generally represented in terms of a starting position x1 and an ending position x2. The times at which the object in question is at each point are denoted t1 and t2 (always assuming that t2 is later than t1, since time only proceeds one way). The change in a quantity from one point to another is generally indicated with the Greek letter delta, Δ.
Using these notations, it is possible to determine the average velocity (vav) in the following manner:
vav = (x2 - x1) / (t2 - t1) = Δx/Δt

If you apply a limit as Δt approaches 0, you obtain an instantaneous velocity at a specific point in the path. Such a limit in calculus is the derivative of x with respect to t, or dx/dt.

Acceleration:
Acceleration represents the rate of change in velocity over time. Using the terminology introduced earlier, we see that the average acceleration (aav) is:

aav = (v2 - v1) / (t2 - t1) = Δv/Δt
Again, we can apply a limit as Δt approaches 0 to obtain an instantaneous acceleration at a specific point in the path. The calculus representation is the derivative of v with respect to t, or dv/dt. Similarly, since v is the derivative of x, the instantaneous acceleration is the second derivative of x with respect to t, or d²x/dt².

Constant Acceleration:
In several cases, such as the Earth's gravitational field, the acceleration may be constant - in other words the velocity changes at the same rate throughout the motion. Using our earlier work, set the time at 0 and the end time as t (picture starting a stopwatch at 0 and ending it at the time of interest). The velocity at time 0 is v0 and at time t is v, yielding the following two equations:
a = (v - v0) / (t - 0)

v = v0 + at
Applying the earlier equations for vav for x0 at time 0 and x at time t, and applying some manipulations (which I will not prove here), we get:

x = x0 + v0t + 0.5at²

v² = v0² + 2a(x - x0)
x - x0 = (v0 + v)t / 2
The above equations of motion with constant acceleration can be used to solve any kinematic problem involving motion of a particle on a straight line with constant acceleration.
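As an illustrative sketch (with made-up numbers, not from the original post), the constant-acceleration equations translate directly into code:

```python
def motion(x0, v0, a, t):
    """Position and velocity at time t under constant acceleration."""
    x = x0 + v0 * t + 0.5 * a * t**2   # x = x0 + v0*t + 0.5*a*t^2
    v = v0 + a * t                     # v = v0 + a*t
    return x, v

x, v = motion(x0=0.0, v0=5.0, a=2.0, t=3.0)
print(x, v)                            # 24.0 m, 11.0 m/s

# Cross-check with the time-free equation v^2 = v0^2 + 2a(x - x0):
assert abs(v**2 - (5.0**2 + 2 * 2.0 * (x - 0.0))) < 1e-9
```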
|
2018-09-18 18:29:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9557250142097473, "perplexity": 756.3489671913135}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267155634.45/warc/CC-MAIN-20180918170042-20180918190042-00351.warc.gz"}
|
https://bathmash.github.io/HELM/6_5_modelling_exercises-web/6_5_modelling_exercises-webse2.html
|
2 Linearisation of exponential functions
This subsection relates to the description of log-linear plots covered in Section 6.6.
Frequently in engineering, the question arises of how the parameters of an exponential function might be found from given data. The method follows from the fact that it is possible to ‘undo’ the exponential function and obtain a linear function by means of the logarithmic function. Before showing the implications of this method, it may be necessary to remind you of some rules for manipulating logarithms and exponentials. These are summarised in Table 1 below, which exactly matches the general list provided in Key Point 8 in Section 6.3 (page 22).

Table 1: Rules for manipulating base $e$ logarithms and exponentials

| No. | Rule | No. | Rule |
| --- | --- | --- | --- |
| 1a | $\ln(xy)=\ln(x)+\ln(y)$ | 1b | $e^x \times e^y=e^{x+y}$ |
| 2a | $\ln(x/y)=\ln(x)-\ln(y)$ | 2b | $e^x / e^y=e^{x-y}$ |
| 3a | $\ln(x^y)=y\ln(x)$ | 3b | $(e^x)^y=e^{xy}$ |
| 4a | $\ln(e^x)=x$ | 4b | $e^{\ln(x)}=x$ |
| 5a | $\ln(e)=1$ | 5b | $e^1=e$ |
| 6a | $\ln(1)=0$ | 6b | $e^0=1$ |
We will try ‘undoing’ the exponential in the particular example

$P = 12e^{0.1t}$

We take the natural logarithm ($\ln$) of both sides, which means logarithm to the base $e$. So

$\ln(P) = \ln(12e^{0.1t})$

The result of using Rule 1a in Table 1 is

$\ln(P) = \ln(12) + \ln(e^{0.1t}).$

The natural logarithmic function ‘undoes’ the exponential function, so by Rule 4a,

$\ln(e^{0.1t}) = 0.1t$

and the original equation for $P$ becomes

$\ln(P) = \ln(12) + 0.1t.$

Compare this with the general form of a linear function $y = ax + b$:

$\ln(P) = 0.1t + \ln(12)$

If we regard $\ln(P)$ as equivalent to $y$, 0.1 as equivalent to the constant $a$, $t$ as equivalent to $x$, and $\ln(12)$ as equivalent to the constant $b$, then we can identify a linear relationship between $\ln(P)$ and $t$. A plot of $\ln(P)$ against $t$ should result in a straight line, of slope 0.1, which crosses the $\ln(P)$ axis at $\ln(12)$. (Such a plot is called a log-linear or log-lin plot.) This is not particularly interesting here because we know the values 12 and 0.1 already.
Suppose, though, we want to try using the general form of the exponential function

$P = ae^{bt} \qquad (c \le t \le d)$

to create a continuous model for a population for which we have some discrete data. The first thing to do is to take logarithms of both sides:

$\ln(P) = \ln(ae^{bt}) \qquad (c \le t \le d).$

Rule 1a from Table 1 then gives

$\ln(P) = \ln(a) + \ln(e^{bt}) \qquad (c \le t \le d).$

But, by Rule 4a, $\ln(e^{bt}) = bt$, so this means that

$\ln(P) = \ln(a) + bt \qquad (c \le t \le d).$

So, given some ‘population versus time’ data which you believe can be modelled by some version of the exponential function, plot the natural logarithm of population against time. If the exponential function is appropriate, the resulting data points should lie on or near a straight line. The slope of the straight line will give an estimate for $b$, and the intercept with the $\ln(P)$ axis will give an estimate for $\ln(a)$. You will have carried out a logarithmic transformation of the original data for $P$. We say the original variation has been linearised.
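As an illustrative sketch of this procedure (with synthetic data; the true parameter values 12 and 0.1 echo the earlier example), a least-squares line fitted to $(t, \ln P)$ recovers an estimate of $b$ from the slope and of $a$ from the exponential of the intercept:

```python
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true = 12.0, 0.1
t = np.linspace(0, 20, 15)
P = a_true * np.exp(b_true * t) * rng.normal(1.0, 0.02, t.size)  # noisy data

b_est, ln_a_est = np.polyfit(t, np.log(P), 1)  # slope and intercept of ln(P) vs t
print(b_est, np.exp(ln_a_est))                 # close to 0.1 and 12
```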
A similar procedure will also work if an exponential function with a base other than $e$ is used. For example, suppose that we try to use the function

$P = A \times 2^{Bt} \qquad (C \le t \le D),$

where $A$ and $B$ are constant parameters to be derived from the given data. We can take natural logarithms again to give

$\ln(P) = \ln(A \times 2^{Bt}) \qquad (C \le t \le D).$

Rule 1a from Table 1 then gives

$\ln(P) = \ln(A) + \ln(2^{Bt}) \qquad (C \le t \le D).$

Rule 3a then gives

$\ln(2^{Bt}) = Bt\ln(2) = B\ln(2)\,t$

and so

$\ln(P) = \ln(A) + B\ln(2)\,t \qquad (C \le t \le D).$

Again we have a straight-line graph with the same form of intercept as before, $\ln(A)$, but this time with slope $B\ln(2)$.
The amount of money $£M$ to which $£1$ grows after earning interest of 5% p.a. for $N$ years is worked out as

$M = 1.05^N$

Find a linearised form of this equation.

Take natural logarithms of both sides:

$\ln(M) = \ln(1.05^N).$

Rule 3a gives

$\ln(M) = N\ln(1.05).$

So a plot of $\ln(M)$ against $N$ would be a straight line passing through $(0, 0)$ with slope $\ln(1.05)$.
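A quick numeric check (illustrative only) confirms that $\ln(M)$ grows linearly in $N$ with slope $\ln(1.05)$:

```python
import math

for N in (1, 10, 20):
    M = 1.05 ** N
    print(N, math.log(M), N * math.log(1.05))  # the last two columns agree
```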
The linearisation procedure also works if logarithms other than natural logarithms are used. We start again with

$P = A \times 2^{Bt} \qquad (C \le t \le D)$

and will take logarithms to base 10 instead of natural logarithms. Table 2 presents the laws of logarithms and indices (based on Key Point 8, page 22) interpreted for $\log_{10}$.
Table 2: Rules for manipulating base 10 logarithms and exponentials

| No. | Rule | No. | Rule |
| --- | --- | --- | --- |
| 1a | $\log_{10}(AB)=\log_{10}A+\log_{10}B$ | 1b | $10^A\, 10^B=10^{A+B}$ |
| 2a | $\log_{10}(A/B)=\log_{10}A-\log_{10}B$ | 2b | $10^A/10^B=10^{A-B}$ |
| 3a | $\log_{10}(A^k)=k\log_{10}A$ | 3b | $(10^A)^k=10^{kA}$ |
| 4a | $\log_{10}(10^A)=A$ | 4b | $10^{\log_{10}A}=A$ |
| 5a | $\log_{10}10=1$ | 5b | $10^1=10$ |
| 6a | $\log_{10}1=0$ | 6b | $10^0=1$ |
Taking logs of $P = A \times 2^{Bt}$ gives:

$\log_{10}(P) = \log_{10}(A \times 2^{Bt}) \qquad (C \le t \le D).$

Rule 1a from Table 2 then gives

$\log_{10}(P) = \log_{10}(A) + \log_{10}(2^{Bt}) \qquad (C \le t \le D).$

Use of Rule 3a gives the result

$\log_{10}(P) = \log_{10}(A) + B\log_{10}(2)\,t \qquad (C \le t \le D).$
1. Write down the straight-line function corresponding to taking logarithms of the general exponential function

   $P = ae^{bt} \qquad (c \le t \le d)$

   by taking logarithms to base 10.

2. Write down the slope of this line.

Answers:

1. $\log_{10}(P) = \log_{10}(a) + (b\log_{10}(e))\,t \qquad (c \le t \le d)$

2. $b\log_{10}(e)$
It is not usually necessary to declare the subscript 10 when indicating logarithms to base 10. If you meet the term ‘log’ it will probably imply “to the base 10”. In the remainder of this Section, the subscript 10 is dropped where ${log}_{10}$ is implied.
|
2022-11-29 00:16:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 87, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8727476000785828, "perplexity": 412.97937942408987}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710684.84/warc/CC-MAIN-20221128235805-20221129025805-00471.warc.gz"}
|
https://physics.stackexchange.com/questions/457652/how-does-pauli-exclusion-principle-work-with-lattice-qcd
|
# How does Pauli exclusion principle work with lattice QCD?
Say your space-time lattice has 30 fermions on the vertices. (Would they have to form a path?) Swapping any two fermions (on the same row??) should make the amplitude of the lattice state negative. How does this work in lattice QCD? Let (X, T) be the lattice points. If I put fermions at a=(0,0), b=(10,0), c=(0,1) and d=(10,1), that gives two fermion paths. By the time I have made all possible swaps, I get zero for the total amplitude.
I can see how it would work for Feynman graphs since you would calculate the probability as $$|\Delta_t(a,b)\Delta_t(c,d)-\Delta_t(a,c)\Delta_t(b,d)|^2$$ but that works because the path lengths are different. I'm not sure how it works for lattice QCD.
Do you have to make paths of fermions thorough the lattice and number the paths?
The $$-1$$ factor for closed fermion loops, together with the combinatorics of the way paths connect, makes any lattice worldline configuration with more than one fermion line on a link add to zero. The figure below gives an example.
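As a toy illustration only (this is ordinary antisymmetrization, not lattice QCD proper), the asker's two-fermion amplitude $$\Delta(a,b)\Delta(c,d)-\Delta(a,c)\Delta(b,d)$$ is the determinant of a 2×2 matrix of single-particle propagators, and a determinant with two equal columns vanishes, which is the Pauli mechanism at work; the propagator below is an arbitrary placeholder:

```python
import numpy as np

def delta(x, y):
    """Hypothetical single-particle propagator between points x and y."""
    return np.exp(-abs(x - y))

def two_fermion_amplitude(sources, sinks):
    # Antisymmetrized amplitude = det of the propagator matrix (Slater-style).
    M = np.array([[delta(s, f) for f in sinks] for s in sources])
    return np.linalg.det(M)

print(two_fermion_amplitude([0.0, 10.0], [1.0, 11.0]))  # generic case: nonzero
print(two_fermion_amplitude([0.0, 10.0], [5.0, 5.0]))   # shared sink: exactly 0
```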
• Thanks, I'll have a look at your book! Yes, I tried working it out from the Grassmann integral $\int \exp(i\sum_i\overline{\psi}_i\gamma (\psi_i-\psi_j))\,\overline{\psi}_a \psi_b\, D\psi\, D\overline{\psi}$ over a lattice. I can see it would give you paths, in order to get a term with a Grassmann variable from each vertex. I guess the thing about anti-symmetric wave-functions is a consequence but not really needed. – zooby Jan 30 '19 at 1:23
|
2020-02-18 12:21:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8645440936088562, "perplexity": 285.5763334230683}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143695.67/warc/CC-MAIN-20200218120100-20200218150100-00517.warc.gz"}
|
http://mathoverflow.net/questions/164947/where-can-i-find-gonthiers-coq-code-proving-the-four-color-theorem
|
# Where can I find Gonthier's Coq code proving the four color theorem?
In a 2008 article in the Notices, Georges Gonthier announced a computer-checked proof of the four color theorem using Coq:
Gonthier, Georges. Formal proof—the four-color theorem. Notices Amer. Math. Soc. 55 (2008), no. 11, 1382–1393. PDF
Unfortunately, the article does not seem to provide a link to the Coq code. Where can I find the code? I'm not having any luck with Google.
@darijgrinberg: Thanks, maybe that should be posted as an answer. It's annoying that it's in a proprietary Microsoft packaging format, but given that Gonthier works for Microsoft I guess maybe I shouldn't be surprised... – Nate Eldredge May 1 at 23:30
@DavidRoberts: Each file says at the top (c) Copyright Microsoft Corporation and Inria. All rights reserved. I didn't see any further license information. But that makes it look like it's probably not allowed. :-( – Nate Eldredge May 2 at 3:07
|
2014-10-30 23:55:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2802419066429138, "perplexity": 1042.6810045753668}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637898842.15/warc/CC-MAIN-20141030025818-00001-ip-10-16-133-185.ec2.internal.warc.gz"}
|
https://www.nature.com/articles/s41533-021-00251-x?error=cookies_not_supported&code=7b50c696-d2ce-4fba-91bc-25520f9d2346
|
# Influenza colloidal gold method and blood routine tests combination for rapid diagnosis of influenza: a decision tree-based analysis
## Abstract
Rapid influenza diagnosis can facilitate targeted treatment and reduce antibiotic misuse. However, diagnosis efficacy remains unclear. This study examined the efficacy of a colloidal gold rapid test for rapid influenza diagnosis. Clinical characteristics of 520 patients with influenza-like illness presenting at a fever outpatient clinic during two influenza seasons (2017–2018; 2018–2019) were evaluated. The clinical manifestations and results of routine blood, colloidal gold, and nucleic acid tests were used to construct a decision tree with three layers, nine nodes, and five terminal nodes. The combined positive predictive value of a positive colloidal gold test result and monocyte level within 10.95–12.55% was 88.2%. The combined negative predictive value of a negative colloidal gold test result and white blood cell count > 9.075 × 109/L was 84.9%. The decision-tree model showed the satisfactory accuracy of an early influenza diagnosis based on colloidal gold and routine blood test results.
## Introduction
Seasonal influenza is an acute viral infection that affects people of all age groups worldwide. According to World Health Organization (WHO) estimates, influenza viruses infect between 5 and 15% of the global population, causing an estimated 3–5 million severe cases and up to 650,000 respiratory deaths a year1,2. There is no influenza-specific treatment, although antiviral drugs and supportive treatments are used to alleviate discomfort, shorten the disease course, and reduce the mortality risk. However, in some countries and regions, the lack of accurate and rapid influenza diagnostics has resulted in widespread misuse of antibiotics administered to patients while awaiting an influenza diagnosis.
Craddock et al.3 reported that 17.2 and 25.4% of viral acute upper respiratory tract infections (AURTI) were inappropriately treated with antibiotics at urban internal medicine (IM) and family medicine (FM) ambulatory care clinics, respectively. A previous study reported that, among 6136 patients with acute respiratory infections (ARIs), 2522 (41%) had diagnoses for which antibiotics are not indicated; moreover, 2106 (84%) patients were diagnosed as having a viral upper respiratory tract infection or bronchitis4. Another study showed that 40% of upper respiratory tract infections (URTI) were treated with antibiotics in the post-pandemic influenza period5. Furthermore, antibiotic prescriptions from deidentified administrative claims data reached 9.8 million in 2013–2015, and antibiotics prescribed to 3.9 million insurance beneficiaries were used mainly for respiratory tract infections, such as the common cold and seasonal flu6. This evidence suggests that the ability to distinguish cases of influenza from influenza-like diseases can improve the efficacy of aggressive treatment approaches and reduce antibiotic misuse.
Eboigbodin et al.7 showed that the present rapid influenza diagnostic methods, including onsite rapid nucleic acid tests, allow prompt treatment of patients and limit the inappropriate use of antibiotics and antiviral drugs, although they are expensive. The combined use of a conventional colloidal gold test with routine blood tests and the clinical symptoms has some efficacy in influenza diagnosis despite a low overall performance8. Antibiotic misuse is a global problem, and regional antibiotic misuse may increase the prevalence of drug-resistant bacteria contributing to the global risk of infectious diseases9. Considering these challenges, developing an effective and economical rapid influenza diagnosis method is of paramount importance.
Studies have previously used statistical methods to construct an accurate disease, diagnostic model. These proposed models were based on patient characteristics and intended to leverage known markers for the diagnosis. The Classification Regression Tree (CRT)-based decision tree models are considered representative of clinical diagnosis, and their conclusions are easy to communicate and applicable in clinical practice. Recently, decision tree-based diagnostic models have been increasingly used in various disciplines. In influenza-related research, nucleic acid test results are used as a diagnostic gold standard reference for colloidal gold tests, with the clinical presentation and results of routine blood tests. The primary objective of this study was to establish a rapid diagnostic model for influenza patients based on colloid gold tests and clinical characteristics.
## Results
Table 1 presents the clinical presentation characteristics of 520 patients with influenza-like illness.
### Correlation between clinical presentation and nucleic acid test results
A total of 520 patients were divided into two groups based on the results of the nucleic acid test: 271 patients with positive results and 249 patients with negative results. Among the 271 patients with positive nucleic acid test results, there were 116 cases of H1N1 (2009) influenza A, 55 cases of seasonal influenza H3, 69 cases of Yamagata (BY) influenza B, and 31 cases of Victoria (BV) influenza B. Sex and age were not associated with nucleic acid test results. Patients with a positive nucleic acid test result had a slightly longer disease course (t = 2.429, P = 0.015). The average body temperature of the two groups was similar (38.75 ± 0.52 vs. 38.67 ± 0.54, t = 1.656, P = 0.098). The positive group had a higher proportion of patients with cough, runny nose, and chills than the negative group (χ2 = 7.540, 10.412, and 8.035, respectively; P = 0.006, 0.001, and 0.005, respectively). In the positive group, a higher number of patients underwent a colloidal gold test than in the negative group (χ2 = 14.848, P < 0.001). Furthermore, patients in the positive group had a lower WBC count and neutrophil percentage, and higher lymphocyte and monocyte percentages, than patients in the negative group (t = −4.256, −2.140, 2.004, and 2.836, respectively; P < 0.001, 0.033, 0.046, and 0.005, respectively; Table 2).
### Diagnostic performance of clinical characteristics associated with a definitive diagnosis of influenza
Comparison of the nucleic acid test positive and negative groups showed differences in the clinical characteristics associated with confirmed influenza, mainly disease course, cough and expectoration, runny nose, chills, the colloidal gold test result, and routine blood test values (white blood cell count, neutrophil %, lymphocyte %, and monocyte %). Further analysis showed the diagnostic performance of these clinical characteristics for a confirmed diagnosis (Table 3).
### Decision tree-based examination of possible paths for further influenza screening
A positive nucleic acid test result was used as the gold standard for the diagnosis of influenza, and the colloidal gold test, clinical presentation, and routine blood test results were used as independent variables. The CRT algorithm was used to construct a decision tree that had three layers, nine nodes, and five terminal nodes (Fig. 1). There were two main decision cues. Patients with influenza-like illness (n = 520) were classified according to the value of the colloidal gold test node. Of the 236 patients with a negative colloidal gold test result, 165 had a negative nucleic acid test result, yielding a negative predictive value (NPV) of 69.9% for the colloidal gold method. Patients with a negative colloidal gold test result were further classified based on their WBC count: of the 105 patients with WBC count > 9.075 × 109/L, 87 had a negative nucleic acid test result, implying that the NPV of this characteristic was 82.9%, which made it a decision cue candidate. The NPV of the colloidal gold method with WBC count > 9.075 × 109/L was 82.9%.
Subsequently, the 520 patients with influenza-like illness were classified according to the colloidal gold test results, and the 284 patients with a positive colloidal gold test result were further classified according to the monocyte percentage. Thus, 140 out of 182 patients with a monocyte percentage > 10.95% had positive nucleic acid test results; the positive predictive value (PPV) was 76.9%. Moreover, when a monocyte percentage < 12.55% was used to classify these 140 patients further, 62 out of the 69 patients who met this criterion had a positive nucleic acid test result, and the PPV was 89.9%. These results indicate that these factors are potential candidates for decision cues. The PPV of the colloidal gold method with a monocyte percentage within 10.95–12.55% was 89.9%. These factors are important contributors toward a definitive diagnosis of influenza.
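The predictive values quoted above can be recomputed directly from the counts given in this section (an illustrative check, not part of the original analysis):

```python
def ppv(tp, fp):  # positive predictive value = TP / (TP + FP)
    return tp / (tp + fp)

def npv(tn, fn):  # negative predictive value = TN / (TN + FN)
    return tn / (tn + fn)

print(round(npv(165, 236 - 165), 3))  # gold test negative: 0.699
print(round(npv(87, 105 - 87), 3))    # ...and WBC > 9.075e9/L: 0.829
print(round(ppv(140, 182 - 140), 3))  # gold positive, monocytes > 10.95%: 0.769
print(round(ppv(62, 69 - 62), 3))     # ...and monocytes < 12.55%: 0.899
```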
### Verification of influenza screening pathway in the decision tree
In the 2019–2020 influenza season, 156 influenza-like illness cases were clinically diagnosed according to the decision tree before the nucleic acid test of pharyngeal swabs was performed. The nucleic acid test results of the 156 cases were as follows: 75 were positive (53 cases of influenza A H3, 6 cases of 2009 H11 influenza A, 16 cases of influenza B Victoria) and 81 were negative. According to the screening path of the decision tree analysis, the NPV of the node 2 colloidal gold test was 68.6%, and that of the node 6 colloidal gold test with negative results combined with WBC > 9.075 × 109/L increased to 81.0%. The PPV of the node 4 colloidal gold test combined with mononuclear cells > 10.95% was 88.6%, while that of the node 7 colloidal gold test combined with mononuclear cells within 10.95–12.55% increased to 90.9%. These results were consistent with those of the decision tree (Table 4).
## Discussion
This study included patients from the 2017–2018 and 2018–2019 influenza seasons and used data from the 2019–2020 influenza season to verify the results. The results of the decision tree analysis revealed that clinical symptoms did not determine the diagnosis of influenza. However, differentiating the clinical presentation of influenza from that of influenza-like disease is challenging. Therefore, the use of the colloidal gold test and routine blood tests, including WBC and monocyte count are key factors for an accurate influenza diagnosis. This study used decision tree analysis to show that a colloidal gold test and routine blood tests can increase the accuracy of the rapid diagnosis of influenza. The PPV of a positive result on the colloidal gold test combined with a monocyte percentage within 10.95–12.55% was 88.2%. The NPV of the negative result on the colloidal gold test combined with a WBC count > 9.075 × 109/L was 84.9%. These findings suggest that a decision tree model can significantly increase the PPV and NPV for the rapid influenza diagnosis.
In this study, sore throat, cough, dizziness and headache, and myalgia and arthralgia were the four most common symptoms despite an incidence of only 53–67%, which suggested that many patients with the influenza-like disease do not have a typical respiratory tract or systemic symptoms. In this study, there were significant differences in the prevalence of cough, productive cough, runny nose, and chills between the positive and negative nucleic acid test result groups (P < 0.05). However, further analysis has shown that their diagnostic performance was low, with the PPV and NPV in the range of 56.8–62.6% and 50.7–55.6%, respectively. Clinical symptoms were not included in the final decision tree, suggesting that they are insufficient to differentiate between influenza and non-influenza disease in clinical practice.
The influenza colloidal gold test is widely used by many major medical institutions for influenza screening because it is rapid (results within 15–20 min), easy to use, does not require any special equipment, can be performed at the bedside, and all medical staff can be trained in its use. However, its sensitivity varies. In this study, the sensitivity, specificity, PPV, and NPV of the colloidal gold influenza A test was 52.9%, 74.2%, 72.4%, and 55.2%, respectively, and the corresponding values for the colloidal gold influenza B test were 20.3%, 92.5%, 77.8%, and 47.7%, respectively, which were consistent with those reported in a meta-analysis by Chartrand et al.10. This meta-analysis included 159 studies and evaluated 26 rapid influenza tests, and found that the sensitivity and specificity of rapid tests was 62.3% (confidence interval [CI] 57.9–66.6%) and 98.2% (CI 97.5–98.7%), respectively. In addition, the overall sensitivity of these tests in adults was 53.9% (CI 47.9–59.8%); for influenza A and B, the sensitivity values were 64.6% (CI 59.0–70.1%) and 52.2% (CI 45.0–59.3%), respectively. This shows that the colloidal gold method alone is insufficient to discriminate between influenza and influenza-like disease. Studies have shown that the PPV range for a rapid influenza test kit is 30–80%11,12. Moreover, Eggers et al.13 reported that the sensitivity and NPV of a rapid influenza test were 40–50% and 55–56%, respectively.
In this study, the proportion of patients who underwent routine blood tests, colloidal gold tests, and chest X-rays was 94%, 82%, and 16%, respectively. This shows that, at the early stages of the disease, routine blood tests and influenza colloidal gold tests are commonly used. In this study, influenza patients with a positive nucleic acid test result had a lower WBC count and neutrophil percentage than the patients with a negative nucleic acid test result, whose lymphocyte and monocyte percentage were higher, and these differences are statistically significant (P < 0.05). A WBC count < 9.075 × 109/L emerged as an important cutoff point. Among patients with a positive nucleic acid test result, 10% of patients had elevated WBC count and 28% had an elevated neutrophil count. It is plausible that some of the influenza patients included in this study had a concomitant bacterial infection.
The decision tree model in our study found that the monocyte percentage range is relevant to rapid influenza diagnosis. Patients with a monocyte percentage within 10.95–12.55% were more likely to have influenza, whereas the PPV of this parameter decreased at a monocyte percentage >12.55%. This finding can be explained given the growing understanding of the immune function of monocytes. Monocytes are critical regulators of the innate systemic inflammatory response14, and their levels tend to be elevated in severe infections. Aegerter et al.15 reported that monocytes play an important role in influenza. Alveolar macrophages originate from monocytes and have antibacterial effects. Moreover, influenza viruses enter the cytoplasm through receptor-mediated endocytosis and promote monocytes/macrophages to secrete inflammatory cytokines and chemokines (e.g., tumor necrosis factor-alpha, interleukin 1 [IL-1], IL-4, IL-6, interferon-gamma, etc.) and to induce immune responses and phagocytosis in lymphocytes, while removing damaged or senescent cells. Therefore, monocytes play an important role in antiviral immunity.
Nevertheless, monocyte count elevation has a complex function in influenza patients. A review of the contribution of lung macrophages and monocytes to influenza pathogenesis showed that lung macrophages have catabolic and immunosuppressive functions16. During the acute phase of inflammation, classical lung monocytes, and monocyte-derived dendritic cells have displayed proinflammatory function. Coates et al.17 reported that excessive inflammatory responses due to increased monocyte aggregation in the lungs of young mice cause severe influenza instead of increasing the capacity to control viral replication. This study delivered insights into severe influenza in adolescents, specifically, on the contribution of monocytes to secondary acute lung injury in children with influenza. Other studies worldwide have shown that monocytes/macrophages act as a repository, although certain viruses use these cells for productive replication18. Monocyte levels are frequently elevated in mycoplasma infection19, and monocytes play a crucial role in Human enterovirus 71 (EV71) infection20.
Significantly elevated monocyte levels may represent atypical lymphocytes. In influenza patients, the immune response of lymphocytes induces T-lymphocyte activation, with the resultant production of atypical lymphocytes, which are larger and have large nuclei and loose chromatin. Instruments used for hematological analysis tend to recognize these atypical lymphocytes as monocytes, which results in a false increase in monocyte percentage. In future research, the manual verification of routine blood test results from patients with significant monocyte percentage elevation can be used to determine whether these additional monocytes are atypical lymphocytes. However, significant monocyte count elevation may suggest the presence of non-influenza infections, and in such circumstances, testing for other respiratory pathogens can be performed.
In China's primary care system, people with flu-like symptoms are routinely given routine blood examinations when they visit the clinic. Further testing for pathogenic microorganisms depends on whether the primary care institution is equipped with the relevant technology, as well as on the patient's willingness, payment method, and the clinician's suggestion. Patients attending primary care facilities often seek to be diagnosed and treated as soon as possible. Real-time PCR (RT-PCR), which generally yields results within 4–6 h, is ideal for influenza detection in primary care. However, the technology and conditions to carry out RT-PCR are usually not available in primary health care facilities. Some secondary and tertiary general hospitals in China can carry out RT-PCR detection, but due to the relatively complex technology, the actual detection efficiency is low, and results take 3–10 days. The CDC in some areas can offer free nucleic acid testing for influenza viruses, with a wait of 7–14 days for results. These conditions affect the timeliness of diagnosis, which could increase the likelihood of blind treatment with antibiotics. This phenomenon may be prevalent in primary care systems in the most underdeveloped regions of the world.
In addition, the high cost limits the use of RT-PCR for influenza diagnosis in primary care settings in China. Through interviews with other hospitals in China and employees of pharmaceutical companies in China and the United States, the author has learned about the prices of colloidal gold and RT-PCR technology in China, the United States, and Europe. Viral nucleic acid testing in China costs about 300 to 700 Chinese Yuan (CNY), or about $46.6–108.7 USD. This is comparable to the ~$96–100 USD in the United States; in Europe, it is about $194 USD. But for Chinese patients with lower income levels, the burden of this cost may be greater. The colloidal gold method requires a simpler test that can be performed in a primary care setting, and results can be obtained in as little as 20 min. In China, this method is also much cheaper, priced at 119 CNY, or about $18.50 USD, roughly 17–40% of the cost of RT-PCR. Colloidal gold testing costs $28–33 USD (28–34% of the PCR cost) in the US and about $73 USD (38% of the PCR cost) in Europe.
The limitations of this study include the small sample size and selection bias. The minimum age of patients that presented at our outpatient fever clinic was 16 years. Hence, in this study, >70% of patients were young adults; 50% of patients sought medical attention on day 1 of their disease course, and two-thirds had a moderate-grade fever (38–39 °C). The proportion of patients with underlying disease, pregnancy, or older age was low (7.4%). None of the participants had received vaccination against influenza.
The decision tree analysis proposed in this study enables an easier understanding of the classification rules and presents a new influenza diagnosis approach. This approach improved the predictive performance of a single test, suggesting that it can guide antibiotic prescription in clinical practice. Of the 520 patients with influenza-like illness in this study, 105 (20%) had a negative colloidal gold test result and WBC count >9.0755 × 109/L. These patients were considered “non-influenza” cases, which may involve bacterial infection, for which antibiotics might be recommended. In contrast, patients with a positive colloidal gold test result and monocyte percentage within 10.95–12.55% were considered likely to have influenza. The PPV of these parameters was 89.9%, suggesting that 69 (13%) patients did not require antibiotic treatment. In this study sample (n = 520), there was an evidence-based rationale for antibiotic use after a simple and rapid test among one-third of the patients.
Future prospective studies should test the feasibility of clinical use of the proposed decision tree model. In addition, studies that involve a larger sample size should be conducted to verify the diagnostic accuracy and stability of the model. For patients with negative nucleic acid test results, in whom the pathogen is unknown, high-throughput sequencing can be combined with differences in outcomes after treatment to determine the type of pathogen involved. However, further research is required to optimize the decision tree.
## Methods
All study participants provided informed consent, and the study design was approved by the Peking University Third Hospital Medical Science Research Ethics Committee 2017 (295–02). All subjects were fully informed and signed written informed consent prior to take part in the study.
This study prospectively enrolled patients who presented with fever at an outpatient clinic in Peking University Third Hospital during three influenza seasons (December 2017 to March 2018, December 2018 to March 2019, and December 2019 to January 2020). The inclusion criteria were as follows: fever ≥ 38 °C, cough, sore throat, and a disease course of ≤3 days. This study excluded: (1) individuals who were mentally incapable or unable to understand the study requirements; (2) individuals expected to have poor compliance; (3) pregnant or lactating women; and (4) individuals considered inappropriate for participation in this study for other reasons. As a national influenza surveillance outpost hospital, the study center's mission was to collect nasopharyngeal swabs from 20 patients with influenza-like illness each week (ten patients on Mondays and Wednesdays, respectively). The study used data from this group of people. A total of 700 patients fulfilled the study eligibility criteria. We subsequently excluded 12 patients due to missing clinical data. Finally, 346 and 342 patients from the 2017–2018 and 2018–2019 flu seasons, respectively, were selected for study inclusion, providing a total study sample of 688 patients. However, 109 patients with body temperature < 38 °C and 59 patients with a disease course of >3 days or unclear information in their medical records were excluded. The final analysis dataset included 520 patients. In the 2019–2020 influenza season, only 160 cases were collected within 2 months due to the coronavirus disease epidemic. Among these cases, four were excluded due to missing clinical data. Thus, 156 cases were included for the 2019–2020 influenza season.
### Laboratory methods and clinical data
Nasopharyngeal swabs were prospectively collected and tested. The colloidal gold method was applied immunochromatography and a double-antibody sandwich to detect influenza A/B antigens by influenza A/B viral antigen detection kit (Guangzhou Wongfo Biotech Co., Ltd, Guangzhou, China). Preservative solution (80 µL) containing the dissolved sample was added to the well in the test card, and chromatography was performed after 15–20 min to detect the influenza antigens.
The nucleic acid test was used for a definitive diagnosis of influenza. The specimens were delivered to the laboratory of Beijing Haidian District Center for Disease Control and Prevention following a standard institutional protocol used at the study center, which is a national sentinel surveillance hospital. The influenza A/B nucleic acid assay kit, H1N1 (2009) influenza A/seasonal influenza H3 nucleic acid assay kit and the Victoria/Yamagata (BV/BY) influenza B nucleic acid test kit (Jiangsu Bioperfectus Technologies Co., Ltd) were used and assayed for each sample on the ABI 7500fast real-time quantitative PCR system.
Data on demographics (i.e., age and sex), clinical presentation (i.e., body temperature, °C; disease course, days; cough, sputum, dyspnea, runny nose, diarrhea, dizziness, headache, myalgia/arthralgia, fatigue, and chills) were collated. Medical history (i.e., underlying disease, pregnancy, contact history, influenza vaccination, and high-risk population), routine blood test results (i.e., white blood cell count × 109/L, hemoglobin g/L, platelet count × 1012/L, lymphocyte %, and monocyte %), and influenza colloidal gold test were included for analysis.
### Statistical analysis
SPSS version 17.0 (Chicago: SPSS Inc.) was used for statistical analysis. Quantitative data are shown as mean and standard deviation, and qualitative data are presented as count and percentage. The t test or chi-square test was used for intergroup comparisons of positive/negative results of the nucleic acid test. The correlation between general clinical characteristics and nucleic acid test results was examined. Clinical characteristics were evaluated as independent variables, and CRT was used to construct a decision tree. The CRT method involves dividing the population into several homogeneous subpopulations with specific characteristics. Thus, the derived subgroups have a high degree of internal consistency and a similar degree of internal variation/impurity, which can be achieved by applying the prediction error minimization or a binary method21. The 2017–2018 and 2018–2019 influenza season data were used to establish the decision tree, while the 2019–2020 influenza season data were used for verification.
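For readers unfamiliar with CRT, the sketch below shows how such a decision tree can be fitted with scikit-learn; the feature names mirror the paper's variables, but the data are random placeholders, not the study's dataset:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(42)
n = 520
X = np.column_stack([
    rng.integers(0, 2, n),     # colloidal gold result (0 = negative, 1 = positive)
    rng.normal(8.5, 2.0, n),   # WBC count, x10^9/L
    rng.normal(10.0, 2.5, n),  # monocyte percentage
])
y = rng.integers(0, 2, n)      # nucleic acid result (the gold-standard label)

# CRT-style binary tree: Gini impurity, shallow depth for interpretability.
tree = DecisionTreeClassifier(criterion="gini", max_depth=3).fit(X, y)
print(export_text(tree, feature_names=["gold_test", "wbc", "monocyte_pct"]))
```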
### Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
## Data availability
The data will be made available to others on reasonable requests to the corresponding author.
## References
1. World Health Organization. Influenza (Seasonal). http://www.who.int/news-room/fact-sheets/detail/influenza-(seasonal). (WHO, 2018).
2. Iuliano, A. D. et al. Estimates of global seasonal influenza-associated respiratory mortality: a modelling study. Lancet 391, 1285–1300 (2018).
3. Craddock, K. et al. The impact of educational interventions on antibiotic prescribing for acute upper respiratory tract infections in the ambulatory care setting: a quasi-experimental study. J. Am. Coll. Clin. Pharm. 3, 609–614 (2020).
4. Havers, F. P. et al. Outpatient antibiotic prescribing for acute respiratory infections during influenza seasons. JAMA Netw. Open 1, e180243 (2018).
5. Taymaz, T. et al. Significance of the detection of influenza and other respiratory viruses for antibiotic stewardship: lessons from the post-pandemic period. Int. J. Infect. Dis. 77, 53–56 (2018).
6. Durkin, M. J. et al. Outpatient antibiotic prescription trends in the United States: a national cohort study. Infect. Control Hosp. Epidemiol. 39, 584–589 (2018).
7. Eboigbodin, K. et al. Reverse transcription strand invasion based amplification (RT-SIBA): a method for rapid detection of influenza A and B. Appl. Microbiol. Biotechnol. 100, 5559–5567 (2016).
8. Nolte, F. S., Gauld, L. & Barrett, S. B. Direct comparison of Alere i and Cobas Liat influenza A and B tests for rapid detection of influenza virus infection. J. Clin. Microbiol. 54, 2763–2766 (2016).
9. Laxminarayan, R. et al. Antibiotic resistance—the need for global solutions. Lancet Infect. Dis. 13, 1057–1098 (2013).
10. Chartrand, C. et al. Accuracy of rapid influenza diagnostic tests: a meta-analysis. Ann. Intern. Med. 156, 500–511 (2012).
11. Zazueta-Garcia, R. et al. Effectiveness of two rapid influenza tests in comparison to reverse transcription-PCR for influenza A diagnosis. J. Infect. Dev. Ctries. 8, 331–338 (2014).
12. Ndegwa, L. K. et al. Evaluation of the point-of-care Becton Dickinson Veritor™ rapid influenza diagnostic test in Kenya, 2013–2014. BMC Infect. Dis. 17, 60 (2017).
13. Eggers, M., Enders, M. & Ladwig, E. T. Evaluation of the Becton Dickinson rapid influenza diagnostic tests in outpatients in Germany during seven influenza seasons. PLoS ONE 10, 1–12 (2015).
14. Krychtiuk, K. A. et al. Monocyte subset distribution is associated with mortality in critically ill patients. Thromb. Haemost. 116, 949–957 (2016).
15. Aegerter, H. et al. Influenza-induced monocyte-derived alveolar macrophages confer prolonged antibacterial protection. Nat. Immunol. 21, 145–157 (2020).
16. Duan, M. B., Hibbs, M. L. & Chen, W. S. The contributions of lung macrophage and monocyte heterogeneity to influenza pathogenesis. Immunol. Cell Biol. 95, 225–235 (2017).
17. Coates, B. M. et al. Inflammatory monocytes drive influenza A virus-mediated lung injury in juvenile mice. J. Immunol. 200, 2391–2404 (2018).
18. Nikitina, E. et al. Monocytes and macrophages as viral targets and reservoirs. Int. J. Mol. Sci. 19, 2821 (2018).
19. Wang, Z. et al. Monocyte subsets study in children with Mycoplasma pneumoniae pneumonia. Immunol. Res. 67, 373–381 (2019).
20. Wongsa, A. et al. Replication and cytokine profiles of different subgenotypes of enterovirus 71 isolated from Thai patients in peripheral blood mononuclear cells. Microb. Pathog. 132, 215–221 (2019).
21. Wray, C. M. & Byers, A. L. Methodological progress note: classification and regression tree analysis. J. Hosp. Med. 15, E1–E3 (2020).
## Acknowledgements
The authors thank Lina Jin from the Beijing Haidian District Center for Disease Control and Prevention for her support with the data. This project was funded by the Beijing Haidian District Preventive Medicine Fund [grant no. 2017HDPMA04] under the 2017 research topic “Survey on Epidemiology and Disease Burden in Influenza Patients in Fever Outpatient Clinics” and by the National Natural Science Foundation of China [project no. 81701067].
## Author information
### Contributions
X.L. is the principal clinical investigator and was responsible for drafting the manuscript and final approval for publication; J.C. is an associate clinical investigator; F.L. and W.W. performed clinical tests; J.X. organized the clinical researchers; N.L. participated in data analysis and discussion.
### Corresponding author
Correspondence to Nan Li.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Li, X., Chen, J., Lin, F. et al. Influenza colloidal gold method and blood routine tests combination for rapid diagnosis of influenza: a decision tree-based analysis. npj Prim. Care Respir. Med. 31, 39 (2021). https://doi.org/10.1038/s41533-021-00251-x
https://math.nmsu.edu/activities/colloquia.html
01/29/2021: Vivian Healey
Title: Scaling Limits and the Loewner Equation: From Brownian Motion to Multiple SLE
Abstract: Starting with a simple random walk, rescale (in space and time) and take the limit as the step size goes to zero; the result is Brownian motion, which is used to model random motion throughout the sciences and is one of the fundamental objects in the field of probability. But what limit would you get if you started by assuming that the random walk never intersected itself? The answer (to many versions of this question in two dimensions) is Schramm-Loewner evolution (SLE). In this talk, I will discuss SLE from a few perspectives: as the scaling limit of discrete models with links to physics, as the output of the Loewner differential equation, and as a random path in the disk. I will also discuss recent applications of these perspectives to the construction of multiple SLE curves. (Including joint work with Gregory Lawler.)

02/05/2021: Nik Weaver, Washington University, St. Louis
Title: A "quantum" Ramsey theorem for operator systems
Abstract: Let V be a linear subspace of $M_n(\mathbb{C})$ which contains the identity matrix and is stable under the formation of Hermitian adjoints. I claim that if n is sufficiently large then there exists a rank k orthogonal projection P such that dim(PVP) = 1 or $k^2$. I will explain the statement of the theorem in more detail and talk about why it is "quantum" and how it relates to Ramsey's classic theorem about graphs. Then I will describe some of the ideas that go into the proof.

02/26/2021: Gail Wolkowicz, McMaster University, Hamilton, Canada
Title: A Delay Model for Persistent Viral Infections in Replicating Cells
Abstract: Persistently infecting viruses remain within infected cells for a prolonged period of time without killing the cells and can reproduce via budding virus particles or passing on to daughter cells after division. The ability for populations of infected cells to be long-lived and replicate viral progeny through cell division may be critical for virus survival in examples such as HIV latent reservoirs, tumor oncolytic virotherapy, and non-virulent phages in microbial hosts. We consider a model for persistent viral infection within a replicating cell population with time delay modeling the length of time in the eclipse stage prior to the infected cell replicative form. We obtain reproduction numbers that provide criteria for the existence and stability of the equilibria of the system and provide bifurcation diagrams illustrating transcritical (backward and forward), saddle-node, and Hopf bifurcations, and provide evidence of homoclinic bifurcations and a Bogdanov-Takens bifurcation. We investigate the possibility of long-term survival of the infection (represented by chronically infected cells and free virus) in the cell population by using the mathematical concept of robust uniform persistence. Using numerical continuation software with parameter values estimated from phage-microbe systems, we obtain two-parameter bifurcation diagrams that divide parameter space into regions with different dynamical outcomes. We thus investigate how varying different parameters, including the time spent in the eclipse phase, can influence whether the virus survives.

03/05/2021: David Pengelley, Oregon State University and NMSU
Title: How and Why did Fermat Discover his Theorem? A hands-on investigation
Abstract: We know why Fermat discovered his theorem: yes, that theorem, the one about power residues modulo a prime that underlies all of number theory and internet security today. We know because his letters written in 1640 reveal what problem from antiquity he was working on, why he was working on it, and what he wanted to know. But this doesn't tell us how he discovered his theorem, which is an unexpected amazement and wouldn't just be guessed. Participants will be invited to think like Fermat, look at the very problem and data he was studying, and conjecture the patterns he saw that led to his theorem. Absolutely no background is required! But this doesn't mean it won't be challenging. Please bring your pencil/paper/calculator!

03/12/2021: David Eisenbud, MSRI and University of California, Berkeley
Title: The Problem of Subtraction in Algebraic Geometry and Commutative Algebra
Abstract: Some curves in 3-space can be realized as the intersections of two surfaces; for example, the intersection of two quadric (= degree 2) hypersurfaces containing a line in common has another component, which can be thought of as the intersection minus the line. The invariants of that other component can be computed from this information: it must be a curve of degree 3 and genus 0. In commutative algebra, subtractions appear as ideal quotients, and raise other interesting questions, some very subtle. Such problems have been studied for more than 100 years. I'll discuss the origins of this theory of "residual intersections", and some of the modern developments in algebraic geometry and commutative algebra.

03/19/2021: Fred Wehrung, University of Caen Normandy
Title: Intractability for images of certain functors
Abstract: There are various open problems asking for the description of certain classes of structures. Examples are: the class of ordered $K_0$ groups of unit-regular rings; the class of (Zariski-like) spectra of abelian lattice-ordered groups; the class of submodule lattices of modules. All those classes are images of functors that preserve $\lambda$-directed colimits for large enough $\lambda$. I will present a general framework enabling one to verify, in certain cases, a form of intractability for the given class. This intractability implies the failure of closure under elementary equivalence for any infinitary language. It is entailed by the existence of a certain type of non-commutative diagram whose image under the given functor is commutative.

03/26/2021: Seth Sullivant, North Carolina State University
Title: Phylogenetic Algebraic Geometry
Abstract: The main problem in phylogenetics is to reconstruct evolutionary relationships between collections of species, typically represented by a phylogenetic tree. In the statistical approach to phylogenetics, a probabilistic model of mutation is used to reconstruct the tree that best explains the data (the data consisting of DNA sequences from homologous genes of the extant species). In algebraic statistics, we interpret these statistical models of evolution as geometric objects in a high-dimensional probability simplex. This connection arises because the functions that parametrize these models are polynomials, and hence we can consider statistical models as algebraic varieties. The goal of the talk is to introduce this connection and explain how the algebraic perspective leads to new theoretical advances in phylogenetics, and also provides new research directions in algebraic geometry. The talk material will be kept at an introductory level, with background on phylogenetics and algebraic geometry.

04/09/2021: Jonathan Montano, New Mexico State University
Title: Degrees and Multiplicities
Abstract: In how many points does a curve in the plane intersect a random straight line? The answer to this question is an invariant of the curve called "degree". The notion of degree extends to higher dimensional shapes that are defined in terms of polynomials (varieties) and it is a central invariant lying in the interplay between Algebraic Geometry and Commutative Algebra. In this talk I will give an overview of this theory and its generalization to varieties in multi-spaces. In particular, I will report on recent joint work with Castillo, Cid-Ruiz, Li, and Zhang.

04/23/2021: Ilya Shapirovskiy, New Mexico State University
Title: TBA
Abstract: TBA

04/30/2021: Feng Luo, Rutgers University
Title: Discrete Conformal Geometry of Polyhedral Surfaces and its Application
Abstract: Classical theory of Riemann surfaces is a pillar in mathematics and has many applications within and outside of mathematics. There have been many approaches to establish discrete versions of conformal geometry for polyhedral surfaces since the pioneering work of W. Thurston in 1978. In this talk, we will report some of the recent developments in this area. These include a notion of discrete conformality, a discrete uniformization theorem for polyhedral surfaces, and their applications. This is joint work with D. Gu, J. Sun and T. Wu.

05/07/2021: Adina Oprisan, New Mexico State University
Title: Average and Diffusion Approximation Principle
Abstract: Weak convergence techniques provide paths in analyzing various stochastic approximations of dynamical systems subject to the effect of small random perturbations. In both average and diffusion approximations, the smallness of the effect of the perturbations is ensured by quick oscillations of the random perturbation process. Limit theorems generalizing classic types such as the law of large numbers, the central limit theorem, and large deviations are developed for systems perturbed by ergodic Markov and semi-Markov processes.
https://math.stackexchange.com/questions/3039144/evaluation-of-integrals
# Evaluation of integrals
I want to evaluate the following integrals:
$$\int_0^{\infty}\frac{(1+4\lambda^2)}{(1+\lambda^2)\left[\lambda\sin(2\eta_2)+\sinh(2\lambda\eta_2)\right]}\left\{\lambda\sin(2\eta_2)+\sinh(2\lambda\eta_2)+\left[1+2\lambda^2\sin^2\eta_2-\cosh(2\lambda\eta_2)\right]\tanh(\lambda\pi)\right\}\,d\lambda,$$

$$\int_0^{\infty}\frac{(1+4\lambda^2)\left\{1+(1+2\lambda^2)\left[3\cosh(\lambda\pi)-\cosh(3\lambda\pi)\right]-3\cosh(2\lambda\pi)\right\}}{2(1+\lambda^2)\cosh(\lambda\pi)\left[1+2\lambda^2-\cosh(3\lambda\pi)\right]}\,d\lambda,$$

$$\int_0^{\infty}(4\lambda^2+1)\left[1-\tanh(\lambda\eta_2)\tanh(\lambda\pi)\right]\,d\lambda.$$

In the first and last integrals $\pi>\eta_2>0$. I would be grateful if anybody could help.
• Have you tried with WolframAlpha? – Nosrati Dec 14 '18 at 12:53
• Yes, of course. It is able to evaluate the integrals for special values of $\eta_2$ (especially in the last integral, it is able to evaluate for $\eta_2=\pi/4,\pi/2,\pi$ etc.; the upper limit for $\eta_2$ should be $\le \pi$ and not $<\pi$), but I am not able to find the general expression from these particular cases. – Jog Dec 15 '18 at 3:31
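(Not part of the original thread: before hunting for a closed form, such integrals can be probed with a quick numerical sweep. Below is a minimal SciPy sketch for the third integral, assuming the sample value $\eta_2=\pi/3$; since $1-\tanh(\lambda\eta_2)\tanh(\lambda\pi)$ decays exponentially in $\lambda$, the integrand is well behaved at the infinite upper limit.)

```python
# Quick numerical check of the third integral, assuming eta2 = pi/3.
# An exploration sketch, not a derivation of any closed form.
import numpy as np
from scipy.integrate import quad

eta2 = np.pi / 3  # any value with 0 < eta2 < pi

def integrand(lam):
    # (4*lam^2 + 1) * [1 - tanh(lam*eta2) * tanh(lam*pi)]
    return (4 * lam**2 + 1) * (1 - np.tanh(lam * eta2) * np.tanh(lam * np.pi))

value, abs_err = quad(integrand, 0, np.inf)
print(f"eta2 = {eta2:.6f}: integral ~ {value:.10f} (est. error {abs_err:.1e})")
```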
https://brilliant.org/problems/no-using-calculator/
# No using calculator
Algebra Level 2
Find the largest number among the following numbers:
(A) $\sqrt{8}+\sqrt{8}$
(B) $\sqrt{7}+\sqrt{9}$
(C) $\sqrt{6}+\sqrt{10}$
(D) $\sqrt{5}+\sqrt{11}$
(E) $\sqrt{4}+\sqrt{12}$
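(A solution sketch, not part of the original problem page: each option has the form $\sqrt{a}+\sqrt{b}$ with $a+b=16$, and since $(\sqrt{a}+\sqrt{b})^2=16+2\sqrt{ab}$, the sum is largest when the product $ab$ is largest, which happens at $a=b=8$. So (A) is the largest. A one-line numeric check:)

```python
# Compare sqrt(a) + sqrt(b) for each option with a + b = 16;
# the balanced pair a = b = 8 maximizes the sum.
from math import sqrt
for a, b in [(8, 8), (7, 9), (6, 10), (5, 11), (4, 12)]:
    print(a, b, sqrt(a) + sqrt(b))
```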
http://math.stackexchange.com/questions/98743/proving-that-x-sin-x-cos-x-x2-has-only-one-positive-answer
# Proving that $x\sin x+\cos x=x^2$ has only one positive answer
I have a homework question to prove that
$$x \sin x+\cos x=x^2$$
has only one positive solution.
I have easily proved that it has a positive solution by showing that $f(x)=x\sin x+\cos x-x^2$ satisfies $f(\frac{\pi}{2})<0$ and $f(0)>0$ and then using the Intermediate Value Theorem.
But I am having trouble proving this is the only positive solution. Can someone help me with this? Thanks :)
Examine the derivative of $f$. – Mikko Korhonen Jan 13 '12 at 12:21
... and use Rolle's theorem. – lhf Jan 13 '12 at 12:23
Ah Thanks guys - the Rolle's theorem tip helped a lot. – Jason Jan 13 '12 at 12:36
Let $f(x)=x\sin x+\cos x-x^2$. Then $f(-\pi/2)<0$, $f(0)>0$ and $f(\pi/2)<0$. Hence, by the Intermediate Value Theorem, $f$ has at least two zeros. Since $f'(x)=\sin x+x\cos x-\sin x-2x=x\cos x-2x=x(\cos x-2)$, and $\cos x-2<0$ for all $x$, $f'$ has only one zero, at $x=0$. By Rolle's theorem, $f$ therefore has at most two zeros, so it has exactly two zeros: one positive and one negative.
I think you need $-2x$ there. Also, another way: since $x \cos x - 2x = x (\cos x - 2)$ is negative when $x > 0$, we know that $f$ is strictly decreasing in $]0, \infty[$. Thus $f$ has at most one positive root by continuity. – Mikko Korhonen Jan 13 '12 at 13:24
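(An addition, not from the original thread: the derivative computation and the two roots are easy to confirm symbolically and numerically. A minimal SymPy sketch:)

```python
# Verify f'(x) = x*(cos x - 2) and locate the two real roots of f numerically.
import sympy as sp

x = sp.symbols("x")
f = x * sp.sin(x) + sp.cos(x) - x**2
print(sp.factor(sp.diff(f, x)))   # -> x*(cos(x) - 2)
print(sp.nsolve(f, x, 1))         # positive root, approximately 1.22
print(sp.nsolve(f, x, -1))        # negative root; f is even, so it is the negative of the other
```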
https://math.stackexchange.com/questions/1119072/spivaks-calculus-chapter-1-problem-19-inequalities
# Spivak's Calculus, chapter 1 problem 19 (inequalities)
I'm having trouble with problem 1-19 in Spivak's Calculus. I have to prove that if $|x-x_0| < \frac{\epsilon}{2}$ and $|y-y_0| < \frac{\epsilon}{2}$ then $|(x-y)-(x_0-y_0)| < \epsilon$. I know $|x-x_0| - |y-y_0| < \epsilon$, but since $|a| - |b| \leq |a-b|$, my guess is that this is useless?
• look up triangle inequality. – abel Jan 25 '15 at 16:43
You can use the triangle inequality like this: $$|(x-y)-(x_0-y_0)|=|(x-x_0)+(y_0-y)|\le|x-x_0|+|y_0-y|<\frac\varepsilon2+\frac\varepsilon2=\varepsilon$$
I know I'm super late, but regardless I'd like to give some input that may help anyone who comes across this question in the future.
you wrote:
I know $|x-x_0| - |y-y_0| < \varepsilon$, but since $|a| - |b| \leq |a-b|$, my guess is that this is useless?
but that's not correct! In fact that's an important remark to make (albeit an ultimately unnecessary one) because it provides some connection between the two in your brain. But as you'll see, you don't even need to make that statement.
Using your notation where $a = (x-x_0)$ and $b = (y-y_0)$, we want to prove that
$|(x-y)-(x_0-y_0)| = |a - b| < \varepsilon$
Since
$|a| < \frac{\varepsilon}{2}$ and $|b| < \frac{\varepsilon}{2}$
then
$|a| + |b| < \varepsilon$
And we can show that $|a-b| \leq |a| + |b|$ (by comparing the squares of the two sides of the inequality). And since we've already made the statement that $|a| + |b| < \varepsilon$, we've now proven that $|a - b| = |(x-y)-(x_0-y_0)| < \varepsilon$.
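(A worked version of that squares comparison, added for completeness: both sides are nonnegative, so it suffices to compare their squares,
$$|a-b|^2 = a^2 - 2ab + b^2 \le a^2 + 2|a||b| + b^2 = (|a|+|b|)^2,$$
which holds because $-2ab \le 2|ab| = 2|a||b|$.)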
You are close:
$|(x-y)-(x_0-y_0)|=|(x-x_0)+(-y+y_0)| \leq|x-x_0|+|y-y_0|$