http://www.ck12.org/algebra/Trends-in-Data/rwa/CO2-Emissions/
# Trends in Data
## Finding a model to show the relationship between variables
CO2 Emissions
Teacher Contributed
## Real World Applications – Algebra I
### Topic
Carbon Dioxide Emissions in Nicaragua!
### Student Exploration
Carbon dioxide emissions are estimated from measures of human activity: data are gathered from fossil fuel combustion, cement manufacturing, and gas flaring. The link below presents Nicaragua's carbon dioxide emissions in both graph form and table form.
Looking at this graph, we can start to predict future carbon dioxide emissions using algebra. I will make a simplified table and find a line of best fit using my graphing calculator.
For the sake of this activity, I’m going to look at the following data points:
| Year | 1993 | 1997 | 2000 | 2003 | 2006 |
|---|---|---|---|---|---|
| Carbon emissions (metric tons) | 628 | 855 | 1048 | 1180 | 1182 |
You can look here to learn how to insert and plot all of these data points into your TI-83 calculator: cstl-csm.semo.edu/tansil/134/Handouts/bestfit.pdf
After entering all of the data into the graphing calculator and following all of the steps to find the line of best fit, we have \begin{align*}y = 45.23x - 89471.50\end{align*}. What does this equation mean?
Well, the \begin{align*}y\end{align*}-intercept doesn't mean much in this case, but the slope means a lot! The slope, 45.23, tells us that carbon emissions increase by about 45.23 metric tons each year.
We can also use this data to predict the carbon emissions for this year. This is the extrapolation method. Just substitute this year into the equation and solve for \begin{align*}y\end{align*}. What did you get?
For 2012, the model predicts about 1,530 metric tons of carbon emissions. Does this answer make sense when looking at all of your data points? Why or why not? Could there be more than one prediction for any given year?
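If you would like to check the calculator's fit in software, here is a minimal sketch (my addition, not part of the original activity) using numpy on the five tabulated points:

```python
import numpy as np

years = np.array([1993, 1997, 2000, 2003, 2006])
emissions = np.array([628, 855, 1048, 1180, 1182])  # metric tons

slope, intercept = np.polyfit(years, emissions, 1)  # least-squares line of best fit
print(round(slope, 2), round(intercept, 2))         # 45.23 -89471.5
print(round(slope * 2012 + intercept))              # 1530, the 2012 extrapolation
```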
Now, let's look at two specific data points from the website for two different dates, 1999 and 2000: (1999, 989) and (2000, 1048). Let's use the method of interpolation to get an idea of what the carbon emissions were in September of 1999. Since we have two data points, we find the slope, which is \begin{align*}\frac{(1048 - 989)}{(2000 - 1999)}\end{align*}, or 59. Substituting one point and the slope into the equation, we have:
\begin{align*}y &= mx + b\\ 989 &= 59(1999) + b\\ b &= -116{,}952 \end{align*}
Our equation is \begin{align*}y = 59x - 116952\end{align*}
To find the carbon emissions in September of 1999, we use the fact that September is 9 months into a 12-month period, or 75% of the way through the year. We therefore use 1999.75 as our \begin{align*}x\end{align*} value to find our \begin{align*}y\end{align*} value.
\begin{align*}y &= 59(1999.75) - 116952\\ y &= 1033.25\end{align*}
This makes sense, since 1033.25 is between 989 and 1048.
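The same check in a couple of lines (my addition):

```python
slope = (1048 - 989) / (2000 - 1999)  # 59.0
b = 989 - slope * 1999                # -116952.0
print(slope * 1999.75 + b)            # 1033.25, between 989 and 1048
```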
How does this information impact your carbon footprint? What could you do to minimize your own carbon footprint?
### Extension Investigation
On this website, see if you can recreate a similar equation and estimate what the carbon dioxide emissions would be today.
---
https://zdelrosario.github.io/uq-book-preview/02_diagnosing/marginals.html
# 6.2. Single-uncertainties: Marginal Distributions
Learning Objectives
In this chapter, you will learn:
• how to use an exploratory analysis to inform probability modeling
• the importance of picking a distribution model
• factors to consider in picking a distribution model
(Key ideas to cover:)
• how context should affect fitting
• use exploratory analysis to gain insights
• use data-to-model comparison to do sanity checks
• distribution model dictates behavior in extrapolation
• testing a family, making conservative choices
• physical principles -> bounds
• tail weight?
## 6.2.1. Modeling a Parametric Uncertainty
Aim: Use a distribution to describe a parametric uncertainty.
### 6.2.1.1. Why model an uncertain quantity?
• Make our model assumptions explicit
• cf. below with summary statistics and implicit assumptions
• Enable propagation
• Enable inference
### 6.2.1.2. Follow the Modeling Process
To model an uncertain quantity, we follow the modeling process. Below we highlight some of the particular considerations in modeling a parametric uncertainty.
### 6.2.1.3. Start with a modeling question
As we saw at the top of this chapter, we have two different modeling questions that we seek to answer: the materials selection question and the structural sizing question.
### 6.2.1.4. Check the context
The materials selection context will be followed up with more detailed analysis later in the design process. Thus modeling should be detailed enough to compare materials and consider relevant factors, but not so detailed as to cause analysis paralysis. This means we need to model each material separately, but our distribution model can be relatively rough.
In contrast, the structural sizing context will be the last step before finalizing the design, and has a direct impact on the safety of the structure. Modeling here must be detailed enough to confidently ensure the desired level of safety.
### 6.2.1.5. Gather appropriate information
Modeling using Facts
For materials selection, given that we only need a rough model, we can review supplier-provided data on material properties. If a supplier has used a rigorous process to characterize their supply, they should have both mean and variance information available. We can also request information on the sample size (number of tests) used to arrive at their data.
With this mean and variance information in-hand, we can build a simple model for variability by matching the distribution parameters to these values. This leaves the selection of the distribution, which we must do using knowledge of the underlying phenomenon.
Distribution selection To model the variability of material properties, we might be tempted to select a normal distribution. However, the normal distribution has infinite support, meaning it allows both positive and negative values regardless of its mean and variance. For material properties that tend to be distributed roughly symmetrically and have small variability, a normal distribution can nonetheless be sufficient as a crude model.
However, other material properties exhibit a great deal more asymmetry in their realized values; for example, the realized strength of a material will tend to be distributed asymmetrically (as we will see below). A “weakest-link” argument provides a theoretical basis for using the Weibull distribution to model material strength [Wei], though a generalization of the Weibull distribution is used in the aircraft industry [MMP08], and the lognormal distribution is also occasionally used.
For the purposes of the materials selection context, it is more important to make a reasonable decision than a perfect one. For the purposes of comparing materials, and to take advantage of analytic tractability in a later stage of analysis, we will assume a lognormal distribution for material properties.
Distribution parameters TODO
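The moment-matching step described above can be made concrete. The following sketch (my addition, not from the book; the supplier numbers are made up) converts a reported mean and variance into lognormal parameters and checks the result with scipy:

```python
import numpy as np
from scipy.stats import lognorm

def lognorm_from_moments(mean, var):
    """Lognormal distribution matching the given mean and variance.

    If Y = exp(N(mu, sigma^2)), then mean = exp(mu + sigma^2 / 2)
    and var = (exp(sigma^2) - 1) * mean^2; invert these for mu, sigma.
    """
    sigma2 = np.log(1 + var / mean**2)
    mu = np.log(mean) - sigma2 / 2
    return lognorm(s=np.sqrt(sigma2), scale=np.exp(mu))

# Hypothetical supplier data: mean 32,000 psi, standard deviation 4,000 psi
dist = lognorm_from_moments(32000.0, 4000.0**2)
print(dist.mean(), dist.std())  # recovers approximately 32000 and 4000
```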
Modeling using Data
For structural sizing, to develop values that are trustworthy for design, we must follow a more data-informed process.
First, before extensive data collection, we must ensure that the material in question follows a published manufacturing standard[^standard], in order to guarantee an acceptable level of consistency in the manufacture of components.
Then, a sufficient quantity of data is gathered to understand the material variability. For instance, in aerospace design an absolute minimum of $$n=100$$ observations is required to be considered trustworthy [MMP08].
Finally, we use a combination of the data and our knowledge of the underlying phenomenon to choose and fit a distribution. We will see a concrete example of this using materials data below.
Shortcuts in aerospace design
The data collection process above is very expensive! In the aerospace industry this cost is somewhat offset through an effective pooling of resources: The results of extensive metallic materials characterization are collected in Metallic Materials Properties Development and Standardization (MMPDS) [MMP08], which provides allowable material properties used in design. While this approach has led to safe aircraft in the past, there are known flaws with this “allowable” value approach, which we will see in a later chapter [dRFI21].
Exploratory Data Analysis
Load the aluminum die castings dataset to illustrate. More information is available in the Appendix entry on the aluminum die castings TODO.
import numpy as np
import grama as gr
import plotnine as pt
from grama.data import df_shewhart
DF = gr.Intention()  # assumed setup: grama's pipeline placeholder, used as DF.column below
df_shewhart.head()
specimen tensile_strength hardness density
0 1 29314 53.0 2.666
1 2 34860 70.2 2.708
2 3 36818 84.3 2.865
3 4 30120 55.3 2.627
4 5 34020 78.5 2.581
Basic facts
(
df_shewhart
>> gr.tf_summarize(
T_mean=gr.mean(DF.tensile_strength),
T_sd=gr.sd(DF.tensile_strength),
T_skew=gr.skew(DF.tensile_strength),
T_kurt=gr.kurt(DF.tensile_strength),
)
>> gr.tf_mutate(
T_cov=DF.T_sd / DF.T_mean
)
)
T_mean T_sd T_skew T_kurt T_cov
0 31869.366667 3996.380795 0.099848 2.605644 0.125399
A histogram gives us a visual sense of the shape of the data:
(
df_shewhart
>> pt.ggplot(pt.aes("tensile_strength"))
+ pt.geom_histogram()
+ pt.theme_minimal()
+ pt.labs(x="Tensile Strength (psi)", y="Count (-)")
)
/Users/zach/opt/anaconda3/lib/python3.7/site-packages/plotnine/stats/stat_bin.py:95: PlotnineWarning: 'stat_bin()' using 'bins = 7'. Pick better value with 'binwidth'.
Observations:
• broad distribution, centered around 32500 psi
As described in the Appendix on Exploratory Data Analysis, rule 1 of histograms is “play with the bin size”.
(
    df_ruff  # TYS (tensile yield strength, ksi) dataset; assumed loaded earlier in the book
    >> pt.ggplot(pt.aes("TYS"))
    + pt.geom_histogram(bins=20)
    + pt.theme_minimal()
)
The finer-grained histogram maintains the features we saw above
• concentration still near 158 ksi
• still just one outlier at a much-larger value
TODO What does this tell us about building the model?
### 6.2.1.6. Build the model
Materials Selection
Structural Sizing
mg_gengamma = gr.marg_named(df_ruff.TYS, "gengamma")
### 6.2.1.7. Assess the model
Summaries make implicit modeling assumptions
mg = gr.marg_named(df_ruff.TYS, "norm")
## TODO: Add a string representation to marginals, to simplify printing
print(mg.d_name, mg.d_param)
The normal distribution has only two parameters: loc (the mean) and scale (the standard deviation). However, not all distributions are described by just two parameters.
mg = gr.marg_named(df_ruff.TYS, "weibull_min")
## TODO: Add a string representation to marginals, to simplify printing
print(mg.d_name, mg.d_param)
The Weibull distribution comes in a three-parameter form, with an additional “shape” parameter c.
Summary statistics are not model-free; we’re actually making modeling choices when we choose to report a limited set of numbers. For instance, when we report a mean and variance alone we are not implying a specific shape, but we are implying that those two values alone are sufficient to describe the data. A multi-modal distribution would not be well-described using a single mean, even with a variance.
As a concrete counterexample, the following synthetic dataset would be very poorly described using a mean and variance alone.
(
gr.df_make(X=np.random.normal(size=100, loc=-2))
>> gr.tf_bind_rows(
gr.df_make(X=np.random.normal(size=100, loc=+2))
)
>> pt.ggplot(pt.aes("X"))
+ pt.geom_histogram(bins=30)
)
Here, we would be better off reporting two location parameters—one for each mode—and some measure of the spread. The underlying mean of the full dataset (around zero) is not a sufficient description of these data.
Compare different modeling assumptions
mg_norm = gr.marg_named(df_ruff.TYS, "norm")
mg_gengamma = gr.marg_named(df_ruff.TYS, "gengamma")
mg_lognorm = gr.marg_named(df_ruff.TYS, "lognorm")
X = np.linspace(150, 170)
l_norm = list(map(mg_norm.l, X))
l_lognorm = list(map(mg_lognorm.l, X))
l_gengamma = list(map(mg_gengamma.l, X))
(
gr.df_make(
TYS=X,
l_norm=l_norm,
l_lognorm=l_lognorm,
l_gengamma=l_gengamma,
)
>> gr.tf_pivot_longer(
columns=["l_norm", "l_lognorm", "l_gengamma"],
names_to=[".value", "fit"],
names_sep="_",
values_to="foo",
)
>> pt.ggplot(pt.aes("TYS", "l", color="fit"))
+ pt.geom_line()
)
Note that the norm model will tend to produce more conservative design estimates.
print("norm quantile: {0:2.1f}".format(mg_norm.q(0.01)))
print("lognorm quantile: {0:2.1f}".format(mg_lognorm.q(0.01)))
print("gengamma quantile: {0:2.1f}".format(mg_gengamma.q(0.01)))
### 6.2.1.8. Use the model
(Some examples of work with parametric uncertainties: propagation and inference)
### 6.2.1.9. Limited Data
Text preview of Estimation chapter.
## 6.2.2. Use the model
Why would we ever fit a distribution?
### 6.2.2.1. Estimate probabilities
(
df_ruff
>> gr.tf_summarize(pof=gr.mean(DF.TYS <= 155))
)
However, lower critical values will be poorly estimated
(
df_ruff
>> gr.tf_summarize(pof=gr.mean(DF.TYS <= 150))
)
With a model we can extrapolate to lower critical values
## Fit model
md_ruff = (
gr.Model("Model for TYS")
>> gr.cp_marginals(TYS=gr.marg_named(df_ruff.TYS, "norm"))
>> gr.cp_copula_independence()
)
print(md_ruff)
## Simulate and estimate probability
(
md_ruff
>> gr.ev_monte_carlo(n=1e4, df_det="nom", seed=101, skip=True)
>> gr.tf_summarize(pof=gr.mean(DF.TYS < 150))
)
This gives a small—but nonzero—estimate for the probability of failure.
### 6.2.2.2. Perform inference
TODO We’ll see this in the third section on Estimation.
---
https://math.stackexchange.com/questions/2743208/collision-of-particles
# Collision of Particles
At time $t_0 = 0$, a particle $P_1$ of mass $m$ is projected vertically from the ground with speed $v_0 > 0$. If no collisions occur, this particle would reach its maximum height $h_1$ at time $t = t_1$. At time $t_2$, $0 < t_2 < t_1$, when particle $P_1$ has the speed $\frac{v_0}{3}$, another particle $P_2$ of mass $3m$ is projected vertically upwards from the same point on the ground with speed $\frac{v_0}{3}$. Assume that the gravitational acceleration $g$ is constant.
(i) Determine $t_1$ and $h_1$.
(ii) Determine $t_2$ and the height $h_2$ of $P_1$ at $t = t_2$.
(iii) Determine the time $t_3$ and height $h_3$ of the collision between particles $P_1$ and $P_2$.
(iv) Determine the speeds of these particles right before the collision. What are the directions of motion of the particles at this moment?
(v) Determine the work done by the gravity force on each of the particles from the beginning of motion until the moment of collision.
So far I got $t_1=\frac{v_0}{g}$, $h_1=v_0^2$, $t_2=\frac{2v_0}{3g}$ and $h_2=\frac{4v_0^2}{9g}$ but I'm stuck from (iii) onwards.
I know you have to equate the heights of the two particles which gave me $t_3=\frac{3}{2v_0}+\frac{6v_0}{18g}$ but I feel like I went wrong somewhere.
• I recommend using mathematical typesetting (c.f. math.meta.stackexchange.com/questions/107/…) – YukiJ Apr 18 '18 at 15:25
• Your $h_1$ is incorrect. I suggest that you include more details of how you arrived at this and the other values. – amd Apr 20 '18 at 7:26
• Finally, if $0\lt t_2\lt t_1$ and the initial velocity of the second particle are correct, then the particles can never collide, unless you allow the second one to sit on the ground for a while waiting for the first one to come back down. Even if you change the condition to $t_2\gt t_1$, the particles don’t collide while aloft. Most likely the second particle’s launch velocity is wrong. – amd Apr 20 '18 at 7:44
With the stated conditions, the particles never collide. The particle will be momentarily at rest at its peak, so $t_1={v_0\over g}$. You could plug this into the equation of motion to find $h_1$, but a conservation of energy argument works just as well. At the peak of the path, gravity will have done work equal to $mgh_1$ on the particle, converting all of the initial kinetic energy $\frac12mv_0^2$ into potential energy. Equating these and solving for $h_1$ produces ${v_0^2\over2g}$, which is different from the value you’ve given.
Since the particle undergoes constant deceleration on the way up, it’s pretty obvious that $t_2$ is just $2/3$ of the time it takes to reach its peak, and $h_2$ can again be determined via conservation of energy: $$mgh_2 = \frac12mv_0^2-\frac12m\left({v_0\over3}\right)^2,$$ therefore $h_2 = {4v_0^2\over9g}$. So far, so good.
Now we run into problems. If the second particle is launched with a third of the velocity of the first, it will peak in a third of the time, i.e., it will reach its maximum height at time $$t_2+\frac13t_1 = \frac23t_1+\frac13t_1 = t_1.$$ The graph of the second particle’s motion is thus just the first particle’s graph shifted downwards. The graphs are otherwise identical parabolas, so they have no intersections.
We can try to fix this by waiting to launch until the first particle is heading back down. By symmetry, this alternate launch time is $\frac43t_1$, so the second particle will return to the ground at time $\frac43t_1+\frac23t_1=2t_1$, which is exactly when the first particle returns as well. From the wording of the problem, I suspect that this might not be what the authors had in mind, either.
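As a quick symbolic check of the above (my addition, not part of the original answer): the difference of the two height functions is a positive constant for $t \ge t_2$, so the trajectories never meet while both particles are aloft.

```python
import sympy as sp

t, v0, g = sp.symbols("t v0 g", positive=True)
t1 = v0 / g                    # first particle's peak time
t2 = sp.Rational(2, 3) * t1    # launch time of the second particle

y1 = v0 * t - g * t**2 / 2                        # first particle's height
y2 = (v0 / 3) * (t - t2) - g * (t - t2)**2 / 2    # second particle, valid for t >= t2

print(sp.simplify(y1 - y2))   # 4*v0**2/(9*g): constant and positive
```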
---
http://www.sklogwiki.org/SklogWiki/index.php?title=SPC/E_model_of_water&diff=8601&oldid=8449
# Difference between revisions of "SPC/E model of water"
The extended simple point charge model, SPC/E is a slight reparameterisation of the SPC model of water, with a modified value for $q_{\mathrm{O}}$. The molecule is modelled as a rigid isosceles triangle, having charges situated on each of the three atoms. Apart from Coulombic interactions, the molecules interact via long-range Lennard-Jones sites, situated on the oxygen atoms. The parameters are as follows:
| parameter | value |
|---|---|
| $\sigma$ | $3.166\ \mathrm{\AA}$ |
| $\epsilon$ | $0.650$ kJ mol$^{-1}$ |
| $r_\mathrm{OH}$ | $1.000\ \mathrm{\AA}$ |
| $\angle_\mathrm{HOH}$ | $109.47^{\circ}$ |
| $q_{\mathrm{O}}$ | $-0.8476\ e$ |
| $q_{\mathrm{H}}$ | $-q_{\mathrm{O}}/2$ (charge neutrality) |
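To illustrate how these parameters combine, here is a minimal sketch (my addition, not from the wiki page) of the SPC/E pair interaction energy; site positions are assumed to be given in Å and to respect the rigid geometry above:

```python
import numpy as np

# SPC/E parameters from the table above
SIGMA = 3.166        # Angstrom
EPSILON = 0.650      # kJ/mol
Q_O = -0.8476        # units of elementary charge e
Q_H = -Q_O / 2       # +0.4238 e (charge neutrality)
K_E = 1389.35458     # Coulomb constant, kJ * Angstrom / (mol * e^2)

def spce_pair_energy(mol_a, mol_b):
    """Interaction energy (kJ/mol) between two rigid SPC/E molecules.

    mol_a, mol_b: dicts mapping site names 'O', 'H1', 'H2' to
    numpy position vectors in Angstrom.
    """
    charges = {"O": Q_O, "H1": Q_H, "H2": Q_H}
    # The Lennard-Jones term acts only between the two oxygen sites
    r_oo = np.linalg.norm(mol_a["O"] - mol_b["O"])
    sr6 = (SIGMA / r_oo) ** 6
    e_lj = 4 * EPSILON * (sr6**2 - sr6)
    # Coulomb terms act between all nine site pairs
    e_coul = sum(
        K_E * charges[i] * charges[j] / np.linalg.norm(mol_a[i] - mol_b[j])
        for i in charges
        for j in charges
    )
    return e_lj + e_coul
```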
---
https://gmatclub.com/forum/a-farmer-has-an-apple-orchard-consisting-of-fuji-and-gala-93322-20.html
# A farmer has an apple orchard consisting of Fuji and Gala
Intern (10 Oct 2013):
An easy way to solve this without having to do any lengthy calculations: the total number of trees must be divisible by 10, since 0.1 × (total number of trees) = number of cross-pollinated trees, which must be an integer. A quick look at the answers shows that only when 33 is added to 187 is the total divisible by 10.
Once you get 33, you can quickly back-check the 3/4 condition to confirm your answer.
Manager (05 Jul 2014):
Can we use the double-set matrix method for this? I was trying to do that, but could not. Can anyone help?
Arthur1000 wrote:
A farmer has an apple orchard consisting of Fuji and Gala apple trees. Due to high winds this year 10% of his trees cross pollinated. The number of his trees that are pure Fuji plus the cross-pollinated ones totals 187, while 3/4 of all his trees are pure Fuji. How many of his trees are pure Gala?
A. 22
B. 33
C. 55
D. 77
E. 88
THE QUICK METHOD...
Fuji + Cross = 187
10% are cross
75% are Fuji
so 85% = 187
We want to know what 15% is.
Divide our percent by 10: 8.5% = 18.7
Double it: 17% = 37.4
We need 15%, which is 17% - 2% = 37.4 - 4.4 = 33, so 33 fits the bill.
SVP (24 Jul 2014):
Let total trees = x
Pure Fuji $$= \frac{3x}{4}$$
Cross pollinated $$= \frac{x}{10}$$
$$\frac{3x}{4} + \frac{x}{10} = 187$$
x = 220
Pure Gala = 220 - 187 = 33
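The same algebra, checked symbolically (my addition, not part of the thread):

```python
from sympy import Rational, solve, symbols

x = symbols("x")
# pure Fuji (3x/4) plus cross-pollinated (x/10) trees total 187
total = solve(Rational(3, 4) * x + Rational(1, 10) * x - 187, x)[0]
print(total, total - 187)  # 220 and 33 pure Gala trees
```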
Intern (28 Oct 2014):
Silvers wrote:
iamseer wrote:
A farmer has an apple orchard consisting of Fuji and Gala apple trees. Due to high winds this year 10% of his trees cross pollinated. The number of his trees that are pure Fuji plus the cross-pollinated ones totals 187, while 3/4 of all his trees are pure Fuji. How many of his trees are pure Gala?
Let the total trees be x
3/4 are pure Fuji = 3x/4
10% cross pollinated = x/10
now The number of his trees that are pure Fuji plus the cross-pollinated ones totals 187
3x/4 + x/10 = 187
solve this x = 220
220-187 = 33 are the pure Gala trees.
I like the simplicity of the way you dealt with this problem.
Manager (20 Nov 2014):
iamseer wrote:
A farmer has an apple orchard consisting of Fuji and Gala apple trees. Due to high winds this year 10% of his trees cross pollinated. The number of his trees that are pure Fuji plus the cross-pollinated ones totals 187, while 3/4 of all his trees are pure Fuji. How many of his trees are pure Gala?
A. 22
B. 33
C. 55
D. 77
E. 88
Sol: I first thought I could do it using a double-set matrix, but within the first 25 seconds I realized a Venn diagram is better.
Intern (22 Mar 2016):
iamseer wrote:
A farmer has an apple orchard consisting of Fuji and Gala apple trees. Due to high winds this year 10% of his trees cross pollinated, creating trees that are part Fuji and part Gala. The number of his trees that are pure Fuji plus the cross-pollinated ones totals 187, while 3/4 of all his trees are pure Fuji. How many of his trees are pure Gala?
A. 22
B. 33
C. 55
D. 77
E. 88
See the edited text above (the added clause "creating trees that are part Fuji and part Gala").
Question: if the question is re-worded as edited above, would we interpret instead that the 10% of trees that are cross pollinated are actually 10% of the total existing trees before pollination (meaning Pure Fuji + Pure Gala)? I would think that in this case 33 should no longer be the answer.
Disclosure: The CAT exam has the edited wording above.
Senior Manager (13 Jul 2017):
$$f+g = x$$
$$0.1x$$ cross pollinated
$$f+0.1x = 187$$
$$f = 0.75x$$
$$0.75x+0.1x = 187$$
$$x = \frac{187}{0.85} = 220$$
No. of cross pollinated trees = $$0.1(220) = 22$$.
No. of Fuji = $$\frac{3}{4}(220) = 165$$, so no. of Gala = $$220-165-22 = 33$$. Ans - B.
Intern (13 Mar 2018):
iamseer wrote:
A farmer has an apple orchard consisting of Fuji and Gala apple trees. Due to high winds this year 10% of his trees cross pollinated. The number of his trees that are pure Fuji plus the cross-pollinated ones totals 187, while 3/4 of all his trees are pure Fuji. How many of his trees are pure Gala?
A. 22
B. 33
C. 55
D. 77
E. 88
For those who prefer 100 rather than x:
Let the total number of trees be 100. Then:
Cross-pollinated = 10
Pure Fuji = 75
Pure Gala = 15
So when Pure Fuji + pollinated = 85 (75 + 10), Pure Gala = 15.
Apply the unitary method:
85 → 15
1 → 15/85
187 → (15/85) × 187 = 33
Keep it simple!
---
https://www.actucation.com/grade-6-maths/percents-decimals-and-fractions
# What is Percent?
• Many times in our daily lives, we hear the word 'Percent'. But what is it? Let's try to understand it.
• It is a combination of two words - per and cent, where per means "each" and cent means "hundred".
• It helps us express things in relative terms, which makes comparisons between choices easier.
• A percent is one hundredth, and as a fraction it is $$\dfrac {1}{100}$$.
• $$\dfrac {1}{100}$$ can be denoted by the symbol '%' or vice versa.
• Thus, a percent is a part of the whole, where the whole is represented by $$100$$.
For example:
$$58\text {%}$$ means $$58$$ out of $$100$$.
#### What does $$25\text {%}$$ mean?
A $$25$$ added to $$100$$
B $$25$$ out of $$100$$
C $$25$$ multiplied by $$100$$
D $$25$$ subtracted from $$100$$
A percent is a part of the whole where the whole is represented by $$100$$.
$$25\text {%} =25$$ percent
$$=25$$ per $$100$$
$$=25$$ out of $$100$$
Hence, option (B) is correct.
# Fraction as a Percent
• A fraction can be converted to a percent and vice versa, as both are parts of a whole.
• To convert a fraction to a percent, consider the following two cases:
Case 1: When the denominator of the fraction is $$100$$
For example: $$\dfrac {25}{100},\;\dfrac {56}{100},\;\dfrac {16}{100},\;\dfrac {2}{100}$$ etc.
• To convert the fraction where the denominator is $$100$$, replace $$\dfrac {1}{100}$$ by the $$\text{%}$$ sign, because percent means out of $$100$$.
Example: $$\dfrac {25}{100}=25$$ out of $$100=25\text{%}$$
Case 2: When the denominator of the fraction is not $$100$$
Consider the following example: Write $$\dfrac {3}{4}$$ as a percent.
• Here, the denominator is not $$100$$.
• But we know that percent means "out of $$100$$".
• Thus, we have to make the denominator $$100$$.
$$\dfrac {3}{4}=\dfrac {?}{100}$$
To get $$100$$ in the denominator in place of $$4$$, multiply $$4$$ with $$25$$.
$$4×25=100$$
Thus, we have to multiply the numerator $$(3)$$ also with $$25$$ (to get equivalent fraction).
$$3×25=75$$
Now, $$\dfrac {3}{4}=\dfrac {75}{100}$$
Thus, $$\dfrac {75}{100}=75$$ out of $$100=75\text {%}$$
So, we can write $$\dfrac {3}{4}$$ as $$75\text {%}$$.
#### Which one of the following statements is true?
A $$\dfrac {1}{2}=25\text {%}$$
B $$\dfrac {7}{100}=0.7\text {%}$$
C $$\dfrac {23}{100}=57\text {%}$$
D $$\dfrac {4}{5}=80\text {%}$$
Percent means "out of $$100$$" .
Thus, the denominator of the fraction should be $$100$$.
Option (A)
Given: $$\dfrac {1}{2}=25\text {%}$$
Let's check it.
Here, we will find the equivalent fraction of $$\dfrac {1}{2}$$ having the denominator $$100$$.
$$\dfrac {1}{2}=\dfrac {?}{100}$$
To get $$100$$ in the denominator, $$1$$ and $$2$$ should be multiplied with $$50$$.
$$\dfrac {1}{2}=\dfrac{1×50}{2×50}=\dfrac {50}{100}=50 \text { out of }100=50\text {%}$$
Hence, option (A) is incorrect.
Option (B)
Given: $$\dfrac {7}{100}=0.7\text {%}$$
Let's check it.
$$\dfrac {7}{100}=7 \text { out of }100=7\text {%}$$
Hence, option (B) is incorrect.
Option (C)
Given: $$\dfrac {23}{100}=57\text {%}$$
Let's check it.
$$\dfrac {23}{100}=23 \text { out of }100=23\text {%}$$
Hence, option (C) is incorrect.
Option (D)
Given: $$\dfrac {4}{5}=80\text {%}$$
Let's check it.
Here, we will find the equivalent fraction of $$\dfrac {4}{5}$$ having the denominator $$100$$.
$$\dfrac {4}{5}=\dfrac {?}{100}$$
To get $$100$$ in the denominator, $$4$$ and $$5$$ should be multiplied with $$20$$.
$$\dfrac {4}{5}=\dfrac {4×20}{5×20}=\dfrac {80}{100}=80 \text { out of }100=80\text {%}$$
Hence, option (D) is correct.
# Decimal as a Percent
• A decimal can be converted to a percent and vice versa, as they both are parts of a whole.
• To convert a decimal to a percent, consider the following steps:
For example: Write $$0.23$$ as a percent.
Step: 1 Move the decimal point two places to the right.
Step: 2 Add the $$\text{%}$$ (Percent) sign at the end.
$$0.23=23\text{%}$$
Now consider the following cases:
Case-I: If the decimal has zeros
For example: 0.06
• Here, it has a zero at the tenths place.
• Simply move the decimal point two places to the right and add the $$\text{%}$$ sign at the end.
Case-II: If the decimal does not have two decimal places
For example: $$0.3$$
• Here, the decimal does not have two decimal places. So add the required zero to the right side.
• Move the decimal point two places to the right.
• Add the $$\text {%}$$ sign at the end.
Case-III: If the decimal has more than two decimal places
For example: $$0.125$$
• Here, it has three decimal places.
• Move the decimal point two places to the right and add the $$\text {%}$$ sign at the end.
• Now, we have a percent which also has a decimal.
#### Which percent correctly represents $$0.007$$?
A $$70\text {%}$$
B $$7\text {%}$$
C $$0.7\text {%}$$
D $$0.07\text {%}$$
Given: $$0.007$$
Moving the decimal point two places to the right.
Adding the $$\text {%}$$ (Percent) sign at the end,
$$0.7\text {%}$$
Thus,
$$0.007=0.7\text {%}$$
Hence, option (C) is correct.
# Percent through Circle Graph-I
• A circle graph is a way of displaying data.
• A full circle represents the $$100\text{%}$$.
• It is divided into a number of sections, also known as pie shaped wedges.
• The items are graphed in each section.
• Each wedge represents a percent of the whole.
• First, we will learn to read the circle graph.
• Consider the following example:
• The population of a city belonging to different age groups is shown in the circle graph.
From the circle graph, we can say that the city has
Adults $$=30\text {%}$$
Children $$=25\text {%}$$
Teenagers $$=45\text {%}$$
• To learn how to find the missing data, consider another example:
• The circle graph is divided into 6 sections.
• Each section shows a different color.
• The data of different colors can be represented as listed below:
Brown $$=19\text {%}$$
Pink $$=18\text {%}$$
Yellow $$=7\text {%}$$
Red $$=27\text {%}$$
Green $$=6\text {%}$$
Thus, the question arises,
"What percent of the circle shows the blue color?"
To find the answer, we do the following steps:
The circle graph always represents the $$100\text {%}$$.
$$\therefore$$ Percent of blue color
$$=100\text {%}-$$ (Sum of the given data)
The sum of the given data $$=19\text {%}+18\text {%}+7\text {%}+27\text {%}+6\text {%}$$
$$=77\text {%}$$
$$\therefore$$ Percent of blue color $$=100\text {%}-77\text {%}$$ $$=23\text {%}$$
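Since the wedges always total 100%, the missing wedge is a single subtraction; a one-line check (my addition, using the color data above):

```python
known = {"Brown": 19, "Pink": 18, "Yellow": 7, "Red": 27, "Green": 6}  # percents
blue = 100 - sum(known.values())  # the full circle represents 100%
print(blue)  # 23
```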
#### The graph represents the percentage of different books in a school library. What is the percentage of novels in the school library?
A $$31\text {%}$$
B $$30\text {%}$$
C $$15\text {%}$$
D $$22\text {%}$$
The circle graph always represents the $$100\text{%}$$.
The percents from the circle graph for:
Comic books $$=10\text{%}$$
Science books $$=25\text{%}$$
Historical books $$=34\text{%}$$
The sum of the given data $$=10\text{%}+25\text{%}+34\text{%}$$
$$=69\text{%}$$
Percentage of Novels $$=100\text{%}\;-$$ (Sum of the given data)
$$=100\text{%}-69\text{%}$$
$$=31\text{%}$$
Thus, novels are $$31\text{%}$$.
Hence, option (A) is correct.
# Conversion of Percent into Fraction
• A percent can be converted to a fraction and vice-versa, as they both are parts of a whole.
• To convert a percent to a fraction, replace the $$\text {%}$$ symbol by $$\dfrac {1}{100}$$ and then simplify it.
For example:
Convert $$25\text {%}$$ to a fraction.
$$25\text {%}=\dfrac {25}{100}$$
• Since the fraction obtained is not in its simplest form, we simplify it by dividing the numerator and denominator by their greatest common factor.
• The greatest common factor of $$25$$ and $$100$$ is $$25$$.
So, $$\dfrac {25 \div25}{100\div25}=\dfrac {1}{4}$$
The fraction form of $$25\text {%}$$ is $$\dfrac {1}{4}$$.
#### Which one of the following options represents $$36\text {%}$$ as a fraction?
A $$\dfrac {2}{5}$$
B $$\dfrac {3}{4}$$
C $$\dfrac {9}{25}$$
D $$\dfrac {3}{8}$$
Given: $$36\text {%}$$
Replacing the $$\text{%}$$ sign with $$\dfrac {1}{100}$$,
$$36\text{%}=\dfrac {36}{100}$$
Since the fraction obtained is not in its simplest form, we divide the numerator and denominator by the G.C.F.
Fraction = $$\dfrac {36}{100}$$
The greatest common factor (G.C.F.) of $$36$$ and $$100=4$$
Dividing by the greatest common factor,
$$\dfrac {36\div4}{100\div4}=\dfrac {9}{25}$$
Thus,
$$36\text{%}=\dfrac {9}{25}$$
Hence, option (C) is correct.
# Percent as a Decimal
A decimal can be converted to a percent and vice versa, as they both are parts of a whole.
To convert a percent to a decimal, consider the following steps:
For example: Convert $$26\text {%}$$ into a decimal.
Step 1: Remove the $$\text {%}$$ sign, so we get $$26$$.
Step 2: Put the decimal point two places to the left.
Thus, $$26\text {%}=0.26$$
Consider the following two cases:
(i) To convert a single digit percent
For example: $$6\text {%}$$
Here, the percent doesn't have two digits, thus add a zero to the left of $$6$$, i.e.
$$6=06$$
Now, put the decimal point two places to the left.
(ii) To convert a percent having a decimal
For example: $$1.5\text {%}$$
Here, $$1.5$$ already has a decimal point. Thus, shift the decimal point two places to the left: $$1.5\text {%}=0.015$$.
#### Which one of the following options represents $$12.5\text {%}$$ as a decimal?
A $$1.25$$
B $$12.5$$
C $$0.125$$
D $$0.0125$$
Given: $$12.5\text {%}$$
After dropping the $$\text{%}$$ sign, we get:
$$12.5$$
Moving the decimal point two places to the left,
Thus,
$$12.5\text {%}=0.125$$
Hence, option (C) is correct.
# Ordering Percents, Decimals and Fractions
• Since percents, decimals and fractions are all parts of the whole, therefore, we can compare them and arrange them either in the order of least to greatest or greatest to least.
• To understand it clearly, consider an example:
• Arrange the following in the order of least to greatest: $$2\dfrac {1}{5},\;4.05$$ and $$37.5\text {%}$$.
For writing them in the given order, first we need to convert them into the same form, that may be percent, decimal or fraction form.
Here, we are converting them into the percent form.
Step 1: Convert $$2\dfrac {1}{5}$$ into a percent.
• We can also write $$2\dfrac {1}{5}$$ as an improper fraction.
$$2\dfrac {1}{5}=\dfrac {(2×5)+1}{5}=\dfrac {10+1}{5}=\dfrac {11}{5}$$
• Converting $$\dfrac {11}{5}$$ into a percent.
$$\dfrac {11}{5}=\dfrac {11×20}{5×20}=\dfrac {220}{100}=220\text{%}$$
Step 2: Convert $$4.05$$ into a percent.
• To convert a decimal to a percent, move the decimal point two places to the right.
Here, $$37.5\text {%}$$ doesn't need to be converted as it is already in percent form.
Now, we have $$220\text {%},\;405\text {%}$$ and $$37.5\text {%}$$ to compare.
Step-3: Arrange in the order of least to greatest.
$$37.5\text{%}<220\text{%}<405\text{%}$$
$$\therefore$$ The order from least to greatest is $$37.5\text {%}$$, $$220\text {%}$$, $$405\text {%}$$.
Step-4: Write the original numbers in the required order.
$$37.5\text{%},\;2\dfrac {1}{5},\;4.05$$
#### Which one of the given options represents the following numbers in the order of least to greatest? $$\dfrac {1}{5},\; 5\text{%}\;\,and\;0.5$$
A $$\dfrac {1}{5},\;0.5,\; 5\text{%}$$
B $$5\text{%},\;0.5,\;\dfrac {1}{5}$$
C $$5\text{%},\;\dfrac {1}{5},\;0.5$$
D $$\dfrac {1}{5},\;5\text{%},\;0.5$$
Given: $$\dfrac {1}{5},\; 5\text{%},\;0.5$$
To write them in the given order, first we have to convert them into the same form.
Here, we are converting them into percents.
Converting $$\dfrac {1}{5}$$ into a percent:
$$\dfrac {1}{5}=\dfrac {?}{100}$$
To get $$100$$ in the denominator, multiply both numerator and denominator by $$20$$.
$$\dfrac {1×20}{5×20}=\dfrac {20}{100}$$
Hence, $$\dfrac {1}{5}=\dfrac {20}{100}=20\text{%}$$
Now, converting $$0.5$$ into a percent.
Moving the decimal point two places to the right.
$$5\text{%}$$ doesn't need to be converted as it is already in percent form.
Thus, we have $$20\text{%}$$, $$5\text{%}$$ and $$50\text{%}$$ to compare.
So, the order from least to greatest is $$5\text{%},\;20\text{%},\;50\text{%}$$.
Writing the original numbers in the required order.
$$5\text{%},\;\dfrac {1}{5},\;0.5$$
Hence, option (C) is correct.
# Representation of a Percent in the form of a Fraction, a Ratio and a Decimal
• A percent, a ratio, a fraction, and a decimal can be converted into each other.
• Here, we are considering an example of writing $$36\text {%}$$ as a ratio, as a fraction, and as a decimal.
## (i) Percent as a Fraction
• To convert a percent to a fraction, we replace the % symbol by $$\dfrac {1}{100}$$, and then simplify the fraction, if necessary.
We know that a percent means out of $$100$$.
$$\therefore$$ $$36\text {%}=36$$ out of $$100$$
$$36\text {%}=\dfrac {36}{100}$$
Since the fraction is not in simplified form, so we divide it by the greatest common factor (G.C.F.).
The G.C.F. of $$36$$ and $$100$$ is $$4$$.
Thus, $$\dfrac {36\div4}{100\div4}=\dfrac {9}{25}$$
## (ii) Percent as a Ratio
We have $$36\text {%}=\dfrac {36}{100}$$
[$$a:b$$ is written as $$\dfrac {a}{b}$$]
$$\therefore\;\;\dfrac {36}{100}=36:100$$
$$=36\text { out of } 100$$
## (iii) Percent as a decimal
To convert a percent to a decimal, we first remove the % sign.
$$36\text {%}=36$$
Now, place the decimal point two places to the left:
$$36\text {%}=0.36$$
Hence, we can write all the forms together as:
$$36\text {%}=\dfrac {36}{100}=36:100=0.36$$
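The three conversions are mechanical enough to script. Here is a small sketch (my addition, not part of the lesson) using Python's standard fractions module; note it reports the ratio in lowest terms (9 : 25) rather than the lesson's unsimplified 36 : 100:

```python
from fractions import Fraction

def percent_forms(percent):
    """Express a percent as a simplified fraction, a ratio, and a decimal."""
    frac = Fraction(str(percent)) / 100     # replace % with 1/100, auto-simplified
    ratio = f"{frac.numerator} : {frac.denominator}"
    decimal = float(percent) / 100          # decimal point two places to the left
    return frac, ratio, decimal

print(percent_forms(36))    # (Fraction(9, 25), '9 : 25', 0.36)
print(percent_forms(7.5))   # (Fraction(3, 40), '3 : 40', 0.075)
```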
#### A table is given (dashes mark the missing entries):

| Percent | Fraction | Ratio | Decimal |
|---|---|---|---|
| 60% | - | - | 0.60 |
| - | - | 3 : 40 | - |
| - | $$\dfrac {4}{5}$$ | - | 0.80 |
| 25% | - | - | 0.25 |

Which one of the following options represents the correct table?

A
| Percent | Fraction | Ratio | Decimal |
|---|---|---|---|
| 60% | $$\dfrac {2}{5}$$ | 2 : 5 | 0.60 |
| 35% | $$\dfrac {3}{40}$$ | 3 : 40 | 3.5 |
| 80% | $$\dfrac {4}{5}$$ | 4 : 5 | 0.80 |
| 25% | $$\dfrac {3}{4}$$ | 3 : 4 | 0.25 |

B
| Percent | Fraction | Ratio | Decimal |
|---|---|---|---|
| 60% | $$\dfrac {3}{5}$$ | 5 : 3 | 0.60 |
| 40% | $$\dfrac {3}{40}$$ | 3 : 40 | 0.30 |
| 5% | $$\dfrac {4}{5}$$ | 5 : 4 | 0.80 |
| 25% | $$\dfrac {5}{2}$$ | 5 : 2 | 0.25 |

C
| Percent | Fraction | Ratio | Decimal |
|---|---|---|---|
| 60% | $$\dfrac {3}{5}$$ | 3 : 5 | 0.60 |
| 7.5% | $$\dfrac {3}{40}$$ | 3 : 40 | 0.075 |
| 80% | $$\dfrac {4}{5}$$ | 4 : 5 | 0.80 |
| 25% | $$\dfrac {1}{4}$$ | 1 : 4 | 0.25 |

D
| Percent | Fraction | Ratio | Decimal |
|---|---|---|---|
| 60% | $$\dfrac {3}{2}$$ | 2 : 3 | 0.60 |
| 30% | $$\dfrac {3}{40}$$ | 3 : 40 | 0.40 |
| 40% | $$\dfrac {4}{5}$$ | 8 : 100 | 0.80 |
| 25% | $$\dfrac {1}{5}$$ | 5 : 1 | 0.25 |
In the given table, one form of a number is given and we need to write it in other forms.
Completing the first row of the table:
Given: $$60\text {% and }\,0.60$$
Writing $$60\text {%}$$ in fraction form:
$$60\text {% }=60$$ out of $$100$$
$$\therefore$$ $$60\text {% }=\dfrac {60}{100}$$
Simplifying the fraction,
$$\dfrac {60}{100}=\dfrac {3}{5}$$
$$\therefore$$ $$60\text {% }=\dfrac {3}{5}$$
Writing $$\dfrac {3}{5}$$ in the form of a ratio:
$$\dfrac {3}{5}=3:5$$
Hence, we can write
$$60\text{%}=\dfrac {3}{5}=3:5=0.60$$
Completing the second row of the table:
Given: $$3:40$$
Writing $$3:40$$ in fraction form:
$$3:40=\dfrac {3}{40}$$
To write $$\dfrac {3}{40}$$ in percent form, we calculate the equivalent fraction having $$100$$ as the denominator.
$$\dfrac {3}{40}=\dfrac {?}{100}$$ (to make the denominator $$100$$, $$3$$ and $$40$$ should be multiplied by $$2.5$$)
$$\dfrac {3×2.5}{40×2.5}=\dfrac {7.5}{100}$$
Thus,
$$\dfrac {3}{40}=\dfrac {7.5}{100}$$
$$\dfrac {7.5}{100}=7.5$$ out of $$100$$
$$\therefore\;\dfrac {7.5}{100}=7.5\text{%}$$ $$\left ( \text {%}=\dfrac {1}{100} \right)$$
$$\dfrac {3}{40}=7.5\text{%}$$
Writing $$7.5\text{%}$$ in decimal form, by putting the decimal point two places to the left.
$$7.5\text{%}=0.075$$
$$7.5\text{%}=\dfrac {3}{40}=3:40=0.075$$
Completing the third row of the table
Given: $$\dfrac {4}{5}$$ and $$0.80$$
Writing $$0.80$$ in percent form:
Moving the decimal point two places to the right and putting the % sign at the end.
$$\Rightarrow 0.80=80\text{%}$$
Writing $$\dfrac {4}{5}$$ in the form of a ratio:
$$\dfrac {4}{5}=4:5$$
Hence, we can write
$$80\text{%}=\dfrac {4}{5}=4:5=0.80$$
Completing the fourth row of the table
Given: $$25\text{%}$$ and $$0.25$$
Writing $$25\text{%}$$ in fraction form:
$$25\text{%}$$ = $$25$$ out of $$100$$
$$=\dfrac {25}{100}$$
Simplifying the fraction obtained,
$$\dfrac {25}{100}=\dfrac {1}{4}$$
Thus,
$$25\text{%}=\dfrac {1}{4}$$
Writing $$\dfrac {1}{4}$$ in the form of a ratio:
$$\dfrac {1}{4}=1:4$$
Hence, we can write
$$25\text{%}=\dfrac {1}{4}=1:4=0.25$$
The complete table is:

| Percent | Fraction | Ratio | Decimal |
|---|---|---|---|
| 60% | $$\dfrac {3}{5}$$ | 3 : 5 | 0.60 |
| 7.5% | $$\dfrac {3}{40}$$ | 3 : 40 | 0.075 |
| 80% | $$\dfrac {4}{5}$$ | 4 : 5 | 0.80 |
| 25% | $$\dfrac {1}{4}$$ | 1 : 4 | 0.25 |
Hence, option (C) is correct.
---
https://mathoverflow.net/questions/158007/clarification-of-gabais-exposition-of-murasugi-sums-in-the-murasugi-sum-is-a-n
# Clarification of Gabai's exposition of Murasugi Sums in 'the Murasugi sum is a natural geometric operation'
Gabai states that the Murasugi sum of two Hopf bands yields a spanning surface of either the figure eight knot, the trefoil knot, or a link of three components. Figure one shows two oppositely twisted Hopf bands Murasugi summed together to give the figure eight knot, I believe. If I Murasugi sum two Hopf bands with twists in the same sense, using the summing disk in the same way as shown in figure one, then I get a spanning surface for the trefoil. Now if I rotate one of the Hopf links in the diagram by 90 degrees and Murasugi sum in this way, regardless of whether the two links are twisted in like or opposite sense, I get the spanning surface of a link of two components, not three. Is it a typo or have I not grasped the operation? Probably the latter, as the resulting surface seems to branch...
Gabai illustrates the Murasugi sum with "Figure 1":
A sub-question that might help clear this up is whether the disk in figure one is a 2-gon (reasoning that the two edges that are an arc component of the link are what is being counted), or a 4-gon (counting the two link edges and the two edges that are interior to the spanning surfaces). It seems to me that it must be a 4-gon, as Gabai states that when the disk is a 2-gon the Murasugi sum is known as connected sum. What is depicted doesn't accord with my traditional understanding of a connected sum. However, if summing as a connect sum were what was really meant in the case of the 2-gon, then this would avoid the problem of branching mentioned above in the creation of a 2-component link. It would be at odds with condition one of definition 0.1: a surface is the Murasugi sum of two subsurfaces if it is the union of these two surfaces identified over an embedded disk D? (Or have I misconstrued the notation?) Connect sum seems to dispose of the disk...
I have tried to augment my understanding with Murasugi's discussion of S-surfaces given in 'On certain subgroup of the group of an alternating link'. I have also tried to think of where else it is possible to place the disk upon which we perform this generalised plumbing; however, the 'translational symmetry' of the Hopf band means that there is only one 'equivalence class' of locations we can place it.
How do I create a spanning surface for a 3-component link as the Murasugi sum of the surfaces of two Hopf links?
• I edited the question and gave an answer, but this seems to me to be a math.SE question, because it's really far from research level. – Daniel Moskovich Feb 19 '14 at 4:50
• Noted. Will address such questions to appropriate site. Apologies and thanks. – Lubtschenko Feb 20 '14 at 2:21
---
https://aimacode.github.io/aima-exercises/knowledge-logic-exercises/ex_23/
### Artificial Intelligence: AIMA Exercises
Consider the following sentence:
$$[({Food} \Rightarrow {Party}) \lor ({Drinks} \Rightarrow {Party})] \Rightarrow [({Food} \land {Drinks}) \Rightarrow {Party}]\ .$$
1. Determine, using enumeration, whether this sentence is valid, satisfiable (but not valid), or unsatisfiable.
2. Convert the left-hand and right-hand sides of the main implication into CNF, showing each step, and explain how the results confirm your answer to (a).
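For part 1, enumeration means checking all $2^3$ truth assignments. Here is a short sketch of that check (my addition, not part of the exercise text):

```python
from itertools import product

def implies(p, q):
    return (not p) or q

def sentence(food, drinks, party):
    lhs = implies(food, party) or implies(drinks, party)
    rhs = implies(food and drinks, party)
    return implies(lhs, rhs)

results = [sentence(*vals) for vals in product([False, True], repeat=3)]
if all(results):
    print("valid")
elif any(results):
    print("satisfiable, but not valid")
else:
    print("unsatisfiable")
```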
---
https://bastibe.de/tag-photography.html
25 Mar 2019
# On camera sensor sizes
A common internet wisdom about photography is that bigger camera sensors capture more light. So if you want to work in low light, you need a full frame camera, and a bigger sensor is always better. I have struggled with this a lot, though, because it doesn't make sense. Lenses can focus light on any surface, so why should the surface size matter?
The answer turned out to be… disappointing. Bigger sensors allow for larger (practical) apertures and lower base ISO. But less noise for the same picture is simply impossible with the same sensor technology, because that's not how physics works. Let me explain.
Let's talk about three sensor sizes, full frame (FF), APS-C, and micro four thirds (MFT). For the sake of discussion, let's assume that these sensor sizes are always a factor of $$\sqrt{2}$$ of each other, which is close enough to the truth. So FF is twice the area of APS-C, which is twice the area of MFT. Let's also assume that our hypothetical cameras at these sizes use sensors with the same resolution, and using the same technology. With that out of the way, what does it mean to use a smaller sensor?
If you shrink the sensor area by a factor of two, every pixel gets equally smaller, and receives less light (with the same lens). To fix this, and to account for the different field of view due to cropping, we zoom out, dividing the focal length by $\sqrt{2}$. So now a FF 35mm f4 lens will get the same field of view and brightness as an APS-C 23mm f4 lens, or a 17mm MFT f4 lens, all at the same ISO number. The only difference is that, due to the smaller sensor, MFT will be twice as noisy as APS-C, which is twice as noisy as FF. This seems like a clear win for FF, right?
As an aside, isn't it cool that we actually get the same brightness, as long as ISO and f-numbers are kept the same? That's what they are designed to say: Equal ISO and f-number means equal brightness! (But not equal noise, or, as we will see, equal depth of field).
Because here's the catch: Even though the f-number is the same, the physical size of the aperture is bigger on the bigger sensor! This makes sense, because to get the same brightness on a bigger sensor area, you need more light. And to get more light, you need a bigger aperture. This also means that the depth of field on the FF sensor is half that of the APS-C sensor, which in turn is half that of the MFT sensor.
So let's account for that, too, and give each camera a lens with the same physical aperture size as well as field of view, and stop down ISO to account for the increase in brightness: FF 35mm f4 ISO 1600 then becomes APS-C 23mm f2.8 ISO 800 and MFT 17mm f2 ISO 400. These combinations now actually have the same field of view, same brightness, same depth of field, and, you guessed it, same noise. If you account for field of view, there is no advantage to FF whatsoever. You can get the very same picture on a big sensor as on a small sensor, the only difference being how much you are willing to carry and pay for it.
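To make the equivalence arithmetic concrete, here is a small Python sketch (my code, not from the post; it uses the post's idealized crop factors of $\sqrt{2}$ per step rather than the exact 1.5x/2x values):

```python
import math

# Idealized crop factors, following the post's simplification.
CROP = {"FF": 1.0, "APS-C": math.sqrt(2), "MFT": 2.0}

def equivalent(focal_mm, f_number, iso, source, target):
    """Rescale focal length, f-number, and ISO so that field of view,
    brightness, depth of field, and noise all stay the same."""
    ratio = CROP[target] / CROP[source]
    return focal_mm / ratio, f_number / ratio, iso / ratio**2

print(equivalent(35, 4.0, 1600, "FF", "APS-C"))  # ~(24.7mm, f2.8, ISO 800)
print(equivalent(35, 4.0, 1600, "FF", "MFT"))    # (17.5mm, f2.0, ISO 400)
```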
OK, that's not quite true: You can get FF lenses at f1 (equivalent to f0.7 on APS-C or f0.5 on MFT), which are just not available on smaller sensors. But have you used such lenses? At this point, the area in focus is literally razor-thin, and focusing becomes terrifyingly difficult. You might also notice that wider-aperture lenses are generally bigger. They have to be, to capture more light. By the same token, an f1.4 APS-C lens will be the same size as an f2 FF lens (because it is in fact mostly identical). And FF offers smaller base ISOs. If you need the minimum amount of noise, ISO 100 on FF would be equivalent to ISO 25 on MFT, which you just can't get there.
TL;DR: Bigger sensors afford bigger apertures, with all their associated downsides. But they do not magically reduce image noise (everything else being equal). Also, cameras are surprisingly complex beasts.
13 Mar 2019
Normally, when you take a picture of something too bright, you get bloom: An all-consuming brightness that plunges everything around it into pure whiteness. Ugly.
But if the light source is reeeally tiny, and your aperture is teeeensy as well, you get something else: sunstars
This particular sunstar has fourteen corners, and therefore comes from a seven-bladed aperture (in my Fuji XC 16-50). It happens because tiny apertures are no longer perfectly circular, but instead, in my case, heptagonal, and therefore bloom more in some directions than in others. The effect is kind of beautiful.
In this picture, the sun was just barely peeking into the edge between the tree and the building, and my aperture was set to its smallest setting, f22. I actually wanted to capture the raindrops on the branches, which I largely failed at. In the end, the picture didn't turn out very pretty, but at least I got some fine sunstars!
24 Feb 2019
# What I learned about Amateur Photography
I am an amateur, as in "lover of", photography. I love cameras as tactile devices, I love how photography makes me consider the world as art, how that little viewfinder can reveal unknown beauty in well-known places or people. And I love looking at my photos, and remembering vacations and meaningful moments. For me, photography is about finding beauty, and capturing memories.
However, most of the writing on photography seems to be focused not on my needs, but the needs of professional photographers: A super competitive field of visual artists who compete on image quality and novelty, and use crazy and expensive gear. I have found many of their lessons not applicable to my amateur needs, or even actively detrimental:
## Embrace the noise
Many pros limit their ISO numbers, because high-ISO noise is ugly. Which it is. But you know what is even worse for an amateur? Not having that picture of my baby, because it was too dark.
So I set my ISO to unlimited, and reduce my shutter speed and aperture so I actually have a chance of capturing my fast-moving toddler. And I embrace the ensuing noise. Some of my favorite pictures look unbearably noisy on my 4k screen, but look just fine when printed, or on a smartphone (the two most important mediums in the world). Because of this, I find noise reduction rarely worth the effort. Color noise reduction works okay, but anything else looks worse than the problem. I vastly prefer a sharp, noisy shot to a mushy denoised shot with no detail.
## Step it down
Another common Pro argument: Wider apertures are better. Which they are, at capturing light, and blurring the background. But as an amateur, a wide-aperture super-shallow depth of field just makes me miss shots. At f1.8, the area in focus is barely a few centimetres deep. I missed too many shots because I accidentally focused on the nose instead of the eye. So, in the absence of studio lighting, and arbitrarily many retries, I prefer to step it down and live with the noise, if need be.
As a fun corollary, all those fancy prime lenses with crazy-wide apertures, they are simply wasted on me. Anything beyond, say, f2.8, is not something I need to spend money on. Also, lenses are noticeably sharper when stepped down! I have been disappointed with the sharpness of a number of shots because I forgot to step it down. Nowadays, I typically shoot at f5.6 or f8, and only go wider if I actually need to, because of lack of light, or if I specifically want a blurry background.
## Wide-ish lenses are easier
I wondered, for a long time, why pros seem to like long-ish lenses for portraits: The answer is, because longer lenses have a shallower depth of field (for the same f-number), and pro photographers love their blurry backgrounds. But as I said before, that is not for me.
Instead, I prefer wide-ish lenses. If something is too small on a wide-ish lens, it is usually no problem to get a bit closer or to crop afterwards. If something is too big on a long lens though, backing off is often not possible, and you miss your shot.[1] Plus, wide-ish angles don't blur as much from shaky hands, and are more compact. They often focus more closely, too, which is a huge bonus if you want to take pictures of a toddler.
I have tried, unsuccessfully, a 50mm and 35mm prime (APS-C). Now I own a 27mm pancake prime, which I find perfect: long enough to get nice portraits, but still wide enough to capture a landscape.
Be careful with super-wide angle lenses, though. Even though they are a ton of fun, I have found anything below 16mm to be very difficult to use effectively. It's just too easy to get that fisheye-distorted look, especially near the sides. That distorted look, by the way, is caused by being too close, not by lens distortion. The same thing happens if you take a longer lens or a cell phone and get too close. Just don't do that.
## Gear
Pros use the biggest sensor they can get, to get the best image quality possible. But that also makes everything else much more cumbersome: Bigger sensors mean bigger and heavier bodies. And bigger and heavier lenses. And shallower depth of field (see above). And smaller focal ranges, hence more lens changing. And, not least of all, much, much, much higher prices. It is not for me.
To some extent, the same goes for different quality levels: Personally, I have found entry-level APS-C mirrorless interchangeable lens cameras a good compromise. These entry-level plastic lenses and cameras are usually smaller, lighter, and cheaper than their higher-end brethren, but compromise on robustness and aperture sizes (e.g. Fuji X-E2 375g/€250 + Fuji XC 16-50mm, 195g/€150 vs. Fuji X-T1 450g/€250 + Fuji XF 18-55mm, 300g/€250 vs. my old Nikon gear). And for my everyday camera, I'd take a smaller, pocketable camera over a "better", bigger one any day.
Haptics are important, too. I have seen great cameras and lenses that just didn't feel good in my hand. Which meant I wouldn't ever take them with me, and wouldn't take any pictures with them. I now go try stuff in the store before I buy anything. This has talked me out of a number of unnecessary purchases, internet consensus notwithstanding.
And finally, I buy used gear. Cameras and lenses depreciate about 70% within the first two years, without losing any quality. A great camera from two years ago is still a great camera, but costs a third of the original price, and can be resold without loss. And is better for the environment. Win-win-win.
## TL;DR
I have found many "common" rules about photography useless for my amateur needs. I have found cheap, plastic, used gear more useful than pro gear. I have found noisy, small-aperture pictures to be better at capturing important memories than clean, "professional" ones. I have found haptics, size and weight to be much more important than ultimate image quality (within reason).
The funny thing is, you don't find this kind of information on the internet, since most review websites seem to focus on the professional viewpoint, even for gear that is clearly meant for amateurs like me.
## Footnotes:
[1] You need to back off much farther on a long lens than you have to move closer on a wide lens.
|
2019-03-27 02:43:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39592796564102173, "perplexity": 2022.736760872594}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912207618.95/warc/CC-MAIN-20190327020750-20190327042750-00475.warc.gz"}
|
https://de.maplesoft.com/support/help/maplesim/view.aspx?path=Student%2FLinearAlgebra%2FBandMatrix
|
BandMatrix - Maple Help
Student[LinearAlgebra]
BandMatrix
construct a band Matrix
Calling Sequence: BandMatrix(L, n, options)
Parameters
L - list of lists of scalars, list of scalars, or Vector of scalars; the diagonals of the band Matrix
n - (optional) non-negative integer; the number of subdiagonals
options - (optional) parameters; for a complete list, see LinearAlgebra[BandMatrix]
Description
• The BandMatrix(L) command constructs a band Matrix from the data provided by L.
• If L is a list of lists, then each list element in L is used to initialize a diagonal. The $n+1$st element of L is placed along the main diagonal. (If L has fewer than n+1 elements, it is automatically extended by [0]'s.) The other diagonals are placed in relation to it: ${L}_{n-j+1}$ is placed in the jth subdiagonal for $j=1..n$ and ${L}_{n+k+1}$ is placed in the kth superdiagonal for $k=1..\mathrm{nops}\left(L\right)-n-1$. If any list element is shorter than the length of the diagonal where it is placed, the remaining entries are filled with 0.
If n is omitted in the calling sequence, BandMatrix attempts to place an equal number of sub- and super-diagonals into the resulting Matrix by using $n=\mathrm{iquo}\left(\mathrm{nops}\left(L\right),2\right)$ subdiagonals.
• If L is a list or Vector of scalars, its elements are used to initialize all the entries of the corresponding diagonals. In this case, parameter n must be specified in the calling sequence. If the row dimension r is not specified, it defaults to n+1. If the column dimension is not specified, it defaults to the row dimension. The jth subdiagonal is filled with L[n-j+1] for j = 1 .. n. (If L has fewer than n+1 elements, it is automatically 0-extended.) The main diagonal is filled with L[n + 1]. The kth superdiagonal is filled with L[n + k + 1] for k = 1 .. nops(L)- n - 1.
Examples
> $\mathrm{with}\left(\mathrm{Student}\left[\mathrm{LinearAlgebra}\right]\right):$
> $\mathrm{LL}≔\left[\left[w,w\right],\left[x,x,x\right],\left[y,y,y\right],\left[z,z\right]\right]:$
> $\mathrm{BandMatrix}\left(\mathrm{LL}\right)$
$\left[\begin{array}{ccc}{y}& {z}& {0}\\ {x}& {y}& {z}\\ {w}& {x}& {y}\\ {0}& {w}& {x}\end{array}\right]$ (1)
> $\mathrm{BandMatrix}\left(\mathrm{LL},1\right)$
$\left[\begin{array}{cccc}{x}& {y}& {z}& {0}\\ {w}& {x}& {y}& {z}\\ {0}& {w}& {x}& {y}\end{array}\right]$ (2)
> $\mathrm{BandMatrix}\left(⟨1,2⟩,3\right)$
$\left[\begin{array}{cccc}{0}& {0}& {0}& {0}\\ {0}& {0}& {0}& {0}\\ {2}& {0}& {0}& {0}\\ {1}& {2}& {0}& {0}\end{array}\right]$ (3)
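For readers outside Maple, a rough NumPy analogue of the same construction (my sketch; restricted to square, numeric matrices, unlike Maple's rectangular symbolic examples above):

```python
import numpy as np

def band_matrix(diagonals, n_sub, size):
    """Square band matrix: diagonals[n_sub] fills the main diagonal,
    earlier entries fill subdiagonals, later entries superdiagonals."""
    A = np.zeros((size, size), dtype=int)
    for i, value in enumerate(diagonals):
        offset = i - n_sub  # negative offsets are subdiagonals
        A += np.diag([value] * (size - abs(offset)), k=offset)
    return A

# Mirrors example (3) above: the list [1, 2] is zero-extended, putting
# 1 on the third subdiagonal and 2 on the second subdiagonal.
print(band_matrix([1, 2, 0, 0], n_sub=3, size=4))
```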
|
2022-06-28 14:32:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 14, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9752102494239807, "perplexity": 1371.0209114404638}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103556871.29/warc/CC-MAIN-20220628142305-20220628172305-00013.warc.gz"}
|
https://dsp.stackexchange.com/questions/36783/feature-vector-from-an-audio-signal
|
# Feature Vector from an audio signal [closed]
I want to develop a feature vector from an audio input.
Up to now, I have identified fundamental frequency, max phonation time, timbre to be among the key features to be identified.
Can someone please confirm whether it will be possible to extract these features from the audio?
• It's possible. Was that really your question? – Marcus Müller Jan 8 '17 at 8:32
• @MarcusMüller well, I want to know whether it is possible to obtain an exact value for each audio sample. – mgw2016 Jan 8 '17 at 15:25
• no. A single sample is just a number. It can't have something like a frequency: What is the frequency of $0.4$? – Marcus Müller Jan 8 '17 at 15:35
• Things like timbre and fundamental frequency only make sense when considered for a sequence of samples – but I'm not telling you anything you don't know, I guess; I really just try to find out what your precise question is. – Marcus Müller Jan 8 '17 at 15:38
• @MarcusMüller I meant an audio sample of duration like 20 - 40 ms :) Not a single sample! Of course it's a sequence of samples..... Sorry for the bad definition! – mgw2016 Jan 9 '17 at 7:42
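For what it's worth, here is a rough Python/librosa sketch of per-frame feature extraction along these lines (the library calls are real, but the file name and the parameter choices are my assumptions, not something from the question):

```python
import librosa
import numpy as np

# "voice.wav" is a placeholder for the recording in question.
y, sr = librosa.load("voice.wav", sr=None)

# Fundamental frequency estimate per analysis frame (YIN algorithm);
# the fmin/fmax range here is a guess suited to speech.
f0 = librosa.yin(y, fmin=65, fmax=500, sr=sr)

# A crude per-frame timbre descriptor: 13 MFCCs.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

# Collapse to one fixed-length feature vector for the whole recording.
features = np.concatenate([[np.median(f0)], mfcc.mean(axis=1)])
print(features.shape)  # (14,)
```

Max phonation time is a different kind of quantity: it is typically measured from the recording protocol (how long a sustained vowel lasts, i.e. the duration of the voiced segment) rather than extracted as a per-frame feature.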
|
2020-06-05 19:52:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30869153141975403, "perplexity": 876.9935350547519}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348502204.93/warc/CC-MAIN-20200605174158-20200605204158-00311.warc.gz"}
|
https://magento.stackexchange.com/questions/258592/magento-2-override-unit-phtml-and-row-phtml-in-magento-checkout
|
# Magento 2 Override Unit.phtml and Row.phtml in Magento_Checkout
I am trying to override both unit.phtml and row.phtml within my Magento_Checkout module. I am doing everything that I would normally do, but these files don't seem to be overridden like the other phtml files.
The file is being edited locally and then uploaded to Magento_Checkout\templates\item\price, but none of the changes I'm making seem to make a difference.
My goal is to remove the Incl Vat from the cart page and just leave Excl Vat as the standard price.
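Not from the thread, but for reference: a theme-level override (assuming a hypothetical theme named Vendor/theme, and mirroring the module path given in the question) would normally live at paths like these, followed by a `bin/magento cache:flush`:

```
app/design/frontend/Vendor/theme/Magento_Checkout/templates/item/price/unit.phtml
app/design/frontend/Vendor/theme/Magento_Checkout/templates/item/price/row.phtml
```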
• filenames are case-sensitive – Philipp Sander Jan 21 at 14:46
• I have the same file structure on the server as I do locally: the same structure as the vendor files, just without touching the vendor directory. – A. Fletcher Jan 21 at 14:48
• you files start with a capital letter. they must named exactly like the ones in the vendor directory – Philipp Sander Jan 21 at 14:50
• I assume you cleaned the caches.... – Philipp Sander Jan 21 at 14:52
• Sorry yeah, that's just my mis-spelling on here. The files themselves are lower case, the same as the vendor files. And yes, I've cleared all the cache files. I can override other files; just these two don't seem to be working – A. Fletcher Jan 21 at 14:55
|
2019-08-23 23:06:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3749854862689972, "perplexity": 3530.2158865413817}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027319082.81/warc/CC-MAIN-20190823214536-20190824000536-00498.warc.gz"}
|
https://sites.millersville.edu/bikenaga/linear-algebra/rings/rings.html
|
# Commutative Rings and Fields
Different algebraic systems are used in linear algebra. The most important are commutative rings with identity and fields. I'll begin by stating the axioms for a ring. They will look abstract, because they are! But don't worry --- lots of examples will follow.
Definition. A ring is a set R with two binary operations, addition (denoted $+$) and multiplication (denoted $\cdot$). These operations satisfy the following axioms:
1. Addition is associative: If $x, y, z \in R$, then $$(x + y) + z = x + (y + z).$$
2. There is an identity for addition, denoted 0. It satisfies $$0 + x = x = x + 0 \quad\text{for all } x \in R.$$
3. Every element of R has an additive inverse. That is, if $x \in R$, there is an element $-x \in R$ which satisfies $$x + (-x) = 0 = (-x) + x.$$
4. Addition is commutative: If $x, y \in R$, then $$x + y = y + x.$$
5. Multiplication is associative: If $x, y, z \in R$, then $$(x \cdot y) \cdot z = x \cdot (y \cdot z).$$
6. Multiplication distributes over addition: If $x, y, z \in R$, then $$x \cdot (y + z) = x \cdot y + x \cdot z \quad\text{and}\quad (y + z) \cdot x = y \cdot x + z \cdot x.$$
It's common to drop the "$\cdot$" in "$x \cdot y$" and just write "$xy$". I'll do this except where the "$\cdot$" is needed for clarity.
As a convenience, we can define subtraction using additive inverses. If R is a ring and $x, y \in R$, then $x - y$ is defined to be $x + (-y)$. That is, subtraction is defined as adding the additive inverse.
You might notice that we now have three of the usual four arithmetic operations: Addition, subtraction, and multiplication. We don't necessarily have a "division operation" in a ring; we'll discuss this later.
If you've never seen axioms for a mathematical structure laid out like this, you might wonder: What am I supposed to do? Do I memorize these? Actually, if you look at the axioms, they say things that are "obvious" from your experience. For example, Axiom 4 says addition is commutative. So as an example for real numbers, $$2 + 3 = 3 + 2.$$
You can see that, as abstract as they look, these axioms are not that big a deal. But when you do mathematics carefully, you have to be precise about what the rules are. You will not have much to do in this course with writing proofs from these axioms, since that belongs in an abstract algebra course. A good rule of thumb might be to try to understand by example what an axiom says. And if it seems "obvious" or "familiar" based on your experience, don't worry about it. Where you should pay special attention is when things don't work in the way you expect.
If you look at the axioms carefully, you might notice that some familiar properties of multiplication are missing. We will single them out next.
Definition. A ring R is commutative if the multiplication is commutative. That is, for all $x, y \in R$, $$x \cdot y = y \cdot x.$$
Note: The word "commutative" in the phrase "commutative ring" always refers to multiplication --- since addition is always assumed to be commutative, by Axiom 4.
Definition. A ring R is a ring with identity if there is an identity for multiplication. That is, there is an element $1 \in R$ such that $$1 \cdot x = x = x \cdot 1 \quad\text{for all } x \in R.$$
Note: The word "identity" in the phrase "ring with identity" always refers to an identity for multiplication --- since there is always an identity for addition (called "0"), by Axiom 2.
A commutative ring which has an identity element is called a commutative ring with identity.
In a ring with identity, you usually also assume that $1 \ne 0$. (Nothing stated so far requires this, so you have to take it as an axiom.) In fact, you can show that if $1 = 0$ in a ring R, then R consists of 0 alone --- which means that it's not a very interesting ring!
Here are some number systems you're familiar with:
(a) The integers $\mathbb{Z}$.
(b) The rational numbers $\mathbb{Q}$.
(c) The real numbers $\mathbb{R}$.
(d) The complex numbers $\mathbb{C}$.
Each of these is a commutative ring with identity. In fact, all of them except $\mathbb{Z}$ are fields. I'll discuss fields below.
By the way, it's conventional to use a capital letter with the vertical or diagonal stroke "doubled" (as in $\mathbb{R}$ or $\mathbb{Z}$) to stand for number systems. It is how you would write them by hand. If you're typing them, you usually use a special font; a common one is called Blackboard Bold.
You might wonder why I singled out the commutativity and identity axioms, and didn't just make them part of the definition of a ring. (Actually, many people add the identity axiom to the definition of a ring automatically.) In fact, there are situations in mathematics where you deal with rings which aren't commutative, or (less often) lack an identity element. We'll see, for instance, that matrix multiplication is usually not commutative.
The idea is to write proofs using exactly the properties you need. In that way, the things that you prove can be used in a wider variety of situations. Suppose I had included commutativity of multiplication in the definition of a ring. Then if I proved something about rings, you would not know whether it applied to noncommutative rings without carefully checking the proof to tell whether commutativity was used or not. If you really need a ring to be commutative in order to prove something, it is better to state that assumption explicitly, so everyone knows not to assume your result holds for noncommutative rings.
The next example (or collection of examples) of rings may not be familiar to you. These rings are the integers mod n. For these rings, n will denote an integer. Actually, n can be any integer if I modify the discussion a little, but to keep things simple, I'll take $n \ge 2$.
The integers mod n is the set $$\mathbb{Z}_n = \{0, 1, 2, \ldots, n - 1\}.$$
n is called the modulus.
For example, $$\mathbb{Z}_6 = \{0, 1, 2, 3, 4, 5\}.$$
$\mathbb{Z}_n$ becomes a commutative ring with identity under the operations of addition mod n and multiplication mod n. I won't prove this; I'll just show you how to work with these operations, which is sufficient for a linear algebra course. You'll see a rigorous treatment of $\mathbb{Z}_n$ in abstract algebra.
(a) To add x and y mod n, add them as integers to get $x + y$. Then divide $x + y$ by n and take the remainder --- call it r. Then $x + y = r$ in $\mathbb{Z}_n$.
(b) To multiply x and y mod n, multiply them as integers to get $x \cdot y$. Then divide $x \cdot y$ by n and take the remainder --- call it r. Then $x \cdot y = r$ in $\mathbb{Z}_n$.
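In code, these two rules are exactly what Python's `%` operator provides; a small illustrative sketch (mine, not from the notes):

```python
n = 6

def add_mod(x, y):
    return (x + y) % n  # add as integers, then take the remainder mod n

def mul_mod(x, y):
    return (x * y) % n  # multiply as integers, then take the remainder mod n

print(add_mod(4, 5))  # 3
print(mul_mod(4, 5))  # 2
print(-14 % 6)        # 4: Python's % already returns a value in {0, ..., 5}
```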
Since modular arithmetic may be unfamiliar to you, let's do an extended example. Suppose $n = 6$, so the ring is $\mathbb{Z}_6$.
$$4 + 5 = 9 \quad\text{and}\quad 9 = 1 \cdot 6 + 3.$$ Hence, $4 + 5 = 3$ in $\mathbb{Z}_6$.
You can picture arithmetic mod 6 this way:
You count around the circle clockwise, but when you get to where "6" would be, you're back to 0. To see how $4 + 5 = 3$ works, start at 0. Count 4 numbers clockwise to get to 4, then from there, count 5 numbers clockwise. You'll find yourself at 3.
Here is multiplication: $$4 \cdot 5 = 20 \quad\text{and}\quad 20 = 3 \cdot 6 + 2.$$
Hence, $4 \cdot 5 = 2$ in $\mathbb{Z}_6$.
You can see that as you do computations, you might in the middle get numbers outside $\{0, 1, 2, 3, 4, 5\}$. But when you divide by 6 and take the remainder, you'll always wind up with a number in $\{0, 1, 2, 3, 4, 5\}$.
Try it with a big number: $$80 = 13 \cdot 6 + 2, \quad\text{so}\quad 80 = 2 \text{ in } \mathbb{Z}_6.$$
Using our circle picture, if you start at 0 and do 80 steps clockwise around the circle, you'll find yourself at 2. (Maybe you don't have the patience to actually do this!) When we divide by 6 then "discard" the multiples of 6, that is like the fact that you return to 0 on the circle after 6 steps.
Notice that if you start with a number that is divisible by 6, you get a remainder of 0: $$24 = 4 \cdot 6 + 0, \quad\text{so}\quad 24 = 0 \text{ in } \mathbb{Z}_6.$$
We see that in doing arithmetic mod 6, multiples of 6 are equal to 0. And in general, in doing arithmetic mod n, multiples of n are equal to 0.
Other arithmetic operations work as you'd expect. For example, $$3^2 + 4 = 13 = 2 \cdot 6 + 1.$$
Hence, $3^2 + 4 = 1$ in $\mathbb{Z}_6$.
Negative numbers in $\mathbb{Z}_6$ are additive inverses. Thus, $-2 = 4$ in $\mathbb{Z}_6$, because $2 + 4 = 6 = 0$. To deal with negative numbers in general, add a positive multiple of 6 to get a number in the set $\{0, 1, 2, 3, 4, 5\}$. For example, $$-14 + 18 = 4.$$
Hence, $-14 = 4$ in $\mathbb{Z}_6$.
The reason you can add 18 (or any multiple of 6) is that 18 divided by 6 leaves a remainder of 0. In other words, "$18 = 0$" in $\mathbb{Z}_6$, so adding 18 is like adding 0. In a similar way, you can always convert a negative number mod n to a positive number in $\{0, 1, \ldots, n - 1\}$ by adding multiples of n. For instance, $$-25 + 30 = 5, \quad\text{so}\quad -25 = 5 \text{ in } \mathbb{Z}_6.$$
Remember that multiples of 6 (like 18) are 0 mod 6!
Recall that subtraction is defined as adding the additive inverse. Thus, to do $1 - 2$ in $\mathbb{Z}_6$, use the fact that the additive inverse of 2 (that is, -2) is equal to 4: $$1 - 2 = 1 + (-2) = 1 + 4 = 5.$$
We haven't discussed division yet, but maybe the last example tells you how to do it. Just as subtraction is defined as adding the additive inverse, division should be defined as multiplying by the multiplicative inverse. Let's give the definition.
Definition. Let R be a ring with identity, and let $x \in R$. The multiplicative inverse of x is an element $x^{-1} \in R$ which satisfies $$x \cdot x^{-1} = 1 = x^{-1} \cdot x.$$
If we were dealing with real numbers, then $2^{-1} = \frac{1}{2}$, for instance. But going back to the example, we don't have fractions in $\mathbb{Z}_6$. So what is (say) $5^{-1}$ in $\mathbb{Z}_6$? By definition, $5^{-1}$ is the element (if there is one) in $\mathbb{Z}_6$ which satisfies $$5 \cdot 5^{-1} = 1.$$
(I could say $5^{-1} \cdot 5 = 1$, but multiplication is commutative in $\mathbb{Z}_6$, so the order doesn't matter.)
We just check cases. Remember that if I get a product that is 6 or bigger, I have to reduce mod 6 by dividing and taking the remainder: $$5 \cdot 1 = 5, \quad 5 \cdot 2 = 10 = 4, \quad 5 \cdot 3 = 15 = 3, \quad 5 \cdot 4 = 20 = 2, \quad 5 \cdot 5 = 25 = 1.$$
I got $5 \cdot 5 = 25 = 1$ by dividing 25 by the modulus 6 --- it goes in 4 times, with a remainder of 1.
Thus, according to the definition, $5^{-1} = 5$. In other words, 5 is its own multiplicative inverse. This isn't unheard of: You know that in the real numbers, 1 is its own multiplicative inverse.
This also means that if you want to divide by 5 in $\mathbb{Z}_6$, you should multiply by 5.
What about $4^{-1}$ in $\mathbb{Z}_6$? Unfortunately, if you take cases as I did with 5, you'll see that for every number n in $\mathbb{Z}_6$, you do not have $4 \cdot n = 1$. Here's a proof by contradiction which avoids taking cases. Suppose $4 \cdot n = 1$. Multiply both sides by 3: $$3 \cdot 4 \cdot n = 3 \cdot 1, \quad\text{so}\quad 12n = 3, \quad\text{so}\quad 0 = 3.$$
I made the last step using the fact that $12n$ is a multiple of 6 (since $12 = 2 \cdot 6$), and multiples of 6 are equal to 0 mod 6. Since "$0 = 3$" is a contradiction, $4 \cdot n = 1$ is impossible. So $4^{-1}$ is undefined in $\mathbb{Z}_6$.
It happens to be true that in $\mathbb{Z}_6$, the elements 0, 2, 3, and 4 do not have multiplicative inverses; 1 and 5 do.
And in $\mathbb{Z}_{10}$, the elements 0, 2, 4, 5, 6, and 8 do not have multiplicative inverses; 1, 3, 7, and 9 do.
Do you see a pattern?
You probably don't need much practice working with familiar number systems like the real numbers $\mathbb{R}$, so we'll give some examples which involve arithmetic in $\mathbb{Z}_n$.
Example. (a) Reduce 22 to a number in $\{0, 1, 2, 3\}$ in $\mathbb{Z}_4$.
(b) Reduce -21 to a number in $\{0, 1, 2, 3\}$ in $\mathbb{Z}_4$.
(c) Compute $8 + 9$ in $\mathbb{Z}_{11}$.
(d) Compute $5 \cdot 7$ in $\mathbb{Z}_9$.
(e) Compute $3 - 7$ in $\mathbb{Z}_{10}$.
(f) Compute $4^3$ in $\mathbb{Z}_7$.
(g) Compute $25!$ in $\mathbb{Z}_{23}$.
It's understood for a problem in $\mathbb{Z}_n$ that your final answer should be a number in $\{0, 1, \ldots, n - 1\}$. You can simplify as you do each step, or simplify at the end (divide by n and take the remainder).
(a) $22 = 5 \cdot 4 + 2$, so $22 = 2$ in $\mathbb{Z}_4$.
(b) $-21 = -21 + 24 = 3$ in $\mathbb{Z}_4$.
Notice that 24 is a multiple of 4, so it's equal to 0 in $\mathbb{Z}_4$. You can also do this by dividing by 4 if you do it carefully: $-21 = (-6) \cdot 4 + 3$, so the remainder is 3.
(c) $8 + 9 = 17 = 6$ in $\mathbb{Z}_{11}$.
(d) $5 \cdot 7 = 35 = 8$ in $\mathbb{Z}_9$.
(e) $3 - 7 = -4 = -4 + 10 = 6$ in $\mathbb{Z}_{10}$.
Notice that I added a multiple of 10 (since $10 = 0$ in $\mathbb{Z}_{10}$) to get a positive number.
(f) $4^3 = 64 = 9 \cdot 7 + 1 = 1$ in $\mathbb{Z}_7$.
(g) $25!$ includes all the numbers from 1 to 25 as factors; in particular, it includes 23. So the product is a multiple of the modulus 23, and $$25! = 0 \text{ in } \mathbb{Z}_{23}.$$
Example. (a) Find $3^{-1}$ in $\mathbb{Z}_{10}$.
(b) Prove that 6 does not have a multiplicative inverse in $\mathbb{Z}_{10}$.
(a) By trial and error, $3 \cdot 7 = 21 = 1$ in $\mathbb{Z}_{10}$. Therefore, $3^{-1} = 7$.
(b) Suppose $6n = 1$ for some n in $\mathbb{Z}_{10}$. Then $$5 \cdot 6n = 5 \cdot 1, \quad\text{so}\quad 30n = 5, \quad\text{so}\quad 0 = 5.$$
The last step follows from the fact that $30n$ is a multiple of 10, so it equals 0 mod 10. Since "$0 = 5$" is a contradiction, $6n = 1$ is impossible, and 6 does not have a multiplicative inverse.
Example. (a) Show that 2 doesn't have a multiplicative inverse in $\mathbb{Z}_4$.
(b) Show that 14 doesn't have a multiplicative inverse in $\mathbb{Z}_{21}$.
(a) Try all possibilities: $$2 \cdot 0 = 0, \quad 2 \cdot 1 = 2, \quad 2 \cdot 2 = 4 = 0, \quad 2 \cdot 3 = 6 = 2.$$
There is no element of $\mathbb{Z}_4$ whose product with 2 gives 1. Hence, 2 doesn't have a multiplicative inverse in $\mathbb{Z}_4$.
(b) Suppose $14n = 1$ for $n \in \mathbb{Z}_{21}$. Then $$3 \cdot 14n = 3 \cdot 1, \quad\text{so}\quad 42n = 3, \quad\text{so}\quad 0 = 3.$$
(Note that $42 = 0$ in $\mathbb{Z}_{21}$.) The last line above is a contradiction, so 14 does not have a multiplicative inverse in $\mathbb{Z}_{21}$.
You may have noticed that the elements in $\mathbb{Z}_n$ which have multiplicative inverses are the elements which are relatively prime to n.
You might wonder whether there is a systematic way to find multiplicative inverses in $\mathbb{Z}_n$. The best way is to use the Extended Euclidean Algorithm; you might see it if you take a course in abstract algebra. In this course, I'll usually keep the examples small enough that trial and error is okay for finding multiplicative inverses when you need them. But here's an approach that you might prefer. Suppose you want to find $7^{-1}$ in $\mathbb{Z}_{11}$. Consider multiples of 11, plus 1. Stop with the first such number that's divisible by 7: $$12, \quad 23, \quad 34, \quad 45, \quad 56 = 7 \cdot 8.$$
From this, I get $7^{-1} = 8$, because $$7 \cdot 8 = 56 = 5 \cdot 11 + 1 = 1 \text{ in } \mathbb{Z}_{11}.$$
Example. Find $8^{-1}$ in $\mathbb{Z}_{13}$.
In $\mathbb{Z}_{13}$, I have $8 \cdot 5 = 40 = 1$, so $8^{-1} = 5$. You could do this by trial and error, since 13 isn't that big: $$8 \cdot 2 = 16 = 3, \quad 8 \cdot 3 = 24 = 11, \quad 8 \cdot 4 = 32 = 6, \quad 8 \cdot 5 = 40 = 1.$$
Alternatively, take multiples of 13 and add 1, stopping when you get a number divisible by 8: $$14, \quad 27, \quad 40 = 8 \cdot 5.$$
Then $8 \cdot 5 = 40 = 1$ in $\mathbb{Z}_{13}$, so $8^{-1} = 5$.
Even this approach is too tedious to use with large numbers. The systematic way to find inverses is to use the Extended Euclidean Algorithm.
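The notes defer the Extended Euclidean Algorithm to an abstract algebra course, but here is a short Python sketch of how it produces inverses mod n (my code, not from the notes):

```python
def inverse_mod(x, n):
    """Return x^(-1) in Z_n, or None if gcd(x, n) != 1."""
    # Invariant: old_r = old_s * x (mod n) and r = s * x (mod n).
    old_r, r = x % n, n
    old_s, s = 1, 0
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    return old_s % n if old_r == 1 else None

print(inverse_mod(7, 11))  # 8
print(inverse_mod(8, 13))  # 5
print(inverse_mod(6, 10))  # None: 6 has no inverse in Z_10
```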
We saw that in a commutative ring with identity, an element x might not have a multiplicative inverse $x^{-1}$. That in turn would prevent you from "dividing" by x. From the point of view of linear algebra, this is inconvenient. Hence, we single out rings which are "nice" in that every nonzero element has a multiplicative inverse.
Definition. A field F is a commutative ring with identity in which $1 \ne 0$ and every nonzero element has a multiplicative inverse.
By convention, you don't write "$\frac{1}{x}$" instead of "$x^{-1}$" unless the ring happens to be a ring with "real" fractions (like $\mathbb{Q}$, $\mathbb{R}$, or $\mathbb{C}$). You don't write fractions in (say) $\mathbb{Z}_7$.
If an element x has a multiplicative inverse, you can divide by x by multiplying by $x^{-1}$. Thus, in a field, you can divide by any nonzero element. (You'll learn in abstract algebra why it doesn't make sense to divide by 0.)
The rationals $\mathbb{Q}$, the reals $\mathbb{R}$, and the complex numbers $\mathbb{C}$ are fields. Many of the examples will use these number systems.
The ring of integers $\mathbb{Z}$ is not a field. For example, 2 is a nonzero integer, but it does not have a multiplicative inverse which is an integer. ($\frac{1}{2}$ is not an integer --- it's a rational number.)
$\mathbb{Q}$, $\mathbb{R}$, and $\mathbb{C}$ are all infinite fields --- that is, they all have infinitely many elements. But (for example) $\mathbb{Z}_5$ is a field with finitely many elements.
For applications, it's important to consider finite fields like $\mathbb{Z}_5$. Before I give some examples, I need some definitions.
Definition. Let R be a commutative ring with identity. The characteristic of R is the smallest positive integer n such that $$\underbrace{1 + 1 + \cdots + 1}_{n \text{ times}} = 0.$$
Notation: $\operatorname{char} R = n$.
If there is no positive integer n such that $\underbrace{1 + 1 + \cdots + 1}_{n \text{ times}} = 0$, then $\operatorname{char} R = 0$.
In fact, if $\operatorname{char} R = n$, then $nx = 0$ for all $x \in R$.
$\mathbb{Z}$, $\mathbb{Q}$, $\mathbb{R}$, and $\mathbb{C}$ are all rings of characteristic 0. On the other hand, $\operatorname{char} \mathbb{Z}_n = n$.
Definition. An integer $n > 1$ is prime if its only positive divisors are 1 and n.
The first few prime numbers are $$2, \quad 3, \quad 5, \quad 7, \quad 11, \quad 13, \quad 17, \ldots$$
An integer $n > 1$ which is not prime is composite. The first few composite numbers are $$4, \quad 6, \quad 8, \quad 9, \quad 10, \quad 12, \quad 14, \ldots$$
The following important results are proved in abstract algebra courses.
Theorem. The characteristic of a field is either 0 or a prime number.
Theorem. If p is prime and n is a positive integer, there is a field of characteristic p having $p^n$ elements. This field is unique up to ring isomorphism, and is denoted $\operatorname{GF}(p^n)$ (the Galois field of order $p^n$).
The only unfamiliar thing in the last result is the phrase "ring isomorphism". This is another concept whose precise definition you'll see in abstract algebra. The statement means, roughly, that any two fields with $p^n$ elements are "the same", in that you can get one from the other by just renaming or reordering the elements.
Since the characteristic of $\mathbb{Z}_n$ is n, the first theorem implies the following result:
Corollary. $\mathbb{Z}_n$ is a field if and only if n is prime.
The Corollary tells us that $\mathbb{Z}_2$, $\mathbb{Z}_3$, and $\mathbb{Z}_{61}$ are fields, since 2, 3, and 61 are prime.
On the other hand, $\mathbb{Z}_6$ is not a field, since 6 isn't prime (because $6 = 2 \cdot 3$). In fact, we saw it directly when we showed that 4 does not have a multiplicative inverse in $\mathbb{Z}_6$. Note that $\mathbb{Z}_6$ is a commutative ring with identity.
For simplicity, the fields of prime characteristic that I use in this course will almost always be finite. But what would an infinite field of prime characteristic look like?
As an example, start with $\mathbb{Z}_2$. Form the field of rational functions $\mathbb{Z}_2(x)$. Thus, elements of $\mathbb{Z}_2(x)$ have the form $\frac{f(x)}{g(x)}$ where $f(x)$ and $g(x)$ are polynomials with coefficients in $\mathbb{Z}_2$. Here are some examples of elements of $\mathbb{Z}_2(x)$: $$\frac{x^2 + x + 1}{x^3 + x + 1}, \quad x^5 + x^2, \quad \frac{1}{x^4 + x}.$$
You can find multiplicative inverses of nonzero elements by taking reciprocals; for instance, $$\left( \frac{x^2 + x + 1}{x^3 + x + 1} \right)^{-1} = \frac{x^3 + x + 1}{x^2 + x + 1}.$$
I won't go through and check all the axioms, but in fact, $\mathbb{Z}_2(x)$ is a field. Moreover, since $1 + 1 = 0$ in $\mathbb{Z}_2$, it's a field of characteristic 2. It has an infinite number of elements; for example, it contains $$1, \quad x, \quad x^2, \quad x^3, \quad \ldots$$
What about fields of characteristic p other than $\mathbb{Z}_2$, $\mathbb{Z}_3$, $\mathbb{Z}_5$, and so on? As noted above, these are called Galois fields. For instance, there is a Galois field $\operatorname{GF}(p^n)$ with $p^n$ elements. To keep the computations simple, we will rarely use them in this course. But here's an example of a Galois field with $4 = 2^2$ elements, so you can see what it looks like.
$\operatorname{GF}(4) = \{0, 1, a, b\}$ is the Galois field with 4 elements, and here are its addition and multiplication tables:
 +  | 0  1  a  b          *  | 0  1  a  b
 0  | 0  1  a  b          0  | 0  0  0  0
 1  | 1  0  b  a          1  | 0  1  a  b
 a  | a  b  0  1          a  | 0  a  b  1
 b  | b  a  1  0          b  | 0  b  1  a
Notice that $$1 + 1 = a + a = b + b = 0,$$ so $\operatorname{GF}(4)$ has characteristic 2.
You can check by examining the multiplication table that multiplication is commutative, that 1 is the multiplicative identity, and that the nonzero elements (1, a, and b) all have multiplicative inverses. For instance, $a^{-1} = b$, because $a \cdot b = 1$.
Since we've already seen a lot of weird things with these new number systems, we might as well see another one.
Example. Find the roots of $x^2 + 3x + 2$ in $\mathbb{Z}_6$.
Make a table:
 x             | 0  1  2  3  4  5
 x^2 + 3x + 2  | 2  0  0  2  0  0
For instance, plugging $x = 4$ into $x^2 + 3x + 2$ gives $$4^2 + 3 \cdot 4 + 2 = 30 = 5 \cdot 6 + 0 = 0 \text{ in } \mathbb{Z}_6.$$
The roots are $x = 1$, $x = 2$, $x = 4$, and $x = 5$.
You would normally not expect a quadratic to have 4 roots! This shows that algebraic facts you may know for real numbers may not hold in arbitrary rings (note that $\mathbb{Z}_6$ is not a field).
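Checking such a table is easy to automate; a quick brute-force sketch in Python (an editor's illustration, using the polynomial from the example above):

```python
n = 6
roots = [x for x in range(n) if (x * x + 3 * x + 2) % n == 0]
print(roots)  # [1, 2, 4, 5]: four roots for a quadratic
```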
Linear algebra deals with structures based on fields, and you've now seen most of the fields that will come up in the examples. The modular arithmetic involved in working with $\mathbb{Z}_n$ may be new to you, but it's not that hard with a little practice. And as I noted, most of the examples involving finite fields will use $\mathbb{Z}_p$ for p prime, rather than the more general Galois fields, or infinite fields of characteristic p.
Contact information
|
2022-12-04 21:23:17
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.899029016494751, "perplexity": 296.73012658973255}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710980.82/warc/CC-MAIN-20221204204504-20221204234504-00589.warc.gz"}
|
https://www.physicsforums.com/threads/how-much-maths-does-one-need-in-particle-physics.533131/
|
# How much Maths does one need in Particle Physics?
1. Sep 24, 2011
### MarcAlexander
Hi, I'm Marc. I'm 14, from the UK and I love Particle Physics and Nuclear Physics. I was just wondering about how much Maths and what areas of Maths I would need to accumulate the knowledge for, in order to do A-level and eventually University Physics, specifically Particle Physics.
Could anyone shed some Photons on my situation?
2. Sep 24, 2011
### Pythagorean
The field theory has classical roots in differential equations (which follow from calculus); the quantum mechanical part of it is mostly group theory and linear algebra (matrices and eigenstuffs).
3. Sep 24, 2011
### nickbob00
If you do maths GCSE and get an A, then do A level Maths and get an A, that's all the maths you need to get into an undergraduate course. Ideally, you would do A level Further Maths and also do additional maths GCSE or something like that, if it is offered at your school. At 14, you don't need to be teaching yourself much.
On an undergrad physics degree, you will do lots and lots of maths, much of which will be required rather than optional.
4. Sep 24, 2011
### MarcAlexander
Well I'm just wondering if Chemistry is more my thing. :\
You see, I'm personally very interested in particles, their sub-atomic particles, their elementary particles and so on. I have reason to believe this area of Science comes under Particle Physics. My dilemma is that I:
1) Don't really care about the rest of Physics besides the particle aspect, e.g.I couldn't give a damn about light refraction or thermal conduction.
2) I'm not a great fan of 'pointless' Maths, if you get where I'm coming from.
I'm starting to think maybe it's Chemistry I should be lured towards, as that's pure particles with the odd metamorphic rock or two. Also I enjoy writing and reading chemical formulae, as they are short and simple. But my dilemma is that Chemistry seems to be more about how atoms bond to form molecules rather than matter and anti-matter.
Physics or Chemistry??
5. Sep 24, 2011
### Awesomesauce
"The best of the golds at the bottom of the barrels of crap" - Randy Pausch
The 'rest of physics' does not come optional (un)fortunately. You can't just jump into the deep end without teaching yourself to tread water.
6. Sep 24, 2011
### MarcAlexander
It's not that I find the rest of Physics wrong, it's just I have no particular interest in them. I guess I'm just going to have to learn all the 'crap' in order to reach the 'gold'.
Thanks guys.
7. Sep 24, 2011
### Functor97
You do realise that those fundamental particles are pieces of "pointless" mathematics? No one can draw or see an electron, let alone a quark... The more fundamental you go, the more mathematical it gets.
8. Sep 24, 2011
### MarcAlexander
You could see an electron or quark under an electron-microscope. But I acknowledge your point.
9. Sep 24, 2011
### Kevin_Axion
No, you can't. And I think the play on words is alluding to the fact that fundamental particles are points, and also QFT plays on group theory (abstract algebra), partial differential equations, differential geometry/topology etc.
10. Sep 24, 2011
### MarcAlexander
Could you explain to me what QFT is? I know what it stands for: Quantum Field Theory. I just don't know what relevance it has in Particle Physics, like $s = d/t$ has in calculating speeds from the distance travelled and the time taken to cover it. What does QFT tell us or prove to us?
11. Sep 24, 2011
### Kevin_Axion
I'll go through it really quickly since I have to go:
1. First we begin with classical mechanics, and that includes Newtonian, Lagrangian and Hamiltonian mechanics. This field of physics studies the motion and dynamics of classical objects, i.e. planets, balls etc. The objects follow trajectories in gravitational fields and can be modeled precisely in position and time.
2. Secondly we have relativistic mechanics. That is, the study of classical objects at high velocities, or as $v\rightarrow c$. In this we are introduced to space-time, or Minkowski space-time. We see that objects in one inertial frame experience time, distance and causality differently than objects in other frames.
3. Thirdly we have quantum mechanics, QM is the study of objects at the atomic level. Here, objects don't follow the rules of classical objects, everything has uncertainty. For instance there is an uncertainty between time and energy and an uncertainty between position and momentum etc. QM uses Hilbert spaces to define the state of a particle and uses non-commutative operators to describe uncertainty.
4. Now we have QFT. QFT is the combination of relativity and quantum mechanics and it forms relativistic quantum mechanics or the study of quantum mechanical objects in accelerated or inertial reference frames as $v\rightarrow c$. Here we see that the fundamental objects in nature are fields and particles are the local excitations of these fields. This is the most accurate depiction of nature so far.
12. Sep 24, 2011
### BloodyFrozen
:grumpy:
Much of physics requires MATH
MATH IS APPARENTLY THE LIFE.
13. Sep 24, 2011
### MarcAlexander
Is Quantum Physics for Dummies a good book?
Also what would be a good(simple) book for Physics in Maths be?
14. Sep 24, 2011
### MarcAlexander
I apologise if I pulled a 'heart string'. What I meant was that throughout school I am constantly taught Mathematics that seems to have no practical use, like median, prime factors, HCF, LCM etc. Personally I love Algebra.
15. Sep 24, 2011
### BloodyFrozen
Yes, I agree that mathematics in Highschool may be boring, but learn it. And then study on your own
16. Sep 24, 2011
### MarcAlexander
I completely agree. I just wish I'd had an interest in Physics from the start; maybe I'd have tried harder with Maths, but I was only a kiddie back then; now I'm doing my GCSEs. Are there any books that teach everything about Maths from baby stuff to high level stuff which would ultimately prepare me for Quantum Mechanics? And would Calculus be necessary?
If so then what is Calculus?
I apologise for so many questions. It's just I have no one else to ask really.
17. Sep 24, 2011
### micromass
Yes, of course calculus would be necessary. Calculus is one of the most important theories around and is absolutely fundamental if you want to study any science.
Calculus basically allows you to analyze continuous functions and graphs in an easy way. It allows you to find areas under graphs, volumes, and rates of change. And it can be used to solve optimization problems.
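To give a concrete flavor of those two operations (rates of change and areas under graphs), here is a tiny numerical sketch in Python; this is an editor's illustration, not something from the thread:

```python
# f(x) = x**2: approximate its rate of change at x = 3 and the area under
# the graph from 0 to 3.
f = lambda x: x ** 2
h = 1e-6

slope_at_3 = (f(3 + h) - f(3)) / h                             # ~6 (exact: 6)
area_0_to_3 = sum(f(i * 0.001) * 0.001 for i in range(3000))   # ~9 (exact: 9)

print(round(slope_at_3, 2), round(area_0_to_3, 2))
```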
18. Sep 24, 2011
### BloodyFrozen
Well, as micromass already explained, Calculus would be extremely useful, but you can either take it in your highschool (if they offer it) or learn it by yourself.
I recommend getting a good grasp in HS Algebra and Precalculus/Trigonometry. As for textbooks, I can't really say. You could always ask to borrow a Precalc/Alg II book from a school teacher. Nearly any would suffice. As for Calculus texts, I'd ask someone else.
19. Sep 24, 2011
### micromass
Start with the excellent book "Basic Mathematics" by Serge Lang. It consists of everything you need to know of high school mathematics (not including calculus). If you're done with that, then perhaps take a light calculus book like "Practical Analysis in One Variable" by Estep. After that, you should take a fun book like Spivak or Apostol.
20. Sep 24, 2011
### MarcAlexander
I've just purchased "Quantum Physics for Dummies" off Amazon.
|
2019-01-17 02:19:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5065605640411377, "perplexity": 1156.0105259389495}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583658681.7/warc/CC-MAIN-20190117020806-20190117042806-00254.warc.gz"}
|
https://artofproblemsolving.com/wiki/index.php/Perimeter
|
# Perimeter
## Definition
The perimeter of a geometric figure is the distance around the outside of the figure. Perimeter is often denoted by P. The perimeter of a circle is called its circumference.
## Formulas
• Rectangle: $2(l+w)$, where $l$ is the length and $w$ is the width.
• Square: $4s$, where $s$ is the side length (follows from the rectangle formula with $l = w = s$).
• Circle: $2\pi r$, where $r$ is the radius.
• Regular polygon with $n$ sides: $ns$, where $s$ is the side length.
• Polygon with $q$ sides: $\sum_{i=1}^{q} a_i$, where the $a_i$ are the lengths of the sides of the polygon.
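These formulas translate directly into code. A small Python sketch (the function names are mine, not from the wiki):

```python
import math

def perimeter_rectangle(l, w):
    return 2 * (l + w)

def perimeter_regular_polygon(n, s):
    return n * s  # the square is the n = 4 case

def circumference(r):
    return 2 * math.pi * r

def perimeter_polygon(side_lengths):
    return sum(side_lengths)  # works for any polygon

print(perimeter_rectangle(3, 4))        # 14
print(perimeter_regular_polygon(4, 5))  # 20
print(perimeter_polygon([3, 4, 5]))     # 12
```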
|
2020-10-23 02:05:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 13, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9398546814918518, "perplexity": 378.1362637009554}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107880519.12/warc/CC-MAIN-20201023014545-20201023044545-00656.warc.gz"}
|
https://www.bionicturtle.com/forum/threads/kurtosis-and-description-of-tails.1266/
|
What's new
# Kurtosis and description of tails
#### fullofquestions
##### New Member
Please refer to 2009_Notes_2Quantitative_v1a.pdf, page 31 at the bottom and page 32 at the top. Hopefully we can put this topic to rest once and for all because I have seen various websites and resources with conflicting descriptions of kurtosis and how it relates to the tails of a distribution...
"Kurtosis greater than three (>3), which is the same thing as saying “excess kurtosis > 0,” indicates high peaks and fat tails (leptokurtic). Kurtosis less than three (<3), which is the same thing as saying “excess kurtosis < 0,” indicates lower peaks.
Financial asset returns are typically considered leptokurtic (i.e., heavy or fat-tailed)"
"For example, the logistic distribution exhibits leptokurtosis (heavy-tails; kurtosis > 3.0):
GRAPH
A probability distribution with “thicker tails” or “heavier tails” than the normal distribution has kurtosis > 3 and it called leptokurtic.
When a distribution is less peaked than the normal distribution, it is said to be platykurtic. This distribution is characterized by less probability in the tails than the normal distribution. It will have a kurtosis that is less than 3 or, equivalently, an excess kurtosis that is negative."
In the GRAPH listed we have the standard normal distribution in blue. Then we have a platykurtic distribution, one with lower peak, in purple. The platykurtic distribution CLEARLY has more probability(or samples rather) in the tails than the normal distribution. Can someone please explain the following:
1. What is a fat tail? Is it a tail with more *meat* on it, i.e. more height per unit area, much like a platykurtic distribution, or is it somehow how much farther the tail extends out from the mean? In my opinion:
a. Leptokurtic distribution clearly has highest peak but quickly contours very close to the X axis. Therefore I see it as a 'thin' tail
b. Mesokurtic distribution, i.e. the standard normal distribution, has a medium-sized peak, and the tails in either direction approach the X axis more slowly than in the leptokurtic example. In this case there are more probabilities in the tails than in the leptokurtic case.
c. Platykurtic distribution has the lowest peak, and the tails approach the X axis far more gradually, and therefore it has more probabilities in the tails. The tails are taller than in the leptokurtic and mesokurtic distributions.
2. Is a fat tail any different from a heavy tail?
#### David Harper CFA FRM
##### David Harper CFA FRM
Staff member
Subscriber
I think the most precise term is "heavy tail" because kurtosis is about the *density* of the tail. Kurtosis > 3 implies more area "under the curve" in the tail of a PDF/PMF compared to the normal.
While it's true that a single-humped (unimodal) distribution that is leptokurtotic has a higher peak and therefore heavier tails, I think it leads to confusion if you try to characterize kurtosis in terms of "peakedness" or fatness (as you suggest, leptokurtosis can appear as a long skinny tails). Also confusing if you try to think in terms of vertical height in a PDF (e.g., y axis varies). But, for a PDF, leptokurtosis means there is more "area under the curve" in the extreme tail, compared to a normal, and that means the odds are higher of ending up in the tail. (if you want to think only vertically, then you can use: for the CDF, such a distribution has a lower y value, cumulative probability, for extreme outcomes...)
Leptokurtosis means that the probability of an extreme tail outcome is higher than it would be under the normal. The student's t is a good example: its graph will maybe not help so much if you think vertically, but the student's t is always leptokurtotic. Its excess kurtosis > 0, as its excess kurtosis = 6/(df - 4) for df > 4. So, I prefer "heavy tail" (which, yes, is synonymous with fat tail) and this is what it means:
if you look in the tail, what is the CDF probability of ending up there?
say we are looking at 3 sigma, then normal = NORMSDIST(-3) = 0.135%
i.e., for a standard normal, our odds of ending up more than 3 SD out are 0.135%
a leptokurtotic distribution has a higher probability of ending up more than 3 SD out. In the case of the student's t:
=TDIST(3, d.f., 1 tail) will always be greater than 0.135%, although converging to 0.135% as d.f. increases
so, leptokurtosis implies more area in the tail vis a vis a normal (i.e., heavier density).
and fortunately, this mathematical view (i.e., higher CDF P[X<x] compared to normal), rather than graphical, view fits our risk concern: we may prefer a leptokurtotic distr (e.g., Levy) because it means that our odds of an extreme tail outcome are greater.
David
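In scipy terms (an editor's sketch, not from the thread), the same tail comparison looks like this:

```python
from scipy import stats

# P(Z < -3) for the standard normal: about 0.00135, i.e. 0.135%.
print(stats.norm.cdf(-3))

# P(T > 3) for the student's t: always larger, shrinking toward 0.00135
# as the degrees of freedom grow.
for df in (5, 10, 30, 1000):
    print(df, stats.t.sf(3, df))
```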
#### fullofquestions
##### New Member
Thank you for the explanation Dave. I am still a bit unclear although I think I'm getting there. Perhaps it would help to discuss the following details:
1. In describing a distribution the following phrases are common:
-heavy tailed
-fat tails
-fat tailed
-heavy tails
So my question is, when talking about 'tail' is it possible that in some cases we are refering to the hump/peak while in others we referring to the ends of the distribution, you know, the ones that stretch to - infinity and + infinity? Both could be construed as 'tail.'
I understand that a leptokurtic distribution such as a student's t has the following characteristics
- kurtosis > 3
- higher hump/peak than a normal distribution (this apparently is NOT always the case, take minute 38:30 from your video 2ai_p2_quant_iPod)
- longer, skinnier tails (- infinity and + infinity). This, in proper statistical terms, is referred to as 'heavier tails.' The reason being that the tails, although they look skinny, they extend further out to -/+ infinity ultimately carrying more weight/area, and therefore, probabilities in the tails.
I think the following image explains it very well although it would help if you could zoom in more (http://en.wikipedia.org/wiki/File:Standard_symmetric_pdfs.png). In it, it is clear that the leptokurtic distributions carry more weight in the tails. I think that in many cases (take wikipedia or investopedia for example), leptokurtosis is referred to as 'higher peaks,' and this is NOT always the case. I swear I have seen examples dealing with kurtosis where the answer lies not in describing the tails but the peaks. If and when I encounter them I will collect them because I think these are causing unnecessary confusion.
#### David Harper CFA FRM
##### David Harper CFA FRM
Staff member
Subscriber
fullofquestions,
(sorry for delay). What I have read (somewhere, I can't remember) is that higher peaks for a unimodal (single-humped) distribution imply heavier tails. I frankly can't grab the intuition. I am "with you" in regard to higher peaks, but please note it can be misleading if the Y-axis differs. For years, I thought the student's t was an "exception" (i.e., shorter peak but heavier tail) but someone pointed out that's just a y-axis issue....
...the peakedness description doesn't personally work for me (or I have not heard a compelling explanation). What works for me is: tail density, or tail heaviness. And mathematically, this is simply that, for a given "cutoff" (some number of standard deviations to the right), the "rejection region" has more density (a larger % of the entire 100% probability). Since kurtosis is about the tail, it seems to me the peak (being more body) is incidental anyway
David
|
2021-06-16 20:18:34
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8010597229003906, "perplexity": 1575.4082278142225}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487626008.14/warc/CC-MAIN-20210616190205-20210616220205-00570.warc.gz"}
|
https://math.stackexchange.com/questions/2458059/magic-squares-in-combinatorics
|
Magic squares in combinatorics
Let $P_{3}(r)$ be the number of $3 \times 3$ magic squares that are symmetric about their main diagonal. Prove that $P_{3}(r) \leq (r+1)^3$.
$r$ in this problem seems to be the sum of each row and column
This is the first time I've dealt with magic squares; I didn't even know of their existence before today. But from what I gathered off the internet, they're square matrices containing non-negative integers where all row sums and column sums are equal to each other.
This question is from the chapter on permutations, strings over finite alphabets, and problems of choice. But I don't know how to prove this. Any ideas?
• what is $r$? Are the elements in $\{0,1,2,\dots,r\}$? – Jorge Fernández Hidalgo Oct 4 '17 at 22:23
• r is the sum of each row and column. I just edited it in. Sorry about that – user482578 Oct 4 '17 at 22:31
• In fact you can bound it by $(r+1)^2$ – Jorge Fernández Hidalgo Oct 4 '17 at 22:34
Suppose that $a$ and $b$ have been selected:
$$\begin{pmatrix} a & ? & ?\\ ? & b & ? \\ ? & ? & ?\\ \end{pmatrix}$$
We can immediately deduce the following value (note that this answer additionally assumes both diagonals also sum to $r$, so the main diagonal gives the corner entry $r-a-b$):
$$\begin{pmatrix} a & ? & ?\\ ? & b & ? \\ ? & ? & (r-a-b)\\ \end{pmatrix}$$
Using the anti-diagonal together with symmetry (its two corner entries are equal, so each must be $\frac{r-b}{2}$) we obtain these two values:
$$\begin{pmatrix} a & ? & \frac{r-b}{2}\\ ? & b & ? \\ \frac{r-b}{2} & ? & r-a-b\\ \end{pmatrix}$$
Using the sum of the first and last row we deduce:
$$\begin{pmatrix} a & \frac{r+b-2a}{2} & \frac{r-b}{2}\\ ? & b & ? \\ \frac{r-b}{2} & \frac{2a+3b-r}{2} & r-a-b\\ \end{pmatrix}$$
Using the sum of the first and last column we deduce:
$$\begin{pmatrix} a & \frac{r+b-2a}{2} & \frac{r-b}{2}\\ \frac{r+b-2a}{2} & b & \frac{3b+2a-r}{2} \\ \frac{r-b}{2} & \frac{2a+3b-r}{2} & r-a-b\\ \end{pmatrix}$$
It is not hard to see that the sum of the middle column is $3b$; since this sum must equal $r$, in fact $b=\frac{r}{3}$.
Now notice that with $b=\frac{r}{3}$ every remaining entry is determined by $a$ alone (for instance the $(2,3)$ entry $\frac{2a+3b-r}{2}$ reduces to $a$, and the corners are $\frac{r-b}{2}=\frac{r}{3}$). So there are at most $r+1$ choices for $a$, giving $P_3(r) \leq r+1 \leq (r+1)^3$; and $P_3(r)=0$ whenever $3 \nmid r$.
• So since $P_{3}(r) \leq r+1$ for all $r$, it is certainly $\leq (r+1)^{3}$? – user482578 Oct 5 '17 at 21:52
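As a sanity check (an addition, not part of the thread), a brute-force count is easy to write. It uses the row-and-column-sums-only definition from the question body, whereas the answer above additionally constrains the diagonals; either way the $(r+1)^3$ bound is transparent, since the three free entries $(1,1)$, $(1,2)$, $(2,2)$ determine the rest:

```python
# Count symmetric 3x3 squares of non-negative integers with all row and
# column sums equal to r. Free entries: a = (1,1), d = (1,2) = (2,1),
# b = (2,2); the rest are forced by the row sums, and the column sums
# then hold automatically by symmetry. Hence P3(r) <= (r+1)^3.
def P3(r):
    count = 0
    for a in range(r + 1):
        for d in range(r + 1):
            for b in range(r + 1):
                e = r - a - d   # (1,3) = (3,1), forced by row 1
                f = r - d - b   # (2,3) = (3,2), forced by row 2
                c = r - e - f   # (3,3), forced by row 3
                if e >= 0 and f >= 0 and c >= 0:
                    count += 1
    return count

for r in range(6):
    print(r, P3(r), (r + 1) ** 3)   # the count never exceeds (r+1)^3
```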
|
2019-12-09 05:28:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9057297706604004, "perplexity": 197.34678357509455}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540517557.43/warc/CC-MAIN-20191209041847-20191209065847-00085.warc.gz"}
|
https://gaming.meta.stackexchange.com/questions/1356/what-should-be-done-about-strategy-and-similar-tags-what-conventions-should-w/1360
|
# What should be done about [Strategy] and similar tags? What conventions should we be adhering to?
Those who have been in chat for the past day or so have seen me ranting at length about how tags which are not [Game Title] are, for the most part, fundamentally not useful.
As an example, take a look at [Strategy], which has 121 questions about 47 distinct games. Even worse, [Strategy] could be a useful tag as a filter for the strategy genre, but the tag is presently poisoned by its broader use. Of those 47 games listed, fewer than half are actually strategy games. The tag is also being used by people asking for help with platformers, puzzle games, squad-based shooters, RPGs, roguelikes, and even Rock Band.
Essentially, there is no positive value in using this tag as a filter - it's only ever going to be useful in conjunction with another tag, at which point, isn't that what the Search feature, which can read the entire body of a question, is for?
Now admittedly, [Strategy] is a special case, and could probably use somebody going through and just cleaning out all the garbage so that it can continue to be useful as a genre tag. To use an example that is less akin to the [Programming] tag on SO, let's look at [Weapons]: 17 questions about 16 games, with no broadly applicable knowledge binding them together. In other words, there is no such thing as an expert on [Weapons], and I just can't see any use case for which the tag is useful except when used to drill down further within questions about a particular game.
The community seems to have rejected the notion that tagging every game is poisonous - I'd go further and argue that using anything other than tagging by game for 90% of cases is poisonous, and that what we need are conventions for when, where, and how to use tags that go beyond [Game-Title], [Genre], and [Platform].
• Maybe it's just me (I seem to have a history for this sort of thing) but [weapons] seems like a pretty useful filter to me. If the user wants to know about them, they don't need to sift through every other question related to that game. – GnomeSlice Oct 26 '10 at 4:45
• The problem isn't that Weapons isn't something you'd want to search for. The problem is that it doesn't stand on its own. It's never useful except in conjunction with another tag, at which point you're using Search, and that'll scan the entire body of the question anyway. Tags are primarily useful as a filter that narrows a search or to emphasize (or hide) content on the front page/feed. If a tag doesn't stand on its own as a coherent set of information, what purpose does it serve? – LessPop_MoreFizz Oct 26 '10 at 4:49
• For strategy as a "special case", some previous discussions: meta.gaming.stackexchange.com/questions/321/… and meta.gaming.stackexchange.com/questions/703/…. Note that in the latter one, it's established that since RTS and TBS occupy the complete spectrum of the strategy genre, "strategy" isn't needed for the genre itself. – Grace Note Oct 26 '10 at 12:01
I'm going to focus on the general scenario of "semi-dependent" tags here, rather than [strategy] specifically, since that one is ambiguous and not very well understood (as pointed out over here, as well as in comments above).
If we only have 17 questions tagged [weapons] across 16 games, this might mean the tag is bad. Alternatively, it can mean either of two things: we only have 17 questions actually about weapons, or we need to be a lot more prudent about tagging them. This is one of the dependent types of tags that I think will actually be pretty useful for our site, since there really can be a lot of questions about weapons in-game.
Can text search replace this kind of cooperative tagging? Sure, but let me highlight something from a very early discussion on a similar tagging qualm.
It's a straw-man argument to say that you can search without tags. Of course you can. So why have tags at all? You can search for "Civilization 4" quite successfully, whether or not there's a Civilization-4 tag. Large tags allow people to follow things they're interested in. Small tags allow specific, highly targeted searches.
It may not necessarily be any better than text search, but tags still have their place in categorizing content and in filtering content on those categories. Tag search and text search shouldn't be in competition, because ideally they should work together.
That said, good tagging helps for the situations when people phrase things differently. The plague of duplicate questions that differ simply by choice of words is testament to this foul shortcoming of text search: you simply cannot find what isn't there. This same shortcoming applies to tags, but the thing here is that tags can represent the specific as well as the general. So you can ask a question about weapons without ever saying "weapons". In a multi-question theoretical example, suppose we got questions about the game Sora, one question about the Pilebunker, one about the Flamethrower, and one about the Bullet. They really don't need to share any words besides the name of the game and maybe "damage" in their question bodies, but the presence of a unifying tag lets me group these together in a way that they should since they're all about the same content: weapons. You can accomplish this categorization without needing to alter the word choice of the individual questions, because tags are independent of the author's expression of the problem.
To me, a tag like [weapons] functions similarly to a tag like [strings] on Stack Overflow. Since you aren't the most programming-literate (according to those transcripts), I'll basically explain that strings vary wildly between implementations in different languages, but the core concept of what a string represents (a sequence of characters typically forming a word or sentence) is fairly consistent across languages. There's not really a "broadly applicable knowledge" of strings; instead we just plop the tag on questions for languages when strings are an important concept in the question, like how to manipulate strings in a specific fashion. Tag badges are even won for these kinds of tags, not because of a "general expertise" in strings across all languages, but simply for being good at some language's strings.
This kind of cooperative tagging is promoted in large part by our friend the Related Tags column. (There were screenshots here of the Related Tags columns for [c#] and for a comparable tag, on their respective sites.)
That image doesn't actually show strings, but [multithreading] and [generics] are two other examples that operate on the same principle: they may have generally understood meaning, but language context is where they shine. On our own site, I can see that if I have a question about how to get a specific achievement, there are already 31 questions tagged [achievements] which might answer my concern. Maybe I'm having trouble with replays, in which case I can spot 14 questions tagged [replays].
What hurts tagging isn't tags that merely rely on other tags for context. It's tags that are utterly meaningless, or perhaps just not that good, when they stand alone. Searching for [achievements] alone isn't going to turn up much of value if 90% of the results don't deal with the games I care about. But the tag still has meaning on its own, because the concept is relatively well understood across the scope of Gaming. So without the rest of the context, I can still get a good picture of what the question should be about, and context will only narrow it down to something better.
For a tag like [weapons], this should be similar to how it works with [strings]. While the implementation, behavior, and style of weapons will vary between games, the concept of what a weapon generally is will remain... relatively consistent as something I hit things with. I probably won't need to group them across games, but grouping them within the game, outside the constraints of the question body, I see that as a very positive use as a filter.
• I also think tags like this help convey what type of information is in the question. A user might simply search [halo-reach], for example, and upon browsing through the list of questions, will immediately be able to see the ones that relate to "weapons", even if it is not what they were searching for. This makes it easier to avoid reading questions you don't need, or to browse questions of a more specific topic, by game. – GnomeSlice Nov 5 '10 at 23:13
• All the cases where weapons are used (e.g. TF2 vs. EVE) vary much more wildly than strings or multithreading IMHO; for those the differences are more superficial, whereas weapons can operate in totally different manners. – Nick T Aug 22 '11 at 22:50
I have already posted in the past about the misuse of this tag; I'd just consider banning it and forcing people to use real-time-strategy and turn-based-strategy, as there really is no use for an [advice] tag, tbh.
That said, I believe non-game tags can be useful. Several such tags should have a right to exist, just like you have tags about, say, threads on SO.
At any rate, it is of little use to go OCD on tagging. We should instead focus on leading by example by asking great questions with non-brain-damaged tagging :)
Does it have to stand on its own?
I remember discussing this with you earlier and I came to the conclusion that these sorts of tags are only really useful with respect to their games. Thus [strategy] may be useless, but [starcraft/strategy] is actually really useful. I called these tags subtags, since they're meaningless on their own, but useful with respect to a given game.
So I'm not sure why standing on its own is a necessary prerequisite. I know I initially had the same reaction as you, but then I realized: what additional functionality would I want from subtags other than filtering already-filtered results?
If I want Strategy in Civilization 5, I would search for [civilization-5][strategy].
https://gaming.stackexchange.com/questions/tagged/civilization-5+strategy
Wow, that seems to have really worked. Sure, maybe I'll never filter on just strategy, but the same would be true for a subtag. In fact, I've found that subtag functionality already exists by just looking for the intersection of a game and an otherwise useless tag.
|
2019-10-16 12:11:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29153257608413696, "perplexity": 975.9207178906357}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986668569.22/warc/CC-MAIN-20191016113040-20191016140540-00413.warc.gz"}
|
https://tex.stackexchange.com/questions/499168/escapeinside-doesnt-work-with-tcolorbox-in-beamer
|
# escapeinside doesn't work with tcolorbox in beamer
I am trying to make a beamer presentation with the following tcblisting declaration in the preamble.
\newtcblisting{ListingBoxWithEscape}[2][]{
minted language=#2,
minted style=friendly,
minted options={breaklines,autogobble,fontsize=\footnotesize,tabsize=2,escapeinside=!!},
title=#1
}
In my document I have:
\begin{ListingBoxWithEscape}{html}
<body>
<div class="container">
<div class="jumbotron">
<h1>Grid Cols and a Row</h1>!
\tikz \node[coordinate] (a) {};
!<p>Resize this responsive page to see the effect</p>
</div>
<div class="row">
<div class="col-sm-4">
<h3>Web Design</h3>
<p>This is Web design. This is web design. This is web design..</p>
<p>We are trying to learn Responsive web design using Bootstrap......</p>
</div>
\end{ListingBoxWithEscape}
To which I get the following errors:
! Package tikz Error: A node must have a (possibly empty) label text.
See the tikz package documentation for explanation.
Type H <return> for immediate help.
...
l.6 \PYG{esc}{ \tikz \node[coordinate] (a) {};}
I am unable to figure out what the problem may be. Anybody have any ideas?
Here is an MWE:
\documentclass{beamer}
\usepackage{tikz}
\usepackage{tcolorbox}
\tcbuselibrary{xparse,minted}
\newtcblisting{ListingBoxWithEscape}[2][]{
minted language=#2,
minted style=friendly,
minted options={breaklines,autogobble,fontsize=\footnotesize,tabsize=2,escapeinside=!!},
title=#1
}
\usetikzlibrary{calc,shapes.callouts,shapes.arrows,positioning}
\begin{document}
\begin{frame}[c,fragile]
\frametitle{Test}
\begin{ListingBoxWithEscape}{html}
<body>
<div class="container">
<div class="jumbotron">
<h1>Grid Cols and a Row</h1>!
\tikz \node[coordinate] (a) {};
!<p>Resize this responsive page to see the effect</p>
</div>
<div class="row">
<div class="col-sm-4">
<h3>Web Design</h3>
<p>This is Web design. This is web design. This is web design..</p>
<p>We are trying to learn Responsive web design using Bootstrap......</p>
</div>
\end{ListingBoxWithEscape}
\end{frame}
\end{document}
I find that if I hide the tikzpicture in a command, say \newcommand\tst{\tikz[remember picture]{\coordinate (a);}}, and then use this command, the error message is gone. The following has many inputs from @egreg, whom I thank for the comments below. As @egreg points out, the issue is the { and } of the node label, which do not get tokenized correctly inside the listing; hiding the picture in a macro avoids that. (I thought there was something fundamentally wrong with the box itself, but as @egreg pointed out, you may just want to add listing only.)
\documentclass{beamer}
\usepackage{tikz}
\usepackage{tcolorbox}
\tcbuselibrary{xparse,minted}
\newtcblisting{ListingBoxWithEscape}[2][]{listing only,
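% listing only: typeset just the source. Without it, the box also tries to
% compile the listed HTML as LaTeX, producing the "rubbish" lower half
% mentioned in the comments below.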
minted language=#2,
minted style=friendly,
minted options={breaklines,autogobble,fontsize=\footnotesize,tabsize=2,
escapeinside=!!},
title=#1
}
\usetikzlibrary{calc,shapes.callouts,shapes.arrows,positioning}
\begin{document}
\begin{frame}[c,fragile]
\newcommand\tst{\tikz[remember picture]{\coordinate (a);}}
\frametitle{Test}
\begin{ListingBoxWithEscape}{html}
<body>
<div class="container">
<div class="jumbotron">
<h1>Grid Cols and a Row</h1>!\tst!
<p>Resize this responsive page to see the effect</p>
</div>
<div class="row">
<div class="col-sm-4">
<h3>Web Design</h3>
<p>This is Web design. This is web design. This is web design..</p>
<p>We are trying to learn Responsive web design using Bootstrap......</p>
</div>
\end{ListingBoxWithEscape}
\begin{tikzpicture}[remember picture,overlay]
\draw[red,latex-] (a) to[out=0,in=-90] ++ (1,1) node[above] {test};
\end{tikzpicture}
\end{frame}
\end{document}
• I guess it's a problem of category codes; for some reason {} doesn't get recognized for the node label. Hiding the picture in a macro solves the issue because the replacement text is tokenized in advance. – egreg Jul 9 '19 at 9:23
• @egreg Thanks! Yes, that makes sense. I am however concerned about the lower half of the tcolorbox, this seems to be rubbish, regardless of the escape stuff. – user121799 Jul 9 '19 at 9:25
• I guess the OP forgot to add listing only – egreg Jul 9 '19 at 9:27
• @egreg Makes sense, thanks! – user121799 Jul 9 '19 at 9:27
• The temporary definition can better be done inside the frame, prior to the ListingBoxWithEscape environment. – egreg Jul 9 '19 at 9:29
|
2020-11-30 02:40:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8563416004180908, "perplexity": 7441.8412022306065}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141204453.65/warc/CC-MAIN-20201130004748-20201130034748-00244.warc.gz"}
|
https://stats.stackexchange.com/questions/323739/can-i-add-several-additional-variables-to-my-principal-component-regression?noredirect=1
|
# Can I add several additional variables to my principal component regression?
I did a principal component analysis, which resulted in 7 components that I am now using for a principal component regression predicting my dependent variable.
However, I want to add 2 control variables, but these are not component scores, but "normal" variables (on a Likert scale, the same sort of variables my independent variables were).
Is that okay to do or do I have to make these into components as well? I have done the regression both with those two as variables and combined in a component and the results are practically the same.
• This is not a duplicate. – amoeba Jan 18 '18 at 20:41
• I have no idea why this is closed as a duplicate of that Q. It is not a duplicate! This is a separate and a clearly defined question (that has been asked before: stats.stackexchange.com/questions/47972 - but wasn't answered, so now that Q is closed as a duplicate of this one). It's well answered below and the answer is accepted. This thread should stay open. I voted to reopen. – amoeba Jan 21 '18 at 21:33
It depends.
As you may know PCR starts with PCA on the independent variables.
$$X = TP' + E$$
After obtaining the scores ($T$), one carries out regression between $T$ and $Y$ so that:
$$Y = TB + F$$
The first PCA step ensures that the scores (columns of $T$) are uncorrelated, so that you can find "healthy" regression coefficients rather than dealing with the originally problematic matrix (rank deficiency, multicollinearity, $p \gg n$, etc.), which can, for example, yield very large regression coefficients and cause overfitting.
Thus, if you add some variables, then depending on the nature of those variables you may end up with a problem similar to the one that caused you to use PCR rather than OLS in the first place. On the other hand, it may be just fine. I suggest confirming each model's success by testing it on an independent validation set, or at least by using CV.
Personally, I would add those variables prior to PCR. If interpretability by looking at the regression coefficients is your concern, then $$Y = XPB + F$$ thus $\hat{B} = PB$, which you can apply directly to your (probably at least mean-centered) data and interpret just as easily.
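A minimal sketch of the "append the controls to the scores" route (an illustration, not the answerer's code; it assumes scikit-learn, and all names and data here are synthetic):

```python
# PCR with 7 component scores plus 2 raw control variables.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))                 # original predictors
controls = rng.integers(1, 6, size=(100, 2))   # two Likert-style controls (1-5)
y = rng.normal(size=100)                       # outcome

T = PCA(n_components=7).fit_transform(X)       # uncorrelated score columns
Z = np.column_stack([T, controls])             # scores + raw controls
model = LinearRegression().fit(Z, y)

# The concern raised above: are the controls correlated with the scores?
print(np.corrcoef(Z, rowvar=False)[:7, 7:])    # 7 x 2 block of correlations
```

If that printed block shows strong correlations, the added variables reintroduce exactly the multicollinearity that PCR was meant to remove, which is the case the answer warns about.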
• Well, adding a few (specifically, two) additional variables will not make $p > n$ if PCA originally reduced $p$ to $p\ll n$. Is your concern then that these 2 additional variables might be highly correlated to the retained PC scores? – amoeba Jan 18 '18 at 13:58
• For this specific case, yes. But usually I try to provide less specific answers to aid future readers. Is this a bad practice? – theGD Jan 18 '18 at 14:08
• It's a good practice :-) I was just clarifying what you meant. – amoeba Jan 18 '18 at 14:11
• Thank you, very helpful reply! I have done most of the assumption testing and I think I have good and valid results in the end. – Boudewijn Hulst Jan 19 '18 at 14:06
|
2020-10-29 08:04:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6721943616867065, "perplexity": 617.6228441360747}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107903419.77/warc/CC-MAIN-20201029065424-20201029095424-00151.warc.gz"}
|
https://www.physicsforums.com/threads/integration-by-parts.633419/
|
# Integration By Parts
I understand this integration technique, for the most part. One thing I am curious to know: when you do the standard substitution for this technique, why does $dv$ always have to include $dx$?
## Answers and Replies
HallsofIvy
$\int u\,dv = \int d(uv) - \int v\,du$. Of course, $\int d(uv) = uv$.
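Spelling out the step the reply leaves implicit: $dv$ carries the $dx$ because it is a differential; if $v = v(x)$, then $dv = v'(x)\,dx$, so dropping the $dx$ would lose the variable of integration. The formula itself is just the product rule in differential form:

$$d(uv) = u\,dv + v\,du \quad\Longrightarrow\quad \int u\,dv = \int d(uv) - \int v\,du = uv - \int v\,du$$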
|
2021-05-13 00:35:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9428098797798157, "perplexity": 956.0429862715652}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991413.30/warc/CC-MAIN-20210512224016-20210513014016-00142.warc.gz"}
|
https://figshare.com/articles/Increasing_Polyaromatic_Hydrocarbon_PAH_Molecular_Coverage_during_Fossil_Oil_Analysis_by_Combining_Gas_Chromatography_and_Atmospheric_Pressure_Laser_Ionization_Fourier_Transform_Ion_Cyclotron_Resonance_Mass_Spectrometry_FT_ICR_MS_/2061015/1
|
## Increasing Polyaromatic Hydrocarbon (PAH) Molecular Coverage during Fossil Oil Analysis by Combining Gas Chromatography and Atmospheric-Pressure Laser Ionization Fourier Transform Ion Cyclotron Resonance Mass Spectrometry (FT-ICR MS)
2016-01-06T16:49:59Z (GMT) by
Thousands of chemically distinct compounds are encountered in fossil oil samples, which require rapid screening and accurate identification. In the present paper we show, for the first time, the advantages of gas chromatography (GC) separation in combination with atmospheric-pressure laser ionization (APLI) and ultrahigh-resolution Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS) for the screening of polyaromatic hydrocarbons (PAHs) in fossil oils. In particular, reference standards of organics in shale oil, petroleum crude oil, and heavy sweet crude oil were characterized by GC-APLI-FT-ICR MS and APLI-FT-ICR MS. Results showed that, while APLI increases the ionization efficiency of PAHs compared to other ionization sources, the complexity of the fossil oils reduces the probability of ionizing lower-concentration compounds during direct infusion. When gas chromatography precedes APLI-FT-ICR MS, an increase (more than 2-fold) in the ionization efficiency and an increase in the signal-to-noise ratio of lower-concentration fractions are observed, giving better molecular coverage in the *m*/*z* 100–450 range. That is, the use of GC prior to APLI-FT-ICR MS resulted in higher molecular coverage, higher sensitivity, and the ability to separate and characterize molecular isomers, while maintaining the ultrahigh resolution and mass accuracy of the FT-ICR MS separation.
|
2018-10-15 17:55:06
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8101696968078613, "perplexity": 9320.041801974558}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583509336.11/warc/CC-MAIN-20181015163653-20181015185153-00510.warc.gz"}
|
http://eccellent.it/wozr/correlation-matrix-chart-python.html
|
# Correlation Matrix Chart Python
A correlation matrix is a table showing the correlation coefficients between variables: the cell at the intersection of row i and column j holds the coefficient for that pair of variables. It is handy for summarising and visualising the strength of relationships between continuous variables, it gives a quick overview of a whole dataset, and its cells can be color-coded to highlight significantly positive and negative relationships. The matrix is symmetric, and its main diagonal is all 1s, since every variable is perfectly correlated with itself.

The most common coefficient is the Pearson correlation coefficient r, which ranges from -1 to +1: -1 is a perfect negative correlation, +1 a perfect positive correlation, and 0 means no linear relationship at all. The coefficient therefore indicates both the strength and the direction (positive vs. negative) of a relationship. Its square, r², is a useful value in linear regression: it represents the fraction of the variation in one variable that may be explained by the other. Correlation is closely related to covariance: covariance measures whether two variables change ("vary") together, while correlation is the standardized, scale-free version. That is why correlation is usually preferred over covariance as a measure of association; it is unaffected by changes of location and scale. A related notion is partial correlation, the relationship between two variables while controlling for the effects of one or more additional variables.

In Python, pandas computes the matrix in one call with df.corr(). There are several ways to visualise the result: matplotlib's plt.matshow(corr), a seaborn heatmap (pass annot=True to print the coefficient in each cell), or pandas' built-in styling via df.corr().style.background_gradient(cmap='coolwarm'). Because the matrix is symmetric, it is common to mask the redundant upper triangle: np.triu() returns the upper triangle of a matrix (np.tril() the lower), and the result can be passed to the mask argument of seaborn's heatmap. The same upper triangle is also the standard device for dropping highly correlated features before modelling: flag every column whose absolute correlation with an earlier column exceeds a threshold such as 0.95. A complementary view is the scatter-plot matrix (pandas' scatter_matrix or seaborn's pairplot), in which cell (i, j) shows the scatter plot of variable Xi against Xj, revealing the shape of each relationship rather than a single number. For large matrices it also helps to reorder the variables, for example by hierarchical clustering, so that blocks of mutually correlated variables become visible; in R, the corrplot package provides exactly this kind of display, and Excel (the Correlation tool in the Data Analysis add-in), SQL (the CORR() function), Stata, and SAS offer similar facilities.

Two further uses are worth noting. Autocorrelation is the correlation of a time series with a lagged copy of itself, and it likewise varies from +1 to -1; the Durbin-Watson statistic is a common check for it, with a value near 2 indicating no serial correlation. In finance, correlation matrices of stock returns underpin portfolio diversification: low correlations across markets are the main argument for rebalancing internationally exposed portfolios. Since an empirical correlation matrix of, say, 500 stocks is itself noisy, it can be denoised by filtering its eigenvalues using the Marcenko-Pastur distribution from random matrix theory; shrinkage methods of covariance estimation are a competing approach.
The axes are the scores given by the labeled critics and the similarity of the scores given by both critics in regards to certain an_items. From there you can create a visual using the seaborn library. The user can build presentations that require nine cells matrixes (3×3 3D Matrix) or 4 cells matrixes (2×2 quadrant 3D Matrix). Python has increasingly become the most popular and innovative tool for data visualisation. # correlation matrix in R using mtcars dataframe x <- mtcars[1:4] y <- mtcars[10:11] cor(x, y) so the output will be a correlation matrix. It shows a numeric value of the correlation coefficient for all the possible combinations of the variables. I am trying to get correlation matrix for 13 variables of the df. Cohen2,3, Kai Li1, Nicholas B. corr()' function to compute correlation matrix iv) from the correlation matrix note down the correlation value between 'CRIM' and 'PTRATIO' and assign it to variable 'corr_value' v) import stats model as sm vi) initalize the OLS model with target Y and dataframe X(features). In Python, this can be created using the corr() function, as in the line of code below. Using it we can create plots, histograms, bar charts, scatterplots, etc. We’re interested in the values of correlation of x with y (so position (1, 0) or (0, 1)). If tendency is pronounced, the correlation coefficient is close to -1 or +1 (depending on sign). Displaying Figures. 01) long b(%9. This article describes how to plot a correlogram in R. Then I simply change the visual from a Table to a Python visual. Packages Required import pandas as pd import matplotlib. Correlation. Length constitutes the 1st row and 1st column of the matrix. This add-in is available in all versions of Excel 2003 through Excel 2019, but is not. We're going to be continuing our work with the minimum wage dataset and our correlation table. background_gradient(cmap='coolwarm') # 'RdBu_r' & 'BrBG' are other good diverging colormaps. A correlation matrix can be obtained using the variable clustering node. 2) For all combinations of blocks, the correlation matrix is calculated, so A/A, A/B, B/B etc. View solution in original post. It also contains some algorithms to do matrix reordering. ) function and calculate Log Returns, Correlation Matrix, and OLS Regression models using Cufflinks which makes financial data visualization convenient. In this correlation matrix, you can see that: For target 0, the sepal length and width have a correlation of 0. Exploring Correlation in Python. It can be easily verified that similar matrices have identical characteristic polynomials. Covariance Matrix for N-Asset Portfolio fed by Quandl in Python Quant at Risk. Essentially, a correlation matrix is a grid of values that quantify the association between every possible pair of variables that you want to investigate. When someone speaks of a correlation matrix, they usually mean a matrix of Pearson-type correlations. If positive, there is a regular correlation. The shaded area is one standard deviation. It is an R based solution so you will need to ensure that an R environment is setup (which I detail here – skip the nflscrapR steps) and that it is accessible from Power BI. head(10), similarly we can see the. Though I do not see any error in the output I am unable to see the graph. A correlation matrix is handy for summarising and visualising the strength of relationships between continuous variables. Correlation Matrix is basically a covariance matrix. 
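As a concrete illustration of the pandas/seaborn workflow just described, here is a minimal, self-contained sketch; the column names and random data are invented for the example.

```python
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# A small toy DataFrame (hypothetical columns, random data).
rng = np.random.default_rng(33)
df = pd.DataFrame(rng.normal(size=(100, 4)),
                  columns=["open", "high", "low", "close"])

# Pearson correlation matrix: symmetric, with 1.0 on the diagonal.
corr = df.corr()

# Annotated heatmap; annot=True prints each coefficient in its cell.
sns.heatmap(corr, annot=True, cmap="coolwarm", vmin=-1, vmax=1)
plt.title("Correlation matrix")
plt.show()
```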
Correlation matrices are especially common in finance. The correlation matrix of the returns of a set of securities measures to what extent their prices move together, and it can be used to estimate the linear historical relationship between the returns of multiple assets. Low correlations across different markets are the main idea behind global portfolio diversification; without them, there is little benefit to rebalancing internationally exposed portfolios. A sample correlation matrix of, say, 500 stocks is noisy, so it is often filtered before use: by the Marcenko-Pastur theorem, only eigenvalues above the pure-noise bound carry genuine structure, and tools from pandas, NumPy, and SciPy are enough to implement such a filtering algorithm. For each method, one can either filter the covariance matrix directly, or filter the correlation matrix and then convert the cleansed matrix back.
A few related measures come up alongside the correlation matrix. Partial correlation measures the relationship between two or more variables while controlling for the effects of one or more additional variables. Autocorrelation varies from +1 to -1, and the Durbin-Watson statistic diagnoses serial correlation: a value of 2 means no serial correlation exists, a value below 2 indicates positive serial correlation, and a value above 2 indicates negative serial correlation. Finally, for evaluating classification models the key table is not a correlation matrix but a confusion matrix, often used together with charts such as the KS, Gain, and Lift charts when assessing a logistic regression model. The documentation for the confusion matrix is good, but the quickest way to add labels and visualise the output as a 2×2 table is to plot it as a heatmap.
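In that spirit, here is a hedged sketch of plotting a labelled confusion matrix as a heatmap; the labels and predictions are made up for the example.

```python
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix

# Hypothetical true and predicted labels for a binary classifier.
y_true = [0, 0, 1, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 1, 1, 0, 0, 1, 0, 1, 0]

cm = confusion_matrix(y_true, y_pred)  # rows = actual, columns = predicted

# annot=True writes the counts into the 2x2 table; fmt="d" keeps them integers.
sns.heatmap(cm, annot=True, fmt="d", cmap="Blues",
            xticklabels=["pred 0", "pred 1"],
            yticklabels=["true 0", "true 1"])
plt.xlabel("Predicted")
plt.ylabel("Actual")
plt.show()
```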
The covariance and correlation matrices are two views of the same information. Using the same data matrix and the covariance matrix, the correlation matrix R is obtained by standardising each covariance by the product of the corresponding standard deviations, and, as with the covariance matrix, its dimension is again p × p. Conversely, to convert a p × p correlation matrix back into a covariance matrix, you need the variances (or standard deviations) of the p variables.
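The eigenvalue-based denoising mentioned above can also be sketched in a few lines. This is a simplified illustration of the Marcenko-Pastur bound, not the full filtering algorithm of any particular paper, and the data here are pure noise by construction.

```python
import numpy as np
import pandas as pd

# T observations of N uncorrelated series: a pure-noise baseline.
T, N = 1000, 100
rng = np.random.default_rng(0)
returns = pd.DataFrame(rng.normal(size=(T, N)))

corr = returns.corr().to_numpy()
eigvals = np.linalg.eigvalsh(corr)  # ascending eigenvalues

# Marcenko-Pastur upper edge for a correlation matrix (variance 1).
q = N / T
lambda_plus = (1 + np.sqrt(q)) ** 2

# Eigenvalues above the bound would be kept as signal; the rest is noise.
signal = eigvals > lambda_plus
print(f"{signal.sum()} of {N} eigenvalues exceed the noise bound {lambda_plus:.3f}")
```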
|
2020-06-07 06:00:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3872530460357666, "perplexity": 923.8814023707116}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348523564.99/warc/CC-MAIN-20200607044626-20200607074626-00556.warc.gz"}
|
https://www.growingwiththeweb.com/2014/02/async-vs-defer-attributes.html
|
# async vs defer attributes
Published
Tags:
The async and defer attributes for the <script> element have great support now, so it’s time to learn exactly what they do!
## <script>
Let’s start by defining what <script> without any attributes does. The HTML file will be parsed until the script file is hit, at that point parsing will stop and a request will be made to fetch the file (if it’s external). The script will then be executed before parsing is resumed.
## <script async>
async downloads the file during HTML parsing and will pause the HTML parser to execute it when it has finished downloading.
## <script defer>
defer downloads the file during HTML parsing and will only execute it after the parser has completed. defer scripts are also guaranteed to execute in the order that they appear in the document.
## When should I use what?
Typically you want to use async where possible, then defer, then no attribute. Here are some general rules to follow (a short markup sketch follows the list):
• If the script is modular and does not rely on any scripts then use async.
• If the script relies upon or is relied upon by another script then use defer.
• If the script is small and is relied upon by an async script then use an inline script with no attributes placed above the async scripts.
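To make those rules concrete, here is a hedged sketch of a page head that uses all three forms; the file names are invented for the example.

```html
<head>
  <!-- Small inline config that the async script below relies upon:
       no attribute, placed above the async scripts. -->
  <script>window.analyticsConfig = { id: "XX-1234" };</script>

  <!-- Modular and depends on nothing: async. -->
  <script async src="analytics.js"></script>

  <!-- library.js is relied upon by app.js: defer preserves document
       order and runs both only after parsing completes. -->
  <script defer src="library.js"></script>
  <script defer src="app.js"></script>
</head>
```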
## Support
IE9 and below have some pretty bad bugs in their implementation of defer such that the execution order isn’t guaranteed. If you need to support <= IE9 I recommend not using defer at all and include your scripts with no attribute if the execution order matters. Read the specifics here.
|
2018-08-14 17:14:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36978355050086975, "perplexity": 3052.99197029168}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221209216.31/warc/CC-MAIN-20180814170309-20180814190309-00658.warc.gz"}
|
http://www.digitalmars.com/d/archives/digitalmars/D/announce/DMD_0.169_release_4872.html
|
## digitalmars.D.announce - DMD 0.169 release
Walter Bright <newshound digitalmars.com> writes:
Bug fixes. \dmd\samples\d\pi.d sped up by 40%.
http://www.digitalmars.com/d/changelog.html
Oct 08 2006
Tom S <h3r3tic remove.mat.uni.torun.pl> writes:
Walter Bright wrote:
Bug fixes. \dmd\samples\d\pi.d sped up by 40%.
http://www.digitalmars.com/d/changelog.html
Walter, I'm really beginning to suspect that you're clairvoyant ! How did you know? ( http://www.gamedev.net/community/forums/topic.asp?whichpage=1&pagesize=25&topic_id=418236 )
Thanks for the fixes ! :-D
Oct 08 2006
Walter Bright <newshound digitalmars.com> writes:
Tom S wrote:
Walter Bright wrote:
Bug fixes. \dmd\samples\d\pi.d sped up by 40%.
http://www.digitalmars.com/d/changelog.html
Walter, I'm really beginning to suspect that you're clairvoyant ! How did you know? ( http://www.gamedev.net/community/forums/topic.asp?whichpage=1&pagesize=25&topic_id=418236 )
Thanks for the fixes ! :-D
No prob.
Oct 08 2006
"nobody_" <spam spam.spam> writes:
~ Wondering as to how this would affect the shootout ~
?????
Walter, I'm really beginning to suspect that you're clairvoyant ! How did you know? ( http://www.gamedev.net/community/forums/topic.asp?whichpage=1&pagesize=25&topic_id=418236 )
Thanks for the fixes ! :-D
Oct 08 2006
nobody_ wrote:
~ Wondering as to how this would affect the shootout ~
?????
Walter, I'm really beginning to suspect that you're clairvoyant ! How did you know? ( http://www.gamedev.net/community/forums/topic.asp?whichpage=1&pagesize=25&topic_id=418236 )
Thanks for the fixes ! :-D
It won't, but getting the tail-recursion optimization back will ;)
Oct 08 2006
"nobody_" <spam spam.spam> writes:
"Dave" <Dave_member pathlink.com> wrote in message
news:egbuqt$1qq6$1 digitaldaemon.com...
nobody_ wrote:
~ Wondering as to how this would affect the shootout ~
?????
Walter, I'm really beginning to suspect that you're clairvoyant ! How did you know? ( http://www.gamedev.net/community/forums/topic.asp?whichpage=1&pagesize=25&topic_id=418236 )
Thanks for the fixes ! :-D
It won't, but getting the tail-recursion optimization back will ;)
Mkay, I thought it might affect pidigits. :(
(btw. those questionmarks were supposed to be a smiley (weaboo style). :)
Oct 08 2006
Lars Ivar Igesund <larsivar igesund.net> writes:
Walter Bright wrote:
Bug fixes. \dmd\samples\d\pi.d sped up by 40%.
http://www.digitalmars.com/d/changelog.html
I think you're almost squashing more bugs than are reported atm ;)
and 1/10th of the speed of the past (at least I know a couple that have
experienced this). Not a big problem for me, but might signify some
troubles serverside?
--
Lars Ivar Igesund
blog at http://larsivi.net
DSource & #D: larsivi
Oct 08 2006
Walter Bright <newshound digitalmars.com> writes:
Lars Ivar Igesund wrote:
Walter Bright wrote:
Bug fixes. \dmd\samples\d\pi.d sped up by 40%.
http://www.digitalmars.com/d/changelog.html
I think you're almost squashing more bugs than are reported atm ;)
There has been a big upsurge in the rate of new bugs being posted in the
last month. Generally, that implies a big upsurge in the uses people are
putting D to!
and 1/10th of the speed of the past (at least I know a couple that have
experienced this). Not a big problem for me, but might signify some
troubles serverside?
I have no idea.
Oct 08 2006
Walter Bright wrote:
Bug fixes. \dmd\samples\d\pi.d sped up by 40%.
http://www.digitalmars.com/d/changelog.html
http://d.puremagic.com/issues/show_bug.cgi?id=386
Import conflicts are no more! :D
Oct 08 2006
clayasaurus <clayasaurus gmail.com> writes:
Tydr Schnubbis wrote:
Walter Bright wrote:
Bug fixes. \dmd\samples\d\pi.d sped up by 40%.
http://www.digitalmars.com/d/changelog.html
http://d.puremagic.com/issues/show_bug.cgi?id=386
Import conflicts are no more! :D
horray!
Oct 08 2006
Kirk McDonald <kirklin.mcdonald gmail.com> writes:
Walter Bright wrote:
Bug fixes. \dmd\samples\d\pi.d sped up by 40%.
http://www.digitalmars.com/d/changelog.html
I'm not sure what it was, but 0.168 broke Pyd (it would compile but the
resulting DLL wouldn't load), and 0.169 fixed it again. So, uh, kudos!
--
Kirk McDonald
Pyd: Wrapping Python with D
http://pyd.dsource.org
Oct 08 2006
Georg Wrede <georg.wrede nospam.org> writes:
Fixed Bugzilla 395, but there are probably more UTF bugs in
std.regexp.
Hmm. Fixing Phobos whenever something is brought up, is probably a good
tack.
OTOH, quite some work may be saved if we study enough to get a feeling
for what _not_ to even try to implement. Time savings should be substantial.
An example:
UTF-bugs in STD-Regexp may be one particularly prominent case.
One might want to develop a Robust library in D. One might instead be
business oriented, which means, get something that works "somewhat" like
you need, and then callously copy that.
---
The opposite tack is to adopt the PCRE library as such. Then we'd of
course submit to the whims of the PCRE guys, but in the decades past us,
we've seen that this guy really is at it for its own sake. (As
especially opposed by "for the money".)
Another problem is, the UTF definition keeps changing every once in a
while. Why not let Professionals take care of the whole shebang?
Oct 12 2006
Walter Bright <newshound digitalmars.com> writes:
std.regexp has been around for 6+ years. It comes from one I did in C++
that was very intensively tested. I think it has held up very well. The
only thing it lacks is being thoroughly tested for UTF. I don't think
that's justification for starting over with something else.
Oct 12 2006
Don Clugston <dac nospam.com.au> writes:
Walter Bright wrote:
std.regexp has been around for 6+ years. It comes from one I did in C++
that was very intensively tested. I think it has held up very well. The
only thing it lacks is being thoroughly tested for UTF. I don't think
that's justification for starting over with something else.
Can you clear up a mystery about it? From looking at the code, it looks
as though it can do lazy matching (references to REnmq, "minimal munch",
and parsing of *?, +?, etc), and it's passed the simple tests I've tried
on it. But it's not documented! So is the lazy matching:
(a) working, but not documented, or
(b) unfinished and buggy?
Oct 12 2006
Walter Bright <newshound digitalmars.com> writes:
Don Clugston wrote:
Walter Bright wrote:
std.regexp has been around for 6+ years. It comes from one I did in
C++ that was very intensively tested. I think it has held up very
well. The only thing it lacks is being thoroughly tested for UTF. I
don't think that's justification for starting over with something else.
Can you clear up a mystery about it? From looking at the code, it looks
as though it can do lazy matching (references to REnmq, "minimal munch",
and parsing of *?, +?, etc), and it's passed the simple tests I've tried
on it. But it's not documented! So is the lazy matching:
(a) working, but not documented, or
(b) unfinished and buggy?
(a) lazy documentation <g>.
Oct 13 2006
Don Clugston <dac nospam.com.au> writes:
Walter Bright wrote:
Don Clugston wrote:
Walter Bright wrote:
std.regexp has been around for 6+ years. It comes from one I did in
C++ that was very intensively tested. I think it has held up very
well. The only thing it lacks is being thoroughly tested for UTF. I
don't think that's justification for starting over with something else.
Can you clear up a mystery about it? From looking at the code, it
looks as though it can do lazy matching (references to REnmq, "minimal
munch", and parsing of *?, +?, etc), and it's passed the simple tests
I've tried on it. But it's not documented! So is the lazy matching:
(a) working, but not documented, or
(b) unfinished and buggy?
(a) lazy documentation <g>.
Awesome! I hoped that was it. We could have an Easter Egg competition --
find the coolest thing in D, that isn't documented. <g>
A memorable previous entry was the simplified function template syntax.
Oct 13 2006
Markus Dangl <danglm in.tum.de> writes:
Don Clugston schrieb:
Awesome! I hoped that was it. We could have an Easter Egg competition --
find the coolest thing in D, that isn't documented. <g>
A memorable previous entry was the simplified function template syntax.
I want credit for finding it if there will ever be such a competition *g*
Oct 13 2006
|
2014-12-22 07:11:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4004994332790375, "perplexity": 14020.615777871899}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802774894.154/warc/CC-MAIN-20141217075254-00136-ip-10-231-17-201.ec2.internal.warc.gz"}
|
https://de.mathworks.com/help/fininst/spreadsensbybjs.html
|
Calculate European spread option prices or sensitivities using Bjerksund-Stensland pricing model
## Syntax
``PriceSens = spreadsensbybjs(RateSpec,StockSpec1,StockSpec2,Settle,Maturity,OptSpec,Strike,Corr)``
``PriceSens = spreadsensbybjs(___,Name,Value)``
## Description
`PriceSens = spreadsensbybjs(RateSpec,StockSpec1,StockSpec2,Settle,Maturity,OptSpec,Strike,Corr)` returns the European spread option prices or sensitivities using the Bjerksund-Stensland pricing model.
`PriceSens = spreadsensbybjs(___,Name,Value)` adds optional name-value pair arguments.
## Examples
```Settle = '01-Jun-2012'; Maturity = '01-Sep-2012';```
Define asset 1. Price and volatility of RBOB gasoline
```Price1gallon = 2.85; % $/gallon Price1 = Price1gallon * 42; % $/barrel Vol1 = 0.29;```
Define asset 2. Price and volatility of WTI crude oil
```Price2 = 93.20; % $/barrel Vol2 = 0.36;```
Define the correlation between the underlying asset prices of asset 1 and asset 2.
`Corr = 0.42;`
```OptSpec = 'call'; Strike = 20;```
Define the `RateSpec`.
```rates = 0.05; Compounding = -1; Basis = 1; RateSpec = intenvset('ValuationDate', Settle, 'StartDates', Settle, ... 'EndDates', Maturity, 'Rates', rates, ... 'Compounding', Compounding, 'Basis', Basis)```
```RateSpec = struct with fields: FinObj: 'RateSpec' Compounding: -1 Disc: 0.9876 Rates: 0.0500 EndTimes: 0.2500 StartTimes: 0 EndDates: 735113 StartDates: 735021 ValuationDate: 735021 Basis: 1 EndMonthRule: 1 ```
Define the `StockSpec` for the two assets.
`StockSpec1 = stockspec(Vol1, Price1)`
```StockSpec1 = struct with fields: FinObj: 'StockSpec' Sigma: 0.2900 AssetPrice: 119.7000 DividendType: [] DividendAmounts: 0 ExDividendDates: [] ```
`StockSpec2 = stockspec(Vol2, Price2)`
```StockSpec2 = struct with fields: FinObj: 'StockSpec' Sigma: 0.3600 AssetPrice: 93.2000 DividendType: [] DividendAmounts: 0 ExDividendDates: [] ```
Compute the spread option price and sensitivities based on the Bjerksund-Stensland model.
```OutSpec = {'Price', 'Delta', 'Gamma'}; [Price, Delta, Gamma] = spreadsensbybjs(RateSpec, StockSpec1, StockSpec2, Settle, ... Maturity, OptSpec, Strike, Corr, 'OutSpec', OutSpec)```
```Price = 11.2000 ```
```Delta = 1×2 0.6737 -0.6082 ```
```Gamma = 1×2 0.0190 0.0216 ```
## Input Arguments
Interest-rate term structure (annualized and continuously compounded), specified by the `RateSpec` obtained from `intenvset`. For information on the interest-rate specification, see `intenvset`.
Data Types: `struct`
Stock specification for underlying asset 1. For information on the stock specification, see `stockspec`.
`stockspec` can handle other types of underlying assets. For example, for physical commodities the price is represented by `StockSpec.Asset`, the volatility is represented by `StockSpec.Sigma`, and the convenience yield is represented by `StockSpec.DividendAmounts`.
Data Types: `struct`
Stock specification for underlying asset 2. For information on the stock specification, see `stockspec`.
`stockspec` can handle other types of underlying assets. For example, for physical commodities the price is represented by `StockSpec.Asset`, the volatility is represented by `StockSpec.Sigma`, and the convenience yield is represented by `StockSpec.DividendAmounts`.
Data Types: `struct`
Settlement dates for the spread option, specified as date character vectors or as serial date numbers using a `NINST`-by-`1` vector or cell array of character vector dates.
Data Types: `char` | `cell` | `double`
Maturity date for spread option, specified as date character vectors or as serial date numbers using a `NINST`-by-`1` vector or cell array of character vector dates.
Data Types: `char` | `cell` | `double`
Definition of option as `'call'` or `'put'`, specified as a `NINST`-by-`1` cell array of character vectors.
Data Types: `char` | `cell`
Option strike price values, specified as an integer using a `NINST`-by-`1` vector of strike price values.
If `Strike` is equal to zero the function computes the price and sensitivities of an exchange option.
Data Types: `single` | `double`
Correlation between underlying asset prices, specified as an integer using a `NINST`-by-`1` vector.
Data Types: `single` | `double`
### Name-Value Pair Arguments
Specify optional comma-separated pairs of `Name,Value` arguments. `Name` is the argument name and `Value` is the corresponding value. `Name` must appear inside quotes. You can specify several name and value pair arguments in any order as `Name1,Value1,...,NameN,ValueN`.
Example: ```PriceSens = spreadsensbybjs(RateSpec,StockSpec1,StockSpec2,Settle,Maturity,OptSpec,Strike,Corr,'OutSpec',{'All'})```
Define outputs, specified as the comma-separated pair consisting of `'OutSpec'` and a `NOUT`- by-`1` or `1`-by-`NOUT` cell array of character vectors with possible values of `'Price'`, `'Delta'`, `'Gamma'`, `'Vega'`, `'Lambda'`, `'Rho'`, `'Theta'`, and `'All'`.
`OutSpec = {'All'}` specifies that the output should be `Delta`, `Gamma`, `Vega`, `Lambda`, `Rho`, `Theta`, and `Price`, in that order. This is the same as specifying `OutSpec` to include each sensitivity:
Example: ```OutSpec = {'delta','gamma','vega','lambda','rho','theta','price'}```
Data Types: `char` | `cell`
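As a small usage sketch, reusing the specifications built in the example above, requesting every output at once could look like this (the output order follows the documentation):

```matlab
% Request all sensitivities plus the price in one call.
[Delta, Gamma, Vega, Lambda, Rho, Theta, Price] = spreadsensbybjs(RateSpec, ...
    StockSpec1, StockSpec2, Settle, Maturity, OptSpec, Strike, Corr, ...
    'OutSpec', {'All'});
```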
## Output Arguments
Expected prices or sensitivities values (defined by `OutSpec`) of the spread option, returned as a `NINST`-by-`1` or `NINST`-by-`2` vector.
A spread option is an option written on the difference of two underlying assets.
For example, a European call on the difference of two assets X1 and X2 would have the following payoff at maturity:
`$\mathrm{max}\left(X1-X2-K,0\right)$`
where:
K is the strike price.
## References
[1] Carmona, R., Durrleman, V. “Pricing and Hedging Spread Options,” SIAM Review. Vol. 45, No. 4, pp. 627–685, Society for Industrial and Applied Mathematics, 2003.
[2] Bjerksund, Petter, Stensland, Gunnar. “Closed form spread option valuation.” Department of Finance, NHH, 2006.
|
2019-11-12 12:10:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.971538782119751, "perplexity": 5204.715720480544}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665521.72/warc/CC-MAIN-20191112101343-20191112125343-00300.warc.gz"}
|
https://math.stackexchange.com/questions/2249428/dinis-theorem-changing-the-monotonicity-requirement
|
# Dini's theorem: changing the monotonicity requirement
I've just studied Dini's theorem, and I've been thinking.
Dini's Theorem:
Let $f_n:[a,b]\rightarrow \mathbb{R}$ be a sequence of continuous functions such that $f_n\rightarrow f$ pointwise.
Suppose $f_n(x)$ is a decreasing sequence for all $x$ and $f$ is continuous.
Then $f_n \xrightarrow{u} f$
The monotonicity requirement guarantees that the "peak" of the sequence of functions won't "run" to infinity as $n\rightarrow \infty$.
I'm trying to understand what this requirement can be replaced with.
My intuition tells me that it should be something like uniform continuity, but stronger than that.
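One standard candidate, offered here as a pointer rather than as part of the original question: equicontinuity of the family $\{f_n\}$ (rather than continuity of each $f_n$ individually) is enough.

```latex
% Claim (standard, Arzelà–Ascoli flavour): if each $f_n:[a,b]\to\mathbb{R}$ is
% continuous, the family $(f_n)$ is equicontinuous, and $f_n\to f$ pointwise,
% then $f_n\to f$ uniformly (and $f$ is automatically continuous).
%
% Sketch: given $\varepsilon>0$, take $\delta$ from equicontinuity, cover
% $[a,b]$ by finitely many $\delta$-balls around $x_1,\dots,x_k$, and pick $N$
% with $|f_n(x_i)-f(x_i)|<\varepsilon$ for all $i$ and $n\ge N$. Then for any
% $x$, choosing the nearby $x_i$,
\[
|f_n(x)-f(x)|
\le |f_n(x)-f_n(x_i)| + |f_n(x_i)-f(x_i)| + |f(x_i)-f(x)|
< 3\varepsilon .
\]
```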
|
2019-07-21 13:22:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9186405539512634, "perplexity": 317.2853041342316}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195527000.10/warc/CC-MAIN-20190721123414-20190721145414-00312.warc.gz"}
|
http://mathhelpforum.com/trigonometry/187411-cosine-law.html
|
# Math Help - Cosine law
1. ## Cosine law
Q : if $\cos B / a = \cos A / b$
Prove that : $a=b$ and $c^2 = a^2 + b^2$
2. ## Re: want help in cosine law
Originally Posted by mido22
Q : if $\cos B / a = \cos A / b$
Prove that : $a=b$ and $c^2 = a^2 + b^2$
What have you tried? You could start by using the cosine rule to express cos A and cos B in terms of a, b and c. Then try to simplify the resulting equation.
Edit. After checking the question, I think that you have stated it wrongly. It should say
Prove that : $a=b$ or $c^2=a^2+b^2$.
3. ## Re: want help in cosine law
Yes, it should be 'or', not 'and'. I made a lot of tries, but they all reach the same result:
$b^2(c^2-b^2) = a^2(c^2-a^2)$
4. ## Re: want help in cosine law
Originally Posted by mido22
Yes, it should be 'or', not 'and'. I made a lot of tries, but they all reach the same result:
$b^2(c^2-b^2) = a^2(c^2-a^2)$
Can you say a=b from this?
5. ## Re: want help in cosine law
No, I can't say so, because $a \ne b$.
6. ## Re: want help in cosine law
Originally Posted by mido22
Yes, it should be 'or', not 'and'. I made a lot of tries, but they all reach the same result:
$b^2(c^2-b^2) = a^2(c^2-a^2)$
$b^2(c^2-b^2) = a^2(c^2-a^2)$
$b^2c^2 - b^4 = a^2c^2 - a^4$
$b^2c^2 - a^2c^2 = b^4 - a^4$
$c^2(b^2 - a^2) = (b^2 - a^2)(b^2 + a^2)$
since $a \ne b$ ...
$c^2 = b^2 + a^2$
7. ## Re: want help in cosine law
And how can I get $a=b$ from the same problem?
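A short completion of the algebra from post 6, added here for clarity rather than taken from the thread, shows where each branch comes from:

```latex
% Post 6 factorises the condition as
\[
(b^2 - a^2)\,(c^2 - a^2 - b^2) = 0 ,
\]
% so at least one factor vanishes. If $c^2 \ne a^2 + b^2$, then $b^2 = a^2$,
% and since side lengths are positive, $a = b$. If instead $a \ne b$, the
% other factor must vanish, giving $c^2 = a^2 + b^2$. The conclusion is an
% ``or'': one cannot force $a = b$ in every case.
```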
|
2014-09-02 02:46:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 58, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.797738254070282, "perplexity": 814.8812355000796}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535921318.10/warc/CC-MAIN-20140909050612-00265-ip-10-180-136-8.ec2.internal.warc.gz"}
|
https://hal.inria.fr/hal-01183943
|
# The number of distinct part sizes of some multiplicity in compositions of an Integer. A probabilistic Analysis
Abstract : Random compositions of integers are used as theoretical models for many applications. The degree of distinctness of a composition is a natural and important parameter. A possible measure of distinctness is the number $X$ of distinct parts (or components). This parameter has been analyzed in several papers. In this article we consider a variant of the distinctness: the number $X(m)$ of distinct parts of multiplicity $m$, which we call the $m$-distinctness. A first motivation is a question asked by Wilf for random compositions: what is the asymptotic value of the probability that a randomly chosen part size in a random composition of an integer $\nu$ has multiplicity $m$? This is related to $\mathbb{E}(X(m))$, which has been analyzed by Hitczenko, Rousseau and Savage. Here, we investigate, from a probabilistic point of view, the first full part, the maximum part size and the distribution of $X(m)$. We obtain asymptotically, as $\nu \to \infty$, the moments and an expression for a continuous distribution $\varphi$, the (discrete) distribution of $X(m,\nu)$ being computable from $\varphi$.
Document type:
Conference paper
Cyril Banderier and Christian Krattenthaler. Discrete Random Walks, DRW'03, 2003, Paris, France. Discrete Mathematics and Theoretical Computer Science, DMTCS Proceedings vol. AC, Discrete Random Walks (DRW'03), pp.155-170, 2003, DMTCS Proceedings
Domain:
Cited literature [13 references]
https://hal.inria.fr/hal-01183943
Contributor: Coordination Episciences Iam
Submitted on: Wednesday, August 12, 2015 - 09:08:34
Last modified on: Thursday, May 11, 2017 - 01:02:54
Document(s) archived on: Friday, November 13, 2015 - 11:38:23
### File
dmAC0115.pdf
Publisher files allowed on an open archive
### Identifiers
• HAL Id : hal-01183943, version 1
### Citation
Guy Louchard. The number of distinct part sizes of some multiplicity in compositions of an Integer. A probabilistic Analysis. Cyril Banderier and Christian Krattenthaler. Discrete Random Walks, DRW'03, 2003, Paris, France. Discrete Mathematics and Theoretical Computer Science, DMTCS Proceedings vol. AC, Discrete Random Walks (DRW'03), pp.155-170, 2003, DMTCS Proceedings. 〈hal-01183943〉
|
2018-04-21 04:22:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5855891108512878, "perplexity": 2468.865805452037}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125944982.32/warc/CC-MAIN-20180421032230-20180421052230-00278.warc.gz"}
|
https://www.physicsforums.com/threads/quick-continuity-question.393682/
|
# Quick Continuity question
1. Apr 9, 2010
### kevinlightman
1. The problem statement, all variables and given/known data
Prove that if $g:\mathbb{R}\rightarrow\mathbb{R}$ is continuous at $a$ then $f(x,y)=g(x)$ is continuous at $(a,b)$ $\forall b \in \mathbb{R}$
2. Relevant equations
3. The attempt at a solution
So we know
$\forall e>0 \ \exists d>0$ s.t. $\forall x \in \mathbb{R}$ where $|x-a|<d$ we have $|g(x) - g(a)|<e$
So I've said, since $\forall b \in \mathbb{R}$ we have $g(x)=f(x,y)$ and $g(a)=f(a,b)$, these can be substituted in, giving the expression we need except for the condition that $[(x-a)^2 + (y-b)^2]^{1/2}<d$.
This seems to be an incorrect cheat though, am I along the right lines or not?
2. Apr 9, 2010
### Office_Shredder
Staff Emeritus
You are looking at the right line of thought.
If |(x,y)-(a,b)|<d, what can you say about |x-a|?
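Spelling out the hint (my addition, not part of the original thread): the Euclidean distance dominates each coordinate distance, so the same $d$ from the one-variable definition works.

```latex
\[
|x-a| = \sqrt{(x-a)^2} \le \sqrt{(x-a)^2 + (y-b)^2} = |(x,y)-(a,b)| < d ,
\]
% hence $|f(x,y)-f(a,b)| = |g(x)-g(a)| < e$ whenever $|(x,y)-(a,b)| < d$.
```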
|
2018-02-26 02:28:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45440754294395447, "perplexity": 4351.400886632699}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891817908.64/warc/CC-MAIN-20180226005603-20180226025603-00145.warc.gz"}
|
https://www.rdocumentation.org/packages/caret/versions/6.0-70/topics/plot.train
|
plot.train
Plot Method for the train Class
This function takes the output of a train object and creates a line or level plot using the lattice or ggplot2 libraries.
Keywords
hplot
Usage
"plot"(x, plotType = "scatter", metric = x$metric[1], digits = getOption("digits") - 3, xTrans = NULL, nameInStrip = FALSE, ...) "ggplot"(data = NULL, mapping = NULL, metric = data$metric[1], plotType = "scatter", output = "layered", nameInStrip = FALSE, highlight = FALSE, ..., environment = NULL)
Arguments
x
an object of class train.
metric
What measure of performance to plot. Examples of possible values are "RMSE", "Rsquared", "Accuracy" or "Kappa". Other values can be used depending on what metrics have been calculated.
plotType
a string describing the type of plot ("scatter", "level" or "line" (plot only))
digits
an integer specifying the number of significant digits used to label the parameter value.
xTrans
a function that will be used to scale the x-axis in scatter plots.
data
an object of class train.
output
either "data", "ggplot" or "layered". The first returns a data frame while the second returns a simple ggplot object with no layers. The third value returns a plot with a set of layers.
nameInStrip
a logical: if there are more than 2 tuning parameters, should the name and value be included in the panel title?
highlight
a logical: if TRUE, a diamond is placed around the optimal parameter setting for models using grid search.
mapping, environment
unused arguments to make consistent with ggplot2 generic method
...
plot only: specifications to be passed to levelplot, xyplot, stripplot (for line plots). The function automatically sets some arguments (e.g. axis labels) but passing in values here will over-ride the defaults
Details
If there are no tuning parameters, or none were varied, an error is produced. If the model has one tuning parameter with multiple candidate values, a plot is produced showing the profile of the results over the parameter. Also, a plot can be produced if there are multiple tuning parameters but only one is varied.
If there are two tuning parameters with different values, a plot can be produced where a different line is shown for each value of the other parameter. For three parameters, the same line plot is created within conditioning panels/facets of the other parameter.
Also, with two tuning parameters (with different values), a levelplot (i.e. un-clustered heatmap) can be created. For more than two parameters, this plot is created inside conditioning panels/facets.
References
Kuhn (2008), "Building Predictive Models in R Using the caret Package" (http://www.jstatsoft.org/article/view/v028i05/v28i05.pdf)
See Also
train, levelplot, xyplot, stripplot, ggplot
• plot.train
• ggplot.train
Examples
## Not run:
library(caret)
library(klaR)
library(ggplot2)
rdaFit <- train(Species ~ .,
                data = iris,
                method = "rda",
                # resampling is specified via trControl; the original snippet
                # passed this as "control", which train() would instead
                # forward to the underlying model
                trControl = trainControl(method = "cv"))
plot(rdaFit)
plot(rdaFit, plotType = "level")

ggplot(rdaFit) + theme_bw()
## End(Not run)
Documentation reproduced from package caret, version 6.0-70, License: GPL (>= 2)
http://ehpaperwybb.streetgeeks.us/affect-of-earths-magnetic-field-on.html
# Effect of Earth's magnetic field
Without Earth's magnetic field, astronauts above the atmosphere would be exposed to particles that can rip through human bodies and damage DNA, potentially causing cancer. The magnetic field $\vec B$ is a vector quantity, so fields add vectorially: in an electron-beam experiment, the vector sum of $\vec B_{\text{earth}}$ and $\vec B_{\text{coils}}$ points in a different direction than either field alone, and the beam circles in the plane perpendicular to that total field. Earth also has a magnetic tail, an extension of the same familiar field we experience when using a compass, drawn out behind the planet by the solar wind. The magnetic field is an obstacle to the solar wind, but it is also a funnel, and the effect of the solar wind on Earth is consequently less uniform than on Mars and Venus.
Fortunately, on Earth we have two very effective lines of defence against this radiation: Earth's magnetic field and its atmosphere. The field itself is weak in everyday terms. Taking, for example, a $30^{\circ}$ angle between the field lines and an aircraft's body, the field strength is at most about $0.65\ \mathrm{G}$, a mere $6.5 \times 10^{-5}\ \mathrm{T}$. A study of the most recent near-reversals of Earth's magnetic field by an international team of researchers, including the University of Liverpool, found it unlikely that a full reversal will occur soon. Earth's magnetic field even does strange things to the Moon: it drives a dust-storm effect that is strongest at the lunar terminator, the dividing line between day and night.
Earth's magnetic field is omnipresent, and biological receptors can monitor it all the time, providing constant background information; the jungle-dwelling ancestors of chickens could have used it on their home range of about a square kilometre. Some authors go further and claim that geomagnetic fields, including Schumann resonances, interact with biomagnetic fields and affect our overall health, claims traced to books such as The Magnetic Effect and Magnetism and Its Effects on the Living System. On firmer ground, both increasing levels of CO2 and changes in Earth's magnetic field affect the upper atmosphere, including its charged portion, known as the ionosphere.
Earth's magnetic field arises from a dynamo effect created by electrical currents circulating in the core, and the intensity of the resulting field is greatest near the poles. The Sun has a direct effect on this field: research has shown in detail how rapidly the field, which acts like a shield protecting us from harsh solar wind and cosmic radiation, is changing, weakening over some parts of the world and strengthening over others. Although invisible, these changes are measurable. For comparison, the magnets at CERN produce fields 100,000 times Earth's, but those fields are constrained to CERN itself and have no effect on Earth's magnetic field.
## Effects of Earth's magnetic field
The magnetic fields used in laboratory studies of sleeping position are much, much stronger than Earth's magnetic field, and no one has ever shown that Earth's field has any effect on the brain, so you should feel free to sleep any way you want. Geometrically, Earth's magnetic field can be closely approximated by the field of a magnetic dipole positioned near the centre of the Earth; a dipole's orientation is defined by an axis, and the two points where the axis of the best-fitting dipole intersects the surface are called the north and south geomagnetic poles. Put another way, Earth is a huge bar magnet tilted about 11 degrees from its spin axis, with the geographic north pole being Earth's magnetic south pole and vice versa.
• The effect of gravitational forces on a planet's magnetic field has been well documented for two of Jupiter's moons, Io and Europa, and for a number of exoplanets.
• Earth's magnetic field is attributed to a dynamo effect of circulating electric current, but it is not constant in direction: rock specimens of different age in similar locations have different directions of permanent magnetization, giving evidence for 171 magnetic field reversals during the past 71 million years.
• Earth's magnetic field at the surface is roughly 0.5 gauss, or 0.05 mT. If the field strength diminished with the inverse square of the distance from the surface, it would seem the field should be negligible at 10,000 m altitude, yet it still affects an aircraft there (see the estimate after this list).
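A quick estimate resolves this: the relevant distance is measured from Earth's centre, and a dipole field falls off as $1/r^3$, so at 10,000 m altitude

$$\frac{B(10\ \mathrm{km})}{B(\text{surface})} \approx \left(\frac{R_E}{R_E + 10\ \mathrm{km}}\right)^{3} = \left(\frac{6371}{6381}\right)^{3} \approx 0.995,$$

and the field is essentially undiminished at airliner altitude.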
The dynamo works much like a dynamo light on a bicycle: motion of a conductor generates the field. The Sun's magnetic field, for its part, reverses its polarity roughly every eleven years, but the shift does not spark an increase in powerful solar storms or other events that could damage Earth. The oft-repeated claim that sleeping with your body cutting the geomagnetic field at right angles makes you highly emotional, while lying parallel to it cools you down, is anecdotal. Finally, the circulating charges in Earth's core generate magnetic forces that are not easily shielded, which is why the field extends well past Earth's surface.
http://hal.in2p3.fr/in2p3-00723477
# Measurement of electrons from beauty hadron decays in pp collisions at $\sqrt{s}$ = 7 TeV
Abstract : The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range $1 < p_{\rm T} < 8$ GeV/c with the ALICE experiment at the CERN LHC in pp collisions at a center of mass energy $\sqrt{s} = 7$ TeV using an integrated luminosity of 2.2 nb$^{-1}$. Electrons from beauty hadron decays were selected based on the displacement of the decay vertex from the collision vertex. A perturbative QCD calculation agrees with the measurement within uncertainties. The data were extrapolated to the full phase space to determine the total cross section for the production of beauty quark-antiquark pairs.
Document type :
Journal articles
Physics Letters B, Elsevier, 2013, 721, pp. 13-23. <10.1016/j.physletb.2013.01.069>
Domain :
http://hal.in2p3.fr/in2p3-00723477
Contributor: Emmanuelle Vernay
Submitted on : Friday, August 10, 2012 - 9:06:00 AM
Last modification on : Monday, August 26, 2013 - 3:57:30 PM
### Citation
B. Abelev, N. Arbor, G. Conesa Balbastre, J. Faivre, C. Furget, et al. Measurement of electrons from beauty hadron decays in pp collisions at $\sqrt{s}$ = 7 TeV. Physics Letters B, Elsevier, 2013, 721, pp. 13-23. <10.1016/j.physletb.2013.01.069>. <in2p3-00723477>
https://questions.examside.com/past-years/jee/question/let-ps-be-the-median-of-the-triangle-with-vertices-p2-2-q6-1-jee-main-2014-marks-4-rqp4gjpbqfxkv1uz.htm
1
### JEE Main 2014 (Offline)
Let $$PS$$ be the median of the triangle with vertices $$P(2, 2)$$, $$Q(6, -1)$$ and $$R(7, 3)$$. The equation of the line passing through $$(1, -1)$$ and parallel to $$PS$$ is:
A
$$4x + 7y + 3 = 0$$
B
$$2x - 9y - 11 = 0$$
C
$$4x - 7y - 11 = 0$$
D
$$2x + 9y + 7 = 0$$
## Explanation
Let $$P,Q,R,$$ be the vertices of $$\Delta PQR$$
Since $$PS$$ is the median, $$S$$ is mid-point of $$QR$$
So, $$S = \left( {{{7 + 6} \over 2},{{3 - 1} \over 2}} \right) = \left( {{{13} \over 2},1} \right)$$
Now, slope of $$PS$$ $$= {{2 - 1} \over {2 - {{13} \over 2}}} = - {2 \over 9}$$
Since, required line is parallel to $$PS$$ therefore slope of required line $$=$$ slope of $$PS$$
Now, equation of line passing through $$(1, -1)$$ and having slope $$- {2 \over 9}$$ is
$$y - \left( { - 1} \right) = - {2 \over 9}\left( {x - 1} \right)$$
$$9y + 9 = - 2x + 2$$
$$\Rightarrow 2x + 9y + 7 = 0$$
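As a quick numerical check of this result (a sketch in Python with sympy; not part of the original solution):

```python
from sympy import Rational, symbols, expand

x, y = symbols("x y")
P, Q, R = (2, 2), (6, -1), (7, 3)

# S is the midpoint of QR, since PS is the median from P
S = (Rational(Q[0] + R[0], 2), Rational(Q[1] + R[1], 2))  # (13/2, 1)
m = (P[1] - S[1]) / (P[0] - S[0])                         # slope of PS = -2/9

# Line through (1, -1) with slope m: (y + 1) - m*(x - 1) = 0
lhs = (y + 1) - m * (x - 1)
print(expand(9 * lhs))  # 2*x + 9*y + 7, i.e. option D
```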
2
### JEE Main 2013 (Offline)
The $$x$$-coordinate of the incentre of the triangle that has the coordinates of mid points of its sides as $$(0, 1) (1, 1)$$ and $$(1, 0)$$ is :
A
$$2 + \sqrt 2$$
B
$$2 - \sqrt 2$$
C
$$1 + \sqrt 2$$
D
$$1 - \sqrt 2$$
## Explanation
From the mid-points $$(0, 1)$$, $$(1, 1)$$ and $$(1, 0)$$, the vertices of the triangle are $$(0, 2)$$, $$(0, 0)$$ and $$(2, 0)$$, so the side lengths opposite these vertices are
$$a = 2,\,b = 2\sqrt 2 ,\,c = 2$$
and the corresponding $$x$$-coordinates are
$${x_1} = 0,\,{x_2} = 0,\,{x_3} = 2$$
Now, $$x$$-co-ordinate of incenter is given as
$${{a{x_1} + b{x_2} + c{x_3}} \over {a + b + c}}$$
$$\Rightarrow x$$-coordinate of incentre
$$= {{2 \times 0 + 2\sqrt 2 .0 + 2.2} \over {2 + 2 + 2\sqrt 2 }}$$
$$=$$ $${2 \over {2 + \sqrt 2 }} = 2 - \sqrt 2$$
3
### JEE Main 2013 (Offline)
A ray of light along $$x + \sqrt 3 y = \sqrt 3$$ gets reflected upon reaching the $$X$$-axis. The equation of the reflected ray is
A
$$y = x + \sqrt 3$$
B
$$\sqrt 3 y = x - \sqrt 3$$
C
$$y = \sqrt 3 x - \sqrt 3$$
D
$$\sqrt 3 y = x - 1$$
## Explanation
$$x + \sqrt 3 y = \sqrt 3$$ or $$y = - {1 \over {\sqrt 3 }}x + 1$$
Let $$\theta$$ be the angle which the line makes with the positive x-axis.
$$\therefore$$ $$\tan \theta = - {1 \over {\sqrt 3 }} = \tan \left( {\pi - {\pi \over 6}} \right)$$ or $$\theta = \pi - {\pi \over 6}$$
$$\therefore$$ $$\angle ABC = {\pi \over 6}$$; $$\therefore$$ $$\angle DBE = {\pi \over 6}$$
$$\therefore$$ the equation of the line BD is,
$$y = \tan {\pi \over 6}x + c$$ or $$y = {x \over {\sqrt 3 }} + c$$ ..... (1)
The line $$x + \sqrt 3 y = \sqrt 3$$ intersects the x-axis at $$B(\sqrt 3 ,0)$$ and, the line (1) passes through $$B(\sqrt 3 ,0)$$.
$$\therefore$$ $$0 = {{\sqrt 3 } \over {\sqrt 3 }} + c$$ or, c = $$-$$1
Hence, the equation of the reflected ray is,
$$y = {x \over {\sqrt 3 }} - 1$$ or $$y\sqrt 3 = x - \sqrt 3$$
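Equivalently, reflection in the $$X$$-axis just maps $$y \to -y$$, which gives the answer immediately (a quick check):

$$x + \sqrt 3 \left( { - y} \right) = \sqrt 3 \;\Rightarrow\; \sqrt 3 y = x - \sqrt 3$$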
4
### AIEEE 2012
If the line $$2x + y = k$$ passes through the point which divides the line segment joining the points $$(1, 1)$$ and $$(2, 4)$$ in the ratio $$3 : 2$$, then $$k$$ equals :
A
$${{29 \over 5}}$$
B
$$5$$
C
$$6$$
D
$${{11 \over 5}}$$
## Explanation
The point which divides the line segment joining the points (1, 1) and (2, 4) in the ratio 3 : 2 is
$$= \left( {{{3 \times 2 + 2 \times 1} \over {3 + 2}},{{3 \times 4 + 2 \times 1} \over {3 + 2}}} \right)$$
$$= \left( {{{6 + 2} \over 5},{{12 + 2} \over 5}} \right) = \left( {{8 \over 5},{{14} \over 5}} \right)$$
Since the line 2x + y = k passes through this point,
$$\therefore$$ $$2 \times {8 \over 5} + {{14} \over 5} = k$$ or $${{30} \over 5} = k$$ or, k = 6
https://www.gamedev.net/forums/topic/106635-how-to-post-question/
how to post question
Recommended Posts
This might be a really silly question, but how do you guys get your code examples in the nice white boxes when you submit or reply to a post? Thanks to anyone who can help. I think, therefore I am. I think?
Share on other sites
Use the forum source tag:
[source]
[/source]
The above would translate into this:
Your code
[edited by - michalson on July 30, 2002 7:58:11 AM]
Share on other sites
test of the code thing
this will not work
source
this should be code
/source
source
trying backslash
\source
Share on other sites
Are there any other neat things besides the source /source thing?
I think, therefore I am.
I think?
Share on other sites
Check out the Forum FAQ
https://curj.caltech.edu/category/2022/page/2/
Modeling and Testing of an Electrically Heated Hotspot
The ignition of flammable atmospheres due to a hot surface can cause industrial accidents. We investigate the constraints necessary to create an electrically heated hotspot capable of igniting n-hexane and air mixtures. The temperature profile of an electrically heated stainless steel disk (Ø 50.8 mm) was studied, with the requirement of reaching ignition temperatures (~1100K) at the hotspot. We utilized Abaqus/CAE to build thermal models of the disk and conduct a design study to determine parameters for the experiment. Based on these results, we developed an experimental setup in SolidWorks, and had the parts machined. We then assembled the experiment and passed various currents through the disk, measured the temperature primarily at the center of the disk over time at these various currents, and compared these results to those found in Abaqus. The results show that it is possible for a stainless steel disk of this size to reach ignition temperatures and cause ignition of a hydrogen-air mixture.
Author: Athena Kolli
California Institute of Technology
Mentors: Joseph Shepherd and Donner Schoeffler
Explosion Dynamics Laboratory, California Institute of Technology
Editor: Stephanie Chen
Introduction
Accidental ignition of a flammable atmosphere due to hot metal surfaces is a significant risk in industrial settings. Ignition in these environments can occur when the flammable gas is exposed to a metal surface heated to sufficiently high temperatures, which we predict to be around 1000 to 1100 K [1]. However, the conditions that lead to ignition are also likely dependent on the geometry of the heated surface, size of the region that reaches ignition temperatures (called the hotspot), the length of time that the hotspot is at ignition temperature, and other factors.
In previous years, the Explosion Dynamics Laboratory (EDL), has explored the conditions necessary for ignition of many different test articles including moving spheres, glow plugs, and horizontal cylinders. The most recent work done in thermal ignition has been in researching ignition conditions of electrically heated vertical stainless steel cylinders of various dimensions. As stated above, it was found that for these vertical cylinders, ignition of n-hexane mixtures occurred between 1000 and 1100 K [1]. However, these results may be characteristic of vertical cylinders and other previously explored geometries, so the next step in this line of research is to choose a new geometry with unique fluid mechanics for the test article, and see under which conditions ignition occurs. We decided to work with hotspots because they produce unique fluid mechanics when compared to all work previously done in the EDL, and also because they are of interest to the aerospace industry.
The purpose of the present work is to design and fabricate the test article for which future experiments investigating the thermal ignition of a hotspot can be performed. The goal is to develop and understand the dimensional and current requirements of a 50.8 mm diameter stainless steel disk and accompanying assembly capable of reaching ignition temperatures at the hotspot. We were motivated to develop a hotspot of this size because hotspots of this size regime are relevant to the interests of our funding agency, the Boeing Company.
The challenges involved in this project stem from the axisymmetric nature of the hotspot. Because the current is carried through the disk along the radial direction, the resistance and current vary along this path. As a result, elementary analytic heat transfer techniques cannot be used.
The three modes by which heat is transferred are conduction, convection, and radiation. In this project, we account for conduction and radiation, while convective effects are negligible. Heat conduction can be defined as the energy transfer within a single substance, due to a temperature gradient within the substance. In the case of the disk, while current is passed through its center, its outer edge will remain relatively cooler, and the heat will conduct in a radial direction. Convection is the transfer of heat due to the movement of a fluid caused by the tendency for hotter, and therefore less dense, materials to rise. In this case, convective effects are considered negligible because we are dealing with high temperatures, and convection is proportional to the surface temperature $T$ while radiation is proportional to $T^4$ [2]. Radiation is defined as the emission of energy as electromagnetic waves from a hotter body to its cooler surroundings. All objects above absolute zero emit some thermal radiation, but in our work, we focus on the heat lost from the surface of the disk due to radiation. Together, these two modes of heat transfer account for almost all of the energy being exchanged in the system.
In this project, we heat the disk by passing an electric current through it, causing resistive heating, also known as Joule heating. Joule’s law states that the power produced by Joule heating is directly proportional to the product of the conductor’s resistance and the current squared:
$p \propto RI^2$
The complication in this case arises when we consider the resistance of the disk, along the direction in which current is flowing. Resistance is defined by the following equation:
$R = \frac{\rho l}{A}$
where $\rho$ [Ω-m] is the resistivity, $l$ [m] is the length, and $A$ [m$^2$] is the cross-sectional area. As the current travels through the disk in the radial direction, the cross-sectional area changes, and thus the coupled thermal-electric calculations cannot be done easily by hand. Consequently, we utilized computer-aided engineering (CAE) software, Abaqus, to make these calculations.
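That said, the purely radial, isothermal resistance of the disk does have a closed form, obtained by integrating over thin annular shells, which is useful for sanity-checking the models. A minimal sketch in Python (the stem contact radius and the constant resistivity are illustrative assumptions, not the design values):

```python
import numpy as np

# Current flows radially from an inner contact radius r1 to the rim r2.
# Each annulus of width dr has cross section A = 2*pi*r*t, so
#   R = integral of rho / (2*pi*r*t) dr = rho * ln(r2/r1) / (2*pi*t)

rho = 7.2e-7    # resistivity of stainless steel, ohm-m (room-temperature value)
t   = 0.508e-3  # disk thickness, m (nominal design value)
r1  = 2.0e-3    # assumed stem contact radius, m (illustrative)
r2  = 25.4e-3   # disk outer radius, m

R = rho * np.log(r2 / r1) / (2 * np.pi * t)
print(f"radial resistance ~ {R * 1e3:.2f} mOhm")

I = 110.0       # applied current, A
print(f"power dissipated at {I:.0f} A ~ {I**2 * R:.1f} W")
```

With temperature-dependent resistivity and conduction into the stem, this estimate changes appreciably, which is exactly why the coupled problem was handed to Abaqus.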
Finally, we conducted a design study for the disk and assembly. In this study, we went through an incremental design process during which we altered the dimensions and materials of parts of the cross section to create a continuously decreasing radial temperature profile.
Methods
Validation of Abaqus/CAE
Abaqus/CAE was chosen to determine the dimensions and current for the experimental set up because this software offers an axisymmetric mode for solving heat transfer problems, allowing us to easily model the disk and make changes to its dimensions in order to achieve the desired hotspot temperature. The axisymmetric mode also allowed for our models to have a high level of detail while keeping the total number of nodes low. Additionally, Abaqus allowed us to use temperature dependent data for values such as thermal and electrical conductivity, and specific heat of the materials. In a quick design study, we looked at the results of the one dimensional resistive heating of a rectangular sheet of stainless steel and found that using temperature dependent data for thermal and electrical conductivity and specific heat results in temperatures as much as 70 K greater than those resulting from using constant data for these values (Figure 1) [3][4]. Thus, this study serves to show that using the axisymmetric mode in Abaqus to choose dimensions and current level for our test article and assembly provides us with quicker and more accurate results.
Next, we validated Abaqus for the coupled thermal electric mode that we planned to use by testing it against known solutions. We did this by comparing Python scripts with the correct known solutions to a model in Abaqus with identical initial conditions, boundary conditions and material properties.
The first case we considered is a zero-dimensional resistive heating case of a rectangular sheet of metal, in which current is applied through the ends, while the sides and top and bottom faces are electrically insulated (Figure 2A). Additionally, the top and bottom faces radiate thermal energy, while the ends and sides are thermally insulated. The Python script uses the energy balance equations for the block and models the change in energy over time by considering the input of energy due to the Joule heating and the energy loss due to radiation. The ordinary differential equation solver from the scipy package is the numerical method used to integrate the differential equation for temperature as a function of time.
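A minimal sketch of such a zero-dimensional script (the geometry, emissivity, and constant material properties below are illustrative assumptions):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Zero-dimensional energy balance for a resistively heated sheet:
#   m * c * dT/dt = I^2 * R - eps * sigma * A_rad * (T^4 - T_inf^4)

sigma = 5.67e-8               # Stefan-Boltzmann constant, W m^-2 K^-4
eps, T_inf = 0.5, 300.0       # emissivity (assumed) and ambient temperature, K

L, w, t = 0.05, 0.01, 0.5e-3  # illustrative sheet dimensions, m
rho_e = 7.2e-7                # electrical resistivity, ohm-m (held constant here)
rho_m, c = 7900.0, 500.0      # density kg/m^3 and specific heat J/(kg K)

R = rho_e * L / (w * t)       # end-to-end resistance
m = rho_m * L * w * t         # mass
A_rad = 2 * L * w             # only top and bottom faces radiate
I = 30.0                      # applied current, A

def dTdt(time, T):
    return (I**2 * R - eps * sigma * A_rad * (T**4 - T_inf**4)) / (m * c)

sol = solve_ivp(dTdt, (0.0, 300.0), [T_inf], max_step=1.0)
print(f"temperature after {sol.t[-1]:.0f} s: {sol.y[0, -1]:.0f} K")
```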
We then modeled a rectangular metal sheet of the equivalent dimensions and inputted the same material values, boundary conditions, and current in Abaqus. We compared the temperature values over time in the following plot, and saw that the Abaqus model result yields almost exactly the same results as the Python script.
The next case that we verified was the one dimensional resistive heating of a rectangular sheet of metal (Figure 3A). The sheet has a fixed temperature on its ends, is thermally insulated on its sides, and radiates thermal energy from its top and bottom. The current travels through the ends of the sheet, while the sides and top and bottom are electrically insulated. In this case, we must consider conduction along the direction of the current, and thus need to discretize the sheet into rectangular volumes such that temperature is now a function of time and space.
We then use Fourier's Law of Conduction and assume that the properties included are all temperature-independent. We then integrate the partial differential equation for temperature using the method of lines, stepping in time with the lsoda integrator (switching to the backward-differentiation formula, BDF, for stability), and analytically find the steady-state solution.
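A compact sketch of that discretization (constant properties; the grid, heating rate, and material values are illustrative assumptions, and solve_ivp's BDF option stands in for the lsoda/BDF integrator described above):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Method of lines for a resistively heated sheet: fixed-temperature ends,
# radiating top and bottom faces, uniform volumetric Joule heating.
N = 51                           # grid points along the sheet
L_sheet, t_sheet = 0.05, 0.5e-3  # length and thickness, m (illustrative)
dx = L_sheet / (N - 1)
k = 16.0                         # thermal conductivity, W/(m K)
rho_m, c = 7900.0, 500.0         # density, specific heat
eps, sigma, T_inf = 0.5, 5.67e-8, 300.0
q_vol = 5.0e7                    # Joule heating per unit volume, W/m^3 (assumed)

def rhs(time, T):
    dT = np.zeros_like(T)
    lap = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2               # conduction
    rad = 2.0 * eps * sigma * (T[1:-1]**4 - T_inf**4) / t_sheet  # both faces
    dT[1:-1] = (k * lap + q_vol - rad) / (rho_m * c)
    return dT                    # dT[0] = dT[-1] = 0 keeps the ends fixed

sol = solve_ivp(rhs, (0.0, 600.0), np.full(N, T_inf), method="BDF")
print(f"peak steady temperature ~ {sol.y[:, -1].max():.0f} K")
```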
The final case we checked was the one-dimensional (radial) resistive heating of a metal disk (Figure 4A). In this case, the analogous Abaqus model was created as an axisymmetric solid, so when working with the model, we only specified conditions on the rectangle outlined in yellow in the below image. The outer radius of the disk was fixed at 300 K, while the top and bottom sides of the disk radiate thermal energy. The current travels radially through the disk, while the top and bottom of the disk are electrically insulated. The Python script uses methods similar to those in the case of one-dimensional resistive heating of a rectangular sheet of metal.
In each case, we then plotted the solutions from Abaqus and the Python scripts and showed that they are reasonably close, thus verifying that Abaqus would provide accurate results for coupled thermal electric problems (Figures 2B, 3B, 4B).
Abaqus/CAE Design Study
The first model we analyzed was of a single stainless steel piece with the current passed into the bottom face of the thick stem (Figure 5). We chose to create these models using the axisymmetric mode, as the disk and flange are symmetric around the central axis. There are two problems with this design. Firstly, most of the heat is dissipated in the stem, and secondly, the hottest part of the piece is at the base of the stem. In contrast, we want the hottest part of the piece to be at the center of the disk to create the hotspot.
We then came up with a design in which a copper rod would grip the thinner stem of the stainless steel piece, in order to combat heat loss up the stem (Figure 6A). This, however, introduced the problem of the heat conducting away from the stainless steel disk where the copper was in contact with the disk. This caused a temperature profile with a peak temperature that was at some radius away from the center, rather than creating a central hotspot (Figure 6B).
To combat this issue, we replaced the top part of the copper piece with stainless steel, and shortened this piece such that it was not in contact with the stainless steel disk, greatly reducing conductive losses without increasing heat dissipation in the stem (Figure 7A). We proceeded with this technique but switched from rods to bars in the computer-aided design (CAD) phase in order to achieve better clamping on the small stainless steel stem, to reduce contact resistance. The critical dimension in this assembly is the cross sectional area of the bars/rods which determines the resistance of the pieces. Because we kept the cross sectional area consistent in the switch from rods to bars, we were able to continue using the axisymmetric mode in Abaqus to model the assembly.
Testing Set Up
The disk and bars that hold the stem are made of 303 stainless steel, and the other bars are made of 101 Copper. The flange is made of 6061 Aluminum, and finally the cable is 4/0 battery wire. Once the parts were assembled, thermocouples were welded to the center of and along the radius of the disk in order to take temperature readings of these spots over time.
Results
Design choices for the disk thickness and method of passing current through the disk were determined based on models created in Abaqus/CAE, an engineering software with the capability of doing thermal analysis of mechanical components and assemblies. The final disk thickness was chosen to be 0.508 mm, and the disk and assembly are shown below.
The parts were assembled and connected to the power supply, passing currents between 80 A and 120 A through the assembly. To measure the temperature, we spot-welded a thermocouple at the center of the disk, and recorded the temperature data over time. The temperature data at a few other points on the disk were measured using the same method, to get an idea of the temperature profile of the disk (Figure 14). These points were at the center, a 5.5 mm radius, and a 23.5 mm radius at 110 A (Figure 13).
Discussion
After comparing Abaqus models of the assembly with nominal dimensions to the results of the experiment, we see that the experiments yield much higher temperatures than the models at the same currents.
We include the results of the highest and lowest currents tested in the table above. For example, with 80 A, the lowest current that we tested, we recorded a maximum temperature of 777 K after running current for about three minutes, which is 189 K higher than what the Abaqus simulations predicted (Figure 14A). In order to match the experimental temperature that 80 A is producing, we must pass an additional 28 A in the Abaqus model.
One source of discrepancy in these results was the inconsistency in the disk thickness, as measured by an imperial micrometer (0.0001-0.001 inches). Due to the extremely thin nature of the disk and the necessity for the disk and stem to be machined out of the same piece of material, the resulting disk was of uneven thickness, and was at points significantly less than the desired thickness, ranging from 0.244 mm to 0.518 mm.
Another likely source for this discrepancy was the very thin crack that was found around the region of the stem at the top of the disk (Figure 15). Both of these inconsistencies with the nominal design likely contributed to the resulting higher temperatures as they caused a decrease in the cross sectional area of the current path, thus increasing resistive heating.
However, in order to prove that ignition due to a disk of about these dimensions in this setup was possible, we cut a piece of 0.305 mm thick stainless steel shim and spot welded it to a steel rod of the same dimensions as the stem in the initial design (Figure 16).
We were able to achieve ignition of a hydrogen-air mixture (29.6% H2 70.4% air) at 130 A. We modeled this case using the same methods used for the original disk, and Abaqus predicted that the temperature at the hot spot was around 861 K. Although this is a lot lower than expected ignition temperatures, this result supports the data above showing that Abaqus predicted temperatures on the order of a few hundred Kelvin lower than the actual temperature. In this case, the higher temperature in the experiment could be due to radiation from the stem being reabsorbed at the hotspot, resistance from the spot weld that joined the stem to the sheet, or deformation of the sheet increasing the thermal resistance of the test article.
Conclusions
In this work, we designed an electrically heated hot spot and measured the temperature over time at the hot spot and a few other points at various temperatures. With Abaqus, we were able to design a test article that was able to produce ignition temperatures. However, because of the repeated heating and cooling of the test article, the disk experienced deformation and eventually cracked, preventing us from performing an ignition experiment with the original disk. In future work, it will be important to consider the effects of deformation on the disk and the repeatability of the experiments. It may be beneficial to design an easier to manufacture test article so that data can be collected with test articles that have only been used a few times, minimizing the effects of the deformation on the resulting temperature.
Acknowledgments
This research project was made possible by the Caltech SURF Program, Explosion Dynamic Laboratory, and its funding agency, the Boeing Company. I would like to thank my mentor, Dr. Joe Shepherd, for allowing me the opportunity to work in the Explosion Dynamics Laboratory this summer. I would also like to thank my co-mentor Donner Schoeffler for providing me with the knowledge and guidance to be successful in my project. Finally, I would like to thank both Bob and Toni Perpall and the SFP Office for funding and facilitating my SURF this summer, and giving me the opportunity to conduct research as an undergraduate student.
References
1. J. Melguizo-Gavilanes, L. R. Boeck, R. Mével, and J. E. Shepherd, “Hot surface ignition of stoichiometric hydrogen-air mixtures,” International Journal of Hydrogen Energy, vol. 42, no. 11, pp. 7393–7403, Mar. 2017, doi: 10.1016/j.ijhydene.2016.05.095.
2. John H. Lienhard IV and John H. Lienhard V, A Heat Transfer Textbook, 5th ed. Cambridge, Massachusetts: Phlogiston Press, 2020.
3. C. S. Kim, “Thermophysical Properties of Stainless Steels,” Argonne National Laboratory, ANL-75-55, 1975.
4. C. Y. Ho and T. K. Chu, “Electrical Resistivity and Thermal Conductivity of Nine Selected AISI Stainless Steels,” American Iron and Steel Institute, CINDAS Report 45, 1977.
5. J. Adler, “Ignition of a combustible stagnant gas layer by a circular hot-spot,” null, vol. 3, no. 2, pp. 359–369, Jun. 1999, doi: 10.1088/1364-7830/3/2/309.
6. S. Jones and J. Shepherd, “Thermal ignition of n-hexane air mixtures by vertical cylinders,” International Symposium on Hazards, Prevention, and Mitigation of Industrial Hazards, 2020.
7. “What is a type K Thermocouple?,” Omega Engineering, https://www.omega.com/en-us/resources/k-type-thermocouples (accessed Sep. 16, 2021).
Mars 2020 Sampling and Caching Trending
The Mars 2020 Perseverance Rover aims to explore the surface of Mars to analyze its habitability, seek biosignatures of past life, and obtain and cache rock and regolith samples (the loose, fragmented rock material overlying the bedrock). This article describes tools designed to aid with the processing of data to and from the Rover. The processed data provides a plethora of scientific insight into the Rover’s Sampling and Caching Subsystem (SCS) health and performance. Additionally, these tools allow for the identification of important trends and help to ensure that the commands sent to the Rover have been reviewed, approved, and accounted for. Overall, these tools aid the Mars 2020 mission to seek biosignatures of past life by helping engineers better understand the Rover’s operations as it caches rock and regolith samples on Mars.
Author: Annabel R. Gomez
California Institute of Technology
Mentors: Kyle Kaplan, Julie Townsend
Jet Propulsion Laboratory, California Institute of Technology
Editor: Audrey DeVault
Abstract
This paper details a few of the many Python-based Sampling and Caching (SNC) tools developed to aid in the processing of the uplink and down link data to and from the Mars 2020 Perseverance Rover. To specify, uplink refers to the data sent to the Rover and downlink refers to the data sent back from the Rover. The data processed provides a plethora of scientific insight to ensure the health and performance of the Rover’s Sampling and Caching Subsystem (SCS). One of the problems with dealing with such data, however, is that it can be difficult to immediately identify important trends. Thus, I worked to help develop three separate tool tickets. The first ticket identifies and records each unique Rover motor motion event and its respective characteristics. The second ticket helps to make the process of storing Engineering Health and Accountability logs more efficient. Finally, the third ticket, unlike the first two, sorts through the data that is sent to the Rover instead of the data that is sent from the Rover. More specifically, it helps to ensure that all the commands sent to the Rover have been properly reviewed, approved, and accounted for.
Introduction
The Mars 2020 mission aims to explore the surface of Mars using the Perseverance Rover. In July of 2020, Perseverance was sent to Jezero Crater to analyze its habitability, seek biosignatures of past life, and obtain and cache rock and regolith samples, the loose, fragmented rock material overlying the bedrock. The Rover landed on Mars in February of 2021.
One of the main systems that will carry out these critical tasks is the SCS designed to collect and cache rock core and regolith samples and prepare abraded rock surfaces. To abrade a rock surface simply means to scrape away or remove rock material. The overall SCS is composed of two Robotic Systems that work together to perform the sampling and caching functions. One of the systems is a Robotic Arm and Turret on the outside of the Rover and the second system is an Adaptive Caching Assembly on the inside of Perseverance [1, 2].
The SNC team is responsible for performing tactical downlink assessments to monitor the SCS’s health and performance. In addition, they oversee the initiation of strategic analyses and long-term trending to assess the subsystem’s execution across multiple sols, or days on Mars (1 sol is about 24 hours and 39 minutes).
On any given sol, there is a plethora of data generated by the SCS. This data is then collated into informative reports that are used to complete each sol’s tactical downlink analysis. SNC engineers then assess these reports on downlink and report on the status of the subsystem. New tools have been developed to further analyze and dissect this data, both by identifying motion events and processing data into summary products stored on the cloud and to help ensure that only approved and tested sequences and commands are sent to the Rover.
Objectives
During downlink from the Rover, it is often difficult to identify unique data trends immediately. Thus, one of the main goals of this research is to establish a sampling-focused trending dashboard infrastructure containing plots of motor data from specific activities over the course of the Mars 2020 mission. This facilitates the identification of trends that can be compared to the on-hand data collected from testbed operations back on Earth and/or create new plots and tables to gain a better understanding of the Rover and its behavior. Identifying such trends will pinpoint potential issues and will allow for changes and/or corrections to the SCS’s operations where necessary.
SNC accomplishes these long-term objectives by creating a trending web dashboard filled with plots and other metrics that can be easily used by others working on the project. To establish a baseline for how the SCS should behave during any given activity, multiple instances of the same activity must be collected and compared over time. Prioritized motions and metrics, identified through coordination with Systems Engineers from each of the SCS domains, are stored over the course of the mission and populated on the dashboard.
One challenge, however, is stale data, or data that is not being processed or updated in a timely manner. In a large mission, such as Mars 2020, it is important that data is received and organized quickly so that other teams can complete their respective tasks. Additionally, it is important that long processing times for users are avoided so that data for spacecraft planning can be assessed in a timely manner [2]. This means that data needs to be preprocessed, put in the desired format, and stored so that it can be accessed locally by the dashboard as needed. Once preprocessed, data needs to be updated when new data is downlinked. To combat this challenge, I worked to find a solution that queries and stores the data on a regular basis.
The principal tasks of my research include architecting and developing Python-based tools, or algorithms, for the SNC downlink process. This begins with the receipt of data from the vehicle and continues through the tactical downlink analysis, where it is decided if SNC is GO/NO-GO for the sol’s uplink planning, and ends with updates to any strategic products for analyses. More specifically, there are three focal tools needed to make the trending succeed: 1. A tool to identify new data once available. 2. A tool to preprocess and store new data for trending. 3. A tool to update trending dashboard plots with the latest preprocessed data. With these tools in place, displayed trends in the Rover’s operations will help indicate how the SNC operations should proceed for the sol. This paper covers three specific tickets related to this overall development effort.
Ticket 1: Motion Events
Many of the tickets, or projects, that I worked on this summer focus on the second goal mentioned above: develop a tool to process and store new data for trending. This first ticket contributes to a program that further analyzes and dissects Rover motion events to better identify their characteristics, each representing a unique motion request by a single motor. Previously, motion events were recorded during a specified epoch, however, the method for determining that epoch led to inconsistent results. The additions I made to this motion detection program now allow it to detect the start and end time of each motor movement within an epoch and properly record it as a separate motion, as is described in Figure 3. It is important that we collect and filter each starting motion by identifying all the mode Event Verification Records (EVRs), a print-statement style telemetry, that correspond to the start of the motion. Using this tool, once the begin and end times of each motion event are identified, subsequent tools can extract a great deal of mechanism data from the specified time period and generate needed statistics and plots. The main learning curve with this ticket was getting used to the Jupyter Notebook and GitHub environment as well as learning how to program collaboratively to build off and/or debug someone else’s code.
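A hypothetical sketch of that pairing logic (the EVR fields and names here are invented for illustration and do not reflect the flight telemetry schema):

```python
from collections import defaultdict

def extract_motion_events(evrs):
    """Pair start/end EVRs into per-motor motion events.

    evrs: time-ordered iterable of dicts with keys 'time', 'motor', and
    'kind' in {'MOTION_START', 'MOTION_END'} (illustrative schema).
    Returns {motor: [(t_start, t_end), ...]}.
    """
    open_starts = {}            # motor -> start time of an in-progress motion
    events = defaultdict(list)
    for evr in evrs:
        motor, kind, t = evr["motor"], evr["kind"], evr["time"]
        if kind == "MOTION_START":
            open_starts[motor] = t
        elif kind == "MOTION_END" and motor in open_starts:
            events[motor].append((open_starts.pop(motor), t))
    return events
```

With the begin and end times in hand, downstream tools can pull mechanism data for exactly those windows.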
Ticket 2: EHA Logs
The second ticket I worked on adds to and edits a previously existing program designed to store Engineering Health and Accountability (EHA) logs with each downlink pass and store them on the Operational Cloud Storage (OCS) sol by sol. To specify, EHA logs are essentially large CSV files with each column representing a timestamp or a different EHA channel’s value over a given time period. Essentially, EHA is a channelized telemetry collected aboard the Rover and processed on the ground; each channel represents a different type of data and is recorded at a specified rate. Additionally, the OCS is a hosted cloud storage location where operations data is stored and maintained.
The storage of the EHA logs is useful for JMP-based analyses of flight data and serves as a steppingstone to generating summary CSV files from flight data. JMP is a program with a GUI used for data analysis and visualization; most often used to analyze sampling data. The main challenge with this ticket was that there was too much data to collect and load at once. As a result, the program kept crashing since each query request exceeded Python’s memory limit. To combat this, I developed a function that takes the desired time range of data and breaks it up into shorter time segments so that smaller sections of data are written to the CSV file in the correct order, as is described in Figure 4. In general, preprocessing the data in this form helps visualize the system state and is also helpful for other tools to analyze flight data in a compatible format.
Ticket 3: Sequence Tracking Tools
The first two tickets are on the downlink side of the SNC operations, processing and analyzing data sent back from the Rover. To gain further experience and knowledge in the many sampling and caching tools the SNC team develops, my next ticket aids uplink operations: processing data before it is sent to the Rover. When it comes to sending commands to the Rover, there are many sequences written, and all their versions must be maintained and kept track of. There are two places where sequences are stored: the uplink store and the sequence library. The sequence library is where requests are stored, such as robotic commands for sampling. Once the sequences have been tested and approved, they are sent to the uplink store, where they are then delivered to the Rover. Specifically, in this ticket, I helped make progress towards ensuring that all of the commands that are sent to the Rover have been properly reviewed, approved, and accounted for. As seen in Figure 5, there are several steps to complete for this task. To start, I developed a program that obtains a list of the sequences merged into the library along with their version information and specifications. Then, I developed another program that pulls a sequence list down from the uplink store to get a list of what has been sent there. In the upcoming steps (3 and 4), a cross-reference will be created and used as a check to compare the two lists, and to compare sequence contents when the versions are the same, as sketched below. This ensures the sequences sent to the Rover match those version-controlled in the Sequence Library.
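A hypothetical sketch of that cross-reference (the data shapes are assumptions for illustration):

```python
def cross_reference(library, uplink_store):
    """Compare sequence versions; both arguments map name -> version."""
    report = {"unapproved": [], "version_mismatch": [], "ok": []}
    for name, version in uplink_store.items():
        if name not in library:
            report["unapproved"].append(name)          # never merged/approved
        elif library[name] != version:
            report["version_mismatch"].append((name, library[name], version))
        else:
            report["ok"].append(name)                  # safe to uplink
    return report
```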
Conclusion
This research helped to establish an efficient sampling-focused trending dashboard infrastructure and improved the ability to identify trends in Rover data from specific activities over the course of the Mars 2020 mission. I developed and improved upon three main Python-based tickets involved in the uplink and downlink of Rover data. The first ticket identifies and records each individual, unique Rover motor motion event and its respective characteristics, such as the start and end times. The second ticket helps to make the process of storing EHA logs more time-efficient to better visualize the system state. Finally, the third ticket, unlike the first two, works with the data that is sent to the Rover instead of from the Rover. More specifically, it works to ensure that all of the commands that are sent to the Rover have been properly reviewed, approved, and accounted for. Overall, the tickets I worked on will continue to aid the Mars 2020 mission of seeking biosignatures of past life by helping SNC engineers better understand the Rover’s operations as it caches rock and regolith samples on Mars.
References
1. Anderson, R. C., et al. “The Sampling and Caching Subsystem (SCS) for the Scientific Exploration of Jezero Crater by the Mars 2020 Perseverance Rover.” Space Science Reviews, Springer Netherlands, link.springer.com/article/10.1007/s11214-020-00783-7.
2. Allwood, A. C., Walter, M. R., et al. “Mars 2020 Mission Overview.” Space Science Reviews, Springer Netherlands, link.springer.com/article/10.1007/s11214-020-00762-y.
3. “New Tools to Automatically Generate Derived Products upon Downlink Passes for Mars Science Laboratory Operations.” IEEE Xplore, ieeexplore.ieee.org/document/9172647.
Acknowledgments
I would like to thank my incredible mentors Kyle Kaplan and Julie Townsend as well as Sawyer Brook for providing research guidance, support, and technical assistance; Michael Lashore for taking the time to allow me to shadow his SNC downlink shift; Frank Hartman for finding and funding this opportunity; and Caltech Northern California Associates and Mary and Sam Vodopia for the fellowship to participate in Caltech’s Summer Undergraduate Research Fellowship (SURF).
The research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, and was sponsored by the SURF internship program and the National Aeronautics and Space Administration (80NM0018D0004).
Robotic Arm Control Through Algorithmic Neural Decoding and Augmented Reality Object Detection
The ability of a robotic arm to be controlled only by human thought is made possible by the use of a brain-machine interface (BMI). This project sought to improve BMI user practicality by revising the Andersen Lab’s BMI robotic arm control system [1, 2]. It aimed to answer two questions: Does augmented reality benefit BMIs? Can BMIs be controlled by decoded verbs from the posterior parietal cortex brain region? This project found that augmented reality significantly improved the functionality of BMIs and that using motor imagery was the most effective way to control BMI motion.
Author: Sydney Hunt
Duke University
Mentors: Richard Andersen, Jorge Gamez de Leon
California Institute of Technology
Editor: Alex Bardon
Introduction
BMIs function by connecting brain activity to a machine through the use of sensors that are surgically implanted in an individual’s brain. When these sensors are connected to a computer, the electrical activity of the brain is measured. This neuronal information is then used to control an external device through a ‘neural decoding task,’ or a machine learning algorithm that programs the external device to perform a specific function when the computer reads a specific electrical activity from the brain.
Neural Sensors
Early versions of intracortical BMIs in humans focused on using signals recorded from arrays implanted in the primary motor cortex (M1) or posterior parietal cortex (PPC) of tetraplegic humans [3]. Data from these studies showed that both the M1 and PPC can encode the body part a person plans to move and a person’s nonmotor intentions [4]. Yet sometimes, it is difficult to predict whether the PPC or M1 will provide stronger signals, as these strengths vary depending on an individual’s brain and the location of the neurally implanted sensors.
In this project, JJ, a Photoshop artist, steak lover, and father of four who became tetraplegic in a flying go-kart accident, had two 4×4 mm array sensors [5] surgically implanted in his brain. The first sensor was implanted in his M1 to detect the neural information that controls voluntary movement and its execution [3]. The second sensor was implanted in his PPC to detect his planned movement, spatial reasoning, and attention [4, 6].
JJ’s electrical brain activity was measured by utilizing these two sensors in combination with the NeuroPort Biopotential Signal Processing System [7]. This collected neuronal information was later used to control a Kinova JACO robotic arm [8].
Neural Decoding Task for JACO Robotic Arm Control
A limited amount of research has found that thinking about action verbs (e.g., “grasp”) could be decoded from M1 or PPC, in addition to the desired bodily movement [4]. This decoded information could potentially reduce the energy needed to control a BMI system; JJ could simply think of the word “grasp” rather than imagining the grasping action–which includes reaching, grabbing, and picking up–when controlling the JACO Robotic Arm [9].
A neural decoding task was therefore developed to translate JJ’s measured electrical brain activity to an action the Kinova JACO robotic arm would perform (e.g. “grasp”). This neural decoding task trained a machine learning algorithm to correctly associate JJ’s electrical brain activity to robotic arm movement (see Figures 1-2). As a result, JJ was able to control the Kinova JACO robotic arm’s trajectory with only his thoughts.
Object Selection via Augmented Reality
The Andersen Lab’s BMI interface was further modernized by introducing augmented reality technology into its system. This incorporation allowed JJ to independently select any object in a 360-degree space using the object detection features of the Microsoft HoloLens2 [10] augmented reality device camera (see Figure 3). BMI practicality was consequently increased; the spatial limitations of object selection when using a BMI were reduced since predefined objects or predefined locations were no longer required in this BMI system.
Results
This BMI system allowed JJ to independently select, pick up, move, and set down a water bottle without verbalization (see Figures 4-5). Consequently, this cognitive-based neural prosthesis reduced the social anxiety paralyzed individuals may experience when using voice-controlled prostheses.
The functioning BMI system also demonstrated that augmented reality benefitted the Andersen Lab’s BMI system by reducing its spatial limitations. It allowed JJ to select objects of his choice to be manipulated by an external device, rather than be constrained to using predefined objects and predefined start/end locations.
Data analysis of the neural decoding task showed that motor imagery of action verbs (e.g. JJ imagining himself grasping something) was better represented than both commanding action verbs (e.g. JJ said the word “grasp” aloud) and imagining action verbs (e.g. JJ imagined saying the word “grasp”) in the area of the M1 and PPC where JJ’s particular arrays were implanted (see Figure 2). When comparing the decoding accuracy between the two areas of the brain, the PPC had a higher decoding accuracy of motor imagery than the M1.
Future Applications
The Andersen Lab had previously developed a neural decoding task that accurately decoded JJ's neuron signals when he imagined moving his thumb in the up, down, forward, or backward direction. Due to time constraints, this existing thumb-control decoder was used in this project to let JJ control the Kinova JACO robotic arm (see Figures 4-5).
Future work can include implementing the neural decoding task developed in this project into the Andersen Lab’s BMI. Using this strong PPC motor imagery representation of action verbs may make it easier for JJ to navigate the Kinova JACO robotic arm. Rather than meticulously imagining thumb movement, JJ can just think about performing the grasping action and the Kinova JACO robotic arm will execute a hardcoded movement associated with the “grasp” action verb [9].
Additional improvements can also include the implementation of multiple unique game objects appearing on the HoloLens2 screen, providing JJ with visual feedback that multiple real-world objects were being detected. Therefore, JJ could theoretically create a queue of objects to move, which would better represent real-life scenarios.
Acknowledgments
Thank you to the following individuals and groups for their support. I greatly appreciate your mentorship, guidance, and confidence in me both during and after this project.
Andersen Lab; California Institute of Technology; Caltech WAVE Fellows Program; Friends; Family; JJ; Jorge Gamez de Leon; Richard Andersen; Tianqiao and Chrissy Chen Institute for Neuroscience; Tyson Aflalo.
References
1. Andersen, R. (2019). The Intention Machine. Scientific American, 320(4), 24–31. https://doi.org/10.1038/scientificamerican0419-24
2. Katyal, K. D., Johannes, M. S., Kellis, S., Aflalo, T., Klaes, C., McGee, T. G., Para, M. P., Shi, Y., Lee, B., Pejsa, K., Liu, C., Wester, B. A., Tenore, F., Beaty, J. D., Ravitz, A. D., Andersen, R. A., & McLoughlin, M. P. (2014). A collaborative BCI approach to autonomous control of a prosthetic limb system. 2014 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 1479–1482. https://doi.org/10.1109/SMC.2014.6974124
3. Hochberg, L. R., Serruya, M. D., Friehs, G. M., Mukand, J. A., Saleh, M., Caplan, A. H., Branner, A., Chen, D., Penn, R. D., & Donoghue, J. P. (2006). Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature, 442(7099), 164–171. https://doi.org/10.1038/nature04970
4. Andersen, R. A., Aflalo, T., & Kellis, S. (2019). From thought to action: The brain-machine interface in posterior parietal cortex. Proceedings of the National Academy of Sciences, 116(52), 26274–26279. https://doi.org/10.1073/pnas.1902276116
5. “NeuroPort Array IFU.” NeuroPort Array PN 4382, 4383, 6248, and 6249 Instructions for Use, 29 June 2018, https://blackrockneurotech.com/research/wp-content/ifu/LB-0612_NeuroPort_Array_IFU.pdf.
6. Aflalo, T., Kellis, S., Klaes, C., Lee, B., Shi, Y., Pejsa, K., Shanfield, K., Hayes-Jackson, S., Aisen, M., Heck, C., Liu, C., & Andersen, R. (2015). Decoding motor imagery from the posterior parietal cortex of a tetraplegic human. Science, 348(6237), 906–910. https://doi.org/10.1126/science.aaa5417
7. Blackrock Microsystems, LLC. NeuroPort Biopotential Signal Processing System User’s Manual, 2018. Accessed on: May 31, 2021. [Online]. Available: https://blackrockneurotech.com/research/wp-content/ifu/LB-0175_NeuroPort_Biopotential_Signal_Processing_System_Users_Manual.pdf
8. KINOVA JACO™ Prosthetic robotic arm User Guide, 2018. Accessed on: May 31, 2021. [Online]. Available: https://github.com/Kinovarobotics, https://www.kinovarobotics.com/sites/default/files/PS-PRA-JAC-UG-INT-EN%20201804-1.0%20%28KINOVA%20JACO%E2%84%A2%20Prosthetic%20robotic%20arm%20user%20guide%29_0.pdf
9. T. Aflalo, C. Y. Zhang, E. R. Rosario, N. Pouratian, G. A. Orban, & R. A. Andersen (2020). A Shared Neural Substrate for Action Verbs and Observed Actions in Human Posterior Parietal Cortex. Science Advances. https://www.vis.caltech.edu/documents/17984/Science_Advances_2020.pdf
10. Microsoft HoloLens: https://docs.microsoft.com/es-es/windows/mixed-reality/
Collapse of Fuzzy Dark Matter in Simulations
Dark matter builds the backbone for galaxy formation in modern cosmological simulations. While the current dark matter paradigm is able to capture the large-scale structure of the Universe, it predicts more small-scale structure than is actually observed. This issue may be resolved with the theory of fuzzy dark matter, which is made of ultra-light, wavelike particles. However, the behavior of fuzzy dark matter has yet to be fully understood in present theoretical models. Using 3D numerical simulations, we performed a comprehensive analysis of spherical collapse in fuzzy dark matter halos. Then, we made use of a semi-analytic treatment to predict how likely it is for the halo to collapse, and thus explore small-scale structure formation in the Universe.
Author: Shalini Kurinchi-Vendhan
California Institute of Technology
Mentors: Xiaolong Du, Andrew J. Benson
Carnegie Theoretical Astrophysics Center
Editor: Suchitra Dara
Why study fuzzy dark matter?
Since its discovery by Vera Rubin [1], dark matter has presented physicists and astronomers with some of the greatest questions in cosmology: What is the dark matter particle made of? What is its mass? And how does it interact with other matter in order to form the large-scale structure of the Universe?
Modern cosmological simulations work to capture the behavior of dark matter through numerical models. These N-body simulations are extremely powerful tools: they can broadly simulate any dynamical system of particles under the influence of physical forces. In these simulations, dark matter builds the backbone for the formation of galaxies, which are expected to form at the centers of dark matter clumps called halos [2]. Due to the force of gravity, it is thought that these dark matter halos would (1) grow as they pulled surrounding gas into their cores until (2) they collapsed, and (3) stabilized into the first galaxies. Scientists are interested in studying this process of halo collapse.
The most prevalent model in cosmological simulations is the cold dark matter cosmology, where dark matter is made of cold, slow-moving particles. The cold dark matter model has been successful in explaining observations with respect to the large-scale structure formation of the Universe. Despite its overall success, however, the cold dark matter model fails in two small-scale problems. It predicts the existence of more dwarf galaxies than observed in the real Universe, giving rise to the "missing satellites" crisis [3, 4, 5]. Moreover, it leads to discrepancies in the density profiles of these galaxies which cause their cores to appear cuspy or steep rather than flat. This is known as the "core-cusp" problem [6]. Though these problems are indeed small-scale, they are indicative of some missing component in the cold dark matter model and represent a gap in our current understanding of dark matter.
These tensions lead physicists to hypothesize the existence of fuzzy dark matter. This type of dark matter is composed of ultra-light particles called 'axions' [5, 7]. Compared to cold dark matter, these particles behave in a quantum fashion according to Schrödinger's equation, giving the dark matter wavelike properties that manifest on galactic scales. Like the cold dark matter model, it is able to reproduce the large-scale structure of the Universe, as desired. Most interestingly, the fuzzy dark matter model suppresses small-scale structure by leading to less halo formation. It thus alleviates the "missing satellites" problem [8, 9]. Moreover, it predicts flat cores in the center of dark matter halos, as indicated by observations. Thus, the fuzzy dark matter model demonstrates several promising characteristics toward understanding dark matter as it behaves in the real Universe.
Nevertheless, modeling fuzzy dark matter on subgalactic scales is challenging because of the need for sufficiently high resolutions in cosmological simulations [5, 8]. Standard N-body methods, which are adequate in the cold dark matter case, do not suffice here; instead, solving the Schrödinger equation results in a complex wave function which oscillates rapidly in time and has interference in space, requiring high resolutions in both dimensions. Still, several works have been able to solve the Schrödinger equation and thus test the spherical collapse of fuzzy dark matter halos in their cosmological code (see for example [11]).
One notable area of active research, however, is in modeling the quantum effect which distinguishes fuzzy dark matter from cold dark matter. Arising from the uncertainty principle, quantum pressure in fuzzy dark matter can lead to a minimum size for a collapsing halo. While new studies are being done to understand how quantum pressure affects the collapse of dark matter halos [12], they are often constrained to simple one-dimensional problems in hydrodynamical simulations. This approach does not work well in the case of shell-crossing, when the inner shells of a halo come close to each other and experience repulsion. Solving the three-dimensional Schrödinger-Poisson equation can thus provide more insight to the behavior of fuzzy dark matter halos during their formation.
Purpose of this work
In this way, a more comprehensive analysis of how quantum pressure will affect the collapse of dark matter halos, and thus structure formation in the Universe, is needed. In this work, we use a three-dimensional treatment of fuzzy dark matter to study the effect of quantum pressure in the formation of dark matter halos. Not only would we like to determine how dark matter halos collapse in this model, but also when:
• How do the dark matter particles evolve? We will look at the evolution of the velocities and densities of dark matter particles over time.
• When does the dark matter halo collapse? We would like to determine when the fuzzy dark matter halo forms given the initial amplitude and size in the early Universe.
• How does the halo stabilize? Since the fuzzy dark matter model involves an additional quantum pressure term, this can potentially alter the way in which the halo reaches a state at which it neither expands nor collapses.
We can then predict the halo abundance in the fuzzy dark matter model, particularly on the low-mass end, in a halo mass function (see Press–Schechter theory in [13]). Ultimately, we would like to compare our results with those of the cold dark matter paradigm in order to assess the potential of fuzzy dark matter as an alternative theory for dark matter. This can allow us to have a greater understanding of how dark matter halos may have developed, and thus shaped the eventual formation of galaxies.
How to simulate fuzzy dark matter
We ran 3-D numerical simulations to model the evolution of a dark matter halo over time, in both the cold and fuzzy cases. Unlike previous studies, we focused on a single halo so that we could render it with a large number of particles in a small volume to get high resolutions in our results.
Setting initial conditions
The very first step is to set up appropriate initial conditions for the halo that we are simulating. This means setting its shape and growth at a very early time in the Universe.
Consider a smooth field of particles at the mean density of the Universe. We can perturb the density field at the position of the halo so that it has a higher central concentration of dark matter, called the overdensity. The overdensity of a dark matter halo is how dense it is compared to the mean density of the Universe. The initial amplitude of this overdensity is what sets its growth, and causes the halo to collapse. Meanwhile, the size of the halo is determined by the initial mass of the particles inside of the region of the overdensity.
While we can extract the positions of the particles from their density distribution, we need to use the following continuity equation [14] to get the radial velocities of the particles:
$$\frac{\partial \delta}{\partial t} = -\nabla \cdot \mathbf{v},$$

where $\delta$ is the overdensity and $\mathbf{v}$ is the velocity field.
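As a minimal illustration of this step (assuming a spherically symmetric, uniform "top-hat" overdensity, for which the equation can be integrated by hand), the sketch below computes the implied initial radial velocities; all numbers are placeholders rather than our actual simulation parameters:

```python
# Minimal sketch: initial radial velocities from the linearized continuity
# equation d(delta)/dt = -div(v), assuming a uniform spherical overdensity.
# All values are illustrative placeholders, not the simulation's parameters.
import numpy as np

r = np.linspace(0.01, 1.0, 50)   # radii inside the perturbed region (arbitrary units)
ddelta_dt = 0.2                  # assumed growth rate of the overdensity

# With spherical symmetry, (1/r^2) d(r^2 v)/dr = -d(delta)/dt integrates
# (using v(0) = 0) to v(r) = -(r/3) d(delta)/dt: a linear infall profile.
v_r = -(r / 3.0) * ddelta_dt

print(v_r[:5])                   # negative values => infall toward the center
```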
Thus, we can calculate the initial position and velocity vectors of the N-body particles in the simulation. In the current iteration of our work, we present four sets of simulations for the cold and fuzzy cases each, specifically for halos with two different masses, and with both low and high initial overdensity amplitudes. The table of initial parameter values is shown below in Table 1.
These conditions result in a total of eight simulations to compare.
Running the simulations
We evolve each halo using the cold dark matter paradigm as a basis for comparison. Here, we use the simulation suite called GADGET [15]. This gravitational N-body code is widely-used in cosmological simulations that are based on the cold dark matter paradigm.
Then, we use the same set of initial conditions to run a spectral code for fuzzy dark matter, developed by Du et al. [16]. It numerically solves the three dimensional Schrödinger-Poisson equation for the wavefunction that describes the evolution of fuzzy dark matter.
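For readers who want the equations, such codes solve the coupled system commonly written in the standard Schrödinger–Poisson form (our notation here; the code's comoving formulation differs in detail):

$$i\hbar \frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2 \psi + m\Phi\,\psi, \qquad \nabla^2 \Phi = 4\pi G\,(\rho - \bar{\rho}), \qquad \rho \propto |\psi|^2,$$

where $\psi$ is the wavefunction describing the fuzzy dark matter, $m$ is the ultra-light particle mass, $\Phi$ is the gravitational potential, and $\bar{\rho}$ is the mean density.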
We run each simulation from an early epoch (redshift z = 100) to the present day.
Results of the simulations
Now, we can look at how the different simulated halos evolve in the fuzzy model, versus in the cold dark matter case.
Basic “story” of the halo
One way to observe the collapsing process is through tracing the radius of the halo, since it provides a sense of how the overdensity grows or becomes smaller over time. This is shown in Figure 1. The typical “story” of the cold dark matter halos can be summarized in three steps:
1. The halo expands along with the Universe.
2. Due to gravity, the halo ‘turns around’ at a maximum radius and begins to shrink and collapse.
3. Eventually, the halo stabilizes at an equilibrium radius.
While the fuzzy dark matter halos roughly follow this path, they do so with several differences. Not only do they collapse at later times, but they also stabilize at larger radii than their cold dark matter counterparts. In some cases, the halo expands again. For the high-mass, low-initial-amplitude run, the fuzzy dark matter halo does not even finish collapsing before the present day.
In this way, the collapsing process is delayed and suppressed in the fuzzy model, compared to the cold dark matter case.
Fuzzy dark matter halos “collapse less”
We can also see the suppression of collapse in the density profiles of the fuzzy dark matter halos.
The final overdensities of the different simulated halos are shown in Figure 2, with respect to the distance from the halo centers. While the cold dark matter profiles continue to increase toward the center of the halo, the densities of the fuzzy dark matter halos flatten out. Once again, we see that halos cannot collapse to the same extent in the fuzzy dark matter case as in the cold model.
This suppression is likely due to the additional influence of quantum pressure in fuzzy dark matter, which causes there to be a minimum requirement for halo collapse to occur. But what is that requirement?
What does it take to collapse?
In other words, we would like to know the critical overdensity needed for a fuzzy dark matter halo to collapse.
As demonstrated in Figure 1, we can identify the time of collapse for each of the simulated halos by tracing their radial evolutions. Then, we can calculate the overdensity of each halo at that point in time. Figure 3 compares the critical overdensities from the fuzzy and cold models as a ratio, for each simulation set at different mass halos. At low masses, the fuzzy halos need to attain a much higher density than the cold dark matter halos in order to collapse; meanwhile, the threshold for collapse is similar between both models at the high-mass end.
These results are in line with the prediction that suppression from quantum pressure is greater at smaller scales. However, we can expect fuzzy dark matter and cold dark matter to behave similarly at large scales. But what do these critical overdensities really tell us about small-scale structure formation in the Universe?
Small-scale structure in the fuzzy Universe
To be able to predict the numbers of satellite galaxies in the fuzzy model, compared to the cold dark matter case, we need to determine a halo mass function. This function describes the number density of halos we can expect at different masses.
Using the relation between the critical overdensity of collapse and halo mass from Figure 3, we can statistically determine the halo mass function. This is accomplished using the semi-analytic model called Galacticus [17]. We show the resulting halo mass functions for the cold and fuzzy dark matter models in Figure 4. Again, both models agree at the high-mass, large-scale end. Meanwhile, whereas the cold dark matter model predicts an abundance of low-mass halos, the fuzzy model has less small-scale structure.
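For reference, the halo mass function in its standard Press–Schechter form [13] is

$$\frac{dn}{dM} = \sqrt{\frac{2}{\pi}}\,\frac{\bar{\rho}}{M^{2}}\,\frac{\delta_c}{\sigma(M)}\left|\frac{d\ln \sigma}{d\ln M}\right|\exp\!\left(-\frac{\delta_c^{2}}{2\sigma^{2}(M)}\right),$$

where $\delta_c$ is the critical overdensity for collapse (mass-dependent in the fuzzy model, as in Figure 3) and $\sigma(M)$ is the rms density fluctuation smoothed on mass scale $M$; the Galacticus calculation builds on this framework with further refinements.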
What did we learn?
In this work, we ran 3-D simulations of spherical collapse in the fuzzy and cold dark matter models. With our results, we were able to determine the halo mass functions, and thus demonstrate the ability of the fuzzy model to cause less small-scale structure formation to occur. We find that fuzzy dark matter is able to suppress halo collapse by:
1. delaying collapse,
2. leading to less compact core-formation in terms of larger stabilizing radii and less concentrated central densities,
3. and requiring a higher critical overdensity for collapse to even occur.
In this way, the fuzzy dark matter model can indeed address the small-scale problems of the prevalent cold dark matter paradigm. In terms of the “missing satellites” crisis, the halo mass function shows that the fuzzy model leads to fewer low-mass galaxies. Moreover, its flatter overdensity profiles can potentially resolve the “core-cusp” problem.
But there is still a lot to explore!
While the overdensity profiles and halo mass function are promising in terms of solving the problems of cold dark matter, it would be insightful to compare our simulated results with observations of the real Universe. This would allow us to test how well the fuzzy dark matter model can capture galaxy formation, compared to the current paradigm.
Moreover, although we altered the initial density and mass of the simulated halos in this work, it is also possible to test different masses of the fuzzy dark matter particle. Comparing the results of these simulations with observations might help us constrain the mass of dark matter!
To investigate deeper, we can also monitor the quantum pressure energy that is influencing the collapse of the fuzzy dark matter halos. Moreover, running more simulations at different halo masses can improve the precision of our halo mass function.
Beyond the scope of this work, it would also be exciting to explore what happens to the fuzzy dark matter halos after they collapse. In other words, how does this alternative model affect the actual galaxies that form?
Looking forward, understanding the spherical collapse of different dark matter models is an important step to figuring out galaxy-formation in the Universe.
Acknowledgements
S.K. thanks Xiaolong Du and Andrew Benson for advising and supporting this work. She is also grateful to Gwen Rudie and the Carnegie Astrophysics Summer Student Internship Program (CASSI) for providing this research opportunity. Finally, the author thanks the Caltech SURF program for funding her project.
References
1. Rubin V. C., 1983, Scientific American, 248, 96
2. Vogelsberger M., Marinacci F., Torrey P., Puchwein E., 2019, Cosmological Simulations of Galaxy Formation (arXiv:1909.07976)
3. Klypin A., Kravtsov A. V., Valenzuela O., Prada F., 1999, The Astrophysical Journal, 522, 82–92
4. Moore B., Ghigna S., Governato F., Lake G., Quinn T., Stadel J., Tozzi P., 1999, The Astrophysical Journal, 524, L19–L22
5. Hu W., Barkana R., Gruzinov A., 2000, Physical Review Letters, 85, 1158–1161
6. de Blok W. J. G., 2010, Advances in Astronomy, 2010, 789293
7. Marsh D. J., 2016, Physics Reports, 643, 1–79
8. Schive H.-Y., Chiueh T., Broadhurst T., 2014, Nature Physics, 10, 496–499
9. Marsh D. J. E., Silk J., 2013, Monthly Notices of the Royal Astronomical Society, 437, 2652–2663
10. Nori M., Baldi M., 2018, Monthly Notices of the Royal Astronomical Society, 478, 3935–3951
11. Schwabe B., Gosenca M., Behrens C., Niemeyer J. C., Easther R., 2020, Physical Review D, 102
12. Sreenath V., 2019, Physical Review D, 99
13. Bond J. R., Cole S., Efstathiou G., Kaiser N., 1991, The Astrophysical Journal, 379, 440
14. Binney J., Tremaine S., 2008, Galactic Dynamics: Second Edition
15. Springel V., Pakmor R., Zier O., Reinecke M., 2021, Monthly Notices of the Royal Astronomical Society, 506, 2871–2949
16. Du X., Schwabe B., Niemeyer J. C., Bürger D., 2018, Physical Review D, 97
17. Benson A. J., 2012, New Astronomy, 17, 175–197
https://gmatclub.com/forum/in-the-sequence-1-2-2-an-an-an-1-an-156740.html
# In the sequence 1, 2, 2, …, $$a_n$$, …, $$a_n = a_{n-1} \cdot a_{n-2}$$
Intern
Joined: 03 Dec 2012
Posts: 38
In the sequence 1, 2, 2, …, an, …, an = an-1 • an-2. [#permalink]
In the sequence 1, 2, 2, …, $$a_n$$, …, $$a_n = a_{n-1}* a_{n-2}$$. The value of $$a_{13}$$ is how many times the value of $$a_{11}$$?
(A) 2
(B) 2^3
(C) 2^32
(D) 2^64
(E) 2^89
Disclaimer: I have used the Search Box Before Posting. I used the first sentence of the question or a string of words exactly as they show up in the question below for my search. I did not receive an exact match for my question.
Source: Veritas Prep; Book 04
Chapter: Homework
Topic: Algebra
Question: 93
Page: 226
Solution: PDF Page 17 of 18
Edition: Third
My Question: Please provide an explanation on how to arrive at the answer.
Originally posted by hb on 26 Jul 2013, 11:02.
Last edited by hb on 26 Jul 2013, 11:30, edited 2 times in total.
Math Expert
Joined: 02 Sep 2009
Posts: 51097
Re: In the sequence 1, 2, 2, …, an, …, an = an-1 • an-2. [#permalink] 26 Jul 2013, 11:26
hb wrote:
In the sequence 1, 2, 2, …, an, …, an = an-1 • an-2. The value of a13 is how many times the value of a11?
(A) 2 (B) 23 (C) 232 (D) 264 (E) 289
Comments: The n, n-1, n-2, 13, 11 mentioned in the question are subscripts above. I could not figure out how to show them as subscript while writing the formula here.

In the sequence 1, 2, 2, …, $$a_n$$, …, $$a_n = a_{n-1}* a_{n-2}$$. The value of $$a_{13}$$ is how many times the value of $$a_{11}$$?
(A) 2
(B) 2^3
(C) 2^32
(D) 2^64
(E) 2^89
For this kind of question, it's almost always a good idea to write down the first few terms:
$$a_1=1=2^0$$
$$a_2=2=2^1$$
$$a_3=a_2*a_1=1*2=2^1$$
$$a_4=a_3*a_2=2*2=2^2$$
$$a_5=a_4*a_3=4*2=2^3$$
$$a_6=a_5*a_4=8*4=2^5$$
$$a_7=a_6*a_5=32*8=2^8$$
If you notice, the exponents form a Fibonacci sequence: {0, 1, 1, 2, 3, 5, 8, ...} (the Fibonacci sequence is a sequence where each subsequent number is the sum of the previous two).
So, it will continue as follows: {0, 1, 1, 2, 3, 5, 8, 5+8=13, 8+13=21, 13+21=34, 21+34=55, 34+55=89, 55+89=144, ...}
From above we have that $$a_{11}=2^{55}$$ and $$a_{13}=2^{144}$$.
$$\frac{a_{13}}{a_{11}}=\frac{2^{144}}{2^{55}}=2^{89}$$
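If you want to double-check the exponent pattern programmatically, here's a quick illustrative Python snippet (not part of the original solution):

```python
# The exponents of 2 follow a Fibonacci-style rule: a_n = a_{n-1} * a_{n-2}
# means exponent(n) = exponent(n-1) + exponent(n-2), starting from 0 and 1.
e = [0, 1]                # exponents for a_1 = 2^0 and a_2 = 2^1
for _ in range(11):       # extend the list up to a_13
    e.append(e[-1] + e[-2])

print(e[12] - e[10])      # exponent of a_13 / a_11 -> prints 89
```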
##### General Discussion
Intern
Joined: 18 Jun 2013
Posts: 7
Re: In the sequence 1, 2, 2, …, an, …, an = an-1 • an-2. [#permalink] 25 Sep 2013, 10:16
Damn it! The original post of the question drove me crazy for a good 30 min. I tried it both ways: just trying to figure out the pattern and calculate a12, and solving for every value in the sequence until a13. Still, I could not get even close to the answer choices. I almost always keep the page scrolled just enough so that I can read only the question, and not the answers or any explanation by mistake.
It was a blunder this time. After spending all that time, I figured out there were typos, and then I calculated the correct answer well within 1.5 min on my first go. Thanks Bunuel! You just saved me. Phewwwww..
EMPOWERgmat Instructor
Status: GMAT Assassin/Co-Founder
Affiliations: EMPOWERgmat
Joined: 19 Dec 2014
Posts: 13058
Location: United States (CA)
Re: In the sequence 1, 2, 2, …, an, …, an = an-1 • an-2. [#permalink] 20 Feb 2018, 15:20
Hi All,
The pattern in this question is rarer than the ones that you'll likely see in sequence questions on the Official GMAT: the pattern is based on "2 raised to a power"...
Since the first two terms are 1 and 2, and we're told to MULTIPLY the prior 2 terms in the sequence to get the next term in the sequence, the next few terms are...
3rd term = 2 = 2^1
4th term = 4 = 2^2
5th term = 8 = 2^3
6th term = 32 = 2^5
7th term = 256 = 2^8
From here, the pattern can be redefined as "add up the EXPONENTS of the prior 2 terms"; in this way, you can map out the remaining terms in the sequence much faster...
8th term = 2^13
9th term = 2^21
10th term = 2^34
11th term = 2^55
12th term = 2^89
13th term = 2^144
We're essentially asked for the value of (13th term)/(11th term)....
(2^144)/(2^55) = 2^89
GMAT assassins aren't born, they're made,
Rich
https://aaronberk.ca/publication/dgd-icassp/
# Deep generative demixing: Error bounds for demixing subgaussian mixtures of Lipschitz signals
### Abstract
Generative neural networks (GNNs) have gained renown for efficaciously capturing intrinsic low-dimensional structure in natural images. Here, we investigate the subgaussian demixing problem for two Lipschitz signals, with GNN demixing as a special case. In demixing, one seeks identification of two signals given their sum and prior structural information. Here, we assume each signal lies in the range of a Lipschitz function, which includes many popular GNNs as a special case. We prove a sample complexity bound for nearly optimal recovery error that extends a recent result of Bora et al. (2017) from the compressed sensing setting with gaussian matrices to demixing with subgaussian ones. Under a linear signal model in which the signals lie in convex sets, McCoy & Tropp (2014) have characterized the sample complexity for identification under subgaussian mixing. In the present setting, the signal structure need not be convex. For example, our result applies to a domain that is a non-convex union of convex cones. We support the efficacy of this demixing model with numerical simulations using trained GNNs, suggesting an algorithm that would be an interesting object of further theoretical study.
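As a rough formalization of the setup described above (our illustrative notation; the paper's precise measurement model may differ), one observes

$$y = x_1 + x_2 + e, \qquad x_i = G_i(z_i),$$

where each $G_i$ is a Lipschitz function (for instance, a trained generative network), $z_i$ is a low-dimensional latent code, and $e$ is noise; the goal is to recover $x_1$ and $x_2$ from $y$ using the structural priors $G_1, G_2$.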
Publication
ICASSP 2021 (submitted)
https://inlabru-org.github.io/inlabru/articles/web/2d_lgcp_distancesampling.html
## Introduction
We’re going to estimate distribution and abundance from a line transect survey of dolphins in the Gulf of Mexico. These data are also available in the R package dsm (where they go under the name mexdolphins). In inlabru the data are called mexdolphin.
## Setting things up
```r
library(inlabru)
library(INLA)
library(ggplot2)
```
## Get the data
We’ll start by loading the data and extracting the mesh (for convenience).
```r
data(mexdolphin, package = "inlabru")
mesh <- mexdolphin$mesh
```

Plot the data (the initial code below is just to get rid of tick marks):

```r
noyticks <- theme(
  axis.text.y = element_blank(),
  axis.ticks = element_blank()
)
noxticks <- theme(
  axis.text.x = element_blank(),
  axis.ticks = element_blank()
)

ggplot() +
  gg(mexdolphin$ppoly) +
  gg(mexdolphin$samplers, color = "grey") +
  gg(mexdolphin$points, size = 0.2, alpha = 1) +
  noyticks +
  noxticks +
  theme(legend.key.width = unit(x = 0.2, "cm"), legend.key.height = unit(x = 0.3, "cm")) +
  theme(legend.text = element_text(size = 6)) +
  guides(fill = FALSE) +
  coord_equal()
```
## Spatial model with a half-normal detection function
The samplers in this dataset are lines, not polygons, so we need to tell inlabru about the strip half-width, W, which in the case of these data is 8. We start by plotting the distances and histogram of frequencies in distance intervals:
```r
W <- 8

ggplot(data.frame(mexdolphin$points)) +
  geom_histogram(aes(x = distance),
    breaks = seq(0, W, length = 9),
    boundary = 0, fill = NA, color = "black"
  ) +
  geom_point(aes(x = distance), y = 0, pch = "|", cex = 4)
```

We need to define a half-normal detection probability function. This must take distance as its first argument and the linear predictor of the sigma parameter (which we will call lsig) as its second:

```r
hn <- function(distance, lsig) {
  exp(-0.5 * (distance / exp(lsig))^2)
}
```

Specify and fit an SPDE model to these data using a half-normal detection function form. We need to define a (Matern) covariance function for the SPDE:

```r
matern <- inla.spde2.pcmatern(mexdolphin$mesh,
  prior.sigma = c(2, 0.01),
  prior.range = c(50, 0.01)
)
```
We need to now separately define the components of the model (the SPDE, the Intercept and the detection function parameter lsig)
```r
cmp <- ~ mySPDE(main = coordinates, model = matern) +
  lsig(1) + Intercept(1)
```
… and the formula, which describes how these components are combined to form the linear predictor (remembering that we need an offset due to the unknown direction of the detections!):
```r
form <- coordinates + distance ~ mySPDE +
  log(hn(distance, lsig)) +
  Intercept + log(2)
```
Then fit the model, passing both the components and the formula (previously the formula was constructed invisibly by inlabru), and specify integration domains for the spatial and distance dimensions:
```r
fit <- lgcp(
  components = cmp,
  mexdolphin$points,
  samplers = mexdolphin$samplers,
  domain = list(
    coordinates = mesh,
    distance = INLA::inla.mesh.1d(seq(0, 8, length.out = 30))
  ),
  formula = form
)
```
Look at the SPDE parameter posteriors
```r
spde.range <- spde.posterior(fit, "mySPDE", what = "range")
plot(spde.range)
spde.logvar <- spde.posterior(fit, "mySPDE", what = "log.variance")
plot(spde.logvar)
```
Predict spatial intensity, and plot it:
```r
pxl <- pixels(mesh, nx = 100, ny = 50, mask = mexdolphin$ppoly)
pr.int <- predict(fit, pxl, ~ exp(mySPDE + Intercept))

ggplot() +
  gg(pr.int) +
  gg(mexdolphin$ppoly) +
  gg(mexdolphin$samplers, color = "grey") +
  gg(mexdolphin$points, size = 0.2, alpha = 1) +
  noyticks +
  noxticks +
  theme(legend.key.width = unit(x = 0.2, "cm"), legend.key.height = unit(x = 0.3, "cm")) +
  theme(legend.text = element_text(size = 6)) +
  guides(fill = FALSE) +
  coord_equal()
```
Predict the detection function and plot it, to generate a plot like the one below. Here, we should make sure that it doesn’t try to evaluate the effects of components that can’t be evaluated using the given input data. Here, we’re only providing distances and no spatial coordinates, so we cannot evaluate the spatial random field in this predict() call. We can specify this by providing a vector of component names to include in the prediction calculations, here only “lsig”, with include = "lsig". See ?predict.bru for more information.
```r
distdf <- data.frame(distance = seq(0, 8, length = 100))
dfun <- predict(fit, distdf, ~ hn(distance, lsig), include = "lsig")
plot(dfun)
```
The average detection probability within the maximum detection distance is estimated to be 0.7134929.
We can look at the posterior for expected number of dolphins as usual:
```r
predpts <- ipoints(mexdolphin$ppoly, mexdolphin$mesh)
Lambda <- predict(fit, predpts, ~ sum(weight * exp(mySPDE + Intercept)))
Lambda
#>       mean       sd   q0.025     q0.5   q0.975   median mean.mc_std_err
#> 1 261.3017 66.11277 174.9504 249.0343 435.6058 249.0343        6.611277
#>   sd.mc_std_err
#> 1      6.241738
```
We can also include the Poisson randomness about the expected number. In this case, it turns out that you need lots of posterior samples, e.g. 2,000, to smooth out the Monte Carlo error in the posterior, and this takes a little while to compute:
```r
Ns <- seq(50, 450, by = 1)
Nest <- predict(fit, predpts,
  ~ data.frame(
    N = Ns,
    density = dpois(Ns,
      lambda = sum(weight * exp(mySPDE + Intercept))
    )
  ),
  n.samples = 2000
)

Nest$plugin_estimate <- dpois(Nest$N, lambda = Lambda$mean)

ggplot(data = Nest) +
  geom_line(aes(x = N, y = mean, colour = "Posterior")) +
  geom_line(aes(x = N, y = plugin_estimate, colour = "Plugin"))
```

## Hazard-rate Detection Function

Try doing this all again, but use this hazard-rate detection function model:

```r
hr <- function(distance, lsig) {
  1 - exp(-(distance / exp(lsig))^-1)
}
```

Solution:

```r
formula1 <- coordinates + distance ~ mySPDE +
  log(hr(distance, lsig)) +
  Intercept + log(2)

fit1 <- lgcp(
  components = cmp,
  mexdolphin$points,
  samplers = mexdolphin$samplers,
  domain = list(
    coordinates = mesh,
    distance = INLA::inla.mesh.1d(seq(0, 8, length.out = 30))
  ),
  formula = formula1
)
```

Plots:

```r
spde.range <- spde.posterior(fit1, "mySPDE", what = "range")
plot(spde.range)
spde.logvar <- spde.posterior(fit1, "mySPDE", what = "log.variance")
plot(spde.logvar)

pxl <- pixels(mesh, nx = 100, ny = 50, mask = mexdolphin$ppoly)
pr.int1 <- predict(fit1, pxl, ~ exp(mySPDE + Intercept))
ggplot() +
  gg(pr.int1) +
  gg(mexdolphin$ppoly) +
  gg(mexdolphin$samplers, color = "grey") +
  gg(mexdolphin$points, size = 0.2, alpha = 1) +
  noyticks +
  noxticks +
  theme(legend.key.width = unit(x = 0.2, "cm"), legend.key.height = unit(x = 0.3, "cm")) +
  theme(legend.text = element_text(size = 6)) +
  guides(fill = FALSE) +
  coord_equal()

distdf <- data.frame(distance = seq(0, 8, length = 100))
dfun1 <- predict(fit1, distdf, ~ hr(distance, lsig))
plot(dfun1)

predpts <- ipoints(mexdolphin$ppoly, mexdolphin$mesh)
Lambda1 <- predict(fit1, predpts, ~ sum(weight * exp(mySPDE + Intercept)))
Lambda1
#>       mean       sd   q0.025     q0.5   q0.975   median mean.mc_std_err
#> 1 328.7458 93.74092 177.9023 309.3045 570.0777 309.3045        9.374092
#>   sd.mc_std_err
#> 1      7.700364

Ns <- seq(50, 650, by = 1)
Nest1 <- predict(
  fit1, predpts,
  ~ data.frame(
    N = Ns,
    density = dpois(Ns,
      lambda = sum(weight * exp(mySPDE + Intercept))
    )
  ),
  n.samples = 2000
)
Nest1$plugin_estimate <- dpois(Nest1$N, lambda = Lambda1$mean)

ggplot(data = Nest1) +
  geom_line(aes(x = N, y = mean, colour = "Posterior")) +
  geom_line(aes(x = N, y = plugin_estimate, colour = "Plugin"))
```
## Comparing the models
```r
deltaIC(fit1, fit)
#>   Model       DIC Delta.DIC
#> 1   fit -802.2352  0.000000
#> 2  fit1 -800.1403  2.094873

# Look at the goodness-of-fit of the two models in the distance dimension
bc <- bincount(
  result = fit,
  observations = mexdolphin$points$distance,
  breaks = seq(0, max(mexdolphin$points$distance), length = 9),
  predictor = distance ~ hn(distance, lsig)
)
attributes(bc)$ggp

bc1 <- bincount(
  result = fit1,
  observations = mexdolphin$points$distance,
  breaks = seq(0, max(mexdolphin$points$distance), length = 9),
  predictor = distance ~ hr(distance, lsig)
)
attributes(bc1)$ggp
```
## Fit Models only to the distance sampling data
Half-normal first
```r
formula <- distance ~ log(hn(distance, lsig)) + Intercept
cmp <- ~ lsig(1) + Intercept(1)
dfit <- lgcp(
  components = cmp,
  mexdolphin$points,
  domain = list(distance = INLA::inla.mesh.1d(seq(0, 8, length.out = 30))),
  formula = formula,
  options = list(bru_initial = list(lsig = 1, Intercept = 3))
)
detfun <- predict(dfit, distdf, ~ hn(distance, lsig))
```

Hazard-rate next:

```r
formula1 <- distance ~ log(hr(distance, lsig)) + Intercept
cmp <- ~ lsig(1) + Intercept(1)
dfit1 <- lgcp(
  components = cmp,
  mexdolphin$points,
  domain = list(distance = INLA::inla.mesh.1d(seq(0, 8, length.out = 30))),
  formula = formula1
)
detfun1 <- predict(dfit1, distdf, ~ hr(distance, lsig))
```
Compare detection function models by DIC:
```r
deltaIC(dfit1, dfit)
#>   Model       DIC Delta.DIC
#> 1  dfit -8.626852  0.000000
#> 2 dfit1 -6.512819  2.114033
```
Plot both lines on the histogram of observations. First, scale the lines to have the same area as that of the histogram. Half-normal:
```r
hnline <- data.frame(distance = detfun$distance, p = detfun$mean, lower = detfun$q0.025, upper = detfun$q0.975)
wts <- diff(hnline$distance)
wts[1] <- wts[1] / 2
wts <- c(wts, wts[1])
hnarea <- sum(wts * hnline$p)
n <- length(mexdolphin$points$distance)
scale <- n / hnarea
hnline$En <- hnline$p * scale
hnline$En.lower <- hnline$lower * scale
hnline$En.upper <- hnline$upper * scale
```
Hazard-rate:
```r
hrline <- data.frame(distance = detfun1$distance, p = detfun1$mean, lower = detfun1$q0.025, upper = detfun1$q0.975)
wts <- diff(hrline$distance)
wts[1] <- wts[1] / 2
wts <- c(wts, wts[1])
hrarea <- sum(wts * hrline$p)
n <- length(mexdolphin$points$distance)
scale <- n / hrarea
hrline$En <- hrline$p * scale
hrline$En.lower <- hrline$lower * scale
hrline$En.upper <- hrline$upper * scale
```
Combine lines in a single object for plotting
```r
dlines <- rbind(
  cbind(hnline, model = "Half-normal"),
  cbind(hrline, model = "Hazard-rate")
)
```
Plot without the 95% credible intervals:

```r
ggplot(data.frame(mexdolphin$points)) +
  geom_histogram(aes(x = distance), breaks = seq(0, 8, length = 9), alpha = 0.3) +
  geom_point(aes(x = distance), y = 0.2, shape = "|", size = 3) +
  geom_line(data = dlines, aes(x = distance, y = En, group = model, col = model))
```

Plot with the 95% credible intervals (without taking the count rescaling into account):

```r
ggplot(data.frame(mexdolphin$points)) +
  geom_histogram(aes(x = distance), breaks = seq(0, 8, length = 9), alpha = 0.3) +
  geom_point(aes(x = distance), y = 0.2, shape = "|", size = 3) +
  geom_line(data = dlines, aes(x = distance, y = En, group = model, col = model)) +
  geom_ribbon(
    data = dlines, aes(x = distance, ymin = En.lower, ymax = En.upper, group = model, col = model, fill = model),
    alpha = 0.2, lty = 2
  )
```
http://www.zentralblatt-math.org/zmath/en/advanced/?q=an:1227.35053
Zbl 1227.35053
Mu, Chunlai; Zeng, Rong; Zhou, Shouming
Life span and a new critical exponent for a doubly degenerate parabolic equation with slow decay initial values.
(English)
[J] J. Math. Anal. Appl. 384, No. 2, 181-191 (2011). ISSN 0022-247X
Summary: We investigate the behavior of the positive solution of the Cauchy problem for the equation $$u_t- \text{div}\big(|\nabla u^m|^{p-2}\nabla u^m\big)=u^q$$ with initial value decaying at infinity, and give a new secondary critical exponent for the existence of global and nonglobal solutions. Furthermore, the large time behavior and the life span of solutions are also studied.
MSC 2000:
*35B33 Critical exponents
35B44
35K65 Parabolic equations of degenerate type
35K59
Keywords: blow-up; global existence; critical exponent; large time behavior; life span; secondary critical exponent
https://risk.asmedigitalcollection.asme.org/energyresources/article/143/4/042306/1086548/Numerical-Investigation-of-Fuel-Property-Effects
## Abstract
In this study, lean mixed-mode combustion is numerically investigated using computational fluid dynamics (CFD) in a spark-ignition engine. A new E30 fuel surrogate is developed using a neural network model with matched octane numbers. A skeletal mechanism is also developed by automated mechanism reduction and by incorporating a NOx submechanism. A hybrid approach that couples the G-equation model and the well-stirred reactor model is employed for turbulent combustion modeling. The developed CFD model is shown to predict pressure and apparent heat release rate (AHRR) traces well compared with experiment. Two types of combustion cycles (deflagration-only and mixed-mode cycles) are observed. The mixed-mode cycles feature early flame propagation and subsequent end-gas auto-ignition, leading to two distinctive AHRR peaks. The validated CFD model is then employed to investigate the effects of NOx chemistry. The NOx chemistry is found to promote auto-ignition through the residual gas, while the deflagration phase remains largely unaffected. Sensitivity analysis is finally performed to understand the effects of fuel properties, including heat of vaporization (HoV) and laminar flame speed (SL). An increased HoV tends to suppress auto-ignition through charge cooling, while the impact of HoV on flame propagation is insignificant. In contrast, an increased SL is found to significantly promote both flame propagation and end-gas auto-ignition. The promoting effect of SL on auto-ignition is not a direct chemical effect; it is rather caused by an advancement of the combustion phasing, which increases compression heating of the end-gas.
## Introduction
Lean combustion is beneficial to spark-ignition (SI) engine operation due to the higher efficiency and lower emissions than conventional stoichiometric engine operation. However, applications of lean combustion are challenged by the intrinsically low flame speeds and the susceptibility to static and dynamic instabilities. To overcome these difficulties, spark-assisted compression ignition (SACI) or mixed-mode combustion is a promising strategy, which combines conventional deflagrative flame propagation and controlled end-gas auto-ignition. Thus, the fuel can burn sufficiently fast through auto-ignition, compensating for the low flame speed of lean mixtures, while the engine remains knock free.
Lean combustion under mixed-mode conditions has been extensively studied by experiments [1-4]. Urushihara et al. [1] studied spark-ignited compression ignition and demonstrated the increased engine load compared to conventional homogeneous charge compression ignition (HCCI) combustion. Zigler et al. [2] studied SACI in an optical engine and identified the presence of spark-initialized turbulent flame propagation and subsequent auto-ignition in the end-gas. The sensitivity of various engine performance metrics to air preheating and spark timing was also investigated with high-speed imaging. Ma et al. [5] used CH2O and OH chemiluminescence to investigate in-cylinder combustion behaviors in flame-induced auto-ignition. A stoichiometric condition was employed, while the observed flame characteristics (occurrence of auto-ignition in the outer rim of the deflagrative flame front and accelerated reaction front propagation) could hold true for lean engine operations as well. Sjöberg and Zeng [3] studied mixed-mode combustion at lean and diluted conditions with various fuels. Significant cycle-to-cycle variability (CCV) was observed at ultra-lean conditions, which could pose a great challenge for practical engine operation. Reuss et al. [6] demonstrated that the early kernel growth was a major source for CCV under SACI conditions. Hu et al. [4] identified injection strategies that could stabilize the ultra-lean operation and improve combustion efficiency for mixed-mode combustion.
In this context, computational fluid dynamics (CFD) modeling offers unique capabilities to probe into the governing physics of mixed-mode combustion, providing opportunities for studying CCV and fuel property effects. Dahms et al. [7] developed a mixed-mode flamelet combustion model, which combines the SparkCIMM ignition model, the G-equation model and multi-zone chemistry, targeting spark-assisted HCCI engines at lean conditions with significant exhaust gas recirculation dilution. The model demonstrated a good agreement between CFD and experiment. However, the target operation was at a relatively low-CCV condition, and the performance under high-CCV conditions remains unclear. Middleton et al. [8] studied SACI combustion at stoichiometric conditions and investigated the effect of spark timing and charge temperature on combustion phasing and heat release rate that are governed by the competition between flame propagation and auto-ignition, using the coherent flamelet model coupled with detailed chemistry. However, few CFD studies to date have focused on mixed-mode combustion under lean and dilute conditions with a high level of CCV.
The objectives of this study are, therefore, twofold. The first goal is to develop an engine CFD model that can accurately capture lean, mixed-mode combustion characteristics. The second objective is to identify effects of chemical and physical fuel properties on mixed-mode engine performance, which can eventually enable co-optimization of fuels and engines.
## Engine Specifications and Operating Conditions
The engine simulated in this study is a single-cylinder, four-valve, direct-injection spark-ignition (DISI) research engine at Sandia National Laboratories. Figure 1 schematically shows the cross section of the combustion chamber at the top dead center (TDC). A long-reach spark plug is adopted to extend the spark plasma to the center of the combustion chamber, which can potentially improve the ignition efficiency, especially for lean operations. The fuel injector is mounted on the pent-roof facing the spark plug allowing for direct injection of fuel into the chamber center. The piston bowl window can provide optical access to the combustion chamber, but in this study, a metal blank was used to enable continuously fired all-metal engine experiments. One of the intake valves was deactivated to enhance the in-cylinder swirl level and thereby the overall mixing process. Relevant engine specifications are provided in Table 1.
Table 1: Engine specifications

| Parameter | Value |
| --- | --- |
| Bore | 86.0 mm |
| Stroke | 95.1 mm |
| Connecting rod length | 166.7 mm |
| Piston pin offset | −1.55 mm |
| Compression ratio | 12:1 |
The engine is operated at a lean condition (a global fuel/air equivalence ratio of 0.55) using certification gasoline blended with 30% ethanol by volume (referred to as “E30” hereinafter). To achieve a well-mixed charge of fuel, air, and residual gas (∼6%), the fuel was injected using three injections of equal duration during the intake stroke. Research octane number (RON) and motor octane number (MON) of the E30 fuel are 105 and 91, respectively. To achieve mixed-mode combustion with maximum brake torque for such a high-octane fuel, a fairly advanced combustion phasing is necessary, requiring the use of an advanced spark timing (−57 CA ATDC). However, such an early spark timing leads to a significant level of CCV. Capturing CCV for this operating point will be one focus of the present modeling efforts. Engine operation conditions are summarized in Table 2, and more details on engine configuration and operating conditions are presented in Ref. [3].
Table 2: Engine operating conditions

| Parameter | Value |
| --- | --- |
| Engine speed | 1000 rpm |
| IMEPg | 446 kPa |
| Intake temperature | 100 °C |
| Intake pressure | 87.0 kPa |
| Exhaust pressure | 100.1 kPa |
| Injection timings | −318, −303, −288 CA ATDC |
| Injection duration | 444 μs |
| Injected fuel mass | 17.8 mg/cycle |
| Spark timing | −57 CA ATDC |
## Numerical Approach
### Computational Fluid Dynamics Geometry and Model Setup.
To model the Sandia DISI engine, a full-scale engine geometry, including intake and exhaust runners, the piston head, the piston, the spark plug, and the fuel injector, is used, as shown in Fig. 2. The engine is simulated using the CONVERGE code v2.4 [9]. The re-normalized group (RNG) k-ε model is used to describe the Favre-averaged turbulent flow. Wall heat transfer is modeled with a temperature wall function from Amsden and Findley [10]. The cylinder wall temperature is set to 445 K. Experimentally measured high-speed intake and exhaust pressures (varying with time) are specified at the intake port inlet and the exhaust port outlet. Fuel spray and in-cylinder combustion processes are simulated with the Eulerian–Lagrangian approach. The spray injection is described by the blob injection approach [11], while droplet breakup, droplet evaporation, and drag force are modeled using the Kelvin–Helmholtz and Rayleigh–Taylor models [12,13], the Frossling correlation [14], and a dynamic drag model [15], respectively. Liquid properties are taken from a previous study [16] on the same engine platform with the same E30 fuel.
For turbulent combustion modeling, a hybrid approach is employed to capture mixed-mode combustion. In particular, the G-equation model is employed to track deflagrative flame propagation with a tabulated laminar flame speed. A passive scalar G is transported according to the instantaneous turbulent flame speed, which is modeled using Peters' model [17]. The value of G indicates the distance from a local fluid element to the mean flame front: G = 0 identifies the flame front location, while G < 0 and G > 0 indicate the unburned and burned mixtures, respectively. The laminar flame speed is calculated from one-dimensional (1D) freely propagating premixed flames and is tabulated as a function of pressure, unburned temperature, local equivalence ratio, and local dilution ratio. The local dilution ratio is obtained from a separate passive transport equation. The well-mixed model coupled with detailed chemical kinetics is used to predict auto-ignition in the end-gas, and the multi-zone model is further employed to accelerate detailed chemistry integration. This hybrid approach has been demonstrated to capture knock in Cooperative Fuel Research engines [18,19] and boosted SI engines [20]. A unique feature of this hybrid approach is that it allows isolated investigation of individual chemical properties such as flame speed and ignition delay.
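As an illustration of the tabulation step, the sketch below builds a four-dimensional flame-speed lookup table and queries it with SciPy. This is not the paper's implementation: the axis ranges and table values are placeholders, and a real table would be filled with 1D freely propagating premixed-flame solutions computed offline.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Placeholder table axes: pressure [Pa], unburned temperature [K],
# equivalence ratio [-], dilution (residual) mass fraction [-].
p_ax   = np.linspace(1e5, 1e7, 20)
tu_ax  = np.linspace(400.0, 1200.0, 17)
phi_ax = np.linspace(0.3, 1.5, 13)
dil_ax = np.linspace(0.0, 0.3, 7)

# In practice each entry would be the laminar flame speed from a 1D
# premixed-flame solution; random numbers stand in here.
sl_table = np.random.rand(p_ax.size, tu_ax.size, phi_ax.size, dil_ax.size)

sl_lookup = RegularGridInterpolator(
    (p_ax, tu_ax, phi_ax, dil_ax), sl_table,
    bounds_error=False, fill_value=None)  # allow extrapolation at edges

# Query the tabulated flame speed for one CFD cell state (p, Tu, phi, dil).
sl = sl_lookup([[8.7e5, 700.0, 0.55, 0.06]])[0]
```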
A modified cut-cell Cartesian grid method for automatic mesh generation is used during runtime [9]. The base grid size is Δ0 = 4 mm and the minimum grid size is Δ5 = 0.125 mm, where the subscript n in Δn denotes the level of mesh refinement with respect to the base grid size (Δn = Δ0/2^n). Fixed embedding is applied to better resolve in-cylinder dynamics (Δ2 = 1 mm), wall boundary layers (Δ3 = 0.5 mm), spray injection (Δ4 = 0.25 mm), early flame propagation (Δ5 = 0.125 mm), and other small geometrical structures (Δ4 = 0.25 mm). Adaptive mesh refinement based on velocity and temperature fluctuations is further employed to better resolve complex flow and flame structures, with a minimum cell size of Δ3 = 0.5 mm. The peak cell count during a full engine cycle is approximately 1.6 million, and the computational cost for simulating one engine cycle is approximately two days.
### E30 Fuel Surrogate.
To generate the surrogate composition for gas-phase modeling, a nonlinear regression model was employed to relate the ignition chemistry from a detailed chemical kinetics model [21], together with other thermophysical properties, to RON and MON. In this case, the nonlinear regression model was a feed-forward neural network [22]. The regression was an approximation, but it balanced the error in the chemical kinetic model against the error in correlating octane numbers to ignition delay times [23–28]. The model could be evaluated in less than 10 s on a single CPU thread for compositions containing any combination of the more than 50 hydrocarbons and biofuels represented in the detailed chemical kinetics model developed by Mehl et al. [21]. This model was then combined with standard optimization routines [29] to find the fuel blend with equivalent octane ratings within the accuracy of the regression.
Two feed-forward neural networks were created, one for each octane number. They both used the same inputs and architecture—a single hidden layer with 24 nodes. The inputs included three ignition delay-related quantities, simulated using the detailed chemistry model [21] in a homogeneous, constant-volume reactor at 825 K and 20 bar. These were the inverse of the ignition delay time to reach 1225 K and the derivative of the normalized ignition delay time with respect to pressure and temperature. The other neural network inputs were the enthalpy of vaporization and liquid density at 298 K, and the mole-averaged atom counts for hydrogen, carbon, and oxygen. A schematic of the neural network architecture is shown in Fig. 3. The neural network was demonstrated to have good predictive capability with root-mean-square errors in RON/MON of approximately 1 ON for the cross-validation data. Further details on the design, implementation, and validation of the neural network can be found in Ref. [30].
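For concreteness, here is a minimal sketch of such a regressor using scikit-learn [22]. The architecture (one hidden layer with 24 nodes) follows the description above, but the training data are random placeholders; real inputs would come from constant-volume reactor simulations with the detailed mechanism. A second, identically configured network would be trained for MON.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Eight inputs per blend, following the description above:
# [1/tau_ign, d(tau_norm)/dp, d(tau_norm)/dT, HoV, rho_liq(298 K),
#  n_H, n_C, n_O]. X and the RON targets y are placeholders.
rng = np.random.default_rng(0)
X = rng.random((500, 8))
y = 90.0 + 20.0 * rng.random(500)

ron_model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(24,),  # one hidden layer, 24 nodes
                 max_iter=5000, random_state=0))
ron_model.fit(X, y)
print(ron_model.predict(X[:3]))  # predicted RON for three blends
```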
A fuel surrogate based on toluene primary reference fuel (TPRF) and ethanol was obtained using the multivariable minimization routines of the Python SciPy library [29] in conjunction with the neural network regression models. Specifically, the difference between the predicted and target RON (105) and MON (91) was minimized with respect to the component volume fractions, subject to the constraints of a 30% ethanol volume fraction and the volume fractions summing to unity. The resultant TPRF–ethanol surrogate is presented in Table 3.
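A minimal sketch of this constrained minimization with scipy.optimize is shown below. The `predict_ron`/`predict_mon` helpers are hypothetical stand-ins for the trained neural-network regressors (simple linear placeholders here); only the constraint structure (30% ethanol, fractions summing to unity) follows the text.

```python
import numpy as np
from scipy.optimize import minimize

# x = volume fractions of [n-heptane, iso-octane, toluene, ethanol].
def predict_ron(x):
    return 100.0 + 10.0 * (x[3] - x[0])  # placeholder regressor

def predict_mon(x):
    return 88.0 + 10.0 * (x[3] - x[0])   # placeholder regressor

def objective(x):
    return (predict_ron(x) - 105.0) ** 2 + (predict_mon(x) - 91.0) ** 2

constraints = [
    {"type": "eq", "fun": lambda x: np.sum(x) - 1.0},  # fractions sum to 1
    {"type": "eq", "fun": lambda x: x[3] - 0.30},      # 30% ethanol by volume
]
bounds = [(0.0, 1.0)] * 4
x0 = np.array([0.2, 0.3, 0.2, 0.3])  # feasible starting blend

res = minimize(objective, x0, method="SLSQP",
               bounds=bounds, constraints=constraints)
print(res.x)  # optimized surrogate volume fractions
```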
Table 3: E30 fuel surrogate composition (by mole)

| n-Heptane | iso-Octane | Toluene | Ethanol |
| --- | --- | --- | --- |
| 6.0% | 16.8% | 28.4% | 48.8% |
### E30 Skeletal Reaction Model.
The detailed chemical kinetic model [21] of the proposed TPRF–ethanol blend consists of 2878 species and 12,839 reactions, which is prohibitive for three-dimensional (3D) engine CFD simulations. Therefore, mechanism reduction based on the directed relation graph (DRG) method [31] and sensitivity analysis is employed to systematically reduce the size of the reaction model. The reduction is performed on a large set of reaction states sampled over pressures from 1 to 100 atm, equivalence ratios from 0.3 to 2.0, an inlet temperature of 300 K for perfectly stirred reactors, and initial temperatures from 600 to 1600 K for auto-ignition, covering the low-temperature chemistry region that is important for engine combustion. The error tolerance used in the reduction is 0.3, implying a worst-case error of 30% for the skeletal mechanism. The resultant skeletal model consists of 149 species and 640 reactions. A submechanism of NOx chemistry is then merged into the skeletal model, resulting in a final reaction model containing 164 species and 694 reactions.
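The sketch below illustrates the core DRG idea at a single sampled reaction state: build species-to-species interaction coefficients and keep everything reachable from the target species through edges above the error tolerance. It is a simplification of the actual procedure, which aggregates many sampled states and adds sensitivity analysis; all arrays here are synthetic.

```python
import numpy as np
from collections import deque

def drg_reduce(nu, omega, participates, targets, eps=0.3):
    """Directed relation graph (DRG) species selection, simplified sketch.

    nu:           (n_species, n_reactions) stoichiometric coefficients
    omega:        (n_reactions,) net reaction rates at one sampled state
    participates: (n_species, n_reactions) bool, species appears in reaction
    targets:      indices of target species (e.g., fuel, oxidizer)
    eps:          error tolerance; larger eps gives a smaller skeletal model
    """
    contrib = np.abs(nu * omega)           # |nu_{A,i} * omega_i|
    denom = contrib.sum(axis=1) + 1e-300   # total flux of each species A

    # r[A, B]: fraction of A's flux carried by reactions that involve B.
    n_sp = nu.shape[0]
    r = np.zeros((n_sp, n_sp))
    for b in range(n_sp):
        r[:, b] = contrib[:, participates[b]].sum(axis=1) / denom

    # Keep every species reachable from a target through edges r >= eps.
    keep, queue = set(targets), deque(targets)
    while queue:
        a = queue.popleft()
        for b in np.nonzero(r[a] >= eps)[0]:
            if b not in keep:
                keep.add(b)
                queue.append(b)
    return sorted(keep)

# Tiny synthetic demo: a four-species chain A -> B -> C -> D.
nu = np.array([[-1.0,  0.0,  0.0],
               [ 1.0, -1.0,  0.0],
               [ 0.0,  1.0, -1.0],
               [ 0.0,  0.0,  1.0]])
omega = np.array([1.0, 0.5, 0.1])
print(drg_reduce(nu, omega, nu != 0.0, targets=[0]))  # -> [0, 1, 2]
```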
Figure 4 compares the ignition delays of the final skeletal model with NOx against the skeletal model without NOx, as well as the detailed model, at different temperatures and pressures. Laminar flame speeds calculated by the two skeletal mechanisms are compared in Fig. 5. Excellent agreement is observed between the skeletal models and the detailed model for both ignition delay and flame speed, and the addition of NOx has a negligible impact on the 0D and 1D calculations at the selected conditions. However, NOx chemistry can be important under practical engine conditions due to the presence of residual gas, as will be investigated in the Results and Discussion section.
## Results and Discussion
### Model Performance.
The proposed modeling approach is first validated in this section. Table 4 presents the comparison of key engine performance parameters, including peak cylinder pressure (Pmax), gross indicated mean effective pressure (IMEPg), CA10, CA50, and CA90, obtained from simulations and experimental measurements. Predicted values overall agree well with measured ones. A slightly earlier combustion phasing (CA10 and CA50) predicted by simulation is possibly due to the use of a simplified ignition model (a spherical energy source at the center of the spark gap) during the energizing stage. However, the computational cost is significantly reduced with this simplified ignition model.
Table 4: Predicted and measured mean combustion characteristics

| Quantity | Pmax (MPa) | IMEPg (MPa) | CA10 (CA ATDC) | CA50 (CA ATDC) | CA90 (CA ATDC) |
| --- | --- | --- | --- | --- | --- |
| Experiment | 3.93 | 0.446 | −8.04 | 3.54 | 22.3 |
| CFD | 4.08 | 0.497 | −14.1 | 2.22 | 21.4 |
Figure 6 shows the pressure and AHRR traces obtained from the experiment (500 cycles) and the simulation (13 cycles). Good agreement is observed between the simulation and experimental data, with the predicted mean pressure being slightly higher than the measured mean pressure. In addition, a moderate level of CCV, though not the full experimental range, is captured by CFD. This is because unsteady Reynolds-averaged Navier–Stokes (RANS) models solve time-averaged Navier–Stokes equations and therefore intrinsically predict lower CCV.

Two types of combustion cycles are observed in both the experiment and the simulation (Fig. 6(b)). The first type features low in-cylinder pressure and heat release rate, resulting in a single AHRR peak. These cycles are similar to those observed in conventional SI engines (although the combustion duration is typically longer due to the lean condition) and are referred to as deflagration-only cycles. The second type shows higher in-cylinder pressure and heat release rate and exhibits two AHRR peaks, where the first and second peaks correspond to the early flame propagation and the subsequent end-gas auto-ignition, respectively. These cycles are therefore referred to as mixed-mode cycles. Figure 7 shows the flame structure and dynamics of the two types of combustion cycles, namely deflagration-only (top) and mixed-mode (bottom). In contrast to the deflagration-only cycle, earlier flame propagation is seen for the mixed-mode cycle, and isolated ignition spots are formed (∼7 CA) followed by volumetric auto-ignition in the end-gas. As end-gas auto-ignition rapidly consumes the reactants ahead of the flame fronts (7–20 CA), turbulent flame propagation due to deflagration is still present, although it is much slower than auto-ignition.
The two types of combustion cycles can be further distinguished in the mass-burned space, as shown in Fig. 8, where the burned mass fraction is calculated as the integrated heat release rate normalized by the total heat released in each cycle. The initial flame propagation phase of the two types of cycles is very similar, while the mixed-mode cycles feature a second peak at ∼75% mass fraction burned (∼80% in the experiment). The presence of this second peak is therefore employed as a criterion to systematically distinguish between the two types of cycles, without specifying any empirical threshold. With this criterion, the predicted fraction of mixed-mode cycles from the simulation is 61.5%, closely matching the experimental value of 63.2%. Mixed-mode combustion cycles are further characterized by the mean formaldehyde mass fraction ($Y_{\mathrm{CH_2O}}$) inside the cylinder versus the burned mass fraction, as shown in Fig. 9. Both types of cycles exhibit an initial plateau, indicating stable flame propagation. Compared with deflagration-only cycles, mixed-mode cycles feature a rapid increase in $Y_{\mathrm{CH_2O}}$ near CA50, which leads to fast auto-ignition. The observed difference in the evolution of $Y_{\mathrm{CH_2O}}$ can be explained as follows. In the flame propagation mode, CH2O is produced only within a thin layer (the preheat zone) ahead of the flame fronts, so $Y_{\mathrm{CH_2O}}$ is closely related to the total flame surface area, which does not vary significantly during a large part of the main heat release process. When chemical reactions in the end-gas are nonnegligible, the low-to-intermediate temperature chemistry starts to build up radical pools in the fresh mixture, and $Y_{\mathrm{CH_2O}}$ increases exponentially until volumetric auto-ignition consumes it.
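A simple post-processing sketch of this cycle classification is given below. Note that the paper's criterion is the presence of a second AHRR peak in mass-burned space without an empirical threshold; the prominence filter and the 50% mass-burned cutoff used here are illustrative practical choices, not the authors' exact procedure.

```python
import numpy as np
from scipy.signal import find_peaks

def classify_cycle(ca, ahrr):
    """Classify one engine cycle as 'mixed-mode' or 'deflagration-only'
    from its crank angle (deg) and apparent heat release rate arrays."""
    ca = np.asarray(ca, dtype=float)
    ahrr = np.asarray(ahrr, dtype=float)

    # Burned mass fraction: integrated heat release normalized by the total.
    q = np.cumsum(ahrr * np.gradient(ca))
    xb = q / q[-1]

    # Locate AHRR peaks; the prominence filter suppresses noise peaks.
    peaks, _ = find_peaks(ahrr, prominence=0.05 * ahrr.max())

    # A second peak late in the mass-burned history (here, past 50%
    # burned) flags end-gas auto-ignition.
    late_peaks = [i for i in peaks if xb[i] > 0.5]
    if len(peaks) >= 2 and late_peaks:
        return "mixed-mode"
    return "deflagration-only"
```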
The difference between mixed-mode and deflagration-only cycles, and their correlation with combustion phasing, is further investigated. Figure 10 shows the scatter of CA50 as a function of peak heat release rate for all simulated cycles, overlaid on the experimental data. It is clear from both the experimental and simulation results that mixed-mode combustion occurs with more advanced CA50, mainly because earlier flame propagation promotes auto-ignition by increasing in-cylinder pressure and temperature. The well-predicted correlation between mixed-mode combustion tendency and CA50 therefore further demonstrates the accuracy of the developed CFD model.
### Effects of NOx Chemistry.
While the overall lean operation would generally reduce the production of thermal NO, the high octane number of the current E30 fuel forces the use of a fairly advanced combustion phasing to achieve mixed-mode combustion, and the associated increase of combustion temperature promotes thermal NO formation. A portion of the formed NOx will be retained in the residuals, potentially affecting the next cycle. Therefore, the effects of NOx on mixed-mode combustion, especially on the end-gas auto-ignition, are investigated in the following. The numerical modeling approach validated earlier allows for such an investigation by activating or deactivating the NOx chemistry in the reaction model, which cannot be achieved in experimental studies.
Figures 11(a) and 11(b) show the pressure and apparent heat release traces calculated using reaction models with and without NOx chemistry. While the flame propagation stage before auto-ignition is not significantly impacted by NOx chemistry, the end-gas behavior is significantly altered. In particular, no end-gas auto-ignition is observed when NOx chemistry is absent, implying that NOx plays a significant role in promoting auto-ignition. This enhancement is attributed to increased chain branching: nonnegligible NOx-related radicals in the residual gas alter the reaction pathways during the radical explosion and thereby modify the ignition delay. This is demonstrated in Fig. 11(c), which shows that $Y_{\mathrm{CH_2O}}$ is produced much earlier and faster with NOx chemistry than without it. Note that the in-cylinder NO mole fraction at intake valve closing, averaged over all cycles, is found to be 2.6 × 10−4.
Figure 12 further shows the effects of NOx chemistry on 0D homogeneous auto-ignition and 1D flame propagation. As shown in Fig. 12(a), the ignition delays calculated with and without NOx chemistry differ from each other when the residual gas fraction (RGF) is not negligible, e.g., 5% (corresponding to the mean RGF for the present engine operation), in contrast to the case without any residual gas. NOx, however, has only a very small effect on laminar flame propagation regardless of the level of residual gas (Fig. 12(b)), as flame propagation is mainly controlled by back diffusion of sensible heat and important intermediates such as H and OH. It is therefore suggested that when the RGF is nonnegligible, NOx chemistry has to be accounted for to accurately predict auto-ignition.
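The 0D analysis can be reproduced in spirit with a constant-volume reactor calculation; a hedged Cantera sketch follows. The mechanism file name and the charge composition (including the trace NO standing in for residual-gas NOx) are assumptions, since the skeletal model itself is not distributed here.

```python
import numpy as np
import cantera as ct  # assumes Cantera is installed

def ignition_delay(mech, T0, p0, X0, t_end=0.1):
    """0D constant-volume ignition delay, defined at the maximum rate of
    temperature rise. `mech` is a Cantera-format mechanism file; the file
    name and species names used below are assumptions."""
    gas = ct.Solution(mech)
    gas.TPX = T0, p0, X0
    reactor = ct.IdealGasReactor(gas)
    sim = ct.ReactorNet([reactor])
    states = ct.SolutionArray(gas, extra=["t"])
    while sim.time < t_end:
        sim.step()
        states.append(reactor.thermo.state, t=sim.time)
    return states.t[int(np.argmax(np.gradient(states.T, states.t)))]

# Lean charge at 825 K and 20 bar, with and without a trace of NO
# carried in the residual gas (composition values are illustrative only).
base = "C2H5OH:0.05, IC8H18:0.05, O2:1.0, N2:3.76"
tau_clean = ignition_delay("e30_skeletal.yaml", 825.0, 20e5, base)
tau_nox = ignition_delay("e30_skeletal.yaml", 825.0, 20e5, base + ", NO:3e-4")
print(tau_clean, tau_nox)
```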
### Sensitivity to Fuel Properties.
The effects of the physical and chemical properties of the E30 fuel, including the heat of vaporization (HoV) and the laminar flame speed (SL), on mixed-mode combustion characteristics are then examined. Local sensitivity analysis was employed by perturbing each fuel property by ±30% with respect to its nominal value.
Figures 13(a) and 13(b) show the pressure and heat release rate traces obtained from simulations using −30% HoV, the nominal HoV, and +30% HoV, respectively. A higher HoV is expected to reduce the overall combustion intensity by lowering the in-cylinder temperature. A perturbation of HoV is seen to have a negligible effect on the initial flame propagation, whereas the auto-ignition-induced heat release rate is enhanced by a lower HoV and suppressed by a higher HoV. The reason is that a higher HoV increases the evaporative cooling, which induces a small reduction in in-cylinder temperature well before combustion occurs, as demonstrated in Fig. 13(c). Note that due to this charge cooling effect, changing HoV may also affect the trapped charge mass. However, for the perturbation range considered in this study, the effect of HoV on the in-cylinder equivalence ratio is within 1% and is therefore considered negligible.
In contrast, the laminar flame speed has a much larger impact on mixed-mode combustion. This is shown in Fig. 14 for the pressure and heat release traces calculated using −30% SL, the nominal SL, and +30% SL, respectively. A larger SL not only advances the combustion phasing but also increases the peak heat release rate during deflagrative flame propagation. As a result, the subsequent auto-ignition is advanced and intensified. Conversely, no auto-ignition is predicted for the lower SL, and the overall heat release rate is much lower than for the two larger SL values. Compared with the nominal flame speed, the mean heat release rate at the first peak decreases by 23% for −30% SL and increases by 22% for +30% SL.
The results shown above not only demonstrate the capability of the current modeling approach in capturing fuel property effects but also motivate a more detailed sensitivity analysis over a wider range of fuel properties. In particular, a coupled strategy of the neural network-based surrogate modeling approach, engine CFD, and global sensitivity analysis [32] could help identify the most influential fuel properties that enable mixed-mode combustion and lead to pathways for fuel-engine co-optimization. This topic will be addressed in future work.
## Conclusions
A CFD model for lean, mixed-mode combustion in a DISI engine is developed in this work. Good agreement is observed between the numerical results and experimental data, which demonstrates the capability of the developed CFD model to simultaneously characterize deflagrative flame propagation and spontaneous auto-ignition in mixed-mode combustion. A moderate level of CCV is captured by the simulation using an unsteady RANS approach. The instantaneous 3D flame structure reveals distinct combustion characteristics of deflagration-only cycles and mixed-mode cycles. Cycles with earlier flame propagation tend to produce mixed-mode combustion, since an advanced combustion phasing increases the pressure and temperature, which favors auto-ignition. In the mixed-mode cycles, isolated auto-ignition spots are observed, which subsequently expand into the entire end-gas mixture. It is also seen that when both deflagration and auto-ignition are present, deflagrative flame propagation is slightly suppressed by auto-ignition. The onset of auto-ignition is marked by a dramatic increase in the CH2O concentration. The positive correlation between the occurrence of mixed-mode cycles and advanced CA50 is predicted in good agreement with the experimental measurements.
The validated numerical model is then employed to investigate the effects of NOx chemistry and of fuel properties, including the HoV and the laminar flame speed SL. NOx chemistry is found to play an important role in promoting auto-ignition chemistry via the retained residual gases, while its effect on flame propagation is minimal. Local sensitivity studies provide a preliminary assessment of fuel property effects on mixed-mode combustion. Overall, flame propagation is not significantly modified by a perturbation in HoV, while a higher HoV reduces the auto-ignition tendency and the peak heat release rate in the end-gas. An increase in the laminar flame speed advances the combustion phasing of both the deflagration and auto-ignition stages: a higher SL promotes flame propagation in terms of both combustion phasing and peak heat release, and the enhanced flame propagation further intensifies end-gas auto-ignition and raises the second peak in heat release rate, since an advanced combustion phasing increases the compression heating of the end-gas. Across the parameter ranges studied here, the impact of SL on mixed-mode combustion is found to be much stronger than that of HoV.
## Acknowledgment
UChicago Argonne, LLC, operator of Argonne National Laboratory (Argonne), a US Department of Energy (DOE) Office of Science laboratory, is operated under Contract No. DE-AC02-06CH11357. This research was partially funded by DOE’s Office of Vehicle Technologies, Office of Energy Efficiency and Renewable Energy under Contract No. DE-AC02-06CH11357. The authors wish to thank Gurpreet Singh, Michael Weismiller, and Kevin Stork, program managers at DOE, for their support. This research was conducted as part of the Co-Optimization of Fuels & Engines (Co-Optima) project sponsored by the US DOE’s Office of Energy Efficiency and Renewable Energy (EERE), Bioenergy Technologies and Vehicle Technologies Offices. We gratefully acknowledge the computing resources provided on Bebop, a high-performance computing cluster operated by the Laboratory Computing Resource Center at Argonne National Laboratory. The engine experiments were performed at the Combustion Research Facility, Sandia National Laboratories, Livermore, CA. Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the US Department of Energy’s National Nuclear Security Administration under contract DE-NA0003525.
## Conflict of Interest
There are no conflicts of interest.
## Data Availability Statement
The datasets generated and supporting the findings of this article are obtainable from the corresponding author upon reasonable request. The authors attest that all data for this study are included in the paper.
## References
1. Urushihara, T., Yamaguchi, K., Yoshizawa, K., and Itoh, T., 2005, "A Study of a Gasoline-Fueled Compression Ignition Engine Expansion of HCCI Operation Range Using SI Combustion as a Trigger of Compression Ignition," SAE Trans., 114, pp. 419–425.
2. Zigler, B., Keros, P., Helleberg, K., Fatouraie, M., Assanis, D., and Wooldridge, M., 2011, "An Experimental Investigation of the Sensitivity of the Ignition and Combustion Properties of a Single-Cylinder Research Engine to Spark-Assisted HCCI," Int. J. Engine Res., 12(4), pp. 353–375. 10.1177/1468087411401286
3. Sjöberg, M., and Zeng, W., 2016, "Combined Effects of Fuel and Dilution Type on Efficiency Gains of Lean Well-Mixed DISI Engine Operation With Enhanced Ignition and Intake Heating for Enabling Mixed-Mode Combustion," SAE Int. J. Engines, 9(2), pp. 750–767. 10.4271/2016-01-0689
4. Hu, Z., Zhang, J., Sjöberg, M., and Zeng, W., 2019, "The Use of Partial Fuel Stratification to Enable Stable Ultra-Lean Deflagration-Based Spark-Ignition Engine Operation With Controlled End-Gas Autoignition of Gasoline and E85," Int. J. Engine Res., 21(9), pp. 1678–1695. 10.1177/1468087419889702
5. Ma, X., Wang, Z., Jiang, C., Jiang, Y., Xu, H., and Wang, J., 2014, "An Optical Study of In-Cylinder CH2O and OH Chemiluminescence in Flame-Induced Reaction Front Propagation Using High Speed Imaging," Fuel, 134, pp. 603–610. 10.1016/j.fuel.2014.06.002
6. Reuss, D. L., Kuo, T.-W., Silvas, G., Natarajan, V., and Sick, V., 2008, "Experimental Metrics for Identifying Origins of Combustion Variability During Spark-Assisted Compression Ignition," Int. J. Engine Res., 9(5), pp. 409–434. 10.1243/14680874JER01108
7. Dahms, R., Felsch, C., Röhl, O., and Peters, N., 2011, "Detailed Chemistry Flamelet Modeling of Mixed-Mode Combustion in Spark-Assisted HCCI Engines," Proc. Combust. Inst., 33(2), pp. 3023–3030. 10.1016/j.proci.2010.08.005
8. Middleton, R. J., Olesky, L. K. M., Lavoie, G. A., Wooldridge, M. S., Assanis, D. N., and Martz, J. B., 2015, "The Effect of Spark Timing and Negative Valve Overlap on Spark Assisted Compression Ignition Combustion Heat Release Rate," Proc. Combust. Inst., 35(3), pp. 3117–3124. 10.1016/j.proci.2014.08.021
9. Richards, K. J., Senecal, P. K., and Pomraning, E., 2018, "CONVERGE Manual (Version 2.4)."
10. Amsden, A. A., and Findley, M., 1997, "KIVA-3V: A Block-Structured KIVA Program for Engines With Vertical or Canted Valves," Report LA-13313-MS, Los Alamos National Laboratory, Los Alamos, NM.
11. Reitz, R. D., and Diwakar, R., 1987, "Structure of High-Pressure Fuel Sprays," SAE Trans., 96, pp. 492–509.
12. Reitz, R. D., 1987, "Modeling Atomization Processes in High-Pressure Vaporizing Sprays," Atom. Spray Technol., 3(4), pp. 309–337.
13. Patterson, M. A., and Reitz, R. D., 1998, "Modeling the Effects of Fuel Spray Characteristics on Diesel Engine Combustion and Emission," SAE Trans., 107, pp. 27–43.
14. Froessling, N., 1958, "Evaporation, Heat Transfer, and Velocity Distribution in Two-Dimensional and Rotationally Symmetrical Laminar Boundary-Layer Flow," Report NACA-TM-1432, National Aeronautics and Space Administration, Washington, DC.
15. Liu, A. B., Mather, D., and Reitz, R. D., 1993, "Modeling the Effects of Drop Drag and Breakup on Fuel Sprays," SAE Trans., 102, pp. 83–95.
16. Van Dam, N., Sjöberg, M., and Som, S., 2018, "Large-Eddy Simulations of Spray Variability Effects on Flow Variability in a Direct-Injection Spark-Ignition Engine Under Non-Combusting Operating Conditions," SAE Technical Paper 2018-01-0196.
17. Peters, N., 2000, Turbulent Combustion, Cambridge University Press, Cambridge, UK.
18. Pal, P., Kolodziej, C., Choi, S., Som, S., Broatch, A., Gomez-Soriano, J., Wu, Y., Lu, T., and See, Y. C., 2018, "Development of a Virtual CFR Engine Model for Knocking Combustion Analysis," SAE Int. J. Engines, 11(6), pp. 1069–1082. 10.4271/2018-01-0187
19. Pal, P., Wu, Y., Lu, T., Som, S., See, Y. C., and Le Moine, A., 2018, "Multidimensional Numerical Simulations of Knocking Combustion in a Cooperative Fuel Research Engine," ASME J. Energy Res. Technol., 140(10), p. 102205. 10.1115/1.4040063
20. Yue, Z., Edwards, K. D., Sluders, C. S., and Som, S., 2019, "Prediction of Cyclic Variability and Knock-Limited Spark Advance in a Spark-Ignition Engine," ASME J. Energy Res. Technol., 141(10), p. 102201. 10.1115/1.4043393
21. Mehl, M., Zhang, K., Wagnon, S., Kukkadapu, G., Westbrook, C. K., Pitz, W. J., Zhang, Y., Curran, H., Rachidi, M. A., Atef, N., and Sarathy, S. M., 2017, "A Comprehensive Detailed Kinetic Mechanism for the Simulation of Transportation Fuels," 10th US National Combustion Meeting, College Park, MD, Apr. 23–26.
22. scikit-learn, "1.17. Neural Network Models (Supervised)," https://scikit-learn.org/stable/modules/neural_networks_supervised.html, accessed May 13, 2019.
23. Ahmed, A., Goteng, G., Shankar, V. S., Al-Qurashi, K., Roberts, W. L., and Sarathy, S. M., 2015, "A Computational Methodology for Formulating Gasoline Surrogate Fuels With Accurate Physical and Chemical Kinetic Properties," Fuel, 143, pp. 290–300. 10.1016/j.fuel.2014.11.022
24. Mehl, M., Chen, J.-Y., Pitz, W. J., Sarathy, S. M., and Westbrook, C. K., 2011, "An Approach for Formulating Surrogates for Gasoline With Application Toward a Reduced Surrogate Mechanism for CFD Engine Modeling," Energy Fuels, 25(11), pp. 5215–5223. 10.1021/ef201099y
25. Singh, E., Badra, J., Mehl, M., and Sarathy, S. M., 2017, "Chemical Kinetic Insights Into the Octane Number and Octane Sensitivity of Gasoline Surrogate Mixtures," Energy Fuels, 31(2), pp. 1945–1960. 10.1021/acs.energyfuels.6b02659
26. Naser, N., Yang, S. Y., Kalghatgi, G., and Chung, S. H., 2017, "Relating the Octane Numbers of Fuels to Ignition Delay Times Measured in an Ignition Quality Tester (IQT)," Fuel, 187, pp. 117–127. 10.1016/j.fuel.2016.09.013
27. Naser, N., Sarathy, S. M., and Chung, S. H., 2018, "Estimating Fuel Octane Numbers From Homogeneous Gas-Phase Ignition Delay Times," Combust. Flame, 188, pp. 307–323. 10.1016/j.combustflame.2017.09.037
28. Badra, J. A., Bokhumseen, N., Mulla, N., Sarathy, S. M., Farooq, A., Kalghatgi, G., and Gaillard, P., 2015, "A Methodology to Relate Octane Numbers of Binary and Ternary N-Heptane, Iso-Octane and Toluene Mixtures With Simulated Ignition Delay Times," Fuel, 160, pp. 458–469. 10.1016/j.fuel.2015.08.007
29. SciPy.org, "Optimization and Root Finding (scipy.optimize)," https://docs.scipy.org/doc/scipy/reference/optimize.html, accessed May 13, 2019.
30. Whitesides, R. A., and McNenly, M. J., 2018, "Prediction of RON and MON of Gasoline Surrogates by Neural Network Regression of Ignition Delay Times and Fuel Properties," Lemont, IL, Jan. 29–Feb. 1.
31. Lu, T. F., and Law, C. K., 2008, "Strategies for Mechanism Reduction for Large Hydrocarbons: n-Heptane," Combust. Flame, 154(1–2), pp. 153–163. 10.1016/j.combustflame.2007.11.013
32. Pal, P., Probst, D., Pei, Y., Zhang, Y., Traver, M., Cleary, D., and Som, S., 2017, "Numerical Investigation of a Gasoline-Like Fuel in a Heavy-Duty Compression Ignition Engine Using Global Sensitivity Analysis," SAE Int. J. Fuels Lubricants, 10(1), pp. 56–68. 10.4271/2017-01-0578
# inclination
Estimate the local magnetic field vector inclination
incl = inclination(A, M, fc) % Matlab & Octave
incl <- inclination(A, M, fc = NULL) # R
Estimate the local magnetic field vector inclination angle directly from acceleration and magnetic field measurements.
| Input | Description | Unit | Default |
| --- | --- | --- | --- |
| A | Accelerometer signal matrix, A=[ax,ay,az], in any consistent unit (e.g., g or m/s2). A can be in any frame. | g or m/s2 | (required) |
| M | Magnetometer signal matrix, M=[mx,my,mz], in any consistent unit (e.g., uT or Gauss). M must be in the same frame as A. | uT or Gauss | (required) |
| fc | (optional) Cut-off frequency of a low-pass filter applied to A and M before computing the inclination angle, specified relative to the Nyquist frequency (1 = Nyquist). Filtering adds no group delay. If fc is not specified, no filtering is performed. | fraction of Nyquist | none |

| Output | Description | Unit |
| --- | --- | --- |
| incl | Magnetic field inclination angle | radians |
• Output sampling rate is the same as the input sampling rate.
• Frame: This function assumes a [north,east,up] navigation frame and a [forward,right,up] local frame. In these frames, the magnetic field vector has a positive inclination angle when it points below the horizon.
• The inclination angle is measured as an anti-clockwise rotation around the y-axis. Other frames can be used as long as A and M are in the same frame; however, the interpretation of incl will differ accordingly.
### Matlab & Octave
incl = inclination([0.77 -0.6 -0.22],[22 -22 14])
incl = -0.91595 %radians.
### R
A <- matrix(c(1, -0.5, 0.1, 0.8, -0.2, 0.6, 0.5, -0.9, -0.7),
byrow = TRUE, nrow = 3, ncol = 3)
M <- matrix(c(1.3, -0.25, 0.16, 0.78, -0.3, 0.5, 0.5, -0.49, -0.6),
byrow = TRUE, nrow = 3, ncol = 3)
incl <- inclination(A, M)
incl # a vector of three inclination angles (one per row of A and M), in radians
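### Python (unofficial sketch)

For reference, here is a NumPy sketch of one plausible implementation. It assumes the angle is computed as -asin(A.M / (|A||M|)), which closely reproduces the Matlab example above (about -0.915 rad vs the documented -0.91595); the optional low-pass filtering step is omitted.

```python
import numpy as np

def inclination(A, M):
    """Magnetic field inclination angle (radians) per sample.
    A, M: (n, 3) arrays (or single 3-vectors) in the same frame.
    Assumes incl = -asin(A.M / (|A||M|)); no low-pass filtering."""
    A = np.atleast_2d(np.asarray(A, dtype=float))
    M = np.atleast_2d(np.asarray(M, dtype=float))
    s = np.sum(A * M, axis=1) / (
        np.linalg.norm(A, axis=1) * np.linalg.norm(M, axis=1))
    return -np.arcsin(np.clip(s, -1.0, 1.0))

print(inclination([0.77, -0.6, -0.22], [22, -22, 14]))  # ~ [-0.915]
```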
How to model this distribution using Central Limit theorem?
I have a distribution that can be defined as below:
$S=a_0\cdot b_0 + a_1\cdot b_1 + a_2\cdot b_2 + \cdots +a_{n-1}\cdot b_{n-1}$
Now, I want to find the distribution of $S$ when the $a_i$'s are selected from a certain distribution with standard deviation $\sigma$ (for simplicity we can assume it is Gaussian), and the $b_i$'s can be $+5$ with probability $p$ and $-5$ with probability $1-p$. As far as I know, $S$ will be distributed normally too, but what will be the standard deviation of such a distribution?
• The variance of the limiting distribution after scaling by $\sqrt{n}$ is $Var(a_0b_0)$. It can be computed if $a_0$ and $b_0$ are independent. – Sayan May 26 '17 at 14:43
• @sayan I did not get your answer. Can you illustrate a little more? Again for simplicity we can take the value $p=1/2$ – Rick May 26 '17 at 14:46
• If $(a_i)$ is i.i.d. centered normal with variance $\sigma^2$, if $(b_i)$ is i.i.d. with $P(b_i=+5)=p$, $P(b_i=-5)=1-p$, and if $(a_i)$ and $(b_i)$ are independent, then $S$ is centered normal with variance $25n\sigma^2$ (and standard deviation $5\sigma\sqrt{n}$). If $(a_i)$ is not centered, no chance for such a nice result. – Did May 26 '17 at 15:50
• whoever downvoted can please provide the reason? – Rick May 29 '17 at 10:01
$S$ will not be normally distributed, but no matter
If everything is independent then
• $a_i$ has mean $\mu$ and variance $\sigma^2$ so $E[a_i^2]=\sigma^2+\mu^2$
• $b_i$ has mean $10p-5$ and $E[b_i^2]=25$
• $a_i b_i$ has mean $\mu(10p-5)$ and $E[a_i^2 b_i^2]=E[a_i^2]E[b_i^2]= 25(\sigma^2+\mu^2)$, so it has variance $25\sigma^2 +100p \mu^2-100p^2 \mu^2 = 25\sigma^2+100p(1-p)\mu^2$
• $S=\sum a_i b_i$ has mean $n\mu(10p-5)$ and variance $25n(\sigma^2 +4p(1-p)\mu^2)$, and so a standard deviation which is the square root of that
• if $p=\frac12$ then the mean of $S$ is $0$ and the standard deviation of $S$ is $5\sqrt{n(\sigma^2+\mu^2)}$
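A quick Monte Carlo check of these moment formulas (a Python/NumPy sketch; the parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, mu, sigma = 100, 0.3, 1.0, 2.0
trials = 50_000

a = rng.normal(mu, sigma, size=(trials, n))
b = np.where(rng.random((trials, n)) < p, 5.0, -5.0)
S = (a * b).sum(axis=1)

print(S.mean(), n * mu * (10 * p - 5))                         # both ~ -200
print(S.var(), 25 * n * (sigma**2 + 4 * p * (1 - p) * mu**2))  # both ~ 12100
```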
• If $\mu=0$ (plus some other conditions), then S is normal, see my comment on main. – Did May 26 '17 at 15:52
• @Did Yes - though $\mu=0$ is not in the question. In that special case, the distribution of $a_ib_i$ is the same as the distribution of $5a_i$ and so $p$ becomes irrelevant – Henry May 26 '17 at 16:42
• @Henry Thank you for the kind help. Will this change in any way if the distribution of $a$ is discrete instead of continuous? (We can take $\mu = 0$.) We can think of a man in the middle of a plank of length $2t$. The man takes $n$ steps. At each step, he samples a value $x \leftarrow D_\sigma$ and takes a step of $5x$ either forward or backward with probability $p = 1/2$. How big should the plank be so that the man falls off the plank with a very low probability? – Rick May 27 '17 at 8:13
• @Ishan - my calculations of the mean, variance and standard deviation require independence of everything, but nothing else. So if $\mu=0$ the mean of $S$ will be $0$, the variance will be $25n\sigma^2$ and the standard deviation $5\sqrt{n}\sigma$, for any $p$. If the $a_i$s are discrete random variables then $S$ will be too – Henry May 27 '17 at 8:21
# Correct transformation: from object-based graph grammars to PROMELA. (English) Zbl 1243.68155
Summary: Model transformation is an approach that, among other advantages, enables the reuse of existing analysis and implementation techniques, languages and tools. The area of formal verification makes wide use of model transformation because the cost of constructing efficient model checkers is extremely high. There are various examples of translations from specification and programming languages to the input languages of prominent model checking tools, like SPIN. However, this approach provides a safe analysis method only if there is a guarantee that the transformation process preserves the semantics of the original specification/program, that is, that the transformation is correct. Depending on the source and/or target languages, this notion of correctness is not easy to achieve. In this paper, we tackle this problem in the context of object-based graph grammars (OBGG). OBGG is a formal language suitable for the specification of distributed systems, with a variety of tools and techniques centered around the transformation of OBGG models. We describe in details the model transformation from OBGG models to PROMELA, the input language of the SPIN model checker. Amongst the contributions of this paper are: (a) the correctness proof of the transformation from OBGG models to PROMELA; (b) a generalization of this process in steps that may be used as a guide to prove the correctness of transformations from different specification/programming languages to PROMELA.
##### MSC:
68N30 Mathematical aspects of software engineering (specification, verification, metrics, requirements, etc.)
68Q42 Grammars and rewriting systems
##### Keywords:
graph grammars; model transformation; PROMELA; correctness
##### Software:
Bandera; GROOVE; MPI; PROMELA; SPIN; vUML
# Geology for Global Development: GfGD
12 December 2015
Fighting Global Poverty: Geology and the Sustainable Development Goals
I was fortunate to be invited by Joel Gill, the founder and Director of Geology for Global Development (http://www.gfgd.org/) to speak at their 3rd annual conference at the Geological Society in London entitled ‘Fighting Global Poverty: Geology and the Sustainable Development Goals’ on the 30th October 2015.
GfGD is focussed on employing geoscience skills to alleviate poverty, in particular mobilising and equipping students and early-career scientists with the skills and knowledge required to make a positive, effective and greater contribution to international development. The aims and key principles of GfGD will resonate with the majority of SEGH members around the world working on geochemistry and health projects, and in many cases on international development projects. We take the opportunity to ask Joel a few questions to understand the guiding principles of GfGD.
Interview with Joel Gill by Dr Michael Watts, SEGH webmaster
What are the key aims of GfGD?
GfGD works to mobilise and equip the geoscience community to prevent and relieve poverty.
Geoscientists have the potential to make a significant contribution to tackling some of the major challenges of today, including ending extreme poverty and ensuring sustainable development. Geoscience research, monitoring, innovation and engineering can drive widespread improvements to wellbeing and quality of life, in areas such as health, food and water security, infrastructure development, natural resource management and disaster risk reduction.
Effectively applying our understanding of geoscience to development projects, however, requires more than just a competent understanding of technical science. This is one essential foundation, but we also need a thorough understanding of location-specific social, cultural, economic, ethical and environmental factors.
The two main strands of our work therefore are (i) to support the public in general, and particularly amongst geologists, to better understand how geology can support sustainable development and how to do this effectively, and (ii) using this knowledge to assist in the prevention and relief of poverty.
Figure 1: Our latest poster gives an overview of how geology can support development, and the activities that we run to mobilise and equip the community to engage in such work.
How did you come up with the idea of GfGD?
In 2009 and 2010 I was fortunate enough to be given two opportunities to travel to the Kagera Region of Tanzania. I was part of a small team evaluating a troubled small-scale water programme and advising on remediation/future projects.
On a personal level, these opportunities gave me an intensive and very practical introduction to many aspects of community-scale development, and the role of geology in such work. During these visits I observed projects where a lack of geological understanding had resulted in project failure. Small amounts of basic geoscience understanding would have put the project on a much more sustainable footing.
While a lack of geological understanding was serious, more common were projects that did include geologists, water engineers or other technical experts, but these individuals had a poor understanding of community development. There was little involvement of the local communities, little consultation about where to locate the wells and minimal efforts to help develop a community group to manage the project.
In both situations, communities were left with water projects that were not fit-for-purpose, failing shortly after completion or only working for part of the year. Children and women had to continue walking several kilometres to collect water. Communities were forced to drink dirty and potentially very dangerous, water.
On my return to the UK I initiated GfGD to help tackle both of these challenges that I had observed on the ground – the need to increase the understanding and integration of geology into development projects, and the need to equip geologists with the skills and development theory required to ensure what they do is effective and sustainable.
Figure 2: Water collection in Kagera Region, Tanzania, at an unprotected water source.
Figure 3: Children using their school time to collect water in Kagera Region, Tanzania.
Who is involved in GfGD?
Most of our work so far has been with students and recent graduates in the United Kingdom. We have established 13 University Groups (or chapters) in the UK, and one in the Republic of Ireland, run by undergraduate and postgraduate students. Groups organise seminars, training and discussion events, all exploring the role of geology in international development. Many of these events attract engineers, geographers and other disciplines, encouraging cross-disciplinary communications. Our national and international events draw a wider range of geoscientists, from different nationalities, sectors and professional levels.
We’ve been working in partnership with other organisations since our beginning. We’ve had great support from the Geological Society of London, hosts of our past three annual conferences. We’re also grateful to the British Geological Survey, European Geosciences Union, and the YES Network, for involving us in a range of conferences and opportunities.
What key resources and activities do you employ to encourage young scientists to use geoscience in international development?
We believe that young geoscientists need access to both the information to support their integration of development within geoscience (and vice versa), but also practical opportunities to do this. In order to support both we use a wide range of resource types:
• Website: Our website has a growing collection of presentations and other contributions to our annual conferences (e.g., www.gfgd.org/conferences). Making these available allows those who can’t attend in person to benefit from the event.
• Blog/Social Media: Our online presence includes a blog and active social media on Twitter and Facebook. These have been great tools to share relevant articles, conference sessions and other opportunities.
• Education Hub: Soon to be launched is an online-hub of lesson plans and discussion questions that can be used by our university groups to explore topics such as: what is international development; how do we engage with policy; and how do we communicate across cultures?
• Conferences and Workshops: We run an annual conference in London, but also try to organise smaller events on specific topics to allow for more discussion and student contributions.
Figure 4: GfGD Annual Conference 2015, discussing the role of geology in the UN Global Goals for Sustainable Development.
• Placements: In the past we have arranged short work experience placements for students within development organisations, and geology organisations working on development projects. These give students a preliminary understanding of how the development sector operates and how geoscience can support the development community.
• Practical Programmes: Partnering with other organisations, we have got students involved in mini-research projects, producing and delivering teaching materials overseas, and fundraising.
Does GfGD engage directly in international development?
Lots of our time and effort goes into training young geoscientists in the UK to directly support international development throughout their careers. As an organisation we do also support development agencies here in the UK and engage directly in some overseas projects in a variety of ways.
• From 2013 we have been working on a project to produce country-specific natural hazard factsheets for use by development NGOs.
• In 2014 we joined with partners in the UK, India and beyond to plan and deliver a hazards education programme in multiple schools in Ladakh, India. GfGD designed and delivered interactive classes on landslides, helping students to increase their understanding of what causes a disaster.
Figures 5 and 6: Hazards Education in the Himalayas. A team of British and Indian nationals were involved in a programme teaching children about landslides and other aspects of geoscience.
• In 2014 we also launched a fundraising initiative to help strengthen resilience to volcanic hazards in Guatemala. Our aim is to help build the technical capacity of the volcanic observatories within the hazard monitoring agency.
• Since 2011 we have advised on geological and development content of poverty-fighting and capacity-building projects.
In all of our overseas work we seek to partner with other organisations in the host country, such as universities, geological surveys, hazard monitoring agencies and NGOs.
The Millennium Development Goals have now been succeeded by the Sustainable Development Goals – do you consider there to be any considerable differences between the MDGs and SDGs in which Geoscience can contribute?
Within the 17 SDGs there is better recognition of the interactions between social and environmental challenges, and the need for a comprehensive, global response. The SDGs have three core aims: reducing poverty, ending inequality and ensuring environmental sustainability. There is an important emphasis on all nations taking action, not just developing nations. The shift from international development to sustainable development recognises that we share one planet and must all examine our use of natural resources, as well as issues such as urbanisation, gender equality, health, and food and water security. Given the importance placed on environmental sustainability, geoscience research, monitoring and practice have a role to play in almost all of the goals. I’d strongly encourage specific groupings within geoscience, such as geochemistry, to look at how their work can support the different goals.
Figure 7: Summary chart of the UN Global Goals for Sustainable Development (read more: https://sustainabledevelopment.un.org/topics).
Another positive contrast with the MDGs is that the SDGs also run parallel with the Sendai Framework for Disaster Risk Reduction 2015-2030 and hopefully a climate agreement to be published later this month. This cohesive approach will allow geoscientists working on aspects of natural hazards and climate change to better support efforts to tackle extreme poverty and inequality.
How do you see GfGD developing its role in the coming years?
Our long-term vision is that GfGD will grow to become a world-leading organisation on issues relating to geoscience and development. We are working to reshape the geoscience community into a well-informed, positive contributor to global efforts to tackle extreme poverty and advance sustainable development, for the benefit of all society.
This big vision requires a lot of small steps, starting with the completion of our application to register as a formal charity with the UK Charity Commission. My fellow trustees and I are currently working on the development of a long-term strategy that will set out where we want to be in 10-15 years and how we intend to get there. Part of this strategy will be considering how we can help reshape geoscience education, research, private sector practice and engagement with civil society to better support the Global Goals for Sustainable Development. Alongside other things, we’ll be considering the expansion of our groups beyond UK academia to other countries and those in industry, increased engagement with overseas projects, and more training and summer school opportunities for students.
Over the course of 2016-7 we’ll be publishing more information on our strategy review, on our website (www.gfgd.org).
Joel Gill is the Founder and Director of Geology for Global Development. He is currently completing a NERC/ESRC funded PhD on characterising interacting natural hazards at King’s College London (KCL), and teaches on geohazards and disasters at both KCL and the London School of Economics. Joel advises on overseas development projects, conferences and geoeducation initiatives. He is a Fellow of the Geological Society and a member of their External Relations Committee, with a focus on international development.
Keep up to date
## 34th SEGH International Conference: Geochemistry for Sustainable Development
Victoria Falls, Zambia
02 July 2018
## Submit Content
Members can keep in touch with their colleagues through short news and events articles of interest to the SEGH community.
## Science in the News
Latest on-line papers from the SEGH journal: Environmental Geochemistry and Health
• Fertilizer usage and cadmium in soils, crops and food 2018-06-23
### Abstract
Phosphate fertilizers were first implicated by Schroeder and Balassa (Science 140(3568):819–820, 1963) for increasing the Cd concentration in cultivated soils and crops. This suggestion has become a part of the accepted paradigm on soil toxicity. Consequently, stringent fertilizer control programs to monitor Cd have been launched. Attempts to link Cd toxicity and fertilizers to chronic diseases, sometimes with good evidence, but mostly on less certain data, are frequent. A re-assessment of this “accepted” paradigm is timely, given the larger body of data available today. The data show that both the input and output of Cd per hectare from fertilizers are negligibly small compared to the total amount of Cd/hectare usually present in the soil itself. Calculations based on current agricultural practices are used to show that it will take centuries to double the ambient soil Cd level, even after neglecting leaching and other removal effects. The concern of long-term agriculture should be the depletion of available phosphate fertilizers, rather than the negligible contamination of the soil by trace metals from fertilizer inputs. This conclusion is confirmed by showing that the claimed correlations between fertilizer input and Cd accumulation in crops are not robust. Alternative scenarios that explain the data are presented. Thus, soil acidulation on fertilizer loading and the effect of Mg, Zn and F ions contained in fertilizers are considered using recent $\mathrm{Cd}^{2+}$, $\mathrm{Mg}^{2+}$ and $\mathrm{F}^{-}$ ion-association theories. The protective role of ions like Zn, Se, Fe is emphasized, and the question of Cd toxicity in the presence of other ions is considered. These help to clarify difficulties in the standard point of view. This analysis does not modify the accepted views on Cd contamination by airborne delivery, smoking, and industrial activity, or algal blooms caused by phosphates.
• Effects of conversion of mangroves into gei wai ponds on accumulation, speciation and risk of heavy metals in intertidal sediments 2018-06-23
### Abstract
Mangroves are often converted into gei wai ponds for aquaculture, but how such conversion affects the accumulation and behavior of heavy metals in sediments is not clear. The present study aims to quantify the concentration and speciation of heavy metals in sediments in different habitats, including gei wai pond, mangrove marsh dominated by Avicennia marina and bare mudflat, in a mangrove nature reserve in South China. The results showed that gei wai pond acidified the sediment and reduced its electronic conductivity and total organic carbon (TOC) when compared to A. marina marsh and mudflat. The concentrations of Cd, Cu, Zn and Pb at all sediment depths in gei wai pond were lower than in the other habitats, indicating gei wai pond reduced the fertility and the ability to retain heavy metals in sediment. Gei wai pond sediment also had a lower heavy metal pollution problem according to multiple evaluation methods, including potential ecological risk coefficient, potential ecological risk index, geo-accumulation index, mean PEL quotients, pollution load index, mean ERM quotients and total toxic unit. Heavy metal speciation analysis showed that gei wai pond increased the transfer of the immobilized fraction of Cd and Cr to the mobilized one. According to the acid-volatile sulfide (AVS) and simultaneously extracted metals (SEM) analysis, the conversion of mangroves into gei wai pond reduced values of ([SEM] − [AVS])/f_oc, and the role of TOC in alleviating heavy metal toxicity in sediment. This study demonstrated that the conversion of mangrove marsh into gei wai pond not only reduced the ecological purification capacity for heavy metal contamination, but also enhanced the transfer of heavy metals from gei wai pond sediment to nearby habitats.
• Cytotoxicity induced by the mixture components of nickel and poly aromatic hydrocarbons 2018-06-22
### Abstract
Although particulate matter (PM) is composed of various chemicals, investigations regarding the toxicity that results from mixing the substances in PM are insufficient. In this study, the effects of low levels of three PAHs (benz[a]anthracene, benzo[a]pyrene, and dibenz[a,h]anthracene) on Ni toxicity were investigated to assess the combined effect of Ni–PAHs on the environment. We compared the difference in cell mortality and total glutathione (tGSH) reduction between single Ni and Ni–PAHs co-exposure using A549 (human alveolar carcinoma). In addition, we measured the change in Ni solubility in chloroform that was triggered by PAHs to confirm the existence of cation–π interactions between Ni and PAHs. In the single Ni exposure, the dose–response curve of cell mortality and tGSH reduction were very similar, indicating that cell death was mediated by the oxidative stress. However, 10 μM PAHs induced a depleted tGSH reduction compared to single Ni without a change in cell mortality. The solubility of Ni in chloroform was greatly enhanced by the addition of benz[a]anthracene, which demonstrates the cation–π interactions between Ni and PAHs. Ni–PAH complexes can change the toxicity mechanisms of Ni from oxidative stress to others due to the reduction of Ni2+ bioavailability and the accumulation of Ni–PAH complexes on cell membranes. The abundant PAHs contained in PM have strong potential to interact with metals, which can affect the toxicity of the metal. Therefore, the mixture toxicity and interactions between diverse metals and PAHs in PM should be investigated in the future.
https://carnap.io/shared/gleachkr@gmail.com/carnap-forallx.pandoc
# Natural Deduction in forall x systems
This document gives a short description of how Carnap presents the systems of natural deduction from P.D. Magnus' forall x and from the Calgary remix of forall x. At least some prior familiarity with Fitch-style proof systems is assumed.
## Propositional Systems
### Notation
The different admissible keyboard abbreviations for the different connectives are as follows:
| Connective | Keyboard |
|------------|----------|
| → | `->`, `=>`, `>` |
| ∧ | `/\`, `&`, `and` |
| ∨ | `\/`, `\|`, `or` |
| ↔ | `<->`, `<=>` |
| ¬ | `-`, `~`, `not` |
| ⊥ | `!?`, `_\|_` |
The available sentence letters are A through Z, together with the infinitely many subscripted letters P1, P2, … written `P_1, P_2` and so on.
Proofs consist of a series of lines. A line is either an assertion line containing a formula followed by a `:` and then a justification for that formula, or a separator line containing two dashes, thus: `--`. A justification consists of a rule abbreviation followed by zero or more numbers (citations of particular lines) and pairs of numbers separated by a dash (citations of a subproof contained within the given line range).
A subproof is begun by increasing the indentation level. The first line of a subproof should be more indented than the containing proof, and the lines directly contained in this subproof should maintain this indentation level. (Lines indirectly contained, by being part of a sub-sub-proof, will need to be indented more deeply.) The subproof ends when the indentation level of the containing proof is resumed; hence, two contiguous sub-proofs of the same containing proof can be distinguished from one another by inserting a separator line between them at the same level of indentation as the containing proof. The final unindented line of a derivation will serve as the conclusion of the entire derivation.
Here's an example derivation, using system SL of P.D. Magnus forall x:
```
    P:AS
        P:AS
        P:R 2
    P->P:->I 2-3
P->(P->P):->I 1-4
```
### Basic Rules
#### forall x System SL
The minimal system SL for P.D. Magnus' forall x (the system used in a proofchecker constructed with `.ForallxSL` in Carnap's Pandoc Markup) has the following set of rules for direct inferences:
| Rule | Abbreviation | Premises | Conclusion |
|------|--------------|----------|------------|
| And-Elim. | `∧E` | φ ∧ ψ | φ/ψ |
| And-Intro. | `∧I` | φ, ψ | φ ∧ ψ |
| Or-Elim | `∨E` | ¬ψ, φ ∨ ψ | φ |
| | | ¬φ, φ ∨ ψ | ψ |
| Or-Intro | `∨I` | φ/ψ | φ ∨ ψ |
| Conditional-Elim | `→E` | φ, φ → ψ | ψ |
| Biconditional-Elim | `↔E` | φ/ψ, φ ↔ ψ | ψ/φ |
| Reiteration | `R` | φ | φ |
We also have four rules for indirect inferences:
1. `→I`, which justifies an assertion of the form φ → ψ by citing a subproof beginning with the assumption φ and ending with the conclusion ψ;
2. `↔I`, which justifies an assertion of the form φ↔ψ by citing two subproofs, beginning with assumptions φ, ψ, respectively, and ending with conclusions ψ, φ, respectively;
3. `¬I`, which justifies an assertion of the form ¬φ by citing a subproof beginning with the assumption φ and ending with a pair of lines ψ,¬ψ.
4. `¬E`, which justifies an assertion of the form φ by citing a subproof beginning with the assumption ¬φ and ending with a pair of lines ψ,¬ψ.
Finally, `PR` can be used to justify a line asserting a premise, and `AS` can be used to justify a line making an assumption. A note about the reason for an assumption can be included in the rendered proof by writing `A/NOTETEXTHERE` rather than `AS` for an assumption. Assumptions are only allowed on the first line of a subproof.
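For instance (an illustration of ours, not from the original guide; we assume the ASCII forms `~I` and `->E` are accepted for ¬I and →E, on the pattern of the `->I` citation in the sample proof above), a derivation of ¬P from the premises P → Q and ¬Q looks like this:

```
P->Q:PR
~Q:PR
    P:AS
    Q:->E 1 3
    ~Q:R 2
~P:~I 3-5
```

The subproof on lines 3–5 begins with the assumption P and ends with the contradictory pair Q, ¬Q, which is exactly what `¬I` requires.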
#### forall x System SL Plus
The extended system SL Plus for P.D. Magnus' forall x (the system used in a proofchecker constructed with `.ForallxSLPlus` in Carnap's Pandoc Markup) also adds the following rules:
| Rule | Abbreviation | Premises | Conclusion |
|------|--------------|----------|------------|
| Dilemma | `DIL` | φ ∨ ψ, φ → χ, ψ → χ | χ |
| Hypothetical Syllogism | `HS` | φ → ψ, ψ → χ | φ → χ |
| Modus Tollens | `MT` | φ → ψ, ¬ψ | ¬φ |
As well as the following exchange rules, which can be used within a propositional context Φ:
| Rule | Abbreviation | Premises | Conclusion |
|------|--------------|----------|------------|
| Commutativity | `Comm` | Φ(φ ∧ ψ) | Φ(ψ ∧ φ) |
| | | Φ(φ ∨ ψ) | Φ(ψ ∨ φ) |
| | | Φ(φ ↔ ψ) | Φ(ψ ↔ φ) |
| Double Negation | `DN` | Φ(φ)/Φ(¬¬φ) | Φ(¬¬φ)/Φ(φ) |
| Material Conditional | `MC` | Φ(φ → ψ) | Φ(¬φ ∨ ψ) |
| | | Φ(¬φ ∨ ψ) | Φ(φ → ψ) |
| | | Φ(φ ∨ ψ) | Φ(¬φ → ψ) |
| | | Φ(¬φ → ψ) | Φ(φ ∨ ψ) |
| Biconditional Exchange | `↔ex` | Φ(φ ↔ ψ) | Φ((φ → ψ) ∧ (ψ → φ)) |
| | | Φ((φ → ψ) ∧ (ψ → φ)) | Φ(φ ↔ ψ) |
| DeMorgan's Laws | `DeM` | Φ(¬(φ ∧ ψ)) | Φ(¬φ ∨ ¬ψ) |
| | | Φ(¬(φ ∨ ψ)) | Φ(¬φ ∧ ¬ψ) |
| | | Φ(¬φ ∨ ¬ψ) | Φ(¬(φ ∧ ψ)) |
| | | Φ(¬φ ∧ ¬ψ) | Φ(¬(φ ∨ ψ)) |
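As a quick illustration (ours, not from the original; we assume derived rules are cited just like the basic ones), Modus Tollens collapses the six-line `¬I` derivation shown earlier into a single step:

```
P->Q:PR
~Q:PR
~P:MT 1 2
```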
#### Calgary TFL
The system TFL from the Calgary Remix of forall x (the system used in a proofchecker constructed with `.ZachTFL` in Carnap's Pandoc Markup) allows the propositional constant ⊥ (typed `!?` or `_|_`). It has the following set of rules for direct inferences:

| Rule | Abbreviation | Premises | Conclusion |
|------|--------------|----------|------------|
| And-Elim. | `∧E` | φ ∧ ψ | φ/ψ |
| And-Intro. | `∧I` | φ, ψ | φ ∧ ψ |
| Or-Intro | `∨I` | φ/ψ | φ ∨ ψ |
| Negation-Elim | `¬E` | φ, ¬φ | ⊥ |
| Explosion | `X` | ⊥ | ψ |
| Biconditional-Elim | `↔E` | φ/ψ, φ ↔ ψ | ψ/φ |
| Reiteration | `R` | φ | φ |
| Disjunctive Syllogism | `DS` | ¬ψ/¬φ, φ ∨ ψ | φ/ψ |
| Modus Tollens | `MT` | φ → ψ, ¬ψ | ¬φ |
| Double Negation Elim. | `DNE` | ¬¬φ | φ |
| DeMorgan's Laws | `DeM` | ¬(φ ∧ ψ) | ¬φ ∨ ¬ψ |
| | | ¬(φ ∨ ψ) | ¬φ ∧ ¬ψ |
| | | ¬φ ∨ ¬ψ | ¬(φ ∧ ψ) |
| | | ¬φ ∧ ¬ψ | ¬(φ ∨ ψ) |
We also have six rules for indirect inferences:
1. `→I`, which justifies an assertion of the form φ → ψ by citing a subproof beginning with the assumption φ and ending with the conclusion ψ;
2. `↔I`, which justifies an assertion of the form φ ↔ ψ by citing two subproofs, beginning with assumptions φ, ψ, respectively, and ending with conclusions ψ, φ, respectively;
3. `¬I`, which justifies an assertion of the form ¬φ by citing a subproof beginning with the assumption φ and ending with the conclusion ⊥;
4. `∨E`, which justifies an assertion of the form φ by citing a disjunction ψ ∨ χ and two subproofs beginning with assumptions ψ, χ respectively and each ending with the conclusion φ;
5. `IP` (indirect proof), which justifies an assertion of the form φ by citing a subproof beginning with the assumption ¬φ and ending with the conclusion ⊥;
6. `LEM` (Law of the Excluded Middle), which justifies an assertion of the form ψ by citing two subproofs beginning with assumptions φ, ¬φ respectively and each ending with the conclusion ψ.
As above, `PR` can be used to justify a line asserting a premise, and `AS` can be used to justify a line making an assumption. Assumptions are only allowed on the first line of a subproof.
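For example (a sketch of ours, not from the original), ⊥ is typed `!?`, so a TFL derivation of P from ¬¬P using `¬E` and `IP` might look like this:

```
~~P:PR
    ~P:AS
    !?:~E 2 1
P:IP 2-3
```

Line 3 applies ¬E to the contradictory pair ¬P, ¬¬P, and `IP` then discharges the assumption ¬P.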
The system `.ZachTFL2019` is like `.ZachTFL` except it disallows all derived rules, i.e., the only allowed rules are `R`, `X`, `IP`, and the I and E rules for the connectives.
Because the Calgary FOL systems treat v as a variable, `v` is not allowed as a keyboard shortcut for ∨.
## First-Order System QL
The proof system for Magnus's forall x, QL, is activated using `.ForallxQL`.
### Notation
The different admissible keyboard abbreviations for quantifiers and equality are as follows:

| Connective | Keyboard |
|------------|----------|
| ∀ | `A` |
| ∃ | `E` |
| = | `=` |
The forall x first-order systems do not contain sentence letters.
Application of a relation symbol is indicated by directly appending the arguments to the symbol.
The available relation symbols are A through Z, together with the infinitely many subscripted letters F1, F2, … written `F_1, F_2` and so on. The arity of a relation symbol is determined from context.
The available constants are a through w, with the infinitely many subscripted letters c1, c2, … written `c_1, c_2,…`.
The available variables are x through z, with the infinitely many subscripted letters x1, x2, … written `x_1, x_2,…`.
### Basic Rules
The first-order forall x systems QL and FOL (the systems used in proofcheckers constructed with `.ForallxQL` and `.ZachFOL` respectively) extend the rules of the systems SL and TFL respectively with the following set of new basic rules:
| Rule | Abbreviation | Premises | Conclusion |
|------|--------------|----------|------------|
| Existential Introduction | `∃I` | φ(σ) | ∃xφ(x) |
| Universal Elimination | `∀E` | ∀xφ(x) | φ(σ) |
| Universal Introduction | `∀I` | φ(σ) | ∀xφ(x) |
| Equality Introduction | `=I` | | σ = σ |
| Equality Elimination | `=E` | σ = τ, φ(σ)/φ(τ) | φ(τ)/φ(σ) |
Where Universal Introduction is subject to the restriction that σ must not appear in φ(x), in any undischarged assumption, or in any premise of the proof.1
It also adds one new rule for indirect derivations: `∃E`, which justifies an assertion ψ by citing an assertion of the form ∃xφ(x) and a subproof beginning with the assumption φ(σ) and ending with the conclusion ψ, where σ does not appear in ψ, ∃xφ(x), or in any of the undischarged assumptions or premises of the proof.
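As an illustration (ours, not from the original; we assume the keyboard forms `AE` and `EI` are accepted for ∀E and ∃I, matching the `A`/`E` abbreviations for the quantifiers), a QL derivation of ∃xGx from ∀x(Fx → Gx) and Fa might run:

```
Ax(Fx->Gx):PR
Fa:PR
Fa->Ga:AE 1
Ga:->E 2 3
ExGx:EI 4
```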
### Calgary FOL Systems
There are three systems corresponding to the Calgary remix of forall x. All of them allow sentence letters in first-order formulas. The available relation symbols are the same as for QL: A through Z, together with the infinitely many subscripted letters F1, F2, … written `F_1, F_2` etc. However, the available constants are only a through r, together with the infinitely many subscripted letters c1, c2, … written `c_1, c_2,…`. The available variables are s through z, with the infinitely many subscripted letters x1, x2, … written `x_1, x_2,…`.
Again, because the Calgary systems treat v as a variable, `v` is not allowed as a keyboard shortcut for ∨. However, the Calgary systems also allow `@` for ∀ and `3` for ∃.
| Connective | Keyboard |
|------------|----------|
| ∀ | `A`, `@` |
| ∃ | `E`, `3` |
| = | `=` |
As of the Fall 2019 edition of forall x: Calgary, the syntax for first-order formulas has arguments to predicates in parentheses and with commas (e.g., F(a, b)); prior to that edition, the convention was the same as in the original and Cambridge editions of forall x (e.g., Fab).
The original proof system for the Calgary version of forall x, `.ZachFOL`, adds, in addition to the basic rules of `.ForallxQL`, the rules:

| Rule | Abbreviation | Premises | Conclusion |
|------|--------------|----------|------------|
| Conversion of Quantifiers | `CQ` | ¬∀xφ(x) | ∃x¬φ(x) |
| | | ∃x¬φ(x) | ¬∀xφ(x) |
| | | ¬∃xφ(x) | ∀x¬φ(x) |
| | | ∀x¬φ(x) | ¬∃xφ(x) |
The 2019 versions of the FOL systems use the new syntax, i.e., F(a, b) instead of Fab. The system `.ZachFOL2019` allows only the basic rules of the TFL system and the basic rules of QL. The system `.ZachFOLPlus2019` allows the basic and derived rules of `.ZachTFL` and the CQ rules listed above.
In summary:
| System | TFL Rules | FOL Syntax | FOL Rules |
|--------|-----------|------------|-----------|
| `.ZachTFL` | Basic + Derived | — | — |
| `.ZachFOL` | Basic + Derived | Fab | Basic + CQ |
| `.ZachTFL2019` | Basic | — | — |
| `.ZachFOL2019` | Basic | F(a,b) | Basic |
| `.ZachFOLPlus2019` | Basic + Derived | F(a,b) | Basic + CQ |
1. Technically, Carnap checks only the assumptions and premises that are used in the derivation of φ(σ). This has the same effect in terms of what's derivable, but is a little more lenient.
https://zenodo.org/record/3268476/export/schemaorg_jsonld
Journal article Open Access
# Cognitive Network Fault Management Approach for Improving Resilience in 5G Networks
Gajic, Borislava; Mannweiler, Christian; Michalopoulos, Diomidis
### JSON-LD (schema.org) Export
{
"description": "<p>Resilience is one of the fundamental requirements of critical communication services such as ultra-reliable low latency (URLLC) services offered by 5G networks. In order to support the communication service, the 5G networks can take different approaches for deployment of network functions, i.e. the network functions can run on virtualized infrastructure (telco cloud) as well as on the specialized physical hardware instances (e.g. RAN functions). Irrespective of the deployment approach taken the adequate level of resilience needs to be supported on all parts of the network in order to achieve required level of service resilience. In this work, we aim at improving the resilience level of communication services by applying network fault management techniques specialized for 5G slicing-enabled networks taking jointly into account the aspects of virtualized and physical infrastructure. We describe the novel approach of designing flexible and cognitive fault management functions that can dynamically adapt their behavior based on the actual network slice requirements and current network context. We highlight the benefits of such an approach in achieving the<br>\nrequired level of resilience especially addressing the telco cloud domain.</p>",
"creator": [
{
"affiliation": "Nokia Bell Labs, Munich, Germany",
"@type": "Person",
"name": "Gajic, Borislava"
},
{
"affiliation": "Nokia Bell Labs, Munich, Germany",
"@type": "Person",
"name": "Mannweiler, Christian"
},
{
"affiliation": "Nokia Bell Labs, Munich, Germany",
"@type": "Person",
"name": "Michalopoulos, Diomidis"
}
],
"headline": "Cognitive Network Fault Management Approach for Improving Resilience in 5G Networks",
"datePublished": "2018-06-20",
"url": "https://zenodo.org/record/3268476",
"@context": "https://schema.org/",
"identifier": "https://doi.org/10.5281/zenodo.3268476",
"@id": "https://doi.org/10.5281/zenodo.3268476",
"@type": "ScholarlyArticle",
"name": "Cognitive Network Fault Management Approach for Improving Resilience in 5G Networks"
}
https://www.shaalaa.com/question-bank-solutions/elementary-transformations-prove-that-yz-x-2-zx-y-2-xy-z-2-zx-y-2-xy-z-2-yz-x-2-xy-z-2-yz-x-2-zx-y-2-divisible-x-y-z-hence-find-quotient_4170
# Solution - Prove that |(yz-x^2,zx-y^2,xy-z^2),(zx-y^2,xy-z^2,yz-x^2),(xy-z^2,yz-x^2,zx-y^2)| is divisible by (x + y + z) and hence find the quotient. - Elementary Transformations
#### Question
Prove that
$$\begin{vmatrix} yz-x^2 & zx-y^2 & xy-z^2 \\ zx-y^2 & xy-z^2 & yz-x^2 \\ xy-z^2 & yz-x^2 & zx-y^2 \end{vmatrix}$$
is divisible by $(x + y + z)$ and hence find the quotient.
#### Solution
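The site's full worked solution is gated behind a login; as an unofficial sketch (our reasoning, not the site's answer), note that the given matrix is, up to signs and a swap of two rows, the cofactor matrix of the circulant matrix

$$A=\begin{pmatrix} x & y & z \\ z & x & y \\ y & z & x \end{pmatrix}, \qquad \det A = x^3+y^3+z^3-3xyz=(x+y+z)(x^2+y^2+z^2-xy-yz-zx).$$

Since $\det(\operatorname{adj} A)=(\det A)^{2}$ for a $3\times 3$ matrix, and the sign changes from the negated rows and the row swap cancel, the given determinant equals $(x^3+y^3+z^3-3xyz)^2$. This is divisible by $(x+y+z)$, with quotient $(x+y+z)(x^2+y^2+z^2-xy-yz-zx)^2$.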
https://labs.tib.eu/arxiv/?author=Guilu%20Long
• ### Optimizing a Polynomial Function on a Quantum Simulator(1804.05231)
April 14, 2018 quant-ph
The gradient descent method, one of the major methods in numerical optimization, is a key ingredient in many machine learning algorithms. As one of the most fundamental ways to solve optimization problems, it moves the function value along the direction of steepest descent. Because of the vast resource consumption when dealing with high-dimensional problems, a quantum version of this iterative optimization algorithm was proposed recently [arXiv:1612.01789]. Here, we develop this protocol and implement it on a quantum simulator with limited resources. Moreover, a prototypical experiment is shown with a 4-qubit Nuclear Magnetic Resonance quantum processor, demonstrating an iterative optimization process on a polynomial function. In each iteration, we achieved an average fidelity of 94\% compared with theoretical calculation via full-state tomography. In particular, the iterative point gradually converged to the local minimum. We apply our method to the multidimensional scaling problem, further showing the potential to yield an exponential improvement over classical counterparts. With the rapid development of quantum information, our work could provide a subroutine for applications on future practical quantum computers.
• ### Quantum Spacetime on a Quantum Simulator(1712.08711)
Dec. 23, 2017 quant-ph, hep-th, gr-qc
We experimentally simulate spin networks -- a fundamental description of quantum spacetime at the Planck level. We achieve this by simulating quantum tetrahedra and their interactions. The tensor product of these quantum tetrahedra comprises spin networks. In this initial attempt to study quantum spacetime by quantum information processing, on a four-qubit nuclear magnetic resonance quantum simulator, we simulate the basic module -- comprising five quantum tetrahedra -- of the interactions of quantum spacetime. By measuring the geometric properties of the corresponding quantum tetrahedra and simulating their interactions, our experiment serves as the basic module that represents the Feynman diagram vertex in the spin-network formulation of quantum spacetime.
• ### NMRCloudQ: A Quantum Cloud Experience on a Nuclear Magnetic Resonance Quantum Computer(1710.03646)
Oct. 10, 2017 quant-ph
As of today, no one can tell when a universal quantum computer with thousands of logical quantum bits (qubits) will be built. At present, most quantum computer prototypes involve fewer than ten individually controllable qubits, and only exist in laboratories because of either the great cost of devices or professional maintenance requirements. Moreover, scientists believe that quantum computers will never replace our daily, every-minute use of classical computers, but would rather serve as a substantial addition to the classical ones when tackling some particular problems. For the above two reasons, cloud-based quantum computing is anticipated to be the most useful and reachable form for public users to experience the power of quantum computing. As an initial attempt, IBM Q launched an influential cloud service on a superconducting quantum processor in 2016, but no other platform has followed up yet. Here, we report our new cloud quantum computing service -- NMRCloudQ (http://nmrcloudq.com/zh-hans/), where nuclear magnetic resonance, one of the pioneering platforms with mature techniques in experimental quantum computing, plays the role of implementing computing tasks. Our service provides a comprehensive software environment preconfigured with a list of quantum information processing packages, and aims to be freely accessible to either amateurs who look forward to keeping pace with this quantum era or professionals who are interested in carrying out real quantum computing experiments in person. In our current version, four qubits are already usable, with on average a 1.26% single-qubit gate error rate and a 1.77% two-qubit controlled-NOT gate error rate via randomized benchmarking tests. Improved control precision as well as a new seven-qubit processor are also in preparation and will be available later.
• ### Implementation of Multiparty quantum clock synchronization(1708.06050)
Aug. 22, 2017 quant-ph
Quantum clock synchronization (QCS) is the task of measuring the time difference among spatially separated clocks using the principles of quantum mechanics. The first QCS algorithm, proposed by Chuang and Jozsa, is merely based on two parties; it was further extended and generalized to the multiparty situation by Krco and Paul. They present a multiparty QCS protocol based upon W states that utilizes shared prior entanglement and broadcast of classical information to synchronize spatially separated clocks. Shortly afterwards, Ben-Av and Exman came up with an optimized multiparty QCS using Z states. In this work, we report the first implementation of the Krco and Ben-Av multiparty QCS algorithms using four-qubit Nuclear Magnetic Resonance (NMR). The experimental results show great agreement with theory and also prove the Ben-Av multiparty QCS algorithm to be more accurate than Krco's.
• ### Measuring Holographic Entanglement Entropy on a Quantum Simulator(1705.00365)
April 11, 2019 quant-ph, hep-th, gr-qc, cond-mat.str-el
Quantum simulation promises to have wide applications in many fields where problems are hard to model with classical computers. Various quantum devices of different platforms have been built to tackle problems in, say, quantum chemistry, condensed matter physics, and high-energy physics. Here, we report an experiment towards the simulation of quantum gravity by simulating holographic entanglement entropy. On a six-qubit nuclear magnetic resonance quantum simulator, we demonstrate a key result of the Anti-de Sitter/conformal field theory (AdS/CFT) correspondence---the Ryu-Takayanagi formula---by measuring the relevant entanglement entropies on the perfect tensor state. The fidelity of our experimentally prepared six-qubit state is 85.0\% via full state tomography, and reaches 93.7\% if the signal decay due to decoherence is taken into account. Our experiment serves as the basic module for simulating more complex tensor network states exploring the AdS/CFT correspondence. As an initial experimental attempt to study AdS/CFT via quantum information processing, our work opens up new avenues for exploring quantum gravity phenomena on quantum simulators.
• ### Experimental Realization of Single-shot Nonadiabatic Holonomic Gates in Nuclear Spins(1703.10348)
March 30, 2017 quant-ph
Nonadiabatic holonomic quantum computation has received increasing attention due to its robustness against control errors. However, all previous schemes have had to use at least two sequentially implemented gates to realize a general one-qubit gate. Based on two recent works, we construct two Hamiltonians and experimentally realize nonadiabatic holonomic gates by a single-shot implementation in a two-qubit nuclear magnetic resonance (NMR) system. Two noncommuting one-qubit holonomic gates, rotating along the $\hat{x}$ and $\hat{z}$ axes respectively, are implemented by evolving a work qubit and an ancillary qubit nonadiabatically following a designed quantum circuit. Using a sequence compiler developed for the NMR quantum information processor, we optimize the whole pulse sequence, minimizing the total error of the implementation. Finally, all the nonadiabatic holonomic gates reach high unattenuated experimental fidelities over $98\%$.
• ### Digital spiral object identification using random light(1609.08741)
Feb. 16, 2017 quant-ph, physics.optics
Photons that are entangled or correlated in orbital angular momentum have been extensively used for remote sensing, object identification and imaging. It has recently been demonstrated that intensity fluctuations give rise to the formation of correlations in the orbital angular momentum components and angular positions of random light. Here, we demonstrate that the spatial signatures and phase information of an object, with rotational symmetries, can be identified using classical orbital angular momentum correlations in random light. The Fourier components imprinted in the digital spiral spectrum of the object, measured through intensity correlations, unveil its spatial and phase information. Sharing similarities with conventional compressive sensing protocols that exploit sparsity to reduce the number of measurements required to reconstruct a signal, our technique allows sensing of an object with fewer measurements than other schemes that use pixel-by-pixel imaging. One remarkable advantage of our technique is the fact that it does not require the preparation of fragile quantum states of light and works at both low- and high-light levels. In addition, our technique is robust against environmental noise, a fundamental feature of any realistic scheme for remote sensing.
• ### Experimental Identification of Non-Abelian Topological Orders on a Quantum Simulator(1608.06932)
Feb. 1, 2017 quant-ph, cond-mat.str-el
Topological orders can be used as media for topological quantum computing --- a promising quantum computation model due to its invulnerability against local errors. Conversely, a quantum simulator, often regarded as a quantum computing device for special purposes, also offers a way of characterizing topological orders. Here, we show how to identify distinct topological orders via measuring their modular $S$ and $T$ matrices. In particular, we employ a nuclear magnetic resonance quantum simulator to study the properties of three topologically ordered matter phases described by the string-net model with two string types, including the $\Z_2$ toric code, doubled semion, and doubled Fibonacci. The third one, non-Abelian Fibonacci order is notably expected to be the simplest candidate for universal topological quantum computing. Our experiment serves as the basic module, built on which one can simulate braiding of non-Abelian anyons and ultimately topological quantum computation via the braiding, and thus provides a new approach of investigating topological orders using quantum computers.
• ### Towards quantum supremacy: enhancing quantum control by bootstrapping a quantum processor(1701.01198)
Jan. 5, 2017 quant-ph
Quantum computers promise to outperform their classical counterparts in many applications. Rapid experimental progress in the last two decades includes the first demonstrations of small-scale quantum processors, but realising large-scale quantum information processors capable of universal quantum control remains a challenge. One primary obstacle is the inadequacy of classical computers for the task of optimising the experimental control field as we scale up to large systems. Classical optimisation procedures require a simulation of the quantum system and have a running time that grows exponentially with the number of quantum bits (qubits) in the system. Here we show that it is possible to tackle this problem by using the quantum processor to optimise its own control fields. Using measurement-based quantum feedback control (MQFC), we created a 12-coherence state with the essential control pulse completely designed by a 12-qubit nuclear magnetic resonance (NMR) quantum processor. The results demonstrate the superiority of MQFC over classical optimisation methods, in both efficiency and accuracy. The time required for MQFC optimisation is linear in the number of qubits, and our 12-qubit system beat a classical computer configured with 2.4 GHz CPU and 8 GB memory. Furthermore, the fidelity of the MQFC-prepared 12-coherence was about 10% better than the best result using classical optimisation, since the feedback approach inherently corrects for unknown imperfections in the quantum processor. As MQFC is readily transferrable to other technologies for universal quantum information processors, we anticipate that this result will open the way to scalably and precisely control quantum systems, bringing us closer to a demonstration of quantum supremacy.
• ### Experimental Study of Forrelation in Nuclear Spins(1612.01652)
Dec. 6, 2016 quant-ph, cs.CC
Correlation functions are often employed to quantify the relationships among interdependent variables or sets of data. Recently, a new class of correlation functions, called Forrelation, has been introduced by Aaronson and Ambainis for studying the query complexity of quantum devices. It was found that there exists a quantum query algorithm solving 2-fold Forrelation problems with an exponential quantum speedup over all possible classical means, which represents essentially the largest possible separation between quantum and classical query complexities. Here we report an experimental study probing the 2-fold and 3-fold Forrelations encoded in nuclear spins. The major experimental challenge is to control the spin fluctuation to within a threshold value, which is achieved by developing a set of optimized GRAPE pulse sequences. Overall, our small-scale implementation indicates that the quantum query algorithm is capable of determining the values of Forrelations within the acceptable accuracy required for demonstrating quantum supremacy, given current technology and in the presence of experimental noise.
• ### Multiphoton controllable transport between remote resonators(1605.03775)
May 31, 2016 quant-ph
We develop a novel method for multiphoton controllable transport between remote resonators. Specifically, an auxiliary resonator is used to control the coherent long-range coupling of two spatially separated resonators, mediated by a coupled-resonator chain of arbitrary length. In this manner, an arbitrary multiphoton quantum state can be either transmitted through or reflected off the intermediate chain on demand, with very high fidelity. We find, on using a time-independent perturbative treatment, that quantum information leakage of an arbitrary Fock state is limited by two upper bounds, one for the transmitted case and the other for the reflected case. In principle, the two upper bounds can be made arbitrarily small, which is confirmed by numerical simulations.
• ### Quantum state tomography via reduced density matrices(1604.02046)
April 7, 2016 quant-ph
Quantum state tomography via local measurements is an efficient tool for characterizing quantum states. However it requires that the original global state be uniquely determined (UD) by its local reduced density matrices (RDMs). In this work we demonstrate for the first time a class of states that are UD by their RDMs under the assumption that the global state is pure, but fail to be UD in the absence of that assumption. This discovery allows us to classify quantum states according to their UD properties, with the requirement that each class be treated distinctly in the practice of simplifying quantum state tomography. Additionally we experimentally test the feasibility and stability of performing quantum state tomography via the measurement of local RDMs for each class. These theoretical and experimental results advance the project of performing efficient and accurate quantum state tomography in practice.
• ### Tomography is necessary for universal entanglement detection with single-copy observables(1511.00581)
Nov. 2, 2015 quant-ph
Entanglement, one of the central mysteries of quantum mechanics, plays an essential role in numerous applications of quantum information theory. A natural question of both theoretical and experimental importance is whether universal entanglement detection is possible without full state tomography. In this work, we prove a no-go theorem that rules out this possibility for any non-adaptive schemes that employ single-copy measurements only. We also examine in detail a previously implemented experiment, which claimed to detect entanglement of two-qubit states via adaptive single-copy measurements without full state tomography. By performing the experiment and analyzing the data, we demonstrate that the information gathered is indeed sufficient to reconstruct the state. These results reveal a fundamental limit for single-copy measurements in entanglement detection, and provides a general framework to study the detection of other interesting properties of quantum states, such as the positivity of partial transpose and the $k$-symmetric extendibility.
• ### Experimental quantum secure direct communication with single photons(1503.00451)
Sept. 7, 2015 quant-ph
Quantum communication holds promise for absolute security in secret message transmission. Quantum secure direct communication is an important mode of quantum communication in which secret messages are securely communicated over a quantum channel directly. It has become one of the hot research areas in the last decade, and offers both high security and instantaneousness in communication. It is also a basic cryptographic primitive for constructing other quantum communication tasks such as quantum authentication, quantum dialogue and so on. Here we report the first experimental demonstration of quantum secure direct communication with single photons. The experiment is based on the DL04 protocol, equipped with a simple frequency coding. It has the advantage of being robust against channel noise and loss. The experiment demonstrated explicitly the block data transmission technique, which is essential for quantum secure direct communication. In the experiment, a block transmission of 80 single photons was demonstrated over fiber, and it provides effectively 16 different values, which is equivalent to 4 bits of direct transmission in one block. The experiment firmly demonstrated the feasibility of quantum secure direct communication in the presence of noise and loss.
• ### Experimental Estimation of Average Fidelity of a Clifford Gate on a 7-qubit Quantum Processor(1411.7993)
Nov. 28, 2014 quant-ph
Quantum gates in experiment are inherently prone to errors that need to be characterized before they can be corrected. Full characterization via quantum process tomography is impractical and often unnecessary. For most practical purposes, it is enough to estimate more general quantities such as the average fidelity. Here we use a unitary 2-design and twirling protocol for efficiently estimating the average fidelity of Clifford gates, to certify a 7-qubit entangling gate in a nuclear magnetic resonance quantum processor. Compared with more than $10^8$ experiments required by full process tomography, we conducted 1656 experiments to satisfy a statistical confidence level of 99%. The average fidelity of this Clifford gate in experiment is 55.1%, and rises to 87.5% if the infidelity due to decoherence is removed. The entire protocol of certifying Clifford gates is efficient and scalable, and can easily be extended to any general quantum information processor with minor modifications.
• ### Experimental Realization of Nonadiabatic Holonomic Quantum Computation(1302.0384)
June 15, 2013 quant-ph
Due to its geometric nature, holonomic quantum computation is fault-tolerant against certain types of control errors. Although proposed more than a decade ago, the experimental realization of holonomic quantum computation is still an open challenge. In this Letter, we report the first experimental demonstration of nonadiabatic holonomic quantum computation in liquid NMR quantum information processors. Two non-commuting holonomic single-qubit gates, rotations about the x-axis and about the z-axis, and the two-qubit holonomic controlled-NOT gate are realized with high fidelity by evolving the work qubits and an ancillary qubit nonadiabatically. The successful realization of these universal elementary gates in nonadiabatic quantum computing demonstrates the experimental feasibility and the fascinating features of this fast and resilient quantum computing paradigm.
• ### Experimental simulation of anyonic fractional statistics with an NMR quantum information processor(1210.4760)
Oct. 17, 2012 quant-ph
Anyons have exotic statistical properties, fractional statistics, differing from Bosons and Fermions. They can be created as excitations of some Hamiltonian models. Here we present an experimental demonstration of anyonic fractional statistics by simulating a version of the Kitaev spin lattice model proposed by Han et al[Phys. Rev.Lett. 98, 150404 (2007)] using an NMR quantum information processor. We use a 7-qubit system to prepare a 6-qubit pseudopure state to implement the ground state preparation and realize anyonic manipulations, including creation, braiding and anyon fusion. A $\pi/2\times 2$ phase difference between the states with and without anyon braiding, which is equivalent to two successive particle exchanges, is observed. This is different from the $\pi\times 2$ and $2\pi \times 2$ phases for Fermions and Bosons after two successive particle exchanges, and is consistent with the fractional statistics of anyons.
• ### Two Mode Photon Bunching Effect as Witness of Quantum Criticality in Circuit QED(0812.2774)
Dec. 15, 2008 quant-ph, cond-mat.mes-hall
We suggest a scheme to probe critical phenomena at a quantum phase transition (QPT) using the quantum correlation of two photonic modes simultaneously coupled to a critical system. As an experimentally accessible physical implementation, a circuit QED system is formed by a capacitively coupled Josephson junction qubit array interacting with one superconducting transmission line resonator (TLR). It realizes an Ising chain in the transverse field (ICTF) which interacts with the two magnetic modes propagating in the TLR. We demonstrate that in the vicinity of criticality the originally independent fields tend to display photon bunching effects due to their interaction with the ICTF. Thus, the occurrence of the QPT is reflected by the quantum characteristics of the photonic fields.
• ### Induced Entanglement Enhanced by Quantum Criticality(0805.4659)
May 30, 2008 quant-ph, cond-mat.other
Two-qubit entanglement can be induced by a quantum data bus interacting with them. In this paper, with the quantum spin chain in a transverse field as an illustration of a quantum data bus, we show that such induced entanglement can be enhanced by the quantum phase transition (QPT) of the quantum data bus. We consider two external spins simultaneously coupled to a transverse field Ising chain. By adiabatically eliminating the degrees of freedom of the chain, the effective coupling between these two spins is obtained. The matrix elements of the effective Hamiltonian are expressed in terms of the dynamical structure factor (DSF) of the chain. The DSF is the Fourier transformation of the Green function of the Ising chain and can be calculated numerically by a method introduced in [O. Derzhko, T. Krokhmalskii, Phys. Rev. B \textbf{56}, 11659 (1997)]. Since all characteristics of the QPT are embodied in the DSF, the dynamical evolution of the two external spins displays a singularity in the vicinity of the critical point.
• ### Creation of Entanglement between Two Electron Spins Induced by Many Spin Ensemble Excitations(cond-mat/0703342)
Oct. 23, 2007 quant-ph, cond-mat.mes-hall
We theoretically explore the possibility of creating spin entanglement by simultaneously coupling two electronic spins to a nuclear ensemble. By microscopically modeling the spin ensemble with a single-mode boson field, we use the time-dependent Fr\"{o}hlich transformation (TDFT) method developed most recently [Yong Li, C. Bruder, and C. P. Sun, Phys. Rev. A \textbf{75}, 032302 (2007)] to calculate the effective coupling between the two spins. Our investigation shows that the total system realizes a solid-state-based architecture for cavity QED. Exchanging such an effective boson in a virtual process can result in an effective interaction between the two spins. It is found that a maximally entangled state can be obtained when the velocity of the electrons matches the initial distance between them in a suitable way. Moreover, we also study how the number of collective excitations influences the entanglement. It is shown that the larger the number of excitations, the less the two spins entangle with each other.
• ### Nuclear Magnetic Resonance Implementation of a Quantum Clock Synchronization Algorithm(quant-ph/0406208)
Nov. 24, 2004 quant-ph
The quantum clock synchronization algorithm proposed by I. L. Chuang (Phys. Rev. Lett. 85, 2006 (2000)) has been implemented in a three-qubit nuclear magnetic resonance quantum system. The effective pure state is prepared by the spatial averaging approach. The time difference between two separated clocks can be determined by reading it out directly through the NMR spectra.
• ### Multiple Round Quantum Dense Coding And Its Implementation Using Nuclear Magnetic Resonance(quant-ph/0407104)
July 14, 2004 quant-ph
A multiple round quantum dense coding (MRQDC) scheme based on the quantum phase estimation algorithm is proposed. Using an $m+1$ qubit system, Bob can transmit $2^{m+1}$ messages to Alice by manipulating only one qubit and exchanging it between Alice and Bob for $m$ rounds. The information capacity is enhanced to $m+1$ bits. We have implemented the scheme in a three-qubit nuclear magnetic resonance (NMR) quantum computer. The experimental results show a good agreement between theory and experiment.
http://mathhelpforum.com/algebra/37019-help-step-me-through-little-lost.html
# Math Help - Help Step me through? Little lost
1. ## Help Step me through? Little lost
Express the result in the form $a + bi$.
$(1+\sqrt{-3})(2-\sqrt{-3})$
2. expand as you would usually (using distributive law)
this gives
2 - root minus 3 + 2(root minus 3) - (-3)
simplifies to
5 + i(root 3)
3. ..
4. Actually, finch21 had it right:
$2 - i\sqrt{3}+2i\sqrt{3}-(i\sqrt{3})^2$
$= 2 + i\sqrt{3} - (-3)$
$= 2 + i\sqrt{3} + 3$
$= 5 + i\sqrt{3}$
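(Editorial note, not part of the original thread: the result is easy to verify numerically with Python's built-in `cmath` module, taking $\sqrt{-3}$ as the principal root $i\sqrt{3}$, as the posters do.)

```python
import cmath

# (1 + sqrt(-3)) * (2 - sqrt(-3)), with sqrt(-3) = i*sqrt(3)
z = (1 + cmath.sqrt(-3)) * (2 - cmath.sqrt(-3))
print(z)  # approximately (5+1.7320508j), i.e. 5 + i*sqrt(3)
```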
5. I didn't mean to suggest finch was incorrect - that post wasn't there when I started typing... and I just got lost somewhere...
It's late.
I'm going to bed...
Oh incidentally - in this forum, would it not make sense to have a message that pops up in the event that someone else has posted to the topic before you enter your post?
Especially in a forum such as this...
http://www.akuna.shop.pl/36158.html
## area of a trapezium
#### Program to calculate area and perimeter of Trapezium …
Program to calculate area and perimeter of a trapezium. A trapezium is a quadrilateral with at least one pair of parallel sides; the other two sides may not be parallel. The parallel sides are called the bases of the trapezium and the other two sides are called its legs.
#### G16d – Area of a trapezium – BossMaths
Showing that $$\text{Area} = \dfrac{h(a+b)}{2}$$. Move the red points around to form any trapezium you like. Then pick any of the methods and drag the slider to see how you could work out the area of the trapezium using your knowledge of parallelograms, triangles or rectangles.
#### Area of a Trapezium Using Rectangles - Mr-Mathematics
Mar 01, 2020: Area of a trapezium = 1/2 (a + b)h. Even with this formula, students would struggle to gain full marks when asked to find the area of a trapezium in an exam. For example, here is a question and examiner's report by AQA.
#### Exam Style Questions - Corbettmaths
Exam Style Questions. Ensure you have: pencil, pen, ruler, protractor, pair of compasses and eraser. You may use tracing paper if needed. Guidance: 1. Read each question carefully before you begin answering it. 2. Don't spend too long on one question. … The area of the trapezium …
#### Trapezium Area Calculator - onlinemath4all
Area of Trapezium = … square units. We hope that the calculator given in this section is useful for students checking their answers to problems on finding the area of a trapezium.
#### How to Calculate the Area of a Trapezoid: 8 Steps (with …
Oct 02, 2019: A trapezoid, also known as a trapezium, is a 4-sided shape with two parallel bases of different lengths. The formula for the area of a trapezoid is A = ½(b1 + b2)h, where b1 and b2 are the lengths of the bases and h is the height. If you only know the side lengths of a regular trapezoid, you can break the trapezoid into simple shapes to find the height and finish your calculation.
#### Polygons: Area of A Trapezium | Teaching Resources
10 days ago: GCSE Higher lesson. Understand how to use a formula to find the area of a trapezium. Starter, derivation of the formula, worked examples, worksheet (different questions, same formats), all answers included.
#### Trapezium | Definition of Trapezium at Dictionary.com
Trapezium definition at Dictionary.com, a free online dictionary with pronunciation, synonyms and translation. Look it up now!
#### Area of trapezium formula - All Mathematics Solutions
Nov 28, 2018: Learn about the trapezium, the area of a trapezium and its properties. This is one of the quadrilaterals that we have in geometry. We can call the trapezium a convex quadrilateral because all its angles will be less than 180°. Now we will study the definition of the trapezium and we will see the shape of the trapezium.
#### Trapezoid - Wikipedia
A trapezium as any quadrilateral more general than a parallelogram is the sense of the term in Euclid. Confusingly, the word trapezium was sometimes used in England from c. 1800 to c. 1875 to denote an irregular quadrilateral having no sides parallel. This is now obsolete in England, but continues in …
#### How To Find Area Of Trapezium - A Plus Topper
Apr 25, 2017: Area of Trapezium: example problems with solutions. Trapezium: a quadrilateral which has one pair of opposite sides parallel. Example 1: Find the area of a trapezium whose parallel sides are 25 cm and 13 cm and whose other sides are 15 cm and 15 cm. Example 2: A field is in the shape of a …
#### Area of a Trapezium
In the US (for some) a trapezium is a four-sided polygon with no parallel sides; in the UK a trapezium is a four-sided polygon with exactly one pair of parallel sides; whereas in Canada a trapezoid has an inclusive definition in that it's a four-sided polygon with at least one pair of parallel sides, hence parallelograms are special trapezoids.
#### Area of Trapezium - Formula with Examples - Teachoo - Area …
Dec 12, 2018: Area of trapezium = 1/2 × sum of parallel sides × height = 1/2 × (a + b) × h, where the height is the perpendicular distance between the two parallel lines. Example: find the area of a trapezium whose parallel sides are 4 cm and 6 cm and whose height is 3 cm. So a = 4 cm, b = 6 cm and h = 3 cm, and Area = 1/2 × (4 + 6) × 3 = 15 cm².
#### What is the Formula of the area of a trapezium? - Quora
Area of a Trapezium: to find the area of any trapezium, add together the parallel sides and multiply by the height. Then halve your answer. Try the interactive example on the site.
#### Trapezium - definition of trapezium by The Free Dictionary
Define trapezium: trapezium synonyms, trapezium pronunciation, trapezium translation, English dictionary definition of trapezium. … The top court also sought the personal appearance of the chairman of the Taj Trapezium Zone (TTZ), a 10,400 sq km area spread over Agra, …
#### Rhombus, Parallelogram and Trapezium: Formulae, Videos and …
Area of Isosceles Trapezium = h $$\frac{(a + b)}{2}$$. Parallelogram: what does a parallelogram mean? Parallel lines are two lines that never meet, and a parallelogram is a slanted rectangle with the lengths of opposite sides equal, just like a rectangle. Because of the parallel lines, opposite sides are equal and parallel.
#### How to Find the Area of a Trapezium with No Parallel Sides
Jun 08, 2017: How to find the area of a trapezium with no parallel sides. In this article, you will learn how to find the area of a quadrilateral with no parallel sides (a trapezium in American English, but a trapezoid in British English). You will need …
#### Area of a Trapezium |Formula of Area of a trapezium|Solved ,
Two parallel sides of a trapezium are of lengths 27 cm and 19 cm respectively, and the distance between them is 14 cm. Find the area of the trapezium. Solution: Area of the trapezium = ½ × (sum of parallel sides) × (distance between them) = {½ × (27 + 19) × 14} cm² = 322 cm².
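The worked example above is easy to reproduce in code. Here is a minimal Python sketch (the function name `trapezium_area` is just illustrative):

```python
def trapezium_area(a, b, h):
    """Area of a trapezium with parallel sides a, b and perpendicular height h."""
    return (a + b) * h / 2

# Worked example from the text: parallel sides 27 cm and 19 cm, height 14 cm
print(trapezium_area(27, 19, 14))  # 322.0 (cm^2)
```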
#### AREA OF TRAPEZIUM - onlinemath4all
Area of trapezium: here we are going to see the formula to find the area of a trapezium, together with example problems. Definition of trapezium: the definition of trapezium is entirely different in the US and the UK. In the US, a quadrilateral having no parallel sides is called a trapezium.
#### Area Of A Trapezium – Different Ways To Calculate The Area …
May 20, 2019: Finding the area of a trapezium without parallel sides. Sometimes you may need to find the area of a trapezium that doesn't have parallel sides. To get started, you'll need to divide the trapezium into two triangles. The next step is to get the values for an angle of the side, one side that you know, and the other side indicated.
#### Area of Trapezium (UK) - Geometry
How to calculate the area of a trapezium: enter a value in the top, base and height fields, then click on the "Calculate area of trapezium" button. Your answer will appear in the "area of trapezium" field. Geometry definitions: the following is a list of definitions relating to the area of a trapezium.
#### Area of a trapezium - Area - KS3 Maths Revision - BBC Bitesize
Area of a trapezium: the area of a trapezium is given by ½(a + b)h. You can see that this is true by taking two identical trapezia (or trapeziums) to make a parallelogram.
#### Area of a Trapezium - Peter Vis
The area of a trapezium, also known as a trapezoid, is ½(a + b)h. This article shows how to calculate the area and rearrange the formula to find the sides given the area.
#### Area of a Trapezium Practice Questions – Corbettmaths
Apr 04, 2018: The Corbettmaths practice questions on the area of a trapezium. Videos, worksheets, 5-a-day and much more.
#### What is the formula of trapezium? - Quora
Nov 14, 2017: A trapezium is a quadrilateral in which only one pair of opposite sides is parallel! Using the above image we can construct the equation: $1/2(a+b)h$, where $1/2$ is always constant and 'a' and 'b' are the opposite parallel sides of the trapezium.
#### How to Calculate the Area of an Irregular Trapezoid …
Jul 17, 2019: It's usually easier to measure the area of "regular" shapes. However, "irregular" shapes like an irregular trapezium (aka an irregular trapezoid) are common and need to be calculated as well. There are irregular trapezoid area calculators and a trapezoid area …
#### Trapezoid
You can calculate the area when you know the median: it is just the median times the height, Area = mh. Trapezium: a trapezium (UK: trapezoid) is a quadrilateral with NO parallel sides. The US and UK have their definitions swapped over, like this:
#### Area of a Trapezium - Transum
Check that you can find the area of a trapezium and use the trapezium area formula for problem solving. This is level 1: find the areas of the trapezia.
#### Trapezium - Definition, Types, Area and Perimeter Formulas
The area of a trapezium can be found by taking the average of the two bases and multiplying by the altitude. So the area formula is given as: Area of a Trapezium, A = h(a+b)/2 square units, where "a" and "b" are the bases and "h" is the altitude or height.
#### Trapezium | definition of trapezium by Medical dictionary
As my online source goes on to tell of the trapezium, "[s]ince it has no interesting properties beyond those of a quadrilateral, it is not used much in geometry". Of course, the geometry of painters is quite a different thing from that of the geometers.
https://groupprops.subwiki.org/wiki/Sub-APS_of_groups
# Sub-APS of groups
Let $(G,\Phi)$ be an APS of groups. A sub-APS $H$ of $G$ is, for every $n$, a subgroup $H_n$ of $G_n$ such that $\Phi_{m,n}(g,h) \in H_{m+n}$ whenever $g \in H_m, h \in H_n$. Thus, $H$ can be viewed as an APS of groups in its own right.
https://physics.stackexchange.com/questions/442781/why-do-bosonic-atoms-behave-like-they-do
# Why do bosonic atoms behave like they do?
Why do bosonic particles behave the way they do? Why doesn't the Pauli exclusion principle affect the electrons, protons, and neutrons of the atoms at temperatures close to absolute zero? How does spin factor in?
How can an atom composed of fermions (electrons, protons, and neutrons) behave as a boson, despite the Pauli exclusion principle?
The answer is related to entanglement. The Pauli exclusion principle still applies to the atom's constituents, but this does not prevent many of these atoms from occupying the same state, because the fermionic constituents are entangled with each other.
Here's a toy model that demonstrates the basic idea. Suppose that $$e_n$$ is the operator that creates an electron with a wavefunction specified by $$n$$, and suppose that $$p_k$$ is the operator that creates a proton with a wavefunction specified by $$k$$. Saying that the electron and proton are fermions means that all of these operators anticommute with each other: \begin{align*} \big\{e_n,\,e_k\big\} &= 0 \\ \big\{p_n,\,p_k\big\} &= 0 \\ \big\{e_n,\,p_k\big\} &= 0 \tag{1a} \end{align*} with $$\{A,B\}\equiv AB+BA. \tag{1b}$$ These anticommutation relations imply $$e_n e_n = 0 \hskip2cm p_k p_k = 0, \tag{2}$$ which expresses the Pauli exclusion principle: we cannot put two electrons in the same state, or two protons.
As a toy model of a bosonic atom composed of two fermions (such as a hydrogen atom composed of an electron and a proton), suppose that the operator $$a(f) = \sum_{n,k} f_{nk} e_n p_k. \tag{3}$$ creates a single-atom state. The coefficients $$f$$ characterize the atom's internal structure as well as its external characteristics (such as its overall momentum). The fermion anticommutation relations (1) imply that the operators (3) satisfy the boson commutation relations $$\big[a(f),\,a(g)\big] = 0 \tag{4a}$$ with $$[A,B]\equiv AB-BA. \tag{4b}$$ This doesn't answer the question yet, though. To answer the question, we have to prove that we can put two of these atoms in the same state without annihilating the state. In other words, we have to prove $$a(f)\,a(f) \neq 0. \tag{5}$$ To prove this, use the preceding equations to get \begin{align*} a(f)\,a(f) &= \sum_{n,k} f_{nk} \sum_{m,j} f_{mj} e_n p_k e_m p_j \\ &= -\sum_{n,m} \sum_{k,j} f_{nk} f_{mj} e_n e_m p_k p_j. \tag{6} \end{align*} If the coefficients $$f_{nk}$$ had the form $$f_{nk} = w_n z_k, \tag{7}$$ then the anticommutation relations (1) would imply that (6) is zero. But when we consider a system of atoms at low temperature (for example), then the coefficients $$f$$ do not have the form (7), because the atom has an overall momentum concentrated near zero. This implies that the atom cannot be sharply localized, which in turn implies the electron and proton are entangled with each other. In the generic case when the coefficients $$f$$ do not have the form (7), the quantity (6) will not be zero.
To see this more simply, suppose that each index takes only two values, say 1 and 2, and that $$a(f) = \sum_{n,k} f_{nk} e_n p_k = e_1 p_2 + e_2 p_1. \tag{8}$$ Then the anticommutation relations (1) imply \begin{align*} a(f)\,a(f) &= (e_1 p_2 + e_2 p_1)^2 \\ &= 2 e_1 p_2 e_2 p_1 \\ &= 2 e_1 e_2 p_1 p_2. \tag{9} \end{align*} Terms with two factors of $$e_1$$ are gone because of the Pauli exclusion principle (2), and likewise for all other terms with two identical fermion factors. However, because of the entanglement between $$e$$ and $$p$$ in equation (8), a cross-term survives.
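Equation (9) can be spot-checked mechanically. Below is a minimal Python sketch of the toy algebra (my own construction, not from the answer): a product of the anticommuting generators $e_1, e_2, p_1, p_2$ is stored as a normal-ordered tuple with a coefficient, each swap during sorting flips the sign, and a repeated generator kills the term, which is exactly rule (2).

```python
def normal_order(gens):
    """Sort generators, tracking the sign; a repeated generator kills the term."""
    gens, sign = list(gens), 1
    for i in range(len(gens)):
        for j in range(len(gens) - 1 - i):
            if gens[j] > gens[j + 1]:
                gens[j], gens[j + 1] = gens[j + 1], gens[j]
                sign = -sign  # one anticommutation per swap
    if any(a == b for a, b in zip(gens, gens[1:])):
        return 0, None        # Pauli exclusion: g*g = 0
    return sign, tuple(gens)

def multiply(A, B):
    """Multiply two sums of generator products, each stored as {gens: coeff}."""
    out = {}
    for g1, c1 in A.items():
        for g2, c2 in B.items():
            sign, g = normal_order(g1 + g2)
            if sign:
                out[g] = out.get(g, 0) + sign * c1 * c2
    return {g: c for g, c in out.items() if c}

a = {("e1", "p2"): 1, ("e2", "p1"): 1}  # a(f) = e1 p2 + e2 p1, as in eq. (8)
print(multiply(a, a))                   # {('e1','e2','p1','p2'): 2} -- eq. (9)
print(multiply(multiply(a, a), a))      # {} -- three atoms give zero here
```

The last line reproduces the remark in the next paragraph: in the two-index toy model, a third atom always forces a repeated fermion, so the triple product vanishes.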
In the simplified case (8), putting three atoms in the same state would give zero, because then we cannot avoid having at least two identical fermions in each product. However, if we use a more realistic model in which the sum in (3) has infinitely many terms (because the "sum" includes an integral over the electron/proton momenta, for example), then we can put an unlimited number of atoms in the same state, because there will always be plenty of cross-terms that survive.
In summary, an atom made of fermions can behave as a boson because the constituent fermions are entangled with each other. This entanglement becomes extreme at low temperatures, where the overall wavefunction of the atom becomes highly delocalized.
How does spin factor in?
Spin is not needed for understanding this phenomenon. The spin-statistics theorem in relativistic QFT says that fermions have half-integer spin and bosons have integer spin; but even if we use a nonrelativistic model that ignores this connection, an atom made of fermions can still behave as a boson.
http://whittiercomputerservices.com/dzb78mx/page.php?tag=0cb0b9-6-2%2F3-as-a-fraction-in-simplest-form
To write a percent or decimal as a fraction in simplest form, first express it over a power of ten, then divide the numerator and denominator by their greatest common factor (GCF). For example, 6% means 6/100; the GCF of 6 and 100 is 2, so 6% = 3/50 in simplest form. Likewise 0.6 = 6/10, and dividing both terms by 2 gives 3/5.
The same steps work for any terminating decimal. To convert 0.05, write 0.05/1 and multiply the top and bottom by 10 for every digit after the decimal point: 0.05 = 5/100 = 1/20 in simplest form. Similarly, 2.05 = 205/100 = 41/20, and the mixed number -3 6/10 becomes the improper fraction -36/10, which reduces to -18/5. A fraction such as 2/3 is already in simplest form, since its numerator and denominator share no common factor other than 1, while 6/2 reduces to 3/1, that is, 3.
A percent can itself involve a fraction: the % sign means multiplication by 1/100, so 66 2/3 % is (200/3) × (1/100) = 200/300 = 2/3 in simplest form. The fraction 63/100 is also already in simplest form, because 63 and 100 share no prime factor.
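Python's standard-library `fractions.Fraction` reduces automatically to simplest form, so every example above can be checked in a few lines:

```python
from fractions import Fraction

print(Fraction(6, 100))        # 3/50  -- 6% in simplest form
print(Fraction("0.6"))         # 3/5
print(Fraction("0.05"))        # 1/20
print(Fraction("2.05"))        # 41/20
print(Fraction(200, 3) / 100)  # 2/3   -- 66 2/3 % as a fraction
```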
https://www.mathdoubts.com/cos-difference-to-product-identity-proof/
# Proof of Difference to Product identity of Cosine functions
The difference to product identity of cosine functions is popularly expressed in the following three forms in trigonometry.
$(1). \,\,\,$ $\cos{\alpha}-\cos{\beta}$ $\,=\,$ $-2\sin{\Bigg(\dfrac{\alpha+\beta}{2}\Bigg)}\sin{\Bigg(\dfrac{\alpha-\beta}{2}\Bigg)}$
$(2). \,\,\,$ $\cos{x}-\cos{y}$ $\,=\,$ $-2\sin{\Bigg(\dfrac{x+y}{2}\Bigg)}\sin{\Bigg(\dfrac{x-y}{2}\Bigg)}$
$(3). \,\,\,$ $\cos{C}-\cos{D}$ $\,=\,$ $-2\sin{\Bigg(\dfrac{C+D}{2}\Bigg)}\sin{\Bigg(\dfrac{C-D}{2}\Bigg)}$
It is time to learn how to prove the difference to product transformation identity for the cosine functions.
When the symbols $\alpha$ and $\beta$ denote the angles of right triangles, the cosines of the angles alpha and beta are written as $\cos{\alpha}$ and $\cos{\beta}$ respectively in trigonometry.
### Express the Subtraction of Cosine functions
Write the two cosine functions in a row with a minus sign between them, expressing the subtraction of cosine functions mathematically in trigonometry.
$\implies$ $\cos{\alpha}-\cos{\beta}$
### Expand each cosine function in the expression
Assume that $\alpha \,=\, a+b$ and $\beta \,=\, a-b$, and replace them by their equivalent values in the above trigonometric expression.
$\implies$ $\cos{\alpha}-\cos{\beta}$ $\,=\,$ $\cos{(a+b)}$ $-$ $\cos{(a-b)}$
As per the angle sum and angle difference trigonometric identities of cosine functions, the cosines of compound angles can be expanded mathematically in terms of trigonometric functions.
$\implies$ $\cos{\alpha}-\cos{\beta}$ $\,=\,$ $\Big(\cos{a}\cos{b}$ $-$ $\sin{a}\sin{b}\Big)$ $-$ $\Big(\cos{a}\cos{b}$ $+$ $\sin{a}\sin{b}\Big)$
### Simplify the Trigonometric expression
To evaluate the difference of the cosine functions, the trigonometric expression on the right-hand side of the equation should be simplified using the fundamental operations of mathematics.
$=\,\,\,$ $\cos{a}\cos{b}$ $-$ $\sin{a}\sin{b}$ $-$ $\cos{a}\cos{b}$ $-$ $\sin{a}\sin{b}$
$=\,\,\,$ $\cos{a}\cos{b}$ $-$ $\cos{a}\cos{b}$ $-$ $\sin{a}\sin{b}$ $-$ $\sin{a}\sin{b}$
$=\,\,\,$ $\require{cancel} \cancel{\cos{a}\cos{b}}$ $-$ $\require{cancel} \cancel{\cos{a}\cos{b}}$ $-$ $2\sin{a}\sin{b}$
$\,\,\, \therefore \,\,\,\,\,\,$ $\cos{\alpha}-\cos{\beta}$ $\,=\,$ $-2\sin{a}\sin{b}$
Thus, we have successfully transformed the difference of the cosine functions into product form. However, the product form is expressed in terms of $a$ and $b$, so we must now express it in terms of $\alpha$ and $\beta$.
We considered that $\alpha = a+b$ and $\beta = a-b$. So, let’s evaluate both $a$ and $b$ in terms of $\alpha$ and $\beta$ by some mathematical operations.
Add the two algebraic equations to calculate $a$ in terms of $\alpha$ and $\beta$.
$\implies$ $\alpha+\beta$ $\,=\,$ $(a+b)+(a-b)$
$\implies$ $\alpha+\beta$ $\,=\,$ $a+b+a-b$
$\implies$ $\alpha+\beta$ $\,=\,$ $a+a+b-b$
$\implies$ $\alpha+\beta$ $\,=\,$ $2a+\cancel{b}-\cancel{b}$
$\implies$ $\alpha+\beta \,=\, 2a$
$\implies$ $2a \,=\, \alpha+\beta$
$\,\,\, \therefore \,\,\,\,\,\,$ $a \,=\, \dfrac{\alpha+\beta}{2}$
In the same way, subtract the equation $\beta = a-b$ from the equation $\alpha = a+b$ to calculate $b$ in terms of $\alpha$ and $\beta$.
$\implies$ $\alpha-\beta$ $\,=\,$ $(a+b)-(a-b)$
$\implies$ $\alpha-\beta$ $\,=\,$ $a+b-a+b$
$\implies$ $\alpha-\beta$ $\,=\,$ $a-a+b+b$
$\implies$ $\alpha-\beta$ $\,=\,$ $\cancel{a}-\cancel{a}+2b$
$\implies$ $\alpha-\beta \,=\, 2b$
$\implies$ $2b \,=\, \alpha-\beta$
$\,\,\, \therefore \,\,\,\,\,\,$ $b \,=\, \dfrac{\alpha-\beta}{2}$
We have successfully evaluated $a$ and $b$. Now, substitute them in the trigonometric equation $\cos{\alpha}-\cos{\beta}$ $\,=\,$ $-2\sin{a}\sin{b}$ to prove the difference to product identity of cosine functions.
$\,\,\, \therefore \,\,\,\,\,\,$ $\cos{\alpha}-\cos{\beta}$ $\,=\,$ $-2\sin{\Big(\dfrac{\alpha+\beta}{2}\Big)}\sin{\Big(\dfrac{\alpha-\beta}{2}\Big)}$
Therefore, the difference of the cosine functions has been converted into a product of trigonometric functions, and this trigonometric equation is called the difference to product identity of cosine functions.
In this way, we can prove the difference to product transformation trigonometric identity of cosine functions in terms of $C$ and $D$ and also in terms of $x$ and $y$.
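The identity is also easy to spot-check numerically (a sketch in Python, independent of the proof above):

```python
import math, random

# cos(a) - cos(b) should equal -2 sin((a+b)/2) sin((a-b)/2) for any angles
for _ in range(5):
    a, b = random.uniform(-10, 10), random.uniform(-10, 10)
    lhs = math.cos(a) - math.cos(b)
    rhs = -2 * math.sin((a + b) / 2) * math.sin((a - b) / 2)
    assert math.isclose(lhs, rhs, abs_tol=1e-12)
print("identity verified for the sampled angles")
```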
https://www.physicsforums.com/threads/where-do-ctcs-come-from-how-do-i-interpret-these-tensors.811708/
# Where do CTC's come from/ How do I interpret these tensors?
1. May 1, 2015
### space-time
I recently derived the Einstein tensor and the stress energy momentum tensor for the Godel solution to the Einstein field equations. Now as usual I will give you the page where I got my line element from so you can have a reference: http://en.wikipedia.org/wiki/Gödel_metric
Here is what I got for my Einstein tensor G_μν:
G_00, G_11, and G_22 all equal 1/2
G_03 and G_30 = e^x / 2
G_33 = (3/4) e^(2x)
Every other element was 0.
As a result of this being the Einstein tensor, the stress energy momentum tensor T_μν is as follows:
T_00, T_11, and T_22 all equal c^4/(16πG)
T_03 and T_30 = (c^4 e^x) / (16πG)
T_33 = (3 c^4 e^(2x)) / (32πG)
Every other element was 0.
Now my research has told me that this metric contains closed time-like curves within it. Can someone please tell me how these tensors showcase the possibility of closed time-like curves? I suppose what I really need is a solid understanding of how to interpret the physical implications of these general relativistic tensors.
I notice that when you do dimensional analysis on my stress energy tensor, you'll find that it contains all force terms (assuming that the exponential terms are unit-less constants since the number e itself is a constant, correct me if I am wrong).
Does this mean that if I want to warp a region of space time into a Godel space time, then I would need to apply a force that is equivalent in magnitude to the elements in my stress energy tensor in the directions that said elements represent?
In other words, does this mean that I would have to apply the following forces in the following directions:
c^4/(16πG) Newtons in the temporal direction, the xx direction, and the yy direction
(c^4 e^x) / (16πG) Newtons in the time-z direction and the z-time direction
(3 c^4 e^(2x)) / (32πG) Newtons in the zz direction
If my interpretation is correct, how exactly would one apply a force in the temporal direction considering that the temporal dimension is time itself?
Also, I notice that the angular velocity (ω) terms that I started out with just totally disappeared by the time I got to the Einstein tensor (though this may have to do with the fact that my tensors are in a coordinate basis). What becomes of these? Surely angular velocity has significance in interpreting the physical implications of these tensors (especially if this metric contains closed time-like curves).
Finally, what exactly do the Einstein tensor elements tell you about space-time curvature? This Einstein tensor in particular contains all constants.
Please help me understand the physical meanings of these tensors, as well as where the CTCs come from. Thank you.
2. May 1, 2015
### PAllen
I have no idea how you can look at a stress energy tensor or metric and see whether it contains CTCs, without finding them somehow. However, if your question is simply to exhibit them for this metric, the following writeup, on p.6, shows this in a quite simple way:
http://www.math.nyu.edu/~momin/stuff/grpaper.pdf
3. May 2, 2015
### aleazk
I never studied the Gödel metric, to be honest, but I am familiar with the van Stockum dust solution. In this metric, it's quite easy to detect the CTCs, and the solution itself (more precisely, something similar to it) seems to have some physical plausibility (at least when compared to the Gödel solution).
The general form of a metric that is both stationary and axisymmetric is (the Weyl-Papapetrou form):
$$g=H(\mathrm{d}r\otimes\mathrm{d}r+\mathrm{d}z\otimes\mathrm{d}z)+L\mathrm{d}\varphi\otimes\mathrm{d}\varphi+M\mathrm{d}\varphi\otimes\mathrm{d}t-F\mathrm{d}t\otimes\mathrm{d}t$$
where the coordinates used here have interpretations analogous to those of the usual cylindrical coordinates. In particular, $0\leq\varphi\leq2\pi$ and the curves with $r=constant,t=constant,z=constant$ are closed curves.
If we take the tangent vector field to these curves, the inner product of this vector field with itself is given by $g_{\varphi\varphi}=L$.
Intuitively, like in flat spacetime (where $g_{\varphi\varphi}=r^{2}>0$), one expects these closed curves to be spacelike. Nevertheless, there are solutions of the EFE in which $g_{\varphi\varphi}<0$ in some region, i.e., the tangent field is timelike and thus these closed curves become closed timelike curves.
An example is the van Stockum dust solution, where the SET is that of an infinitely long, rigid dust cylinder (of ordinary matter) rotating with angular velocity $\omega$. In the interior, $g_{\varphi\varphi}=L=r^{2}(1-\omega^{2}r^{2})$. So, if the boundary of the cylinder extends that far, the closed curves considered above become timelike when $r>\frac{1}{\omega}$.
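As a quick symbolic check of that sign change (a sketch with SymPy; the sample value $\omega = 1/2$ is my own choice for the demo):

```python
import sympy as sp

r = sp.symbols('r', positive=True)
omega = sp.Rational(1, 2)              # sample angular velocity
L = r**2 * (1 - omega**2 * r**2)       # g_phiphi inside the dust cylinder
print(sp.solve_univariate_inequality(L < 0, r))  # (2 < r) & (r < oo), i.e. r > 1/omega
```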
It's conjectured that the case of the finite cylinder also contains CTCs. So why aren't people building their own time machines at home? After all, all you need is a rotating cylinder.
The detail is that these solutions describe eternal cylinders, i.e., in the solution, the rotating cylinder always existed, there's nothing about its creation. If you include the creation of the cylinder, then the EFE require exotic matter for this.
Check this paper by Kip Thorne for a detailed and general study of the physical feasibility of CTC: http://www.its.caltech.edu/~kip/scripts/ClosedTimelikeCurves-II121.pdf
Last edited: May 2, 2015
4. May 3, 2015
### space-time
Thanks for the link. I was reading it and I must ask: what is ε_{μijk}? The pdf does not define that term nor how to calculate it (if it is something that must be calculated).
Also, when I calculate aijk, wouldn't I have to take the cross product first and then do the dot product afterwards?
5. May 3, 2015
### DrGreg
See Levi-Civita symbol. In particular, see the subsection on "Levi-Civita tensors".
6. May 3, 2015
### bcrowell
Staff Emeritus
What you're trying to do is infer global properties from the curvature. There are general techniques for doing this, e.g., Myers's theorem in Riemannian geometry, but I don't know much about them. Historically I think these techniques were developed relatively late (Myers's theorem is probably ca. 1940), so I don't think Godel would have used them. Ca. 1916-1960, I think the methods used for finding solutions to the Einstein field equations were that people simply played around with concrete coordinate systems.
7. May 8, 2015
### PAllen
I thought of one general test: the Kip Thorne test. Google Kip Thorne plus the name of the metric. If Kip Thorne wrote about it, it probably contains either CTCs or wormholes.
http://www.albertgural.com/math/puzzles/prisoners-and-urns/
# Prisoners and Urns
Here’s another interesting problem I received from one of my friends:
A Prison Warden is facing an overcrowding problem in his prison. Specifically, there are $N$ prisoners and he wants to play a game with them to reduce the overcrowding problem.
The game works as follows: the Warden will set up $N$ urns and place slips of paper with the prisoners' $N$ unique names, one per urn. Then he'll bring one prisoner in at a time and let him look at up to $\frac{N}{2}$ urns sequentially, trying to find his name. If he finds his name, the warden will bring in the next prisoner. If he doesn't find his name, the warden will execute all of the prisoners. The warden will explain the rules of the game to the prisoners and allow them to converse on a strategy, but after that, they are all separated and are thereafter not allowed to communicate with each other.
A naïve strategy is to have all $N$ prisoners look randomly at urns. However, this strategy would give the prisoners a survival rate of $\frac{1}{2^N}$. In a standard case of $N=100$, this gives a survival rate of about $7.89 \cdot 10^{-29} \%$… Not a good rate of survival at all.
Given that there are $N$ prisoners, and that each prisoner plays optimally:
1. What is the optimal strategy the prisoners can use for survival?
2. What would the expected survival rate be?
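For readers who want to experiment before settling on an answer, here is a Monte Carlo sketch (my own code, not from the original post) comparing the naive strategy with the classic cycle-following strategy, in which each prisoner opens his own urn first and then the urn labeled by whatever name he finds. With cycle-following, everyone survives exactly when the permutation of names has no cycle longer than $\frac{N}{2}$, which for large $N$ happens with probability about $1 - \ln 2 \approx 0.31$.

```python
import random

def survives_naive(n):
    """Every prisoner opens n/2 urns chosen uniformly at random."""
    urn = list(range(n))
    random.shuffle(urn)  # urn[u] = name hidden in urn u
    return all(p in (urn[u] for u in random.sample(range(n), n // 2))
               for p in range(n))

def survives_cycle(n):
    """Each prisoner opens his own urn, then the urn named by what he finds."""
    urn = list(range(n))
    random.shuffle(urn)
    for p in range(n):
        u = p
        for _ in range(n // 2):      # each iteration opens one urn
            if urn[u] == p:
                break                # found own name within n/2 urns
            u = urn[u]
        else:
            return False             # p's cycle is longer than n/2
    return True

n, trials = 100, 2000
print(sum(survives_naive(n) for _ in range(trials)) / trials)  # ~0 (true rate 2**-100)
print(sum(survives_cycle(n) for _ in range(trials)) / trials)  # ~0.31
```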
https://crypto.stackexchange.com/questions?page=363&sort=newest
# All Questions
### Does knowing common prefixes help crack blowfish?
I have strings that are of the form: {static data}{changing data}. The beginning static data part is around 20 characters and is common to all strings. The last ...
### Is a RSA-signature of some identifying data a safe way to implement a license key?
I have this idea of implementing a license key: After the user downloads the program, he connects to a website and sends his Windows product ID. The website, then, sends this back to him with a ...
### A mathematical explanation of the DES encryption system
I need a mathematical explanation of what the DES encryption system really does. This means I need more explanation than the one that FIPS offers, which is more an explanation for computer ...
### Why is elliptic curve cryptography not widely used, compared to RSA?
I recently ran across elliptic curve crypto-systems: An Introduction to the Theory of Elliptic Curves (Brown University) Elliptic Curve Cryptography (Wikipedia) Performance analysis of identity ...
### Is there a secret sharing scheme which allows sharing of a secret (one-out-of-n)
Is there a secret sharing scheme where the knowledge of just one share is sufficient to find the secret, in other words a (one-out-of-n) sharing scheme? Please, I need to know if it is possible to make ...
### Why do we need special key-wrap algorithms?
Wikipedia says: Key Wrap constructions are a class of symmetric encryption algorithms designed to encapsulate (encrypt) cryptographic key material. We are using these algorithms to encrypt (and ...
http://mathhelpforum.com/discrete-math/7219-inverse-relation.html
## inverse relation
Hi.
I have this question (discrete math):
How can the matrix for R^(-1), the inverse of the relation R, be found from the matrix representing R, when R is a relation on a finite set A?
Please, how can I do this problem?
thank you
B
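The thread ends without an answer, so for reference, the standard fact is: (a, b) ∈ R iff (b, a) ∈ R^(-1), so the zero-one matrix of R^(-1) is the transpose of the matrix of R. A minimal Python illustration with a made-up 3-element example:

```python
# Sample relation R on a 3-element set, as a zero-one matrix
M_R = [[1, 0, 1],
       [0, 1, 0],
       [0, 0, 1]]

# Matrix of the inverse relation: M[R^-1][i][j] = M[R][j][i] (the transpose)
M_R_inv = [[M_R[j][i] for j in range(3)] for i in range(3)]
print(M_R_inv)  # [[1, 0, 0], [0, 1, 0], [1, 0, 1]]
```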
https://projecteuclid.org/euclid.dmj/1477321009
## Duke Mathematical Journal
### Proof of linear instability of the Reissner–Nordström Cauchy horizon under scalar perturbations
#### Abstract
It has long been suggested that solutions to the linear scalar wave equation
$$\Box_{g}\phi=0$$ on a fixed subextremal Reissner–Nordström spacetime with nonvanishing charge are generically singular at the Cauchy horizon. We prove that generic smooth and compactly supported initial data on a Cauchy hypersurface indeed give rise to solutions with infinite nondegenerate energy near the Cauchy horizon in the interior of the black hole. In particular, the solution generically does not belong to $W^{1,2}_{\mathrm{loc}}$. This instability is related to the celebrated blue-shift effect in the interior of the black hole. The problem is motivated by the strong cosmic censorship conjecture and it is expected that for the full nonlinear Einstein–Maxwell system, this instability leads to a singular Cauchy horizon for generic small perturbations of Reissner–Nordström spacetime. Moreover, in addition to the instability result, we also show as a consequence of the proof that Price’s law decay is generically sharp along the event horizon.
#### Article information
Source
Duke Math. J., Volume 166, Number 3 (2017), 437-493.
Dates
Revised: 5 April 2016
First available in Project Euclid: 24 October 2016
https://projecteuclid.org/euclid.dmj/1477321009
Digital Object Identifier
doi:10.1215/00127094-3715189
Mathematical Reviews number (MathSciNet)
MR3606723
Zentralblatt MATH identifier
1373.35306
Subjects
Primary: 35Q75: PDEs in connection with relativity and gravitational theory
Secondary: 83C57: Black holes
#### Citation
Luk, Jonathan; Oh, Sung-Jin. Proof of linear instability of the Reissner–Nordström Cauchy horizon under scalar perturbations. Duke Math. J. 166 (2017), no. 3, 437--493. doi:10.1215/00127094-3715189. https://projecteuclid.org/euclid.dmj/1477321009
https://physics.stackexchange.com/questions/155371/could-a-really-tall-tube-suck-garbage-in-to-space
# Could a really tall tube suck garbage into space?
When I was around 10 years old, I had an idea that was supposed to solve our waste problems: I imagined tubes miles high that would stretch into space. Every tube would have a door at the bottom that would initially be closed, and all of the atmosphere inside the tube would then be pumped out. A gigantic pile of garbage would be placed underneath the tube, the door would open, and the vacuum would suck the garbage up and eject it into space.
To what extent could this idea work? Putting aside the problems of keeping a tube of that size stable (or even manufacturing it), I'm wondering whether the force created by the vacuum would be enough to send matter into orbit, or whether it would immediately come back down the tube once the pressure equalizes. I can't imagine there would be enough inertia to escape Earth's gravity, but maybe most of the garbage would stay in orbit for a long time or just burn up on re-entry?
• Hold your finger over a straw and push the straw down into water. Then release your finger and watch the water rush up into the straw. Your atmospheric effect would be pretty much the same. Dec 28, 2014 at 7:44
• Did you try to calculate it? The energy needed to evacuate the tube is given by $\int p\,dV$. You can substitute the average height of the atmosphere (8000 m) for the integral over the exponential pressure profile. So let's take a tube with a $1\,\mathrm{m}^2$ cross-section. The total energy to evacuate is roughly $10^5\,\mathrm{Pa} \times 8\times 10^3\,\mathrm{m}^3 = 8\times 10^8\,\mathrm{J}$. The total mass of air in the tube is approximately $1.5\,\mathrm{kg/m^3} \times 8000\,\mathrm{m} \times 1\,\mathrm{m^2} = 12{,}000\,\mathrm{kg}$. If the pressure equalization converts all of that energy into kinetic energy, the resulting velocity is $v=\sqrt{2 \times 8\times 10^8\,\mathrm{J} / 1.2\times 10^4\,\mathrm{kg}} \approx 365\,\mathrm{m/s}$. Sorry! That is nowhere near the ~11.2 km/s escape velocity, and you would not even clear the atmosphere. So, no, it won't work. (A quick numeric version of this estimate appears after these comments.) Dec 28, 2014 at 7:50
• So, you are proposing that we fill the horizon with garbage? Please don't, but do keep thinking of ways to save the environment. Dec 28, 2014 at 8:19
• If you could keep the tube from buckling, you might get farther using pulleys :) Dec 28, 2014 at 13:49
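The arithmetic in the second comment is easy to reproduce. A minimal sketch, assuming the commenter's round figures (a 1 m² cross-section, an 8 km "equivalent" atmosphere, and 1.5 kg/m³ air density), none of which are precise values:

```python
# Back-of-envelope version of the evacuation-energy estimate above.
import math

area = 1.0          # m^2, cross-section of the tube (assumed)
pressure = 1e5      # Pa, sea-level atmospheric pressure
height = 8e3        # m, "equivalent" height of a uniform atmosphere
rho_air = 1.5       # kg/m^3, rough near-surface air density

energy = pressure * area * height   # J, p*V work needed to evacuate the tube
mass = rho_air * area * height      # kg of air available to rush back in

# If all of that energy became kinetic energy of the in-rushing air:
v = math.sqrt(2 * energy / mass)
print(f"energy ~ {energy:.1e} J, mass ~ {mass:.0f} kg, v ~ {v:.0f} m/s")
print(f"fraction of the ~11.2 km/s escape velocity: {v / 11200:.3f}")
```

At roughly 365 m/s, about 3% of escape velocity, nothing is getting ejected to space.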
I'm not sure why people are posting the real answer only in comments and other answers. It's quite simple: a "sucking" system can't pull anything higher than the equivalent of one atmosphere of pressure. That's why barometers work: the height of the material in the tube is limited by the existing atmospheric pressure.
If you want to dump garbage to space, you'll have to pump from below, not evacuate from above.
• So am I right in thinking this means that the sucking system will pull material at most to the edge of the atmosphere? In other words, Brandon's visualization above is perfect for this problem! Sorry to jump in on this question; it's just a nice thought experiment with an elegant solution. Dec 28, 2014 at 19:01
• @EdwardHughes no -- the sucking system will only draw material until the weight of the material column produces pressure at ground level equal to one atmosphere. Take a look at any intro to "how a barometer works". Dec 28, 2014 at 19:31
• Of course, I was implicitly treating the mass of the material as negligible. So we can say that the material will at most reach the top of the atmosphere (in the massless case) or lower (if it's a massive object), right? Sorry that my intuition has deserted me this evening! Dec 28, 2014 at 19:39
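The barometer argument above can be made quantitative: one atmosphere can support a column of material only up to the height $h_{max} = P_{atm}/(\rho g)$. A small sketch, where the garbage density is my illustrative guess; mercury is included as a sanity check, since it should give the familiar ~0.76 m:

```python
# Maximum column height one atmosphere of pressure can support:
# h_max = P_atm / (rho * g) -- the same argument as a barometer.
g = 9.81          # m/s^2
P_atm = 101_325   # Pa

materials = {
    "mercury (barometer check)": 13_546,   # kg/m^3
    "water": 1_000,
    "loose garbage (rough guess)": 300,
}

for name, rho in materials.items():
    h_max = P_atm / (rho * g)
    print(f"{name:28s} max column height ~ {h_max:6.1f} m")
```

Even for loose garbage the column tops out at a few tens of meters, nowhere near space.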
There are a few design issues that would need to be worked out.
First is structural integrity: we currently don't have any material strong enough to withstand the stresses of a structure reaching into space.
To keep the tube in a stationary position, it would have to be built outward from geostationary orbit and extended in both directions (to maintain balance and position).
As pointed out in the comments, the vacuum would not be sufficient to eject the payload (garbage) into space, although an elevator mechanism could be used instead.
Allowing the garbage to fall back into the atmosphere to burn up is not a good idea. Since most garbage is carbon based, burning it would release large amounts of carbon dioxide into the upper atmosphere, adding substantially to the greenhouse effect.
Allowing the garbage to accumulate in space would probably create a small moon, which might create more problems in the future.
http://www.transtutors.com/questions/orange-juice-and-apple-juice-are-known-to-be-perfect-substitutes--81215.htm
# Q: Orange juice and apple juice are known to be perfect substitutes
a. Orange juice and apple juice are known to be perfect substitutes. Draw the appropriate price-consumption curve (for a variable price of orange juice) and income-consumption curve.
b. Left shoes and right shoes are perfect complements. Draw the appropriate price-consumption and income-consumption curves.
c. Suppose that the average household in a state consumes 500 gallons of gasoline per year. A ten-cent gasoline tax is introduced, coupled with a $50 annual tax rebate per household. Will the household be better or worse off after the new program is introduced?
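For part (c), the core arithmetic is that at the old consumption level the new tax exactly cancels the rebate. A minimal sketch of that computation; the conclusion is the standard revealed-preference argument, not a full utility model:

```python
# Part (c): at the old bundle the extra tax equals the rebate, so the old
# bundle is still affordable; any substitution away from gasoline then
# makes the household (weakly) better off. Numbers are from the problem.
gallons_old = 500
tax_per_gallon = 0.10
rebate = 50.0

extra_tax_at_old_bundle = tax_per_gallon * gallons_old   # $50.00
net_change = rebate - extra_tax_at_old_bundle            # $0.00
print(f"extra tax at old bundle: ${extra_tax_at_old_bundle:.2f}")
print(f"net change with rebate:  ${net_change:+.2f}")
```

Since the net change is zero, the household can still purchase its original bundle; if the higher gasoline price induces any substitution, it ends up strictly better off.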
https://liorpachter.wordpress.com/category/education/
My Caltech calculus professor, Tom Apostol, passed away yesterday, May 8th, 2016. When I arrived in his Math 1b class in the Fall of 1990 I thought, like most of my classmates, that I already knew calculus. I may have known it, but I didn't understand it. Apostol taught me the understanding part.
Apostol's calculus books, affectionately called "Tommy I" and "Tommy II", were not just textbooks for students to memorize but rather mathematical wisdom and beauty condensed into a pair of books intended to transform grade-obsessed freshmen and sophomores into thinking human beings. Most of all, Apostol emphasized the idea that fundamental to mathematics is how one thinks about things, not just what one is thinking about. One of his iconic examples of this was the ice-cream-cone proof that the focal property of an ellipse is a consequence of its definition as a section of a cone. Specifically, taking as the definition of an ellipse a plane curve obtained by intersecting an inclined plane with a cone, the goal is both to define the two foci and to derive the focal property: the sum of the distances from any point on the ellipse to the two foci is constant.
Apostol demonstrated the connection between conic sections and their foci via a proof and picture of Dandelin. His explanation, which I still remember from my freshman year in college, is beautiful; an excerpt of it appears in his linear algebra book.
Apostol didn't invent Dandelin's spheres, but he knew they were "the right way" to think about conic sections, and he figured out "the right way" for each and every one of his explanations. His calculus books are exceptional for their introduction of integration before differentiation, their preference for axiomatic rather than mechanistic definitions (e.g., of determinants), and their exercises that are "easy" when the material is understood "in the right way". His course had a profound influence on my approach not only to mathematics, but to all of my learning in both the sciences and humanities.
One of Apostol’s signature traditions was his celebration of Gauss’ birthday. His classes were always filled with mathematical treats, but on April 30th every year he would hide a cake in the classroom before the students arrived and would serve an edible treat that day instead. Gauss turned 239 just last week. This seems to be a timely moment to take note of that prime number (Apostol was a number theorist) and to eat a slice of cake for Gauss, Apostol, and those who change our lives.
The Common Core State Standards Initiative was intended to establish standards for the curriculum for K–12 students in order to universally elevate the quality of education in the United States. Whether the initiative has succeeded or not is a matter of heated debate. In particular, the merits of the mathematics standards are a contentious matter, to the extent that colleagues in my math department at UC Berkeley have penned opposing opinions on the pages of the Wall Street Journal (see Frenkel and Wu vs. Ratner). In this post I won't opine on the merits of the standards, but rather wish to highlight a shortcoming in the almost universal perspective on education that the common core embodies:
The emphasis on what K–12 students ought to learn about what is known has sidelined an important discussion about what they should learn about what is not known.
To make the point, I've compiled a list of unsolved problems in mathematics to match the topics covered in the common core. The problems are all well-known to mathematicians, and my only contribution is to select problems that (a) are of interest to research mathematicians, (b) provide a good balance among the different areas of mathematics, and (c) are understandable by students studying to the (highlighted) Common Core Standards. For each grade/high school topic, the header is a link to the Common Core Standards. Based on the standards, I have selected a single problem to associate to the grade/topic (although it is worth noting that there was always a large variety to choose from). For each problem, I begin by highlighting the relevant common core curriculum to which the problem is related, followed by a warm up exercise to help introduce students to the problem. The warm ups are exercises that should be solvable by students with knowledge of the Common Core Standards. I then state the unsolved problem, and finally I provide (in brief!) some math background, context and links for those who are interested in digging deeper into the questions.
Disclaimer: it's possible, though highly unlikely, that any of the questions on the list will yield to "elementary" methods accessible to K–12 students. It is quite likely that many of the problems will remain unsolved in our lifetimes. So why bother introducing students to such problems? The point is that the questions reveal a sliver of the vast scope of mathematics, they provide many teachable moments and context for the mathematics that does constitute the common core, and (at least in my opinion) they are fun to explore (for kids and adults alike). Perhaps most importantly, the unsolved problems and conjectures reveal that the mathematics taught in K–12 is right at the edge of our knowledge: we are always walking right along the precipice of mystery. This is true for other subjects taught in K–12 as well, and in my view this reality is one of the important lessons children can and should learn in school.
One good thing about the Common Core Standards is that their structure allows, in principle, for the incorporation of standards for unsolved problems that students ought to know about. Hopefully education policymakers will consider extending the Common Core Standards to include such content. And hopefully educators will not only teach about what is not known, but will also encourage students to ask new questions that don't have answers. This is because "there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns – the ones we don't know we don't know."
Kindergarten
Relevant common core: “describing shapes and space.”
Warm up: Can you color the map of Africa with four colors so that no two countries that touch are filled in with the same color?
Can you color in the map with three colors so that no two countries that touch are filled in with the same color?
The unsolved problem: Without trying all possibilities, can you tell when a map can be colored with three colors so that no two countries that touch are filled in with the same color?
Background and context: The four color theorem states (informally) that "given any separation of a plane into contiguous regions, producing a figure called a map, no more than four colors are required to color the regions of the map so that no two adjacent regions have the same color" (from Wikipedia). The mathematical statement is that any planar graph can be colored with four colors. Thus, the first part of the "warm up" has a solution; in fact the world map can be colored with four colors. The four color theorem is deceptively simple: it can be explored by a kindergartner, but it turns out to have a lengthy proof. In fact, the proof of the theorem requires extensive case checking by computer. Not every map can be colored with three colors (for an example illustrating why see here). It is therefore natural to ask for a characterization of which maps can be 3-colored. Of course any map can be tested for 3-colorability by trying all possibilities, but a "characterization" would involve criteria that could be tested by an algorithm that is polynomial in the number of countries. The 3-colorability of planar graphs is NP-complete.
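A brute-force 3-colorability test illustrates why "trying all possibilities" is unsatisfying: the search below is exponential in the number of countries. The two toy maps are my own examples, encoded as graphs of countries that touch:

```python
# Try all 3^n colorings of an n-country map given as a list of borders.
from itertools import product

def three_colorable(n_countries, borders):
    for coloring in product(range(3), repeat=n_countries):
        if all(coloring[u] != coloring[v] for u, v in borders):
            return coloring
    return None

# Four mutually touching regions (a K4 map) need four colors:
k4 = [(u, v) for u in range(4) for v in range(u + 1, 4)]
print(three_colorable(4, k4))    # None: no proper 3-coloring exists
# A ring of five regions, each touching only its two neighbours:
ring = [(i, (i + 1) % 5) for i in range(5)]
print(three_colorable(5, ring))  # a valid coloring, e.g. (0, 1, 0, 1, 2)
```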
Grade 1
Relevant common core: "developing understanding of whole number relationships".
Warm up: Suppose that in a group of people, any pair of individuals are either strangers or acquaintances. Show that among three people there must be either at least two pairs of strangers or else at least two pairs of acquaintances.
The unsolved problem: Is it true that among 45 people there must be 5 mutual strangers or 5 mutual acquaintances?
Background and context: This problem is formally known as the question of computing the Ramsey number R(5,5). It is an easier (although probably difficult for first graders) problem to show that R(3,3)=6, that is, that among six people there will be either three mutual strangers or else three mutual acquaintances. It is known that $43 \leq R(5,5) \leq 49$. The difficulty of computing Ramsey numbers was summed up by mathematician Paul Erdös as follows:
“Imagine an alien force, vastly more powerful than us, landing on Earth and demanding the value of R(5, 5) or they will destroy our planet. In that case we should marshal all our computers and all our mathematicians and attempt to find the value. But suppose, instead, that they ask for R(6, 6). In that case we should attempt to destroy the aliens.” – from Ten Lectures on the Probabilistic Method.
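The fact that R(3,3) = 6 can be verified exhaustively, since the numbers involved are tiny. A sketch of the brute-force check:

```python
# Check that every 2-coloring of the edges of K6 contains a monochromatic
# triangle, while K5 admits a coloring with none (so R(3,3) = 6).
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    edges = list(combinations(range(n), 2))
    color = dict(zip(edges, coloring))
    return any(
        color[(a, b)] == color[(a, c)] == color[(b, c)]
        for a, b, c in combinations(range(n), 3)
    )

def every_coloring_has_triangle(n):
    num_edges = n * (n - 1) // 2
    return all(has_mono_triangle(n, c)
               for c in product((0, 1), repeat=num_edges))

print(every_coloring_has_triangle(5))  # False: e.g. the 5-cycle coloring
print(every_coloring_has_triangle(6))  # True: all 2^15 colorings checked
```

The same strategy is hopeless for R(5,5): there are $2^{\binom{45}{2}}$ colorings of $K_{45}$ to consider, which is exactly Erdős' point.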
Grade 2
Relevant common core: "building fluency with addition and subtraction".
Warm up: Pascal's triangle is a triangular array of numbers where each entry in a row is computed by adding up the pair of numbers above it. For example, the first six rows of Pascal's triangle are:

1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
1 5 10 10 5 1
Write out the next row of Pascal’s triangle.
The unsolved problem: Is there a number (other than 1) that appears 10 times in the (infinite) Pascal’s triangle?
Background and context: The general problem of determining whether numbers can appear with arbitrary multiplicity in Pascal’s triangle is known as Singmaster’s conjecture. It is named after the mathematician David Singmaster who posed the problem in 1971. It is known that the number 3003 appears eight times, but it is not known whether any other number appears eight times, nor, for that matter, whether any other number appears more than eight times.
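A sketch for exploring multiplicities in Pascal's triangle: it counts the pairs (n, k) with $\binom{n}{k}$ equal to a given value (the test values are my choices):

```python
# Count how many times x > 1 appears in Pascal's triangle. Any entry equal
# to x must lie in row n <= x, so the search below is exhaustive.
from math import comb

def multiplicity(x):
    count = 0
    for n in range(2, x + 1):
        k = 0
        while k <= n // 2 and comb(n, k) < x:
            k += 1
        if k <= n // 2 and comb(n, k) == x:
            # count the symmetric entries C(n, k) and C(n, n - k)
            count += 1 if 2 * k == n else 2
    return count

print(multiplicity(3003))  # 8: the record holder mentioned above
print(multiplicity(120))   # 6: C(120,1), C(16,2), C(10,3) and mirrors
```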
Grade 3
Relevant common core: "(1) developing understanding of multiplication and division and strategies for multiplication and division within 100".
Warm up: Practice dividing numbers by 2 and multiplying by 3.
The unsolved problem: Choose a natural number n. If n is even, divide it by 2 to get $n\div 2$. If n is odd, multiply it by 3 and add 1 to obtain $3\times n+1$. Repeat the process. Show that for any initial choice of n, repeating the process eventually reaches the number 1.
Background and context: The conjecture is called the Collatz conjecture, after Lothar Collatz who proposed it in 1937. It is deceptively simple, but despite much numeric evidence that it is true, has eluded proof. Mathematician Terence Tao has an interesting blog post explaining why (a) the conjecture is likely to be true and (b) why it is likely out of reach of current techniques known to mathematicians.
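A sketch for experimenting with the conjecture; this verifies it for small starting values only, since no finite computation can prove it:

```python
# Follow the Collatz iteration until it reaches 1, counting the steps.
def collatz_steps(n):
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# Spot-check every starting value up to a bound (evidence, not a proof).
assert all(collatz_steps(n) > 0 for n in range(2, 100_000))
print(collatz_steps(27))  # 111 steps, a famously long trajectory
```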
Grade 4
Relevant common core: "Determine whether a given whole number in the range 1-100 is prime or composite".
Warm up: Write the number 100 as the sum of two prime numbers.
The unsolved problem: Show that every even integer greater than 2 can be expressed as the sum of two primes.
Background and context: This problem is known as the Goldbach conjecture. It was proposed by the mathematician Christian Goldbach in a letter to the mathematician Leonhard Euler in 1742 and has been unsolved since that time (!) In 2013 mathematician Harald Helfgott settled the “weak Goldbach conjecture“, proving that every odd integer greater than 5 is the sum of three primes.
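A sketch that checks the conjecture for all even numbers up to a modest, arbitrary bound:

```python
# Verify Goldbach's conjecture up to LIMIT using a prime sieve.
def sieve(limit):
    is_prime = [False, False] + [True] * (limit - 1)
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = [False] * len(is_prime[p * p :: p])
    return is_prime

LIMIT = 100_000
is_prime = sieve(LIMIT)
primes = [p for p, flag in enumerate(is_prime) if flag]

def goldbach_pair(n):
    """Return some (p, q) with p + q = n and both prime, or None."""
    for p in primes:
        if p > n // 2:
            break
        if is_prime[n - p]:
            return (p, n - p)
    return None

print(goldbach_pair(100))                            # e.g. (3, 97)
assert all(goldbach_pair(n) for n in range(4, LIMIT, 2))
```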
Grade 5
Relevant common core: "Graph points on the coordinate plane".
Warm up: A set of points in the coordinate plane are in general position if no three of them lie on a line. A quadrilateral is convex if it does not intersect itself and the line between any two points on the boundary lies entirely within the quadrilateral. Show that any set of five points in the plane in general position has a subset of four points that form the vertices of a convex quadrilateral.
The unsolved problem: A polygon is convex if it does not intersect itself and the line between any two points on the boundary lies entirely within the polygon. Find the smallest number N so that any N points in the coordinate plane in general position contain a subset of 7 points that form the vertices of a convex polygon.
Background and context: The warm up exercise was posed by mathematician Esther Klein in 1933. The question led to the unsolved problem, which remains unsolved in the general case, i.e. how many points are required so that no matter how they are placed (in general position) in the plane there is a subset that form the vertices of a convex n-gon. There are periodic improvements in the upper bound (the most recent one posted on September 10th 2015), but the best current upper bound is still far from the conjectured answer.
A set of 8 points in the plane containing no convex pentagon.
Grade 6
Relevant common core: "Represent three-dimensional figures using nets made up of rectangles and triangles".
Warm up: The dodecahedron is an example of a convex polyhedron, that is, a polyhedron that does not intersect itself and that has the property that any line joining two points on the surface lies entirely within the polyhedron. Print and cut out a net of the dodecahedron and fold it into a dodecahedron.
The unsolved problem: Does every convex polyhedron have at least one net?
Background and context: Albrecht Dürer was an artist and mathematician of the German Renaissance. The unsolved problem above is often attributed to him, and is known as Dürer’s unfolding problem. It was formally posed by the mathematician Geoffrey Shephard in 1975.
One of my favorite math art pieces: Albrecht Dürer’s engraving Melencolia I.
Grade 7
Relevant common core: "Analyze proportional relationships and use them to solve real-world and mathematical problems."
Warm up: Two runners, running at two different speeds $v_0$ and $v_1$, run along a circular track of unit length. A runner is lonely at time t if she is at a distance of at least 1/2 from the other runner at time t. If both runners start at the same place at t=0, show that the runners will both be lonely at time $t=\frac{1}{2(v_1-v_0)}$.
The unsolved problem: Eight runners, each running at a speed different from that of the others, run along a circular track of unit length. A runner is lonely at time t if she is at a distance of at least 1/8 from every other runner at time t. If the runners all start at the same place at t=0, show that each of the eight runners will be lonely at some time.
Background and context: This problem is known as the lonely runner conjecture and was proposed by mathematician J.M Wills in 1967. It has been proved for up to seven runners, albeit with lengthy arguments that involve lots of case checking. It is currently unsolved for eight or more runners.
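A numerical experiment (not a proof) for the eight-runner case, with an arbitrary choice of distinct integer speeds; with integer speeds the configuration repeats with period 1, so one period of a fine time grid suffices:

```python
# For each runner, record the largest gap to all other runners seen on a
# grid of times in (0, 1). The conjecture asks for a gap of at least 1/8.
def circle_dist(x):
    x %= 1.0
    return min(x, 1.0 - x)

speeds = [0, 1, 2, 3, 4, 5, 6, 7]   # arbitrary distinct speeds
k = len(speeds)
best = [0.0] * k
for step in range(1, 1000):
    t = step / 1000.0
    for i, vi in enumerate(speeds):
        gap = min(circle_dist((vj - vi) * t)
                  for j, vj in enumerate(speeds) if j != i)
        best[i] = max(best[i], gap)

for i, b in enumerate(best):
    print(f"runner {i}: best gap {b:.4f} (conjecture needs >= {1 / k:.4f})")
```

For these particular speeds the time t = 1/8 already makes every runner lonely; the difficulty of the conjecture lies in handling all possible speed sets at once.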
Grade 8
Relevant common core: "Know and apply the properties of integer exponents".
Warm up: Show that $3^{3n}+[2(3^n)]^3 = 3^{3n+2}$ for any integer $n \geq 1$.
The unsolved problem: If $A^x+B^y=C^z$, where $A, B, C, x, y$ and $z$ are positive integers with $x,y,z>2$, then A, B and C have a common prime factor.
Background and context: This problem is known as Beal's conjecture (named after the billionaire Andrew Beal). The famous "Fermat's Last Theorem", proved via the modularity theorem for semistable elliptic curves, is a special case of this conjecture, an observation that hints at the difficulty of the conjecture. Andrew Beal is currently offering a prize of $1 million for its solution.

High School: Number and Quantity

Relevant common core: "In high school, students will be exposed to yet another extension of number, when the real numbers are augmented by the imaginary numbers to form the complex numbers."

Warm up: Gaussian integers are numbers of the form $a+bi$ where a and b are integers. The norm of a Gaussian integer $a+bi$ is $a^2+b^2$. A Gaussian prime is a Gaussian integer that cannot be factored into Gaussian integers of smaller norm. Show that $4+i$ is a Gaussian prime.

The unsolved problem: Consider a circle in $\mathbb{R}^2$ with centre at the origin and radius $r \geq 0$. How many points of the form (m,n), where m and n are both integers, lie inside this circle? In particular, denoting this number by $N(r)$, find bounds on $E(r)$ where $N(r) = \pi r^2 + E(r)$.

Background and context: The problem is known as Gauss' circle problem, and while phrased in terms of integer points in the plane, it is equivalent to asking for the number of Gaussian integers with norm less than a given constant.

High School: Algebra

Relevant common core: "Solve systems of equations".

Warm up: An Euler brick is a cuboid whose edges and face diagonals all have integer length. Algebraically, an Euler brick requires finding integers a, b, c, d, e, f such that $a^2+b^2=d^2$, $a^2+c^2=e^2$ and $b^2+c^2=f^2$. Verify that (a,b,c,d,e,f) = (85, 132, 720, 157, 725, 732) is an Euler brick.

The unsolved problem: A perfect cuboid is an Euler brick whose space diagonal g also has integer length. Is there a perfect cuboid?

Background and context: The existence of perfect cuboids requires solving the systems of equations for the Euler brick with the addition of the requirement that $a^2+b^2+c^2=g^2$ with g an integer. The solution of (non-linear) systems of equations in many unknowns is the subject matter of algebraic geometry, in which a bridge is developed between the algebra and a corresponding geometry. Generally speaking the equations are easiest to solve when the solutions can be complex, harder when they are required to be real numbers (real algebraic geometry), and hardest when they are constrained to be integers (Diophantine, or arithmetic algebraic geometry).

High School: Functions

Relevant common core: "Understand the concept of a function and use function notation".

Warm up: Euler's totient function, denoted by $\varphi(n)$, assigns to each positive integer n the number of positive integers that are less than or equal to n and relatively prime to n. What is $\varphi(p^k)$ when p is a prime number?

The unsolved problem: Is it true that for every n there is at least one other integer $m \neq n$ with the property that $\varphi(m) = \varphi(n)$?

Background and context: The question is known as Carmichael's conjecture, after Robert Carmichael, who posited that the answer is affirmative. Curiously, it has been proved (in The Distribution of Totients by Kevin Ford, 1998) that any counterexample to the conjecture must be larger than $10^{10^{10}}$. Yet the problem is unsolved.
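A sketch for testing Carmichael's conjecture empirically: sieve $\varphi$ up to a bound and check that every n in a smaller range shares its totient value with some $m \neq n$. The bounds are arbitrary and, of course, microscopic next to the $10^{10^{10}}$ threshold above:

```python
# Sieve Euler's totient and look for "lonely" totient values.
from collections import Counter

def phi_sieve(limit):
    phi = list(range(limit + 1))
    for p in range(2, limit + 1):
        if phi[p] == p:                      # p is prime (still untouched)
            for m in range(p, limit + 1, p):
                phi[m] -= phi[m] // p
    return phi

LIMIT, CHECK = 200_000, 20_000
phi = phi_sieve(LIMIT)
counts = Counter(phi[1:])
lonely = [n for n in range(1, CHECK) if counts[phi[n]] == 1]
print(lonely or "every n < 20000 has a partner m with phi(m) = phi(n)")
```

Note that the search window matters: a partner is only found if it happens to lie below the sieve limit, which is why the sieve range is taken well beyond the checked range.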
High School: Modeling

Relevant common core: "Modeling is the process of choosing and using appropriate mathematics and statistics to analyze empirical situations, to understand them better, and to improve decisions."

Warm up: Read the biology paper The Biological Underpinnings of Namib Desert Fairy Circles and the mathematical modeling paper Adopting a spatially explicit perspective to study the mysterious fairy circles of Namibia (some basic calculus is required).

The unsolved problem: Develop a biologically motivated mathematical model that explains the fairy circles in Namibia.

Background and context: Fairy circles occur in Southern Africa, mostly in Namibia but also in South Africa. The circles are barren patches of land surrounded by vegetation, and they appear to go through life cycles of dozens of years. There have been many theories about them, but despite a number of plausible models the phenomenon is poorly understood and their presence is considered "one of nature's great mysteries".

High School: Geometry

Relevant common core: "An understanding of the attributes and relationships of geometric objects can be applied in diverse contexts".

Warm up: Calculate the area of the "handset" shape consisting of two quarter-circles of radius 1 on either side of a 1 by $4/\pi$ rectangle from which a semicircle of radius $2/\pi$ has been removed.

The unsolved problem: What is the rigid two-dimensional shape of largest area that can be maneuvered through an L-shaped planar region with legs of unit width?

Background and context: This problem was posed by Leo Moser in 1966 and is known as the moving sofa problem. It is known that the largest area for a sofa is between 2.2195 and 2.8284. The problem should be familiar to college age students who have had to maneuver furniture in and out of dorm rooms.

High School: Statistics & Probability

Relevant common core: "Describe events as subsets of a sample space (the set of outcomes) using characteristics (or categories) of the outcomes, or as unions, intersections, or complements of other events ('or,' 'and,' 'not')."

Warm up: A family of sets is said to be union-closed if the union of any two sets from the family remains in the family. Write down five different examples of families of sets that are union-closed.

The unsolved problem: Show that for any finite union-closed family of finite sets, other than the family consisting only of the empty set, there exists an element that belongs to at least half of the sets in the family.

Background and context: The conjecture is known as Frankl's conjecture, named after the mathematician Péter Frankl, and is also called the union closed sets conjecture. It is deceptively simple, but is known to be true only in a few very special cases.

When I was an undergraduate at Caltech I took a combinatorics course from Rick Wilson, who taught from his then just published textbook A Course in Combinatorics (co-authored with J.H. van Lint). The course and the book emphasized design theory, a subject that is beautiful and fundamental to combinatorics, coding theory, and statistics, but that has sadly been in decline for some time. It was a fantastic course taught by a brilliant professor, an experience that had a profound impact on me. Though to be honest, I haven't thought much about designs in recent years. Having kids changed that. A few weeks ago I was playing the card game Colori with my three-year-old daughter. It's one of her favorites.
The game consists of 15 cards, each displaying drawings of the same 15 items (beach ball, boat, butterfly, cap, car, drum, duck, fish, flower, kite, pencil, jersey, plane, teapot, teddy bear), with each item colored using two of the colors red, green, yellow and blue. Every pair of cards contains exactly one item that is colored exactly the same. For example, on the two cards the teddy bear is holding in the picture above, the only pair of items colored exactly the same are the two beach balls.

The gameplay consists of shuffling the deck and then placing a pair of cards face-up. Players must find the matching pair, and the first player to do so keeps the cards. This is repeated seven times until there is only one card left in the deck, at which point the player with the most cards wins. When I play with my daughter "winning" consists of enjoying her laughter as she figures out the matching pair, and then proceeds to try to eat one of the cards.

An inspection of all 15 cards provided with the game reveals some interesting structure: every card contains exactly one of each type of item. Each item therefore occurs 15 times among the cards, with fourteen of the occurrences consisting of seven matched pairs, plus one extra. This is a type of partially balanced incomplete block design. Ignoring for a moment the extra item placed on each card, what we have is 15 items, each colored one of seven ways ($v = 15 \times 7 = 105$). The 105 items have been divided into 15 blocks (the cards), so that $b = 15$. Each block contains 14 elements (the items) so that $k = 14$, and each element appears in two blocks ($r = 2$). If every pair of different (colored) items occurred in the same number of cards, we would have a balanced incomplete block design, but this is not the case in Colori. Each item occurs in the same block as 26 ($= 2 \times 13$) other items (we are ignoring the extra item that makes for 15 on each card), and therefore it is not the case that every pair of items occurs in the same number of blocks as would be the case in a balanced incomplete block design. Instead, there is an association scheme that provides extra structure among the 105 items, and in turn describes the way in which items do or do not appear together on cards.

The association scheme can be understood as a graph whose nodes consist of the 105 items, with edges between items labeled either 0, 1 or 2. An edge between two items of the same type is labeled 0, edges between different items that appear on the same card are labeled 1, and edges between different items that do not appear on the same card are labeled 2. This edge labeling is called an "association scheme" because it has a special property, namely that the number of triangles with a base edge labeled k, and the other two edges labeled i and j respectively, depends only on i, j and k and not on the specific base edge selected. In other words, there is a special symmetry to the graph. Returning to the deck of cards, we see that every pair of items appears in the same card exactly 0 or 1 times, and the number depends only on the association class of the pair of objects. This is called a partially balanced incomplete block design.

The author of the game, Reinhard Staupe, made it a bit more difficult by adding an extra item to each card, making the identification of the matching pair harder. The addition also ensures that each of the 15 items appears on each card. Moreover, the items are permuted in location on the cards, in an arrangement similar to a latin square, making it hard to pair up the items. A sketch of the underlying matching structure appears below.
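One way to make the matching structure concrete: ignoring the extra items, the deck amounts to a decomposition of the $\binom{15}{2} = 105$ pairs of cards into 15 matchings of 7 pairs each, one matching per item type (a near-one-factorization of $K_{15}$). The classical circle method for round-robin tournaments produces exactly such a decomposition; the sketch below is one possible construction, not necessarily the layout Staupe used:

```python
# Item type t is matched on the 7 card-pairs {i, j} with i + j = 2t (mod 15);
# card i = t (mod 15) sits out and carries that item's unpaired coloring.
from itertools import combinations

n = 15
matchings = {
    t: sorted(pair for pair in combinations(range(n), 2)
              if sum(pair) % n == (2 * t) % n)
    for t in range(n)
}

# Each matching has 7 disjoint pairs, and together the 15 matchings cover
# each of the 105 pairs of cards exactly once:
assert all(len(m) == 7 for m in matchings.values())
all_pairs = [p for m in matchings.values() for p in m]
assert len(all_pairs) == len(set(all_pairs)) == 105
print(matchings[0])   # the card pairs on which item type 0 matches
```

In design language this exhibits the claimed structure directly: every pair of cards shares exactly one matched item type, and every item type has exactly seven matched pairs plus one leftover card.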
Instead of using eight different colors, Staupe used only four, producing the eight different "colors" of each item on the cards by using pairwise combinations of the four. The yellow-green two-colored beach balls are particularly difficult to tell apart from the green-yellow ones.

Of course, much of this is exactly the kind of thing you would want to do if you were designing an RNA-Seq experiment! Instead of 15 types of items, think of 15 different strains of mice. Instead of colors for the items, think of different cellular conditions to be assayed. Instead of one pair for each of seven color combinations, think of one pair of replicates for each of seven cellular conditions. Instead of cards, think of different sequencing centers that will prepare the libraries and sequence the reads. An ideal experimental setup would involve distributing the replicates and different cellular conditions across the different sequencing centers so as to reduce batch effect. This is the essence of part of the paper Statistical Design and Analysis of RNA Sequencing Data by Paul Auer and Rebecca Doerge; in their Figure 4 they illustrate the advantage of balanced block designs to ameliorate lane effects.

Of course the use of experimental designs for constructing controlled gene expression experiments is not new. Kerr and Churchill wrote about the use of combinatorial designs in Experimental Design for gene expression microarrays, and one can trace back a long chain of ideas originating with R.A. Fisher. But design theory seems to me to be a waning art insofar as molecular biology experiments are concerned, and it is frequently being replaced with biological intuition of what makes for a good control. The design of good controls is sometimes obvious, but not always. So next time you design an experiment, if you have young kids, first play a round of Colori. If the kids are older, play Set instead. And if you don't have any kids, plan for an extra research project, because what else would you do with your time?

In the Jeopardy! game show contestants are presented with questions formulated as answers that require answers in the form of questions. For example, if a contestant selects "Normality for $200" she might be shown the following clue:
“The average $\frac{x_1+x_2+\cdots + x_n}{n}$,”
to which she would reply "What is the maximum likelihood estimate for the mean of independent identically distributed Gaussian random variables from which samples $x_1,x_2,\ldots,x_n$ have been obtained?" Host Alex Trebek would immediately exclaim "That is the correct answer for $200!"

The process of doing mathematics involves repeatedly playing Jeopardy! with oneself in an unending quest to understand everything just a little bit better. The purpose of this blog post is to provide an exposition of how this works for understanding principal component analysis (PCA): I present four Jeopardy clues in the "Normality" category that all share the same answer: "What is principal component analysis?"

The post was motivated by a conversation I recently had with a well-known population geneticist at a conference I was attending. I mentioned to him that I would be saying something about PCA in my talk, and that he might find what I have to say interesting because I knew he had used the method in many of his papers. Without hesitation he replied that he was well aware that PCA was not a statistical method and merely a heuristic visualization tool.

The problem, of course, is that PCA does have a statistical interpretation and is not at all an ad-hoc heuristic. Unfortunately, the previously mentioned population geneticist is not alone; there is a lot of confusion about what PCA is really about. For example, in one textbook it is stated that "PCA is not a statistical method to infer parameters or test hypotheses. Instead, it provides a method to reduce a complex dataset to lower dimension to reveal sometimes hidden, simplified structure that often underlie it." In another one finds out that "PCA is a statistical method routinely used to analyze interrelationships among large numbers of objects." In a highly cited review on gene expression analysis PCA is described as "more useful as a visualization technique than as an analytical method", but then in a paper by Markus Ringnér titled the same as this post, i.e. What is principal component analysis? in Nature Biotechnology, 2008, the author writes that "Principal component analysis (PCA) is a mathematical algorithm that reduces the dimensionality of the data while retaining most of the variation in the data set" (the author then avoids going into the details because "understanding the details underlying PCA requires knowledge of linear algebra"). All of these statements are both correct and incorrect and confusing. A major issue is that the description by Ringnér of PCA in terms of the procedure for computing it (singular value decomposition) is common and unfortunately does not shed light on when it should be used. But knowing when to use a method is far more important than knowing how to do it. I therefore offer four Jeopardy! clues for principal component analysis that I think help to understand both when and how to use the method:

1. An affine subspace closest to a set of points.

Suppose we are given numbers $x_1,\ldots,x_n$ as in the initial example above. We are interested in finding the "closest" number to these numbers. By "closest" we mean in the sense of total squared difference. That is, we are looking for a number $m$ such that $\sum_{i=1}^n (m-x_i)^2$ is minimized. This is a (straightforward) calculus problem, solved by taking the derivative of the function above and setting it equal to zero.
If we let $f(m) = \sum_{i=1}^n (m-x_i)^2$ then $f'(m) = 2 \cdot \sum_{i=1}^n (m-x_i)$, and setting $f'(m)=0$ we can solve for $m$ to obtain $m = \frac{1}{n} \sum_{i=1}^n x_i$. The right hand side of the equation is just the average of the n numbers, and the optimization problem provides an interpretation of it as the number minimizing the total squared difference with the given numbers (note that one can replace squared difference by absolute value, i.e. minimization of $\sum_{i=1}^n |m-x_i|$, in which case the solution for m is the median; we return to this point and its implications for PCA later).

Suppose that instead of n numbers, one is given n points in $\mathbb{R}^p$. That is, point i is ${\bf x}^i = (x^i_1,\ldots,x^i_p)$. We can now ask for a point ${\bf m}$ with the property that the total squared distance of ${\bf m}$ to the n points is minimized. This is asking for $min_{\bf m} \sum_{i=1}^n ||{\bf m}-{\bf x}^i||_2^2$. The solution ${\bf m}$ can be obtained by minimizing each coordinate independently, thereby reducing the problem to the simpler version of numbers above, and it follows that ${\bf m} = \frac{1}{n} \sum_{i=1}^n {\bf x}^i$. This is 0-dimensional PCA, i.e., PCA of a set of points onto a single point, and it is the centroid of the points. The generalization of this concept provides a definition for PCA:

Definition: Given n points in $\mathbb{R}^p$, principal components analysis consists of choosing a dimension $k < p$ and then finding the affine space of dimension k with the property that the total squared distance of the points to their orthogonal projection onto the space is minimized.

This definition can be thought of as a generalization of the centroid (or average) of the points. To understand this generalization, it is useful to think of the simplest case that is not 0-dimensional PCA, namely 1-dimensional PCA of a set of points in two dimensions. In this case the 1-dimensional PCA subspace can be thought of as the line that best represents the average of the points: the orthogonal projections of the points onto this "average line" minimize the total squared lengths of the segments joining each point to its projection. In higher dimensions the line is replaced by an affine subspace, and the orthogonal projections are to points on that subspace. There are a few properties of the PCA affine subspaces that are worth noting:

1. The set of PCA subspaces (translated to the origin) form a flag. This means that the PCA subspace of dimension k is contained in the PCA subspace of dimension k+1. For example, all PCA subspaces contain the centroid of the points. This follows from the fact that the PCA subspaces can be incrementally constructed by building a basis from eigenvectors of a single matrix, a point we will return to later.

2. The PCA subspaces are not scale invariant. For example, if the points are scaled by multiplying one of the coordinates by a constant, then the PCA subspaces change. This is obvious because the centroid of the points will change. For this reason, when PCA is applied to data obtained from heterogeneous measurements, the units matter. One can form a "common" set of units by scaling the values in each coordinate to have the same variance.
3. If the data points are represented in matrix form as an $n \times p$ matrix $X$, and the points orthogonally projected onto the PCA subspace of dimension k are represented in the ambient p-dimensional space by a matrix $\tilde{X}$, then $\tilde{X} = argmin_{M:rk(M)=k} ||X-M||_2$. That is, $\tilde{X}$ is the matrix of rank k with the property that the Frobenius norm $||X-\tilde{X}||_2$ is minimized (strictly speaking this holds after the points have been centered at their centroid, so that the subspace passes through the origin). This is just a rephrasing in linear algebra of the definition of PCA given above.

At this point it is useful to mention some terminology confusion associated with PCA. Unfortunately there is no standard for describing the various parts of an analysis. What I have called the "PCA subspaces" are also sometimes called "principal axes". The orthogonal vectors forming the flag mentioned above are called "weight vectors", or "loadings". Sometimes they are called "principal components", although that term is sometimes used to refer to points projected onto a principal axis. In this post I stick to "PCA subspaces" and "PCA points" to avoid confusion.

Returning to Jeopardy!, we have "Normality for $400" with the answer "An affine subspace closest to a set of points" and the question "What is PCA?". One question at this point is why the Jeopardy! question just asked is in the category "Normality". After all, the normal distribution does not seem to be related to the optimization problem just discussed. The connection is as follows:
2. A generalization of linear regression in which the Gaussian noise is isotropic.
PCA has an interpretation as the maximum likelihood parameter of a linear Gaussian model, a point that is crucial in understanding the scope of its application. To explain this point of view, we begin by elaborating on the opening Jeopardy! question about Normality for $200. The point of the question was that the average of n numbers can be interpreted as a maximum likelihood estimate of the mean of a Gaussian. The Gaussian distribution is $f(x,\mu,\sigma) = \frac{1}{\sqrt{2 \pi \sigma^2}} e^{-\frac{(x-\mu)^2}{2\sigma^2}}$. Given the numbers $x_1,\ldots,x_n$, the likelihood function is therefore $L(\mu,\sigma) = \prod_{i=1}^n \frac{1}{\sqrt{2 \pi \sigma^2}} e^{-\frac{(x_i-\mu)^2}{2\sigma^2}}$. The maximum of this function is the same as the maximum of its logarithm, which is $log \, L(\mu,\sigma) = \sum_{i=1}^n \left( log \frac{1}{\sqrt{2 \pi \sigma^2}} -\frac{(x_i-\mu)^2}{2\sigma^2} \right)$. Therefore the problem of finding the maximum likelihood estimate for the mean is equivalent to that of finding the minimum of the function $S(\mu) = \sum_{i=1}^n (x_i-\mu)^2$. This is exactly the optimization problem solved by 0-dimensional PCA, as we saw above.

With this calculation at hand, we turn to the statistical interpretation of least squares: given n points $\{(x_i,y_i)\}_{i=1}^n$ in the plane, the least squares line $y=mx+b$ is the one that minimizes the sum of the squares $\sum_{i=1}^n \left( (mx_i+b) - y_i \right)^2$. That is, the least squares line is the one minimizing the sum of the squared vertical distances to the points.

As with the average of numbers, the least squares line has a statistical interpretation: suppose that there is some line $y=m^{*}x+b^{*}$ that is unknown, but that "generated" the observed points, in the sense that each observed point was obtained by perturbing the point $m^{*}x_i +b^{*}$ vertically by a random amount from a single Gaussian distribution with mean 0 and variance $\sigma^2$. Note that the model specified so far is not fully generative, as it depends on the hidden points $m^{*}x_i +b^{*}$ and there is no procedure given to generate the $x_i$. This can be done by positing that the $x_i$ are generated from a Gaussian distribution along the line $y=m^{*}x+b^{*}$ (followed by the points $y_i$ generated by Gaussian perturbation of the y coordinate on the line). The coordinates $x_i$ can then be deduced directly from the observed points as the Gaussian perturbations are all vertical. The relationship between the statistical model just described and least squares is made precise by a theorem (which we state informally, but which is a special case of the Gauss-Markov theorem):

Theorem (Gauss-Markov): The maximum likelihood estimates for the line (the parameters m and b) in the model described above correspond to the least squares line.

The proof is analogous to the argument given for the average of numbers above so we omit it. It can be generalized to higher dimensions where it forms the basis of what is known as linear regression. In regression, the $x_i$ are known as independent variables and $y$ the dependent variable. The generative model provides an interpretation of the independent variables as fixed measured quantities, whereas the dependent variable is a linear combination of the independent variables with added noise.
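As a quick sanity check of the theorem, one can simulate the generative model and confirm that least squares recovers the hidden line. A minimal sketch, where the "true" slope, intercept and noise level are arbitrary choices for the simulation:

```python
# Simulate y = m*x + b plus vertical Gaussian noise, then fit by least
# squares; by the Gauss-Markov statement above this is also the MLE.
import numpy as np

rng = np.random.default_rng(42)
m_true, b_true, sigma = 2.5, -1.0, 0.5
x = rng.uniform(0, 10, size=1000)
y = m_true * x + b_true + rng.normal(0, sigma, size=x.size)

m_hat, b_hat = np.polyfit(x, y, deg=1)   # least squares line
print(f"slope {m_hat:.3f} (true {m_true}), intercept {b_hat:.3f} (true {b_true})")
```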
It is important to note that the origins of linear regression are in physics, specifically in work of Legendre (1805) and Gauss (1809), who applied least squares to the astronomical problem of calculating the orbits of comets around the sun. In their application, the independent variable was time (for which accurate measurements were possible with clocks; by 1800 clocks were accurate to less than 0.15 seconds per day) and the (noisy) dependent variable was the measurement of location. Linear regression has become one of the most (if not the most) widely used statistical tools, but as we now explain, PCA (and its generalization factor analysis), with a statistical interpretation that includes noise in the $x_i$ variables, seems better suited for biological data.

The statistical interpretation of least squares can be extended to a similar framework for PCA. Recall that we first considered a statistical interpretation for least squares where an unknown line $y=m^{*}x+b^{*}$ "generated" the observed points, in the sense that each observed point was obtained by perturbing the point $m^{*}x_i +b^{*}$ vertically by a random amount from a single Gaussian distribution with mean 0 and variance $\sigma^2$. PCA can be understood analogously by replacing vertically with orthogonally (this is the probabilistic model of Collins et al., NIPS 2001 for PCA). However this approach is not completely satisfactory, as the orthogonality of the perturbation is not readily interpretable. Stated differently, it is not obvious what physical processes would generate points orthogonal to a linear affine subspace by perturbations that are always orthogonal to the subspace. In the case of least squares, the "vertical" perturbation corresponds to noise in one measurement (represented by one coordinate). The problem is in naturally interpreting orthogonal perturbations in terms of a noise model for measurements.

This difficulty is resolved by a model called probabilistic PCA (pPCA), first proposed by Tipping and Bishop in a Tech Report in 1997, published in the J. of the Royal Statistical Society B in 1999, and independently proposed by Sam Roweis, NIPS 1998, which we now explain. In the pPCA model there is an (unknown) line (an affine space in higher dimension) on which (hidden) points are generated at random according to a Gaussian distribution. Observed points are then generated from the hidden points by addition of isotropic Gaussian noise, meaning that the Gaussian has a diagonal covariance matrix with equal entries. Formally, in the notation of Tipping and Bishop, this is a linear Gaussian model described as follows: observed random variables are given by $t = Wx + \mu + \epsilon$ where x are latent (hidden) random variables, W is a matrix describing a subspace and $Wx+\mu$ are the latent points on an affine subspace ($\mu$ corresponds to a translation). Finally, $\epsilon$ is an error term, given by a Gaussian random variable with mean 0 and covariance matrix $\sigma^2 I$. The parameters of the model are $W,\mu$ and $\sigma^2$. Equivalently, the observed random variables are themselves Gaussian, described by the distribution $t \sim \mathcal{N}(\mu,WW^T + \sigma^2 I)$.
Tipping and Bishop prove an analog of the Gauss-Markov theorem, namely that the affine subspace given by the maximum likelihood estimates of $W$ and $\mu$ is the PCA subspace (the proof is not difficult but I omit it and refer interested readers to their paper, or Bishop’s Pattern Recognition and Machine Learning book). It is important to note that although the maximum likelihood estimates of $W,\mu$ in the pPCA model correspond to the PCA subspace, only posterior distributions can be obtained for the latent data (points on the subspace). Neither the mode nor the mean of those distributions corresponds to the PCA points (orthogonal projections of the observations onto the subspace). However what is true, is that the posterior distributions converge to the PCA points as $\sigma^2 \rightarrow 0$. In other words, the relationship between pPCA and PCA is a bit more subtle than that between least squares and regression.

The relationship between regression and (p)PCA is shown in the figure below: In the figure, points have been generated randomly according to the pPCA model. The black smear shows the affine space on which the points were generated, with the smear indicating the Gaussian distribution used. Subsequently the latent points (light blue on the gray line) were used to make observed points (red) by the addition of isotropic Gaussian noise. The green line is the maximum likelihood estimate for the space, or equivalently, by the theorem of Tipping and Bishop, the PCA subspace. The projections of the observed points onto the PCA subspace (blue) are the PCA points. The purple line is the least squares line, or equivalently the affine space obtained by regression ($y$ observed as a noisy function of $x$). The pink line is also a regression line, except where $x$ is observed as a noisy function of $y$.

A natural question to ask is why the probabilistic interpretation of PCA (pPCA) is useful or necessary. One reason it is beneficial is that maximum likelihood inference for pPCA involves hidden random variables, and therefore the EM algorithm immediately comes to mind as a solution (the strategy was suggested by both Tipping & Bishop and Roweis). I have not yet discussed how to find the PCA subspace, and the EM algorithm provides an intuitive and direct way to see how it can be done, without the need for writing down any linear algebra: The exact version of the EM shown above is due to Roweis. In it, one begins with a random affine subspace passing through the centroid of the points. The “E” step (expectation) consists of projecting the points to the subspace. The projected points are considered fixed to the subspace. The “M” step (maximization) then consists of rotating the space so that the total squared distance of the fixed points on the subspace to the observed points is minimized. This is repeated until convergence. Roweis points out that this approach to finding the PCA subspace is equivalent to power iteration for (efficiently) finding eigenvectors of the sample covariance matrix without computing it directly. This is our first use of the word eigenvector in describing PCA, and we elaborate on it, and the linear algebra of computing PCA subspaces, later in the post.
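A minimal sketch of this EM iteration in R for a one-dimensional subspace (assuming the points have been zero-centered into an n x p matrix M, as in the worked example later in this post; an illustration rather than production code):

em_pca <- function(M, iters = 50) {
  w <- rnorm(ncol(M)) #random initial direction for the subspace
  for (i in 1:iters) {
    x <- M %*% w / sum(w * w)    #E step: coefficients of projected points
    w <- t(M) %*% x / sum(x * x) #M step: axis minimizing squared distances
  }
  w / sqrt(sum(w * w)) #unit vector spanning the fitted PCA subspace
}

Up to sign, the result agrees with the first right singular vector of M, illustrating Roweis’ point about the equivalence with power iteration.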
Another point of note is that pPCA can be viewed as a special case of factor analysis, and this connection provides an immediate starting point for thinking about generalizations of PCA. Specifically, factor analysis corresponds to the model $t \sim \mathcal{N}(\mu,WW^T + \psi)$ where the covariance matrix $\psi$ is less constrained, and only required to be diagonal. This is connected to a comment made above about when the PCA subspace might be more useful as a linear fit to data than regression. To reiterate, unlike physics, where some coordinate measurements have very little noise in comparison to others, biological measurements are frequently noisy in all coordinates. In such settings factor analysis is preferable, as the variance in each coordinate is estimated as part of the model. PCA is perhaps a good compromise, as PCA subspaces are easier to find than parameters for factor analysis, yet PCA, via its pPCA interpretation, accounts for noise in all coordinates.

A final comment about pPCA is that it provides a natural framework for thinking about hypothesis testing. The book Statistical Methods: A Geometric Approach by Saville and Wood is essentially about (the geometry of) pPCA and its connection to hypothesis testing. The authors do not use the term pPCA but their starting point is exactly the linear Gaussian model of Tipping and Bishop. The idea is to consider single samples from n independent identically distributed Gaussian random variables as one single sample from a high-dimensional multivariate linear Gaussian model with isotropic noise. From that point of view pPCA provides an interpretation for Bessel’s correction. The details are interesting but tangential to our focus on PCA.

We are therefore ready to return to Jeopardy!, where we have “Normality for $600” with the answer “A generalization of linear regression in which the Gaussian noise is isotropic” and the question “What is PCA?”
3. An orthogonal projection of points onto an affine space that maximizes the retained sample variance.
In the previous two interpretations of PCA, the focus was on the PCA affine subspace. However in many uses of PCA the output of interest is the projection of the given points onto the PCA affine space. The projected points have three useful related interpretations:
1. As seen in section 1, the (orthogonally) projected points (red -> blue) are those whose total squared distance to the observed points is minimized.
2. What we focus on in this section is the interpretation that the PCA subspace is the one onto which the (orthogonally) projected points maximize the retained sample variance.
3. The topic of the next section, namely that the squared distances between the (orthogonally) projected points are on average (in the $l_2$ metric) closest to the original distances between the points.
The sample variance of a set of points is the average squared distance from each point to the centroid. Mathematically, if the observed points are translated so that their centroid is at zero (known as zero-centering), and then represented by an $n \times p$ matrix X, then the sample covariance matrix is given by $\frac{1}{n-1}X^tX$ and the sample variance is given by the trace of the matrix. The point is that the jth diagonal entry of $\frac{1}{n-1}X^tX$ is just $\frac{1}{n-1}\sum_{i=1}^n (x^i_j)^2$, which is the sample variance of the jth variable. The PCA subspace can be viewed as that subspace with the property that the sample variance of the projections of the observed points onto the subspace is maximized. This is easy to see from the figure above. For each point (blue), Pythagoras’ theorem implies that $d(red,blue)^2+d(blue,green)^2 = d(red,green)^2$. Since the PCA subspace is the one minimizing the total squared red-blue distances, and since the solid black lines (red-green distances) are fixed, it follows that the PCA subspace also maximizes the total squared green-blue distances. In other words, PCA maximizes the retained sample variance.
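This decomposition is easy to check numerically; below is a short sketch with arbitrary data and an arbitrary unit direction w (nothing here depends on the particular choices):

set.seed(3)
M <- scale(matrix(rnorm(40), 20, 2), scale=FALSE) #20 zero-centered points
w <- c(3, 4)/5 #a unit direction defining a 1-dimensional subspace
proj <- (M %*% w) %*% t(w) #orthogonal projections onto the subspace
total <- sum(M^2)/(nrow(M)-1) #total sample variance (trace of covariance)
retained <- sum(proj^2)/(nrow(M)-1) #variance retained by the projection
lost <- sum((M - proj)^2)/(nrow(M)-1) #average squared residual distances
total - (retained + lost) #zero, by Pythagoras' theorem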
The explanation above is informal, and uses a 1-dimensional PCA subspace in dimension 2 to make the argument. However the argument extends easily to higher dimension, which is typically the setting where PCA is used. In fact, PCA is typically used to “visualize” high dimensional points by projection into dimensions two or three, precisely because of the interpretation provided above, namely that the projection maximizes the retained sample variance. I put visualize in quotes because intuition in two or three dimensions does not always hold in high dimensions. However PCA can be useful for visualization, and one of my favorite examples is the evidence for genes mirroring geography in humans. This was first alluded to by Cavalli-Sforza, but definitively shown by Lao et al., 2008, who analyzed 2541 individuals and showed that PCA of the SNP matrix (approximately) recapitulates geography:
Genes mirror geography from Lao et al. 2008: (Left) PCA of the SNP matrix (2541 individuals x 309,790 SNPs) showing a density map of projected points. (Right) Map of Europe showing locations of the populations.
In the picture above, it is useful to keep in mind that the emergence of geography is occurring in the projection in which the sample variance is maximized. As far as interpretation goes, it is useful to look back at Cavalli-Sforza’s work. He and his collaborators, who worked on the problem in the 1970s, were unable to obtain a dense SNP matrix due to the limited technology of the time. Instead, in Menozzi et al., 1978 they performed PCA of an allele-frequency matrix, i.e. a matrix indexed by populations and allele frequencies instead of individuals and genotypes. Unfortunately they fell into the trap of misinterpreting the biological meaning of the eigenvectors in PCA. Specifically, they inferred migration patterns from contour plots in geographic space obtained by plotting the relative contributions from the eigenvectors, but the effects they observed turned out to be an artifact of PCA. However as we discussed above, PCA can be used quantitatively via the stochastic process for which it solves maximum likelihood inference. It just has to be properly understood.
To conclude this section in Jeopardy! language, we have “Normality for $800” with the answer “A set of points in an affine space obtained via projection of a set of given points so that the sample variance of the projected points is maximized” and the question “What is PCA?”

4. Principal component analysis of Euclidean distance matrices.

In the preceding interpretations of PCA, I have focused on what happens to individual points when projected to a lower dimensional subspace, but it is also interesting to consider what happens to pairs of points. One thing that is clear is that if a pair of points is projected orthogonally onto a low-dimensional affine subspace then the distance between the points in the projection is smaller than the original distance between the points. This is clear because of Pythagoras’ theorem, which implies that the squared distance will shrink unless the segment joining the points is parallel to the subspace, in which case the distance remains the same. An interesting observation is that the PCA subspace is in fact the one with the property that the average (or total) squared distance between the projected points is maximized. To see this it again suffices to consider only projections onto one dimension (the general case follows by Pythagoras’ theorem). The following lemma, discussed in my previous blog post, makes the connection to the previous discussion:

Lemma: Let $x_1,\ldots,x_n$ be numbers with mean $\overline{x} = \frac{1}{n}\sum_i x_i$. If the average squared distance between pairs of points is denoted $D = \frac{1}{n^2}\sum_{i,j} (x_i-x_j)^2$ and the variance is denoted $V=\frac{1}{n}\sum_i (x_i-\overline{x})^2$ then $V=\frac{1}{2}D$.

What the lemma says is that the sample variance is proportional to the average squared difference between the numbers, with a constant of proportionality (one half) that does not depend on the numbers. I have already discussed that the PCA subspace maximizes the retained variance, and it therefore follows that it also maximizes the average (or total) projected squared distance between the points. Alternatively, PCA can be interpreted as minimizing the total (squared) distance that is lost, i.e. if the original distances between the points are given by a distance matrix $D$ and the projected distances are given by $\tilde{D}$, then the PCA subspace minimizes $\sum_{ij} (D^2_{ij} - \tilde{D}^2_{ij})$, where each term in the sum is nonnegative as discussed above.

This interpretation of PCA leads to an interesting application of the method to (Euclidean) distance matrices rather than points. The idea is based on a theorem of Isaac Schoenberg that characterizes Euclidean distance matrices and provides a method for realizing them. The theorem is well-known to structural biologists who work with NMR, because it is one of the foundations used to reconstruct coordinates of structures from distance measurements. It requires a bit of notation: $D$ is a distance matrix with entries $d_{ij}$ and $\Delta$ is the matrix with entries $-\frac{1}{2}d^2_{ij}$. ${\bf 1}$ denotes the vector of all ones, and ${\bf s}$ denotes a vector.

Theorem (Schoenberg, 1938): A matrix $D$ is a Euclidean distance matrix if and only if the matrix $B=(I-{\bf 1}{\bf s}')\Delta(I-{\bf s}{\bf 1}')$ is positive semi-definite, where ${\bf s}'{\bf 1} = 1$.

For the case when ${\bf s}$ is chosen to be a unit vector, i.e. all entries are zero except one of them equal to 1, the matrix $B$ can be viewed as the Gromov transform (known as the Farris transform in phylogenetics) of the matrix with entries $d^2_{ij}$.
Since the matrix $B$ is positive semidefinite it can be written as $B=XX^t$, where the matrix $X$ provides coordinates for points that realize $D$. At this point PCA can be applied, resulting in a principal subspace and points on it (the orthogonal projections of $X$). A point of note is that the eigenvectors of $XX^t$ can be computed directly, avoiding the need to compute $X^tX$, which may be a larger matrix if $n < p$.

The procedure just described is called classic multidimensional scaling (MDS) and it returns a set of points on a Euclidean subspace with distance matrix $\tilde{D}$ that best represent the original distance matrix $D$ in the sense that $\sum_{ij} (D^2_{ij} - \tilde{D}^2_{ij})$ is minimized. The term multidimensional scaling without the “classic” has taken on an expanded meaning, namely it encapsulates all methods that seek to approximately realize a distance matrix by points in a low dimensional Euclidean space. Such methods are generally not related to PCA, but classic multidimensional scaling is PCA. This is a general source of confusion and error on the internet. In fact, most articles and course notes I found online describing the connection between MDS and PCA are incorrect. In any case classic multidimensional scaling is a very useful instance of PCA, because it extends the utility of the method to cases where points are not available but distances between them are.

Now we return to Jeopardy! one final time with the final question in the category: “Normality for $1000”. The answer is “Principal component analysis of Euclidean distance matrices” and the question is “What is classic multidimensional scaling?”
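Before moving on, the equivalence between classic MDS and PCA can be checked in R (a short sketch; cmdscale is base R’s implementation of classic MDS):

set.seed(4)
X <- matrix(rnorm(30), 10, 3) #10 points in R^3
D <- dist(X) #their Euclidean distance matrix
mds <- cmdscale(D, k = 2) #classic multidimensional scaling of D
pca <- prcomp(X)$x[, 1:2] #PCA scores of the original points
max(abs(abs(mds) - abs(pca))) #near zero: identical up to signs of the axes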
An example
To illustrate the interpretations of PCA I have highlighted, I’m including an example in R inspired by an example from another blog post (all commands can be directly pasted into an R console). I’m also providing the example because missing in the discussion above is a description of how to compute PCA subspaces and the projections of points onto them. I therefore explain some of this math in the course of working out the example:
First, I generate a set of points (in $\mathbb{R}^2$). I’ve chosen a low dimension so that pictures can be drawn that are compatible with some of the examples above. Comments following commands appear after the # character.
set.seed(2) #sets the seed for random number generation.
x <- 1:100 #creates a vector x with numbers from 1 to 100
ex <- rnorm(100, 0, 30) #100 normally distributed rand. nos. w/ mean=0, s.d.=30
ey <- rnorm(100, 0, 30) # " "
y <- 30 + 2 * x #sets y to be a vector that is a linear function of x
x_obs <- x + ex #adds "noise" to x
y_obs <- y + ey #adds "noise" to y
P <- cbind(x_obs,y_obs) #places points in matrix
plot(P,asp=1,col=1) #plot points
points(mean(x_obs),mean(y_obs),col=3, pch=19) #show center
At this point a full PCA analysis can be undertaken in R using the command “prcomp”, but in order to illustrate the algorithm I show all the steps below:
M <- cbind(x_obs-mean(x_obs),y_obs-mean(y_obs))#centered matrix
MCov <- cov(M) #creates covariance matrix
Note that the covariance matrix is proportional to the matrix $M^tM$. Next I turn to computation of the principal axes:
eigenValues <- eigen(MCov)$values #compute eigenvalues
eigenVectors <- eigen(MCov)$vectors #compute eigenvectors
The eigenvectors of the covariance matrix provide the principal axes, and the eigenvalues quantify the fraction of variance explained in each component. This math is explained in many papers and books so we omit it here, except to say that the fact that the eigenvectors of the sample covariance matrix are the principal axes follows from recasting the PCA optimization problem as maximization of the Rayleigh quotient. A key point is that although I’ve computed the sample covariance matrix explicitly in this example, it is not necessary to do so in practice in order to obtain its eigenvectors. In fact, it is inadvisable to do so. Instead, it is computationally more efficient, and also more stable, to directly compute the singular value decomposition of M. The singular value decomposition of M decomposes it into $M=UDV^t$ where $D$ is a diagonal matrix and both $U$ and $V^t$ are orthogonal matrices. I will also not explain in detail the linear algebra of singular value decomposition and its relationship to eigenvectors of the sample covariance matrix (there is plenty of material elsewhere), and only show how to compute it in R:
d <- svd(M)$d #the singular values
v <- svd(M)$v #the right singular vectors
The right singular vectors are the eigenvectors of $M^tM$. Next I plot the principal axes:
lines(x_obs,eigenVectors[2,1]/eigenVectors[1,1]*M[,1]+mean(y_obs),col=8)
This shows the first principal axis. Note that it passes through the mean as expected. The ratio of the coordinates of the eigenvector gives the slope of the axis. Next
lines(x_obs,eigenVectors[2,2]/eigenVectors[1,2]*M[,1]+mean(y_obs),col=8)
shows the second principal axis, which is orthogonal to the first (recall that the matrix $V^t$ in the singular value decomposition is orthogonal). This can be checked by noting that the second principal axis is also
lines(x_obs,-1/(eigenVectors[2,1]/eigenVectors[1,1])*M[,1]+mean(y_obs),col=8)
as the product of the slopes of orthogonal lines is -1. Next, I plot the projections of the points onto the first principal component:
trans <- (M%*%v[,1])%*%v[,1] #compute projections of points
P_proj <- scale(trans, center=-cbind(mean(x_obs),mean(y_obs)), scale=FALSE)
points(P_proj, col=4,pch=19,cex=0.5) #plot projections
segments(x_obs,y_obs,P_proj[,1],P_proj[,2],col=4,lty=2) #connect to points
The linear algebra of the projection is simply a rotation followed by a projection (and an extra step to recenter to the coordinates of the original points). Formally, the matrix M of points is rotated by the matrix of eigenvectors W to produce $T=MW$. This is the rotation that has all the optimality properties described above. The matrix T is sometimes called the PCA score matrix. All of the above code produces the following figure, which should be compared to those shown above:
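As a sanity check (a short sketch; prcomp is base R and computes PCA via the SVD), the manual steps above agree with the built-in function up to the arbitrary signs of the axes:

pca <- prcomp(P) #centers the data and performs PCA
max(abs(abs(pca$rotation) - abs(eigenVectors))) #same principal axes
max(abs(abs(pca$x) - abs(M %*% eigenVectors))) #same score matrix T = MW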
There are many generalizations and modifications to PCA that go far beyond what has been presented here. The first step in generalizing probabilistic PCA is factor analysis, which includes estimation of variance parameters in each coordinate. Since it is rare that “noise” in data will be the same in each coordinate, factor analysis is almost always a better idea than PCA (although the numerical algorithms are more complicated). In other words, I just explained PCA in detail, now I’m saying don’t use it! There are other aspects that have been generalized and extended. For example the Gaussian assumption can be relaxed to other members of the exponential family, an important idea if the data is discrete (as in genetics). Yang et al. 2012 exploit this idea by replacing PCA with logistic PCA for analysis of genotypes. There are also many constrained and regularized versions of PCA, all improving on the basic algorithm to deal with numerous issues and difficulties.

Perhaps more importantly, there are issues in using PCA that I have not discussed. A big one is how to choose the PCA dimension to project to in analysis of high-dimensional data. But I am stopping here as I am certain no one is reading this far into the post anyway…
The take-home message about PCA? Always be thinking when using it!
Acknowledgment: The exposition of PCA in this post began with notes I compiled for my course MCB/Math 239: 14 Lessons in Computational Genomics taught in the Spring of 2013. I thank students in that class for their questions and feedback. None of the material presented in class was new, but the exposition was intended to clarify when PCA ought to be used, and how. I was inspired by the papers of Tipping, Bishop and Roweis on probabilistic PCA in the late 1990s that provided the needed statistical framework for its understanding. Following the class I taught, I benefited greatly from conversations with Nicolas Bray, Brielin Brown, Isaac Joseph and Shannon McCurdy who helped me to further frame PCA in the way presented in this post.
The Habsburg rulership of Spain ended with an inbreeding coefficient of F=0.254. The last king, Charles II (1661-1700), suffered an unenviable life. He was unable to chew. His tongue was so large he could not speak clearly, and he constantly drooled. Sadly, his mouth was the least of his problems. He suffered seizures, had intellectual disabilities, and vomited frequently. He was also impotent and infertile, so that even his death was a curse: his lack of heirs led to a war.
None of these problems prevented him from being married (twice). His first wife, princess Henrietta of England, died at age 26 after becoming deeply depressed, having been married to the man for a decade. Only a year later, he married another princess, the 23-year-old Maria Anna of Neuberg. To put it mildly, his wives did not end up living the charmed life of Disney princesses, nor were they presumably smitten by young Charles II, who apparently aged prematurely and looked the part of his horrific homozygosity. The princesses married Charles II because they were forced to. Royals organized marriages to protect and expand their power, money and influence. Coupled to this were primogeniture rules which ensured that the sons of kings, their own flesh and blood and therefore presumably the best-suited to be in power, would indeed have the opportunity to succeed their fathers. The family tree of Charles II shows how this worked in Spain:
It is believed that the inbreeding in Charles II’s family led to two genetic disorders, combined pituitary hormone deficiency and distal renal tubular acidosis, that explained many of his physical and mental problems. In other words, genetic diversity is important, and the point of this blog post is to highlight the fact that diversity is important in education as well.
The problem of inbreeding in academia has been studied previously, albeit to a limited extent. One interesting article is Navel Grazing: Academic Inbreeding and Scientific Productivity by Horta et al., published in 2010 (my own experience with an inbred academic from a department where 39% of the faculty are self-hires anecdotally confirms the claims made in the paper). But here I focus on the downsides of inbreeding of ideas rather than of faculty. For example home-schooling, the educational equivalent of primogeniture, can be fantastic if the parents happen to be good teachers, but can fail miserably if they are not. One thing that is guaranteed in a school or university setting is that learning happens by exposure to many teachers (different faculty, students, tutors, the internet, etc.). Students frequently complain when there is high variance in teaching quality, but one thing such variance ensures is that it is very unlikely that any student is exposed only to bad teachers. Diversity in teaching also helps to foster the development of new ideas. Different teachers, by virtue of insight or error, will occasionally “mutate” ideas or concepts for better or for worse. In other words, one does not have to fully embrace the theory of memes to acknowledge that there are benefits to variance in teaching styles, methods and pedagogy. Conversely, there is danger in homogeneity.
This brings me to MOOCs. One of the great things about MOOCs is that they reach millions of people. Udacity claims it has 1.6 million “users” (students?). Coursera claims 7.1 million. These companies are greatly expanding the accessibility of education. Starving children in India can now take courses in mathematical methods for quantitative finance, and for the first time in history, a president of the United States can discreetly take a freshman course on economics together with its high school algebra prerequisites (highly recommended). But when I am asked whether I would be interested in offering a MOOC I hesitate, paralyzed at the thought that any error I make would immediately be embedded in the brains of millions of innocent victims. My concern is this: MOOCs can greatly reduce the variance in education. For example, Coursera currently offers 641 courses, which means that on average each course is or has been taught to over 11,000 students. Many college courses have fewer than a few dozen students, and even large college courses rarely have more than a few hundred students. This means that through MOOCs, individual professors reach many more (2 orders of magnitude!) students. A great lecture can end up positively impacting a large number of individuals, but at the same time, a MOOC can be a vehicle for infecting the brains of millions of people with nonsense. If that nonsense is then propagated and reaffirmed via the interactions of the people who have learned it from the same source, then the inbreeding of ideas has occurred.
I mention MOOCs because I was recently thinking about the intuition behind Bessel’s correction replacing n with n-1 in the formula for sample variance. Formally, Bessel’s correction replaces the biased formula
$s^2_n = \frac{1}{n} \sum_{i=1}^n (x_i - \overline{x})^2$
for estimating the variance of a random variable from samples $x_1,\ldots,x_n$ with
$s^2_{n-1} = \frac{1}{n-1} \sum_{i=1}^n (x_i-\overline{x})^2$.
The switch from n to n-1 is a bit mysterious and surprising, and in introductory statistics classes it is frequently just presented as a “fact”. When an explanation is provided, it is usually in the form of algebraic manipulation that establishes the result. The issue came up as a result of a blog post I’m writing about principal components analysis (PCA), and I thought I would check for an intuitive explanation online. I googled “intuition sample variance” and the top link was a MOOC from the Khan Academy:
The video has over 51,000 views with over 100 “likes” and only 6 “dislikes”. Unfortunately, in this case, popularity is not a good proxy for quality. Despite the title promising “review” and “intuition” for “why we divide by n-1 for the unbiased sample variance”, there is no specific reason given why n is replaced by n-1 (as opposed to some other correction). Furthermore, the intuition provided has to do with the fact that $x_i-\overline{x}$ underestimates $x_i-\mu$ (where $\mu$ is the mean of the random variable and $\overline{x}$ is the sample mean), but the explanation is confusing and not quantitative (which it can easily be). In fact, the wikipedia page for Bessel’s correction provides three different mathematical explanations for the correction together with the intuition that motivates them, but it is difficult to find with Google unless one knows that the correction is called “Bessel’s correction”.
Wikipedia is also not perfect, and this example is a good one for why teaching by humans is important. Among the three alternative derivations, I think that one stands out as “better”, but one would not know it by just looking at the wikipedia page. Specifically, I refer to “Alternate 1” on the wikipedia page, which essentially explains that the variance can be rewritten as a double sum corresponding to the average squared distance between points, and that the diagonal terms of the sum are zero. An explanation of why this fact leads to the n-1 in the unbiased estimator is as follows:
The first step is to notice that the variance of a random variable is equal to half of the expected squared difference of two independent identically distributed random variables of that type. Specifically, the definition of variance is:
$var(X) = \mathbb{E}(X - \mu)^2$ where $\mu = \mathbb{E}(X)$. Equivalently, $var(X) = \mathbb{E}(X^2) -\mu^2$. Now suppose that Y is another random variable identically distributed to X and with X,Y independent. Then $\mathbb{E}(X-Y)^2 = 2 var(X)$. This is easy to see by using the fact that
$\mathbb{E}(X-Y)^2 = \mathbb{E}(X^2) + \mathbb{E}(Y^2) - 2\mathbb{E}(X)\mathbb{E}(Y) = 2\mathbb{E}(X^2)-2\mu^2$.
This identity motivates a rewriting of the (uncorrected) sample variance $s^2_n$ in a way that is computationally less efficient, but mathematically more insightful:

$s^2_n = \frac{1}{2n^2} \sum_{i,j=1}^n (x_i-x_j)^2$.
Of note is that in this summation exactly n of the terms are zero, namely the terms when i=j. These terms are zero independently of the original distribution, and remain so in expectation thereby biasing the estimate of the variance, specifically leading to an underestimate. Removing them fixes the estimate and produces
$s_{n-1}^2 = \frac{1}{2n(n-1)} \sum_{i,j=1, i \neq j}^n (x_i-x_j)^2$.
It is easy to see that this is indeed Bessel’s correction. In other words, the correction boils down to the fact that $n^2-n = n(n-1)$, hence the appearance of n-1.
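As a quick empirical check (a sketch in base R with an arbitrary choice of distribution), the pairwise form agrees with R’s built-in var(), and the corrected estimator is unbiased:

set.seed(5)
x <- rnorm(10, 0, 2) #10 samples from a distribution with variance 4
n <- length(x)
sum(outer(x, x, "-")^2)/(2*n*(n-1)) #pairwise form of the corrected estimator
var(x) #identical: R's var() uses Bessel's correction
mean(replicate(1e5, var(rnorm(10, 0, 2)))) #unbiased: close to 4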
Why do I like this particular derivation of Bessel’s correction? There are two reasons: first, n-1 emerges naturally and obviously from the derivation. The denominator in $s_{n-1}^2$ matches exactly the number of terms being summed (up to the factor of two coming from the identity above), so that it can be understood as a true average (this is not apparent in its standard form as $s_{n-1}^2 = \frac{1}{n-1} \sum_{i=1}^n (x_i-\overline{x})^2$). There is really nothing mysterious anymore, it’s just that some terms have been omitted from the sum because they were non-informative. Second, as I will show in my forthcoming blog post on PCA, the fact that the variance of a random variable is half of the expectation of the squared difference of two instances, is key to understanding the connection between multi-dimensional scaling (MDS) and PCA. In other words, as my student Nicolas Bray is fond of saying, although most people think a proof is either right or wrong, in fact some proofs are more right than others. The connection between Bessel’s correction and PCA goes even deeper: as explained by Saville and Wood in their book Statistical Methods: A Geometric Approach, n-1 can be understood as a reduction in dimension by one from the point of view of probabilistic PCA (Saville and Wood do not explicitly use the term probabilistic PCA but as I will explain in my PCA post it is implicit in their book). Finally, there are many subtleties to Bessel’s correction, for example it yields an unbiased estimator for the variance but not for the standard deviation. These issues ought to be mentioned in a good lecture about the topic. In other words, the Khan lecture is neither necessary nor sufficient, but unlike a standard lecture where the damage is limited to a small audience of students, it has been viewed more than 50,000 times and those views cannot be unviewed.
In writing this blog post I pondered the irony of my call for added diversity in teaching while I preach my own idea (this post) to a large number of readers via a medium designed for maximal outreach. I can only ask that others blog as well to offer alternative points of view 🙂 and that readers inform themselves on the issues I raise by fact-checking elsewhere. As far as the statistics goes, if someone finds the post confusing, they should go and register for one of the many fantastic MOOCs on statistics! But I reiterate that in the rush to MOOCdom, providers must offer diversity in their offerings (even multiple lectures on the same topic) to ensure a healthy population of memes. This is especially true in Spain, where already inbred faculty are now inbreeding what they teach by MOOCing via Miriada X. Half of the MOOCs being offered in Spain originate from just 3 universities, while the number of potential viewers is enormous as Spanish is now the second most spoken language in the world (thanks to Charles II’s great-great-grandfather, Charles I).
May Charles II rest in peace.
On Saturday I submitted the final grades for Math 10A, the new UC Berkeley freshman math class for intended biology majors that I taught this semester. In assigning students their grades, I had a chance to reflect again on the system we use and its substantial shortcomings.
The system is broken, and my grade assignment procedure illustrates why. Math 10A had 223 students this semester, and they were graded according to the following policy: homework 10%, quizzes 10%, midterms 20% each (there were two midterms) and the final 40%. If midterm 1 was skipped then midterm 2 counted 40%. Similarly, if midterm 2 was skipped then the final counted 60%. This produced a raw score for each student, and the final distribution is shown below (zeroes not shown):
The distribution seems fairly “reasonable”. One student didn’t do any work or show up and got a 5/100. At the other end of the spectrum some students aced the class. The average score was 74.48 and the standard deviation 15.06. An optimal raw score distribution should allow for detailed discrimination between students (e.g. if everyone gets the same score that’s not helpful). I think my distribution could have been a bit better, but overall I am satisfied with it. The problem comes with the next step: after obtaining raw scores in a class, the professor has to set cutoffs for A+/A/A-/B+/B/B-/C+/C/C-/D+/D/D-/F. Depending on how the cutoffs are set, the grade distribution can change dramatically. In fact, it is easy to see that any discrete distribution on letter grades is achievable from any raw score distribution. One approach to letter grades would be to fix an A at, say, any raw score greater than or equal to 90%, i.e., no curving. I found that threshold on wikipedia. But that is rarely how grades are set, partly because of large variability in the difficulty of exams. Almost every professor I know “curves” to some extent. At Berkeley one can examine grade distributions here.
It turns out that Roger Purves from statistics used to aim for a uniform distribution:
Roger Purves’ Stat 2 grade distribution over the past 6 years.
The increase in C- grades is explained by an artifact of the grading system at Berkeley. If a student fails the class they can take it again and record the passing grade for their GPA (although the F remains on the transcript). A grade of D is not only devastating for the GPA, but also permanent. It cannot be improved by retaking the class. Therefore many students try to fail when they are doing poorly in a class, and many professors simply avoid assigning Ds. In other words, Purves’ C- includes his Ds. Another issue is that an A+ vs. A does not affect GPA, but an A vs. A- does; the latter is obviously a very subjective difference that varies widely between classes and professors. Note that Roger Purves just didn’t assign A+ grades, presumably because they have no GPA significance (although they do arguably have a psychological impact).
Marina Ratner from math failed more students [Update November 9, 2014: Prof. Ratner has pointed out to me that she receives excellent reviews from students on Ratemyprofessors, while explaining that “the large number of F in my classes are due to the large number of students who drop the class but are still on the list or don’t do any work” and that “One of the reasons why my students learned and others did not was precisely because of my grading policy.”]. Her grade distribution for Math 1b in the Spring semester of 2009 is below:
Marina Ratner’s Math 1B, Spring 2009.
In the same semester, in a parallel section, her colleague Richard Borcherds gave the following grades:
Richard Borcherd’s Math 1B, Spring 2009.
Unlike Ratner, Borcherds appears to be averse to failing students. Only 7 students failed out of 441 who were enrolled in his two sections that semester. Fair?
And then there are those who believe in the exponential distribution, for example Heino Nitsche who teaches Chem 1A:
Heino Nitsche’s Chem 1A, Spring 2011.
The variability in grade assignment is astonishing. As can be seen above, curving is prevalent and arbitrary, and the idea that grades have an absolute meaning is not credible. It is statistically highly unlikely that Ratner’s students were always terrible at learning math (whereas Borcherds “luckily” got the good students). Is chemistry inherently easy, to the point where an average student taking the class deserves an A?
This messed-up system differs in its details at other schools, but it is similarly broken everywhere. Sadly, many schools have used letter grading to manipulate GPAs via continual grade inflation. Just three weeks ago, on December 3rd, the dean of undergraduate education at Harvard confirmed that the median grade at Harvard is an A- and the most common grade an A. The reasons for grade inflation are manifold, but I can understand it on a personal level. It is tempting for a faculty member to assign As because those are likely to immediately translate to better course evaluations (both internal, and public on sites such as Ninja Courses and ratemyprofessor). Local grade inflation can quickly lead to global inflation as professors, and at a higher level their universities, compete with each other for the happiest students.
How did I assign letter grades for Math 10A?
What should be done?
Until recently grades were recorded on paper, making it difficult to perform anything but trivial computations on the raw scores or letter grades. But electronic recording of grades allows for more sophisticated analysis. This should be taken advantage of. Suppose that instead of a letter grade, each student’s raw scores were recorded, along with the distribution of class scores. A single letter would immediately be replaced by a meaningful number in context.
I do think it is unfair to grade students only relatively, especially with respect to cohorts that can range in quality. But it should be possible to compute a meaningful custom raw score distribution specific to individual students based on the classes they have taken. The raw data is a 3-way table whose dimensions consist of professors x classes x raw scores. This table is sparse, as professors typically only teach a handful of different courses throughout their career. But by properly averaging the needed distributions as gleaned from this table, it should be possible to produce for each student an overall GPA score, together with a variance of the (student specific) distribution it came from averaged over the courses the student took. The resulting distribution and score could be renormalized to produce a single meaningful number. That way, taking “difficult” classes with professors who grade harshly would not penalize the GPA. Similarly, aiming for easy As wouldn’t help the GPA. And manipulative grade inflation on the part of professors and institutions would be much more difficult.
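To make the proposal concrete, here is a toy sketch in R (the students, classes and scores are hypothetical, and the within-class z-score is just one simple choice of normalization): each raw score is standardized within its class, and a student’s adjusted “GPA” is the average of their standardized scores.

grades <- data.frame(student = c("a","a","b","b","c","c"),
                     class = c("math10","stat2","math10","stat2","math10","stat2"),
                     raw = c(90, 70, 80, 85, 60, 90))
grades$z <- ave(grades$raw, grades$class,
                FUN = function(s) (s - mean(s))/sd(s)) #z-score within class
tapply(grades$z, grades$student, mean) #difficulty-adjusted "GPA" per student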
It’s time to level the playing field for students, eliminate the possibility of manipulative grade inflation, and stop the hypocrisy. We need to not only preach statistical and computational thinking to our students, we need to practice it in every aspect of their education.
[Update April 6, 2014: The initial title of this post was “23andme genotypes are all wrong”. While that was and remains a technically correct statement, I have changed it because the readership of my blog, and this post in particular, has changed. Initially, when I made this post, the readers of the blog were (computational) biologists with extensive knowledge of genotyping and association mapping, and they could understand the point I was trying to make with the title. However in the past few months the readership of my blog has grown greatly, and the post is now reaching a wide public audience. The revised title clarifies that the content of this post relates to the point that low error rates in genotyping can be problematic in the context of genome-wide association reports because of multiple-testing.]
I have been reading the flurry of news articles and blog posts written this week about 23andme and the FDA with some interest. In my research talks, I am fond of displaying 23andme results, and have found that people always respond with interest. On the teaching side, I have subsidized 23andme testing for volunteer students in Math127 who were interested in genetics so that they could learn about personalized genomics first-hand. Finally, a number of my former and current students have worked at 23andme, and some are current employees.
Despite lots of opinions being expressed about the 23andme vs. FDA kerfuffle, I believe that two key points have been ignored in the discussions:
1. All 23andme genotypes that have ever been reported to customers are wrong. This is the case despite the very accurate genotyping technology used by 23andme.
2. The interpretation of 23andme results involves examining a large number of odds ratios. The presence of errors leads to a huge multiple-testing problem.
Together, these issues lead to an interesting conundrum for the company, for customers, and for the FDA.
I always find it useful to think about problems concretely. In the case of 23andme, it means examining actual genotypes. Fortunately, you don’t have to pay the company \$99 to get your own; numerous helpful volunteers have posted their 23andme genotypes online. They can be viewed at openSNP.org, where “customers of direct-to-customer genetic tests [can] publish their test results, find others with similar genetic variations, learn more about their results, get the latest primary literature on their variations and help scientists find new associations”. There are a total of 624 genotypes available at openSNP, many of them from 23andme. As an example, consider “samantha“, who in addition to providing her 23andme genotype, also provides lots of phenotypic information. Here is the initial part of her genotype file:
# This data file generated by 23andMe at: Wed Jul 20 20:37:11 2011
#
# Below is a text version of your data. Fields are TAB-separated
# Each line corresponds to a single SNP. For each SNP, we provide its identifier
# (an rsid or an internal id), its location on the reference human genome, and the
# genotype call oriented with respect to the plus strand on the human reference
# sequence. We are using reference human assembly build 36. Note that it is possible
# that data downloaded at different times may be different due to ongoing improvements
#
# http://www.ncbi.nlm.nih.gov/projects/mapview/map_search.cgi?taxid=9606&build=36
#
# rsid chromosome position genotype
rs4477212 1 72017 AA
rs3094315 1 742429 AG
rs3131972 1 742584 AG
rs12124819 1 766409 AA
rs11240777 1 788822 AA
rs6681049 1 789870 CC
rs4970383 1 828418 CC
rs4475691 1 836671 CC
rs7537756 1 844113 AA
rs13302982 1 851671 GG
rs1110052 1 863421 GT
...
Anyone who has been genotyped by 23andme can get this file for themselves from the website (by clicking on their name, then on “Browse Raw Data” from the pull-down menu, and then clicking on “Download” in the top-right corner of the browser window). The SNPs are labeled with rsid labels (e.g. rs3094315) and correspond to specific locations on chromosomes (e.g. chr1:742429). Since every human is diploid, two bases are shown for every SNP; one came from mom and one from dad. The 23andme genotype is not phased, which means that you can’t tell in the case of rs3094315 whether the A was from mom and the G from dad, or vice versa (it turns out paternal origin can be important, but that is a topic for another post).
A key question the FDA has asked, as it does for any diagnostic test, is whether the SNP calls are accurate. The answer is already out there. First, someone has performed a 23andme replicate experiment precisely to assess the error rate. In an experiment in 2010 with two replicates, 85 SNPs out of about 600,000 were different. Today, Illumina types around 1 million SNPs, so one would expect even more errors. Furthermore, a replicate analysis provides only a lower bound, since systematic errors will not be detected. Another way to examine the error rate is to look at genotypes of siblings. That was written about in this blog post which concluded there were 87 errors. 23andme currently uses the Illumina Omni Express for genotyping, and the Illumina spec sheet claims a similar error rate to those inferred in the blog posts mentioned above. The bottom line is that even though the error rate for any individual SNP call is very very low (<0.01% error), with a million SNPs being called there is (almost) certainly at least one error somewhere in the genotype. In fact, assuming a conservative error rate leading to an average of 100 errors per genotype, the probability that a 23andme genotype has no errors is less than 10^(-40).
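For concreteness, the back-of-envelope arithmetic can be done in a few lines of R (the error rate below is the conservative assumption from the text, not a measured quantity):

p_err <- 1e-4 #assumed per-SNP error rate (0.01%)
n_snps <- 1e6 #approximate number of SNPs genotyped
(1 - p_err)^n_snps #probability of an error-free genotype: about 4e-44
p_err * n_snps #expected number of errors per genotype: 100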
The fact that 23andme genotypes are wrong (i.e. at least one error in some SNP) wouldn’t matter if one was only interested in a single SNP. With very high probability, it would be some other SNPs that are the wrong ones. But the way people use 23andme is not to look at a single SNP of interest, but rather to scan the results from all SNPs to find out whether there is some genetic variant with large (negative) effect. The good news is that there isn’t much information available for the majority of the 1 million SNPs being tested. But there are, nevertheless, lots of SNPs (thousands) to look at. Whereas a comprehensive exam at a doctor’s office might currently constitute a handful of tests– a dozen or a few dozen at most– a 23andme test assessing thousands of SNPs and hundreds of diseases/traits constitutes more diagnostic tests on an individual at one time than have previously been performed in a lifetime.
To understand how many tests are being performed in a 23andme experiment, it is helpful to look at the Interpretome website. The website allows a user to examine information on SNPs without paying, and without uploading the data. I took a look at Samantha, and the Interpretome gave information about 2829 SNPs. These are SNPs for which there is a research article that has identified the SNP as significant in some association study (the website conveniently provides direct links to the articles). For example, here are two rows from the phenotype table describing something about Samantha’s genetic predisposition for large head circumference:
Phenotype                    SNP       Genotype  Risk allele  OR   p-value  PubMed ID
Head circumference (infant)  11655470  CC        T            .05  4E-6     22504419
Head circumference (infant)  1042725   CC        T            .07  3E-10    22504419
Samantha’s genotype at both loci is CC, the “risk” allele is T, the odds ratios are very small (0.05, 0.07) and the p-values are apparently significant. Interpretome’s results differ from those of 23andme, but looking at the diversity of phenotypes reported on gives one a sense of the possibilities that currently exist in genetics, and the scope of 23andme’s reports.
From the estimates of error rates provided above, and using the back of an envelope, it stands to reason that about 1/3 of 23andme tested individuals have an error at one of their “interesting” SNPs. Not all of the SNPs arising in association studies are related to diseases, but many of them are. I don’t think it’s unreasonable to postulate that a significant percentage of 23andme customers have some error in a SNP that is medically important. Whether such errors are typically false positives or false negatives is unclear, and the extent to which they may lead to significant odds ratios is another interesting question. In other words, it’s not good enough to know how frequently warfarin sensitivity is being called incorrectly. The question is how frequently some medically significant result is incorrect.
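The envelope calculation itself (again a sketch under the assumptions above; 2829 is the number of annotated SNPs the Interpretome reported for Samantha), which lands in the right neighborhood:

n_interesting <- 2829 #annotated SNPs, as in the Interpretome example above
p_err <- 1e-4 #assumed per-SNP error rate
1 - (1 - p_err)^n_interesting #chance of at least one error among them: ~0.25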
Of course, the issue of multiple testing as it pertains to interpreting genotypes is probably a secondary issue with 23andme. As many bloggers have pointed out, it is not even clear that many of 23andme’s odds ratios are accurate or meaningful. A major issue, for example, is the population background of an individual examining his/her genotype and how close it is to the population on which the GWAS were performed. Furthermore, there are serious questions about the meaning of the GWAS odds ratios in the case of complex traits. However I think the issue of multiple testing is a deeper one, and a problem that will only be exacerbated as more disease SNPs are identified. Having said that, there are also approaches that could mitigate errors and improve the fidelity of the tests. As deCODE genetics has demonstrated, imputation and phasing can in principle be used to infer population haplotypes, which not only are useful for GWAS analyses, but can also be used to identify erroneous SNP calls. 23andme’s problem is that although they have many genotypes, they are from diverse populations that will be harder to impute and phase.
The issue of multiple testing arising in the context of 23andme, and the contrast with classic diagnostics, reminds me of the dichotomy between whole-genome analysis and classic single gene molecular biology. The way in which customers are looking at their 23andme results is precisely to look for the largest effects, i.e. phenotypes where they appear to have high odds of contracting a disease, or being sensitive to some drug. This is the equivalent of genome scientists picking the “low hanging fruit” out of genome-wide experiments such as those performed in ENCODE. In genomics, scientists have learned (with some exceptions) how to interpret genome-wide analyses after correcting for multiple-hypothesis testing by controlling the false discovery rate. But are the customers of 23andme doing so? Is the company helping them do it? Should it? Will the FDA require it? Can looking at one’s own genotype constitute too much testing?
There are certainly many precedents for superfluous harmful testing in medicine. For example, the American Academy of Family Physicians has concluded that prostate cancer PSA tests and digital rectal exams have marginal benefits that are outweighed by the harm caused by following up on positive results. Similar arguments have been made for mammography screening. I therefore think that there are serious issues to consider about the implications of direct-to-consumer genetic testing and although I support the democratization of genomics, I’m glad the FDA is paying attention.
Samantha’s type 2 diabetes risk as estimated from her genotype by Interpretome. She appears to have a lower risk than an average person. Does this make it ok for her to have another cookie?
I visited Duke’s mathematics department yesterday to give a talk in the mathematical biology seminar. After an interesting day meeting many mathematicians and (computational) biologists, I had an excellent dinner with Jonathan Mattingly, Sayan Mukherjee, Michael Reed and David Schaeffer. During dinner conversation, the topic of probability theory (and how to teach it) came up, and in particular Buffon’s needle problem.
The question was posed by Georges-Louis Leclerc, Comte de Buffon in the 18th century:
Suppose we have a floor made of parallel strips of wood, each the same width, and we drop a needle onto the floor. What is the probability that the needle will lie across a line between two strips?
If the strips are distance $t$ apart, and the needle has length $l \leq t$, then it is easy to see that the probability $P$ is given by
$P = \int_{\theta =0}^{\frac{\pi}{2}} \int_{x = 0}^{\frac{l}{2}\sin \theta} \frac{4}{t \pi} \, dx \, d\theta = \frac{2l}{t \pi}$.
The appearance of $\pi$ in the denominator turns the problem into a Monte Carlo technique for estimating $\pi$: simply simulate random needle tosses and count crossings.
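The simulation takes only a few lines of R (a sketch with $t = l = 1$, so that the crossing probability is $2/\pi$):

set.seed(6)
n <- 1e6
x <- runif(n, 0, 1/2) #distance from the needle's center to the nearest line
theta <- runif(n, 0, pi/2) #acute angle between the needle and the lines
p_hat <- mean(x <= (1/2)*sin(theta)) #fraction of needles crossing a line
2/p_hat #Monte Carlo estimate of pi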
It turns out there is a much more elegant solution to the problem– one that does not require calculus. I learned of it from Gian-Carlo Rota when I was a graduate student at MIT. It appears in his book Introduction to Geometric Probability (with Dan Klain) that I have occasionally used when teaching Math 249. The argument relies on the linearity of expectation, and is as follows:
Let $f(l)$ denote the expected number of crossings when a needle of length $l$ is thrown on the floor. Now consider two needles, one of length $l$ and the other $m$, attached to each other end to end (possibly at some angle). If $X_1$ is a random variable describing the number of crossings of the first needle, and $X_2$ of the second, it’s certainly the case that $X_1$ and $X_2$ are dependent, but because expectation is linear, it is the case that $E(X_1+X_2) = E(X_1)+E(X_2)$. In other words, the total number of crossings is, in expectation, $f(l)+f(m)$.
Buffon’s needle problem: what is the probability that a needle of length $l \leq t$ crosses a line? (A) A short needle being thrown at random on a floor with parallel lines. (B) Two connected needles. The expected number of crossings is proportional to the sum of their lengths. (C) A circle of diameter $t$ always crosses exactly two lines.
It follows that $f$ is a linear function, and since $f(0)=0$, we have that $f(l) = cl$ where $c$ is some constant. Now consider a circle of diameter $t$. Such a circle, when thrown on the floor, always crosses the parallel lines exactly twice. If $C$ is a regular polygon with vertices on the circle, and the total length of the polygon segments is $l$, then the expected total number of crossings is $f(l)$. Taking the limit as the number of segments in the polygon goes to infinity, we find that $f(t \pi ) = 2$. In other words,
$f(t \pi) = c \cdot t \pi = 2 \Rightarrow c = \frac{2}{t \pi}$,
and the expected number of crossings of a needle of length $l$ is $\frac{2l}{t \pi}$. If $l < t$, the number of crossings is either 0 or 1, so the expected number of crossings is, by definition of expectation, equal to the probability of a single crossing. This solves Buffon’s problem, no calculus required!
The linearity of expectation appears elementary at first glance. The proof is simple, and it is one of the first “facts” learned in statistics– I taught it to my math 10 students last week. However the apparent simplicity masks its depth and utility; the above example is cute, and one of my favorites, but linearity of expectation is useful in many settings. For example I recently saw an interesting application in an arXiv preprint by Anand Bhaskar, Andy Clark and Yun Song on “Distortion of genealogical properties when the sample is very large“.
The paper addresses an important question, namely the suitability of the coalescent as an approximation to discrete time random mating models when sample sizes are large. This matters because population sequencing is starting to involve hundreds of thousands, if not millions, of individuals.
The results of Bhaskar, Clark and Song are based on dynamic programming calculations of various genealogical quantities as inferred from the discrete time Wright-Fisher model. An example is the expected frequency spectrum for random samples of individuals from a population. By frequency spectrum, they mean, for each k, the expected number of polymorphic sites with k derived alleles and n-k ancestral alleles under an infinite-sites model of mutation in a sample of n individuals. Without going into details (see their equations (8),(9) and (10)), the point is that they are able to derive dynamic programming recursions because they are computing the expected frequencies, and the linearity of expectation is what allows for the derivation of the dynamic programming recursions.
None of this has anything to do with my seminar, except for the fact that the expectation-maximization algorithm did make a brief appearance, as it frequently does in my lectures these days. I spoke mainly about some of the mathematics problems that arise in comparative transcriptomics, with a view towards a principled approach to comparing transcriptomes between cells, tissues, individuals and species.
The Duke Chapel. While I was inside someone was playing the organ, and as I stared at the ceiling, I could have sworn I was in Europe.
The International Society for Clinical Densitometry has an official position on the number of decimal digits that should be reported when describing the results of bone density scans:
• BMD (bone mineral density): three digits (e.g., 0.927 g/cm²).
• T-score: one digit (e.g., –2.3).
• Z-score: one digit (e.g., 1.7).
• BMC (bone mineral content): two digits (e.g., 31.76 g).
• Area: two digits (e.g., 43.25 cm²).
• % reference database: integer (e.g., 82%).
Are these recommendations reasonable? Maybe not. For example, they fly in the face of the recommendation in the “seminal” work of Ehrenberg (Journal of the Royal Statistical Society A, 1977), which is to use two decimal digits.
Two? Three? What should it be? This is what my Math 10 students always ask me.
I answered this question for my freshmen in Math 10 two weeks ago using an example based on a dataset from the paper Schwartz, M., “A biomathematical approach to clinical tumor growth,” Cancer 14 (1961): 1272–1294. The paper has a dataset consisting of the size of a pulmonary neoplasm over time:
A simple model for the tumor growth is $f(t) = a \cdot b^t$, and in class I showed how a surprisingly good fit can be obtained by interpolating through only two points ($t=0$ and $t=208$):
$f(0)= 1.8 \Rightarrow a \cdot b^0 = 1.8 \Rightarrow a = 1.8$.
Then we have that $f(208) = 3.5 \Rightarrow 1.8 \cdot b^{208} = 3.5 \Rightarrow b = \sqrt[208]{3.5/1.8} \approx 1.0032$.
The exponential function $f(t)=1.8 \cdot 1.0032^t$ is shown in the figure. The fit is surprisingly good considering it is based on an interpolation using only two points. The point of the example is that if one rounds the base 1.0032 to two decimal digits then one obtains $f(t) = 1.8 \cdot 1^t = 1.8$, a tumor that never grows, as opposed to one that grows exponentially. In other words, a small (quantitative) change in the assumptions (restricting the rate to intervals differing by 0.01) results in a major qualitative change in results: with two decimal digits the patient lives, with three… death!
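The arithmetic is short enough to sanity-check in a few lines of Python (a sketch of the computation above, not code from the paper):

a = 1.8
b = (3.5 / 1.8) ** (1 / 208)   # the 208th root, ~1.0032
f = lambda t, b: a * b ** t    # the interpolated model
print(f(208, b))               # recovers 3.5
print(f(500, b))               # keeps growing with three decimal digits
print(f(500, round(b, 2)))     # b rounds to 1.00, so the tumor never grows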
This simple example of decimal digit arithmetic illustrates a pitfall affecting many computational biology studies. It is tempting to believe that $\mbox{Qualitative} \subset \mbox{Quantitative}$, i.e. that focusing on qualitative analysis allows for the flexibility of ignoring quantitative assumptions. However, frequently the qualitative devil is in the quantitative details.
One field where qualitative results are prevalent, and therefore the devil strikes frequently, is network theory. The emphasis on searching for “universal phenomena”, i.e. qualitative results applicable to networks arising in different contexts, arguably originates with Milgram’s small world experiment that led to the concept of “six degrees of separation”, and with Watts and Strogatz’s theory of collective dynamics in small-world networks (my friend Peter Dodds replicated Milgram’s original experiment using email in “An experimental study of search in global social networks“, Science 301 (2003), p 827–829). In mathematics these ideas have been popularized via the Erdös number, which is the distance between an author and Paul Erdös in a graph where two individuals are connected by an edge if they have published together. My Erdös number is 2, a fact that is of interest only in that it divulges my combinatorics roots. I’m prouder of other connections to researchers who write excellent papers on topics of current interest. For example, I’m pleased to be distance 2 away from Carl Bergstrom via my former undergraduate student Frazer Meacham (currently one of Carl’s Ph.D. students) and the papers:
1. Meacham, Frazer, Dario Boffelli, Joseph Dhahbi, David IK Martin, Meromit Singer, and Lior Pachter. “Identification and Correction of Systematic Error in High-throughput Sequence Data.” BMC Bioinformatics 12, no. 1 (November 21, 2011): 451. doi:10.1186/1471-2105-12-451.
2. Meacham, Frazer, Aaron Perlmutter, and Carl T. Bergstrom. “Honest Signaling with Costly Gambles.” Journal of the Royal Society Interface 10, no. 87 (October 6, 2013): 20130469. doi:10.1098/rsif.2013.0469.
One of Bergstrom’s insightful papers where he exposes the devil (in the quantitative details) is “Nodal Dynamics, Not Degree Distributions, Determine the Structural Controllability of Complex Networks” by Cowan et al., PLoS One 7 (2012), e38398. It describes a not-so-subtle example of an unreasonable quantitative assumption that leads to intuition about network structural controllability that is, to be blunt, false. The example Carl critiques is from the paper “Controllability of complex networks” by Yang-Yu Liu, Jean-Jacques Slotine and Albert-László Barabási, Nature 473 (2011), p 167–173. The mathematics is straightforward: it concerns the dynamics of linear systems of the form
$\frac{d{\bf x}(t)}{dt} =-p{\bf x}(t) + A{\bf x}(t) + B{\bf u}(t)$.
The dynamics can be viewed as taking place on a graph whose adjacency matrix is given by the non-zero entries of A (an $n \times n$ matrix). The vector $-p$ (of size $n$) is called the pole of the linear system and describes the intrinsic dynamics at the nodes. The vector ${\bf u}$ (of size $m$) corresponds to external inputs that are coupled to the system via the $n \times m$ matrix B.
An immediate observation is that the vector p is unnecessary and can be incorporated into the diagonal of the matrix A. An element on the diagonal of A that is then non-zero can be considered to be a self-loop. The system then becomes
$\frac{d{\bf x}(t)}{dt} =A{\bf x}(t) + B{\bf u}(t)$
which is the form considered in the Liu et al. paper (their equation (1)). The system is controllable if there are time-dependent inputs ${\bf u}(t)$ that can drive the system from any initial state to any target end state. Mathematically, this is equivalent to asking whether the matrix $C=(B,AB,A^2B,\ldots, A^{n-1}B)$ has full rank (a classic result known as Kalman’s criterion of controllability). Structural controllability is a weaker requirement, in which the question is whether, given only the adjacency structure of A and B, there exist weights for the edges so that the weighted matrices satisfy Kalman’s criterion. The point of structural controllability is a theorem showing that structurally controllable systems are generically controllable.
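Kalman’s criterion is easy to state in code. Below is a small illustrative check (my sketch, with a made-up three-node chain driven at its first node):

import numpy as np

def controllable(A, B):
    # Kalman rank test: (A, B) is controllable iff
    # C = [B, AB, ..., A^(n-1) B] has rank n
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks)) == n

A = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])  # edges 1 -> 2 -> 3
B = np.array([[1.], [0.], [0.]])                          # single input at node 1
print(controllable(A, B))  # True: one input suffices for this chain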
The Liu et al. paper makes two points: the first is that if M is the size of a maximum matching in a given nxn adjacency matrix A, then the minimum m for which there exists a matrix B of size nxm for which the system is structurally controllable is m=n-M+1 (turns out this first point had already been made, namely in a paper by Commault et al. from 2002). The second point is that m is related to the degree distribution of the graph A.
The point of the Cowan et al. paper is to explain that the results of Liu et al. are completely uninteresting if $a_{ii}$ is non-zero for every i. This is because M is then equal to n (the maximum matching of A consists of every self-loop), and therefore the result of Liu et al. reduces to the statement that m=1, or equivalently, that structural controllability of real-world networks can always be achieved with a single control input.
Unfortunately for Liu and coauthors, barring a pathological canceling out of intrinsic dynamics with self-regulation, the diagonal elements $a_{ii}$ of A will be zero only if there are no intrinsic dynamics in the system (equivalently, $p_i=0$, or the time constants $\frac{1}{p_i}$ are infinite). I repeat the obvious by quoting from Cowan et al.:
“However, infinite time constants at each node do not generally reflect the dynamics of the physical and biological systems in Table 1 [of Liu et al.]. Reproduction and mortality schedules imply species-specific time constants in trophic networks. Molecular products spontaneously degrade at different rates in protein interaction networks and gene regulatory networks. Absent synaptic input, neuronal activity returns to baseline at cell-specific rates. Indeed, most if not all systems in physics, biology, chemistry, ecology, and engineering will have a linearization with a finite time constant.”
Cowan et al. go a bit further than simply refuting the premise and results of Liu et al. They avoid the naïve reduction of a system with intrinsic dynamics to one with self-loops, and provide a specific criterion for the number of nodes in the graph that must be controlled.
In summary, just as with the rounding of decimal digits, a (simple looking) assumption of Liu et al., namely that $p=0$, completely changes the qualitative nature of the result. Moreover, it renders false the thesis of the Liu et al. paper, namely that degree distributions in (real) networks affect the requirements for controllability.
Oops.
I recently completed a term as director of our Center for Computational Biology and one of the things I am proud of having accomplished is helping Brian McClendon, program administrator for the Center, launch a new Ph.D. program in computational biology at UC Berkeley.
The Ph.D. program includes a requirement for taking a new course, “Classics in Computational Biology” (CMPBIO 201), that introduces students to research projects and approaches in computational biology via a survey of classic papers. I taught the class last week and the classic paper I chose was “Comparison of biosequences” by Temple Smith and Michael Waterman, Advances in Applied Mathematics 2 (1981), p 482–489. I gave a preparatory lecture this past week in which I discussed the Needleman-Wunsch algorithm, followed by leading a class discussion about the Smith-Waterman paper and related topics. This post is an approximate transcription of my lecture on the Needleman-Wunsch algorithm:
The Needleman-Wunsch algorithm was published in “A general method applicable to the search for similarities in the amino acid sequence of two proteins“, Needleman and Wunsch, Journal of Molecular Biology 48 (1970), p 443–453. As with neighbor-joining or UniFrac that I just discussed in my previous blog post, the Needleman-Wunsch algorithm was published without an explanation of its meaning, or even a precise description of the dynamic programming procedure it is based on. In their paper 11 years later, Smith and Waterman write
“In 1970 Needleman and Wunsch [1] introduced their homology (similarity) algorithm. From a mathematical viewpoint, their work lacks rigor and clarity. But their algorithm has become widely used by the biological community for sequence comparisons.”
They go on to state that “The statement in Needleman-Wunsch [1] is less general, unclearly stated, and does not have a proof” (in comparison to their own main Theorem 1). The Theorem Smith and Waterman are referring to explicitly describes the dynamic programming recursion only implicit in Needleman-Wunsch, and then explains why it produces the “optimal” alignment. In hindsight, the Needleman-Wunsch algorithm can be described formally as follows:
Given two sequences $a$ and $b$ of lengths $n_1$ and $n_2$ respectively, an alignment $A(a,b)$ is a sequence of pairs:
$[(a_{i_1},b_{j_1}),(a_{i_2},b_{j_2}),\ldots : 1 \leq i_1 < i_2 < \cdots \leq n_1, \, 1 \leq j_1 < j_2 < \cdots \leq n_2 ]$.
Given an alignment $A(a,b)$, we let
$M = | \{ k: a_{i_k} = b_{j_k} \} |, X = | \{ k: a_{i_k} \neq b_{j_k} \} |$
and
$S = | \{ r: r \neq i_k \, \mbox{for any } k \} | + | \{ r: r \neq j_k \, \mbox{for any } k \} |$.
In other words, M counts the number of matching characters in the alignment, X counts the number of mismatches, and S the number of “spaces”, i.e. characters not aligned to a character in the other sequence. Since every character must be either matched, mismatched, or not aligned, we have that
$2 M + 2 X + S = n_1+n_2$.
Given scores (parameters) consisting of real numbers $m,x,s$, the Needleman-Wunsch algorithm finds an alignment that maximizes the function $m \cdot M + x \cdot X + s \cdot S$. Note that by the linear relationship between M, X and S described above, there are really only two free parameters, and without loss of generality we can assume they are $m$ and $x$. Furthermore, only $M$ and $X$ are relevant for computing the score of the alignment; when the scores are provided statistical meaning, one can say that $M$ and $X$ are sufficient statistics. We call them the summary of the alignment.
The specific procedure of the Needleman-Wunsch algorithm is to compute a matrix S recursively:
$S(i,j) = \mbox{max} \{ S(i-1,j) + s,\, S(i,j-1) + s,\, S(i-1,j-1) + m\,I(a_{i} = b_{j}) + x\,I(a_{i} \neq b_{j}) \}$
where $I(\cdot)$ is the indicator function, equal to 1 if its argument holds and zero otherwise, and an initialization step consists of setting $S(i,0)=s \cdot i$ and $S(0,j) = s \cdot j$ for all i,j. There are numerous generalizations and improvements to the basic algorithm, both in extending the scoring function to include more parameters (although most generalizations retain the linearity assumption), and in algorithms for trading off time against space (e.g. divide-and-conquer can be used to improve the space requirements from $O(n_1 n_2)$ to $O(n_1 + n_2)$).
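For concreteness, here is a direct Python transcription of the recursion (a sketch; the parameter values are arbitrary placeholders, and only the optimal score is returned, not the alignment):

def needleman_wunsch(a, b, m=1.0, x=-1.0, s=-2.0):
    n1, n2 = len(a), len(b)
    S = [[0.0] * (n2 + 1) for _ in range(n1 + 1)]
    for i in range(1, n1 + 1):
        S[i][0] = s * i
    for j in range(1, n2 + 1):
        S[0][j] = s * j
    for i in range(1, n1 + 1):
        for j in range(1, n2 + 1):
            diag = m if a[i - 1] == b[j - 1] else x
            S[i][j] = max(S[i - 1][j] + s,        # space in b
                          S[i][j - 1] + s,        # space in a
                          S[i - 1][j - 1] + diag) # match or mismatch
    return S[n1][n2]  # score of an optimal alignment

print(needleman_wunsch("ACGT", "AGT"))  # 1.0: three matches and one space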
An important advance in understanding the Needleman-Wunsch algorithm was the realization that the “scores” m, x and s can be interpreted as logarithms of probabilities if the sign on the parameters is the same, or as log-odds ratios if the signs are mixed. This is discussed in detail in the book “Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids” by Richard Durbin, Sean Eddy, Anders Krogh and Graeme Mitchison.
One insight from the statistical point of view is that if max and plus in the recursion are replaced with plus and times, and the logarithms of probabilities are replaced by the probabilities themselves, then the Needleman-Wunsch recursion computes the total probability of the pairs of sequences marginalized over alignments (i.e., it is the forward algorithm for a suitably defined hidden Markov model). Together with the backward algorithm, which consists of running the forward algorithm on the reversed sequences, this provides an efficient way to compute for every pair of nucleotides the posterior probability that they are aligned. In other words, there is a meaningful interpretation and useful application for the Needleman-Wunsch algorithm with the semi-ring $(\mathbb{R}_{-}, \oplus := \mbox{max}, \otimes := +)$ replaced with $(\mathbb{R}, \oplus := +, \otimes := \times)$.
In work in collaboration with Bernd Sturmfels, we realized that it also makes sense to run the Needleman-Wunsch algorithm with the polytope algebra $(\mathcal{P}, \oplus := \mbox{convex hull of union}, \otimes := \mbox{Minkowski sum})$. Here $\mathcal{P}$ is the set of convex polytopes in $\mathbb{R}^d$ for some d, polytopes are “added” using the geometric operation of the convex hull of the union of polytopes, and they are “multiplied” by taking Minkowski sum. The result allows one to find not only a single optimal alignment, but all optimal alignments. The details are discussed in a pair of papers:
L. Pachter and B. Sturmfels, Parametric inference for biological sequence analysis, Proceedings of the National Academy of Sciences, Volume 101, Number 46 (2004), p 16138–16143.
C. Dewey, P. Huggins, K, Woods, B. Sturmfels and L. Pachter, Parametric alignment of Drosophila genomes, PLoS Computational Biology, Volume 2, Number 6 (2006) p e73.
The first explains the mathematical foundations for using the result and is where we coined the term polytope propagation algorithm for the Needleman-Wunsch algorithm with the polytope algebra. The second paper illustrates an application to a whole-genome alignment of two Drosophilids and provides an alternative to polytope propagation (the beneath-beyond algorithm) that is much faster. The type of analysis performed in the paper is also called parametric alignment, a problem with a long history whose popularity was launched by Dan Gusfield who provided the first software program for parametric alignment called XPARAL.
The polytope propagation algorithm (Needleman-Wunsch with the polytope algebra) is illustrated in the figure below:
The convex polygon in the bottom right panel contains in its interior points corresponding to all possible summaries for the alignments of the two sequences. The optimal summaries are vertices of the polygon. For each vertex, the parameters for which the Needleman-Wunsch algorithm will produce a summary corresponding to that vertex form a cone. The cones together form a fan which is shown inside the polygon.
The running time of polytope propagation depends on the number of vertices in the final polytope. In our papers we provide bounds showing that polytope propagation is extremely efficient. In particular, for $d$ parameters, the number of vertices is at most $O(n^{\frac{d(d-1)}{(d+1)}})$. Cynthia Vinzant, formerly in our math department at UC Berkeley, has written a nice paper on lower bounds.
The fact that the Needleman-Wunsch algorithm can be run with three semi-rings means that it is particularly well suited to C++ thanks to templates that allow for the variables in the recursion to be abstracted. This idea (and code) is thanks to my former student Colin Dewey:
template<typename SemiRing>
void
alignGlobalLastRow(const string& seq1,
                   const string& seq2,
                   const typename SemiRing::Element match,
                   const typename SemiRing::Element mismatch,
                   const typename SemiRing::Element gap,
                   vector<typename SemiRing::Element>& row)
{
    typedef typename SemiRing::Element Element;
    const Element one = SemiRing::multiplicativeIdentity;
    // Initialize row
    row.resize(seq2.size() + 1);
    row[0] = one;
    // Calculate first row
    for (size_t j = 1; j <= seq2.size(); ++j)
        row[j] = gap * row[j - 1];
    // Calculate remaining rows; '+' and '*' are the semi-ring operations
    Element up, diag;
    for (size_t i = 1; i <= seq1.size(); ++i) {
        diag = row[0];
        row[0] *= gap;
        for (size_t j = 1; j <= seq2.size(); ++j) {
            up = row[j];
            if (seq1[i - 1] == seq2[j - 1]) {
                row[j] = match * diag + gap * (up + row[j - 1]);
            } else {
                row[j] = mismatch * diag + gap * (up + row[j - 1]);
            }
            diag = up;
        }
    }
}
To recap, the three semi-rings with which it makes sense to run this algorithm are:
1. $(\mathbb{R}_{-}, \oplus := \mbox{max}, \otimes := +)$
2. $(\mathbb{R}, \oplus := +, \otimes := \times)$
3. $(\mathcal{P}, \oplus := \mbox{convex hull of union}, \otimes := \mbox{Minkowski sum})$
The interpretation of the output of the algorithm with the three semi-rings is:
1. $S_{ij}$ is the score (probability) of a maximum alignment between the ith prefix of one sequence and the jth prefix of the other.
2. $S_{ij}$ is the total score (probability) of all alignments between the ith prefix of one sequence and the jth prefix of the other.
3. $S_{ij}$ is a polytope whose faces correspond to summaries that are optimal for some set of parameters.
The semi-rings in (2) and (3) lead to particularly interesting and useful applications of the Needleman-Wunsch algorithm. An example of the use of (2) is posterior-decoding based alignment, where nucleotides are aligned so as to maximize the sum of the posterior probabilities computed by (2). In the paper “Alignment Metric Accuracy” (with Ariel Schwartz and Gene Myers) we provide an interpretation in terms of an interesting metric on sequence alignments, and in
A.S. Schwartz and L. Pachter, Multiple alignment by sequence annealing, Bioinformatics 23 (2007), e24–e29
we show how to adapt posterior decoding into a (greedy) multiple alignment algorithm. The video below shows how it works, with each step consisting of aligning a single pair of nucleotides from some pair of sequences using posterior probabilities computed using the Needleman-Wunsch algorithm with semi-ring (2):
We later extended this idea in work together with my former student Robert Bradley in what became FSA published in the paper “Fast Statistical Alignment“, R. Bradley et al., PLoS Computational Biology 5 (2009), e1000392.
An interesting example of the application of Needleman-Wunsch with semi-ring (3), i.e. polytope propagation, is provided in the recent paper “The RNA Newton polytope and learnability of energy parameters” by Elmirasadat Forouzmand and Hamidreza Chitsaz, Bioinformatics 29 (2013), i300–i307. Using our polytope propagation algorithm, they investigate whether simple models for RNA folding can produce, for some set of parameters, the folds of sequences with solved structures (the answer is sometimes, but not always).
It is remarkable that forty-three years after the publication of the Needleman-Wunsch paper, and thirty-two years after the publication of the Smith-Waterman paper, the algorithms remain pertinent, widely used, and continue to reveal new secrets. True classics.
Where are the authors now?
Saul B. Needleman writes about numismatics.
Christian Wunsch is a clinical pathologist in the University of Miami health system.
Temple Smith is Professor Emeritus of Biomedical Engineering at Boston University.
Michael Waterman is Professor of Biological Sciences, Mathematics and Computer Science at the University of Southern California.
|
2017-12-17 08:16:59
|
http://openstudy.com/updates/50538079e4b02986d3704c03
|
• TuringTest
Why can I not seem to show that the volume of a spherical shell is $V \approx 4\pi R^2 d$, where R is the outer radius and d is the thickness?
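One standard route, assuming the thickness satisfies $d \ll R$, is to expand the difference of two ball volumes:
\begin{align*}
V &= \tfrac{4}{3}\pi R^3 - \tfrac{4}{3}\pi (R-d)^3 = \tfrac{4}{3}\pi\left(3R^2 d - 3R d^2 + d^3\right) \approx 4\pi R^2 d \quad \text{for } d \ll R.
\end{align*}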
|
2017-03-27 18:27:45
|
http://www.math.princeton.edu/events/seminars/algebraic-geometry-seminar/leaves-moduli-spaces-characteristic-p
|
# Leaves in moduli spaces in characteristic p
Tuesday, October 21, 2008 - 4:30pm to 6:30pm
We try to understand the geometry of the moduli space of polarized abelian varieties in characteristic p, e.g. the phenomenon that Hecke orbits blow up and down in a rather unpredictable way. Choose a point $x$ corresponding to a polarized abelian variety. We study the set $C(x)$ consisting of all moduli points of polarized abelian varieties which have the same $p$-adic and $\ell$-adic invariants. This turns out to be a locally closed subset. We discuss properties of these sets, which form a foliation of the related Newton polygon stratum. We give several applications.
Speaker:
Frans Oort
University of Utrecht and Columbia University
Event Location:
Fine Hall 322
|
2017-11-17 17:20:09
|
http://tex.stackexchange.com/questions/74880/algorithmicx-package-comments-on-a-single-line
|
# algorithmicx package comments on a single line
Is it possible in the algorithmicx package to have comments not aligned to the right side?
For example, I have code like this:
\begin{algorithm}[!ht]
\caption{My Algo.}
\label{myalgo}
\begin{algorithmic}
\State $\epsilon$ = 1.0;
\Comment{Explore Latency Dimension}
\While {explorationTime <= timeLimit}
\State $\epsilon$ = $\epsilon$ / 2;
\State calculateIncrements($\epsilon$);
\Comment{Explore L dimension}
\While {lQuery <= lUpperLimit}
\State Query (0, Query, bQuery, pQuery);
\If {result = WORKING}
\State mark points
\Comment{no need to explore more. we just want to stop over here.}
\State Break
\Else
\If {result = NOT WORKING}
\State mark from 0 to lQuery as NOT WORKING.
\EndIf
\EndIf
\State lQuery += lEpsIncr;
\EndWhile
\EndWhile
\State $calcPoints()$
\end{algorithmic}
\end{algorithm}
So what is happening is that the package always aligns the comments to the right side. But for this comment, \Comment{no need to explore more. we just want to stop over here.}, I would like to have it on a single line rather than on multiple lines aligned to the right. It becomes a little confusing for me.
Is it possible that we can have comments like -
> no need to explore more. we just want to stop over here.
Break
It should be aligned at the indentation level of the statements.
Modifying the comment macro is possible using \algrenewcomment, like
\algrenewcomment[1]{\(\triangleright\) #1}
The original \Comment command inserted an \hfill, which I've removed above. This would replace the existing \Comment command globally. However, you can also define your own (new) \LineComment command,
\algnewcommand{\LineComment}[1]{\State \(\triangleright\) #1}
and intermix it with the regular \Comment, like I did below:
\documentclass{article}
\usepackage{algorithm}% http://ctan.org/pkg/algorithms
\usepackage{algpseudocode}% http://ctan.org/pkg/algorithmicx
\algnewcommand{\LineComment}[1]{\State \(\triangleright\) #1}
\begin{document}
\begin{algorithm}[!ht]
\caption{My Algo.}\label{myalgo}
\begin{algorithmic}
\State $\epsilon$ = 1.0; \Comment{Explore Latency Dimension}
\While {explorationTime $\leq$ timeLimit}
\State $\epsilon = \epsilon / 2$;
\State calculateIncrements($\epsilon$);
\LineComment{Explore L dimension}
\While {lQuery $\leq$ lUpperLimit}
\State Query (0, Query, bQuery, pQuery);
\If {result = WORKING}
\State mark points
\LineComment{no need to explore more. we just want to stop over here.}
\State Break
\Else
\If {result = NOT WORKING}
\State mark from 0 to lQuery as NOT WORKING.
\EndIf
\EndIf
\State lQuery += lEpsIncr;
\EndWhile
\EndWhile
\State calcPoints()
\end{algorithmic}
\end{algorithm}
\end{document}
This is perfect ! Exactly what I wanted. Thanks :) – Raj Oct 1 '12 at 14:19
|
2014-11-26 22:02:10
|
https://dsp.stackexchange.com/questions/73813/transfer-function-estimation-from-frequency-response
|
# Transfer function estimation from frequency response
Let's assume that we know that we are dealing with a SISO second-order system for which we have the frequency response (magnitude and phase over a known frequency range ω). What methods would people use to fit the frequency response to a transfer function (i.e. transfer-function estimation)? What does this process look like?
Also, this looks like a one-liner in Matlab (see tfest). Is there a Python equivalent?
Here you can find an extensive discussion about transfer function estimation, and even source code.
Your problem can be expressed as $$F(2j \pi f_i) \approx g_i$$, where $$f_i$$, $$g_i$$ are your measurements. You can also express $$F(s) = N(s) / D(s)$$ as a parametric function, and then the parameters may be adjusted with a curve fit.
Python provides curve_fit, that can be directly applied to this problem as in this answer, with the difference that you want to apply for x in the imaginary axis. This can be improved by also passing the gradient in terms of the parameters as well.
Maybe you prefer to express your function in terms of poles and zeros; this may be especially convenient if you want to ensure stability (you can add to your cost function a Lagrange multiplier term enforcing $$\mathrm{real}(p) < 0$$):
import numpy as np

def pztf(omega, h, p, z):
    # transfer function with gain h, poles p, and zeros z,
    # evaluated on the imaginary axis at s = j*omega
    num = np.prod(1j*omega[None, :] - z[:, None], axis=0)
    den = np.prod(1j*omega[None, :] - p[:, None], axis=0)
    return h * num / den
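As a concrete sketch of the curve-fit suggestion (an assumed parameterization: one gain, one conjugate pole pair, one real zero; curve_fit only handles real data, so the real and imaginary parts of the measured response are stacked):

import numpy as np
from scipy.optimize import curve_fit

def fit_second_order(omega, g, p0):
    # g: complex frequency response measured at real frequencies omega
    def model(w, h, pr, pi, zr):
        p = np.array([pr + 1j * pi, pr - 1j * pi])  # conjugate pole pair
        z = np.array([zr + 0j])                     # single real zero
        F = h * np.prod(1j * w[None, :] - z[:, None], axis=0) \
              / np.prod(1j * w[None, :] - p[:, None], axis=0)
        return np.concatenate([F.real, F.imag])
    target = np.concatenate([g.real, g.imag])
    popt, _ = curve_fit(model, omega, target, p0=p0)
    return popt  # fitted h, pr, pi, zr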
Maybe you want to fit the logarithm of this; that may produce a better fit at low-gain frequencies.
There are also more analytic approaches. The so-called Levy method is interesting: $$F(s) = \frac{N(s)}{D(s)}$$ is equivalent to $$F(s) D(s) - N(s) = 0$$, and this can be expressed as a least-squares fit.
This is biased in the sense that errors close to the poles receive less weight. This can be mitigated by starting with $$D_0(s) = 1$$ and refining the estimate by iteratively fitting
$$\frac{1}{D_{i-1}(s)}\left( F(s) D_i(s) - N_i(s) \right) = 0$$
When it converges after some iterations, we have $$D_{i-1}(s) \approx D_i(s)$$, and hence
$$\frac{1}{D_{i-1}(s)}\left( F(s) D_i(s) - N_i(s) \right) \approx F(s) - \frac{N_i(s)}{D_i(s)}$$
That corresponds to the original problem (without the $$D(s)$$ factor).
If you want better accuracy at particular frequencies, you can achieve this by multiplying the whole expression by a given weight $$W(s)$$; the fitting iteration then becomes
$$\frac{W(s)}{D_{i-1}(s)}\left( F(s) D_i(s) - N_i(s) \right) = 0$$
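For reference, the basic (unweighted) Levy step is just linear least squares and fits in a few lines. A sketch, assuming D(s) is normalized so that its constant coefficient is 1:

import numpy as np

def levy_fit(omega, g, nb, na):
    # solve g_i * D(j w_i) - N(j w_i) = 0 in the least-squares sense,
    # linear in the coefficients of N (degree nb) and D (degree na, a_0 = 1)
    s = 1j * omega
    cols = [-(s ** k) for k in range(nb + 1)]        # -N(s) terms (b_k)
    cols += [g * s ** k for k in range(1, na + 1)]   # g * D(s) terms (a_k)
    A = np.column_stack(cols)
    rhs = -g                                         # from the fixed a_0 = 1 term
    A2 = np.vstack([A.real, A.imag])                 # stack real and imaginary parts
    r2 = np.concatenate([rhs.real, rhs.imag])
    theta, *_ = np.linalg.lstsq(A2, r2, rcond=None)
    b = theta[:nb + 1]
    a = np.concatenate([[1.0], theta[nb + 1:]])
    return b, a  # numerator and denominator coefficients, ascending powers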
You may want to check the Welch method to estimate the PSD; see the following link that describes the function: https://fr.mathworks.com/help/signal/ref/pwelch.html. The output transfer function estimate, H, is calculated by dividing Pyx by Pxx, where Pyx is the cross power spectral density of the input and output signals and Pxx is the auto-spectrum of the input signal.
Here is a use case in which the Welch method has been used to estimate a transfer function: https://fr.mathworks.com/help/dsp/ref/discretetransferfunctionestimator.html. Depending on the system's frequency response, you will have to tune the algorithm (overlap, number of points, etc.).
I have used the Welch PSD estimate in the past for a similar use case, to estimate the frequency response of a high-order analog filter.
Check this exchange too for a similar question : getting frequency response from input and output signal
I hope it helps.
Regards, MF
|
2021-06-15 19:39:00
|
https://math.stackexchange.com/questions/2403702/is-any-dual-metrizable-locally-convex-space-a-frechet-space
|
# Is any dual metrizable locally convex space a Frechet space?
The title basically says all of it.
If a normed space $F$ is a dual of a normed space $E$, then $F$ is a Banach space. I wonder if the same holds for Frechet spaces.
The strong dual $F$ of a locally convex space $E$ is complete provided $E$ is bornological, but I am not sure whether that is the case here. Perhaps the completion of $E$ is, though.
|
2019-09-21 05:17:24
|
https://mathoverflow.net/questions/315087/determinant-of-a-block-matrix-with-many-1s
|
# Determinant of a block matrix with many $-1$'s
For an array $$(n_1,...,n_k)$$ of non-negative integers and non-zero reals $$a_1,...,a_k$$, define a block matrix $$M$$ of size $$n=n_1+\cdots+n_k$$ as follows: The main diagonal has blocks of sizes $$n_i$$ and shapes $$M_i=J_{n_i}+a_i I_{n_i}=\begin{pmatrix} a_i+1&1&\cdots&1\\ 1&a_i+1&\ddots&\vdots\\ \vdots&\ddots&\ddots&1\\ 1&1&1&a_i+1\\ \end {pmatrix}$$ and all the other entries are $$-1$$.
Experimentally I have found $$\det(M)= \prod_{i=1}^k a_i^{n_i}\sum_{j=0}^k(2-j)2^{j-1}e_j =\prod a_i^{n_i}(1+e_1-4e_3-16e_4-48e_5-\cdots),$$ where $$e_j$$ is the $$j^{th}$$ elementary symmetric function of $$\dfrac{n_1}{a_1},\dots,\dfrac{n_k}{a_k}$$.
It should be a bit technical but not too hard to prove that by induction, but is there a more elegant way?
We have $$M = D - e e^T$$, where $$D$$ is a block diagonal matrix with diagonal blocks $$D_i = a_i I + 2 J_i$$, and $$e$$ is the all-ones vector of suitable dimension. By the matrix determinant lemma we have $$\det(M) = (1- e^TD^{-1}e) \det(D)$$ and $$\det(D) = \prod_i\det(D_i) = \prod_i (1+ 2\tfrac{n_i}{a_i})a_i^{n_i}.$$ Also, by the Sherman–Morrison formula, we have
$$D_i^{-1} = \mathrm{diag}(a_i^{-1}) - \frac{2\mathrm{diag}(a_i^{-1})J_i\mathrm{diag}(a_i^{-1})}{1+2 n_i/a_i}$$. So,
$$1-e^TD^{-1}e = 1- \sum_i\frac{n_i}{a_i} + \sum_i \frac{2(n_i/a_i)^2}{1+2 n_i/a_i} = 1 - \sum_i \frac{n_i/a_i}{1+2n_i/a_i}.$$ Therefore $$\det(M) = \left(1 - \sum_i \frac{n_i/a_i}{1+2n_i/a_i}\right) \prod_i \left(1+ 2\frac{n_i}{a_i}\right)a_i^{n_i} \\= \prod_{i=1}^k a_i^{n_i} \sum_{j=0}^k (2-j)2^{j-1}e_j$$
To complete Mahdi's answer, it suffices to show $$(1-\sum_i\frac{x_i}{1+2x_i})\prod_i(1+2x_i)=\sum_{j\ge 0} (2-j)2^{j-1}e_j.$$ Clearly $$\prod_i(1+ax_i)=\sum_{j\ge 0} a^je_j$$, which explains the $$2^je_j$$ term on the RHS.
Consider $$\frac{\partial}{\partial t}\prod_i(1+x_i+tx_i)|_{t=1}$$. On the one hand it is $$\sum_ix_i\prod_{k\neq i}(1+2x_k)$$ by product rule, which appears on the LHS. On the other hand it is $$\frac{\partial}{\partial t}\sum_{j\ge 0} (1+t)^{j}e_j|_{t=1}=\sum_{j\ge 0}j2^{j-1}e_j$$, which appears on the RHS.
Alternatively, one may just inspect the eigenvalues (whose product is the determinant).
Clearly, all the vectors whose support lies in the $$i$$th block, and whose coordinates sum up to $$0$$, are eigenvectors with eigenvalue $$a_i$$; hence $$a_i$$ is the eigenvalue with multiplicity (at least) $$n_i-1$$.
An invariant complement to the sum of the already found subspaces is the set $$V$$ of all block-constant vectors; so we need to check the determinant of the restriction onto $$V$$. A natural basis of $$V$$ consists of the vectors having ones in the $$i$$th block, and zeroes elsewhere. The matrix in this basis is $$A=\left( \begin{array}{ccccccccc} n_1+a_1& -n_2& -n_3& \cdots& -n_k\\ -n_1& n_2+a_2& -n_3& \cdots& -n_k\\ -n_1& -n_2& n_3+a_3& \cdots& -n_k\\ \vdots& \vdots& \vdots& \ddots& \vdots\\ -n_1& -n_2& -n_3& \cdots& n_k+a_k \end{array} \right),$$ whose determinant is $$n_1n_2\dots n_k$$ times the determinant of $$B=\left( \begin{array}{ccccccccc} 1+a_1/n_1& -1& -1& \cdots& -1\\ -1& 1+a_2/n_2& -1& \cdots& -1\\ -1& -1& 1+a_3/n_3& \cdots& -1\\ \vdots& \vdots& \vdots& \ddots& \vdots\\ -1& -1& -1& \cdots& 1+a_k/n_k \end{array} \right).$$ Its determinant may be computed either directly, or via the same matrix determinant lemma, yielding $$\det B=\prod_{i=1}^k\left(2+\frac{a_i}{n_i}\right) -\sum_{i=1}^k \prod_{\textstyle{1\leq j\leq k\atop j\neq i}} \left(2+\frac{a_j}{n_j}\right).$$ After substituting into $$\det M=\det B\cdot \prod_{i=1}^k n_ia_i^{n_i-1}$$ and expanding the brackets, we get the desired result. (Although, I would admit, the form in which it was obtained above does not look much worse to me.)
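Both derivations are easy to sanity-check numerically; here is a small sketch (arbitrary test values):

import numpy as np
from itertools import combinations
from math import prod

def det_formula(ns, As):
    x = [n / a for n, a in zip(ns, As)]
    k = len(x)
    # e_j: elementary symmetric functions of the n_i / a_i
    e = [sum(prod(c) for c in combinations(x, j)) for j in range(k + 1)]
    return prod(a ** n for n, a in zip(ns, As)) * \
           sum((2 - j) * 2 ** (j - 1) * e[j] for j in range(k + 1))

def det_direct(ns, As):
    # build M explicitly: blocks J + a_i I on the diagonal, -1 elsewhere
    n = sum(ns)
    M = -np.ones((n, n))
    pos = 0
    for ni, ai in zip(ns, As):
        M[pos:pos + ni, pos:pos + ni] = np.ones((ni, ni)) + ai * np.eye(ni)
        pos += ni
    return np.linalg.det(M)

ns, As = [2, 3, 1], [1.5, -0.7, 2.0]
print(det_direct(ns, As), det_formula(ns, As))  # the two values agree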
|
2021-01-24 13:19:05
|
https://community.wolfram.com/groups/-/m/t/1896178
|
# Epidemiological Models for Influenza and COVID-19
Posted 3 years ago
Posted 3 years ago
Thanks, your work has helped me to understand this better. Instead of relatively closed compartments, have you done any modeling of spread of disease through contact networks? Quarantine measures work backwards through networks from index cases to exposed cases, with the ultimate goal of limiting the disease space to the size of infected plus exposed network and hopefully eliminating contact with this space to the healthy network space. Information about the disease can spread much more quickly than the disease. In the internet age there might be a role for social media networks to offer messaging to close contacts to limit the spread of even the common cold.
Posted 3 years ago
No, no modeling on networks yet. You are quite right about the standard practice of tracing contact networks backwards in order to quarantine potentially exposed people. Unfortunately, those kinds of data are not available for this pandemic. Martcheva gives a good description and model on pp. 230-234 of her book An Introduction to Mathematical Epidemiology. Social media can play a role, and many people are posting constantly this week about steps to take to "flatten the curve".
Posted 3 years ago
-- you have earned the Featured Contributor Badge! Your exceptional post has been selected for our editorial column Staff Picks (http://wolfr.am/StaffPicks), and your profile is now distinguished by a Featured Contributor Badge and is displayed on the Featured Contributor Board. Thank you, keep it coming, and consider contributing to The Notebook Archive!
Posted 3 years ago
A more manageable set of background notebooks is now available, starting with: EpidemiologicalModelsForInfluenzaAndCOVID-19--part_1.nb. There is a TOC in each one to assist navigation among them.
Posted 3 years ago
I greatly enjoyed your video presentation. It seemed that the size of the susceptible population made a big difference, which you explained as mixing being inefficient, and limiting by quarantine. I have been playing with a much simpler logistic model, mainly to capture the day-to-day dynamics, with the assumption that the epidemic can only be stopped with effective quarantine: the Logistic Quarantine Model. The assumption is that the quarantine process is competing with the infection rate, but that the quarantine set can grow faster than the set of infected and exposed, such that the infected and exposed population is eventually contained in the quarantine set and that set becomes the limit to the susceptible set. I think if you incorporated this into your model, it could give a better estimate of the size of the susceptible set. The model has worked with China. The graph below uses a very simple measure on the reported numbers: the daily differences of the log of the daily accumulated case numbers, i.e. the continuous growth rate, which should be horizontal if the number of cases grows exponentially. The blue dots are the daily differences of log case number predicted by the logistic model on the last date of the data. Italy is shown below. Once the actual data closes in on the predicted, the quarantine has successfully limited the susceptible population. I suspect that with refinement, your model might eventually be sensitive enough to discover other dynamics. For instance, did the Chinese method of quarantine, putting infected people together in close quarters, increase the death rate? While the people were infected, they may not have developed much of an immune response, and the quarantine method permitted reinfection causing higher viral titers. Household quarantine also may guarantee higher infection rates of household members than if the infected person were removed from the household. In the community where I live in rural central Arizona, the population has decided to remove itself from the general population. The nearest reported infection is a hundred miles away. The Walmart and two supermarkets have been emptied and people are staying at home. Fortunately we are surrounded by three million acres of National Forest, so there is plenty of room to go hiking.
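The diagnostic described here is simple to compute; a minimal sketch with made-up cumulative counts:

import numpy as np

cumulative = np.array([14, 22, 35, 50, 70, 95, 120, 140, 152, 158])  # made-up counts
growth_rate = np.diff(np.log(cumulative))  # flat for pure exponential growth,
print(np.round(growth_rate, 3))            # falling toward zero as quarantine bites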
Posted 3 years ago
Congratulations! This is a very interesting and useful introduction to a highly topical issue. Following yesterday's fascinating presentation of the models, I tried unsuccessfully to run the notebook (Mathematica 12.1, Win 10 64). I am getting many error messages. Please advise! Attachments:
Posted 3 years ago
Thanks for your interest. Did you also download the package that's mentioned at the beginning of the initialization section: https://www.wolframcloud.com/obj/rnachbar/Published/CompartmentalModeling.wl? The code for KineticCompartmentalModel is in the package, and that's how the transitions are interpreted and the ODEs generated. I'll copy the notice about the package to the top of the notebook so it's more obvious.
Posted 3 years ago
Posted 3 years ago
Very good modeling. I was wondering if you could take into account herd immunity: as the number of recovered individuals goes up, the transmission rate goes down. Once 70% of the population is exposed, the rate of infection begins to drastically decrease. Hence it may take a long time for the last 30% to become infected. Another factor to consider with the data is the sensitivity of the test: at best it is 60%, and in real-world testing 40%.
Posted 3 years ago
Hi Robert, This was an excellent introduction and great modeling. My question is this: it seems like the modeling is trying to determine a few parameters across the whole population, when in fact we know that the parameters vary by age group. Why not consider breaking down the model into population groups of different ages, for instance 0-9 years, 10-19, 20-29, etc.? In this way, factors such as total population, mortality, effective community size, etc., can be independently optimized. Then the total number of infections, deaths, etc. can be summed over age groups. Cheers, -Kent
Posted 3 years ago
Indeed, that is on my To Do list. I can easily get the age distribution for a country from Wolfram|Alpha, so that will help with the initial conditions. However, to estimate the values for an age-stratified parameter we really need the number of confirmed cases, recovered cases, and deaths broken down by age too. There are some data on deaths for various ages. See the University of Basel model for an example using age groups. They use estimates based on an analysis of the recent data from China, and are not fitting them. Their interface allows the user to adjust them, also.
Posted 3 years ago
Running the EpidemiologicalModelsForInfluenzaAndCOVID-19--part_1.nb gives many errors. I have downloaded the CompartmentalModeling.wl file, put it in the notebook folder, and load it inside the notebook also. Example of the first error. Running the code:

{susceptibleSEIR, infectedSEIR, exposedSEIR, recoveredSEIR} =
  {\[ScriptCapitalS], \[ScriptCapitalI], \[ScriptCapitalE], \[ScriptCapitalR]} /.
    ParametricNDSolve[
      Join[odesSEIR /. forceOfInfectionSEIR,
        {\[ScriptCapitalS][0] == \[ScriptCapitalN] - I0, \[ScriptCapitalE][0] == 0, \[ScriptCapitalI][0] == I0, \[ScriptCapitalR][0] == 0}],
      Head /@ varsSEIR, {t, 0, 100}, {\[ScriptCapitalN], I0, \[Beta], \[Zeta], \[Gamma]}];

I get the errors:

ParametricNDSolve::dspar: 1.5 cannot be used as a parameter.

ReplaceAll::reps: {ParametricNDSolve[{(\[ScriptCapitalE]^\[Prime])[t]==0. -\[Zeta] \[ScriptCapitalE][t]+1.5 \[ScriptCapitalI][t] \[ScriptCapitalS][t],(\[ScriptCapitalI]^\[Prime])[t]==0. +\[Zeta] \[ScriptCapitalE][t]-0.0478 \[ScriptCapitalI][t],(\[ScriptCapitalR]^\[Prime])[t]==0. +0.0478 \[ScriptCapitalI][t],(\[ScriptCapitalS]^\[Prime])[t]==0. -1.5 \[ScriptCapitalI][t] \[ScriptCapitalS][t],\[ScriptCapitalS][0]==-I0+\[ScriptCapitalN],\[ScriptCapitalE][0]==0,\[ScriptCapitalI][0]==I0,\[ScriptCapitalR][0]==0},{\[ScriptCapitalE],\[ScriptCapitalI],\[ScriptCapitalR],\[ScriptCapitalS]},{t,0,100},{\[ScriptCapitalN],I0,1.5,\[Zeta],0.0478}]} is neither a list of replacement rules nor a valid dispatch table, and so cannot be used for replacing.

Set::shape: Lists {susceptibleSEIR,infectedSEIR,exposedSEIR,recoveredSEIR} and {\[ScriptCapitalS],\[ScriptCapitalI],\[ScriptCapitalE],\[ScriptCapitalR]} /. ParametricNDSolve[{(\[ScriptCapitalE]^\[Prime])[t]==0. -\[Zeta] \[ScriptCapitalE][t]+1.5 \[ScriptCapitalI][t] \[ScriptCapitalS][t],(\[ScriptCapitalI]^\[Prime])[t]==0. +\[Zeta] \[ScriptCapitalE][t]-0.0478 \[ScriptCapitalI][t],(\[ScriptCapitalR]^\[Prime])[t]==0. +0.0478 \[ScriptCapitalI][t],(\[ScriptCapitalS]^\[Prime])[t]==0. -1.5 \[ScriptCapitalI][t] \[ScriptCapitalS][t],\[ScriptCapitalS][0]==-I0+\[ScriptCapitalN],\[ScriptCapitalE][0]==0,\[ScriptCapitalI][0]==I0,\[ScriptCapitalR][0]==0},{\[ScriptCapitalE],\[ScriptCapitalI],\[ScriptCapitalR],\[ScriptCapitalS]},{t,0,100},{\[ScriptCapitalN],I0,1.5,\[Zeta],0.0478}] are not the same shape.
Posted 3 years ago
The package should be loaded automatically when the initialization cells are evaluated. The Cloud notebook was accidentally edited yesterday, and I thought I had restored the changes to the original state, but I might have missed something. I will test the whole notebook now. Thanks for reporting the problem.
Posted 3 years ago
Apparently you assigned the value 1.5 to the symbol \[Beta] and 0.0478 to \[Gamma]. I can reproduce your error messages when I do that. Also, if you look at the second message, you can see that the last argument to ParametricNDSolve is {\[ScriptCapitalN], I0, 1.5, \[Zeta], 0.0478}, which shows numeric values instead of symbols for \[Beta] and \[Gamma]. It would probably be best to quit and restart the Kernel to fix the problem. Please let me know if you did not assign values to \[Beta] and \[Gamma], because that would suggest that the Manipulates are "leaking" dynamic assignments, which I made every effort to code against.
Posted 3 years ago
Don't worry, everything is OK, my bad. :) After including Clear["Global`*"] at the start, everything runs fine. Probably some leftovers from a previous notebook. Thanks for all of this very good project!
Posted 3 years ago
thank you! - where can we find the notebook part_2 and other parts if any? thank you so much Gerhard
Posted 3 years ago
Posted 3 years ago
thank you- Gerhard
Posted 3 years ago
Hi, I hope you are okay. I want to apply your code to my data. How can I upload my data so that it has the same format as your data?
Posted 3 years ago
fitDataWDR returns a list of 3 data series. Each data series is a list of pairs of values of the form {time, number}. The first data series is the number of confirmed cases (not the cumulative number), the second is the number of recovered cases (which happens to be cumulative because it's a sink), and the third series is the number of deaths (also cumulative because it's a sink). For example, the first 5 elements of each series for Beijing are:

In[18]:= fitData = fitDataWDR[Entity["AdministrativeDivision", {"Beijing", "China"}], "21 Jan 2020", "dateRange" -> All];

In[19]:= Take[#, 5] & /@ fitData

Out[19]= {{{1., 14}, {2., 22}, {3., 35}, {4., 39}, {5., 66}}, {{1., 0}, {2., 0}, {3., 1}, {4., 2}, {5., 2}}, {{1., 0}, {2., 0}, {3., 0}, {4., 0}, {5., 0}}}

If you don't have data for one of those series, then leave it out of the list of series, and also leave the corresponding model out of the list of models given to sumSquaredError. Hope this helps!
Posted 3 years ago
When I ran the code in your last reply I got the following error. Can I work with the data for Argentina? Attachments:
Posted 3 years ago
Sorry for the delay in responding.In your notebook, you have In[19] := Take[#, 5] & /@ fitData which looks like a copy & paste error. Remove the "In[19]:=" from the code.
Posted 3 years ago
Once again: a very impressive project! The statistical fit does not work, though (see attached notebook). Please advise! Attachments:
Posted 3 years ago
I had updated the package last week, but unfortunately missed two necessary changes needed in the full version notebook.There is an updated version of that notebook in the Cloud now. It has today's date for the most recent revision.
Posted 3 years ago
Any pointer to that notebook?
Posted 3 years ago
Posted 3 years ago
Thank you. I am testing them.
Posted 3 years ago
This would be a great example of Dynamic. https://public.flourish.studio/visualisation/1712761/?fbclid=IwAR1YOkUVgzFG3QkaflyZZNoalYXxl59Kv-OPLoUUWwpYjxbM9fDinveDChk
Posted 3 years ago
Many thanks for your contribution. Excellent work. But what would happen if it were used with QuantileRegression.m?
Posted 3 years ago
Thank you for your suggestion. At the moment, the bigger issue is finding the right structural form for the model, that is, the compartments and their connections, and adequately mapping them to the available data. Once that is done, then perhaps using quantile regression might provide more robust estimates of the parameters for more reliable predictions.There is also an issue with parameter identifiability, regardless of the estimation method. This will take a certain amount of domain knowledge to properly choose which parameters to hold fixed and which to estimate.
Posted 3 years ago
Hi Robert, I am trying to access the covid19 data for Iran by using the following: fitData = fitDataWDR[Entity["Country", {"Iran"}], "1 March 2020", "dateRange" -> All]; but am getting the following error: fitDataWDR::locnf: Locale Entity[Country,{Iran}] not found. Could you please tell me what I am doing wrong here. Regards
Posted 3 years ago
There is a syntax error in the Entity specification. Use fitData = fitDataWDR[Entity["Country", "Iran"], "1 March 2020", "dateRange" -> All];
Posted 3 years ago
Have you considered adding quarantine components to the SEIsaRD model? It seems that would be the next logical model improvement step, especially for the Chinese data, which had a very aggressive quarantine implemented. Then add multiple different population cohorts (child, young adult, middle-age adult, old adult, compromised health for all age groups). Of course that is a lot more work!
Posted 3 years ago
Thanks for a really nice post and nice work! I've made my own model that includes quarantine, but instead of creating a compartment (state variable) for the quarantined, I simply made beta a time-varying function that drops to a certain level once quarantine is started. I guess that's a good enough approximation even though it might not catch all the dynamics. Now my question. Looking at e.g. the Hubei outbreak you get beta = 0.79 and gamma = 0.04, which would correspond to an R0 of 19.8 (beta/gamma). But I've seen work estimating R0 at between 2 and 2.5 for corona. Why the discrepancy? Regards Peter Aronsson
Posted 2 years ago
Hi, I have a problem with notebook 1. I already downloaded the package and put it in the same directory.
Posted 2 years ago
Indeed you do! How did you "download" it? Using the "Download" button in the Wolfram Cloud, using the "Open from Cloud..." button on the Desktop Mathematica Welcome screen, or using the File > Open from Wolfram Cloud... menu all work OK. The only time I've seen this behavior is with NotebookOpen["https://www.wolframcloud.com/obj/rnachbar/Published/EpidemiologicalModelsForInfluenzaAndCOVID-19--part_1.nb"] from the desktop. Try one of the three methods I first mentioned. PS. I see that you're using Mathematica 11.3. Some things may not render correctly when the notebook is first opened (e.g., some cell styles are not defined), but reevaluating inputs should take care of that.
Posted 2 years ago
Congratulations! This post was featured in the Wolfram Blog Wolfram Community Takes on Current Events: COVID-19 Data, Green Spaces, Homeschool Puzzles and More. We are looking forward to your future contributions.
|
2023-02-04 19:18:35
|
https://lw2.issarice.com/users/idan-arye
## Posts
Comment by Idan Arye on The Control Group Is Out Of Control · 2021-06-23T16:38:59.977Z · LW · GW
January 2021 witnessed the GameStop short squeeze, where many small investors, self-organized via Reddit, bought a stock in order to hold it and cause financial damage to several hedge funds that had shorted it. It was all over the news and was eventually defused when the brokerage companies sold their clients' stocks without their consent.
This resolution triggered great outrage. The traders and their supporters claimed that hedge funds were toying with the economy for a long time now, ruining companies and the families who depended on them, and it was considered okay because they played by the rules. Now that the common folks play by the same rules - the rules were changed so that they cannot play.
(To be fair - the brokerage companies that sold their stocks did have a legal standing in doing so. But this is just an anecdote for my main point, so I'd rather not delve into this technicality)
This post was written years before that, but the sentiment is timeless. Is it really okay to constantly change the rules of science just to deny access to a certain group?
Comment by Idan Arye on Debunked And Well-Refuted · 2021-06-16T23:23:23.975Z · LW · GW
If you've never acknowledged that other study, there is a possibility that you'll consider it objectively once introduced to it.
Comment by Idan Arye on Don't Sell Your Soul · 2021-04-09T01:06:28.147Z · LW · GW
Section IV, clause A:
Buyer and Seller agree that the owner of the Soul may possess, claim, keep, store, offer, transfer, or make use of it in whole or in part in any manner that they see fit to do so, conventional or otherwise, including (but not limited to) the purposes described in this Section (IV). Example uses of the Soul which would be permitted under these terms include (but are not limited to):
• ...
• Long term storage, usage, or preservation of the Soul in a state which would prevent it from taking the course of development, evolution, or relocation it may otherwise take naturally or due to the actions or material status of the Seller.
Am I interpreting it wrong, or is this clause permitting the buyer to kill the seller?
Comment by Idan Arye on Strong Evidence is Common · 2021-03-14T18:58:41.736Z · LW · GW
Isn't that the information density for sentences? With all the conjunctions, and with the limited set of different words that can appear in each position of a sentence, it's not that surprising we only get 1.1 bits per letter. But names should be more information dense - maybe not the full 4.7 (because some names just don't make sense) but at least 2 bits per letter, maybe even 3?
I don't know where to find (or how to handle) a big list of full names, so I'm settling for the (probably partial) lists of first names from https://www.galbithink.org/names/us200.htm (picked because the plaintext format is easy to process). I wrote a small script: https://gist.github.com/idanarye/fb75e5f813ddbff7d664204607c20321
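The headline "entropy per letter" figure can be computed along these lines (a hypothetical reconstruction of the script's core, not the gist's actual code; the name counts below are placeholders for the parsed frequency list):

```python
from math import log2
from collections import Counter

# Placeholder for the parsed "name -> frequency" list from the linked site
names_and_counts = {"mary": 417, "ashley": 389, "jesse": 351, "alice": 340}

# Weight each letter by how often it occurs across all name instances
letter_weights = Counter()
for name, count in names_and_counts.items():
    for letter in name:
        letter_weights[letter] += count

total = sum(letter_weights.values())
entropy = -sum((w / total) * log2(w / total) for w in letter_weights.values())
print(f"Entropy per letter: {entropy}")
```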
When I run it on the list of female names from the 1990s I get this:
```
$ ./names_entropy.py https://www.galbithink.org/names/s1990f.txt
Entropy per letter: 1.299113499617074
Any of the 5 rarest name are 1:7676.4534883720935
Bits for rarest name: 12.906224226276189
Rarest name needs to be 10 letters long
Rarest names are between 4 and 7 letters long
#1 Most frequent name is Christin, which is 8 letters long
Christin is worth 5.118397576228959 bits
Christin would needs to be 4 letters long
#2 Most frequent name is Mary, which is 4 letters long
Mary is worth 5.380839995073667 bits
Mary would needs to be 5 letters long
#3 Most frequent name is Ashley, which is 6 letters long
Ashley is worth 5.420441711983749 bits
Ashley would needs to be 5 letters long
#4 Most frequent name is Jesse, which is 5 letters long
Jesse is worth 5.4899422055346445 bits
Jesse would needs to be 5 letters long
#5 Most frequent name is Alice, which is 5 letters long
Alice is worth 5.590706018293878 bits
Alice would needs to be 5 letters long
```

And when I run it on the list of male names from the 1990s I get this:

```
$ ./names_entropy.py https://www.galbithink.org/names/s1990m.txt
Entropy per letter: 1.3429318549784128
Any of the 11 rarest name are 1:14261.4
Bits for rarest name: 13.799827993443198
Rarest name needs to be 11 letters long
Rarest names are between 4 and 8 letters long
#1 Most frequent name is John, which is 4 letters long
John is worth 5.004526222833823 bits
John would needs to be 4 letters long
#2 Most frequent name is Michael, which is 7 letters long
Michael is worth 5.1584658860672485 bits
Michael would needs to be 4 letters long
#3 Most frequent name is Joseph, which is 6 letters long
Joseph is worth 5.4305677416620135 bits
Joseph would needs to be 5 letters long
#4 Most frequent name is Christop, which is 8 letters long
Christop is worth 5.549228103371756 bits
Christop would needs to be 5 letters long
#5 Most frequent name is Matthew, which is 7 letters long
Matthew is worth 5.563161441124633 bits
Matthew would needs to be 5 letters long
```
So the information density is about 1.3 bits per letter. Higher than 1.1, but not nearly as high as I expected. But - the rarest names in these lists are about 1:14k - not 1:1m like the OP's estimation. Then again - I'm only looking at given names - surnames tend to be more diverse. But that would also give them higher entropy, so instead of trying to figure out how to scale everything, let's just go with the given names, which I have numbers for (for simplicity, assume these lists I found are complete).
So - the rare names are about half as long as the number of letters required to represent them. The frequent names are anywhere between the number of letters required to represent them and twice that amount. I guess that is to be expected - names are not optimized to be an ideal representation, after all. But my point is that the amount of evidence needed here is not orders of magnitude bigger than the amount of information you gain from hearing the name.
Actually, due to what entropy is supposed to represent, on average the amount of information needed is exactly the amount of information contained in the name.
Comment by Idan Arye on Strong Evidence is Common · 2021-03-14T12:49:05.898Z · LW · GW
The prior odds that someone’s name is “Mark Xu” are generously 1:1,000,000. Posterior odds of 20:1 implies that the odds ratio of me saying “Mark Xu” is 20,000,000:1, or roughly 24 bits of evidence. That’s a lot of evidence.
There are 26 letters in the English alphabet. Even if, for simplicity, our encoding ignores word boundaries and message ending, that's $\log_2 26 \approx 4.7$ bits per letter, so hearing you say "Mark Xu" (6 letters) is 28.2 bits of evidence total - more than the 24 bits required.
Of course - my encoding is flawed. An optimal encoding should assign "Mark Xu" fewer bits than, say, "Rqex Gh" - even though they both have the same number of letters. And "Maria Rodriguez" should be assigned an even shorter message, even though it has more than twice the letters of "Mark Xu".
Measuring the amount of information carried by messages is not as easy in actual real-life cases as it is in theory...
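The arithmetic from the quote and the reply is easy to check (a quick sketch using the post's numbers):

```python
from math import log2

def bits_of_evidence(posterior_odds, prior_odds):
    # Evidence in bits is the log2 of the likelihood ratio that moves
    # the prior odds to the posterior odds
    return log2(posterior_odds / prior_odds)

# Prior odds 1:1,000,000 that a name is "Mark Xu"; posterior odds 20:1
print(bits_of_evidence(20 / 1, 1 / 1_000_000))  # ~24.25 bits
# Naive information content of a 6-letter name over a 26-letter alphabet
print(6 * log2(26))                             # ~28.2 bits
```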
Comment by Idan Arye on Defending the non-central fallacy · 2021-03-12T23:44:00.922Z · LW · GW
Realistically, how high would the tax burden have to be for you to accept those costs of secession?
France's 2015 taxes of 75% made rich people secede, so we can take that as a supremum on the minimal tax burden that can make people secede. Of course - France's rich didn't have to go live in the woods - they had the option to go to other countries. Also, they did not have the option to not go to any country, because all the land on earth is divided between the countries.
I agree that the main benefit for the rich to remain under the state's rule and pay taxes is to be able to do business with its citizens. And of course - to be able to pass through the land - otherwise they won't be able to physically do said business. So the core question is:
Does the state have the right to prevent its citizens from doing business with whoever they want?
They exercise that power - that's a fact. They send the police to stop businesses that are not licensed by the state. But should this be considered an act of violence, or an act of protecting their property?
Comment by Idan Arye on Defending the non-central fallacy · 2021-03-12T00:53:11.322Z · LW · GW
I think there is some academic merit in taking this example to the extreme and assuming that the rich person is responsible for 100% of the community's resources, that they alone can fund its entire activity, and that if they secede the community is left with nothing. They can't protect people in their streets because they can't afford a police force. They can't punish criminals because they can't afford a prison. They may be left with their old roads, but without maintenance those quickly wear out while the rich person can build new ones. Their permission to do business means nothing because they have no means to enforce it (no police) - they can't even mount a credible embargo, because the rich person is the only one who can offer jobs and the only one who has goods to sell, so the incentive to break the embargo is huge. The rich person has all the power and zero incentive to give in to the community, which would take it away and give their "fair share" of it in return.
Of course - this extreme scenario never happens in real life, because in real life there are always alternatives. There are more rich people, to begin with, so no single rich person can hold all the power. People can start their own business, breaking the 100% dependency on the rich class from our example. And - maybe most importantly - modern society has a huge middle class that holds (as a socioeconomic class) a considerable share of the power.
So, a real life rich person cannot have a full Shapley value like our hypothetical rich person, and the poor people's Shapley value is more than zero. Still - a rich person's Shapley value is much much higher than a poor person's, and therefore there is a point where taxation is heavy enough to make it worthwhile for them to secede.
Comment by Idan Arye on Defending the non-central fallacy · 2021-03-11T22:55:39.722Z · LW · GW
I was replying to ShemTealeaf's claim that the rich person still has an incentive to stay - remaining under the protection of the community's court system. I was arguing that what the rich person needs from the community's court system is not its resources (which the rich person was providing anyway, and which would dry out once they secede) but its social norms - the people's agreement to respect its laws, which means they would not attack the rich person. My point is that if the rich person's incentive to stay is to not get robbed and killed by the community - then we can't really say that they are allowed to opt out.
Of course - if the poor people who remain in the community will not attack the rich person once they leave - then they are indeed allowed to opt out, but in that case their incentive to stay is gone.
Comment by Idan Arye on Defending the non-central fallacy · 2021-03-11T20:15:46.582Z · LW · GW
In this hypothetical scenario, the rich person was the sole source of funding for the community's services. Once they opt out, the community will no longer be able to pay the police, and since all the police salaries came from the rich person's pockets - the rich person will be able to use the same amount of money previously used to pay the police force to finance their own private security.
Same for all the other services the community was providing.
Of course, the community will still have all the infrastructure and equipment that was purchased with the rich person's taxes in the past, and the rich person will start with nothing - but this is just a temporary setback. In a few years the rich person will build new infrastructure and the community's infrastructure will not hold for long if they keep using it without being able to afford its maintenance.
This leaves us with the core community service the rich person was enjoying. The only service that does not (directly) cost money to provide. Social norms.
As you said - once the rich person opts out of the community, the members of the community are no longer obliged to refrain from robbing or killing them. And they have an incentive to do so. They may no longer be able to pay their police in the long run, but it'll take some time for all the cops to quit and it'll take some time for the rich person to build their own security force (unless they have prepared it in advance? They probably have), so if they act quickly enough they can launch an attack and have a good chance at winning. And even if they get delayed and the balance of armed forces shifts - large enough masses of poor people can take down the rich with their armed guards.
So this is what's going to stop the rich person from opting out. The threat of violence if they do so. In that light - can we still say they are allowed to opt out?
Comment by Idan Arye on Defending the non-central fallacy · 2021-03-10T16:43:27.967Z · LW · GW
Most[1] logical fallacies are obvious when arranged in their canonical pattern, but when you encounter them in the wild they are usually transformed by rhetoric to mask that pattern. The "lack of rhetorical skills", then, may not be bad argumentation by itself - but it does help expose it. If a pickpocket is caught in the act, it won't help them to claim that they were only caught because they were not dexterous enough and that it's unfair to put someone in jail for a lack of skill. The fact remains that they tried to steal, and it would still be a crime if they were proficient enough to succeed. Similarly, just because one's rhetorical skills are not good enough to mask a bad argument does not make it a good argument.
A more important implication of my take on the nature of logical fallacies is that it is not enough to show that an argument fits the fallacy's pattern - the important part of countering it is showing how, when rearranged in that pattern, the argument loses its power to convince. If it still makes sense even in that form, the accusation of fallacy carries no weight.
Note that in all of Scott's examples, he never just said "X is a noncentral member of Y" and left it at that. He always said "we usually hate Y because most of its members share the trait Z, but X is not Z and only happens to be in Y because of some other trait W, which we don't have such strong feeling about".
So, if we take your first example (the one about eating meat) and fully rearrange it by the noncentral fallacy not only with X and Y but also with Z and W, the counter-argument would look something like that:
It's true that animal farming (X) is technically cruelty (Y), but the central members of cruelty are things like torture and child abuse. What these things have in common is that they hurt humans (Z), and this is the reason why we should frown upon cruelty. Animal farming does not share that trait. Animal farming is only included in the cruelty category because it involves involuntary suffering (W) - a trait that we don't really care about.
Does this breakdown make the original argument lose its punch? Not really. Certainly not as much as breaking down the "MLK was a criminal" argument to the noncentral fallacy pattern makes that argument lose its punch. Here, at most, the breakdown exposes the underlying reasoning, and shifts the discussion from "whether or not meat is technically a cruelty" to "to what extent do animals deserve to be protected from involuntary suffering".
Which is a good thing. I believe the goal of noticing logical fallacies is not to directly disprove claims, but to strip them of their rhetorical dressing and expose the actual argument underneath. That underlying argument can be bad, or it can be good - but it needs to be exposed before it can be properly discussed.
1. I say "most", but the only exception I can think of is the proving too much fallacy. And even then - that's only because there is no common template like other fallacies have. But that doesn't mean that arguments that inhibit that fallacy cannot be transformed to expose it - in this case, to normalize the fallacy one has to reshape it to a form where the claim, instead of being a critical part of its logic, is just a placeholder that can contain anything and still make the same amount of sense.
So, there is still an normal form involved. But instead of a normal form for the fallacy, the proving too much fallacy is about finding the normal form of the specific argument you are trying to expose the fallacy in, and showing how that form can be used for proving too much. I guess this makes the proving too much fallacy a meta-fallacy? ↩︎
Comment by Idan Arye on Privacy vs proof of character · 2021-02-28T22:46:23.371Z · LW · GW
If Alice can sacrifice her privacy to prove her loyalty, she'll be forced to do so to avoid losing to Bob - who already sacrificed his privacy to prove his loyalty and not lose to Alice. They both sacrificed their privacy to gain an advantage over each other, and ended up without any relative advantage gained. Moloch wins.
Comment by Idan Arye on Coincidences are Improbable · 2021-02-24T19:53:57.048Z · LW · GW
Coincidences can be evidence for correlation and therefore evidence for causation, as long as one remembers that evidence - like more things than most people feel comfortable with - is quantitative, not qualitative. A single coincidence, or even multiple coincidences, can make a causation less improbable - but it can still be considered very improbable until we get much more evidence.
Comment by Idan Arye on Oliver Sipple · 2021-02-20T21:53:19.671Z · LW · GW
Manslaughter? Probably not - you did not contribute to that person's death. You are, however, guilty of:
1. Desecration of the corpse.
2. Obstructing the work of the sanitation workers (it's too late for paramedics) that can't remove the body from the road because of the endless stream of cars running over it.
3. You probably didn't count 100k vehicles running over that body. A bystander who stayed there for a couple of days could have, but since you are one of the drivers you probably only witness a few cars running over that person - so as far as you know there is a slim chance they are still alive.
I may be taking the allegory too far here, but I feel these offenses can map quite well. Starting from the last - being able to know that all the damage is done. In Sipple's case, this is history so it's easy to know that all the damage was already done. He can't be outed again. His family will not be harassed again by their community, and will not estrange him again. His life will not be ruined again, and he will not die again.
Up next - interfering with the efforts to make things better. Does this really happen here? I don't think so. On the contrary - talking about this, establishing that this is wrong, can help prevent this from happening to other people. And it's better to talk about cases from the past, where all the damage is already done, than about current cases that still have damage potential.
This leaves us with the final issue - respecting the dead. Which is probably the main issue, so I could have just skipped the other two points, but I took the trouble of writing them so I might as well impose on you the trouble of reading them. Are we really disrespecting Oliver Sipple by talking about him?
Given all that - I don't think talking about this case should be considered as a violation of Sipple's wish to not be outed.
Comment by Idan Arye on Oliver Sipple · 2021-02-20T12:26:35.310Z · LW · GW
Is pulling the lever after the trolley had passed still a murder?
Comment by Idan Arye on Luna Lovegood and the Chamber of Secrets - Part 11 · 2020-12-28T14:50:59.653Z · LW · GW
Even if you could tell - Voldemort was Obliviated while knocked out and then transfigured before having the chance to wake up, so there never was an opportunity to verify that the Obliviation worked.
Comment by Idan Arye on Luna Lovegood and the Chamber of Secrets - Part 6 · 2020-12-10T16:25:45.232Z · LW · GW
I don't think so - the Vow is not an electric collar that shocks Harry every time he tries to destroy the world. This would invite ways to try and outsmart the Vow. Remember - the allegory here is to AI alignment. The Vow is not just giving Harry deterrents - it modifies his internal reasoning and values so that he would avoid world destruction.
Comment by Idan Arye on The Incomprehensibility Bluff · 2020-12-07T17:35:50.794Z · LW · GW
One thing to keep in mind is that even if it does seem likely that the suspected bluffer is smarter and more knowledgeable than you, the bar for actually working on the subject is higher than the bar for understanding a discussion about it. So even if you are not qualified enough to be an X researcher or an X lecturer, you should still be able to understand a lecture about X.
Even if the gap between you two is so great that they can publish papers on the subject and you can't even understand a simple lecture, you should still be able to understand some of that lecture. Maybe you can't follow the entire derivation of an equation but you can understand the intuition behind it. Maybe you get lost in some explanation but can understand an alternative example.
Yes - it is possible that you are so stupid and so ignorant, and the other person such a brilliant expert, that even with your sincere effort to understand and their sincere effort to explain as simply as possible you still can't understand a single bit of it, because the subject really is that complicated. But at this point the likelihood of this scenario, with all these conditions, is low enough that you should seriously consider the option that they are just bluffing.
Comment by Idan Arye on Luna Lovegood and the Chamber of Secrets - Part 5 · 2020-12-06T01:55:02.484Z · LW · GW
By the way, I wouldn't be surprised if "the end of the world" is Moody's stock response to "what's the worst that could happen?" in any context.
(this is no longer spoiler so we no longer need to hide it)
I'm not sure about that. That could be Harry's stock response - "there was always a slight probability for the end of the world and this suggestion will not completely eliminate that probability". But Moody's? I would expect him to quickly make a list of all the things that could go wrong for each suggested course of action.
Comment by Idan Arye on Luna Lovegood and the Chamber of Secrets - Part 5 · 2020-12-05T16:21:37.540Z · LW · GW
Are potential HPMOR spoilers acceptable in the comments here? I'm not really sure - the default is to assume they aren't, but the fanfic itself contains some, so to be sure I'll hide it just in case:
Can Harry really discuss the idea of destroying the world so casually? Shouldn't his unbreakable oath compel him to avoid anything that can contribute to it, and abandon the idea of building the hospital without permit as soon as Moody jokes (is that the correct term when talking about Moody?) about it causing the end of the world?
Comment by Idan Arye on Luna Lovegood and the Chamber of Secrets - Part 4 · 2020-12-04T21:56:43.016Z · LW · GW
I notice we are seeing Luna getting ridiculed for her reputation rather than directly for her actions. Even when it's clear how her reputation is a result of her actions - for example, they laugh at her for having an imaginary pet, but never once have we seen other students look at her weird when she interacts with Wanda.
Is this intentional? Because we are getting this story from Luna's PoV? Does she consider her reputation unjustified because her behavior does not seem weird to her?
Comment by Idan Arye on Luna Lovegood and the Chamber of Secrets - Part 3 · 2020-12-01T21:13:13.284Z · LW · GW
I'm a bit surprised the twins had the patience and concentration to sit with Luna and help her go over the map over and over.
Comment by Idan Arye on Extortion beats brinksmanship, but the audience matters · 2020-11-17T15:54:30.030Z · LW · GW
Wouldn't increasing the number of offenders improve the effectiveness of brinkmanship compared to extortion? Since the victim is only bound by a deal with the offender, they can surrender and reject future deals from the other potential offenders. This makes surrendering safer and therefore more attractive compared to extortion, where surrendering to one extorter would invite more extortions.
Comment by Idan Arye on Bayesians vs. Barbarians · 2020-11-08T14:51:41.666Z · LW · GW
The moral of Ends Don't Justify Means (Among Humans) was that even if philosophical thought experiments demonstrate scenarios where ethical rules should be abandoned for the greater good, real life cases are not as clear cut, and we should still obey these moral rules because humans cannot be trusted when they claim that <unethical plan> really does maximize the expected utility - we cannot be trusted when we say "this is the only way" and we cannot be trusted when we say "this is better than the alternative".
I think this may be the source of the repulsion we all feel toward the idea of selecting soldiers in a lottery and forcing them to fight with drugs and threats of execution. Yes, dying in a war is better than being conquered by the barbarians - I'd rather fight and risk death if the alternative is to get slaughtered anyway together with my loved ones after being tortured, and if the only way to avoid that is to abandon all ethics then so be it.
But...
Even in a society of rationalists, the leaders are still humans. Not benevolent ("friendly" is not enough here) superintelligent perfect Bayesian AIs. Can we really trust them that this is the only way to win? Can we really trust them to relinquish that power once the war is over? Will living under the barbarians' rule be worse than living in a (formerly?) rationalist society that resorted to totalitarianism? Are the barbarians really going to invade us in the first place?
Governments lie about such things in order to grab more power. We have ethics for a reason - it is far too dangerous to rationalize that we are too rational to be bound by these ethics.
Comment by Idan Arye on Purchase Fuzzies and Utilons Separately · 2020-11-04T17:00:47.309Z · LW · GW
I may be straying from your main point here, but...
Could you really utilize these 60 seconds in a better, more specialized way? Not any block of 60 seconds - these specific 60 seconds, that happened during your walk.
Had you not encountered that open trunk, would you have opened your laptop in the middle of that walk and started working on a world-changing idea or an important charity plan? Unlikely - if that were the case you would already be sitting somewhere working on it. You went out for a walk, not for work.
Would you, had you not encountered that open trunk, have finished your walk 60 seconds earlier, gone to sleep 60 seconds earlier, woken up 60 seconds earlier, started your workday 60 seconds earlier, and by doing all that moved these 60 seconds to connect with your regular productivity time? This is probably not the case either - if it were, that would mean you intentionally used that hard-earned fuzz as an excuse to deliberately take one minute off your workday, and that would take a small-mindedness you do not seem to possess.
No - that act was an Action of Opportunity. Humans don't usually have a schedule so tight and so accurate that every lost minute messes it up. There is room for leeway, where you can push such gestures without compromising your specialized work.
Comment by Idan Arye on Why Our Kind Can't Cooperate · 2020-11-03T09:00:43.540Z · LW · GW
Should arguers be encouraged, then, to not write all the arguments in favor of their claim, in order to leave more room for those who agree with them to add their own supporting arguments?
This requires either refraining from fully exploring the subject (so that you don't think of all the arguments you can) or straight out omitting arguments you thought of. Not exactly Dark Side, but not fully Light Side either...
Comment by Idan Arye on What is the right phrase for "theoretical evidence"? · 2020-11-02T21:01:14.130Z · LW · GW
The difference can be quite large. If we get the results first, we can come up with Fake Explanations for why the masks were only 20% effective in the experiments when in reality they are 75% effective. If we do the prediction first, we wouldn't predict 20% effectiveness. We wouldn't predict that our experiment will "fail". Our theory says masks are effective, so we would predict 75% to begin with, and when we get the results it'll put a big dent in our theory. As it should.
Comment by Idan Arye on What is the right phrase for "theoretical evidence"? · 2020-11-02T16:24:15.554Z · LW · GW
Maybe "destroying the theory" was not a good choice of words - the theory will more likely be "demoted" to the stature of "very good approximation". Like gravity. But the distinction I'm trying to make here is between super-accurate sciences like physics that give exact predictions and still-accurate-but-not-as-physics fields. If medicine says masks are 99% effective, and they were not effective for 100 out of 100 patients, the theory still assigned a probability of that this would happen. You need to update it, but you don't have to "throw it out". But if physics says a photon should fire and it didn't fire - then the theory is wrong. Your model did not assign any probability at all to the possibility of the photon not firing.
And before anyone brings 0 And 1 Are Not Probabilities, remember that in the real world:
• There is a probability that the photon could have fired and our instruments missed it.
• There is a probability that we unknowingly failed to set up or confirm the conditions that our theory required in order for the photon to fire.
• We do not assign 100% probability to our theory being correct, so if it fails we can just throw it out without Laplace throwing us to hell for a negatively infinite score.
This means that the falsifying evidence, on its own, does not destroy the theory. But it can still weaken it severely. And my point (which I've detoured too far from) is that the perfect Bayesian should achieve the same final posterior no matter at which stage they apply it.
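To put a toy number on the masks example above (my own illustration, comparing the "99% effective" theory against a rival theory under which every one of the 100 failures was certain):

```python
from math import log10

p_given_theory = 0.01 ** 100  # tiny, but the theory did assign it nonzero probability
p_given_rival = 1.0 ** 100
bayes_factor = p_given_theory / p_given_rival
print(f"{10 * log10(bayes_factor):.0f} decibels")  # -2000 dB: crushed, yet not literally zero
```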
Comment by Idan Arye on What is the right phrase for "theoretical evidence"? · 2020-11-02T14:38:12.927Z · LW · GW
I think you may be underestimating the impact of falsifying evidence. A single observation that violates general relativity, assuming we can perfectly trust its accuracy and rule out any interference from unknown unknowns, would shake our understanding of physics if it came tomorrow - but had we encountered the very same evidence a century ago, our understanding of physics would already have been shaken (assuming the falsified theory wasn't replaced with a better one). To a perfect Bayesian, the confidence in general relativity in both cases should be equal - and very low. Because physics is lawful - it doesn't make "mistakes"; we are the ones who are mistaken in understanding it - a single violation is enough to make a huge dent no matter how much confirming evidence we have managed to pile up.
Of course, in real life we can't just say "assuming we can perfectly trust its accuracy and rule out any interference from unknown unknowns". The accuracy of our observations is not perfect, and we can't rule out unknown unknowns, so we must assign some probability to our observation being wrong. Because of that, a single piece of violating evidence is not enough to completely destroy the theory. And because of that, newer evidence should have more weight - our instruments keep getting better, so our observations today are more accurate. And if you go far enough back, you can also question the credibility of the observations.
Another issue, which may not apply to physics but applies to many other fields, is that the world does change. A sociology experiment from 200 years ago is evidence about society from 200 years ago, so the results of an otherwise identical experiment from recent years should have more weight when forming a theory of modern society - because society does change, certainly much more than physics changes.
But to the hypothetical perfect Bayesian the chronology itself shouldn't matter - all they have to do is take all that into account when calculating how much they need to update their beliefs, and if they succeed in doing so, it doesn't matter in which order they apply the evidence.
Comment by Idan Arye on What is the right phrase for "theoretical evidence"? · 2020-11-02T12:15:42.812Z · LW · GW
You need to be very careful with this approach, as it can easily lead to circular logic where map X is evidence for map Y because they both come from the same territory, and map Y is evidence for map X because they both come from the same territory - so you get a positive feedback loop that updates them both to approach 100% confidence.
Comment by Idan Arye on What is the right phrase for "theoretical evidence"? · 2020-11-02T12:01:53.858Z · LW · GW
This clarification gave me enough context to write a proper answer.
That sounds like a promising idea. It seems like it needs some tweaking though. I want to be able to say something like "the theoretical evidence suggests". If you replace "theoretical evidence" with "application", it wouldn't make sense. You'd have to replace it with something like "application of what we know about X", but that is too wordy.
Just call it "the theory" then - "the theory suggests" is both concise and conveys the meaning well.
Comment by Idan Arye on What is the right phrase for "theoretical evidence"? · 2020-11-02T11:58:54.525Z · LW · GW
I'm basing this answer on a clarifying example from the comments section:
I believe that what I am trying to point at is indeed evidence, in the Bayesian sense of the word. For example, consider masks and COVID. Imagine that we empirically observe that they are effective 20% of the time and ineffective 80% of the time. Should we stop there and take it as our belief that there is a 20% chance that they are effective? No!
Suppose now that we know that when someone with COVID breathes, particles containing COVID remain in the air. Further suppose that our knowledge of physics would tell us that someone standing two feet away is likely to breathe in these particles at some concentration. And further suppose that our knowledge of how other diseases work tell us that when that concentration of virus is ingested, it is likely that you will get infected. When you incorporate all of this knowledge about physics and biology, it should shift your belief that masks are effective. It shouldn't stay put at 20%. We'd want to shift it upward to something like 75% maybe.
When put like this, these "evidence" sound a lot like priors. The order should be different though:
1. First you deduce from the theory that masks are, say, 90% effective. These are the priors.
2. Then you run the experiments that show that masks are only effective 20% of the time.
3. Finally you update your beliefs downward and say that masks are 75% effective. These are the posteriors.
To a perfect Bayesian the order shouldn't matter, but we are not perfect Bayesians, and if we try to do it the other way around - applying the theory to update the probabilities we got from the experiments - we would be able to convince ourselves the probability is 75% no matter how much empirical evidence to the contrary we have accumulated.
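A beta-binomial sketch of that three-step order (my own illustration; the prior strength of 100 pseudo-observations is an arbitrary assumption, and it is exactly the knob that decides whether the posterior lands closer to the theory's 90% or to the experiments' 20%):

```python
# Step 1: theory-derived prior of ~90% effectiveness, worth ~100 pseudo-observations
prior_a, prior_b = 90.0, 10.0

# Step 2: experiments where masks looked effective in 20 of 100 trials
successes, failures = 20, 80

# Step 3: the posterior combines both
post_a, post_b = prior_a + successes, prior_b + failures
print(f"prior mean:     {prior_a / (prior_a + prior_b):.2f}")  # 0.90
print(f"posterior mean: {post_a / (post_a + post_b):.2f}")     # 0.55
```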
Comment by Idan Arye on What is the right phrase for "theoretical evidence"? · 2020-11-02T00:10:30.220Z · LW · GW
These are not evidence at all! They are the opposite of evidence. Evidence is something from the territory that you use to update your map - what you are describing goes in the opposite direction - it comes from the map to say something specific about the territory.
"Using the map to say something about the territory" sounds like "predictions", but in this case it does not seem like you intend to update your beliefs based on whether or not the predictions come true - in fact, you specify that the empirical evidence is already going against these predictions, and you seem perfectly content with that.
So... maybe you could call it "application"? Since you are applying your knowledge?
Or, since they explicitly go against the empirical evidence, how about we just call it "stubbornness"?
Comment by Idan Arye on Raised in Technophilia · 2020-10-21T11:51:29.391Z · LW · GW
My father used to say that if the present system had been in place a hundred years ago, automobiles would have been outlawed to protect the saddle industry.
Maybe not outright outlawed, but automobiles used to be regulated to the point of uselessness: https://en.wikipedia.org/wiki/Red_flag_traffic_laws
Comment by Idan Arye on When (Not) To Use Probabilities · 2020-10-18T15:15:20.525Z · LW · GW
This reminds me of your comparison of vague vs precise theories in A Technical Explanation of Technical Explanation - if both are correct, then the precise theory is more accurate than the vague one. But if the precise theory is incorrect and the vague one is correct, the vague theory is more accurate. Preciseness is worthless without correctness.
While the distinction there was about granularity, I think the lesson that preciseness is necessary but not sufficient for accuracy applies here as well. Using numbers makes your argument seem more mathematical, but unless they are the correct numbers - or at least a close enough estimate of the correct numbers - they can't make your argument more accurate.
Comment by Idan Arye on Feeling Moral · 2020-10-16T13:27:57.150Z · LW · GW
"Lives saved don’t diminish in marginal utility", as you have said, but maybe hiccups do? A single person in a group of 10 hiccuppers is not as unfortunate as a lone hiccupper standing with 9 other people who don't have hiccups. So even if the total negative utility of 10 hiccuppers is worse than that of one hiccupper, it's not 10 times worse.
Since the utility function doesn't have to be linear in the number of hiccuppers (it only has to be monotonic), there is no reason why it can't be bounded - forever lower (in absolute value) than the value of a single human life.
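One concrete family of such bounded, monotonic disutility functions (my own example; $C$ is the bound, chosen below the value of a life, and $k$ sets how quickly additional hiccuppers stop mattering):

$$U(n) = -C\left(1 - e^{-n/k}\right), \qquad \lim_{n \to \infty} U(n) = -C$$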
Comment by Idan Arye on Feeling Moral · 2020-10-16T13:05:09.830Z · LW · GW
Say we have a treatment for curing hiccups. Or some other inconvenience. Maybe even all medical inconveniences. We have done all the research and experiments and concluded that the treatment is perfectly safe - except there is no such thing as "certainty" in Bayesianism, so we must still allocate a tiny probability to the event that our treatment kills a patient - say, a one-in-a-googol chance. If a death carries infinite negative utility, the expected utility of the treatment will now have a $10^{-100} \cdot (-\infty)$ component in it, which far outweighs any positive utility gained from the treatment, which only cures inconveniences - a mere real number that cannot overcome the negative term, no matter how small the probability is or how much you multiply the positive utility of curing the inconveniences.
Comment by Idan Arye on Brainstorming positive visions of AI · 2020-10-08T17:59:03.218Z · LW · GW
Instead of creating a superintelligent AGI to perform some arbitrary task and watch it allocate all the Earth's resources (and the universe's resources later, but we won't be there to watch it) to optimize it, we decide to give it the one task that justifies that kind of power and control - ruling over humanity.
The AGI is more competent than any human leader, but we wouldn't want a human leader whose values we disagree with even if they are very competent - and the same applies to robotic overlords. So, we implement something like Futarchy, except:
• Instead of letting the officials generate policies, the AGI will do it.
• Instead of using betting markets we let the AGI decide which policy best fulfills the values.
• Instead of voting for representatives that'll define the values, the AGI will talk with each and every one of us to build a values profile, and then use the average of all our values profiles to build the values profile used for decision making.
• Even better - if it has enough computation power it can store all the values profiles, calculate the utility of each decision according to each profile, calculate how much the decision will affect each voter, and do a weighted average (sketched below).
So the AGI takes over, but humanity is still deciding what it wants.
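The weighted-average rule from the last bullet could look something like this (a toy sketch; the utility and impact numbers are invented):

```python
def choose_policy(utilities, impacts):
    # utilities[v][p]: utility of policy p under voter v's values profile
    # impacts[v][p]:   how strongly policy p affects voter v
    voters = range(len(utilities))
    policies = range(len(utilities[0]))

    def score(p):
        total_impact = sum(impacts[v][p] for v in voters)
        weighted_utility = sum(utilities[v][p] * impacts[v][p] for v in voters)
        return weighted_utility / total_impact

    return max(policies, key=score)

utilities = [[0.9, 0.2], [0.1, 0.8], [0.5, 0.6]]  # 3 voters x 2 policies
impacts   = [[1.0, 0.1], [0.2, 1.0], [0.5, 0.5]]
print("chosen policy:", choose_policy(utilities, impacts))  # -> 1
```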
Comment by Idan Arye on Honoring Petrov Day on LessWrong, in 2019 · 2020-09-26T19:26:28.973Z · LW · GW
Petrov's choice was not about dismissing warnings - it was about picking on which side to err. Wrongfully alerting his superiors could cause a nuclear war, and wrongfully not alerting them would disadvantage his country in the nuclear war that had just started. I'm not saying he did all the numbers, used Bayes's law to figure out the probability that an actual nuclear attack was going on, assigned utilities to all four cases and performed the final decision theory calculations - but his reasoning did take into account the possibility of error both ways. Though... it does seem like his intuition gave the utilities much more weight than the probabilities.
So, if we take that rule for deciding what to do with an AGI, it won't be just "ignore everything the instruments are saying" but "weigh the dangers of UFAI against the missed opportunities from not releasing it".
Which means the UFAI only needs to convince such a gatekeeper that releasing it is the only way to prevent a catastrophe, without having to convince the gatekeeper that the probability of the catastrophe is high or that the probability of the AI being unfriendly is low.
Comment by Idan Arye on A Priori · 2020-09-26T19:12:39.960Z · LW · GW
That isn't what you need to show. You need to show that the semantics have no ontological implications, that they say nothing about the territory.
Actually, what I need to show is that the semantics say nothing extra about the territory that is meaningful. My argument is that the predictions are canonical representation of the belief, so it's fine if the semantics say things about the territory that the predictions can't say, as long as everything it says that does not affect the predictions is meaningless. At least, meaningless in the territory.
The semantics of gravity theory say that the force that pulls objects together over long range based on their mass is called "gravity". If you call that force "travigy" instead, it will cause no difference in the predictions. This is because the name of the force is a property of the map, not the territory - if it were meaningful in the territory it should have had an impact on the predictions.
And I claim that the "center of the universe" is similar - it has no meaning in the territory. The universe has no "center" - you can think of "center of mass" or "center of bounding volume" of a group of objects, but there is no single point you can naturally call "the center". There can be good or bad choices for the center, but not right or wrong choices - the center is a property of the map, not the territory.
If it had any effect at all on the territory, it should have somehow affected the predictions.
Comment by Idan Arye on A Priori · 2020-09-25T14:49:47.173Z · LW · GW
If you take a heliocentric theory, and substitute "geocentric" for "heliocentric", you get a theory that doesn't work in the sense of making correct predictions. You know this, because in previous comments you have already recognised the need for almost everything else to be changed in a geocentric theory in order to make it empirically equivalent to a heliocentric theory.
I only changed the title; I didn't change anything the theory says. So its predictions are still the same as the heliocentric model's.
But you are arguing against realism, in that you are arguing that theories have no content beyond their empirical content, ie their predictive power. You are denying that they are have any semantic (non empirical content), and, as an implication of that, that they "mean" or "say" nothing about the territory. So why would you care that one theory in more complex than another, so long as its predictions are accurate?
The semantics are still very important as a compact representation of predictions. The predictions are infinite - the belief will have to give a prediction for every possible scenario, and scenariospace is infinite. Even if the belief is only relevant for a finite subset of scenarios, it'd still have to say "I don't care about this scenario" an infinite number of times.
Actually, it would make more sense to talk about belief systems than individual beliefs, where the belief system is simply the probability function P. But we can still talk about single beliefs if we remember that they need to be connected to a belief system in order to give predictions, and that when we compare two competing beliefs we are actually comparing two belief systems where the only difference is that one has belief A and the other has belief B.
Human minds, being finite, cannot contain infinite representations - we need finite representations for our beliefs. And that's where the semantics come in - they are compact rules that can be used to generate predictions for every given scenario. And they are also important because the amount of predictions we can test is also finite. So even if we could comprehend the infinite prediction field over scenariospace, we wouldn't be able to confirm a belief based on a finite number of experiments.
Also, with that kind of representation, we can't even come up with the full representation of the belief. Consider a limited scenario space with just three scenarios X, Y and Z. We know what happened in X and Y, and write a belief based on it. But what would that belief say about Z? If the belief is represented as just its predictions, without connections between distinct predictions, how can we fill up the predictions table?
The semantics help us with that because they have fewer degrees of freedom. With N degrees of freedom we can match any N observations, so we need more observations than degrees of freedom before they can even start counting as evidence. I'm not sure how to come up with a formula for the number of degrees of freedom a semantic representation of a belief has - this depends not only on the numerical constants but also on the semantics - but some properties of it are obvious:
1. The prediction table representation has infinite degrees of freedom, since it can give a prediction for each scenario independently from the predictions given to the other scenarios.
2. If a semantic representation is strictly simpler than another semantic representation - that is, you can go from the simple one to the complex one just by adding rules - then the simpler one has fewer degrees of freedom than the complicated one. This is because the complicated one has all the degrees of freedom the simpler one had, plus more degrees of freedom from the new rules (just adding a rule costs some degrees of freedom, even if the rule itself does not contain anything that can be tweaked).
So the simplicity of the semantic representation is meaningful because it means less degrees of freedom and thus requires less evidence, but it does not make the belief "truer" - only the infinite prediction table determines how true the belief is.
Comment by Idan Arye on Decoherence is Falsifiable and Testable · 2020-09-24T23:56:31.906Z · LW · GW
We require new predictions not because the theory is newer than some other theory it could share predictions with, but because the predictions must come before the experimental results. If we allow theories to rely on the results of already-known experiments, we run into two problems:
1. Overfitting. If the theory only needs to match existing results, it can be constructed in a way that matches all these results - instead of matching the underlying rules that generated them.
2. We may argue ourselves into believing our theory made predictions that match our results, regardless of whether the theory naturally makes these predictions.
Now, if the new theory is a strictly simpler version of an old one - as in "we don't even need X" simpler - then these two problems are non-issues:
1. If the more complicated theory did not overfit, the simpler version is as good as guaranteed not to overfit either.
2. We don't need to guess what our new theory would predict if we didn't know the results - the old theory already made those predictions before the results were known, and it should be straightforward to show it wasn't using the part we removed.
So... I will allow it.
Comment by Idan Arye on A Priori · 2020-09-23T12:44:32.248Z · LW · GW
OK, I continued reading, and in Decoherence is Simple Eliezer makes a good case for Occam's Razor as more than just a useful tool.
In my own words (:= how I understand it) more complicated explanations have a higher burden of proof and therefore require more bits of evidence. If they give the same predictions as the simpler explanations, then each bit of evidence counts for both the complicated and the simple beliefs - but the simpler beliefs had higher prior probabilities, so after adding the evidence their posterior probabilities should keep being higher.
So, if a simple belief A started with -10 decibels and a complicated belief B started with -20 decibels, and we get 15 decibels of evidence supporting both, the posterior credibilities of the beliefs are 5 and -5 - so we should favor A. Even if we get another 10 decibels of evidence and the credibility of B becomes 5, the credibility of A is now 15, so we should still favor it. The only way we can favor B is if we get enough evidence that supports B but not A.
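For readers unused to the decibel bookkeeping: decibels of belief are $10 \log_{10}(\text{odds})$, so the sums above translate to probabilities like this (a quick sketch):

```python
from math import log10

def prob_from_db(decibels):
    odds = 10 ** (decibels / 10)
    return odds / (1 + odds)

# Beliefs A (-10 dB) and B (-20 dB) both receive 15 dB of evidence
for name, prior_db in (("A", -10), ("B", -20)):
    posterior_db = prior_db + 15
    print(f"{name}: {prior_db} dB -> {posterior_db} dB "
          f"(p = {prob_from_db(posterior_db):.2f})")  # A: 0.76, B: 0.24
```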
Of course - this doesn't mean that A is true and B is false, only that we assign a higher probability to A.
So, if we go back to astronomy - our neogeocentric model has a higher burden of proof than the modern model, because it contains additional mysterious forces. We prove gravity and relativity and work out how centrifugal forces work, and that's (more or less) enough for the modern model; the exact same evidence also supports the neogeocentric model - but it is not enough for it, because we also need evidence for the new forces we came up with.
Do note, though, that the claim that "there is no mysterious force" is simpler than "there is a mysterious force" is taken for granted here...
Comment by Idan Arye on A Priori · 2020-09-20T16:55:21.669Z · LW · GW
They assert different things because they mean different things, because the dictionary meanings are different.
The Quotation is not the Referent. Just because the text describing them is different doesn't mean the assertions themselves are different.
Eliezer identified evolution with the blind idiot god Azathoth. Does this make evolution a religious Lovecraftian concept?
Scott Alexander identified the Canaanite god Moloch with the principle that forces you to sacrifice your values for the competition. Does this make that principle an actual god? Should we pray to it?
I'd argue not. Even though Eliezer and Scott brought the gods in for the theatrical and rhetorical impact, evolution is the same old evolution and competition is the same old competition. Describing the idea differently does not automatically make it a different idea - just like describing $f(x) = x^2$ as $g(x) = x \cdot x$ does not make it a different function.
In the case of mathematical functions we have a simple equivalence law: $f = g \iff \forall x: f(x) = g(x)$. I'd argue we can have a similar equivalence law for beliefs - $A = B \iff \forall X: P(X|A) = P(X|B)$ - where A and B are beliefs and X is an observation.
This condition is obviously necessary, because if $P(X|A) > P(X) > P(X|B)$ even though $A = B$, and we find that $X$ is true, that would support A and therefore also B (because they are equivalent) - which means an observation that does not match the belief's predictions supports it.
Is it sufficient? My argument for its sufficiency is not as analytical as the one for its necessity, so this may be the weak point of my claim, but here it goes: if $A \neq B$ even though they give the same predictions, then something other than the state and laws of the universe is deciding whether a belief is true or false (actually - how accurate it is). This undermines the core idea of both science and Bayesianism, that beliefs should be judged by empirical evidence. Now, maybe this concept is wrong - but if it is, Occam's Razor itself becomes meaningless, because if the explanation does not need to match the evidence, then the simplest explanation can always be "Magic!".
Comment by Idan Arye on A Priori · 2020-09-20T15:38:11.487Z · LW · GW
In the thought experiment we are considering , the contents of the box can be er be tested. Nonetheless $10 and$100 mean different things.
I'm not sure you realize how strong a statement "the contents of the box can be never be tested" is. It means even if we crack open the box we won't be able to read the writing on the bill. It means that even if we somehow tracked all the $20 and all the$100 bills that were ever printed, their current location, and whether or not they were destroyed, we won't be able to find one which is missing and deduce that it is inside the box. It means that even if we had a powerful atom-level scanner that can accurately map all the atoms in a given volume and put the box inside it, it won't be able to detect if the atoms are arranged like a $20 bill or like a$100 bill. It means that even if a superinteligent AI capable of time reversal calculations tried to simulate a time reversal it wouldn't be able to determine the bill's value.
It means, that the amount printed on that bill has no effect on the universe, and was never affected by the universe.
Can you think of a scenario where that happens, but the value of dollar bill is still meaningful? Because I can easily describe a scenario where it isn't:
Dollar bills were originally "promises" for gold. They were signed by the Treasurer and the secretary of the Treasury because the Treasury is the one responsible for fulfilling that promise. Even after the gold standard was abandoned, the principle that the Treasury is the one casting the value into the dollar bills remains. This is why the bills are still signed by the Treasury's representatives.
So, the scenario I have in mind is that the bill inside the box is a special bill - instead of a fixed amount, it says the Treasurer will decide if it is worth 20 or 100 dollars. The bill is still signed by the Treasurer and the secretary of the Treasury, and thus has the same authority as regular bills. And, in order to fulfill the condition that the value of the bill is never known - the Treasurer is committed to never decide the worth of that bill.
Is it still meaningful to ask, in this scenario, if the bill is worth $20 or $100?
Comment by Idan Arye on A Priori · 2020-09-20T14:59:23.914Z · LW · GW
I'm not sure I follow - what do you mean by "didn't work"? Shouldn't it work the same as the heliocentric theory, seeing how every detail in its description is identical to the heliocentric model?
Comment by Idan Arye on A Priori · 2020-09-20T12:28:12.165Z · LW · GW
So if I copied the encyclopedia definition of the heliocentric model, and changed the title to "geocentric" model, it would be a "bad, wrong, neo-geocentric theory [that] is still a geocentric theory"?
Comment by Idan Arye on A Priori · 2020-09-20T12:25:11.585Z · LW · GW
If A and B assert different things, we can test for these differences. Maybe not with current technology, but in principle. They yield different predictions and are therefore different beliefs.
Comment by Idan Arye on A Priori · 2020-09-19T22:40:16.696Z · LW · GW
But this is not the dictionary definition of the geocentric model we are talking about - we have twisted it to have the exact same predictions as the modern astronomical model. So it no longer asserts the same things about the territory as the original geocentric model - its assertions are now identical to the modern model's. So why should it still hold the same meaning as the original geocentric model?
Comment by Idan Arye on A Priori · 2020-09-19T22:17:07.411Z · LW · GW
If a universe where the statement is true is indistinguishable from a universe where the statement is false, then the statement is meaningless. And if the set of universes where statement A is true is identical to the set of universes where statement B is true, then statement A and statement B have the same meaning whether or not you can "algebraically" convert one to the other.
Comment by Idan Arye on A Priori · 2020-09-19T12:11:25.860Z · LW · GW
If the content of the box is unknown forever, that means that it doesn't matter what's inside it because we can't get it out.
https://www.physicsforums.com/threads/can-it-happen.73477/
# Can it happen
1. Apr 28, 2005
### abia ubong
i am 16 a high school grad i wanted 2 know if it was possible for me 2 be the greatest mathematician of all times ,already my friends call me the goat i.e the greatest of all times pls let me know
2. Apr 28, 2005
### HallsofIvy
Staff Emeritus
Are you sure that's what they mean by "goat"?
3. Apr 28, 2005
### arildno
No, it cannot happen.
Your posts show a fundamental lack of understanding of what maths is about.
Sorry if I'm too blunt for your taste.
4. Apr 28, 2005
### moose
This is a joke....right......?????
5. Apr 28, 2005
Well they certainly weren't referring to your English, grammar, typing, or punctuation abilities.
6. Apr 28, 2005
### theCandyman
Well, if it is a joke, I am amused. Do not get big-headed; if this really is your goal, it makes it more difficult to accept your mistakes and move on or let yourself be corrected.
7. Apr 28, 2005
### Poop-Loops
"You're a pile of crap."
"What???"
"Oh, I'm just saying you are good at History."
"Oh, thanks."
What level math are you taking now? Unless you are taking Calculus already, you are setting yourself up for a giant failure. (He said greatest of all time, not just great.)
PL
8. Apr 28, 2005
### motai
Hmm.. if that is the approach you will take, as theCandyman said earlier, you're only setting yourself up for failure.
The vast majority of us will never be exceptional (like world-class famous) in everything, and that is something we will have to accept. I myself am a mediocre track runner, placed last in every event, but I really could care less. In academics, I am not at the top, nor do I want to be classified as such; and I consider myself by no means "smart", because there is always more to learn. And as long as you have that mentality, and have the willingness to learn, in my opinion, it will take you as far as you want to go.
Always be willing to learn new things, get excited about it, and enjoy it. Never cease to ask questions in the classroom, and ask them out of it. Because by doing these, you will gain better knowledge in what you are trying to learn, and it is through this that perhaps fame through noteworthy achievements will come.
Happiness can come through other means than fame, so we shouldn't be bent over trying to gain the popularity and acceptance of others until we have found our own happiness.
9. Apr 28, 2005
### juvenal
This is way too serious a response for a person (the OP) who shouldn't really be taken seriously to begin with.
10. Apr 28, 2005
### graphic7
Eh, I'm taking ODE now, and I certainly don't see myself becoming a first-class mathematician. I'm fairly certain I'll be a competent one. Regardless, being able to take courses at a young age (as I do) does not guarantee anything.
11. Apr 28, 2005
### exequor
If this is your goal, I think it would be fair to spend more time doing maths instead of asking such a question in a forum. Don't you think that you could have learnt something new in the time it took you to write such a post?
12. Apr 28, 2005
### Poop-Loops
I don't know what ODE is, but if he wants to be the best there ever was, don't you think he'd have to already be damn good? "The best there ever was" is a combination of working extremely hard and an assload of luck to get the right genes to be able to understand everything, and making sure you don't like get run over by a car when you are 3. He'd have to already be really really gifted to ever hope of becoming "the best".
Being good and the best are two different things. I hope to be a good physicist or engineer one day, but I know I will never be "the best".
Being good means working your ass off. But to be the best you have to be gifted too.
PL
13. Apr 28, 2005
### Data
In all likelihood, ordinary differential equations.
14. Apr 28, 2005
### mathwonk
of course it is possible!! [not likely maybe, but possible]
however having as your goal to be the best of all time, would you be disappointed if you were only 3rd best?
Considering who is out there, it is not too shabby to be even 10,000th best, in my humble opinion.
The point is, it is a fun subject to work in, and the food is good. if you have "the love", come on in!
one suggestion: when you meet someone who knows a lot more than you, or who seems smarter, try not to say "argggh.... there goes my chance!!", say instead, "wow!! here is someone who can help me get better!"
Last edited: Apr 28, 2005
15. Apr 28, 2005
### Poop-Loops
Or you can think of it as an opportunity to move yourself up by 1, and eliminate him. J/K of course.
The only people I've ever seen/heard of that had a goal of being the best and accomplished it were seriously crazy (as in, they actually believed they were more than human). Athletes, generals, etc.
Mathwonk has the best advice. Do something you like, regardless of whether you are good at it or not.
By the way, I wasn't aware that there were even 10k mathematicians in the world. :p
PL
16. Apr 29, 2005
### Data
There are around six and a half billion people in the world. If there were only 10000 mathematicians, then about $10000/(6.5 \cdot 10^{9}) = 1.54\cdot 10^{-6}$ of people - that is, $1.54\cdot 10^{-4}\,\%$ - would be mathematicians. Let's compare this to an actual situation.
There are around 750000 people in the city I live in, and (very conservative numbers here - I would say that there could easily be double what I state, and these are only tenure-track faculty and full-time students doing theses) probably seventy-five are mathematics professors, and another seventy-five are graduate students in mathematics at one of the two local universities. This makes around $150/750000 = 2 \cdot 10^{-4}$, i.e. $2 \cdot 10^{-2}\,\%$, of the people in my city mathematicians. According to the number above (10000 mathematicians in the world), this means that my city would have at least a hundred times as many mathematicians as you would expect. I find this a little unlikely.
If we use the percentage that I got for my city as the global percentage (which is probably reasonable considering how conservative I was with the numbers - likely, a city will have a higher percentage than most places in the world, but since I was conservative to start with, this is ok), we expect about $(6.5\cdot 10^9)(2 \cdot 10^{-4}) = 1.3$ million mathematicians in the world, which is a few more than $10000$
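For what it's worth, the whole estimate fits in a few lines of Python; every input below is just one of the rough guesses stated above, so the outputs inherit all of that uncertainty:

```python
# Back-of-the-envelope check; all inputs are rough guesses from the post above.
world_population = 6.5e9
city_population = 750_000
city_mathematicians = 75 + 75   # professors + graduate students, conservative

fraction = city_mathematicians / city_population
print(f"{fraction:.2%} of the city are mathematicians")                 # 0.02%
print(f"{world_population * fraction:,.0f} mathematicians worldwide")   # ~1,300,000
```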
Last edited: Apr 29, 2005
17. Apr 29, 2005
### Poop-Loops
I was just basing it on the fact that mathematics is the most mind-numbing subject on the planet. It takes a certain type of person to be able to withstand all of it. And of those special people, maybe a handful would actually like doing it.
Note that by mathematician I mean only those with PhDs in math. Not your regular school teacher; they don't count. My HS calculus teacher was a literature major in college. I think there is a huge line between being able to say "I have a degree in..." and "I am a..." But maybe it's just semantics.
PL
18. Apr 29, 2005
### Data
If you included schoolteachers there would quite easily be tens of millions of "mathematicians" in the world. Regardless, the "realistic" percentage that I came up with was still only $2 \cdot 10^{-2}\%$. That still would mean that only 1 in every 5000 people is a mathematician, on average.
Last edited: Apr 29, 2005
19. Apr 29, 2005
https://www.janheiland.de/22-quadmf-opi/
Eccomas 2022
# Nonlinear Model Order Reduction Schemes
In most MOR schemes, the state $x(t) \in \mathbb R^{n}$ of a dynamical system $\begin{equation*} \dot x(t) = f(x(t)) \end{equation*}$ is encoded as $\begin{equation*} q(t) = W^Tx(t) \end{equation*}$ and decoded via $\begin{equation*} \tilde x(t) = Vq(t) \end{equation*}$ where $V$, $W\in \mathbb R^{n,r}$ are matrices.
Encoding and decoding $\begin{equation*} q(t) = W^Tx(t), \quad \tilde x(t) = Vq(t) = VW^Tx(t) \end{equation*}$ with $V$, $W\in \mathbb R^{n,r}$ is a linear MOR scheme (sketched in code below) as
• $r \ll n$ – reduction of the dimension and
• $x(t)\approx \tilde x(t)=VW^Tx(t)$
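As a concrete illustration, here is a minimal NumPy sketch of the linear scheme above with the common POD choice $W = V$; all dimensions and data are illustrative placeholders:

```python
import numpy as np

# Snapshot matrix X: each column is a state x(t_i) of dimension n.
n, N, r = 1000, 200, 10
rng = np.random.default_rng(0)
X = rng.standard_normal((n, N))

# POD basis: the r leading left singular vectors of the snapshot matrix.
V, _, _ = np.linalg.svd(X, full_matrices=False)
V = V[:, :r]                     # V in R^{n x r}; take W = V (POD/Galerkin choice)

q = V.T @ X[:, 0]                # encode:  q = V^T x
x_tilde = V @ q                  # decode:  x~ = V q = V V^T x

# Relative reconstruction error, governed by the discarded singular values.
print(np.linalg.norm(X[:, 0] - x_tilde) / np.linalg.norm(X[:, 0]))
```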
## Linear MOR schemes
• fairly standard (POD, Balanced Truncation)
• fairly efficient (for linear systems or with hyperreduction like DEIM)
• inherently limited in terms of reduction versus accuracy (cp. Kolmogorov $n$-width)
• good evidence that at very low $r$, nonlinear encodings/decodings $\begin{equation*} q(t) = h(x(t)), \quad \tilde x(t) = g(q(t)) \end{equation*}$ provide better reduction vs. accuracy
• though not necessarily computational efficiency
## This talk
• Formulation of a MOR scheme with a linear-quadratic decoding $\begin{equation*} \tilde x(t) = Vq(t) + \Omega \, q(t) \otimes q(t) \end{equation*}$
• use of Operator inference to identify a dynamical system $\begin{equation*} M(q(t))\,\dot q(t) = A_0 + A_1\, q(t) + A_2\,q(t) \otimes q(t) \end{equation*}$
• that best approximates given data on an $r$-dimensional manifold
• numerical proof of concept for a laminar flow problem
$x(t) \approx \tilde x(t) = Vq(t) + \Omega\,q(t)\otimes q(t)$
For a general nonlinear decoding $\begin{equation*} x(t) \approx \tilde x(t) = g(q(t)) \end{equation*}$ the dynamical system $\dot x(t) = f(x(t))$ is approximated and parametrized $\begin{equation*} \dot {\tilde x}(t) = f(\tilde x(t)) \quad \leftrightarrow \quad G(q(t)) \dot q(t) = f(g(q(t))) \end{equation*}$
where $\begin{equation*} G(q(t)) := \nabla g(q(t)) \in \mathbb R^{n,r} \end{equation*}$ is the Jacobian of $g$ at $q(t)$.
With $\begin{equation*} g(q)=Vq + \Omega\,q\otimes q, \end{equation*}$ we have $\begin{equation*} G(q)\bar q = V\bar q + \Omega\,q\otimes \bar q + \Omega\,\bar q\otimes q \end{equation*}$
and an approximation/parametrization of a linear system $\dot x(t) = Ax(t)$ as $\begin{equation*} G(q)\dot q = A_1 q + A_2\, q\otimes q \end{equation*}$ with $A_1 = AV$ and $A_2 = A\Omega$.
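In code, the quadratic decoding and the action of its Jacobian can be sketched as follows; `V` and `Omega` are assumed given, and `np.kron` realizes the Kronecker products:

```python
import numpy as np

def decode(V, Omega, q):
    """Quadratic decoding g(q) = V q + Omega (q ⊗ q)."""
    return V @ q + Omega @ np.kron(q, q)

def jac_apply(V, Omega, q, qbar):
    """Jacobian action G(q) qbar = V qbar + Omega (q ⊗ qbar) + Omega (qbar ⊗ q)."""
    return V @ qbar + Omega @ np.kron(q, qbar) + Omega @ np.kron(qbar, q)

# Finite-difference check with illustrative dimensions.
n, r = 50, 4
rng = np.random.default_rng(1)
V, Omega = rng.standard_normal((n, r)), rng.standard_normal((n, r * r))
q, qbar = rng.standard_normal(r), rng.standard_normal(r)
eps = 1e-7
fd = (decode(V, Omega, q + eps * qbar) - decode(V, Omega, q)) / eps
print(np.linalg.norm(fd - jac_apply(V, Omega, q, qbar)))   # O(eps)
```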
Since for a manifold map $g\colon \mathbb R^{r}\to \mathbb R^{n}$, the Jacobian $\nabla g(q(t)) =: G(q(t))$ has full rank,
$\begin{equation*} G(q(t))^TG(q(t))\dot q(t) = G(q(t))^TA_1 q + G(q(t))^TA_2\, q\otimes q \end{equation*}$ gives a regular differential equation in $q$,
which however comes with cubic parts $\begin{equation*} M(q)\dot q(t) = \tilde A_1 q + \tilde A_2\, q\otimes q + \tilde A_3 q\otimes q \otimes q \end{equation*}$
# Operator Inference
Using data to infer a system with a quadratic decoding
We use a POD basis $V\in \mathbb R^{n,r}$ to encode a set of snapshots $\begin{equation*} [x(t_1),\ x(t_2), \dots, x(t_N) ] \to [q(t_1),\ q(t_2), \dots, q(t_N) ] \end{equation*}$ by $q(t_i) = V^Tx(t_i) \in \mathbb R^{r}$
In a first step, we infer the quadratic correction $\Omega \in \mathbb R^{n,r^2}$ via $\begin{equation*} \sum_{i=1}^N \| x(t_i) - Vq(t_i) - \Omega \, q(t_i) \otimes q(t_i)\|^2 \to \min \end{equation*}$
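This first regression is linear in $\Omega$ and can be solved with an ordinary least-squares solver; a sketch, reusing the snapshots `X` and the POD basis `V` from above (the number of snapshots should exceed $r^2$ for the fit to be well posed):

```python
import numpy as np

def infer_quadratic_correction(X, V):
    """Fit Omega in  x_i ≈ V q_i + Omega (q_i ⊗ q_i)  by linear least squares."""
    Q = V.T @ X                                   # reduced coordinates, r x N
    r, N = Q.shape
    K = np.einsum('ik,jk->ijk', Q, Q).reshape(r * r, N)   # columns are q_i ⊗ q_i
    R = X - V @ Q                                 # residual of the linear decoding
    Omega_T, *_ = np.linalg.lstsq(K.T, R.T, rcond=None)   # solve K^T Omega^T ≈ R^T
    return Omega_T.T                              # Omega in R^{n x r^2}
```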
Next, we differentiate the snapshots to compute $\begin{equation*} \dot x(t_i) \to \dot q(t_i) = V^T\dot x(t_i) \end{equation*}$ and, with the Jacobian $G(q)$ at hand, we can form the derivative along the manifold $\begin{equation*} \dot {\tilde x}(t_i) = G(q(t_i))\dot q(t_i) \end{equation*}$
Finally we can solve the quadratic operator inference problem $\begin{equation*} \sum_{i=1}^N \| M(q(t_i))\,\dot q(t_i) - A_0 - A_1\, q(t_i) - A_2\, q(t_i)\otimes q(t_i)\|^2 \to \min \end{equation*}$
for
$A_0 \in \mathbb R^{r,1}, \quad A_1\in \mathbb R^{r,r}, \quad A_2 \in \mathbb R^{r, r^2}$
that fits a quadratic system to the given snapshots.
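Once the left-hand sides $m_i := M(q(t_i))\,\dot q(t_i)$ have been assembled (for instance via the Jacobian action sketched earlier, since $M(q) = G(q)^TG(q)$), this too is a linear least-squares problem; a sketch with `Q` the reduced snapshots and `L` the matrix whose columns are the $m_i$ (both hypothetical names):

```python
import numpy as np

def operator_inference(Q, L):
    """Fit A0, A1, A2 in  L[:, i] ≈ A0 + A1 q_i + A2 (q_i ⊗ q_i),
    where L[:, i] stands for M(q(t_i)) qdot(t_i)."""
    r, N = Q.shape
    quad = np.einsum('ik,jk->ijk', Q, Q).reshape(r * r, N)
    D = np.vstack([np.ones((1, N)), Q, quad])     # data matrix, (1 + r + r^2) x N
    O_T, *_ = np.linalg.lstsq(D.T, L.T, rcond=None)        # solve D^T O^T ≈ L^T
    O = O_T.T                                     # stacked operators [A0 | A1 | A2]
    return O[:, :1], O[:, 1:1 + r], O[:, 1 + r:]
```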
# Numerical Example
FEM Simulation of Navier-Stokes equations $\dot v + (v\cdot \nabla) v- \frac{1}{\mathsf{Re}}\Delta v + \nabla p= f,$ $\nabla \cdot v = 0.$
• 2D laminar lid driven cavity at Re=500
• About 4000 dof in the FEM model
• 400 velocity $v$ snapshots on the [0, 4.8] time interval
• Reduced order model for the velocity of size r=5,8,12
• Extrapolation to the [4.8, 6] time interval (see the sketch below)
• Comparison with POD, DMDc, OpInf
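For the extrapolation run mentioned above, integrating the inferred reduced model might look as follows; `M_of_q` (assembling $M(q) = G(q)^TG(q)$ from the Jacobian) and `q0` (the encoded state at $t = 4.8$) are hypothetical names:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rom_rhs(t, q, A0, A1, A2, M_of_q):
    """qdot obtained by solving  M(q) qdot = A0 + A1 q + A2 (q ⊗ q)."""
    rhs = A0.ravel() + A1 @ q + A2 @ np.kron(q, q)
    return np.linalg.solve(M_of_q(q), rhs)

# Extrapolation beyond the training window [0, 4.8]:
# sol = solve_ivp(rom_rhs, (4.8, 6.0), q0, args=(A0, A1, A2, M_of_q))
# x_tilde_end = decode(V, Omega, sol.y[:, -1])   # lift back for comparison
```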
# Conclusion
## … and Outlook
• Quadratic decoding aligns well with operator inference
• Tempting theory but no decisive numerical advantages observed
• Possible ways for improvement
• Regularization of the involved optimization problem
• Inference of higher order terms
Thank You!
## References
1.
Geelen R, Wright S, Willcox K. Operator inference for non-intrusive model reduction with nonlinear manifolds. CoRR (2022) abs/2205.02304: doi:10.48550/arXiv.2205.02304
2.
Barnett J, Farhat C. Quadratic approximation manifold for mitigating the Kolmogorov barrier in nonlinear projection-based model order reduction. CoRR (2022) abs/2204.02462: doi:10.48550/arXiv.2204.02462
3.
Benner P, Goyal P, Heiland J, Pontes Duff I. Operator inference and physics-informed learning of low-dimensional models for incompressible flows. Electron Trans Numer Anal (2022) 56:28–51. doi:10.1553/etna_vol56s28
https://worldbuilding.stackexchange.com/questions/63496/how-could-i-have-modern-computers-without-guis/63579
# How could I have modern computers without GUIs?
Even though text-based terminals still see specialty use cases, modern general-purpose computers generally run graphical software and have a graphical user interface (GUI). This includes everything from low-end cell phones and some computer peripherals like printers, to fairly high-end servers.
I'd like for computers to be roughly on par technically with what we have today, but with user interfaces that are predominantly text-based. It's okay if these computers work with text blocks and things like that (for example, like how the IBM 5250 series of terminals worked), but except for graphically oriented work such as image editing, there should be minimal graphics.
Given that in our world, personal computers started becoming graphical pretty much as soon as they were powerful enough to run a graphical user interface at acceptable speeds, and some even earlier, how can I reasonably explain that GUIs never became mainstream?
Note that these computers need not be expert-only systems; I just want their interfaces to be predominantly text-based rather than predominantly graphical as is the case today in our world.
Also, to clarify, since there seems to be widespread confusion about this: lack of a graphical user interface does not imply a lack of graphical capability. Take the original IBM PC model 5150 as an example; with the exception of those equipped only with an MDA graphics card, the software running on those often used text-based data entry with graphical visualization modes (what we in modern terms might call more or less accurate "print preview"). For example, something similar to the early versions of Microsoft Word for DOS, or how early versions of Lotus 1-2-3 used different graphics cards and monitors to display data and graphs. Instead of thinking "no graphics at all", think "graphics only as add-ons to text, rather than as a primary user interaction element".
And since lots of answers imply that the only alternatives are pure command-line interfaces and GUIs, let me remind you of tools like Norton Commander. I used Norton Commander back in the late 1980s and early 1990s, and still use look-alikes such as Midnight Commander to this day, and can guarantee that they provide a perfectly useful environment for file management and launching applications without in any way depending on more than a text console. There is even a general term for these: text-based user interface, or TUI.
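To make the distinction concrete, here is a minimal sketch of the kind of interface I mean, written with Python's standard curses module; the menu contents are placeholders, not a reconstruction of any particular program:

```python
import curses

def main(stdscr):
    """A tiny TUI: an arrow-key menu on a character-cell screen, no graphics."""
    curses.curs_set(0)                       # hide the hardware cursor
    items = ["Browse files", "Edit file", "Run command", "Quit"]
    selected = 0
    while True:
        stdscr.erase()
        stdscr.addstr(0, 0, "MAIN MENU  (arrows to move, Enter to choose)")
        for i, item in enumerate(items):
            attr = curses.A_REVERSE if i == selected else curses.A_NORMAL
            stdscr.addstr(2 + i, 2, item, attr)   # selection = reverse video
        key = stdscr.getch()
        if key == curses.KEY_UP:
            selected = (selected - 1) % len(items)
        elif key == curses.KEY_DOWN:
            selected = (selected + 1) % len(items)
        elif key in (curses.KEY_ENTER, 10, 13):
            return                           # a real TUI would open a sub-screen

curses.wrapper(main)
```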
• Comments are not for extended discussion; this conversation has been moved to chat. – Monica Cellio Dec 8 '16 at 2:58
• It is also called "Pseudo-graphical user interface". – Vi. Dec 8 '16 at 13:53
• Actually, many (most?) fairly high-end servers do not run GUIs. GUIs are usually assumed for client systems that connect to servers. Almost every "high-end server" I've worked on in the past 40+ years was minus a GUI. (Note, though, that functions such as X server/X windows or Java RAWT, etc., are often available from servers, even if the servers themselves might not have native graphics capability.) – user2338816 Dec 9 '16 at 13:19
• Question seems too specific, so it seems like you're wanting to examine a potential reaction to eventual introduction of GUIs or some related event. One difficulty is that detailed graphics is an almost necessary adjunct to development of many technologies resulting in "modern computers". Engineering diagrams, CAD/CAM, etc., lead naturally to manipulation of graphic elements; and inclusion of those methods in UIs fairly naturally follows. Engineering modern systems is hard. – user2338816 Dec 12 '16 at 2:37
As almost anyone who has ever used a shell would say, a text-based UI is much more comfortable, fast, easy to develop for, and just BETTER. The big problem, though, is that it's a language you have to know before you can do anything with your computer. This is the main advantage of a GUI.
So I think what you should consider is a way to explain why computers can always presume that the users "speak their language". I see a few options:
• Computers started out as a very elitist technology, and knowing the language is a kind of status symbol. This would give people the motivation to learn, and developers the motivation not to appeal to less-sophisticated audiences, because that would ruin their brand. Soon, the language is just common knowledge.
• The language in the world is in the first place very accurate and structured. There is always exactly one way to say everything. (I think this could be very interesting to develop, but also quite hard)
• The language of the computers either developed very fast or co-evolved with the human understanding of it, i.e. the computers would "learn" a new word, which would then be made famous so that everyone would know it.
• Comments are not for extended discussion; this conversation has been moved to chat. – Monica Cellio Dec 9 '16 at 3:20
One simple change:
Never invent a Computer Mouse
No matter how comfortable a graphical user interface (GUI) is, it wouldn't be nearly as comfortable and useful without the invention of the computer mouse (and later touch interfaces).
While text interfaces date from, and are still designed for, keyboard-only use, you cannot comfortably or reliably use any GUI without a mouse or some other 'pointer' to select things and interact with them.
The invention of the computer mouse, and thus the pointer, brought with it the era of pointy-clicky, a derogatory term referring to buttons and interactable areas that are fully virtual, as opposed to the hardware reality of a keyboard. Now, instead of having to work with a limited set of input functionality, the only limit is the number of pixels a display can show (and if you abuse scrolling, not even the screen size is a hindrance for your mad interface experiments).
The combination of mouse/touch and GUI cuts away a layer of indirection that is always present when you have to type a command and confirm it before anything happens. Even though you could react to every keystroke directly, there will be a finite set of interactions per program state, while the set of interactions that can be made with mouse/touch is potentially unlimited.
Elaboration on the evolution of your interfaces:
Now, even if you only have an indirect way of interacting, GUIs will eventually emerge, although your GUIs will be massively different from the GUIs we are used to (and have come to hate or love).
The eventual GUIs will be more like a graphically enhanced text interface (GETI): the graphics will be used to display things such as video and images, or to draw nice backgrounds, gradients, etc.; the classic prompt is unlikely to disappear.
Eventually it is also likely that voice input becomes more common. Voice input will simply be an addition to, and pseudo-replacement for, the keyboard, but it cannot fully replace the keyboard unless voice processors become far better than they are in our timeline, or your software becomes more lenient and is outfitted with pseudo-intelligence that can guess what you're intending to do and assist/clarify by asking you for additional input when needed.
• Clever but I'm not too sure about this. Touchpads can be made; touch screens can be developed instead; or, at the most basic levels, there can be something like arrow keys to move the cursor and a spacebar to select. I wouldn't say no mice means no interfaces; we would just replace the mice. – Zxyrra Dec 5 '16 at 23:41
• An interface is not just about input, it's also about output. Even without specialized input devices, people will still develop graphs for displaying information. – user2781 Dec 6 '16 at 0:18
• @Zxyrra but what's the impetus to "invent" touch screen? and currently we have a path to touch - Console => Gui => Mouse => Touch... with about 100 revisions of GUI. Hell... look at all the issues there are going from GUI to Touch... I couldn't imagine going from keyboard to touch without the same or worse issues. – WernerCD Dec 6 '16 at 0:44
• My Nintendo 3DS has GUI but no mouse. PS and Xbox all have GUI but no mouse. Mouse is handy for GUIs, especially at PCs. There are however many cases where GUI can work without mouse just fine. The GUIs could look different, but they would still be there. Simply not inventing mouse/touchpad/input device X won't do. – MatthewRock Dec 6 '16 at 10:37
• Our GUI interface archetype is WIMP, which stands for "windows, icon, menus, pointer". It does not require a mouse to move the pointer. There are many WIMP interfaces using keypads or joysticks to move the pointer; and there are WIMP interfaces using lightpens or touchscreens as a direct pointer. The mouse can even be seen as a distraction on the way to "true" pointing using physical touch. It definitely isn't a prerequisite for a GUI. – Graham Dec 6 '16 at 12:32
You could have had a major breakthrough in voice recognition in the early days of the computer. The effect of this could be that interfacing would evolve around using voice and ear, as opposed to eyes and hands.
The added benefit of this is that you can continue using your hands and eyes to perform certain tasks (e.g. you're fixing a car and asking the computer for help in the mean time).
(This in turn means that no effort is put into developing GUI's for computers, but debugging/configuring might be done using a CLI)
• Saw this comment on UX and I think your answer dovetails nicely. ux.stackexchange.com/questions/101990/… – bob0the0mighty Dec 6 '16 at 16:04
• I'm not so sure about that. Voice recognition has most of the disadvantages of CLI, with few of the advantages. The only real advantage you get is when you can't use your hands (or, to some extent, your eyes), or when you can't type very well. You'd pretty much need a fully capable expert system to make voice recognition work better than a GUI or even CLI. – Luaan Dec 6 '16 at 16:36
• This would work well especially if computers could read text at a high level. Think about how futuristic computers were portrayed in "Alien", for example. The captain just wrote out complex questions to it. Star Trek, especially Next Generation, could do a lot just by talking to the computer AI. – Jason K Dec 7 '16 at 4:09
• @Luaan I agree that there are disadvantages, you can't display a nice graph with audio for example. However if voice/ear starts out as the mainstream way of communicating with computers, it could hinder the development of advanced computer screens and the construction of software visualizations, since there is no market for it (yet). – Deruijter Dec 7 '16 at 8:56
• I wasn't even comparing it to GUIs - just trivial CLIs. Even there voice recognition is a loser (again, unless sight/touch are impractical for some reason). GUIs (or "TUIs", if you want to keep them separate) blow it to bits. The most "realistic" approach would be what Jason suggested - if the computer could actually understand arbitrary human speech, it would mostly combine the good parts of CLI, GUI and voice, rather than mostly combining the bad parts of each :) Voice recognition isn't enough - you need expert systems, and flexible ones. – Luaan Dec 7 '16 at 11:50
The assertion that "modern general-purpose computers generally run graphical software and have a graphical user interface (GUI)" is simply false. The vast majority of servers have no GUI; see "headless server". They live in rows upon rows of racks and can be accessed only over the network. The computers behind search engines, on-line storage services, web-based mail services, enterprise resource planning software, questions-and-answers boards such as this one, content management systems, the computers providing file, print and streaming services, and in general the computers which serve the interconnected documents forming the world-wide web do not have graphical user interfaces (with, of course, the rare exceptions expected from everything in IT). A better formulation would be "workstations (and gamestations) generally have GUIs"; workstations have generally had GUIs for a very long time. The windowing system in current Linux distributions is based on the X11R6 protocol, first released in 1994.
The first major class of mass-marketed applications which used full-screen graphics were games. Games ran in full screen graphical mode on the ZX Spectrum. The first GUI-based "killer applications" were desktop publishing and pre-press work.
The major problem I see with character-cell interfaces everywhere is multi-language support. A computer which can show very many thousands of different characters on a character-cell display can also show graphics on the same display -- a computer which can show 中华人民共和国 can certainly display graphics. And since it can display graphics, it will display graphics: some young student at a university somewhere will write a graphical interface and game over. Unless...
The only way to preserve character-cell interfaces for the masses is to make them compulsory; suppose that the domination of the computer industry by a big blue three-letter corporation had not been met with anti-trust challenges from the government of the greatest power in the world. Suppose that on the contrary that domination would have been enforced by the powers that be; no such thing as open-source operating systems like UNIX, no such thing as simple-minded operating systems like MS-DOS and the classic Mac OS; all computers run safe, secure and reliable operating systems like OS/360. Wouldn't we all be happy with the character-cell variant of the Common User Architecture?
• Lots of servers run server variants of Windows, and even Windows Server Core has a GUI (it's a very stripped down GUI, and is mostly used for displaying command line windows, but despite what Microsoft calls it, it's still a GUI, not text-based, at heart). Add to this just about every personal computer there is (which will generally run Windows, OS X, or Linux + X one way or another) and consider the computers in routers, microwave ovens, washing machines, cars and whatnot to be not general purpose, and I suspect my statement holds. – user Dec 5 '16 at 21:41
• @MichaelKjörling: A large part of those Windows servers do not even have screens and keyboards attached... That's why Powershell is so much in fashion in the Windows Server world. But yes, RDP is a thing and quite a few Windows-based servers are accessed graphically over RDP. Still, many general-purpose computer do not have any kind of graphics software installed. – AlexP Dec 5 '16 at 22:30
• Arguably, many of those servers are accessed through a GUI: a web browser, whether it's an end user visiting an online store or blog, or an admin accessing an admin control panel. And there's other similar GUIs, all your GUI email apps, your GUI chat apps, your GUI video streaming apps... GUI isn't limited to just locally hosted X11 or Aqua or Windows Shell; the apps within them can present GUIs for remote servers. There will of course be cases where a server really is exclusively accessed by users through text-only means, but headless server does not automatically mean GUI-less. – 8bittree Dec 6 '16 at 16:29
• @MichaelKjörling Nay, that statement doesn't hold. There's pretty well-verified statistics out there on the numbers of computers, personal vs. data-center/server, and how many of those are running Linux vs. Windows vs. Mac OS X vs Solaris vs AIX, etc. And my router is reasonably general-purpose. Sure, it mostly does routing, but it's a Linux device doing various non-router things for me. This is of course moot relative to your question: headless servers may be the majority, but they're a technical niche in many ways, just more numerous. – mtraceur Dec 8 '16 at 7:01
• Servers aren't general-purpose computers – noɥʇʎԀʎzɐɹƆ Dec 11 '16 at 0:00
I think GUIs are so popular because visual learners make up the majority of the population. With 2 of every 3 people being visual learners, they constitute the largest market, just as most things are made for right-handed people. If you make auditory learners the majority of the population, followed by kinesthetic learners, with visual learners a distant third, the market will adapt and GUIs will be an expensive niche market.
I'm a programmer and I don't like text UIs. I know very well how powerful they are; I learned to be quite good with bash, and use it every day at work to administer our UNIX servers, but given the choice I would always choose GUIs. That's how my brain is wired. I learned to use Emacs, but I always go for Atom & Visual Studio.
P.S. Image taken from Successfully Using Visual Aids in Your Presentation
• I was going to answer this. Make your language easy to recognize and for computers to understand and voice terminals will be much more prolific from the start of computing. – Jorge Aldo Dec 6 '16 at 6:36
• CLIs fall into the Visual sector of your diagram, the same as GUIs--they're all about the recognition and manipulation of symbols that you see on a screen. If auditory and kinesthetic learners were the vast majority of the population, I could imagine a lot more motivation for the development of voice interfaces and haptic interfaces, but I think a preference for CLIs requires a different explanation. – David K Nov 14 '17 at 23:21
Your world does not have pixel-capable screens. With the components readily available, one could be built only crudely, at impractically large sizes (billboard size or greater), and with large gaps in between the dots. But no hardware or software (ray-tracing, etc) was ever developed that would make good use of this, and no one except maybe sci-fi authors really sees much value in such a thing.
If all you have to make desktop monitors out of is arrays of seven segment displays, then you have a text-based user experience built into the hardware. If the monitors are literally made out of 7-segment displays (or something like them), and particularly if you bring in a historical/legal basis for that, then you don't really need any tortured argument about why they don't just draw pictures on the things, because the capability isn't there.
You can also offer some other side benefits of this that are off-limits to us in the real world. Like having the monitor be just another cheap USB device, or Bluetooth device, with virtually zero power consumption. And you can bring back ASCII art in a big way.
This conception of technology requires a divergence of technological development from the real world somewhere around 1900. Radio is in, television is out. Comic books, dime novels and penny dreadfuls are in, cinema is out. Old-fashioned seismometers and other machines that directly draw on paper are in. The advent of computers still happens, because this was done for reasons of code-breaking and mathematical research (Babbage, Zuse, others). Blinkenlights are in.
Cheap and accessible photography is out; most people can only afford one or two family portraits in their lifetimes, and it's all film based. But for the price, the quality standards are very high, and portraits are typically stereographic (gives more flavor for divergent technological progress).
Printers are very fancy, very cheap (and the ink is even cheaper!!), and very fast, with advanced typography capabilities, and paper is incredibly cheap and easily recycled. Even sophisticated book binding is a standard feature on a very affordable printer.
If you need a "nuclear option", further reinforce suspension of disbelief with copyright law. In your world, equipment manufacturers would be held liable for any device capable of showing a photograph or facsimile of a copyrighted oil painting. (If you go in this direction, have "the Betamax case" occur 100 years earlier, applied to single-frame film photography, and decided more or less in the opposite from real history. The real case was a 5-4 split decision!) Strictly control photography licenses on this basis, further accounting for the high price and therefore rarity and superior, exalted quality of photographs.
For all these reasons, no one has much motivation to develop technology capable of showing pictures, and the work it would take to match the analog capabilities with any digital graphical system would be far too high for amateurs to mount a successful attempt. Even serious efforts with serious budget would be perceived as crude toy projects, or worse, as illicit subterfuge, without any legitimate practical use.
All these background factors will hopefully reinforce the divergence away from pixel graphics and create a huge barrier to introducing it into your world. ("Such a monitor would require way too much power!" "Stereoscopy would be next to impossible!" "You would have to upset 100+ years of copyright law and legal precedent!" "Even simple line art would look like garbage!")
"There's way too much information to decode the Matrix. You get used to it, though. Your brain does the translating. I don't even see the code. All I see is blonde, brunette, redhead. Hey uh, you want a drink?"
• There is a variant of the 7-segment display called the 14 segment display that can display the full Latin alphabet. – Stig Hemmer Dec 6 '16 at 8:24
• While most CRTs project a spot, it's possible to have the beam project other shapes, and some early displays for things like air traffic control displayed alphanumerics by selecting letter-shapes for the beams and flashing them at the required location. Such an approach would probably not require turning the beam on and off as quickly as would be necessary with a raster display. – supercat Dec 7 '16 at 17:44
• -1 This proposal is simply not technically plausible - for example if "Cheap and accessible photography is out" then so is making integrated circuits via photolithography, which means that "computers" are stuck in the discrete component stone age. In similar ways, the whole dense-pixels-can't-be-done idea is entirely incompatible with anything approaching the computational density found in our world; if pixels are huge, then so are logic elements. – Chris Stratton Dec 9 '16 at 4:26
• @ChrisStratton "with the components readily available". I only propose that certain things that could have been done, were not done. – wberry Dec 9 '16 at 23:17
• The problem is that you are also proposing things that essentially require as supporting technologies the very things you propose didn't happen. You can have nobody choose to look at the equivalent of a display, but you'll have the technology to build them. – Chris Stratton Dec 10 '16 at 6:25
Make porn and video games not a thing.
Now who cares to make computers handle more graphics? Good luck on getting people to believe it.
Make mobile computers useful/desirable earlier.
If we had hand held computers that could do something useful or cool before anyone had gotten graphics running, or when graphics would have been battery prohibitive, text only could have become the standard way everyone uses computers.
Make programming much more popular
If most people write at least some of the programs they use and text is the (easiest) way to interact with them text will be popular. This could happen if copyright got out of control or people lost trust in distributed programs.
Make illiteracy or functional illiteracy a bigger issue.
You don't want to look like the only guy at the meeting who needs pictures, and you really don't want to imply your boss can't read.
• I'm still waiting for internet historians to validate me, but I am 100% convinced that web browser graphics are driven solely by porn. – kingledion Dec 5 '16 at 21:09
• @kingledion I thought they were all about mosaics? – dot_Sp0T Dec 5 '16 at 21:23
• I present to you: AAlib. – Mark Dec 5 '16 at 22:51
• @kingledion Mozilla's image rendering library was named libpr0n and its main goal was "to render pornographic images in an efficient way." (Yes, that site is a joke by one of the developers but the library really was named that. It was renamed in 2011) – oals Dec 6 '16 at 13:36
• @oals I clicked that link at work and now I think I'm going to be fired. – kingledion Dec 6 '16 at 14:11
I'm surprised so few people have touched on the possible cultural motivators that would limit/prevent the development of GUIs.
My first thought was (no pun intended), "iconoclasm".
In a world where iconoclastic religion holds sway, people will believe that GUIs are evil and/or degenerate. Words are important; unnecessary representations of things are an affront to God.
@Dotan Reis's idea regarding elitism has real potential too. If the early computer users were both rich AND smart, then a personality cult of computer-elitism would lead people to only ever want to use text-based interfaces.
• This is a much stronger motivator for avoiding GUIs than any technical limitation. – barbecue Dec 8 '16 at 20:44
• Iconoclasm powers editor wars. – noɥʇʎԀʎzɐɹƆ Dec 11 '16 at 0:04
• Actually that makes sense. If games in Germany are censored so as not to show certain WW2 figures, and Facebook censored a Swedish government video concerning breast cancer, then a developer in an iconoclastic society would really be overcareful not to have his program classified as adults-only. – Shadow1024 Jun 21 '17 at 14:01
• Stop the push to put a computer on every desk; TUIs can be used by experts, but GUIs were all but required to make the jump from "specialist equipment" to "general use equipment."
• Never see a capitalist-driven push to create a consumer workstation market (TUIs work for trained professionals, and don't demand a GUI)
• Increase the culture of elitism towards computers; it has forever been a trend (although diminishing as time goes on) with computer/IT people to prefer more difficult means to prove oneself; many IT guys today "prefer" Linux, but can't provide a non-cardboard-cutout argument as to why. Command Line/Terminal being the same deal.
• Hamstring the display market. Keep monitors primitive, mono-colored.
• Introduce a terribly executed marketing ploy for GUIs; turn the consumers and the market off the idea
• Have major OS creators/communities view GUIs as inefficient and ineffective. More elitism.
...Basically kill the capitalist market drive, and introduce bad press and elitism to run GUIs away.
• But terminals are better, they're closer to the software and often provide access to more functionality easier than a GUI does – dot_Sp0T Dec 5 '16 at 20:55
• @dot_Sp0T TUIs will always require a steeper learning curve and make features and functionality less obvious. They're less inviting to new users, require more investment, and are less intuitive. Those are the reasons GUIs took over. Also a big reason why touch controls on mobile devices took over. TUIs aren't better than GUIs, but GUIs also aren't better than TUIs. Which to use depends on the environment, the user, the technology, and the culture. – Ranger Dec 5 '16 at 20:58
• This answer is absolutely biased and not based in fact. Text-based interfaces are demonstrably better for a lot of tasks than graphical ones. Composability and automation are not a given with GUIs, yet come naturally to text-based UIs. GUIs are the more accessible tools, but certainly not the more useful or more powerful tools. – Polygnome Dec 6 '16 at 0:09
• There is nothing "more difficult" about Linux (there are many versions of Linux and I can't speak for all of them); it is a simple and effective OS. If I were selecting an OS for someone with no computer knowledge, then Linux Mint would be a good choice because of its tendency to carry on working once set up (and setting it up is quite simple). Linux tends to make it easier to add your own code to the OS and do certain advanced actions that are not available on other systems. That's why many experts use it. It's just not what you're used to. – Donald Hobson Dec 6 '16 at 1:24
• @DonaldHobson I didn't mention any OS specifically, and yes I agree that multiple distros of Linux with GUIs make great, user-friendly OSes. On the other hand I wouldn't hand a terminal-only distro like Linux Arch to your average stay-at-home-parent and expect them to enjoy their experience. – Ranger Dec 6 '16 at 4:16
Search, don't sort.
Apple implemented similar features in Vanilla OSX at a similar time.
No more clicking through sub folders trying to remember where you stored something. Simply remember some fact about it: Words in the title, words in the content, last modified date. Enter some of those parameters as a search, and the file appears instantly.
In terms of what you could do to move from "we don't use GUIs much" to "we don't use GUIs", either improve A.I. search capabilities, or send Microsoft bankrupt.
With MS out of the way, your computer's GUI would look like the Google home page. Blank white space, a single text box for input. At that point, it's not really a GUI any more.
• But if you have more than one file matching the criteria, you need a GUI to select the one you want. But suppose you find your holiday photos OK. Now how do you edit them without a GUI? How about spreadsheets and word processors? I'm old enough to have used spreadsheets and word processors before there were mouse-based interfaces, and there's a good reason WYSIWYG editing killed non-WYSIWYG - if you care what your document looks like, going round the loop of "render, not quite what I wanted, render again, too far, render again" is a painful waste of time. – Graham Dec 6 '16 at 12:44
• @Graham it seems you never used latex before. Photo editing will be painful though – Cem Kalyoncu Dec 6 '16 at 19:59
• @Graham unfortunately, LaTeX's rendering loops are sometimes quite annoying, but for many types of document it's still by far more efficient than anything you could do with WYSIWYG, especially if you're concerned with accurate design. — A spreadsheet is just a poor man's replacement for a proper data language. — With multimedia manipulation you're undeniably right, you don't get around a GUI... though even here there's a certain trend towards text-based editing, with ever more scripting capabilities built into CADs/NLEs/DAWs and even some innovative pure graphics programming languages. – leftaroundabout Dec 6 '16 at 21:27
• re: spreadsheets - it's the same situation as with OS - kill off MS and the competitors will succeed with something more practical but less pretty. Wolfram in this case. – Scott Dec 6 '16 at 22:58
• @barbecue maybe I didn't make my point clear enough. Yes, the Google Homepage is a GUI. It's a textbox and a button. But you could literally replace it with a command line text user interface and it would be exactly the same. It doesn't need to be a GUI, and doesn't use any features that a GUI is good at that a TUI isn't – Scott Dec 9 '16 at 2:01
An important thing to consider here is that once you've gotten past the steeper learning curve, working with text-based input is frequently much easier than using a GUI.
An example: Suppose I have a directory containing a few thousand files, scattered across various subdirectories. I want to sort them out into separate directories based on various criteria. Let's say I want to move all the files starting with "foo-" and ending in ".log" that were created in the last day.
In a GUI, the most efficient way I can do that is probably to sort the files by file extension, then go into each subdirectory, find the block of files starting in "foo-" and ending in ".log", then right click on each individually, open up properties, check the modified date, then drag it into the new directory if it was modified in the last day. Then I move to the next file and do the same thing. And hope I don't make any mistakes while manually doing this a few hundred times. And in practice, if all I have is a GUI, I'm just not going to reorganize those files because there's no way I'm going through all that.
With a command line, I type find ! -type d -name 'foo-*.log' -mtime -1 -exec mv '{}' 'other_directory/{}' \; and I'm done in 5 seconds. And in practice, it takes about 5 minutes because I don't use the -mtime argument that often and I need to look it up in the manual real quick (which consists of typing man find, then /modified to find the right section).
For most tasks, the difference isn't quite that extreme, but the command line is almost always the more powerful option. The command line version certainly looks more complicated (and to be fair, it is), but once I learn it, I can get things done so much faster than I could otherwise. Aside from my web browser, the only reason I use a GUI at work is so I can keep multiple terminals on the screen at the same time. Unless the task is specifically graphical in nature, a GUI just feels like a toy to me.
Now consider your requirement that the systems not be "Expert-only". I won't deny that right now, proficiency with the command line is generally expert-only, but think about average difference in computer literacy between a 14 year old and a 74 year old. The adult has had just as much time to learn the skills, and yet they struggle with it. But the kid grew up with this stuff and finds that it comes naturally. If you create a society in which most people learn how to use a command line as an "Experts-only" skill, then in a generation or two, it'll just be another trivial skill that everyone learned as a kid.
Edit: A couple people have mentioned GUIs that can filter files according to modification date, so here's a slightly more complicated example. This will sort all .log files into directories of the form 'logs/2017-05-20/' based on their modification time, creating the directories as needed.
find ! -type d -name '*.log' -exec bash -c \
"export DIR=\$(date +logs/%F -d\$(stat -c @%Y '{}')); mkdir -p \$DIR; mv '{}' \$DIR/\$(basename '{}')" \; • I think your case is example of bad GUI [bad for specific task], not of command line superiority. I could do your kind copying in Windows Commander GUI easily in 10 seconds. – Arvo Dec 7 '16 at 8:38 • The example is a bad one even using basic Windows. Open Explorer. Go to the directory you want to search. Type "foo*.log" in the search box. It will give you the option to add a search modifier, one of them the last time the file was modified, and you can select a date range. The results will show up, and you can drag and drop them all to whatever folder you want. – Keith Morrison Nov 21 '17 at 20:21 Just a little suggestion: You might also want the data entry keyboard to be totally different. The guy who is most responsible for the GUIs and mouse we used today, Douglas Engelbart, had originally developed a chord based input system where instead of having buttons for every letter the user had a single handed keyboard that used combinations to create letter - like chords on a guitar. It's worth looking into. • How would this stop GUIs developing? If anything, having a spare hand would seem to make a GUI more likely to evolve, because users wouldn't have the useability issue we all share of having to move one hand between the keyboard and mouse. – Graham Dec 6 '16 at 12:36 • @Graham it would make text-based input continue naturally into the mobile age. If everyone had a bluethooth keyboard-glove on all the time, a terminal would be the most effective way of interacting with your phone. (FWIW, I'm typing this from a 10-finger keyboard, using vimperator to compensate for the problems of Firefox being GUI based...) – leftaroundabout Dec 6 '16 at 21:12 • I didn't mean to imply that a cord-based keyboard would prevent the development of a GUI interface. I think that is inevitable but I thought the cord-base keyboad was different enough without being too radical to fit an alternative universe as described the poster. – RMH Dec 7 '16 at 14:25 • @Graham You might be referencing this, but that was the original plan - one hand on the keyboard and one hand on the mouse at all times. Mouse for navigating, keyboard for data entry. – TessellatingHeckler Dec 7 '16 at 18:56 • @TessellatingHeckler Yeah, that's the idea. There's a reason fast jets use HOTAS - it's simply the best ergonomics. The same principle for computer use is definitely an advance. Unfortunately we have always had a large user base with QWERTY keyboards (or AZERTY or whatever local variant) which made this impractical. As always, there needs to be a strong reason to change an established user base. The mouse was simply a better way to move a pointer than cursor keys, and better for fine control than a joystick. The chording keyboard didn't have enough incentive to displace QWERTY though. – Graham Dec 8 '16 at 11:23 There are a few general ways to make modern computers that are not GUI intensive. Change Computer History: This is somewhat of an obvious choice, because there were a few big pushes in computing that made the GUI happen. On our own planet Earth, computers became huge in the countries that won WWII and the cold war, A.K.A. Britain and America. This connects to a recent network question, "Why are all coding languages in English?". So, what's important about that? Well, America is a capitalist country, every company that hopped on the computer bandwagon created their own coding language. 
Just think about today: we have Haskell, C, C++, C#, Java, etc. For the command line we have Cmd on Windows and the terminal on Linux and Apple. But what if the government got more involved? In 1965, America passes a bill that makes one American coding language, which will be used for all programming and command-line work. It will be developed in a project similar to the Manhattan Project, drafting the best minds in computer science, who all have to work together. All of a sudden, a huge barrier to entry is diminished; people only have to learn one new computer language instead of seven. The government also decides that it wants the most powerful computers possible to run missile guidance systems, nuclear subs, etc. They don't have time for fancy stuff like graphics. The drive for "a computer on every desk" never happens; instead, the government puts a computer in every school for kids to learn on. Now those kids grow up and buy their own computers, using nothing but the command line. Eventually, the technology is released to the public and a new company makes the GUI, but no one cares about that fluff, as it is in an alpha stage and is pretty crappy. It is seen as a dumb luxury like VR in the 90's and won't take off for at least another few decades, if ever.
Limit Computing: As mentioned in another answer, the internet rules much of our life. And when bandwidth was low in the 90's we didn't send sweet memes; we sent ASCII, or just words. If the bandwidth is limited, all of a sudden images go away and the internet is text-based. Now, take away the non-connected desktop - the government says all computers must be linked to the net at all times, so there is no longer personal computing - and the biggest factor is bandwidth. If bandwidth is limited, no GUI.
Limit People: Not a great option, but if people are blind, GUI is unimportant. If people are colorblind, people don't like the way the GUI looks. It cannot convey as much meaning, so it isn't used. If people have no hands to use it, then they have to use voice dictation instead. In these cases, GUI is never bothered with.
• "In 1965, America passes a bill that makes one American coding language, which will be used in all programming" That reminds me of COBOL ("created as part of a US Department of Defense effort to create a portable programming language for data processing") or Ada ("originally designed by a team led by Jean Ichbiah of CII Honeywell Bull under contract to the United States Department of Defense (DoD) from 1977 to 1983 to supersede over 450 programming languages used by the DoD at that time.") – user Dec 6 '16 at 15:29
• +1 for what I think is the key: early education. Imagine if instead of lessons in middle school on how to make PowerPoint slides, you got lessons on solving various problems using a Linux/Unix-like terminal/shell environment. GUIs would still happen, but the average person would grow up content never making the jump into GUIs, finding it very odd/unintuitive, the reverse of what we have now. – mtraceur Dec 8 '16 at 7:06
• Your analogy with programming languages is slightly flawed. "Terminal" is just a GUI frontend to whatever command shell you have set as your default. The shells themselves would be analogous with programming languages. So, sh, bash, dash, ksh, csh, tcsh, zsh, just to name the common ones. And the MS world has cmd.exe, command.com, and Powershell. – Ray Dec 8 '16 at 23:23
• @Luaan - You are quite right! That is why the free market sells so many computers with GUI. I love my GUI.
But, while you can argue that CLI is also a type of GUI, it is minimalist. And regardless of the truth behind many computer scientists' opinions, the perception is there! If some government had the same perception, then it might impose laws to ban wasteful, extravagant GUIs and push CLI instead. That's the way alternate history works: as long as there is a perception, there is a possibility, even if it isn't what happened. – EvSunWoodard Dec 12 '16 at 16:39
Well, you kind of kill it when you say that Norton Commander, Emacs, vi and friends don't count as GUI. At that point, there's hardly anything left that does count as GUI, perhaps just the visual fluff you get from high-resolution (e.g. more than 80x25 and such) displays. So, let's assume that's exactly what you mean. No fluff.
Why do we get so much fluff? When it first comes, it has a certain novelty aspect. But that wears off rather quickly, and is actually quite discouraging to many users. Just look at all those examples like rounded corners, gloss, transparent windows and similar - you show them off for a generation or two, just to flex your muscles in front of a crowd of fawning fanboys, they get copied all over and used in all the wrong applications, and then the novelty wears off, and the fashion changes. Look at Windows 10 compared to Vista (all that gloss and transparency!) or XP (rounded everything!). Windows 9/10 design is simple, clean, unobtrusive; a nice show of what remains when you get rid of the fluff.
So why do the graphics remain, rather than going back to text interfaces? The answer is actually quite simple - it makes a lot of complicated problems easier. Mind you, I'm not saying it's a panacea. It isn't. Text interfaces still have plenty of benefits:
• Friendlier for remote terminals
• Easier human auditing, with easy logging of everything that happens at the terminal
• Easier showing of history in general
• Easier composition of text-only applications (though this fades when any sort of "GUI" enters the equation, even in text-mode)
Now, of course, graphics had a head-start in applications that were, well, graphical. Computer-aided design. Publishing. It's not really a long list. Even today, some people can't stomach using a graphical interface for things as complicated as DTP - at best, they have a graphical window into what the layout is going to look like on paper (or what have you), while they do the actual editing in something like TeX, or even MarkDown or (gasp!) HTML.
Why did graphics win on the desktop in general? As noted before, text-mode applications still had great "GUIs"; you still had full-blown integrated environments with all the cool things true GUIs give you, like keyboard shortcuts, menus, mouse control, hinting, all the nice discoverability. Exactly because of those advanced users that everyone here is calling to the rescue. Why? Because there was no compatibility anywhere. Everyone did text-based applications their way. Even attempts at standardisation like POSIX, or even MS-DOS (which was designed to be quite a bit different than it actually turned out, mostly for - guess what - compatibility with IBM DOS, which got released slightly earlier) mostly failed. Even on the IBM PC (and its clones), where Microsoft quickly gained dominance, every application had its own idea about what commands should be named, what actions should do what, how to format their input and output data. Nobody tried to make common interfaces or formats. There were just endless arguments about who was better.
There was no end in sight. And then Xerox came along with their revolutionary work at PARC. Mind you, much of it was utterly impractical when the research teams actually designed it. There were no computers powerful enough to run their systems while also being anything close to affordable by any family, or really even corporations. But computers got powerful quickly, and everyone went to the well. Atari, Amiga, Apple, Microsoft - everyone adopted the same basic paradigms. Everyone also added some of their own, but those were also quickly spread in the new world - a world of inter-operation and compatibility. In no small part because the ones who cared about compatibility started winning. MS-DOS wasn't the best OS, not by far. Unless you cared about the fact that it ran pretty much everything. You could take your applications from DR-DOS, IBM DOS, and a few dozen other Something-DOSes and OSes, and run them on MS-DOS. Which OS do you buy? The one that has you locked in to a couple of software packages, or the one that gives you pretty much all of them? Which OS do you design software for?

Windows wasn't the first graphical OS, but that didn't matter anymore. The drive for compatibility was already there, and in full blow. Use a mouse to point at a button, press the mouse button, action happens. Every application on every system behaved the same. You had windows, you had buttons, you had scrollbars and menus - and there was a lot of pressure to unify their behaviour as much as reasonable, while still appearing somewhat different. And even when platforms differed (slightly), two applications on the same platform never did - something Linux still struggles with to this very day, with the misguided idea that it's the application that should pick the GUI, rather than the user.

What did "advanced" users do? They utterly and entirely ignored it, happy with their proprietary (funny, eh? :)) and incompatible CLIs. Advanced users are a lot more invested in their platform, simply because they invested so much time and effort in becoming proficient in that one platform. Advanced users are the bane of progress.

So the solution isn't to make everyone an advanced user, quite the opposite. Expect no effort from your users. Start with environments that try to standardise their interfaces - use the same keyboard shortcuts, naming conventions, formats. Think about accessibility, not just efficiency. Sure, ls is fine if you have a horrible keyboard or you can't type very well - but list is a hell of a lot more accessible. Use aliases if you need to, but even those should conform with other systems - you're not going to keep carrying your aliases over to other computers you need to use; just stick to defaults. Kick out anyone who doesn't play nice. Get rid of the hipsters, who not only can't recognise progress - they sneer at the very idea of progress. A nice, compatible and mostly standardised interface will give you the inertia you need. Applications like Norton Commander, not command-line ls. Applications like Turbo Pascal, not vi. Search by wildcard, not regular expressions (but feel free to keep the advanced option!). Sort "by human", not "by computer" - Folder 100 should never end up in sort order between Folder 2 and Folder, deal with it (a small sketch of such sorting follows below). Learn everything the graphical OSes did right, and use it too.
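To make the "sort by human" point concrete, here is a minimal Python sketch of a natural-sort key (my own illustration, not part of the original answer):

import re

def natural_key(s):
    # split the name into digit and non-digit runs; digit runs compare numerically
    return [int(t) if t.isdigit() else t.lower() for t in re.split(r'(\d+)', s)]

names = ['Folder 100', 'Folder 2', 'Folder']
print(sorted(names))                    # ['Folder', 'Folder 100', 'Folder 2']   ("by computer")
print(sorted(names, key=natural_key))   # ['Folder', 'Folder 2', 'Folder 100']   ("by human")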
Don't consider remote terminals too much, even smart terminals - you'd never get a real interactive application there; bandwidth is less of an issue than in a graphical application, but latency is just as horrible, in some cases even more so. Standardise rich terminals; streaming-text-only isn't good enough by far, and neither is just text positioning on a fixed background. Make it really smart, like what true GUIs managed to do. Keep focus on freely integrated systems, rather than large proprietary bags of tricks (and no, keeping it "FSF" or "OSS" doesn't make it any less of a "large proprietary bag of tricks"). Have developers all over the world cooperate on what they're doing, rather than competing purely out of spite and other misguided initiatives. Find ways to engage users and improve their productivity, instead of arbitrarily introducing differences just to make conversion harder. Instead of ten competing packages "of everything", modularize - give users an easy way to make choices without making things appear too complex. Remember how Turbo Pascal, despite being an IDE, actually allowed you to plug in a custom linker, compiler, debugger...? Encourage that model. The company that's great at writing compilers isn't necessarily the best at linkers. Introduce productivity and discoverability features like auto-completion that mostly had to wait for GUIs in our history.

Does that leave us with all the problems solved? Almost. There are still things that graphics just does better. Layout is much easier with higher resolution, and resolution-agnostic design is much easier with higher resolution. Allow improvements over the text-mode ideal - for example, allow combining multiple "tile" sizes on one screen, so that you can e.g. have text written "as-if-in-80x25", while allowing other elements to be "as-if-in-80x40". Allow graphical elements to be included in a text-mode application - so that you don't have to keep changing the whole screen just to have a WYSIWYG look at your document, or to show graphs inside of a spreadsheet. This is the truly complicated part - at some point, it becomes harder to justify that having two ways of doing fundamentally the same thing is a good thing; why have "hybrid" rendering on a Haswell machine, when you can render everything in graphics mode just as quickly, while keeping things simpler and prettier? Use accessories that can exploit extremely cheap low-resolution displays to keep better track of your whole system - or even give you a cool graphical "pretend" interface in a similar way to those Nintendo mini-arcades, without giving up on the benefits of text mode?

• To be fair, emacs -nw is pretty definitely not a GUI, but a TUI. And many TUIs that process mice only do so because they, and terminals that allow the underlying program to interface with them, are widespread in our world. If systems never went fully GUI, it would be reasonable to suppose that such support would either not exist, or be an afterthought, or just be unused by most users. – mtraceur Dec 8 '16 at 7:25
• Anyway, despite disagreeing on a few points and nuances, I +1'ed this. I think you touch on several good points about why GUIs developed how they did, what role the drive for consistency played, and some of the reasons why advanced users can be (though I wouldn't agree with "are") impediments to some forms of progress. – mtraceur Dec 8 '16 at 7:35
• Why do you think mouse support is trivial? Do you know how the TTY/PTY (teletype/pseudo-teletype) subsystem in most operating systems works?
If I write a relatively flexible TUI and the user runs it in their shell in their terminal, there's no guarantee at all that I'll even have any indication of what the mouse is doing - unless the terminal converts mouse interactions into escape codes, or there's another non-standard API for accessing them from the terminal slave side. For the terminal environment, mouse support is a tacked-on afterthought kludge. – mtraceur Dec 12 '16 at 7:21
• Tangentially, however, I concede that many modern TUIs have approached GUIs in flexibility and functionality, so in some functional sense it's fair to concede that point. For instance, I'd describe irssi, a TUI IRC client (as far as I'm aware, no mouse support to speak of), as being functionally comparable to any GUI IRC client, minus skin-deep features like mouse support. So you do have somewhat of a point there. – mtraceur Dec 12 '16 at 7:27
• @mtraceur Well, that was always a problem of unix-like systems. It was never a problem on DOS, OS/2, Atari... because they didn't stick to the idea that you're controlling your system through a teletype (a tech older than a hundred years now!). That's why I noted that advanced users can hold progress back - because they have a much bigger investment in what they've already learned, and shun new approaches to doing the same thing, just because it would make the investment a waste (to some extent). There are so many things already working with TTY that the inertia was too great. Not so on DOS :) – Luaan Dec 12 '16 at 11:19

Your link gives a clue:

The Xerox Alto systems, because of their power and graphics, were used for a variety of research purposes in the fields of human-computer interaction and computer usage.

They built a GUI whose concepts are still recognisable today, and then researched human-computer interaction, which presumably just refined the ideas already raised, but more cynically may have justified the preconceived notions. An early "bright idea" got funded, and directly inspired the major GUIs that appeared in consumer products. Arguably, the ideas were ahead of the hardware, and early implementations were inferior to what might have been. If some different "bright idea" had been researched, studied, and refined in the early days before commercial products, we might have gone a different route. In fact, a paradigm that was not so graphics-intensive might have done better, sooner, before machines got powerful enough for the GUI to really be practical. Then, if the general public had caught on to concepts that transcended "direct manipulation" and "what you see is what you get (what you see is all you got)", as was felt by the experts, then even when things got prettier the notions of direct manipulation (only) might not have made the same inroads. It would be cool to know what concepts / manipulation paradigm might have been developed that would be better than a plain CLI.

Amazon Echo, Alexa, et al. are computers without a GUI. Heck, I even say OK Google to my phone to get it to do stuff like text my friend. (Funny story: no matter what I said to my first cell phone with speech recognition, it always misinterpreted it... "Call mom", "Calling Brian". "Call Neil", "Calling Brian".) I predict that in 10 years we won't interact with a GUI as much as we talk to it, or use "texting" (eg natural typing) for those times when talking would be rude (such as on a plane).

• Now try editing your photos using "OK Google". Not going to work. Voice recognition is nice as an input device, but that's all it is.
If you need output from the computer - whether that's a list of things it's found, pictures or whatever - then you need a GUI of some kind. – Graham Dec 6 '16 at 12:47
• @Graham I can tell you have never used a good TUI. You definitely don't need "a GUI of some kind" to get output from the computer. Check out for example Microsoft Works for DOS or Microsoft Word (available for DOS) or Norton Commander for DOS or PC Tools for DOS (also) or any number of TUI products. – user Dec 6 '16 at 13:40
• @MichaelKjörling My first DOS word processor was WordPerfect. Much better than Word at the time. :) I take your point that a text-based interface is possible to some extent - but only to some extent, and only for limited applications, and with greatly limited usability. WordPerfect's far-too-late entry into WYSIWYG was the direct cause of its failure. – Graham Dec 6 '16 at 15:46
• I worked with developmentally disabled people for ~20 years. It's amazing when you see someone blind from birth navigating a GUI better than you. Our interfaces for the blind are afterthoughts. Imagine if we had developed those interfaces first and developed GUIs as an afterthought. (Although, since humans are primarily visual, I would never find a world without GUIs believable.) – Tim Dec 6 '16 at 16:12
• @AllOfYou - The OP asked how to make GUIs secondary, not non-existent. Obviously a GUI is easiest when editing graphics/photos (tough for Sports Illustrated to increase a model's bust with only text/speech). And obviously a TUI is easiest when doing highly repetitious tasks (like .BATch files or .PS1 scripts). Even in today's GUI-dominated world a CLI can be quite useful and (frankly) preferable to an old DOS guy like me. A keyboard and mouse is really meant for someone with 3 hands IMO. It's a terrible interface, but slightly better than hitting tab 37 times to select the element I desire. – Tim Dec 12 '16 at 15:56

Have everyone in your world have bad to zero eye vision! This will enforce the need for screen readers. Screen readers with GUIs are a real pain. It is much easier to only read text than to describe a window, for example. Maybe this will have some more implications on your world, but it is definitely doable.

• Or the person/group that invented computers were blind. They invented the computer as a way of giving blind people an easier environment to work from, and then "see-ers" caught on to how useful computers could be. – josh Dec 6 '16 at 10:36

pre-1988: Xerox hires a brilliant legal team.

1988: Apple files suit against Microsoft, and Xerox against Apple, same as in the real timeline.

Then a lot happens in 1989-1990: Xerox wins, or settles to their advantage, the patent infringement case against Apple. Then they join as plaintiff in the Apple-Microsoft look-and-feel case and win that too. [In the real timeline, Microsoft won the look-and-feel case in 1994, and Xerox lost theirs.] Additional lawsuits relate to Americans with Disabilities Act (ADA) infringement issues. Companies that developed early GUIs without accessibility features or automation capabilities settle or are found liable. Xerox escapes liability because their GUI never left the lab, and their legal team is awesome. Apple and Microsoft are liable for civil damages despite losing IP rights to Xerox.
[In the real timeline, ADA rules have no teeth until 20+ years later.]

New government regulations, riding on public opinion in support of the ADA requirements, make accessibility and automation capabilities mandatory in all software, and introduce federal education funding and standards for text-based computer literacy in the USA, quickly cloned in Japan and Europe. Apple re-brands the Macintosh as a toy and pulls out of the educational market. Microsoft delays the launch of Windows 3.0 to remove features that infringed on Xerox's patents and to add ADA compliance features. The resulting product is late, unusable, and has no ecosystem support: a total flop which burns consumers and investors. On Linux, X11R6 development stops for lack of volunteers; although you can find early versions, they have become illegal (for lack of accessibility features) and unmaintained (like DeCSS is today).

1992: IBM launches OS/2 and nobody notices. Same as the real timeline.

Finally, by 1995 GUIs are both academically and commercially dead: Apple pivots to voice control, as they continue to be a leader in user experience, to compete against text interfaces. Microsoft recovers from the Windows 3.0 fiasco by investing in a 32-bit version of MS-DOS to compete against a now GUI-less Linux. GUI experience is now hazardous to your resume. Venture capital and research funding for GUIs dries up, like an extended version of the AI Winter. Tim Berners-Lee decides to focus on creating a free version of Gopher, abandoning work on HTTP/1.1 and X-Mosaic, so a GUI-based Internet never materializes. Xerox kills all GUI research and never launches a product. They retain all patents even during bankruptcy, preventing others from launching a product.

So in this timeline there is a roughly 10-year period between 1985 and 1995 where GUIs struggle to gain popularity and ultimately fail on multiple fronts, a full 20 years before "modern general-purpose computers" come along.

• Linux wasn't a significant player in the desktop market even by the early 2000s; I started using Linux myself around '00-'01 (I distinctly recall using it in mid-2001) and while at that point the kernel was stable, the GUI was very rough around the edges. OS/2 1.0 was completely text-based (the first GUI was added in 1.1, and what you might call a modern GUI only appeared in 2.0). Apple's background at the time was in text-based interfaces (Apple II, anyone?). Windows 2.x was practically useful at least as an environment to develop against, but perhaps not as a stand-alone environment. Etc. – user Dec 6 '16 at 13:32
• In this alternate timeline, Linux becomes popular enough to get Microsoft's attention. – Alex R Dec 7 '16 at 1:35
• @MichaelKjörling Oh, I remember that so well. I had to write my own drivers for almost everything - the mouse, the display driver, the network card... ugh. And the "reward" was X Window with horrible text rendering, barely working at all. Quite a cold shower after using Windows 3.11 and 98. And Microsoft was extremely savvy when they designed Windows to be embeddable (that is, you could write Windows applications and sell them self-contained to people who didn't have Windows) - it wasn't really until Windows 3.x that people started using Windows as an interface in itself. – Luaan Dec 12 '16 at 12:05
• Well, Linux (and other unix-like systems) got plenty of Microsoft's attention in our timeline as well, multiple times.
It just never really paid off - we'll see how their latest attempt fares :) – Luaan Dec 12 '16 at 12:06

How about an option that relies neither on crippling your people, nor on them consistently being irrational and/or unimaginative?

## Make the displays expensive.

If a live (that is, displaying data as that data is created) graphics-capable monitor or projector costs as much as a car or even a house, most families aren't going to be buying one. But businesses and governments could afford to purchase some for their artists, designers, engineers, and scientists to work with. Most people would be stuck with printers, or possibly character displays made using relatively inexpensive technologies such as flip-dot (or flip-segment) displays, LED segments, or nixie tubes that, at least in your world, cannot be shrunk down enough to make a useful desktop graphics display, but are sufficiently compact for a workable desktop character display.

This does, unfortunately, mean that live television is likely never to become mainstream. Movies, however, should be fine, possibly even at home. Rather than showing them on a real-time graphics display like we do in the real world these days, just use a projector and film. The key characteristics of film are that displaying it is simple - just shine a bright white light through it with a lens to focus it - and that it lacks a fast write-to-read turnaround time, so it's unsuitable for live graphics. Television may end up more like an audio-visual newspaper or magazine subscription, with film delivered to your door on a regular basis, rather than a live broadcast.

For those wanting a print preview in their home, simply add an extra cartridge (or several, for colors) to printers, filled with dry-erase ink. Bundle in some laminated paper, and there you go: print a preview with the erasable ink onto the laminated paper, look it over, then print the final result on regular paper with permanent ink while erasing the preview paper for reuse later.

• The reason early PCs came with text-only monitors as standard issue was that MEMORY was expensive. You need memory to display anything on a raster CRT or LCD display device unless the software is willing to constantly re-calculate the image. While a 25x80 character page of text fits in 2 kbytes (or 4 kbytes with primary colors/underline/...) of memory, a 720x384 pixel black-and-white image already needed almost 40 kilobytes (720 x 384 / 8 = 34,560 bytes)! Given that 32-640 kilobytes were considered appropriate sizes for the main memory of a desktop computer in those days due to cost... – rackandboneman Dec 7 '16 at 8:57
• @rackandboneman Right, we did have television back then too, but making memory expensive would hamstring the computers themselves. Better to make something exclusive to the monitors expensive rather than something used by both. – 8bittree Dec 7 '16 at 12:47
• Even storing TV images was a horribly expensive and complicated business in the 50s and earlier. Video recorders sized like a big stove :) And that would be a really cumbersome kind of memory to use for computer output. The other alternative (and it WAS used in the 1960s and 1970s for computer graphics): expensive, difficult-to-build-and-maintain CRTs (google DVBST CRT if you care) that you could literally tell to keep the image once written (and that need to be completely rewritten to erase anything!). That technology is near extinct except for older oscilloscopes still in use.
– rackandboneman Dec 8 '16 at 7:36
• @rackandboneman My point was that making memory expensive makes computers expensive, which appears to be against the OP's wishes. Remember, this is worldbuilding, so we're trying to build a world that isn't necessarily identical to ours. I'm suggesting that the OP make things which are needed by the monitors, but not by the computers themselves, expensive. Possibly the construction process, or certain materials... or whatever. And at the same time, have some sort of cheap, text-only display available for everyone, even if that same technology is actually expensive in real life. – 8bittree Dec 9 '16 at 15:21
• You can still make it about memory - just make your world's approach to inexpensive computer memory based on a technology that makes it usable for computation but makes it suck as a framebuffer (access modalities/protocols/timings for the memory play a big role there). – rackandboneman Dec 9 '16 at 15:37

### how can I reasonably explain that GUIs never became mainstream?

Computers entered the mass market at the same time as useful speech recognition and synthesis. Instead of sitting in front of a screen and pressing buttons, users primarily converse with computers, which would make the concept of a GUI sound strange: "What do you mean I have to learn to press this and that and then that? Why can't I just tell it what I want?"

• "So you can watch porn without everyone nearby learning about your midget fetish". :P – Faerindel Dec 7 '16 at 8:34
• "You mean you have to use your hands? That's like a baby's toy" – TessellatingHeckler Dec 7 '16 at 19:02
• Back when e-mail was sweeping through society, I read a bit somewhere: imagine (voice) phones being invented after e-mail. Today we'd all say "You mean I can just pick it up and talk to someone? No typing needed?!" – user2338816 Dec 12 '16 at 2:30
• Okay, but how do you get useful speech recognition and synthesis without computers? The way we do it now pretty much required computers to be mass market - to get the required processing power and memory, to get the tons of training inputs and checking... – Luaan Dec 12 '16 at 12:01
• @Luaan sorry for the extremely late reply - note that I wrote "Computers entered the mass market". We had computers long before anyone had one at home. – papirtiger Nov 2 '18 at 12:22

Make the computers interconnected and bottlenecked by bandwidth. A low-bandwidth internet forces one to optimize the transmission of content, which is likely text-based. From my own experiences with the initial stages of the internet, a GUI is barely usable across a network when bandwidth is low enough. Even a GUI system specifically designed for client-server networking, such as X, is bothersome on connections like a 14k4 modem. Before the WWW existed we used the Gopher protocol to browse information systems across the world over dial-up connections. Then the WWW was invented and the internet became more graphical, but performance in graphical browsers (Mosaic, Netscape) was still agonizingly slow. Since the textual content was still the main attraction, many early users used text-based browsers such as w3m and lynx to browse the web. On Linux servers, successors like elinks are still used today. If there were some reason for bandwidth to simply remain constrained, then GUIs might not develop at all. People would likely still create ASCII art, and TUIs would improve, maybe supporting multiple windows like the i3 window manager.
• "...TUIs would improve, maybe supporting multiple windows like i3 window manager" - No need to compare to a GUI window manager like i3, we already have terminal multiplexers: GNU Screen and tmux. Also, regarding low bandwidth: VNC, X11, and RDP are not the only ways to interact with remote data using a GUI. You can run the GUI locally and just transfer the actual data. We do this all the time: see email, chat, web browsers (a lot of pages are still mostly text, sometimes with GUI controls). No remote pictures != no local GUI. – 8bittree Dec 6 '16 at 17:11 • You could imagine bandwidth staying low because phone lines are a natural monopoly. The company that owns the lines is somehow corrupt or otherwise dysfunctional, and no other company can get the infrastructure in place to compete with them. – Ben Millwood Dec 7 '16 at 16:32 • You can disincentivize running the GUI locally by centralizing computing power outside of homes – maybe software-as-a-service with thin clients are developed sooner than it was in our world, or maybe there's some reason why bundling everyone's hardware together in one datacentre is important – e.g. because it means you don't have to use the terrible telecom monopoly's cabling to network your stuff. – Ben Millwood Dec 7 '16 at 16:37 You want a world where computers are widespread but GUIs don't exist? Simple: Find a way to make a world where everyone is totally blind - perhaps even where eyes were never able to evolve. (writing uses some equivalent of Braille) Educate the public quickly GUIs are popular because they're easy for new users to learn, and don't require as much specialized knowledge as using a CLI. For example, to change file permissions through the GUI in Linux, you can click little check-boxes labeled "read", "write", and "execute", while to change the same information with the CLI, you need to remember which bits correspond to which permissions, and do a decimal to binary conversion. If, for some reason, computers classes became a part of compulsory education during the time when CLIs were still popular, an entire generation would grow up using them. When GUIs emerged they wouldn't seem to have much of an advantage over CLIs to the public at large. Further, CLIs - especially whatever shell(s) taught in school - would have the inertia of consensus, and people would be unwilling to change. • CLIs did have the inertia of consensus. On some platforms, they still do. Most users knew them better than Excel users know Excel. That didn't stop them from disappearing. Everyone knew how to use CLI - and then Norton Commander (and friends) came and 99% of computer users dropped CLI, just like that. The only places where it survived was with 1) remote systems, where it was much faster, or an interactive interface simply wasn't available, 2) automation, especially for corporate/academical infrastructure, 3) hipsters (before it was cool!). – Luaan Dec 6 '16 at 20:46 • CLIs had the inertia of consensus among people that used computers during the time that CLIs were the only option - not a lot of people (comparatively). GUIs became popular around the same time personal computers became popular. Most people that used GUIs learned with GUIs - those few that started with CLIs may or may not have switched, but they're the minority. That's my take anyways, and I'm no expert. – Charles Noon Dec 7 '16 at 0:24 • That probably depends a lot on what region you're talking about. 
Where I'm from, people used text-based interfaces all the way up to Windows 95 or even longer for the most part; they still switched as soon as they could. And a 486 machine cost on the order of $10-20k in today's money - way more expensive than a new car at the time. And everyone still switched as soon as they could, the major exception being universities, which propagate the CLI almost exclusively to this day :) – Luaan Dec 7 '16 at 8:33
• That's really interesting, and suggests that GUIs have an inherent advantage over CLIs - at least to most. Perhaps an element of snobbishness could work, if enough people grew up with CLIs? Maybe OP could make a world full of the aforementioned hipsters... – Charles Noon Dec 8 '16 at 1:34
• @Luaan You talk about Norton Commander as if it signals the inevitable transition CLI -> TUI -> GUI. And yet my father, who only got into computers in the 1990's shortly after the Soviet Union ended, was still choosing to do the majority of his day-to-day tasks in Far Manager, full screen on Windows XP and later. I've lived on my own since 2009, so I'm not sure how much he still uses it. Meanwhile, I grew up on GUIs and spent years not getting why someone would want to do that, yet in the last few years I've been switching all of my computer tasks to CLI/TUI as quickly as I've been able to. – mtraceur Dec 8 '16 at 7:19
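To illustrate the file-permission bit arithmetic mentioned in the "Educate the public quickly" answer above, here is a minimal Python sketch (my own illustration, not from the original post):

# each permission is one bit: read = 4, write = 2, execute = 1
read, write, execute = 4, 2, 1
owner = read | write | execute   # 7 = rwx
group = read | execute           # 5 = r-x
other = read                     # 4 = r--
mode = owner * 64 + group * 8 + other
print(oct(mode))                 # 0o754, i.e. what you would type as "chmod 754 file"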
If the users are non-human, a GUI interface may present serious issues. Maybe they have compound eyes, like insects, and any sort of pixel-grid display creates serious moire fringing effects between the screen and their eyes. Or maybe they see in sonar, like bats or dolphins. How do you make a sonar screen?
If they are (almost?) human, maybe their society is a strict meritocracy (with fascistic overtones). You are not allowed to access a computer until you prove that you are intelligent enough to use one in an intelligent manner - in other words, program one. By the time you are a half-decent programmer, you will probably prefer a command-line interface over a GUI for most tasks in any case.
(If you are any sort of geek, you'll have heard the jokes about lusers and drool-proof keyboards. In this world, the geeks are the rulers).
I don't think this is possible if you want to keep the capabilities of modern computers, especially if you consider Norton Commander as 'text' - since what it's really doing is abusing text to be a GUI, and most of what GUIs do is position text outside a grid system. But here is one possible approach I haven't seen mentioned in other answers: text is machine-readable, GUIs aren't.
This could come up in several different ways:
• Mandatory software quality testing, coming in very early on. As soon as the first buggy software appears and companies realise they are paying for broken products, particularly if there is a serious catastrophe like an exploding space rocket, there is a big legal and regulatory push for software to be exactly as described, with large fees for any bugs found.
• This manifests itself as precise specifications for input and output, and mandatory automated testing with regulatory oversight. You can automatically verify the text which is displayed, and the screen output at every state, but you can't easily automatically verify the display of a curve, and the number of possibilities with user-resizable windows makes it infeasible to attempt.
• Mandatory auditing of one sort or another. All input and output must be audited for anti-fraud, or to guard against anti-consumer practices, or to mandate that computer systems from different providers perform the same way, or as a basic expectation in a digital society of how computers behave. You can audit typing and printing, but you can't really audit mouse clicks and GUI scrolling in the same way. You can audit "this picture was displayed: {}" for use with your one-off output specification, but you wouldn't want the overhead or storage costs of auditing every frame of a GUI.
• The earliest developments of computing were very focused on interpreting text and processing it in custom ways. E.g. governments broadcast news over a text feed like the UK's old Ceefax system, and individual people put keyword matches on the data stream to alert them to things they found interesting. Businesses alerted on transactions, individuals played with data sets in real time - you could expect a feed of special offers from shops, from weather services, from news services, civil engineering (roadworks) in your area, up-to-date electricity prices, or whatever, and pick up on the things you care about. This happens early enough in your timeline that it gets embedded into the culture, and when GUIs come along, people regard them as a novelty but ultimately reject them as too limiting, because they can't be automated and pattern-searched - so they use them only as an output device, not as the main interaction point. You work with the structured data; maybe you show it in a GUI if it's a graph, or maybe you don't.
• The previous points interact; mandatory auditing means governments want a continual stream of input from every user, which they can search and gather population-wide statistics for, which means GUIs are only allowed to be used for display, but all input must come through a keyboard.
The mandatory software quality testing idea could also come up in another way: the reason headless servers are so popular today is that less code means a smaller attack surface for security purposes. If all software had to go through an expensive regulatory audit process (or any constraint with a similar effect - software companies need to be insured against the risk of their code going wrong, and insurance companies charge per line of code insured, or per feature), then 'less code' would push industries towards preferring TUIs wherever possible. Since a GUI has to display text and also graphics, it will always work out more expensive.
Another possible deviation from real world history is that our early output devices were RADAR screens and oscilloscopes, with an electron beam being scanned left to right and modulated up and down by an analog signal. They became CRTs, which were the dominant display technology for many years.
But what if CRTs couldn't become dominant, e.g. if regulatory limits prohibited vacuum chambers in devices sold to the public, because they were too dangerous due to the risk of implosion?
Environmental concerns, or financial rent-seeking behaviour. If you could tweak the world so that displaying a picture cost significantly more, each time, people would avoid it for normal use. E.g. if a 'text' screen came with the computer, and you could buy a 'graphical' screen as an addition to go alongside it - but it could only display 1000 graphics before the license ran out and needed renewing, or each update cost a day's worth of the text screen's electricity. The market would sort out how to do everything by text, while keeping GUIs available for occasional use, or for the wealthy.
Do a better job of teaching kids to read and write.
Let's draw a line between a system that is capable of doing graphics, when appropriate, and the GUI, which is to computing what "point & grunt" is to language. So your computer user has what I have on my machines (4 on or beside my desk at the moment): a window manager running on top of X, which mostly has a bunch of xterm windows on it. To interact with the computer, I use language in the form of commands, rather than pointing at something and clicking the mouse.
Now this doesn't mean I can't do graphics. I can do anything from looking at photos I've downloaded from my camera (with text commands) to viewing PDF documents (which I may have created with text-based LaTeX) to visualizing the output from the 3D seismic tomography program I'm working on (the input to which is text). I just don't have to have an icon that I click on for every single thing I want to do, and I don't have to waste time trying to figure out what those icons - potentially multiple thousands of them - are supposed to mean. (If I run into an unfamiliar text command, I can look it up in the manual or with a search engine, just as I would look up an unfamiliar word in a dictionary.)
If I need a list of commands for users not familiar with a system or application, I can use text menus, as in fact I do with the browser (qupzilla) that I'm using at the moment. It has some GUI icons in a bar across the top, but I've never figured out exactly what they mean, because there's a handy text menu too.
GUIs, IMHO, are basically a crutch, needed because a large fraction of the population seems to be functionally illiterate.
• That's certainly consistent with newspapers and managers which insist on communicating via videos, rather than text. – Arlie Stephens Dec 7 '16 at 2:04
• More seriously - increase the prevalence and popularity of the personality traits which produce bookkeepers, librarians, computer nerds etc. - at the expense of those which produce salespeople, politicians, and entertainers. Make "geek" a compliment. Make "perfectionism" and "expertise" more desirable than "quick hacks" and "flexibility". A modern GUI is, after all, a way for an unskilled user to manage a task that they'll never be able to get any better at. – Arlie Stephens Dec 7 '16 at 2:08
• Many, many, highly literate people (eg. people with literature PhDs, established authors, academics, etc.) have little or no expertise with computer interfaces. It seems extraordinary to me to draw a link between them. – Ben Millwood Dec 7 '16 at 16:29
• @Ben Millwood: Because GUIs came along before most of those people were exposed to computers, and then they were force-fed the GUI by Windows and Macintosh, so they never had a chance to experience how much better a good CLI can be. – jamesqf Dec 7 '16 at 19:45
• @BenMillwood 98%-99% of people can, when pressed, read a short easy text with a little effort. That includes functional illiterates, depending on your standards. Even today the numbers are (much) worse than that if you define literate as something like "Immediately understands all text encountered in daily life without any conscious effort; understands the gist of major works of literature with little effort; understands the gist of contracts they sign with the required time and effort in sensible proportion to the importance of the contract." – Nobody Dec 10 '16 at 14:04
Try looking at what you actually want to use computing for. Will everyone still be as connected as they are these days? If so, could they just be using more powerful versions of the early mobile phones, which had buttons and an LCD screen (my old Ericsson A1018 was like this)? Or are you looking more for a computerized world, but without necessarily needing the level of user input we have now?
I mean, for instance, look up the 'internet of things'. The basic concept is that everything around us now has a computer in it (kettles, toasters), and they are all interconnected to form their own network. However, the micro-controllers within them rarely have a GUI. At most, there are a lot of blenders/food processors which have buttons on them for 'smart' cooking. These are dedicated function buttons, while the micro-controller inside simply (or not so simply) reads the data from a few sensors and applies some logic to the cooking mode.
The Raspberry Pi is another good modern example. Although it is typically connected to a mouse/keyboard and TV/monitor, it needs none of these things to function. I've seen them set up as wireless computer servers; one of my colleagues has half his house automated with micro-controllers, including wifi cameras and his 3D printer, all connected through the Pi as a server. He can access his printer at work, and watch it on the camera to make sure his house isn't on fire, but the point is the Pi itself has no GUI, and the tablet or whatever he uses to access it isn't more than a dumb terminal.
If you're talking purely about how to access the computer without the graphical interface, then the next level up (or down) would be the old DIP switch and jumper approach to computer programming/usage. I have an early Amstrad PPC512 laptop at home which consists of a monochrome LCD screen, two floppy drives, a modem and no hard disk or any sort of operating system, other than what is used on the boot floppy. Selecting the boot floppy, the external monitor source, etc. was done with an array of DIP switches on the side.
There are plenty of other good examples through computing history: the Apollo computer used during the moon landings had the DSKY interface, which was fitted with dedicated function buttons (noun, verb) and 7-segment readouts. Graphics calculators would be another example you could 'borrow' and modernize.
TLDR: Your world started with pre-GUI computers such as the Apollo guidance computer. Instead of the desktop computer/monitor becoming standard, research went into portable computers such as graphics calculators and early mobile phone technology, while industry focused on single-use computers programmed by DIP switch. By the time the mainstream internet became available, linking the IoT devices together, people still predominantly relied on text-based systems like their button phones.
Something a little less anachronistic would be if haptic feedback devices (vibration pads or braille keypads) were invented sooner. Maybe AI was developed earlier, reducing the need for 'hands-on' computing, although this begins to overlap with the voice-activated approach mentioned in a previous post.
Are you bound to the users being human-like? If the users' senses are not dominated by vision, you can neglect the GUI and rely more on a tactile/sound/smell user interface.
Basically, you can imagine a mole-like being using a computer.
Most of these answers focus on technology being held back; I am going to assume it sprints forward: direct communication with the computer via brain waves over wires, invented before GUIs.
If you use telepathy or neural implants to communicate with your computer, no keyboard, mouse, or GUI is necessary. You have a direct brain-to-computer link with vastly superior reaction time.
The only possible problem is that people might choose to visualize a GUI in their mind. However, I doubt that it would be helpful with a direct computer-to-brain linkage.
• I doubt that people would avoid using graphical representations with direct brain-computer linkage. Sight is by far dominant in humans, that's why GUIs work in the first place. Even thinking about problems in my head involves visualising things "as if in sight". Even thinking about CLIs, I have an image of a CLI in my head. In fact, I picture myself bashing on the keyboard right now :) – Luaan Dec 7 '16 at 8:39
• @Luaan In my world keyboards were never invented because of the brain link, so you don't know what one is and can't be picturing yourself bashing one. – cybernard Dec 7 '16 at 12:47
• Computers didn't invent keyboards. Keyboards existed long before computers. Are you saying that people went straight from drawing by hand on a piece of paper to brain-computer interface? Why are there no typewriters? Why are there no pianos for that matter? No printing press? Even then, I'd simply be picturing myself handwriting, instead of bashing the keyboard - it doesn't really change much on the argument :) – Luaan Dec 7 '16 at 13:08
• I am saying that in the poster's question they are in an alternate reality, since we already have GUIs. Pianos, etc. can have keyboards, just not computers. Since you can think faster than you can speak, type, click, or write, the neural interface would be the dominant way of getting things done. A GUI would just slow you down. – cybernard Dec 7 '16 at 23:18
1. Before computers are powerful enough for graphics, heavily invest in computer science education, starting from primary school. This would likely be a sound investment anyway, at the very least in hindsight.
2. Everyone will be able to use a terminal. You can't teach theoretical computer science to first graders (also, large parts of it weren't known back then), so you'll start with a very practical approach to computer science, which implies heavy use of actual computers: programming. That's the part which is useful to the general population anyway, so they can automate little problems in their daily life/workplace.
3. Everyone will be able to use a terminal more efficiently than they could use graphical programs, because they already know how to and terminals are inherently better, so the investment to learn a GUI wouldn't be worth it.
4. There would be no need for graphical user interfaces.
That is, there would still be graphical output, but only for stuff like previewing 3D models you describe textually (it exists! It's really easy to learn and powerful in my opinion), previewing documents you wrote in something like LaTeX, viewing pictures and videos, etc.
• Computer science and computer usage are mostly unrelated. – Raphael Dec 9 '16 at 7:53
• @Raphael Obviously, at least in one way. But at the same time, computer science implies programming, and programming implies being able to use a computer for programming, and being able to write programs for existing computers. That is, you'll be able both to use a terminal and to make programs which run in a terminal. Now explain to me why anyone with that background would use an early (probably shitty) attempt at a GUI, or even a modern one which used (wasted :P ) millions of man-hours during its creation. – Nobody Dec 9 '16 at 15:19
• "computer science implies programming" -- not necessarily, no. Not anymore than physics implies welding. – Raphael Dec 9 '16 at 18:46
• @Raphael I don't care about far-fetched philosophical implications. Sure, CS isn't equivalent to programming; that's not what I was saying. But if you study CS then you'll write lots of code. No way around it. Hell, if you study physics, you'll write lots of code too, though less than the CS students (welding, on the other hand, is definitely not on the curriculum, at least where I study). If you want proof, check the first-year CS curriculum at any large university. Or check this out: vvz.ethz.ch/Vorlesungsverzeichnis/… – Nobody Dec 9 '16 at 18:59
• "But if you study CS then you'll write lots of code" -- maybe, but not necessarily. I personally know a number of counter examples. You are right to say that it's not the norm, though. Anyway, I apparently have to make my point clearer: you probably want to propose teaching computer skills, including programming, not computer science. That's just similar to teaching physics being the wrong course of action if you want people to solder instead of glue. – Raphael Dec 9 '16 at 20:41
http://www.sagemath.org/doc/reference/combinat/sage/combinat/finite_state_machine.html
# Finite State Machines, Automata, Transducers
This module adds support for finite state machines, automata and transducers. See the class FiniteStateMachine and the examples below for details on creating one.
## Examples
### A simple finite state machine
We can easily create a finite state machine by
sage: fsm = FiniteStateMachine()
sage: fsm
Finite state machine with 0 states
By default this is the empty finite state machine, so not very interesting. Let’s create some states and transitions:
sage: from sage.combinat.finite_state_machine import FSMState, FSMTransition
sage: day = FSMState('day')
sage: night = FSMState('night')
sage: sunrise = FSMTransition(night, day)
sage: sunset = FSMTransition(day, night)
And now let’s add those states and transitions to our finite state machine:
sage: fsm.add_transition(sunrise)
Transition from 'night' to 'day': -|-
sage: fsm.add_transition(sunset)
Transition from 'day' to 'night': -|-
Note that the states are added automatically, since they are present in the transitions. We could add the states manually by
sage: fsm.add_state(day)
'day'
sage: fsm.add_state(night)
'night'
Anyhow, we got the following finite state machine:
sage: fsm
Finite state machine with 2 states
We can also visualize it as a graph by
sage: fsm.graph()
Digraph on 2 vertices
Alternatively, we could have created the finite state machine above simply by
sage: FiniteStateMachine([('night', 'day'), ('day', 'night')])
Finite state machine with 2 states
or by
sage: fsm = FiniteStateMachine()
sage: fsm.add_transitions([('night', 'day'), ('day', 'night')])
[Transition from 'night' to 'day': -|-,
 Transition from 'day' to 'night': -|-]
sage: fsm
Finite state machine with 2 states
### A simple Automaton (recognizing NAFs)
We want to build an automaton which recognizes non-adjacent forms (NAFs), i.e., sequences which have no adjacent non-zeros. We use $$0$$, $$1$$, and $$-1$$ as digits:
sage: NAF = Automaton(
....: {'A': [('A', 0), ('B', 1), ('B', -1)], 'B': [('A', 0)]})
sage: NAF.state('A').is_initial = True
sage: NAF.state('A').is_final = True
sage: NAF.state('B').is_final = True
sage: NAF
Automaton with 2 states
Of course, we could have specified the initial and final states directly in the definition of NAF by initial_states=['A'] and final_states=['A', 'B'].
So let’s test the automaton with some input:
sage: sage.combinat.finite_state_machine.FSMOldProcessOutput = False # activate new output behavior
sage: NAF([0])
True
sage: NAF([0, 1])
True
sage: NAF([1, -1])
False
sage: NAF([0, -1, 0, 1])
True
sage: NAF([0, -1, -1, -1, 0])
False
sage: NAF([-1, 0, 0, 1, 1])
False
Alternatively, we could call that by
sage: NAF.process([0, -1, 0, 1])
(True, 'B')
which gives additionally the state in which we arrived.
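For comparison, processing a word that ends in the other final state (an extra example, consistent with the transitions defined above):

sage: NAF.process([0])
(True, 'A')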
### A simple transducer (binary inverter)
Let’s build a simple transducer, which rewrites a binary word by iverting each bit:
sage: inverter = Transducer({'A': [('A', 0, 1), ('A', 1, 0)]},
....: initial_states=['A'], final_states=['A'])
We can look at the states and transitions:
sage: inverter.states()
['A']
sage: for t in inverter.transitions():
....: print t
Transition from 'A' to 'A': 0|1
Transition from 'A' to 'A': 1|0
Now we apply a word to it and see what the transducer does:
sage: inverter([0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1])
[1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0]
The output is the inverted word, and we landed in the final state 'A', i.e., the input was accepted. Calling process() instead returns this information explicitly as a triple: True means that we landed in a final state, that state is labeled 'A', and the third entry is the output.
### A transducer which performs division by $$3$$ in binary
Now we build a transducer, which divides a binary number by $$3$$. The labels of the states are the remainder of the division. The transition function is
sage: def f(state_from, read):
....: if state_from + read <= 1:
....: state_to = 2*state_from + read
....: write = 0
....: else:
....: state_to = 2*state_from + read - 3
....: write = 1
....: return (state_to, write)
which assumes reading a binary number from left to right. We get the transducer with
sage: D = Transducer(f, initial_states=[0], final_states=[0],
....: input_alphabet=[0, 1])
Let us try to divide $$12$$ by $$3$$:
sage: D([1, 1, 0, 0])
[0, 1, 0, 0]
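As a check: the input $$1100$$ is the binary representation of $$12$$, and the output $$0100$$ is the binary representation of $$4 = 12/3$$.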
Now we want to divide $$13$$ by $$3$$:
sage: D([1, 1, 0, 1])
Traceback (most recent call last):
...
ValueError: Invalid input sequence.
The raised ValueError means $$13$$ is not divisible by $$3$$.
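Using process() instead of calling the transducer directly does not raise an error; it reports that we did not end in a final state, and the label of the reached state is the remainder (here $$1$$):

sage: D.process([1, 1, 0, 1])
(False, 1, [0, 1, 0, 0])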
### Using the hook-functions
Let’s use the previous example “divison by $$3$$” to demonstrate the optional state and transition parameters hook.
First, we define what those functions should do. In our case, this is just saying in which state we are and which transition we take:
sage: def state_hook(state, process):
....: print "We are now in State %s." % (state.label(),)
sage: from sage.combinat.finite_state_machine import FSMWordSymbol
sage: def transition_hook(transition, process):
....: print ("Currently we go from %s to %s, "
....: "reading %s and writing %s." % (
....: transition.from_state, transition.to_state,
....: FSMWordSymbol(transition.word_in),
....: FSMWordSymbol(transition.word_out)))
Now, let’s add these hook-functions to the existing transducer:
sage: for s in D.iter_states():
....: s.hook = state_hook
sage: for t in D.iter_transitions():
....: t.hook = transition_hook
Rerunning the process again now gives the following output:
sage: D.process([1, 1, 0, 1])
We are now in State 0.
Currently we go from 0 to 1, reading 1 and writing 0.
We are now in State 1.
Currently we go from 1 to 0, reading 1 and writing 1.
We are now in State 0.
Currently we go from 0 to 0, reading 0 and writing 0.
We are now in State 0.
Currently we go from 0 to 1, reading 1 and writing 0.
We are now in State 1.
(False, 1, [0, 1, 0, 0])
The example above just explains the basic idea of using hook-functions. In the following, we will use those hooks more seriously.
### Detecting sequences with the same number of $$0$$ and $$1$$
Suppose we have a binary input and want to accept all sequences with the same number of $$0$$ and $$1$$. This cannot be done with a finite automaton, since an automaton has only finitely many states, while the difference between the two counts is unbounded. Anyhow, we can make use of the hook functions to extend our finite automaton by a counter:
sage: from sage.combinat.finite_state_machine import FSMState, FSMTransition
sage: C = FiniteStateMachine()
sage: def update_counter(state, process):
....:     l = process.read_letter()
....:     process.fsm.counter += 1 if l == 1 else -1
....:     if process.fsm.counter > 0:
....:         next_state = 'positive'
....:     elif process.fsm.counter < 0:
....:         next_state = 'negative'
....:     else:
....:         next_state = 'zero'
....:     return FSMTransition(state, process.fsm.state(next_state),
....:                          l, process.fsm.counter)
sage: C.add_state(FSMState('zero', hook=update_counter,
....:                      is_initial=True, is_final=True))
'zero'
sage: C.add_state(FSMState('positive', hook=update_counter))
'positive'
sage: C.add_state(FSMState('negative', hook=update_counter))
'negative'
Now, let’s input some sequence:
sage: C.counter = 0; C([1, 1, 1, 1, 0, 0])
(False, 'positive', [1, 2, 3, 4, 3, 2])
The result is False, since there are four $$1$$ but only two $$0$$. We land in the state positive and we can also see the values of the counter in each step.
Let’s try some other examples:
sage: C.counter = 0; C([1, 1, 0, 0])
(True, 'zero', [1, 2, 1, 0])
sage: C.counter = 0; C([0, 1, 0, 0])
(False, 'negative', [-1, 0, -1, -2])
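A balanced word with alternating letters returns to 'zero' (an extra run; the counter values below follow from the hook logic above):

sage: C.counter = 0; C([1, 0, 1, 0])
(True, 'zero', [1, 0, 1, 0])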
AUTHORS:
• Daniel Krenn (2012-03-27): initial version
• Clemens Heuberger (2012-04-05): initial version
• Sara Kropf (2012-04-17): initial version
• Clemens Heuberger (2013-08-21): release candidate for Sage patch
• Daniel Krenn (2013-08-21): release candidate for Sage patch
• Sara Kropf (2013-08-21): release candidate for Sage patch
• Clemens Heuberger (2013-09-02): documentation improved
• Daniel Krenn (2013-09-13): comments from trac worked in
• Clemens Heuberger (2013-11-03): output (labels) of determinisation,
product, composition, etc. changed (for consistency), representation of state changed, documentation improved
• Daniel Krenn (2013-11-04): whitespaces in documentation corrected
• Clemens Heuberger (2013-11-04): full_group_by added
• Daniel Krenn (2013-11-04): next release candidate for Sage patch
• Sara Kropf (2013-11-08): fix for adjacency matrix
• Clemens Heuberger (2013-11-11): fix for prepone_output
• Daniel Krenn (2013-11-11): comments from trac 15078 included:
docstring of FiniteStateMachine rewritten, Automaton and Transducer inherited from FiniteStateMachine
• Daniel Krenn (2013-11-25): documentation improved according to
ACKNOWLEDGEMENT:
• Daniel Krenn, Clemens Heuberger and Sara Kropf are supported by the Austrian Science Fund (FWF): P 24644-N26.
class sage.combinat.finite_state_machine.Automaton(data=None, initial_states=None, final_states=None, input_alphabet=None, output_alphabet=None, determine_alphabets=None, store_states_dict=True, on_duplicate_transition=None)
This creates an automaton, which is a finite state machine, whose transitions have input labels.
An automaton has additional features like creating a deterministic and a minimized automaton.
EXAMPLES:
We can create an automaton recognizing even numbers (given in binary and read from left to right) in the following way:
sage: A = Automaton([('P', 'Q', 0), ('P', 'P', 1),
....: ('Q', 'P', 1), ('Q', 'Q', 0)],
....: initial_states=['P'], final_states=['Q'])
sage: A
Automaton with 2 states
sage: A([0])
True
sage: A([1, 1, 0])
True
sage: A([1, 0, 1])
False
Note that the full output of the commands can be obtained by calling process() and looks like this:
sage: A.process([1, 0, 1])
(False, 'P')
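(This works because a binary number read from left to right is even exactly when its last digit is $$0$$: the state 'Q' records that the most recently read digit was $$0$$, which is why it is the final state.)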
TESTS:
sage: Automaton()
Automaton with 0 states
cartesian_product(other, only_accessible_components=True)
Returns a new automaton which accepts an input if it is accepted by both given automata.
INPUT:
• other – an automaton
• only_accessible_components – If True (default), then the result is piped through accessible_components. If no new_input_alphabet is given, it is determined by determine_alphabets().
OUTPUT:
A new automaton which computes the intersection (see below) of the languages of self and other.
The set of states of the new automaton is the cartesian product of the set of states of both given automata. There is a transition $$((A, B), (C, D), a)$$ in the new automaton if there are transitions $$(A, C, a)$$ and $$(B, D, a)$$ in the old automata.
The methods intersection() and cartesian_product() are the same (for automata).
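The pair construction can also be illustrated independently of Sage. The following plain-Python sketch (the helper product_transitions is not part of this module) builds the product transitions from two lists of triples (from_state, to_state, letter):
sage: def product_transitions(trans1, trans2):
....:     return [((p, q), (r, s), a)
....:             for (p, r, a) in trans1
....:             for (q, s, b) in trans2
....:             if a == b]
sage: product_transitions([('1', '2', 1)], [('A', 'A', 1)])
[(('1', 'A'), ('2', 'A'), 1)]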
EXAMPLES:
sage: aut1 = Automaton([('1', '2', 1),
....: ('2', '2', 1),
....: ('2', '2', 0)],
....: initial_states=['1'],
....: final_states=['2'],
....: determine_alphabets=True)
sage: aut2 = Automaton([('A', 'A', 1),
....: ('A', 'B', 0),
....: ('B', 'B', 0),
....: ('B', 'A', 1)],
....: initial_states=['A'],
....: final_states=['B'],
....: determine_alphabets=True)
sage: res = aut1.intersection(aut2)
sage: (aut1([1, 1]), aut2([1, 1]), res([1, 1]))
(True, False, False)
sage: (aut1([1, 0]), aut2([1, 0]), res([1, 0]))
(True, True, True)
sage: res.transitions()
[Transition from ('1', 'A') to ('2', 'A'): 1|-,
Transition from ('2', 'A') to ('2', 'B'): 0|-,
Transition from ('2', 'A') to ('2', 'A'): 1|-,
Transition from ('2', 'B') to ('2', 'B'): 0|-,
Transition from ('2', 'B') to ('2', 'A'): 1|-]
For automata with epsilon-transitions, intersection is not well defined. But for any finite state machine, epsilon-transitions can be removed by remove_epsilon_transitions().
sage: a1 = Automaton([(0, 0, 0),
....: (0, 1, None),
....: (1, 1, 1),
....: (1, 2, 1)],
....: initial_states=[0],
....: final_states=[1],
....: determine_alphabets=True)
sage: a2 = Automaton([(0, 0, 0), (0, 1, 1), (1, 1, 1)],
....: initial_states=[0],
....: final_states=[1],
....: determine_alphabets=True)
sage: a1.intersection(a2)
Traceback (most recent call last):
...
ValueError: An epsilon-transition (with empty input)
was found.
sage: a1.remove_epsilon_transitions() # not tested (since not implemented yet)
sage: a1.intersection(a2) # not tested
determinisation()
Returns a deterministic automaton which accepts the same input words as the original one.
INPUT:
Nothing.
OUTPUT:
A new automaton, which is deterministic.
The labels of the states of the new automaton are frozensets of states of self. The color of a new state is the frozenset of colors of the constituent states of self. Therefore, the colors of the constituent states have to be hashable.
The input alphabet must be specified. It is restricted to nice cases: input words have to have length at most $$1$$.
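To see where the frozenset labels in the examples below come from, here is a minimal plain-Python sketch of the underlying subset construction (the helper determinise is not part of this module; transitions maps a pair (state, letter) to the set of successor states):
sage: def determinise(initial, alphabet, transitions):
....:     start = frozenset([initial])
....:     states, todo, delta = {start}, [start], {}
....:     while todo:
....:         current = todo.pop()
....:         for letter in alphabet:
....:             successor = frozenset(
....:                 s for q in current
....:                 for s in transitions.get((q, letter), ()))
....:             delta[(current, letter)] = successor
....:             if successor not in states:
....:                 states.add(successor)
....:                 todo.append(successor)
....:     return states, delta
sage: states, delta = determinise('A', [0, 1],
....:     {('A', 0): {'A'}, ('A', 1): {'B'}, ('B', 1): {'B'}})
sage: len(states)
3
Note the empty frozenset acting as a sink, just as in the first example below.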
EXAMPLES:
sage: aut = Automaton([('A', 'A', 0), ('A', 'B', 1), ('B', 'B', 1)],
....: initial_states=['A'], final_states=['B'])
sage: aut.determinisation().transitions()
[Transition from frozenset(['A'])
to frozenset(['A']): 0|-,
Transition from frozenset(['A'])
to frozenset(['B']): 1|-,
Transition from frozenset(['B'])
to frozenset([]): 0|-,
Transition from frozenset(['B'])
to frozenset(['B']): 1|-,
Transition from frozenset([])
to frozenset([]): 0|-,
Transition from frozenset([])
to frozenset([]): 1|-]
sage: A = Automaton([('A', 'A', 1), ('A', 'A', 0), ('A', 'B', 1),
....: ('B', 'C', 0), ('C', 'C', 1), ('C', 'C', 0)],
....: initial_states=['A'], final_states=['C'])
sage: A.determinisation().states()
[frozenset(['A']), frozenset(['A', 'B']),
frozenset(['A', 'C']), frozenset(['A', 'C', 'B'])]
Note that colors of states have to be hashable:
sage: A = Automaton([[0, 0, 0]], initial_states=[0])
sage: A.state(0).color = []
sage: A.determinisation()
Traceback (most recent call last):
...
TypeError: unhashable type: 'list'
sage: A.state(0).color = ()
sage: A.determinisation()
Automaton with 1 states
TESTS:
This is from #15078, comment 13.
sage: D = {'A': [('A', 'a'), ('B', 'a'), ('A', 'b')],
....: 'C': [], 'B': [('C', 'b')]}
sage: auto = Automaton(D, initial_states=['A'], final_states=['C'])
sage: auto.is_deterministic()
False
sage: auto.process(list('aaab'))
(False, 'A')
sage: auto.states()
['A', 'C', 'B']
sage: auto.determinisation()
Automaton with 3 states
intersection(other, only_accessible_components=True)
Returns a new automaton which accepts an input if it is accepted by both given automata.
INPUT:
• other – an automaton
• only_accessible_components – If True (default), then the result is piped through accessible_components. If no new_input_alphabet is given, it is determined by determine_alphabets().
OUTPUT:
A new automaton which computes the intersection (see below) of the languages of self and other.
The set of states of the new automaton is the cartesian product of the set of states of both given automata. There is a transition $$((A, B), (C, D), a)$$ in the new automaton if there are transitions $$(A, C, a)$$ and $$(B, D, a)$$ in the old automata.
The methods intersection() and cartesian_product() are the same (for automata).
EXAMPLES:
sage: aut1 = Automaton([('1', '2', 1),
....: ('2', '2', 1),
....: ('2', '2', 0)],
....: initial_states=['1'],
....: final_states=['2'],
....: determine_alphabets=True)
sage: aut2 = Automaton([('A', 'A', 1),
....: ('A', 'B', 0),
....: ('B', 'B', 0),
....: ('B', 'A', 1)],
....: initial_states=['A'],
....: final_states=['B'],
....: determine_alphabets=True)
sage: res = aut1.intersection(aut2)
sage: (aut1([1, 1]), aut2([1, 1]), res([1, 1]))
(True, False, False)
sage: (aut1([1, 0]), aut2([1, 0]), res([1, 0]))
(True, True, True)
sage: res.transitions()
[Transition from ('1', 'A') to ('2', 'A'): 1|-,
Transition from ('2', 'A') to ('2', 'B'): 0|-,
Transition from ('2', 'A') to ('2', 'A'): 1|-,
Transition from ('2', 'B') to ('2', 'B'): 0|-,
Transition from ('2', 'B') to ('2', 'A'): 1|-]
For automata with epsilon-transitions, intersection is not well defined. But for any finite state machine, epsilon-transitions can be removed by remove_epsilon_transitions().
sage: a1 = Automaton([(0, 0, 0),
....: (0, 1, None),
....: (1, 1, 1),
....: (1, 2, 1)],
....: initial_states=[0],
....: final_states=[1],
....: determine_alphabets=True)
sage: a2 = Automaton([(0, 0, 0), (0, 1, 1), (1, 1, 1)],
....: initial_states=[0],
....: final_states=[1],
....: determine_alphabets=True)
sage: a1.intersection(a2)
Traceback (most recent call last):
...
ValueError: An epsilon-transition (with empty input)
was found.
sage: a1.remove_epsilon_transitions() # not tested (since not implemented yet)
sage: a1.intersection(a2) # not tested
minimization(algorithm=None)
Returns the minimization of the input automaton as a new automaton.
INPUT:
• algorithm – Either Moore’s algorithm (by algorithm='Moore' or as default for deterministic automata) or Brzozowski’s algorithm (when algorithm='Brzozowski' or when the automaton is not deterministic) is used.
OUTPUT:
A new automaton.
The resulting automaton is deterministic and has a minimal number of states.
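For deterministic automata, Moore's algorithm starts with the partition into final and non-final states and refines it until it is stable. The following is a compact plain-Python sketch of one refinement step (the helper refine is not part of this module; delta maps (state, letter) to the unique successor):
sage: def refine(partition, states, alphabet, delta):
....:     index = {s: i for i, block in enumerate(partition)
....:              for s in block}
....:     def key(s):
....:         return (index[s],
....:                 tuple(index[delta[(s, a)]] for a in alphabet))
....:     blocks = {}
....:     for s in states:
....:         blocks.setdefault(key(s), set()).add(s)
....:     return list(blocks.values())
Iterating refine until the partition no longer changes yields the state equivalence classes; compare equivalence_classes().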
EXAMPLES:
sage: A = Automaton([('A', 'A', 1), ('A', 'A', 0), ('A', 'B', 1),
....: ('B', 'C', 0), ('C', 'C', 1), ('C', 'C', 0)],
....: initial_states=['A'], final_states=['C'])
sage: B = A.minimization(algorithm='Brzozowski')
sage: B.transitions(B.states()[1])
[Transition from frozenset([frozenset(['A', 'C', 'B']),
frozenset(['C', 'B']), frozenset(['A', 'C'])]) to
frozenset([frozenset(['A', 'C', 'B']), frozenset(['C', 'B']),
frozenset(['A', 'C']), frozenset(['C'])]): 0|-,
Transition from frozenset([frozenset(['A', 'C', 'B']),
frozenset(['C', 'B']), frozenset(['A', 'C'])]) to
frozenset([frozenset(['A', 'C', 'B']), frozenset(['C', 'B']),
frozenset(['A', 'C'])]): 1|-]
sage: len(B.states())
3
sage: C = A.minimization(algorithm='Brzozowski')
sage: C.transitions(C.states()[1])
[Transition from frozenset([frozenset(['A', 'C', 'B']),
frozenset(['C', 'B']), frozenset(['A', 'C'])]) to
frozenset([frozenset(['A', 'C', 'B']), frozenset(['C', 'B']),
frozenset(['A', 'C']), frozenset(['C'])]): 0|-,
Transition from frozenset([frozenset(['A', 'C', 'B']),
frozenset(['C', 'B']), frozenset(['A', 'C'])]) to
frozenset([frozenset(['A', 'C', 'B']), frozenset(['C', 'B']),
frozenset(['A', 'C'])]): 1|-]
sage: len(C.states())
3
sage: aut = Automaton([('1', '2', 'a'), ('2', '3', 'b'),
....: ('3', '2', 'a'), ('2', '1', 'b'),
....: ('3', '4', 'a'), ('4', '3', 'b')],
....: initial_states=['1'], final_states=['1'])
sage: min = aut.minimization(algorithm='Brzozowski')
sage: [len(min.states()), len(aut.states())]
[3, 4]
sage: min = aut.minimization(algorithm='Moore')
Traceback (most recent call last):
...
NotImplementedError: Minimization via Moore's Algorithm is only
implemented for deterministic finite state machines
process(*args, **kwargs)
Warning
The default output of this method is scheduled to change. This docstring describes the new default behaviour, which can already be achieved by setting FSMOldProcessOutput to False.
Returns whether the automaton accepts the input and the state where the computation stops.
INPUT:
• input_tape – The input tape can be a list with entries from the input alphabet.
• initial_state – (default: None) The state in which to start. If this parameter is None and there is only one initial state in the machine, then this state is taken.
• full_output – (default: True) If set, then the full output is given, otherwise only whether the sequence is accepted or not (the first entry below only).
OUTPUT:
The full output is a pair, where
• the first entry is True if the input string is accepted and
• the second gives the state reached after processing the input tape (This is a state with label None if the input could not be processed, i.e., when at one point no transition to go could be found.).
Note that if the automaton is not deterministic, only one possible path is followed. This means that in this case the output can be wrong. Use determinisation() to get a deterministic automaton and try again.
By setting FSMOldProcessOutput to False the new desired output is produced.
EXAMPLES:
sage: sage.combinat.finite_state_machine.FSMOldProcessOutput = False # activate new output behavior
sage: from sage.combinat.finite_state_machine import FSMState
sage: NAF_ = FSMState('_', is_initial = True, is_final = True)
sage: NAF1 = FSMState('1', is_final = True)
sage: NAF = Automaton(
....: {NAF_: [(NAF_, 0), (NAF1, 1)], NAF1: [(NAF_, 0)]})
sage: [NAF.process(w) for w in [[0], [0, 1], [1, 1], [0, 1, 0, 1],
....: [0, 1, 1, 1, 0], [1, 0, 0, 1, 1]]]
[(True, '_'), (True, '1'), (False, None),
(True, '1'), (False, None), (False, None)]
If we just want a condensed output, we use:
sage: [NAF.process(w, full_output=False)
....: for w in [[0], [0, 1], [1, 1], [0, 1, 0, 1],
....: [0, 1, 1, 1, 0], [1, 0, 0, 1, 1]]]
[True, True, False, True, False, False]
It is equivalent to:
sage: [NAF(w) for w in [[0], [0, 1], [1, 1], [0, 1, 0, 1],
....: [0, 1, 1, 1, 0], [1, 0, 0, 1, 1]]]
[True, True, False, True, False, False]
The following example illustrates the difference between non-existing paths and reaching a non-final state:
sage: NAF.process([2])
(False, None)
sage: NAF.add_transition(('_', 's', 2))
Transition from '_' to 's': 2|-
sage: NAF.process([2])
(False, 's')
sage.combinat.finite_state_machine.FSMLetterSymbol(letter)
Returns a string associated to the input letter.
INPUT:
• letter – the input letter or None (representing the empty word).
OUTPUT:
If letter is None the symbol for the empty word FSMEmptyWordSymbol is returned, otherwise the string associated to the letter.
EXAMPLES:
sage: from sage.combinat.finite_state_machine import FSMLetterSymbol
sage: FSMLetterSymbol(0)
'0'
sage: FSMLetterSymbol(None)
'-'
class sage.combinat.finite_state_machine.FSMProcessIterator(fsm, input_tape=None, initial_state=None, **kwargs)
This class is for processing an input string on a finite state machine.
An instance of this class is generated when FiniteStateMachine.process() or FiniteStateMachine.iter_process() of the finite state machine is invoked. It behaves like an iterator which, in each step, takes one letter of the input and runs (one step on) the finite state machine with this input. More precisely, in each step, the process iterator takes an outgoing transition of the current state, whose input label equals the input letter of the tape. The output label of the transition, if present, is written on the output tape.
INPUT:
• fsm – The finite state machine on which the input should be processed.
• input_tape – The input tape. It can be anything that is iterable.
• initial_state – The initial state in which the machine starts. If this is None, the unique initial state of the finite state machine is taken. If there are several, a ValueError is raised.
The process (iteration) stops if there are no more input letters on the tape. In this case a StopIteration exception is thrown. As a result, the following attributes are available:
• accept_input – Is True if the reached state is a final state.
• current_state – The current/reached state in the process.
• output_tape – The written output.
Current values of those attributes (except accept_input) are (also) available during the iteration.
OUTPUT:
An iterator.
EXAMPLES:
The following transducer reads binary words and outputs a word where blocks of ones are replaced by just a single one. Furthermore, only words that end with a zero are accepted.
sage: T = Transducer({'A': [('A', 0, 0), ('B', 1, None)],
....: 'B': [('B', 1, None), ('A', 0, [1, 0])]},
....: initial_states=['A'], final_states=['A'])
sage: input = [1, 1, 0, 0, 1, 0, 1, 1, 1, 0]
sage: T.process(input)
(True, 'A', [1, 0, 0, 1, 0, 1, 0])
The function FiniteStateMachine.process() creates a new FSMProcessIterator. We can do that manually, too, and get full access to the iteration process:
sage: from sage.combinat.finite_state_machine import FSMProcessIterator
sage: it = FSMProcessIterator(T, input_tape=input)
sage: for _ in it:
....: print (it.current_state, it.output_tape)
('B', [])
('B', [])
('A', [1, 0])
('A', [1, 0, 0])
('B', [1, 0, 0])
('A', [1, 0, 0, 1, 0])
('B', [1, 0, 0, 1, 0])
('B', [1, 0, 0, 1, 0])
('B', [1, 0, 0, 1, 0])
('A', [1, 0, 0, 1, 0, 1, 0])
sage: it.accept_input
True
TESTS:
sage: T = Transducer([[0, 0, 0, 0]])
sage: T.process([])
Traceback (most recent call last):
...
ValueError: No state is initial.
sage: T = Transducer([[0, 1, 0, 0]], initial_states=[0, 1])
sage: T.process([])
Traceback (most recent call last):
...
ValueError: Several initial states.
get_next_transition(word_in)
Returns the next transition according to word_in. It is assumed that we are in state self.current_state.
INPUT:
• word_in – the input word.
OUTPUT:
The next transition according to word_in. It is assumed that we are in state self.current_state. If no transition matches, a ValueError is thrown.
EXAMPLES:
sage: from sage.combinat.finite_state_machine import FSMProcessIterator
sage: inverter = Transducer({'A': [('A', 0, 1), ('A', 1, 0)]},
....: initial_states=['A'], final_states=['A'])
sage: it = FSMProcessIterator(inverter, input_tape=[0, 1])
sage: it.get_next_transition([0])
Transition from 'A' to 'A': 0|1
sage: it.get_next_transition([2])
Traceback (most recent call last):
...
ValueError: No transition with input [2] found.
next()
Makes one step in processing the input tape.
INPUT:
Nothing.
OUTPUT:
It returns the taken transition. A StopIteration exception is thrown when there is nothing more to read.
EXAMPLES:
sage: from sage.combinat.finite_state_machine import FSMProcessIterator
sage: inverter = Transducer({'A': [('A', 0, 1), ('A', 1, 0)]},
....: initial_states=['A'], final_states=['A'])
sage: it = FSMProcessIterator(inverter, input_tape=[0, 1])
sage: it.next()
Transition from 'A' to 'A': 0|1
sage: it.next()
Transition from 'A' to 'A': 1|0
sage: it.next()
Traceback (most recent call last):
...
StopIteration
read_letter()
Reads a letter from the input tape.
INPUT:
Nothing.
OUTPUT:
A letter.
Exception StopIteration is thrown if the tape has reached the end.
EXAMPLES:
sage: from sage.combinat.finite_state_machine import FSMProcessIterator
sage: inverter = Transducer({'A': [('A', 0, 1), ('A', 1, 0)]},
....: initial_states=['A'], final_states=['A'])
sage: it = FSMProcessIterator(inverter, input_tape=[0, 1])
sage: it.read_letter()
0
write_letter(letter)
Writes a letter on the output tape.
INPUT:
• letter – the letter to be written.
OUTPUT:
Nothing.
EXAMPLES:
sage: from sage.combinat.finite_state_machine import FSMProcessIterator
sage: inverter = Transducer({'A': [('A', 0, 1), ('A', 1, 0)]},
....: initial_states=['A'], final_states=['A'])
sage: it = FSMProcessIterator(inverter, input_tape=[0, 1])
sage: it.write_letter(42)
sage: it.output_tape
[42]
write_word(word)
Writes a word on the output tape.
INPUT:
• word – the word to be written.
OUTPUT:
Nothing.
EXAMPLES:
sage: from sage.combinat.finite_state_machine import FSMProcessIterator
sage: inverter = Transducer({'A': [('A', 0, 1), ('A', 1, 0)]},
....: initial_states=['A'], final_states=['A'])
sage: it = FSMProcessIterator(inverter, input_tape=[0, 1])
sage: it.write_word([4, 2])
sage: it.output_tape
[4, 2]
class sage.combinat.finite_state_machine.FSMState(label, word_out=None, is_initial=False, is_final=False, hook=None, color=None, allow_label_None=False)
Class for a state of a finite state machine.
INPUT:
• label – the label of the state.
• word_out – (default: None) a word that is written when the state is reached.
• is_initial – (default: False)
• is_final – (default: False)
• hook – (default: None) A function which is called when the state is reached during processing input.
• color – (default: None) In order to distinguish states, they can be given an arbitrary “color” (an arbitrary object). This is used in FiniteStateMachine.equivalence_classes(): states of different colors are never considered to be equivalent. Note that Automaton.determinisation() requires that color is hashable.
• allow_label_None – (default: False) If True, then None is also allowed as a label. Note that a state with label None is used in FSMProcessIterator.
OUTPUT:
Returns a state of a finite state machine.
EXAMPLES:
sage: from sage.combinat.finite_state_machine import FSMState
sage: A = FSMState('state 1', word_out=0, is_initial=True)
sage: A
'state 1'
sage: A.label()
'state 1'
sage: B = FSMState('state 2')
sage: A == B
False
It is not allowed to use None as a label:
sage: from sage.combinat.finite_state_machine import FSMState
sage: FSMState(None)
Traceback (most recent call last):
...
ValueError: Label None reserved for a special state, choose another label.
This can be overridden by:
sage: FSMState(None, allow_label_None=True)
None
Note that Automaton.determinisation() requires that color is hashable:
sage: A = Automaton([[0, 0, 0]], initial_states=[0])
sage: A.state(0).color = []
sage: A.determinisation()
Traceback (most recent call last):
...
TypeError: unhashable type: 'list'
sage: A.state(0).color = ()
sage: A.determinisation()
Automaton with 1 states
copy()
Returns a (shallow) copy of the state.
INPUT:
Nothing.
OUTPUT:
A new state.
EXAMPLES:
sage: from sage.combinat.finite_state_machine import FSMState
sage: A = FSMState('A')
sage: copy(A)
'A'
deepcopy(memo=None)
Returns a deep copy of the state.
INPUT:
• memo – (default: None) a dictionary storing already processed elements.
OUTPUT:
A new state.
EXAMPLES:
sage: from sage.combinat.finite_state_machine import FSMState
sage: A = FSMState('A')
sage: deepcopy(A)
'A'
label()
Returns the label of the state.
INPUT:
Nothing.
OUTPUT:
The label of the state.
EXAMPLES:
sage: from sage.combinat.finite_state_machine import FSMState
sage: A = FSMState('state')
sage: A.label()
'state'
relabeled(label, memo=None)
Returns a deep copy of the state with a new label.
INPUT:
• label – the label of the new state.
• memo – (default: None) a dictionary storing already processed elements.
OUTPUT:
A new state.
EXAMPLES:
sage: from sage.combinat.finite_state_machine import FSMState
sage: A = FSMState('A')
sage: A.relabeled('B')
'B'
class sage.combinat.finite_state_machine.FSMTransition(from_state, to_state, word_in=None, word_out=None, hook=None)
Class for a transition of a finite state machine.
INPUT:
• from_state – state from which transition starts.
• to_state – state in which transition ends.
• word_in – the input word of the transition (when the finite state machine is used as an automaton)
• word_out – the output word of the transition (when the finite state machine is used as a transducer)
OUTPUT:
A transition of a finite state machine.
EXAMPLES:
sage: from sage.combinat.finite_state_machine import FSMState, FSMTransition
sage: A = FSMState('A')
sage: B = FSMState('B')
sage: S = FSMTransition(A, B, 0, 1)
sage: T = FSMTransition('A', 'B', 0, 1)
sage: T == S
True
sage: U = FSMTransition('A', 'B', 0)
sage: U == T
False
copy()
Returns a (shallow) copy of the transition.
INPUT:
Nothing.
OUTPUT:
A new transition.
EXAMPLES:
sage: from sage.combinat.finite_state_machine import FSMTransition
sage: t = FSMTransition('A', 'B', 0)
sage: copy(t)
Transition from 'A' to 'B': 0|-
deepcopy(memo=None)
Returns a deep copy of the transition.
INPUT:
• memo – (default: None) a dictionary storing already processed elements.
OUTPUT:
A new transition.
EXAMPLES:
sage: from sage.combinat.finite_state_machine import FSMTransition
sage: t = FSMTransition('A', 'B', 0)
sage: deepcopy(t)
Transition from 'A' to 'B': 0|-
sage.combinat.finite_state_machine.FSMWordSymbol(word)
Returns a string of word. It may return the symbol of the empty word FSMEmptyWordSymbol.
INPUT:
• word – the input word.
OUTPUT:
A string of word.
EXAMPLES:
sage: from sage.combinat.finite_state_machine import FSMWordSymbol
sage: FSMWordSymbol([0, 1, 1])
'0,1,1'
class sage.combinat.finite_state_machine.FiniteStateMachine(data=None, initial_states=None, final_states=None, input_alphabet=None, output_alphabet=None, determine_alphabets=None, store_states_dict=True, on_duplicate_transition=None)
Class for a finite state machine.
A finite state machine is a finite set of states connected by transitions.
INPUT:
• data – can be any of the following:
1. a dictionary of dictionaries (of transitions),
2. a dictionary of lists (of states or transitions),
3. a list (of transitions),
4. a function (transition function),
5. another instance of a finite state machine.
• initial_states and final_states – the initial and final states of this machine
• input_alphabet and output_alphabet – the input and output alphabets of this machine
• determine_alphabets – If True, then the function determine_alphabets() is called after data was read and processed, if False, then not. If it is None, then it is decided during the construction of the finite state machine whether determine_alphabets() should be called.
• store_states_dict – If True, then additionally the states are stored in an internal dictionary for faster lookup.
• on_duplicate_transition – A function which is called when a transition is inserted into self which already existed (same from_state, same to_state, same word_in, same word_out).
This function is assumed to take two arguments, the first being the already existing transition, the second being the new transition (as an FSMTransition). The function must return the (possibly modified) original transition.
By default, we have on_duplicate_transition=None, which is interpreted as on_duplicate_transition=duplicate_transition_ignore, where duplicate_transition_ignore is a predefined function ignoring the occurrence. Other such predefined functions are duplicate_transition_raise_error and duplicate_transition_add_input.
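For illustration, a custom callback obeying this contract could, for instance, concatenate the output words of two colliding transitions. This is only a sketch; the helper duplicate_transition_concat_output is not predefined in this module:
sage: def duplicate_transition_concat_output(old_transition,
....:                                        new_transition):
....:     # word_out is a list (a word), so + concatenates the two words
....:     old_transition.word_out = (old_transition.word_out
....:                                + new_transition.word_out)
....:     return old_transition
It can then be passed as on_duplicate_transition when constructing the machine.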
OUTPUT:
A finite state machine.
The object creation of Automaton and Transducer is the same as the one described here (i.e. just replace the word FiniteStateMachine by Automaton or Transducer).
Each transition of an automaton has an input label. Automata can, for example, be determinised (see Automaton.determinisation()) and minimized (see Automaton.minimization()). Each transition of a transducer has an input and an output label. Transducers can, for example, be simplified (see Transducer.simplification()).
EXAMPLES:
sage: from sage.combinat.finite_state_machine import FSMState, FSMTransition
See documentation for more examples.
We illustrate the different input formats:
1. The input-data can be a dictionary of dictionaries, where
• the keys of the outer dictionary are state-labels (from-states of transitions),
• the keys of the inner dictionaries are state-labels (to-states of transitions),
• the values of the inner dictionaries specify the transition more precisely.
The easiest way is to use a tuple consisting of an input and an output word:
sage: FiniteStateMachine({'a':{'b':(0, 1), 'c':(1, 1)}})
Finite state machine with 3 states
Instead of the tuple anything iterable (e.g. a list) can be used as well.
If you want to use the arguments of FSMTransition directly, you can use a dictionary:
sage: FiniteStateMachine({'a':{'b':{'word_in':0, 'word_out':1},
....: 'c':{'word_in':1, 'word_out':1}}})
Finite state machine with 3 states
If you already have instances of FSMTransition, you can use them directly:
sage: FiniteStateMachine({'a':{'b':FSMTransition('a', 'b', 0, 1),
....: 'c':FSMTransition('a', 'c', 1, 1)}})
Finite state machine with 3 states
2. The input-data can be a dictionary of lists, where the keys are states or labels of states.
The list-elements can be states:
sage: a = FSMState('a')
sage: b = FSMState('b')
sage: c = FSMState('c')
sage: FiniteStateMachine({a:[b, c]})
Finite state machine with 3 states
Or the list-elements can simply be labels of states:
sage: FiniteStateMachine({'a':['b', 'c']})
Finite state machine with 3 states
The list-elements can also be transitions:
sage: FiniteStateMachine({'a':[FSMTransition('a', 'b', 0, 1),
....: FSMTransition('a', 'c', 1, 1)]})
Finite state machine with 3 states
Or they can be tuples of a label, an input word and an output word specifying a transition:
sage: FiniteStateMachine({'a':[('b', 0, 1), ('c', 1, 1)]})
Finite state machine with 3 states
3. The input-data can be a list, where its elements specify transitions:
sage: FiniteStateMachine([FSMTransition('a', 'b', 0, 1),
....: FSMTransition('a', 'c', 1, 1)])
Finite state machine with 3 states
It is possible to skip FSMTransition in the example above:
sage: FiniteStateMachine([('a', 'b', 0, 1), ('a', 'c', 1, 1)])
Finite state machine with 3 states
The parameters of the transition are given in tuples. However, anything iterable (e.g. a list) can be used as well.
You can also name the parameters of the transition. For this purpose you take a dictionary:
sage: FiniteStateMachine([{'from_state':'a', 'to_state':'b',
....: 'word_in':0, 'word_out':1},
....: {'from_state':'a', 'to_state':'c',
....: 'word_in':1, 'word_out':1}])
Finite state machine with 3 states
Other arguments, which FSMTransition accepts, can be added, too.
4. The input-data can also be a function acting as a transition function:
This function has two input arguments:
1. a label of a state (from which the transition starts),
2. a letter of the (input-)alphabet (as input-label of the transition).
It returns a tuple with the following entries:
1. a label of a state (to which state the transition goes),
2. a letter of or a word over the (output-)alphabet (as output-label of the transition).
It may also output a list of such tuples if several transitions from the from-state and the input letter exist (this means that the finite state machine is non-deterministic).
If the transition does not exist, the function should raise a LookupError or return an empty list.
When constructing a finite state machine in this way, some initial states and an input alphabet have to be specified.
sage: def f(state_from, read):
....:     if int(state_from) + read <= 2:
....:         state_to = 2*int(state_from) + read
....:         write = 0
....:     else:
....:         state_to = 2*int(state_from) + read - 5
....:         write = 1
....:     return (str(state_to), write)
sage: F = FiniteStateMachine(f, input_alphabet=[0, 1],
....: initial_states=['0'],
....: final_states=['0'])
sage: F([1, 0, 1])
(True, '0', [0, 0, 1])
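As mentioned above, a transition function may also signal a missing transition by raising a LookupError; the following minimal sketch (not one of the original examples) only admits the input letter $$0$$:
sage: def g(state_from, read):
....:     if read != 0:
....:         raise LookupError
....:     return (state_from, read)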
5. The input-data can be another instance of a finite state machine:
sage: FiniteStateMachine(FiniteStateMachine([]))
Traceback (most recent call last):
...
NotImplementedError
The following examples demonstrate the use of on_duplicate_transition:
sage: F = FiniteStateMachine([['a', 'a', 1/2], ['a', 'a', 1/2]])
sage: F.transitions()
[Transition from 'a' to 'a': 1/2|-]
sage: from sage.combinat.finite_state_machine import duplicate_transition_raise_error
sage: F1 = FiniteStateMachine([['a', 'a', 1/2], ['a', 'a', 1/2]],
....: on_duplicate_transition=duplicate_transition_raise_error)
Traceback (most recent call last):
...
ValueError: Attempting to re-insert transition Transition from 'a' to 'a': 1/2|-
Use duplicate_transition_add_input to emulate a Markov chain; the input labels are then considered as transition probabilities:
sage: from sage.combinat.finite_state_machine import duplicate_transition_add_input
sage: F = FiniteStateMachine([['a', 'a', 1/2], ['a', 'a', 1/2]],
....:     on_duplicate_transition=duplicate_transition_add_input)
sage: F.transitions()
[Transition from 'a' to 'a': 1|-]
TESTS:
sage: a = FSMState('S_a', 'a')
sage: b = FSMState('S_b', 'b')
sage: c = FSMState('S_c', 'c')
sage: d = FSMState('S_d', 'd')
sage: FiniteStateMachine({a:[b, c], b:[b, c, d],
....: c:[a, b], d:[a, c]})
Finite state machine with 4 states
We have several constructions which lead to the same finite state machine:
sage: A = FSMState('A')
sage: B = FSMState('B')
sage: C = FSMState('C')
sage: FSM1 = FiniteStateMachine(
....: {A:{B:{'word_in':0, 'word_out':1},
....: C:{'word_in':1, 'word_out':1}}})
sage: FSM2 = FiniteStateMachine({A:{B:(0, 1), C:(1, 1)}})
sage: FSM3 = FiniteStateMachine(
....: {A:{B:FSMTransition(A, B, 0, 1),
....: C:FSMTransition(A, C, 1, 1)}})
sage: FSM4 = FiniteStateMachine({A:[(B, 0, 1), (C, 1, 1)]})
sage: FSM5 = FiniteStateMachine(
....: {A:[FSMTransition(A, B, 0, 1), FSMTransition(A, C, 1, 1)]})
sage: FSM6 = FiniteStateMachine(
....: [{'from_state':A, 'to_state':B, 'word_in':0, 'word_out':1},
....: {'from_state':A, 'to_state':C, 'word_in':1, 'word_out':1}])
sage: FSM7 = FiniteStateMachine([(A, B, 0, 1), (A, C, 1, 1)])
sage: FSM8 = FiniteStateMachine(
....: [FSMTransition(A, B, 0, 1), FSMTransition(A, C, 1, 1)])
sage: FSM1 == FSM2 == FSM3 == FSM4 == FSM5 == FSM6 == FSM7 == FSM8
True
It is possible to skip FSMTransition in the example above.
Some more tests for different input-data:
sage: FiniteStateMachine({'a':{'a':[0, 0], 'b':[1, 1]},
....: 'b':{'b':[1, 0]}})
Finite state machine with 2 states
sage: a = FSMState('S_a', 'a')
sage: b = FSMState('S_b', 'b')
sage: c = FSMState('S_c', 'c')
sage: d = FSMState('S_d', 'd')
sage: t1 = FSMTransition(a, b)
sage: t2 = FSMTransition(b, c)
sage: t3 = FSMTransition(b, d)
sage: t4 = FSMTransition(c, d)
sage: FiniteStateMachine([t1, t2, t3, t4])
Finite state machine with 4 states
Kleene_closure()
TESTS:
sage: FiniteStateMachine().Kleene_closure()
Traceback (most recent call last):
...
NotImplementedError
accessible_components()
Returns a new finite state machine with the accessible states of self and all transitions between those states.
INPUT:
Nothing.
OUTPUT:
A finite state machine with the accessible states of self and all transitions between those states.
A state is accessible if there is a directed path from an initial state to the state. If self has no initial states then a copy of the finite state machine self is returned.
EXAMPLES:
sage: F = Automaton([(0, 0, 0), (0, 1, 1), (1, 1, 0), (1, 0, 1)],
....: initial_states=[0])
sage: F.accessible_components()
Automaton with 2 states
sage: F = Automaton([(0, 0, 1), (0, 0, 1), (1, 1, 0), (1, 0, 1)],
....: initial_states=[0])
sage: F.accessible_components()
Automaton with 1 states
add_from_transition_function(function, initial_states=None, explore_existing_states=True)
Constructs a finite state machine from a transition function.
INPUT:
• function may return a tuple (new_state, output_word) or a list of such tuples.
• initial_states – If no initial states are given, the already existing initial states of self are taken.
• explore_existing_states – If True (default), then already existing states in self (e.g. already given final states) will also be processed if they are reachable from the initial states.
OUTPUT:
Nothing.
EXAMPLES:
sage: F = FiniteStateMachine(initial_states=['A'],
....: input_alphabet=[0, 1])
sage: def f(state, input):
....: return [('A', input), ('B', 1-input)]
sage: F.add_from_transition_function(f)
sage: F.transitions()
[Transition from 'A' to 'A': 0|0,
Transition from 'A' to 'B': 0|1,
Transition from 'A' to 'A': 1|1,
Transition from 'A' to 'B': 1|0,
Transition from 'B' to 'A': 0|0,
Transition from 'B' to 'B': 0|1,
Transition from 'B' to 'A': 1|1,
Transition from 'B' to 'B': 1|0]
Initial states can also be given as a parameter:
sage: F = FiniteStateMachine(input_alphabet=[0,1])
sage: def f(state, input):
....: return [('A', input), ('B', 1-input)]
sage: F.add_from_transition_function(f, initial_states=['A'])
sage: F.initial_states()
['A']
Already existing states in the finite state machine (the final states in the example below) are also explored:
sage: F = FiniteStateMachine(initial_states=[0],
....: final_states=[1],
....: input_alphabet=[0])
sage: def transition_function(state, letter):
....: return(1-state, [])
sage: F.add_from_transition_function(transition_function)
sage: F.transitions()
[Transition from 0 to 1: 0|-,
Transition from 1 to 0: 0|-]
If explore_existing_states=False, however, this behavior is turned off, i.e., already existing states are not explored:
sage: F = FiniteStateMachine(initial_states=[0],
....: final_states=[1],
....: input_alphabet=[0])
sage: def transition_function(state, letter):
....: return(1-state, [])
sage: F.add_from_transition_function(transition_function,
....:     explore_existing_states=False)
sage: F.transitions()
[Transition from 0 to 1: 0|-]
TEST:
sage: F = FiniteStateMachine(initial_states=['A'])
sage: def f(state, input):
....: return [('A', input), ('B', 1-input)]
sage: F.add_from_transition_function(f)
Traceback (most recent call last):
...
ValueError: No input alphabet is given.
Try calling determine_alphabets().
sage: def transition(state, where):
....: return (vector([0, 0]), 1)
sage: Transducer(transition, input_alphabet=[0], initial_states=[0])
Traceback (most recent call last):
...
TypeError: mutable vectors are unhashable
add_state(state)
Adds a state to the finite state machine and returns the new state. If the state already exists, that existing state is returned.
INPUT:
• state is either an instance of FSMState or, otherwise, a label of a state.
OUTPUT:
The new or existing state.
EXAMPLES:
sage: from sage.combinat.finite_state_machine import FSMState
sage: F = FiniteStateMachine()
sage: A = FSMState('A', is_initial=True)
sage: F.add_state(A)
'A'
add_states(states)
Adds several states. See add_state() for more information.
INPUT:
• states – a list of states or iterator over states.
OUTPUT:
Nothing.
EXAMPLES:
sage: F = FiniteStateMachine()
sage: F.add_states(['A', 'B'])
sage: F.states()
['A', 'B']
add_transition(*args, **kwargs)
Adds a transition to the finite state machine and returns the new transition.
If the transition already exists, the return value of self.on_duplicate_transition is returned. See the documentation of FiniteStateMachine.
INPUT:
The following forms are all accepted:
sage: from sage.combinat.finite_state_machine import FSMState, FSMTransition
sage: A = FSMState('A')
sage: B = FSMState('B')
sage: FSM = FiniteStateMachine()
sage: FSM.add_transition(FSMTransition(A, B, 0, 1))
Transition from 'A' to 'B': 0|1
sage: FSM = FiniteStateMachine()
sage: FSM.add_transition(A, B, 0, 1)
Transition from 'A' to 'B': 0|1
sage: FSM = FiniteStateMachine()
sage: FSM.add_transition((A, B, 0, 1))
Transition from 'A' to 'B': 0|1
sage: FSM = FiniteStateMachine()
sage: FSM.add_transition('A', 'B', {'word_in': 0, 'word_out': 1})
Transition from 'A' to 'B': {'word_in': 0, 'word_out': 1}|-
sage: FSM = FiniteStateMachine()
sage: FSM.add_transition(from_state=A, to_state=B,
....:                    word_in=0, word_out=1)
Transition from 'A' to 'B': 0|1
sage: FSM = FiniteStateMachine()
sage: FSM.add_transition({'from_state': A, 'to_state': B,
....:                     'word_in': 0, 'word_out': 1})
Transition from 'A' to 'B': 0|1
sage: FSM = FiniteStateMachine()
sage: FSM.add_transition([A, B, 0, 1])
Transition from 'A' to 'B': 0|1
sage: FSM = FiniteStateMachine()
sage: FSM.add_transition('A', 'B', 0, 1)
Transition from 'A' to 'B': 0|1
If the states A and B are not instances of FSMState, then it is assumed that they are labels of states.
OUTPUT:
The new transition.
add_transitions_from_function(function, labels_as_input=True)
Adds one or more transitions if function(state, state) says that there are some.
INPUT:
• function – a transition function. Given two states from_state and to_state (or their labels if label_as_input is true), this function shall return a tuple (word_in, word_out) to add a transition from from_state to to_state with input and output labels word_in and word_out, respectively. If no such transition is to be added, the transition function shall return None. The transition function may also return a list of such tuples in order to add multiple transitions between the pair of states.
• labels_as_input – (default: True)
OUTPUT:
Nothing.
EXAMPLES:
sage: F = FiniteStateMachine()
sage: F.add_states(['A', 'B', 'C'])
sage: def f(state1, state2):
....: if state1 == 'C':
....: return None
....: return (0, 1)
sage: F.add_transitions_from_function(f)
sage: len(F.transitions())
6
Multiple transitions are also possible:
sage: F = FiniteStateMachine()
sage: F.add_states([0, 1])
sage: def f(state1, state2):
....: if state1 != state2:
....: return [(0, 1), (1, 0)]
....: else:
....: return None
sage: F.add_transitions_from_function(f)
sage: F.transitions()
[Transition from 0 to 1: 0|1,
Transition from 0 to 1: 1|0,
Transition from 1 to 0: 0|1,
Transition from 1 to 0: 1|0]
TESTS:
sage: F = FiniteStateMachine()
sage: F.add_state(0)
0
sage: def f(state1, state2):
....: return 1
sage: F.add_transitions_from_function(f)
Traceback (most recent call last):
...
ValueError: The callback function for add_transitions_from_function
is expected to return a pair (word_in, word_out) or a list of such
pairs. For states 0 and 0 however, it returned 1,
which is not acceptable.
adjacency_matrix(input=None, entry=None)
Returns the adjacency matrix of the underlying graph.
INPUT:
• input – Only transitions with input label input are respected.
• entry – The function entry takes a transition and the return value is written in the matrix as the entry (transition.from_state, transition.to_state).
OUTPUT:
A matrix.
If any label of a state is not an integer, the finite state machine is relabeled at the beginning. If there is more than one transition between two states, then the different return values of entry are added up.
The default value of entry takes the variable x to the power of the output word of the transition.
EXAMPLES:
sage: B = FiniteStateMachine({0:{0:(0, 0), 'a':(1, 0)},
....: 'a':{2:(0, 0), 3:(1, 0)},
....: 2:{0:(1, 1), 4:(0, 0)},
....: 3:{'a':(0, 1), 2:(1, 1)},
....: 4:{4:(1, 1), 3:(0, 1)}},
....: initial_states=[0])
sage: B.adjacency_matrix()
[1 1 0 0 0]
[0 0 1 1 0]
[x 0 0 0 1]
[0 x x 0 0]
[0 0 0 x x]
sage: B.adjacency_matrix(entry=(lambda transition: 1))
[1 1 0 0 0]
[0 0 1 1 0]
[1 0 0 0 1]
[0 1 1 0 0]
[0 0 0 1 1]
sage: B.adjacency_matrix(1, entry=(lambda transition:
....:     exp(I*transition.word_out[0]*var('t'))))
[ 0 1 0 0 0]
[ 0 0 0 1 0]
[e^(I*t) 0 0 0 0]
[ 0 0 e^(I*t) 0 0]
[ 0 0 0 0 e^(I*t)]
composition(other, algorithm=None, only_accessible_components=True)
Returns a new transducer which is the composition of self and other.
INPUT:
• other – a transducer
• algorithm – can be one of the following
• direct – The composition is calculated directly.
There can be arbitrarily many initial and final states, but the input and output labels must have length 1.
WARNING: The output of other is fed into self.
• explorative – An explorative algorithm is used.
At least the following restrictions apply, but are not checked:
• both self and other have exactly one initial state
• all input labels of transitions have length exactly 1
The input alphabet of self has to be specified.
This is a very limited implementation of composition. WARNING: The output of other is fed into self.
If algorithm is None, then the algorithm is chosen automatically (at the moment always direct).
OUTPUT:
A new transducer.
The labels of the new finite state machine are pairs of states of the original finite state machines. The color of a new state is the tuple of colors of the constituent states.
EXAMPLES:
sage: F = Transducer([('A', 'B', 1, 0), ('B', 'A', 0, 1)],
....: initial_states=['A', 'B'], final_states=['B'],
....: determine_alphabets=True)
sage: G = Transducer([(1, 1, 1, 0), (1, 2, 0, 1),
....: (2, 2, 1, 1), (2, 2, 0, 0)],
....: initial_states=[1], final_states=[2],
....: determine_alphabets=True)
sage: Hd = F.composition(G, algorithm='direct')
sage: Hd.initial_states()
[(1, 'B'), (1, 'A')]
sage: Hd.transitions()
[Transition from (1, 'B') to (1, 'A'): 1|1,
Transition from (1, 'A') to (2, 'B'): 0|0,
Transition from (2, 'B') to (2, 'A'): 0|1,
Transition from (2, 'A') to (2, 'B'): 1|0]
sage: F = Transducer([('A', 'B', 1, [1, 0]), ('B', 'B', 1, 1),
....: ('B', 'B', 0, 0)],
....: initial_states=['A'], final_states=['B'])
sage: G = Transducer([(1, 1, 0, 0), (1, 2, 1, 0),
....: (2, 2, 0, 1), (2, 1, 1, 1)],
....: initial_states=[1], final_states=[1])
sage: He = G.composition(F, algorithm='explorative')
sage: He.transitions()
[Transition from ('A', 1) to ('B', 2): 1|0,1,
Transition from ('B', 2) to ('B', 2): 0|1,
Transition from ('B', 2) to ('B', 1): 1|1,
Transition from ('B', 1) to ('B', 1): 0|0,
Transition from ('B', 1) to ('B', 2): 1|0]
Be aware that after composition, different transitions may share the same output label (same python object):
sage: F = Transducer([ ('A','B',0,0), ('B','A',0,0)],
....: initial_states=['A'],
....: final_states=['A'])
sage: F.transitions()[0].word_out is F.transitions()[1].word_out
False
sage: G = Transducer([('C','C',0,1)],
....: initial_states=['C'],
....: final_states=['C'])
sage: H = G.composition(F)
sage: H.transitions()[0].word_out is H.transitions()[1].word_out
True
TESTS:
Due to the limitations of the two algorithms, the following examples (the same as above, but with the respective other algorithm) do not give a full answer or do not work.
In the following, algorithm='explorative' is inadequate, as F has more than one initial state:
sage: F = Transducer([('A', 'B', 1, 0), ('B', 'A', 0, 1)],
....: initial_states=['A', 'B'], final_states=['B'],
....: determine_alphabets=True)
sage: G = Transducer([(1, 1, 1, 0), (1, 2, 0, 1),
....: (2, 2, 1, 1), (2, 2, 0, 0)],
....: initial_states=[1], final_states=[2],
....: determine_alphabets=True)
sage: He = F.composition(G, algorithm='explorative')
sage: He.initial_states()
[(1, 'A')]
sage: He.transitions()
[Transition from (1, 'A') to (2, 'B'): 0|0,
Transition from (2, 'B') to (2, 'A'): 0|1,
Transition from (2, 'A') to (2, 'B'): 1|0]
In the following example, algorithm='direct' is inappropriate as there are edges with output labels of length greater than 1:
sage: F = Transducer([('A', 'B', 1, [1, 0]), ('B', 'B', 1, 1),
....: ('B', 'B', 0, 0)],
....: initial_states=['A'], final_states=['B'])
sage: G = Transducer([(1, 1, 0, 0), (1, 2, 1, 0),
....: (2, 2, 0, 1), (2, 1, 1, 1)],
....: initial_states=[1], final_states=[1])
sage: Hd = G.composition(F, algorithm='direct')
concatenation(other)
TESTS:
sage: F = FiniteStateMachine([('A', 'A')])
sage: FiniteStateMachine().concatenation(F)
Traceback (most recent call last):
...
NotImplementedError
copy()
Returns a (shallow) copy of the finite state machine.
INPUT:
Nothing.
OUTPUT:
A new finite state machine.
TESTS:
sage: copy(FiniteStateMachine())
Traceback (most recent call last):
...
NotImplementedError
deepcopy(memo=None)
Returns a deep copy of the finite state machine.
INPUT:
• memo – (default: None) a dictionary storing already processed elements.
OUTPUT:
A new finite state machine.
EXAMPLES:
sage: F = FiniteStateMachine([('A', 'A', 0, 1), ('A', 'A', 1, 0)])
sage: deepcopy(F)
Finite state machine with 1 states
delete_state(s)
Deletes a state and all transitions coming or going to this state.
INPUT:
• s – a label of a state or an FSMState.
OUTPUT:
Nothing.
EXAMPLES:
sage: from sage.combinat.finite_state_machine import FSMTransition
sage: t1 = FSMTransition('A', 'B', 0)
sage: t2 = FSMTransition('B', 'B', 1)
sage: F = FiniteStateMachine([t1, t2])
sage: F.delete_state('A')
sage: F.transitions()
[Transition from 'B' to 'B': 1|-]
TESTS:
sage: F._states_
['B']
sage: F._states_dict_ # This shows that #16024 is fixed.
{'B': 'B'}
delete_transition(t)
Deletes a transition by removing it from the list of transitions of the state, where the transition starts.
INPUT:
• t – a transition.
OUTPUT:
Nothing.
EXAMPLES:
sage: F = FiniteStateMachine([('A', 'B', 0), ('B', 'A', 1)])
sage: F.delete_transition(('A', 'B', 0))
sage: F.transitions()
[Transition from 'B' to 'A': 1|-]
determine_alphabets(reset=True)
Determines the input and output alphabet according to the transitions in self.
INPUT:
• reset – If reset is True, then the existing input and output alphabets are erased, otherwise new letters are appended to the existing alphabets.
OUTPUT:
Nothing.
After this operation the input alphabet and the output alphabet of self are a list of letters.
EXAMPLES:
sage: T = Transducer([(1, 1, 1, 0), (1, 2, 2, 1),
....: (2, 2, 1, 1), (2, 2, 0, 0)],
....: determine_alphabets=False)
sage: (T.input_alphabet, T.output_alphabet)
(None, None)
sage: T.determine_alphabets()
sage: (T.input_alphabet, T.output_alphabet)
([0, 1, 2], [0, 1])
digraph(edge_labels='words_in_out')
Returns the graph of the finite state machine with labeled vertices and labeled edges.
INPUT:
• edge_labels – (default: 'words_in_out') can be
• 'words_in_out' (labels will be strings 'i|o')
• a function which takes a transition as input and returns the label
OUTPUT:
A graph.
EXAMPLES:
sage: from sage.combinat.finite_state_machine import FSMState
sage: A = FSMState('A')
sage: T = Transducer()
sage: T.graph()
Digraph on 0 vertices
sage: T.add_state(A)
'A'
sage: T.graph()
Digraph on 1 vertex
sage: T.add_transition(('A', 'A', 0, 1))
Transition from 'A' to 'A': 0|1
sage: T.graph()
Looped digraph on 1 vertex
disjoint_union(other)
TESTS:
sage: F = FiniteStateMachine([('A', 'A')])
sage: FiniteStateMachine().disjoint_union(F)
Traceback (most recent call last):
...
NotImplementedError
empty_copy(memo=None)
Returns an empty deep copy of the finite state machine, i.e., input_alphabet, output_alphabet, on_duplicate_transition are preserved, but states and transitions are not.
INPUT:
• memo – a dictionary storing already processed elements.
OUTPUT:
A new finite state machine.
EXAMPLES:
sage: from sage.combinat.finite_state_machine import duplicate_transition_raise_error
sage: F = FiniteStateMachine([('A', 'A', 0, 2), ('A', 'A', 1, 3)],
....: input_alphabet=[0, 1],
....: output_alphabet=[2, 3],
....: on_duplicate_transition=duplicate_transition_raise_error)
sage: FE = F.empty_copy(); FE
Finite state machine with 0 states
sage: FE.input_alphabet
[0, 1]
sage: FE.output_alphabet
[2, 3]
sage: FE.on_duplicate_transition == duplicate_transition_raise_error
True
equivalence_classes()
Returns a list of equivalence classes of states.
INPUT:
Nothing.
OUTPUT:
A list of equivalence classes of states.
Two states $$a$$ and $$b$$ are equivalent if and only if there is a bijection $$\varphi$$ between paths starting at $$a$$ and paths starting at $$b$$ with the following properties: Let $$p_a$$ be a path from $$a$$ to $$a'$$ and $$p_b$$ a path from $$b$$ to $$b'$$ such that $$\varphi(p_a)=p_b$$, then
• $$p_a.\mathit{word}_\mathit{in}=p_b.\mathit{word}_\mathit{in}$$,
• $$p_a.\mathit{word}_\mathit{out}=p_b.\mathit{word}_\mathit{out}$$,
• $$a'$$ and $$b'$$ have the same output label, and
• $$a'$$ and $$b'$$ are both final or both non-final.
The function equivalence_classes() returns a list of the equivalence classes to this equivalence relation.
This is one step of Moore’s minimization algorithm.
minimization()
EXAMPLES:
sage: fsm = FiniteStateMachine([("A", "B", 0, 1), ("A", "B", 1, 0),
....: ("B", "C", 0, 0), ("B", "C", 1, 1),
....: ("C", "D", 0, 1), ("C", "D", 1, 0),
....: ("D", "A", 0, 0), ("D", "A", 1, 1)])
sage: fsm.equivalence_classes()
[['A', 'C'], ['B', 'D']]
final_components()
Returns the final components of a finite state machine as finite state machines.
INPUT:
Nothing.
OUTPUT:
A list of finite state machines, each representing a final component of self.
A final component of a transducer T is a strongly connected component C such that there are no transitions of T leaving C.
The final components are the only parts of a transducer which influence the main terms of the asymptotic behaviour of the sum of output labels of a transducer, see [HKP2014] and [HKW2014].
EXAMPLES:
sage: T = Transducer([['A', 'B', 0, 0], ['B', 'C', 0, 1],
....: ['C', 'B', 0, 1], ['A', 'D', 1, 0],
....: ['D', 'D', 0, 0], ['D', 'B', 1, 0],
....: ['A', 'E', 2, 0], ['E', 'E', 0, 0]])
sage: FC = T.final_components()
sage: sorted(FC[0].transitions())
[Transition from 'B' to 'C': 0|1,
Transition from 'C' to 'B': 0|1]
sage: FC[1].transitions()
[Transition from 'E' to 'E': 0|0]
Another example (cycle of length 2):
sage: T = Automaton([[0, 1, 0], [1, 0, 0]])
sage: len(T.final_components()) == 1
True
sage: T.final_components()[0].transitions()
[Transition from 0 to 1: 0|-,
Transition from 1 to 0: 0|-]
REFERENCES:
[HKP2014] Clemens Heuberger, Sara Kropf, and Helmut Prodinger, Asymptotic analysis of the sum of the output of transducer, in preparation.
[HKW2014] Clemens Heuberger, Sara Kropf, and Stephan Wagner, Combinatorial Characterization of Independent Transducers via Functional Digraphs, arXiv:1404.3680.
final_states()
Returns a list of all final states.
INPUT:
Nothing.
OUTPUT:
A list of all final states.
EXAMPLES:
sage: from sage.combinat.finite_state_machine import FSMState
sage: A = FSMState('A', is_final=True)
sage: B = FSMState('B', is_initial=True)
sage: C = FSMState('C', is_final=True)
sage: F = FiniteStateMachine([(A, B), (A, C)])
sage: F.final_states()
['A', 'C']
graph(edge_labels='words_in_out')
Returns the graph of the finite state machine with labeled vertices and labeled edges.
INPUT:
• edge_labels – (default: 'words_in_out') can be
• 'words_in_out' (labels will be strings 'i|o')
• a function which takes a transition as input and returns the label
OUTPUT:
A graph.
EXAMPLES:
sage: from sage.combinat.finite_state_machine import FSMState
sage: A = FSMState('A')
sage: T = Transducer()
sage: T.graph()
Digraph on 0 vertices
sage: T.add_state(A)
'A'
sage: T.graph()
Digraph on 1 vertex
sage: T.add_transition(('A', 'A', 0, 1))
Transition from 'A' to 'A': 0|1
sage: T.graph()
Looped digraph on 1 vertex
has_final_state(state)
Returns whether state is one of the final states of the finite state machine.
INPUT:
• state can be a FSMState or a label of a state.
OUTPUT:
True or False.
EXAMPLES:
sage: FiniteStateMachine(final_states=['A']).has_final_state('A')
True
has_final_states()
Returns whether the finite state machine has a final state.
INPUT:
Nothing.
OUTPUT:
True or False.
EXAMPLES:
sage: FiniteStateMachine().has_final_states()
False
has_initial_state(state)
Returns whether state is one of the initial states of the finite state machine.
INPUT:
• state can be a FSMState or a label of a state.
OUTPUT:
True or False.
EXAMPLES:
sage: F = FiniteStateMachine([('A', 'A')], initial_states=['A'])
sage: F.has_initial_state('A')
True
has_initial_states()
Returns whether the finite state machine has an initial state.
INPUT:
Nothing.
OUTPUT:
True or False.
EXAMPLES:
sage: FiniteStateMachine().has_initial_states()
False
has_state(state)
Returns whether state is one of the states of the finite state machine.
INPUT:
• state can be a FSMState or a label of a state.
OUTPUT:
True or False.
EXAMPLES:
sage: FiniteStateMachine().has_state('A')
False
has_transition(transition)
Returns whether transition is one of the transitions of the finite state machine.
INPUT:
• transition has to be an FSMTransition.
OUTPUT:
True or False.
EXAMPLES:
sage: from sage.combinat.finite_state_machine import FSMTransition
sage: t = FSMTransition('A', 'A', 0, 1)
sage: FiniteStateMachine().has_transition(t)
False
sage: FiniteStateMachine().has_transition(('A', 'A', 0, 1))
Traceback (most recent call last):
...
TypeError: Transition is not an instance of FSMTransition.
induced_sub_finite_state_machine(states)
Returns a sub-finite-state-machine of the finite state machine induced by the given states.
INPUT:
• states – a list (or an iterator) of states (either labels or instances of FSMState) of the sub-finite-state-machine.
OUTPUT:
A new finite state machine. It consists (of deep copies) of the given states and (deep copies) of all transitions of self between these states.
EXAMPLE:
sage: FSM = FiniteStateMachine([(0, 1, 0), (0, 2, 0),
....: (1, 2, 0), (2, 0, 0)])
sage: sub_FSM = FSM.induced_sub_finite_state_machine([0, 1])
sage: sub_FSM.states()
[0, 1]
sage: sub_FSM.transitions()
[Transition from 0 to 1: 0|-]
sage: FSM.induced_sub_finite_state_machine([3])
Traceback (most recent call last):
...
ValueError: 3 is not a state of this finite state machine.
TESTS:
Make sure that the links between transitions and states are still intact:
sage: sub_FSM.transitions()[0].from_state is sub_FSM.state(0)
True
initial_states()
Returns a list of all initial states.
INPUT:
Nothing.
OUTPUT:
A list of all initial states.
EXAMPLES:
sage: from sage.combinat.finite_state_machine import FSMState
sage: A = FSMState('A', is_initial=True)
sage: B = FSMState('B')
sage: F = FiniteStateMachine([(A, B, 1, 0)])
sage: F.initial_states()
['A']
input_projection()
Returns an automaton where the output of each transition of self is deleted.
INPUT:
Nothing
OUTPUT:
An automaton.
EXAMPLES:
sage: F = FiniteStateMachine([('A', 'B', 0, 1), ('A', 'A', 1, 1),
....: ('B', 'B', 1, 0)])
sage: G = F.input_projection()
sage: G.transitions()
[Transition from 'A' to 'B': 0|-,
Transition from 'A' to 'A': 1|-,
Transition from 'B' to 'B': 1|-]
intersection(other)
TESTS:
sage: FiniteStateMachine().intersection(FiniteStateMachine())
Traceback (most recent call last):
...
NotImplementedError
is_Markov_chain()
Checks whether self is a Markov chain where the transition probabilities are modeled as input labels.
INPUT:
Nothing.
OUTPUT:
True or False.
on_duplicate_transition must be duplicate_transition_add_input and the sum of the input weights of the transitions leaving a state must add up to 1.
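The second condition can be sketched in plain Python; the helper below is not part of this module and assumes input words of length 1, using transitions() and the word_in attribute shown elsewhere in this documentation:
sage: def input_sums_to_one(machine):
....:     # sum the (probability) input labels of all transitions
....:     # leaving each state and check that each sum equals 1
....:     return all(sum(t.word_in[0]
....:                    for t in machine.transitions(state)) == 1
....:                for state in machine.states())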
EXAMPLES:
sage: from sage.combinat.finite_state_machine import duplicate_transition_add_input
sage: F = Transducer([[0, 0, 1/4, 0], [0, 1, 3/4, 1],
....: [1, 0, 1/2, 0], [1, 1, 1/2, 1]],
....: on_duplicate_transition=duplicate_transition_add_input)
sage: F.is_Markov_chain()
True
sage: F = Transducer([[0, 0, 1/4, 0], [0, 1, 3/4, 1],
....: [1, 0, 1/2, 0], [1, 1, 1/2, 1]])
sage: F.is_Markov_chain()
False
Sum of input labels of the transitions leaving states must be 1:
sage: F = Transducer([[0, 0, 1/4, 0], [0, 1, 3/4, 1],
....: [1, 0, 1/2, 0]],
....: on_duplicate_transition=duplicate_transition_add_input)
sage: F.is_Markov_chain()
False
is_complete()
Returns whether the finite state machine is complete.
INPUT:
Nothing.
OUTPUT:
True or False.
A finite state machine is considered to be complete if each transition has an input label of length one and for each pair $$(q, a)$$ where $$q$$ is a state and $$a$$ is an element of the input alphabet, there is exactly one transition from $$q$$ with input label $$a$$.
EXAMPLES:
sage: fsm = FiniteStateMachine([(0, 0, 0, 0),
....: (0, 1, 1, 1),
....: (1, 1, 0, 0)],
....: determine_alphabets=False)
sage: fsm.is_complete()
Traceback (most recent call last):
...
ValueError: No input alphabet is given. Try calling determine_alphabets().
sage: fsm.input_alphabet = [0, 1]
sage: fsm.is_complete()
False
sage: fsm.add_transition(1, 1, 1, 1)
Transition from 1 to 1: 1|1
sage: fsm.is_complete()
True
sage: fsm.add_transition(0, 0, 1, 0)
Transition from 0 to 0: 1|0
sage: fsm.is_complete()
False
is_connected()
TESTS:
sage: FiniteStateMachine().is_connected()
Traceback (most recent call last):
...
NotImplementedError
is_deterministic()
Returns whether the finite state machine is deterministic.
INPUT:
Nothing.
OUTPUT:
True or False.
A finite state machine is considered to be deterministic if each transition has input label of length one and for each pair $$(q,a)$$ where $$q$$ is a state and $$a$$ is an element of the input alphabet, there is at most one transition from $$q$$ with input label $$a$$.
TESTS:
sage: fsm = FiniteStateMachine()
sage: fsm.add_transition(('A', 'B', 0))
Transition from 'A' to 'B': 0|-
sage: fsm.is_deterministic()
True
sage: fsm.add_transition(('A', 'C', 0))
Transition from 'A' to 'C': 0|-
sage: fsm.is_deterministic()
False
sage: fsm.add_transition(('A', 'B', [0, 1]))
Transition from 'A' to 'B': 0,1|-
sage: fsm.is_deterministic()
False
iter_final_states()
Returns an iterator of the final states.
INPUT:
Nothing.
OUTPUT:
An iterator over all final states.
EXAMPLES:
sage: from sage.combinat.finite_state_machine import FSMState
sage: A = FSMState('A', is_final=True)
sage: B = FSMState('B', is_initial=True)
sage: C = FSMState('C', is_final=True)
sage: F = FiniteStateMachine([(A, B), (A, C)])
sage: [s.label() for s in F.iter_final_states()]
['A', 'C']
iter_initial_states()
Returns an iterator of the initial states.
INPUT:
Nothing.
OUTPUT:
An iterator over all initial states.
EXAMPLES:
sage: from sage.combinat.finite_state_machine import FSMState
sage: A = FSMState('A', is_initial=True)
sage: B = FSMState('B')
sage: F = FiniteStateMachine([(A, B, 1, 0)])
sage: [s.label() for s in F.iter_initial_states()]
['A']
iter_process(input_tape=None, initial_state=None, **kwargs)
EXAMPLES:
sage: inverter = Transducer({'A': [('A', 0, 1), ('A', 1, 0)]},
....: initial_states=['A'], final_states=['A'])
sage: it = inverter.iter_process(input_tape=[0, 1, 1])
sage: for _ in it:
....: pass
sage: it.output_tape
[1, 0, 0]
iter_states()
Returns an iterator of the states.
INPUT:
Nothing.
OUTPUT:
An iterator of the states of the finite state machine.
EXAMPLES:
sage: FSM = Automaton([('1', '2', 1), ('2', '2', 0)])
sage: [s.label() for s in FSM.iter_states()]
['1', '2']
iter_transitions(from_state=None)
Returns an iterator of all transitions.
INPUT:
• from_state – (default: None) If from_state is given, then a list of transitions starting there is given.
OUTPUT:
An iterator of all transitions.
EXAMPLES:
sage: FSM = Automaton([('1', '2', 1), ('2', '2', 0)])
sage: [(t.from_state.label(), t.to_state.label())
....: for t in FSM.iter_transitions('1')]
[('1', '2')]
sage: [(t.from_state.label(), t.to_state.label())
....: for t in FSM.iter_transitions('2')]
[('2', '2')]
sage: [(t.from_state.label(), t.to_state.label())
....: for t in FSM.iter_transitions()]
[('1', '2'), ('2', '2')]
markov_chain_simplification()
Considers self as a Markov chain with probabilities as input labels and simplifies it.
INPUT:
Nothing.
OUTPUT:
Simplified version of self.
EXAMPLE:
sage: from sage.combinat.finite_state_machine import duplicate_transition_add_input
sage: T = Transducer([[1, 2, 1/4, 0], [1, -2, 1/4, 0], [1, -2, 1/2, 0],
....: [2, 2, 1/4, 1], [2, -2, 1/4, 1], [-2, -2, 1/4, 1],
....: [-2, 2, 1/4, 1], [2, 3, 1/2, 2], [-2, 3, 1/2, 2]],
....: initial_states=[1],
....: final_states=[3],
....: on_duplicate_transition=duplicate_transition_add_input)
sage: T1 = T.markov_chain_simplification()
sage: sorted(T1.transitions())
[Transition from ((1,),) to ((2, -2),): 1|0,
Transition from ((2, -2),) to ((2, -2),): 1/2|1,
Transition from ((2, -2),) to ((3,),): 1/2|2]
merged_transitions()
Merges transitions which have the same from_state, to_state and word_out while adding their word_in.
INPUT:
Nothing.
OUTPUT:
A finite state machine with merged transitions. If no mergers occur, return self.
EXAMPLE:
sage: from sage.combinat.finite_state_machine import duplicate_transition_add_input
sage: T = Transducer([[1, 2, 1/4, 1], [1, -2, 1/4, 1], [1, -2, 1/2, 1],
....: [2, 2, 1/4, 1], [2, -2, 1/4, 1], [-2, -2, 1/4, 1],
....: [-2, 2, 1/4, 1], [2, 3, 1/2, 1], [-2, 3, 1/2, 1]],
....: on_duplicate_transition=duplicate_transition_add_input)
sage: T1 = T.merged_transitions()
sage: T1 is T
False
sage: sorted(T1.transitions())
[Transition from -2 to -2: 1/4|1,
Transition from -2 to 2: 1/4|1,
Transition from -2 to 3: 1/2|1,
Transition from 1 to 2: 1/4|1,
Transition from 1 to -2: 3/4|1,
Transition from 2 to -2: 1/4|1,
Transition from 2 to 2: 1/4|1,
Transition from 2 to 3: 1/2|1]
Applying the function again does not change the result:
sage: T2 = T1.merged_transitions()
sage: T2 is T1
True
output_projection()
Returns an automaton where the input of each transition of self is deleted and the new input is the original output.
INPUT:
Nothing.
OUTPUT:
An automaton.
EXAMPLES:
sage: F = FiniteStateMachine([('A', 'B', 0, 1), ('A', 'A', 1, 1),
....: ('B', 'B', 1, 0)])
sage: G = F.output_projection()
sage: G.transitions()
[Transition from 'A' to 'B': 1|-,
Transition from 'A' to 'A': 1|-,
Transition from 'B' to 'B': 0|-]
plot()
Plots a graph of the finite state machine with labeled vertices and labeled edges.
INPUT:
Nothing.
OUTPUT:
A plot of the graph of the finite state machine.
TESTS:
sage: FiniteStateMachine([('A', 'A', 0)]).plot()
predecessors(state, valid_input=None)
Lists all predecessors of a state.
INPUT:
• state – the state from which the predecessors should be listed.
• valid_input – If valid_input is a list, then we only consider transitions whose input labels are contained in valid_input. state has to be a FSMState (not a label of a state). If input labels of length larger than $$1$$ are used, then valid_input has to be a list of lists.
OUTPUT:
A list of states.
EXAMPLES:
sage: A = Transducer([('I', 'A', 'a', 'b'), ('I', 'B', 'b', 'c'),
....: ('I', 'C', 'c', 'a'), ('A', 'F', 'b', 'a'),
....: ('B', 'F', ['c', 'b'], 'b'), ('C', 'F', 'a', 'c')],
....: initial_states=['I'], final_states=['F'])
sage: A.predecessors(A.state('A'))
['A', 'I']
sage: A.predecessors(A.state('F'), valid_input=['b', 'a'])
['F', 'C', 'A', 'I']
sage: A.predecessors(A.state('F'), valid_input=[['c', 'b'], 'a'])
['F', 'C', 'B']
prepone_output()
Apply the following to each state $$s$$ (except initial and final states) of the finite state machine as often as possible:
If a letter $$a$$ is a prefix of the output labels of all transitions from $$s$$, then remove it from all these labels and append it to all output labels of all transitions leading to $$s$$.
We assume that the states have no output labels.
INPUT:
Nothing.
OUTPUT:
Nothing.
EXAMPLES:
sage: A = Transducer([('A', 'B', 1, 1), ('B', 'B', 0, 0), ('B', 'C', 1, 0)],
....: initial_states=['A'], final_states=['C'])
sage: A.prepone_output()
sage: A.transitions()
[Transition from 'A' to 'B': 1|1,0,
Transition from 'B' to 'B': 0|0,
Transition from 'B' to 'C': 1|-]
sage: B = Transducer([('A', 'B', 0, 1), ('B', 'C', 1, [1, 1]), ('B', 'C', 0, 1)],
....: initial_states=['A'], final_states=['C'])
sage: B.prepone_output()
sage: B.transitions()
[Transition from 'A' to 'B': 0|1,1,
Transition from 'B' to 'C': 1|1,
Transition from 'B' to 'C': 0|-]
If initial states are not labeled as such, unexpected results may be obtained:
sage: C = Transducer([(0,1,0,0)])
sage: C.prepone_output()
prepone_output: All transitions leaving state 0 have an
output label with prefix 0. However, there is no inbound
transition and it is not an initial state. This routine
(possibly called by simplification) therefore erased this
prefix from all outbound transitions.
sage: C.transitions()
[Transition from 0 to 1: 0|-]
Output labels do not have to be hashable:
sage: C = Transducer([(0, 1, 0, []),
....: (1, 0, 0, [vector([0, 0]), 0]),
....: (1, 1, 1, [vector([0, 0]), 1]),
....: (0, 0, 1, 0)],
....: determine_alphabets=False,
....: initial_states=[0])
sage: C.prepone_output()
sage: sorted(C.transitions())
[Transition from 0 to 1: 0|(0, 0),
Transition from 0 to 0: 1|0,
Transition from 1 to 0: 0|0,
Transition from 1 to 1: 1|1,(0, 0)]
process(*args, **kwargs)
Returns whether the finite state machine accepts the input, the state where the computation stops and which output is generated.
INPUT:
• input_tape – The input tape can be a list with entries from the input alphabet.
• initial_state – (default: None) The state in which to start. If this parameter is None and there is only one initial state in the machine, then this state is taken.
OUTPUT:
A triple, where
• the first entry is True if the input string is accepted,
• the second gives the reached state after processing the input tape (This is a state with label None if the input could not be processed, i.e., when at one point no transition to go could be found.), and
• the third gives a list of the output labels used during processing (in the case the finite state machine runs as transducer).
Note that if the finite state machine is not deterministic, only one possible path is taken. This means that in this case the output can be wrong.
EXAMPLES:
sage: from sage.combinat.finite_state_machine import FSMState
sage: A = FSMState('A', is_initial = True, is_final = True)
sage: binary_inverter = FiniteStateMachine({A:[(A, 0, 1), (A, 1, 0)]})
sage: binary_inverter.process([0, 1, 0, 0, 1, 1])
(True, 'A', [1, 0, 1, 1, 0, 0])
Alternatively, we can invoke this function by:
sage: binary_inverter([0, 1, 0, 0, 1, 1])
(True, 'A', [1, 0, 1, 1, 0, 0])
sage: NAF_ = FSMState('_', is_initial = True, is_final = True)
sage: NAF1 = FSMState('1', is_final = True)
sage: NAF = FiniteStateMachine(
....: {NAF_: [(NAF_, 0), (NAF1, 1)], NAF1: [(NAF_, 0)]})
sage: [NAF.process(w)[0] for w in [[0], [0, 1], [1, 1], [0, 1, 0, 1],
....: [0, 1, 1, 1, 0], [1, 0, 0, 1, 1]]]
[True, True, False, True, False, False]
product_FiniteStateMachine(other, function, new_input_alphabet=None, only_accessible_components=True)
Returns a new finite state machine whose states are pairs of states of the original finite state machines.
INPUT:
• other – a finite state machine
• function – has to accept two transitions from $$A$$ to $$B$$ and from $$C$$ to $$D$$ and return a pair (word_in, word_out), which is the label of the transition from $$(A, C)$$ to $$(B, D)$$. If there is no transition from $$(A, C)$$ to $$(B, D)$$, then function should raise a LookupError.
• new_input_alphabet – (optional) the new input alphabet as a list.
• only_accessible_components – If True (default), then the result is piped through accessible_components. If no new_input_alphabet is given, it is determined by determine_alphabets().
OUTPUT:
A finite state machine whose states are pairs of states of the original finite state machines.
The labels of the transitions are defined by function.
The color of a new state is the tuple of colors of the constituent states of self and other.
EXAMPLES:
sage: F = Automaton([('A', 'B', 1), ('A', 'A', 0), ('B', 'A', 2)],
....: initial_states=['A'], final_states=['B'],
....: determine_alphabets=True)
sage: G = Automaton([(1, 1, 1)], initial_states=[1], final_states=[1])
sage: def addition(transition1, transition2):
....: return (transition1.word_in[0] + transition2.word_in[0],
....: None)
sage: H = F.product_FiniteStateMachine(G, addition, [0, 1, 2, 3], only_accessible_components=False)
sage: H.transitions()
[Transition from ('A', 1) to ('B', 1): 2|-,
Transition from ('A', 1) to ('A', 1): 1|-,
Transition from ('B', 1) to ('A', 1): 3|-]
sage: H1 = F.product_FiniteStateMachine(G, addition, [0, 1, 2, 3], only_accessible_components=False)
sage: H1.states()[0].label()[0] is F.states()[0]
True
sage: H1.states()[0].label()[1] is G.states()[0]
True
sage: F = Automaton([(0,1,1/4), (0,0,3/4), (1,1,3/4), (1,0,1/4)],
....: initial_states=[0] )
sage: G = Automaton([(0,0,1), (1,1,3/4), (1,0,1/4)],
....: initial_states=[0] )
sage: H = F.product_FiniteStateMachine(
....: G, lambda t1,t2: (t1.word_in[0]*t2.word_in[0], None))
sage: H.states()
[(0, 0), (1, 0)]
sage: F = Automaton([(0,1,1/4), (0,0,3/4), (1,1,3/4), (1,0,1/4)],
....: initial_states=[0] )
sage: G = Automaton([(0,0,1), (1,1,3/4), (1,0,1/4)],
....: initial_states=[0] )
sage: H = F.product_FiniteStateMachine(G,
....: lambda t1,t2: (t1.word_in[0]*t2.word_in[0], None),
....: only_accessible_components=False)
sage: H.states()
[(0, 0), (1, 0), (0, 1), (1, 1)]
TESTS:
Check that colors are correctly dealt with. In particular, the new colors have to be hashable such that Automaton.determinisation() does not fail:
sage: A = Automaton([[0, 0, 0]], initial_states=[0])
sage: B = A.product_FiniteStateMachine(A,
....: lambda t1, t2: (0, None))
sage: B.states()[0].color
(None, None)
sage: B.determinisation()
Automaton with 1 states
projection(what='input')
Returns an Automaton whose transition labels are the projection of the transition labels of the input.
INPUT:
• what – (default: input) either input or output.
OUTPUT:
An automaton.
EXAMPLES:
sage: F = FiniteStateMachine([('A', 'B', 0, 1), ('A', 'A', 1, 1),
....: ('B', 'B', 1, 0)])
sage: G = F.projection(what='output')
sage: G.transitions()
[Transition from 'A' to 'B': 1|-,
Transition from 'A' to 'A': 1|-,
Transition from 'B' to 'B': 0|-]
quotient(classes)
Constructs the quotient with respect to the equivalence classes.
INPUT:
• classes is a list of equivalence classes of states.
OUTPUT:
A finite state machine.
The labels of the new states are tuples of states of self, corresponding to classes.
Assume that $$c$$ is a class, and $$a$$ and $$b$$ are states in $$c$$. Then there is a bijection $$\varphi$$ between the transitions from $$a$$ and the transitions from $$b$$ with the following properties: if $$\varphi(t_a)=t_b$$, then
• $$t_a.\mathit{word}_\mathit{in}=t_b.\mathit{word}_\mathit{in}$$,
• $$t_a.\mathit{word}_\mathit{out}=t_b.\mathit{word}_\mathit{out}$$, and
• $$t_a$$ and $$t_b$$ lead to some equivalent states $$a'$$ and $$b'$$.
Non-initial states may be merged with initial states; the resulting state is then an initial state.
All states in a class must have the same is_final and word_out values.
EXAMPLES:
sage: fsm = FiniteStateMachine([("A", "B", 0, 1), ("A", "B", 1, 0),
....: ("B", "C", 0, 0), ("B", "C", 1, 1),
....: ("C", "D", 0, 1), ("C", "D", 1, 0),
....: ("D", "A", 0, 0), ("D", "A", 1, 1)])
sage: fsmq = fsm.quotient([[fsm.state("A"), fsm.state("C")],
....: [fsm.state("B"), fsm.state("D")]])
sage: fsmq.transitions()
[Transition from ('A', 'C')
to ('B', 'D'): 0|1,
Transition from ('A', 'C')
to ('B', 'D'): 1|0,
Transition from ('B', 'D')
to ('A', 'C'): 0|0,
Transition from ('B', 'D')
to ('A', 'C'): 1|1]
sage: fsmq.relabeled().transitions()
[Transition from 0 to 1: 0|1,
Transition from 0 to 1: 1|0,
Transition from 1 to 0: 0|0,
Transition from 1 to 0: 1|1]
sage: fsmq1 = fsm.quotient(fsm.equivalence_classes())
sage: fsmq1 == fsmq
True
sage: fsm.quotient([[fsm.state("A"), fsm.state("B"), fsm.state("C"), fsm.state("D")]])
Traceback (most recent call last):
...
AssertionError: Transitions of state 'A' and 'B' are incompatible.
relabeled(memo=None)
Returns a deep copy of the finite state machine, but the states are relabeled by integers starting with 0.
INPUT:
• memo – (default: None) a dictionary storing already processed elements.
OUTPUT:
A new finite state machine.
EXAMPLES:
sage: FSM1 = FiniteStateMachine([('A', 'B'), ('B', 'C'), ('C', 'A')])
sage: FSM1.states()
['A', 'B', 'C']
sage: FSM2 = FSM1.relabeled()
sage: FSM2.states()
[0, 1, 2]
remove_epsilon_transitions()
TESTS:
sage: FiniteStateMachine().remove_epsilon_transitions()
Traceback (most recent call last):
...
NotImplementedError
set_coordinates(coordinates, default=True)
Set coordinates of the states for the LaTeX representation by a dictionary or a function mapping labels to coordinates.
INPUT:
• coordinates – a dictionary or a function mapping labels of states to pairs interpreted as coordinates.
• default – If True, then states not given by coordinates get a default position on a circle of radius 3.
OUTPUT:
Nothing.
EXAMPLES:
sage: F = Automaton([[0, 1, 1], [1, 2, 2], [2, 0, 0]])
sage: F.set_coordinates({0: (0, 0), 1: (2, 0), 2: (1, 1)})
sage: F.state(0).coordinates
(0, 0)
We can also use a function to determine the coordinates:
sage: F = Automaton([[0, 1, 1], [1, 2, 2], [2, 0, 0]])
sage: F.set_coordinates(lambda l: (l, 3/(l+1)))
sage: F.state(2).coordinates
(2, 1)
split_transitions()
Returns a new transducer, where all transitions in self with input labels consisting of more than one letter are replaced by a path of the corresponding length.
INPUT:
Nothing.
OUTPUT:
A new transducer.
EXAMPLES:
sage: A = Transducer([('A', 'B', [1, 2, 3], 0)],
....: initial_states=['A'], final_states=['B'])
sage: A.split_transitions().states()
[('A', ()), ('B', ()),
('A', (1,)), ('A', (1, 2))]
state(state)
Returns the state of the finite state machine.
INPUT:
• state – If state is not an instance of FSMState, then it is assumed that it is the label of a state.
OUTPUT:
Returns the state of the finite state machine corresponding to state.
If no state is found, then a LookupError is thrown.
EXAMPLES:
sage: from sage.combinat.finite_state_machine import FSMState
sage: A = FSMState('A')
sage: FSM = FiniteStateMachine([(A, 'B'), ('C', A)])
sage: FSM.state('A') == A
True
sage: FSM.state('xyz')
Traceback (most recent call last):
...
LookupError: No state with label xyz found.
states()
Returns the states of the finite state machine.
INPUT:
Nothing.
OUTPUT:
The states of the finite state machine as list.
EXAMPLES:
sage: FSM = Automaton([('1', '2', 1), ('2', '2', 0)])
sage: FSM.states()
['1', '2']
transition(transition)
Returns the transition of the finite state machine.
INPUT:
• transition – If transition is not an instance of FSMTransition, then it is assumed that it is a tuple (from_state, to_state, word_in, word_out).
OUTPUT:
Returns the transition of the finite state machine corresponding to transition.
If no transition is found, then a LookupError is thrown.
EXAMPLES:
sage: from sage.combinat.finite_state_machine import FSMTransition
sage: t = FSMTransition('A', 'B', 0)
sage: F = FiniteStateMachine([t])
sage: F.transition(('A', 'B', 0))
Transition from 'A' to 'B': 0|-
sage: id(t) == id(F.transition(('A', 'B', 0)))
True
transitions(from_state=None)
Returns a list of all transitions.
INPUT:
• from_state – (default: None) If from_state is given, then a list of transitions starting there is returned.
OUTPUT:
A list of all transitions.
EXAMPLES:
sage: FSM = Automaton([('1', '2', 1), ('2', '2', 0)])
sage: FSM.transitions()
[Transition from '1' to '2': 1|-,
Transition from '2' to '2': 0|-]
transposition()
Returns a new finite state machine, where all transitions of the input finite state machine are reversed.
INPUT:
Nothing.
OUTPUT:
A new finite state machine.
EXAMPLES:
sage: aut = Automaton([('A', 'A', 0), ('A', 'A', 1), ('A', 'B', 0)],
....: initial_states=['A'], final_states=['B'])
sage: aut.transposition().transitions('B')
[Transition from 'B' to 'A': 0|-]
sage: aut = Automaton([('1', '1', 1), ('1', '2', 0), ('2', '2', 0)],
....: initial_states=['1'], final_states=['1', '2'])
sage: aut.transposition().initial_states()
['1', '2']
class sage.combinat.finite_state_machine.Transducer(data=None, initial_states=None, final_states=None, input_alphabet=None, output_alphabet=None, determine_alphabets=None, store_states_dict=True, on_duplicate_transition=None)
This creates a transducer, which is a finite state machine, whose transitions have input and output labels.
A transducer has additional features like creating a simplified transducer.
EXAMPLES:
We can create a transducer performing the addition of 1 (for numbers given in binary and read from right to left) in the following way:
sage: T = Transducer([('C', 'C', 1, 0), ('C', 'N', 0, 1),
....: ('N', 'N', 0, 0), ('N', 'N', 1, 1)],
....: initial_states=['C'], final_states=['N'])
sage: T
Transducer with 2 states
sage: T([0])
[1]
sage: T([1,1,0])
[0, 0, 1]
sage: ZZ(T(15.digits(base=2)+[0]), base=2)
16
Note that we have padded the binary input sequence by a $$0$$ so that the transducer can reach its final state.
TESTS:
sage: Transducer()
Transducer with 0 states
cartesian_product(other, only_accessible_components=True)
Warning
The default output of this method is scheduled to change. This docstring describes the new default behaviour, which can already be achieved by setting FSMOldCodeTransducerCartesianProduct to False.
Return a new transducer which can simultaneously process an input with self and other where the output labels are pairs of the original output labels.
INPUT:
• other – a finite state machine
• only_accessible_components – If True (default), then the result is piped through accessible_components. If no new_input_alphabet is given, it is determined by determine_alphabets().
OUTPUT:
A transducer which can simultaneously process an input with self and other.
The set of states of the new transducer is the cartesian product of the set of states of self and other.
Let $$(A, B, a, b)$$ be a transition of self and $$(C, D, c, d)$$ be a transition of other. Then there is a transition $$((A, C), (B, D), a, (b, d))$$ in the new transducer if $$a = c$$.
EXAMPLES:
Originally a different output was constructed by Transducer.cartesian_product. This output is now produced by Transducer.intersection.
sage: transducer1 = Transducer([('A', 'A', 0, 0),
....: ('A', 'A', 1, 1)],
....: initial_states=['A'],
....: final_states=['A'],
....: determine_alphabets=True)
sage: transducer2 = Transducer([(0, 1, 0, ['b', 'c']),
....: (0, 0, 1, 'b'),
....: (1, 1, 0, 'a')],
....: initial_states=[0],
....: final_states=[1],
....: determine_alphabets=True)
sage: result = transducer1.cartesian_product(transducer2)
doctest:1: DeprecationWarning: The output of
Transducer.cartesian_product will change.
Please use Transducer.intersection for the original output.
See http://trac.sagemath.org/16061 for details.
sage: result
Transducer with 0 states
By setting FSMOldCodeTransducerCartesianProduct to False the new desired output is produced.
sage: sage.combinat.finite_state_machine.FSMOldCodeTransducerCartesianProduct = False
sage: result = transducer1.cartesian_product(transducer2)
sage: result
Transducer with 2 states
sage: result.transitions()
[Transition from ('A', 0) to ('A', 1): 0|(0, 'b'),(None, 'c'),
Transition from ('A', 0) to ('A', 0): 1|(1, 'b'),
Transition from ('A', 1) to ('A', 1): 0|(0, 'a')]
sage: result([1, 0, 0])
[(1, 'b'), (0, 'b'), (None, 'c'), (0, 'a')]
sage: (transducer1([1, 0, 0]), transducer2([1, 0, 0]))
([1, 0, 0], ['b', 'b', 'c', 'a'])
The following transducer counts the number of 11 blocks minus the number of 10 blocks over the alphabet [0, 1].
sage: count_11 = transducers.CountSubblockOccurrences(
....: [1, 1],
....: input_alphabet=[0, 1])
sage: count_10 = transducers.CountSubblockOccurrences(
....: [1, 0],
....: input_alphabet=[0, 1])
sage: count_11x10 = count_11.cartesian_product(count_10)
sage: difference = transducers.sub([0, 1])(count_11x10)
sage: T = difference.simplification().relabeled()
sage: T.initial_states()
[1]
sage: sorted(T.transitions())
[Transition from 0 to 1: 0|-1,
Transition from 0 to 0: 1|1,
Transition from 1 to 1: 0|0,
Transition from 1 to 0: 1|0]
sage: input = [0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0]
sage: output = [0, 0, 1, -1, 0, -1, 0, 0, 0, 1, 1, -1]
sage: T(input) == output
True
If other is an automaton, then cartesian_product() returns self where the input is restricted to the input accepted by other.
For example, if the transducer transforms the standard binary expansion into the non-adjacent form and the automaton recognizes the binary expansion without adjacent ones, then the cartesian product of these two is a transducer which does not change the input (except for changing a to (a, None) and ignoring a leading $$0$$).
sage: NAF = Transducer([(0, 1, 0, None),
....: (0, 2, 1, None),
....: (1, 1, 0, 0),
....: (1, 2, 1, 0),
....: (2, 1, 0, 1),
....: (2, 3, 1, -1),
....: (3, 2, 0, 0),
....: (3, 3, 1, 0)],
....: initial_states=[0],
....: final_states=[1],
....: determine_alphabets=True)
sage: aut11 = Automaton([(0, 0, 0), (0, 1, 1), (1, 0, 0)],
....: initial_states=[0],
....: final_states=[0, 1],
....: determine_alphabets=True)
sage: res = NAF.cartesian_product(aut11)
sage: res([1, 0, 0, 1, 0, 1, 0])
[(1, None), (0, None), (0, None), (1, None), (0, None), (1, None)]
This is obvious because if the standard binary expansion does not have adjacent ones, then it is the same as the non-adjacent form.
Be aware that cartesian_product() is not commutative.
sage: aut11.cartesian_product(NAF)
Traceback (most recent call last):
...
TypeError: Only an automaton can be intersected with an automaton.
intersection(other, only_accessible_components=True)
Returns a new transducer which accepts an input if it is accepted by both given finite state machines producing the same output.
INPUT:
• other – a transducer
• only_accessible_components – If True (default), then the result is piped through accessible_components. If no new_input_alphabet is given, it is determined by determine_alphabets().
OUTPUT:
A new transducer which computes the intersection (see below) of the languages of self and other.
The set of states of the transducer is the cartesian product of the sets of states of both given transducers. There is a transition $$((A, B), (C, D), a, b)$$ in the new transducer if there are transitions $$(A, C, a, b)$$ and $$(B, D, a, b)$$ in the old transducers.
EXAMPLES:
sage: transducer1 = Transducer([('1', '2', 1, 0),
....: ('2', '2', 1, 0),
....: ('2', '2', 0, 1)],
....: initial_states=['1'],
....: final_states=['2'],
....: determine_alphabets=True)
sage: transducer2 = Transducer([('A', 'A', 1, 0),
....: ('A', 'B', 0, 0),
....: ('B', 'B', 0, 1),
....: ('B', 'A', 1, 1)],
....: initial_states=['A'],
....: final_states=['B'],
....: determine_alphabets=True)
sage: res = transducer1.intersection(transducer2)
sage: res.transitions()
[Transition from ('1', 'A') to ('2', 'A'): 1|0,
Transition from ('2', 'A') to ('2', 'A'): 1|0]
In general, transducers are not closed under intersection. But for transducers which do not have epsilon-transitions, the intersection is well defined (cf. [BaWo2012]). However, in the next example the intersection of the two transducers is not well defined. The intersection of the languages consists of $$(a^n, b^n c^n)$$. This set is not recognizable by a finite transducer.
sage: t1 = Transducer([(0, 0, 'a', 'b'),
....: (0, 1, None, 'c'),
....: (1, 1, None, 'c')],
....: initial_states=[0],
....: final_states=[0, 1],
....: determine_alphabets=True)
sage: t2 = Transducer([('A', 'A', None, 'b'),
....: ('A', 'B', 'a', 'c'),
....: ('B', 'B', 'a', 'c')],
....: initial_states=['A'],
....: final_states=['A', 'B'],
....: determine_alphabets=True)
sage: t2.intersection(t1)
Traceback (most recent call last):
...
ValueError: An epsilon-transition (with empty input or output)
was found.
REFERENCES:
[BaWo2012] Javier Baliosian and Dina Wonsever, Finite State Transducers, chapter in Handbook of Finite State Based Models and Applications, edited by Jiacun Wang, Chapman and Hall/CRC, 2012.
process(*args, **kwargs)
Warning
The default output of this method is scheduled to change. This docstring describes the new default behaviour, which can already be achieved by setting FSMOldProcessOutput to False.
Returns whether the transducer accepts the input, the state where the computation stops and which output is generated.
INPUT:
• input_tape – The input tape can be a list with entries from the input alphabet.
• initial_state – (default: None) The state in which to start. If this parameter is None and there is only one initial state in the machine, then this state is taken.
• full_output – (default: True) If set, then the full output is given, otherwise only the generated output (the third entry below only). If the input is not accepted, a ValueError is raised.
OUTPUT:
The full output is a triple, where
• the first entry is True if the input string is accepted,
• the second gives the reached state after processing the input tape (This is a state with label None if the input could not be processed, i.e., when at one point no transition to go could be found.), and
• the third gives a list of the output labels used during processing.
Note that if the transducer is not deterministic, only one possible path is taken. This means that in this case the output can be wrong.
By setting FSMOldProcessOutput to False the new desired output is produced.
EXAMPLES:
sage: sage.combinat.finite_state_machine.FSMOldProcessOutput = False # activate new output behavior
sage: from sage.combinat.finite_state_machine import FSMState
sage: A = FSMState('A', is_initial = True, is_final = True)
sage: binary_inverter = Transducer({A:[(A, 0, 1), (A, 1, 0)]})
sage: binary_inverter.process([0, 1, 0, 0, 1, 1])
(True, 'A', [1, 0, 1, 1, 0, 0])
If we are only interested in the output, we can also use:
sage: binary_inverter([0, 1, 0, 0, 1, 1])
[1, 0, 1, 1, 0, 0]
The following transducer transforms $$0^n 1$$ to $$1^n 2$$:
sage: T = Transducer([(0, 0, 0, 1), (0, 1, 1, 2)])
sage: T.state(0).is_initial = True
sage: T.state(1).is_final = True
We can see the different possibilities of the output by:
sage: [T.process(w) for w in [[1], [0, 1], [0, 0, 1], [0, 1, 1],
....: [0], [0, 0], [2, 0], [0, 1, 2]]]
[(True, 1, [2]), (True, 1, [1, 2]),
(True, 1, [1, 1, 2]), (False, None, None),
(False, 0, [1]), (False, 0, [1, 1]),
(False, None, None), (False, None, None)]
If we just want a condensed output, we use:
sage: [T.process(w, full_output=False)
....: for w in [[1], [0, 1], [0, 0, 1]]]
[[2], [1, 2], [1, 1, 2]]
sage: T.process([0, 1, 2], full_output=False)
Traceback (most recent call last):
...
ValueError: Invalid input sequence.
It is equivalent to:
sage: [T(w) for w in [[1], [0, 1], [0, 0, 1]]]
[[2], [1, 2], [1, 1, 2]]
sage: T([0, 1, 2])
Traceback (most recent call last):
...
ValueError: Invalid input sequence.
simplification()
Returns a simplified transducer.
INPUT:
Nothing.
OUTPUT:
A new transducer.
This function simplifies a transducer by Moore’s algorithm, first moving common output labels of transitions leaving a state to output labels of transitions entering the state (cf. prepone_output()).
The resulting transducer implements the same function as the original transducer.
EXAMPLES:
sage: fsm = Transducer([("A", "B", 0, 1), ("A", "B", 1, 0),
....: ("B", "C", 0, 0), ("B", "C", 1, 1),
....: ("C", "D", 0, 1), ("C", "D", 1, 0),
....: ("D", "A", 0, 0), ("D", "A", 1, 1)])
sage: fsms = fsm.simplification()
sage: fsms
Transducer with 2 states
sage: fsms.transitions()
[Transition from ('A', 'C')
to ('B', 'D'): 0|1,
Transition from ('A', 'C')
to ('B', 'D'): 1|0,
Transition from ('B', 'D')
to ('A', 'C'): 0|0,
Transition from ('B', 'D')
to ('A', 'C'): 1|1]
sage: fsms.relabeled().transitions()
[Transition from 0 to 1: 0|1,
Transition from 0 to 1: 1|0,
Transition from 1 to 0: 0|0,
Transition from 1 to 0: 1|1]
sage: fsm = Transducer([("A", "A", 0, 0),
....: ("A", "B", 1, 1),
....: ("A", "C", 1, -1),
....: ("B", "A", 2, 0),
....: ("C", "A", 2, 0)])
sage: fsm_simplified = fsm.simplification()
sage: fsm_simplified
Transducer with 2 states
sage: fsm_simplified.transitions()
[Transition from ('A',) to ('A',): 0|0,
Transition from ('A',) to ('B', 'C'): 1|1,0,
Transition from ('A',) to ('B', 'C'): 1|-1,0,
Transition from ('B', 'C') to ('A',): 2|-]
sage: from sage.combinat.finite_state_machine import duplicate_transition_add_input
sage: T = Transducer([('A', 'A', 1/2, 0),
....: ('A', 'B', 1/4, 1),
....: ('A', 'C', 1/4, 1),
....: ('B', 'A', 1, 0),
....: ('C', 'A', 1, 0)],
....: initial_states=[0],
....: final_states=['A', 'B', 'C'],
....: on_duplicate_transition=duplicate_transition_add_input)
sage: sorted(T.simplification().transitions())
[Transition from ('A',) to ('A',): 1/2|0,
Transition from ('A',) to ('B', 'C'): 1/2|1,
Transition from ('B', 'C') to ('A',): 1|0]
Illustrating the use of colors in order to avoid identification of states:
sage: T = Transducer( [[0,0,0,0], [0,1,1,1],
....: [1,0,0,0], [1,1,1,1]],
....: initial_states=[0],
....: final_states=[0,1])
sage: sorted(T.simplification().transitions())
[Transition from (0, 1) to (0, 1): 0|0,
Transition from (0, 1) to (0, 1): 1|1]
sage: T.state(0).color = 0
sage: T.state(1).color = 1
sage: sorted(T.simplification().transitions())
[Transition from (0,) to (0,): 0|0,
Transition from (0,) to (1,): 1|1,
Transition from (1,) to (0,): 0|0,
Transition from (1,) to (1,): 1|1]
sage.combinat.finite_state_machine.duplicate_transition_add_input(old_transition, new_transition)
Alternative function for handling duplicate transitions in finite state machines. This implementation adds the input label of the new transition to the input label of the old transition. This is intended for the case where a Markov chain is modelled by a finite state machine using the input labels as transition probabilities.
See the documentation of the on_duplicate_transition parameter of FiniteStateMachine.
INPUT:
• old_transition – A transition in a finite state machine.
• new_transition – A transition, identical to old_transition, which is to be inserted into the finite state machine.
OUTPUT:
A transition whose input weight is the sum of the input weights of old_transition and new_transition.
EXAMPLES:
sage: from sage.combinat.finite_state_machine import duplicate_transition_add_input
sage: from sage.combinat.finite_state_machine import FSMTransition
sage: duplicate_transition_add_input(FSMTransition('a', 'a', 1/2),
....: FSMTransition('a', 'a', 1/2))
Transition from 'a' to 'a': 1|-
Input labels must be lists of length 1:
sage: duplicate_transition_add_input(FSMTransition('a', 'a', [1, 1]),
....: FSMTransition('a', 'a', [1, 1]))
Traceback (most recent call last):
...
TypeError: Trying to use duplicate_transition_add_input on
"Transition from 'a' to 'a': 1,1|-" and
"Transition from 'a' to 'a': 1,1|-",
but input words are assumed to be lists of length 1
sage.combinat.finite_state_machine.duplicate_transition_ignore(old_transition, new_transition)
Default function for handling duplicate transitions in finite state machines. This implementation ignores the occurrence.
See the documentation of the on_duplicate_transition parameter of FiniteStateMachine.
INPUT:
• old_transition – A transition in a finite state machine.
• new_transition – A transition, identical to old_transition, which is to be inserted into the finite state machine.
OUTPUT:
The same transition, unchanged.
EXAMPLES:
sage: from sage.combinat.finite_state_machine import duplicate_transition_ignore
sage: from sage.combinat.finite_state_machine import FSMTransition
sage: duplicate_transition_ignore(FSMTransition(0, 0, 1),
....: FSMTransition(0, 0, 1))
Transition from 0 to 0: 1|-
sage.combinat.finite_state_machine.duplicate_transition_raise_error(old_transition, new_transition)
Alternative function for handling duplicate transitions in finite state machines. This implementation raises a ValueError.
See the documentation of the on_duplicate_transition parameter of FiniteStateMachine.
INPUT:
• old_transition – A transition in a finite state machine.
• new_transition – A transition, identical to old_transition, which is to be inserted into the finite state machine.
OUTPUT:
Nothing. A ValueError is raised.
EXAMPLES:
sage: from sage.combinat.finite_state_machine import duplicate_transition_raise_error
sage: from sage.combinat.finite_state_machine import FSMTransition
sage: duplicate_transition_raise_error(FSMTransition(0, 0, 1),
....: FSMTransition(0, 0, 1))
Traceback (most recent call last):
...
ValueError: Attempting to re-insert transition Transition from 0 to 0: 1|-
sage.combinat.finite_state_machine.full_group_by(l, key=lambda x: x)
Group iterable l by values of key.
INPUT:
• iterable l
• key function key
OUTPUT:
A list of pairs (k, elements) such that key(e)=k for all e in elements.
This is similar to itertools.groupby except that lists are returned instead of iterables and no prior sorting is required.
We do not require
• that the keys are sortable (in contrast to the approach via sorted and itertools.groupby) and
• that the keys are hashable (in contrast to the implementation proposed in http://stackoverflow.com/a/15250161).
However, it is required
• that distinct keys have distinct str-representations.
The implementation is inspired by http://stackoverflow.com/a/15250161, but non-hashable keys are allowed.
EXAMPLES:
sage: from sage.combinat.finite_state_machine import full_group_by
sage: t = [2/x, 1/x, 2/x]
sage: r = full_group_by([0,1,2], key=lambda i:t[i])
sage: sorted(r, key=lambda p:p[1])
[(2/x, [0, 2]), (1/x, [1])]
sage: from itertools import groupby
sage: for k, elements in groupby(sorted([0,1,2],
....: key=lambda i:t[i]),
....: key=lambda i:t[i]):
....: print k, list(elements)
2/x [0]
1/x [1]
2/x [2]
Note that the behavior is different from itertools.groupby because neither $$1/x<2/x$$ nor $$2/x<1/x$$ holds.
Here, the result r has been sorted in order to guarantee a consistent order for the doctest suite.
sage.combinat.finite_state_machine.is_Automaton(FSM)
Tests whether or not FSM inherits from Automaton.
TESTS:
sage: from sage.combinat.finite_state_machine import is_FiniteStateMachine, is_Automaton
sage: is_Automaton(FiniteStateMachine())
False
sage: is_Automaton(Automaton())
True
sage: is_FiniteStateMachine(Automaton())
True
sage.combinat.finite_state_machine.is_FSMProcessIterator(PI)
Tests whether or not PI inherits from FSMProcessIterator.
TESTS:
sage: from sage.combinat.finite_state_machine import is_FSMProcessIterator, FSMProcessIterator
sage: is_FSMProcessIterator(FSMProcessIterator(FiniteStateMachine([[0, 0, 0, 0]], initial_states=[0])))
True
sage.combinat.finite_state_machine.is_FSMState(S)
Tests whether or not S inherits from FSMState.
TESTS:
sage: from sage.combinat.finite_state_machine import is_FSMState, FSMState
sage: is_FSMState(FSMState('A'))
True
sage.combinat.finite_state_machine.is_FSMTransition(T)
Tests whether or not T inherits from FSMTransition.
TESTS:
sage: from sage.combinat.finite_state_machine import is_FSMTransition, FSMTransition
sage: is_FSMTransition(FSMTransition('A', 'B'))
True
sage.combinat.finite_state_machine.is_FiniteStateMachine(FSM)
Tests whether or not FSM inherits from FiniteStateMachine.
TESTS:
sage: from sage.combinat.finite_state_machine import is_FiniteStateMachine
sage: is_FiniteStateMachine(FiniteStateMachine())
True
sage: is_FiniteStateMachine(Automaton())
True
sage: is_FiniteStateMachine(Transducer())
True
sage.combinat.finite_state_machine.is_Transducer(FSM)
Tests whether or not FSM inherits from Transducer.
TESTS:
sage: from sage.combinat.finite_state_machine import is_FiniteStateMachine, is_Transducer
sage: is_Transducer(FiniteStateMachine())
False
sage: is_Transducer(Transducer())
True
sage: is_FiniteStateMachine(Transducer())
True
sage.combinat.finite_state_machine.setup_latex_preamble()
This function adds the package tikz with support for automata to the preamble of LaTeX so that the finite state machines can be drawn nicely.
INPUT:
Nothing.
OUTPUT:
Nothing.
TESTS:
sage: from sage.combinat.finite_state_machine import setup_latex_preamble
sage: setup_latex_preamble()
https://physics.stackexchange.com/questions/679939/can-sound-travel-through-light
Can sound travel through light?
According to the de Broglie hypothesis, light has a dual nature, i.e. it behaves both as a particle and as a wave.
And sound needs a medium to travel. So can we say that sound could travel through light, since light has a particle nature?
• I don't think it's accurate that the dual nature of light is described by the de Broglie hypothesis. Nov 30, 2021 at 14:50
• For more on de Broglie, see The more general uncertainty principle, beyond quantum Nov 30, 2021 at 14:59
• "Light has dual nature"~ the hypothesis was firat described by Newton latter Maxwell(if i remember the history accurately). Nov 30, 2021 at 16:47
• Newton was a proponent of the corpuscular theory of light, not the wave theory. It was Leibniz who was in favour of the latter. JC Maxwell proved that light is an EMW and Hertz later confined it. Dec 1, 2021 at 0:56
This paper (also on the arxiv) derives corrections to thermodynamical expressions for a photon gas due to this photon-photon scattering, and demonstrates that the speed of sound in a photon gas would be exactly equal to $$v_s = c/\sqrt 3$$ (which is, incidentally, the speed of sound in an ideal ultra-relativistic fluid). This paper derives the same result in a less technical way.
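As a plausibility check (our addition, not taken from the cited papers), the value $$c/\sqrt 3$$ already follows from the textbook equation of state of radiation, $$p = \varepsilon/3$$, where $$\varepsilon$$ is the energy density:
$$v_s = c\sqrt{\frac{\partial p}{\partial \varepsilon}} = \frac{c}{\sqrt{3}} \approx 0.577\,c.$$
The cited papers go further by deriving the photon-photon scattering corrections and showing that the speed of sound in a photon gas is exactly this value.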
http://openstudy.com/updates/4fbdbdaae4b0c25bf8fc1072
What is the total surface area of this object? http://i49.tinypic.com/dg2yy9.png
surface area of a cylinder = $$2\pi r^2 + 2\pi rh$$ where r = radius and h = height
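The linked image is no longer available, so as an illustration take hypothetical dimensions $$r = 3$$ and $$h = 5$$ (in any length unit):
$$2\pi r^2 + 2\pi r h = 2\pi(3)^2 + 2\pi(3)(5) = 18\pi + 30\pi = 48\pi \approx 150.8 \ \text{square units}.$$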
https://academic.oup.com/cercor/article/24/6/1529/296228/Efficiency-of-a-SmallWorld-Brain-Network-Depends
## Abstract
It has been revealed that spontaneous coherent brain activity during rest, measured by functional magnetic resonance imaging (fMRI), self-organizes a “small-world” network by which the human brain could sustain higher communication efficiency across global brain regions with lower energy consumption. However, the state-dependent dynamics of the network, especially the dependency on the conscious state, remain poorly understood. In this study, we conducted simultaneous electroencephalographic recording with resting-state fMRI to explore whether functional network organization reflects differences in the conscious state between an awake state and stage 1 sleep. We then evaluated whole-brain functional network properties with fine spatial resolution (3781 regions of interest) using graph theoretical analysis. We found that the efficiency of the functional network evaluated by path length decreased not only at the global level, but also in several specific regions depending on the conscious state. Furthermore, almost two-thirds of nodes that showed a significant decrease in nodal efficiency during stage 1 sleep were categorized as the default-mode network. These results suggest that brain functional network organizations are dynamically optimized for a higher level of information integration in the fully conscious awake state, and that the default-mode network plays a pivotal role in information integration for maintaining conscious awareness.
## Introduction
Spontaneous coherent activity among brain regions measured by functional magnetic resonance imaging (fMRI) is known as functional connectivity (Fox and Raichle 2007). Recent studies have proposed that the functional connectivity during resting-state fMRI (rs-fMRI) optimizes the brain network to maintain higher cognitive functions (Achard and Bullmore 2007; van den Heuvel, Stam, et al. 2009). Furthermore, graph theoretical analysis of rs-fMRI has revealed that the functional network features “small-world” organization (Bullmore and Sporns 2009; Rubinov and Sporns 2010; Bullmore and Bassett 2011). An essential element of a small world is a short path length (Watts and Strogatz 1998) by which signals can reach other regions in fewer processing steps, and thus the network can underpin higher-level cognitive functions requiring the integration of information from specialized and divergent brain regions (Rubinov and Sporns 2010). In fact, several studies have reported that the functional network organization changes with aging (Achard and Bullmore 2007), neuropsychiatric disorders (Liu et al. 2008; Supekar et al. 2008), and intellectual performance (van den Heuvel, Stam, et al. 2009). However, these functional changes may be caused by alterations in anatomical network organization, but not in functional organization, because functional connectivity is fundamentally founded on anatomical connectivity (Honey et al. 2009; van den Heuvel, Mandl, et al. 2009; Sporns 2011).
A critical approach to settle this issue would be to examine whether changes in functional network organization depend on the transition of brain state over a short period during which the anatomical network organization cannot change. It is suitable for this purpose to measure rs-fMRI with simultaneous electroencephalographic (EEG) recordings and to investigate changes in functional network organization associated with the alteration of sleep stage. EEG is most sensitive to the change from the awake state to stage 1 sleep and conscious awareness. The latter is essential for integrative cognitive processing and considerably deteriorates during sleep (Tononi 2004; Dehaene and Changeux 2011). If functional network organization relates to higher cognitive functions, the efficiency of the functional network evaluated by the average path length would decrease during stage 1 sleep.
Another important issue is whether the efficiency decreases not only at the global level, but also in the key brain regions associated with consciousness during stage 1 sleep. So far, increasing evidence has suggested that the default-mode network (DMN) could play a key role in generating consciousness. For example, the DMN contains the most centrally connected regions in the whole-brain network (Buckner et al. 2009; Cole et al. 2010) and thus can be essential for integrating brain activity like consciousness. Recent rs-fMRI studies have shown that the connectivity inside the DMN decreased in altered states of consciousness, such as sleep (Horovitz et al. 2009; Sämann et al. 2011), general anesthesia (Boveroux et al. 2010), and clinical consciousness impairment (Vanhaudenhuyse et al. 2010). However, the interaction between the DMN and the whole-brain network was not taken into consideration in these previous studies. Thus, if the communication efficiency evaluated by path length decreases specifically in the DMN during stage 1 sleep, it would provide evidence that the DMN serves to integrate diverse brain regions for generating consciousness.
To the best of our knowledge, there are few neuroimaging studies focusing on brain activities during stage 1 sleep. A positron emission tomography study (Kjaer, Law, et al. 2002) showed that there is a relative regional cerebral blood flow increase in the occipital lobes and a relative decrease in the cerebellum, the posterior parietal cortex, the premotor cortex, and the thalamus during stage 1 sleep. An fMRI study (Picchioni et al. 2008) has shown that there is an activity increase in the parts of the DMN during early stage 1 sleep and an activity decrease in the hippocampus during late stage 1 sleep. As the most comparable study to ours, Spoormaker et al. (2010) reported the results of a graph theoretical analysis of rs-fMRI during non-rapid eye movement (NREM) sleep including stage 1 sleep. They observed that the main effect of light sleep (stages 1 and 2) was not on average path length (i.e. the efficiency), but on local clustering. In graph theoretical analysis, a whole-brain network is modeled as a graph consisting of a set of nodes and edges between nodes. The definition of nodes is arbitrary, and Spoormaker et al. (2010) defined nodes as 90 large parcels from the Automated Anatomical Labeling (AAL) brain atlas (Tzourio-Mazoyer et al. 2002). Although this is a common approach in rs-fMRI graph theoretical analysis, AAL-based partition is too coarse to distinguish most functional systems including the DMN (Power et al. 2011). Thus, we adopted a much finer brain partition as nodes (3781 nodes) to overcome this limitation, because we had a special interest in the system-specific effects (especially in the DMN) upon the efficiency.
## Materials and Methods
### Subjects
Eighteen healthy young adults (age, 25.1 ± 5.4 [mean ± SD]; gender: 14 males and 4 females) were recruited. None had any sleep, medical, or psychiatric disorders. None were employed in an occupation requiring “night shift” hours. None complained of any sleep disturbances or excessive daytime sleepiness. Subjects were not required to undergo experimental sleep deprivation. Data from 10 subjects were retained for final analysis as described below. All subjects gave informed consent and were compensated for their participation. The study was approved by the Ethics Committee of Kyushu University and the National Institute of Information and Communications Technology.
### Resting-State fMRI Recording
The fMRI measurements were performed using a 3-T scanner (Trio, Siemens Healthcare, Germany). Whole-brain T2*-weighted images were acquired using an echo planar imaging (EPI) sequence (repetition time = 2500 ms; echo time = 30 ms; flip angle = 79°; 42 axial slices; 3.0 mm isotropic voxels with no gap). During functional runs, subjects were required to keep alert with their eyes closed for 10 min. To avoid the effect of participants employing specific strategies to maintain alertness (e.g. reminiscing or counting scan number), participants were instructed not to think about anything in particular as much as possible. For each subject, 2–5 runs were recorded during daytime, resulting in a total of 76 rs-fMRI runs.
### EEG Recording
EEG data were acquired simultaneously with rs-fMRI using an MRI-compatible amplifier (BrainAmp MR plus, Brain Products GmbH, Munich, Germany) and an electrode cap (EASYCAP GmbH, Herrsching-Breitbrunn, Germany) with Ag/AgCl ring electrodes. A reference electrode was placed at the middle point between Fz and Cz. Either 32, or 14, electrodes were placed according to the international 10-20 system. A raw record was sampled at a rate of 5 kHz (bandpass filtered between 0.016 and 250 Hz) using the specialized recording software (Brain Vision Recorder, Brain Products GmbH). Brain Vision Analyzer (Brain Products GmbH) was used for the offline correction of scanning and ballistocardiogram artifacts. After down-sampling to 200 Hz, the reference channel was digitally replaced with an average of TP9 and TP10 electrodes, located behind the ears.
### Epoch Selection
We selected 10 subjects (age, 25 ± 6.1 [mean ± SD]; gender: 8 males and 2 females) who had both awake and stage 1 sleep epochs defined by EEG. Each epoch was 5 min long (120 scans). EEG recordings were visually scored according to the standard criteria (Rechtschaffen and Kales 1968) for 30 s intervals. A continuous 5-min recording was defined as an awake epoch if all 30-s segments were at sleep stage 0 (Fig. 1A,B), and was defined as a stage 1 sleep epoch if at least 4 min were at sleep stage 1 (Fig. 1C,D).
Figure 1.
Typical EEG power spectra and waveforms after removing artifacts. (A) and (C) show the power spectrogram of the entire epoch (300 s) from the O1 electrode of the same subject; (B) and (D) show 5-s segments of EEG waveforms between the 2 dotted lines in the spectrograms. (A and B) Awake epoch characterized by continuous alpha waves. (C and D) Stage 1 sleep epoch characterized by low voltage theta waves with intermixed low voltage alpha waves.
### Resting-State fMRI Preprocessing
The following preprocessing of fMRI data was performed by SPM5 (Wellcome Department of Imaging Neuroscience, http://www.fil.ion.ucl.ac.uk/spm/). After correcting for differences in slice timing within each image volume, each scan was aligned with the first scan, normalized according to the Montreal Neurological Institute (MNI) template, and resliced to 2-mm cubic voxels. Then, commonly applied preprocessing steps for rs-fMRI were performed (Fox et al. 2005). Data were temporally bandpass filtered (0.009Hz < f < 0.08 Hz), and the following sources of spurious variance along with their temporal derivatives were then removed through linear regression: (1) 6 parameters obtained by rigid-body correction of head motion, (2) the whole-brain signal averaged over gray matter, white matter, and ventricular regions, (3) signal from a ventricular region, and (4) signal from regions centered in the deep white matter. This regression procedure removes fluctuations unlikely to be involved in specific regional correlations (Van Dijk et al. 2010).
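As an illustration of this regression step, here is a minimal numpy sketch (not the authors' actual pipeline; the function and variable names are ours) that removes a set of nuisance time courses and their temporal derivatives from the ROI time series by ordinary least squares:

```python
import numpy as np

def regress_out_nuisance(ts, nuisance):
    """Remove nuisance regressors and their temporal derivatives.

    ts       -- (T, n) array of bandpass-filtered ROI time courses
    nuisance -- (T, p) array of nuisance signals (motion parameters,
                whole-brain, ventricular and white-matter signals)
    Returns the residual time courses used for connectivity analysis.
    """
    # Design matrix: regressors, their temporal derivatives, intercept.
    X = np.column_stack([nuisance,
                         np.gradient(nuisance, axis=0),
                         np.ones(ts.shape[0])])
    beta, *_ = np.linalg.lstsq(X, ts, rcond=None)
    return ts - X @ beta
```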
### Brain Network Construction
In graph theory, a complex system is modeled as a “graph,” which is defined as a set of “nodes” linked by “edges.” Then, important properties of the complex system are described by quantifying the topologies of the respective graphs (Boccalettia et al. 2006). In rs-fMRI analysis, nodes represent a set of regions of interest (ROIs) or voxels, and edges represent the functional connectivity between the nodes (Bullmore and Sporns 2009; Rubinov and Sporns 2010; Bullmore and Bassett 2011). In this study, to enhance spatial specificity we adopted a much larger number of partitions than was adopted in previous studies (Achard and Bullmore 2007; Spoormaker et al. 2010). Figure 2 schematically illustrates the graph theoretical analysis of rs-fMRI. As a first step, according to Meunier et al. (2009), we divided the AAL gray matter template (Tzourio-Mazoyer et al. 2002) into 3781 nodes covering the whole cortical and subcortical gray matter (Fig. 2A). Each node was a 6 mm × 6 mm × 6 mm cubic ROI containing 27 voxels (for more detail, see Meunier et al. 2009). As a second step, node time courses of fMRI signals for each epoch were extracted from the preprocessed EPI images by averaging the voxel time courses within the ROIs for each participant (Fig. 2B). Next, to estimate the functional connectivity, Pearson's correlation coefficients between all node pairs were calculated, which resulted in {3781 × 3781} correlation matrices (Fig. 2C). Following the practice undertaken by a majority of prior studies (for a review, see Bullmore and Bassett 2011), we then applied thresholds to the correlation matrices and eliminated weak correlations to evaluate the topological properties of functional brain networks using graph theoretical analysis. According to the recommendation of Bullmore and Bassett (2011), we set thresholds with reference to the connection density K, the ratio of the number of edges to all possible node pairs. For example, when we set a threshold with reference to K = 0.1, the strongest 10% of correlations were retained. Because there is no gold standard for a threshold, we applied a range of thresholds (0.01 ≤ K ≤ 0.2, 0.01 increments) to each correlation matrix and converted each to an adjacency matrix A, where the ai,j elements of A are 1, if the correlation coefficient ri,j is greater than a given threshold, or 0, if it is not (Fig. 2D). Each adjacency matrix defines a binary graph G, which represents a model of the whole-brain functional network (Fig. 2E). Finally, graphs of different connection densities were produced and the following parameters (see below) were calculated as a function of K (Fig. 2F).
Figure 2.
Procedure for graph theoretical analysis on rs-fMRI (see Materials and Methods). (A) Initially, the fine partitions of cortical and subcortical gray matter (3781 ROIs) are generated. (B) The mean time course of each ROI is extracted and bandpass filtered (0.009 Hz <f<0.08 Hz). Then, the sources of spurious variance along with their temporal derivatives are removed. (C) Pearson's correlation coefficients between all pairs of ROI time courses are calculated, which result in a 3781 × 3781 correlation matrix representing functional connectivity. (D) These correlation matrices are thresholded at different connection densities K (0.03 ≤ K ≤ 0.12) and binarized to generate adjacency matrices. (E) Each adjacency matrix defines a binary graph G, a model of the whole-brain functional network. (F) Important topological parameters such as path length, clustering coefficient, and modularity are then calculated from G and compared between the awake state and stage 1 sleep.
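The density-based thresholding step (Fig. 2C,D) can be sketched in a few lines of numpy (our illustration; the function name is an assumption, not from the paper):

```python
import numpy as np

def adjacency_at_density(R, K):
    """Binarize a symmetric correlation matrix R at connection density K.

    Keeps the strongest fraction K of all n*(n-1)/2 node pairs and
    returns the 0/1 adjacency matrix A defining the binary graph G.
    """
    iu = np.triu_indices(R.shape[0], k=1)      # all distinct node pairs
    threshold = np.quantile(R[iu], 1.0 - K)    # e.g. K = 0.1 keeps top 10%
    A = np.zeros_like(R, dtype=int)
    A[iu] = R[iu] > threshold
    return A + A.T                             # symmetric adjacency matrix
```

Sweeping K from 0.01 to 0.2 in steps of 0.01 and recomputing the graph parameters at each density reproduces the procedure described above.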
### Network Analysis
To examine the differences in functional network organization between the awake and stage 1 sleep epochs, we computed the following parameters with the Brain Connectivity Toolbox (http://www.brain-connectivity-toolbox.net; Rubinov and Sporns 2010).
#### Small-World Organization
The small-world organization of a network can be quantified by 2 key parameters: the characteristic path length L and the clustering coefficient C (Watts and Strogatz 1998). L is the average of the shortest path lengths between all pairs of nodes. Path length represents the number of processing steps along the routes of information transfer among brain regions (Rubinov and Sporns 2010). Since a small number of processing steps favors rapid and accurate communication (Kaiser and Hilgetag 2006), a lower L indicates a higher level of communication efficiency across global brain regions. To handle possibly infinite path lengths between disconnected nodes, we calculated L as the harmonic mean of the minimum path lengths (Latora and Marchiori 2001; Hayasaka and Laurienti 2010). The other parameter, C, is the degree of interconnectedness in local networks consisting of the direct neighbors of each node. In brain networks, C is considered to be associated with locally specialized processing, fault tolerance, and economic pressure for minimal wiring cost (Kaiser and Hilgetag 2006; Rubinov and Sporns 2010; Bullmore and Bassett 2011). To avoid the influence of other network characteristics (Rubinov and Sporns 2010), normalized L and C, "lambda" and "gamma," respectively, were calculated as ratios to the values of a randomly rewired null model (Maslov and Sneppen 2002). Small-worldness "sigma" (Humphries et al. 2006) was calculated as the ratio of gamma to lambda. Compared with a randomly rewired network, small-world networks are known to have a similar L and a higher C, resulting in lambda ≈ 1, gamma > 1, and sigma = (gamma/lambda) > 1:
$$L = \displaystyle{{n(n - 1)} \over {\sum\limits_{i \ne j \in G} {1/d_{i,j} } }},\quad \hbox{lambda} = \displaystyle{L \over {L_{{\rm random}} }},$$
$$C = \displaystyle{1 \over n}\sum\limits_{i \in G} {\displaystyle{{2e_i } \over {k_i (k_i - 1)}}} ,\quad \hbox{gamma} = \displaystyle{C \over {C_{{\rm random}} }},$$
where n is the number of nodes, $d_{i,j}$ is the shortest path length between nodes i and j, $k_i$ is the number of edges connected to node i, $e_i$ is the number of edges between the neighbors of node i, and $L_{\rm random}$ and $C_{\rm random}$ are the averages of L and C calculated from 20 randomly rewired null models (Maslov and Sneppen 2002).
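For readers who want to experiment, a rough Python sketch of these quantities using networkx follows; the degree-preserving rewiring follows Maslov and Sneppen (2002), and the swap counts are illustrative choices, not the study's settings:

```python
# Illustrative sketch of L, C, lambda, gamma, and sigma with networkx;
# the harmonic-mean path length is the reciprocal of global efficiency.
import networkx as nx

def harmonic_path_length(G):
    return 1.0 / nx.global_efficiency(G)   # finite even with disconnected pairs

def small_world_parameters(G, n_null=20, seed=0):
    L = harmonic_path_length(G)
    C = nx.average_clustering(G)
    L_rand = C_rand = 0.0
    for i in range(n_null):
        R = G.copy()
        # Degree-preserving rewiring (Maslov-Sneppen null model)
        nx.double_edge_swap(R, nswap=5 * R.number_of_edges(),
                            max_tries=100 * R.number_of_edges(), seed=seed + i)
        L_rand += harmonic_path_length(R) / n_null
        C_rand += nx.average_clustering(R) / n_null
    lam, gam = L / L_rand, C / C_rand
    return {"L": L, "C": C, "lambda": lam, "gamma": gam, "sigma": gam / lam}
```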
#### Modular Organization
Another feature of whole-brain functional networks is their modular organization (Bullmore and Sporns 2009; Meunier et al. 2010; Rubinov and Sporns 2010; Bullmore and Bassett 2011). A module is topologically defined as a group of highly interconnected nodes with relatively sparse connections to nodes in other modules (Fortunato 2010). Modular organization suggests that a functional network is suitable for specialized processing (Meunier et al. 2010; Rubinov and Sporns 2010), and the degree of modular organization can be estimated by a quantitative parameter called modularity Q (Newman 2004, 2006). Here, we applied the heuristic modularity maximizing algorithm proposed by Blondel et al. (2008). This method outputs an optimal partition (a set of modules) and the optimized Q.
$$Q = \displaystyle{1 \over {2m}}\sum\limits_{i \ne j \in G} { \left(a_{i,j} - \displaystyle{{k_i k_j } \over {2m}} \right)} \delta (M_i ,M_j ),$$
where m is the number of edges, $M_i$ is the module containing node i, and $\delta(M_i, M_j) = 1$ if $M_i = M_j$ and 0 otherwise. Q measures the quality of the network partitioning.
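The study used the Brain Connectivity Toolbox; an equivalent sketch with the python-louvain package (which implements the Blondel et al. 2008 algorithm) might look as follows, where the package choice is ours, not the authors':

```python
# Sketch: Louvain community detection and modularity Q (python-louvain).
import community as community_louvain  # pip install python-louvain

def louvain_modularity(G):
    partition = community_louvain.best_partition(G)  # dict: node -> module id
    Q = community_louvain.modularity(partition, G)   # optimized modularity
    return partition, Q
```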
#### Physical Connection Distance
In brain networks, long-range connections are thought to be critical for generating short path lengths and high communication efficiency across global brain regions (Kaiser and Hilgetag 2006). Thus, we also calculated the physical connection distance D as the average of the Euclidean distance between nodes connected by edges as follows:
$$D = \displaystyle{1 \over m}\sum\limits_{i \ne j \in G} {w_{i,j} } ,$$
where m is the number of edges, and $w_{i,j}$ is the Euclidean distance between the center coordinates of nodes i and j if $a_{i,j} = 1$, and 0 if $a_{i,j} = 0$.
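A direct translation of this definition into Python (a sketch assuming an adjacency matrix A and an n × 3 array of node center coordinates) is short:

```python
# Sketch: mean Euclidean distance over the edges of adjacency matrix A.
import numpy as np

def connection_distance(A, coords):
    i, j = np.triu_indices_from(A, k=1)      # unique node pairs
    edges = A[i, j] > 0                      # keep only connected pairs
    return np.linalg.norm(coords[i[edges]] - coords[j[edges]], axis=1).mean()
```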
#### Nodal Efficiency
We also estimated the communication efficiency of individual brain regions. Nodal efficiency $E_i$ is the mean of the inverse shortest path length from node i to all other nodes and is defined as follows (Achard and Bullmore 2007):
$$E_i = \displaystyle{1 \over {n - 1}}\sum\limits_{\,j \ne i \in G} {\displaystyle{1 \over {d_{i,j} }}},$$
Since $E_i$ is negatively correlated with the number of processing steps from node i to all other nodes, a high $E_i$ indicates that a brain region communicates efficiently with the rest of the brain.
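A straightforward (if slow) sketch computes $E_i$ with breadth-first shortest paths; unreachable nodes contribute zero, consistent with the harmonic-mean convention above:

```python
# Sketch: nodal efficiency E_i on an unweighted graph with networkx.
import networkx as nx

def nodal_efficiency(G):
    n = G.number_of_nodes()
    E = {}
    for i in G:
        d = nx.single_source_shortest_path_length(G, i)
        # Unreachable nodes are absent from d, so they add 0 to the sum.
        E[i] = sum(1.0 / dij for j, dij in d.items() if j != i) / (n - 1)
    return E
```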
#### Group-Level Partition and Identification of DMN
To calculate nodal efficiency in the DMN, we identified this network by using a group-level partition. In accordance with He et al. (2009), we constructed a group-level graph based on the statistical significance of the correlation coefficients. The Fisher-transformed correlation matrices were averaged across the awake and stage 1 sleep epochs, and the significance of each correlation coefficient was tested with a 1-sample t-test, where the null hypothesis was a correlation coefficient of 0 at the group level. To correct for multiple comparisons, we applied the false-discovery rate (FDR), and correlations with FDR ≤ 0.05 and a positive t-value (positive correlation) were defined as edges. This resulted in a sparse group-level graph (K ≈ 0.023). Thereafter, this graph was parcellated into modules by using the modularity maximizing method described above.
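A sketch of this edge definition in Python follows; we assume scipy and statsmodels, and the Benjamini-Hochberg procedure ("fdr_bh") as the FDR method, which the text does not specify:

```python
# Sketch: group-level edges via a 1-sample t-test on Fisher-z correlations,
# FDR-corrected; z_stack has shape (n_participants, n_node_pairs).
import numpy as np
from scipy.stats import ttest_1samp
from statsmodels.stats.multitest import multipletests

def group_level_edges(z_stack, alpha=0.05):
    t, p = ttest_1samp(z_stack, popmean=0.0, axis=0)
    reject, _, _, _ = multipletests(p, alpha=alpha, method="fdr_bh")
    return reject & (t > 0)   # FDR-significant, positive correlations only
```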
### Statistical Analysis
We compared the above-described parameters over the nonrandom connection density range. As the threshold is relaxed and connection density increases, graph topology becomes increasingly random and less small-world or modular. Following the notion that a high-density random network is likely to be nonbiological (Lynall et al. 2010), we chose the nonrandom connection density range using 3 constraints: (1) at least 99% of nodes are connected, (2) small-worldness sigma > 1, and (3) modularity Q > 0.3. These criteria ensured that graphs retained small-world and modular topology. In practice, we empirically computed the number of connected nodes, sigma, and Q over the range of 0.01 ≤ K ≤ 0.2 (0.01 increments) to find the range over which all graphs met the criteria, which resulted in 0.03 ≤ K ≤ 0.12. Following Lynall et al. (2010), all parameters were averaged over this range.
Since the distributional properties of graph theoretical parameters are not well known (Bullmore and Bassett 2011), differences in the global network parameters (L, C, lambda, gamma, Q, and D) were examined by using a nonparametric permutation test adopted in other brain graph theoretical analysis studies (van den Heuvel et al. 2010; Fornito et al. 2011). We calculated the mean differences between the awake and stage 1 sleep epochs (observed difference). Then, in the permutation step, data from the awake epochs were exchanged with data from the same participants' stage 1 sleep epochs in all possible ways (2^10 = 1024), and mean differences were calculated for all of the permuted data. This resulted in a distribution under the null hypothesis that there were no differences between the 2 states (null distribution). The 2-tailed P-value was calculated as the proportion of permutations in which the absolute mean difference was greater than or equal to the observed absolute mean difference (Nichols and Holmes 2001).
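With 10 participants, the full permutation distribution is small enough to enumerate exactly; a minimal sketch (assuming paired per-participant parameter values) is:

```python
# Sketch: exact paired permutation test; swapping the two conditions
# within each participant gives 2**n relabelings (2**10 = 1024 here).
import itertools
import numpy as np

def paired_permutation_pvalue(awake, sleep):
    diffs = np.asarray(awake) - np.asarray(sleep)
    observed = abs(diffs.mean())
    null = [abs(np.mean(diffs * s))
            for s in itertools.product([1, -1], repeat=len(diffs))]
    return np.mean(np.array(null) >= observed)   # 2-tailed P-value
```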
For nodal efficiency, we applied a permutation test on suprathreshold cluster size (Nichols and Holmes 2001) to obtain high sensitivity. As the first step, t-statistics were calculated for each node, and spatially extended clusters were estimated at a primary cluster-forming threshold of t > 2.82 (corresponding to P < 0.01, 1-tailed). Then, a null distribution for multiple comparisons was obtained by permuting the data in all possible ways (2^10 = 1024), yielding a corrected P-value for cluster sizes. Finally, the suprathreshold clusters in the observed data with P < 0.025 (1-tailed) were defined as significant clusters. Essentially the same procedure was adopted in a recent rs-fMRI graph theoretical analysis (Fornito et al. 2011).
To investigate whether each module (e.g., DMN or executive control network [ECN]) showed a specific effect of sleep stage on efficiency, we computed how many nodes overlapped between each module (identified by the group-level partition) and the regions showing state-dependent effects on nodal efficiency. To test the statistical significance of these overlaps, we calculated the adjusted standardized residuals (Haberman 1973) for each overlap. Here, the null hypothesis was that the modules and the decrease in nodal efficiency were independent. The adjusted standardized residuals represent the degree of difference between the observed overlaps (numbers of nodes) and those expected if the null hypothesis were true. Under the null hypothesis, the adjusted standardized residuals follow a standard normal distribution; that is, they are Z-scores, from which P-values were calculated. For multiple comparisons, we applied the Bonferroni method. If P < 0.01, we considered that the module showed a specific effect of sleep stage on efficiency.
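Haberman's adjusted standardized residuals for a contingency table can be computed directly; the sketch below (our own, using numpy) follows the standard formula (observed - expected) / sqrt(expected · (1 - row proportion) · (1 - column proportion)):

```python
# Sketch: adjusted standardized residuals (Haberman 1973) for a crosstab.
import numpy as np

def adjusted_standardized_residuals(table):
    t = np.asarray(table, dtype=float)
    total = t.sum()
    row = t.sum(axis=1, keepdims=True)      # row totals
    col = t.sum(axis=0, keepdims=True)      # column totals
    expected = row @ col / total
    denom = np.sqrt(expected * (1 - row / total) * (1 - col / total))
    return (t - expected) / denom           # Z-scores under independence
```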
## Results
### State-Dependent Functional Network Organization at the Global Level
Figure 3 shows the effects of brain state (awake state and stage 1 sleep) on the parameters as a function of connection density. Table 1 summarizes the statistical results for these parameters averaged over the nonrandom connection density range of 0.03 ≤ K ≤ 0.12. The characteristic path length L and normalized L (lambda) significantly increased in stage 1 sleep compared with the awake state (P = 0.0059 for L and P = 0.0059 for lambda, permutation test; Fig. 3A). In contrast, state-dependent effects on the degree of clustering were not evident (Fig. 3B): although the clustering coefficient C was significantly increased in stage 1 sleep (P = 0.0039, permutation test), there was no significant difference when C was normalized by a null model (gamma; Table 1). The modular organization also showed no significant difference between the awake state and stage 1 sleep: there were no significant differences in modularity Q or the number of modules (Fig. 3C, Table 1). In contrast, the mean physical connection distance of edges D significantly decreased in stage 1 sleep compared with the awake state (P = 0.0019, permutation test; Fig. 3D).
Table 1
Differences in all parameters for network analysis between the awake state and stage 1 sleep
| Parameter | Awake, mean (SD) | Stage 1 sleep, mean (SD) | P-value | Effect |
| --- | --- | --- | --- | --- |
| Characteristic path length L | 2.035 (0.034) | 2.083 (0.047) | 0.0059* | Stage 1 sleep > awake |
| Normalized L (lambda) | 1.078 (0.015) | 1.100 (0.021) | 0.0059* | Stage 1 sleep > awake |
| Clustering coefficient C | 0.361 (0.032) | 0.394 (0.031) | 0.0039* | Stage 1 sleep > awake |
| Normalized C (gamma) | 3.326 (0.299) | 3.296 (0.503) | 0.8535 | |
| Modularity Q | 0.453 (0.038) | 0.457 (0.034) | 0.6465 | |
| Number of modules | 4.550 (0.560) | 4.900 (0.488) | 0.0723 | |
| Connection distance D (mm) | 71.181 (3.010) | 67.124 (2.338) | 0.0019* | Awake > stage 1 sleep |
Note: Values are the mean over the nonrandom connection density range of 0.03 ≤ K ≤ 0.12 (see Materials and Methods).
*P < 0.05, permutation test.
Figure 3.
Network parameters (y axis) in the awake state and stage 1 sleep at various connection densities K (0.03 ≤ K ≤ 0.12). Solid red lines represent the average values of the awake state, while black lines represent stage 1 sleep. (A) Normalized characteristic path length (lambda) significantly increased in stage 1 sleep compared with the awake state (P = 0.0059, permutation test). (B and C) However, state-dependent effects on the normalized clustering coefficient (gamma) and modularity Q were not significant. (D) The mean connection distance of edges D is significantly decreased in stage 1 sleep compared with the awake state (P = 0.0019, permutation test). Error bars represent standard error of the mean.
### State-Dependent Efficiency at the Regional Level
To examine the effect of sleep stage on efficiency at the regional level, we compared the nodal efficiency $E_i$ between the 2 states. As shown in Figure 4, 6 spatially extended clusters could be extracted in which the nodal efficiency significantly decreased during stage 1 sleep: (1) bilateral medial parietal cortex extending to the cingulate cortex, involving the precuneus and posterior and middle cingulate cortex (cluster size [number of nodes] = 70, P = 0.0019, corrected), (2) bilateral medial prefrontal cortex (cluster size = 28, P = 0.0107, corrected), (3) right lateral parietal cortex (mainly supramarginal gyrus; cluster size = 24, P = 0.0127, corrected), (4) left lateral parietal cortex (angular gyrus and superior parietal lobule; cluster size = 23, P = 0.0136, corrected), (5) right lateral parietal cortex (angular gyrus and superior parietal lobule; cluster size = 20, P = 0.0166, corrected), and (6) right lateral prefrontal cortex (cluster size = 19, P = 0.0166, corrected). There were no regions that showed significant increases in nodal efficiency during stage 1 sleep.
Figure 4.
Brain regions that showed statistically significant decreases in nodal efficiency $E_i$ during stage 1 sleep (P < 0.025, corrected). (1) Bilateral medial parietal, (2) bilateral medial prefrontal, (3) right lateral parietal cortex (mainly supramarginal gyrus), (4) left lateral parietal cortex (angular gyrus and superior parietal lobule), (5) right lateral parietal cortex (angular gyrus and superior parietal lobule), and (6) right lateral prefrontal cortex.
### Overlap of State-Dependent Regions and Segregated Modules
The modularity Q of the group-level graph was 0.601, high enough to separate the whole-brain regions into several clustered modules. A module is topologically defined as a subset of highly interconnected nodes that are relatively sparsely connected to nodes in other modules (see Materials and Methods section for details). As shown in Figure 5, 5 primary modules could be segregated without any a priori assumptions, and the configuration of these 5 modules resembled the previously reported large-scale resting networks revealed by seed-based correlation analysis and independent component analysis (Raichle 2010). Thus, we labeled them "DMN," "ECN," "salience network (SAN)," "sensorimotor network (SMN)," and "visual network (VN)" with reference to previous studies (Boly et al. 2012). The numbers of nodes were 1018, 704, 416, 710, and 706, respectively, and 94% of all nodes were involved in these 5 modules. Then, we overlaid the nodes of each module onto the regions that showed a significant decrease in nodal efficiency during stage 1 sleep (Fig. 4) to demonstrate their positional relationship. As shown in Figure 6A,E, almost two-thirds of the nodes that showed a significant decrease in nodal efficiency during stage 1 sleep were categorized as DMN (64.7%, 119 nodes). The remaining nodes belonged to the ECN (14.7%, 27 nodes; Fig. 6B,E), SAN (15.2%, 28 nodes; Fig. 6C,E), SMN (4.3%, 8 nodes; Fig. 6D,E), and VN (0.5%, 1 node; Fig. 6E). To test the statistical significance of the overlaps, we created a 2 × 6 crosstab as shown in Table 2 and performed Haberman's residual analysis. The overlap between the DMN and the nodes showing the significant decrease in nodal efficiency was significantly larger than the expected value (Z-score = 11.83, P = 3.0 × 10^-31, corrected; Fig. 6F). In contrast, the overlaps between the SMN and these nodes (Z-score = -5.13, P = 3.3 × 10^-6, corrected) and between the VN and these nodes (Z-score = -6.46, P = 1.1 × 10^-7, corrected) were significantly smaller than the expected values (Fig. 6F).
Table 2
Overlap between each module and the regions with significantly decreased nodal efficiency during stage 1 sleep
| | DMN | ECN | SAN | SMN | VN | Others |
| --- | --- | --- | --- | --- | --- | --- |
| Significant decrease in Ei | 119 | 27 | 28 | 8 | 1 | 1 |
| Nonsignificant change in Ei | 899 | 677 | 388 | 702 | 705 | 226 |
Note: Values are the number of nodes.
Ei: nodal efficiency; DMN: default-mode network; ECN: executive control network; SAN: salience network; SMN: sensorimotor network; VN: visual network.
Figure 5.
Partition of a group-level graph by using the modularity maximizing method (see Materials and Methods). The 5 largest modules are segregated: “DMN (red, 1018 nodes),” “ECN (yellow, 704 nodes),” “SAN (green, 416 nodes),” “SMN (cyan, 710 nodes),” and “VN (blue, 706 nodes).” Ninety-four percent of all nodes are included in these 5 modules.
Figure 6.
Overlap between each module (shown in Fig. 5) and the regions with significantly decreased nodal efficiency during stage 1 sleep (shown in Fig. 4): (A) DMN, (B) ECN, (C) SAN, and (D) SMN. Regional colors correspond to those of Figure 5. (E) Numbers of nodes in these overlaps: DMN (red bar, 119 nodes, 64.7% of all nodes showing significantly decreased nodal efficiency during stage 1 sleep), ECN (yellow bar, 27 nodes, 14.7%), SAN (green bar, 28 nodes, 15.2%), SMN (cyan bar, 8 nodes, 4.3%), and VN (blue bar, 1 node, 0.5%). (F) Z-scored overlaps (see Materials and Methods). Dotted lines indicate Z-score = ±3.34 (P < 0.01 after Bonferroni correction).
## Discussion
In this study, we applied graph theoretical analysis to rs-fMRI with simultaneous EEG recording to clarify the differences in the organization of spontaneous functional networks between the awake state and stage 1 sleep. Our major findings were as follows. First, the communication efficiency across global brain regions, evaluated by the path length, significantly decreased during stage 1 sleep. Secondly, the efficiency of several specific regions in the association cortices also significantly decreased during stage 1 sleep. Thirdly, these specific regions dominantly overlapped with the DMN. Our results provide evidence for a state-dependent alteration of brain network organization and a decreased capacity for information integration during stage 1 sleep.
### Functional Network Organization Versus Anatomical Network Organization
In our results, the average path length increased significantly with the shift in sleep stage within a short period of time, over which anatomical network organization could not change. Thus, the modulation of functional network organization, at least as evaluated by some graph parameters, is independent of changes in anatomical network organization. This independence could provide insight into the known discrepancy between anatomical and functional network reorganization. For example, Lo et al. (2010) reported that the average path length of an "anatomical" network increased (i.e., the efficiency decreased) in Alzheimer's disease (AD) patients, whereas Sanz-Arigita et al. (2010) reported that the average path length of a "functional" network decreased (i.e., the efficiency increased) in AD patients. Since our results suggest a dynamic modulation of functional network organization beyond anatomical constraints, the functional networks in AD patients may be reorganized to compensate for decreased anatomical network efficiency. This then raises the question of what is occurring in the brain during stage 1 sleep. In the following section, we discuss the alteration of functional network organization during stage 1 sleep from the viewpoint of physiological and psychological changes.
### Functional Network Organization During Stage 1 Sleep
During stage 1 sleep, subjects respond unreliably to external stimuli and their reaction times are prolonged (Ogilvie and Wilkinson 1984, 1988; Ogilvie et al. 1989). For example, Ogilvie et al. (1989) reported that subjects failed to respond to about 40% of faint auditory stimuli presented during stage 1 sleep, and their reaction times were considerably prolonged. However, neurophysiological evidence explaining such unstable and slow responsiveness during stage 1 sleep is scarce. In the small-world brain network, a short path length represents a small number of intermediate transmissions in the integrative pathway and thereby underpins the accurate and rapid transfer of information in integrative neural communications (Kaiser and Hilgetag 2006). Conversely, a larger number of intermediate transmissions causes greater signal loss, signal distortion, and slower processing speed. Therefore, the unstable and slow responsiveness in stage 1 sleep could be explained by the increased path length demonstrated by this study.
In parallel with the increase in the average path length, the average physical connection distance decreased during stage 1 sleep. A decreased physical connection distance suggests a loss of connections between remote brain regions, that is, the long-range connections that are critical for keeping path lengths short in brain networks and ensuring a highly efficient network organization (Kaiser and Hilgetag 2006). Therefore, the increase in average path length is likely caused by the loss of long-range connections during stage 1 sleep. The results of combined transcranial magnetic stimulation (TMS) and EEG studies also support this notion: a TMS-evoked response efficiently spreads to distant, distributed brain regions during the awake state, but does not spread beyond the stimulation site during stages 1 and 2 sleep (Massimini et al. 2005) or during anesthesia (Ferrarelli et al. 2010).
In contrast, we found no significant effects on the degree of clustering (Fig. 3B) or modular organization (Fig. 3C). Both are thought to be associated with the network organization responsible for locally specialized processing. According to mathematical models (Watts and Strogatz 1998; Mathias and Gopal 2001), both parameters are interdependent with path length as a tradeoff between globally integrated and locally specialized organization. These parameters may be more robust than average path length against alterations of network organization, that is, more constrained by anatomical network organization.
### Comparison with Previous rs-fMRI Studies
Spoormaker et al. (2010) reported on the changes in network organization from wakefulness to deep NREM sleep using rs-fMRI, with an approach very similar to ours. However, unlike in our study, they observed that the main effect of sleep was not on average path length but on local clustering: the normalized clustering coefficient was lowest in light NREM sleep (stages 1 and 2 > stage 0 > deep NREM sleep), and they concluded that the functional network moved toward randomness in light sleep. Comparing their results with ours, some parts were consistent: the average path length increased in stage 1 sleep at some connection densities, and the normalized average path length increased in stage 2 sleep at some connection densities. However, the following 2 differences should be noted. First, according to the mathematical model of Watts and Strogatz (1998; WS model), when a small-world network moves toward randomness, the average path length decreases as the clustering coefficient decreases. Therefore, from the viewpoint of the WS model, their results and ours were opposite in the direction of the change in small-world network organization. A possible cause for this discrepancy is the definition of nodes: they applied an AAL-based node definition (90 nodes), whereas we applied a much finer node definition (3781 nodes). Different node sets could represent different aspects of the same network; for example, as the brain is known to be a hierarchical system (Mesulam 1998), different node sets could represent different hierarchical levels of the functional network. Another possibility that could affect the results is threshold selection: whereas they systematically analyzed the full range of connection densities, we analyzed a sparse connection density range. However, since the main effects in their study were observed throughout a wide range of connection densities, it is unlikely that the different threshold selections account for the divergence between the 2 studies. Secondly, the specific involvement of the DMN during stage 1 sleep was demonstrated in our study but not in theirs. This could be due to our node definition, because our fine node set could distinguish the major functional systems, including the DMN, as shown in Figure 5, whereas the AAL-based node set could not (Power et al. 2011).
Recently, Boly et al. (2012) reported changes in the functional brain network during stages 2–4 NREM sleep using rs-fMRI. Using information theoretical measures, they revealed a modification of the hierarchical organization of large-scale brain networks into smaller independent modules during stages 2–4 sleep. In line with our results, their results suggested that the brain's capacity to integrate information decreases during stages 2–4 sleep. Furthermore, they evaluated the modularity using graph theoretical analysis and revealed an increase in modularity during stages 2–4 sleep. In contrast, we did not observe an increase in modularity during stage 1 sleep. This discrepancy could be explained by the difference in sleep stage between the 2 studies: modularity could increase in proportion to the depth of sleep.
Several rs-fMRI studies have reported changes in functional connectivity during sleep and discussed the relationship to conscious awareness. However, the results are inconsistent: connectivity within the DMN decreased throughout NREM sleep (Sämann et al. 2011) or only in deep NREM sleep (Horovitz et al. 2009), remained constant in light NREM sleep (Horovitz et al. 2008; Larson-Prior et al. 2009), or changed throughout non-REM and REM sleep (Koike et al. 2011). Although it is hard to determine the cause of these discrepant results, the connectivity pattern of the whole-brain network was not taken into consideration in any of these studies. Since the changes in brain activity when shifting from wakefulness to sleep occur on the scale of the whole brain, a whole-brain analysis using graph theory is more suitable for this purpose.
### Role of DMN as a Center of Conscious Awareness
The regions showing a significant decrease in nodal efficiency during stage 1 sleep were extensively dominated by areas categorized as DMN (Fig. 6). Nodal efficiency represents how efficiently a brain region communicates with the rest of the brain and reflects not only connectivity within modules (e.g., DMN or ECN) but also connectivity among whole-brain regions. For example, if a region has strong connectivity to regions in 2 or more modules, it has a higher nodal efficiency than another region having similar connectivity to regions in only one module. Therefore, our result suggests that the DMN is indispensable for integrating whole-brain regions in the fully conscious awake state. So far, an increasing number of results have suggested that the DMN plays a key role in generating conscious awareness, summarized as follows. (1) The DMN regions show a metabolic reduction in various altered states of conscious awareness such as coma, general anesthesia, generalized seizures, and the vegetative state (for a review, see Baars et al. 2003; Laureys 2005). (2) Connectivity among the DMN regions decreases during sleep (Horovitz et al. 2009; Sämann et al. 2011), general anesthesia (Boveroux et al. 2010), and clinical consciousness impairment (Vanhaudenhuyse et al. 2010). (3) Coactivation of the DMN regions is related to self-awareness (Kjaer, Nowak, et al. 2002; Vanhaudenhuyse et al. 2011). Moreover, the DMN contains the most centrally connected regions in the whole-brain network (Buckner et al. 2009; Cole et al. 2010) and thus may be essential for consciousness as an integrator of brain activity. However, these previous studies did not directly address the relationship between consciousness and the efficiency of communication with the DMN. By using graph theoretical analysis and simultaneous EEG recording with fMRI, we clearly demonstrated that efficient communication between regions in the DMN and the rest of the brain is critical for the awake conscious state.
### Limitations of Our Study
One limitation of this study is that we did not analyze data acquired when the subjects reached deep NREM sleep. It is well known that conscious awareness degrades further in deep NREM sleep and in patients with consciousness disorders. Therefore, it is not yet clear whether the changes in functional network organization reported here are specific to stage 1 sleep or whether they become more prominent in the same direction in deep NREM sleep. As mentioned earlier, the results of Spoormaker et al. (2010) suggested that light sleep is not a mere transient state between wakefulness and deep NREM sleep. Since their results in light sleep were quite different from ours, further systematic studies are needed to resolve this issue.
The fine-resolution nodes adopted in this study were useful for capturing the DMN-specific changes. At the same time, however, a fine brain partition may reduce the signal-to-noise ratio and affect the overall results (Fornito et al. 2010). In this respect, several studies have sought the ideal node definition (Fornito et al. 2010; Power et al. 2011), though no consensus has been established. Further theoretical and empirical studies are needed to overcome this limitation.
Interestingly, Picchioni et al. (2008) demonstrated that brain activity differs between early and late stage 1 sleep. It is possible that functional network organization also differs between early and late stage 1 sleep. However, because of the limited sampling rate of rs-fMRI, we could not estimate functional connectivity over such short periods. On this point, graph theoretical analysis using other modalities such as magnetoencephalography (Hipp et al. 2012) may be useful for obtaining a deeper understanding of network organization during stage 1 sleep.
## Funding
This study was supported in part by a Grant-in-Aid from the Ministry of Education, Culture, Sports, Science, and Technology (# 22390177 to S.T.) of the Government of Japan and CREST of Japan Science and Technology (JST).
## Notes
Conflict of Interest: None declared.
## References
Achard S, Bullmore E. 2007. Efficiency and cost of economical brain functional networks. PLoS Comput Biol. 3:e17.

Baars BJ, Ramsøy TZ, Laureys S. 2003. Brain, conscious experience and the observing self. Trends Neurosci. 26:671-675.

Blondel VD, Guillaume JL, Lambiotte R, Lefebvre E. 2008. Fast unfolding of communities in large networks. J Stat Mech. 2008:P10008.

Boccaletti S, Latora V, Moreno Y, Chavez M, Hwang DU. 2006. Complex networks: structure and dynamics. Phys Rep. 424:175-308.

Boly M, Perlbarg V, Marrelec G, Schabus M, Laureys S, Doyon J, Pélégrini-Issac M, Maquet P, Benali H. 2012. Hierarchical clustering of brain activity during human nonrapid eye movement sleep. Proc Natl Acad Sci USA. 109:5856-5861.

Boveroux P, Vanhaudenhuyse A, Bruno MA, Noirhomme Q, Lauwick S, Luxen A, Degueldre C, Plenevaux A, Schnakers C, Phillips C, et al. 2010. Breakdown of within- and between-network resting state functional magnetic resonance imaging connectivity during propofol-induced loss of consciousness. Anesthesiology. 113:1038-1053.

Buckner RL, Sepulcre J, Talukdar T, Krienen FM, Liu H, Hedden T, Andrews-Hanna JR, Sperling RA, Johnson KA. 2009. Cortical hubs revealed by intrinsic functional connectivity: mapping, assessment of stability, and relation to Alzheimer's disease. J Neurosci. 29:1860-1873.

Bullmore E, Sporns O. 2009. Complex brain networks: graph theoretical analysis of structural and functional systems. Nat Rev Neurosci. 10:186-198.

Bullmore ET, Bassett DS. 2011. Brain graphs: graphical models of the human brain connectome. Annu Rev Clin Psychol. 7:113-140.

Cole MW, Pathak S, Schneider W. 2010. Identifying the brain's most globally connected regions. Neuroimage. 49:3132-3148.

Dehaene S, Changeux JP. 2011. Experimental and theoretical approaches to conscious processing. Neuron. 70:200-227.

Ferrarelli F, Massimini M, Sarasso S, Casali A, Riedner BA, Angelini G, Tononi G, Pearce RA. 2010. Breakdown in cortical effective connectivity during midazolam-induced loss of consciousness. Proc Natl Acad Sci USA. 107:2681-2686.

Fornito A, Zalesky A, Bassett DS, Meunier D, Ellison-Wright I, Yücel M, Wood SJ, Shaw K, O'Connor J, Nertney D, et al. 2011. Genetic influences on cost-efficient organization of human cortical functional networks. J Neurosci. 31:3261-3270.

Fornito A, Zalesky A, Bullmore ET. 2010. Network scaling effects in graph analytic studies of human resting-state fMRI data. Front Syst Neurosci. 4:22.

Fortunato S. 2010. Community detection in graphs. Phys Rep. 486:75-174.

Fox MD, Raichle ME. 2007. Spontaneous fluctuations in brain activity observed with functional magnetic resonance imaging. Nat Rev Neurosci. 8:700-711.

Fox MD, Snyder AZ, Vincent JL, Corbetta M, Van Essen DC, Raichle ME. 2005. The human brain is intrinsically organized into dynamic, anticorrelated functional networks. Proc Natl Acad Sci USA. 102:9673-9678.

Haberman SJ. 1973. The analysis of residuals in cross-classified tables. Biometrics. 29:205-220.

Hayasaka S, Laurienti PJ. 2010. Comparison of characteristics between region- and voxel-based network analyses in resting-state fMRI data. Neuroimage. 50:499-508.

He Y, Wang J, Wang L, Chen ZJ, Yan C, Yang H, Tang H, Zhu C, Gong Q, Zang Y, et al. 2009. Uncovering intrinsic modular organization of spontaneous brain activity in humans. PLoS One. 4:e5226.

Hipp JF, Hawellek DJ, Corbetta M, Siegel M, Engel AK. 2012. Large-scale cortical correlation structure of spontaneous oscillatory activity. Nat Neurosci. 15:884-890.

Honey CJ, Sporns O, Cammoun L, Gigandet X, Thiran JP, Meuli R, Hagmann P. 2009. Predicting human resting-state functional connectivity from structural connectivity. Proc Natl Acad Sci USA. 106:2035-2040.

Horovitz SG, Braun AR, Carr WS, Picchioni D, Balkin TJ, Fukunaga M, Duyn JH. 2009. Decoupling of the brain's default mode network during deep sleep. Proc Natl Acad Sci USA. 106:11376-11381.

Horovitz SG, Fukunaga M, de Zwart JA, van Gelderen P, Fulton SC, Balkin TJ, Duyn JH. 2008. Low frequency BOLD fluctuations during resting wakefulness and light sleep: a simultaneous EEG-fMRI study. Hum Brain Mapp. 29:671-682.

Humphries MD, Gurney K, Prescott TJ. 2006. The brainstem reticular formation is a small-world, not scale-free, network. Proc Biol Sci. 273:503-511.

Kaiser M, Hilgetag CC. 2006. Nonoptimal component placement, but short processing paths, due to long-distance projections in neural systems. PLoS Comput Biol. 2:e95.

Kjaer TW, Law I, Wildschiødtz G, Paulson OB, Madsen PL. 2002. Regional cerebral blood flow during light sleep - a H(2)(15)O-PET study. J Sleep Res. 11:201-207.

Kjaer TW, Nowak M, Lou HC. 2002. Reflective self-awareness and conscious states: PET evidence for a common midline parietofrontal core. Neuroimage. 17:1080-1086.

Koike T, Kan S, Misaki M, Miyauchi S. 2011. Connectivity pattern changes in default-mode network with deep non-REM and REM sleep. Neurosci Res. 69:322-330.

Larson-Prior LJ, Zempel JM, Nolan TS, Prior FW, Snyder AZ, Raichle ME. 2009. Cortical network functional connectivity in the descent to sleep. Proc Natl Acad Sci USA. 106:4489-4494.

Latora V, Marchiori M. 2001. Efficient behavior of small-world networks. Phys Rev Lett. 87:198701.

Laureys S. 2005. The neural correlate of (un)awareness: lessons from the vegetative state. Trends Cogn Sci. 9:556-559.

Liu Y, Liang M, Zhou Y, He Y, Hao Y, Song M, Yu C, Liu H, Liu Z, Jiang T. 2008. Disrupted small-world networks in schizophrenia. Brain. 131:945-961.

Lo CY, Wang PN, Chou KH, Wang J, He Y, Lin CP. 2010. Diffusion tensor tractography reveals abnormal topological organization in structural cortical networks in Alzheimer's disease. J Neurosci. 30:16876-16885.

Lynall ME, Bassett DS, Kerwin R, McKenna PJ, Kitzbichler M, Muller U, Bullmore E. 2010. Functional connectivity and brain networks in schizophrenia. J Neurosci. 30:9477-9487.

Maslov S, Sneppen K. 2002. Specificity and stability in topology of protein networks. Science. 296:910-913.

Massimini M, Ferrarelli F, Huber R, Esser SK, Singh H, Tononi G. 2005. Breakdown of cortical effective connectivity during sleep. Science. 309:2228-2232.

Mathias N, Gopal V. 2001. Small worlds: how and why. Phys Rev E Stat Nonlin Soft Matter Phys. 63:021117.

Mesulam MM. 1998. From sensation to cognition. Brain. 121:1013-1052.

Meunier D, Lambiotte R, Bullmore ET. 2010. Modular and hierarchically modular organization of brain networks. Front Neurosci. 4:200.

Meunier D, Lambiotte R, Fornito A, Ersche KD, Bullmore ET. 2009. Hierarchical modularity in human brain functional networks. Front Neuroinformatics. 3:1-12.

Newman ME. 2004. Fast algorithm for detecting community structure in networks. Phys Rev E Stat Nonlin Soft Matter Phys. 69:066133.

Newman ME. 2006. Modularity and community structure in networks. Proc Natl Acad Sci USA. 103:8577-8582.

Nichols TE, Holmes AP. 2001. Nonparametric permutation tests for functional neuroimaging: a primer with examples. Hum Brain Mapp. 15:1-25.

Ogilvie RD, Wilkinson RT. 1988. Behavioral versus EEG-based monitoring of all-night sleep/wake patterns. Sleep. 11:139-155.

Ogilvie RD, Wilkinson RT. 1984. The detection of sleep onset: behavioral and physiological convergence. Psychophysiology. 21:510-520.

Ogilvie RD, Wilkinson RT, Allison S. 1989. The detection of sleep onset: behavioral, physiological, and subjective convergence. Sleep. 12:458-474.

Picchioni D, Fukunaga M, Carr WS, Braun AR, Balkin TJ, Duyn JH, Horovitz SG. 2008. fMRI differences between early and late stage-1 sleep. Neurosci Lett. 441:81-85.

Power JD, Cohen AL, Nelson SM, Wig GS, Barnes KA, Church JA, Vogel AC, Laumann TO, Miezin FM, Schlaggar BL, et al. 2011. Functional network organization of the human brain. Neuron. 72:665-678.

Raichle ME. 2010. Two views of brain function. Trends Cogn Sci. 14:180-190.

Rechtschaffen A, Kales A. 1968. A manual of standardized terminology, techniques and scoring system of sleep stages in human subjects. Los Angeles: Brain Information Service/Brain Research Institute, University of California.

Rubinov M, Sporns O. 2010. Complex network measures of brain connectivity: uses and interpretations. Neuroimage. 52:1059-1069.

Sämann PG, Wehrle R, Hoehn D, Spoormaker VI, Peters H, Tully C, Holsboer F, Czisch M. 2011. Development of the brain's default mode network from wakefulness to slow wave sleep. Cereb Cortex. 21:2082-2093.

Sanz-Arigita EJ, Schoonheim MM, Damoiseaux JS, Rombouts SA, Maris E, Barkhof F, Scheltens P, Stam CJ. 2010. Loss of "small-world" networks in Alzheimer's disease: graph analysis of fMRI resting-state functional connectivity. PLoS One. 5:e13788.

Spoormaker VI, Schröter MS, Gleiser PM, Andrade KC, Dresler M, Wehrle R, Sämann PG, Czisch M. 2010. Development of a large-scale functional brain network during human non-rapid eye movement sleep. J Neurosci. 30:11379-11387.

Sporns O. 2011. The non-random brain: efficiency, economy, and complex dynamics. Front Comput Neurosci. 5:5.

Supekar K, Menon V, Rubin D, Musen M, Greicius MD. 2008. Network analysis of intrinsic functional brain connectivity in Alzheimer's disease. PLoS Comput Biol. 4:e1000100.

Tononi G. 2004. An information integration theory of consciousness. BMC Neurosci. 5:42.

Tzourio-Mazoyer N, Landeau B, Papathanassiou D, Crivello F, Etard O, Delcroix N, Mazoyer B, Joliot M. 2002. Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. Neuroimage. 15:273-289.

van den Heuvel MP, Mandl RC, Kahn RS, Hulshoff Pol HE. 2009. Functionally linked resting-state networks reflect the underlying structural connectivity architecture of the human brain. Hum Brain Mapp. 30:3127-3141.

van den Heuvel MP, Mandl RC, Stam CJ, Kahn RS, Hulshoff Pol HE. 2010. Aberrant frontal and temporal complex network structure in schizophrenia: a graph theoretical analysis. J Neurosci. 30:15915-15926.

van den Heuvel MP, Stam CJ, Kahn RS, Hulshoff Pol HE. 2009. Efficiency of functional brain networks and intellectual performance. J Neurosci. 29:7619-7624.

Van Dijk KR, Hedden T, Venkataraman A, Evans KC, Lazar SW, Buckner RL. 2010. Intrinsic functional connectivity as a tool for human connectomics: theory, properties, and optimization. J Neurophysiol. 103:297-321.

Vanhaudenhuyse A, Demertzi A, Schabus M, Noirhomme Q, Bredart S, Boly M, Phillips C, Soddu A, Luxen A, Moonen G, et al. 2011. Two distinct neuronal networks mediate the awareness of environment and of self. J Cogn Neurosci. 23:570-578.

Vanhaudenhuyse A, Noirhomme Q, Tshibanda LJ, Bruno MA, Boveroux P, Schnakers C, Soddu A, Perlbarg V, Ledoux D, Brichant JF, et al. 2010. Default network connectivity reflects the level of consciousness in non-communicative brain-damaged patients. Brain. 133:161-171.

Watts DJ, Strogatz SH. 1998. Collective dynamics of "small-world" networks. Nature. 393:440-442.
# [Math] Lower bound in algorithmic puzzle
Tags: combinatorics, logic, puzzle
Puzzle: there are $n$ computers, most of which are good; the others may be bad ("most" in the strict sense: there are strictly more good computers than bad ones). You may ask any computer $A$ about the good/bad status of another computer $B$. If $A$ is good, it will correctly indicate $B$'s status; otherwise it may answer whatever it likes.
Your goal is to locate a good computer using the minimum number of questions in the worst case. In other words, devise an algorithm that requires no more than $N$ questions regardless of the outcome and is guaranteed to pinpoint a good computer, and make $N$ as small as possible.
The original puzzle asks for the optimal $N$ when $n=100$.
I, and everyone else I know who solved this, can do the $n=100$ case with $97$ questions in the worst case. I'm pretty sure this is optimal but I do really miserably on lower bounds. The simplest case where I can't match the bounds is $n=7$ (at most $3$ bad computers): this is doable with $5$ questions and I can rule out $3$ but I can't rule out $4$.
More generally, if the number of bad computers is at most $k$ (so $n=2k+1$ or $n=2k+2$), I can show that at least $k+1$ questions are needed while $2k-1$ questions suffice. Can anyone narrow that gap?
EDIT: starting a bounty, looking for improvements to the lower bound (or the upper bound, though I'd be surprised if the latter is possible) for general $n$. A transparent argument for why 7 computers require 5 questions is also good, but a computer-assisted case-by-case enumeration is not.
This is a very fun problem, which I have encountered in the literature as the knights and spies problem. Good computers are called knights, because they always tell the truth, while bad computers are called spies, because they say whatever they want. (Traditionally, a liar would be called a knave.) I will use this terminology because I feel it is more consistent with other liar/truth-teller puzzles.
Let me briefly discuss an alternative objective: figure out everyone's identity, not just a single knight's. I encourage you to think about this problem before reading further. This is an interesting problem, in part because it's exciting how many different bounds I have seen people come up with:
• $n^2$ or $n(n-1)$ by asking everyone about everyone
• $\Theta(n\sqrt{n})$ using a standard "square-root" trick or $\Theta(n\log{n})$ by being smarter about the recursion
• $5n + O(1)$, $3n + O(1)$, and $2n + O(1)$ by increasingly efficient methods related to pairing up people
• $3n/2 + O(1)$ optimally!
The 2009 paper "Knights, spies, games and ballot sequences" by Mark Wildon proves that the final bound is optimal, computing the exact value of the $O(1)$ for every $n$. Mark Wildon has a webpage with additional information about the problem which notes that this solution was previously published by Pavel M. Blecher in the 1983 paper "On a logical problem". The strategy used in these papers, which Wildon calls the "Spider Interrogation Strategy" is fantastic.
Your problem of identifying a single knight is more challenging! I'll get straight to the point by saying that the optimal bound is $n-1 - H(n-1)$ where $H(k)$ is the number of 1 bits in the binary representation of $k$. I haven't carefully read the other answer to this question, but as far as I can tell, this bound can easily be achieved by a trivial modification of that strategy.
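For concreteness, here is a tiny sketch evaluating that bound ($H(k)$ is just a popcount):

```python
def H(k):
    return bin(k).count("1")     # number of 1 bits in k

n = 100
print(n - 1 - H(n - 1))          # 99 - H(99) = 99 - 4 = 95
```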
The upper bound isn't too bad, but the lower bound is much trickier. When my friends and I thought about (and solved!) this problem, we used a reduction to the following problem:
There are two players, the Questioner and God, and a collection of (nonnegative) numbers. The Questioner's aim is to achieve a violation of the (strict) triangle inequality, that is, have a single number be greater than or equal to the sum of the rest. God's aim is to not have this happen for as long as possible. Each move consists of the Questioner asking about two numbers, to which God replies "sum" or "difference"; the two numbers $a$ and $b$ are then replaced by either $a+b$ or $\lvert a-b\rvert$ accordingly. Since the number of numbers decreases by one each turn, eventually there will be two numbers and the (strict) triangle inequality is necessarily violated. If God manages to not lose before then, we say that "God wins".
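If you want to play with this game, here is a small simulation sketch (my own illustration; both the Questioner's pairing rule and God's reply policy below are placeholders, not optimal strategies):

```python
# Sketch of the sum/difference game: God replies "sum" (a+b) or
# "difference" (|a-b|); the Questioner wins if one number is >= the
# sum of the rest before only two numbers remain.
def play(numbers, god_policy):
    nums = sorted(numbers)
    while len(nums) > 2:
        b, a = nums.pop(), nums.pop()     # placeholder: ask about the two largest
        nums.append(god_policy(a, b))     # god_policy returns a+b or abs(a-b)
        nums.sort()
        if nums[-1] >= sum(nums[:-1]):    # triangle inequality violated
            return "Questioner wins"
    return "God wins"

print(play([1] * 9, lambda a, b: a + b))  # a greedy "sum" policy loses quickly
```

Note that $n = 9$ ones is a winning position for God under optimal play, yet the greedy "always sum" policy loses, which matches the observation below that God must answer every question carefully.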
Here are some fun facts:
• Consider a starting position of $n$ ones. Then God wins if and only if $n$ is 1 more than a power of 2.
• Suppose God has a winning position. Then his answer to any question the Questioner can ask is forced; that is, it is always the case that one of the two answers is losing. We call this "Uniqueness". The way we think about it is that God somehow has to be very careful in answering every single question, and indeed, if you look at optimal play in "endgame" positions, it's hard to predict what the correct answer for God is.
• Suppose the starting position is $n=2^k+1$ ones. By the above two points, this is a winning position and God must carefully answer every question so as not to lose. Nonetheless, the answer to the first $(n-3)/2$ questions asked will necessarily be "sum".
So there's this phenomenon that in a certain class of positions, God answers blindly for the first half of the time, and then has to be very careful afterwards. It's very weird.
We finally classified winning positions. It turns out that the first bullet point above is true because $\binom{2n}{n}/2$ is odd only when $n$ is a power of 2.
Some more quick observations:
• With regards to the reduction: this sum/difference game is just what you get if the spies are knaves. In that case, the knights and spies are just two groups of people which support themselves but accuse the others.
• The lower bound $n-1-H(n-1)$ holds even if you're trying to find a knight or a spy.
Let me finish with some references to the literature. This result has been published many times! The term used in the literature for this game is "the majority game".
You have a group of $n$ items, each with one of $k$ labels. One of the labels is shared by a majority of the items. You are allowed to ask if two items have the same label, and the goal is to minimize the number of questions necessary to identify a majority element.
The best studied case is when $k=2$. The problem is often presented as played by a questioner and an answerer. This is not exactly the same game, because in principle spies could tell the truth. However, the lower bound reduction above is to the case when the spies are knaves, and then this is the same game.
Here's a summary of the relevant literature:
• Michael J. Fischer and Steven L. Salzberg.
Finding a majority among $n$ votes.
J. Algorithms 3 (1982) 375--379.
They consider the problem when $k$ is unknown, and a majority may or may not exist. They prove that the optimal bound is then $\lceil 3n/2 - 2\rceil$ questions. You may recognize this as the bound from Mark Wildon's paper. Now, I am a little confused because it doesn't seem to me that the problems map onto each other exactly, but I find it hard to imagine that it's an accident.
Actually, the algorithm they present seems strikingly similar to Wildon's "spider interrogation strategy". Because Fischer and Salzberg are in a slightly different model (in particular one that is symmetric w.r.t. asking $x$ about $y$ and $y$ about $x$), you have to change some of the details. I don't think they exactly map onto each other (in terms of the questions asked), but they're similar.
• Saks, Michael E. and Werman, Michael.
On computing majority by comparisons.
Combinatorica 11 (1991), no. 4, 383--387.
They show that if $n$ is odd, the optimal bound is $n-H(n)$, where $H(n)$ is the number of 1-bits in $n$.
Their analysis proceeds by first reformulating the game as between a selector and an assigner. A position in the game is a multiset of integers, and the rules are as in the sum-difference game, e.g. "the game ends when the largest number in the multiset is at least half the total." They then have a slick proof of the upper bound! (In their notation it's a lower bound.) I mention this because even though the upper bound isn't hard, some analyses get mired in minor details.
They then define a 2-adic valuation invariant which my friends and I also discovered. Their proof uses generating functions which ours did not, but I believe the actual invariant is the same as ours. Finally, they apply the invariant result to the position consisting of $2h+1$ ones by computing the valuation of $\binom{2h}{h}$.
• Alonso, Laurent; Reingold, Edward M.; and Schott, René
Determining the majority.
Inform. Process. Lett. 47 (1993), no. 5, 253--255.
They present a short proof of the previous paper's results. The upper bound uses the same neat trick. They then prove the lower bound with essentially the same 2-adic invariant technique, except with a different exposition. Admittedly, it's cleaner and doesn't use generating functions.
• Wiener, Gábor
Search for a majority element.
International Conference (of the Forum for Interdisciplinary Mathematics) on Combinatorics, Information Theory and Statistics (Portland, ME, 1997). J. Statist. Plann. Inference 100 (2002), no. 2, 313--318.
He proves some related results, including the following:
If asking about $x$ and $y$ is an optimal question, then there exists an optimal question that doesn't ask about $x$. It follows that the optimal number of questions is monotonic, in the sense that adding another 1 into the position does not decrease the optimal number of questions.
# Analyzing the graphs of Greatest Integer Functions
Gold Member
## Homework Statement
Consider $u\left(x\right)=2\left[\frac{-x}{4}\right]$
(a) Find the length of the individual line segments of the function,
(b) Find the positive vertical separation between line segments.
## Homework Equations
The output of a Greatest Integer Function is always an integer.
## The Attempt at a Solution
Length:
The text states that the coefficient of x within the greatest integer symbols is the length of the individual line segments of the graph.
In $u\left(x\right)=2\left[\frac{-x}{4}\right]$, the coefficient of x is $\frac{-1}{4}$.
However, the solution for the length of the graph states that length=4.
It explains this by stating that there's a decrease of 1 for every increase of 4 in the variable x.
This would make sense if we were talking about the slope of a line, but it doesn't make any sense at all in this context.
And since we're talking about the length of a line segment, does the negation matter?
Vertical Separation:
The text states that the coefficient of the greatest integer function is the positive vertical separation between line segments.
This is a straightforward statement, and the vertical separation = 2, but I don't see why this leading coefficient determines this.
Can anyone help me get a better idea of what is going on with the graphs of these functions?
andrewkirk
Homework Helper
Gold Member
I haven't seen the notation you are using but I presume that the square brackets [...] denote the function that gives the greatest integer that is less than or equal to the value of the contents of the brackets. In my experience that is usually indicated by $\lfloor...\rfloor$ and is called the 'floor' function.
If that is the case then the function value is piecewise constant and steps down at each multiple of 4, ie at ....,-8, -4, 0, 4, 8, ....
So each line segment is horizontal and extends for the period that x takes to increase by 4. So its length is 4.
How much does it step down by each time? Well $\lfloor\frac{-x}4\rfloor$ always has an integer value and decreases by 1 each time $x$ reaches a new, higher multiple of 4. So that's a step size of 1, but then it is multiplied by 2 - the leading coefficient - so the step size (the vertical separation) is 2.
The impact of the negation is to make the steps go down as $x$ increases, rather than up. But it doesn't affect the step length or height.
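If it helps, a quick tabulation (a minimal Python sketch, reading the brackets as the floor function) makes both numbers visible:

```python
import math

def u(x):
    # The function from the problem: u(x) = 2 * floor(-x/4),
    # reading [.] as the floor function.
    return 2 * math.floor(-x / 4)

for x in range(-8, 9):
    print(x, u(x))
```

In the printout, each output value repeats across four consecutive integer inputs (the segment length of 4), and successive values drop by 2 (the vertical separation of 2).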
Gold Member
Yes, we are talking about the same function. My text has it denoted as something similar to, but not exactly like, [[x]]. But I looked up the "floor function" that you stated on Wikipedia and we're talking about the same thing.
That was a great explanation, thank you.
Let me see if I understand:
The length is 4 because if we were to create a table of values (leaving the leading coefficient of 2 out for simplicity's sake), it would take 4 integer x inputs for the line segment to be complete and move up or down to the next segment? For example, an input of x = -8 gives an output of 2, and inputs x = -7, -6, -5, -4 all give an output of 1. Then when x = -3, the output is zero, so there's a new line segment vertically shifted.
For the vertical displacement: the greatest integer function's output is always an integer. So when we take that integer output and multiply it by 2, the separation between successive segments becomes 2.
https://www.ias.ac.in/listing/bibliography/jess/P._Mahesh
• P Mahesh
Articles written in Journal of Earth System Science
• The 2007 Bengkulu earthquake, its rupture model and implications for seismic hazard
The 12 September 2007 great Bengkulu earthquake ($M_w$ 8.4) occurred on the west coast of Sumatra about 130 km SW of Bengkulu. The earthquake was followed by two strong aftershocks of $M_w$ 7.9 and 7.0. We estimate coseismic offsets due to the mainshock, derived from near-field Global Positioning System (GPS) measurements from nine continuous SuGAr sites operated by the California Institute of Technology (Caltech) group. Using a forward modelling approach, we estimated the slip distribution on the causative rupture of the 2007 Bengkulu earthquake and found two patches of large slip, one located north of the mainshock epicenter and the other under the Pagai Islands. Both patches of large slip on the rupture occurred under the island belt and shallow water. Thus, despite its great magnitude, this earthquake did not generate a major tsunami. Further, we suggest that the occurrence of great earthquakes in the subduction zone on either side of the Siberut Island region might have led to an increase in static stress in that region, where the last great earthquake occurred in 1797 and where there is evidence of strain accumulation.
• Influences of the boundary layer evolution on surface ozone variations at a tropical rural site in India
Collocated measurements of boundary layer evolution and surface ozone, made for the first time at a tropical rural site (Gadanki 13.5°N, 79.2°E, 375 m amsl) in India, are presented here. The boundary layer observations were made using a lower atmospheric wind profiler, and surface ozone observations were made simultaneously using a UV analyzer during the month of April. The daytime average boundary layer height varied from 1.5 km (on a rainy day) to a maximum of 2.5 km (on a sunny day). Correlated day-to-day variability in the daytime boundary layer height and ozone mixing ratios is observed: days of higher ozone mixing ratios are associated with greater boundary layer heights and vice versa. It is shown that a higher boundary layer can lead to mixing of near-surface air with the ozone-rich air aloft, resulting in the observed enhancements in surface ozone. A chemical box model simulation indicates about a 17% reduction in daytime ozone levels under suppressed PBL conditions in comparison with higher PBL conditions. On a few occasions, substantially elevated ozone levels (as high as 90 ppbv) were observed during late evening hours, when photochemistry is not intense. These events are shown to be due to southwesterly winds with uplifting and northeasterly winds with downward motion bringing ozone-rich air from nearby urban centers. This was further corroborated by backward trajectory simulations.
https://suranyami.com/the-suranyami-bullet
Suranyami
The Suranyami Bullet
August 22 2012, 9:57 PM
Quite a few people have been asking me where “suranyami” comes from lately, which caused me to remember that I hadn’t transferred over the Suranyami Bullet story when I moved over to Posterous.
So here it is. This was a 2-part dream with a sequel that occurred over New Year’s Eve / Day and then 6 months later. I wrote it all down as soon as I woke up, because it was unusually vivid.
Suranyami Bullet
19981231
The war between Iraq and the USA had reached a point of extreme ridiculousness and was now taking criticism from all of the Islamic states, and many other nations also. Hundreds of cruise missiles were poised, and then fired from US carriers. Suddenly, one by one, they all disappeared. There was an ominous and eerie silence which was followed by a brief public broadcast from an obscure, fundamentalist (yet, pacifist) Islamic sect. They said that the time had come for the use of the Suranyami Bullet, a sacred object that they had been secretly guarding for over 10,000 years. This was why all the missiles had failed. They said that when the Suranyami Bullet was fired it could shoot every individual human on the Earth straight through their heart, and that this was no idle threat.
At the very same time, I got an urgent email from an old friend, Julie Chan, asking me if I was available to work immediately on the most significant nanotech project ever devised: "the Suranyami Bullet". I said "yes". When I got there (in the Middle East) I found the most amazing laboratory… sophisticated beyond your wildest dreams. It had been built in one day by the simple insertion of a small metal key into a slot in this small bronzey-coloured bullet with little red spheres studded around one end of its casing. The ensuing nanotech lab was a "setup station" for the full release of its capabilities. Yesterday's missile deactivation had been almost a trivial exercise once it had been activated. Now came the fascinating task of working out what it could really do, how it did it, and why it had been held in safekeeping for thousands of years by a group of devout Muslim priests.
Here were programmers, mathematicians, philosophers, physicists, chemists etc… all coming to grips with this tiny device that was more powerful than anything else in the universe so far.
My initial doodlings at disassembling its products started with requesting a pencil to be produced and then using the equipment available to analyse it. The pencil had the unusual property that if you twisted it, it grew and shrank in girth and pencil-stroke darkness. I ran a scan for any structures that had a structural similarity to a computer language statement such as:
ON EVENT(type) DO
Statement
END
…etcetera and soon discovered many, many types of these structures, mostly idle, scattered throughout the entire makeup of the pencil. There seemed to be a pretty predictable one-to-one correspondence between the machine-micro-circuits and typical computer instructions, so I modified a cross-compiled hack of Fractal Painter and soon had a pencil that had the full functionality of Painter… touch this for this brush, press the ? and out unfolded an interactive instruction page, bend here to activate the image hose, etc… geeks emailed copies of the compilation code to each other with much delight.
19990101 Part 2
It turned out that the knowledge of when to activate the Suranyami Bullet had been passed down as an oral tradition from teacher to student for as long as anyone could speak. The time to activate it would come as a definite feeling by those who were physically near it. This was one of the major “tricks” that it was apparently capable of… that it had some form of psychic influence, but was obviously based on sound scientific principles.
Good progress had been made towards understanding it and how to take it to the next level. Upon activating Level 2, the thing burst up, down, in and out all at once, simultaneously consuming and producing at a massive rate.
Analysis had shown that what looked like a large section of the bullet dedicated to "construction plans" was in fact nothing but a big decompression algorithm. The data that it was decompressing turned out to be only 1 bit in length. It was anticipated that the output of the decompression was a set of instructions, the size of which was in the googolplex range (10^(10^100)).
Somehow, the compression algorithm was always uniquely designed to decode the same number, but using different techniques and pseudo-random number seeds would come up with wildly varying output. In fact, it was theorised that if we could understand the algorithm that designed the compression algorithm each time, we could compress any amount of data into 1 bit.
The bullet was now expanding at an enormous rate, consuming the ground, trees, buildings, etc… and looking like a cross between video static, pin-cushioned inflatable leather, nuclear powerplant plumbing, circuitry, organic mould and you-name-it. It inflated around and through everything, and by-and-large, pushed people harmlessly away from it.
In what seemed irrelevant (at first), I was asked by my Sister, Mum and Nana to attend a funeral for my Uncle Jimmy. They were very upset and worried about the strange, evangelical, bargain-basement, charismatic church sect that Auntie Babs had proposed for the funeral. They wanted me to go and talk some sense into her, so, could I sit in with Babs at the dress rehearsal for the funeral, which required a male for half of the burial ceremony.
The funerals this church did were a bit like the Reverend Moon mass marriage ceremonies, where stadiums were filled with lots of couples. I joined Babs and sat on the sideways-backwards seats that were used. We were in a huge L-shaped church hall with hundreds of others that were also burying people.
We had to put hand puppets on at various times, while we were rocking back and forth and cutting up bits of salami and saying the Eulogies in high-pitched, sing-song, cartoon voices. All the seats were alternating pink and white chequers and they rocked back and forth like fake stage-waves. Ultimately, it all got more and more surreal, to the point where I thought that I must be hallucinating, but it turned out that it was actually part of the bullet’s influence.
It had been gradually assimilating and rearranging everyone's personalities and consciousness into itself, and this was a combination of some moral lessons that it wanted to teach and the fact that it hadn't fully grasped the subtleties of our lives and communication styles yet. The transition had seemed to happen quite smoothly at first, since it had significant knowledge of how our bodies processed sensory data, and it was a fairly simple matter for it to release nano-bots into our lungs and bloodstream that would then travel to the various parts of the body and start feeding false information into synapses.
A group of others and myself soon found ourselves suspended in a spherical field surrounded by the bullet's machinations as far as the eye could see, which must have been millions of miles; there was no horizon. It had already become so big, it told us, that communication between parts of itself was suffering week-long lag times just to send messages, so it had developed time-space folding techniques to hyperspace signals and structures throughout itself. The devices to do this looked like identical mirror-inverted CDs without holes in the middle… a bit like the stepping disks in Ringworld. One side of each disk was a field that caused a time-space discontinuity whereby you could put your arm through one disk and it would start protruding out of the other disk, no matter how far apart they were.
We were being accelerated to the speed of light, now, whereby henceforth we would be increasing our mass and energy towards infinity, reducing the rate of time passing to zero, and becoming identical to light. As time stopped, we became immortal and at one with God: the Suranyami Bullet.
A new God-King-Emperor! Hurrah!
19990610
Peter Pegg, the intrepid "Dr Livingstone" explorer with the pith hat, hurriedly ushered me away saying "David, we simply must be involved in this… it is a once-in-a-lifetime opportunity". We were in the kingdom of Suran, where the Suranyami Bullet had come from. The current God-King-Emperor had just died… his status was similar to the Zion/Ethiopian Haile Selassie/Marcus Garvey "incarnation of god that walks upon the earth". Peter had done extensive and very respectful research into the ritual of choosing the next "Suranyami" and he had, in fact, been asked by the Suranese Royal Family to assist in their holiest of rituals… the passing down from father to son of the holy godhead.
The sacred tree room had grown from Moses' burning bush over thousands of years and turned inside out: it had one secret branch that grew from the outside through the trunk, inside the special room-cavity-altar exposing the holy knot, then continued back out through the trunk to the outside world again.
It was considered auspicious that there were two total foreigners there (surprisingly!) because there had never been anyone from the outside world in the Kingdom of Suran… the coincidence was not taken as mere chance.
The three of us (the son, Peter and myself) entered the tree-room and Peter was asked to use the cylindrical saw to cut out the middle heart of the knot. When this happened, the new godhead flowed out of the tree in a sacred fire that would settle down (usually) on the head of the King’s son… but to everyone’s shock, it settled on mine. I wore the holy flame which engulfed my mind and skull.
There was nothing else to do but walk proudly out onto the branch-walkway and claim myself as the new God-King-Emperor of Suran.
Soon, we were whisked off in a convoy of pink cadillacs with 50s housewives wearing John Galliano-esque lemon-yellow, pistachio-green and pretty-pink shirts and flowing skirts, accessorised with long gloves. The celebrity cavalcade wandered along winding coastal roads to the theme of "A Summer Place" with all the ladies waving their hands like the Queen does, holding their parasols, and rejoicing "Oh, hooray! We have a new God-King!"
http://www.inderscience.com/info/ingeneral/forthcoming.php?jcode=ijahuc
# Forthcoming articles
International Journal of Ad Hoc and Ubiquitous Computing
These articles have been peer-reviewed and accepted for publication in IJAHUC, but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.
Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.
International Journal of Ad Hoc and Ubiquitous Computing (92 papers in press)

Regular Issues

Assessment of heuristics for self-stabilization in real-time interactive communication overlays
by Pelayo Nuño, Juan C. Granda, Francisco J. Suárez
Abstract: Self-stabilization is an autonomic behavior closely related to self-healing. In multimedia communication overlays, self-stabilization copes with network disruptions by dynamically restoring data links and communication paths. The restoration of links may require reorganizing the overlay by establishing new connections among members or modifying existing ones. Self-stabilization techniques are usually triggered asynchronously, either on use (during reorganizations of the overlay) or on event (as a response to a failure), and several heuristics may be used when selecting link peers. In this work, several heuristics to perform on-event self-stabilization in multimedia communication overlays are assessed. A real-time communication overlay deployed according to a full-mesh topology interconnecting a set of multicast groups is used as the assessment framework to evaluate the heuristics. Intensive tests have been carried out to compare and assess heuristics under several overlay topologies and network conditions.
Keywords: self-stabilization; overlay networks; multimedia communications; autonomic computing.
DOI: 10.1504/IJAHUC.2016.10001909

A State-of-Art Approach to Misbehavior Detection and Revocation in VANET: Survey
by Dinesh Singh, Ranvijay Ranvijay, Rama Shankar Yadav
Abstract: With the increased popularity of Internet applications and smart cities, Vehicular Ad hoc Networks (VANETs) have become a prominent research area, and many researchers have addressed their issues in the past decade. Among these issues, misbehavior detection and revocation requires full attention: it is the first and foremost step towards dealing with safety applications in VANETs. Vehicle misbehavior is responsible for the malfunctioning of many network activities (e.g., traffic jams, road accidents, etc.). The misbehavior detection problem becomes more severe for safety-critical applications in VANETs. A misbehaving vehicle must be revoked from the network as early as possible to reduce injuries. Thus, in conjunction with misbehavior detection, the revocation problem also needs to be explored. This paper provides a state-of-the-art review of misbehavior detection and revocation for safety-critical VANETs. We present a detailed survey of relevant research in the area of misbehavior detection and revocation, along with other related issues.
Keywords: Vehicular Ad hoc Network (VANETs); safety applications; misbehavior detection; revocation.
DOI: 10.1504/IJAHUC.2016.10001911

An Energy Model of 4G Smartphone Oriented Towards Typical Network Applications
by TunDong Liu, Yun Lin, ZhuoBin Xu, Yi Xie, FuFeng Chen, GuoZhi Xue
Abstract: In the wave of wireless technologies, 4G has achieved considerable success because of its high data rate and great potential. However, the short battery lifetime of mobile devices hinders the development of 4G. The energy scheme in the LTE protocol, based on Radio Resource Control (RRC) states, optimizes the energy consumption of 4G smartphones to some degree, but it is still far from satisfactory. Therefore, many efforts have been made to study the process of energy consumption in 4G smartphones, which is helpful for improving energy efficiency.
Some hardware-based methods using expensive instruments are accurate but unwieldy. Some software-based methods are flexible but inaccurate because they often omit the influence of communication performance. Up to now, most energy models have paid little attention to the application layer and do not consider the quality of wireless communication. This paper proposes an energy model of the 4G smartphone in which two typical applications (HTTP and FTP) and the data flow of wireless traffic are considered. In order to verify the energy model, an experimental testbed has been set up to capture the data flow and power records during one application. Experimental results have shown that the energy model accurately estimates the energy consumption of a 4G smartphone in one operation of the application, with an accuracy rate higher than 90%. The work described can be considered the first step toward the implementation of a new software-based method to accurately estimate the energy consumption of 4G smartphones.
Keywords: 4G; Energy consumption; Model; State machine; Smartphone.
DOI: 10.1504/IJAHUC.2016.10001912

NetBAN, a concept of Network of BANs for Cooperative Communication: energy awareness routing solution
by Audace Manirabona, Saadi Boudjit, Lamia Chaari Fourati
Abstract: In this paper, NetBAN, a concept of a network of BANs (Body Area Networks) or WBANs (Wireless BANs), is introduced, and a routing solution is proposed to help a group of WBANs cooperate in relaying packets according to the energy consumption rate and the communication link quality. The main goal is to provide the sensors with a technique that helps them deliver their data even when the coordinator's battery is very low or empty, or the connection to the access point is lost, which leads to balancing energy consumption between cooperating coordinators. For this purpose, an energy-threshold-based technique is used, and energy-aware optimized link state routing (EA-OLSR) is defined and used. Simulation results show interesting performance in terms of network lifetime, with a gain of about 30%, and data delivery, with a gain of about 20%.
Keywords: Network of BANs; WBAN; Cooperation; Relay; OLSR; Routing; Energy Aware.
DOI: 10.1504/IJAHUC.2016.10001913

ADiDA: Adaptive Differential Data Aggregation for Cluster Based Wireless Sensor Networks
by Rabia Enam, Rehan Qureshi
Abstract: In dynamic cluster-based Wireless Sensor Networks (WSNs), clusters are formed dynamically and repeatedly to uniformly consume the energy of the nodes. It has been observed in large-scale dynamic cluster-based WSNs that the size of clusters varies significantly in terms of the number of nodes. The data aggregation mechanisms used at the cluster heads do not adapt adequately to such variance in cluster sizes and incur considerable losses, especially in clusters with a large number of nodes. In this paper we propose a novel Adaptive Differential Data Aggregation (ADiDA) method that can minimise the complexity of aggregating large amounts of data into small-sized data packets. The main feature of ADiDA is that in addition to reducing the cost of redundant data transfer in the network, it also optimally utilises the available space in a packet at each cluster head. We have analysed ADiDA on four different types of sensing environments with multiple types of data values.
The results have shown that ADiDA can reduce the payload size requirement to almost one-fourth of the non-compressed payload, and the distortion percentage in the aggregated data decreases by 16% to 41% when compared with summary-based aggregation of data.
Keywords: Cluster based WSN; Adaptive Data Aggregation; Spatial Correlation; Variable sized Clusters.
DOI: 10.1504/IJAHUC.2016.10001914

MT-SECURER: Multi Factors Trust for Secure and Reliable Routing in MANETs
by Zakir Ullah, Muhammad Hasan Islam, Adnan Ahmed Khan, Imran Shafi
Abstract: Routing, which has a pivotal role in the successful working of MANETs, assumes node cooperation for its processes. However, this assumption makes routing vulnerable to various insider and outsider attackers. Therefore, enforcing node cooperation to secure routing from such attackers is a challenging research issue in MANETs. In this paper a trust management scheme named MT-SECURER (Multi Factors Trust for SECUre and REliable Routing in MANETs) is proposed to make routing secure and reliable against insider attackers launching greyhole and blackhole attacks. The proposed scheme develops trust using multiple factors, i.e., node cooperation from communication networks, and node relationship maturity and mutual friends from social networks. These factors are acquired using observer nodes' personal observations and neighbours' recommendations. Furthermore, neighbours' recommendations are passed through a dissimilarity-factor-based filter to remove false recommendations. As a test case, the proposed scheme is integrated into the AODV routing protocol, and extensive simulations are conducted to examine the effectiveness and competence of the proposed scheme in the presence of insider attackers launching blackhole and greyhole attacks. Experimental results show significant improvement in packet delivery ratio, throughput and normalized routing load, with slightly increased average end-to-end delay, when compared to contemporary schemes in the presence of the asserted attacks.
Keywords: Attacks; MANETs; Routing; Trust; Trust Management.
DOI: 10.1504/IJAHUC.2016.10001915

An Energy-Efficient Point-Coverage-Aware Clustering Protocol in Wireless Sensor Networks
by Tri Gia Nguyen, Chakchai So-In
Abstract: Preserving coverage is one of the most essential functions to guarantee quality of service in wireless sensor networks. With this key constraint, the energy consumption of the sensors, including their transmission behaviour, is a challenging problem in terms of how to use them efficiently while achieving good coverage performance. This research proposes a point-coverage-aware clustering protocol (PCACP) based on point-coverage awareness with energy optimizations. It takes a holistic view with respect to sensor activation, network clustering, and multi-hop communication to improve energy efficiency, i.e., extending network lifetime while preserving and maximizing network coverage. The simulation results demonstrate the effectiveness of PCACP, which strongly improves performance. Given a diversity of deployments with scalability concerns, PCACP outperformed other competitive protocols, i.e., LEACH, CPCP, EADC, and ECDC, in terms of conserving energy, sensing-point coverage ratios, and overall network lifetime.
Keywords: coverage-aware; point coverage; sensor activation; clustering; energy-efficient; wireless sensor networks; WSNs.
DOI: 10.1504/IJAHUC.2016.10001976

An Optimal Algorithm for Small Group Multicast in Wireless Sensor Networks
by Weizhong Luo, Jianxin Wang, Zhaoquan Cai, Gang Peng, Jiong Guo, Shigeng Zhang
Abstract: We propose an optimal algorithm to construct a delay-bounded minimum energy routing tree for small group multicast in wireless sensor networks. Finding the minimum energy multicast tree with constrained delay in the general case has been proved to be an NP-hard optimization problem. Existing works mainly focus on developing approximation or heuristic algorithms to find approximate solutions. We formally define the Min-power h-Multicast problem - to find a minimum energy multicast tree in which the path from the source to every destination node is less than h hops - and translate it into a minimum Steiner tree problem. We then develop a dynamic programming algorithm to get an optimal solution to the problem in O(3^k(n+m)h + 2^k(n+m)^2h) time, where k is the size of the multicast group, and n and m denote the numbers of vertices and edges of the graph characterizing the network, respectively. Simulation results show that, compared with existing heuristic or approximation algorithms, our algorithm saves energy consumption by factors between 19% and 42% with comparable running time for small group multicast.
Keywords: Delay-bounded multicast; energy consumption optimization; fixed parameter tractable; NP-hard.
DOI: 10.1504/IJAHUC.2016.10001916

Direction-Based Urban Broadcast Protocol for Vehicular Ad hoc Networks
by Gui-Sen Li, Xu-Hui Chen, Ke-Shou Wu, Ren Chen
Abstract: Multi-hop broadcast for vehicular ad hoc networks in urban environments is a difficult task because of the impact of intersections. Forwarding through the next relay faces the problem that some branches of the intersection may not be covered in time. In this paper, we propose a novel broadcast protocol for urban environments. The proposed protocol considers the transmission direction of the broadcast message and the moving direction of the forwarding vehicle when delivering the message over the intersection. It delivers the message to all road directions at the intersection in an efficient and fast way. Using the direction information, an agent vehicle mode is also proposed to cope with the disconnection problem. Performance evaluation results show that the proposed protocol improves reachability and speeds up the broadcast process.
Keywords: VANET; urban broadcast; intersection; transmission direction; moving direction.
DOI: 10.1504/IJAHUC.2016.10001978

Modelling and Mitigating Spectrum Sensing Non-cooperation Attack in Cognitive Radio Network
by Roshni Rajkumari, Ningrinla Marchang
Abstract: Collaborative spectrum sensing (CSS) is known to improve spectrum sensing performance in Cognitive Radio Networks. In CSS, secondary users participate by sharing their local sensing results. They participate in the sensing process at their own cost, i.e., they expend some amount of energy and time for sensing and sharing. However, a selfish user may refrain from collaborating in the spectrum sensing process in order to save energy, which results in improper sensing. While this problem is widely known, we call it the spectrum sensing non-cooperation (SSNC) attack for easy reference. In this paper, a collective-action prisoner's dilemma game is used to model the SSNC attack.
To handle this attack, repeated-game punishment mechanisms, namely the Tit-for-Tat and Grim strategies, are used. In addition, modified Tit-for-Tat and modified Grim strategies are proposed to handle this attack in the presence of reporting channel error.
Keywords: Cognitive radio network; collaborative spectrum sensing; spectrum sensing; non-cooperation attack; game theory; fusion rules.
DOI: 10.1504/IJAHUC.2016.10001917

Enhanced Identity Privacy in UMTS
by Hiten Choudhury, Basav Roychoudhury, Dilip Kr. Saikia
Abstract: Identity privacy in mobile networks has been an active and exciting research area for quite some time. Earlier, researchers were focused on protecting the subscriber's identity over the radio access link between the mobile device and the visited serving network. Now, they are considering the need to protect the identity from the serving network itself, due to the security and flexibility that this promises to bring to roaming situations. Towards this, numerous protocols have been proposed for mobile networks in general. In UMTS, one of the most popular and widely deployed mobile networks across the globe, the status of identity privacy is no different. However, a surprising fact is that not much research that tries to protect the subscriber's identity from the serving network has been conducted with regard to UMTS specifically. Even recent works in this area seem to ignore this important security aspect. In this paper, we make an effort to fill this gap by proposing an identity-privacy-ensuring extension that can be easily adapted in UMTS without disturbing the current protocol flow. We also establish the security, robustness and correctness of this extension through statistical, security and formal analysis.
Keywords: UMTS; Identity Privacy; Identity Confidentiality; Anonymity; Security.
DOI: 10.1504/IJAHUC.2016.10001918

A new hybrid routing protocol for Wireless Sensor Networks
by Slaheddine Chelbi, Habib Dhahri, Majed Abdouli, Claude Duvallet, Rafik Bouaziz
Abstract: Wireless Sensor Networks (WSNs) differ from traditional wireless communication networks in several characteristics. One of these characteristics is power awareness, due to the fact that the batteries of sensor nodes have a restricted lifetime and are difficult to replace. In order to save the overall energy of the system and to fairly balance the load among nodes, we propose a New Hybrid Routing Protocol, called NHRP, which incorporates two modules: (1) a scheduling mechanism, called the Advanced Energy-efficient Coverage Control Algorithm (AECCA), based on binary particle swarm optimization (PSO); this approach aims to activate only the necessary number of sensor nodes while preserving full coverage; and (2) a cluster-based protocol using the Fuzzy C-Means (FCM) method, named Advanced FCM (AFCM). Several works show that FCM algorithms help optimize the clusters by minimizing the distance between the sensor node and the cluster centre. Yet, in a cluster-based approach, the cluster head is usually selected from among the sensor nodes, and it can die quickly due to this extra workload. To mitigate this problem, the second module is based on the use of special mobile nodes with controllable trajectories which act as gateways. First, the AECCA and AFCM results are compared to ECCA and FCM, respectively. We prove that AECCA and AFCM give better performance than ECCA and FCM, respectively, in prolonging network lifetime.
Second, simulation results show that our new hybrid routing protocol improves the fairness of energy consumption among all sensor nodes and achieves an obvious improvement in network lifetime.
Keywords: Wireless Sensor Networks; Energy Saving; particle swarm optimization; Fuzzy C-Means; mobile nodes.
DOI: 10.1504/IJAHUC.2016.10001982

A Dynamic Trust Evolution Model for MANETs based on Mobility
by Vijender Busi Reddy, Venkataraman S, Atul Negi
Abstract: Mobile Ad-hoc Networks (MANETs) are open, anonymous, dynamic and mobile in nature, which makes them vulnerable to several types of attacks. Mobility of a node is a source of vulnerability and raises challenges in the setup and maintenance of secure and reliable communication. The present literature does not seem to adequately address these concerns. We propose here a trust computation model that enables a node in a network to assess confidence in its immediate and extended neighbourhood. We derive a trust parameter called the Mobility Factor to assess a node's mobility without using any extra hardware. Our model improves upon existing schemes when assessing trustworthy behaviour. The effects of mobility are included as an integral parameter in the total trust evaluation. The improved trust assessment allows identification of capricious behaviour of a node using the concept of trust flutter. A formal proof presented here supports the adequacy of the proposed approach. ns-2 simulations with mobility and malicious nodes show better performance of the proposed approach as compared to recent work (TSR [1]). Improved resilience against packet-dropping/modification attacks and on-off attacks was observed.
Keywords: Ad-hoc; Trust; Mobility; MANETs.
DOI: 10.1504/IJAHUC.2016.10001919

An Efficient Mobile Grid Scheme for Service Tracking in VANETs
by Chyi-Ren Dow, Yu-Hong Lee, Shiow-Fen Hwang, Van-Tung Bui
Abstract: The majority of modern vehicles are equipped with various sensors to enhance the driving experience. With these sensors, a vehicle can be considered an integrated sensor system that can sense data in intelligent transportation systems. However, compared with legacy networks, information cannot be easily exchanged through a vehicular ad hoc network. Intelligent transportation systems deployed on a vehicular ad hoc network are usually highly distributed and depend on metadata exchange, data sharing, and service tracking mechanisms. This study focused on a mobile grid scheme and a service tracking protocol. In the conventional scheme, the geo grid scheme is used to divide a map into grids. Each grid elects a grid leader to manage the information of the grid. However, the initial design has a fixed grid size and therefore network maintenance may be expensive. We designed the mobile grid sequence scheme as follows. Mobile grid structures use the characteristic of group mobility to increase grid structure flexibility and reduce maintenance costs. In a mobile environment, the location of a service provider is not fixed. When the service provider moves to another location on the road, users may need to re-perform the discovery process. The tracking protocol is used to manage the service footprint information and continuously track the target service. Indicators manage the footprint of the service providers in their branches. Users can simply rely on the footprint information to locate target services.
According to the experimental results, the mobile grid sequences effectively extended the service time, and information was reliably shared in the mobile grid sequences. Our service tracking scheme is more efficient than other schemes in terms of success rate and service tracking time.
Keywords: Vehicular Ad Hoc Networks; Mobile Grid; Service Tracking.
DOI: 10.1504/IJAHUC.2016.10001948

Routing Problems for Vehicle Ad-Hoc Networks using the Virtual Message Ferry Routing Scheme
by Chu-Fu Wang, Yang-Chih Chiu
Abstract: The communication links between vehicles in Vehicular Ad-Hoc Networks (VANETs) suffer from the intermittent connection problem due to node mobility, and consequently the routing design is very challenging. The virtual message ferry routing scheme is one of the efficient approaches for aiding routing decisions in VANETs to cope with the routing problem. The role of the Virtual Message Ferry (VMF) can be played by any vehicle on the road, and the current VMF role-playing vehicle is switched to another vehicle when it drives away from the preplanned VMF trajectory. The considered VMF routing scheme does not alter the driving behavior of any chosen vehicle, including its driving speed, its original relocation plan, etc. This paper considers routing problems in the VMF backbone network formed by the trajectories of multiple VMFs. A network optimization problem formulation and transformation are given. Heuristic algorithms to find near-optimal solutions are also proposed.
Keywords: VANET; routing; multiple virtual message ferry; backbone network; intermittent connected routing.
DOI: 10.1504/IJAHUC.2016.10001983

An Efficient Fine-grained Access Control Scheme for Hierarchical Wireless Sensor Networks
by Santanu Chatterjee, Sandip Roy
Abstract: Fine-grained access control is used to assign unique access privileges to a particular user for accessing real-time and mission-critical data directly from the nodes inside a wireless sensor network (WSN) and for protecting sensitive sensor information from unauthorized access. In this paper, we propose a new fine-grained access control scheme based on key-policy attribute-based encryption (KP-ABE) suitable for hierarchical wireless sensor networks. The strengths of the proposed protocol are that it provides fine-grained access control with authentication and achieves good properties such as efficient user revocation and new node deployment at any time without incurring large overheads. Our proposed scheme has significantly lower computational, communication, storage and energy costs compared to other related fine-grained access control schemes. In addition, the proposed scheme provides unconditional security against privileged-insider key abuse and node capture attacks, ensures forward and backward secrecy, and also prevents other attacks such as denial-of-service, replay and man-in-the-middle attacks. We also simulate the proposed scheme for formal security verification using the widely accepted Automated Validation of Internet Security Protocols and Applications (AVISPA) tool. Using the AVISPA model checkers, we show that our scheme is secure against possible passive and active attacks.
Keywords: Attribute-based encryption; Fine-grained access control; Bilinear maps; Hierarchical WSNs; AVISPA.
DOI: 10.1504/IJAHUC.2016.10001987

Joint Time Synchronization and Localization of Multiple Source Nodes in Wireless Sensor Networks
by Yan Changhong
Abstract: In an asynchronous network, the local clock of a sensor node has clock drift with respect to the real clock because of imperfect hardware, so time synchronization and localization should be jointly conducted for time-based source location estimates. In this paper, semidefinite programming (SDP), complexity-reduced SDP and linear least squares (LLS) estimators are proposed for time synchronization and localization of multiple source nodes. The proposed algorithms provide joint estimates of the source locations and clock parameters, and avoid the shortcoming of the maximum likelihood (ML) estimator, which requires an initial solution. A location refinement (LR) technique is then introduced to refine the estimated parameters. The simulations show that the original SDP algorithm provides better accuracy than the complexity-reduced SDP; however, the complexity-reduced SDP runs faster. Although the complexity of the LLS estimator is the lowest among the three proposed algorithms, the convex optimization algorithms, including the original SDP and the complexity-reduced SDP, have more robust performance compared with the LLS estimator.
Keywords: Wireless sensor networks; localization; time synchronization; convex optimization; semidefinite programming.
DOI: 10.1504/IJAHUC.2016.10001988

Cloud-based Mobile Service Provisioning For System Performance Optimization
by ChunLin Li, Jing Zhang
Abstract: Currently, mobile applications require intensive computational resources, particularly CPU, RAM, storage, and battery, to successfully complete the expected computing operations. Although distant clouds feature high availability and elastic scalability, the performance gain of utilizing such resources is decreased by the high communication latency caused by the large number of intermediate hops between the mobile device and the distant public cloud. Therefore, a local cloud is a suitable choice for some mobile devices. In this paper, a hybrid cloud-assisted mobile service optimization model is proposed to tackle the limited resources of mobile devices and enhance overall system performance. The aim of hybrid cloud-assisted mobile service optimization is to optimize mobile cloud system utility while satisfying a huge number of mobile requests, improving individual users' QoS, and reducing system overheads. The hybrid cloud-assisted mobile service scheduling algorithm enables mobile applications running on mobile devices to complete all tasks by leveraging the computing resources of the public cloud and the local cloud. The proposed algorithm is validated through a series of experiments.
Keywords: cloud-assisted; mobile service optimization; context awareness.
DOI: 10.1504/IJAHUC.2016.10001991

Congestion Control and Fairness with Dynamic Priority for Ad hoc Networks
by Tapas Mishra, Sachin Tripathi
Abstract: An ad hoc sensor network is a collection of ad hoc nodes with sensing capability. Recently, the network structure has changed rapidly in modern applications and has imposed priorities for heterogeneous applications. In some scenarios, the priorities of the flows need to change in the middle of the communication.
However, the major problem is that the packets of different applications are not served at the destination in their expected priority ratio, even if the priority does not change. Considering this problem, this paper presents a fair packet scheduling policy which collects information from each individual flow according to its priority and auto-updates the priorities of flows when required. Moreover, a hybrid congestion control technique based on queue occupancy and channel utilization is framed to control congestion. The presented work focuses on a prioritized packet scheduling policy using multiple queues, whose lengths are determined by the priority of each application. The proposed model has been simulated using NS2 for two applications, where each application carries multiple flows. The simulation results show that the internal buffer is occupied by the respective packets, which are transmitted according to their priority ratio. Moreover, it reduces overall packet loss, which allows more packets to be served at the destination than the existing fairness protocols, and the packet service rate responds instantly as soon as the priority changes.
Keywords: Active queue monitoring; Ad hoc networks; Fair queuing; Packet scheduling; Congestion control; Transmission control protocol.
DOI: 10.1504/IJAHUC.2016.10001992

UAVs Assisted Queue Scheduling in Ground Ad Hoc Networks
by Vishal Sharma, Rajesh Kumar
Abstract: Hybrid networks provide a vast range of applications in areas of military and civilian activities. A combined operation of two or more networks can provide reliable connectivity and guaranteed Quality of Service (QoS) to end users. One example of hybrid network formation is Unmanned Aerial Vehicle (UAV) assisted ground ad hoc networks. These networks comprise two differently operating network units consisting of aerial and ground nodes. Efficient coordination between these networks can resolve complex issues such as coverage, proper connectivity, scalability, and QoS. However, reliable data transmission with enhanced QoS is one of the key challenges in these types of hybrid ad hoc formations. Efficient QoS provisioning provides transmission at a higher data rate, low jitter, and enhanced connectivity. Considering this problem, an efficient queue scheduling approach is proposed that provides improved QoS to end users. The proposed queue scheduling approach is developed in two parts. The first part utilizes a quaternion-based Kalman filter to find the appropriate locations for the placement of each UAV. This allows proper connectivity between the ground nodes and the UAVs using a scheduling cost function. In the second part, Satisfied Importance Analysis (SIA) is used to find the governing rules for the selection of the appropriate queue to be transmitted. The proposed approach allows enhanced connectivity between ground nodes and UAVs that act as aerial relays. Enhanced connectivity and efficient transmission are attained using the proposed approach. The effectiveness of the proposed model is demonstrated using simulations.
Keywords: UAVs; Positioning; Quality of Service; Delays; Throughput.
DOI: 10.1504/IJAHUC.2016.10001993

Rendezvous in Cognitive Radio Ad hoc Networks: A Survey
by Aishwarya Ukey, Meenu Chawla
Abstract: Cognitive radio networks (CRNs), in conjunction with dynamic spectrum access, cope with the spectrum scarcity and underutilization problem through opportunistic sharing of spectrum, and provide dynamic access to the free portions of the spectrum allotted to licensed users. The functionality of a CRN relies on the cognitive capability and reconfigurability of cognitive radios (CRs), which enable secondary users (SUs) to sense and identify unused spectrum and allow dynamic access between different spectrum bands. A fundamental process in the formation of a CRN is neighbor discovery, also referred to as the rendezvous of SUs, where SUs meet on commonly available channels and establish communication links for information exchange, spectrum management, and data communication. Rendezvous on a common channel is non-trivial, as SUs do not have any network-related information and are unaware of the presence of other SUs before the rendezvous. Also, due to the dynamics of licensed users' activity and diversity in the temporal and geographical locations of SUs, the free available channels sensed by SUs usually differ. Thus, it is quite difficult to identify a channel commonly available to all SUs. The absence of a central authority, multi-hop architecture and mobility of nodes further complicate the rendezvous process. This paper focuses on the taxonomy and challenges relevant to the rendezvous phenomenon of SUs and provides a brief overview and comparative qualitative analysis of state-of-the-art rendezvous algorithms designed for cognitive radio networks.
Keywords: CRN; cognitive radio network; ad hoc network; neighbor discovery; rendezvous phenomena; survey of rendezvous algorithms; synchronous and asynchronous rendezvous algorithms; channel hopping; rendezvous algorithm taxonomy and challenges.
DOI: 10.1504/IJAHUC.2016.10002293

A Service Oriented Adaptive Trust Evaluation Model for Ubiquitous Computing Environment
by Jagadamba Gurappa
Abstract: The increased participation of various devices and networks in the ubiquitous computing environment poses a major challenge to running high-end cryptographic algorithms with larger key sizes when resources vary over time. These cryptographic algorithms make applications highly platform dependent, susceptible to confidentiality attacks, and prone to unreliable services. The involvement of heterogeneous systems, devices, and context awareness raises the significance of trust and trust evaluation in the computing environment. Thus, trust evaluation based on context and service requirements can be adapted to decide the applicable level of services and the corresponding security. Hence, this paper presents a service-oriented adaptive trust evaluation model for the ubiquitous network according to the security requirements of the services. The proposed trust evaluation model computes direct trust based on various interaction properties, and recommendation trust by filtering out dishonest recommenders in the time context. Fine tuning of trust is done by evaluating the adaptive trust based on the service request made by an entity. A balanced blending of direct and recommendation trust is done with trust weights to make them relevant to current application scenarios.
The results are compared with some of the available schemes and found to be consistently good in performance for today's ubiquitous networks.
Keywords: Ubiquitous Computing Environment; Service-oriented; Context-aware; Trust; Direct trust; Recommendation trust; Adaptive trust; Adaptive security.
DOI: 10.1504/IJAHUC.2016.10002303

High performance target tracking scheme with low prediction precision requirement in WSNs
by Anfeng Liu, Shaona Zhao
Abstract: Tracking a mobile target is one of the most important applications of wireless sensor networks in surveillance systems. Researchers widely believe that selecting proactive nodes in the region where the target may arrive at the next moment can yield good performance in terms of energy efficiency, tracking probability and tracking precision. However, accurately predicting the movement of a mobile target is a great challenge. This paper proposes a high-performance tracking scheme with a low requirement on prediction accuracy (low prediction precision requirement target tracking, LPPT), which can work with a pre-existing target prediction algorithm. In the LPPT scheme, residual energy is employed to select more proactive nodes in the non-hotspot area, while fewer proactive nodes are selected in the hotspot area. Both theoretical and numerical simulation results show that the proposed scheme significantly improves the probability of target detection and energy efficiency and decreases the detection delay, while guaranteeing the network lifetime.
Keywords: wireless sensor networks; target prediction; sleep scheduling; network lifetime; energy efficient.

A Cross-Layer Interference and Delay-aware Routing Metric for Infrastructure Wireless Mesh Networks
by Narayan D G, Uma Mudenagudi
Abstract: In this paper, we propose a new cross-layer interference and delay aware routing metric for multi-radio infrastructure Wireless Mesh Networks (WMNs). WMNs are an emerging technology and are used as backbone networks to connect various types of networks to the Internet. These networks use the Multi-Channel Multiple Radio (MCMR) capabilities of mesh routers to achieve high performance. However, MCMR nodes introduce inter-flow and intra-flow interference in multi-hop mesh networks, which can degrade QoS. Thus, the design of routing protocols combined with routing metrics to improve QoS has become an important research issue. Towards this, several cross-layer routing metrics have been proposed considering the types of interference and other link quality parameters. However, most of these metrics have their own disadvantages and lack an analytical model in their design. To address this, we analytically derive our routing metric using the 802.11 Distributed Coordination Function (DCF) basic access mechanism. Using this model, we design and implement a routing metric called Cross-layer Interference and Delay Aware (CL-IDA) by estimating delay and inter-flow and intra-flow interference. We implement this metric in the well-known routing protocol Optimized Link State Routing (OLSR) using NS2. The results reveal that the proposed routing metric performs better in terms of throughput, average end-to-end delay, routing overhead and route stability compared to well-known routing metrics.
High performance target tracking scheme with low prediction precision requirement in WSNs
by Anfeng Liu, Shaona Zhao
Abstract: Tracking a mobile target is one of the most important applications of wireless sensor networks in surveillance systems. Researchers widely believe that selecting proactive nodes in the region the target may reach at the next moment yields good performance in terms of energy efficiency, tracking probability and tracking precision. However, accurately predicting the movement of a mobile target is a great challenge. This paper proposes a high-performance tracking scheme with a low prediction-accuracy requirement (low prediction precision requirement target tracking, LPPT), which can work with a pre-existing target prediction algorithm. In the LPPT scheme, residual energy is used to select more proactive nodes in the non-hotspot area, while fewer proactive nodes are selected in the hotspot area. Both theoretical and numerical simulation results show that the proposed scheme significantly improves the probability of target detection and the energy efficiency and decreases the detection delay, while guaranteeing the network lifetime.
Keywords: wireless sensor networks; target prediction; sleep scheduling; network lifetime; energy efficient.

A Cross-Layer Interference and Delay-aware Routing Metric for Infrastructure Wireless Mesh Networks
by Narayan D G, Uma Mudenagudi
Abstract: In this paper, we propose a new cross-layer interference- and delay-aware routing metric for multi-radio infrastructure wireless mesh networks (WMNs). WMNs are an emerging technology used as backbone networks to connect various types of networks to the Internet. These networks use the multi-channel multi-radio (MCMR) capabilities of mesh routers to achieve high performance. However, MCMR nodes introduce inter-flow and intra-flow interference in multi-hop mesh networks, which can degrade QoS. The design of routing protocols, combined with routing metrics that improve QoS, has therefore become an important research issue. Several cross-layer routing metrics have been proposed that consider the types of interference and other link-quality parameters; however, most have drawbacks of their own and lack an analytical model in their design. To address this, we derive our routing metric analytically from the 802.11 distributed coordination function (DCF) basic access mechanism. Using this model, we design and implement a routing metric called cross-layer interference and delay aware (CL-IDA) by estimating delay, inter-flow interference and intra-flow interference. We implement this metric in the well-known optimized link state routing (OLSR) protocol using NS2. The results reveal that the proposed routing metric performs better than well-known routing metrics in terms of throughput, average end-to-end delay, routing overhead and route stability.
Keywords: Wireless mesh networks; Multi-radio; Routing metrics; Cross-layer; CL-IDA.
DOI: 10.1504/IJAHUC.2016.10002304

History Based Multi-Node Collaborative Localization in Mobile Wireless Ad Hoc Networks
by Wenyuan Chen, Songtao Guo, Fei Wang
Abstract: Recent years have witnessed growing interest in localization algorithms for wireless ad hoc networks. In most localization algorithms, increasing the density of anchor nodes is one of the main strategies for improving localization accuracy in dense networks. In this paper, based on the number of reference nodes, we propose a distributed localization algorithm, the history-based multi-node collaborative localization (HMCL) algorithm, which provides a potential approach to localization in sparse ad hoc wireless networks. In the proposed algorithm, we exploit a new motion model to filter imprecise estimates based on the historical position information of nodes, which improves localization accuracy and reduces computation overhead and energy consumption. Moreover, we use different strategies to localize nodes of different priorities, measured by the distance information between neighbor nodes. We verify through experiments that the proposed algorithm provides better performance in terms of localization precision and energy consumption, and we also analyze the effect of the number of neighbor nodes, node density and node speed on localization precision.
Keywords: Wireless ad hoc networks; Collaborative Localization; Historical constraints; Neighbor information.

A Low Complexity DWT module and CRS Minimal Instruction Set Computer Architecture for Wireless Visual Sensor Networks
by Jia Jan Ong, Li-Minn Ang
Abstract: Transmission cost, processing complexity and data security are three important elements of wireless visual sensor networks (WVSNs). This paper presents a complete low-complexity processing system that performs data compression, data correction and data encryption. In this system, the discrete wavelet transform (DWT) first decomposes the original image into DWT coefficients to keep the transmission cost low; the coefficients are then encrypted using the Cauchy Reed-Solomon CRS(20,16) coding scheme to ensure data security. A CRS minimal instruction set computer architecture with a DWT filtering module is proposed to perform the compression, encryption and error-correction encoding in a low-complexity processing system. The proposed system is implemented on a field programmable gate array (FPGA) to demonstrate its feasibility for WVSNs. Results on a Xilinx Spartan FPGA show that the proposed system requires a lower implementation complexity of 2536 slices compared with existing systems such as the Crypto-Processor (4828 slices) and SPIHT CRS MISC (5017 slices).
Keywords: Wireless Visual Sensor Networks; Discrete Wavelet Transform; Cauchy Reed Solomon; Minimal Instruction Set Computer.
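To make the decomposition step concrete, here is a minimal one-level 2-D Haar DWT in Python. It stands in for the paper's hardware DWT filtering module; the CRS(20,16) encoding that follows it in the pipeline is not reproduced, and the tiny input image is an illustrative assumption.

    import numpy as np

    def haar_dwt2(img):
        """One-level 2-D Haar transform; returns LL, LH, HL, HH subbands."""
        a = img.astype(float)
        # Rows: pairwise averages (low-pass) and differences (high-pass).
        lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
        hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
        # Columns: the same split applied to both row results.
        ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
        lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
        hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
        hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
        return ll, lh, hl, hh

    img = np.arange(64).reshape(8, 8)        # stand-in for a captured image
    ll, lh, hl, hh = haar_dwt2(img)
    print("LL subband:\n", ll)               # coarse approximation to transmit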
TLS: Traffic Load Based Scheduling Protocol for Wireless Sensor Networks
by Prasan Kumar Sahoo, Hiren Kumar Thakkar
Abstract: In wireless sensor networks (WSNs), nodes are usually deployed over the monitoring region randomly and densely and are expected to monitor the region for a long duration. These sensors are normally battery powered, so it is essential to regulate their power utilization efficiently. Although most current protocols reduce power utilization by regulating sleep and wake-up schedules, they fail to make the sleep or wake-up schedule of the nodes adaptive to their traffic load. This article proposes a traffic-load-based adaptive node scheduling protocol to determine the active and sleep schedules of the nodes. The entire network is partitioned into a set of virtual zones, and a routing-path selection algorithm is proposed that considers the residual power of the next-hop nodes. Simulation results show that the energy consumption and packet overhead of our protocol are considerably lower than those of similar quorum-based medium access control (MAC) protocols.
Keywords: Wireless Sensor Networks; MAC protocol; scheduling.

EFF-FAS: Enhanced Fruit Fly Optimization Based Search and Tracking By Flying Ad Hoc Swarm
by Vishal Sharma, Roberto Sabatini, Subramanian Ramasamy, Kathiravan Srinivasan, Rajesh Kumar
Abstract: A flying ad hoc swarm configuration refers to a network formed by autonomously operated robots. These robots can be simple aerial nodes or specifically configured unmanned aircraft (UA). The network formation between the aerial nodes allows a number of civil and military applications to be realized. Such networks are autonomous, temporary and mission dependent, and are configured according to the specifications of the mission and its safety-critical tasks. One major application of these ad hoc swarm formations is the efficient search and tracking of an area without redundancy or overlap. Non-redundant cell tracking is computationally expensive and requires optimization strategies to be adopted during the search process. Incorporating the fruit fly optimization algorithm (FOA) into a strategic search-and-track operation simplifies the complexity of the overall system. In the proposed approach, the applicability of FOA is extended by modifying the procedure and features of the algorithm so that an aerial swarm can perform a non-redundant search over a predefined area with lower complexity. The modelling and simulation activities presented demonstrate the effectiveness of the proposed approach for search-and-track operations.
Keywords: Ad Hoc Swarm; Searching; Tracking; Fruit Fly Optimization.
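For readers unfamiliar with FOA, the following is a minimal sketch of the classic algorithm minimising a toy one-dimensional function. The paper's extensions for non-redundant area search and UA swarm kinematics go well beyond this sketch, and all parameter values here are illustrative.

    import math, random

    def foa_minimise(f, iterations=200, flies=20, step=1.0, seed=1):
        rng = random.Random(seed)
        x_axis, y_axis = rng.uniform(-5, 5), rng.uniform(-5, 5)  # swarm location
        best_xy, best_val = None, float("inf")
        for _ in range(iterations):
            for _ in range(flies):
                # Each fly makes a random move around the swarm location.
                x = x_axis + rng.uniform(-step, step)
                y = y_axis + rng.uniform(-step, step)
                dist = math.hypot(x, y) or 1e-12
                s = 1.0 / dist                # "smell concentration" candidate
                val = f(s)
                if val < best_val:
                    best_xy, best_val = (x, y), val
            x_axis, y_axis = best_xy          # the swarm flies to the best fly
        return best_val

    print(foa_minimise(lambda s: (s - 0.5) ** 2))  # minimum near s = 0.5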
Friend Circle Identification in Ego Network based on Hybrid Method
by Ma TingHuai, Fan Xing, Meili Tang, Donghai Guan
Abstract: The ego network, the network of a user and his or her friends, is large-scale and tangled, and a suitable method to administrate it automatically is now imperative. Social network analysis provides some methods to help users classify their friends, including manually categorizing friends into social circles and system classification; however, categorizing friends manually is time-consuming for users, and the results are not accurate enough. In this paper, we discuss how to realize community identification automatically and accurately. To achieve this, we propose a method that uses not only the similarity of user attributes but also the features of the network structure and the contact frequency of friends. On the basis of users' profiles, we first identify the relationships between them. Second, we solve the community identification problem using structural features when profiles are missing. Third, we introduce the concept of contact frequency, which helps identify the relationships between users and their friends more accurately. Extensive experiments on real-world data show that our approach outperforms the state-of-the-art technique in terms of balance error rate and F1 score.
Keywords: ego networks; communities; user attribute; network structure; contact frequency.

Delay-Tolerant Forwarding Strategy for Named Data Networking in Vehicular Environment
by Meng Kuai, Xiaoyan Hong, Qiangyuan Yu
Abstract: Named data networking (NDN) has been considered a promising networking architecture for vehicular ad hoc networks (VANETs). However, Interest forwarding in NDN suffers severe issues in vehicular environments: broadcast storms cause heavy packet loss and huge transmission overhead, and link disconnections caused by the highly dynamic topology lead to low packet delivery ratios and extremely long data-retrieval delays. An efficient NDN forwarding strategy for data retrieval is therefore urgently required. In this paper, we propose the density-aware delay-tolerant (DADT) Interest forwarding strategy to retrieve traffic data in vehicular NDN. DADT specifically addresses data retrieval during network disruptions using delay tolerant networking (DTN), making retransmission decisions based on directional network density. DADT also mitigates broadcast storms by using a rebroadcast deferring timer. We compared DADT against other strategies through simulation, and the results show that it achieves a higher satisfaction ratio while maintaining low transmission overhead.
Keywords: Density-Aware; Delay-Tolerant; Interest Forwarding; Named Data Networking; Vehicular Networks.
DOI: 10.1504/IJAHUC.2017.10013072

Link-Preserving Channel Assignment Game for Wireless Mesh Networks
by Li-Hsing Yen, Bo-Rong Ye
Abstract: To deliver user traffic in a wireless mesh network, mesh stations equipped with multiple interfaces communicate with one another over multiple orthogonal channels. Channel assignment assigns one channel to each interface so as to minimize co-channel interference among wireless links while preserving link connectivity; the interference and connectivity objectives generally conflict. This paper first analyzes the probability of link connectivity when channels are randomly assigned to interfaces. We then propose a game-theoretic approach that jointly considers the two objectives with a unified payoff function. We prove that the proposed approach is an exact potential game, which guarantees stabilization in finite time, and we also prove the link-preserving property of the approach. Simulation results show that the proposed approach generally outperforms its counterparts in terms of network interference when a moderate number of channels is available; for fairness of link interference, both the proposed approach and its variant outperform the counterparts.
Keywords: channel assignment; wireless mesh network; interference; connectivity; game theory.
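The convergence guarantee of an exact potential game can be made concrete with best-response dynamics: each link repeatedly switches to the channel that minimises co-channel conflicts with its interfering neighbours, and a potential function bounds the number of improvement steps. The conflict graph, channel set and pure interference-count payoff below are illustrative assumptions; the paper's payoff additionally encodes link preservation.

    def best_response_assignment(conflicts, channels, assignment):
        """conflicts: symmetric dict, link -> set of interfering links."""
        changed = True
        while changed:   # an exact potential game makes this loop terminate
            changed = False
            for link, neighbours in conflicts.items():
                cost = lambda ch: sum(assignment[n] == ch for n in neighbours)
                best = min(channels, key=cost)
                if cost(best) < cost(assignment[link]):
                    assignment[link] = best
                    changed = True
        return assignment

    conflicts = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
    print(best_response_assignment(conflicts, channels=[1, 2, 3],
                                   assignment={l: 1 for l in conflicts}))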
Performance Analysis of truncated ARQ and HARQ I protocols for cooperative networks using Smart Amplify and Forward Relaying
by Nadhir Ben Halima, Hatem Boujemaa
Abstract: In this paper, we evaluate the theoretical performance of truncated automatic repeat request (ARQ) and hybrid ARQ I protocols, with and without packet combining, in cooperative networks using smart amplify-and-forward relaying. In cooperative networks, retransmission can be performed by the source or by a selected relay; the selected relay is the one offering the best instantaneous signal-to-noise ratio (SNR) on the relaying link. We provide a theoretical framework for the case where source and relay transmissions are combined at the destination using maximum ratio combining of the packets received from the source and the relays. Smart relays, which select the best packet from the source, are studied in this paper. We show that performance is better for smart relays that continuously supervise the transmissions from the source than for conventional relays that overhear only the first source transmission. We also suggest smart relays that listen to each other.
Keywords: HARQ; Cooperative Systems; Amplify and Forward.
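The packet-combining gain comes from maximum ratio combining, under which, for independent branches, the post-combining SNR is the sum of the per-copy SNRs. A minimal sketch with BPSK error rates (the modulation and SNR values are illustrative assumptions, not the paper's setup):

    import math

    def q_function(x):
        return 0.5 * math.erfc(x / math.sqrt(2.0))

    def bpsk_ber(snr_linear):
        return q_function(math.sqrt(2.0 * snr_linear))

    branch_snrs = [2.0, 1.2, 0.8]     # source copy plus two relayed copies
    print("best single-copy BER:", bpsk_ber(max(branch_snrs)))
    print("BER after MRC:       ", bpsk_ber(sum(branch_snrs)))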
Convex Hull Based Trajectory Design for Mobile Sink in Wireless Sensor Networks
by Kumar Nitesh
Abstract: Data collection through a mobile sink (MS) is an efficient way to solve the hotspot/sinkhole problem that usually arises when data are collected through a static sink. In this paper, we propose an algorithm for designing a delay-bound path for the MS based on the convex hull, which we therefore call the concentric convex hull (CCH) algorithm. For a given set of sensor nodes, CCH generates a set of convex hulls as potential paths and selects one of them as the final path for the MS according to certain optimization criteria. Unlike other existing techniques, the proposed technique does not use a travelling salesperson (TSP) tour; this, in turn, reduces the hop count and restricts the time complexity of the proposed technique to O(n^2) for n sensor nodes. We simulate the proposed algorithm and compare and analyze the results against some existing algorithms over diverse network performance metrics.
Keywords: Wireless sensor networks; convex hull; rendezvous points; mobile sink; delay bound path; computational geometry.
DOI: 10.1504/IJAHUC.2017.10006693
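The geometric primitive behind CCH is the convex hull itself. Below is a minimal Andrew's monotone-chain implementation in Python; the concentric-hull generation, rendezvous-point placement and delay-bound check of the paper are not reproduced, and the node coordinates are illustrative.

    def convex_hull(points):
        pts = sorted(set(points))
        if len(pts) <= 2:
            return pts
        def cross(o, a, b):  # z-component of (a - o) x (b - o)
            return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
        def half(seq):
            chain = []
            for p in seq:
                while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                    chain.pop()
                chain.append(p)
            return chain[:-1]
        # Lower hull plus upper hull, counter-clockwise, no repeated endpoints.
        return half(pts) + half(reversed(pts))

    nodes = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 1), (1, 2)]
    print(convex_hull(nodes))  # interior nodes (2,1) and (1,2) are dropped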
Efficient two-party certificateless authenticated key agreement protocol under GDH assumption
by Xie Yong
Abstract: Security and efficiency are two key requirements for most authentication protocols, especially in mobile wireless networks. However, the two pull protocol design in opposite directions, and it is hard to meet both simultaneously. Since certificateless public key cryptography (CL-PKC) has the advantage of eliminating the key escrow problem, many certificateless authenticated key agreement (CL-AKA) protocols have been proposed. Existing CL-AKA protocols may meet the security requirement well or the efficiency requirement well, but not both. In this paper, we propose an efficient two-party CL-AKA protocol with strong security. We perform an in-depth security analysis in the extended Canetti-Krawczyk (eCK) model to show that the proposed protocol is provably secure. The performance analysis shows that the proposed protocol can meet the strong security and efficiency requirements simultaneously.
Keywords: Authentication key agreement; CL-PKC; pairing-free; GDH; eCK model.
DOI: 10.1504/IJAHUC.2017.10006694

Resource-Constrained Task Assignment for Event-Driven Sensing Using Public Bicycles in Smart Cities
by Chiu-Ping Chang
Abstract: Many cities provide public bicycle services to reduce traffic congestion and air pollution. The mobility of public bicycles makes them very suitable for event-driven sensing in smart cities, i.e., collecting data relevant to special events such as car accidents or street parades. The problem is how to assign a set of bicycles to best fulfil the sensing mission, given the constrained storage, battery energy and communication capability of the bicycles. We refer to this problem as resource-constrained task assignment for event-driven sensing (ReConTAES). The goal is to minimize the number of bicycles used while balancing the energy consumption of the selected bicycles. We first formulate the problem as a mixed integer program and then propose a set of greedy heuristics to solve it. We evaluate the proposed algorithms using real trajectories to show their feasibility.
Keywords: smart city; urban sensing; data collection; Public Bicycle System (PBS).
DOI: 10.1504/IJAHUC.2017.10006695

MAP: Efficient Cooperation Induced Routing Scheme for a Delay Tolerant Multi-hop Mobile Network
by Oladayo Olakanmi
Abstract: In a multi-hop network, transmission efficiency depends on the cooperation of all nodes in relaying the data packets of neighbouring nodes. Some nodes may be selfish or mischievous, dropping received packets or refusing to accept packets from other nodes while using other nodes' resources to transmit their own packets, which drastically degrades network performance. In this paper, a fair and effective incentive mechanism that induces cooperation among nodes is proposed. The mechanism applies an auction scheme, initiating an auction between a node and its immediate neighbouring nodes: the neighbouring nodes competitively purchase the packet-transmission liability from the source and re-auction it to their own neighbours. The scheme uses hash operations to secure the payment transaction made in each auction session. Performance characterization and evaluation against fairness and several possible attacks demonstrate that, unlike other routing-based schemes, the scheme strongly stimulates fairness and competitiveness in message relaying among the intermediary nodes. The simulation results also show that some other performance metrics need to be considered in order to achieve optimum performance.
Keywords: Multi-hop; auction policy; competitive backpressure; wireless network; routing; protocol.
DOI: 10.1504/IJAHUC.2017.10006696

Dynamic Utility-Based Buffer Management Strategy for Delay-tolerant Networks
by Ababou Mohamed
Abstract: The delay tolerant network is a networking concept characterized by intermittent connections between nodes, which communicate through the store-carry-and-forward mechanism: a node may keep a message in its buffer for a long period until a communication opportunity occurs and it can transmit the message to other relays or to its final destination. Accordingly, a large number of messages can congest the buffers of some nodes, whose storage capacity is limited. To manage network resources properly, particularly bandwidth and node buffer space, we propose in this paper a new buffer management strategy consisting of message scheduling and dropping policies based on a multi-criteria utility function. Simulation results under the ONE simulator show that the proposed buffer management strategy achieves a higher delivery ratio, reduces the average delivery delay and generates minimal overhead compared with the usual Random, MOFO and FIFO strategies.
Keywords: DTN; Buffer Management; Delivery; Scheduling Policy; Dropping Policy; Utility.
DOI: 10.1504/IJAHUC.2017.10006697

On the Performance Analysis of Wireless Communication Systems over α-µ/α-µ Composite Fading Channels
by Osamah Badarneh
Abstract: The α-µ/α-µ fading channel model results from the product of two independent and non-identically distributed (i.n.i.d.) α-µ variates. To study the performance of wireless communication systems over such a fading model, the probability density function (pdf) and cumulative distribution function (cdf) of the envelope must be obtained. To this end, simple and general closed-form expressions for the pdf and cdf of the product of two i.n.i.d. α-µ variates are derived. Based on these expressions, we obtain closed-form expressions for the outage probability, the average symbol error probability (SEP), the nth moment of the signal-to-noise ratio (SNR) and the ergodic channel capacity, and use them to analyze the performance of a wireless communication system. The analytical results are supported by Monte Carlo simulations, with a perfect match reported over a wide range of SNR values and for several values of the fading parameters.
Keywords: Composite fading channels; multi-path fading; shadowing; α-µ distribution.
DOI: 10.1504/IJAHUC.2017.10006698
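The product envelope can be checked numerically using the fact that, for an α-µ variate, the α-th power of the envelope is gamma distributed with shape µ. The following Monte Carlo sketch relies on that standard property; the parameter values and the unit-mean-power SNR mapping are illustrative assumptions, not the paper's closed-form analysis.

    import numpy as np

    def alpha_mu_samples(alpha, mu, r_hat, size, rng):
        """Draw alpha-mu envelopes: R**alpha ~ Gamma(mu, r_hat**alpha / mu)."""
        w = rng.gamma(shape=mu, scale=r_hat ** alpha / mu, size=size)
        return w ** (1.0 / alpha)

    rng = np.random.default_rng(0)
    n = 200_000
    r1 = alpha_mu_samples(alpha=2.0, mu=1.5, r_hat=1.0, size=n, rng=rng)
    r2 = alpha_mu_samples(alpha=3.0, mu=2.0, r_hat=1.0, size=n, rng=rng)
    snr = (r1 * r2) ** 2              # product-channel instantaneous SNR
    print("empirical outage prob. at threshold 0.1:", np.mean(snr < 0.1))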
Dynamic Group-based Scheduling of Machine-to-Machine Communication for Uplink Traffic in LTE Networks
by Yen-Yin Chu
Abstract: In wireless machine-to-machine communication, the signalling load and radio resource allocation are critical to successful service, because the technology features a large number of connected devices that introduce significant connection- and resource-related signalling overheads at the eNodeB. This paper proposes an uplink scheduling scheme that minimizes signalling overhead and maximizes system throughput for machine-type communication devices in long-term evolution networks. The proposed scheme dynamically adjusts group membership by considering channel conditions and quality of service, and applies the allocation-before-request concept to allocate the residual bandwidth and realize these objectives. Exhaustive simulations were conducted to examine the performance of the proposed scheme. The simulation results show that it achieves not only better system throughput but also less buffer status report signalling than the static group-based scheme.
Keywords: buffer status report; channel conditions; long-term evolution; machine-to-machine; resource allocation.
DOI: 10.1504/IJAHUC.2017.10006699

On the Parallel Programmability of JavaSymphony for Multi-cores and Clusters
by Muhammad Aleem
Abstract: This paper explains the programming aspects of a promising Java-based programming and execution framework called JavaSymphony. JavaSymphony provides unified high-level programming constructs for applications targeting shared-, distributed- and hybrid-memory parallel computers as well as co-processor accelerators, and JavaSymphony applications can be executed on a variety of multi-/many-core conventional and data-parallel architectures. JavaSymphony is based on the concept of dynamic virtual architectures, which allows programmers to define a hierarchical structure of the underlying computing resources and to control load balancing and task locality. In addition to GPU support, JavaSymphony provides a multi-core-aware scheduling mechanism capable of mapping parallel applications onto large multi-core machines and heterogeneous clusters. Several real applications and benchmarks (on modern multi-core computers, heterogeneous clusters, and machines combining different multi-core CPUs and GPU devices) have been used to evaluate the performance. The results demonstrate that JavaSymphony outperforms the Java implementations as well as other modern alternative solutions.
Keywords: Parallel programming; Java; Multi-core Scheduler; GPU computing.
DOI: 10.1504/IJAHUC.2017.10006700

An Efficient Node Deployment Method for Indoor Passive Localization
by Jinjun Liu
Abstract: Most existing studies on device-free indoor localization aim to improve localization performance by deploying a large number of sensor nodes in the indoor environment, resulting in high hardware cost, high energy consumption and other drawbacks. This paper proposes a wrapper, heuristic and sensor-based node deployment method that reduces the number of deployed sensors while achieving high localization accuracy, by selecting the fittest link-feature subset in the localization datasets. The performance results reveal that the proposed method outperforms existing work in terms of the number of deployed nodes; consequently, it can greatly thin out extra nodes, shorten the localization response time and save energy. Furthermore, the method can easily adjust the number of specific nodes by flipping the coefficient ratio of transmitters and receivers in the fitness function.
Keywords: Indoor passive localization; Node deployment; Localization response time; Energy consumption.
DOI: 10.1504/IJAHUC.2017.10006701

Acoustic Energy-based Sensor Localization With Unknown Transmit Energy Levels
by Xiaoping Wu
Abstract: When the transmit energy levels are unavailable, semidefinite programming (SDP), mixed second-order cone programming and semidefinite programming (SOC/SDP), and a linear least squares estimator with source-anchor measurements (LLS-SA) are proposed to estimate source locations in wireless sensor networks. The three proposed algorithms avoid the shortcoming of the maximum likelihood (ML) estimator, which requires an initial solution guess to ensure global convergence. By relaxing the acoustic energy-based localization model into a convex optimization, the SDP and SOC/SDP algorithms provide robust solutions for the source location estimates in cooperative localization. The non-cooperative LLS-SA expresses the source location estimates as algebraic closed-form solutions, which are further improved using a location refinement (LR) technique. The simulations show that the convex optimization algorithms, SDP and SOC/SDP, provide more robust source location estimates than the linear LLS-SA estimator; however, LLS-SA runs faster than SDP and SOC/SDP. The accuracy of the designed SOC/SDP is similar to that of the SDP, but its complexity is much lower for the same network configuration.
Keywords: wireless sensor networks; sensor localization; acoustic energy-based; convex optimization.
DOI: 10.1504/IJAHUC.2017.10006702
Amazon Cloud Computing Platform EC2 and VANET Simulations
by Muhammad Aleem
Abstract: Network simulations are resource- and time-intensive tasks, owing to a number of factors related to scalability with respect to computation time, cost and energy.
Keywords: Academic Clouds; VANET Simulations; Amazon EC2; Large-scale Simulations.
DOI: 10.1504/IJAHUC.2017.10006703

A Hybrid Intelligent Control based Cyber-Physical System for Thermal Comfort in Smart Homes
by Jiawei ZHU
Abstract: With the fast development of human society and the incomparable attention drawn by environmental issues, energy efficiency is playing a significant role in residential buildings. Meanwhile, spending more time at home leads people to constantly improve comfort there. Considering that space heating contributes heavily to residential energy consumption and thermal comfort, this paper presents a novel hybrid intelligent control system that manages space-heating devices in a smart home with advanced technologies, saving energy while increasing the thermal comfort level. The approach combines a meta-heuristic algorithm, used to compute a setpoint from the predicted mean vote (PMV) model, with a proportional-integral-derivative (PID) controller for indoor temperature regulation. Computer simulations are conducted and analyzed to validate the system. The results indicate that the proposed control system provides better thermal comfort than other conventional and intelligent control methods, and consumes less energy when demand response is considered.
Keywords: thermal comfort; demand response; smart home; cyber-physical system; particle swarm optimization.
DOI: 10.1504/IJAHUC.2017.10006704
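The PID half of the hybrid controller is easy to sketch. Here the PMV-derived setpoint is fixed at an assumed 21.5 °C and a toy first-order room model stands in for the building simulation, so all gains and constants are illustrative rather than the paper's tuned values.

    def pid_step(error, state, kp=800.0, ki=0.5, kd=50.0, dt=1.0):
        """One PID update; returns clamped heater power and the new state."""
        integral, prev_error = state
        integral += error * dt
        derivative = (error - prev_error) / dt
        power = kp * error + ki * integral + kd * derivative   # heater watts
        return max(0.0, min(2000.0, power)), (integral, error)

    setpoint, temp, state = 21.5, 15.0, (0.0, 0.0)
    for minute in range(180):
        power, state = pid_step(setpoint - temp, state)
        # Toy room: heating warms it; leakage pulls it toward 10 C outdoors.
        temp += power / 5000.0 - (temp - 10.0) / 200.0
    print("temperature after 3 h:", round(temp, 2), "C")

In the paper's architecture, a meta-heuristic (particle swarm optimization, per the keywords) would periodically recompute the setpoint fed to this loop from the PMV comfort model.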
A Smart Proactive Routing Protocol in Cognitive Radio Networks
by Mahsa Soheil Shamaee, Mohammad Ebrahim Shiri, Masoud Sabaei
Abstract: In this paper, we propose a smart proactive routing protocol based on Q-learning that finds the most stable routes while imposing minimum interference on the primary users. Unlike traditional proactive routing protocols, our method does not broadcast control packets whenever the network topology changes; instead, we apply a generalized version of Q-learning to predict a model of route stability. This model is used to prevent the floods of state information that are ineffective for routing decisions, and the model changes much less frequently than the network topology. In our protocol, secondary users broadcast control packets only when the model changes, which reduces the routing overhead. The simulation results show that our routing protocol outperforms existing ones in terms of throughput, the interference imposed on the primary users' spectrum, and overhead.
Keywords: Proactive Routing; Routes Stability; Control Overhead; Q-learning; Reinforcement Learning; Cognitive Radio; Primary Users; Secondary Users; Opportunistically Access; Channel Availability.

Farmland Multiparameter Wireless Sensor Network Data Compression Strategy
by Feifei Li
Abstract: A certain correlation exists among the parameter data monitored by a farmland wireless sensor network (WSN); analyzing and exploiting this correlation can improve data compression efficiency and reduce network communication power. A data compression algorithm for multi-parameter farmland WSNs is proposed. First, a compression matrix for each cluster is built based on clustering analysis among the parameters and correlation analysis within each category. The parameter ordering is then determined from the structured matrix, which has strong correlation among its rows and columns, and a characteristic analysis of the parameter sequences is conducted. Operators between parameters are constructed to enhance the correlation and reduce the high-frequency component of the matrix. In this way, the information loss during compression is reduced, raising the compression ratio and reducing compression errors. Compression tests show that the proposed algorithm can effectively reduce network data redundancy and energy consumption.
Keywords: wireless sensor network; data compression; correlation between parameters; wavelet transform.
DOI: 10.1504/IJAHUC.2017.10007545

A Scalable Middleware for Context-aware Mobile Applications
by Loris Belcastro, Fabrizio Marozzo, Paolo Trunfio
Abstract: A core functionality of context-aware mobile applications is storing, indexing and retrieving information about users, places, events and other resources. The goal of this work is to design and provide a service-oriented middleware, called Geocon, that mobile developers can use to implement such functionality. To represent information about the users, places, events and resources of context-aware applications, Geocon defines a metadata model that can be extended to match specific application requirements. The middleware includes a geocon-service for storing, searching and selecting metadata about users, resources, events and places of interest, and a geocon-client library that allows mobile applications to interact with the service by invoking local methods. The paper describes the Geocon middleware and presents an experimental evaluation of its scalability on a cloud platform with a real-world mobile application.
Keywords: Context-aware; Mobile applications; Middleware; Scalability; Cloud computing.

A new Scheme for RPL to handle Mobility in Wireless Sensor Networks
by Leila Ben Saad
Abstract: Mobile wireless sensor networks are characterized by dynamic changes in the network topology, leading to route breaks and disconnections. The IPv6 routing protocol for low-power and lossy networks (RPL), which has become a standard, uses the Trickle timer algorithm to handle changes in the network topology; however, neither RPL nor the Trickle timer is well adapted to mobility. This paper investigates the problem of supporting mobility in RPL and enhances RPL to fit sensor mobility in two cases. First, it proposes modifying RPL to fit a dynamic, hybrid topology in the context of medical applications. Second, it investigates a more general case and introduces a new adaptive timer for RPL. The proposed approach is validated through extensive simulations and compared with existing protocols in the literature. Results show that our proposal significantly reduces disconnections and increases the packet delivery ratio while maintaining low overhead.
Keywords: Wireless sensor networks; RPL protocol; Node mobility; Trickle timer.
DOI: 10.1504/IJAHUC.2017.10008392
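For context, the Trickle timer that RPL inherits (RFC 6206) doubles its interval while the network looks consistent and resets to the minimum interval on inconsistency, which is precisely the behaviour an adaptive timer for mobile nodes must revisit. A minimal sketch of that control logic, with illustrative parameter values:

    import random

    class Trickle:
        def __init__(self, imin=1.0, doublings=8, k=3, rng=random):
            self.imin, self.imax = imin, imin * 2 ** doublings
            self.k, self.rng = k, rng
            self.reset()

        def reset(self):
            """Inconsistency heard (e.g., a moved node): restart fast."""
            self.interval = self.imin
            self._begin_interval()

        def _begin_interval(self):
            self.counter = 0
            # A transmission is scheduled in the second half of the interval.
            self.t = self.rng.uniform(self.interval / 2, self.interval)

        def hear_consistent(self):
            self.counter += 1

        def end_of_interval(self):
            transmit = self.counter < self.k   # suppressed if enough echoes
            self.interval = min(2 * self.interval, self.imax)
            self._begin_interval()
            return transmit

    tr = Trickle()
    for _ in range(5):
        print("interval:", tr.interval, "would transmit:", tr.end_of_interval())

Under mobility, the exponential back-off is the weakness: a node that has moved keeps a long interval and reacts slowly, which is what an adaptive replacement timer aims to fix.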
Automatic String Deobfuscation Scheme for Mobile Applications Based on Platform-level Code Extraction
by WooJong Yoo, Minkoo Kang, Myeongju Ji, Jeong Hyun Yi
Abstract: The Android operating system is vulnerable to various security threats owing to structural problems in Android applications. String obfuscation is one of the protection schemes developed to protect Android application code; however, it is also exploited by malware makers, making malware analysis more difficult and time-consuming. This paper proposes an automatic string deobfuscation and application programming interface (API) hiding neutralisation scheme that requires no encryption-algorithm analysis or encryption-key information. The proposed scheme has its own independent obfuscation tool. Further, it extracts and analyses code from the Android platform while the application is being executed and inserts only the return string value from the extracted code into the DEX file. The results of experiments, in which the commercial obfuscation tools Allatori, DexGuard and DexProtector were applied to sample applications, verify the efficacy of the proposed method.
Keywords: Reverse Engineering; Deobfuscation; Mobile Malware; Android.

ACFC: Ant Colony with Fuzzy Clustering Algorithm for Community Detection in Social Networks
by Marjan Naderan, Seyed Enayatollah Alavi
Abstract: In this paper, we propose a bipartite algorithm, ACFC, for finding communities in social networks. First, artificial ants traverse the network, modelled as a graph, following a set of rules to find a 'good region' of edges. Next, we construct the communities, after which local optimization methods are used to further improve the solution quality. Finally, the fuzzy c-means (FCM) clustering algorithm is used to fine-tune the result. In our method, ants are only used to identify good regions of the search space, and construction methods are used to build the final solution. Experimental results on several synthetic graphs and four real-world social networks, compared with six other well-known methods, show that our ACFC algorithm is very competitive against current state-of-the-art techniques for community detection, and that it is more accurate than existing algorithms, performing well across many different types of networks.
Keywords: Community Detection; Social Networks; Ant Colony; Q modularity; Fuzzy Clustering.
DOI: 10.1504/IJAHUC.2017.10008798
An Adaptable CS-Based Transmission Scheme in Wireless Sensor Network
by Hao Yang, Keming Tang
Abstract: Energy-efficient data transmission, an essential requirement of wireless sensor networks, has received a great deal of attention, and compressive sensing (CS) is currently used to reduce the energy consumption of sensors. However, a vital question remains open: whether the execution costs of sensors employing CS are actually negligible. This motivates us to explore the answer from the viewpoint of a real deployment platform. Presenting observations from our operating sensor network, we verify two important facts: (1) the power cost of the measurement processing on sensors is not negligible as the number of samplings increases; and (2) measurements change constantly along the route and force relay sensors to consume more. Based on our findings, we propose an adaptable CS-based transmission scheme, ACS. In our experiments, at least 15% of the energy is saved. Our work gives a potential guideline for future WSN designs in practice.
Keywords: Compressive sensing; Data transmission; Adaptable transmission; Wireless sensor network.

Energy Aware Optimal Slot Allocation Scheme for Wearable Sensors in First Responder Monitoring System
by Mahin K. Atiq, Kashif Mehmood, Muhammad Tabish Niaz, Hyung Seok Kim
Abstract: In recent advances in first-response techniques, the uniform of each first responder working in the emergency field may be equipped with sensors and a gateway. These wearable sensors, in conjunction with the gateway, constitute a first responder monitoring system (FRMS). The FRMS gathers data about the first responders' vitals and the surrounding environment, which is then transmitted to the incident commander. Given the energy-constrained nature of the FRMS, an energy-efficient slot allocation scheme is proposed. The scheme comprises an optimal slot allocation method, based on the Hungarian algorithm, for sensor data collection, and an energy-aware sensing and transmission scheme for the sensor nodes. Simulation results demonstrate the superiority of the proposed scheme in terms of lifetime, residual energy and energy-delay product compared with greedy and first-in-first-out (FIFO) slot allocation schemes.
Keywords: Slot allocation; Hungarian algorithm; Sensing and transmission scheme; Wearable sensor systems; First responder monitoring.
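Optimal slot allocation of this kind is an assignment problem, which the Hungarian method solves in polynomial time; SciPy ships an implementation. The energy-cost matrix below is an illustrative assumption, not the paper's model.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # cost[i, j]: assumed energy cost for wearable sensor i to use slot j.
    cost = np.array([[4.0, 1.0, 3.0],
                     [2.0, 0.5, 5.0],
                     [3.0, 2.0, 2.0]])
    sensors, slots = linear_sum_assignment(cost)   # Hungarian method
    for i, j in zip(sensors, slots):
        print(f"sensor {i} -> slot {j} (cost {cost[i, j]})")
    print("total energy cost:", cost[sensors, slots].sum())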
A Statistical Detection Mechanism for Node Misbehaviors in Wireless Mesh Networks (WMNs)
by Rida Khatoun
Abstract: Wireless mesh networks (WMNs) have become an increasingly popular wireless networking technology for establishing last-mile connectivity in home and neighbourhood networking. In such networks, packet dropping may be due either to an attack or to normal loss events such as bad channel quality; furthermore, path stability is not always considered in the route discovery phase. We consider a special case of denial-of-service (DoS) attack in WMNs known as the greyhole attack, in which a node selectively drops some of the packets it has to forward along the path. To mitigate this attack, we propose a dropping-detection mechanism that allows a mobile node to select the most reliable route to the destination. Our detection module detects misbehaving nodes by comparing the observed packet loss distribution of nodes with the distribution expected when they are well behaved. We validate the proposed approach via extensive simulations using the R software and Matlab.
Keywords: Wireless Mesh Networks; Misbehavior; Detection.
DOI: 10.1504/IJAHUC.2017.10009416

Malicious User Detection with Local Outlier Factor during Spectrum Sensing in Cognitive Radio Network
by Suchismita Bhattacharjee
Abstract: In collaborative sensing, multiple secondary users (SUs) cooperate to reach a more accurate sensing decision when detecting spectrum holes in cognitive radio networks (CRNs). This technique, however, can be adversely affected by malicious users (MUs), who route falsified spectrum sensing data to the fusion centre (FC). This attack is known as the spectrum sensing data falsification (SSDF) attack. The task of the FC is to aggregate the local sensing reports, and it is thereby responsible for making the final sensing decision. In this paper, we propose a detection and isolation scheme based on the local outlier factor (LOF) to detect and reduce the unfavourable effects of the SSDF attack. The key feature of this scheme is that a metric, the LOF, is calculated for each SU; based on the LOF, a decision is made about whether an SU is an attacker. We support the validity of the proposed scheme through extensive simulation results.
Keywords: Cognitive Radio Networks; Local Outlier Factor; Collaborative Spectrum Sensing; SSDF Attack; Independent Attack; Colluding Attack; Fusion Center.
DOI: 10.1504/IJAHUC.2017.10009417
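The LOF computation itself is standard and available off the shelf; the sketch below flags one lying SU among honest ones. The report vectors and contamination level are illustrative assumptions rather than the paper's fusion-centre pipeline.

    import numpy as np
    from sklearn.neighbors import LocalOutlierFactor

    # Each row holds one SU's recent energy-detection reports; row 4 lies.
    reports = np.array([[0.51, 0.49, 0.52],
                        [0.48, 0.50, 0.47],
                        [0.50, 0.53, 0.49],
                        [0.52, 0.48, 0.51],
                        [0.95, 0.05, 0.90]])
    lof = LocalOutlierFactor(n_neighbors=3, contamination=0.2)
    labels = lof.fit_predict(reports)   # -1 marks outliers
    print("flagged as malicious SUs:", np.where(labels == -1)[0])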
A Critical Review of Quality of Service Models in Mobile Ad hoc Networks
by Nadir Bouchama, Djamil Aïssani, Natalia Djellab, Nadia Nouali-Taboudjemat
Abstract: Quality of service (QoS) provisioning in mobile ad hoc networks (MANETs) means providing complex functionality in a harsh environment where resources are scarce, so building an efficient solution to this issue is very challenging. The solutions proposed in the literature are broadly classified into four categories: QoS routing protocols, QoS signalling, QoS-aware MAC protocols, and QoS models, the last of which is the main concern of our study. The contribution of this paper is threefold. First, we propose a set of guidelines for dealing with the challenges facing the design of QoS models in ad hoc networks. Second, we propose a new taxonomy for QoS models in ad hoc networks. Finally, we provide an in-depth survey and discussion of the most relevant proposed frameworks.
Keywords: Mobile ad hoc networks; Quality of service; IntServ; DiffServ; QoS models; QoS routing; Hard QoS; Soft QoS.
DOI: 10.1504/IJAHUC.2017.10009418

Detection of Malicious Packet Dropping Attacks in RPL-based Internet of Things
by Sooyeon Shin, Kyounghoon Kim, Taekyoung Kwon
Abstract: The Internet of Things (IoT) may involve a large number of devices that are highly constrained in power, memory, computation and communication. To cover an increasing number of IoT devices, the IPv6 paradigm is essential. RPL (routing protocol for low-power and lossy networks) is an IPv6-based routing protocol optimized for IoT environments, and it supports a powerful and flexible routing framework for a variety of IoT application scenarios. However, it is susceptible to various security threats, including malicious packet dropping, in which compromised internal nodes threaten the operation of the network. If a node with a lower rank, closer to the root node, maliciously drops packets, it may disrupt basic data transmission or even the entire IoT application service. In this paper, we present a novel method for detecting malicious packet dropping attacks against RPL-based networks. The proposed method follows the anomaly-IDS approach and detects malicious packet dropping in the presence of normal packet losses caused by collisions or channel errors. We evaluate the performance of the proposed method on Contiki's network simulator, Cooja. The evaluation results show good detection performance in RPL-based networks: in every case, the successful detection rate is above 94% and the false alarm rate is below 3%.
Keywords: Internet of Things; IPv6; RPL; 6LoWPAN; Packet Dropping; Detection; ContikiOS; Cooja simulator.

GPU-based distributed bee swarm optimization for dynamic vehicle routing problem
by Maroua Grid, Noureddine Djedi, Salim Bitam
Abstract: There is still a large gap between the requirements and the performance of decision support systems for many problems, among them the vehicle routing problem, which consists in designing a set of optimal routes for a fleet of vehicles serving a given number of customers. New customer orders may arrive while a prior plan is in progress, so routes must be recalculated dynamically. In this paper, we propose a new parallel combinatorial optimization method based on the graphics processing unit (GPU), called the parallel bees life algorithm (P-BLA), to solve the dynamic capacitated vehicle routing problem (DCVRP) efficiently in terms of execution time and to reduce the computational complexity often considered the major drawback of conventional optimization methods. P-BLA is developed using the CUDA framework on an island-based GPU model. In comparisons against conventional methods, namely the genetic algorithm, ant system, tabu search and sequential BLA, P-BLA achieved efficient results on the most widely tested DCVRP benchmarks.
Keywords: DCVRP; k-means; P-BLA; Parallel optimization; GPGPU.

Linear Closed-Form Estimator for Sensor Localization Using RSS and AOA Measurements
by Jian Zhang
Abstract: Using hybrid received signal strength (RSS) and angle of arrival (AOA) measurements, a position estimation model is proposed for sensor localization in three-dimensional space. An unconstrained linear least squares (ULLS) estimator is designed to obtain a closed-form solution for the positions of source nodes when the transmit power is known. To improve the accuracy of the ULLS estimator, a constrained linear least squares (CLLS) estimator is introduced that exploits the constraint condition. When the transmit power is unavailable, a global linear least squares (GLLS) estimator is also put forward to estimate the positions of the source nodes along with the transmit power. The simulations show that the computational complexity of the proposed linear estimators is much lower than that of the convex semidefinite programming (SDP) method. When the measurement noise is small, the linear ULLS, CLLS and GLLS estimators perform better than the SDP method. By exploiting the constraint condition, the accuracy of the CLLS estimator can approach the Cramér-Rao lower bound (CRLB) of position estimation.
Keywords: Wireless sensor networks (WSNs); localization; received signal strength (RSS); angle of arrival (AOA); linear least square.

Reverse-biform Game based Resource Sharing Scheme for Wireless Body Area Networks
by Sungwook Kim
Abstract: Recent advances in wireless sensor technologies have contributed to the development of the wireless body area network (WBAN), which is being considered for applications in the medical, healthcare and sports fields. The specific features and reliability requirements of WBANs introduce a number of new challenges in designing novel WBAN protocols. To cope with these challenges, a game-theoretic approach can allow WBANs to improve their performance while increasing their flexibility and adaptability. In this paper, we develop a new WBAN resource sharing scheme based on the reverse-biform game model. Through dual-level phases, the limited WBAN resource is shared effectively in a coordinated-and-competitive game manner. In particular, we consider the unique features of WBAN applications and provide a generalized solution to the resource sharing problem. The simulation results demonstrate that our game-theoretic framework can respond practically to current WBAN conditions; the approach is suitable for real WBAN operations, particularly with respect to energy efficiency, network throughput and QoS provisioning.
Keywords: Wireless body area networks; Power control algorithm; Data-tuning mechanism; Quality of Service; Reverse-biform game.
Implementation of an Autonomous Intelligent Mobile Robot for Climate Purposes
by Mohammad Samadi Gharajeh
Abstract: One of the main requirements of human life is predicting environmental conditions (e.g., pollution density) over various areas. Since determining climate information using traditional electromechanical devices is very expensive, autonomous robots can carry out this mission. This paper proposes an autonomous intelligent mobile robot for climate purposes, called ClimateRobo, which reports the weather condition based on environmental data. An ATmega32 microcontroller measures temperature, gas, light intensity and distance to obstacles using LM35DZ, MQ-2, photocell and infrared (IR) sensors. A utility function is proposed to calculate the weather condition from the temperature and gas data. The weather condition is then shown on a liquid crystal display (LCD), an appropriate light-emitting diode (LED) is illuminated, and an audio alarm is enabled when the weather condition is an emergency and the ambient brightness is high. The ambient brightness is calculated by a proposed supervised machine learning method using data sensed by the photocell sensor. A fuzzy decision system is proposed to adjust the speed of the DC motors based on the weather condition and light intensity. The robot can detect and pass stationary obstacles using six reflective sensors installed on the left, front and right sides, under six detection scenarios. Simulation results show the performance of the proposed supervised machine learning, fuzzy decision system and obstacle detection mechanism under various simulation parameters. The robot is first simulated in the Proteus simulator and then implemented with electronic circuits and mechanical devices. It could be used in the future by weather bureaus, rescue teams, etc.
Keywords: Autonomous Intelligent Robot; Weather Condition; Utility Function; Supervised Machine Learning; Fuzzy Decision System; Sensor.

Trust Management in Vehicular Ad hoc Networks: a survey
by Ilhem Souissi, Nadia Ben Azzouna, Tahar Berradia
Abstract: Vehicular ad hoc networks (VANETs) provide a variety of applications that aim to ensure a safe and comfortable driving experience. These applications rely on communication and the exchange of data between vehicles, entities that are exposed to many security threats which may affect the reliability of the provided applications. Accordingly, there is a need for a trust management scheme that can cope with these security threats and with the highly dynamic network topology. In this paper, we survey the recent advances in trust management for VANETs. The aim of this paper is to show the importance of an adaptive trust model that can meet the requirements of each class of applications. We therefore present well-defined criteria to point out the key issues of the existing studies and to set out some insights for research within this scope.
Keywords: VANET; security; trust management; attacks; reputation; similarity; behavior; utility.
Multi-constraint Zigbee Routing to Prolong Lifetime of Mobile Wireless Sensor Networks
by Chhagan Lal, Pallavi Kaliyar, Chotmal Choudhary
Abstract: Owing to recent developments in hardware technology and deployment techniques, mobile wireless sensor networks (MWSNs) are attracting a large array of real-world applications. However, the practical realization of these applications is still constrained by the inherent characteristics of MWSNs, such as a highly dynamic topology, low bandwidth and the finite energy of nodes. These characteristics threaten the basic functionalities of MWSNs, which include network formation, self-organization, route discovery and communication management. Hence, improving the lifetime of MWSNs and minimizing mobility-induced route breaks are key issues. Zigbee is an advanced technology built on the IEEE 802.15.4 standard, and it suits constrained networks such as MWSNs: its low energy and network bandwidth consumption and low deployment cost greatly help to prolong the network lifetime. To this end, in this paper we propose a multi-constraint Zigbee-based reactive routing (MZRR) protocol for MWSNs that prolongs the network lifetime. MZRR uses node energy and hop-to-hop transmission efficiency, along with network mobility, as metrics during its route discovery process to discover the routes with the highest remaining lifetime. The protocol ensures that the discovered routes have high transmission efficiency, which leads to low energy and link-bandwidth consumption in the network. By keeping the energy utilization of sensors balanced, MZRR avoids dead zones in the surveillance area, which can be very important in data-critical applications. We fully implement the MZRR protocol in the NS-3 simulator, and the results obtained are compared with traditional AODV and state-of-the-art routing algorithms in terms of relevant parameters such as energy consumption, end-to-end delay, packet delivery ratio, network lifetime and network routing overhead.
Keywords: Wireless Sensor Network; Zigbee; Energy Efficiency; Network Mobility; Link Lifetime; IEEE 802.15.4.

Reliable Sense Maintenance Scheme by Sense Holes Recognized and Self-healing in Sensor Networks of Internet of Things
by J.U.N. LIU, Xu Lu, Tao Wang
Abstract: Recognizing and repairing sense holes in sensor networks is important for sensing performance. Most existing research assumes that the sensors can provide their location or other ideal conditions. In this paper, a distributed reliable sense maintenance scheme based on sense-hole recognition and self-healing is presented. First, the node density required to maintain reliable sensing is reduced through mathematical analysis. Then, a sense-hole recognition algorithm based on Hamiltonian graphs and computational geometry is proposed; it can identify triangular holes and achieves a good recognition rate without accurate positions. A sense-hole self-healing algorithm based on a virtual-force strategy is also presented. Simulation results show that the algorithm is superior to others in energy balancing, and that the sense-hole recognition algorithm can detect sense holes in sensor networks efficiently and quickly.
Keywords: Sensor networks; Sense hole recognized; Self-healing; Nodes deployment.
Application of Congestion Avoidance Mechanism in Multimedia Transmission over Mesh Networks
by Biaokai Zhu, Jumin Zhao, Deng-ao Li, Ruiqin Bai
Abstract: The unreliable nature and shared medium of multi-hop communications make the deployment of multimedia applications in wireless mesh networks a thorny problem. For instance, video is usually compressed into groups of frames before transmission, and losses cause unrecoverable damage during display. The importance of the different frame types varies considerably, yet most existing wireless mesh networks treat them identically. In this paper, we propose a novel congestion avoidance mechanism for multimedia transmission over 802.11e mesh networks. In our mechanism, video packets are given priorities: according to the significance of the frames, an adaptive mechanism maps H.264 video packets to the appropriate access categories of the IEEE 802.11e standard. Simulation results show that our mechanism improves the quality of service (QoS) of multimedia transmission.
Keywords: Multimedia transmission; 802.11e; wireless mesh network.

Efficient Data Dissemination Approach For QoS Enhancement in VANETs
by Sachin Khurana, Gaurav Tejpal, Sonal Sharma
Abstract: Vehicular ad hoc networks (VANETs) have seen tremendous growth in the last decade, providing a vast range of applications in both military and civilian activities. Temporary connectivity between vehicles can also increase a driver's capability on the road. However, such applications require heavy data packets to be shared on the same spectrum without requiring excessive radios. Efficient approaches are thus required that provide improved data dissemination along with better quality of service, so that heavy traffic can easily be shared between vehicles. In this paper, an efficient data dissemination approach is proposed that improves not only vehicle-to-vehicle connectivity but also the QoS between the source and the destination. The proposed approach is analyzed and compared with existing state-of-the-art approaches, and its effectiveness is demonstrated by the significant gains attained in end-to-end delay, packet delivery ratio, route acquisition time, throughput and message dissemination rate.
Keywords: VANETs; delay; QoS; Data Dissemination; Fuzzy sets.

A Challenge-Response Mechanism for Securing Online Social Networks against Social Bots
by Mohamed Torky, Ali Meligy, Hani Ibrahim
Abstract: Social bots are fast becoming a serious security threat to online social networks (OSNs). Social bots are automated software tools able to execute malicious activities in OSN systems in an automated fashion: auto-sharing and posting, sending fake friend requests, harvesting private information, and so on. There is evidence that social bots play a crucial role in penetrating the privacy and security of social networks; hence, these malicious software tools represent a big security challenge for the social network service provider (SNSP). In this paper, we introduce a novel anti-bot mechanism, called Necklace CAPTCHA, for securing OSN platforms against the automated behaviours of social bots. Necklace CAPTCHA is an image-based CAPTCHA that uses the necklace graph approach to generate its challenge-response tests in a novel fashion. The results demonstrate that Necklace CAPTCHA is an effective and secure anti-bot mechanism compared with other CAPTCHAs in the literature with respect to usability and security metrics.
Keywords: Online Social Networks (OSNs); Security and Privacy; System Usability; Social Bots; CAPTCHA; Necklace Graph.
Design of a Monitoring and Safety System for Underground Mines Using Wireless Sensor Networks
by Coert Jordaan, Reza Malekian
Abstract: A mine safety system using a wireless sensor network is implemented. Sensor nodes and a monitoring system are developed for use in underground mining environments. Investigations cover sensor design for underground mines, the use of sensors to profile the underground mining environment, and the use of wireless communication underground. This information is used to design and implement a robust hardware-based sensor node with standalone microcontrollers that samples data from six different sensors, namely temperature, humidity, airflow speed, noise, dust and gas level sensors, and transmits the processed data to a graphical user interface developed using Qt Creator. The reliability and accuracy of the system are tested in a simulated mine. The wireless mine-profiling sensor node, with its monitoring software and receiver unit, was successfully implemented and provided linear and accurate results over nearly a month of daily testing in the simulated mine. A critical success factor for the wireless sensor node is its robust design, which does not easily fail or degrade in performance; the node also has strong, self-adaptive networking functionality to recover in the case of a node failure.
Keywords: Mine safety system; wireless sensors; temperature sensor; humidity sensor; airflow speed sensor; noise sensor; dust sensor; gas sensor; error detection.

Signal Technique for Friend or Foe Detection of Intelligent Malicious User in Cognitive Radio Network
by Saifur Rahman Sabuj, Masanori Hamamura
Abstract: To address spectrum scarcity, cognitive radio networks have been proposed as a means of improving spectrum utilization and efficiency. Under the regulation policy for cognitive radio networks, unlicensed users (secondary users) are allowed to utilize unoccupied spectrum when it is not being used by licensed users (primary users). In practice, security issues arise when intelligent malicious users attack cognitive radio networks and reduce the channels available to secondary users. In this paper, we propose a novel scheme, based on a friend-or-foe (FoF) detection technique with physical-layer network coding, to enable discrimination between secondary users and intelligent malicious users. Theoretical expressions are derived for the probabilities of secondary-user detection, missed detection and false alarm. In addition, the effectiveness of the proposed approach is evaluated by theoretical analysis and Monte Carlo simulation. Furthermore, an algorithm is proposed for distinguishing between a secondary user and an intelligent malicious user. Finally, based on the simulated probabilities and normalized cross-correlation, the proposed scheme is found to perform better with OFDM signals than with QPSK signals over the cognitive radio network.
Keywords: Cognitive radio network; Friend or foe detection; Physical-layer network coding; Cross-correlation.
The motivations behind this are advancement in technology, wireless networking, sensor network, ambient intelligence etc. However, since MANET is inherently more vulnerable to security threats and prone to topology changes, reliability and security issues must be addressed before mobile agents are commercially deployed. There are few works on securing mobile agents but even fewer focuses on MANET. This work is our attempt to design a lightweight trust based reputation scheme to protect the agents against network layer threats. The scheme is based on Dempster-Shafer belief theory. Performance of the trust based reputation scheme with respect to network and system reliability is analyzed. The work is simulated and the results show that even for a fairly hostile MANET, the effective reliability of distributed application can be increased using mobile agent based system. Keywords: Reputation; Trust; Demster-Shafer Belief Theory; Reliability; Monte Carlo Simulation. Energy Efficient Hierarchical Multi-Path Routing Protocol to Alleviate Congestion in WSN by Sunitha GP, Dilip Kumar S M, Vijay Kumar B P Abstract: Congestion easily occurs in wireless sensor networks (WSN) due to it's centralized traffic pattern. It has a negative impact on the network performance in terms of decreasing throughput and increasing energy consumption. %In WSN, the main concern is to control congestionrnIn order to achieve high energy efficiency, network longevity, better fairness and quality of service, it is important to detect congestion in (WSN) in a timely manner. In this paper, an energy efficient hierarchical multi-path routing protocol to alleviate congestion and energy balancing problems is proposed.rn The algorithm is designed by partitioning the network into equal sized zones to achieve complete network connectivity and to reduce packet transmissions. The zone leaders ((ZL's)) selected are shifted on different nodes on network dynamic conditions to avoid hotspots and to provide energy balancing. For efficient data transmission quicker and optimal multiple paths are established using merged zone and Hierarchical network ((HiNet)) topology structure. The proposed algorithm detects the congestion by monitoring the path quality. The detected congestion is a result of overloaded links or nodes on the path. In addition, the algorithm proactively controls the congestion by dynamically shifting the transmission paths on their quality and alleviate it reactively using traffic splitting approach. The goal of this approach is to control resources instead of controlling the network load. The simulation results demonstrate that the proposed algorithm performs better as compared to other congestion control algorithms in terms of throughput, energy consumption and packet delivery ratio in a resource constraint wireless sensor network Keywords: Congestion control; Multi-path routing; Energy efficiency; Load balancing; WSN. An Adaptive Wi-Fi Indoor Localization Scheme using Deep Learning by Chih-Shun Hsu, Yuh-Shyan Chen, Tong-Ying Juang, Yi-Ting Wu Abstract: Indoor localization is an important issue for many indoor applications. Many deep learning-based indoor localization schemes have been proposed. However, these existing schemes cannot adjust according to different environment. To improve the existing schemes, a novel indoor localization scheme, which can adaptively adopt the proper fingerprint database according to the collected signals, is proposed in this paper. 
The proposed scheme consists of the off-line and the on-line phases. A deep learning architecture with seven hidden layers is designed for the off-line phase. Two consecutive hidden layers form a Restricted Boltzmann Machine, which uses the ${k}$-step contrastive divergence algorithm for the layer-by-layer training. The proposed Wi-Fi indoor localization scheme uses two fine-tuning algorithms, namely the cross entropy and the mean squared algorithms, to build the corresponding fingerprint databases. As for the on-line phase, the Bayesian probability algorithm is used for position estimation. The fingerprint databases built during the off-line phase are adaptively adopted during the on-line phase. When the standard deviation of the collected signals does not exceed the threshold, the fingerprint database built by the cross entropy algorithm is adopted; when the standard deviation of the collected signals exceed the threshold, the fingerprint database built by the mean squared algorithm is adopted. The experiments were implemented and validated in a noisy and noise free indoor environment. The experimental results show that the proposed scheme can improve the accuracy of the training data and reduce the localization error. Keywords: deep belief network; deep learning; indoor positioning; Wi-Fi; fingerprinting localization. An Incentive Mechanism with Privacy Protection and Quality Evaluation in Mobile Crowd Computing by Hao Long, Shukui Zhang Abstract: In order to achieve good service quality for mobile crowd computing (MCC), incentive mechanism need to attract more users to participate in the task while avoiding leakage of privacy. We proposed an incentive mechanism with privacy protection and quality evaluation (IMPPQE). Combining the advantages of offline and online incentive mechanisms, we design improved two-stage auction to select the winners, while protecting participants'privacy. Finally, the sensing reports are evaluated by quantitative calculation to ensure the objectivity of evaluation. Extensive simulations show that our proposed method can improve the efficiency and utility of MCC and obtain higher satisfaction rate of data quality. Keywords: Mobile crowd computing; privacy protection; quality evaluation; incentive mechanism; two-stage auction. A Novel Faster Failure Detection Strategy for Link Connectivity using Hello Messaging in Mobile Ad Hoc Networks by Alamgir Naushad, Ghulam Abbas, Ziaul Haq Abbas, Lei Jiao Abstract: Faster failure detection is one of the main steps responsible for efficient link connectivity in mobile ad hoc networks (MANETs). Under a random behavior of network nodes and link/node failure, there must be a unified approach to describe an adequate Hello messaging strategy for link connectivity in MANETs. In order to tackle this issue, we present a strategy for achieving faster failure detection, and derive algorithmic attributes of the proposed strategy on the basis of multiple parameters of interest after modelling it as a Markov process. We also present novel algorithms to minimize the biggest chunk of delay incurred as a result of link re-connectivity and, thus, improve network connectivity in MANETs. Simulation and analytical results indicate efficacy of the proposed strategy in achieving faster failure detection and efficient link re-connectivity. Keywords: Faster failure detection; Hello messages; MANETs; Stochastic processes; Link connectivity. 
Mitigating SSDF Attack using Distance-based Outlier approach in Cognitive Radio Networks by Wangjam Niranjan Singh, Ningrinla Marchang, Amar Taggu Abstract: Collaborative spectrum sensing is employed in cognitive radio networks for improving the spectrum sensing accuracy. The collaborating cognitive radios send their individual sensing results to the fusion center (FC) which aggregates the results to come to a final sensing decision. Malicious radios may adversely influence the final sensing decision by transmitting false spectrum sensing results to the FC. This attack is commonly known as the spectrum sensing data falsification (SSDF) attack. Hence, in the light of such a threat, it is pertinent for the FC to identify any such malicious radios, if any and isolate them from the decision process. In this paper, a distance-based outlier detection approach is proposed which mines the sensing reports at the FC for detection and isolation of such malicious users. Numerical simulations results support the validity of the proposed approach. Keywords: SSDF attack; distance-based outlier detection; cognitive radio network; data mining. IOT Enabled Adaptive Clustering based Energy Efficient Routing Protocol For Wireless Sensor Networks by Muhammad Asad, Aslam Hayat, Yao Nianmin, Naeem Ayoub, Khalid Ibrahim Qureshi, Ehsan Ullah Munir Abstract: Wireless Sensor Networks (WSNs) consists of hundreds and thousands of micro-sensor nodes which are distributed in the sensing field to sense the uncertain events. These sensor nodes plays an important role in Internet of Things (IoT). Energy consumption has been a major issue in WSNs, various energy efficient conventional routing protocols are proposed to minimize the communication energy cost of sensor nodes. In IoT enabled WSNs, these sensor nodes are resource controlled in various ways, such as energy, storage, computing, communication and so on. In conventional routing protocols clustering technique is performing superiorly but due to the limited characteristics, suggested routing protocols are not as much smart and flexible to generate a perfect Cluster-Head (CH) because these routing protocols are limited to centralized and distributed or homogeneous and heterogeneous networks. In this paper, we propose a new IoT enabled Multi Adaptive Clustering (MAC) energy efficient routing protocol for WSNs to minimize the energy dissipation and improve the network performance. This new technique holds the hybrid cluster formation algorithm in which the network topology is divided into two regions the first region is centralized and the second region is distributed. Both regions contains homogeneous and heterogeneous nodes while the sink is static and located in the center of both networks. Specifically, proposed IoT enabled MAC routing protocol holds the major three properties: Enabling of resources to sensor nodes through IoT, hybrid cluster formation to distribute the network load evenly among sensor nodes and a new mechanism to minimize the energy consumption in long range data transmission. Our simulation results give significant proof that MAC performs better than state-of-the-art routing protocols such as LEACH-C, DEEC, D-DEEC and E-DEEC. In addition, performance evaluation proofs that MAC is suitable for the network which requires longer network lifetime. Keywords: Internet of Things; Wireless Sensor Networks; Energy Efficient; Routing Protocols. 
A new approach for the recognition of human activities by SALIMA SABRI, AlOUI Abdelouhab Abstract: The evaluation of a patient\'s functional ability to perform daily living activities is an essential part of nursing and a powerful predictor of a patient\'s morbidity, especially for the elderly. In this article, we describe the use of a machine learning approach to address the task of recognizing activity in a smart home.We evaluate our approach by comparing it to aMarkov statistical approach and using several performance measures over three datasets. We show how our model achieves signi cantly better recognition performance on certain data sets and with different representations and discretisation methods with an accuracy measurement that exceeds 92%and accuracy of 68%. The experiments also showa signi cant improvement in the learning time which does not exceed one second in the totality of the experiments reducing the complexity of the approach. Keywords: Ubiquitous applications; automatic learning; Katz ADL; activity recognition; probabilistic models; wireless sensor network. A Novel Method for Time Delay Prediction in Networked Control Systems by Pei XU, Jianguo WU Abstract: Time delay prediction is a crucial issue of networked control systems. Previous methods mainly use individual model to predict time delay, which causes the limitation that the proposed model can only be suitable applied to either linear or nonlinear data. This paper proposed a novel method to predict time delay in networked control systems which considers several different individual models as the component models to form a combined model and takes full advantages of these component models. By applying Lagrange multiplier method to minimize prediction error, the proposed OW (optimal weight) algorithm is able to calculate the proper weight coefficients of component models in order to improve the prediction performance. Compared with the existing methods, the proposed combined model can improve the prediction accuracy and support robustness, variability and scalability. The simulation experiments verify the effectiveness of the proposed method. Keywords: networked control systems; time delay prediction; RBF neural network; ARMA model; optimal weight; combined model. A reputation-based truthfulness paradigm for multi-hop transmission in cognitive radio networksby Trupil Limbasiya, Debasis Das, Ramnarayan Yadav Abstract: Cognitive radio networks (CRNs) consist of numerous intellectual users with the capability of sensing and sharing underutilized spectrum, and they are called as cognitive users (CUs). The spectrum is allocated to licensed users or primary users (PUs) but, generally, they do not utilize it completely. To overcome the ever-increasing spectrum demand and utilize the underutilized licensed spectrum, the cognitive radio plays a major role. In this distributed environment, the communication among CUs becomes more challenging due to channel heterogeneity, uncontrolled environment, a need of cooperative sensing for accurate sensing result, etc. Additionally, there are different attacks, e.g., primary user emulation (PUE), control channel saturation DoS (CCSD), selfish channel negotiation (SCN), spectrum sensing data falsification (SSDF), modification, and man-in-the-middle. Then, this affects and degrades system and CRN performance, which creates an opportunity for a trust management model to manage the CRNs properly. 
In this paper, we propose an efficient trust management protocol for centralized and distributed CRNs that are to build a trust-based system over the complete cognitive cycle to protect against security attacks brought by the unreliable individuals. To address the security issues, a clustering scheme is used in the distributed environment for effective cooperation among CUs. The security analysis and simulation results represent that the proposed protocol can identify malicious behavior and enhance fairness and powerfulness of the network in centralized and distributed circumstances. Keywords: Attacks; Cognitive radio networks; Integrity; Trust management Throughput of Cooperative HARQ protocols for Underlay Cognitive Radio Networks using Adaptive and Fixed Transmit Powerby Nadhir Ben Halima, Hatem Boujemaa Abstract: In this paper, we study theoretically and by simulations the throughput of cooperative Hybrid Automatic Repeat reQuest protocolsrnfor underlay cognitive radio networks. Both fixed and adaptive power transmission are studied. Different relay selection techniques with Amplify and Forward (AF) and Decode and Forward (DF) relaying are investigated.rnFor fixed transmit power, some relays are not available since they generate an interference to primary receiver larger than a predefined value T.rnThe best relay is selected among the available ones.rnFor adaptive transmit power, all relays adapt their power so that interference to primary receiver is always lower than a predefined threshold $T$. In this case, all relays will be available for retransmitting the secondary source packet.rnBoth Average Interference Power (AIP) and Peak Interference Power Constraints (PIP) are studied. We also analyze the effect of primary interference on secondary throughput. Keywords: Cognitive Radio Networks; HARQ GOOSE: Goal Oriented Orchestration for Smart Environmentsby Vincenzo Catania, Gaetano La Delfa, Giuseppe La Torre, Salvatore Monteleone, Davide Patti, Daniela Ventura Abstract: Smart environment scenarios are characterized by the presence of users, with different needs and preferences, and everyday life objects exploited to meet the expectations of users themselves. Connecting objects to the Internet and making them accessible from remote is not sufficient to make an environment "smart" since such ecosystems should also be able to enable context-sensitive actions along with a management of the interaction between objects and users. In this work, we propose GOOSE, a platform which aimed at interpreting users goals expressed in natural language in order to generate, select, and safely enforce a set of plans to be executed to fulfill target goals. After highlighting the main challenges affecting typical Machine to Machine (M2M) communication scenarios, we show how the use of a semantic reasoner can be used to allow the composition of plans consisting of sequences of services to be called on the smart environment objects. Finally, we address the issue of secure communications between platform and objects, and the management of potentially inconsistent goals. Keywords: Machine to Machine; RESTful services; Goal-oriented Architecture; User-object interactions; Semantic descriptions; Indoor localization; Special Issue on: Emerging Pervasive and Ubiquitous Networking Internet of Things: a Research Oriented Introductory by RAJA S P, SAMPRADEEPRAJ T Abstract: The world has shrunk considerably with the dramatic growth in Internet usage. 
Every computer and mobile phone in the world can be connected together through Internet technology. As a result, intelligent devices are connected and communicate together. The Internet of Things envisions a future where people and intelligent systems cooperate and work together. In the IoT, machine-to-machine communication helps devices exchange data, requiring power, efficiency, security and reliability. This paper advances new ideas for designing a security protocol in the IoT so as to facilitate secure machine-to-machine communication. Keywords: Internet of Things; Architecture; Protocols; D2D Communication; Security. Proficient Communication among Sensor Devices using Heuristic Approaches in IoT Environment by Nandhakumar Ramachandran, Varalakshmi Perumal Abstract: Collecting data in an IoT environment is difficult and the redundancy among the data from the sensors tends to decrease the lifetime of the network and also increase the latency. Collecting data from heterogeneous sensors could involve the translation between the different types of values for the different types of sensors. The system comprises of dynamic and heterogeneous sensors. The proposed system aims at achieving an efficient communication among sensor devices by choosing the most optimal node at every stage of the communication, namely, data dissemination, data aggregation and routing. The base station releases a query, according to which clustering is done using Particle Swarm Optimization (PSO). The Cluster Head is elected using PSO based on many factors like position, velocity, energy and the neighbor density. It also eliminates the formation of residual nodes. Principle Component Analysis is used at relay node to aggregate heterogeneous data of multiple dimensions. It reduces the dimensionality of data sent thus reducing the packets sent. It does away with the translation of data of one type to another during forwarding of data. Ant Colony Optimization (ACO) is used to forward data at every level by considering the energy of the node, distance from base station and the velocity. The technique helps in choosing reliable, energy efficient path and also decreases the delay. Thus proposed system can be helpful in overcoming the resource constraint problem, limited transmission range, eliminating the formation of residual nodes, and redundancy among data. This in turn improves the network lifetime. Keywords: IoT; Heuristic Approaches; Proficient Communication; Heterogenous sensors; Energy efficient; PSO; ACO; PCA. A Novel Group Ownership Proof and Transfer Scheme for B2B, B2C and C2C Transactions by Meng-Lin Tsai, Yu-Yi Chen Abstract: In the field of supply chain management, Radio Frequency Identification (RFID) tags have attracted widespread popularity in the process from raw material acquisition, manufacturing, logistics, warehousing, retailing to consumers. As raw materials or products are purchased, an inventory flow for products is completed once the ownership is transferred to the customer. Based on RFID tagged inventory, ownership transfer is a critical issue in the supply chain management. The ownership of RFID tagged items should be seamlessly transferred. As the transaction occurs, the ownership of the tagged product should be transferred from the current owner to the new owner. This paper proposes a novel RFID ownership transfer scheme suitable for products with multiple components attached with tags that need to be shipped together as a group. 
This is the first group ownership transfer scheme to emphasis the proof of ownership is critical for purchase reliability. Buyers can confirm that the seller is the owner before purchasing. The ownership of the product can be verified and then be transferred successfully in business-to-business (B2B), business-to-customer (B2C) and customer-to-customer (C2C) transactions. Keywords: RFID; group ownership transfer; ownership proof; grouping proof; security; privacy; authentication; authorization; supply chain; protocol. Adaptive Sink Mobility for Energy-Efficient Data Collection in Grid-Based Wireless Sensor Networks by Sarra Messai, Zibouda Aliouat, Hamida Seba, Abdallah Boukerram Abstract: Many recent studies show that sink mobility in Wireless Sensor Networks (WSNs) preserves the energy of the sensors for a longer network lifetime. These approaches require generally a logical organization of the network within clusters, such as a grid. However, constructing such a grid is costly and requires several rounds of messages exchanged between sensors which outbalance the benefits of the approach. To tackle this drawback, we propose an energy-based cell-head (CH) selection combined with a sink mobility algorithm to minimize energy consumption of sensor nodes and optimize data collection. The proposed approach is termed EASY for Energy-Aware Sink mobilitY. Experimentation results confirm that our solution offers better performance compared to existing approaches. Keywords: energy saving; grid-based wireless sensor networks; sink mobility. Spider Monkey Optimisation based Energy Efficient Clustering in Heterogeneous Underwater Wireless Sensor Networks by Madhuri Rao, Narendra Kumar Kamila Abstract: Underwater Wireless Sensor Network (UWSN) enables ubiquitous computing but is a system with unique constraints. Longer delay in propagation time of acoustic signals requires multi hop routing to be avoided in such networks. Multi Hop propagation cause more energy drainage of the nodes. A trade-off has to be made between the numbers of nodes deployed for optimal coverage while achieving lesser propagation delays. The denser a network is, the more likely it is demonstrate multi hop propagation thereby leading to more energy drainage. On the other hand, UWSN is prone to dynamic topology changes due to node mobility. Node mobility is inevitable and results due to water current. It leads to transmission errors, link loss, collisions and congestion if not well handled. Enhanced routing techniques that could reduce energy consumption, reduce delay and avoid multi-hop transmission are needed. Moreover, a routing technique that could adapt to the changing topology is needed in UWSN. A clustering approach based on Spider Monkey Optimisation (SMO) is proposed here that addresses these issues that arise in topology changes due to node mobility. The proposed approach is found to enhance the average network lifetime of nodes by 0.01579 Joules and achieves a network gain of 1.35%. Further a significant reduction in average delay of messaging packets by 19.82% is achieved. The proposed approach is far more optimised as it reduces the average hops between sender and receiver by 20%. Keywords: Heterogeneous Underwater Wireless Sensor Networks; dynamic clusters; Spider Monkey optimisation; Fission –Fusion Social Structure. 
Optimal mobile beacon trajectories for nodes localization in wireless sensor networks by Sara Benkouider, Nasreddine Lagraa, Abderrahim Benslimane, Mohamed Bachir Yagoubi Abstract: The random deployment of nodes is commonly used in Wireless Sensor Networks (WSNs), because of either the hostility of the monitored area or its large scale. However, many applications and protocols are position- based. Hence, its necessary to determine the position of sensors even after their deployment. Localization techniques using mobile beacon have been proposed in this context, to localize unknown sensor nodes. However, to save nodes energy its useful to send an optimal number of messages. Thus, an optimal trajectory of mobile beacon helps to achieve such objective. In this paper, we propose two novel optimal mobile beacon trajectories based on Hilbert curve. The first proposal aims at minimizing the trajectory length and improving the localization accuracy, moreover, the second one minimizes both the trajectory and the energy consumption. In this study, we compare the proposed techniques with mobile Hilbert beacon trajectory. The analytical analysis and the performance evaluation with simulations show that our proposed methods improve, compared to the existing methods, the accuracy, the length of trajectories and the energy consumption. Keywords: WSNs; Localization; Mobile beacon; Virtual beacons; Hilbert trajectory. An Intelligent Routing Protocol in VANET by Ghassan Samarah Abstract: Vehicular Ad hoc Networks (VANET) is one of the new promising and most challenging research area in the mobile computing field; this technology suffers from the high mobility which results in broken links and continuously changing routs. In this research, a new and Intelligent Routing Protocol (IRP) using position based proactive message transmission in Vehicular ad hoc Networks environment will be deployed. The proposed protocol aims to supply vehicles with live and quick information about the current links between road vehicles and hence better message routing, channel utilization, error free, congestion free channel with less broken links using position based, multi-hop routing and best first search algorithm, simulation results show that the proposed protocol achieves better performance when compared with other protocols. Keywords: Intelligent Protocol; Best First Search; Multi hop routing; VANET. Time Slotted Channel Hopping with Collision Avoidance by Sarra Hammoudi, Saad Harous, Zibouda Aliouat, Lemia Louail Abstract: The Time Slotted Channel Hopping mode proposed by the IEEE 802.15.4e2015 standard enhances the reliability of low-power and lossy networks. The TSCH supports the critical applications embedded in harsh environments with high-efficiency. The multi-channel hopping and the time slotted access nature of this mode provides deterministic latency for applications and achieves a greater throughput capacity. On the other hand, the shared links nature of this mode risks the packets transmission to fail when the participating nodes of this shared link are colliding nodes. Retransmitting the packets randomly risks the packets retransmission to fail several times, which influences negatively on the sensor energy and packet latency. 
To ensure collision avoidance in the presence of hidden nodes, this paper provides two intelligent algorithms(Time Slotted Channel Hopping with Correct Collision Avoidance backoff algorithm(TSCH-CCA) and Enhanced Priority Channel Access Backoff Algorithm(E-PCA)) applied respectively to both normal packets and critical events packets. Our proposed solution shows significant improvements in terms of latency, network congestion, network lifetime, critical event packets lifetime, and collision avoidance. Keywords: Internet of Things; Wireless Sensor Networks; MAC layer; IEEE 802.15.4E; Time Slotted Channel Hopping; collisions; channel hopping; shared links; dedicated links. An Adaptive Energy Efficient Flow Coverage Scheme for Mobile Crowd Sensing in Urban Streets by Ahmed Gad-Elrab, Almohammady Alsharkawy Abstract: With the rapid growth of sensor technology, smartphone sensing has become an effective approach to improve the quality of applications in smartphones. \textit{Mobile Crowd Sensing} (MCS) is a new paradigm which takes advantage of pervasive smartphones to efficiently collect data in the urban streets, enabling numerous novel applications. To achieve a good service quality for a MCS application, coverage mechanisms are necessary to achieve the sensing task requirements and collect a reliable sensing data from the urban streets. The key idea of MCS is to employ so many users carrying heterogeneous devices to collect and share sensed data using their mobile phones. The main problem in data collection process is how to cover all segments in the street sides and select a minimal number of participants in each street segment preserve the mobile devices' energy and prolong the MCS network lifetime. To solve this problem, a flow coverage scheme is proposed to cover a specific street and achieve the sensing task requirements. The proposed scheme is based on using a modified localization method that uses a minimal of GPS sensors and utilizes the Zigbee technology to communicate with the neighbor nodes and estimate the distance between nodes by using the Time of Arrival method. Finally, we have compared our proposed scheme with existing methods via extensive simulations based on the real movement traces of students in our university. Extensive simulation results well justify the effectiveness and robustness of our scheme. Keywords: Mobile crowd sensing; Mobile sensors; Mobile localization; Flow coverage. Security Scheme for Mobility Management in the Internet of Things by Oryema Brian, Cheol Woo Jung, Jong Tae Park Abstract: A mobility management protocol based on the constrained application protocol (CoAP), called the CoAP-based mobility management protocol (CoMP), was suggested to counteract the constraints of mobile internet protocol version 6 (MIPv6) in the Internet of Things (IoT) environment. CoMP exchanges Binding Update (BU) messages to manage location changes, but BU messages are subject to security vulnerabilities, such as denial of service (DoS), false BU, session hijacking, and man-in-the-middle (MITM) attacks. In this paper, we extend CoMP by proposing a security scheme based on a private key to protect the BU CoMP messages exchanged between the mobile nodes and clients, referred to as private key-based BU for CoMP (PKBU-CoMP). PKBU-CoMP ensures that mobile nodes check and confirm the address ownership and validity of mobile nodes before performing any BU operation. The performance of PKBU-CoMP is analyzed both mathematically and using Cooja simulations. 
Keywords: Security in Internet of Things; Secure Mobility; Secure Binding; Private Key in IoT. Smart Vehicles for Urban Sensing based on Content-centric Approach by Raja Priya V.K. Abstract: Today IoT(Internet of Things) plays a major role in interconnecting physical devices, vehicles, etc. to collect, exchange data through networks. Smart Vehicles collect, store and exchange monitoring sensory content about urban streets. Uploading such monitoring data by all vehicles to the infrastructure is challenging. In-order to avoid such situations, the appropriate vehicles important for different urban sensing tasks is identified by measuring its relative importance in the network. First the different location-aware content is autonomously ranked by a vehicle. It then uses a content importance and its mobility pattern to find its importance in the network. Based on the vehicles centrality score the best content hubs in the network are identified to provide efficient collect, storage and exchange of sensory data based on Content-centric Networking (CCN) where content request/response data are characterized. Due to limited bandwidth resources the response data is routed to specified vehicle based on the geo-based routing technique. Keywords: IoT; Content-centric Networking; Vehicular Ad-Hoc Network; Smart Vehicles; Content-centric vehicular networking; urban sensing; urban monitoring; location-aware content; sensory data; geo-based routing. CLPS: Context-based Location Privacy Scheme for VANETs by Ines Khacheba, Mohamed Bachir Yagoubi, Nasreddine Lagraa, Abderrahmane Lakas Abstract: In the last few decades, location privacy preservation is considered by researchers as a challenge for enabling the deployment of Vehicular Ad Hoc Networks (VANETs). In fact, the continuous periodical broadcast of spatiotemporal information contained in beacons permits to a global passive adversary to link pseudonyms and reconstruct all vehicle traces. This attack breaches the drivers privacy. Consequently, it is imperative that a vehicle selects the right context to change its pseudonym in order to confuse the adversary and ensure unlinkability of pseudonyms. In this paper, we propose a Context-based Location Privacy Scheme (CLPS) that makes the following contributions: (i) We propose a pseudonym changing strategy that permits to a vehicle to effectively change its pseudonyms based on its context. (ii) We present a new linkability threat, called cheating attack, and show the vulnerability of the proposed pseudonym changing strategy to this attack. To confront the cheating attack, we propose developing a cheating detection mechanism that allows a vehicle to detect misbehaving vehicles that are responsible of launching the cheating attack, and assess whether the change of pseudonym is successful after a pseudonym is changed. Finally, we evaluate by simulations the proposed scheme against the global passive adversary using the privacy extension, PREXT, in Veins VANET simulator, and we compare it against different privacy schemes proposed in the literature. Keywords: VANET; Security; Location Privacy; Changing Pseudonyms; Linkability; Cooperation; Cheating Detection.
https://gmatclub.com/forum/if-the-interior-angles-of-a-triangle-are-in-the-ratio-3-to-4-to-5-wha-275961.html
# If the interior angles of a triangle are in the ratio 3 to 4 to 5, what is the measure of the largest angle?

[Math Revolution GMAT math practice question]

If the interior angles of a triangle are in the ratio $$3$$ to $$4$$ to $$5$$, what is the measure of the largest angle?

A. $$30^{\circ}$$
B. $$45^{\circ}$$
C. $$60^{\circ}$$
D. $$75^{\circ}$$
E. $$90^{\circ}$$
"Free Resources-30 day online access & Diagnostic Test"
"Unlimited Access to over 120 free video lessons - try it yourself"
Senior Manager
Joined: 18 Jul 2018
Posts: 377
Location: India
Concentration: Finance, Marketing
WE: Engineering (Energy and Utilities)
Re: If the interior angles of a triangle are in the ratio 3 to 4 to 5, wha [#permalink]
### Show Tags
12 Sep 2018, 01:49
The sum of the interior angles of a triangle is 180°.
Writing the angles as 3x, 4x and 5x, we have 3x + 4x + 5x = 180, so
x = 15.
The largest angle is 5x = 5*15 = 75°.
=>
Let $$x, y$$ and $$z$$ be the interior angles of the triangle.
Since $$x:y:z = 3:4:5$$, we can write $$x = 3k, y = 4k$$ and $$z = 5k$$.
Since the interior angles of a triangle add to $$180^{\circ}$$, $$x + y + z = 3k + 4k + 5k = 12k = 180^{\circ},$$ and so $$k = \frac{180^{\circ}}{12} = 15^{\circ}.$$
Therefore, the largest angle of the triangle is $$z = 5k = 5(15^{\circ}) = 75^{\circ},$$ which is answer choice D.
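A quick numeric check of the arithmetic above (a minimal Python sketch; the variable names are purely illustrative):

```python
# Angles in the ratio 3:4:5 must sum to 180 degrees.
ratio = [3, 4, 5]
k = 180 / sum(ratio)               # common multiplier: 180 / 12 = 15
angles = [r * k for r in ratio]    # [45.0, 60.0, 75.0]
print(max(angles))                 # 75.0 -> answer choice D
```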
https://trenton3983.github.io/files/projects/2019-07-10_statistical_thinking_1/2019-07-10_statistical_thinking_1.html
## Statistical Thinking in Python (Part 1)
Course Description
After all of the hard work of acquiring data and getting them into a form you can work with, you ultimately want to make clear, succinct conclusions from them. This crucial last step of a data analysis pipeline hinges on the principles of statistical inference. In this course, you will start building the foundation you need to think statistically, to speak the language of your data, to understand what they are telling you. The foundations of statistical thinking took decades upon decades to build, but they can be grasped much faster today with the help of computers. With the power of Python-based tools, you will rapidly get up to speed and begin thinking statistically by the end of this course.
Imports
In [1]:
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle
import numpy as np
from pprint import pprint as pp
import csv
from pathlib import Path
import seaborn as sns
from scipy.stats import binom
from sklearn.datasets import load_iris
Pandas Configuration Options
In [2]:
pd.set_option('max_columns', 200)
pd.set_option('max_rows', 300)
pd.set_option('display.expand_frame_repr', True)
Data Files Location
• Most data files for the exercises can be found on the course site
Data File Objects
In [3]:
data = Path.cwd() / 'data' / '2019-07-10_statistical_thinking_1'
elections_all_file = data / '2008_all_states.csv'
elections_swing_file = data / '2008_swing_states.csv'
belmont_file = data / 'belmont.csv'
sol_file = data / 'michelson_speed_of_light.csv'
Iris Data Set
In [4]:
iris = load_iris()
iris_df = pd.DataFrame(data=np.c_[iris['data'], iris['target']], columns=iris['feature_names'] + ['target'])
def iris_typing(x):
types = {0.0: 'setosa',
1.0: 'versicolour',
2.0: 'virginica'}
return types[x]
iris_df['species'] = iris_df.target.apply(iris_typing)
Out[4]:
sepal length (cm) sepal width (cm) petal length (cm) petal width (cm) target species
0 5.1 3.5 1.4 0.2 0.0 setosa
1 4.9 3.0 1.4 0.2 0.0 setosa
2 4.7 3.2 1.3 0.2 0.0 setosa
3 4.6 3.1 1.5 0.2 0.0 setosa
4 5.0 3.6 1.4 0.2 0.0 setosa
# Graphical exploratory data analysis
Look before you leap! A very important proverb, indeed. Prior to diving in headlong into sophisticated statistical inference techniques, you should first explore your data by plotting them and computing simple summary statistics. This process, called exploratory data analysis, is a crucial first step in statistical analysis of data. So it is a fitting subject for the first chapter of Statistical Thinking in Python.
## Introduction to exploratory data analysis
• Exploring the data is a crucial step of the analysis.
• Organizing
• Plotting
• Computing numerical summaries
• This idea is known as exploratory data analysis (EDA)
• "Exploratory data analysis can never be the whole story, but nothing else can serve as the foundation stone." - John Tukey
In [5]:
swing = pd.read_csv(elections_swing_file)
Out[5]:
state county total_votes dem_votes rep_votes dem_share
0 PA Erie County 127691 75775 50351 60.08
1 PA Bradford County 25787 10306 15057 40.64
2 PA Tioga County 17984 6390 11326 36.07
3 PA McKean County 15947 6465 9224 41.21
4 PA Potter County 7507 2300 5109 31.04
• The raw data isn't particularly informative
• We could start computing parameters and their confidence intervals and doing hypothesis tests...
• ...however, we should graphically explore the data first
### Tukey's comments on EDA
Even though you probably have not read Tukey's book, I suspect you already have a good idea about his viewpoint from the video introducing you to exploratory data analysis. Which of the following quotes is not directly from Tukey?
1. Exploratory data analysis is detective work.
2. There is no excuse for failing to plot and look.
3. The greatest value of a picture is that it forces us to notice what we never expected to see.
4. It is important to understand what you can do before you learn how to measure how well you seem to have done it.
5. Often times EDA is too time consuming, so it is better to jump right in and do your hypothesis tests.
### Advantages of graphical EDA
Which of the following is not true of graphical EDA?
1. It often involves converting tabular data into graphical form.
2. If done well, graphical representations can allow for more rapid interpretation of data.
3. A nice looking plot is always the end goal of a statistical analysis.
4. There is no excuse for neglecting to do graphical EDA.
## Plotting a histogram
• always label the axes
In [6]:
bin_edges = [x for x in range(0, 110, 10)]
plt.hist(x=swing.dem_share, bins=bin_edges, edgecolor='black')
plt.xticks(bin_edges)
plt.yticks(bin_edges[:-1])
plt.xlabel('Percent of vote for Obama')
plt.ylabel('Number of Counties')
plt.show()
Seaborn
In [7]:
import seaborn as sns
sns.set()
In [8]:
plt.hist(x=swing.dem_share)
plt.xlabel('Percent of vote for Obama')
plt.ylabel('Number of Counties')
plt.show()
### Plotting a histogram of iris data
For the exercises in this section, you will use a classic data set collected by botanist Edward Anderson and made famous by Ronald Fisher, one of the most prolific statisticians in history. Anderson carefully measured the anatomical properties of samples of three different species of iris, Iris setosa, Iris versicolor, and Iris virginica. The full data set is available as part of scikit-learn. Here, you will work with his measurements of petal length.
Plot a histogram of the petal lengths of his 50 samples of Iris versicolor using matplotlib/seaborn's default settings. Recall that to specify the default seaborn style, you can use sns.set(), where sns is the alias that seaborn is imported as.
The subset of the data set containing the Iris versicolor petal lengths in units of centimeters (cm) is stored in the NumPy array versicolor_petal_length.
In the video, Justin plotted the histograms by using the pandas library and indexing the DataFrame to extract the desired column. Here, however, you only need to use the provided NumPy array. Also, Justin assigned his plotting statements (except for plt.show()) to the dummy variable _. This is to prevent unnecessary output from being displayed. It is not required for your solutions to these exercises, however it is good practice to use it. Alternatively, if you are working in an interactive environment such as a Jupyter notebook, you could use a ; after your plotting statements to achieve the same effect. Justin prefers using _. Therefore, you will see it used in the solution code.
Instructions
• Import matplotlib.pyplot and seaborn as their usual aliases (plt and sns).
• Use seaborn to set the plotting defaults.
• Plot a histogram of the Iris versicolor petal lengths using plt.hist() and the provided NumPy array versicolor_petal_length.
• Show the histogram using plt.show().
In [9]:
versicolor_petal_length = iris_df['petal length (cm)'][iris_df.species == 'versicolour']
In [10]:
plt.hist(versicolor_petal_length)
plt.show()
### Axis labels!
In the last exercise, you made a nice histogram of petal lengths of Iris versicolor, but you didn't label the axes! That's ok; it's not your fault since we didn't ask you to. Now, add axis labels to the plot using plt.xlabel() and plt.ylabel(). Don't forget to add units and assign both statements to _. The packages matplotlib.pyplot and seaborn are already imported with their standard aliases. This will be the case in what follows, unless specified otherwise.
Instructions
• Label the axes. Don't forget that you should always include units in your axis labels. Your y-axis label is just 'count'. Your x-axis label is 'petal length (cm)'. The units are essential!
• Display the plot constructed in the above steps using plt.show().
In [11]:
plt.hist(versicolor_petal_length)
plt.xlabel('petal length (cm)')
plt.ylabel('count')
plt.show()
### Adjusting the number of bins in a histogram
The histogram you just made had ten bins. This is the default of matplotlib. The "square root rule" is a commonly-used rule of thumb for choosing number of bins: choose the number of bins to be the square root of the number of samples. Plot the histogram of Iris versicolor petal lengths again, this time using the square root rule for the number of bins. You specify the number of bins using the bins keyword argument of plt.hist().
The plotting utilities are already imported and the seaborn defaults already set. The variable you defined in the last exercise, versicolor_petal_length, is already in your namespace.
Instructions
• Import numpy as np. This gives access to the square root function, np.sqrt().
• Determine how many data points you have using len().
• Compute the number of bins using the square root rule.
• Convert the number of bins to an integer using the built in int() function.
• Generate the histogram and make sure to use the bins keyword argument.
• Hit 'Submit Answer' to plot the figure and see the fruit of your labors!
In [12]:
# Compute number of data points: n_data
n_data = len(versicolor_petal_length)
# Number of bins is the square root of number of data points: n_bins
n_bins = np.sqrt(n_data)
# Convert number of bins to integer: n_bins
n_bins = int(n_bins)
# Plot the histogram
_ = plt.hist(versicolor_petal_length, bins=n_bins)
# Label axes
_ = plt.xlabel('petal length (cm)')
_ = plt.ylabel('count')
# Show histogram
plt.show()
## Plotting all of your data: Bee swarm plots
• Binning Bias: The same data may be interpreted differently depending on choice of bins (a quick demonstration follows this list)
• Additionally, all of the data isn't being plotted; the precision of the actual data is lost in the bins
• These issues can be resolved with swarm plots
• Point position along the y-axis is the quantitative information
• The data are spread in x to make them visible, but their precise location along the x-axis is unimportant
• No binning bias and all the data are displayed.
• Seaborn & Pandas
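To make the binning-bias point above concrete before moving on to the swarm plot code below, here is a minimal sketch (the bin counts are chosen arbitrarily for illustration) that plots the same dem_share data twice; the apparent shape of the distribution changes with the choice of bins:

```python
fig, axes = plt.subplots(1, 2, figsize=(10, 4))

# Same data, two different bin choices
axes[0].hist(swing.dem_share, bins=5, edgecolor='black')
axes[0].set_title('5 bins')
axes[1].hist(swing.dem_share, bins=30, edgecolor='black')
axes[1].set_title('30 bins')

for ax in axes:
    ax.set_xlabel('percent of vote for Obama')
    ax.set_ylabel('number of counties')

plt.tight_layout()
plt.show()
```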
In [13]:
sns.swarmplot(x='state', y='dem_share', data=swing)
plt.xlabel('state')
plt.ylabel('percent of vote for Obama')
plt.title('% of Vote per Swing State County')
plt.show()
### Bee swarm plot
Make a bee swarm plot of the iris petal lengths. Your x-axis should contain each of the three species, and the y-axis the petal lengths. A data frame containing the data is in your namespace as df.
For your reference, the code Justin used to create the bee swarm plot in the video is provided below:
_ = sns.swarmplot(x='state', y='dem_share', data=df_swing)
_ = plt.xlabel('state')
_ = plt.ylabel('percent of vote for Obama')
plt.show()
In the IPython Shell, you can use sns.swarmplot? or help(sns.swarmplot) for more details on how to make bee swarm plots using seaborn.
Instructions
• In the IPython Shell, inspect the DataFrame df using df.head(). This will let you identify which column names you need to pass as the x and y keyword arguments in your call to sns.swarmplot().
• Use sns.swarmplot() to make a bee swarm plot from the DataFrame containing the Fisher iris data set, df. The x-axis should contain each of the three species, and the y-axis should contain the petal lengths.
• Label the axes.
• Show your plot.
In [14]:
sns.swarmplot(x='species', y='petal length (cm)', data=iris_df, size=3)
plt.xlabel('species')
plt.ylabel('petal length (cm)')
plt.show()
### Interpreting a bee swarm plot
Which of the following conclusions could you draw from the bee swarm plot of iris petal lengths you generated in the previous exercise? For your convenience, the bee swarm plot is the one generated in the previous cell.
Instructions
1. All I. versicolor petals are shorter than I. virginica petals.
2. I. setosa petals have a broader range of lengths than the other two species.
3. I. virginica petals tend to be the longest, and I. setosa petals tend to be the shortest of the three species.
4. I. versicolor is a hybrid of I. virginica and I. setosa.
## Plotting all of your data: Empirical cumulative distribution functions
• Empirical Distribution Function / Empirical CDF
• An empirical cumulative distribution function (also called the empirical distribution function, ECDF, or just EDF) and a cumulative distribution function are basically the same thing; they are both probability models for data. While a CDF is a hypothetical model of a distribution, the ECDF models empirical (i.e. observed) data. To put this another way, the ECDF is the probability distribution you would get if you sampled from your sample, instead of the population. Let's say you have a set of experimental (observed) data $x_{1},x_{2},\,\ldots\,x_{n}$. The EDF will give you the fraction of sample observations less than or equal to a particular value of $x$.
• More formally, if you have the order statistics ($y_{1}<y_{2}<\ldots<y_{n}$) of an observed random sample, then the empirical distribution function is defined as the average of indicator variables:
• $\hat{F}_{n}(t)=\frac{\#\{\,i : x_{i}\leq t\,\}}{n}=\frac{1}{n}\sum_{i=1}^{n} 1_{x_{i}\leq t}$
• where $1_{A}$ is the indicator function of the event $A$ (a tiny worked example follows this list)
• x-value of an ECDF is the quantity being measured
• y-value is the fraction of data points that have a value smaller than the corresponding x-value
• Shows all the data and gives a complete picture of how the data are distributed
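As a tiny worked example (the numbers are chosen purely for illustration): for the sample $\{1, 2, 3, 4\}$, $\hat{F}_{4}(2.5)=\frac{2}{4}=0.5$, since exactly two of the four observations are $\leq 2.5$; likewise $\hat{F}_{4}(1)=0.25$ and $\hat{F}_{4}(4)=1$.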
In [15]:
x = np.sort(swing['dem_share'])
y = np.arange(1, len(x)+1) / len(x)
plt.plot(x, y, marker='.', linestyle='none')
plt.xlabel('percent of vote for Obama')
plt.ylabel('ECDF')
plt.margins(0.02) # keep data off plot edges
plt.show()
In [16]:
fig, ax = plt.subplots(figsize=(10, 5))
ax.margins(0.05) # Default margin is 0.05, value 0 means fit
x = np.sort(swing['dem_share'])
y = np.arange(1, len(x)+1) / len(x)
ax.plot(x, y, marker='.', linestyle='none')
plt.xlabel('percent of vote for Obama')
plt.ylabel('ECDF')
ax.annotate('20% of counties had <= 36% vote for Obama', xy=(36, .2),
xytext=(40, 0.1), fontsize=10, arrowprops=dict(arrowstyle="->", color='b'))
ax.annotate('75% of counties had < 50% vote for Obama', xy=(50, .75),
xytext=(55, 0.6), fontsize=10, arrowprops=dict(arrowstyle="->", color='b'))
plt.show()
#### plot multiple ECDFs
In [17]:
fig, ax = plt.subplots(figsize=(10, 5))
ax.margins(0.05) # Default margin is 0.05, value 0 means fit
for state in swing.state.unique():
x = np.sort(swing['dem_share'][swing.state == state])
y = np.arange(1, len(x)+1) / len(x)
ax.plot(x, y, marker='.', linestyle='none', label=state)
plt.xlabel('percent of vote for Obama')
plt.ylabel('ECDF')
plt.legend()
plt.show()
### Computing the ECDF
In this exercise, you will write a function that takes as input a 1D array of data and then returns the x and y values of the ECDF. You will use this function over and over again throughout this course and its sequel. ECDFs are among the most important plots in statistical analysis. You can write your own function, foo(x,y) according to the following skeleton:
def foo(a,b):
"""State what function does here"""
# Computation performed here
return x, y
The function foo() above takes two arguments a and b and returns two values x and y. The function header def foo(a,b): contains the function signature foo(a,b), which consists of the function name, along with its parameters. For more on writing your own functions, see DataCamp's course Python Data Science Toolbox (Part 1)!
Instructions
• Define a function with the signature ecdf(data). Within the function definition,
• Compute the number of data points, n, using the len() function.
• The x-values are the sorted data. Use the np.sort() function to perform the sorting.
• The y data of the ECDF go from 1/n to 1 in equally spaced increments. You can construct this using np.arange(). Remember, however, that the end value in np.arange() is not inclusive. Therefore, np.arange() will need to go from 1 to n+1. Be sure to divide this by n.
• The function returns the values x and y.
#### def ecdf()
In [18]:
def ecdf(data):
"""Compute ECDF for a one-dimensional array of measurements."""
# Number of data points: n
n = len(data)
# x-data for the ECDF: x
x = np.sort(data)
# y-data for the ECDF: y
y = np.arange(1, n+1) / n
return x, y
### Plotting the ECDF
You will now use your ecdf() function to compute the ECDF for the petal lengths of Anderson's Iris versicolor flowers. You will then plot the ECDF. Recall that your ecdf() function returns two arrays so you will need to unpack them. An example of such unpacking is x, y = foo(data), for some function foo().
Instructions
• Use ecdf() to compute the ECDF of versicolor_petal_length. Unpack the output into x_vers and y_vers.
• Plot the ECDF as dots. Remember to include marker = '.' and linestyle = 'none' in addition to x_vers and y_vers as arguments inside plt.plot().
• Label the axes. You can label the y-axis 'ECDF'.
• Show your plot.
In [19]:
# Compute ECDF for versicolor data: x_vers, y_vers
x_vers, y_vers = ecdf(versicolor_petal_length)
# Generate plot
plt.plot(x_vers, y_vers, marker='.', linestyle='none')
# Label the axes
plt.xlabel('Versicolor Petal Length (cm)')
plt.ylabel('ECDF')
# Display the plot
plt.margins(0.02) # keep data off plot edges
plt.show()
### Comparison of ECDFs¶
ECDFs also allow you to compare two or more distributions (though plots get cluttered if you have too many). Here, you will plot ECDFs for the petal lengths of all three iris species. You already wrote a function to generate ECDFs so you can put it to good use!
To overlay all three ECDFs on the same plot, you can use plt.plot() three times, once for each ECDF. Remember to include marker='.' and linestyle='none' as arguments inside plt.plot().
Instructions
• Compute ECDFs for each of the three species using your ecdf() function. The variables setosa_petal_length, versicolor_petal_length, and virginica_petal_length are all in your namespace. Unpack the ECDFs into x_set, y_set, x_vers, y_vers and x_virg, y_virg, respectively.
• Plot all three ECDFs on the same plot as dots. To do this, you will need three plt.plot() commands. Assign the result of each to _.
• A legend and axis labels have been added for you, so hit 'Submit Answer' to see all the ECDFs!
In [20]:
virginica_petal_length = iris_df['petal length (cm)'][iris_df.species == 'virginica']
setosa_petal_length = iris_df['petal length (cm)'][iris_df.species == 'setosa']
# Compute ECDFs
x_set, y_set = ecdf(setosa_petal_length)
x_vers, y_vers = ecdf(versicolor_petal_length)
x_virg, y_virg = ecdf(virginica_petal_length)
# Plot all ECDFs on the same plot
_ = plt.plot(x_set, y_set, marker='.', linestyle='none')
_ = plt.plot(x_vers, y_vers, marker='.', linestyle='none')
_ = plt.plot(x_virg, y_virg, marker='.', linestyle='none')
# Annotate the plot
plt.legend(('setosa', 'versicolor', 'virginica'), loc='lower right')
_ = plt.xlabel('petal length (cm)')
_ = plt.ylabel('ECDF')
# Display the plot
plt.show()
## Onward toward the whole story¶
Coming up...
• Thinking probabilistically
• Discrete and continuous distributions
• The power of hacker statistics using the np.random module
# Quantitative exploratory data analysis¶
In the last chapter, you learned how to graphically explore data. In this chapter, you will compute useful summary statistics, which serve to concisely describe salient features of a data set with a few numbers.
## Introduction to summary statistics: The sample mean and median¶
• mean - average
• heavily influenced by outliers
• np.mean()
• median - middle value of the sorted dataset
• robust to outliers (a quick illustration follows this list)
• np.median()
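A quick illustration with made-up numbers (a sketch, not course data):
import numpy as np

data = np.array([1, 2, 3, 4, 100])   # 100 is an outlier
print(np.mean(data))                 # 22.0 -- dragged upward by the outlier
print(np.median(data))               # 3.0  -- unaffected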
### Means and medians¶
Which one of the following statements is true about means and medians?
• An outlier can significantly affect the value of both the mean and the median.
• An outlier can significantly affect the value of the mean, but not the median.
• Means and medians are in general both robust to single outliers.
• The mean and median are equal if there is an odd number of data points.
### Computing means¶
The mean of all measurements gives an indication of the typical magnitude of a measurement. It is computed using np.mean().
Instructions
• Compute the mean petal length of Iris versicolor from Anderson's classic data set. The variable versicolor_petal_length is provided in your namespace. Assign the mean to mean_length_vers.
In [21]:
# Compute the mean: mean_length_vers
mean_length_vers = np.mean(versicolor_petal_length)
# Print the result with some nice formatting
print('I. versicolor:', mean_length_vers, 'cm')
I. versicolor: 4.26 cm
#### with pandas.DataFrame¶
In [22]:
iris_df.groupby(['species']).mean()
Out[22]:
sepal length (cm) sepal width (cm) petal length (cm) petal width (cm) target
species
setosa 5.006 3.428 1.462 0.246 0.0
versicolour 5.936 2.770 4.260 1.326 1.0
virginica 6.588 2.974 5.552 2.026 2.0
## Percentiles, outliers and box plots¶
• The median is a special name for the 50th percentile
• 50% of the data are less than the median
• The 25th percentile is the value of the data point that is greater than 25% of the sorted data
• percentiles are useful summary statistics and can be computed using np.percentile()
Computing Percentiles
np.percentile(df_swing['dem_share'], [25, 50, 75])
• Box plots are a graphical method for displaying summary statistics
• the median is the middle line: the 50th percentile
• the bottom and top lines of the box represent the 25th and 75th percentiles, respectively
• the space between the 25th and 75th percentiles is the interquartile range (IQR)
• whiskers extend a distance of 1.5 times the IQR, or to the extent of the data, whichever is less extreme
• any points outside the whiskers are plotted as individual points, which we demarcate as outliers (see the sketch after this list)
• There is no single definition for an outlier; however, being more than 2 IQRs away from the median is a common criterion.
• An outlier is not necessarily erroneous
• Box plots are a great alternative to bee swarm plots, because bee swarm plots become too cluttered with large data sets
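A minimal sketch of these definitions in code, using made-up data (any 1D numeric array works):
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(50, 10, size=500)               # hypothetical measurements

q25, q50, q75 = np.percentile(data, [25, 50, 75])
iqr = q75 - q25                                   # interquartile range
low_whisker = max(q25 - 1.5 * iqr, data.min())    # whisker: 1.5*IQR or data extent
high_whisker = min(q75 + 1.5 * iqr, data.max())
outliers = data[(data < low_whisker) | (data > high_whisker)]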
In [23]:
all_states = pd.read_csv(elections_all_file)
all_states.head()
Out[23]:
  state                                        county  total_votes  dem_votes  rep_votes  other_votes  dem_share east_west
0    AK    State House District 8, Denali-University         10320       4995       4983          342      50.06      west
1    AK  State House District 37, Bristol Bay-Aleuti          4665       1868       2661          136      41.24      west
2    AK  State House District 12, Richardson-Glenn H          7589       1914       5467          208      25.93      west
3    AK       State House District 13, Greater Palmer        11526       2800       8432          294      24.93      west
4    AK      State House District 14, Greater Wasilla        10456       2132       8108          216      20.82      west
In [24]:
sns.boxplot(x='east_west', y='dem_share', data=all_states)
plt.xlabel('region')
plt.ylabel('percent of vote for Obama')
plt.show()
### Computing percentiles¶
In this exercise, you will compute the percentiles of petal length of Iris versicolor.
Instructions
• Create percentiles, a NumPy array of percentiles you want to compute. These are the 2.5th, 25th, 50th, 75th, and 97.5th. You can do so by creating a list containing these ints/floats and convert the list to a NumPy array using np.array(). For example, np.array([30, 50]) would create an array consisting of the 30th and 50th percentiles.
• Use np.percentile() to compute the percentiles of the petal lengths from the Iris versicolor samples. The variable versicolor_petal_length is in your namespace.
In [25]:
# Specify array of percentiles: percentiles
percentiles = np.array([2.5, 25, 50, 75, 97.5])
# Compute percentiles: ptiles_vers
ptiles_vers = np.percentile(versicolor_petal_length, percentiles)
# Print the result
ptiles_vers
Out[25]:
array([3.3 , 4. , 4.35 , 4.6 , 4.9775])
### Comparing percentiles to ECDF¶
To see how the percentiles relate to the ECDF, you will plot the percentiles of Iris versicolor petal lengths you calculated in the last exercise on the ECDF plot you generated in chapter 1. The percentile variables from the previous exercise are available in the workspace as ptiles_vers and percentiles.
Note that to ensure the y-axis of the ECDF plot remains between 0 and 1, you will need to rescale the percentiles array accordingly; in this case, dividing it by 100.
Instructions
• Plot the percentiles as red diamonds on the ECDF. Pass the x and y co-ordinates - ptiles_vers and percentiles/100 - as positional arguments and specify the marker='D', color='red' and linestyle='none' keyword arguments. The argument for the y-axis - percentiles/100 has been specified for you.
In [26]:
# Plot the ECDF
_ = plt.plot(x_vers, y_vers, '.')
_ = plt.xlabel('petal length (cm)')
_ = plt.ylabel('ECDF')
# Overlay percentiles as red diamonds.
_ = plt.plot(ptiles_vers, percentiles/100, marker='D', color='red', linestyle='none')
plt.show()
### Box-and-whisker plot¶
Making a box plot for the petal lengths is unnecessary because the iris data set is not too large and the bee swarm plot works fine. However, it is always good to get some practice. Make a box plot of the iris petal lengths. You have a pandas DataFrame, df, which contains the petal length data, in your namespace. Inspect the data frame df in the IPython shell using df.head() to make sure you know what the pertinent columns are.
For your reference, the code used to produce the box plot in the video is provided below:
_ = sns.boxplot(x='east_west', y='dem_share', data=df_all_states)
_ = plt.xlabel('region')
_ = plt.ylabel('percent of vote for Obama')
In the IPython Shell, you can use sns.boxplot? or help(sns.boxplot) for more details on how to make box plots using seaborn.
Instructions
• The set-up is exactly the same as for the bee swarm plot; you just call sns.boxplot() with the same keyword arguments as you would sns.swarmplot(). The x-axis is 'species' and y-axis is 'petal length (cm)'.
• Don't forget to label your axes!
In [27]:
fig, ax = plt.subplots(figsize=(10, 7))
# Create box plot with Seaborn's default settings
_ = sns.boxplot(x='species', y='petal length (cm)', data=iris_df)
# Label the axes
_ = plt.ylabel('petal length (cm)')
_ = plt.xlabel('species')
# Show the plot
plt.show()
## Variance and standard deviation¶
• measures of spread
• variance:
• The mean squared distance of the data from the mean
• $$variance = \frac{1}{n}\sum_{i=1}^{n}(x_{i} - \overline{x})^2$$
• because of the squared quantity, variance doesn't have the same units as the measurement
• standard deviation:
• $$\sqrt{variance}$$
#### Variance¶
In [28]:
dem_share_fl = all_states.dem_share[all_states.state == 'FL']
In [29]:
np.var(dem_share_fl)
Out[29]:
147.44278618846064
In [30]:
all_states_var = all_states[['state', 'total_votes', 'dem_votes', 'rep_votes', 'other_votes', 'dem_share']].groupby(['state']).var(ddof=0)
all_states_var.dem_share.loc['FL']
Out[30]:
147.44278618846064
In [31]:
all_states_var.head()
Out[31]:
        total_votes     dem_votes     rep_votes   other_votes   dem_share
state
AK     3.918599e+06  9.182529e+05  3.200012e+06  3.256998e+03  125.668270
AL     2.292250e+09  5.607371e+08  6.252062e+08  1.733390e+05  307.070511
AR     4.876461e+08  1.199459e+08  1.314874e+08  1.354781e+05   92.110499
AZ     1.135138e+11  2.266020e+10  3.343506e+10  1.616763e+07  114.874473
CA     2.349814e+11  1.040120e+11  2.620011e+10  9.319614e+07  177.821720
#### Standard Deviation¶
In [32]:
np.std(dem_share_fl)
Out[32]:
12.142602117687158
In [33]:
np.sqrt(np.var(dem_share_fl))
Out[33]:
12.142602117687158
In [34]:
all_states_std = all_states[['state', 'total_votes', 'dem_votes', 'rep_votes', 'other_votes', 'dem_share']].groupby(['state']).std(ddof=0)
all_states_std.dem_share.loc['FL']
Out[34]:
12.142602117687158
In [35]:
all_states_std.head()
Out[35]:
         total_votes      dem_votes      rep_votes  other_votes  dem_share
state
AK       1979.545268     958.255147    1788.857869    57.070110  11.210186
AL      47877.444183   23679.887798   25004.123663   416.340031  17.523427
AR      22082.710391   10951.982529   11466.796262   368.073483   9.597421
AZ     336918.128317  150533.043458  182852.572197  4020.899516  10.717951
CA     484748.852298  322508.926289  161864.471516  9653.815033  13.334981
### Computing the variance¶
It is important to have some understanding of what commonly-used functions are doing under the hood. Though you may already know how to compute variances, this is a beginner course that does not assume so. In this exercise, we will explicitly compute the variance of the petal length of Iris versicolor using the equations discussed in the videos. We will then use np.var() to compute it.
Instructions
• Create an array called differences that is the difference between the petal lengths (versicolor_petal_length) and the mean petal length. The variable versicolor_petal_length is already in your namespace as a NumPy array so you can take advantage of NumPy's vectorized operations.
• Square each element in this array. For example, x**2 squares each element in the array x. Store the result as diff_sq.
• Compute the mean of the elements in diff_sq using np.mean(). Store the result as variance_explicit.
• Compute the variance of versicolor_petal_length using np.var(). Store the result as variance_np.
• Print both variance_explicit and variance_np in one print call to make sure they are consistent.
In [36]:
# Array of differences to mean: differences
differences = versicolor_petal_length - np.mean(versicolor_petal_length)
# Square the differences: diff_sq
diff_sq = differences**2
# Compute the mean square difference: variance_explicit
variance_explicit = np.mean(diff_sq)
# Compute the variance using NumPy: variance_np
variance_np = np.var(versicolor_petal_length)
# Print the results
print(variance_explicit, variance_np)
0.21640000000000012 0.21640000000000012
### The standard deviation and the variance¶
As mentioned in the video, the standard deviation is the square root of the variance. You will see this for yourself by computing the standard deviation using np.std() and comparing it to what you get by computing the variance with np.var() and then computing the square root.
Instructions
• Compute the variance of the data in the versicolor_petal_length array using np.var() and store it in a variable called variance.
• Print the square root of this value.
• Print the standard deviation of the data in the versicolor_petal_length array using np.std()
In [37]:
# Compute the variance: variance
variance = np.var(versicolor_petal_length)
# Print the square root of the variance
std_explicit = np.sqrt(variance)
# Print the standard deviation
std_np = np.std(versicolor_petal_length)
print(std_explicit, std_np)
0.4651881339845204 0.4651881339845204
## Covariance and Pearson correlation coefficient¶
• Covariance
• $$covariance = \frac{1}{n}\sum_{i=1}^{n}(x_{i} - \overline{x})(y_{i} - \overline{y})$$
• Each data point differs from the mean vote share and from the mean total votes for Obama
• These differences can be computed for each data point
• The covariance is the mean of the product of these differences
• If both x and y tend to be above or below their respective means together, as they are in this data set, the covariance is positive.
• This means they are positively correlated:
• When x is high, so is y
• When the county is populous, it has more votes for Obama
• If x is high while y is low, the covariance is negative
• This means they are negatively correlated (anticorrelated) - not the case for this data set.
• Pearson correlation
• A more generally applicable measure of how two variables depend on each other; unlike the covariance, it is dimensionless (no units).
• $$\rho = Pearson\space correlation = \frac{covariance}{(std\space of\space x)(std\space of\space y)}$$
• $$\rho = \frac{variability\space due\space to\space codependence}{independent\space variability}$$
• Comparison of the variability in the data due to codependence (the covariance) to the variability inherent to each variable independently (their standard deviations).
• It's dimensionless and ranges from -1 (for complete anticorrelation) to 1 (for complete correlation).
• A value of zero means there is no correlation between the data (a code translation of these formulas follows this list).
• Good metric for correlation between two variables.
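A direct translation of the two formulas above (a minimal sketch; the NumPy built-ins np.cov() and np.corrcoef() used later are the standard tools, and the function names here are ours, not the course's):
import numpy as np

def covariance_manual(x, y):
    """Mean of the products of the deviations from the two means."""
    return np.mean((x - np.mean(x)) * (y - np.mean(y)))

def pearson_r_manual(x, y):
    """Covariance scaled by the two standard deviations; dimensionless, in [-1, 1]."""
    return covariance_manual(x, y) / (np.std(x) * np.std(y))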
In [38]:
plt.figure(figsize=(10, 8))
sns.scatterplot(x='total_votes', y='dem_share', data=swing, hue='state')
plt.ylabel('% of vote for Obama')
plt.xticks([x for x in range(0, 1000000, 100000)], rotation=40)
plt.yticks([x for x in range(0, 100, 10)])
# Create a Rectangle patch (Rectangle comes from matplotlib.patches)
from matplotlib.patches import Rectangle
plt.gca().add_patch(Rectangle((400000, 52), 500000, 34, linewidth=1, edgecolor='b', facecolor='none'))
plt.gca().add_patch(Rectangle((0, 5), 50000, 45, linewidth=1, edgecolor='r', facecolor='none'))
# Annotate
plt.annotate('12 largest counties; most vote for Obama', xy=(650000, 52), weight='bold',
xytext=(400000, 35), fontsize=10, arrowprops=dict(arrowstyle="->", color='b'))
plt.annotate('small counties; most vote for McCain', xy=(50000, 20), weight='bold',
xytext=(150000, 7), fontsize=10, arrowprops=dict(arrowstyle="->", color='r'))
plt.show()
### Scatter plots¶
When you made bee swarm plots, box plots, and ECDF plots in previous exercises, you compared the petal lengths of different species of iris. But what if you want to compare two properties of a single species? This is exactly what we will do in this exercise. We will make a scatter plot of the petal length and width measurements of Anderson's Iris versicolor flowers. If the flower scales (that is, it preserves its proportion as it grows), we would expect the length and width to be correlated.
For your reference, the code used to produce the scatter plot in the video is provided below:
_ = plt.plot(total_votes/1000, dem_share, marker='.', linestyle='none')
_ = plt.xlabel('total votes (thousands)')
_ = plt.ylabel('percent of vote for Obama')
Instructions
• Use plt.plot() with the appropriate keyword arguments to make a scatter plot of versicolor petal length (x-axis) versus petal width (y-axis). The variables versicolor_petal_length and versicolor_petal_width are already in your namespace. Do not forget to use the marker='.' and linestyle='none' keyword arguments.
• Label the axes.
• Display the plot.
In [39]:
versicolor_petal_width = iris_df['petal width (cm)'][iris_df.species == 'versicolour']
# Make a scatter plot
_ = plt.plot(versicolor_petal_length, versicolor_petal_width, marker='.', linestyle='none')
# Label the axes
_ = plt.xlabel('petal length (cm)')
_ = plt.ylabel('petal width (cm)')
# Show the result
plt.show()
### Variance and covariance by looking¶
Consider four scatter plots of x-y data, labeled a through d in the original exercise (not reproduced here). Which has, respectively,
• the highest variance in the variable x,
• the highest covariance,
• negative covariance?
Instructions
• a, c, b
• d, c, a
• d, c, b
• d, d, b
### Computing the covariance¶
The covariance may be computed using the NumPy function np.cov(). For example, if we have two sets of data x and y, np.cov(x, y) returns a 2D array where entries [0,1] and [1,0] are the covariances. Entry [0,0] is the variance of the data in x, and entry [1,1] is the variance of the data in y. This 2D output array is called the covariance matrix, since it organizes the variances and covariances.
To remind you how the I. versicolor petal length and width are related, we include the scatter plot you generated in a previous exercise.
Instructions
• Use np.cov() to compute the covariance matrix for the petal length (versicolor_petal_length) and width (versicolor_petal_width) of I. versicolor.
• Print the covariance matrix.
• Extract the covariance from entry [0,1] of the covariance matrix. Note that by symmetry, entry [1,0] is the same as entry [0,1].
• Print the covariance.
In [40]:
iris_df[['petal length (cm)', 'petal width (cm)']][iris_df.species == 'versicolour'].cov()
Out[40]:
petal length (cm) petal width (cm)
petal length (cm) 0.220816 0.073102
petal width (cm) 0.073102 0.039106
In [41]:
# Compute the covariance matrix: covariance_matrix
covariance_matrix = np.cov(versicolor_petal_length, versicolor_petal_width)
# Print covariance matrix
covariance_matrix
Out[41]:
array([[0.22081633, 0.07310204],
[0.07310204, 0.03910612]])
In [42]:
# Extract covariance of length and width of petals: petal_cov
petal_cov = covariance_matrix[0, 1]
# Print the length/width covariance
petal_cov
Out[42]:
0.07310204081632653
### Computing the Pearson correlation coefficient¶
As mentioned in the video, the Pearson correlation coefficient, also called the Pearson r, is often easier to interpret than the covariance. It is computed using the np.corrcoef() function. Like np.cov(), it takes two arrays as arguments and returns a 2D array. Entries [0,0] and [1,1] are necessarily equal to 1 (can you think about why?), and the value we are after is entry [0,1].
In this exercise, you will write a function, pearson_r(x, y) that takes in two arrays and returns the Pearson correlation coefficient. You will then use this function to compute it for the petal lengths and widths of I. versicolor.
Again, we include the scatter plot you generated in a previous exercise to remind you how the petal width and length are related.
Instructions
• Define a function with signature pearson_r(x, y).
• Use np.corrcoef() to compute the correlation matrix of x and y (pass them to np.corrcoef() in that order).
• The function returns entry [0,1] of the correlation matrix.
• Compute the Pearson correlation between the data in the arrays versicolor_petal_length and versicolor_petal_width. Assign the result to r.
• Print the result.
In [43]:
iris_df[['petal length (cm)', 'petal width (cm)']][iris_df.species == 'versicolour'].corr()
Out[43]:
petal length (cm) petal width (cm)
petal length (cm) 1.000000 0.786668
petal width (cm) 0.786668 1.000000
In [44]:
def pearson_r(x, y):
"""Compute Pearson correlation coefficient between two arrays."""
# Compute correlation matrix: corr_mat
corr_mat = np.corrcoef(x, y)
# Return entry [0,1]
return corr_mat[0,1]
# Compute Pearson correlation coefficient for I. versicolor: r
r = pearson_r(versicolor_petal_length, versicolor_petal_width)
# Print the result
print(r)
0.7866680885228169
# Thinking probabilistically: Discrete variables¶
Statistical inference rests upon probability. Because we can very rarely say anything meaningful with absolute certainty from data, we use probabilistic language to make quantitative statements about data. In this chapter, you will learn how to think probabilistically about discrete quantities, those that can only take certain values, like integers. It is an important first step in building the probabilistic language necessary to think statistically.
## Probabilistic logic and statistical inference¶
• Probabilistic reasoning allows us to describe uncertainty
• Given a set of data, you describe probabilistically what you might expect if those data were acquired repeatedly
• This is the heart of statistical inference
• It's the process by which we go from measured data to probabilistic conclusions about what we might expect if we collected the same data again.
### What is the goal of statistical inference?¶
Why do we do statistical inference?
• To draw probabilistic conclusions about what we might expect if we collected the same data again.
• To draw actionable conclusions from data.
• To draw more general conclusions from relatively few data or observations.
• All of these.
### Why do we use the language of probability?¶
Which of the following is not a reason why we use probabilistic language in statistical inference?
• Probability provides a measure of uncertainty.
• Probabilistic language is not very precise.
• Data are almost never exactly the same when acquired again, and probability allows us to say how much we expect them to vary.
## Random number generators and hacker statistics¶
• Instead of repeating data acquisition over and over, repeated measurements can be simulated
• The concepts of probability originated from games of chance
• What's the probability of getting 4 heads with 4 flips of a coin?
• This type of data can be generated using np.random.random
• draws a number between 0 and 1
• $<0.5\longrightarrow\text{heads}$
• $\geq0.5\longrightarrow\text{tails}$
• The pseudo-random number generator works by starting with an integer, called a seed, and then generating random numbers in succession
• The same seed gives the same sequence of random numbers (see the sketch after this list)
• Manually seed the random number generator for reproducible results
• Specified using np.random.seed()
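A quick reproducibility check (a minimal sketch):
import numpy as np

np.random.seed(42)
a = np.random.random(3)
np.random.seed(42)
b = np.random.random(3)
print(np.array_equal(a, b))   # True: the same seed yields the same sequence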
#### Bernoulli Trial¶
• An experiment that has two outcomes, "success" (True) and "failure" (False); a one-line simulation follows
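In code, a single Bernoulli trial with success probability p reduces to one comparison (a sketch):
import numpy as np

p = 0.5                              # probability of success
success = np.random.random() < p     # True = success, False = failure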
#### Hacker stats probabilities¶
• Determine how to simulate data
• Simulate it repeatedly
• Compute the fraction of trials that had the outcome of interest
• Probability is approximately the fraction of trials with the outcome of interest
#### Simulated coin flips¶
In [45]:
np.random.seed(42)
random_numbers = np.random.random(size=4)
random_numbers
Out[45]:
array([0.37454012, 0.95071431, 0.73199394, 0.59865848])
In [46]:
heads = random_numbers < 0.5
heads
Out[46]:
array([ True, False, False, False])
In [47]:
np.sum(heads)
Out[47]:
1
• The number of heads can be computed by summing the array of Booleans, because in numerical contexts, Python treats True as 1 and False as 0.
• We want to know the probability of getting four heads if we were to repeatedly flip the 4 coins
• without list comprehension
n_all_heads = 0  # initialize number of 4-heads trials
for _ in range(10000):
    heads = np.random.random(size=4) < 0.5
    n_heads = np.sum(heads)
    if n_heads == 4:
        n_all_heads += 1
• with list comprehension
In [48]:
n_all_heads = sum([1 for _ in range(10000) if sum(np.random.random(size=4) < 0.5) == 4])
In [49]:
n_all_heads
Out[49]:
619
In [50]:
n_all_heads/10000
Out[50]:
0.0619
### Generating random numbers using the np.random module¶
We will be hammering the np.random module for the rest of this course and its sequel. Actually, you will probably call functions from this module more than any other while wearing your hacker statistician hat. Let's start by taking its simplest function, np.random.random() for a test spin. The function returns a random number between zero and one. Call np.random.random() a few times in the IPython shell. You should see numbers jumping around between zero and one.
In this exercise, we'll generate lots of random numbers between zero and one, and then plot a histogram of the results. If the numbers are truly random, all bars in the histogram should be of (close to) equal height.
You may have noticed that, in the video, Justin generated 4 random numbers by passing the keyword argument size=4 to np.random.random(). Such an approach is more efficient than a for loop: in this exercise, however, you will write a for loop to experience hacker statistics as the practice of repeating an experiment over and over again.
Instructions
• Seed the random number generator using the seed 42.
• Initialize an empty array, random_numbers, of 100,000 entries to store the random numbers. Make sure you use np.empty(100000) to do this.
• Write a for loop to draw 100,000 random numbers using np.random.random(), storing them in the random_numbers array. To do so, loop over range(100000).
• Plot a histogram of random_numbers. It is not necessary to label the axes in this case because we are just checking the random number generator. Hit 'Submit Answer' to show your plot.
In [51]:
# Seed the random number generator
np.random.seed(42)
# Initialize random numbers: random_numbers
random_numbers = np.empty(100000)
# Generate random numbers by looping over range(100000)
for i in range(100000):
random_numbers[i] = np.random.random()
# Plot a histogram
_ = plt.hist(random_numbers)
# Show the plot
plt.show()
In [52]:
sns.histplot(random_numbers, kde=True)
plt.show()
The histogram is nearly flat across the top, indicating there is equal chance a randomly-generated number is in any of the histogram bins.
#### Using np.random.rand¶
In [53]:
rand_num = np.random.rand(100000)
In [54]:
sns.histplot(rand_num, kde=True)
plt.show()
### The np.random module and Bernoulli trials¶
You can think of a Bernoulli trial as a flip of a possibly biased coin. Specifically, each coin flip has a probability p of landing heads (success) and probability 1−p of landing tails (failure). In this exercise, you will write a function to perform n Bernoulli trials, perform_bernoulli_trials(n, p), which returns the number of successes out of n Bernoulli trials, each of which has probability p of success. To perform each Bernoulli trial, use the np.random.random() function, which returns a random number between zero and one.
Instructions
• Define a function with signature perform_bernoulli_trials(n, p).
• Initialize to zero a variable n_success the counter of True occurrences, which are Bernoulli trial successes.
• Write a for loop where you perform a Bernoulli trial in each iteration and increment the number of success if the result is True. Perform n iterations by looping over range(n).
• To perform a Bernoulli trial, choose a random number between zero and one using np.random.random(). If the number you chose is less than p, increment n_success (use the += 1 operator to achieve this).
• The function returns the number of successes n_success.
#### def perform_bernoulli_trials()¶
In [55]:
def perform_bernoulli_trials(n: int=100000, p: float=0.5) -> int:
"""
Perform n Bernoulli trials with success probability p
and return number of successes.
n: number of iterations
p: probability of success, between 0 and 1 inclusive
"""
# Initialize number of successes: n_success
n_success = 0
# Perform trials
for i in range(n):
# Choose random number between zero and one: random_number
random_number = np.random.random()
# If less than p, it's a success so add one to n_success
if random_number < p:
n_success += 1
return n_success
##### With list comprehension¶
In [56]:
def perform_bernoulli_trials(n: int=100000, p: float=0.5) -> int:
"""
Perform n Bernoulli trials with success probability p
and return number of successes.
n: number of iterations
p: probability of success, between 0 and 1 inclusive
"""
return sum([1 for _ in range(n) if np.random.random() < p])
### How many defaults might we expect?¶
Let's say a bank made 100 mortgage loans. It is possible that anywhere between 0 and 100 of the loans will be defaulted upon. You would like to know the probability of getting a given number of defaults, given that the probability of a default is p = 0.05. To investigate this, you will do a simulation. You will perform 100 Bernoulli trials using the perform_bernoulli_trials() function you wrote in the previous exercise and record how many defaults we get. Here, a success is a default. (Remember that the word "success" just means that the Bernoulli trial evaluates to True, i.e., did the loan recipient default?) You will do this for another 100 Bernoulli trials. And again and again until we have tried it 1000 times. Then, you will plot a histogram describing the probability of the number of defaults.
Instructions
• Seed the random number generator to 42.
• Initialize n_defaults, an empty array, using np.empty(). It should contain 1000 entries, since we are doing 1000 simulations.
• Write a for loop with 1000 iterations to compute the number of defaults per 100 loans using the perform_bernoulli_trials() function. It accepts two arguments: the number of trials n - in this case 100 - and the probability of success p - in this case the probability of a default, which is 0.05. On each iteration of the loop store the result in an entry of n_defaults.
• Plot a histogram of n_defaults. Include the density=True keyword argument (the original exercise's normed=True is deprecated in current Matplotlib) so that the height of the bars of the histogram indicate the probability.
In [57]:
# Seed random number generator
np.random.seed(42)
# Initialize the number of defaults: n_defaults
n_defaults = np.empty(1000)
# Compute the number of defaults
for i in range(1000):
n_defaults[i] = perform_bernoulli_trials(100, 0.05)
# Plot the histogram with default number of bins; label your axes
_ = plt.hist(n_defaults, density=True)
_ = plt.xlabel('number of defaults out of 100 loans')
_ = plt.ylabel('probability')
# Show the plot
plt.show()
This is not an optimal way to plot a histogram when the results are known to be integers. This will be revisited in forthcoming exercises.
#### With list comprehension¶
In [58]:
np.random.seed(42)
n_defaults = np.asarray([perform_bernoulli_trials(100, 0.05) for _ in range(1000)])
plt.hist(n_defaults, density=True)
plt.xlabel('number of defaults out of 100 loans')
plt.ylabel('probability')
plt.show()
### Will the bank fail?¶
Using the ecdf() function from the first section, plot the n_defaults results from the previous exercise as a CDF.
If interest rates are such that the bank will lose money if 10 or more of its loans are defaulted upon, what is the probability that the bank will lose money?
Instructions
• Compute the x and y values for the ECDF of n_defaults.
• Plot the ECDF, making sure to label the axes. Remember to include marker='.' and linestyle='none' in addition to x and y in your call plt.plot().
• Show the plot.
• Compute the total number of entries in your n_defaults array that were greater than or equal to 10. To do so, compute a boolean array that tells you whether a given entry of n_defaults is >= 10. Then sum all the entries in this array using np.sum(). For example, np.sum(n_defaults <= 5) would compute the number of simulations with 5 or fewer defaults.
• The probability that the bank loses money is the fraction of n_defaults that are greater than or equal to 10.
In [59]:
# Compute ECDF: x, y
x, y = ecdf(n_defaults)
# Plot the ECDF with labeled axes
plt.plot(x, y, marker='.', linestyle='none')
plt.xlabel('Number of Defaults out of 100')
plt.ylabel('CDF')
# Show the plot
plt.show()
# Compute the number of 100-loan simulations with 10 or more defaults: n_lose_money
n_lose_money = np.sum(n_defaults >= 10)
# Compute and print probability of losing money
print('Probability of losing money =', n_lose_money / len(n_defaults))
Probability of losing money = 0.022
As might be expected, about 5 defaults per 100 loans occur on average. There's about a 2% chance of getting 10 or more defaults out of 100 loans.
## Probability distributions and stories: The Binomial distribution¶
#### Probability Mass Function (PMF)¶
• Probability mass function
• The set of probabilities of discrete outcomes
• PMF is a property of a discrete probability distribution
#### Discrete Uniform PMF¶
• The outcomes (e.g. of a die roll) are discrete because only certain values may be attained; there is no option for, say, 3.7
• Each of the six results has a uniform probability of 1/6 (a simulation sketch follows)
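A minimal sketch of the discrete uniform story with simulated die rolls (made-up example, not from the exercises):
import numpy as np

rolls = np.random.randint(1, 7, size=10_000)         # fair six-sided die
faces, counts = np.unique(rolls, return_counts=True)
print(dict(zip(faces, counts / len(rolls))))         # each fraction is ~1/6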
#### Binomial Distribution¶
• Binomial distribution
• The number r of successes in n Bernoulli trials, each with probability p of success, is Binomially distributed
• The number r of heads in 4 coin flips, each with probability p = 0.5 of heads, is Binomially distributed
In [60]:
np.random.binomial(4, 0.5)
Out[60]:
2
In [61]:
np.random.binomial(4, 0.5, size=10)
Out[61]:
array([2, 2, 2, 2, 2, 3, 3, 2, 2, 0])
##### Binomial PMF¶
• To plot the Binomial PMF, take 10000 samples from a Binomial distribution of 60 Bernoulli trials with a probability of success of 0.1
• The most likely number of successes is 6 out of 60, but it's possible to get as many as 11 or as few as 1
• scipy.stats.binom
In [62]:
np.random.seed(42)
samples = np.random.binomial(60, 0.1, size=10_000)
samples
Out[62]:
array([ 5, 10, 7, ..., 10, 5, 4])
In [63]:
from scipy.stats import binom

n, p = 60, 0.1
x = list(range(17))
fig, ax = plt.subplots(1, 1)
ax.plot(x, binom.pmf(x, n, p), 'bo', ms=5, label='binom pmf')
ax.vlines(x, 0, binom.pmf(x, n, p), colors='b', lw=3, alpha=0.5)
plt.xticks(x)
plt.ylabel('probability')
plt.xlabel('number of successes')
plt.show()
In [64]:
sns.set()
x, y = ecdf(samples)
plt.plot(x, y, marker='.', linestyle='none')
plt.margins(0.02)
plt.xlabel('Number of Successes')
plt.ylabel('CDF')
plt.show()
### Sampling out of the Binomial distribution¶
Compute the probability mass function for the number of defaults we would expect for 100 loans as in the last section, but instead of simulating all of the Bernoulli trials, perform the sampling using np.random.binomial(). This is identical to the calculation you did in the last set of exercises using your custom-written perform_bernoulli_trials() function, but far more computationally efficient. Given this extra efficiency, we will take 10,000 samples instead of 1000. After taking the samples, plot the CDF as last time. This CDF that you are plotting is that of the Binomial distribution.
Note: For this exercise and all going forward, the random number generator is pre-seeded for you (with np.random.seed(42)) to save you typing that each time.
Instructions
• Draw samples out of the Binomial distribution using np.random.binomial(). You should use parameters n = 100 and p = 0.05, and set the size = 10000.
• Compute the CDF using your previously-written ecdf() function.
• Plot the CDF with axis labels. The x-axis here is the number of defaults out of 100 loans, while the y-axis is the CDF.
In [65]:
# Take 10,000 samples out of the binomial distribution: n_defaults
np.random.seed(42)
n_defaults = np.random.binomial(100, 0.05, size=10_000)
# Compute CDF: x, y
x, y = ecdf(n_defaults)
# Plot the CDF with axis labels
plt.plot(x, y, marker='.', linestyle='none')
plt.xlabel('number of defaults out of 100 loans')
plt.ylabel('CDF')
plt.show()
### Plotting the Binomial PMF¶
As mentioned in the video, plotting a nice looking PMF requires a bit of matplotlib trickery that we will not go into here. Instead, we will plot the PMF of the Binomial distribution as a histogram with skills you have already learned. The trick is setting up the edges of the bins to pass to plt.hist() via the bins keyword argument. We want the bins centered on the integers. So, the edges of the bins should be -0.5, 0.5, 1.5, 2.5, ... up to max(n_defaults) + 1.5. You can generate an array like this using np.arange() and then subtracting 0.5 from the array.
You have already sampled out of the Binomial distribution during your exercises on loan defaults, and the resulting samples are in the NumPy array n_defaults.
Instructions
• Using np.arange(), compute the bin edges such that the bins are centered on the integers. Store the resulting array in the variable bins.
• Use plt.hist() to plot the histogram of n_defaults with the density=True (formerly normed=True) and bins=bins keyword arguments.
In [66]:
# Compute bin edges: bins
bins = np.arange(0, max(n_defaults) + 1.5) - 0.5
# Generate histogram
plt.hist(n_defaults, density=True, bins=bins)
# Label axes
plt.xlabel('number of defaults out of 100 loans')
plt.ylabel('PMF')
# Show the plot
plt.show()
## Poisson processes and the Poisson distribution¶
• Poisson process
• The timing of the next event is completely independent of when the previous event occurred
• Examples of Poisson processes:
• Natural births in a given hospital
• There is a well-defined average number of natural births per year, and the timing of one birth is independent of the timing of the previous one
• Hits on a website during a given hour
• The timing of successive hits is independent of the timing of the previous hit
• Meteor strikes
• Molecular collisions in a gas
• Aviation incidents
• The number of arrivals of a Poisson process in a given amount of time is Poisson distributed
• The number of arrivals r of a Poisson process in a given time interval with average rate of arrivals $\lambda$ per interval is Poisson distributed
• The Poisson distribution has one parameter, the average number of arrivals in a given length of time
• The number of hits r on a website in one hour with an average hit rate of 6 hits per hour is Poisson distributed
#### Poisson PMF¶
• For a Poisson PMF with a mean of 6, the site is most likely to get 6 hits in a given hour (the average), but it's possible to also get 10, or none (a reproduction sketch follows this list)
• This looks like the Binomial PMF shown earlier: the Poisson distribution is a limit of the Binomial distribution for a low probability of success and a large number of trials, i.e. for rare events
• To sample from the Poisson distribution, use np.random.poisson.
• It also has the size keyword argument to allow multiple samples
• The Poisson CDF resembles the Binomial CDF
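The PMF plot referenced above did not survive this export; a minimal sketch that reproduces it with scipy.stats.poisson, mirroring the Binomial PMF cell earlier:
from scipy.stats import poisson
import matplotlib.pyplot as plt

mu = 6                                    # average of 6 hits per hour
x = list(range(16))
fig, ax = plt.subplots(1, 1)
ax.plot(x, poisson.pmf(x, mu), 'bo', ms=5, label='poisson pmf')
ax.vlines(x, 0, poisson.pmf(x, mu), colors='b', lw=3, alpha=0.5)
plt.xticks(x)
plt.xlabel('number of hits per hour')
plt.ylabel('probability')
plt.show()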
#### Poisson CDF¶
In [67]:
samples = np.random.poisson(6, size=10_000)
x, y = ecdf(samples)
plt.plot(x, y, marker='.', linestyle='none')
plt.margins(0.02)
plt.xlabel('number of successes')
plt.ylabel('CDF')
plt.show()
### Relationship between Binomial and Poisson distribution¶
You just heard that the Poisson distribution is a limit of the Binomial distribution for rare events. This makes sense if you think about the stories. Say we do a Bernoulli trial every minute for an hour, each with a success probability of 0.1. We would do 60 trials, and the number of successes is Binomially distributed, and we would expect to get about 6 successes. This is just like the Poisson story we discussed in the video, where we get on average 6 hits on a website per hour. So, the Poisson distribution with arrival rate equal to np approximates a Binomial distribution for n Bernoulli trials with probability p of success (with n large and p small). Importantly, the Poisson distribution is often simpler to work with because it has only one parameter instead of two for the Binomial distribution.
Let's explore these two distributions computationally. You will compute the mean and standard deviation of samples from a Poisson distribution with an arrival rate of 10. Then, you will compute the mean and standard deviation of samples from a Binomial distribution with parameters n and p such that np=10.
Instructions
• Using the np.random.poisson() function, draw 10000 samples from a Poisson distribution with a mean of 10.
• Make a list of the n and p values to consider for the Binomial distribution. Choose n = [20, 100, 1000] and p = [0.5, 0.1, 0.01] so that np is always 10.
• Using np.random.binomial() inside the provided for loop, draw 10000 samples from a Binomial distribution with each n, p pair and print the mean and standard deviation of the samples. The exercise uses 3 n, p pairs: 20, 0.5, 100, 0.1, and 1000, 0.01; the cell below adds a fourth pair, 10000 and 0.001, to push the rare-event limit further. These can be accessed inside the loop as n[i], p[i].
In [68]:
# Draw 10,000 samples out of Poisson distribution: samples_poisson
samples_poisson = np.random.poisson(10, size=10_000)
# Print the mean and standard deviation
print(f'Poisson: Mean = {np.mean(samples_poisson)} Std = {np.std(samples_poisson):0.03f}')
# Specify values of n and p to consider for Binomial: n, p
n = [20, 100, 1_000, 10_000]
p = [0.5, 0.1, 0.01, 0.001]
# Draw 10,000 samples for each n,p pair: samples_binomial
for i in range(4):
samples_binomial = np.random.binomial(n[i], p[i], size=10_000)
# Print results
print(f'n = {n[i]} Binom: Mean = {np.mean(samples_binomial)} Std = {np.std(samples_binomial):0.03f}')
Poisson: Mean = 10.0421 Std = 3.172
n = 20 Binom: Mean = 10.0064 Std = 2.248
n = 100 Binom: Mean = 9.9371 Std = 2.980
n = 1000 Binom: Mean = 10.0357 Std = 3.164
n = 10000 Binom: Mean = 10.0881 Std = 3.195
The means are all about the same. The standard deviation of the Binomial distribution gets closer and closer to that of the Poisson distribution as the probability p gets lower and lower.
### How many no-hitters in a season?¶
In baseball, a no-hitter is a game in which a pitcher does not allow the other team to get a hit. This is a rare event: since the beginning of the so-called modern era of baseball (starting in 1901), there have only been 251 of them through the 2015 season, in over 200,000 games. The original exercise shows the ECDF of the number of no-hitters in a season (not reproduced here). Which probability distribution would be appropriate to describe the number of no-hitters we would expect in a given season?
Note: The no-hitter data set was scraped and calculated from the data sets available at retrosheet.org (license).
• Discrete uniform
• Binomial
• Poisson
• Both Binomial and Poisson, though Poisson is easier to model and compute.
• Both Binomial and Poisson, though Binomial is easier to model and compute.
With rare events (low p, high n), the Binomial distribution approaches the Poisson distribution, which has a single parameter: the mean number of successes per time interval, in this case the mean number of no-hitters per season.
### Was 2015 anomalous?¶
1990 and 2015 featured the most no-hitters of any season of baseball (there were seven). Given that there are on average 251/115 no-hitters per season, what is the probability of having seven or more in a season?
Instructions
• Draw 10000 samples from a Poisson distribution with a mean of 251/115 and assign to n_nohitters.
• Determine how many of your samples had a result greater than or equal to 7 and assign to n_large.
• Compute the probability, p_large, of having 7 or more no-hitters by dividing n_large by the total number of samples (10000).
• Hit 'Submit Answer' to print the probability that you calculated.
In [69]:
np.random.seed(seed=398)
# Draw 10,000 samples out of Poisson distribution: n_nohitters
n_nohitters = np.random.poisson(251/115, size=10_000)
# Compute number of samples that are seven or greater: n_large
n_large = len(n_nohitters[n_nohitters >= 7])
# Compute probability of getting seven or more: p_large
p_large = n_large/len(n_nohitters)
# Print the result
print(f'Probability of seven or more no-hitters: {p_large}')
Probability of seven or more no-hitters: 0.0071
The result is about 0.007, i.e. a season with seven or more no-hitters is expected roughly once every 140 seasons. Seeing two such seasons in the 115 seasons of the modern era is therefore unusual, but not unreasonable.
# Thinking probabilistically: Continuous variables¶
Probability distributions of discrete variables have been covered so far. This final section will cover continuous variables, such as those that can take on any fractional value. Many of the principles are the same, but there are some subtleties. At the end of this chapter, you will be speaking the probabilistic language required to launch into the inference techniques covered in Statistical Thinking in Python (Part 2).
## Probability density functions¶
We have talked about probabilities of discrete quantities, such as die rolls and the number of bus arrivals, but what about continuous quantities? A continuous quantity can take on any value, not just discrete ones; for example, the speed of a train can be 45.76 km/h.
Continuous variables also have probability distributions. Consider an example: in 1879, Albert Michelson performed 100 measurements of the speed of light in air. Each measurement has some error in it; conditions such as temperature, humidity, and the alignment of his optics change from measurement to measurement. As a result, any fractional value of the measured speed of light is possible, so it is apt to describe the results with a continuous probability distribution. Looking at Michelson's numbers, shown here in units of 1000 km/s, we see this is the case. What probability distribution describes these data? I posit that they follow the Normal distribution.
To understand what the Normal distribution is, let's consider its probability density function (PDF). This is the continuous analog to the probability mass function (PMF): it describes the chances of observing a value of a continuous variable. The probability of observing a single exact value of the speed of light does not make sense, because there are infinitely many numbers between 299,600 and 300,100 km/s. Instead, areas under the PDF give probabilities. The probability that the measured speed of light is greater than 300,000 km/s is an area under the Normal curve. Parameterizing the PDF from Michelson's experiments, this is about a 3% chance, since that region is about 3% of the total area under the PDF.
To do this calculation, we were really just looking at the cumulative distribution function (CDF) of the Normal distribution. Remember, the CDF gives the probability that the measured speed of light will be less than the value on the x-axis. Reading off the value at 300,000 km/s, there is a 97% chance the measurement is less than that, and about a 3% chance it is greater.
We will study the Normal distribution in more depth in the coming exercises, but for now, let's review some of the concepts we've learned about continuous distribution functions.
### Continuous Variables¶
• Quantities that can take any value, not just discrete values
### Probability Density Function (PDF)¶
• Continuous analog to the PMF
• Mathematical description of the relative likelihood of observing a value of a continuous variable; areas under the PDF give probabilities (a numeric check of the 3% figure follows)
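The roughly 3% figure from the narrative can be reproduced with scipy.stats.norm (a sketch; the mean and standard deviation below are approximate values for Michelson's data, assumed here rather than taken from this notebook):
from scipy.stats import norm

mu, sigma = 299_852, 79    # approx. mean and std of Michelson's data in km/s (assumed)
p_greater = 1 - norm.cdf(300_000, loc=mu, scale=sigma)   # area to the right of 300,000
print(f'P(speed > 300,000 km/s) ~ {p_greater:0.3f}')     # ~0.03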
### Normal Cumulative Distribution Function (CDF)¶
In [70]:
df = pd.read_csv(sol_file)
df.drop(columns=['Unnamed: 0'], inplace=True)
df.columns = df.columns.str.strip()
df.head(2)
Out[70]:
date distinctness of image temperature (F) position of deflected image position of slit displacement of image in divisions difference between greatest and least B Cor revolutions per second radius (ft) value of one turn of screw velocity of light in air (km/s) remarks
0 June 5 3 76 114.85 0.300 114.55 0.17 1.423 -0.132 257.36 28.672 0.99614 299850 Electric light.
1 June 7 2 72 114.64 0.074 114.56 0.10 1.533 -0.084 257.52 28.655 0.99614 299740 P.M. Frame inclined at various angles
In [71]:
sns.histplot(df['velocity of light in air (km/s)'], bins=9, kde=True)
plt.show()
### Interpreting PDFs¶
Consider the PDF shown in the original exercise (not reproduced here). Which of the following is true?
Instructions
• x is more likely than not less than 10.
• x is more likely than not greater than 10.
• We cannot tell from the PDF if x is more likely to be greater than or less than 10.
• This is not a valid PDF because it has two peaks.
### Interpreting CDFs¶
The original exercise shows the CDF corresponding to the PDF you considered in the last exercise (again not reproduced here). Using the CDF, what is the probability that x is greater than 10?
Instructions
• 0.25: Correct! The value of the CDF at x = 10 is 0.75, so the probability that x < 10 is 0.75. Thus, the probability that x > 10 is 0.25.
• 0.75
• 3.75
• 15
## Introduction to the Normal distribution¶
The Normal distribution is famous, and we just used it as an example to learn about continuous distributions. It describes a continuous variable whose PDF is symmetric and has a single peak. The Normal distribution is parameterized by two parameters: the mean determines where the center of the peak is, and the standard deviation measures how wide the peak is, i.e. how spread out the data are.
Note that "mean" and "standard deviation" here are the names of the parameters of the Normal distribution. Don't confuse these with the mean and standard deviation computed directly from the data when doing exploratory data analysis. The nomenclature is confusing, but it's important to keep straight.
Adding a histogram of the Michelson measurements shows that the measured speed of light in air looks Normally distributed. Comparing the histogram to the PDF suffers from binning bias, so it's better to compare the ECDF of the data to the theoretical CDF of the Normal distribution. To compute the theoretical CDF, use np.random.normal to draw samples, then compute their CDF. As with sampling out of the Binomial distribution, we need to provide parameters, in this case the mean and standard deviation of the Normal distribution we are sampling from. The mean and standard deviation computed from the data are good estimates, so we compute them and pass them into np.random.normal to take our samples. We then use the ecdf() function we already wrote to compute the ECDFs of the data and of the Normally distributed theoretical samples, and finally plot the theoretical and empirical CDFs on the same plot. In the absence of binning bias, it's much clearer that the Michelson data are approximately Normally distributed. Now that you can sample out of the Normal distribution, let's practice using it.
### Normal distribution¶
• Describes a continuous variable whose PDF has a single symmetric peak
• mean of Normal distribution ≠ mean computed from data
• standard deviation of a Normal distribution ≠ standard deviation computed from data
In [72]:
mean = np.mean(df['velocity of light in air (km/s)'])
std = np.std(df['velocity of light in air (km/s)'])
samples = np.random.normal(mean, std, size=10000)
x, y = ecdf(df['velocity of light in air (km/s)'])
x_theor, y_theor = ecdf(samples)
sns.set()
plt.plot(x_theor, y_theor, label='theoretical')
plt.plot(x, y, marker='.', linestyle='none', label='measured', color='purple')
plt.legend()
plt.xlabel('speed of light (km/s)')
plt.ylabel('CDF')
plt.show()
### The Normal PDF¶
In this exercise, you will explore the Normal PDF and also learn a way to plot a PDF of a known distribution using hacker statistics. Specifically, you will plot a Normal PDF for various values of the variance.
Instructions
• Draw 100,000 samples from a Normal distribution that has a mean of 20 and a standard deviation of 1. Do the same for Normal distributions with standard deviations of 3 and 10, each still with a mean of 20. Assign the results to samples_std1, samples_std3 and samples_std10, respectively.
• Plot a histogram of each of the samples; for each, use 100 bins, also using the keyword arguments density=True (formerly normed=True) and histtype='step'. The latter keyword argument makes the plot look much like the smooth theoretical PDF. You will need to make 3 plt.hist() calls.
• Hit 'Submit Answer' to make a legend, showing which standard deviations you used, and show your plot! There is no need to label the axes because we have not defined what is being described by the Normal distribution; we are just looking at shapes of PDFs.
In [73]:
# Draw 100000 samples from Normal distribution with stds of interest: samples_std1, samples_std3, samples_std10
samples_std1 = np.random.normal(20, 1, size=100000)
samples_std3 = np.random.normal(20, 3, size=100000)
samples_std10 = np.random.normal(20, 10, size=100000)
# Make histograms
plt.figure(figsize=(7, 7))
for data in [samples_std1, samples_std3, samples_std10]:
plt.hist(data, density=True, bins=100, histtype='step')
# Make a legend, set limits and show plot
plt.legend(('std = 1', 'std = 3', 'std = 10'))
plt.ylim(-0.01, 0.42)
plt.show()
You can see how the different standard deviations result in PDFs of different widths. The peaks are all centered at the mean of 20.
### The Normal CDF¶
Now that you have a feel for how the Normal PDF looks, let's consider its CDF. Using the samples you generated in the last exercise (in your namespace as samples_std1, samples_std3, and samples_std10), generate and plot the CDFs.
Instructions
• Use your ecdf() function to generate x and y values for CDFs: x_std1, y_std1, x_std3, y_std3 and x_std10, y_std10, respectively.
• Plot all three CDFs as dots (do not forget the marker and linestyle keyword arguments!).
• Hit submit to make a legend, showing which standard deviations you used, and to show your plot. There is no need to label the axes because we have not defined what is being described by the Normal distribution; we are just looking at shapes of CDFs.
In [74]:
# Generate and Plot CDFs
for data in [samples_std1, samples_std3, samples_std10]:
x_theor, y_theor = ecdf(data)
plt.plot(x_theor, y_theor, marker='.', linestyle='none')
# Make a legend and show the plot
_ = plt.legend(('std = 1', 'std = 3', 'std = 10'), loc='lower right')
plt.show()
The CDFs all pass through the mean at the 50th percentile; the mean and median of a Normal distribution are equal. The width of the CDF varies with the standard deviation.
## The Normal distribution: Properties and warnings¶
• The Normal distribution is very important and widely used.
• In practice, it's often used to describe symmetric, single-peaked data
• For many of the statistical procedures you've heard of, Normality assumptions about the data are present.
• It's a very powerful distribution that seems to be ubiquitous in nature, not just in the field of statistics.
• There are important caveats about the distribution and we need to be careful when using it.
1. A dataset may not be Normally distributed even when you think it is
2. Another consideration is the lightness of the distribution's tails
• Under the Normal distribution, the probability of being more than 4 standard deviations from the mean is very small (see the sketch after this list)
• When you model data as Normally distributed, outliers are extremely unlikely
• Real datasets often have extreme values, and when this happens, the Normal distribution might not be the best description of the data
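How light are those tails? A one-line check with scipy.stats.norm (a sketch):
from scipy.stats import norm

# Probability of a draw more than 4 standard deviations above the mean
print(1 - norm.cdf(4))   # ~3.2e-05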
### Gauss and the 10 Deutschmark banknote¶
What are the mean and standard deviation, respectively, of the Normal distribution that was on the 10 Deutschmark banknote?
Instructions
• mean = 3, std = 1
• mean = 3, std = 2
• mean = 0.4, std = 1
• mean = 0.6, std = 6
### Are the Belmont Stakes results Normally distributed?¶
Since 1926, the Belmont Stakes has been a 1.5-mile race for 3-year-old thoroughbred horses. Secretariat ran the fastest Belmont Stakes in history in 1973. While that was the fastest year, 1970 was the slowest because of unusually wet and sloppy conditions. With these two outliers removed from the data set, compute the mean and standard deviation of the Belmont winners' times. Sample out of a Normal distribution with this mean and standard deviation using the np.random.normal() function and plot a CDF. Overlay the ECDF from the winning Belmont times. Are these close to Normally distributed?
Note: Justin scraped the data concerning the Belmont Stakes from the Belmont Wikipedia page.
Instructions
• Compute mean and standard deviation of Belmont winners' times with the two outliers removed. The NumPy array belmont_no_outliers has these data.
• Take 10,000 samples out of a normal distribution with this mean and standard deviation using np.random.normal().
• Compute the CDF of the theoretical samples and the ECDF of the Belmont winners' data, assigning the results to x_theor, y_theor and x, y, respectively.
• Hit submit to plot the CDF of your samples with the ECDF, label your axes and show the plot.
In [75]:
def time_to_sec(x):
"""Convert time in the form 2:28.51 to seconds"""
time_list = x.split(':')
return float(time_list[0]) * 60 + float(time_list[1])
In [76]:
df = pd.read_csv(belmont_file)
df['Time_sec'] = df['Time'].apply(time_to_sec)
Out[76]:
Year Winner Jockey Trainer Owner Time Track miles Time_sec
0 2016 Creator Irad Ortiz, Jr Steve Asmussen WinStar Farm LLC 2:28.51 Belmont 1.5 148.51
1 2015 American Pharoah Victor Espinoza Bob Baffert Zayat Stables, LLC 2:26.65 Belmont 1.5 146.65
2 2014 Tonalist Joel Rosario Christophe Clement Robert S. Evans 2:28.52 Belmont 1.5 148.52
3 2013 Palace Malice Mike Smith Todd Pletcher Dogwood Stable 2:30.70 Belmont 1.5 150.70
4 2012 Union Rags John Velazquez Michael Matz Phyllis M. Wyeth 2:30.42 Belmont 1.5 150.42
In [77]:
# gets the data in the same format as that used in the exercise
d_std = df['Time_sec'].std()
d_avg = df['Time_sec'].mean()
data = df['Time_sec'][(df['Time_sec'] >= d_avg - (2.5 * d_std)) & (df['Time_sec'] <= d_avg + (2.5 * d_std))]
belmont_no_outliers = np.array(data)
In [78]:
# Compute mean and standard deviation: mu, sigma
mu = np.mean(belmont_no_outliers)
sigma = np.std(belmont_no_outliers)
print(f'Mean: {mu:0.02f}\nStandard Deviation: {sigma:0.02f}')
Mean: 149.22
Standard Deviation: 1.62
#### Use np.random.normal with mean and std to get synthetic data¶
In [79]:
# Sample out of a normal distribution with this mu and sigma: samples
samples = np.random.normal(mu, sigma, size=10_000)
In [80]:
# Get the CDF of the samples and of the data
x_theor, y_theor = ecdf(samples)
x, y = ecdf(belmont_no_outliers)
In [81]:
# Plot the CDFs and show the plot
_ = plt.plot(x_theor, y_theor)
_ = plt.plot(x, y, marker='.', linestyle='none', color='g')
_ = plt.xlabel('Belmont winning time (sec.)')
_ = plt.ylabel('CDF')
plt.show()
The theoretical CDF and the ECDF of the data suggest that the winning Belmont times are, indeed, Normally distributed. This also suggests that in the last 100 years or so, there have not been major technological or training advances that have significantly affected the speed at which horses can run this race.
### What are the chances of a horse matching or beating Secretariat's record?¶
Assuming that the Belmont winners' times are Normally distributed (with the 1970 and 1973 results removed), what is the probability that the winner of a given Belmont Stakes will run it as fast or faster than Secretariat?
Instructions
• Take 1,000,000 samples from the normal distribution using the np.random.normal() function. The mean mu and standard deviation sigma are already loaded into the namespace of your IPython instance.
• Compute the fraction of samples that have a time less than or equal to Secretariat's time of 144 seconds.
• Print the result.
In [82]:
np.random.seed(seed=398)
# Take a million samples out of the Normal distribution: samples
samples = np.random.normal(mu, sigma, size=1_000_000)
# Compute the fraction that are as fast or faster than 144 seconds: prob
prob = np.sum(samples <= 144) / len(samples)
# Print the result
print(f'Probability of besting Secretariat: {prob}')
Probability of besting Secretariat: 0.000643
Great work! We had to take a million samples because the probability of a fast time is very low and we had to be sure to sample enough. We find only about a 0.06% chance of a horse running the Belmont as fast as Secretariat.
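The sampled estimate can also be cross-checked analytically (a sketch; scipy is an assumption here, since the course itself sticks to hacker statistics):
from scipy.stats import norm
print(norm.cdf(144, loc=mu, scale=sigma))  # ≈ 0.00064, consistent with the estimate above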
## The Exponential distribution¶
• Just as there are many named discrete distributions, there are many named continuous distributions.
• For example, at the bus stop in Poissonville, we know the number of buses arriving per hour is Poisson distributed.
• The amount of time between arrivals of buses is Exponentially distributed.
• The waiting time between arrivals of a Poisson process is Exponentially distributed (see the simulation sketch below).
• The single parameter is the mean waiting time.
• The distribution is not peaked.
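A minimal simulation tying the two stories together (a sketch; tau is a made-up mean waiting time):
tau = 0.1  # mean time between bus arrivals, in hours
inter_arrivals = np.random.exponential(tau, size=1_000_000)
arrival_times = np.cumsum(inter_arrivals)
# Arrivals per 1-hour window should be Poisson with mean 1/tau = 10
counts, _ = np.histogram(arrival_times, bins=np.arange(0, arrival_times[-1], 1))
print(counts.mean(), counts.var())  # mean ≈ variance ≈ 10, the Poisson signature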
### Possible Poisson Process¶
• Nuclear incidents since 1974:
• The timing of one incident is independent of all others, so the time between incidents should be Exponentially distributed.
• We can compute and plot the CDF we would expect based on the mean time between incidents and overlay that with the ECDF from the real data
• Take the usual approach where we draw many samples out of the Exponential distribution, using the mean inter-incident time as the parameter
mean = np.mean(inter_times)
samples = np.random.exponential(mean, size=10000)
x, y = ecdf(inter_times)
x_theor, y_theor = ecdf(samples)
_ = plt.plot(x_theor, y_theor)
_ = plt.plot(x, y, marker='.', linestyle='none')
_ = plt.xlabel('time (days)')
_ = plt.ylabel('CDF')
plt.show()
• It's close to being Exponentially distributed, indicating nuclear incidents can be modeled as a Poisson process.
• The Exponential and Normal are just two of many examples of continuous distributions.
• In many cases, we just simulate the story to get the CDF.
• If you can simulate a story, you can get the distribution.
### Matching a story and a distribution¶
How might we expect the time between Major League no-hitters to be distributed? Be careful here: a few exercises ago, we considered the probability distribution for the number of no-hitters in a season. Now, we are looking at the probability distribution of the time between no-hitters.
• Normal
• Exponential
• Poisson
• Uniform
### Waiting for the next Secretariat¶
Unfortunately, Justin was not alive when Secretariat ran the Belmont in 1973. Do you think he will get to see a performance like that? To answer this, you are interested in how many years you would expect to wait until you see another performance like Secretariat's. How is the waiting time until the next performance as good or better than Secretariat's distributed? Choose the best answer.
• Normal, because the distribution of Belmont winning times are Normally distributed.
• Normal, because there is a most-expected waiting time, so there should be a single peak to the distribution.
• Exponential: It is very unlikely for a horse to be faster than Secretariat, so the distribution should decay away to zero for high waiting time.
• Exponential: A horse as fast as Secretariat is a rare event, which can be modeled as a Poisson process, and the waiting time between arrivals of a Poisson process is Exponentially distributed.
Correct! The Exponential distribution describes the waiting times between rare events, and Secretariat is rare!
### If you have a story, you can simulate it!¶
Sometimes, the story describing our probability distribution does not have a named distribution to go along with it. In these cases, fear not! You can always simulate it. We'll do that in this and the next exercise.
In earlier exercises, we looked at the rare event of no-hitters in Major League Baseball. Hitting the cycle is another rare baseball event. When a batter hits the cycle, he gets all four kinds of hits, a single, double, triple, and home run, in a single game. Like no-hitters, this can be modeled as a Poisson process, so the time between hits of the cycle is also Exponentially distributed.
How long must we wait to see both a no-hitter and then a batter hit the cycle? The idea is that we have to wait some time for the no-hitter, and then after the no-hitter, we have to wait for hitting the cycle. Stated another way, what is the total waiting time for the arrival of two different Poisson processes? The total waiting time is the time waited for the no-hitter, plus the time waited for the hitting the cycle.
Now, you will write a function to sample out of the distribution described by this story.
Instructions
• Define a function with call signature successive_poisson(tau1, tau2, size=1) that samples the waiting time for a no-hitter and a hit of the cycle.
• Draw waiting times tau1 (size number of samples) for the no-hitter out of an exponential distribution and assign to t1.
• Draw waiting times tau2 (size number of samples) for hitting the cycle out of an exponential distribution and assign to t2.
• The function returns the sum of the waiting times for the two events.
#### def successive_poisson¶
In [83]:
def successive_poisson(tau1, tau2, size=1):
"""Compute time for arrival of 2 successive Poisson processes."""
# Draw samples out of first exponential distribution: t1
t1 = np.random.exponential(tau1, size)
# Draw samples out of second exponential distribution: t2
t2 = np.random.exponential(tau2, size)
return t1 + t2
### Distribution of no-hitters and cycles¶
Now, you'll use your sampling function to compute the waiting time to observe a no-hitter and hitting of the cycle. The mean waiting time for a no-hitter is 764 games, and the mean waiting time for hitting the cycle is 715 games.
Instructions
• Use your successive_poisson() function to draw 100,000 out of the distribution of waiting times for observing a no-hitter and a hitting of the cycle.
• Plot the PDF of the waiting times using the step histogram technique of a previous exercise. Don't forget the necessary keyword arguments. You should use bins=100, density=True (called normed=True in older matplotlib), and histtype='step'.
• Label the axes.
• Show your plot.
In [84]:
# Draw samples of waiting times: waiting_times
waiting_times = successive_poisson(764, 715, 100_000)
In [85]:
# Make the histogram
plt.hist(waiting_times, density=True, bins=100, histtype='step')
plt.xlabel('waiting time')
plt.ylabel('PDF: probability of occurrence')
plt.show()
Notice that the PDF is peaked, unlike the waiting time for a single Poisson process. For fun (and enlightenment), I encourage you to also plot the CDF.
#### CDF for observing a no-hitter and a hitting of the cycle.¶
In [86]:
x_theor, y_theor = ecdf(waiting_times)
plt.plot(x_theor, y_theor, marker='.', linestyle='none')
plt.ylabel('CDF')
plt.xlabel('x')
plt.show()
## Final thoughts and encouragement toward Statistical Thinking II¶
You can now
• Construct instructive plots
• Compute informative summary statistics
• Use "hacker" statistics
• Think probabilistically
• The knowledge learned in this course really shines when directly applied to statistical inference problems.
In the next course
• Estimate parameter values
• Perform linear regressions
• Compute confidence intervals to couch the conclusions drawn from data in the appropriate probabilistic language
• Perform hypothesis tests, such as A/B tests, to help discern differences between data sets.
|
2021-10-20 00:45:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5602660179138184, "perplexity": 2025.8040180088294}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585290.83/warc/CC-MAIN-20211019233130-20211020023130-00040.warc.gz"}
|
https://mace.readthedocs.io/en/latest/user_guide/quantization_usage.html
|
# Quantization¶
MACE supports two kinds of quantization mechanisms, i.e.,
• Quantization-aware training (Recommend)
After pre-training the model in floating point, insert simulated quantization operations into the model and fine-tune the new model. Refer to TensorFlow quantization-aware training.
• Post-training quantization
After pre-training the model in floating point, estimate the output range of each activation layer using sample inputs.
Note
quantize_weights and quantize_nodes should not be specified when using TransformGraph tool if using MACE quantization.
## Quantization-aware training¶
It is recommended that developers fine-tune the fixed-point model, as experiments show that accuracy can be improved this way, especially for lightweight models, e.g., MobileNet. The only thing you need to make it run using MACE is to add the following config to the model yaml file:
1. input_ranges: the ranges of model's inputs, e.g., -1.0,1.0.
2. quantize: set quantize to be 1.
## Post-training quantization¶
MACE supports post-training quantization if you want to try quantizing the model directly without fine-tuning. This method requires the developer to calculate the tensor range of each activation layer statistically, using sample inputs. MACE provides tools to do these statistics with the following steps (using inception-v3 from the MACE Model Zoo as an example):
1. Convert original model to run on CPU host without obfuscation (by setting target_abis to host, runtime to cpu, and obfuscate to 0, appending :0 to output_tensors if missing in yaml config).
# For CMake users:
python tools/python/convert.py --config ../mace-models/inception-v3/inception-v3.yml \
--quantize_stat
# For Bazel users:
python tools/converter.py convert --config ../mace-models/inception-v3/inception-v3.yml \
--quantize_stat
2. Log tensor range of each activation layer by inferring several samples on CPU host. Sample inputs should be representative to calculate the ranges of each layer properly.
# Convert images to input tensors for MACE, see image_to_tensor.py for more arguments.
python tools/image/image_to_tensor.py --input /path/to/directory/of/input/images \
--output_dir /path/to/directory/of/input/tensors --image_shape=299,299,3
# Rename input tensors to start with input tensor name(to differentiate multiple
# inputs of a model), input tensor name is what you specified as "input_tensors"
# in yaml config. For example, "input" is the input tensor name of InceptionV3 as below.
rename 's/^/input/' *
# Run with input tensors
# For CMake users:
python tools/python/run_model.py --config ../mace-models/inception-v3/inception-v3.yml \
--quantize_stat --input_dir /path/to/directory/of/input/tensors --output_dir='' \
--target_abi=host --build > range_log
# For Bazel users:
python tools/converter.py run --config ../mace-models/inception-v3/inception-v3.yml \
--quantize_stat --input_dir /path/to/directory/of/input/tensors > range_log
3. Calculate the overall range of each activation layer. You may specify --percentile or --enhance and --enhance_ratio to try different ranges and see which is better. Experimentation shows that the default percentile and enhance_ratio work fine for several common models.
python tools/python/quantize/quantize_stat.py --log_file range_log > overall_range
4. Convert quantized model (by setting target_abis to the final target abis, e.g., armeabi-v7a, quantize to 1 and quantize_range_file to the overall_range file path in yaml config).
## Mixing usage¶
As quantization-aware training is still evolving, there are some operations that are not supported, which leaves some activation layers without tensor range. In this case, post-training quantization can be used to calculate these missing ranges. To mix the usage, just get a quantization-aware training model and then go through all the steps of post-training quantization. MACE will use the tensor ranges from the overall_range file of post-training quantization if the ranges are missing from the quantization-aware training model.
## Supported devices¶
MACE supports running quantized models on ARM CPU and other acceleration devices, e.g., Qualcomm Hexagon DSP and MediaTek APU. ARM CPUs are ubiquitous and can speed up most edge devices. However, specialized AI devices may run much faster than an ARM CPU while consuming much lower power. Headers and libraries of these devices can be found in the third_party directory.
• To run models on ARM CPU, users should
1. Set runtime in yaml config to cpu (Armv8.2+dotproduct instructions will be used automatically if detected by getauxval, which can greatly improve convolution/gemm performance).
• To run models on Hexagon DSP, users should
1. Set runtime in yaml config to dsp.
2. Make sure the SoC of the phone is manufactured by Qualcomm and has HVX support.
3. Make sure the phone has secure boot disabled (once enabled, it cannot be reversed, so you can probably only get such phones directly from manufacturers). This can be checked by executing the following command.
adb shell getprop ro.boot.secureboot
The return value should be 0.
4. Root the phone.
5. Sign the phone by using testsig provided by Qualcomm. (Download the Qualcomm Hexagon SDK first, plug the phone into a PC, and run scripts/testsig.py.)
6. Push third_party/nnlib/v6x/libhexagon_nn_skel.so to /system/vendor/lib/rfsa/adsp/. You can check docs/feature_matrix.html in the Hexagon SDK to make sure which version to use.
Then, there you go, you can run MACE on the Hexagon DSP. This indeed seems like a whole lot of work to do. The good news is that, starting with the SM8150 family (some devices with old firmware may still not work), signature-free dynamic module offload is enabled on the cDSP, so steps 3-6 can be skipped. This can be achieved by calling SetHexagonToUnsignedPD() before creating the MACE engine.
• To run models on MediaTek APU, users should
1. Set runtime in yaml config to apu.
2. Make sure the SoC of the phone is manufactured by MediaTek and has APU support.
3. Push third_party/apu/mtxxxx/libapu-platform.so to /vendor/lib64/.
|
2022-05-28 18:57:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25921645760536194, "perplexity": 12679.446888111177}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663019783.90/warc/CC-MAIN-20220528185151-20220528215151-00346.warc.gz"}
|
http://www.sciencemadness.org/talk/viewthread.php?tid=2171&page=5
|
Sciencemadness Discussion Board » Special topics » Technochemistry » Electrical Furnace Contruction - My design and implementation
Author: Subject: Electrical Furnace Contruction - My design and implementation
12AX7
Post Harlot
Posts: 4803
Registered: 8-3-2005
Location: oscillating
Member Is Offline
Mood: informative
Well lemme see here. 10" dia., 10" tall, say 15 turns, wire gauge doesn't matter but call it #20 (AWG for Cu I think), stainless steel resistivity (close enough), the calculator program I have shows 44uH and 5.2 ohms DCR (not unreasonable). At 60Hz, that's 0.017 ohms reactance, or a Q of 1/313th. (An intentional inductor has Q = 50 or better, a difference of 5 orders of magnitude or so! )
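(For anyone who wants to reproduce those figures, the reactance and Q follow from X = 2*pi*f*L; a quick sketch using the numbers quoted above:)
import math
L, R, f = 44e-6, 5.2, 60.0   # inductance (H), DC resistance (ohm), mains frequency (Hz)
X = 2 * math.pi * f * L      # ≈ 0.017 ohm of inductive reactance
print(X, X / R)              # Q = X/R ≈ 1/313, as stated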
One thing you'll notice is, for a solenoid coil, you will have 5 or 10 or 50 amps through 10 or 20 turns, and that's 50 to 1000 amp-turns, enough to make a piece of iron buzz when energized.
(My induction heater is currently around 1kW power output and runs up to 2000 amp-turns at 21kHz, enough to make a piece of aluminum or copper, placed in the coil, stand on end from the repulsive force!)
Tim
Seven Transistor Labs LLC http://seventransistorlabs.com/
Electronic Design, from Concept to Layout.
Need engineering assistance? Drop me a message!
Cloner
Hazard to Others
Posts: 150
Registered: 7-12-2004
Member Is Offline
Mood: apocalyptic
OK, nothing beats an example. Thank you.
Magpie
lab constructor
Posts: 5939
Registered: 1-11-2003
Location: USA
Member Is Offline
Mood: Chemistry: the subtle science.
I have been wanting to build a tube furnace for some time. This would be a significant project using a horizontal clamshell design with castable insulating mortar and a castable refractory. It would also have a temperature controller and be capable of developing 1300C.
I decided to build a small scale furnace as a first step to get some experience with high-temperature materials and fabrication techniques, as well as provide a furnace for another project. I built this furnace as a "vertical tube" furnace using a 3 lb coffee can as the shell. The dimensions are: H = 7.5", OD = 6", and ID = 2.5". The refractory was made using axehandle's formula. The heating coil is a 5000 W, 240 VAC, D = 0.5", L = 6' coil, salvaged from one of the banks of the furnace that formerly heated my house. For my coffee can furnace I run it at 120 VAC, 1250 W.
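(That 1250 W figure follows from scaling the rated power by voltage squared at fixed resistance; a quick sanity check, ignoring the element's resistance change with temperature:)
V_rated, P_rated = 240.0, 5000.0
R_elem = V_rated**2 / P_rated   # element resistance, about 11.5 ohm
print(120.0**2 / R_elem)        # 1250.0 W at 120 VAC, matching the figure above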
When power is applied it comes up to a red-orange heat. Since the insulation is not very thick the shell of the furnace gets pretty hot also. This is not good and will likely limit its lifespan. I have not kept it at full power for more than a few minutes so far. I have attached a picture below, and one in the next post.
I also am building a temperature controller for this furnace which will be suitable for any furnace revisions as well as the horizontal tube, clamshell type that I want to eventually build. The controller framework is shown along with the top view of the furnace in the next picture. It is complete except for two components that should arrive any day now: (1) a solid state relay, and (2) a PID controller with type K thermocouple. (These controllers now are surprisingly cheap.)
A few comments on axehandle's refractory formula: Making/using this was different from any of my past mortar, grout, or concrete experience. First of all it was very fluid. This is contrary to my other experience where I only used enough water to make the material workable. Second, the perlite floated to the top and the sand sunk to the bottom so the mix was not homogeneous. So I just let it set for a few hours, occasionally stirring and checking its stiffness. Eventually it did stiffen enough that I felt it was reasonably homogeneous but still pourable and workable. After hardening it is not strong like concrete. But so far it has only developed minor cracks and seems to be doing the job for which it is intended. Until someone else comes up with a better formula this is the best we have, and is cheap, OTC, and at least so far, is doing the job for me. So .....thank you, axehandle.
The single most important condition for a successful synthesis is good mixing - Nicodem
Magpie
lab constructor
Posts: 5939
Registered: 1-11-2003
Location: USA
Member Is Offline
Mood: Chemistry: the subtle science.
Here's the 2nd picture:
The single most important condition for a successful synthesis is good mixing - Nicodem
Magpie
lab constructor
Posts: 5939
Registered: 1-11-2003
Location: USA
Member Is Offline
Mood: Chemistry: the subtle science.
I just received the PID (with auto-tuning) controller for my furnace temperature controller. I connected everything up and tried it out by controlling the temperature of the outside of a lamp bulb to 100F. It works well. This controller should be suitable for most any furnace. I assembled this controller for about $100.
[Edited on 27-9-2006 by Magpie]
The single most important condition for a successful synthesis is good mixing - Nicodem
12AX7
Post Harlot
Posts: 4803
Registered: 8-3-2005
Location: oscillating
Member Is Offline
Mood: informative
Those Watlows are nice, the ones that are four times bigger (92 or 93?) are awesome: programmable, self-adjusting, the works. Can't say I'd like to pay for one though!
Tim
Seven Transistor Labs LLC http://seventransistorlabs.com/
Electronic Design, from Concept to Layout.
Need engineering assistance? Drop me a message!
Al Koholic
Hazard to Self
Posts: 98
Registered: 2-12-2002
Member Is Offline
Mood: Seeking ligand
So guys, I'm also building a furnace with a lot of help from all the great work that has been done on this thread. Thanks all for the contributions! I've got most of it hashed out and I have almost all the parts now but there is something I think we need to clear up. How are you all connecting the element wire to the power leads? Not like series or parallel...but what kind of connectors are you using? I ESPECIALLY like Magpie's ceramic(?) connectors. Magpie...I fear you got those from the salvaged furnace. If not, please tell me where you got them! Many google searches have yielded little in the way of solutions. Definitely nothing as elegant as what you have there.
Magpie
lab constructor
Posts: 5939
Registered: 1-11-2003
Location: USA
Member Is Offline
Mood: Chemistry: the subtle science.
Al, those ceramic rings did come from my furnace. They really are just sort of there for looks in my little furnace. A bolt runs through them and the wires are just secured with washers and the bolt, and a nut. The element ends transition from the dark, rough looking coil wire to a clean shiny metal wire end that looks like it might be made of nickel. This is for about 1" on each end. That's the way I found the coil. When I build my high temperature furnace I will likely be using 1mm diameter Kanthal A-1 to form my heating coils. I will likely just have to lead the wire ends outside the furnace wall a couple inches for terminals. I don't know what else to do. This is what rikkitikitavi recommends early on in this thread. Hope this helps.
The single most important condition for a successful synthesis is good mixing - Nicodem
Al Koholic
Hazard to Self
Posts: 98
Registered: 2-12-2002
Member Is Offline
Mood: Seeking ligand
Had a feeling...
Well I had a feeling they were from the furnace. I was at Home Depot today and someone mentioned a few stores I should check out so I'll try and find those. It wasn't so much I was wondering how physically to do it, but more what the actual connectors you all were using. I guess I'm just in love with the way those look on your furnace Magpie. I've got the regulator box with fans basically ready to go soon and just have to get my element wire in the mail. I'm using K-A1 20AWG (.8mm). Should be pretty nice, I'm pumped. Too many plans to use the thing for. Knife making (hopefully pattern-welded "damascus" style can be achieved with it) is one of the main reasons I am building the thing in the first place. No need for worrying about flame quality or fuel quantity anymore!
I suppose I'll have to figure out how to upload some pics of my completed setup when all is said and done.
Magpie
lab constructor
Posts: 5939
Registered: 1-11-2003
Location: USA
Member Is Offline
Mood: Chemistry: the subtle science.
I've seen little ceramic parts (tubes, etc) on some websites for sale but can't remember for sure which ones. I think I looked under "ceramic insulators" and found some such. They are available, just difficult to find as they are so specialized. Pottery and kiln shops would also be worth a look.
[Edited on 29-9-2006 by Magpie]
The single most important condition for a successful synthesis is good mixing - Nicodem
tumadre
Hazard to Others
Posts: 169
Registered: 10-5-2005
Member Is Offline
Mood: No Mood
For knife making you still want a reducing gas mix inside the furnace. I have used 17 gauge electric fence wire (15 gauge AWG), and for some smaller stuff I have used 1/8 inch steel rod as an alternative to nichrome; it doesn't last but 10-40 hours, but the cost is nothing. As the bolt goes, I have not tried stainless; I'm sure it or something else is needed, and don't even think about exposing copper to more than 500C.
12AX7
Post Harlot
Posts: 4803
Registered: 8-3-2005
Location: oscillating
Member Is Offline
Mood: informative
I'm interested to know how you intend to get hot enough to forge weld, without induction (hello) or fire.
Tim
Seven Transistor Labs LLC http://seventransistorlabs.com/
Electronic Design, from Concept to Layout.
Need engineering assistance? Drop me a message!
Al Koholic
Hazard to Self
Posts: 98
Registered: 2-12-2002
Member Is Offline
Mood: Seeking ligand
Well, steel welds at a lower temp than iron; the higher % C, the lower the welding temp. I've read that in some alloys, the welding temp can be as much as 1000F below the melting point. Usually it's around 300-400 below. Anyway, the furnace may never reach that T, but it will still be useful for forging. All that is a long way off anyhow... As far as a reducing gas mix, would not keeping the lid on during the heating cause the O2 to be used up by light surface oxidation forming some scale? This is typical and needs to be brushed off in any forging of course. I'm hoping that I wouldn't need to add anything extra to the furnace to eat O2 but it wouldn't be a big deal...little OT thought.
tumadre
Hazard to Others
Posts: 169
Registered: 10-5-2005
Member Is Offline
Mood: No Mood
All the websites I've read use a carburizing furnace because the gas reduces the iron oxide as the weld is taking place; no amount of flux will work without at least a neutral gas mix. No amount of iron will pull the O2 out; charcoal might, but I wouldn't even let nitrogen get in without a more active gas like propane floating in the furnace. If you hold the electric element internal to the kiln at 1350C, you will be able to forge the weld, but it will take too much time to re-heat the knife to 1100 or 1250C. IIRC iron melts at 1536C. Aren't you pulling the knife out and beating it with a hammer like 300 or more times?
Quince
International Hazard
Posts: 773
Registered: 31-1-2005
Location: Vancouver, BC
Member Is Offline
Mood: No Mood
Quote: I just received the PID (with auto-tuning) controller for my furnace temperature controller....I assembled this controller for about $100.
What the hell?! You can build an analog PID for $10 in parts. Plus, autotuning can never be as good as good manual tuning, and, of course, real men don't use autotuning -.-
"One of the surest signs of Conrad's genius is that women dislike his books." --George Orwell
Al Koholic
Hazard to Self
Posts: 98
Registered: 2-12-2002
Member Is Offline
Mood: Seeking ligand
Tumadre, I'll have to see how it goes with the heating times. I didn't think it would be too long to do reheating but I'll have to see. Like I said, this might not be a furnace I can use for forge-welding. Out of curiosity, why would N2 be a problem in the furnace? On an interesting side note though, no, you don't have to start with the final layer count to begin with. You can make a stack of say, 5 layers to start and weld them. Then you cut the result in half and restack to get to 10 layers. Then to 20, and so on. Makes getting layer counts in the 50-250 range (the range that looks good) easier than starting with a single bar and folding the metal over all that much, especially if you cheat by using two steels with slightly different compositions.
Magpie
lab constructor
Posts: 5939
Registered: 1-11-2003
Location: USA
Member Is Offline
Mood: Chemistry: the subtle science.
Quote: What the hell?! You can build an analog PID for $10 in parts. Plus, autotuning can never be as good as good manual tuning, and, of course, real men don't use autotuning -.-
That PID cost $35. I'd be pretty impressed if you could duplicate that for$10 alright. It accepts all common thermocouples and thermistors with no work on my part. In my younger years I might agree with you about auto-tuning. Right now I'm very glad to have it.
The single most important condition for a successful synthesis is good mixing - Nicodem
bio2
National Hazard
Posts: 447
Registered: 15-1-2005
Member Is Offline
Mood: No Mood
You can salvage resistance wire ceramic holders from a clothes dryer.
The ends are usually welded to stainless steel terminals.
Damn, I must be getting old as I don't remember PID controllers less than about $200. Been awhile, but autotuning never did live up to its claim. Programming a manual sequence is the best way in my experience.
[Edited on 1-10-2006 by bio2]
not_important
International Hazard
Posts: 3873
Registered: 21-7-2006
Member Is Offline
Mood: No Mood
Quote: Originally posted by Al Koholic ... Out of curiosity, why would N2 be a problem in the furnace? ....
Most of the resistance wire for heating is designed to work in an oxidising atmosphere. Running it in reduction may greatly shorten its life. Running it in a neutral atmosphere, after breaking the wire in under oxidising conditions, may be OK; but if traces of reducing compounds get in you could be back to the short lifespan problem.
Maya
National Hazard
Posts: 263
Registered: 3-10-2006
Location: Mercury
Member Is Offline
Mood: molten
Has anybody tried, thought about, or done an argon arc furnace?
This would be for melting oxygen-sensitive metals over 1700 Celsius.
Another idea is an IR image furnace for the same application; this focuses concentrated light, using mirrors, on the sample in a quartz tube to melt it.
Quince
International Hazard
Posts: 773
Registered: 31-1-2005
Location: Vancouver, BC
Member Is Offline
Mood: No Mood
This reminds me of the infrared soldering such as http://www.aoyue.de/images/prod/Aoyue_710_1.jpg
Halogen light-type infrared source in a focusing reflective enclosure.
\"One of the surest signs of Conrad\'s genius is that women dislike his books.\" --George Orwell
tumadre
Hazard to Others
Posts: 169
Registered: 10-5-2005
Member Is Offline
Mood: No Mood
N2 may not be a problem.
To forge a weld, all the impurities between the steels must properly flow out as the steel joins.
Flux allows that to happen, but the flux can't cover all the steel at all times, so on the first few knives you manage to make, you may lose 1/3 of the steel to oxide. So you need something to displace the oxygen, and nitrogen won't do that unless you have a tank of it.
I remember somewhere someone mentioned a salt bath to heat the steel, but that may have been for heat tempering at lower temps.
Fleaker
International Hazard
Posts: 1243
Registered: 19-6-2005
Member Is Offline
Mood: nucleophilic
Quote: Originally posted by tumadre I remember somewhere someone mentioned a salt bath to heat the steel, but that may have been for heat tempering at lower temps.
It would have to be at lower temperatures if you're talking about NaCl as it would have too much volatility at those temperatures. I've melted NaCl/CaCl2 before to use as an aluminum flux and it has significant volatility at only 700*C.
"Kid, you don't even know just what you don't know. "
--The Dark Lord Sauron
fuse123
Harmless
Posts: 12
Registered: 26-10-2006
Member Is Offline
Mood: No Mood
what about inductance furnace any one here have any information to make it
12AX7
Post Harlot
Posts: 4803
Registered: 8-3-2005
Location: oscillating
Member Is Offline
Mood: informative
Quote: Originally posted by fuse123 what about inductance furnace any one here have any information to make it
WTF, you just posted in the thread with links to many details, in fact one of approximately three such highly detailed websites on the subject on the whole internet as far as I can tell!! Did you post this before discovering the thread or are you truly that daft?
Seven Transistor Labs LLC http://seventransistorlabs.com/
Electronic Design, from Concept to Layout.
Need engineering assistance? Drop me a message!
|
2022-09-28 11:51:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3191477656364441, "perplexity": 6712.910957496613}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00149.warc.gz"}
|
http://www.chegg.com/homework-help/questions-and-answers/resistors-values-5-8-14-ohms-connected-series-circuit-85-v-battery-rate-energy-delivered-1-q937854
|
## Rate of energy
Three resistors with values of 5, 8, and 14 ohms are connected in series in a circuit with a 8.5 V battery. At what rate is energy delivered to the 14 ohm resistor?
I calculated the current, I = 0.31 A, and the equivalent resistance, R = 27 ohms.
Next, I used P = I²R to calculate the rate of energy, but it's not right. What is the answer, in W?
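For reference, a quick check of the intended arithmetic (the series current flows through every resistor, but the power delivered to the 14 ohm resistor uses that resistor's own resistance, not the equivalent resistance):
V = 8.5                  # battery voltage (V)
R_eq = 5 + 8 + 14        # 27 ohm equivalent series resistance
I = V / R_eq             # ≈ 0.315 A through every resistor
print(I**2 * 14)         # ≈ 1.39 W delivered to the 14 ohm resistor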
|
2013-05-20 05:47:48
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8512718081474304, "perplexity": 1179.4561527155547}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698354227/warc/CC-MAIN-20130516095914-00085-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://www.quizover.com/course/section/fraction-part-function-fpf-by-openstax
|
# 5.2 Periodic functions (Page 2/5)
Page 2 / 5
## Basic periodic functions
Not many of the functions that we encounter are periodic. There are a few functions, however, which are periodic by their very definition. We are, so far, familiar with the following periodic functions in this course:
• Constant function, (c)
• Trigonometric functions, (sinx, cosx, tanx etc.)
• Fraction part function, {x}
The six trigonometric functions are the most commonly used periodic functions. They are used in various combinations to generate other periodic functions. In general, we might not determine the periodicity of each function by definition. It is more convenient to know the periods of standard functions, like those of the six trigonometric functions, their integral exponents and certain other standard forms/functions. Once we know the periods of standard functions, we use different rules, properties and results of periodic functions to determine the periods of other functions, which are formed as compositions or combinations of standard periodic functions.
## Constant function
For a constant function to be a periodic function,
$f(x+T) = f(x)$
By the definition of a constant function,
$f(x+T) = f(x) = c$
Clearly, a constant function meets the requirement of a periodic function, but there is no definite, fixed or least period. The relation of periodicity, here, holds for any change in x. We, therefore, conclude that a constant function is a periodic function without a period.
## Trigonometric functions
Graphs of trigonometric functions (as described in the module titled trigonometric function) clearly show that periods of sinx, cosx, cosecx and secx are “2π” and that of tanx and cotx are “π”. Here, we shall mathematically determine periods of few of these trigonometric functions, using definition of period.
## Sine function
For sinx to be a periodic function,
$\sin(x+T) = \sin x$
$x+T = n\pi + (-1)^n x; \quad n \in Z$
The term $(-1)^n$ evaluates to 1 if n is an even integer. In that case,
$x+T = n\pi + x$
Clearly, T = nπ, where n is an even integer. The least positive value of "T", i.e. the period of the function, is:
$T = 2\pi$
## Cosine function
For cosx to be a periodic function,
$\cos(x+T) = \cos x$
$\Rightarrow x+T = 2n\pi \pm x; \quad n \in Z$
Either,
$\Rightarrow x+T = 2n\pi + x$
$\Rightarrow T = 2n\pi$
or,
$\Rightarrow x+T = 2n\pi - x$
$\Rightarrow T = 2n\pi - 2x$
The first set of values is independent of "x". Hence,
$T = 2n\pi; \quad n \in Z$
The least positive value of "T", i.e. the period of the function, is:
$T = 2\pi$
## Tangent function
For tanx to be a periodic function,
$\tan(x+T) = \tan x$
$x+T = n\pi + x; \quad n \in Z$
Clearly, T = nπ; n ∈ Z. The least positive value of "T", i.e. the period of the function, is:
$T = \pi$
## Fraction part function (fpf)
The fraction part function (FPF) is related to a real number "x" and the greatest integer function (GIF) as $\{x\} = x - [x]$. We have seen that the greatest integer function returns the integer which is either equal to "x" or less than "x". To understand the nature of the function, let us compute a few function values:
x     [x]   x - [x]
1     1     0
1.25  1     0.25
1.5   1     0.5
1.75  1     0.75
2     2     0
2.25  2     0.25
2.5   2     0.5
2.75  2     0.75
3     3     0
3.25  3     0.25
3.5   3     0.5
3.75  3     0.75
4     4     0
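A minimal sketch of the fraction part function in Python, using math.floor for the greatest integer function (it reproduces the tabulated values above):
import math
def frac(x):
    return x - math.floor(x)   # {x} = x - [x]
print([frac(v) for v in (1, 1.25, 1.5, 1.75, 2)])  # [0, 0.25, 0.5, 0.75, 0]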
|
2017-06-23 05:15:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 19, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8562222719192505, "perplexity": 1282.0767967070497}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320003.94/warc/CC-MAIN-20170623045423-20170623065423-00259.warc.gz"}
|
https://www.cs.mcgill.ca/~hatami/
|
## Hamed Hatami
I'm an associate professor at the School of computer science and an associate member at the department of Mathematics and Statistics at McGill University.
I received my PhD from the department of Computer Science, University of Toronto under the supervision of Professors Michael Molloy and Balazs Szegedy. Before joining McGill, I spent a year as a Veblen fellow at the department of Mathematics, Princeton University.
## Research Interests
• Analytic methods in Combinatorics and Theoretical Computer Science
• Limit theory for graph sequences
• Analysis of Boolean functions
## Contact Information
#### E-mail:
hatami at cs . mcgill . ca
#### Office:
McConnell Engineering Building, Room 308
McConnell Engineering Bldg, Room 318
3480 University Montreal, Qc, Canada, H3A 0E9
# Publications
1. H. Hatami, Sergey Norin, On the boundary of the region defined by homomorphism densities,
Journal of Combinatorics, to appear [arXiv]
2. H. Hatami, Yingjie Qian, The Unbounded-Error Communication Complexity of symmetric XOR functions,
submitted. [arXiv]
3. Yuval Filmus, H. Hatami, Yaqiao Li, Suzin You, Information complexity of the AND function in the two-party and multi-party settings,
COCOON 2017. [arXiv]
4. Yuval Dagan, Yuval Filmus, H. Hatami, Yaqiao Li, Trading information complexity for error,
CCC 2017. [arXiv]
5. H. Hatami, Kaave Hosseini, Shachar Lovett, Structure of protocols for XOR functions, FOCS 2016. [ECCC]
6. H. Hatami, Yingjie Qian, Teaching dimension, VC dimension, and critical sets in Latin squares,
Journal of Combinatorics, to appear [arXiv]
7. H. Hatami, Victoria de Quehen, On the additive bases problem in finite fields,
Electronic Journal of Combinatorics, to appear [arXiv]
8. H. Hatami, Pooya Hatami, Yaqiao Li, A characterization of functions with vanishing averages over products of disjoint sets,
European J. Combin. 56 (2016), 81--93. [arXiv]
9. Yuval Filmus, H. Hatami, Nathan Keller, Noam Lifshitz, On the sum of the L1 influences of bounded functions,
Israel Journal of Mathematics. (2016), no. 1, 167--192. [arXiv]
10. H. Hatami, Pooya Hatami, Shachar Lovett, General systems of linear forms: equidistribution and true complexity,
Advances in Mathematics 292 (2016), 446--477. [arXiv]
11. H. Hatami, Svante Janson, Balazs Szegedy, Graph properties, graph limits and entropy,
Journal of Graph Theory. [arXiv]
12. H. Hatami, Laszlo Lovasz, Balazs Szegedy, Limits of local-global convergent graph sequences,
Geometric and Functional Analysis, 24 (2014), no. 1, 269–296. [arXiv]
13. H. Hatami, Pooya Hatami, James Hirst, Limits of Boolean Functions on F_p^n,
14. H. Hatami, James Hirst, Serguei Norine, The inducibility of blow-up graphs,
J. Combin. Theory Ser. B 109 (2014), 196–212. [arXiv]
15. H. Hatami Shachar Lovett, Estimating the distance from testable affine-invariant properties,
FOCS 2013. [arXiv]
16. Arnab Bhattacharyya, Eldar Fischer, H. Hatami, Pooya Hatami, Shachar Lovett, Every locally characterized affine-invariant property is testable,
STOC 2013. [arXiv]
17. H. Hatami, Serguei Norine, The entropy of random-free graphons and properties,
Combinatorics Probability and Computing, 22 (2013), no. 4, 517–526. [arXiv]
18. H. Hatami and Shachar Lovett, Correlation testing for affine invariant properties on F_p^n in the high error regime,
STOC 2011. [arXiv]
19. Anil Ada, Omar Fawzi, H. Hatami, Spectral norm of symmetric functions,
APPROX-RANDOM 2012: 338-349. [arXiv]
20. H. Hatami, Jan Hladky, Daniel Kral, Serguei Norine, Alexander Razborov, On the number of pentagons in triangle-free graphs,
J. Combin. Theory Ser. A, 120 (2013), no. 3, 722-732. [arXiv]
21. H. Hatami, Jan Hladky, Daniel Kral, Serguei Norine, Alexander Razborov, Non-three-colorable common graphs exist,
Combinatorics Probability and Computing, 21 (2012), no. 5, 734-742. [arXiv]
22. H. Hatami, A structure theorem for Boolean functions with small total influences,
Annals of Mathematics, 176 (2012), no. 1, 509–533. [arXiv]
23. H. Hatami, Shachar Lovett, Higher-order Fourier analysis of F_p^n and the complexity of systems of linear forms,
Geometric and Functional Analysis, 21 (2011), no. 6, 1331–1357. [arXiv]
24. H. Hatami, Serguei Norine, Undecidability of linear inequalities in graph homomorphism densities,
Journal of the American Mathematical Society, 24(2) (2011), 547-565. [arXiv]
1. H. Hatami, Michael Molloy, The scaling window for a random graph with a given degree sequence ,
Random Structures & Algorithms , 41 (2012), no. 1, 99–123.
SODA10 . [ arXiv]
2. H. Hatami, Graph norms and Sidorenko's conjecture ,
Israel Journal of Mathematics 175(1) (2010), 125-150. [arXiv]
3. H. Hatami, Decision trees and influence of variables over product probability spaces,
Combinatorics Probability and Computing 18 (2009), 357-369. [arXiv]
4. H. Hatami, Xuding Zhu, The fractional chromatic number of graphs of maximum degree at most three,
SIAM Journal on Discrete Mathematics 23(4) (2009), 1762-1775.
5. Mahya Ghandehari, H. Hatami, Nico Spronk, Amenability constants for semilattice algebras,
Semigroup forum 79(2) (2009), pp. 279-297. [arXiv]
6. H. Hatami, Michael Molloy, Sharp thresholds for constraint satisfaction problem and graph homomorphisms ,
Random Structures & Algorithms , 33(3) (2008), pp. 310-332. [arXiv]
7. Mahya Ghandehari, H. Hatami, Fourier analysis and large independent sets in powers of complete graphs,
J. Combin. Theory Ser. B 98(1), (2008), pp. 164-172. [arXiv]
8. H. Hatami, Avner Magen, Vangelis Markakis, Integrality gaps of semidefinite programs for Vertex Cover and relations to $\ell_1$ embeddability of Negative type metrics,
SIAM Journal on Discrete Mathematics, 23(1) (2008/09), pp. 178-194. [arXiv]
9. H. Hatami, A remark on Bourgain's distributional inequality on the Fourier spectrum of Boolean functions,
10. H. Hatami, Random cubic graphs are not homomorphic to the cycle of size 7,
J. Combin. Theory Ser. B 93(2) (2005) pp. 319-325. [arXiv]
11. H. Hatami, Delta+300 is a bound on the adjacent vertex distinguishing edge chromatic number,
J. Combin. Theory Ser. B 95(2) (2005) pp. 246-256. [arXiv]
12. H. Hatami, Pooya Hatami, Perfect dominating sets in the Cartesian products of prime cycles,
13. Peyman Afshani, H. Hatami, Approximation and inapproximability results for maximum clique of disc graphs in high dimensions,
Information Processing Letters, 105(3), (2008), pp. 83-87. [arXiv]
1. Peyman Afshani, Mahsa Ghandehari, Mahya Ghandehari, H. Hatami, Ruzbeh Tusserkani, Xuding Zhu, Circular chromatic index of graphs of maximum degree 3,
Journal of Graph Theory. 49(4) (2005) pp. 325-335. [arXiv]
2. H. Hatami, Ruzbeh Tusserkani, On the complexity of the circular chromatic number,
Journal of Graph Theory. 47(3) (2004) pp. 226-230. [arXiv]
3. H. Hatami, Hossein Maserrat, On the computational complexity of defining sets,
Journal of Discrete Applied Mathematics .149(1-3) (2005) pp. 101-110. [arXiv]
4. Mahya Ghandehari, H. Hatami, E.S. Mahmoodian, On the size of the minimum critical set of a Latin square,
Journal of Discrete Mathematics. 293(1-3) (2005) pp. 121-127. [arXiv]
5. Peyman Afshani, H. Hatami, E.S. Mahmoodian, On the size of the spectrum of the forced matching number of graphs,
Australasian Journal of Combinatorics, 30 (2004) pp. 147-160. [arXiv]
6. H. Hatami, E.S. Mahmoodian, A lower bound for the size of the largest critical sets in Latin squares,
Bulletin of the Institute of Combinatorics and its Applications (Canada). 38 (2003) pp.19-22. [arXiv]
|
2019-04-19 22:35:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8191399574279785, "perplexity": 8596.162905083283}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578528430.9/warc/CC-MAIN-20190419220958-20190420002958-00481.warc.gz"}
|
https://stat.ethz.ch/pipermail/r-devel/2005-November/035567.html
|
# [Rd] problem with \eqn (PR#8322)
Kurt Hornik Kurt.Hornik at wu-wien.ac.at
Fri Nov 18 20:38:01 CET 2005
>>>>> Duncan Murdoch writes:
> On 11/18/2005 12:40 PM, Hin-Tak Leung wrote:
>> Martin Maechler wrote:
>>
>>>>>>>> "Hin-Tak" == Hin-Tak Leung <hin-tak.leung at cimr.cam.ac.uk>
>>>>>>>> on Fri, 18 Nov 2005 16:38:28 +0000 writes:
>>>
>>>
Hin-Tak> Your own fault. See below. It is basic LaTeX and any LaTeX person
Hin-Tak> can tell you the answer...(most probably haven't bothered...)
>>>
>>> No. Whereas I partly agree that it's Ross fault'' trying to
>>> use too smart LaTex (and using outdated \bf instead of \mathbf),
>>> ;-)
>>>
>>> The bug is really there, since we are talking about the Rd "language",
>>> not LaTeX, an in Rd, \eqn and \deqn are defined to have either
>>> one or two arguments -- where Ross used the 2-argument version
>>> correctly (in principle at least) --> See the manual "Writing R
>>> Extensions".
>>
>>
>> Forgive me for not reading R-ext carefully, but Ross's Rd code is
>> still "obviously" wrong in the lights of the two-argument \eqn:
>> (really doesn't differ from the 1-arg interpretation of \eqn)
>>
>> \eqn{{\bf\beta}_j}{\bf\beta}_j{b(j)}
>>
>> In other words,
>> \eqn{...}{...}_...
>>
>> and the "_" is still outside of any maths environment, which is most
>> probably not Ross's intention.
> But that is Latex code produced by R, not Rd code produced by Ross.
> The bug is in the Latex production (which I think is done by
> share/perl/R/Rdconv.pm, but I don't know Perl well enough to attempt
> to fix it).
Definitely a problem in Rdconv.
E.g.,
hornik@mithrandir:~/tmp$ cat foo.Rd
\description{
  \eqn{{A}}{B}
}
hornik@mithrandir:~/tmp$ R-d CMD Rdconv -t latex foo.Rd | grep eqn
\eqn{{A}}{A}{{B}
shows what is going on.
My reading of R-exts would suggest that it is not necessary to escape
braces inside \eqn (and in fact these are not unescaped by Rdconv).
Btw, the conversions of the above example are wrong for at least HTML
and text as well, giving
<i>A</i>{{B}
and
A{{B}
respectively.
-k
|
2022-10-06 08:47:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.92022305727005, "perplexity": 12797.989683780595}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00409.warc.gz"}
|
https://www.saxtriplets.com/info/1115/18401.htm
|
• |
• |
• |
• English
Extreme Continuous Treatment Effects: Measures, Estimation and Inference
This paper concerns estimation and inference for treatment effects in deep tails of the counterfactual distribution of unobservable potential outcomes corresponding to a continuously valued treatment. We consider two measures for the deep tail characteristics: the extreme quantile function and the tail mean function defined as the conditional mean beyond a quantile level. Then we define the extreme quantile treatment effect (EQTE) and the extreme average treatment effect (EATE), which can be identified through the commonly adopted unconfoundedness condition and estimated with the aid of extreme value theory. Our limiting theory is for the EQTE and EATE processes indexed by a set of quantile levels and hence facilitates uniform inference. Simulations suggest that our method works well in finite samples and an empirical application illustrates its practical merit.
|
2022-10-07 23:14:58
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8706066608428955, "perplexity": 1435.4821300311862}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00379.warc.gz"}
|
https://psychology.stackexchange.com/questions/1722/how-to-measure-group-differences-incorporating-reaction-time-accuracy-trade-of
|
# How to measure group differences incorporating reaction time / accuracy trade-off?
In a psychological experiment I am measuring subjects' reaction time as well as their error rate. Now I would like to compare two groups (males & females). There might be a bias in the sense that subjects who respond within a small reaction time might also commit more errors.
• What would be an appropriate way to combine reaction time and error rate to create a measure that takes into consideration this trade-off between "reacting fast" and "reacting correctly".
• E.g., could I just divide reaction time by error rate? Should I center or scale reaction time and error rate before I do this?
One common approach is the inverse efficiency score, $$\frac{r}{1-e} = \frac{r}{c}$$ where $r$ is reaction time (typically mean RT on correct trials), $e$ is proportion error, and $c$ is proportion correct. John Christie provides a critique of inverse efficiency scores here, or see the discussion in Bruyer and Brysbaert (2011).
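For instance, here is a minimal pandas sketch of that computation (the data frame and its column names are hypothetical, just to illustrate the arithmetic):

import pandas as pd

# Hypothetical per-subject summary: mean RT on correct trials and accuracy.
df = pd.DataFrame({
    "subject": [1, 2, 3, 4],
    "group": ["m", "m", "f", "f"],
    "mean_rt": [0.52, 0.61, 0.48, 0.55],    # seconds, correct trials only
    "p_correct": [0.95, 0.88, 0.97, 0.91],  # proportion correct
})

# Inverse efficiency score: r / c, i.e. RT divided by proportion correct.
df["ies"] = df["mean_rt"] / df["p_correct"]

# Compare the groups on the combined measure.
print(df.groupby("group")["ies"].mean())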
|
2022-10-03 04:03:45
|
https://codereview.stackexchange.com/questions/211673/multiplication-of-3%C3%973-and-n%C3%97n-square-matrices
|
# Multiplication of 3×3 and N×N square matrices
I recently wrote a matrix module in C++.
During the development process I borrowed some source code and ran into a question.
For example, matrix multiplication:
This way is suitable for all N×N matrices:
float32 dotMatrix(const float32* a, const float32* b,
                  int32 aColumns, int32 bColumns,
                  int32 column, int32 row);  // forward declaration, used below

void multiplyMatrix(const float32* a, const float32* b, float32* dst,
                    int32 aColumns, int32 bColumns, int32 dstColumns, int32 dstRows) {
for (int32 i = 0; i < dstRows; i++) {
for (int32 j = 0; j < dstColumns; j++)
dst[i * dstColumns + j] = dotMatrix(a, b, aColumns, bColumns, j, i);
}
}
float32 dotMatrix(const float32* a, const float32* b,
int32 aColumns, int32 bColumns,
int32 column, int32 row) {
float32 result = 0.0f;
int32 index = aColumns * row;
for (int32 i = 0; i < aColumns; i++) {
result += a[index++] * b[column];
column += bColumns;
}
return result;
}
Next, I wrote a 3x3 matrix class.
class Matrix3x3
{
public:
    float32 m11, m12, m13,
            m21, m22, m23,
            m31, m32, m33;
    float32 element[9];

    void multiply(float32 ma11, float32 ma12, float32 ma13,
                  float32 ma21, float32 ma22, float32 ma23,
                  float32 ma31, float32 ma32, float32 ma33) {
        // Cache the current elements so every product uses the original values.
        float32 t11 = m11, t12 = m12, t13 = m13,
                t21 = m21, t22 = m22, t23 = m23,
                t31 = m31, t32 = m32, t33 = m33;
        m11 = t11 * ma11 + t21 * ma12 + t31 * ma13;
        m12 = t12 * ma11 + t22 * ma12 + t32 * ma13;
        m13 = t13 * ma11 + t23 * ma12 + t33 * ma13;
        m21 = t11 * ma21 + t21 * ma22 + t31 * ma23;
        m22 = t12 * ma21 + t22 * ma22 + t32 * ma23;
        m23 = t13 * ma21 + t23 * ma22 + t33 * ma23;
        m31 = t11 * ma31 + t21 * ma32 + t31 * ma33;
        m32 = t12 * ma31 + t22 * ma32 + t32 * ma33;
        m33 = t13 * ma31 + t23 * ma32 + t33 * ma33;
    }
};
Obviously the first one is very convenient.
Next, I tested the time it took to calculate:
float32 e1[9];
e1[0] = 2.1018f; e1[1] = -1.81754f; e1[2] = 1.2541f;
e1[3] = 0.54194f; e1[4] = 2.75391f; e1[5] = -0.1167f;
e1[6] = -5.81652f; e1[7] = -7.9381f; e1[8] = 4.2816f;
float32 e2[9];
e2[0] = 2.1018f; e2[1] = -1.81754f; e2[2] = 1.2541f;
e2[3] = 0.54194f; e2[4] = 2.75391f; e2[5] = -0.1167f;
e2[6] = -5.81652f; e2[7] = -7.9381f; e2[8] = 4.2816f;
Matrix3x3 a;
a.m11 = 2.1018f; a.m12 = -1.81754f; a.m13 = 1.2541f;
a.m21 = 0.54194f; a.m22 = 2.75391f; a.m23 = -0.1167f;
a.m31 = -5.81652f; a.m32 = -7.9381f; a.m33 = 4.2816f;
Matrix3x3 b = a;
float32 dst[9];
float64 timeSpent = 0;
LARGE_INTEGER nFreq;
LARGE_INTEGER nBeginTime;
LARGE_INTEGER nEndTime;
QueryPerformanceFrequency(&nFreq);    // counter frequency (ticks per second)
QueryPerformanceCounter(&nBeginTime); // start timer
for (int32 i = 0; i < 100000; i++) {
    multiplyMatrix(e1, e2, dst, 3, 3, 3, 3);
}
QueryPerformanceCounter(&nEndTime);   // end timer
timeSpent = (float64)(nEndTime.QuadPart - nBeginTime.QuadPart) / (float64)nFreq.QuadPart;
printf("timeSpent1:%f\n", timeSpent);
QueryPerformanceCounter(&nBeginTime);
for (int32 i = 0; i < 100000; i++) {
b.multiply(a.m11, a.m12, a.m13,
a.m21, a.m22, a.m23,
a.m31, a.m32, a.m33);
}
QueryPerformanceCounter(&nEndTime);
timeSpent = (float64)(nEndTime.QuadPart - nBeginTime.QuadPart) / (float64)nFreq.QuadPart;
printf("timeSpent2:%f\n", timeSpent);
Output:
timeSpent1:0.014277
timeSpent2:0.004649
timeSpent1:0.012684
timeSpent2:0.004522
.......
.......
timeSpent1:0.003414
timeSpent2:0.001166
timeSpent1:0.003407
timeSpent2:0.001242
Is this difference in efficiency significant or negligible?
• I changed the title so that it describes what the code does per site goals: "State what your code does in your title, not your main concerns about it.". Please check that I haven't misrepresented your code, and correct it if I have. – Toby Speight Jan 17 at 10:49
• Which C++ version is this targeting? C++11 or newer? C++98? This might be important context for reviewers since a lot changed since C++98. – hoffmale Jan 17 at 11:13
A factor of 3 is large, but in my opinion not unexpected or abnormal. The functions that can handle a variable size matrix in their natural form (ie as they would be compiled without knowledge of the size, for example if the functions are defined in a different compilation unit than they are used in and LTO is not applied) have a lot of overhead: non-linear control flow (3 nested loops), more complicated address computation (involving multiplication by a variable).
Basically, that is the cost of generality .. but there is more to it.
From your use of QueryPerformanceCounter I assume you use MSVC (other compilers aren't much different for the following considerations). MSVC likes to unroll loops such as the one in dotMatrix by 4. It does not like to unroll such loops by 3, though it can be persuaded to do so anyway, for example by giving it a loop that makes exactly 3 iterations. So the cost of generality would work out much differently if the relevant matrix was of size 4x4 or 8x8, as in those cases only the faster unrolled codepath would be used (this still comes with overhead, but less). 3 is a bad case, only ever using the fallback codepath.
Additionally, the general matrix multiply implemented by multiplyMatrix is not scalable: it does not implement cache blocking, so for any matrix that does not fit in L1 cache it will perform badly (and even more badly when going beyond the L2 and L3 sizes). That is normal for code in general, but matrix multiplication is special in that it does not have to suffer significantly from that common effect thanks to its "O(n²) data in O(n³) time" property.
Both the general matrix multiply and the special 3x3 one could use SIMD intrinsics for extra efficiency. 3x3 is an awkward size that would cause some "wasted lanes", but it would still help. For example, it could be done like this (not tested):
#include <xmmintrin.h>
class Matrix3x3
{
public:
    // Column-major storage; each column is padded to 4 floats so it can be
    // loaded/stored as one SSE register.
    float32 m11, m21, m31, pad1,
            m12, m22, m32, pad2,
            m13, m23, m33, pad3;

    void multiply(float32 ma11, float32 ma12, float32 ma13,
                  float32 ma21, float32 ma22, float32 ma23,
                  float32 ma31, float32 ma32, float32 ma33) {
        __m128 col1 = _mm_loadu_ps(&m11);
        __m128 col2 = _mm_loadu_ps(&m12);
        __m128 col3 = _mm_loadu_ps(&m13);
        // Each result column is a linear combination of the current columns.
        __m128 t1 = _mm_add_ps(_mm_add_ps(
            _mm_mul_ps(col1, _mm_set1_ps(ma11)),
            _mm_mul_ps(col2, _mm_set1_ps(ma21))),
            _mm_mul_ps(col3, _mm_set1_ps(ma31)));
        __m128 t2 = _mm_add_ps(_mm_add_ps(
            _mm_mul_ps(col1, _mm_set1_ps(ma12)),
            _mm_mul_ps(col2, _mm_set1_ps(ma22))),
            _mm_mul_ps(col3, _mm_set1_ps(ma32)));
        __m128 t3 = _mm_add_ps(_mm_add_ps(
            _mm_mul_ps(col1, _mm_set1_ps(ma13)),
            _mm_mul_ps(col2, _mm_set1_ps(ma23))),
            _mm_mul_ps(col3, _mm_set1_ps(ma33)));
        _mm_storeu_ps(&m11, t1);
        _mm_storeu_ps(&m12, t2);
        _mm_storeu_ps(&m13, t3);
    }
};
The padding is a bit unfortunate (and shouldn't be private, because that makes its positioning relative to the actual matrix elements undefined), but simplifies the SIMD logic, chunks of 16 bytes are easier to deal with. It is possible to avoid the padding if required. Anyway, this results in a significant reduction in code and should be more efficient (without AVX the set1s cost more, that shouldn't be enough to undo the improvement but I didn't try it). The dllexport in the code on godbolt is not really part of the code, I just put that there to force code to be generated for an otherwise unused method.
Column-major order is used here because the columns of the result are a linear combination of the columns of the left hand matrix, which we have access to in packed memory. Similarly, the rows of the output are a linear combination of the rows of the right hand side, but we have no packed access to the rows of the right hand side, so they would be inefficient to gather. A row-oriented version of the above could be arranged for example if the right hand side was passed in as a reference to a Matrix3x3.
Passing the right hand side as matrix is probably a nicer interface anyway, with 9 separate arguments there is no choice but to write them all out separately even if the RHS is available as a matrix object, as you already experienced in your benchmark code.
• Thank you, I don't know assembly language. I read some open source code such as btMatrix3x3.h from Bullet3, b2Math.h from Box2D, and some OpenGL examples such as matrixModelView. I found that in most cases specific matrix sizes are treated separately, e.g. a separate class or method for a 3x3 or 4x4 matrix instead of processing all NxN matrices in general, presumably for efficiency? What's your opinion? Thank you. – Shuang2019 Jan 19 at 3:38
• @Shuang2019 yes, specialized classes for specific matrix sizes are more efficient – harold Jan 19 at 7:44
A factor of 4 for 3x3 is in the same range, and okay.
One could write code to generate a Matrix99x99 C++ file and test that. My guess is it would be a factor of 4 too; if it could be brought to 2, that would be totally fine.
A remark: normal matrix multiplication A.B, with A having dimensions LxM and B dimensions MxN, requires a shared M and results in dimensions LxN. So a small C++ class capturing that would be nice.
• Sorry, my English is very poor, so using a translator, your suggestion is to write a class or method that handles all NXN matrices? Thank you. – Shuang2019 Jan 19 at 3:42
• Yes, storing the dimensions too would be something for a class. – Joop Eggen Jan 21 at 7:35
• But there are some gaps in efficiency, especially for the Minors, cofactors and adjugate inverse matrices, which are 30-40 times slower. – Shuang2019 Jan 24 at 10:00
• One can always add heuristics: if (dimension == 3) { do something special }. The only overhead then are the indirections, loops not being unrolled and the ifs. I would expect a factor of 3 at most. BTW I like harold's answer. – Joop Eggen Jan 24 at 11:17
|
2019-10-22 10:07:53
|
https://jehosebyluf.fdn2018.com/strong-weak-and-electromagnetic-interactions-in-nuclei-atoms-and-astrophysics-book-2295gw.php
|
4 editions of Strong, Weak, and Electromagnetic Interactions in Nuclei, Atoms, and Astrophysics found in the catalog.
# Strong, Weak, and Electromagnetic Interactions in Nuclei, Atoms, and Astrophysics
## Livermore, Ca 1991 (Aip Conference Proceedings)
Written in English
Subjects:
• Astrophysics,
• Electricity, magnetism & electromagnetism,
• Nuclear structure physics,
• Nuclear Physics,
• Science,
• Science/Mathematics,
• Nuclear reactions,
• Astrophysics & Space Science,
• Nuclear Energy,
• Bloom, Stewart Dave,
• Weak interactions (Nuclear physics),
• Optics,
• Congresses
• Edition Notes
The Physical Object
Contributions: Grant J. Matthews (Editor), Stewart Dave Bloom (Editor)
Format: Hardcover
Number of Pages: 233
ID Numbers
Open Library: OL8180032M
ISBN 10: 0883189437
ISBN 13: 9780883189436
The Lagrangian density accounting for the strong, weak, electromagnetic and gravitational interactions consists of the free-field terms such as the gravitational $$L_g$$ … Although this may be an idea that more people have cherished, as it seems like such a perfectly balanced model of the universe, there are many concepts and observations that contradict this theory. For starters, the forces that hold an atom together …
1. A photon only interacts electromagnetically, so if a photon is present, the process is electromagnetic.
2. Similarly, the neutrino only interacts via the weak force, so if a neutrino is present, it's a weak interaction.
3. Leptons don't carry color charge, so if a lepton is involved, it can't be a strong interaction.
The structure of nuclei is expected to change significantly as the limit of nuclear stability is approached in neutron excess. Both the systematic variation in the shell model potential and the increased role of superconducting correlations give rise, theoretically, to the quenched neutron shell structure, characterized by a more uniform …
Electromagnetic Dissociation as a Tool for Nuclear Structure and Astrophysics: … and the absence of strong interactions. We discuss various approaches to the study of higher-order effects. Coulomb excitation has been a very powerful tool in the past to study electromagnetic matrix elements in nuclei; classical review papers exist, see, e.g., [1, 2].
… nuclei together, and its name arises from the fact that it is indeed the strongest force that we know about in nature. The weak force turns out to provide the explanation for radioactive beta decay. The complete list of building blocks found in nature is given in figure 1, along with the mass, spin and electromagnetic charge of each particle.
### Strong, Weak, and Electromagnetic Interactions in Nuclei, Atoms, and Astrophysics
Buy Strong, Weak, and Electromagnetic Interactions in Nuclei, Atoms, and Astrophysics: Livermore, CA (AIP Conference Proceedings) from the American Institute of Physics.
Strong, weak, and electromagnetic interactions in nuclei, atoms, and astrophysics. New York: American Institute of Physics. Named Person: Stewart Dave Bloom. Material Type: Conference publication, Document, Internet resource. Document Type: Internet Resource.
Get this from a library! Strong, weak, and electromagnetic interactions in nuclei, atoms, and astrophysics: Livermore, CA [G J Mathews; Stewart Dave Bloom].
In nuclear physics and particle physics, the strong interaction is the mechanism responsible for the strong nuclear force, and is one of the four known fundamental interactions, the others being electromagnetism, the weak interaction, and gravitation. At the range of 10⁻¹⁵ m (1 femtometer), the strong force is approximately 137 times as strong as electromagnetism, a million times as strong as the weak interaction, and 10³⁸ times as strong as gravitation.
The International Symposium on Weak and Electromagnetic Interactions in Nuclei (WEIN), held in Heidelberg in July 1986 in conjunction with the 600th anniversary of the University of Heidelberg, brought together experts in the fields of nuclear and particle physics, astrophysics and cosmology.
Weak and Electromagnetic Interactions in Nuclei (WEIN '95): The purpose of the symposium is to discuss current experimental and theoretical studies of weak and electromagnetic interactions in nuclei, emphasizing fundamental problems of particle, nuclear and astrophysics. Subjects discussed included symmetries and conservation laws, neutrino …
… gravitation and weak interactions in the gauge-fields covariant theory; Section 2: astrophysics & cosmology, standard model & beyond: transmutations between Re and Os during the hydrogen-burning phase of stellar evolution; weak vector coupling from neutron β-decay and possible indications for right-handed currents.
Theories of the Strong, Weak, and Electromagnetic Interactions: Second Edition, Chris Quigg. This completely revised and updated graduate-level textbook is an ideal introduction to gauge theories and their applications to high-energy particle physics, and takes an in-depth look at two new laws of nature--quantum chromodynamics and the electroweak theory.
Shouldn't it be weak? $$K^{-} \rightarrow \pi^{-} + \pi^{0}$$ This only involves hadrons and strangeness isn't conserved, so shouldn't this be an impossible strong reaction? I thought weak decays only failed to conserve parity, but the answers say this is a possible weak decay; does that mean weak decays don't need to conserve strangeness?
I thought weak decays only didn't conserve partity, but the answers say this is a possible weak decay, does that mean weak decays don't need to conserve strangeness. In the present volume, Phillip J. Siemens, who has been a seminal contributor to our understanding of the nucleus as a many-body system, and his able collaborator, Aksel S.
Jensen, introduce graduate students and colleagues in other fields to the basic concepts of nuclear physics in a way which connects clearly the methods of nuclear physics with those of Format: Paperback. A previously developed unified analysis of semi-leptonic weak and electromagnetic interactions in nuclei which determines one-body transition densities, including their spin and spatial dependences, through electron scattering provides nuclear transitions to serve as known analyzers in testing the structure of this part of the weak by: This highly readable book uncovers the mysteries of the physics of elementary particles for a broad audience.
From the familiar notions of atoms and molecules to the complex ideas of the grand unification of all the basic forces, this book allows the interested lay public to appreciate the fascinating building blocks of matter that make up our ing with a description of 1/5(1).
Nuclear Physics A; North-Holland Publishing Co., Amsterdam. Not to be reproduced by photoprint or microfilm without written permission from the publisher. Semileptonic weak and electromagnetic interactions with nuclei: nuclear current operators through order $$(v/c)^2$$, Brian D. Serot.
The second edition of Chris Quigg's Gauge Theories of the Strong, Weak, and Electromagnetic Interactions provides just such a foundation. Building on the first edition of the work, which was widely used as a textbook in advanced graduate courses, the new iteration achieves a new level of excellence and completeness that will make it a valued resource for graduate students. (Review by Rabindra N. Mohapatra.)
The alkali element francium has a simple electronic structure, and copious amounts of a wide range of isotopes can be produced in present and future rare isotope facilities. The atomic parity violating weak interaction in Fr is 18 times larger than in Cs, which makes it one of the best candidates to search for the effects of the weak interaction and its isotopic …
Of the four fundamental forces that act on the universe, which one supports a book that is at rest on a table? The electromagnetic force.
Atomic physics (or atom physics) is the field of physics that studies atoms as an isolated system of electrons and an atomic nucleus. It is primarily concerned with the arrangement of electrons around. Theory that attempts to unify (describe in a similar way) the electromagnetic, weak, and strong forces of nature.
Planck era. The first era after the Big Bang. Era of Nuclei. The branch of physics dealing with the structure and behavior of atoms.
The conference on “Electromagnetic Interactions with Nucleons and Nuclei (EINN)” had been organized on the Santorini and Milos islands in Greece every other year since …; in … its location was successfully moved to Paphos, Cyprus.
The conference series covers experimental and theoretical topics in the areas of nuclear and hadronic physics.
A unique balance of particle and nuclear physics is presented in this outstanding introduction to the field. Nuclear properties, decay, structure and reactions are covered initially, followed by discussions of nuclear forces, β-decay, and elementary particles and their interactions. Further chapters include strong, weak and electromagnetic interactions, and an up-to-date …
Further chapters include strong, weak and electromagnetic interactions, and an up-to-date. nuclei from the elementary hydrogen nuclei are then brie fl y described.
Received 28 February, accepted 5 April. Key words: nuclear reactions, nuclear astrophysics.
Divided into four main parts (the constituents and characteristics of the nucleus; nuclear interactions, including the strong, weak and electromagnetic forces; an introduction to nuclear structure; and recent developments in nuclear structure research), the book delivers a balanced account of both theoretical and experimental nuclear physics.
Strong pulses will ionize an atom, as the positive nucleus and negative electrons will accelerate in opposite directions.
However, an AC field (that is linearly polarized) will cause the electrons and nuclei to separate and then turn around and smash back into each other/recombine.
|
2021-03-04 12:00:38
|
https://geoopt.readthedocs.io/en/latest/optimizers.html
|
# Optimizers
class geoopt.optim.RiemannianAdam(*args, stabilize=None, **kwargs)[source]
Riemannian Adam with the same API as torch.optim.Adam.
Parameters:
- params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
- lr (float, optional) – learning rate (default: 1e-3)
- betas (Tuple[float, float], optional) – coefficients used for computing running averages of gradient and its square (default: (0.9, 0.999))
- eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-8)
- weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)
- amsgrad (bool, optional) – whether to use the AMSGrad variant of this algorithm from the paper On the Convergence of Adam and Beyond (default: False)
- stabilize (int) – stabilize parameters if they are off-manifold due to numerical reasons every stabilize steps (default: None – no stabilize)
step(closure=None)[source]
Performs a single optimization step.
Parameters: closure (callable, optional) – A closure that reevaluates the model and returns the loss.
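As a usage illustration (this example is not part of the geoopt documentation; the manifold, target and learning rate are arbitrary), RiemannianAdam is a drop-in replacement for torch.optim.Adam on manifold-valued parameters:

import torch
import geoopt

sphere = geoopt.Sphere()
# Start from a point on the unit sphere.
x = geoopt.ManifoldParameter(
    torch.nn.functional.normalize(torch.randn(3), dim=0), manifold=sphere
)
target = torch.tensor([0.0, 0.0, 1.0])

opt = geoopt.optim.RiemannianAdam([x], lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = (x - target).pow(2).sum()
    loss.backward()
    opt.step()  # Riemannian update: x stays on the sphere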
class geoopt.optim.RiemannianLineSearch(params, line_search_method='armijo', line_search_params=None, cg_method='steepest', cg_kwargs=None, compute_derphi=True, transport_grad=False, transport_search_direction=True, fallback_stepsize=1, stabilize=None)[source]
Riemannian line search optimizer.
We try to minimize objective $$f\colon M\to \mathbb{R}$$, in a search direction $$\eta$$. This is done by minimizing the line search objective
$\phi(\alpha) = f(R_x(\alpha\eta)),$
where $$R_x$$ is the retraction at $$x$$. Its derivative is given by
$\phi'(\alpha) = \langle\mathrm{grad} f(R_x(\alpha\eta)),\, \mathcal T_{\alpha\eta}(\eta) \rangle_{R_x(\alpha\eta)},$
where $$\mathcal T_\xi(\eta)$$ denotes the vector transport of $$\eta$$ to the point $$R_x(\xi)$$.
The search direction $$\eta$$ is defined recursively by
$\eta_{k+1} = -\mathrm{grad} f(R_{x_k}(\alpha_k\eta_k)) + \beta \mathcal T_{\alpha_k\eta_k}(\eta_k)$
Here $$\beta$$ is the scale parameter. If $$\beta=0$$ this is steepest descent; other choices are Riemannian versions of the Fletcher-Reeves and Polak-Ribière scale parameters.
Common conditions to accept the new point are the Armijo / sufficient decrease condition:
$\phi(\alpha)\leq \phi(0)+c_1\alpha\phi'(0)$
And additionally the curvature / (strong) Wolfe condition
$\phi'(\alpha)\geq c_2\phi'(0)$
The Wolfe conditions are more restrictive, but guarantee that search direction $$\eta$$ is a descent direction.
The constants $$c_1$$ and $$c_2$$ satisfy $$c_1\in (0,1)$$ and $$c_2\in (c_1,1)$$.
Parameters:
- params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
- line_search_method ('wolfe', 'armijo', or callable) – which line search method to use. If callable, it should be any method of signature (phi, derphi, **kwargs) -> step_size, where phi is the scalar line search objective and derphi is its derivative. If no suitable step size can be found, the method should return None. The following arguments are always passed in **kwargs: phi0 (float, value of phi at 0), old_phi0 (float, value of phi at the previous point), derphi0 (float, value of derphi at 0), old_derphi0 (float, value of derphi at the previous point), old_step_size (float, step size at the previous point). If any of these arguments are undefined, they default to None. Additional arguments can be supplied through the line_search_params parameter.
- line_search_params (dict) – extra parameters to pass to line_search_method; for the parameters available to strong Wolfe see strong_wolfe_line_search(), for Armijo backtracking see armijo_backtracking().
- cg_method ('steepest', 'fr', 'pr', or callable) – method used to compute the conjugate gradient scale parameter beta. If 'steepest', set the scale parameter to zero, which is equivalent to doing steepest descent. Use 'fr' for Fletcher-Reeves, or 'pr' for Polak-Ribière (NB: this setting requires an additional vector transport). If callable, it should be a function of signature (params, states, **kwargs) -> beta, where params are the parameters of this optimizer, states are the states associated to the parameters (self._states), and beta is a float giving the scale parameter. The keyword arguments are specified in the optional parameter cg_kwargs.

Other Parameters:
- compute_derphi (bool, optional) – if True, compute the derivative of the line search objective phi for every trial step size alpha. If alpha is not zero, this requires a vector transport and an extra gradient computation. This is always set True if line_search_method='wolfe' and False if 'armijo', but needs to be manually set for a user-implemented line search method.
- transport_grad (bool, optional) – if True, the transport of the gradient to the new point is computed at the end of every step. Set to True if Polak-Ribière is used, otherwise defaults to False.
- transport_search_direction (bool, optional) – if True, transport the search direction to the new point at the end of every step. Set to False if steepest descent is used, True otherwise.
- fallback_stepsize (float) – fallback step size to take if no point can be found satisfying the line search conditions; see also step() (default: 1)
- stabilize (int) – stabilize parameters if they are off-manifold due to numerical reasons every stabilize steps (default: None – no stabilize)
- cg_kwargs (dict) – additional parameters to pass to the method used to compute the conjugate gradient scale parameter.
last_step_size
Last step size taken. If None no suitable step size was found, and consequently no step was taken.
Type: int or None
step_size_history
List of all step sizes taken so far.
Type: List[int or None]
line_search_method
Type: callable
line_search_params
Type: dict
cg_method
Type: callable
cg_kwargs
Type: dict
fallback_stepsize
Type: float
step(closure, force_step=False, recompute_gradients=False, no_step=False)[source]
Do a linesearch step.
Parameters:
- closure (callable) – a closure that reevaluates the model and returns the loss
- force_step (bool, optional) – if True, take a unit step of size self.fallback_stepsize if no suitable step size can be found; if False, no step is taken in this situation (default: False)
- recompute_gradients (bool, optional) – if True, recompute the gradients; use this if the parameters have changed in between consecutive steps (default: False)
- no_step (bool, optional) – if True, just compute the step size and do not perform the step (default: False)
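Because trial points must be re-evaluated, step() here requires a closure, much like torch.optim.LBFGS. A minimal sketch (reusing the hypothetical sphere setup from the RiemannianAdam example above):

opt = geoopt.optim.RiemannianLineSearch([x], line_search_method="armijo")

def closure():
    opt.zero_grad()
    loss = (x - target).pow(2).sum()
    loss.backward()
    return loss

for _ in range(50):
    opt.step(closure)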
class geoopt.optim.RiemannianSGD(params, lr, momentum=0, dampening=0, weight_decay=0, nesterov=False, stabilize=None)[source]
Riemannian Stochastic Gradient Descent with the same API as torch.optim.SGD.
Parameters:
- params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
- lr (float) – learning rate
- momentum (float, optional) – momentum factor (default: 0)
- weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)
- dampening (float, optional) – dampening for momentum (default: 0)
- nesterov (bool, optional) – enables Nesterov momentum (default: False)
- stabilize (int) – stabilize parameters if they are off-manifold due to numerical reasons every stabilize steps (default: None – no stabilize)
step(closure=None)[source]
Performs a single optimization step (parameter update).
Parameters: closure (callable) – A closure that reevaluates the model and returns the loss. Optional for most optimizers.
Note
Unless otherwise specified, this function should not modify the .grad field of the parameters.
class geoopt.optim.SparseRiemannianAdam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, amsgrad=False)[source]
Implements a lazy version of the Adam algorithm, suitable for sparse gradients. In this variant, only moments that show up in the gradient get updated, and only those portions of the gradient get applied to the parameters.
Parameters:
- params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
- lr (float, optional) – learning rate (default: 1e-3)
- betas (Tuple[float, float], optional) – coefficients used for computing running averages of gradient and its square (default: (0.9, 0.999))
- eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-8)
- amsgrad (bool, optional) – whether to use the AMSGrad variant of this algorithm from the paper On the Convergence of Adam and Beyond (default: False)
- stabilize (int) – stabilize parameters if they are off-manifold due to numerical reasons every stabilize steps (default: None – no stabilize)
step(closure=None)[source]
Performs a single optimization step (parameter update).
Parameters: closure (callable) – A closure that reevaluates the model and returns the loss. Optional for most optimizers.
Note
Unless otherwise specified, this function should not modify the .grad field of the parameters.
class geoopt.optim.SparseRiemannianSGD(params, lr, momentum=0, dampening=0, nesterov=False, stabilize=None)[source]
Implements lazy version of SGD algorithm suitable for sparse gradients.
In this variant, only moments that show up in the gradient get updated, and only those portions of the gradient get applied to the parameters.
Parameters:
- params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
- lr (float) – learning rate
- momentum (float, optional) – momentum factor (default: 0)
- dampening (float, optional) – dampening for momentum (default: 0)
- nesterov (bool, optional) – enables Nesterov momentum (default: False)
- stabilize (int) – stabilize parameters if they are off-manifold due to numerical reasons every stabilize steps (default: None – no stabilize)
step(closure=None)[source]
Performs a single optimization step (parameter update).
Parameters: closure (callable) – A closure that reevaluates the model and returns the loss. Optional for most optimizers.
Note
Unless otherwise specified, this function should not modify the .grad field of the parameters.
|
2022-01-28 05:06:35
|
https://www.gradesaver.com/textbooks/math/geometry/elementary-geometry-for-college-students-5th-edition/chapter-10-section-10-1-the-rectangular-coordinate-system-exercises-page-457/43
|
# Chapter 10 - Section 10.1 - The Rectangular Coordinate System - Exercises - Page 457: 43
The points are: $(5,0); (-5,0); (0,-4); (0,4)$
#### Work Step by Step
We solve for y: $2\sqrt{9+y^2} = 10 \\ \sqrt{9+y^2} = 5 \\ 9 + y^2 = 25 \\ y^2 = 16 \\ y = \pm 4$ We solve for the x-values of the x-intercepts as well: $10 = \sqrt{(3+x)^2} + \sqrt{(3-x)^2} = |3+x| + |3-x| \\ 2|x| = 10 \ (\text{since } |x| \ge 3) \\ x = \pm 5$ To draw the ellipse, connect the four points found with a smooth curve.
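As a quick numerical sanity check (assuming, as in the algebra above, foci at (±3, 0) and a distance sum of 10), each of the four points should satisfy the ellipse definition:

from math import dist  # Python 3.8+

foci = [(3, 0), (-3, 0)]
for p in [(5, 0), (-5, 0), (0, 4), (0, -4)]:
    # Sum of distances to the two foci; each prints 10.0.
    print(p, dist(p, foci[0]) + dist(p, foci[1]))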
|
2019-07-18 19:13:14
|
https://www.dcode.fr/trifid-delastelle-cipher
|
# Delastelle Trifid Cipher
Tool to decrypt/encrypt with the Delastelle Trifid cipher, which uses 3 grids to convert each character into a triplet (grid, line, column) before transposing the digits and mapping them back to encrypted characters.
## Answers to Questions
### How to encrypt using Delastelle Trifid cipher?
The Delastelle trifid cipher uses 3 9-character grids (for 27 distinct characters in total) and an integer $N$ (usually 5 or 7).
Example: Encrypt the message SECRET, with $N =$ 5 and grids
Grid 1:        Grid 2:        Grid 3:
  1 2 3          1 2 3          1 2 3
1 A B C        1 J K L        1 S T U
2 D E F        2 M N O        2 V W X
3 G H I        3 P Q R        3 Y Z _
Often, a keyword is used to generate a disordered alphabet with 27 characters (the Latin alphabet accompanied by another symbol like _ replacing any non-alphabetic character)
Step 1: For each character, search for it in the grids and note its triplet of 3 corresponding digits (grid, line, column)
Example: S is in grid 3, line 1, column 1, its triplet is 311
Step 2: Write the triplets in columns, in groups of $N$ columns next to each other and read each group in rows.
Example:
S E C R E | T
3 1 1 2 1 | 3
1 2 1 3 2 | 1
1 2 3 3 2 | 2
Reading group 1: 31121,12132,12332, group 2: 312
Step 3: Cut out each sequence of digits read in triplet group of 3 digits corresponding to (grid, line, column) and note the corresponding letter. These letters constitute the encrypted message.
Example: 311,211,213,212,332,312 corresponds to SJLKZT
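To make the three steps concrete, here is a small Python sketch of the encryption (assuming the plain A-Z plus _ alphabet filled into the three grids row by row, as above):

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ_"  # grids 1-3, each filled row by row

def trifid_encrypt(msg, n=5, alphabet=ALPHABET):
    # Step 1: map each character to its (grid, line, column) triplet of digits 1..3.
    triplets = [(alphabet.index(c) // 9 + 1,
                 alphabet.index(c) % 9 // 3 + 1,
                 alphabet.index(c) % 3 + 1) for c in msg]
    # Step 2: in groups of n columns, read the three coordinate rows left to right.
    digits = []
    for g in range(0, len(triplets), n):
        group = triplets[g:g + n]
        for row in range(3):
            digits.extend(t[row] for t in group)
    # Step 3: regroup the digit stream into triplets and map back to letters.
    return "".join(alphabet[(digits[k] - 1) * 9 + (digits[k + 1] - 1) * 3 + digits[k + 2] - 1]
                   for k in range(0, len(digits), 3))

trifid_encrypt("SECRET")  # -> 'SJLKZT', matching the example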
### How to decrypt Delastelle Trifid cipher?
Decryption is very similar to encryption, the difference is in step 2.
Example: Decrypt the message SJLKZT, with $N =$ 5 and grids
Grid 1:        Grid 2:        Grid 3:
  1 2 3          1 2 3          1 2 3
1 A B C        1 J K L        1 S T U
2 D E F        2 M N O        2 V W X
3 G H I        3 P Q R        3 Y Z _
Step 1: identical to encryption
Step 2: Take the triplets in groups of $N$ and write them in $N$-length lines below each other then read each group in columns.
Example: 311,211,213,212,332,312 is written
3 1 1 2 1
1 2 1 3 2
1 2 3 3 2
and, for the second group,
3 1 2
Reading group 1 in columns: 311,122,113,233,122; group 2: 312
Step 3: Identical to encryption
Example: 311,122,113,233,122,312 corresponds to the plain message SECRET
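And the corresponding decryption sketch, inverting the regrouping of step 2 (it reuses the ALPHABET from the encryption sketch above):

def trifid_decrypt(msg, n=5, alphabet=ALPHABET):
    # Recover the digit stream from the ciphertext triplets.
    digits = []
    for c in msg:
        i = alphabet.index(c)
        digits.extend((i // 9 + 1, i % 9 // 3 + 1, i % 3 + 1))
    out = []
    # Each group of n plaintext letters produced 3*n digits: three rows of
    # length n stacked; the columns are the original (grid, line, column) triplets.
    for g in range(0, len(digits), 3 * n):
        block = digits[g:g + 3 * n]
        size = len(block) // 3
        for col in range(size):
            grid, line, column = block[col], block[col + size], block[col + 2 * size]
            out.append(alphabet[(grid - 1) * 9 + (line - 1) * 3 + column - 1])
    return "".join(out)

trifid_decrypt("SJLKZT")  # -> 'SECRET'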
### How to recognize a Trifid ciphertext?
The message is theoretically composed of not more than 27 distinct characters.
### What are the variants of the Trifid cipher?
The $N$ number quickly changes the encrypted message; for better encryption, it is advisable to take a value of $N$ coprime with 3.
### When was the Trifid cipher invented?
Félix Delastelle described this encryption in 1902 in his book Traité Élémentaire de Cryptographie.
|
2020-01-20 21:03:42
|
https://zbmath.org/authors/?q=ai%3Ahansen.frank-peter
|
## Hansen, Frank-Peter
Author ID: hansen.frank-peter
Published as: Hansen, Frank; Hansen, Frank-Peter; Hansen, F.
External Links: MGP · ORCID
Documents Indexed: 63 Publications since 1977, including 3 Books
Reviewing Activity: 9 Reviews
Co-Authors: 19 Co-Authors with 20 Joint Publications
Co-Co-Authors: 913
### Co-Authors
43 single-authored
4 Pedersen, Gert Kjærgård
3 Tomiyama, Jun
2 Cai, Liang
2 Gibilisco, Paolo
2 Pečarić, Josip
1 Araki, Huzihiro
1 Audenaert, Koenraad M. R.
1 Effros, Edward George
1 Gyntelberg, Jacob
1 Isola, Tommaso
1 Ji, Guoxing
1 Krulić Himmelreich, Kristina
1 Moslehian, Mohammad Sal
1 Najafi, Hamed
1 Olesen, Dorte
1 Perić, Ivan
1 Persson, Lars-Erik
1 Shi, Guanghua
1 Zhang, Zhihua
### Serials
9 Linear Algebra and its Applications
7 Journal of Statistical Physics
4 Letters in Mathematical Physics
4 Mathematische Annalen
4 Publications of the Research Institute for Mathematical Sciences, Kyoto University
4 Annals of Functional Analysis
3 Bulletin of the London Mathematical Society
3 Mathematica Scandinavica
3 International Journal of Mathematics
3 JIPAM. Journal of Inequalities in Pure & Applied Mathematics
2 Proceedings of the American Mathematical Society
2 Proceedings of the National Academy of Sciences of the United States of America
2 The Australian Journal of Mathematical Analysis and Applications
1 Journal of Mathematical Physics
1 Mathematica Japonica
1 Journal of the Mongolian Mathematical Society
1 Mathematical Physics, Analysis and Geometry
1 Positivity
1 Mathematical Inequalities & Applications
1 RIMS Kokyuroku
1 The B. E. Journal of Theoretical Economics
1 Proceedings of the Estonian Academy of Sciences
### Fields
40 Operator theory (47-XX)
20 Real functions (26-XX)
13 Functional analysis (46-XX)
11 Quantum theory (81-XX)
10 Linear and multilinear algebra; matrix theory (15-XX)
8 Information and communication theory, circuits (94-XX)
4 Statistical mechanics, structure of matter (82-XX)
4 Game theory, economics, finance, and other social and behavioral sciences (91-XX)
2 General and overarching topics; collections (00-XX)
2 History and biography (01-XX)
2 Mathematical logic and foundations (03-XX)
2 Group theory and generalizations (20-XX)
2 Probability theory and stochastic processes (60-XX)
1 Topological groups, Lie groups (22-XX)
1 Difference and functional equations (39-XX)
1 Abstract harmonic analysis (43-XX)
1 Calculus of variations and optimal control; optimization (49-XX)
1 Convex and discrete geometry (52-XX)
1 Differential geometry (53-XX)
1 Statistics (62-XX)
### Citations contained in zbMATH Open
49 Publications have been cited 638 times in 391 Documents.
Jensen’s inequality for operators and Loewner’s theorem. Zbl 0473.47011
Hansen, Frank; Pedersen, Gert Kjaergard
1982
Jensen’s operator inequality. Zbl 1051.47014
Hansen, Frank; Pedersen, Gert K.
2003
An operator inequality. Zbl 0407.47012
Hansen, Frank
1980
Jensen’s operator inequality and its converses. Zbl 1151.47025
Hansen, Frank; Pečarić, Josip; Perić, Ivan
2007
The fast track to Löwner’s theorem. Zbl 1284.26011
Hansen, Frank
2013
Metric adjusted skew information. Zbl 1205.94058
Hansen, Frank
2008
Non-commutative perspectives. Zbl 1308.47014
Effros, E. G.; Hansen, Frank
2014
Extensions of Lieb’s concavity theorem. Zbl 1157.47305
Hansen, Frank
2006
Operator convex functions of several variables. Zbl 0902.47013
Hansen, Frank
1997
Trace functions as Laplace transforms. Zbl 1111.47022
Hansen, Frank
2006
Gaps between classes of matrix monotone functions. Zbl 1047.26008
Hansen, Frank; Ji, Guoxing; Tomiyama, Jun
2004
Regular operator mappings and multivariate geometric means. Zbl 1308.47022
Hansen, Frank
2014
Operator monotone functions of several variables. Zbl 1035.47005
Hansen, Frank
2003
Differential analysis of matrix convex functions. Zbl 1116.26006
Hansen, Frank; Tomiyama, Jun
2007
The Wigner-Yanase entropy is not subadditive. Zbl 1111.82006
Hansen, Frank
2007
Cai, Liang; Hansen, Frank
2010
Differential analysis of matrix convex functions. II. Zbl 1167.26307
Hansen, Frank; Tomiyama, Jun
2009
Perturbation formulas for traces on $$C^*$$-algebras. Zbl 0829.46043
Hansen, Frank; Pedersen, Gert K.
1995
On a correspondence between regular and non-regular operator monotone functions. Zbl 1178.47011
Gibilisco, P.; Hansen, F.; Isola, T.
2009
Selfadjoint means and operator monotone functions. Zbl 0461.47009
Hansen, Frank
1981
Trace functions with applications in quantum physics. Zbl 1291.81072
Hansen, Frank
2014
Jensen’s operator inequality for functions of several variables. Zbl 0956.47009
Araki, Huzihiro; Hansen, Frank
2000
Jensen’s trace inequality in several variables. Zbl 1049.46037
Hansen, Frank; Pedersen, Gert K.
2003
Inequalities for quantum skew information. Zbl 1161.81317
Audenaert, Koenraad; Cai, Liang; Hansen, Frank
2008
Means and concave products of positive semi-definite matrices. Zbl 0495.47021
Hansen, Frank
1983
Non-commutative Hardy inequalities. Zbl 1188.47017
Hansen, Frank
2009
Operator inequalities associated with Jensen’s inequalities. Zbl 1040.47012
Hansen, Frank
2000
Generalized noncommutative Hardy and Hardy-Hilbert type inequalities. Zbl 1204.26026
Hansen, Frank; Krulić, Kristina; Pečarić, Josip; Persson, Lars-Erik
2010
Characterisation of matrix entropies. Zbl 1330.60017
Hansen, Frank; Zhang, Zhihua
2015
Expected utility with subjective events. Zbl 1254.91103
Gyntelberg, Jacob; Hansen, Frank
2012
Functions of matrices with nonnegative entries. Zbl 0745.15013
Hansen, Frank
1992
Jensen’s operator inequality for functions of two variables. Zbl 0870.47013
Hansen, Frank
1997
Quantum entropy derived from first principles. Zbl 1360.81036
Hansen, Frank
2016
Convexity of quantum $$\chi ^{2}$$-divergence. Zbl 1256.81023
Hansen, Frank
2011
Selfpolar norms on an indefinite inner product space. Zbl 0458.46017
Hansen, Frank
1980
Multivariate extensions of the Golden-Thompson inequality. Zbl 1337.15019
Hansen, Frank
2015
Operator maps of Jensen-type. Zbl 06979791
Hansen, Frank; Moslehian, Mohammad Sal; Najafi, Hamed
2018
Perspectives and completely positive maps. Zbl 1454.47023
Hansen, Frank
2017
Some operator monotone functions. Zbl 1156.81341
Hansen, Frank
2009
The Moyal product and spectral theory for a class of infinite dimensional matrices. Zbl 0748.46043
Hansen, Frank
1990
Extrema for concave operator mappings. Zbl 0815.47014
Hansen, Frank
1994
Perturbations of centre-fixing dynamical systems. Zbl 0374.46054
Hansen, Frank; Olesen, Dorte
1978
Golden-Thompson’s inequality for deformed exponentials. Zbl 1323.82003
Hansen, Frank
2015
Convex trace functions of several variables. Zbl 1034.47005
Hansen, Frank
2002
Variational representations related to Tsallis relative entropy. Zbl 1459.94078
Shi, Guanghua; Hansen, Frank
2020
A note on quantum entropy. Zbl 1413.81006
Hansen, Frank
2016
WYD-like skew information measures. Zbl 1272.82005
Hansen, Frank
2013
Monotone trace functions of several variables. Zbl 1093.47016
Hansen, Frank
2005
Convex multivariate operator means. Zbl 07007598
Hansen, Frank
2019
|
2022-08-09 10:28:36
|
https://danmackinlay.name/notebook/tunings.html
|
# Tunings
See also dissonance theory etc.
## Microtunings in practice
Various mainstream apps support microtuning. Notably, Warren Burt demonstrates Microtuning in Kontakt. Bitwig is trialling microtuning support.
## Scala
Scala (No, not the JVM language, the Ada-based musical tuning software) is a strange creature, written by another strange creature, Manuel op de Coul. He (they?) also maintains a comprehensive tuning bibliography.
A crazy-weird wonderful, painful ghetto of theoretical tuning. The author is as brilliant as he is troublesome, and you must pay for the delicious tuning knowledge in this software by navigating the labyrinth he built around it.
The software has many brilliant but abstruse features, few of which repay the time investment, because you have no time left after the lengthy battle with the installation process. However, the database of scales, and the easy conversion between different tuning formats is awesome, and pretty simple once you have got the damn thing running.
### Scala Installation
tl;dr The Scala software is a horrible mess, and maintained by one lone crazy guy with firmly idiosyncratic opinions about software. Unless your needs are particular, I’d recommend downloading the library of tunings only and using music21 or supercollider to play those tunings without wasting time on installing this peculiar and fragile setup.
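For that route, a minimal sketch of a .scl reader in Python; the Scala scale-file format is simple ('!' lines are comments, then a free-text description line, a note count, and one pitch per line: in cents if the value contains a '.', as a frequency ratio otherwise):

from fractions import Fraction
import math

def read_scl(path):
    # Drop '!' comment lines; what remains is description, count, then pitches.
    with open(path) as f:
        lines = [ln.strip() for ln in f if not ln.startswith("!")]
    description, count = lines[0], int(lines[1])
    cents = [0.0]  # the 1/1 tonic is implicit
    for ln in lines[2:2 + count]:
        value = ln.split()[0]  # a pitch may be followed by a label
        if "." in value:
            cents.append(float(value))  # already in cents
        else:
            cents.append(1200 * math.log2(Fraction(value)))  # ratio -> cents
    return description, cents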
Otherwise…
Recommended: install on a Linux VM.
Everything else requires too much dicking around with the author’s brazenly inconvenient, outdated and opinionated installation system, which makes you install things in places you’d rather not, using versions you’d rather not.
The damn thing is written in Ada, which is famously used by the International Space Station and the Paris metro, but those folks are too busy to offer you any tech support. Suck it up, find a way of minimizing the nonsense.
On a gtk-friendly Ubuntu, for example:
```sh
sudo apt install dkms   # virtual machine helpers
sudo apt install aconnectgui gnuplot libgnat-4.9 playmidi timidity \
    timidity-interfaces-extra
wget http://www.huygens-fokker.org/software/scala-22-pc64-linux.tar.bz2 \
    http://www.huygens-fokker.org/docs/scales.zip
```
That didn’t quite work for me; I had to install ALL of GNU Ada:
```sh
sudo apt install gnat
```
…which is 200MB of wasted disk space. There’s probably a smaller subset that is necessary, but, seriously now, snore.
MIDI might be tricky, but is desirable.
#### Short version that might not work
```sh
# Install timidity and register it as the desktop handler for MIDI files.
sudo apt install timidity timidity-interfaces-extra
(printf '[Desktop Entry]\nEncoding=UTF-8\nName=Timidity MIDI Player\nComment=Play MIDI audio files\nExec=timidity -ig\nTerminal=false\nType=Application\nStartupNotify=false\nMimeType=audio/midi;\nCategories=Application;AudioVideo;\n#Icon=???\n#NoDisplay=true\n') | sudo tee /usr/share/applications/timidity.desktop
sudo cp /usr/share/applications/defaults.list /usr/share/applications/defaults.list.backup.midi
if ! cat /usr/share/applications/defaults.list | grep "audio/midi"; then (printf 'audio/midi=timidity.desktop\n') | sudo tee -a /usr/share/applications/defaults.list; else sudo sed -i -e 's@audio/midi.*$@audio/midi=timidity.desktop@g' /usr/share/applications/defaults.list; fi
# Fetch better instrument patches and point timidity at them.
wget -c -O /tmp/timidity-patches-eaw.deb http://www.fbriere.net/debian/dists/…iere.1_all.deb
sudo dpkg -i /tmp/timidity-patches-eaw.deb
sudo sed -i.backup -e 's@source /etc/timidity/freepats.cfg@source /usr/share/doc/timidity-patches-eaw/examples/timidity.cfg@g' /etc/timidity/timidity.cfg
# Load the ALSA sequencer modules and run timidity as an ALSA sequencer client.
sudo modprobe snd-seq-device
sudo modprobe snd-seq-midi
sudo modprobe snd-seq-oss
sudo modprobe snd-seq-midi-event
sudo modprobe snd-seq
timidity -iA -B2,8 -Os1l -s 44100
# Make the module loading and the ALSA-sequencer mode persist across reboots.
(printf 'snd-seq-device\nsnd-seq-midi\nsnd-seq-oss\nsnd-seq-midi-event\nsnd-seq\n') | sudo tee -a /etc/modules
sudo sed -i -e 's@#TIM_ALSASEQ=true@TIM_ALSASEQ=true@g' /etc/default/timidity
```
#### Long story that also might not work
Are you using a recent version of Ubuntu (or some other Linux distribution, but for those the instructions might need to be tweaked)? Are you using Scala but the Chromatic Clavier doesn’t work? Here’s what you need to do:
- Open up a terminal and run `sudo modprobe snd-virmidi`. To make this happen automatically when you boot up, add `snd-virmidi` as a new line to the file `/etc/modules` (otherwise you’ll need to run `modprobe snd-virmidi` every time).
- In Scala, go to Chromatic Clavier and then go to Sound Settings. Because of the first step, there should now be some choices available for MIDI Output Device. Pick the one with a 0 in the name (something like `/dev/snd/midiC1D0`).
- In your favorite MIDI connection manager (I use `aconnectgui`), the MIDI output from Scala will now be available as Virtual Raw MIDI 1-0 or VirMIDI 1-0. You can now connect that to a softsynth or hardware MIDI device of your choice. Have fun with the Chromatic Clavier!
The reason this is necessary, I think, is that Scala uses a legacy “raw” MIDI interface from the days when everyone had MIDI synthesizers (with crappy-sounding soundfonts) on their soundcards, and programs used to access those directly. The snd-virmidi kernel module creates a “virtual” MIDI-enabled soundcard that’s really just a way to get Scala’s MIDI output to appear as a normal MIDI output port.
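Before blaming Scala, one quick sanity check is to poke at the resulting ALSA ports from outside it. A minimal sketch, assuming the mido and python-rtmidi packages are installed; the port name `VirMIDI 1-0` and the port index are illustrative and vary by machine:

```python
# Enumerate MIDI output ports and send a test note through one of them.
# Assumes: pip install mido python-rtmidi. Port names differ per system;
# after `modprobe snd-virmidi` you should see something like 'VirMIDI 1-0'.
import mido

names = mido.get_output_names()
print(names)
with mido.open_output(names[0]) as out:   # substitute the VirMIDI port here
    out.send(mido.Message('note_on', note=60, velocity=64))
    out.send(mido.Message('note_off', note=60))
```

If a softsynth connected downstream of that port sounds a middle C, the virtual MIDI plumbing works and the problem is on Scala’s side.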
Actually, even after doing all that I couldn’t make MIDI output work. I don’t care any more. Download the data sets and use them how you want, but don’t depend upon this.