https://salzi.blog/2012/11/27/robust-least-squares-for-fitting-data-quadric-surface/
# Robust Least Squares for fitting data (quadric surface)
Here I would like to show the changes in the source code of the robust least squares fitting that are required for a general quadric surface (fitting of the planar model was introduced in the previous post). Assume you want to fit a set of data points in three-dimensional space with a general quadric described by five parameters $\bf{a_{1}}$, $\bf{a_{2}}$, $\bf{a_{3}}$, $\bf{a_{4}}$, and $\bf{a_{5}}$ as a function of $\bf{x}$ and $\bf{y}$ in the following way:
$\bf{z = a_{1}x^{2} + a_{2}y^{2} + a_{3}x + a_{4}y + a_{5}}$.
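Although the surface is quadratic in $\bf{x}$ and $\bf{y}$, the model is linear in the unknown coefficients, which is why linear least squares applies. Stacking all $n$ data points gives the overdetermined system $\bf{z} = X\bf{a}$, where each row of the design matrix $X$ is $[x_{i}^{2},\; y_{i}^{2},\; x_{i},\; y_{i},\; 1]$, and the least-squares estimate minimizes $\|X\bf{a} - \bf{z}\|^{2}$.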
Again, first we predefine the surface model with the so-called “ground truth” coefficients, which we will need to evaluate the fitted model:
import matplotlib.pyplot as plt
import numpy as np
# helper functions from the previous post (the import line is truncated in
# the original; the module name is omitted here):
# from ... import (build_z_vector, estimate_model)
ROBUST_LSQ_N = 10  # number of robust iterations (value assumed; defined in the previous post)
# ground truth model coefficients
a_1, a_2, a_3, a_4, a_5 = 0.1, -0.2, -0.3, 0.1, 0.15
a_ground_truth = [a_1, a_2, a_3, a_4, a_5]
print(f'Ground truth model coefficients: {a_ground_truth}')
# create a coordinate matrix
n_x, n_y = np.linspace(-1, 1, 41), np.linspace(-1, 1, 41)
x_values, y_values = np.meshgrid(n_x, n_y)
# make the estimation
z_values = a_1 * x_values ** 2
z_values += a_2 * y_values ** 2
z_values += a_3 * x_values + a_4 * y_values + a_5
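Before corrupting the data, a quick sanity check is useful (my own sketch, not part of the original post): plain least squares applied to the clean data must recover the ground-truth coefficients:

```python
import numpy as np

# with noise-free data, ordinary least squares recovers the coefficients exactly
a_true = np.array([0.1, -0.2, -0.3, 0.1, 0.15])
nx = np.linspace(-1, 1, 41)
x, y = np.meshgrid(nx, nx)
# design matrix with rows [x^2, y^2, x, y, 1]
X = np.column_stack([x.ravel() ** 2, y.ravel() ** 2,
                     x.ravel(), y.ravel(), np.ones(x.size)])
z = X @ a_true
a_est = np.linalg.lstsq(X, z, rcond=None)[0]
print(np.allclose(a_est, a_true))  # True
```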
Then, as in the planar-surface case, we find the model coefficients and display the output of the fitted model for these three cases:
• Input data is corrupted by Gaussian noise only; the regular linear least squares method is used.
• Input data is corrupted by Gaussian noise AND outliers; the regular linear least squares method is used.
• Input data is corrupted by Gaussian noise AND outliers; the robust linear least squares method is used.
Note that in the current example five parameters need to be estimated. The required changes in the source code are shown below:
def build_x_matrix(x_values: np.ndarray, y_values: np.ndarray) -> np.ndarray:
    """Build the design matrix whose rows are [x^2, y^2, x, y, 1]."""
    x_flatten, y_flatten = x_values.flatten(), y_values.flatten()
    z_ones = np.ones((x_flatten.size, 1))
    return np.hstack((np.reshape(x_flatten ** 2, (x_flatten.size, 1)),
                      np.reshape(y_flatten ** 2, (y_flatten.size, 1)),
                      np.reshape(x_flatten, (x_flatten.size, 1)),
                      np.reshape(y_flatten, (y_flatten.size, 1)),
                      z_ones))
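To see what `build_x_matrix` produces, here is a standalone check (a sketch of mine; the function is repeated inside the snippet so it runs on its own):

```python
import numpy as np

def build_x_matrix(x_values: np.ndarray, y_values: np.ndarray) -> np.ndarray:
    # same construction as above, repeated so the snippet is self-contained
    x, y = x_values.flatten(), y_values.flatten()
    return np.hstack((np.reshape(x ** 2, (x.size, 1)),
                      np.reshape(y ** 2, (y.size, 1)),
                      np.reshape(x, (x.size, 1)),
                      np.reshape(y, (y.size, 1)),
                      np.ones((x.size, 1))))

x, y = np.meshgrid(np.linspace(-1, 1, 3), np.linspace(-1, 1, 3))
X = build_x_matrix(x, y)
print(X.shape)  # (9, 5)
print(X[0])     # row for the grid point (-1, -1): [x^2, y^2, x, y, 1]
```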
def robust_lsq_step(x_matrix: np.ndarray, z_vector: np.ndarray,
                    a_robust_lsq: np.ndarray,
                    z_values: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    # compute absolute value of residuals (fit minus data)
    abs_residuals = abs(x_matrix @ a_robust_lsq - z_vector)
    # compute the scaling factor for the standardization of residuals
    # using the median absolute deviation of the residuals;
    # 6.9460 is a tuning constant (4.685 / 0.6745)
    abs_res_scale = 6.9460 * np.median(abs_residuals)
    # standardize residuals
    w = abs_residuals / abs_res_scale
    # compute the robust bisquare weights, excluding outliers
    w[(w > 1).nonzero()] = 0
    # calculate robust weights for 'good' points; note that if you supply
    # your own regression weight vector, the final weight is the product of
    # the robust weight and the regression weight
    tmp = 1 - w[(w != 0).nonzero()] ** 2
    w[(w != 0).nonzero()] = tmp ** 2
    # get weighted x values
    x_weighted = np.tile(w, (1, 5)) * x_matrix
    a = x_weighted.T @ x_matrix
    b = x_weighted.T @ z_vector
    # get the least-squares solution to a linear matrix equation
    a_robust_lsq = np.linalg.lstsq(a, b, rcond=None)[0]
    z_result = x_matrix @ a_robust_lsq
    return np.reshape(z_result, z_values.shape), a_robust_lsq
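The bisquare weighting inside `robust_lsq_step` can be illustrated in isolation with made-up residuals (a sketch, not from the original post): the gross outlier receives zero weight, while well-fitting points keep weights close to 1:

```python
import numpy as np

# made-up residuals: three well-fitting points and one gross outlier
abs_residuals = np.array([[0.1], [0.2], [0.3], [10.0]])
abs_res_scale = 6.9460 * np.median(abs_residuals)  # 6.9460 = 4.685 / 0.6745
w = abs_residuals / abs_res_scale
w[w > 1] = 0                         # standardized residuals beyond the cutoff: weight 0
mask = w != 0
w[mask] = (1 - w[mask] ** 2) ** 2    # bisquare weights for the remaining points
print(w.ravel())                     # weights near 1, then 0 for the outlier
```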
def robust_least_squares_noise_outliers(x_values: np.ndarray,
                                        y_values: np.ndarray,
                                        z_values: np.ndarray) -> None:
    """Input data is corrupted by Gaussian noise AND outliers,
    robust least squares method will be used"""
    # corrupt the clean data with Gaussian noise and sparse outliers
    # (this step comes from the previous post; the noise levels used
    # here are illustrative)
    rng = np.random.default_rng()
    z_corrupted = z_values + 0.02 * rng.standard_normal(z_values.shape)
    outlier_mask = rng.random(z_values.shape) < 0.05
    z_corrupted[outlier_mask] += rng.uniform(-0.5, 0.5, z_values.shape)[outlier_mask]
    x_matrix = build_x_matrix(x_values, y_values)
    z_vector = build_z_vector(z_corrupted)
    z_result, a_robust_lsq = estimate_model(x_matrix, z_corrupted)
    # iterate till the fit converges
    for _ in range(ROBUST_LSQ_N):
        z_result, a_robust_lsq = robust_lsq_step(
            x_matrix, z_vector, a_robust_lsq, z_values)
    print(f'Robust Least Squares (noise and outliers): {a_robust_lsq}')
    plt.figure(figsize=(10, 10))
    plt.title('Robust estimate (corrupted by noise AND outliers)')
    plt.imshow(np.hstack((z_values, z_corrupted, z_result)))
    plt.clim(np.min(z_values), np.max(z_values))
    plt.jet()
    plt.show()
robust_least_squares_noise_outliers(x_values, y_values, z_values)
That’s it. Here you can see the output results for all three cases:
And again, the output produced by the robust linear least squares method looks quite good despite the many outliers in the input data. The code shown above is available here.
Best wishes,
Alexey
http://www.mathworks.com/help/symbolic/mupad_ref/erfi.html?requestedDomain=www.mathworks.com&nocookie=true
# erfi
Imaginary error function
### Use only in the MuPAD Notebook Interface.
This functionality does not run in MATLAB.
## Syntax
```
erfi(x)
```
## Description
$\mathrm{erfi}\left(x\right)=-i\mathrm{erf}\left(ix\right)=\frac{2}{\sqrt{\pi }}\underset{0}{\overset{x}{\int }}{e}^{{t}^{2}}dt$ computes the imaginary error function.
This function is defined for all complex arguments `x`. For floating-point arguments, `erfi` returns floating-point results.
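Outside MuPAD, the defining integral can be evaluated with plain numerical quadrature; the following Python sketch (not part of the MuPAD toolbox) approximates `erfi` with the trapezoidal rule:

```python
import math

def erfi_numeric(x: float, n: int = 100000) -> float:
    """Approximate erfi(x) = (2/sqrt(pi)) * integral_0^x exp(t^2) dt
    with the trapezoidal rule on n subintervals."""
    if x == 0.0:
        return 0.0
    h = x / n
    s = 0.5 * (1.0 + math.exp(x * x))           # endpoint terms
    s += sum(math.exp((i * h) ** 2) for i in range(1, n))
    return 2.0 / math.sqrt(math.pi) * h * s

print(erfi_numeric(1.0))   # ≈ 1.6504, i.e. erfi(1)
print(erfi_numeric(-1.0))  # erfi is odd, so ≈ -1.6504
```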
The implemented exact values are: `erfi(0) = 0`, `erfi(∞) = ∞`, `erfi(-∞) = -∞`, `erfi(i∞) = i`, and `erfi(-i∞) = -i`. For all other arguments, the error function returns symbolic function calls.
For the function call `erfi(x) = -i*erf(i*x) = i*(erfc(i*x) - 1)` with floating-point arguments of large absolute value, internal numerical underflow or overflow can happen. If a call to `erfc` causes underflow or overflow, this function returns:
• The result truncated to `0.0` if `x` is a large positive real number
• The result rounded to `2.0` if `x` is a large negative real number
• `RD_NAN` if `x` is a large complex number and MuPAD® cannot approximate the function value
The imaginary error function `erfi(x) = i*(erfc(i*x) - 1)` returns corresponding values for large arguments. See Example 2.
MuPAD can simplify expressions that contain error functions and their inverses. For real values `x`, the system applies the following simplification rules:
• `inverf(erf(x)) = inverf(1 - erfc(x)) = inverfc(1 - erf(x)) = inverfc(erfc(x)) = x`
• `inverf(-erf(x)) = inverf(erfc(x) - 1) = inverfc(1 + erf(x)) = inverfc(2 - erfc(x)) = -x`
For any value `x`, the system applies the following simplification rules:
• `inverf(-x) = -inverf(x)`
• `inverfc(2 - x) = -inverfc(x)`
• `erf(inverf(x)) = erfc(inverfc(x)) = x`
• `erf(inverfc(x)) = erfc(inverf(x)) = 1 - x`
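These inverse relations can be checked numerically. The sketch below (Python, not MuPAD) implements an `inverf` helper by bisection on the standard library's `math.erf`; `inverf` here is my own illustrative function, not a library call:

```python
import math

def inverf(y: float, lo: float = -6.0, hi: float = 6.0, iters: int = 80) -> float:
    """Invert math.erf on [-6, 6] by bisection (erf is strictly increasing)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if math.erf(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x = 0.7
print(abs(inverf(math.erf(x)) - x) < 1e-9)    # inverf(erf(x)) = x  -> True
print(abs(inverf(-math.erf(x)) + x) < 1e-9)   # inverf(-erf(x)) = -x -> True
```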
## Environment Interactions
When called with a floating-point argument, the functions are sensitive to the environment variable `DIGITS`, which determines the numerical working precision.
## Examples
### Example 1
You can call the imaginary error function with exact and symbolic arguments:
`erfi(0), erfi(x + 1), erfi(-infinity), erfi(3/2), erfi(sqrt(2))`
To approximate exact results with floating-point numbers, use `float`:
`float(erfi(3/2)), float(erfi(sqrt(2)))`
Alternatively, use floating-point values as arguments:
`erfi(0.2), erfi(2.0 + 3.5*I), erfi(5.5 + 1.0*I)`
### Example 2
For large complex arguments, the imaginary error function can return `RD_NAN`:
`erfi(38000.0 + 3801.0*I)`
### Example 3
`diff`, `float`, `limit`, `expand`, `rewrite`, and `series` handle expressions involving the error functions:
`diff(erfi(x), x, x, x)`
`float(ln(3 + erfi(sqrt(PI)*I)))`
`limit(x/(1 + x)*erfi(I*x)*I, x = infinity)`
`rewrite(erfi(x), erfc)`
`series(erfi(x), x = I*infinity, 3)`
## Parameters
`x` Arithmetical expression
## Return Values
Arithmetical expression
## Algorithms
`erf`, `erfc`, and `erfi` are entire functions.
https://physics.stackexchange.com/questions/493117/fresnel-transmission-coefficient-for-magnetic-field
# Fresnel Transmission Coefficient for Magnetic Field
Helmholtz equations for electric and magnetic fields are
$$∇^2 \mathbf{H} + k^2 \mathbf{H} = \mathbf{0}$$ $$∇^2 \mathbf{E} + k^2 \mathbf{E} = \mathbf{0}$$
Obviously, if a solution is found to satisfy the electric field equation, it must also satisfy the magnetic field equation. For a wave traveling between two media, the electric field magnitude in medium two is proportional to the magnitude in medium one; in other words,
$$|\mathbf E_2| = T |\mathbf E_1|$$
where $$T$$ is the Fresnel transmission coefficient. Is this true for the magnetic field as well?
$$|\mathbf H_2| = T |\mathbf H_1|$$
If not why? How do we explain that the Helmholtz solution for electric and magnetic field could be the same?
• Isn't your Helmholtz equation for free space with boundary conditions? So if there are two media you need to write down a different equation. – lalala Jul 23 at 5:56
This is because the relationship between the E-field and H-field magnitudes is $$E = \eta H$$, where $$\eta$$ is the impedance.
Thus for two media with impedances $$\eta_1$$ and $$\eta_2$$, the "transmission coefficient" for the H-field would be $$H_2 = T\frac{\eta_1}{\eta_2} H_1$$
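A quick numeric check of this relation (a sketch; normal incidence and non-magnetic media are assumed, so $$\eta = \eta_0/n$$ and $$T = 2\eta_2/(\eta_1+\eta_2)$$):

```python
eta0 = 376.73  # impedance of free space, ohms

n1, n2 = 1.0, 1.5                  # refractive indices, e.g. air -> glass
eta1, eta2 = eta0 / n1, eta0 / n2  # wave impedances of the two media

T = 2 * eta2 / (eta1 + eta2)       # E-field transmission coefficient
H_ratio = T * eta1 / eta2          # H2/H1 = T * eta1/eta2

print(T)        # ≈ 0.8  (= 2*n1/(n1+n2))
print(H_ratio)  # ≈ 1.2  (= 2*n2/(n1+n2))
```

Note that the H-field ratio exceeds 1 here even though $$T < 1$$; energy conservation involves the impedances as well as the field amplitudes.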
http://lists.topica.com/lists/tunguska/read/message.html?mid=801582031&sort=d&start=25
W.Kundt's story about TUNGUSKA 2001 Andrei Ol'khovatov Jan 16, 2002 08:08 PST
Dear David and All,
To inflame David's (and maybe others too) interest in visiting Tunguska,
here is below a story by German astrophysicist Prof. Wolfgang Kundt about
TUNGUSKA 2001 conference and his trip to the Tunguska epicenter. In shortened
form it was published in the November 2001 issue of METEORITE magazine, and
here you can read the detailed version (pictures are omitted).
Best wishes,
Andrei
=========================================
TUNGUSKA 2001, CONFERENCE REPORT
The Search for the Evasive 1908 Meteorite continues
by Wolfgang Kundt
Meeting at Moscow and Krasnoyarsk
There is a tradition among Russian scientists to celebrate their 1908
Tunguska catastrophe with biennial twin conferences at Moscow and either
Krasnoyarsk or Tomsk, on or around its day of recurrence, 30 June. This
time, in 2001, our convener of the Moscow session was Andrei Ol'khovatov who
had made sure that among the multiple explanations of the catastrophe -
stony asteroid, icy comet, midi black hole, antimatter, mirror matter,
extra-terrestrials, or formation of a kimberlite - the traditional
meteoritic explanation was not the only one to be discussed.
During the two-day conference at Moscow's MEPh Institute, contributions
started, among others, with an honoring of the late N.V. Vasil'ev, and with
the classic interpretation of the orbit and disintegration of a cosmic body
in the atmosphere of Earth (V.P. Korobeinikov, G.A. Tirskii, D.V. Rudenko).
They then shifted to a new interpretation of the catastrophe by Robert Foot
(Melbourne) and Z.K. Silagadze (Novosibirsk), via the infall of a cosmic
object composed of 'mirror matter', i.e. of hypothetical matter with
opposite symmetry under space reflection (parity) which would interact only
quite weakly with ordinary matter. Does such mirror matter exist in the
Universe? It could explain a number of puzzling facts.
Later during the session, geophysical explanations were likewise
discussed, in particular by A. Ol'khovatov, by myself, by G.G. Kochemasov,
and by I.P. Jerebchenko. Kochemasov stressed the preferred geographic
location of the explosion center, with several radial weakness lines and
faults intersecting near the epicenter of the (catastrophe near the) Stony
Tunguska river whose location is not far from the center of the Siberian
craton, with a lane of dozens of kimberlites straddling the site, and with a
(more global) sectoral structure of Earth's eastern hemisphere likewise
running through it. In further support of Tunguska's preferred location,
Jerebchenko highlighted ringlike structures of magnetic anomalies, Moho
isohypses, river-net patterns, tectonic movements, and kimberlites, of radii
ranging from 50 km through a few 10$^3$ km, with the Tunguska site sitting at
its center, at an Asian geomagnetic and heat-flow maximum. She also stressed
similarities to the Easter Island geology.
As the Moscow session came to an end, its foreign attendants were guided
by a few of their Russian colleagues, through most hospitable meals with
drinks and lengthy metro and bus rides, to a night plane to Krasnoyarsk
during whose flight the sky never fell really dark. On arrival at the city
outskirts in late morning, it became clear that there was another Tunguska
session getting ready to start; but some of us felt like having a rest
first, including myself. Our group counted nine members: Jens Ergon - a
Swedish TV producer - Elmar Bartlmae - a Munich producer working for Welt
der Wunder - accompanied by two Russian camera men plus Martina - the
interpreter - Lars Franzen - a Swedish experimental scientist - two
escorting Russian colleagues: Boris Rodionov and Viktor Lebedev, and
myself - an astrophysicist from Bonn. A small 4-room apartment had been
subrented for us, newly built in the outskirts of the city, but half of the
party preferred to move on to a hotel. When they tried to pick me up for the
scientific session, four hours later, I turned out to be locked in, and had
to free myself with the help of some kitchen equipment, guided by handy
communication with the outside world. I then learned that I had missed two
of my scheduled talks, but was given another chance later in the afternoon.
The one-day Krasnoyarsk session was organized similarly to the Moscow
one, and attended, among others, by a number of local school boys in company
of their teacher. I.V. Shalamov (Novosibirsk) argued that the catastrophe
had altogether five centers, a fact that may have been known already to
Leonid Kulik through his aerial photographs in 1938-9. Satellite photographs
shown by Yu.D. Labvin (Krasnoyarsk) demonstrated the preferred location of
the central (cauldron) part of the 1908 treefall area, see Fig.1. During my
own talk, one of the school boys volunteered as an apparently excellent
interpreter (into Russian).
Rushing to the Site
After an evening sightseeing tour of Krasnoyarsk and the Jenissei stream,
our international team was allowed a few hours of sleep before being put on
a propeller airplane to Vanavara, the nearest trading center to the Tunguska
site. Viktor , our young Russian conductor, enjoyed fewer hours of sleep
during that night than we did. Vanavara's roads welcomed us so muddy - due
to the shallow permafrost and some ongoing drizzle - that we forgot all
shopping and/or sightseeing plans and simply waited for the helicopter that
was supposed to fly us the 65 km to Pristan - the spacious wooden lodge some
7 km south of Tunguska's epicenter. Swarms of mosquitos convinced us not to
leave the airport shelter for too long.
Then came 'our' helicopter, and carried not only us and our ample camera
equipment but also our kitchen team plus a bunch of soldiers whose purpose
did not get equally clear. All of us were safely flown across the hilly
taiga, its woods and swamps, rivers and occasional lakes, and dropped on a
clearing in the middle of nowhere.
Where were we? I was too excited to wait for a meal even though many
hours had passed without, and followed the only narrow footpath I could
detect offhand for a couple of hours. It turned out to be the right one
towards the epicenter, pioneered by Kulik some 74 years ago ...
Our two TV teams were short of time, hence I was told that there was only
one day for us left to walk to the epicenter and back, namely the following
day; bad enough. But it became an almost cloudless day, with an
unforgettable 12-hour hike through the young woods and swamps, up and down a
few hills which rise some 100 m above their surroundings. What has caused
havoc 93 yr ago to this peaceful, invitingly shaped land of largely
circular architecture, centered on Mt. Stoikovich? In many places, I felt
like hiking through the Eifel mountains, west of Bonn, with their deep,
circular lakes called 'Maare', often side by side of a pasture-like terrain,
of equal size and shape but of swampy surface; cf. Fig.2. The Maare of the
Eifel, some 200 in number, are of volcanic origin. During their formation,
thousands of years ago, dust was blown through large distances, e.g. to
Sweden and to Italy.
What New Evidence has emerged ?
So what could we learn when we hiked through the former cauldron, on Kulik's
trails, in pleasant sunshine? We could still see a few so-called telegraph
poles: standing fir trees that had lost all their branches in a supersonic
blast, like in Hiroshima, after the nuclear explosion. We also passed by a
number of large root stumps without stems and pits, not yet rotten (Fig. 3),
remnants of the catastrophe, whose origin has remained a mystery. They must have been hurled
through dozens of meters at least - like John's stone which weighs 10 t, and
has landed on the slope of Mt. Stoikovich with at least sonic speed.
Mt. Stoikovich sits at the center of the 250-Myr old Kulikovskii crater,
easily recognisable from space, near the intersection of several faultlines.
The chemistry of probes collected by the yearly expeditions has always been
found consistent with earthquakes, or volcanic outgassings. The 1999 Italian
hours. Kimberlites and natural gas reservoirs are being detected in the near
neighbourhood of the treefall area, at distances measuring in 100 km.
('Kimberlites' - called after Kimberley in South Africa - are narrow
volcanic outcrops hardly noticeable at the surface except for a shallow
dome-shaped tuff ring, which are mined mainly for their diamonds).
The 4 bright nights in Europe and western Asia, straddling 30 June 1908,
are reminiscent of the 1883 Krakatoa outburst; they ask for transient
scatterers in the upper atmosphere, above 500 km, at heights which only
methane and hydrogen are light enough to reach in sufficient quantity. Fast
rising natural gas has been repeatedly detected in recent years, in the form
of 'mystery clouds' - by airplane pilots - and indirectly as pockmarks on 6%
of the sea floor [1].
So why has the 1908 Tunguska explosion not been a tectonic event, say,
the formation of a kimberlite? Meteoritic impacts tend to leave debris even
when 10$^5$ times less massive than the Tunguska meteorite would have been;
none has ever been found. Volcanic explosions are known to be some 30 times
more frequent than meteoritic ones, at equal energy. Among the meteoritic
ones, stony asteroids tend to collide with Earth some 30 times more
frequently than comets do. The absence of any remnants in the Tunguska area
points, of course, to the rare event of a cometary impact, which would
evaporate at great height. But no comet has been reported in those years,
other than comet Encke. And besides, a shock wave from an impact that formed
at great height would be unable to create a treefall pattern with five
centers, separated by several kilometers. Moreover, a comet entering at
shallow infall angle - as is often claimed - would create a parallel
treefall pattern, not an almost radial one.
Comparing with the Sikhote-Aline meteorite (1947), and the Cando blowout
(1994)
On 12 February 1947, at 10.30 local time, an iron meteorite struck the
easternmost edge of Siberia, in the western part of the Sikhote-Aline
mountain range. Eyewitnesses reported a bolide crossing the atmosphere
within $\le$ 5 s, though noises were heard for (10$\pm$5) minutes [2]. The
bolide left a gigantic trail, or smoke band which got increasingly wiggly
but disappeared only towards the evening. According to eyewitnesses, the
bolide split up successively at the four heights of 58, 34, 16, and 6 km,
towards a final diameter of 0.6 km. From infall channels in the ground and
tree destructions, its infall angle could be measured as (30$\pm$8) deg
w.r.t. the vertical.
Within the four succeeding years, over a hundred craters were detected in
that area, the largest of diameter 26.5 m. They formed three concentrations,
spread over an ellipse of diameters 1 and 2 km. All craters were formed by
meteoritic fragments whose impact channels penetrated between 1 and 8 m into
the ground, depending on their shape and orientation. The summed weight of
all the collected iron-rich fragments was 23 t, and estimates yielded about
70 t total for the impacted mass, corresponding to an iron bolide of
diameter 6 m, some 10$^{-3.5}$ in mass of the hypothetical Tunguska bolide.
Even if a comparable amount of rocky material had been left behind in the
atmosphere, in the shape of the dust trail, the Sikhote-Aline meteorite was
still some 1000 times lighter than Tunguska's hypothesized one.
No impactites were found at Sikhote-Aline: explosions after impact tend
to occur (only) for crater diameters $\ga$100 m. Telegraph poles and
snapped-off tree tops were plentiful. Trees were felled radially around
craters, but only in directly adjacent ringlike domains, of width $\la$30 m.
Some of them took bizarre appearances, see Fig.4.
A quite different destruction of comparable energy was the bolide of 18
January 1994, seen and heard at 7.15 UT in the parish of Cando, NW of
Spain [3]. It took three months until a newly formed crater was reported, of
size 29 x 13 m, 1.5 m deep, whose former (big) pine trees were hurled
downhill through 50 to 100 m, see Fig.5. An in-between road remained clear
of soil from the ejection, eliminating the possibility of a landslide -
which did, however, occur on the same day 300 m NW of the main crater,
knocking down two pines. No meteoritic debris were recovered; the authors
prefer a high-speed gas-eruption explanation.
How to Discriminate ?
Tunguska, Sikhote-Aline, and Cando are three catastrophical events of the
last century - the first of them some 1000 times more energetic than the two
others - which have found quite different explanations in the literature.
Whereas Krinov spends 129 pages of his 397-page book [2] on giant meteorites on
the "Tunguska meteorite", Ol'khovatov prefers a tectonic interpretation [4].
Even Sodom and Gomorrah have been recently interpreted as former cities on
the SE bank of the Dead Sea, blown up and/or slid to the bottom of the Sea
by a volcanic eruption. How can we discriminate between the terrestrial and
the extraterrestrial interpretation?
Whereas with the former interpretation you can be rejected from
peer-reviewed journals, even when based on sober and friendly arguments, the
latter interpretation may only apply to a 3% minority of all events.
Eyewitnesses speak of bolides - or fireballs - in all cases, and of barisal
guns lasting for many minutes. Trees are felled, or debranched, or their
tops chopped off, craters are formed, and fires are ignited in all cases.
What differs are the details, of which I listed 19 in my talks at Moscow
and Krasnoyarsk. Volcanic flames in the sky can last for minutes whereas a
meteoritic infall trail flashes only for a few seconds, and is hardly sensed
hot in the faces of eyewitnesses, because of too small an extent in space
and time. A meteoritic trail, unlike volcanic flames, tends to stay visible
for hours. Barisal guns, on the other hand, are heard for comparable times
in both cases by distant eyewitnesses (d$\approx$100 km), because sound
echos from warm layers above the stratosphere take that long. For tree
falls, their pattern matters: how many centers? Telegraph poles require
supersonic shock waves. Craters, if blown from below, can contain tree
stumps, whereas those formed by infall show an impact channel plus debris.
Volcanic outblows can throw trees, or tree stumps, or stones through several
hundred meters, whereas non-explosive infalls (with small craters)
redistribute the impacted soil in their immediate surroundings.
There are additional criteria. Volcanic blowouts require pressurized
vertical exhaust pipes from a deep-lying fluid reservoir, which have their
imprints on the local geography, like the Kulikovskii crater shown in Fig.1.
Moreover, when megatons of natural gas - mainly methane - are suddenly
released into the atmosphere, they will rise, burn, and form clouds in the
thermosphere for several days, at heights above 500 km, where they scatter
the sunlight. Such scattered sunlight at night is known as the bright nights
of both Krakatoa (1883) and Tunguska (1908). We live on a tectonically
active planet. Exploring it can be great fun.
References:
1. Kundt, W., Current Science 81, 399-407 (2001).
2. Krinov, E.L., Giant Meteorites, Pergamon, 1966.
3. Docobo, J.A., Spalding, R.E., Ceplecha, Z., Diaz-Fierros, F., Tamazian,
V., Onda, Y., ....Meteoritics & Planetary Sciences 33, 57-64 (1998).
4. Ol'khovatov, A.Yu., internet: www.geocities.com/CapeCanaveral/Cockpit/3240, 1999.
Biodata: Prof. Wolfgang Kundt has retired from the Institute for
Astrophysics of Bonn University. As a former student of Pascual Jordan, he
started his career in General Relativity, then moved into astrophysics via
cosmology, and more recently has also taken strong interest in geo- and
biophysics.
Figure Captions
Fig. 1: False-colour near-IR satellite photograph of the central
('cauldron') region of the 1908 Tunguska catastrophe, roughly 10 km in
extent, and centered on Mt. Stoikovich. Arrows in the margin continue the
rough flow directions of the rivers Cheko, Kimchu, and Khushma which border
the area in the north and south. The Kimchu river flows through lake Cheko
(in the NW of the map). Mt. Farrington, N-NE from Mt. Stoikovich, allows a
good view of the swamps and surrounding mountain chains. Note the preferred
geometry of the cauldron - even detectable from space - which led Kulik to
speak of the 'Merrill circus' inside an 'amphi theatre'.
Fig. 2: A small pond surrounded by swamps, or peat bogs, encountered along
the footpath to the epicenter. Dozens of such swamps encircle Mt.
Stoikovich, see Fig.1.
Fig. 3: One of the dozens of root stumps still lying in that area today,
with their stem segments pointing away from the epicenter. Their original
sites have not been identified. They pose a problem to the meteoritic
interpretation.
Fig. 4: Krinov's book [2] shows this remarkable photograph where the
Sikhote-Aline iron shower has deposited a tree-trunk segment in the crown of
another tree. Some intelligence may be required to convincingly explain how
this masterpiece of natural acrobatics has been achieved.
Fig. 5: Sketch, by Docobo et al (1998), of the destruction achieved by the
1994 Cando event [3]. A crater was formed at A, with "closed" low edge at D.
Big trees (H; of diameters 0.6 m, height 13 m) were thrown downhill to
distances between 50 and 100 m. The footpath E remained clear of soil or
trees. Soil was thrown to the places marked F and G.
-----Original Message-----
From: Harder, David A <dhar-@bnl.gov>
To: tung-@topica.com <tung-@topica.com>
Date: 16 January 2002 16:46
Subject: [ Tunguska ] freedom of thought
Good morning Tunguska aficionados. Concerning freedom of thought. Dick cannot give free rein to his intellect, and in the end maybe neither can I. In the USA we tote the banner of freedom, but do we really have the freedom we proclaim that we have, if we cannot take flight on the wings of imagination? Some of the greatest human works have come from those journeys. When we were children we were conditioned to believe that, although there were marvelous examples to the contrary like Tolstoy, the average Russian was dead in their soul, because they had no freedom. They were told what to think and how to think.

Dick, do you remember the Rocky and Bullwinkle show? Do you remember Boris Bad Enough and Natasha? Russians were stereotyped as sinister and evil. I wonder what nonsense the Russian children were taught about the Americans? The status quo has been pathogenic to the human spirit in all domains of human experience. Do you remember what Mark Twain had to say about mental conditioning? "When even the brightest mind in our world has been trained up from childhood in a superstition of any kind, it will never be possible for that mind, in its maturity, to examine sincerely, dispassionately, and conscientiously any evidence or any circumstance which shall seem to cast a doubt upon the validity of that superstition. I doubt if I could do it myself."

Perhaps I am getting dangerously close to the limit, but I tell you, what we have been conditioned to believe plays a dynamic role in the Tunguska investigation, as it does in many areas of human experience. Dick, it is a shame that you are restrained in your communications. I feel that you have a major contribution to make. Perhaps the spirit will take a shit on your head, and you will know what you must do.

Andrei, I don't know yet how I will convince you, but I must find a way. You must accompany me on the journey. You may say: I am not going to trek into the Cauldron that way with some crazy Yankee Doodle; it is too dangerous and would be a most unpleasant experience. If you are not in good physical condition then you had better prepare. In the end the spirit may light a fire under your ass.

All the best amigo
Daveh 1/16/02

Geophysical interpretation of Tunguska:
http://www.geocities.com/olkhov/tunguska.htm
http://www.ams.org/mathscinet-getitem?mr=1914805
MathSciNet bibliographic data MR1914805 11K31 (11J71 26A42) Baxa, C.; Schoißengeier, J. Calculation of improper integrals using $(n\alpha)$-sequences. Dedicated to Edmund Hlawka on the occasion of his 85th birthday. Monatsh. Math. 135 (2002), no. 4, 265–277.
http://math.stackexchange.com/questions/178178/relation-between-lie-algebras-and-lie-groups
# Relation between Lie algebras and Lie groups
I am a little confused as to how to compute, in general, the Lie algebra of a Lie group and vice versa: namely, the Lie groups (up to diffeomorphism) having a given Lie algebra.
The way I did this for classical groups such as $O(n)$ or $SL_n(\mathbb{R})$ was to express them as fibres over a regular value of a smooth map and then explicitly computing the tangent space at the identity as the kernel of the differential. This method however works only in very specific cases.
1. How does one, for instance, compute the Lie algebra of the group $SO(2) \bar{\times} \mathbb{R}^4$ (by $\bar{\times}$ I mean the semi-direct product).
2. Which connected Lie groups up to diffeomorphism have the following Lie algebra
$$\left\{\left(\begin{array}{ccc} x & y & w \\ z & -x & v \\ 0 & 0 & 0 \end{array} \right), \qquad x,y,z,v,w\in \mathbb{R}\right\}?$$
Thanks in advance for any help.
For 1 you can try embedding the Lie group into $\text{GL}_n$ for some $n$ and computing which matrices exponentiate into it. For 2, every connected Lie group $H$ having a particular Lie algebra $\mathfrak{g}$ is covered by a unique simply connected Lie group $G$ having Lie algebra $\mathfrak{g}$, and in fact $H$ is obtained from $G$ by quotienting by a discrete subgroup of its center. So to find all $H$ it suffices to find $G$ (which may not be easy in general) then to describe the discrete subgroups of its center. – Qiaochu Yuan Aug 2 '12 at 21:35
Two more notes: for 1 I think a semidirect product of Lie groups has Lie algebra the corresponding semidirect product, and for 2 to find $G$ it suffices to find some $H$ and take its universal cover (although this may not be easy to describe in general). – Qiaochu Yuan Aug 2 '12 at 21:36
As for 2, let $W=\{(a,b,c)\in \mathbb R^3| c=0\}$ and let $G$ be the set of elements $g\in GL_3(\mathbb R)$ such that (1) $g$ leaves $W$ invariant ($gW=W$), (2) the induced action on $W$ satisfies $det=1$, and (3) the induced action on $\mathbb R^3/W$ is trivial (the identity). It follows from this definition that $G$ is a subgroup of $GL_3(\mathbb R)$ and a simple calculation shows that $$G=\left\{\left(\begin{array}{ccc} a & b & e \\ c & d & f \\ 0 & 0 & 1 \end{array} \right) | \quad a,b,c,d,e,f\in \mathbb{R}, \quad ad-bc=1\right\}.$$ Now you can check that the Lie algebra of $G$ consists of the matrices you indicated.
Of course there are other answers, as the $G$ above is not simply connected.
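For question 2, one can check numerically that exponentials of such matrices land in the group $G$ constructed above. This is only a sketch: the truncated power series stands in for `scipy.linalg.expm`, and the particular entries are arbitrary.

```python
import numpy as np

def expm_series(X, terms=30):
    """Matrix exponential via a truncated power series (adequate for small matrices)."""
    result = np.eye(X.shape[0])
    term = np.eye(X.shape[0])
    for k in range(1, terms):
        term = term @ X / k
        result = result + term
    return result

# A generic element of the Lie algebra in the question (entries chosen arbitrarily)
x, y, z, v, w = 0.3, -1.2, 0.7, 0.5, -0.4
X = np.array([[x,  y, w],
              [z, -x, v],
              [0., 0., 0.]])

g = expm_series(X)

# exp(X) should land in G: the bottom row stays (0, 0, 1), and the top-left
# 2x2 block has determinant 1, since det(exp(A)) = e^{tr A} = e^0 = 1.
assert np.allclose(g[2], [0, 0, 1])
assert np.isclose(np.linalg.det(g[:2, :2]), 1.0)
```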
https://nrich.maths.org/12867
### Gambling at Monte Carlo
A man went to Monte Carlo to try and make his fortune. Is his strategy a winning one?
### Marbles and Bags
Two bags contain different numbers of red and blue marbles. A marble is removed from one of the bags. The marble is blue. What is the probability that it was removed from bag A?
### Coin Tossing Games
You and I play a game involving successive throws of a fair coin. Suppose I pick HH and you pick TH. The coin is thrown repeatedly until we see either two heads in a row (I win) or a tail followed by a head (you win). What is the probability that you win?
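A quick Monte Carlo sketch of the coin-tossing game (the trial count and seed are arbitrary choices) agrees with the exact answer, which follows from noting that HH can only win if the very first two tosses are both heads:

```python
import random

def play(rng):
    """Toss until HH (I win) or TH (you win); return True if TH wins."""
    prev = rng.choice("HT")
    while True:
        cur = rng.choice("HT")
        if prev == "H" and cur == "H":
            return False   # HH appeared first
        if prev == "T" and cur == "H":
            return True    # TH appeared first
        prev = cur

rng = random.Random(0)
trials = 100_000
p = sum(play(rng) for _ in range(trials)) / trials
print(round(p, 2))  # close to the exact answer 3/4
```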
# Pay Attention
##### Age 14 to 16 Short Challenge Level
When a speaker gave a talk, 6% of the audience slept through the whole thing.
22% of the audience stayed awake and heard the entire talk.
Of the rest of the audience, half of them heard $\frac{2}{3}$ of the talk, and half of them heard $\frac{1}{3}$ of the talk.
What was the average proportion of the talk that people heard?
This problem is adapted from the World Mathematics Championships
You can find more short problems, arranged by curriculum topic, in our short problems collection.
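The "Pay Attention" average can be checked with exact fractions; note this sketch reveals the answer:

```python
from fractions import Fraction

# audience fractions: slept through (heard 0), heard all, and the rest
slept, awake = Fraction(6, 100), Fraction(22, 100)
rest = 1 - slept - awake            # 72%, split evenly between 2/3 and 1/3

average = (slept * 0 + awake * 1
           + rest / 2 * Fraction(2, 3)
           + rest / 2 * Fraction(1, 3))
print(average)  # 29/50, i.e. 58% of the talk on average
```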
https://kseebsolutions.net/1st-puc-basic-maths-previous-year-question-paper-march-2018-south/
Students can Download 1st PUC Basic Maths Previous Year Question Paper March 2018 (South), Karnataka 1st PUC Basic Maths Model Question Papers with Answers helps you to revise the complete syllabus.
## Karnataka 1st PUC Basic Maths Previous Year Question Paper March 2018 (South)
Time: 3.15 Hours
Max. Marks: 100
Instructions:
1. The question paper consists of five parts A, B, C, D, and E.
2. Part A carries 10 marks, Part B carries 20 marks, Part C carries 30 marks, Part D carries 30 marks, and Part E carries 10 marks.
3. Write the question numbers properly as indicated in the question paper.
PART-A
I. Answer any TEN questions. (10 × 1 = 10)
Question 1.
Write the conjugate of 2 – i3.
2 + i3
Question 2.
If A = {5,6}, Find power set of A.
P(A) ={Φ, {5}, {5,6}, {6}}
Question 3.
Simplify $$\left[\left\{\sqrt[3]{x^{2}}\right\}^{3}\right]^{\frac{1}{2}}$$
x
Question 4.
Express log_5 0.2 = -1 in exponential form
5^(-1) = 0.2
Question 5.
Find the 8th term of the AP -2, -4, -6, ………….
T8 = -2 + 7 (-2) = -2 – 14 = -16
Question 6.
Solve for x : 2(7 + x) – 10 = 16 – 2(x – 24).
14 + 2x – 10 = 16 – 2x + 48 ⇒ 4x = 60 ⇒ x = 15
Question 7.
Find the simple interest on ₹ 1500 at 4% p.a for 145 days.
SI = $$\frac{\mathrm{P} t r}{100}$$ = 23.84
Question 8.
Define Annuity.
An annuity is a fixed sum of money paid at regular intervals of time under certain conditions.
Question 9.
Convert 42% to a decimal.
$$\frac{42}{100}$$ = 0.42
Question 10.
Convert $$\frac{3 \pi^{c}}{2}$$ into degree measure.
$$\frac{3 \pi}{2} \times \frac{180}{\pi}$$ = 270°
Question 11.
Prove that Sin 30°. Cos 60° + cos 30°. Sin 60° = 1
$$\frac{1}{2} \times \frac{1}{2}+\frac{\sqrt{3}}{2} \times \frac{\sqrt{3}}{2}=\frac{1}{4}+\frac{3}{4}$$ = 1
Question 12.
Find the slope of the line with the inclination $$\frac{\pi}{4}$$ with respect to x – axis
θ = π/4 ⇒ m = tanπ/4 = 1
Part – B
II. Answer any TEN questions. (10 × 2 = 20)
Question 13.
Find the LCM of 36, 40, and 48 by the factorization method.
36 = 2^2 × 3^2, 40 = 2^3 × 5, 48 = 2^4 × 3
LCM = 2^4 × 3^2 × 5 = 720
Question 14.
Find the number of positive divisors of 768.
768 = 28 × 31
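Both factorisation answers above can be verified in a few lines (a sketch using Python's standard library; `math.lcm` with several arguments needs Python 3.9+):

```python
from math import lcm

# Q13: LCM by prime factorisation, cross-checked with math.lcm
assert lcm(36, 40, 48) == 2**4 * 3**2 * 5 == 720

# Q14: 768 = 2^8 * 3, so it has (8 + 1) * (1 + 1) = 18 positive divisors
assert 768 == 2**8 * 3
assert sum(1 for d in range(1, 769) if 768 % d == 0) == 18
```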
Question 15.
If A = {1, 2, 3, 4, 5}, B = {3, 4, 5, 6, 7} and U = {1, 2, 3, 4, 5, 6, 7, 8, 9}, verify (A∪B)′ = A′∩B′
(A∪B)′ = U – (A∪B) = {8, 9}
A′ = {6, 7, 8, 9}, B′ = {1, 2, 8, 9}
∴ A′∩B′ = {8, 9} = (A∪B)′
Question 16.
Prove that (x^(b-c))^a · (x^(c-a))^b · (x^(a-b))^c = 1
(x^(b-c))^a · (x^(c-a))^b · (x^(a-b))^c = x^(ab-ac+bc-ab+ca-cb) = x^0 = 1
Question 17.
The third term of an HP is $$\frac{1}{7}$$ and the fifth term is $$\frac{1}{11}$$. Find the seventh term.
3rd term of the corresponding A.P. = 7 ⇒ a + 2d = 7
5th term of the A.P. = 11 ⇒ a + 4d = 11 ∴ a = 3, d = 2
Seventh term of the HP = $$\frac{1}{a+(n-1) d}=\frac{1}{3+6(2)}=\frac{1}{15}$$
Question 18.
The sum of four consecutive numbers is 366, find them.
x + x + 1 + x + 2 + x + 3 = 366
⇒ x = 90 numbers are 90, 91, 92, 93
Question 19.
Solve x² + 3x – 28 = 0 by the formula method.
x = $$\frac{-3 \pm \sqrt{9-4(1)(-28)}}{2}=\frac{-3 \pm 11}{2}$$ ⇒ x = 4 or x = -7
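The two roots can be confirmed numerically (a quick sketch of the formula method):

```python
from math import sqrt

# coefficients of x^2 + 3x - 28 = 0
a, b, c = 1, 3, -28
disc = b**2 - 4*a*c                      # 9 + 112 = 121
roots = sorted((-b + s * sqrt(disc)) / (2 * a) for s in (1, -1))
print(roots)  # [-7.0, 4.0]
```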
Question 20.
Solve : 5x – 3 > 3x + 1 ; xeR and represent on the number line.
5x – 3 > 3x + 1 ⇒ 2x > 4 ⇒ x > 2
Question 21.
Find the present value of an annuity of ₹400 for 3 years at 16% p.a compound interest
P = $$\frac{400\left[(1+0.16)^{3}-1\right]}{0.16(1+0.16)^{3}}$$ = 898.35
Question 22.
The angles of a triangle are in the ratio 3:4:5. Find them in degrees.
3A + 4A + 5A = 180 ⇒ A = 15; the angles are 45°, 60°, 75°
Question 23.
Prove that Sin (480°).cos (690°) + cos (780°). Sin (1050°) = 1/2
sin(480°) = sin(360° + 120°) = sin 120° = sin(180° − 60°) = sin 60° = $$\frac{\sqrt{3}}{2}$$
cos(690°) = cos(720° − 30°) = cos 30° = $$\frac{\sqrt{3}}{2}$$; cos(780°) = cos(720° + 60°) = cos 60° = $$\frac{1}{2}$$
sin(1050°) = sin(1080° − 30°) = sin(−30°) = $$-\frac{1}{2}$$
$$(\sqrt{3} / 2)(\sqrt{3} / 2)+(1 / 2)(-1 / 2)=3 / 4-1 / 4=1 / 2$$
Question 24.
Show that the points (1,1), (5,2) and (9,5) are collinear.
AB = 5 BC = 5 AC =10
∴ AB + BC = AC
5 + 5 = 10
Question 25.
Find the equation of the locus of a point that moves such that the square of its distance from (2,3) is 3.
Let P(x, y) be any point on the locus; PA² = 3
(x – 2)² + (y – 3)² = 3 ⇒ x² + y² – 4x – 6y + 10 = 0
Part – C
III. Answer any TEN questions. (10 × 3 = 30)
Question 26.
Prove that $$\sqrt{2}$$ is an irrational number.
We shall prove it by the method of contradiction.
If possible Let $$\sqrt{2}$$ be a rational number
Let $$\sqrt{2}$$ = $$\frac{p}{q}$$ where p and q are Integers and q ≠ 0.
Further let p and q are coprime i.e. H.C.F. of p and q = 1.
$$\sqrt{2}$$ = $$\frac{p}{q}$$ ⇒ $$\sqrt{2}$$q = p
⇒ 2q2 = p2
⇒ 2 divides p2 ⇒ 2 divides p
⇒ p is even
Let p = 2k where k is an integer p2 = 4k2
2q2 = 4k2
q2 = 2k2 ⇒ q2 is even
⇒ q is even.
Now p is even and q is even which implies p and q have a common factor 2. which is a
contradiction of the fact that p and q are co-prime.
∴ our assumption that $$\sqrt{2}$$ is rational is wrong and hence $$\sqrt{2}$$ is irrational.
Question 27.
Let f = {(1, 1), (2, 3), (0, -1)} be a function from z to z defined by f(x) = ax + b some integers a and b. Determine a and b.
f(x) = ax + b
f(1) = 1 ⇒ a + b = 1
f(0) = -1 ⇒ a(0) + b = -1 ⇒ b = -1
a + b = 1 ⇒ a - 1 = 1 ⇒ a = 2
∴ a = 2, b = -1
Question 28.
If a^x = b^y = c^z and b² = ac, show that $$\frac{1}{x}+\frac{1}{z}=\frac{2}{y}$$
Let a^x = b^y = c^z = k (say)
∴ a = k^(1/x), b = k^(1/y), c = k^(1/z)
Now, b² = ac
∴ (k^(1/y))² = k^(1/x) · k^(1/z)
∴ k^(2/y) = k^(1/x + 1/z)
Bases are the same ∴ Equating powers on both sides, we get.
$$\frac{1}{x}+\frac{1}{z}=\frac{2}{y}$$
Question 29.
Prove that $$\frac{1}{\log _{2} 4}+\frac{1}{\log _{8} 4}+\frac{1}{\log _{16} 4}$$ = 4
LHS = $$\frac{1}{\log _{2} 4}+\frac{1}{\log _{8} 4}+\frac{1}{\log _{16} 4}$$ = log42 + log48 + log4 16
= log42.8. 16 = log4 256 = log4 44 = 4log44 = 4(1) = 4 = RHS
Question 30.
Find the three numbers in GP whose sum is $$\frac{13}{3}$$ and whose product is 1.
Let the number $$\frac{a}{r}$$, a, ar
Product of extremes = 1
$$\frac{a}{r}$$ × ar =1 ⇒ a2 = 1 ⇒ a= 1
Sum $$\frac{13}{3} \Rightarrow \frac{a}{r}$$ + a + ar = $$\frac{13}{3} \Rightarrow \frac{1}{r}$$ + 1 + (1)r = $$\frac{13}{3}$$
$$\frac{1}{r}$$ + r = $$\frac{13}{3}$$ – 1
$$\frac{1+r^{2}}{r}=\frac{10}{3}$$
⇒ 3 + 3r² = 10r ⇒ 3r² – 10r + 3 = 0 ⇒ 3r² – 9r – r + 3 = 0
⇒ 3r(r – 3) – 1 (r – 3) = 0
(r – 3)(3r – 1) = 0 ⇒ r = 3 or r = $$\frac{1}{3}$$
The numbers are
$$\frac{a}{r}$$, a, ar
$$\frac{1}{3}$$, 1, 3 (or 3, 1, $$\frac{1}{3}$$)
Question 31.
If α and β are the roots of the equation 2x² + 5x + 5 = 0, find the value of $$\frac{1}{\alpha^{2}}+\frac{1}{\beta^{2}}$$
α + β = $$-\frac{5}{2}$$, αβ = $$\frac{5}{2}$$
$$\frac{1}{\alpha^{2}}+\frac{1}{\beta^{2}}=\frac{\alpha^{2}+\beta^{2}}{(\alpha \beta)^{2}}=\frac{(\alpha+\beta)^{2}-2 \alpha \beta}{(\alpha \beta)^{2}}=\frac{25 / 4-5}{25 / 4}=\frac{1}{5}$$
Question 32.
Solve the linear inequalities x + 2y ≤ 8, 2x + y ≤ 8, y ≥ 0, x ≥ 0 graphically.
Question 33.
In how many years a sum of ₹2000 becomes ₹2205 at the rate of 5% p.a compound interest?
A = P(1 + r)^n
n = $$\frac{\log A-\log P}{\log (1+r)}=\frac{3.3434-3.3010}{0.0212}$$ = 2 years.
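Since 2205/2000 = 1.1025 = 1.05² exactly, the compound-interest answer above can be confirmed without log tables (a quick sketch):

```python
from math import log

# 2000 grows to 2205 at 5% p.a. compound interest
P, A, r = 2000, 2205, 0.05
n = log(A / P) / log(1 + r)   # 2205/2000 = 1.1025 = 1.05**2 exactly
print(round(n))  # 2
```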
Question 34.
The average weight of a group containing 25 persons is 70 kg. 5 persons with an average weight of 63kg leave the group and 4 persons with a weight of 72 kg, 78 kg, 70 kg, and 73 kg joins the group. Find the average weight of the new group.
Average weight of 25 persons 70 kg
Total weight of 25 persons = 25 × 70 = 1750kg
Average weight of 5 persons who leave the group = 63 kg
∴ Total weight of 5 persons who leave the group 63 x 5 = 315kg
Total weight of4 persons who join the group = 72 + 78 + 70 + 73 = 293kg
∴ Total weight of the group now containing 24 persons = 1750 – 315 + 293 = 1728kg
∴ Average weight of the group now = $$\frac{1728}{24}$$ = 72 kg
Question 35.
Savitha sold her bag at a loss of 7%. Had she been able to sell it at a gain of 9%, it would have fetched ₹ 64 more than it did. What was the cost price of the bag?
Let CP = 100
loss at 7% ⇒ SP = 93 Gain at 9% ⇒ SP = 109 Difference 109 – 93 = 16
C.P of the bag = $$\frac{100 \times 64}{16}$$ = ₹400
Question 36.
If sinθ = $$\frac{-8}{17}$$ and π < θ < $$\frac{3 \pi}{2}$$. Find the value of $$\frac{\tan \theta-\cot \theta}{\sec \theta+\operatorname{cosec} \theta}$$
tan θ = 8/15, cot θ = 15/8
sec θ = -17/15, cosec θ = -17/8
$$\frac{\tan \theta-\cot \theta}{\sec \theta+\operatorname{cosec} \theta}=\frac{8 / 15-15 / 8}{-17 / 15-17 / 8}=\frac{-161 / 120}{-391 / 120}=\frac{161}{391}=\frac{7}{17}$$
Question 37.
Find the third vertex of a triangle if two of its vertices are at (-2, 4) and (7, -3) and the centroid at (3, -2).
A = (-2, 4), B = (7, -3), C = (x, y), G = (3, -2); (3, -2) = $$\left(\frac{-2+7+x}{3}, \frac{4-3+y}{3}\right)$$
⇒ x = 4, y = -7 ∴ C = (4,-7)
Question 38.
If the lines 2x – y = 5, Kx – y = 6 and 4x – y = 7 are concurrent, find K.
2x – y – 5 = 0
4x – y – 7 = 0
Solving we get x = 1, y = -3
kx – y – 6 = 0 k( 1 ) – (-3) -6 = 0 ⇒ K = 3
Part – D
IV. Answer any SIX questions. (6 × 5 = 30)
Question 39.
Out of 250 people, 160 drink coffee, 90 drink tea, 85 drink milk, 45 drink coffee and tea, 35 drink tea and milk, 20 drink all three, i) How many will drink coffee and milk? ii) only milk iii) only coffee. Show the result through the Venn diagram.
n(C∪T∪M) = 250
n(C) = 160,
n(T) = 90
n(M) = 85
n(C∩T)=45
n(T∩M) = 35
n(C∩T∩M) = 20
n(C∩M)=?
n(C∪T∪M) = n(c) + n(T) + n(M) – n(C∩T) – n(T∩M) – n(M∩C)+ n(C∩T∩M)
250 = 160 + 90 + 85 – 45 – 35 – n(C∩M) + 20
∴ n(C∩M) = 160 + 90 + 85 – 45 – 35 + 20 – 250
∴ n(C∩M) = 25
Question 40.
Using logarthamic tables, find the value of $$\frac{0.5634 \times 0.0635}{2.563 \times 12.5}$$
x = $$\frac{0.5634 \times 0.0635}{2.563 \times 12.5}$$
log x = log 0.5634 + log 0.0635 – log 2.563 – log 12.5
= $$\bar{1}.7508+\bar{2}.8028-0.4087-1.0969$$
= -2.952 = $$\bar{3}.048$$
x = AL($$\bar{3}.048$$) = 0.001117
Question 41.
Find the sum of all numbers between 50 and 200 which are divisible by 11.
Sn = 55 + 66 + 77 +………….+ 198
a = 55, d = 11, l = 198 ⇒ n = 14 (since 198 = 55 + (n – 1)·11)
Sn = $$\frac{n}{2}$$[2a + (n – 1)d] = $$\frac{14}{2}$$(55 + 198) = 7(253) = 1771
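A quick brute-force check of the AP sum (sketch):

```python
# multiples of 11 strictly between 50 and 200: 55, 66, ..., 198
terms = list(range(55, 199, 11))
assert len(terms) == 14
assert sum(terms) == 14 * (55 + 198) // 2 == 1771
```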
Question 42.
Find an integral root between -3 and 3 by inspection and then using synthetic division solve the equation x3 – 10x2 + 29x – 20 = 0
Put x = 1: 1 – 10(1)² + 29(1) – 20 = 1 – 10 + 29 – 20 = 30 – 30 = 0, so x = 1 is a root.
Quotient x2 – 9x + 20
Remainder = 0
x³ – 10x² + 29x – 20 = 0
(x – 1)(x² – 9x + 20) = 0
x = 1 or x² – 9x + 20 = 0
x = 4, x = 5
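The three roots can be cross-checked numerically (a sketch using numpy's `roots`, not the syllabus method, but it confirms the synthetic-division result):

```python
import numpy as np

coeffs = [1, -10, 29, -20]          # x^3 - 10x^2 + 29x - 20
roots = np.sort(np.roots(coeffs).real)
assert np.allclose(roots, [1, 4, 5])
```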
Question 43.
A person borrows a certain sum of money at 3% p.a Simple interest and invests the same at 5% p.a compound interest compounded annually. After 3 years he makes a profit of 1,082. Find the amount he borrowed.
Let the sum borrowed (and invested) be x.
Compound interest earned in 3 years at 5% = x(1.05)³ – x = 0.157625x
Simple interest paid in 3 years at 3% = x × 0.03 × 3 = 0.09x
Profit = 0.157625x – 0.09x = 0.067625x = 1,082
∴ x = $$\frac{1082}{0.067625}$$ = ₹16,000
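Working straight from the question's figures (3% simple interest paid, 5% compound interest earned, ₹1,082 profit after 3 years), the borrowed sum comes out as ₹16,000, which a quick sketch confirms:

```python
x = 16_000                         # sum borrowed (and invested)
ci = x * (1.05**3 - 1)             # compound interest earned in 3 years
si = x * 0.03 * 3                  # simple interest owed in 3 years
profit = ci - si
print(round(profit))  # 1082
```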
Question 44.
If Poornima deposits ‘600 at the beginning of every year for the next 15 years. Then how much will be accumulated at the end of 15 years, if the interest rate is 7% p.a?
a = 600
n = 15
i = 0.07
F = a$$\frac{\left[(1+i)^{n}-1\right]}{i}$$(1 + i) = $$\frac{600\left[(1.07)^{15}-1\right]}{0.07}$$(1.07) ≈ ₹16,132.83
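The annuity-due value can be cross-checked numerically (a sketch using the figures from the question above):

```python
# deposits of 600 at the beginning of each year for 15 years at 7% p.a.
a, n, i = 600, 15, 0.07
future_value = a * ((1 + i)**n - 1) / i * (1 + i)   # annuity-due formula
print(round(future_value, 2))  # approximately 16132.83
```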
Question 45.
A businessman sells an article for ₹ 720 and earns a profit of 20% Find the a) cost price b) profit percentage at the selling price.
SP = Rs.720 profit = 20%
CP = SP × $$\frac{100}{(100+\text { Profit } \%)}$$ = 720 × $$\frac{100}{(100+20)}$$ = 720 × $$\frac{100}{120}$$ = Rs. 600
Profit = SP – CP = 720 – 600 = Rs. 120
Profit percentage at selling price = $$\frac{\text { Profit }}{\mathrm{SP}}$$ × 100 = $$\frac{120}{720}$$ × 100 = 16.67%
Question 46.
If x = r cos A.cos B, y = r cos A. sin B and z = r sin A. then prove that x2 + y2 + z2 = r2
LHS = x² + y² + z² = r²cos²A cos²B + r²cos²A sin²B + r²sin²A = r²cos²A(cos²B + sin²B) + r²sin²A = r²(cos²A + sin²A) = r² = RHS
Question 47.
Find the coordinates of the vertices of the triangle given the mid points of the sides as (4, -1), (7, 9) and (4, 11)
Let A = (x1, y1) B = (x2, y2) and C = (x3, y3)
Now D = midpoint of BC .
= (4, -1) = $$\left[\frac{x_{2}+x_{3}}{2}, \frac{y_{2}+y_{3}}{2}\right]$$
x2 + x3 = 8 …………….(1)
y2 + y3 = -2 …………(2)
E = mid point of CA
(7, 9) = $$\left(\frac{x_{3}+x_{1}}{2}, \frac{y_{3}+y_{1}}{2}\right)$$
x3 + x1 = 14 …………(3)
y3 + y1 = 18 ………….(4)
F = mid point of AB
(4, 11) = $$\left(\frac{x_{1}+x_{2}}{2}, \frac{y_{1}+y_{2}}{2}\right)$$
x1 + x2 = 8 …………. (5)
y1 + y2 = 22 ………….(6)
Solving (1), (3), (5) we get x1, x2 and x3
Consider (1) + (3) + (5) we get
2(x1 + x2 + x3) = 30
(x1 + x2 + x3) = 15
x1 + x2 = 8 and x1 + x2 + x3 = 15 ⇒ x3 = 7
x2 + x3 = 8 and x1 + x2 + x3 = 15 ⇒ x1 = 7
x3 + x1 = 14 and x1 + x2 +x3 = 15 ⇒ x2 = 1
Consider (2) + (4) + (6) we get
2 (y1 + y2 + y3) = 38
(y1 + y2 + y3) = 19
Now y2 + y3 = – 2 and(y1 + y2 + y3) = 19 ⇒ y1 =21
y3 + y1 = 18 and (y1 + y2 + y3) = 19 ⇒ y2 = 1
y1 + y2 = 22 and (y1 + y2 + y3) = 19 ⇒ y3 = -3
Thus A = (7, 21) B = (1, 1) and C = (7, -3)
Question 48.
Find the coordinates of the foot of the perpendicular from (-6, 2) on the line 3x – 4y + 1 = 0.
The foot (h, k) satisfies $$\frac{h+6}{3}=\frac{k-2}{-4}=\frac{-(3(-6)-4(2)+1)}{3^{2}+(-4)^{2}}=\frac{25}{25}=1$$
∴ h = -6 + 3 = -3 and k = 2 - 4 = -2, so the foot is (-3, -2).
Part-E
V. Answer any ONE question. (1 × 10 = 10)
Question 49.
(a) Find tue domain and Range of the function f(x) = $$\frac{x^{2}-2 x+1}{x^{2}-9 x+13}$$ where x ∈ N
Domain of F(x) = N = {1, 2, 3, ……}
n = 1 ⇒ F(x) = $$\frac{1-2+1}{1-9+13}=\frac{0}{5}$$ =0
n = 2 ⇒F(x) = $$\frac{4-4+1}{4-18+13}=\frac{1}{-1}$$ = -1
n = 3 ⇒ F(x) = $$\frac{9-6+1}{9-27+13}=\frac{4}{-5}$$…………
Range of F(x) = {0, -1, $$\frac{4}{-5}$$,………………}
(b) Find the distance between the parallel lines 5x + 12y + 7 = 0 and 5x + 12y – 19 = 0
d = $$\left|\frac{C_{1}-C_{2}}{\sqrt{a^{2}+b^{2}}}\right|=\left|\frac{26}{\sqrt{25+144}}\right|=\left|\frac{26}{\sqrt{169}}\right|=\frac{26}{13}$$ = 2 units
(c) What is the present value of an income of 3000 to be received forever if the interest rate is 14% p.a.
P = $$\frac{a}{i}=\frac{3000}{0.14}$$ = 21428.5
Question 50.
(a) Find the sum of n terms of the series 7 + 77 + 777 + …… n terms.
7 + 77 + 777 + ⋯ (n terms) = $$\frac{7}{9}$$(9 + 99 + 999 + ⋯) = $$\frac{7}{9}\left[(10-1)+\left(10^{2}-1\right)+\cdots+\left(10^{n}-1\right)\right]=\frac{7}{9}\left[\frac{10\left(10^{n}-1\right)}{9}-n\right]$$
https://samacheerkalviguru.com/samacheer-kalvi-11th-maths-solutions-chapter-11-ex-11-13/
## Tamilnadu Samacheer Kalvi 11th Maths Solutions Chapter 11 Integral Calculus Ex 11.13
Choose the correct or most suitable answer from given four alternatives.
Question 1.
If $$\int f(x) d x$$ = g(x) + c, then $$\int f(x) g^{\prime}(x) d x$$
Solution:
(a)
Question 2.
If , then the value of k is ……………
(a) log 3
(b) -log 3
(c) $$-\frac{1}{\log 3}$$
(d) $$\frac{1}{\log 3}$$
Solution:
(c)
Question 3.
If $$\int f^{\prime}(x) e^{x^{3}} d x$$ = (x – 1)e^{x²}, then f(x) is …………………
Solution:
(d)
Question 4.
The gradient (slope) of a curve at any point (x, y) is $$\frac{x^{2}-4}{x^{2}}$$. If the curve passes through the point(2, 7), then the equation of the curve is ………….
(a) y = x + $$\frac{4}{x}$$ + 3
(b) y = x + $$\frac{4}{x}$$ + 4
(c) y = x² + 3x + 4
(d) y = x² – 3x + 6
Solution:
Question 5.
(a) cot (xex) + c
(b) sec (xex) + c
(c) tan (xex) + c
(d) cos (xex) + c
Solution:
(c)
Question 6.
$$\int \frac{\sqrt{\tan x}}{\sin 2 x} d x$$ is ……………..
Solution:
(a)
Question 7.
$$\int \sin ^{3} x d x$$ is …………….
Solution:
(c)
Hint: sin3x = $$\frac{1}{4}$$ (3 sin x – sin 3x)
Question 8.
Solution:
(b)
Question 9.
(a) tan-1 (sin x) + c
(b) 2 sin-1 (tan x) + c
(c) tan-1 (cos x) + c
(d) sin-1 (tan x) + c
Solution:
(d)
= sin-1 (t) + c
= sin-1 (tan x) + c
Question 10.
(a) x2 + c
(b) 2x2 + c
(c) $$\frac{x^{2}}{2}$$ + c
(d) $$-\frac{x^{2}}{2}$$ + c
Solution:
(c)
Question 11.
$$\int 2^{3 x+5} d x$$ is ……………
Solution:
(d)
Question 12.
Solution:
(b)
Question 13.
Solution:
(d)
Question 14.
$$\int \frac{x^{2}+\cos ^{2} x}{x^{2}+1}$$ cosec2xdx is …………….
(a) cot x + sin-1 x + c
(b) -cot x + tan-1 x + c
(c) -tan x + cot-1 x + c
(d) -cot x – tan-1 x + c
Solution:
(d)
Question 15.
$$\int x^{2} \cos x d x$$ is ……………
(a) x² sin x + 2x cos x – 2 sin x + c
(b) x² sin x – 2x cos x – 2 sin x + c
(c) -x² sin x + 2x cos x + 2 sin x + c
(d) -x² sin x – 2x cos x + 2 sin x + c
Solution:
(a)
Hint: $$\int x^{2} \cos x d x$$
By Bernoulli's formula, with dv = cos x dx:
u = x², v = sin x
u′ = 2x, v₁ = -cos x
u″ = 2, v₂ = -sin x
∫ u dv = uv – u′v₁ + u″v₂ = x² sin x + 2x cos x – 2 sin x + c
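Answer (a) can be verified numerically by differentiating the candidate antiderivative (a sketch; the central-difference step `h` is an arbitrary choice):

```python
import numpy as np

F = lambda x: x**2 * np.sin(x) + 2*x * np.cos(x) - 2 * np.sin(x)  # candidate antiderivative
f = lambda x: x**2 * np.cos(x)                                    # integrand

x = np.linspace(-3, 3, 1001)
h = 1e-6
numeric_derivative = (F(x + h) - F(x - h)) / (2 * h)   # central difference
assert np.allclose(numeric_derivative, f(x), atol=1e-4)
```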
Question 16.
Solution:
(b)
Question 17.
$$\int \frac{d x}{e^{x}-1}$$ is …………….
(a) log |e^x| – log |e^x – 1| + c
(b) log |e^x| + log |e^x – 1| + c
(c) log |e^x – 1| – log |e^x| + c
(d) log |e^x + 1| – log |e^x| + c
Solution:
(c)
Question 18.
Solution:
(b)
We know that
Question 19.
Solution:
(d)
Question 20.
Solution:
(a)
We know that
Question 21.
Solution:
(c)
Hint:
By Bernoullis formula,
Question 22.
Solution:
(d)
Question 23.
Solution:
(c)
Question 24.
Solution:
(a)
Question 25.
Solution:
(d)
Hint: Let I = $$\int e^{\sqrt{x}} d x$$
t = $$\sqrt{x}$$
https://gmatclub.com/forum/at-a-certain-farm-the-ratio-of-pigs-to-cows-to-chickens-is-7-8-10-if-279792.html
# At a certain farm the ratio of pigs to cows to chickens is 7:8:10. If
At a certain farm the ratio of pigs to cows to chickens is 7:8:10. If the total number of pigs, cows and chickens is 300, how many chickens are there?
A. 30
B. 90
C. 120
D. 180
E. 200
The ratio 7 : 8 : 10 gives 7x : 8x : 10x, so the total is 25x.
25x = 300 ⇒ x = 12
Chickens = 10x = 10 × 12 = 120
We can also backsolve: if chickens = 120 then 10x = 120, so x = 12, and the total is 25 × 12 = 300, which matches.
Bunuel wrote:
At a certain farm the ratio of pigs to cows to chickens is 7:8:10. If the total number of pigs, cows and chickens is 300, how many chickens are there?
A. 30
B. 90
C. 120
D. 180
E. 200
$$\frac{300}{(7+8+10)}*10 = 120$$ ; Answer must be (C) 120
Bunuel wrote:
At a certain farm the ratio of pigs to cows to chickens is 7:8:10. If the total number of pigs, cows and chickens is 300, how many chickens are there?
A. 30
B. 90
C. 120
D. 180
E. 200
We can create the equation:
7x + 8x + 10x = 300
25x = 300
x = 12
So there are 10 x 12 = 120 chickens.
http://physics.stackexchange.com/questions/72142/where-else-in-physics-does-one-encounter-reynolds-averaging
# Where else in physics does one encounter Reynolds averaging?
The Reynolds-averaged Navier–Stokes (RANS) equations are one of the approaches to turbulence description. Physical quantities, such as the velocity $u_i$, are represented as a sum of a mean and a fluctuating part:
$$u_i = \overline{u_i} + u'_i$$
where the Reynolds averaging operator $\overline{\cdot}$ satisfies, among others, the relations: $$\overline{\overline{u_i}} = \overline{u_i}, \qquad \overline{u'_i} = 0$$ which distinguish it from other types of averaging. In fluid dynamics the Reynolds operator is usually interpreted as a time average: $$\overline{u_i} = \lim_{T \to \infty} \frac{1}{T}\int_t^{t+T} u_i \,dt$$
The above construction seems universal to me and is likely to be used in other areas of physics. Where else does one encounter Reynolds averaging?
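As an illustration (not from the original question), the two defining identities of the Reynolds operator can be checked numerically with a finite-time stand-in for the $T \to \infty$ average, applied to a synthetic signal $u = \overline{u} + u'$:

```python
import numpy as np

# A long-window time average as a finite-T stand-in for the Reynolds operator,
# applied to a synthetic signal with a known mean and a sinusoidal fluctuation.
t = np.linspace(0.0, 1000.0, 100_001)
U = 3.0                              # mean part, bar(u)
u_prime = np.sin(2 * np.pi * 5 * t)  # fluctuating part, u'
u = U + u_prime

u_bar = u.mean()                     # time average over the whole window
fluct = u - u_bar

# bar(bar(u)) = bar(u): averaging an already-averaged (constant) signal is idempotent
assert np.isclose(np.full_like(u, u_bar).mean(), u_bar)
# bar(u') = 0: the extracted fluctuation averages to zero (up to float error)
assert abs(fluct.mean()) < 1e-8
```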
I don't have specific examples handy, but I would say anywhere that you only care about the mean of the temporal signal. Reynolds averaging is just a low-pass filter on the time signal, so I would imagine any number of applications are possible in communications, electronics, control theory, etc. Anybody who uses a low-pass filter on a time-varying signal. – tpg2114 Jul 24 '13 at 16:12
Oliver Penrose wrote a useful (and very detailed) article in Rep. Prog. Phys. in 1979 entitled, "Foundations of statistical mechanics." In that work, he has some very useful discussions on the differences between time-averages and ensemble averages (e.g., the last paragraph before section 1.2). – honeste_vivere yesterday
I should also mention that time-averages are not always appropriate. In some cases, a time-average amounts to a bad low-pass filter. I say bad because unlike a Fourier-based (or some other basis) low-pass filter, a time-average mixes neighboring data points potentially convolving two signals that are completely unrelated. In solar wind data analysis, for instance, using a time-average can be a bad idea because you start to mix things that can be hundreds of km apart and may be from completely separate structures. – honeste_vivere yesterday
https://www.aimsciences.org/article/doi/10.3934/jdg.2016011
American Institute of Mathematical Sciences
July 2016, 3(3): 217-223. doi: 10.3934/jdg.2016011
An asymptotic expression for the fixation probability of a mutant in star graphs
1 Departamento de Matemática and Centro de Matemática e Aplicações, Universidade Nova de Lisboa, Quinta da Torre, 2829-516, Caparica, Portugal
Received July 2015 Revised February 2016 Published July 2016
We consider the Moran process in a graph called the "star" and obtain the asymptotic expression for the fixation probability of a single mutant when the size of the graph is large. The expression obtained corrects the previously known expression announced in reference [E Lieberman, C Hauert, and MA Nowak. Evolutionary dynamics on graphs. Nature, 433(7023):312–316, 2005] and further studied in [M. Broom and J. Rychtar. An analysis of the fixation probability of a mutant on special classes of non-directed graphs. Proc. R. Soc. A-Math. Phys. Eng. Sci., 464(2098):2609–2627, 2008]. We also show that the star graph is an accelerator of evolution, if the graph is large enough.
Citation: Fabio A. C. C. Chalub. An asymptotic expression for the fixation probability of a mutant in star graphs. Journal of Dynamics & Games, 2016, 3 (3) : 217-223. doi: 10.3934/jdg.2016011
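For context, the baseline against which graph structures such as the star are compared is the classical fixation probability of a single mutant in a well-mixed (complete-graph) Moran population. A minimal sketch of that standard formula (this is the textbook baseline, not the star-graph expression derived in the paper):

```python
def fixation_probability(r, N):
    """Classical fixation probability of a single mutant of relative fitness r
    in a well-mixed Moran population of size N (complete-graph baseline)."""
    if r == 1.0:
        return 1.0 / N                      # neutral drift
    return (1.0 - 1.0 / r) / (1.0 - r ** (-N))

# A star graph is said to "amplify" selection relative to this baseline:
# an advantageous mutant (r > 1) already fixes more often than a neutral one.
assert fixation_probability(1.0, 100) == 0.01
assert fixation_probability(1.1, 100) > fixation_probability(1.0, 100)
```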
http://www.ck12.org/book/CK-12-Calculus/section/4.6/
# 4.6: The Fundamental Theorem of Calculus
Difficulty Level: At Grade Created by: CK-12
## Learning Objectives
• Use the Fundamental Theorem of Calculus to evaluate definite integrals
## Introduction
In the Lesson on Evaluating Definite Integrals, we evaluated definite integrals using antiderivatives. This process was much more efficient than using the limit definition. In this lesson we will state the Fundamental Theorem of Calculus and continue to work on methods for computing definite integrals.
Fundamental Theorem of Calculus:
Let $f$ be continuous on the closed interval $[a, b]$.
1. If the function $F$ is defined by $F(x) = \int_a^x f(t)\, dt$ on $[a, b]$, then $F'(x) = f(x)$ on $[a, b]$.
2. If $g$ is any antiderivative of $f$ on $[a, b]$, then
$$\int_a^b f(t)\, dt = g(b) - g(a).$$
We first note that we have already proven part 2 as Theorem 4.1. The proof of part 1 appears at the end of this lesson.
Think about this theorem. Two of the major unsolved problems in science and mathematics turned out to be solved by calculus, which was invented in the seventeenth century. These are the ancient problems:
1. Find the areas defined by curves, such as circles or parabolas.
2. Determine an instantaneous rate of change or the slope of a curve at a point.
With the discovery of calculus, science and mathematics took huge leaps, and we can trace the advances of the space age directly to this Theorem.
Let’s continue to develop our strategies for computing definite integrals. We will illustrate how to solve the problem of finding the area bounded by two or more curves.
Example 1:
Find the area between the curves of $f(x) = x$ and $g(x) = x^3$ for $-1 \le x \le 1$.
Solution:
We first observe that there are no limits of integration explicitly stated here. Hence we need to find the limits by analyzing the graph of the functions.
We observe that the regions of interest are in the first and third quadrants, from $x = -1$ to $x = 1$. We also observe the symmetry of the graphs about the origin. From this we see that the total area enclosed is
$$2 \int_0^1 (x - x^3)\, dx = 2 \left[\int_0^1 x\, dx - \int_0^1 x^3\, dx \right] = 2 \left[\frac{x^2}{2} \Bigg|_0^1 - \frac{x^4}{4} \Bigg|_0^1 \right] = 2 \left[\frac{1}{2} - \frac{1}{4}\right] = 2 \left[\frac{1}{4}\right] = \frac{1}{2}.$$
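The computation of Example 1 can also be verified symbolically; a quick check using SymPy (illustrative, not part of the original lesson):

```python
import sympy as sp

# By the symmetry argument above, the area between f(x) = x and g(x) = x^3
# on [-1, 1] is twice the integral of (x - x^3) over [0, 1].
x = sp.symbols('x')
area = 2 * sp.integrate(x - x**3, (x, 0, 1))
assert area == sp.Rational(1, 2)
```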
Example 2:
Find the area between the curve of $f(x) = |x - 1|$ and the $x$-axis from $x = -1$ to $x = 3$.
Solution:
We observe from the graph that we will have to divide the interval $[-1, 3]$ into the subintervals $[-1, 1]$ and $[1, 3]$.
Hence the area is given by
$$\int_{-1}^1 (-x + 1)\, dx + \int_1^3 (x - 1)\, dx = \left(-\frac{x^2}{2} + x\right) \bigg|_{-1}^{+1} + \left(\frac{x^2}{2} - x\right) \bigg|_{+1}^{+3} = 2 + 2 = 4.$$
Example 3:
Find the area enclosed by the curves of $f(x) = x^2 + 2x + 1$ and $g(x) = -x^2 - 2x + 1$.
Solution:
The graph indicates the area we need to focus on.
$$\int_{-2}^0 (-x^2 - 2x + 1)\, dx - \int_{-2}^0 (x^2 + 2x + 1)\, dx = \left(-\frac{x^3}{3} - x^2 + x\right) \bigg|_{-2}^0 - \left(\frac{x^3}{3} + x^2 + x\right) \bigg|_{-2}^{0} = \frac{8}{3}.$$
Before providing another example, let’s look back at the first part of the Fundamental Theorem. If the function $F$ is defined by $F(x) = \int_a^x f(t)\, dt$ on $[a, b]$, then $F'(x) = f(x)$ on $[a, b]$. Observe that if we differentiate the integral with respect to $x$, we have
$$\frac{d}{dx} \int_a^x f(t)\, dt = F'(x) = f(x).$$
This fact enables us to compute derivatives of integrals as in the following example.
Example 4:
Use the Fundamental Theorem to find the derivative of the following function:
$$g(x) = \int_0^x (1 + \sqrt[3]{t})\, dt.$$
Solution:
While we could easily integrate the right side and then differentiate, the Fundamental Theorem enables us to find the answer very routinely.
$$g'(x) = \frac{d}{dx} \int_0^x (1 + \sqrt[3]{t})\, dt = 1 + \sqrt[3]{x}.$$
This application of the Fundamental Theorem becomes more important as we encounter functions that may be more difficult to integrate such as the following example.
Example 5:
Use the Fundamental Theorem to find the derivative of the following function:
$$g(x) = \int_2^x t^2 \cos t\, dt.$$
Solution:
In this example, the integral is more difficult to evaluate. The Fundamental Theorem enables us to find the answer routinely.
$$g'(x) = \frac{d}{dx} \int_2^x t^2 \cos t\, dt = x^2 \cos x.$$
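The claim of Example 5 can be confirmed symbolically; a short SymPy check (illustrative, not part of the original lesson):

```python
import sympy as sp

# Differentiating the accumulation function of Example 5 recovers the
# integrand, exactly as part 1 of the Fundamental Theorem asserts.
x, t = sp.symbols('x t')
g = sp.integrate(t**2 * sp.cos(t), (t, 2, x))
assert sp.simplify(sp.diff(g, x) - x**2 * sp.cos(x)) == 0
```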
## Lesson Summary
1. We used the Fundamental Theorem of Calculus to evaluate definite integrals.
Fundamental Theorem of Calculus
Let $f$ be continuous on the closed interval $[a, b]$.
1. If the function $F$ is defined by $F(x) = \int_a^x f(t)\, dt$ on $[a, b]$, then $F'(x) = f(x)$ on $[a, b]$.
2. If $g$ is any antiderivative of $f$ on $[a, b]$, then
$$\int_a^b f(t)\, dt = g(b) - g(a).$$
We first note that we have already proven part 2 as Theorem 4.1.
Proof of Part 1.
1. Consider $F(x) = \int_a^x f(t)\, dt$ on $[a, b]$.
2. Let $x, c \in [a, b]$ with $c < x$. Then $\int_a^x f(t)\, dt = \int_a^c f(t)\, dt + \int_c^x f(t)\, dt$ by our rules for definite integrals.
3. Then $\int_a^x f(t)\, dt - \int_a^c f(t)\, dt = \int_c^x f(t)\, dt$. Hence $F(x) - F(c) = \int_c^x f(t)\, dt$.
4. Since $f$ is continuous on $[a, b]$ and $x, c \in [a, b]$ with $c < x$, we can select $u, v \in [c, x]$ such that $f(u)$ is the minimum value of $f$ and $f(v)$ is the maximum value of $f$ on $[c, x]$. Then we can consider $f(u)(x - c)$ as a lower sum and $f(v)(x - c)$ as an upper sum of $f$ from $c$ to $x$. Hence
5. $$f(u)(x - c) \le \int_c^x f(t)\, dt \le f(v)(x - c).$$
6. By substitution, we have:
$$f(u)(x - c) \le F(x) - F(c) \le f(v)(x - c).$$
7. By division, we have
$$f(u) \le \frac{F(x) - F(c)}{x - c} \le f(v).$$
8. When $x$ is close to $c$, both $f(u)$ and $f(v)$ are close to $f(c)$ by the continuity of $f$.
9. Hence $\lim_{x \to c^+} \frac{F(x) - F(c)}{x - c} = f(c)$. Similarly, if $x < c$, then $\lim_{x \to c^-} \frac{F(x) - F(c)}{x - c} = f(c)$. Hence $\lim_{x \to c} \frac{F(x) - F(c)}{x - c} = f(c)$.
10. By the definition of the derivative, we have that
$$F'(c) = \lim_{x \to c} \frac{F(x) - F(c)}{x - c} = f(c)$$
for every $c \in [a, b]$. Thus, $F$ is an antiderivative of $f$ on $[a, b]$.
For a video presentation of the Fundamental Theorem of Calculus (15.0), see Fundamental Theorem of Calculus, Part 1 (9:26).
## Review Questions
In problems #1–4, sketch the graph of the function $f(x)$ in the interval $[a, b]$. Then use the Fundamental Theorem of Calculus to find the area of the region bounded by the graph and the $x$-axis.
1. $f(x) = 2x + 3$, $[0, 4]$
2. $f(x) = e^x$, $[0, 2]$
3. $f(x) = x^2 + x$, $[1, 3]$
4. $f(x) = x^2 - x$, $[0, 2]$ (Hint: Examine the graph of the function and divide the interval accordingly.)
In problems #5–7, use antiderivatives to compute the definite integral.
5. $\int_{-1}^{+1} |x|\, dx$
6. $\int_0^3 |x^3 - 2|\, dx$ (Hint: Examine the graph of the function and divide the interval accordingly.)
7. $\int_{-2}^{+4} \left[ |x - 1| + |x + 1| \right] dx$ (Hint: Examine the graph of the function and divide the interval accordingly.)
In problems #8–10, find the area between the graphs of the functions.
8. $f(x) = \sqrt{x}$, $g(x) = x$, $[0, 2]$
9. $f(x) = x^2$, $g(x) = 4$, $[0, 2]$
10. $f(x) = x^2 + 1$, $g(x) = 3 - x$, $[0, 3]$
https://encyclopediaofmath.org/index.php?title=Axiomatic_set_theory&printable=yes
# Axiomatic set theory
The branch of mathematical logic in which one deals with fragments of the informal theory of sets by methods of mathematical logic. Usually, to this end, these fragments of set theory are formulated as a formal axiomatic theory. In a more narrow sense, the term "axiomatic set theory" may denote some axiomatic theory aiming at the construction of some fragment of informal ( "naive" ) set theory.
Set theory, which was formulated around 1900, had to deal with several paradoxes from its very beginning. The discovery of the fundamental paradoxes of G. Cantor and B. Russell (cf. Antinomy) gave rise to a widespread discussion and brought about a fundamental revision of the foundations of mathematical logic. The axiomatic direction of set theory may be regarded as an instrument for a more thorough study of the resulting situation.
The construction of a formal axiomatic theory of sets begins with an accurate description of the language in which the propositions are formulated. The next step is to express the principles of "naive" set theory in this language, in the form of axioms and axiom schemes. A brief description of the most widespread systems of axiomatic set theory is given below. In this context, an important part is played by the language which contains the following primitive symbols: 1) the variables $x, y, z, u, v, x_{1}, \dots$ which play the part of common names for the sets in the language; 2) the predicate symbols $\in$ (sign of incidence) and $=$ (sign of equality); 3) the description operator $\iota$, which means "an object such that …"; 4) the logical connectives and quantifiers: $\leftrightarrow$ (equivalent), $\rightarrow$ (implies), $\lor$ (or), $\wedge$ (and), $\neg$ (not), $\forall$ (for all), $\exists$ (there exists); and 5) the parentheses ( and ).
$\mathbf{R1}$. If $\tau$ and $\sigma$ are variables or terms, then $( \tau \in \sigma )$ and $( \tau = \sigma )$ are formulas.
$\mathbf{R2}$. If $A$ and $B$ are formulas and $x$ is a variable, then $(A \leftrightarrow B)$, $(A \rightarrow B)$, $(A \lor B)$, $(A \wedge B)$, $\neg A$, $\forall xA$, $\exists xA$ are formulas and $\iota x A$ is a term; the variable $x$ is a term.
For instance, the formula $\forall x ( x \in y \rightarrow x \in z )$ is tantamount to the statement "$y$ is a subset of $z$", and can be written as $y \subseteq z$; the term $\iota w \forall y ( y \in w \leftrightarrow y \subseteq z )$ is the name of the set of all subsets of $z$ and, expressed in conventional mathematical symbols, this is $Pz$. Let the symbol $\iff$ mean "the left-hand side is a notation for the right-hand side". Below, a number of additional notations for formulas and terms will be presented.
The empty set:
$$\emptyset \iff \iota x \forall y \neg y \in x .$$
The set of all $x$ such that $A ( x )$:
$$\{ {x } : {A ( x ) } \} \iff \iota z \forall x(x \in z \leftrightarrow A ( x ) ),$$
where $z$ does not enter freely in $A(x)$ (i.e. $z$ is not a parameter of the formula $A(x)$).
The unordered pair $x$ and $y$:
$$\{ x , y \} \iff \{ {z } : {z= x \lor z = y } \} .$$
The single-element set consisting of $x$:
$$\{ x \} \iff \{ x , x \} .$$
The ordered pair $x$ and $y$:
$$\langle x , y \rangle \iff \{ \{ x \} , \{ x , y \} \} .$$
The union of $x$ and $y$:
$$x \cup y \iff \{ {z } : {z \in x \lor z \in y } \} .$$
The intersection of $x$ and $y$:
$$x \cap y \iff \{ {z } : {z \in x \wedge z \in y } \} .$$
The union of all elements of $x$:
$$\cup x \iff \{ {z } : {\exists v ( z \in v \lor v \in x ) } \} .$$
The Cartesian product of $x$ and $y$:
$$x \times y \iff \{ {z } : {\exists u v ( z = \langle u , v \rangle \wedge u \in x \wedge v \in y ) } \} .$$
Notation for: $w$ is a function:
$$\mathop{\rm Fnc} ( w ) \iff \exists v ( w \subseteq v \times v ) \wedge$$
$$\wedge \forall u v _ {1} v _ {2} ( \langle u , v _ {1} \rangle \in w \wedge \langle u , v _ {2} \rangle \in w \rightarrow v _ {1} = v _ {2} ) .$$
The values of the function $w$ on the element $x$:
$$w ^ \prime x \iff \iota y \langle x , y \rangle \in w .$$
The standard infinite set $z$:
$$\mathop{\rm Inf} ( z ) \iff \emptyset \in z \wedge \forall u ( u \in z \rightarrow \ u \cup \{ u \} \in z ) .$$
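The Kuratowski definition of the ordered pair given above can be mimicked with hashable sets in Python (an illustrative sketch; `kpair` is a name introduced here, not from the article):

```python
def kpair(x, y):
    """Kuratowski encoding <x, y> = {{x}, {x, y}} as a pure (frozen) set."""
    return frozenset({frozenset({x}), frozenset({x, y})})

# The encoding is order-sensitive, and <x, x> collapses to {{x}}.
assert kpair(0, 1) != kpair(1, 0)
assert kpair(0, 1) == kpair(0, 1)
assert kpair(0, 0) == frozenset({frozenset({0})})
```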
The axiomatic theory $A$ that follows is the most complete representation of the principles of "naive" set theory. The axioms of $A$ are:
$\mathbf{A1}$. Axiom of extensionality:
$$\forall x ( x \in y \leftrightarrow x \in z ) \rightarrow y = z$$
("if the sets $y$ and $z$ contain the same elements, they are equal");
$\mathbf{A2}$. Axiom scheme of comprehension:
$$\exists y \forall x ( x \in y \leftrightarrow A ) ,$$
where $A$ is an arbitrary formula not containing $y$ as a parameter ("there exists a set $y$ containing exactly the elements $x$ for which $A$ holds").
This system is self-contradictory. If, in $\mathbf{A2}$, the formula $\neg x \in x$ is taken as $A$, the formula $\forall x ( x \in y \leftrightarrow \neg x \in x )$ readily yields $y \in y \leftrightarrow \neg y \in y$, which is a contradiction.
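The contrast with the restricted separation scheme used by Zermelo's system below can be illustrated in Python, whose set comprehensions happen to mirror separation: a comprehension is always relative to an already-given set, so no paradox arises. A sketch with well-founded `frozenset`s (an illustration introduced here, not from the article):

```python
# A Python frozenset can never contain itself, so Zermelo-style separation
# { x in u : x not in x } relative to a given universe u is harmless:
# it simply returns u itself, instead of producing a paradoxical set.
a = frozenset()
b = frozenset({a})
u = frozenset({a, b})

russell_relative = frozenset(x for x in u if x not in x)
assert russell_relative == u
```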
The axiomatic systems of set theory may be subdivided into the following four groups.
a) The construction of axiomatic systems in the first group is intended to restrict the comprehension axioms so as to obtain the most natural means of formalization of conventional mathematical proofs and, at the same time, to avoid the familiar paradoxes. The first axiomatic system of this type was the system Z, due to E. Zermelo (1908). However, this system does not allow a natural formalization of certain branches of mathematics, and the supplementation of Z by a new principle — the axiom of replacement — was proposed by A. Fraenkel in 1922. The resulting system is known as the Zermelo–Fraenkel system and is denoted by ZF.
b) The second group is constituted by systems the axioms of which are selected in the context of giving some explanations for paradoxes, for example, as a consequence of non-predicative definitions. The group includes Russell's ramified theory of types, the simple theory of T-types, and the theory of types with transfinite indices (cf. Types, theory of).
c) The third group is characterized by the use of non-standard means of logical deduction, multi-valued logic, complementary conditions of proofs and infinite derivation laws. Systems in this group have been developed to the least extent.
d) The fourth group includes modifications of systems belonging to the first three groups and is aimed at attaining certain logical and mathematical objectives. Only the system NBG of Neumann–Gödel–Bernays (1925) and the system NF of W. Quine (1937) will be mentioned here. The construction of the system NBG was motivated by the desire to have a finite number of axioms of set theory, based on the system ZF. The system NF represents an attempt to overcome the stratification of the concepts in the theory of types.
The systems Z, ZF and NF can be formulated in the language described above. The derivation rules, and also the so-called logical axioms, of these systems are identical, and form an applied predicate calculus of the first order with equality and with a description operator. Here are the axioms of equality and of the description operator:
$$x = x ,\ x = y \rightarrow ( A ( x ) \rightarrow A ( y ) ) ,$$
where $A(x)$ is a formula not containing the bound variable $y$ (i.e. it has no constituents of the type $\forall y, \exists y, \iota y$), while $A(y)$ is obtained from the formula $A(x)$ by replacing certain free occurrences of the variable $x$ with $y$:
$$\exists ! x A ( x ) \rightarrow A ( \iota x A ( x ) ) ,$$
where the quantifier $\exists ! x$ means "there exists one and only one $x$", while the formula $A ( \iota x A (x))$ is obtained from the formula $A(x)$ by replacing all free occurrences of the variable $x$ with the term $\iota x A(x)$. The quantifier $\exists ! x$ can be expressed in terms of the quantifiers $\forall$ and $\exists$ and equality.
Non-logical axioms of the system Z:
$\mathbf{Z1}$. The axiom of extensionality $\mathbf{A1}$.
$\mathbf{Z2}$. The pair axiom:
$$\exists u \forall z ( z \in u \leftrightarrow z = x \lor z = y )$$
( "the set x, y exists" );
$\mathbf{Z3}$. The union axiom:
$$\exists y \forall x ( x \in y \leftrightarrow \exists t ( t \in z \wedge \ x \in t ) )$$
("the set $\cup z$ exists");
$\mathbf{Z4}$. The power set axiom:
$$\exists y \forall x ( x \in y \leftrightarrow x \subseteq z )$$
("the set $Pz$ exists");
$\mathbf{Z5}$. The separation axiom scheme:
$$\exists y \forall x ( x \in y \leftrightarrow x \in z \wedge A ( x ) )$$
("there exists the subset of $z$ consisting of the elements $x$ of $z$ for which $A(x)$ is true"); the axioms $\mathbf{Z2}$ – $\mathbf{Z5}$ are examples of axioms of comprehension;
$\mathbf{Z6}$. The axiom of infinity:
$$\exists z \mathop{\rm Inf} ( z ) ;$$
$\mathbf{Z7}$. The axiom of choice:
$$\forall z \exists w ( \mathop{\rm Fnc} ( w ) \wedge \forall x ( x \in z \wedge \neg x = \emptyset \rightarrow w ^ \prime x \in x ) )$$
("for any set $z$ there exists a function $w$ which selects, out of each non-empty element $x$ of the set $z$, a unique element $w ^ \prime x \in x$"). The above axioms are complemented by the regularity axiom:
$\mathbf{Z8}$.
$$\forall x ( \neg x = \emptyset \rightarrow \exists y ( y \in x \wedge y \cap x= \emptyset )),$$
which is intended to postulate that there are no descending chains $x _ {2} \in x _ {1} , x _ {3} \in x _ {2} , x _ {4} \in x _ {3} , \dots$. Axiom $\mathbf{Z8}$ simplifies constructions in Z, and its introduction does not result in contradictions.
The system Z is suitable for developing arithmetic, analysis, functional analysis and for studying cardinal numbers smaller than $\aleph _ \omega$. However, if the alephs are defined in the usual manner, it is no longer possible to demonstrate the existence in Z of $\aleph _ \omega$ and higher cardinal numbers.
The system ZF is obtained from Z by adding Fraenkel's replacement axiom scheme, which may be given in the form of the comprehension axiom scheme:
$\mathbf{ZF9}$.
$$\exists y \forall x ( x \in y \leftrightarrow \exists v ( v \in z \wedge \ x = \iota t A ( t , v ) ))$$
("there exists a set $y$ consisting of the elements $x = \iota t A(t, v)$, where $v$ runs through all the elements of a set $z$"). In other words, $y$ is obtained from $z$ if each element $v$ of $z$ is replaced with $\iota t A (t, v)$.
The system ZF is a very strong theory. All ordinary mathematical theorems can be formalized in terms of ZF.
The system NBG is obtained from ZF by adding a new type of variables — the class variables $X, Y, Z ,\dots$ — and a finite number of axioms for forming classes, by means of which it is possible to prove formulas of the type
$$\exists Y \forall x ( x \in Y \leftrightarrow A ( x ) ) ,$$
where $A(x)$ is a formula of NBG which does not contain bound class variables or the symbol $\iota$. Since any formula $A(x)$ can be used to form a class, the infinite number of ZF axioms can be replaced by a finite number of axioms containing a class variable. The axiom of choice has the form:
$$\exists X ( \mathop{\rm Fnc} ( X ) \wedge \forall x ( \neg x = \emptyset \rightarrow X ^ \prime x \in x ) )$$
and confirms the existence of a selection function, which is unique for all sets and which constitutes a class.
The system NF has a simpler axiomatic form, viz.: 1) the axiom of extensionality; and 2) the axioms of comprehension in which a formula $A$ can be stratified, i.e. it is possible to assign to all variables of the formula $A$ superscript indices so as to obtain a formula of the theory of T-types, i.e. in the subformulas of type $x \in y$ the index of $x$ is one lower than the index of $y$.
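As an illustration of stratification (my own example, not from the source): the formula $x \in y \wedge y \in z$ is stratifiable, while $x \in x$ is not, since the required indices would have to satisfy an impossible equation:

```latex
\underbrace{x^{0} \in y^{1} \ \wedge\ y^{1} \in z^{2}}_{\text{stratified}}
\qquad
\underbrace{x^{i} \in x^{i}}_{\text{would require } i = i + 1}
```

Accordingly, $\{ x : x \notin x \}$ generates no comprehension axiom in NF, which is how Russell's paradox is avoided.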
The system NF has the following characteristics:
a) the axiom of choice and the generalized continuum hypothesis are disprovable;
b) the axiom of infinity is demonstrable (cf. Infinity, axiom of);
c) the extensionality axiom plays a very important role. Thus, if the extensionality axiom is replaced by the slightly weaker axiom:
$$( \exists u ( u \in y ) \wedge \forall u ( u \in y \leftrightarrow u \in z ) ) \rightarrow y = z ,$$
which permits a large number of empty sets, while the comprehension axioms of NF remain unchanged, a fairly weak theory is obtained: The consistency of the resulting system can be proved even in formal arithmetic.
Results concerning the interrelationships between the systems just described are given below.
a) Any formula of ZF is demonstrable in NBG if and only if it is demonstrable in ZF.
b) In ZF it is possible to establish the consistency of Z, completed by any finite number of examples of the axiom scheme of replacement $\mathbf{ZF9}$. Thus, ZF is much stronger than Z.
c) The consistency of T is demonstrable in Z, so that Z is stronger than T.
d) NF is not weaker than T in the sense that it is possible to develop the entire theory of types in NF.
The axiomatic approach to the theory of sets has made it possible to state propositions on the unsolvability in principle (in an exact sense) of certain mathematical problems, and to demonstrate them rigorously. The general procedure for the utilization of the axiomatic method is as follows. Consider a formal axiomatic system $S$ of the theory of sets (as a rule, this is ZF or one of its modifications) that is sufficiently universal to contain all the conventional proofs of classical mathematics, and for all ordinary mathematical facts to be deducible from it. A given problem $A$ may be written down as a formula in the language of $S$. It is then established by mathematical methods that neither $A$ nor its negation can be deduced in $S$. It follows that problem $A$ cannot be solved (either way) by the tools of the theory $S$; but since this theory $S$ was assumed to contain all ordinary methods of proof, the result means that $A$ cannot be solved by ordinary methods of inference, i.e. $A$ is "transcendental".
Results which state that a proof cannot be performed in the theory $S$ are usually obtained under the assumption that $S$, or some natural extension of $S$, is consistent. This is because, on the one hand, a problem can be non-deducible in $S$ only if $S$ is consistent, but such consistency cannot be established by the tools offered by $S$ (cf. Gödel incompleteness theorem), i.e. cannot be derived by ordinary methods. On the other hand, the consistency of $S$ is usually a very likely hypothesis; the very theory $S$ is based on its truth.
Furthermore, the axiomatic approach to the theory of sets made it possible to accurately pose and solve problems connected with effectiveness in the theory of sets, which had been intensively studied during the initial development of the theory by R. Baire, E. Borel, H. Lebesgue, S.N. Bernstein [S.N. Bernshtein], N.N. Luzin and W. Sierpiński. It is said that an object in the theory of sets which satisfies a property $\mathfrak A$ is effectively defined in the axiomatic theory $S$ if it is possible to construct a formula $A(x)$ of $S$ for which it can be demonstrated in $S$ that it is fulfilled for a unique object, and that this object satisfies property $\mathfrak A$. Because of this definition it is possible to show in a rigorous manner that for certain properties $\mathfrak A$ in $S$ it is impossible to effectively specify an object which satisfies $\mathfrak A$, while the existence of these objects in $S$ can be established. But since the chosen theory $S$ is sufficiently universal, the fact that the existence of certain objects in $S$ is ineffective is also a proof of the fact that their existence cannot be effectively established by ordinary mathematical methods.
Finally, the methods of the axiomatic theory of sets make it possible to solve a number of difficult problems in classical branches of mathematics as well: in the theory of cardinal and ordinal numbers, in descriptive set theory and in topology.
Some of the results obtained by the axiomatic theory of sets are given below. Most of the theorems concern the axiomatic set theory of Zermelo–Fraenkel (ZF), which is now the most frequently employed. Let $\mathop{\rm ZF} ^ {-}$ be the system ZF without the axiom of choice $\mathbf{Z7}$. In view of a), the results can be readily adapted to the system NBG as well.
1) It was shown in 1939 by K. Gödel that if $\mathop{\rm ZF} ^ {-}$ is consistent, it will remain consistent after the axiom of choice and the continuum hypothesis have been added. It follows that it is impossible to disprove the axiom of choice or the continuum hypothesis in ZF. In order to prove this result, Gödel constructed a model of the theory ZF consisting of the so-called Gödel constructive sets (cf. Gödel constructive set); this model plays an important role in modern axiomatic set theory.
2) The problem as to whether or not the axiom of choice or the continuum hypothesis is deducible in ZF remained open until 1963, when it was shown by P.J. Cohen, using his forcing method, that if $\mathop{\rm ZF} ^ {-}$ is consistent, it will remain consistent after the addition of any combination of the axiom of choice, the continuum hypothesis or their negations. Thus, these two problems are independent in ZF.
The principal method used for establishing that a formula $A$ is not deducible in ZF is to construct a model of ZF containing the negation of $A$. Cohen's forcing method, which was subsequently improved by other workers, strongly extended the possibilities of constructing models of set theory, and now forms the basis of almost all subsequent results concerning non-deducibility. For instance:
3) It has been shown that one can add to ZF, without obtaining (additional) inconsistencies, the hypothesis stating that the cardinality of the set of subsets of a set $x$ may be an almost arbitrary pre-given function of the cardinality of $x$ on regular cardinals (the only substantial restrictions are connected with König's theorem).
4) M.Ya. Suslin (1920) formulated the following hypothesis. Any totally ordered set in which every family of pairwise non-intersecting non-empty open intervals is at most countable must contain a countable everywhere-dense subset. The non-deducibility of Suslin's hypothesis in ZF was established by Cohen's method.
5) It was shown that the following postulate: "Any subset of the set of real numbers is Lebesgue measurable" is unsolvable in $\mathop{\rm ZF} ^ {-}$ (without the axiom of choice).
6) The interrelationship of many important problems of descriptive set theory with ZF was clarified. The first results relating to this problem were demonstrated by P.S. Novikov [5]. The methods of axiomatic set theory made it possible to discover previously unknown connections between the problems of "naive" set theory. It was proved, for example, that the existence of a Lebesgue non-measurable set of real numbers of the type $\Sigma _ {2} ^ {1}$ (i.e. $A _ {2}$) implies the existence of an uncountable $\Pi _ {1} ^ {1}$ (i.e. $C {\mathcal A}$) set without a perfect subset.
7) It was proved that an effectively totally ordered continuum is absent in ZF. Numerous results proved the absence of effectively defined objects in the descriptive theory of sets and in the theory of ordinal numbers.
#### References
[1] A. Levy, "Foundations of set theory", North-Holland (1973)
[2] P.J. Cohen, "Set theory and the continuum hypothesis", Benjamin (1966)
[3] T.J. Jech, "Lectures in set theory: with particular emphasis on the method of forcing", Lect. Notes in Math., 217, Springer (1971)
[4] F.R. Drake, "Set theory: an introduction to large cardinals", North-Holland (1974)
[5] P.S. Novikov, "On the consistency of certain propositions of the descriptive theory of sets", Amer. Math. Soc. Transl., 29 (1963) pp. 51–89; Trudy Mat. Inst. Steklov., 38 (1951) pp. 279–316
http://www.math.psu.edu/calendars/meeting.php?id=9637
|
# Meeting Details
Title: Cohomological equation, invariant distributions and quantitative unique ergodicity for horocycle flows
Seminar: Working Seminar: Dynamics and its Working Tools
Speaker: Giovanni Forni, University of Maryland

This is the first lecture from the Distinguished Visiting Professor lecture series "Invariant distributions and renormalization in parabolic dynamics."

In this lecture we describe our joint work with L. Flaminio on how to use the theory of unitary representations to construct solutions of the cohomological equation for horocycle flows, and how to apply that result to obtain precise bounds on the speed of convergence of ergodic averages. The key idea (related to work on decay of correlations for Anosov flows by Blank-Keller-Liverani, Liverani-Gouezel, Baladi-Tsuji, ...) is to study the action of a "renormalization" dynamics, in this case the geodesic flow, on the space of invariant distributions for the horocycle flow.
https://www.physicsforums.com/threads/physics-physics-physics-physics-torque.51309/
|
# PHysics PHysics Physics Physics Torque
1. Nov 3, 2004
### envscigrl
Here is my problem:
A 2.50g particle moves in a circle of radius 3.00m. The magnitude of its angular momentum relative to the center of the circle depends on time according to L = (3.5 Nm)t. Find the magnitude of the torque acting on the particle.
I know that:
T = F L
where T = torque
F is the force
L is angular momentum
I feel like this problem is more algebraic than anything. I just need something stable to start from. Pleeeeeeeeaaaase Heeeeeeelp!!
2. Nov 3, 2004
### jamesrc
I'm afraid your equation is wrong. You may be thinking of $$\vec{\tau} = \vec{r}\times\vec{F}$$, where r is the moment arm (and it's a vector equation). What you need to use is the idea that the rate of change of angular momentum is equal to the torque. I hope that helps.
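The hint can be made concrete with a small sketch (my own, not part of the thread): torque is the time derivative of angular momentum, so for L(t) = (3.5 N m) t the torque is the constant 3.5 N m, independent of the particle's mass or the circle's radius.

```python
# Torque = dL/dt.  Here L(t) = (3.5 N*m) * t, so the derivative is constant.
dt = 1e-6
L = lambda t: 3.5 * t                  # angular momentum in kg*m^2/s
tau = (L(2.0 + dt) - L(2.0)) / dt      # numerical dL/dt at t = 2 s
print(round(tau, 6))                   # -> 3.5 (newton-meters)
```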
http://mathhelpforum.com/calculus/166342-few-problems.html
|
# Math Help - A few Problems ..
1. ## A few Problems ..
FINAL EXAM REVIEW QUESTIONS - TOMORROW FINAL EXAM IN THE MORNING!
1 The management of the UNICO department store has decided to enclose an 841 ft² area outside the building for displaying potted plants and flowers. One side will be formed by the external wall of the store, two sides will be constructed of pine boards, and the fourth side will be made of galvanized steel fencing. If the pine board fencing costs $6/running foot and the steel fencing costs $3/running foot, determine the dimensions of the enclosure that can be erected at minimum cost. (Round your answers to one decimal place.)
wood side 1 ft steel side 2 ft
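A sketch of how problem 1 can be set up (my own working, not the forum's answer): with the wall along one side, let y be the length of each pine side and x the steel side, so xy = 841 and the cost is 6(2y) + 3x.

```python
# Cost C(y) = 12y + 3*(841/y) = 12y + 2523/y.
# Setting dC/dy = 12 - 2523/y**2 = 0 gives y = sqrt(2523/12).
import math
y = math.sqrt(2523 / 12)          # pine (wood) side, ft
x = 841 / y                       # steel side, ft
print(round(y, 1), round(x, 1))   # -> 14.5 58.0
```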
2 By cutting away identical squares from each corner of a rectangular piece of cardboard and folding up the resulting flaps, an open box may be made. If the cardboard is 15 in. long and 8 in. wide, find the dimensions of the box that will yield the maximum volume. (Round your answers to two decimal places.)
1 in (smallest value) 2 in
3 in (largest value)
3 If an open box has a square base and a volume of 98 in³ and is constructed from a tin sheet, find the dimensions of the box, assuming a minimum amount of material is used in its construction. (Round your answers to two decimal places.) height 1 in length 2 in width 3 in
4 A rectangular box is to have a square base and a volume of 20 ft³. If the material for the base costs 35¢/square foot, the material for the sides costs 10¢/square foot, and the material for the top costs 15¢/square foot, determine the dimensions of the box that can be constructed at minimum cost.
x = 1 ft y = 2 ft
5 A Norman window has the shape of a rectangle surmounted by a semicircle (see the figure below). If a Norman window is to have a perimeter of 34 ft, what should its dimensions be in order to allow the maximum amount of light through the window? (Round your answers to two decimal places.)
x = 1 ft
y = 2 ft
6 The owner of a luxury motor yacht that sails among the 4000 Greek islands charges $336/person/day if exactly 20 people sign up for the cruise. However, if more than 20 people sign up (up to the maximum capacity of 100) for the cruise, then each fare is reduced by $4 for each additional passenger.
Assuming at least 20 people sign up for the cruise, determine how many passengers will result in the maximum revenue for the owner of the yacht.
1 passengers
What is the maximum revenue?
\$ 2
What would be the fare/passenger in this case? (Round your answer to the nearest dollar.)
3 dollars per passenger
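A sketch of how problem 6 can be checked (my own working, not the forum's): with n ≥ 20 passengers the fare is 336 − 4(n − 20) = 416 − 4n dollars, so the revenue is R(n) = n(416 − 4n), which can simply be scanned over the allowed range.

```python
# Revenue as a function of passenger count n, scanned over 20..100.
revenue = lambda n: n * (416 - 4 * n)
best = max(range(20, 101), key=revenue)
print(best, revenue(best), 416 - 4 * best)  # -> 52 10816 208
```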
2. You are certainly welcome here, but we're not just going to give you all the answers; that's not how it works around here. Show us what work you've done and we'll help you in places where you are mistaken or incorrect.
I don't think you're allowed internet access during your test, which is why giving you the answers is useless. Work it out yourself and we'll help you get over what you're struggling with.
https://web2.0calc.com/questions/solve-fo-x
|
# Solve for x: $$\sqrt3 x^2 - 2\sqrt2 x - 2\sqrt3 = 0$$
Solve for x: $$\sqrt3 x^2 - 2\sqrt2 x - 2\sqrt3 = 0$$
SARAHann Apr 1, 2017
#3
Hi Sara :)
Lets see :)
$$\sqrt3\;x^2-2\sqrt2\;x-2\sqrt3=0\\~\\ \triangle=(-2\sqrt2)^2-4\cdot\sqrt3\cdot(-2\sqrt3) \\ \triangle=8+24 \\ \triangle=32\qquad \text{Positive, so this means it has 2 real roots} \\~\\$$
$$x = {-b \pm \sqrt{b^2-4ac} \over 2a}\\ x = {-b \pm \sqrt{\triangle} \over 2a}\\ x = {2\sqrt2 \pm \sqrt{32} \over 2\sqrt3}\\ x = {2\sqrt2 \pm 4\sqrt{2} \over 2\sqrt3}\\ \\~\\ x = {2\sqrt2 \pm 4\sqrt{2} \over 2\sqrt3}\times \frac{\sqrt3}{\sqrt3}\\ x = {2\sqrt6 \pm 4\sqrt{6} \over 6}\\ x = {\sqrt6 \pm 2\sqrt{6} \over 3}\\ x = {\sqrt6 (1\pm2) \over 3}\\ x=\frac{3\sqrt6}{3}\qquad or \qquad x=\frac{-\sqrt6}{3}\\ x=\sqrt6\qquad or \qquad x=\frac{-\sqrt6}{3}\\$$
I checked this answer by graphing it and it is correct.
Melody Apr 1, 2017
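A quick numerical check of these roots (my own sketch, not part of the thread):

```python
# Verify the roots of sqrt(3) x^2 - 2 sqrt(2) x - 2 sqrt(3) = 0
import math
a, b, c = math.sqrt(3), -2 * math.sqrt(2), -2 * math.sqrt(3)
disc = b * b - 4 * a * c                          # discriminant, should be 32
roots = [(-b + s * math.sqrt(disc)) / (2 * a) for s in (1, -1)]
print([round(r, 6) for r in roots])               # -> [2.44949, -0.816497]
# matches x = sqrt(6) and x = -sqrt(6)/3
```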
#1
Solve for x: P.S. This is how I read it: sqrt(3x^2) - 2sqrt(2x) - 2sqrt(3) = 0, solve for x
-2 sqrt(3) - 2 sqrt(2) sqrt(x) + sqrt(3) sqrt(x^2) = 0
Add 2 sqrt(3) to both sides:
sqrt(3) x - 2 sqrt(2) sqrt(x) = 2 sqrt(3)
Subtract sqrt(3) x from both sides:
-2 sqrt(2) sqrt(x) = 2 sqrt(3) - sqrt(3) x
Raise both sides to the power of two:
8 x = (2 sqrt(3) - sqrt(3) x)^2
Expand out terms of the right hand side:
8 x = 3 x^2 - 12 x + 12
Subtract 3 x^2 - 12 x + 12 from both sides:
-3 x^2 + 20 x - 12 = 0
The left hand side factors into a product with three terms:
-(x - 6) (3 x - 2) = 0
Multiply both sides by -1:
(x - 6) (3 x - 2) = 0
Split into two equations:
x - 6 = 0 or 3 x - 2 = 0
x = 6 or 3 x - 2 = 0
x = 6 or 3 x = 2
Divide both sides by 3:
x = 6 or x = 2/3
-2 sqrt(3) - 2 sqrt(2) sqrt(x) + sqrt(3) sqrt(x^2) ⇒ -2 sqrt(3) - 2 sqrt(2) sqrt(2/3) + sqrt(3) sqrt((2/3)^2) = -8/sqrt(3) ≈ -4.6188:
So this solution is incorrect
-2 sqrt(3) - 2 sqrt(2) sqrt(x) + sqrt(3) sqrt(x^2) ⇒ -2 sqrt(3) - 2 sqrt(2) sqrt(6) + sqrt(3) sqrt(6^2) = 0:
So this solution is correct
The solution is: x = 6
Guest Apr 1, 2017
#2
could u explain using latex? i didn't get it
SARAHann Apr 1, 2017
#4
Thx Mel!
SARAHann Apr 3, 2017
http://tnorthrup1111.blogspot.com/2015/04/order-up-one-black-hole-pleasenot.html
|
"Black Smoke Portals" Opening in Skies Around The World? (2015)
Curator: Roberto Emparan
A black ring is a black hole with an event horizon of topology S^1 × S^p. Black rings can exist only in spacetimes with five or more dimensions. Exact black ring solutions of General Relativity are known only in five dimensions, but approximate solutions for thin black rings (with the radius of S^1 much larger than the radius of S^p) have been constructed in spacetimes with more than five dimensions. The existence of black ring solutions shows that higher-dimensional black holes can have non-spherical topology and are not uniquely specified by their conserved charges.
"As you can see these subjects and things are very real and very documented and are being very researched by scientists Folks. I was even very much surprised of all the findings that I turned up from seeking along these lines."
EVENT HORIZON: Will CERN’s Hadron Collider Make Contact With Other Dimensions?
21st Century Wire says…
Scientists at Large Hadron Collider hope to make contact with PARALLEL UNIVERSE in days
Paul Baldwin
Express Newspapers

The staggeringly complex LHC 'atom smasher' at the CERN centre in Geneva, Switzerland, will be fired up to its highest energy levels ever in a bid to detect – or even create – miniature black holes.
If successful a completely new universe will be revealed – rewriting not only the physics books but the philosophy books too.
It is even possible that gravity from our own universe may ‘leak’ into this parallel universe, scientists at the LHC say.
The experiment is sure to inflame alarmist critics of the LHC, many of whom initially warned the high energy particle collider would spell the end of our universe with the creation a black hole of its own.
But so far Geneva remains intact and comfortably outside the event horizon.
Indeed the LHC has been spectacularly successful. First scientists proved the existence of the elusive Higgs boson ‘God particle’ – a key building block of the universe – and it is seemingly well on the way to nailing ‘dark matter’ – a previously undetectable theoretical possibility that is now thought to make up the majority of matter in the universe.
But next week’s experiment is considered to be a game changer.
Mir Faizal, one of the three-strong team of physicists behind the experiment, said: “Just as many parallel sheets of paper, which are two dimensional objects [breadth and length] can exist in a third dimension [height], parallel universes can also exist in higher dimensions.
“We predict that gravity can leak into extra dimensions, and if it does, then miniature black holes can be produced at the LHC.
“Normally, when people think of the multiverse, they think of the many-worlds interpretation of quantum mechanics, where every possibility is actualised.
“This cannot be tested and so it is philosophy and not science.
“This is not what we mean by parallel universes. What we mean is real universes in extra dimensions…
Well folks that brings us to the importance of the date DEC 21,2012 as you can see below by this well known Science publication.
See the Mayans were looking at ..... Hunab cu (meaning Creator God) which to them was the center of the Milky way Galaxy ie; Large Black Hole For 5 thousand or so years knowing something was going to happen to change things here at the end of the age of Pisces and beginning of Aquarius on DEC 21 ,2012 being indicative of their calenders end date and also on the winter solstice of that year, well it did, Did I also mention that the movie "Aquarius" is coming out in May.
They found the God particle, the Higgs Boson, they found it when smashing crap together making disturbance in Dark matter (yes Dark side of reality which they know very well) and the Material source of Matter in the material world is Satan = Higgs Boson and create that Black hole (hanab Cu) that the Mayans saw to come into this world, wow how did they know. They knew because all this has been Self fulfilling Prophesy from the very start, and just handed down through generations of all cultures in a different flavors for each. As to show proof of prophesy as being the same among many. peoples. when in truth it was not the Ineffable Father of eternity but the foolish God of the Old testament ( Satan) who has manipulated Gods truth from the start. Satan's wife is Mindlessness and together they make completely nothing. They only know how to self destruct and take whom ever follows them with them.
Science
Vol. 338 no. 6114 pp. 1558-1559
DOI: 10.1126/science.338.6114.1558
The Discovery of the Higgs Boson
No recent scientific advance has generated more hoopla than this one. On 4 July, researchers working with the world's biggest atom smasher—the Large Hadron Collider in Switzerland—announced that they had spotted a particle that appears to be the long-sought Higgs boson, the last missing piece in physicists' standard model of fundamental particles and forces. The news captured the imagination of people around the world, and now, Science has chosen it as its Breakthrough of the Year.
So my friends, it proves to be that all this stuff is pretty much creditable knowledge; the only real thing left now to figure out is the timing of all this crap. But in all truth it doesn't matter, for either way the mortal world turns out, being one in the Father of eternity by way of Jesus Christ's teachings is a must do. My friends, may God be with you always and help you to keep strong. Thomas
https://www.zbmath.org/?q=an%3A1028.65011
|
zbMATH — the first resource for mathematics
Matrix-free multilevel moving least-squares methods. (English) Zbl 1028.65011
Chui, Charles K. (ed.) et al., Approximation theory X. Wavelets, splines, and applications. Papers from the 10th international symposium, St. Louis, Mo, USA, March 26-29, 2001. Nashville, TN: Vanderbilt University Press. Innovations in Applied Mathematics. 271-281 (2002).
Summary: We investigate matrix-free formulations for polynomial-based moving least-squares approximation. The well-known method of D. Shepard [A two-dimensional interpolation function for irregularly spaced data, Proc. 23rd Nat. Conf. ACM, 517-523 (1968)] is one such formulation that leads to $$O(h)$$ approximation order. We are interested in methods with higher approximation orders. Several possible approaches are identified, and one of them – based on the analytic solution of small linear systems – is presented here. Numerical experiments with a multilevel residual updating algorithm are also presented.
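As a rough illustration of the Shepard baseline mentioned in the summary (a minimal one-dimensional sketch of inverse-distance weighting, not the multilevel method of the paper):

```python
# Shepard's method: weight each data value by an inverse power of its
# distance to the query point; at a data site, return the value exactly.
def shepard(xq, xs, fs, p=2):
    for xi, fi in zip(xs, fs):
        if xq == xi:                                  # exact hit
            return fi
    w = [1.0 / abs(xq - xi) ** p for xi in xs]        # inverse-distance weights
    return sum(wi * fi for wi, fi in zip(w, fs)) / sum(w)

xs, fs = [0.0, 1.0, 2.0], [0.0, 1.0, 4.0]             # samples of f(x) = x^2
print(shepard(1.0, xs, fs))                           # -> 1.0 (interpolatory)
print(round(shepard(0.5, xs, fs), 4))                 # -> 0.6842
```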
For the entire collection see [Zbl 1012.00039].
MSC:
65D15 Algorithms for approximation of functions
https://proofwiki.org/wiki/Automorphism_Group/Examples/Cyclic_Group_C3
|
# Automorphism Group/Examples/Cyclic Group C3
## Example of Automorphism Group
Consider the cyclic group $C_3$, which can be presented as its Cayley table:
$\begin{array}{r|rrr} \struct {\Z_3, +_3} & \eqclass 0 3 & \eqclass 1 3 & \eqclass 2 3 \\ \hline \eqclass 0 3 & \eqclass 0 3 & \eqclass 1 3 & \eqclass 2 3 \\ \eqclass 1 3 & \eqclass 1 3 & \eqclass 2 3 & \eqclass 0 3 \\ \eqclass 2 3 & \eqclass 2 3 & \eqclass 0 3 & \eqclass 1 3 \\ \end{array}$
The automorphism group of $C_3$ is given by:
$\Aut {C_3} = \set {\phi, \theta}$
where $\phi$ and $\theta$ are defined as:
$\map \phi {\eqclass 0 3} = \eqclass 0 3, \quad \map \phi {\eqclass 1 3} = \eqclass 1 3, \quad \map \phi {\eqclass 2 3} = \eqclass 2 3$
$\map \theta {\eqclass 0 3} = \eqclass 0 3, \quad \map \theta {\eqclass 1 3} = \eqclass 2 3, \quad \map \theta {\eqclass 2 3} = \eqclass 1 3$
The Cayley table of $\Aut {C_3}$ is then:
$\begin{array}{r|rr} & \phi & \theta \\ \hline \phi & \phi & \theta \\ \theta & \theta & \phi \\ \end{array}$
## Proof 1
Let $\xi$ be a general automorphism on $C_3$.
Then by Group Homomorphism Preserves Identity we immediately have that:
$\map \xi {\eqclass 0 3} = \eqclass 0 3$
Investigating $\map \xi {\eqclass 1 3}$, we find $2$ options:
$\map \xi {\eqclass 1 3} = \eqclass 1 3$
$\map \xi {\eqclass 1 3} = \eqclass 2 3$
Each leads to one and only one bijection from $C_3$ to $C_3$, that is, $\phi$ and $\theta$ as defined.
It is determined by inspection that both $\phi$ and $\theta$ are automorphisms.
Hence Automorphism Group is Subgroup of Symmetric Group is applied to confirm that $\set {\phi, \theta}$ forms a group.
The Cayley table follows.
$\blacksquare$
## Proof 2
This is an example of Order of Automorphism Group of Prime Group:
$\order {\Aut G} = p - 1$
for a group of prime order $p$.
The only group of order $2$ is the cyclic group of order $2$.
The result follows.
$\blacksquare$
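As a quick sanity check (not part of the ProofWiki proofs above), the automorphisms of $\Z_3$ can be enumerated by brute force:

```python
from itertools import permutations

n = 3
elements = list(range(n))  # Z_3 = {0, 1, 2} under addition mod 3

# An automorphism of (Z_3, +_3) is a bijection f with
# f(a +_3 b) = f(a) +_3 f(b) for all a, b.
automorphisms = []
for perm in permutations(elements):
    f = dict(zip(elements, perm))
    if all(f[(a + b) % n] == (f[a] + f[b]) % n for a in elements for b in elements):
        automorphisms.append(f)

images = sorted(tuple(f[x] for x in elements) for f in automorphisms)
print(len(automorphisms), images)  # 2 [(0, 1, 2), (0, 2, 1)]
```

The two surviving bijections are exactly $\phi$ (the identity) and $\theta$ (inversion), confirming $\order {\Aut {C_3}} = 2$.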
https://math.stackexchange.com/questions/2982762/is-it-true-that-the-second-fundamental-form-of-a-geodesic-as-a-one-dimensional-s
|
# Is it true that the second fundamental form of a geodesic as a one-dimensional submanifold is zero?
A geodesic on a Riemannian manifold $(M,g)$ with the Levi-Civita connection $\nabla$ is defined as a curve $\gamma(t)$ such that $\nabla_{\gamma'(t)}\gamma'(t) \equiv 0$.
Can we show that, in general, the second fundamental form of a geodesic as a one-dimensional submanifold of $(M,g)$ is $0$?
• Hello @Bach. As F.T. pointed out, I have misread this question. His answer is correct. Feel free to mark his answer as the correct answer. – Ernie060 Sep 17 at 15:06
EDIT: I misread and misinterpreted the OP's question. The answer of F.T. is correct. This answer explains that the second fundamental form of a submanifold along a geodesic of that submanifold does not vanish in general.
No, in general the second fundamental form along a geodesic does not vanish.
As an example, consider a (regular) curve $\alpha$ in a surface $S$ in $\mathbb{R}^3$. Let $T$ be the unit tangent vector and $N$ the unit normal of the surface along the curve. Define $V = N \times T$; this is a vector normal to the curve but tangent to the surface. (The frame $T$, $V$, $N$ is called the Darboux frame.)
Then $T' = k_g V + k_n N$, where $k_g$ is the geodesic curvature and $k_n$ is the normal curvature of $\alpha$.
The curve $\alpha$ is a geodesic iff $k_g = 0$. However, the part $k_n N$ is not necessarily zero. This part vanishes iff $k_n = 0$, i.e. iff the curve is an asymptotic curve. Note that if $k_g = k_n = 0$, then $T' = 0$, so the curve is a straight line.
If you consider a geodesic on a surface with positive Gauss curvature, then the second fundamental form along the geodesic never vanishes, since the normal curvature cannot be zero in any direction. This follows from Euler's formula $k_n = k_1 \cos^2 \theta + k_2 \sin^2 \theta$, where $k_n$ is the normal curvature in the direction $\cos \theta \, e_1 + \sin \theta \, e_2$ and $e_1$, $e_2$ are the two principal directions with respective principal curvatures $k_1$, $k_2$.
I believe that the answer of Ernie060 is somewhat confusing, as he is considering a curve in a surface in $\mathbb{R}^3$. He is actually showing that geodesics of the surface are not geodesics of $\mathbb{R}^3$, which is clearly true for a generic surface. For those interested in minimal submanifolds, I would like to point out that these phenomena are very common. For instance, the Clifford torus is a minimal submanifold of the sphere but not of Euclidean space. (See do Carmo, *Riemannian Geometry*.)
Assume we have a general submanifold $\Sigma$ of $(M,g)$. The second fundamental form is defined as $A(X,Y) = (\nabla_X Y)^N$ with $X, Y \in T_p\Sigma$ and $\nabla$ the Levi-Civita connection of $M$. If $\Sigma$ is one-dimensional, then (as $\gamma'(t)$ forms a basis of $T_{\gamma(t)}\Sigma$ for all $t$) the second fundamental form is identically zero by the definition of a geodesic.
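The distinction drawn in these answers can be checked numerically for a great circle on the unit sphere, a sketch using the standard Darboux frame: the geodesic curvature vanishes while the normal curvature of the ambient surface does not.

```python
import numpy as np

t = 0.7  # arbitrary parameter value
# Great circle (a geodesic of the sphere), parametrized by arc length
alpha  = np.array([np.cos(t), np.sin(t), 0.0])
T      = np.array([-np.sin(t), np.cos(t), 0.0])   # unit tangent
Tprime = np.array([-np.cos(t), -np.sin(t), 0.0])  # derivative of T
N      = alpha                                    # outward unit normal of the sphere
V      = np.cross(N, T)                           # completes the Darboux frame

k_g = np.dot(Tprime, V)  # geodesic curvature
k_n = np.dot(Tprime, N)  # normal curvature

print(k_g, k_n)  # k_g is 0 (geodesic), yet k_n is approximately -1 (nonzero)
```

So the great circle is a geodesic ($k_g = 0$), but the surface's second fundamental form along it is nonzero ($k_n \neq 0$), exactly as described above.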
https://texample.net/tikz/examples/h-tree/
|
Example: H-tree and b-tree
Published 2012-11-25 | Author: Andrew Stacey
The H-tree gets its name from its repeating pattern, which looks like the letter “H”. It is also called the H-fractal; it is a space-filling curve with Hausdorff dimension 2.
A binary tree is a tree where each node has no more than two child nodes.
The example shows how to build up a tree recursively using a foreach loop, relying on the tree machinery already built into TikZ. With a slight modification, the routine for drawing the H-tree can be adapted to a full binary tree.
Originally posted to TeX.SE.
% H-tree and B-tree
% Author: Andrew Stacey
\documentclass{article}
\usepackage{tikz}
\tikzset{
htree leaves/.initial=2,
sibling angle/.initial=20,
htree level/.initial={}
}
\makeatletter
\def\htree@growth{%
\pgftransformrotate{%
(\pgfkeysvalueof{/tikz/sibling angle})*(-.5-.5*\tikznumberofchildren
+\tikznumberofcurrentchild)}%
\pgftransformxshift{\the\tikzleveldistance}%
\pgfkeysvalueof{/tikz/htree level}%
}
\tikzstyle{htree}=[
growth function=\htree@growth,
sibling angle=180,
htree level={
\tikzleveldistance=.707\tikzleveldistance
\pgfsetlinewidth{.707*\the\pgflinewidth}
}
]
\tikzstyle{btree}=[
growth function=\htree@growth,
sibling angle=60,
htree level={
\tikzleveldistance=.55\tikzleveldistance
\pgfsetlinewidth{.707*\the\pgflinewidth}
}
]
\def\g@addto@macro#1#2{%
\begingroup
\toks@\expandafter\expandafter\expandafter{\expandafter#1#2}%
\xdef#1{\the\toks@}%
\endgroup}
\newcommand{\htree}[2][]{%
\def\htree@start{\noexpand\coordinate}
\def\htree@end{}
\foreach \l in {0,...,#2} {
\g@addto@macro\htree@start{child foreach \noexpand\x in {1,2} {\iffalse}\fi}
\global\let\htree@start\htree@start
\global\let\htree@end\htree@end
}
\edef\htree@cmd{\htree@start\htree@end;}
\begin{scope}[htree,#1]
\htree@cmd
\end{scope}
}
\makeatother
\begin{document}
\begin{tikzpicture}[
rotate=90,
yscale=.5,
level distance=3cm,
line width=8pt,
]
\htree{7}
\htree[btree,yshift=-12cm,xshift=-3cm]{7}
\end{tikzpicture}
\end{document}
• #1 Danna, January 3, 2013 at 1:58 a.m.
I'd like to know if you could help me with code to draw a coat of arms, or whether you can help me create fractals. Thanks.
• #2 Me, April 21, 2013 at 12:20 p.m.
A B-tree is not the same thing as a binary tree. This is a binary tree, not a B-tree, so the title is wrong.
https://www.research.manchester.ac.uk/portal/en/theses/modelling-and-instrumented-nanoindentation-of-compliant-and-hydrated-materials(fae26583-f65b-4dc0-8ef7-fed8b855a576).html
|
## Modelling and instrumented nanoindentation of compliant and hydrated materials
Biological tissues are highly dynamic structures that undergo constant remodelling over an individual's life to sustain the multiple mechanical stimuli to which they are subjected. Tissue function and behaviour are intimately related to the native tissue's hierarchical structure. Ageing and age-related disease often lead to an altered mechanical response of a tissue and, as a consequence, to a reduction in quality of life in the ageing population. Over the last couple of decades there has therefore been growing interest in relating tissue physiological function to observed mechanical response.

Contact-based techniques such as nanoindentation have often been used to extract the material properties of soft polymers, gels, and tissues over the last decade, showing promising potential for tissue characterisation over a range of length scales. However, multiple issues arise when characterising highly compliant and hydrated materials (e.g. soft tissues and hydrogels), since indentation devices were originally engineered for hard-material characterisation. In this work, experimental indentation is combined with iterative finite element analysis to extract meaningful mechanical properties from indentation time-displacement curves. The time-dependent behaviour of thin films deposited onto stiff substrates was studied because this is the format of histological sections, commonly used for the study of tissue. For this purpose, two model materials exhibiting time-dependent mechanical response similar to human tissue were chosen: PDMS, and pHEMA with EGDMA as cross-linker. The present work addresses a well-known issue for thin-material characterisation via indentation: the so-called thickness effect. A strong influence of specimen thickness was observed on the nanoindentation response of the films and on the material properties extracted using the Oliver and Pharr analysis.
Hence a fully automated iterative FE algorithm was used to extract the viscoelastic and poroelastic material properties from the indentation data. It is demonstrated that the onset of the substrate effect occurs at normalized film thickness $\delta_{norm} \leqslant 10$, where $\delta_{norm}=\frac{\delta}{\sqrt{Rh}}$ ($\delta$: film thickness, $R$: indenter radius, $h$: indentation depth). If a half-space condition is assumed, the error in the estimation of the shear (and elastic) modulus via the Hertz contact theory is over a factor of 3.5 to 4.5 for films with a thickness of $\delta$
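The normalized-thickness criterion $\delta_{norm} = \delta/\sqrt{Rh}$ can be illustrated with a short calculation; the numerical values below are made up for illustration and are not data from the thesis:

```python
import numpy as np

def delta_norm(delta, R, h):
    """Normalized film thickness delta / sqrt(R * h)."""
    return delta / np.sqrt(R * h)

# Illustrative values (not from the thesis): 10 um film,
# 5 um indenter radius, 0.5 um indentation depth.
delta, R, h = 10e-6, 5e-6, 0.5e-6
dn = delta_norm(delta, R, h)
print(dn, dn <= 10)  # ~6.32, True: substrate effect expected at this depth
```

Since $\delta_{norm} \leqslant 10$ here, a half-space (Hertzian) analysis of such a film would, by the thesis's criterion, already be biased by the substrate.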
https://eprint.iacr.org/2021/721
|
### Index Calculus Attacks on Hyperelliptic Jacobians with Effective Endomorphisms
Sulamithe Tsakou and Sorina Ionica
##### Abstract
For a hyperelliptic curve defined over a finite field $\mathbb{F}_{q^n}$ with $n>1$, the discrete logarithm problem is subject to index calculus attacks. We exploit the endomorphism of the curve to reduce the size of the factor basis and hence improve the complexity of the index calculus attack for certain families of ordinary elliptic curves and genus 2 hyperelliptic Jacobians defined over finite fields. This approach adds an extra cost when performing operations on the factor basis, but experiments show that reducing the size of the factor basis yields a gain in the total complexity of the index calculus algorithm with respect to generic attacks.
Category: Public-key cryptography
Publication info: Preprint. Minor revision.
Keywords: elliptic curves, index calculus, attack
Contact author(s)
sorina ionica @ u-picardie fr
sulamithe tsakou @ u-picardie fr
Short URL
https://ia.cr/2021/721
CC BY
BibTeX
@misc{cryptoeprint:2021/721,
author = {Sulamithe Tsakou and Sorina Ionica},
title = {Index Calculus Attacks on Hyperelliptic Jacobians with Effective Endomorphisms},
howpublished = {Cryptology ePrint Archive, Paper 2021/721},
year = {2021},
note = {\url{https://eprint.iacr.org/2021/721}},
url = {https://eprint.iacr.org/2021/721}
}
http://blog.espol.edu.ec/estg1003/tablas-trigonometricas/
|
# Trigonometric tables
Reference: Leon W. Couch, Appendix, p. 653
$$\cos(x \pm y) = \cos(x)\cos(y) \mp \sin(x)\sin(y)$$
$$\sin(x \pm y) = \sin(x)\cos(y) \pm \cos(x)\sin(y)$$
$$\cos\left(x \pm \tfrac{\pi}{2}\right) = \mp \sin(x)$$
$$\sin\left(x \pm \tfrac{\pi}{2}\right) = \pm \cos(x)$$
$$\cos(2x) = \cos^2(x) - \sin^2(x)$$
$$\sin(2x) = 2\sin(x)\cos(x)$$
$$2\cos(x)\cos(y) = \cos(x-y) + \cos(x+y)$$
$$2\sin(x)\sin(y) = \cos(x-y) - \cos(x+y)$$
$$2\sin(x)\cos(y) = \sin(x-y) + \sin(x+y)$$
$$2\cos^2(x) = 1 + \cos(2x)$$
$$2\sin^2(x) = 1 - \cos(2x)$$
$$4\cos^3(x) = 3\cos(x) + \cos(3x)$$
$$4\sin^3(x) = 3\sin(x) - \sin(3x)$$
$$8\cos^4(x) = 3 + 4\cos(2x) + \cos(4x)$$
$$8\sin^4(x) = 3 - 4\cos(2x) + \cos(4x)$$
Sinusoid with magnitude $R$ and phase $\theta$:
$$R\cos(x + \theta) = A\cos(x) - B\sin(x)$$
where
$$R = \sqrt{A^2 + B^2}$$
$$\theta = \tan^{-1}\left(\frac{B}{A}\right)$$
$$A = R\cos(\theta)$$
$$B = R\sin(\theta)$$
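The magnitude/phase conversion can be checked numerically. This sketch uses `arctan2` rather than a plain $\tan^{-1}(B/A)$ so the phase lands in the correct quadrant — an implementation choice, not something stated in the reference:

```python
import numpy as np

A, B = 3.0, 4.0
R = np.hypot(A, B)         # magnitude R = sqrt(A^2 + B^2)
theta = np.arctan2(B, A)   # phase; arctan2 picks the correct quadrant

# Verify R*cos(x + theta) == A*cos(x) - B*sin(x) on a grid of angles
x = np.linspace(0.0, 2.0 * np.pi, 9)
lhs = R * np.cos(x + theta)
rhs = A * np.cos(x) - B * np.sin(x)
print(R, np.allclose(lhs, rhs))  # 5.0 True
```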
https://tex.stackexchange.com/questions/374586/how-can-i-use-new-unicode-characters
|
# How can I use new unicode characters?
I know there are lots of questions about Unicode, and I've read many of them, but I wasn't able to solve my problem:
I have to use the characters ↊ and ↋ (U+218A and U+218B) in my text. I tried with LaTeX and XeLaTeX and several packages, but it always says
! Package inputenc Error: Unicode char ↊ (U+218A)
(inputenc) not set up for use with LaTeX.
See the inputenc package documentation for explanation.
Type H for immediate help.
...
What can I try? How can I set it up for use? I'm using Texmaker on Windows 10 with MiKTeX. Thanks!
• Do you have a font that contains these characters? Without a font to display it you get an error message. Jun 12 '17 at 16:12
• Could you please provied a MWE (Minimum Working example)? Jun 12 '17 at 16:19
• You wouldn't get this error with xelatex. Jun 12 '17 at 16:38
• Related/duplicate: How can I get UTF8 character Jun 12 '17 at 16:45
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{graphicx}
\DeclareUnicodeCharacter{218A}{\turnedtwo}
\DeclareUnicodeCharacter{218B}{\turnedthree}
\makeatletter
\DeclareRobustCommand{\turnedtwo}{\make@turned{2}}
\DeclareRobustCommand{\turnedthree}{\make@turned{3}}
\newcommand{\make@turned}[1]{%
\raisebox{\depth}{\scalebox{-1}[-1]{#1}}%
}
\makeatother
\begin{document}
123456789↊↋0
\end{document}
Not sure whether this renders correctly, so I also provide an image of the code.
Output for the test file:
https://www.shaalaa.com/question-bank-solutions/find-distance-point-1-5-10-point-intersection-line-r-2i-j-2k-3i-4j-2k-and-plane-r-i-j-k-5-three-dimensional-geometry-examples-solutions_2522
|
# Find the distance of the point (−1, −5, −10) from the point of intersection of the line $\vec r = 2\hat i - \hat j + 2\hat k + \lambda(3\hat i + 4\hat j + 2\hat k)$ and the plane $\vec r \cdot (\hat i - \hat j + \hat k) = 5$ - Mathematics
#### Solution
Suppose the given line and plane intersect at the point P(x, y, z).
The position vector of P is $\vec r = x\hat i + y\hat j + z\hat k$.
Thus the equations of the line and the plane can be rewritten as
$$x\hat i + y\hat j + z\hat k = (2+3\lambda)\hat i + (-1+4\lambda)\hat j + (2+2\lambda)\hat k$$
and
$$(x\hat i + y\hat j + z\hat k)\cdot(\hat i - \hat j + \hat k) = 5,$$
respectively.
Comparing components gives
$$x = 2+3\lambda, \quad y = -1+4\lambda, \quad z = 2+2\lambda.$$
The plane equation reads $x - y + z = 5$. Substituting the values of x, y and z:
$$(2+3\lambda) - (-1+4\lambda) + (2+2\lambda) = 5 \implies 5 + \lambda = 5 \implies \lambda = 0.$$
Hence $x = 2$, $y = -1$, $z = 2$, so the point of intersection is (2, −1, 2).
The distance between the points (−1, −5, −10) and (2, −1, 2) is therefore
$$\sqrt{(2+1)^2 + (-1+5)^2 + (2+10)^2} = \sqrt{9+16+144} = \sqrt{169} = 13 \text{ units.}$$
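The intersection point and distance can be verified numerically; this is an illustrative sketch, not part of the original solution:

```python
import numpy as np

p0 = np.array([2.0, -1.0, 2.0])   # point on the line (lambda = 0)
d  = np.array([3.0, 4.0, 2.0])    # direction vector of the line
n  = np.array([1.0, -1.0, 1.0])   # plane normal; the plane is r . n = 5

lam = (5.0 - p0 @ n) / (d @ n)    # solve (p0 + lam*d) . n = 5
P = p0 + lam * d                  # point of intersection

Q = np.array([-1.0, -5.0, -10.0])
print(lam, P.tolist(), float(np.linalg.norm(P - Q)))  # 0.0 [2.0, -1.0, 2.0] 13.0
```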
Concept: Three - Dimensional Geometry Examples and Solutions
https://lkpy.readthedocs.io/en/latest/util.html
|
# Utility Functions¶
## Miscellaneous¶
Miscellaneous utility functions.
lenskit.util.clone(algo)
Clone an algorithm, but not its fitted data. This is like scikit.base.clone(), but may not work on arbitrary SciKit estimators. LensKit algorithms are compatible with SciKit clone, however, so feel free to use that if you need more general capabilities.
This function is somewhat derived from the SciKit one.
>>> from lenskit.algorithms.basic import Bias
>>> orig = Bias()
>>> copy = clone(orig)
>>> copy is orig
False
>>> copy.damping == orig.damping
True
lenskit.util.cur_memory()
Get the current memory use for this process
lenskit.util.max_memory()
Get the maximum memory use for this process
lenskit.util.proc_count(core_div=2)
Get the number of desired jobs for multiprocessing operations. This does not affect Numba or MKL multithreading.
This count can come from a number of sources:

* The LK_NUM_PROCS environment variable
* The number of CPUs, divided by core_div (default 2)
Parameters
core_div (int or None) – The divisor to scale down the number of cores; None to turn off core-based fallback.
Returns
The number of jobs desired.
Return type
int
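The documented fallback chain can be sketched as follows; this is an illustration of the described behaviour, not LensKit's actual implementation (`proc_count_sketch` is a hypothetical name):

```python
import os

def proc_count_sketch(core_div=2, env_var="LK_NUM_PROCS"):
    """Sketch of the documented fallback chain; not LensKit's real code."""
    value = os.environ.get(env_var)
    if value is not None:
        return int(value)           # environment variable takes precedence
    if core_div is None:
        raise ValueError("no process count configured")
    return max((os.cpu_count() or 1) // core_div, 1)

os.environ["LK_NUM_PROCS"] = "4"
print(proc_count_sketch())          # 4: the environment variable wins
del os.environ["LK_NUM_PROCS"]
print(proc_count_sketch() >= 1)     # True: falls back to CPU count / core_div
```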
https://help.semmle.com/QL/ql-training/java/global-data-flow-java.html
|
# Introduction to global data flow
QL for Java
## Setup
Note
For this example, we will be analyzing Apache Struts.
You can also query the project in the query console on LGTM.com.
Note that results generated in the query console are likely to differ from those generated in the QL plugin, as LGTM.com analyzes the most recent revision of each project that has been added; the snapshot available to download above is based on a historical version of the codebase.
## Agenda
• Global taint tracking
• Sanitizers
• Path queries
• Data flow models
## Information flow
• Many security problems can be phrased as an information flow problem:
Given a (problem-specific) set of sources and sinks, is there a path in the data flow graph from some source to some sink?
• Some examples:
• SQL injection: sources are user-input, sinks are SQL queries
• Reflected XSS: sources are HTTP requests, sinks are HTTP responses
• We can solve such problems using the data flow and taint tracking libraries.
## Global data flow and taint tracking
• Recap:
• Local (“intra-procedural”) data flow models flow within one function; feasible to compute for all functions in a snapshot
• Global (“inter-procedural”) data flow models flow across function calls; not feasible to compute for all functions in a snapshot
• For global data flow (and taint tracking), we must therefore provide restrictions to ensure the problem is tractable.
• Typically, this involves specifying the source and sink.
Note
As we mentioned in the previous slide deck, local data flow is feasible to compute for all functions in a snapshot, but global data flow is not. This is because the number of paths grows exponentially for global data flow.
The global data flow (and taint tracking) library avoids this problem by requiring that the query author specify which sources and sinks are applicable. This allows the implementation to compute paths between the restricted set of nodes, rather than over the full graph.
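The exponential growth mentioned in this note can be made concrete with a toy count (an illustration, not part of the QL library): in a layered call graph where every function calls two functions in the next layer, the number of distinct source-to-sink paths doubles with each layer.

```python
def path_count(layers, branching=2):
    # Every function calls `branching` functions in the next layer, so the
    # number of distinct source-to-sink paths multiplies at each layer.
    return branching ** layers

print(path_count(10), path_count(30))  # 1024 1073741824
```

At 30 layers there are already over a billion paths, which is why a source/sink restriction is needed before computing global flow.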
## Global taint tracking library
The semmle.code.<language>.dataflow.TaintTracking library provides a framework for implementing solvers for global taint tracking problems:
1. Subclass TaintTracking::Configuration following this template:
class Config extends TaintTracking::Configuration {
Config() { this = "<some unique identifier>" }
override predicate isSource(DataFlow::Node nd) { ... }
override predicate isSink(DataFlow::Node nd) { ... }
}
2. Use Config.hasFlow(source, sink) to find inter-procedural paths.
Note
In addition to the taint tracking configuration described here, there is also an equivalent data flow configuration in semmle.code.<language>.dataflow.DataFlow, DataFlow::Configuration. Data flow configurations are used to track whether the exact value produced by a source is used by a sink, whereas taint tracking configurations are used to determine whether the source may influence the value used at the sink. Whether you use taint tracking or data flow depends on the analysis problem you are trying to solve.
## Code injection in Apache struts
• In April 2018, Man Yue Mo, a security researcher at Semmle, reported 5 remote code execution (RCE) vulnerabilities (CVE-2018-11776) in Apache Struts.
• These vulnerabilities were caused by untrusted, unsanitized data being evaluated as an OGNL (Object Graph Navigation Library) expression, allowing malicious users to perform remote code execution.
• Conceptually, this is a global taint tracking problem - does untrusted remote input flow to a method call which evaluates OGNL?
## Finding RCEs (outline)
import java
import semmle.code.java.dataflow.TaintTracking
class TaintedOGNLConfig extends TaintTracking::Configuration {
  TaintedOGNLConfig() { this = "TaintedOGNLConfig" }
  override predicate isSource(DataFlow::Node source) { /* TBD */ }
  override predicate isSink(DataFlow::Node sink) { /* TBD */ }
}
from TaintedOGNLConfig cfg, DataFlow::Node source, DataFlow::Node sink
where cfg.hasFlow(source, sink)
select source,
  "This untrusted input is evaluated as an OGNL expression $@.",
  sink, "here"
## Defining sources
We want to look for method calls where the method name is getNamespace(), and the declaring type of the method is a class called ActionProxy.
import semmle.code.java.security.Security
class TaintedOGNLConfig extends TaintTracking::Configuration {
  override predicate isSource(DataFlow::Node source) {
    exists(Method m |
      m.getName() = "getNamespace" and
      m.getDeclaringType().getName() = "ActionProxy" and
      source.asExpr() = m.getAReference()
    )
  }
  ...
}
Note
We first define what it means to be a source of tainted data for this particular problem. In this case, we are interested in the value returned by calls to getNamespace().
## Exercise: Defining sinks
Fill in the definition of isSink.
Hint: We want to find the first argument of calls to the method compileAndExecute.
import semmle.code.java.security.Security
class TaintedOGNLConfig extends TaintTracking::Configuration {
  override predicate isSink(DataFlow::Node sink) {
    /* Fill me in */
  }
  ...
}
Note
The second part is to define what it means to be a sink for this particular problem. The queries from an Introduction to data flow will be useful for this exercise.
## Solution: Defining sinks
Find a method access to compileAndExecute, and mark the first argument.
import semmle.code.java.security.Security
class TaintedOGNLConfig extends TaintTracking::Configuration {
  override predicate isSink(DataFlow::Node sink) {
    exists(MethodAccess ma |
      ma.getMethod().getName() = "compileAndExecute" and
      ma.getArgument(0) = sink.asExpr()
    )
  }
  ...
}
## Path queries
Path queries provide information about the identified paths from sources to sinks. Paths can be examined in the Path Explorer view.
Use this template:
/**
* ...
* @kind path-problem
*/
import semmle.code.<language>.dataflow.TaintTracking
import DataFlow::PathGraph
...
from Configuration cfg, DataFlow::PathNode source, DataFlow::PathNode sink
where cfg.hasFlowPath(source, sink)
select sink, source, sink, "<message>"
Note
To see the paths between the source and the sinks, we can convert the query to a path problem query. There are a few minor changes that need to be made for this to work: we need an additional import, we specify PathNode rather than Node, and we add the source and sink to the query output (so that the paths can be determined automatically).
## Defining sanitizers
A sanitizer allows us to prevent flow through a particular node in the graph. For example, flows that go via ValueStackShadowMap are not particularly interesting, because it is a class that is rarely used in practice. We can exclude them like so:
class TaintedOGNLConfig extends TaintTracking::Configuration {
  override predicate isSanitizer(DataFlow::Node nd) {
    nd.getEnclosingCallable()
        .getDeclaringType()
        .hasName("ValueStackShadowMap")
  }
  ...
}
## Defining additional taint steps
We can also add an additional taint step that (heuristically) propagates taint through fields: a value assigned to a field taints later reads of that field from within the same class.
class TaintedOGNLConfig extends TaintTracking::Configuration {
  override predicate isAdditionalTaintStep(DataFlow::Node node1,
                                           DataFlow::Node node2) {
    exists(Field f, RefType t |
      node1.asExpr() = f.getAnAssignedValue() and
      node2.asExpr() = f.getAnAccess() and
      node1.asExpr().getEnclosingCallable().getDeclaringType() = t and
      node2.asExpr().getEnclosingCallable().getDeclaringType() = t
    )
  }
...
}
## Exercise: How not to do global data flow
Implement a flowStep predicate extending localFlowStep with steps through function calls and returns. Why might we not want to use this?
predicate stepIn(Call c, DataFlow::Node arg, DataFlow::ParameterNode parm) {
  exists(int i | arg.asExpr() = c.getArgument(i) |
    parm.asParameter() = c.getTarget().getParameter(i))
}
predicate stepOut(Call c, DataFlow::Node ret, DataFlow::Node res) {
  exists(ReturnStmt retStmt | retStmt.getEnclosingFunction() = c.getTarget() |
    ret.asExpr() = retStmt.getExpr() and res.asExpr() = c)
}
predicate flowStep(DataFlow::Node pred, DataFlow::Node succ) {
  DataFlow::localFlowStep(pred, succ) or
  stepIn(_, pred, succ) or
  stepOut(_, pred, succ)
}
## Balancing calls and returns
• If we simply take flowStep*, we might mismatch calls and returns, causing imprecision, which in turn may cause false positives.
• Instead, make sure that matching stepIn/stepOut pairs talk about the same call site:
predicate balancedPath(DataFlow::Node src, DataFlow::Node snk) {
  src = snk or DataFlow::localFlowStep(src, snk) or
  exists(DataFlow::Node m | balancedPath(src, m) | balancedPath(m, snk)) or
  exists(Call c, DataFlow::Node parm, DataFlow::Node ret |
    stepIn(c, src, parm) and
    balancedPath(parm, ret) and
    stepOut(c, ret, snk)
  )
}
## Summary-based global data flow
• To avoid traversing the same paths many times, we compute function summaries that record if a function parameter flows into a return value:
predicate returnsParameter(Function f, int i) {
  exists(Parameter p, ReturnStmt retStmt, Expr ret |
    p = f.getParameter(i) and
    retStmt.getEnclosingFunction() = f and
    ret = retStmt.getExpr() and
    balancedPath(DataFlow::parameterNode(p), DataFlow::exprNode(ret))
  )
}
• Use this predicate in balancedPath instead of stepIn/stepOut pairs.
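The payoff of summaries can be sketched in plain Python (hypothetical summary table, not the actual library implementation): once each function's parameter-to-return summary is known, a call site is processed with a single lookup instead of a fresh traversal of the callee's body.

```python
# Hypothetical precomputed summaries:
# function name -> set of parameter indices that flow to its return value.
summaries = {
    "identity": {0},        # return x
    "second":   {1},        # return y
    "constant": set(),      # return 42
}

def call_flows(fn, tainted_args):
    """Given which argument positions are tainted at a call site,
    decide whether taint reaches the call's result. One set
    intersection per call site, independent of the callee's size."""
    return bool(summaries[fn] & tainted_args)

print(call_flows("identity", {0}))  # True
print(call_flows("second", {0}))    # False: taint is in arg 0, not arg 1
print(call_flows("constant", {0}))  # False: nothing flows to the return
```

Because the summary is computed once per function and reused at every call site, the same intraprocedural paths are never traversed twice.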
https://www.physicsforums.com/threads/contra-variant-and-co-variant-vectors.706030/
|
# Contra-variant and Co-variant vectors
1. Aug 17, 2013
### exmarine
This should be a simple question for you guys. I am trying to construct trivial examples of contravariant and covariant vectors. Suppose I have a 2D Cartesian system with equal units along the x and y axes, and a vector A with components (2,1). Suppose my primed 2D Cartesian system is parallel, but the units along the x' axis are twice as long as those in the unprimed (and y') axes. I think my A' vector has components (1,1). Is that correct? Is that then a contravariant vector? What is an example of a covariant vector?
Thanks.
2. Aug 17, 2013
### Bill_K
If the units are twice as long, then the value of the x' coordinate is half as great: x' = x/2. Then
∂x'/∂x = 1/2 and ∂x/∂x' = 2
If the vector A is contravariant, then A^{x'} = (∂x'/∂x) A^x = 1
If the vector A is covariant, then A_{x'} = (∂x/∂x') A_x = 4
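These two transformation rules can be checked numerically. A plain-Python sketch of the example above, for the stretch x' = x/2 (x-axis units twice as long, y unchanged):

```python
dxp_dx = 0.5   # ∂x'/∂x
dx_dxp = 2.0   # ∂x/∂x'

A = (2, 1)  # components in the unprimed frame

# Contravariant components transform with ∂x'/∂x:
A_contra = (dxp_dx * A[0], A[1])
# Covariant components transform with ∂x/∂x':
A_co = (dx_dxp * A[0], A[1])

print(A_contra)  # (1.0, 1)
print(A_co)      # (4.0, 1)
```

This reproduces the values above: the contravariant x-component becomes 1 and the covariant x-component becomes 4.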
3. Aug 17, 2013
### exmarine
Ok thanks.
Then follow-on question: What could a covariant vector ever be used for? What could it represent? (I am reading GRT textbooks, so I'll eventually run into it I guess.)
5. Aug 17, 2013
### Staff: Mentor
Trivial example:
If you transform from a coordinate system in which distance is measured in meters to one in which it is measured in kilometers ($x'=\frac{x}{1000}$) the $x$ coordinate of a contravariant upper-index vector such as velocity will be smaller by a factor of 1000; if an object's velocity was 1000 m/sec in the old coordinate system it will be 1 km/sec in the new one.
A covariant quantity would be something like altitude change per meter, which transforms the other way. If the altitude changes by 1 cm per meter of horizontal distance you cover, it will change by 1000 cm per kilometer after the transformation. From this, you might correctly conclude that the gradient is an example of a useful covector.
It's worth the exercise of writing down the metric tensor for two-dimensional cartesian space (it's just the 2x2 identity matrix) and then applying the tensor coordinate transformation rule for this trivial coordinate transform, just to see how $g_{ij}$ differs from $g^{ij}$ after the transform.
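Here is a small numerical version of that exercise (plain Python, no tensor library; the indices are handled by explicit summation), using the stretch x' = x/2 from the earlier posts:

```python
# Jacobians of the coordinate change x' = x/2, y' = y:
J = [[2.0, 0.0], [0.0, 1.0]]      # ∂x^a/∂x'^i  (old w.r.t. new)
Jinv = [[0.5, 0.0], [0.0, 1.0]]   # ∂x'^i/∂x^a  (new w.r.t. old)

g = [[1.0, 0.0], [0.0, 1.0]]      # g_ab: identity metric in 2D Cartesian

def transform(g, M):
    """g'_{ij} = M_i^a M_j^b g_{ab} (same contraction for upper indices)."""
    n = len(g)
    return [[sum(M[i][a] * M[j][b] * g[a][b]
                 for a in range(n) for b in range(n))
             for j in range(n)] for i in range(n)]

g_lower = transform(g, J)     # covariant metric transforms with ∂x/∂x'
g_upper = transform(g, Jinv)  # contravariant metric transforms with ∂x'/∂x

print(g_lower)  # [[4.0, 0.0], [0.0, 1.0]]
print(g_upper)  # [[0.25, 0.0], [0.0, 1.0]]
```

In the new coordinates $g_{ij}$ and $g^{ij}$ are no longer equal, even though both started as the identity; their product is still the identity, as it must be.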
https://motls.blogspot.com/2005/12/pure-heterotic-mssm.html
|
## Friday, December 16, 2005
### Pure heterotic MSSM
As announced in October here, Braun, He, Ovrut, and Pantev have finally found an exact MSSM constructed from heterotic string theory on a specific Calabi-Yau.
The model has the Standard Model group plus the U(1)_{B-L}, three generations of quarks and leptons including the right-handed neutrino, and exactly one pair of Higgs doublets which is the right matter content to obtain gauge coupling unification.
By choosing a better gauge bundle - with some novel tricks involving the ideal sheaves - they got rid of the second Higgs doublet. While they use the same Calabi-Yau space with h^{1,1} = h^{1,2} = 3, i.e. with 6 complex geometric moduli, they now only have 13 (instead of 19) complex bundle moduli.
The probability that this model describes reality is roughly 10^{450} times bigger than the probability for a generic flux vacuum, for example the vacua that Prof. Susskind uses in his anthropic interview in New Scientist. ;-)
#### snail feedback (1) :
hmm, assuming that reality is just the representation of the particles and such. how about the interactions and everything else?
a high school physics book describes reality better.
there's no such thing as the probablity a theory describes reality. how about the probablity of that you are actually yourself? it either describes or not, yes or no.
a theory of reality is not a photograph of reality either. the standard model is more appealing at this level.
if there's nothing at the level of insights out of this theory, we might just treat it as amusing entertainment, which many people are I guess.
don't throw too much tomatos and eggs, please.
merry christmas
https://www.tilmanngrawe.com/viewtopic.php?c6ce8c=mixed-effects-model-r
|
A fixed effect is a parameter that does not vary. For example, we may assume there is some true regression line in the population, $$\beta$$, and we get some estimate of it, $$\hat{\beta}$$. In contrast, random effects are parameters that are themselves random variables. Mixed-effects models combine both kinds of effect, and are commonly fit in R with the lme4 package (lmer for linear and glmer for generalized linear mixed models) or the nlme package (lme for linear and nlme for nonlinear models). Such models suit data encountered in a variety of fields including biostatistics, public health, psychometrics, educational measurement, and sociology. A typical case is longitudinal data, which appear when subjects are followed over time and measurements are collected at intervals.
The R model interface is quite a simple one, with the dependent variable specified first, followed by the ~ symbol and the named predictor variables; addition signs indicate that these are modeled as additive effects. Finally, we specify the data frame on which to fit the model (use the data= argument to structure your model-fitting process). Generic functions such as print, plot, and summary have methods to show the results of the fit, and resid, coef, fitted, fixed.effects, and random.effects can be used to extract some of its components. See "Fitting Linear Mixed-Effects Models using lme4" (in particular Section 2.2, "Understanding mixed-model formulas") for the formula syntax.
By default, an analysis of variance for a mixed model does not test the significance of the random effects in the model. The effect of a random term can instead be tested by comparing the model to a model including only the fixed effects and excluding the random effects, or with the rand function from the lmerTest package if the model is specified with lme4. For diagnostics, the influence generic function computes deletion influence diagnostics for linear mixed-effects models fit by lmer (lme4) or lme (nlme) and for generalized linear mixed-effects models fit by glmer. To better understand effects of moderations in regression models, plot regression lines (predicted values) or probability lines (predicted probabilities) of significant interaction terms; fixed-effect estimates can be represented graphically with the coefplot or coefplot2 packages on CRAN.
The basics of random intercepts and slopes models, crossed vs. nested models, repeated-measures analysis as a special case of mixed-effect modeling, and extensions into generalized mixed models are covered in "Linear Mixed-effects Models Using R" by Andrzej Galecki and Tomasz Burzykowski (Springer), a book that covers the material in depth and has clear instructions on how to program in R.
https://hal.inria.fr/hal-01092328
|
Control of a Bioreactor with Quantized Measurements
1 BIOCORE - Biological control of artificial ecosystems
LOV - Laboratoire d'océanographie de Villefranche, CRISAM - Inria Sophia Antipolis - Méditerranée , INRA - Institut National de la Recherche Agronomique
Abstract: We consider the problem of global stabilization of an unstable bioreactor model (e.g. for anaerobic digestion), when the measurements are discrete and in finite number ("quantized"), with control of the dilution rate. The model is a differential system with two variables, and the output is the biomass growth. The measurements define regions in the state space, and they can be perfect or uncertain (i.e. without or with overlaps). We show that a quantized control may lead to global stabilization: trajectories have to follow some transitions between the regions, until the final region where they converge toward the reference equilibrium. On the boundary between regions, the solutions are defined as a Filippov differential inclusion.
Document type: Book chapter
In: Fages, François and Piazza, Carla (eds.). Formal Methods in Macro-Biology, 8738, Springer International Publishing, pp. 47-62, 2014, Lecture Notes in Computer Science, doi:10.1007/978-3-319-10398-3_5
Cited literature: [22 references]
https://hal.inria.fr/hal-01092328
Contributor: Jean-Luc Gouzé
Submitted on: Monday, December 15, 2014 - 19:12:25
Last modified on: Thursday, January 11, 2018 - 16:41:49
Long-term archiving on: Monday, March 16, 2015 - 12:45:27
File: hybrid_bioreactor_v6.pdf (produced by the author(s))
Citation
Francis Mairet, Jean-Luc Gouzé. Control of a Bioreactor with Quantized Measurements. In: Fages, François and Piazza, Carla (eds.). Formal Methods in Macro-Biology, 8738, Springer International Publishing, pp. 47-62, 2014, Lecture Notes in Computer Science, doi:10.1007/978-3-319-10398-3_5. hal-01092328
http://mathoverflow.net/questions/68174/equilibria-exist-in-compact-convex-forward-invariant-sets
|
Equilibria Exist in Compact Convex Forward-Invariant Sets
Theorem. Consider a continuous map $f : {\mathbb{R}}^{n} \rightarrow {\mathbb{R}}^{n}$ and suppose that the autonomous dynamical system $\dot{x} = f(x)$ has a semiflow $\varphi : {\mathbb{R}}_{\geq{0}} \times {\mathbb{R}}^{n} \rightarrow {\mathbb{R}}^{n}$. Let $K \subseteq {\mathbb{R}}^{n}$. If $K$ is nonempty, compact, convex and forward-invariant, then $K$ contains an equilibrium of the dynamical system, i.e. a zero of the map $f$.
According to a reliable source, the above theorem is a standard result everyone uses in dynamical systems without proof. I propose a proof in "Equilibria Exist in Compact Convex Forward-Invariant Sets" at http://math.GillesGnacadja.info/files/EquilExists.html. I am interested in comments on this proof, in references to this or other proofs in the literature, and in new/better proofs.
Shouldn't you require $f$ to be more than continuous (e.g. Lipschitz)? Currently $f$ doesn't (uniquely) define a (semi-)flow, for example when $f(x) = \sqrt{x}$ in a neighborhood of $x \ge 0$. – Jaap Eldering Jun 21 '11 at 13:33
Thanks Jaap, for catching and illustrating this insufficiency. I changed the statement. Now I explicitly require the existence of the semiflow. In my intended application, the map $f$ is a polynomial describing the kinetics of a chemical reaction network and time runs from zero to infinity. So I believe it would be too strong to require (global) Lipschitz continuity and too weak to require local Lipschitz continuity. Thanks again. – Gilles Gnacadja Jun 22 '11 at 1:37
A colleague showed me an article that essentially has the result: "The Brouwer Fixed Point Theorem Applied to Rumour Transmission", dx.doi.org/10.1016/j.aml.2006.02.007. The article is dated 2005/2006. There have to be earlier references. – Gilles Gnacadja Aug 11 '12 at 22:48
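A numerical illustration (not a proof) of the theorem, under the assumption of a simple linear vector field of my own choosing whose unit disk is forward-invariant: the time-T flow map sends K into itself, so Brouwer's theorem guarantees a fixed point; iterating the flow converges to it, which here is the equilibrium f(x) = 0.

```python
def f(x):
    # A contracting rotation; d/dt |x|^2 = -2|x|^2 < 0, so the unit
    # disk K is forward-invariant and 0 is the unique equilibrium.
    return (-x[0] - x[1], x[0] - x[1])

def flow(x, T=1.0, steps=1000):
    """Forward-Euler approximation of the time-T flow map."""
    h = T / steps
    for _ in range(steps):
        dx = f(x)
        x = (x[0] + h * dx[0], x[1] + h * dx[1])
    return x

x = (0.9, 0.0)          # start inside K
for _ in range(50):     # iterate the time-1 flow map 50 times
    x = flow(x)

print(abs(x[0]) < 1e-6 and abs(x[1]) < 1e-6)  # converged to the equilibrium
```

For this contracting example the fixed point of the flow map is attracting, so iteration finds it; in general Brouwer only gives existence, not an algorithm.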
https://www.elastic.co/blog/how-to-monitor-containerized-kafka-with-elastic-observability
|
Tech Topics
# How to monitor containerized Kafka with Elastic Observability
Kafka is a distributed, highly available event streaming platform which can be run on bare metal, virtualized, containerized, or as a managed service. At its heart, Kafka is a publish/subscribe (or pub/sub) system, which provides a "broker" to dole out events. Publishers post events to topics, and consumers subscribe to topics. When a new event is sent to a topic, consumers that subscribe to the topic will receive a new event notification. This allows multiple clients to be notified of activity without the publisher needing to know who or what is consuming the events that it publishes. For example, when a new order comes in, a web store may publish an event with order details, which could be picked up by consumers in the order picking department to let them know what to pull from the shelves, and by consumers in the shipping department to print a label, or any other interested party to take action. Depending on how you configure consumer groups and partitions, you can control which consumers get new messages.
Kafka is usually deployed alongside ZooKeeper, which it uses to store configuration information such as topics, partitions, and replica/redundancy information. When monitoring Kafka clusters it is equally important to monitor the associated ZooKeeper instances as well— if ZooKeeper has issues they will propagate to the Kafka cluster.
There are many ways to use Kafka alongside the Elastic Stack. You can configure Metricbeat or Filebeat to send data to Kafka topics, you can send data from Kafka to Logstash or from Logstash to Kafka, or you can use Elastic Observability to monitor Kafka and ZooKeeper so you can keep a close eye on your cluster, which is what this blog will cover. Remember the "order detail" events mentioned above? Logstash, using the Kafka input plugin, can also subscribe to those events and bring the data into your Elasticsearch cluster. By adding business data (or any other data that you need to truly understand what is happening in your environment) you increase the observability of your systems.
## Things to look for when monitoring Kafka
Kafka has several moving parts — there is the service itself, which usually consists of multiple brokers and ZooKeeper instances, as well as the clients that use Kafka, the producers and consumers. There are multiple types of metrics that Kafka provides, some via the brokers themselves, and others via JMX. The broker provides metrics for the partitions and consumer groups. Partitions let you split messages across multiple brokers, parallelizing the processing. Consumers receive messages from a single topic partition, and can be grouped together to consume all of the messages from a topic. These consumer groups allow you to split the load across multiple workers.
Kafka messages each have an offset. The offset is basically an identifier indicating where the message is in the message sequence. Producers add messages to the topics, each getting a new offset. The newest offset in a partition shows the latest ID. Consumers receive the messages from the topics, and the difference between the newest offset and the offset the consumer receives is the consumer lag. Invariably, the consumers will be a bit behind the producers. What to look out for is when the consumer lag is perpetually increasing, as this indicates that you probably need more consumers to process the load.
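The lag computation itself is simple arithmetic. A minimal sketch in plain Python, with hypothetical offset snapshots (in practice these values come from the broker, e.g. via the kafka-consumer-groups tool or the Metricbeat Kafka module):

```python
def consumer_lag(log_end_offsets, committed_offsets):
    """Per-partition lag: the newest (log-end) offset minus the
    consumer group's committed offset for that partition."""
    return {p: log_end_offsets[p] - committed_offsets.get(p, 0)
            for p in log_end_offsets}

# Hypothetical snapshot of a 3-partition topic:
newest    = {0: 1500, 1: 1498, 2: 1502}
committed = {0: 1500, 1: 1430, 2: 1010}

lag = consumer_lag(newest, committed)
print(lag)                      # {0: 0, 1: 68, 2: 492}
print(max(lag.values()) > 100)  # partition 2 is falling behind
```

A single snapshot like this is not alarming by itself; it is a perpetually increasing lag over time that signals you need more consumers.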
When looking at metrics for the topics themselves it is important to look for any topics that don't have any consumers, as it might indicate that something that should be running isn't.
We'll go over some additional key metrics for the brokers once we've got everything all set up.
## Setting up Kafka and ZooKeeper
In our example we're running a containerized Kafka cluster based on the Confluent Platform, ramped up to three Kafka brokers (cp-server images), alongside a single ZooKeeper instance. In practice you'd probably also want to use a more robust, highly-available configuration for ZooKeeper as well.
I've cloned their setup and switched to the cp-all-in-one directory:
git clone https://github.com/confluentinc/cp-all-in-one.git
cd cp-all-in-one
Everything else in this blog is done from that cp-all-in-one directory.
In my setup I've tweaked the ports to make it easier to tell which port goes with which broker (they need different ports because each is exposed to the host) — for example, broker3 is on port 9093. I've also changed the name of the first broker to broker1 for consistency. You can see the complete file, before instrumentation, in my GitHub fork of the official repository.
The configuration for broker1 after the port realignment looks like this:
broker1:
  image: confluentinc/cp-server:6.1.0
  hostname: broker1
  container_name: broker1
  depends_on:
    - zookeeper
  ports:
    - "9091:9091"
    - "9101:9101"
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
    KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
    KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR: 1
    KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
    KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
    KAFKA_JMX_PORT: 9101
    KAFKA_JMX_HOSTNAME: localhost
    KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL: http://schema-registry:8081
    CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker1:29092
    CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
    CONFLUENT_METRICS_ENABLE: 'true'
    CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
As you can see, I've also changed the hostname occurrences from broker to broker1. Of course, any other configuration blocks in the docker-compose.yml that reference broker will also get changed to reflect all three nodes of our cluster, for example, and the Confluent control center now depends on all three brokers:
control-center:
  image: confluentinc/cp-enterprise-control-center:6.1.0
  hostname: control-center
  container_name: control-center
  depends_on:
    - broker1
    - broker2
    - broker3
    - schema-registry
    - connect
    - ksqldb-server
  (...)
## Gathering logs & metrics
My Kafka and ZooKeeper services are running in containers, initially with three brokers. If I scale that up or down, or decide to make the ZooKeeper side more robust, I don't want to have to reconfigure and restart my monitoring — I want it to happen dynamically. To accomplish this we'll run the monitoring in Docker containers as well, alongside the Kafka cluster, and leverage Elastic Beats hints-based autodiscover.
### Hints-based autodiscover
For monitoring, we'll be gathering logs and metrics from our Kafka brokers and the ZooKeeper instance. We'll use Metricbeat for the metrics, and Filebeat for the logs, both running in containers. To bootstrap this process, we need to download the Docker-flavor configuration files for each, metricbeat.docker.yml and filebeat.docker.yml. I will be sending this monitoring data to my Elastic Observability deployment on the Elasticsearch Service on Elastic Cloud (if you'd like to follow along you can sign up for a free trial). If you'd prefer to manage your cluster yourself you can download the Elastic Stack for free and run it locally — I've included instructions for both scenarios.
Whether you're using a deployment on Elastic Cloud or running a self-managed cluster you'll need to specify how to find the cluster — the Kibana and Elasticsearch URLs, and credentials that allow you to log on to the cluster. The Kibana endpoint lets us load default dashboards and configuration information, and Elasticsearch is where the Beats send the data. With Elastic Cloud, the Cloud ID wraps the endpoint information together:
When you create a deployment on Elastic Cloud you are provided a password for the elastic user. In this blog I'll just use these credentials for simplicity, but best practice is to create API keys or users and roles with the least privileges needed for the task.
Let's go ahead and load the default dashboards for both Metricbeat and Filebeat. This only needs to be done once, and is similar for each Beat. To load the Metricbeat collateral, run:
docker run --rm \
--name=metricbeat-setup \
--volume="$(pwd)/metricbeat.docker.yml:/usr/share/metricbeat/metricbeat.yml:ro" \ docker.elastic.co/beats/metricbeat:7.11.1 \ -E cloud.id=elastic-observability-deployment:dXMtY2VudHJhbDEuZ2NwLmNsb3VkLmV... \ -E cloud.auth=elastic:some-long-random-password \ -e -strict.perms=false \ setup This command will create a Metricbeat container (called metricbeat-setup), load up the metricbeat.docker.yml file we downloaded, connect to the Kibana instance (which it gets from the cloud.id field), and run the setup command, which will load the dashboards. If you're not using Elastic Cloud, you'd instead provide the Kibana and Elasticsearch URLs via setup.kibana.host and output.elasticsearch.hosts fields, along with individual credential fields, which would look something like this: docker run --rm \ --name=metricbeat-setup \ --volume="$(pwd)/metricbeat.docker.yml:/usr/share/metricbeat/metricbeat.yml:ro" \
docker.elastic.co/beats/metricbeat:7.11.1 \
-E setup.kibana.host=localhost:5601 \
-E output.elasticsearch.hosts=localhost:9200 \
-e -strict.perms=false \
setup
The -e -strict.perms=false helps mitigate an inevitable Docker file ownership/permission issue.
To set up the logs collateral, you'd run a similar command for Filebeat:
docker run --rm \
--name=filebeat-setup \
--volume="$(pwd)/filebeat.docker.yml:/usr/share/filebeat/filebeat.yml:ro" \ docker.elastic.co/beats/filebeat:7.11.1 \ -E cloud.id=elastic-observability-deployment:dXMtY2VudHJhbDEuZ2NwLmNsb3VkLmV... \ -E cloud.auth=elastic:some-long-random-password \ -e -strict.perms=false \ setup By default, these configuration files are set up to monitor generic containers, gathering container logs and metrics. This is helpful to some extent, but we want to make sure that they also capture service-specific logs and metrics. To do this, we'll be configuring our Metricbeat and Filebeat containers to use autodiscover, as mentioned above. There are a couple of different ways to do this. We could set up the Beats configurations to look for specific images or names, but that requires knowing a lot up front. Instead, we'll use hints-based autodiscovery, and let the containers themselves instruct the Beats how to monitor them. With hints-based autodiscover, we add labels to the Docker containers. When other containers start up, the metricbeat and filebeat containers (which we haven't started yet) get a notification which allows them to start monitoring. We want to set up the broker containers so they get monitored by the Kafka Metricbeat and Filebeat modules, and we also want Metricbeat to use the ZooKeeper module for metrics. Filebeat will collect the ZooKeeper logs without any special parsing. The ZooKeeper configuration is more straightforward than Kafka, so we'll start there. 
The initial configuration in our docker-compose.yml for ZooKeeper looks like this:

zookeeper:
  image: confluentinc/cp-zookeeper:6.1.0
  hostname: zookeeper
  container_name: zookeeper
  ports:
    - "2181:2181"
  environment:
    ZOOKEEPER_CLIENT_PORT: 2181
    ZOOKEEPER_TICK_TIME: 2000

We want to add a labels block to the YAML to specify the module, the connection information, and the metricsets, which looks like this:

labels:
  - co.elastic.metrics/module=zookeeper
  - co.elastic.metrics/hosts=zookeeper:2181
  - co.elastic.metrics/metricsets=mntr,server

These tell the metricbeat container that it should use the zookeeper module to monitor this container, that it can access it via the host/port zookeeper:2181 (the port ZooKeeper is configured to listen on), and that it should use the mntr and server metricsets from the ZooKeeper module.

As a side note, recent versions of ZooKeeper lock down some of what they call "four letter words", so we also need to add the srvr and mntr commands to the approved list in our deployment via KAFKA_OPTS. Once we do that, the ZooKeeper configuration in the compose file looks like this:

zookeeper:
  image: confluentinc/cp-zookeeper:6.1.0
  hostname: zookeeper
  container_name: zookeeper
  ports:
    - "2181:2181"
  environment:
    ZOOKEEPER_CLIENT_PORT: 2181
    ZOOKEEPER_TICK_TIME: 2000
    KAFKA_OPTS: "-Dzookeeper.4lw.commands.whitelist=srvr,mntr"
  labels:
    - co.elastic.metrics/module=zookeeper
    - co.elastic.metrics/hosts=zookeeper:2181
    - co.elastic.metrics/metricsets=mntr,server

Capturing logs from the brokers is pretty straightforward; we just add a label to each of them for the logging module, co.elastic.logs/module=kafka. For the broker metrics it's a little more complicated. There are five different metricsets in the Metricbeat Kafka module:

• Consumer Group metrics
• Partition metrics
• Broker metrics
• Consumer metrics
• Producer metrics

The first two sets of metrics come from the brokers themselves, while the last three come via JMX.
The last two, consumer and producer, are only applicable to Java-based consumers and producers (the clients to the Kafka cluster) respectively, so we won't be covering those (but they follow the same patterns that we'll be going over). Let's tackle the first two first, because they're configured the same way. The initial Kafka configuration in our compose file for broker1 looks like this:

broker1:
  image: confluentinc/cp-server:6.1.0
  hostname: broker1
  container_name: broker1
  depends_on:
    - zookeeper
  ports:
    - "9091:9091"
    - "9101:9101"
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker1:29091,PLAINTEXT_HOST://broker1:9091
    KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
    KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
    KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR: 1
    KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
    KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
    KAFKA_JMX_PORT: 9101
    KAFKA_JMX_HOSTNAME: broker1
    KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL: http://schema-registry:8081
    CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker1:29091
    CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
    CONFLUENT_METRICS_ENABLE: 'true'
    CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'

Similar to the configuration for ZooKeeper, we need to add labels to tell Metricbeat how to gather the Kafka metrics:

labels:
  - co.elastic.logs/module=kafka
  - co.elastic.metrics/module=kafka
  - co.elastic.metrics/metricsets=partition,consumergroup
  - co.elastic.metrics/hosts='$${data.container.name}:9091'

This sets up the Metricbeat and Filebeat kafka modules to gather Kafka logs and the partition and consumergroup metrics from the container, broker1 on port 9091.
Note that I've used a variable, $${data.container.name} (escaped with a double dollar sign), rather than the hostname; you can use whichever pattern you prefer. We need to repeat this for each broker, adjusting the port 9091 for each (which is why I aligned them at the start: we'd use 9092 and 9093 for brokers 2 and 3, respectively).

We can start the Confluent cluster by running docker-compose up --detach, and we can also now start up Metricbeat and Filebeat so they begin gathering Kafka logs and metrics. When the cp-all-in-one Kafka cluster comes up, it creates and runs in its own virtual network, cp-all-in-one_default. Because we're using service/host names in our labels, Metricbeat needs to run in the same network so it can resolve the names and connect correctly. To start Metricbeat we include the network name in the run command:

docker run -d \
--name=metricbeat \
--user=root \
--network cp-all-in-one_default \
--volume="$(pwd)/metricbeat.docker.yml:/usr/share/metricbeat/metricbeat.yml:ro" \
--volume="/var/run/docker.sock:/var/run/docker.sock:ro" \
docker.elastic.co/beats/metricbeat:7.11.1 \
-E cloud.id=elastic-observability-deployment:dXMtY2VudHJhbDEuZ2NwLmNsb3VkLmV... \
-E cloud.auth=elastic:some-long-random-password \
-e -strict.perms=false
Filebeat's run command is similar, but doesn't require the network, because it reads the logs directly from the Docker host rather than connecting to the other containers:
docker run -d \
--name=filebeat \
--user=root \
--volume="$(pwd)/filebeat.docker.yml:/usr/share/filebeat/filebeat.yml:ro" \ --volume="/mnt/data/docker/containers:/var/lib/docker/containers:ro" \ --volume="/var/run/docker.sock:/var/run/docker.sock:ro" \ docker.elastic.co/beats/filebeat:7.11.1 \ -E cloud.id=elastic-observability-deployment:dXMtY2VudHJhbDEuZ2NwLmNsb3VkLmV... \ -E cloud.auth=elastic:some-long-random-password \ -e -strict.perms=false In each case, we load the configuration YAML file, map the docker.sock file from the host to the container, and include the connectivity information (if you're running a self-managed cluster, grab the credentials that you used when loading the collateral). Note that if you're running on Docker Desktop on a Mac you won't have access to the logs, because they're stored inside the virtual machine. ### Visualizing Kafka and ZooKeeper performance and history Now we're capturing service-specific logs from our Kafka brokers, and logs and metrics from Kafka and ZooKeeper. If you navigate to the dashboards in Kibana and filter you should see dashboards for Kafka, Including the Kafka logs dashboard: And the Kafka metrics dashboard: There is also a dashboard for ZooKeeper metrics. Additionally, your Kafka and ZooKeeper logs are available in the Logs app in Kibana, allowing you to filter, search, and break them down: While the Kafka and ZooKeeper containers' metrics can be browsed using the Metrics app in Kibana, shown here grouped by service type: ### Broker metrics Let's jump back and also gather metrics from the broker metricset in the kafka module. I mentioned earlier that those metrics are retrieved from JMX. The broker, producer, and consumer metricsets leverage Jolokia, a JMX to HTTP bridge, under the covers. 
The broker metricset is also part of the kafka module, but because it uses JMX it has to use a different port than the consumergroup and partition metricsets, which means we need a new block in the labels for our brokers, similar to the annotation configuration for multiple sets of hints. We also need to include the jar for Jolokia; we'll add that to the broker containers via a volume, and set it up as well. According to the download page, the current version of the Jolokia JVM agent is 1.6.2, so we'll grab that (the -OL tells cURL to save the file as the remote name, and to follow redirects):

curl -OL https://search.maven.org/remotecontent?filepath=org/jolokia/jolokia-jvm/1.6.2/jolokia-jvm-1.6.2-agent.jar

We add a section to the configuration for each of the brokers to attach the JAR file to the containers:

volumes:
  - ./jolokia-jvm-1.6.2-agent.jar:/home/appuser/jolokia.jar

And specify KAFKA_JMX_OPTS to attach the JAR as a Java agent (note that the ports are per broker, so it's 8771-8773 for brokers 1, 2, and 3):

KAFKA_JMX_OPTS: '-javaagent:/home/appuser/jolokia.jar=port=8771,host=broker1 -Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false'

I am not using any kind of authentication for this, so I have to add a couple of flags to the launch. Notice that the jolokia.jar file path in the KAFKA_JMX_OPTS matches the path on the volume.

We need to make a couple more minor tweaks. Because we're using Jolokia, we no longer need to expose the KAFKA_JMX_PORT in the ports section. Instead, we'll expose the port that Jolokia is listening on, 8771. We'll also remove the KAFKA_JMX_* values from the configuration. If we restart our Kafka cluster (docker-compose up --detach) we'll start to see the broker metrics showing up in our Elasticsearch deployment.
If I jump over to the Discover tab, select the metricbeat-* index pattern, and search for metricset.name : "broker", I can see that I indeed have data:

The structure of the broker metrics looks kind of like this:

kafka
└─ broker
    ├── address
    ├── id
    ├── log
    │   └── flush_rate
    ├── mbean
    ├── messages_in
    ├── net
    │   ├── in
    │   │   └── bytes_per_sec
    │   ├── out
    │   │   └── bytes_per_sec
    │   └── rejected
    │       └── bytes_per_sec
    ├── replication
    │   ├── leader_elections
    │   └── unclean_leader_elections
    └── request
        ├── channel
        │   ├── fetch
        │   │   ├── failed
        │   │   └── failed_per_second
        │   ├── produce
        │   │   ├── failed
        │   │   └── failed_per_second
        │   └── queue
        │       └── size
        ├── session
        │   └── zookeeper
        │       ├── disconnect
        │       ├── expire
        │       ├── readonly
        │       └── sync
        └── topic
            ├── messages_in
            └── net
                ├── in
                │   └── bytes_per_sec
                ├── out
                │   └── bytes_per_sec
                └── rejected
                    └── bytes_per_sec

Essentially coming out as name/value pairs, as indicated by the kafka.broker.mbean field. Let's look at the kafka.broker.mbean field from an example metric:

kafka.server:name=BytesOutPerSec,topic=_confluent-controlcenter-6-1-0-1-TriggerEventsStore-changelog,type=BrokerTopicMetrics

This contains the metric name (BytesOutPerSec), the Kafka topic that it refers to (_confluent-controlcenter-6-1-0-1-TriggerEventsStore-changelog), and the metric type (BrokerTopicMetrics). Depending on the metric type and name, different fields will be set. In this example, only the kafka.broker.topic.net.out.bytes_per_sec is populated (it's 0). If we look at this in a somewhat columnar fashion, you can see that the data is very sparse:

We can collapse this a bit if we add an ingest pipeline, to break the mbean field down into individual fields, which will also allow us to more easily visualize the data.
We're going to break it down into three fields:

• kafka_broker_metric (which would be BytesOutPerSec from the example above)
• kafka_broker_topic (which would be _confluent-controlcenter-6-1-0-1-TriggerEventsStore-changelog from the example above)
• kafka_broker_type (which would be BrokerTopicMetrics from the example above)

In Kibana, navigate over to Dev Tools:

Once there, paste in the following to define an ingest pipeline called kafka-broker-fields:

PUT _ingest/pipeline/kafka-broker-fields
{
  "processors": [
    {
      "grok": {
        "if": "ctx.kafka?.broker?.mbean != null",
        "field": "kafka.broker.mbean",
        "patterns": ["kafka.server:name=%{GREEDYDATA:kafka_broker_metric},topic=%{GREEDYDATA:kafka_broker_topic},type=%{GREEDYDATA:kafka_broker_type}"]
      }
    }
  ]
}

Then hit the "play" icon. You should end up with an acknowledgement, as shown above.

Our ingest pipeline is in place, but we haven't done anything with it yet. Our old data is still sparse and tricky to access, and new data is still coming in the same way. Let's address the latter part first. Open up the metricbeat.docker.yml file in your favorite text editor, add a line to the output.elasticsearch block (you can remove the hosts, username, and password config there if you aren't using it), and specify our pipeline, as such:

output.elasticsearch:
  pipeline: kafka-broker-fields

This tells Elasticsearch that each document that comes in should pass through this pipeline to check for our mbean field. Restart Metricbeat:

docker rm --force metricbeat
docker run -d \
--name=metricbeat \
--user=root \
--network cp-all-in-one_default \
--volume="$(pwd)/metricbeat.docker.yml:/usr/share/metricbeat/metricbeat.yml:ro" \
--volume="/var/run/docker.sock:/var/run/docker.sock:ro" \
docker.elastic.co/beats/metricbeat:7.11.1 \
-E cloud.id=elastic-observability-deployment:dXMtY2VudHJhbDEuZ2NwLmNsb3VkLmV... \
-E cloud.auth=elastic:some-long-random-password \
-e -strict.perms=false
We can verify in Discover that new documents have the new fields:
We can also update the older documents so they have these fields populated as well. Back in DevTools, run this command:
POST metricbeat-*/_update_by_query?pipeline=kafka-broker-fields
It will probably warn you that it timed out, but it's running in the background and will finish asynchronously.
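As a quick offline sanity check of the grok pattern (an illustrative aside, not part of the blog's setup), an equivalent regular expression in Python shows how an mbean string splits into the three fields:

```python
import re

# Regex equivalent of the grok pattern in the kafka-broker-fields pipeline
MBEAN_RE = re.compile(
    r"kafka\.server:name=(?P<kafka_broker_metric>.+),"
    r"topic=(?P<kafka_broker_topic>.+),"
    r"type=(?P<kafka_broker_type>.+)"
)

def split_mbean(mbean: str) -> dict:
    """Break a kafka.broker.mbean value into metric, topic, and type fields."""
    m = MBEAN_RE.match(mbean)
    return m.groupdict() if m else {}

fields = split_mbean(
    "kafka.server:name=BytesOutPerSec,"
    "topic=_confluent-controlcenter-6-1-0-1-TriggerEventsStore-changelog,"
    "type=BrokerTopicMetrics"
)
# fields["kafka_broker_metric"] == "BytesOutPerSec"
```

The same example document could also be fed to the pipeline's _simulate endpoint in Dev Tools to test the real grok processor.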
### Visualizing Broker metrics
Jump over to the Metrics app, and select the "Metrics Explorer" tab to take our new fields for a spin. Paste in kafka.broker.topic.net.in.bytes_per_sec and kafka.broker.topic.net.out.bytes_per_sec to see these plotted together:
And now, leveraging one of our new fields, open the "graph per" dropdown and select kafka_broker_topic:
Not everything will have non-zero values (there's not a lot going on in the cluster right now), but it's a lot easier to plot the broker metrics and break them down by topic now. You can export any of these graphs as visualizations and load them onto your Kafka metrics dashboard, or create your own visualizations using the variety of charts and graphs available in Kibana. If you'd prefer a drag-and-drop experience for visualization building, try Lens.
A good place to start with visualizations of broker metrics are the failures in produce and fetch blocks:
kafka
└─ broker
└── request
└── channel
├── fetch
│ ├── failed
│ └── failed_per_second
├── produce
│ ├── failed
│ └── failed_per_second
└── queue
└── size
The severity of failures here really depends on the use case. If the failures occur in an ecosystem where we are just getting intermittent updates — for example, stock prices or temperature readings, where we know that we'll get another one soon — a couple of failures might not be that bad, but if it's, say, an order system, dropping a few messages could be catastrophic, because it means that someone's not getting their shipment.
## Wrapping up
We can now monitor Kafka brokers and ZooKeeper using Elastic Observability. We've also seen how to leverage hints to allow you to automatically monitor new instances of containerized services, and learned how an ingest pipeline can make your data easier to visualize. Try it out today with a free trial to the Elasticsearch Service on Elastic Cloud, or download the Elastic Stack and run it locally.
If you're not running a containerized Kafka cluster, but instead are running it as a managed service or on bare metal, stay tuned. In the near future we'll be following up on this blog with a couple of related posts:
• How to monitor a standalone Kafka cluster with Metricbeat and Filebeat
• How to monitor a standalone Kafka cluster with Elastic Agent
https://forum.bebac.at/forum_entry.php?id=19760
## Power.TOST and "3x3" [R for BE/BA]
Dear all,
or perhaps mainly the PowerTOST authors and maintainers
thanks a lot for power.TOST and related functions. They are great.
I have a question about e.g.
> power.TOST(CV=.3, theta0=.95, n=30, design="3x3")
[1] 0.6973262
Is that the chance of showing BE for one of the comparisons, for two comparisons if we assume two are test formulations and one is the reference, or is it "all against all", which is also a common comparison?
Now, I am asking because theta0 seems to be a single number, not a vector.
Imagine we have formulations A, B and C; if the true ratios for A/B and B/C are 0.95 then in my little simple universe the true ratio for A/C is ~0.9 (0.95 × 0.95 = 0.9025). Therefore, I imagine it is not for all against all (3 comparisons in my case).
So I wonder what goes on behind the curtains in R when I give the command above and how I should interpret the output here. I read the documentation and did not see an obvious answer but I am also not so well versed with power.TOST.
Many thanks for any input.
Or output, depending on your perspective.
Pass or fail!
ElMaestro
http://hackage.haskell.org/package/hjugement-2.0.2.20190414/docs/Majority-Value.html
hjugement-2.0.2.20190414: Majority Judgment.
Majority.Value
Synopsis
# Type MajorityValue
A MajorityValue is a list of grades made from the successive lower middlemosts of a Merit, i.e. from the most consensual majorityGrade to the least.
For using less resources and generalizing to non-integral Shares, this MajorityValue is actually encoded as an Abbreviated Majority Value, instead of a big list of grades.
Constructors
Instances
## Type Middle
A centered middle of a Merit. Needed to handle the Fractional capabilities of a Share.
By construction in majorityValue, lowGrade is always lower or equal to highGrade.
Constructors
Middle
  Fields:
    middleShare :: Share: the same Share of lowGrade and highGrade.
    lowGrade :: grade
    highGrade :: grade
Instances
The majorityValue is the list of the Middles of the Merit of a choice, from the most consensual to the least.
The majorityGrade is the lower middlemost (also known as median by experts) of the grades given to a choice by the Judges.
It is the highest grade approved by an absolute majority of the Judges: more than 50% of the Judges give the choice at least a grade of majorityGrade, but every grade lower than majorityGrade is rejected by an absolute majority. Thus the majorityGrade of a choice is the final grade wished by the majority.
The majorityGrade is necessarily a word that belongs to grades, and it has an absolute meaning.
When the number of Judges is even, there is a middle-interval (which can, of course, be reduced to a single grade if the two middle grades are the same), then the majorityGrade is the lowest grade of the middle-interval (the “lower middlemost” when there are two in the middle), which is the only one which respects consensus: any other choice whose grades are all within this middle-interval, has a majorityGrade which is greater or equal to this lower middlemost.
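To make the "lower middlemost" concrete, here is a minimal Python sketch of the definition (an illustration only, not the library's Share-based implementation):

```python
def majority_grade(grades):
    """Lower middlemost (lower median) of a list of comparable grades."""
    s = sorted(grades)
    # for an even count, this index picks the lower of the two middle grades
    return s[(len(s) - 1) // 2]

majority_grade([4, 1, 3, 2])  # middle pair is (2, 3); the lower one is 2
```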
# Type MajorityRanking
The majorityRanking ranks all the choices on the basis of their grades.
Choice A ranks higher than choice B in the majorityRanking if and only if A’s majorityValue is lexicographically above B’s. There can be no tie unless two choices have precisely the same majorityValues.
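The lexicographic comparison can be sketched the same way: compute each choice's majority value as the list of successive lower middlemosts, then compare the lists (again a plain-list illustration assuming integer grades, not the library's Share encoding):

```python
def majority_value(grades):
    """Successive lower middlemosts: repeatedly remove the lower median."""
    s = sorted(grades)
    value = []
    while s:
        value.append(s.pop((len(s) - 1) // 2))
    return value

# Higher lexicographic majority value ranks higher; ties occur only on
# identical majority values.
choices = {"A": [3, 3, 2, 1], "B": [3, 2, 2, 2]}
ranking = sorted(choices, key=lambda c: majority_value(choices[c]), reverse=True)
# Both choices share the majority grade 2, but A's majority value
# [2, 3, 1, 3] is lexicographically above B's [2, 2, 2, 3], so A ranks first.
```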
Expand a MajorityValue such that each grade has a Share of '1'.
WARNING: the resulting list of grades may have a different length than the list of grades used to build the Merit.
normalizeMajorityValue m multiplies all Shares by their least common denominator to get integral Shares.
https://mathematica.stackexchange.com/questions/183022/help-with-simplifying-function
# help with simplifying function
I am computing the 4th order Taylor approximation of this function $$\big(\frac{b}{g-x}\big)^{0.25}$$
The analytic textbook result is:
$$b^{0.25} \big(\frac{1}{g^{0.25}}+\frac{0.25 x}{g^{1.25}}+\frac{0.15625x^2}{g^{2.25}}+\frac{0.117188 x^3}{g^{3.25}}+\frac{0.0952148 x^4}{g^{4.25}}\big)$$
When I do it in Mathematica:
nonLin = (b/(g - x))^0.25
taylorLin = Normal[Series[nonLin , {x, 0, 4}] // Simplify]
I get this:
$$x^4 \left(\frac{0.126465 b^4}{g^8 \left(\frac{b}{g}\right)^{3.75}}-\frac{0.03125 \left(\frac{b}{g}\right)^{0.25}}{g^4}\right)+\frac{0.117188 b^3 x^3}{g^6 \left(\frac{b}{g}\right)^{2.75}}+\frac{b x^2 \left(\frac{0.25 g}{\left(\frac{b}{g}\right)^{0.75}}-\frac{0.09375 b}{\left(\frac{b}{g}\right)^{1.75}}\right)}{g^4}+\frac{0.25 x \left(\frac{b}{g}\right)^{0.25}}{g}+\left(\frac{b}{g}\right)^{0.25}$$
Is there a method to obtain the same nice representation as the analytic textbook solution ? ... to simplify it somehow?
EDIT: If I use 1/4 in the exponent, the result looks much nicer. However, it could still be simplified:
nonLin = (b/(g - x))^(1/4)
taylorLin = Normal[Series[nonLin , {x, 0, 4}] // Simplify]
$$\frac{195 x^4 \sqrt[4]{\frac{b}{g}}}{2048 g^4}+\frac{15 x^3 \sqrt[4]{\frac{b}{g}}}{128 g^3}+\frac{5 x^2 \sqrt[4]{\frac{b}{g}}}{32 g^2}+\frac{x \sqrt[4]{\frac{b}{g}}}{4 g}+\sqrt[4]{\frac{b}{g}}$$
• Use 1/4 instead of 0.25 in the exponent. – J. M. will be back soon Oct 3 '18 at 10:01
• The question is not clear: the solution you have shown is already analytic. Can you show, what are you after? – Alexei Boulbitch Oct 3 '18 at 10:08
• @J.M.issomewhatokay. Thanks a lot for your comment. Please have a look at my edit. – james Oct 3 '18 at 10:09
• @AlexeiBoulbitch I want the output to look like the analytic equation from the text book.( I edited my question) – james Oct 3 '18 at 10:10
• Aha, then use please the advice of @J. M. is somewhat okay given above. – Alexei Boulbitch Oct 3 '18 at 10:19
Try this:
nonLin = (b/(g - x))^0.25;
taylorLin1 =
Simplify[Normal[Series[nonLin, {x, 0, 4}]], {b > 0, g > 0}];
taylorLin2 = b^0.25*Expand[taylorLin1/b^0.25]
The result looks as follows:
Done. Have fun!
• Thanks a lot !! – james Oct 3 '18 at 11:34
You can take a constant factor (b/g)^.25 out of the sum; then the sum is similar to what's in the book:
Normal[Series[(1/(1 - x/g))^.25, {x, 0, 4}]]
Out[]= 1. + (0.25 x)/g + (0.15625 x^2)/g^2 + (0.117188 x^3)/g^3 + (0.0952148 x^4)/g^4
This uses the approach from the reply found in the Factorization question linked here.
It does not give the EXACT same result, but it is a matter of just looking at the output and performing some minimal algebra, or simplifying terms by hand, at the end.
nonLin = (b/(g - x))^(1/4)
taylorLin = Series[nonLin, {x, 0, 4}]
sfactor[k_, p_, func_] :=
HoldForm[StandardForm[k]]*StandardForm[func@(p*1/k)]
sfactor[(b)^(1/4), taylorLin, Apart] // Normal
Instead of using Simplify afterwards, you can provide an assumption to Series:
Series[(b/(g - x))^0.25, {x, 0, 4}, Assumptions->b>0&&g>0] //TeXForm
$$b^{0.25} \left(\frac{1}{g}\right)^{0.25}+0.25 b^{0.25} \left(\frac{1}{g}\right)^{1.25} x+0.15625 b^{0.25} \left(\frac{1}{g}\right)^{2.25} x^2+0.1171875 b^{0.25} \left(\frac{1}{g}\right)^{3.25} x^3+0.09521484375 b^{0.25} \left(\frac{1}{g}\right)^{4.25} x^4+O\left(x^5\right)$$
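As an aside, the textbook coefficients can be cross-checked outside Mathematica; here is a quick sympy (Python) sketch of the underlying binomial series, not an answer to the Mathematica question itself:

```python
from sympy import Rational, series, symbols

t = symbols('t')
# (b/(g - x))^(1/4) = (b/g)^(1/4) * (1 - x/g)^(-1/4), so the textbook
# coefficients come from expanding (1 - t)^(-1/4) around t = 0
expansion = series((1 - t)**Rational(-1, 4), t, 0, 5).removeO()

[expansion.coeff(t, k) for k in range(5)]
# 1, 1/4, 5/32, 15/128, 195/2048, i.e. 1, 0.25, 0.15625, 0.117188, 0.0952148
```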
https://archive.lib.msu.edu/crcmath/math/math/t/t420.htm
Tschirnhausen Transformation
A transformation of a polynomial equation $f(x) = 0$ which is of the form $y = g(x)/h(x)$, where $g$ and $h$ are polynomials and $h$ does not vanish at a root of $f(x) = 0$. The cubic equation is a special case of such a transformation. Tschirnhaus (1683) showed that a polynomial of degree $n > 2$ can be reduced to a form in which the $x^{n-1}$ and $x^{n-2}$ terms have 0 coefficients. In 1786, E. S. Bring showed that a general quintic equation can be reduced to the form
$$x^5 + px + q = 0.$$
In 1834, G. B. Jerrard showed that a Tschirnhaus transformation can be used to eliminate the $x^{n-1}$, $x^{n-2}$, and $x^{n-3}$ terms for a general polynomial equation of degree $n > 3$.
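The simplest instance is the substitution that depresses a cubic. Substituting $x = y - a_2/3$ into the general monic cubic eliminates the quadratic term:

```latex
% Substituting x = y - a_2/3 into  x^3 + a_2 x^2 + a_1 x + a_0 = 0
% eliminates the quadratic term, leaving the depressed cubic:
\[
y^3 + \left(a_1 - \frac{a_2^2}{3}\right) y
    + \left(a_0 - \frac{a_1 a_2}{3} + \frac{2 a_2^3}{27}\right) = 0 .
\]
```

Here the transformation is the polynomial map $y = x + a_2/3$ (with $h \equiv 1$), which is why the cubic case is called a special case above.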
https://testbook.com/question-answer/the-decreasing-order-of-ease-of-alkalinehydr--5e1c5247f60d5d2d6cf1ace5
# The decreasing order of ease of alkaline hydrolysis for the following esters is: (I) (II) (III) (IV)
This question was previously asked in
JEE Mains Previous Paper 1 (Held On: 10 Jan 2019 Shift 1)
1. III > II > IV > I
2. III > II > I > IV
3. II > III > I > IV
4. IV > II > III > I
Option 2 : III > II > I > IV
## Detailed Solution
Concept:
Alkaline hydrolysis of an ester (a carboxylic acid derivative) follows the acyl SN2 mechanism.
The rate of the SN2 mechanism depends on the polarity of the $$>C=O$$ group of the -COOR group.
An electron-withdrawing group (-R, -I) increases the rate of the SN2 reaction, whereas an electron-donating group (+R, +I) decreases it.
Here, the functional groups attached para on the benzene ring are:
$$\begin{array}{*{20}{c}} { - N{O_2}}\\ {\left( { - R} \right)} \end{array} > \begin{array}{*{20}{c}} { - Cl}\\ {\left( { - I} \right)} \end{array} > \begin{array}{*{20}{c}} { - OC{H_3}}\\ {\left( { + R} \right)} \end{array}$$
So, the order of the hydrolysis will be,
$$\therefore \begin{array}{*{20}{c}} {III}\\ {\left( { - R} \right)} \end{array} > \begin{array}{*{20}{c}} {II}\\ {\left( { - I} \right)} \end{array} > I > \begin{array}{*{20}{c}} {IV}\\ {\left( { + R} \right)} \end{array}$$
https://www.code4example.com/php/calculate-square-root-without-using-sqrt-in-php/
# Calculate Square Root Without Using sqrt in PHP
If you have just started programming, you can easily square a number without using a function. Going the other way--taking the square root--is not so easy to do by hand.
Normally, if your aim is to calculate the square root of a number, you would use a built-in function. In this post I'll show you how to calculate a square root in PHP without using that method or function.
Before running the code, note that sqrt() is the name of PHP's built-in square-root function.
The following code shows how to get the square root with PHP without using the function.
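The original PHP snippet did not survive extraction. As a sketch of the usual approach (Newton's iteration, also known as the Babylonian method), here is the same idea written in Python; the function name `my_sqrt` and the tolerance are illustrative choices, not from the original post:

```python
def my_sqrt(n, tolerance=1e-10):
    """Approximate the square root of n without calling a sqrt function."""
    if n < 0:
        raise ValueError("square root of a negative number is not real")
    if n == 0:
        return 0.0
    guess = n / 2.0           # any positive initial guess works
    while abs(guess * guess - n) > tolerance:
        guess = (guess + n / guess) / 2.0  # average guess and n/guess
    return guess

print(my_sqrt(25))   # close to 5.0
```

Each iteration roughly doubles the number of correct digits, so the loop converges in a handful of steps even for large inputs.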
https://jeff.wintersinger.org/posts/2014/07/why-your-bioinformatics-lab-should-be-using-zfs/
# Why your bioinformatics lab should be using ZFS
by Jeff Wintersinger
July 18, 2014
### Introduction
ZFS is a wonderful filesystem released nine years ago by Sun Microsystems, which has since undergone a number of improvements and been ported to Linux. ZFS does a better job of ensuring your data remains intact than any other filesystem in common use, which is particularly important given that hard drives today suffer from an excess of both visible errors ("unrecoverable read errors") and invisible ones ("bit rot"). ZFS also grants substantial space savings through its transparent application of LZ4 compression, which is unusually effective against the highly compressible data often seen in bioinformatics. By reading less physical data, concordant increases in read performance result.
Bioinformatics labs want to store great masses of data cheaply. ZFS lets you store these masses on consumer-class (read: cheap) hardware, without sacrificing performance or reliability. ZFS (and its still-in-development cousin, Btrfs) are not merely iterative improvements on traditional filesystems in wide use, such as NTFS, ext4, and HFS+--so significant are its improvements, I assert that ZFS represents a fundamental advance in filesystems, such that small and mid-sized bioinformatics labs can achieve significant benefits through its use.
### Why hard drives are evil
Hard drives pain me--they are the most unreliable component in most computers, and yet also the one whose failure can most easily destroy our data. Traditionally, system administrators have overcome this problem by using various levels of RAID (such as RAID 1, RAID 5, RAID 6, or RAID 10), but this approach has become untenable. Robin Harris warned in 2007 that RAID 5 would become impossible to sustain by 2009, a topic he returned to in 2013. The problem is a consequence of ever-expanding drive capacities, combined with drive reliability levels that have remained constant. While drive capacities topped out at 1 TB in 2007, we now have 4 TB drives available for CDN \$170, with 6 TB enterprise drives already on the market, and 10 TB drives to follow in the near future. We have not seen, however, a concurrent increase in drive reliability--4 TB consumer-class drives today still sport the same 1 error per 10^14 read bits as drives one-quarter their size seven years ago.
The consequence of this constant error rate is that you're much more likely to see your RAID array explode, with your data disappearing in a glorious fireball. Given a drive capacity of 4 TB and error rate of 1 error per 10^14 read bits, you have a 32% chance of seeing an unrecoverable read error when reading the entirety of the drive. Now, suppose you have a five-disk RAID 5 array, with one disk used for parity. This means you must quadruple the 32% error rate for reading the entirety of the array, meaning that after replacing a single failed disk, your array rebuild will almost certainly encounter an unrecoverable read error, causing the entirety of the array to be lost. (The probability is even greater when you consider that, at least for multiple drives purchased from the same manufactured batch, drive failures are not independent events. One failed drive means the others are more likely to fail soon as well.) Clearly, in this scenario, the only viable solution is to dedicate two disks to parity by configuring a six-disk RAID 6 array instead, which will substantially decrease the probability of a catastrophic failure during rebuild.
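The arithmetic above is easy to reproduce. Note that the 32% figure is the *expected number* of errors per full read; the exact probability of at least one error is a little lower, as this back-of-the-envelope check shows:

```python
# Back-of-the-envelope check of the unrecoverable-read-error (URE) math.
drive_bytes = 4e12          # a 4 TB drive
bits = drive_bytes * 8      # bits read when scanning the whole drive
ure_rate = 1e-14            # consumer-class spec: 1 error per 10**14 bits

expected_errors = bits * ure_rate        # linear estimate used in the text
exact_p = 1 - (1 - ure_rate) ** bits     # P(at least one URE), single drive

# Rebuilding a five-disk RAID 5 array means reading the four surviving disks:
p_rebuild_fails = 1 - (1 - exact_p) ** 4

print(f"expected UREs per full read: {expected_errors:.2f}")
print(f"P(>=1 URE), single drive:    {exact_p:.2%}")
print(f"P(>=1 URE), 4-disk rebuild:  {p_rebuild_fails:.2%}")
```

This treats bit errors as independent, which, as the parenthetical above notes, understates the real-world risk for drives from the same batch.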
All of the above should be of critical concern for any bioinformatics lab storing substantial volumes of data--which is to say, (almost) all bioinformatics labs. Bit rot is particularly nefarious. At the best of times, bioinformatics feels like a teetering tower on the verge of collapse, with a perilous array of hacked-together Perl and shell scripts spanning the gap from data to publication. If the integrity of data on which these analyses are built is called into question, the whole edifice becomes more tenuous yet. Labs are storing terabytes of data, often left to rot on its array for years at a time before it is again analyzed, by either the original lab or another. At best, such corrupt data will cause analyses to fail outright; at worst, the analyses will seemingly succeed, leading to invalid conclusions.
### Why ZFS is awesome
Both unrecoverable read errors and silent bit rot can be overcome with modern filesystems like ZFS and Btrfs. Btrfs, alas, is not yet stable enough for production use. Once stable, however, it will offer several advantages over ZFS, the most prominent of which is a license permitting inclusion in the kernel source tree. ZFS, though more mature than Btrfs, is licensed such that it must be distributed as an out-of-tree module, making installation somewhat more arduous. Despite this difficulty, however, ZFS on Linux is amply mature for use in bioinformatics, given the wonderful efforts that the Lawrence Livermore National Laboratory have spent porting it to Linux over the last several years.
A multi-disk ZFS array provides the same reliability against complete drive failure and unrecoverable read errors as traditional RAID. ZFS mirrors are equivalent to RAID 1, in that one redundant drive accompanies each data drive, while ZFS raidz1, raidz2, and raidz3 provide one, two, and three redundant drives, respectively, for arrays composed of an arbitrary number of data disks, similar to RAID 5 and RAID 6.
ZFS's greatest advantage, however, is that it also overcomes silent bit rot. By checksumming every file on the filesystem, ZFS knows when a file has been corrupted. So long as you're using some form of redundancy, ZFS can then automatically recover the proper version of the file. This feature's significance is difficult to overstate--it moves us from a world in which data gradually rots over time, both in backups and on live systems, to one in which we can be almost certain that our data's integrity remains intact.
ZFS realizes two other substantial advances over traditional filesystems that are relevant to bioinformatics labs:
• By merging the traditionally separate storage layers--redundancy (RAID via Linux's md driver), volume manager (LVM), and filesystem (ext4)--into a single entity, ZFS is easier to configure and manage. This also leads to other improvements. For example, when rebuilding the array after a disk failure, ZFS need only read the data actually stored in the array, rather than every block as with traditional filesystem-agnostic RAID layers. This reduces rebuild time, decreasing the chance that a second crippling error will occur before the rebuild is complete.
• ZFS supports transparent, automatic compression of stored data via LZ4. LZ4 is extremely CPU-cheap, meaning that you pay little performance price for having all stored data automatically compressed, particularly on the many-core systems common in bioinformatics. The (small) amount of time spent decompressing stored files is often outweighed by the increase in read speed that comes from having to read fewer physical bytes from the disk, meaning that compression provides an overall performance increase. Moreover, genomic data tends to be highly compressible, leading to significant space reductions and concordant performance gains when reading. Our home directories, where all our work is stored, achieve a compression ratio of 1.72x, meaning we're storing 2.21 TB of data but consuming only 1.3 TB of space.
Of course, ZFS comes with some caveats.
• Running your root filesystem on ZFS is possible, but both difficult and unsupported. When I looked into it, the chance of a borked update leading to an unbootable system seemed unacceptably high. This may remain true for some time, given the lack of integration ZFS suffers into base Linux installations because of its restrictive licensing. For now, running root on a traditional RAID array is best, with ZFS then handling your space-intensive duties (such as storing /home).
• ZFS is extremely memory hungry. Moreover, running it on systems without ECC RAM is not recommended, as memory errors supposedly have a greater chance of causing ZFS to corrupt itself than traditional filesystems. Both points are likely not problematic for servers, however, which usually have great gobs of ECC RAM. Apple apparently abandoned their efforts to replace HFS+ with ZFS on OS X because of these issues.
With that said, for bioinformatics workloads, ZFS's benefits massively outweigh its drawbacks.
### Installing ZFS
Notes on both the method I initially used for installing ZFS on Ubuntu 14.04 Trusty Tahr and the manner in which I recovered from a failing drive are available. These last notes proved regrettably necessary--despite subjecting both of our newly purchased Seagate 4 TB drives to multi-day stress tests via badblocks before putting them into production, one drive exhibited failure signs only two weeks into its tenure. Though it didn't fail outright, it developed eight bad sectors, and threw an unrecoverable read error (which, of course, ZFS handled with aplomb). As such, we replaced the drive and rebuilt the array, and everything has been running smoothly since.
Altogether, I heartily endorse ZFS. It's fast, reliable, and space efficient. In concert, these qualities make it downright sexy. Most small to mid-sized bioinformatics labs will benefit from its use.
https://neptun.learningu.org/learn/Splash/2017_Splash/catalog
# Spring Splash 2017 Course Catalog
Arts Engineering
Humanities Math & Computer Science
Science Miscellaneous
Arts
A225: Intro to Typography
Difficulty: *
Teachers: Chris Beiser
You can probably tell the difference between Arial and Wingdings, and you know what an Italic is. Or do you? This course will explore deeper qualities in typography, including the histories of notable type, a gloss on classification, and the printed page.
A214: Why Your Photos don't Look as Good as Their Photos
Difficulty: *
Ever go on Instagram and wonder why some other peoples' pictures look so much better than yours? They might have a better camera than you, but then there's those people who seem like they could get a photo into the MFA with an old iPhone. What gives?
Of course, there's more to taking pictures than just finding something pretty and tapping the shutter button. In this class, I'll discuss some basic elements of composition and then show you some free editing software available on the internet and how to use it (and yes, it's perfectly legal).
Prerequisites
Have and know how to use some kind of digital camera (a phone counts!), and know how to use a computer. That's it! In fact, if you know how a histogram works, or know how to color correct, you might find this class to be a bit boring. Bringing a phone/camera or some pictures of your own would be nice but is in no way necessary. Sample photos for editing will be provided.
A198: Beyond the Stick Figure: A Practical Drawing Class for the Artistically Challenged Full!
Difficulty: **
Teachers: Catarina Smith
Embarrassed by your smiley faces? Haunted by your stick figures? Can a three-year-old draw a better picture than you?
In this class, you will learn the basics of cartooning, and how to draw in the styles of Disney characters, Calvin and Hobbes, Marvel, Manga, and many more. The class will cover basic cartoon proportions, physical character development, and comic strip development. By the end of this class, you will have created your very own original character and comic strip!
No artistic experience needed!
A226: The Living Creature: Dewey's aesthetics Full!
Difficulty: ***
Teachers: Chris Beiser
John Dewey, sometimes called the greatest American philosopher of the 20th century, is usually lauded for his works on education—but he also produced the greatest and most comprehensive theory of art in human history. Examining and comparing the experiences produced by works in different mediums, we'll explore what it means to be alive, and what art is.
A204: Salsa con Salsa
Difficulty: *
Teachers: Ben Moran
Spice up your dance moves! We'll be learning the basics of salsa, a social dance popular throughout the U.S. and Latin America. We'll start from the basic step, learn some cool moves, and then end the class with some social dancing to the music of Hector Lavoe and others. Chips and salsa included!
Prerequisites
No dance experience required
Engineering
Difficulty: *
Teachers: Jessica Lewis
Ever wonder what making cookie treats has to do with an assembly line in a factory? Join this class and find out! You can also eat the treats you make!
E218: Beagle Bone and Coding
Difficulty: **
Learn what a Beagle Bone Black is.
Learn to blink an LED, mix colors, and play music using a Beagle Bone Black.
Prerequisites
Some basic programming knowledge in any programming language.
E216: Produce Powered Circuits Full!
Difficulty: **
Teachers: Andreas Aghamianz
Did you know that you can turn a lemon into a battery? In this workshop, we'll teach you how to draw electrical current from a lemon to power a light emitting diode (LED) circuit. The lesson covers basic principles of battery operation and circuit theory through to building, powering, and testing electronic circuits on breadboards. Students may work individually or with a partner.
E228: Pushing materials to their limits Full!
Difficulty: **
In this class we'll get hands-on and experiment with different properties of materials and how they react in different scenarios. We'll show you how to poke a hole in a balloon without popping it!
Prerequisites
Desire to help run experiments
E205: Fermi Calculations: Using Estimation to Solve Ridiculous Problems Full!
Difficulty: **
On the 25th of August, 1957 a nuclear test was held in the deserts of Nevada. Although this was an underground test, there was an exhaust vent for the explosion to follow. On top of this hatch was a 900 kg manhole cover. After the test took place, scientists were quick to discover that the manhole cover was gone. Upon reviewing footage, they were able to calculate that the cover had been blasted into space at a speed of 60 km/s by the force of the explosion.
Although this situation is interesting, wouldn’t it be fun to calculate what would happen if another object were in place of the manhole cover? Say, perhaps, an indestructible potato. In this class we will delve into Fermi Problems, discover how they can be useful in real engineering scenarios, and perhaps touch on Einstein and relativity on the way.
Humanities
H217: Unification of Japan at a Glance
Difficulty: **
Teachers: Michael Shen
Do ninjas, samurais, and large feudal wars interest you? Come explore the history of Japan as it moves from a land of feudal tribes to a unified country.
This class will briefly go over the history of Japan's mystical and mysterious past while focusing on important military figures.
H209: "Are You Living in a Computer Simulation?" Full!
Difficulty: **
Teachers: Daniel Russotto
Technology these days is getting pretty sophisticated. It doesn't seem improbable that in a couple hundred years, maybe less, we could simulate entire worlds where everyone and everything in the simulation is completely convinced that everything in the simulation is completely real. How certain are you that everything around you is real? How certain are you that you are real? If we are living in a computer simulation, does it matter? Should we care? These are the types of questions we will strive to answer, starting with the musings of Descartes, fast-forwarding to Bostrom's well-known paper "Are You Living in a Computer Simulation?," and taking a peek at pop culture pieces such as The Matrix.
H210: The Machines are Taking Over!
Difficulty: **
Teachers: Daniel Russotto
We love it when machines do things for us, and more and more artificial intelligence and machine learning algorithms make that possible. This class will examine what we should do and think about when these things make life very complicated. If a self-driving car kills some one, who is to blame? What do we if we accomplish true artificial consciousness? Do we have to fear the machines trying to take power from the humans? We will try to decide if artificial intelligence is just amazing, or actually our downfall.
H206: Criminal Justice
Difficulty: *
Teachers: Christa Cosenza
Basic overview of the criminal justice field.
H211: Love Poems in the Language of Love
Difficulty: **
Teachers: Margaret Downs
Fact: everything sounds prettier in Spanish. Even the sentence that means "Don't throw oil in the water" has meter and rhyme. So imagine what happens when we start talking about actual poetry. In this class, we'll explore love poems in Spanish from a variety of time periods and regions. You'll walk away being able to dazzle your friends with newfound knowledge of a beautiful language and you might even get a couple of pickup lines out of the deal.
Prerequisites
Some familiarity with Spanish. Fluency isn't necessary (the course will be conducted in English) but a year or two of high-school-level Spanish will be helpful.
H212: Creating a Killer: Criminal Psychology
Difficulty: **
Teachers: Alexa Lambros
Want to know what motivates a serial killer? Want to delve into the minds of known psychopaths to see what makes them tick? Think YOU might be one? If you answered yes to any of these questions, this is the class for you.
In Creating a Killer: Criminal Psychology, you will learn the basics of criminal psychology and practice “profiling” a number of known high-profile criminals as well projecting the potential futures – criminal or not – of several hypothetical personalities. While this class won’t be all you need to begin work in the FBI, it will be a fun look into some of society’s most complex and dangerous members.
NOTE: Graphic material (involving abuse, rape, and murder) covered may be disturbing to some individuals.
Math & Computer Science
M229: How to Count to Infinity
Difficulty: **
Teachers: Zachary Winkeler
Everybody knows how to count, right?
$$1, 2, 3...$$.
It turns out that there are a lot of things you can count. Numbers, letters, points, words, sheep, etc... unless these sheep come in real numbers. Can you count $$\pi$$ sheep? We're going to try to prove that there are some things you can't count.
Prerequisites
You should know a little bit about functions, and having seen sets before will help a lot. We'll start from the basics though, so I encourage anybody who is interested to come! This course is meant to be an introduction to countable and uncountable sets, and the idea of cardinality in general, so if you already know what these things mean, this class might be too easy for you.
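A one-line taste of what this course builds toward is Cantor's diagonal argument, sketched here (assuming a claimed enumeration $$x_1, x_2, \dots$$ of the reals in $$[0, 1]$$):

```latex
% Suppose the reals in [0,1] were countable: x_1, x_2, x_3, ...,
% with decimal expansions x_i = 0.d_{i1} d_{i2} d_{i3} ...
% Build y = 0.e_1 e_2 e_3 ... by changing every diagonal digit:
\[
e_n =
\begin{cases}
5 & \text{if } d_{nn} \neq 5,\\
6 & \text{if } d_{nn} = 5.
\end{cases}
\]
% Then y differs from every x_n in the n-th digit, so y is not in the
% list -- contradiction: [0,1], and hence the reals, are uncountable.
```

So no, you cannot count $$\pi$$ sheep.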
M230: How to Program with Circles and Arrows
Difficulty: ***
Teachers: Zachary Winkeler
We're going to look at some of the most basic models of computation, and how we can use them to solve problems. Most of our computers will look like a bunch of squiggles.
Technical stuff: We'll learn about finite automata, pushdown automata, and Turing machines, as time allows. If you have experience with programming and know what regular expressions are, you'll learn why we use them.
Prerequisites
No prerequisites. Almost everything we talk about will probably be new to you, but if you're interested I definitely encourage you to come!
Science
S220: Bystander Effect
Difficulty: *
If you witnessed an emergency happening right before your eyes, you would certainly take some sort of action to help the person in trouble, right? Come find out.
S227: Weird Animals
Difficulty: **
Teachers: Ben Moran
How do jellyfish glow? What's the real reason giraffes have such long necks? How do animals living at the bottom of the ocean survive without light or heat? This class talks about some of the strangest animals on Earth and in the oceans: how they got that way and how their weird characteristics help them thrive.
Prerequisites
Some knowledge of animal biology and/or evolution will be helpful but isn't required.
S231: NEURONS 1 Full!
Difficulty: *
Teachers: Emma Nash
NEURONS, NEU Researchers of Neuroscience, is teaching students to think about how a complex organ like the brain can cause us to perform tasks big and small, and to learn about scientific methods used today that try to understand, manipulate, and recreate it.
Likely topics: How a Computer Can Make a Brain; How We Perceive the World; Today's Methods of "Mind Control"; Neurological Diseases; The Brain and Drugs
This class is recommended for those with some biology experience. Different material will be taught on the second date, both classes with different material than this past fall's.
S221: Genetic Engineering, The Future, and You Full!
Difficulty: **
Teachers: Asa Budnick
A quick talk about the rapidly advancing fields of genetic engineering and synthetic biology focusing on recent advances and the near future. After the talk we'll actively explore the design process for gene circuits and look at some common web tools as well as cool projects people have done with relatively accessible tools.
Prerequisites
Students should have a basic understanding of biology. Knowing what genes and proteins are should be enough.
S199: Lacking: Neurological Case Studies of Deficits Full!
Difficulty: **
Teachers: Kathryn Levitsky
This class will look at a number of case studies surrounding deficits of the brain. In other words, we will be exploring everything we tend to take for granted by looking at how people function when things go wrong.
S215: Infectious Diseases -- The Coming Plague Full!
Difficulty: **
Teachers: Kristian Teichert
With the emergence of the Zika virus, and the Ebola epidemic of 2015, understanding infectious diseases is becoming increasingly important in our world today.
What is a pathogen? How does it cause disease?
How does a pathogen move between people?
What makes a pathogen more infectious than others? More dangerous? How do we report this?
Come to discuss these aspects of infectious diseases through the lens of diseases such as Zika, Ebola, the ubiquitous flu and others!
Prerequisites
An interest in diseases, viruses, bacteria and other cool, sometimes scary things.
S201: What Do Elephants and Your Morals Have in Common? Full!
Difficulty: **
Teachers: Asama Lekbua
Moral psychology is at the intersection of philosophy and cognitive psychology. We'll discuss basic principles, explore classic dilemmas that will make you think twice about your decision, and draw from current events to put your moral righteousness to the test. Oh, and we'll talk about elephants, too. It is not to be missed!
Prerequisites
Curiosity and an open mind
S223: Caught Red-Handed
Difficulty: **
Teachers: Brittany Fung
A brief overview of genetics and blood-typing. Learn how to use these techniques to solve a series of mysteries!
S202: Science Experiments! Full!
Difficulty: **
Teachers: Ana Paz
Join the NU Science Squad for an hour of BRAND NEW science experiments! We will take a closer look at everything from chemical reactions to physical phenomena to quick engineering! We'll show you the science behind a few quick, but intriguing experiments, no lab reports required! There will be some experiments inaccessible at home, as well as a few you can try with your friends! Come ready to experiment!
S213: Harmful or Fatal if Swallowed: Why Poison Kills
Difficulty: **
Ever wonder why you shouldn't eat cyanide, or should avoid breathing in large amounts of carbon monoxide? This class explores the reasons that all sorts of "nasty chemicals" really are nasty, and what this says about biology. Chemicals that will be discussed are cyanide, carbon monoxide, several kinds of neurotoxin and whatever else you're interested in (so long as I know something about it).
Prerequisites
Some basic knowledge of biology and chemistry. If you know what a protein is, and what an electron is, you should be fine. Knowledge of the proteins involved in cellular respiration is a plus, but not necessary.
S232: NEURONS 2 Full!
Difficulty: **
Teachers: Emma Nash
NEURONS, NEU Researchers of Neuroscience, is teaching students to think about how a complex organ like the brain can cause us to perform tasks big and small, and to learn about scientific methods used today that try to understand, manipulate, and recreate it.
Likely topics: The Brain in Our Gut; Neurological Diseases; Methods of Visualizing Neurons; The Brain's Immune System; Animal Models
Recommended for those with more biology experience, possibly considering studying science at a higher level. Different material will be taught on the first date, both classes with different material than this past fall's.
S200: How to Build a Tree of Life Full!
Difficulty: **
Teachers: Ben Moran
Biologists often group life in ways that defy our expectations: despite growing out of the ground, mushrooms are closer to humans than to plants, and two different kinds of "worms" can be farther apart than people and lobsters. How do they come up with this insanity? In this class, we'll overview the history and modern practice of the classification of life, or taxonomy. We'll discuss early attempts to make sense of our natural world, the insights brought by the theory of evolution, and how modern genetics has repeatedly turned our intuition on its head. Along the way, we'll build our own trees of life, and update them with two centuries' worth of knowledge.
Prerequisites
Should have some general familiarity with the following ideas: genus and species, genomes, genes, proteins, and probability as a math concept.
Miscellaneous
X207: Surviving Survivor - An in depth look at the strategy required to win one million dollars
Difficulty: **
Teachers: Alec Cinque
This course will provide a detailed guide for how to play and win the game of Survivor. Based on the TV show of the same name, Survivor is a very complex game requiring physical, mental, and social abilities. This class will hopefully teach you more about how to make the game work in your favor and show you the path to winning a million dollar game.
X219: Zumba: There are no wrong moves.. only accidental solo!
Difficulty: *
Come join us to learn more about healthy living and ways to stress-bust through exercise. This session will be followed by a fun 30 min Zumba session. For those of you who don't know, Zumba is an aerobic fitness program where participants burn calories through dance, fun and laughter.
Zumba class rule: Don't know the move? Just shake it, till you make it!
X208: What It's Like To Be In College
Difficulty: *
Teachers: Ronnie Lo
Assist any student with college applications, talk about what it is like to be a college student, and how to do well in college
X222: Chocolate!
Difficulty: *
Teachers: Asama Lekbua
If you're like me, the Course Title got you here. Will we be eating chocolate? Yes. Will we be making chocolate? Also yes. But we will also talk about how they get here (origin, fermentation, manufacturing, processing), and once they're in our bodies, what they do (neurochemical processes, nutritional values, etc). Come find out!
Prerequisites
Love for all things chocolate! Note: Please notify SPLASH organizers if you have any food allergies
X224: Why Some Things Suck (and others are good)
Difficulty: **
Teachers: Chris Beiser
Have you ever pushed on a pull door, or struggled to figure out how to turn on a shower? We’ll explore what makes things well designed by looking at things that aren’t. We’ll talk about physical objects, as well as other areas.
Prerequisites
None
http://www.math.psu.edu/calendars/meeting.php?id=7795
# Meeting Details
Title: A new calculus for ideal fluid dynamics
Computational and Applied Mathematics Colloquium
Darren Crowdy, Imperial College, UK

In classical fluid dynamics, an important basic problem is to understand how solid bodies (e.g. aerofoils, obstacles or stirrers) immersed in an ideal fluid interact by "communicating" with each other through the ambient fluid. Also of interest is how vortices interact with such solid bodies (and each other). There is great interest in such problems in areas such as aerodynamics, biolocomotion and oceanography.

For two-dimensional flows, a variety of powerful mathematical results exist (complex variable methods, conformal mapping, Kirchhoff-Routh theory) that have been used to study such problems, but the constructions are usually restricted to problems with just one, or perhaps two, objects. Expressed mathematically, most studies deal only with fluid regions that are simply or doubly connected. There has been a general and longstanding perception that problems involving fluid regions of higher connectivity -- that is, more than two interacting objects -- are too intractable to be tackled analytically (and that numerical methods must be used).

The lecture will show that there is a way to formulate the theory so that the relevant fluid dynamical formulae are exactly the same irrespective of the number of interacting objects (i.e., the approach is relevant to fluid domains of any finite connectivity). This provides a flexible and unified tool (a "calculus") for modelling the fluid dynamical interaction of multiple objects/aerofoils/obstacles in ideal flow as well as their interaction with free vortices. Examples of how to apply the calculus to specific problems will be given to illustrate its flexibility. More generally, the results have wider reaching applications beyond fluid dynamics and essentially provide a new calculus for two-dimensional potential theory.
https://projecteuclid.org/euclid.aspm/1548550907
## Advanced Studies in Pure Mathematics
### Some highlights from the history of probabilistic number theory
Wolfgang Schwarz
#### Abstract
In this survey lecture it is intended to sketch some parts [chosen according to the author's interests] of the [early] history of Probabilistic Number Theory, beginning with Paul Turán's proof (1934) of the Hardy–Ramanujan result on the "normal order" of the additive function $\omega (n)$, the Erdős–Wintner Theorem, and the Erdős–Kac Theorem. Next, mean-value theorems for arithmetical functions, and the Kubilius model and its application to limit laws for additive functions, will be described in short.
Subsuming applications of the theory of almost-periodic functions under the concept of "Probabilistic Number Theory", the problem of "uniformly-almost-even functions with prescribed values" will be sketched, as will the Knopfmacher–Schwarz–Spilker theory of integration of arithmetical functions. Next, K.-H. Indlekofer's elegant theory of integration of functions $\mathbb{N} \to \mathbb{C}$ will be described.
Finally, an attempt is made to scratch the surface of the topic "universality", where important contributions came from the University of Vilnius.
#### Article information
Dates
Revised: 12 September 2006
First available in Project Euclid: 27 January 2019
https://projecteuclid.org/euclid.aspm/1548550907
Digital Object Identifier
doi:10.2969/aspm/04910367
Mathematical Reviews number (MathSciNet)
MR2405612
Zentralblatt MATH identifier
1208.11003
#### Citation
Schwarz, Wolfgang. Some highlights from the history of probabilistic number theory. Probability and Number Theory — Kanazawa 2005, 367--419, Mathematical Society of Japan, Tokyo, Japan, 2007. doi:10.2969/aspm/04910367. https://projecteuclid.org/euclid.aspm/1548550907
https://discuss.pixls.us/t/filmic-when-to-use/13250
# Filmic, when to use?
It’s interesting to follow the discussion on filmic and to study the tutorials on Youtube.
In many cases, the discussion is on how to “fine-tune” images that from the outset are more or less ok. What about real challenging situations where you are forced to use some tool in order to produce a useable image? How does filmic perform in such situations? When should you prefer filmic over tone mapping or other tools?
Below you will find two examples on filmic used on problem photos with good results, I think.
When you lighten dark parts of the image by filmic these areas tend to become a little “greyish”. It works fine to turn on the haze removal module to remove the grey. Filmic and haze removal is used in both photos.
Default Darktable
Filmic and haze removal
Default Darktable
Filmic and haze removal
I also upload the raw files here: DSC_8528.NEF (25.7 MB) DSC_0856.NEF (25.1 MB)
you can always use filmic.
A quite timely question for me, I just finished implementing the Duiker filmic curve in my hack software…
It became evident to me about a year ago that I needed more fine-grained control of a tone curve with high-bitdepth image data. Looking at most raw histograms of unscaled data, it is predominant to see most of the data over at the left. Indeed, if you can zoom in on the histogram, you find that a lot of tonality sits in the first 256 values of a 16-bit raw image (libraw delivers unmodified raw data as unsigned 16-bit integers), which on the display scale from 0=black to 255=white of an 8-bit JPEG output is all under the first tone value above 0. So, you want your tone curve to “curve” in that range, in order to control your shadows.
An interactive tone curve won’t give you any real control in that range. Mine works on a 0.0-255.0 floating point scale, which will allow me to scale into the really low region, but not with any really useful slope. I toyed a bit with “zoom” implementations, but I just really couldn’t warm to encoding all the relevant parameters in the output image (.pp3 or xmp sidecar files, in other softwares)
Well, this is what a filmic curve does that none of the other tone curves address: it puts a “toe” in the very bottom of the data range, one that transitions the black values gradually into the main part of the curve. John Hable’s blog post describes it very well:
In the post, he has a couple of graphs that show that toe in relation to the rest of the curve. Looking at the whole curve, you can’t see it, but if you magnify down toward the origin, it becomes quite evident.
Here’s the so-called Duiker filmic transfer function, from Hable’s post:
$$y = \frac{x(6.2x + 0.5)}{x(6.2x + 1.7) + 0.06}$$
For simple control of the toe, the 0.5 coefficient can be decremented toward 0.0 to decrease the toe slope and push the toe down to the axis, which will increasingly crush those low tones:
Going the other direction pulls the shadows up, to the point where the curve starts looking like any other “regular” power or log curve.
I’ve been messing with this very coefficient with a recent flower image that has a dark background, and it is simply amazing the amount of control it has over how those shadow tones are scaled. Right now, I’m without any computer that has either this image or my software, so all I can do is show Excel graphs.
I haven’t messed with the shoulder yet, but it appears the 1.7 coefficient influences it.
Note that @Carmelo_DrRaw’s recent Photoflow work is a more intuitive approach to filmic, in that the controls over the toe and shoulder are more specific in both dimensions:
I’m thinking in @darix’s terms, just use filmic all the time. But I think the display ICC tone transform needs to be nullified, let filmic do that work, but I don’t have the tools with me to mess with that right now…
I love filmic…when I get it right. But I find I have to work hard at it to get it looking good so it takes a long time. I’ve watched videos on it and maybe I just need to keep practicing to apply it more quickly. Tricky to get white/black points correct as it can get into clipping easily with tweaks on other sliders, and can sometimes get a grayish haze easily. But on tough shots with huge dynamic range it can give great results.
Yes, when you get it right, and that not so easy… Small changes to one slider affects other slides and can result in big changes to the image.
I’m surprised that haze removal (or local contrast) works so well together with filmic. Haze removal is executed before filmic. The good effect would seem more logical to me if haze removal were applied after filmic. How can the milky look in dark areas be removed before it is produced?
For most the images the auto mode of filmic works just fine for me. If it doesn’t work I check which presets fits the situation and then start from there.
I’ve used it a lot lately and I use it instead of the base curve. IMHO the trick is to not over do it and not in place of e.g. exposure or highlight reconstruction, fill light etc. I tend to first get the exposure ballpark alright with the curve nicely filling the histogram but no clipping. After that, I’ll use filmic with some tweaks as needed to gray point and shadows/highlights.
Both photos you have above feature a dark foreground with some bright background. This looks like classic HDR type stuff; maybe not necessarily the best use of filmic.
Thanks for your response. When should you use filmic instead of other tools to benefit the most if not in classic HDR images?
@obe I mostly use it instead of the base curve. For HDR, I would probably use masking, tone curves, graduated density, etc. and work with multiple layers to work on shadows and highlights separately. Basically you are going to end up with different gray points for different parts of the image and some kind of transition between those parts. You could do it with filmic as well but then you’d probably want two instances and some masking.
I did some quick changes with two tone curves and some masking. Not perfect but you could tweak the masks a lot to nail it.
DSC_8528.NEF.xmp (4.7 KB)
you have such a bad halo for the face in the window.
Actually, filmic is a curve in the class called “tonemapping operators”, designed to pull up dark regions into the midrange while keeping highlights from blowing past display white. This is classic ‘map HDR into SDR’ work, re-distributing the dynamic range of the image into a range where we can comfortably view it. Hable has a good blog post with images to illustrate:
I don’t have good pixel-peep software with me, but isn’t that the backlight from the exterior?
More going on than haloing. It is a cartoon now.
For the past year, as a challenge, I have been processing without HDR algorithms. However, there is no shame in using them. I would process the bright and dark regions separately and then merge them with enfuse and friends.
Ah, looks like a luminosity mask? That’s the problem with masks; they put hard artificial cliffs in the boundary between the adjacent tones, which are then scaled independently. That’s why curves like filmic are preferable to masks IMHO; they retain some sense of smooth transition through the tone range.
Conversely, the bane of a curve is a component with a shallow slope - robs contrast.
It is possible to use masking but it takes forever and multiple masks at various steps to get it right. I do it manually using G’MIC, which is insane I know. What I mean is that you have to catch all of the gradient reversals and halos for every brightening and contrast adjustment. I did that for some of the PlayRaws. They didn’t end up looking good but they were clean and without artifacts.
When I was cutting my digital teeth with a Nikon D50, I did some masks to drag dynamic range around; tedious, but it worked like the “dodge and burn” paradigm I so loved in the film darkroom. Moving to a D7000 reduced the need to do that so much, and moving to the Z6 has cut it considerably further. That camera also has a highlight-weighted matrix metering mode, which automatically decreases the exposure to preserve the highlights at the expense of pushing the rest of the image into darkness.
And this is why I’m currently experimenting with filmic; so I can expose images thusly and use filmic to drag the nether regions out of oblivion…
Once more, I think one should keep the display ICC tone transform out of the business here! The filmic module does not know anything about the display device, and trying to provide a display-ready image out of the filmic module is IMHO a design mistake. The module should work in linear RGB and forget about displays… I already discussed this with @aurelienpierre at LGM2019.
Okay, I put the gamma part of the function in my tool, mainly to do the algorithm comparisons Hable describes, but it can be set to 1.0. I get it, really…
I think you’ve prompted the answer to my residual question, that being what to transform and embed in a JPEG headed to “the wild”, where folks may and may not be viewing color-managed. Accordingly, that should probably be sRGB/2.2gamma; color-managed know what to do with that, non-color-managed will already have an approximate gamut and tone baked in the image (and hopefully not wide-gamut HDR…)
I’ll be posting some of what I’m doing in rawproc next week, when I get back to a fully equipped computer.
https://aviation.stackexchange.com/questions/1776/what-is-the-difference-between-a-poh-and-an-afm
# What is the difference between a POH and an AFM?
Some aircraft come with a Pilot Operating Handbook and some come with an Aircraft Flight Manual. Why the different name, and is there a difference between them?
Both a POH and an AFM meet the "Operating Limitations" requirement in the ARROW acronym.
The difference between the two is mainly in length and content: an AFM is usually a thinner document, satisfying the requirements of FAR 23.1581 and not much else, while a POH contains these required items plus other information like system diagrams (The contents & format of a POH are standardized in GAMA's Specification 1).
Parts of the POH (like the Limitations section) are FAA-Approved, and serve as the AFM, and both documents are typically associated with a specific airframe (by serial number).
A better explanation might be this:
The AFM is a regulatory document (its contents are prescribed under the section of the regulations the aircraft was certificated under - Part 23, Part 25, etc). The POH is a GAMA-defined document whose contents meet the regulatory requirements of an AFM, and present other information in a standardized way so that a pilot can go from a Cessna to a Piper to a Mooney to a Socata and browse the book to learn about the airplane they're about to fly with all the information presented the same way no matter who the manufacturer is.
The other two types of documents you may encounter are an "Owner's Manual" (which usually goes along with a thinner AFM & provides some of the information found in the newer-style POH) and a Pilot Information Manual (PIM) which is a "generic" version of the POH which many pilots buy so they can study the procedures without removing the regulatory document from the aircraft.
Chapter 9 of the Pilot's Handbook of Aeronautical Knowledge talks a little about the differences between the two documents (and a whole lot of other flight documents).
• I'd like to add that an AFM is not required for aircraft manufactured before a certain date (1975?). However a POH is still required to be in the plane. After that date, an up-to-date AFM with information for that specific airframe is required. Sometimes AFMs will be short, with just the airframe-specific information, and sometimes they will be the entire POH re-written. – StallSpin Feb 17 '14 at 2:25
• Ummm, I don't think that this is quite right. The AFM (not POH) on my Falcon is 7 volumes, all of which are 2" binders..... – Lnafziger Feb 17 '14 at 5:09
• @StallSpin No, the AFM is from the manufacturer and comes with the airplane (who had no idea what operating rules will apply). I fly two Falcon 50's. One is operated under part 135 and the other under Part 91. They both came with almost the same AFM (differences being serial number specific). – Lnafziger Feb 17 '14 at 10:48
• @Lnafziger 125.75 allows operators to carry a combined manual, and modify the AFM to suit their certification. I thought I read something similar for 135 but I can't find it now. Anyway, I don't know what to tell you other than that the Falcon is a very complicated airplane. It may be filled with a lot of non-approved sections? The POHs for both the light aircraft I fly are almost twice as thick as their AFM counterparts. Their AFMs have only the required approved sections in a binder and that's it, like voretaq7 said. – StallSpin Feb 17 '14 at 17:13
• @StallSpin Well, it's probably possible to create a combined manual, but in reality 135 operators have a different FOM/GOM which has nothing to do with the specific aircraft type. The manufacturer on all of the jets that I have flown (various Learjets and Falcons) all provide an AFM and none of them have a POH. Hence, the question. – Lnafziger Feb 17 '14 at 17:31
All the above answers are wrong: a POH is a 'kind' of AFM. All aircraft must have an approved AFM. Before 1978 this could be anything (owner's manual, POH, etc.). After 1978, GAMA (the General Aviation Manufacturers Association) decided to standardize the AFM around the POH format. The name POH stuck, but it is still a kind of AFM.
• Welcome to aviation.se! do you have any source to back up what you say? – Federico Apr 15 '16 at 16:59
All you need to know is that an AFM is specific to an aircraft serial number. A POH is less specific and generalized for a particular make or model
The POH is the official book of rules for that specific serial number airplane. The AFM is the unofficial/generic one for a type of airplane that may or may not match the one it's in. On occasion you'll run into a book labeled "AFM" that is in fact actually the "POH" (often due to an STC requirement).
• You're thinking of a "PIM" (Pilot Information Manual) or "Owner's Handbook". An AFM is serialized & associated with a specific airframe. – voretaq7 Feb 17 '14 at 1:04
https://or.stackexchange.com/tags/irreducible-infeasible-subset/hot
# Tag Info
27
The irreducible infeasible subsystem (IIS) for an infeasible linear program (LP) is a minimal subset of constraints that has no feasible solution, i.e., an inconsistent set of constraints for which any proper subset of the constraints is consistent. It is not true that an IIS is unique. For intuition, consider that there may be more than one source of ...
8
Finding a minimum-cardinality MIS for a linear program is an NP-hard problem in general, see Edoardo Amaldi, Marc E. Pfetsch, and Leslie E. Trotter Jr. On the maximum feasible subsystem problem, IISs and IIS-hypergraphs. Mathematical Programming, 95(3):533–554, 2003. For this reason, commercial solvers such as CPLEX use heuristics to identify small IIS which ...
5
An IIS is not unique. Given a system $Ax \le b$, the indices of an IIS are the supports of the vertices of the polyhedron $P=\{y: y^{\top}A=0, \; y^{\top}b \le -1, \; y \ge 0\}$. This is the theorem in https://pubsonline.informs.org/doi/abs/10.1287/ijoc.2.1.61
4
Not yet, see https://github.com/google/or-tools/issues/973 For debugging I would recommend you to divide your constraints into groups so you can activate/deactivate some of them to pin down the infeasibility
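That divide-and-conquer advice is essentially a manual deletion filter, the classic heuristic for shrinking an infeasible system to one IIS. An illustrative sketch (hypothetical code; scalar bound constraints on a single variable stand in for general LP rows so that no solver is needed):

```python
def feasible(cons):
    """cons: list of ('ge', c) or ('le', c) constraints on a scalar x.
    The system is feasible iff max lower bound <= min upper bound."""
    lo = max([c for kind, c in cons if kind == 'ge'], default=float('-inf'))
    hi = min([c for kind, c in cons if kind == 'le'], default=float('inf'))
    return lo <= hi

def deletion_filter(cons):
    """Reduce an infeasible constraint system to one IIS: try dropping
    each constraint; keep it out only if the rest stays infeasible."""
    assert not feasible(cons)
    iis = list(cons)
    i = 0
    while i < len(iis):
        trial = iis[:i] + iis[i + 1:]
        if not feasible(trial):
            iis = trial   # still infeasible without it: drop permanently
        else:
            i += 1        # removal restores feasibility: it belongs to the IIS
    return iis

cons = [('ge', 3), ('le', 1), ('ge', 0), ('le', 5)]
print(deletion_filter(cons))   # -> [('ge', 3), ('le', 1)]
```

The result is irreducible (dropping any single remaining constraint makes the rest feasible) but, as the answers above note, not necessarily unique or of minimum cardinality.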
https://www.albert.io/ie/abstract-algebra/orders-in-products-group-dollarg-times-hdollar
Free Version
Easy
Orders in Products: Group $G \times H$
ABSALG-4XJU5W
Suppose the element $a$ has order $m$ in the group $G$, and the element $b$ has order $n$ in the group $H$.
What is the order of the pair $(a,b)$ in the group $G\times H$?
A
$mn$
B
LCM$(m,n)$
C
GCD$(m,n)$
D
$m+n$
E
Max$(m,n)$
F
Min$(m,n)$
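For intuition, the cyclic case can be checked by brute force: in $\mathbb{Z}_m \times \mathbb{Z}_n$, the pair $(1,1)$ first returns to the identity at the least common multiple of $m$ and $n$. A quick illustrative Python check (not part of the original question):

```python
from math import gcd

def order_in_cyclic_product(m, n):
    """Order of (1, 1) in Z_m x Z_n: the smallest k > 0 that is
    simultaneously a multiple of m and a multiple of n."""
    k = 1
    while k % m != 0 or k % n != 0:
        k += 1
    return k

def lcm(m, n):
    return m * n // gcd(m, n)

assert order_in_cyclic_product(4, 6) == lcm(4, 6) == 12
```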
http://wikien4.appspot.com/wiki/Neighbourhood_(graph_theory)
# Neighbourhood (graph theory)
A graph consisting of 6 vertices and 7 edges
In graph theory, an adjacent vertex of a vertex v in a graph is a vertex that is connected to v by an edge. The neighbourhood of a vertex v in a graph G is the subgraph of G induced by all vertices adjacent to v, i.e., the graph composed of the vertices adjacent to v and all edges connecting vertices adjacent to v. For example, in the image to the right, the neighbourhood of vertex 5 consists of vertices 1, 2 and 4 and the edge connecting vertices 1 and 2.
The neighbourhood is often denoted NG(v) or (when the graph is unambiguous) N(v). The same neighbourhood notation may also be used to refer to sets of adjacent vertices rather than the corresponding induced subgraphs. The neighbourhood described above does not include v itself, and is more specifically the open neighbourhood of v; it is also possible to define a neighbourhood in which v itself is included, called the closed neighbourhood and denoted by NG[v]. When stated without any qualification, a neighbourhood is assumed to be open.
Neighbourhoods may be used to represent graphs in computer algorithms, via the adjacency list and adjacency matrix representations. Neighbourhoods are also used in the clustering coefficient of a graph, which is a measure of the average density of its neighbourhoods. In addition, many important classes of graphs may be defined by properties of their neighbourhoods, or by symmetries that relate neighbourhoods to each other.
An isolated vertex has no adjacent vertices. The degree of a vertex is equal to the number of adjacent vertices. A special case is a loop that connects a vertex to itself; if such an edge exists, the vertex belongs to its own neighbourhood.
## Locaw properties in graphs
In de octahedron graph, de neighbourhood of any vertex is a 4-cycwe.
If aww vertices in G have neighbourhoods dat are isomorphic to de same graph H, G is said to be wocawwy H, and if aww vertices in G have neighbourhoods dat bewong to some graph famiwy F, G is said to be wocawwy F (Heww 1978, Sedwáček 1983). For instance, in de octahedron graph shown in de figure, each vertex has a neighbourhood isomorphic to a cycwe of four vertices, so de octahedron is wocawwy C4.
## Neighbourhood of a set
For a set A of vertices, the neighbourhood of A is the union of the neighbourhoods of the vertices, and so it is the set of all vertices adjacent to at least one member of A.
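This union definition translates directly into code. A minimal sketch, using an assumed illustrative graph:

```python
# Neighbourhood of a set A of vertices: the union of the neighbourhoods
# of its members. The graph is an assumed example for illustration.
adj = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}

def neighbourhood_of_set(adj, A):
    """All vertices adjacent to at least one member of A."""
    return set().union(*(adj[v] for v in A))

print(neighbourhood_of_set(adj, {1, 4}))  # {2, 3}
```

Note that under this (open) definition, members of A may themselves appear in the result when they are adjacent to another member of A.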
A set A of vertices in a graph is said to be a module if every vertex in A has the same set of neighbours outside of A. Any graph has a unique recursive decomposition into modules, its modular decomposition, which can be constructed from the graph in linear time; modular decomposition algorithms have applications in other graph algorithms including the recognition of comparability graphs.
## References
• Fronček, Dalibor (1989), "Locally linear graphs", Mathematica Slovaca, 39 (1): 3–6, MR 1016323
• Hartsfield, Nora; Ringel, Gerhard (1991), "Clean triangulations", Combinatorica, 11 (2): 145–155, doi:10.1007/BF01206358.
• Hell, Pavol (1978), "Graphs with given neighborhoods I", Problèmes combinatoires et théorie des graphes, Colloques internationaux C.N.R.S., 260, pp. 219–223.
• Larrión, F.; Neumann-Lara, V.; Pizaña, M. A. (2002), "Whitney triangulations, local girth and iterated clique graphs", Discrete Mathematics, 258: 123–135, doi:10.1016/S0012-365X(02)00266-2.
• Malnič, Aleksander; Mohar, Bojan (1992), "Generating locally cyclic triangulations of surfaces", Journal of Combinatorial Theory, Series B, 56 (2): 147–164, doi:10.1016/0095-8956(92)90015-P.
• Sedláček, J. (1983), "On local properties of finite graphs", Graph Theory, Lagów, Lecture Notes in Mathematics, 1018, Springer-Verlag, pp. 242–247, doi:10.1007/BFb0071634, ISBN 978-3-540-12687-4.
• Seress, Ákos; Szabó, Tibor (1995), "Dense graphs with cycle neighborhoods", Journal of Combinatorial Theory, Series B, 63 (2): 281–293, doi:10.1006/jctb.1995.1020, archived from the original on 2005-08-30.
• Wigderson, Avi (1983), "Improving the performance guarantee for approximate graph coloring", Journal of the ACM, 30 (4): 729–735, doi:10.1145/2157.2158.
https://proofwiki.org/wiki/Kuratowski%27s_Lemma
# Kuratowski's Lemma
## Theorem
Let $\struct {S, \preceq}$ be a non-empty ordered set, that is, with $S \ne \O$.
Then every chain in $S$ is the subset of some maximal chain.
## Proof
Let $S$ be a (non-empty) ordered set.
Let $C$ be a chain in $S$.
Let $P$ be the set of all chains that are supersets of $C$.
Let $\CC$ be a chain in $\powerset P$ (partially ordered by set-inclusion).
Define $C' = \bigcup \CC$.
Note that the elements of $P$ are chains in $S$, so the elements of $\CC$ are also chains in $S$, as $\CC$ is a subset of $P$.
Thus $\bigcup \CC$ contains elements in $S$, so:
$C' \subseteq S$.
First, note that $C'$ is a chain in $S$.
Let $x, y \in C'$, which means $x \in X$ and $y \in Y$ for some $X, Y \in \CC$.
However, as $\CC$ is a chain in $\powerset P$, that means either $X \subseteq Y$ or $Y \subseteq X$.
So $x$ and $y$ belong to the same chain in $S$.
Thus either $x \preceq y$ or $y \preceq x$.
Thus $C'$ is a chain in $S$.
Now let $x \in C$.
Then:
$\forall A \in P: x \in A$
Then because $\CC \subseteq P$:
$\forall A \in \CC: x \in A$
So:
$x \in \bigcup \CC$
and so $C \subseteq C'$
Thus:
$C' \in P$
Now, note $C'$ is an upper bound on $\CC$.
To prove this consider $x \in D \in \CC$.
This means:
$x \in \bigcup \CC = C'$
so:
$D\subseteq C'$
The chain in $P$ was arbitrary, so every chain in $P$ has an upper bound.
Thus, by Zorn's Lemma, $P$ has a maximal element.
This must be a maximal chain containing $C$.
$\blacksquare$
## Note
One can also prove that Zorn's lemma follows from Kuratowski's Lemma, which shows that they are equivalent statements. Thus, this is another statement equivalent to the Axiom of Choice.
## Source of Name
This entry was named for Kazimierz Kuratowski.
## Historical Note
Kazimierz Kuratowski published what is now known as Kuratowski's Lemma in $1922$, thinking it little more than a corollary of the Hausdorff Maximal Principle.
In $1935$, Max August Zorn published his own equivalent, now known as Zorn's Lemma, acknowledging Kuratowski's earlier work.
This later version became the more famous one.
http://mathoverflow.net/revisions/41195/list
More accurately, let $\displaystyle A=\sum_{i=0}^{\infty}A_i$ be a finitely generated graded algebra over say $\mathbb{Q}$ but $\dim A_i=\infty$ for each $i.$ Is it possible?
https://dergipark.org.tr/tr/pub/ujma
ISSN: 2619-9653
Founded: 2018
Publication frequency: quarterly
Publisher: Emrah Evren KARA
### About
Universal Journal of Mathematics and Applications (UJMA) (Univers. J. Math. Appl.) is an international and peer-reviewed journal which publishes high quality papers on pure and applied mathematics. To be published in this journal, a paper must contain new ideas and be of interest to a wide range of readers. Survey papers are also welcome. The similarity score, excluding the bibliography, must be below 30%.
No submission or processing fees are required. The journal appears in 4 numbers per year (March, June, September and December) and has been published since 2018.
Coverage touches on a wide variety of topics, including but not limited to
• Functional analysis and operator theory
• Real and harmonic analysis
• Complex analysis
• Partial differential equations
• Control and Optimization
• Probability
• Applied mathematics
• Convex and Geometric Analysis
The average time during which the preliminary assessment of manuscripts is conducted: 3 days
The average time during which the reviews of manuscripts are conducted: 60 days
The average time in which the article is published: 90 days
Recent Issues
### 2022 - Volume 5, Issue 4
Research Article
2. Singular Perturbations of Multibrot Set Polynomials
Research Article
3. On a Rational $(P+1)$th Order Difference Equation with Quadratic Term
Research Article
4. Binomial Transform for Quadra Fibona-Pell Sequence and Quadra Fibona-Pell Quaternion
Research Article
5. Periodic Korovkin Theorem via $P_{p}^{2}$-Statistical $\mathcal{A}$-Summation Process
5. Periodic Korovkin Theorem via $P_{p}^{2}$-Statistical $\mathcal{A}$-Summation Process
The published articles in UJMA are licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
https://money.stackexchange.com/questions/8450/calculate-income-tax-amount-of-uk-salary
# Calculate Income Tax amount of UK Salary
I've recently been trying to calculate different take home amounts from different salaries, and have ended up getting confused over how the different tax brackets work. If I try to calculate the amount for a salary below the higher tax bracket (£35k - 40%), then I don't have any troubles, so I'm assuming I'm misunderstanding what happens exactly after this, and further, tax brackets.
### Simple example
Working out take-home amount for £45,000.
Currently, UK Income Tax rates are:
£0 - £7,475 0%
£7,476 - £35,000 20%
£35,001 - £150,000 40%
£150,000+ 50%
So, from what I know, on a £45,000 salary:
(45,000 - 35,000) * 0.4
= 4000
(35,000 - 7475) * 0.2
= 27525 * 0.2 = 5505
Meaning £9,505 to be taken off as tax. Correct?
But when I use The Salary Calculator site, it tells me that £8,010 is income tax.
Can anyone tell me what the correct figure should be, and where I'm going wrong in my calculations? Also, if you could point me in the direction of some decent tutorials/articles on this sort of stuff, that would be most welcome!
• Your math looks right to me... – Sean W. May 17 '11 at 15:39
• Nevermind, Ganesh Sittampalam nailed it in his answer. – Sean W. May 17 '11 at 17:25
• Wow, thanks for sharing the UK tax brackets. It gives a great perspective on the US tax brackets. – Stainsor May 18 '11 at 14:23
The taxable bands are on top of the personal allowance (though by the 50% band, the personal allowance has reduced to 0 so there's no difference).
So the right calculation is
(45000 - 42475) * 0.4
= 1010
(42475 - 7475) * 0.2
= 7000
making £8,010 as expected.
• So at the 50% band, if working out for £200,000, the calculation would be: (200000 - 192475) * 0.5? Or (200000 - 157475) * 0.5? I'm not sure what you mean by the personal allowance being reduced to 0... – Jaymz May 18 '11 at 11:33
• No, (200000 - 150000) * 0.5. The bands are on top of the personal allowance, but not cumulative. If you check footnote 1 in the HMRC link you gave, it describes how the personal allowance is reduced for income over 100K. The actual impact of this is that there's a band from 100-115K where the real marginal tax rate is 60%, but they don't present it that way. – GS - Apologise to Monica May 18 '11 at 12:20
• Aaahhh, I see. Thanks for the explanation - seems a lot more complicated than it should be at first, but it makes sense now. Thanks. – Jaymz May 18 '11 at 12:44
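The band logic the answer describes — bands stacked on top of the personal allowance, not cumulative — can be sketched in Python. This is a simplified model of the 2011–12 rates; it deliberately ignores the allowance taper above £100,000 discussed in the comments:

```python
# UK income tax, 2011-12 style: bands apply to income above the personal
# allowance. Simplified sketch -- the personal-allowance taper above
# 100,000 (mentioned in the comments) is NOT modelled here.
PERSONAL_ALLOWANCE = 7475
BANDS = [
    (35000, 0.20),         # basic rate: first 35,000 of taxable income
    (115000, 0.40),        # higher rate: taxable income 35,000 - 150,000
    (float("inf"), 0.50),  # additional rate: everything beyond that
]

def income_tax(salary):
    taxable = max(0, salary - PERSONAL_ALLOWANCE)
    tax = 0.0
    for width, rate in BANDS:
        portion = min(taxable, width)
        tax += portion * rate
        taxable -= portion
        if taxable <= 0:
            break
    return tax

print(income_tax(45000))  # 8010.0, matching the answer above
```

For £45,000: taxable income is 37,525, of which 35,000 is taxed at 20% (£7,000) and 2,525 at 40% (£1,010), giving £8,010.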
http://www.komal.hu/verseny/feladat.cgi?a=feladat&f=P4202&l=en
Mathematical and Physical Journal
for High Schools
Issued by the MATFUND Foundation
Already signed up? New to KöMaL?
# Problem P. 4202. (November 2009)
P. 4202. A Geiger--Müller tube (GM tube) is used to detect radiation, which is emitted by a pointlike high-energy source, during 1 minute. The table below shows the distance between the source and the window of the GM tube, r, and the averages of the detected number of strikes, N, (decreased with the backround radiation).
r [cm]:    1, 3, 6, 9
N [1/min]: 487, 196, 72, 41
Estimate the number of strikes per minute when the source is placed at a distance of 12 cm from the window of the GM tube.
Hint: The length of a GM tube is between several cm-s and some dm-s. There are strikes in the whole length of the tube, but the process can be modelled such that the strikes are at a sensitive surface at some distance from the window. The source is usually placed perpendicularly to the window in the symmetry axis of the GM tube.
(5 pont)
Deadline expired on December 10, 2009.
The solution is available only in Hungarian on the site; translated:
Solution. About 26–28 strikes per minute are expected at a distance of 12 cm.
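Following the hint, the data can be modelled as strikes at an effective sensitive plane a distance d behind the window, so that N ≈ A/(r+d)². A sketch of fitting this model by grid search (the symbols A and d are our assumptions, not from the problem text):

```python
# Fit N = A / (r + d)^2 to the measured data: for each candidate depth d,
# the best amplitude A follows from linear least squares; then grid-search d.
r_data = [1, 3, 6, 9]      # source-to-window distance [cm]
n_data = [487, 196, 72, 41]  # strikes per minute

def best_amplitude(d):
    # For fixed d, minimise sum (A*x_i - N_i)^2 with x_i = 1/(r_i + d)^2.
    x = [1.0 / (ri + d) ** 2 for ri in r_data]
    a = sum(xi * ni for xi, ni in zip(x, n_data)) / sum(xi * xi for xi in x)
    err = sum((a * xi - ni) ** 2 for xi, ni in zip(x, n_data))
    return a, err

# Grid search over plausible depths d in (0, 10) cm.
d_best, (a_best, _) = min(
    ((d / 100.0, best_amplitude(d / 100.0)) for d in range(1, 1000)),
    key=lambda t: t[1][1],
)
prediction = a_best / (12 + d_best) ** 2
print(d_best, prediction)  # effective depth of a couple of cm, roughly 26-28/min
```

The fitted prediction at r = 12 cm agrees with the published answer of about 26–28 strikes per minute.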
### Statistics:
25 students sent a solution. 5 points: Bolgár Dániel, Farkas Martin, Galzó Ákos Ferenc, Hartstein Máté, Lájer Márton, Pálovics Péter, Patartics Bálint, Sápi András, Szabó 928 Attila, Tamás Zsolt, Timkó Réka, Varju 105 Tamás, Vécsey Máté. 4 points: Jéhn Zoltán, Szélig Áron. 2 points: 1 student. 1 point: 9 students.
Problems in Physics of KöMaL, November 2009
https://math.stackexchange.com/questions/2905438/how-to-describe-curvilinear-grid-using-coordinate-functions/2905447
How to describe curvilinear grid using coordinate functions?
A curvilinear grid around a cylinder has the following properties:
1. The grid has $n_\varphi =20$ grid points in angular direction (along a circle in the xy-plane).
2. The grid has $n_r =5$ grid points in radial direction (from the cylinder outwards)
3. The grid has $n_z =8$ grid points in z-direction.
4. The grid has a thickness of $b =5$ around the cylinder
How do we describe the curvilinear grid using coordinate functions for the grid vertices such as $x(i,j,k), y(i,j,k), z(i,j,k)$, where $i,j,k$ are the indices of the vertices?
Hint: you can treat $\varphi$ and $r$ as normal polar coordinates in $x-y$ plane. You need to set up the spacing (scale) so integers $i,j$ will point correctly, make a whole revolution and fill out the thickness correctly. What is left afterwards is $z$ coordinate, but it can be treated separately since it is independent of x-y plane.
• @Raxak Yes that sounds about right. Also think about the end points. Which coordinates should give lowest and highest value for $r,\&\varphi$ – mathreadler Sep 4 '18 at 20:30
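Following the hint, one possible set of coordinate functions can be sketched in Python. The cylinder radius r0 and height h below are assumptions for illustration; the question only fixes the grid counts and the thickness b:

```python
import math

n_phi, n_r, n_z = 20, 5, 8  # grid points in angular, radial, z direction
b = 5.0                      # radial thickness of the grid (given)
r0 = 1.0                     # assumed cylinder radius (not given in the question)
h = 4.0                      # assumed cylinder height (not given in the question)

def x(i, j, k):
    phi = 2 * math.pi * i / n_phi            # i = 0..n_phi-1 covers a full revolution
    return (r0 + b * j / (n_r - 1)) * math.cos(phi)

def y(i, j, k):
    phi = 2 * math.pi * i / n_phi
    return (r0 + b * j / (n_r - 1)) * math.sin(phi)

def z(i, j, k):
    return h * k / (n_z - 1)                 # independent of the x-y plane

# j = 0 lies on the cylinder surface, j = n_r-1 at distance r0 + b.
print(x(0, 0, 0), y(0, 0, 0))  # point on the cylinder at phi = 0
```

Note that i uses n_phi (not n_phi - 1) divisions, since i = n_phi would duplicate i = 0 after a full revolution, while j and k use n_r - 1 and n_z - 1 so the end points are included.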
http://www.physicsforums.com/showthread.php?t=116943
# Rate of change from 1 to 2 for f(x)=2x^3 + x
by Rusho
P: 24 Find the average rate of change from 1 to 2 for the function f(x)=2x^3 + x so I did this: [f(2) – f(1)] – [2x^3 + x] / 2-1 = 2-1-2x^3 + x / 1 = 1-2x^3 + x = -2x^3 + x Right?
Sci Advisor HW Helper P: 2,002 The average rate of change of f from a to b is [f(b)-f(a)]/(b-a) and it's (naturally) just a number. It doesn't depend on x. Check your definition.
P: 24 [2(2)^3 - 2(1)^3] - [2x^3 + x] / 2-1 16-1-2x^3 + x / 1 15-2x^3 + x I don't understand what you are telling me
Sci Advisor HW Helper P: 2,002 Rate of change from 1 to 2 for f(x)=2x^3 + x You've got the definition of the average rate of change wrong. You wrote something like (f(2)-f(1)-f(x))/(2-1). By definition, the average rate of change of f on the interval [a,b] is: $$\frac{f(b)-f(a)}{b-a}$$ So in your case, the average rate of change is: $$\frac{f(2)-f(1)}{2-1}$$
P: 24 Ok, I am not sure what to do with 2x^3 + x . So I subtracted it from the f(b) - f(a). If I had 2x^3 by it self, I can see just putting 2(2)^3 - 2(1)^3 / 2-1 but the "+x" is confusing me
P: 24 I think I got it 2(2)^3 + x - 2(1)^3 + x / 2-1 =16-2+x+x / 1 =14+2x =-2x + 14 x = -7
Sci Advisor HW Helper P: 2,002 So you can solve it if the function is 2x^3, but not if it's 2x^3+x? What's the difference, conceptually? f(x)=2x^3+x, so what is f(2)? And what is f(1)?
HW Helper
P: 2,278
Quote by Rusho I think I got it 2(2)^3 + x - 2(1)^3 + x / 2-1 =16-2+x+x / 1 =14+2x =-2x + 14 x = -7
Calculate f(2). Calculate f(1). Subtract the result of f(1) from f(2).
The solution for f(2) is not 16+x. You have to substitute '2' for x everywhere it appears, so the solution for f(2) is 16+2.
Also, your algebra is wrong (in addition to being not applicable in this case). If you have:
$$(3x^2 + 3x) - (2x^2 + 2x)$$
then the minus sign means both the 2x^2 and the 2x are negative:
$$3x^2 + 3x - 2x^2 - 2x$$
$$(3x^2 - 2x^2) + (3x - 2x)$$
etc.
P: 24 2(2)^3 + (2) -1 / 2-1 16+2-1 / 2-1 17/1 17 I'm sorry if I'm just not getting it
Sci Advisor HW Helper P: 2,002 Alright, let's take some steps back. You are given a function f. It's a machine that eats a number and spits out a (usually different) number. f(x)=2x^3+x tells you the value of the function at each point, it's an equality that holds for each number x. For example: f(1)=2(1)^3+1=2+1=3 f(5)=2(5)^3+5=2(125)+5=255 So if you want to calculate [f(2)-f(1)]/(2-1) you have to calculate f(2) and f(1). I already did f(1) for you above. Now you do f(2) and calculate [f(2)-f(1)]/(2-1)
P: 24 2(2)^3 + (2) - 2(1)^3 +1 / 2-1 =18-3 / 2-1 =15/1 =15
Sci Advisor HW Helper P: 2,002 Right that's correct. BTW: Mind your brackets: -2(1)^3+1 is not the same as -(2(1)^3+1)
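The computation settled in this thread is easy to check numerically (a minimal sketch):

```python
# Average rate of change of f on [a, b]: (f(b) - f(a)) / (b - a).
def f(x):
    return 2 * x**3 + x

def average_rate_of_change(f, a, b):
    return (f(b) - f(a)) / (b - a)

print(average_rate_of_change(f, 1, 2))  # (18 - 3) / 1 = 15.0
```

This also makes the conceptual point of the thread concrete: the result is just a number, computed from the values f(1) = 3 and f(2) = 18, with no x left in it.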
P: 24 Great! Thanks for your help! I have another one, maybe I should start a new post
https://mathoverflow.net/questions/273067/who-computed-the-third-stable-homotopy-group
# Who computed the third stable homotopy group?
I have spend some time with the geometric approach of framed cobordisms to compute homotopy classes, due to Pontryagin. He computed $\pi_{n+1}(S^n)$ and $\pi_{n+2}(S^n)$. After surveying the literature (not too deeply) I was under the impression that the computation of $\pi_{n+3}(S^n)\cong \mathbb{Z}/24\mathbb{Z}$ for $n\rightarrow \infty$ with similar methods is due to Rohlin in the following paper:
MR0046043 (13,674d) Reviewed
Rohlin, V. A.
Classification of mappings of an (n+3)-dimensional sphere into an n-dimensional one. (Russian)
56.0X
It came a bit of a surprise to me that in the review of this paper on Mathscinet, Hilton states that the results in this paper are incorrect. Does the error only concern the unstable groups? Is it fair to cite this paper for the first computation of the third stable homotopy group of spheres, or should I cite papers by Barrat-Paechter, Massey-Whitehead and Serre? As I understand it these methods are much more algebraic and further removed from the applications that I have in mind.
• Was it really Pontryagin who was the first to compute π_n(S^n)? – Dmitri Pavlov Jun 27 '17 at 11:19
• @DmitriPavlov: Good question. I guess not, probably it was Hopf in "Abbildungsklassen $n$-dimensionaler Mannigfaltigkeiten". – Thomas Rot Jun 27 '17 at 12:52
• And for the two-dimensional case it was probably Brouwer. – Thomas Rot Jun 27 '17 at 16:00
The error is that Rokhlin claimed that $\pi_6(S^3)=\mathbb{Z}/6$, but Hilton, in his review, points out that the paper instead shows that $\pi_6(S^3)/\pi_5(S^2) = \mathbb{Z}/6$. The error lies in a prior calculation (reviewed here) that Rokhlin claimed showed $\eta^3=0$, but in fact this element is 2-torsion.
Rokhlin corrects his mistake and calculates the stable homotopy group $\pi_3^s$ in
Rohlin, V. A. MR0052101
New results in the theory of four-dimensional manifolds. (Russian)
The review states that this result "agrees with, and were anticipated by, results of Massey, G. W. Whitehead, Barratt, Paechter and Serre." Serre's CR note Sur les groupes d'Eilenberg-MacLane. C. R. Acad. Sci. Paris 234, (1952). 1243–1245 (BnF) found the correct $\pi_6(S^3)$ by homotopical means. Barratt and Paechter found an element of order 4 in $\pi_{3+k}(S^k)$ when $k\geq 2$.
The reference to Massey-Whitehead is a result presented at the 1951 Summer Meeting of the AMS at Minneapolis; all we have is the abstract in the Bulletin of the AMS 57, no. 6
If one wants to analyse 'dates received' to establish priority, then by all means.
• Note that Massey-Whitehead only found the order of $\pi_3^s$, not that fact it is cyclic. So Rokhlin it is! – David Roberts Jun 26 '17 at 22:54
• This clears up the history to me. I was most interested in the geometric viewpoint, and it does seem it is due to Rohlin. Thank you very much for your effort. – Thomas Rot Jun 26 '17 at 23:07
The mistake is corrected in [Rohlin, V. A. New results in the theory of four-dimensional manifolds. (Russian) Doklady Akad. Nauk SSSR (N.S.) 84, (1952). 221–224, MR0052101].
This and other matters are discussed in the monograph
[À la recherche de la topologie perdue. (French) [Remembrance of topology past] I. Du côté de chez Rohlin. II. Le côté de Casson. [I. Rokhlin's way. II. Casson's way] Edited by Lucien Guillou and Alexis Marin. Progress in Mathematics, 62. Birkhäuser Boston, Inc., Boston, MA, 1986, MR0900243]
where Rohlin's four papers are reproduced with comments. This book was also translated into Russian (I own a copy) but it seems not into English.
• French will be easier than Russian. Thank you for both references. – Thomas Rot Jun 26 '17 at 23:07
• @ThomasRot, "French will be easier than Russian." -- я не думаю [I don't think so]. – Wlod AA Jun 26 '17 at 23:40
• @WlodAA "@ThomasRot, "French will be easier than Russian." -- я не думаю." Ваш пробег может варьироваться [your mileage may vary] – David Roberts Jun 27 '17 at 0:09
• Мне нравится google translate. [I like Google Translate.] – Thomas Rot Jun 27 '17 at 8:24
https://www.physicsforums.com/threads/level-sets-as-smooth-curves.546933/
# Level sets as smooth curves
I have difficulty understanding the following Theorem
If U is open in $ℝ^2$, $F: U \rightarrow ℝ$ is a differentiable function with Lipschitz derivative, and $X_c=\{x\in U|F(x)=c\}$, then $X_c$ is a smooth curve if $[\operatorname{D}F(\textbf{a})]$ is onto for $\textbf{a}\in X_c$; i.e., if $$\big[ \operatorname{D}F\bigl( \begin{smallmatrix}a \\ b\end{smallmatrix}\bigr)\big]≠0 \mbox{ for all } \textbf{a}=\bigl( \begin{smallmatrix}a \\ b \end{smallmatrix}\bigr)\in X_c$$
I don't understand why the differential of F at a being onto is equivalent to saying the differential is not zero. Can someone explain? Thanks
HallsofIvy
Think about what happens for simple functions like $f(x)= x^2$ where f'(x)= 0.
Quote by HallsofIvy: Think about what happens for simple functions like $f(x)= x^2$ where f'(x)= 0.
The differential of $f(x)=x^2$ is $2x$, so $f'(x)=0$ means $x=0$.
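To spell out the equivalence the question asks about (our elaboration, not from the thread): here $[\operatorname{D}F(\textbf{a})]$ is the $1 \times 2$ Jacobian matrix, i.e. a linear map from $\mathbb{R}^2$ to $\mathbb{R}$,

```latex
[\operatorname{D}F(\textbf{a})]
  = \begin{pmatrix} D_1 F(\textbf{a}) & D_2 F(\textbf{a}) \end{pmatrix} .
```

If either partial derivative is nonzero, the image of this map contains some $c \ne 0$, hence by linearity every multiple $\lambda c$, i.e. all of $\mathbb{R}$, so the map is onto. If both partials vanish, the image is $\{0\}$ and the map is not onto. So for a real-valued $F$ of two variables, "onto" and "$\ne 0$" are the same condition.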
https://math.stackexchange.com/questions/3139879/why-are-monoids-not-treated-in-most-algebra-courses
# Why are monoids not treated in most algebra courses?
I’ve taken a look at a number of introductory books on abstract algebra. They all treat groups, rings, and fields, and many of them treat galois theory, linear algebra, algebras over fields.
But none of them treat monoids as a general class (only groups as a special case). Why is this? Why are monoids not considered an essential part of an algebra course, given that they are very general and appear often without inverses, and basically underlie category theory?
• Since every group is a special monoid, they are treated. But the point is that groups appear more often and provide more structure. – James Mar 8 at 8:51
• Groups are easier to work with when they come alone, to deal with monoids or semigroups (which appear very often in nature as well) you often need other kinds of structure, like a topology for instance, so they can hardly appear in an abstract algebra course – Max Mar 8 at 8:58
• @Max, if that is indeed the explanation (that you need other kind of structure to work with them), then I’d really like to get an illustration of that. I.e. an illustration of a case where having this extra structure makes them workable but omitting the structure would make them non workable. – user56834 Mar 8 at 9:02
• I don't know enough about this to know that it's the explanation (though it goes hand in hand with Qiaochu's answer), but an example is semigroups vs right-continuous compact semigroups : there is a well developped theory of the latter (relating for instance to dynamical systems) whereas bare semigroups are hardly workable. For instance proving that $\gamma\mathbb{N}$ (semigroup of nonprincipal ultrafilters) has an idempotent is best done with topological considerations, and this gives a remarkable proof of Hindman's theorem) – Max Mar 8 at 9:46
• I don't think most category theorists would describe monoids as "underlying category theory". A category is a horizontal categorification of monoids, but this view is usually not that insightful. Viewing a category as a vertical categorification of partially ordered sets is usually much more insightful and even then most category theorists would not say that order theory "underlies" category theory, though it would be a much more defensible position. – Derek Elkins Mar 8 at 10:20
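For readers following the discussion, the definition at stake is simple: a monoid is a set with an associative binary operation and an identity element, and a group is the special case where every element also has an inverse. A minimal sketch in Python (the helper `is_monoid` is hypothetical, checking the laws only on a finite sample of elements):

```python
def is_monoid(elements, op, identity):
    """Check the monoid laws on a finite sample of elements."""
    for a in elements:
        # Identity laws: e * a == a == a * e
        if op(identity, a) != a or op(a, identity) != a:
            return False
    for a in elements:
        for b in elements:
            for c in elements:
                # Associativity: (a * b) * c == a * (b * c)
                if op(op(a, b), c) != op(a, op(b, c)):
                    return False
    return True

# Strings under concatenation form a monoid but not a group:
# no nonempty string has an inverse under concatenation.
print(is_monoid(["", "a", "ab"], lambda s, t: s + t, ""))  # True
# Integers under max, with identity 0 on non-negative samples.
print(is_monoid([0, 1, 5], max, 0))                        # True
# Subtraction fails the identity law (0 - a != a), so no monoid.
print(is_monoid([0, 1, 2], lambda a, b: a - b, 0))         # False
```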
|
2019-05-27 11:21:09
https://gmatclub.com/forum/v-is-the-volume-of-a-cylinder-the-radius-of-the-cylinder-is-3-4-the-221128.html
|
# V is the volume of a cylinder; the radius of the cylinder is 3.4. The
Math Expert
Joined: 02 Sep 2009
Posts: 43831
V is the volume of a cylinder; the radius of the cylinder is 3.4. The [#permalink]
29 Jun 2016, 06:58
Difficulty: 85% (hard). Question stats: 54% (01:52) correct, 46% (01:41) wrong, based on 94 sessions.
V is the volume of a cylinder; the radius of the cylinder is 3.4. The height of the cylinder is 550% more than the radius. Which of the following is true?
A. 100 < V < 300
B. 300 < V < 500
C. 500 < V < 700
D. 700 < V < 900
E. 900 < V < 1100
Manager
Joined: 22 Jun 2016
Posts: 249
V is the volume of a cylinder; the radius of the cylinder is 3.4. The [#permalink]
29 Jun 2016, 07:32
Volume of a cylinder is pi*(r^2)*h.
"550% more than the radius" means h = r + 5.5r = 6.5r, so h = 6.5*3.4 = 22.1
So, volume = 3.14 * 3.4 * 3.4 * 22.1 = 800 (approx)
Last edited by 14101992 on 29 Jun 2016, 08:02, edited 1 time in total.
Senior Manager
Joined: 23 Apr 2015
Posts: 330
Location: United States
WE: Engineering (Consulting)
Re: V is the volume of a cylinder; the radius of the cylinder is 3.4. The [#permalink]
29 Jun 2016, 07:36
Answer is D : 700 < V < 900
Volume of cylinder is $$\pi$$$$r^2$$h
Given r = 3.4 and h is 550% more than r, which means h = 6.5 * r (if 100% more, then it means twice, so..)
So V = $$\pi$$$${3.4}^2$$* 6.5
Director
Joined: 04 Jun 2016
Posts: 642
GMAT 1: 750 Q49 V43
V is the volume of a cylinder; the radius of the cylinder is 3.4. The [#permalink]
29 Jun 2016, 08:43
Senthil1981 wrote:
Answer is D : 700 < V < 900
Volume of cylinder is $$\pi$$$$r^2$$h
Given r = 3.4 and h is 550% more than r, which means h = 6.5 * r (if 100% more, then it means twice, so..)
So V = $$\pi$$$${3.4}^2$$* 6.5
You forgot to multiply 3.4 one more time (value of r)
Retired Moderator
Status: On a mountain of skulls, in the castle of pain, I sit on a throne of blood.
Joined: 30 Jul 2013
Posts: 359
Re: V is the volume of a cylinder; the radius of the cylinder is 3.4. The [#permalink]
29 Jun 2016, 19:41
Bunuel wrote:
V is the volume of a cylinder; the radius of the cylinder is 3.4. The height of the cylinder is 550% more than the radius. Which of the following is true?
A. 100 < V < 300
B. 300 < V < 500
C. 500 < V < 700
D. 700 < V < 900
E. 900 < V < 1100
Height=3.4*6.5=22.1
Volume = pi*r^2*h=pi*3.4^2*22.1= approx 800
Manager
Joined: 22 Feb 2015
Posts: 56
Location: United States
Concentration: Finance, Operations
GMAT Date: 04-01-2015
GPA: 3.98
Re: V is the volume of a cylinder; the radius of the cylinder is 3.4. The [#permalink]
29 Jun 2016, 21:52
Since the answer choices are given as ranges, we can use approximation.
Volume of cylinder is πr^2h
Taking π ≈ 22/7 and r = 3.4, we get r^2 ≈ 12 and h = 6.5 * 3.4 ≈ 21
So 22/7 * 12 * 21 = 792
D. 700 < V < 900
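The arithmetic in the solutions above is easy to verify mechanically. A quick sketch in Python (the variable names are ours, not from the thread):

```python
import math

# r = 3.4, and a height "550% more than the radius" means
# h = r + 5.5 * r = 6.5 * r.
r = 3.4
h = 6.5 * r                # 22.1
V = math.pi * r ** 2 * h   # pi * 11.56 * 22.1 ~ 802.6
print(round(V, 1))         # ~802.6, so 700 < V < 900: answer D
```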
|
2018-02-21 03:13:48
https://web2.0calc.com/questions/help_74980
|
# help
What is the ratio of x to y if: $$\frac{10x-3y}{13x-2y} = \frac{3}{5}$$? Express your answer as a common fraction.
How would I go about solving this problem?
Mar 27, 2019
#1
First, I'd multiply both sides by 5 and then multiply both sides by (13x - 2y).
Sometimes this is called cross-multiplying.
The result of cross-multiplying is: (5)(10x - 3y) = (3)(13x - 2y)
Multiply it out: 50x - 15y = 39x - 6y
Put like unknowns together: 50x - 39x = 15y - 6y
Combine terms: 11x = 9y
Divide both sides by y: 11x/y = 9
Divide both sides by 11: x/y = 9/11, so the ratio of x to y is 9 to 11
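As a sanity check, substituting the smallest integer pair with x/y = 9/11 back into the original equation reproduces 3/5. A short Python sketch:

```python
from fractions import Fraction

# Verify x/y = 9/11 by plugging x = 9, y = 11 into (10x - 3y)/(13x - 2y).
x, y = 9, 11
lhs = Fraction(10 * x - 3 * y, 13 * x - 2 * y)  # (90 - 33) / (117 - 22)
print(lhs)  # 3/5, matching the right-hand side
```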
Mar 27, 2019
|
2019-06-20 08:25:24
https://docs.microsoft.com/en-us/dotnet/fsharp/language-reference/abstract-classes
|
# Abstract Classes
Abstract classes are classes that leave some or all members unimplemented, so that implementations can be provided by derived classes.
## Syntax
// Abstract class syntax.
[<AbstractClass>]
type [ accessibility-modifier ] abstract-class-name =
    [ inherit base-class-or-interface-name ]
    [ abstract-member-declarations-and-member-definitions ]

// Abstract member syntax.
abstract member member-name : type-signature
## Remarks
In object-oriented programming, an abstract class is used as a base class of a hierarchy, and represents common functionality of a diverse set of object types. As the name "abstract" implies, abstract classes often do not correspond directly to concrete entities in the problem domain. However, they do represent what many different concrete entities have in common.
Abstract classes must have the AbstractClass attribute. They can have implemented and unimplemented members. The use of the term abstract when applied to a class is the same as in other .NET languages; however, the use of the term abstract when applied to methods (and properties) is a little different in F# from its use in other .NET languages. In F#, when a method is marked with the abstract keyword, this indicates that a member has an entry, known as a virtual dispatch slot, in the internal table of virtual functions for that type. In other words, the method is virtual, although the virtual keyword is not used in F#. The keyword abstract is used on virtual methods regardless of whether the method is implemented. The declaration of a virtual dispatch slot is separate from the definition of a method for that dispatch slot. Therefore, the F# equivalent of a virtual method declaration and definition in another .NET language is a combination of both an abstract method declaration and a separate definition, with either the default keyword or the override keyword. For more information and examples, see Methods.
A class is considered abstract only if there are abstract methods that are declared but not defined. Therefore, classes that have abstract methods are not necessarily abstract classes. Unless a class has undefined abstract methods, do not use the AbstractClass attribute.
In the previous syntax, accessibility-modifier can be public, private or internal. For more information, see Access Control.
As with other types, abstract classes can have a base class and one or more base interfaces. Each base class or interface appears on a separate line together with the inherit keyword.
The type definition of an abstract class can contain fully defined members, but it can also contain abstract members. The syntax for abstract members is shown separately in the previous syntax. In this syntax, the type signature of a member is a list that contains the parameter types in order and the return types, separated by -> tokens and/or * tokens as appropriate for curried and tupled parameters. The syntax for abstract member type signatures is the same as that used in signature files and that shown by IntelliSense in the Visual Studio Code Editor.
The following code illustrates an abstract class Shape, which has two non-abstract derived classes, Square and Circle. The example shows how to use abstract classes, methods, and properties. In the example, the abstract class Shape represents the common elements of the concrete entities circle and square. The common features of all shapes (in a two-dimensional coordinate system) are abstracted out into the Shape class: the position on the grid, an angle of rotation, and the area and perimeter properties. These can be overridden, except for position, the behavior of which individual shapes cannot change.
The rotation method can be overridden, as in the Circle class, which is rotation invariant because of its symmetry. So in the Circle class, the rotation method is replaced by a method that does nothing.
// An abstract class that has some methods and properties defined
// and some left abstract.
[<AbstractClass>]
type Shape2D(x0 : float, y0 : float) =
    let mutable x, y = x0, y0
    let mutable rotAngle = 0.0

    // These properties are not declared abstract. They
    // cannot be overridden.
    member this.CenterX with get() = x and set xval = x <- xval
    member this.CenterY with get() = y and set yval = y <- yval

    // These properties are abstract, and no default implementation
    // is provided. Non-abstract derived classes must implement these.
    abstract Area : float with get
    abstract Perimeter : float with get
    abstract Name : string with get

    // This method is not declared abstract. It cannot be
    // overridden.
    member this.Move dx dy =
        x <- x + dx
        y <- y + dy

    // An abstract method that is given a default implementation
    // is equivalent to a virtual method in other .NET languages.
    // Rotate changes the internal angle of rotation of the square.
    // Angle is assumed to be in degrees.
    abstract member Rotate: float -> unit
    default this.Rotate(angle) = rotAngle <- rotAngle + angle

type Square(x, y, sideLengthIn) =
    inherit Shape2D(x, y)
    member this.SideLength = sideLengthIn
    override this.Area = this.SideLength * this.SideLength
    override this.Perimeter = this.SideLength * 4.
    override this.Name = "Square"

type Circle(x, y, radiusIn) =
    inherit Shape2D(x, y)
    let PI = 3.141592654
    member this.Radius = radiusIn
    override this.Area = PI * this.Radius * this.Radius
    override this.Perimeter = 2. * PI * this.Radius
    // Rotating a circle does nothing, so use the wildcard
    // character to discard the unused argument and
    // evaluate to unit.
    override this.Rotate(_) = ()
    override this.Name = "Circle"

let square1 = new Square(0.0, 0.0, 10.0)
let circle1 = new Circle(0.0, 0.0, 5.0)
circle1.CenterX <- 1.0
circle1.CenterY <- -2.0
square1.Move -1.0 2.0
square1.Rotate 45.0
circle1.Rotate 45.0
printfn "Perimeter of square with side length %f is %f"
    (square1.SideLength) (square1.Perimeter)
printfn "Circumference of circle with radius %f is %f"
    (circle1.Radius) (circle1.Perimeter)
let shapeList : list<Shape2D> = [ (square1 :> Shape2D);
                                  (circle1 :> Shape2D) ]
List.iter (fun (elem : Shape2D) ->
    printfn "Area of %s: %f" (elem.Name) (elem.Area))
    shapeList
Output:
Perimeter of square with side length 10.000000 is 40.000000
Circumference of circle with radius 5.000000 is 31.415927
Area of Square: 100.000000
Area of Circle: 78.539816
|
2021-11-30 16:39:54
https://gameradvocate.com/post/can_wizards_wear_armor_dnd
|
# Can Wizards Wear Armor in D&D?
I've just read the D&D 5e basic rules, and I'm not sure I'm up to speed regarding casting spells while clad in armor. Because of the mental focus and precise gestures required for spellcasting, you must be proficient with the armor you are wearing to cast a spell.

Does this mean that a character who starts out with a single level of fighter (or any other class with heavy armor proficiency) can from then on gain levels as a wizard and forevermore cast spells, without any penalty whatsoever, while wearing full plate?

For example, Draconic Sorcerers get permanent Mage Armor for free at 1st level, which provides 13 + Dexterity Modifier AC. If you take a mage with high Dexterity, then it only gets worse as the armor gets heavier. In addition, any mundane armor heavier than 20 pounds will give you Disadvantage on Stealth checks, and truly heavy armor won't let you apply your Dexterity Modifier at all. Furthermore, the only spellcasters who have access to metamagic (which, aside from magic gear, was the primary way arcane spell failure was mitigated) are Sorcerers, which is also the class that needs heavy armor the least. Magic gear is much harder to come by now, so the likelihood of finding a magic suit of armor made to be light and maneuverable is much more remote, thus removing yet another way of mitigating arcane spell failure.

It is not unbalanced, because there are many ways to increase the AC of a wizard, from Mage Armor to bracers of defense, etc. Of course, you'd be better off taking 1 level of war domain cleric, but the cost of dipping, in regard to stat improvements or feats, is still there. It's not a departure from D&D tradition, because in all versions, if you picked the right combinations, you could negate arcane spell failure for all intents and purposes. The only people who missed out were those who lacked the system mastery to create the character concept.
On the other hand, the generic wizard always had a hard time casting spells in various types of armor, and in this edition it is not merely difficult but impossible to do so unless you choose the correct rule combinations.

Metal interferes with the correct and proper flow of mana. This could manifest itself in a number of ways: perhaps the armor gets magically charged, and this causes discharges of power that are very dangerous. Metal disrupts the Weave, causing dead zones and preventing wizards from accessing it properly and casting spells correctly. It will actively impede their movement, make them fatigued, and distract them from the fine concentration necessary to cast magic.

While warriors are wearing armor, getting it properly fitted, and tuning their bodies for it, magicians are studying. If a mage were to spend time practicing in armor, then they would miss out on spell study. Just having armor on, with no knowledge of how to move in it or use it to block hits, or the stamina to wear it for long while running around, won't lead to the "armor class improvement" seen in game systems. But will a mage character spend a precious skill on armor instead of a more directly useful magic feat? Even boiled leather or quilted armor, while not that bad when wearing it in the comfort of your house, quickly becomes a big nuisance when you wear it all day tramping through the woods.

To address some comments to my answer: I believe that in an RPG system, armor class is more than just passive resistance. Sure, you could drape a chain mail shirt over someone and presumably it may protect from an arrow hit (kind of like how journalists or prisoners wear ballistic armor today), but in a fight, being unfamiliar with wearing armor means you won't react appropriately.
Movies love to show a belly slash with a sword against someone wearing mail as being a lethal attack, but in reality that would be worse than useless, as it would just dull your sword edge. Rigid armor sets are even worse: they need to be tightened in specific ways and are usually worn over a bulky padded under-layer. So a mage unfamiliar with wearing not just armor, but a specific set of armor, is going to find that a certain set of gestures won't work while casting a spell in combat, to potentially disastrous results.

Clearly, things like bulky winter clothing shouldn't interfere with spellcasting (unless you really want to penalize casters), so there is a bit of hand-waving. Presumably, folks in cold climates have their winter clothes and have practiced in them, modifying jackets and such to allow for proper spellcasting: the least restrictive clothes that allow for the precise gestures and movements required. But unless casting spells involves yoga positions or something similar, it is a bit of a logical tweak to say that padded or leather armor will interfere with casting but a heavy woolen jacket wouldn't.

You could spend a day and a bunch of money getting it custom fit, but without years of fighting practice, a wizard in plate is barely more protected than one not in plate, as the weight of plate would throw off their reactions and make them easy to knock over, and they wouldn't know which vulnerable areas to protect and which areas they can use to block an attack.

For faint signals you need precise design, and, as my uncle the engineer (who built them) put it, it became more of a black art than a science. So your mage, when young, deals with big whomping spells, and as he progresses has to be more and more aware of the surrounding metals. Besides, if you are tossing lightning about, a big metal shell is not the greatest idea. In this case, the magi would be inside the chain mail metal suit, which would make a perfect Faraday cage.
There are YouTube videos of people wearing chain mail suits while Tesla coils send out million-volt lightning bolts all around them, even striking the armor. So if the process of generating the power to create the magic came from some form of electromagnetic energy (drawing lightning from the sky, for instance), the chain mail would prevent it from reaching the magi in order to be harnessed. Nature magic and leather armor are fine, as the druids and shamans will explain.

Metal is heavy; mages tend to spend more time in the library than the gym, and hence would have considerable trouble moving around in 40 lbs of armor. Metal is a wonderful conductor of heat and often doesn't respond well to extremes of temperature. Some metals freeze and shatter, or contract excessively, when cooled.

Necromancers are an entirely different game from your average frost and fire mages. So it's this corruption of an immutable natural form that gives iron (as well as any metals of your choosing) its anti-magic properties.

You need to provide an incentive not to wear armor by making it affect the areas where your wizards are especially good. If you need different types of mages, where some are capable of using armor and others are not, you could vary the kind of rituals. A war mage might in fact need to spend at least 4 hours every day in full plate armor. It helps him concentrate and brings him into a mental state where he is able to quickly react to any danger. When he takes off the armor, he suddenly feels all the tiredness of the day and becomes far less capable of using his magic.

If you don't need to worry about it for reasons of balance (which I'm going to assume is the case, since you're asking here and not on the RPG stack), why not put the cart before the horse? A new apprentice who still hasn't mastered basic protection spells might need to wear armor while learning them or practicing combat magic.
This has also led to a bit of a stigma on wearing armor: if a wizard is wearing armor, most other wizards (or in-the-know folk) will assume she hasn't mastered protection spells yet, so a prideful mage may go unarmored even when they haven't quite mastered protection spells, to avoid the embarrassment of others learning of their lack of skill.
Wearing simple robes is acceptable because that isn't about protection or control, it is about modesty and keeping up cultural norms. (Source: nase-notig.icu) Culturally people are taught from a young age that wearing armor is incompatible with casting spells, it has become part of the zeitgeist, it is accepted lore. In a culture where everyone wears armor, not for protection, but for style, or tribal identity, or rank, then 1 won't apply. If they were confident enough in their skills, they could wear it as an intimidation tactic: “I am so powerful the normal rules do not apply to me. One could become deranged and actually believe that armor and magic are still incompatible for normal people, but I'm special.$\begingroup$I've been beaten to the punch regarding the “iron has anti-magic properties” thing so instead, here's an alternate explanation based on one of the other rules: You can throw around fireballs, create walls of earth, fly through the sky, ... Maybe a smaller, well-placed pillar of earth to knock the guy's sword arm out of the way. Alternately, maybe a “Shield” spell exists, that creates a protective sphere around the caster. This mechanism also adds a neat approach to the lore behind retaining spells. You can recognize a high-level wizard because there seems to be a slight wind going through his robes at all times, and you see a magical spark-light effect if he holds a shield for too long as the energies struggle to escape him and are aided by the shield's interference. (Source: www.pinterest.com) Clerics don't draw magic in through their bodies, they're given it by their gods, sort of like they have a breathing tube. And druids, if we want to allow them leather, have a more efficient mechanism of drawing magic that isn't blocked by non-metal armor, but only allows them to draw in very specific types of magic that can permeate through leather. 
Those effects are really rough on things like heavy armor, often reducing it to an unusable state after only a day or two of a standard adventurer's workload of spell casting. Higher-powered mages can afford this better, but they also throw around more powerful spells, which means it tends to happen (much) faster. Basically, while they technically could wear armor, it's not worth it for the sheer cost of keeping the stuff repaired... and that means that it's not worthwhile for the mages to learn to wear the armor properly in the first place. Independently wealthy mages have also died a time or two in the past when their armor got warped by spacial magic into a cage of inward-pointing spikes. Additionally, not all armors are metal, and hide armor can still can interfere with spells. The more armor you wear, the more likely that a given body part you might need to “open” up to the ether via gesture to draw power is covered and less likely to get you into the proper position to complete a spell. For example, perhaps your spell gestures require you to place your hand on a central chakra/chi node/magical organ, simply, you have to touch your chest. Wearing armor means you can 't effectively complete this gesture, and even some thick leather might occasionally catch you up in the complex act of tapping into arcane energies. Note that in fantasy settings there frequently are beings with powers beyond that of the gods. They may be called “Era”, “AO” or “Dungeon Master”; in any case they are beyond caring for the worship of mortals, and care mainly for maintaining a form of balance. It is in their interest to ensure that the universe is not dominated forever by a single wizard with mind control and necromancy powers. Thus, they shape the arcane powers to give wizards at least one Achilles heel, so there is always hope that such a tyrant would be overthrown. Although wizards draw on the arcane power created by the over deity, they have no need to pray. 
Nevertheless, the rules of reality that so limit wizards were created by an overdeity with an explicit interest in balance.

There's nothing forbidding a caster to wear heavy armor ... unless they want to boil inside the breastplate. That's because channeling mana through our bodies is quite an exothermic reaction, and it's easier to bear if you're wearing light, fluffy, wavy clothes.

Depending on your setting, there might simply be some analogy to the Geneva Conventions in place prohibiting mages from wearing armor at all. A historical example of such deliberate restrictions can be found in medieval times: "We prohibit under anathema that murderous art of crossbowmen and archers, which is hateful to God, to be employed against Christians and Catholics from now on."

The armor might refract the magical energy too, into directions that are difficult to control. If the staff had a magical wave speed that was in between the wave speed of the user's body and the air, then there would be less reflection at the boundary between the magic user and the air, and so there would be less energy lost in casting the spell.

Movement: you need precise control of mind, spirit and... body. It takes pure materials and "good" energy to make proper clothing for a mage (in addition to heavy materials being bad and impure materials actually blocking magic)... that's why silk is better than wool... and why druids are known to dance naked under the full moon. Magic pulls that energy through you... the last thing you want to do is be grounded... (Further note: don't use swords.) I suddenly get an image of a mage with a parabolic metallic reflector cowl behind his head.

It could be that the souls of those capable of wielding magic are vulnerable to influence from spirits (e.g. the main villain of the first book of the Inheritance series, Durza, was overwhelmed by spirits).
Could be, every magic sensitive in the realm is provided abjuration protecting against spiritual assault. The natural physical abjuration might weaken the spiritual one, like a drug interaction, leaving the magician vulnerable to having his or her mind consumed. Or, it might be that the (pure) unbound body, robes of rank, and other accouterments are a usually-not-talked-about somatic and material component to most (if not all) magic. This pulls in ideas from modern physics, where particles communicate light (photons), gravity (gravitons), and mass (the Higgs boson). Could be that scary armor scares away, or a strong natural abjuration drives away, the spirits of most utility to the average magic practitioner. This could also help explain magic-poor areas (if they exist in your world). But this approach is a supernatural restatement of the "metal short circuits/Faraday cages magic electricity" suggestion. To do that, we could simply say that wizard robes are woven of a special material, say, unicorn hair. While there might be metals with the same capacity to conduct MANA, you could easily restrict that property to the most special, but also most clumsy metals, like gold or electrum, a melding of gold and silver present in many fantasy universes. Mithril is often used as a metal that's very lightweight and incredibly sturdy, and certainly you'd have a hard time justifying that it cannot conduct MANA since it's one of the unique materials in existence, but it's also incredibly rare. In most fantasy stories where mithril is present, only the most skilled dwarven metallurgists are able to produce it, to say nothing of the fact that the needed metals are among the rarest as well. And as a matter of fact, you don't even need to include mithril in your universe (although most people would probably be a little bummed if you didn't).
Their primary offense is magic and, while they might use weaponry, they are probably best advised to run away because they will be facing those much better trained and, in any case, a good (or even mediocre) wizard is far more valuable than an ordinary soldier. Their best defense for most practical purposes is to hide behind something like a wall or in a good solid tower. Light armor may help, if necessary, to protect a wizard out in the open from ranged weapons, but may be counter-productive if it significantly slows him in running for cover or prevents him from crouching behind whatever may give protective shelter. Spells cannot pass through armor (unless it is specialized for it): when a spell hits a hard material, such as metal, it takes effect there (just like when electricity reaches a component in a circuit). Staves are not made from metal, because metal stops the energy from flowing (an electrical ground), while formerly alive materials such as wood or bone are used to channel flows of magical energy. A mail armor mesh could stop the flow of energy from the caster, just like a cage does for an electrical discharge. With added size and weight, there is a chance that the spell will discharge into the caster and not the intended target. Many magic systems require some sort of sense to cast the spell successfully. In my world I compensate for this by using rare, highly conductive materials (MANA crystals, gold, silver) or using enchantments and runes to artificially infuse the material with magic and thus make it conductive. If we try to apply game rules to the in-game universe logic, wizard spellcasting is primarily a mental exercise, as indicated by dexterity being of no relevance, as it would be if an intricate and complex set of movements were required.
In addition, not all spells in the wizard's list have somatic components, i.e. they don't require movement, and in fact some bards can ignore the adverse effects of armor, presumably because their spells are "simpler". The other side has enchanted swords, and they will cut through your unenchanted armor like butter.
From the little things, protective magic placed on them by midwives, mothers, fathers. Wizards are people who collect magical secrets in the form of spells.
In order to cast these spells, any enchanted items you have on you have to be taken into account. The armor that a wizard must wear is enchanted with effects that help them cast their spells, and the “weapons” (staffs, wands, orbs) likewise.
Wouldn't do him much good against anything other than hailstones, as any crude dagger used by an urchin with even a modest enchantment makes your plate armor useless. They either use a different form of magic/spells, and/or they have unique enchantments on their armor/weapons that permit both weapon use and magic use.
Such "spell swords" may not be able to cast as powerful a spell given the same level of training as a traditional wizard.
https://economics.stackexchange.com/questions/25100/how-do-i-find-optimal-price-or-maximise-profit-in-a-monopolistic-market
# How do I find optimal price or maximise profit in a monopolistic market?
How do I find the optimal price for a monopolist given the monopolist's cost function and market demand?
I have $$Profit(y) = p*y + C(y)$$ where $$p$$ is price, $$y$$ is output, and $$C(y)$$ is total cost.
Then I set the derivative of $$Profit$$ (with respect to y) equal to 0 and solve for $$p$$. Is that correct?
• If $C(y)$ is the cost function, why are you adding it to profits? Shouldn't you be subtracting it? As stands, this suggests that as costs increase, you make more profit. I take it that $y$ is the quantity of output. $profit = p*y - C(y)$ would make more sense as an equation. Also, what is your demand function? You say that you have one, but you don't include it. It should look something like $y = D(p)$ only with the right side an actual function on $p$. Consider the possibility that marginal profit might be zero at multiple points. – Brythan Oct 20 '18 at 0:54
## 1 Answer
$$\textbf{In short}$$
No, there are a number of issues. See section 1 below for the solution, and section 2 for some intuition behind why this process works.
$$\textbf{section 1 - Potential Problems}$$
$$\underline{Objective \ function \ setup}$$
Your objective function is the function that you are interested in; in your question it is profit.
It appears that there is a wrong sign on your cost function. Costs take away from profits, they don't add to profits. i.e. $$\pi = revenue \color{red}{-} costs, \ where\ \pi \ is \ profit$$
$$\underline {Inverse \ Demand, y(p)=a-bp}$$
Another issue is that in a typical monopoly question, output (i.e. y) is a function of price (i.e. $$y(p)$$) and, inversely, price is a function of output (i.e. $$p(y)$$).
This occurs because there is one firm in the market, and thus the firm faces the entire market demand. This means that the firm $$\textbf{is not a price taker}$$, but every price it sets has an effect on the quantity demanded, and results in the firm's price $$\textbf{being the demand curve}$$ (called the inverse demand curve).
For example.
Suppose we had a market with consumers having a demand function of $$y_d(p)=a-bp$$
Thus, this would result in the objective function
$$max_{p} \ \pi = p\times y_d(p) - C(y)$$
One issue with this is that your cost function is now a function of price, because it is a function of output, which is in turn a function of price.
i.e. $$C(y(p))$$. To solve this you would need to (a) differentiate implicitly or (b) substitute in the functional form of y(p) (e.g. $$C(a-bp)$$, if available); the chain rule handles the resulting derivative.
$$\textbf{Now we have a valid profit function for a monopoly!!}$$
$$max_{p} \ \pi = \color{red}{ p \times y_d(p)} - \color{blue}{C(a-bp)}$$
now we can optimise;
$$\underline{First \ order \ conditions}$$
$$\frac{\partial \pi}{\partial p} = \color{red}{(1\times y_d(p) + p \times y_d'(p) )} - \color{blue}{\frac{d}{dp}(C(a-bp))}$$ NB: you have to apply the product rule to differentiate $$p \times y_d(p)$$, hence the term in red on the left, and the chain rule to differentiate $$C(a-bp)$$.
Set $$\frac{\partial \pi}{\partial p} = 0$$ and solve for p.
By setting this equal to zero, $$\color{red}{marginal \ revenue}$$ equals $$\color{blue}{marginal \ cost}$$ with respect to (in terms of) p. Thus, this is where profit is maximised.
$$\textbf{voila}$$
Now you have the optimal price
$$\textbf{section 2 - Intuition}$$
This works because you are using the process of optimisation on the profit function, to find the best price.
The process of taking the max or min of a function is referred to as optimisation in mathematics. It helps find the largest (max) or smallest (min) value of the objective function (profit in your case). Taking the derivative treats your objective function as a curve with respect to the variable you are interested in, and allows you to find the peaks (max values) and the troughs (min values) of that curve. Hence in your example, you are treating profit as a curve and finding the price p at which that curve reaches its largest value.
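The first-order-condition recipe above can be checked numerically. The sketch below assumes, purely for illustration, a linear demand $$y_d(p)=a-bp$$ with a = 100, b = 2 and a linear cost $$C(y)=cy$$ with c = 10; none of these numbers come from the question.

```python
# Sketch: verify the FOC-derived optimal price against a brute-force search.
# Demand y_d(p) = a - b*p and cost C(y) = c*y are illustrative assumptions.
a, b, c = 100.0, 2.0, 10.0

def profit(p):
    y = a - b * p           # quantity demanded at price p
    return p * y - c * y    # revenue minus cost

# Solving d(pi)/dp = a - 2*b*p + b*c = 0 gives the closed form:
p_star = (a + b * c) / (2 * b)

# Brute force over a fine price grid as a sanity check:
grid = [k / 1000 for k in range(0, 50001)]
p_best = max(grid, key=profit)

print(p_star)   # 30.0
print(p_best)   # 30.0
```

With these numbers the monopolist sells y = 40 units at p = 30 for a profit of 800; changing the parameters moves the brute-force optimum exactly as the closed form predicts.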
https://www.qb365.in/materials/stateboard/11th-standard-maths-english-medium-free-online-test-1-mark-questions-2020-part-eight-8498.html
#### 11th Standard Maths English Medium Free Online Test 1 Mark Questions 2020 - Part Eight
11th Standard
Maths
Time : 00:10:00 Hrs
Total Marks : 10
10 x 1 = 10
1. If A = {(x, y) : y = $e^x$, x ∈ R} and B = {(x, y) : y = $e^{-x}$, x ∈ R}, then n(A∩B) is
(a) Infinity (b) 0 (c) 1 (d) 2
2. If A = {x : x is an integer, $x^2 \le 4$}, then the elements of A are
(a) A = {-1, 0, 1} (b) A = {-1, 0, 1, 2} (c) A = {0, 2, 4} (d) A = {-2, -1, 0, 1, 2}
3. Solve $\sqrt{7+6x-x^2}=x+1$
(a) (1, -3) (b) (3, -1) (c) (1, -1) (d) (3, -3)
4. The value of log 1 is
(a) 1 (b) 0 (c) $\infty$ (d) -1
5. If $\Sigma n=210$ then $\Sigma { n }^{ 2 }$ =
(a) 2870 (b) 2160 (c) 2970 (d) none of these
6. If a vertex of a square is at the origin and one of its sides lies along the line 4x + 3y - 20 = 0, then the area of the square is
(a) 20 sq. units (b) 16 sq. units (c) 25 sq. units (d) 4 sq. units
7. The vector in the direction of the vector $\hat{i}-2\hat{j}+2\hat{k}$ that has magnitude 9 is
(a) $\hat{i}-2\hat{j}+2\hat{k}$ (b) $\frac { \hat { i } -2\hat { j } +2\hat { k } }{ 3 }$ (c) $3(\hat{i}-2\hat{j}+2\hat{k})$ (d) $9(\hat{i}-2\hat{j}+2\hat{k})$
8. Assertion (A): $\overset { \rightarrow }{ a } ,\overset { \rightarrow }{ b } ,\overset { \rightarrow }{ c }$ are the position vectors of three collinear points, then $2\overset { \rightarrow }{ a }=\overset { \rightarrow }{ b } +\overset { \rightarrow }{ c }$.
Reason (R): Collinear points have the same direction.
(a) Both A and R are true and R is the correct explanation of A (b) Both A and R are true but R is not a correct explanation of A (c) A is true but R is false (d) A is false but R is true
9. If P(A) = $\frac { 1 }{ 2 }$, P(B) = $\frac { 1 }{ 3 }$ and P(A/B) = $\frac { 1 }{ 4 }$, then $P(\bar { A } \cap \bar { B } )$ =
(a) $\frac { 1 }{ 12 }$ (b) $\frac { 3 }{ 4 }$ (c) $\frac { 1 }{ 4 }$ (d) $\frac { 3 }{ 16 }$
10. Two dice are thrown. Given that the sum of the numbers on the dice was less than 6, the probability of getting a sum of 3 is
(a) $\frac { 1 }{ 18 }$ (b) $\frac { 5 }{ 18 }$ (c) $\frac { 1 }{ 5 }$ (d) $\frac { 2 }{ 5 }$
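A quick numerical sanity check of three of the answers above (a sketch; the numbering follows the test):

```python
import math

# Q3: squaring sqrt(7 + 6x - x^2) = x + 1 gives x^2 - 2x - 3 = 0,
# i.e. x = 3 or x = -1; both satisfy the original equation -> option (b).
for x in (3, -1):
    assert math.isclose(math.sqrt(7 + 6 * x - x * x), x + 1)

# Q5: sum 1..n = 210 forces n = 20, so the sum of squares is 2870 -> option (a).
n = 20
assert sum(range(1, n + 1)) == 210
print(sum(k * k for k in range(1, n + 1)))   # 2870

# Q7: (1, -2, 2) has length 3, so scaling by 3 gives magnitude 9 -> option (c).
v = (3, -6, 6)
print(math.sqrt(sum(c * c for c in v)))      # 9.0
```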
https://physics.stackexchange.com/questions/576559/the-indistinguishability-postulate
# The Indistinguishability Postulate
What is often called the Indistinguishability Postulate is expressed in (at least) two different ways depending on the textbook.
For any normalized composite state $$|\psi \rangle$$ of N identical particles in $$H^{N}$$, any observable O on $$H^{N}$$, and any permutation operator $$P$$ in the permutation group $$S_{N}$$,
1. $$\langle \psi | O | \psi \rangle = \langle \psi |P^{\dagger}OP|\psi\rangle$$. (That is, $$[P, O] = 0$$.)
OR
1. $$\langle \psi|O| \psi \rangle = \langle P \psi |O | P \psi \rangle$$.
I take it that 1 and 2 are equivalent, that is, 1 is true if and only if 2 is true. But apparently, they are saying two different things. 1 is a restriction on which operators can represent observables, and 2 is a restriction on which vectors can represent physical states. How can we show that 1 and 2 are actually equivalent?
Furthermore, is there any reason to prefer one over the other as a better articulation of the postulate?
$$| P \psi \rangle = P | \psi \rangle$$ by definition. Hence $$\langle P \psi | = \langle \psi | P^\dagger .$$ So $$\langle P\psi|O| P\psi \rangle = \langle \psi | P^\dagger O P | \psi \rangle$$ by definition. That is, the meaning of the notation on the left hand side is given by the expression on the right hand side. So your question is not looking at two different expressions, but two ways of writing the same expression.
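The point that the two expressions are literally the same can also be seen in a toy finite-dimensional model. The sketch below uses two qubits (so the composite space has dimension 4), takes P to be the SWAP permutation (which satisfies $$P^\dagger = P$$), and picks an arbitrary Hermitian O and normalized $$|\psi\rangle$$; the specific matrices are illustrative choices, not taken from the question.

```python
# Tiny finite-dimensional check that <P psi| O |P psi> equals
# <psi| P^dagger O P |psi>: two qubits (dim 4), P = SWAP.

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def inner(u, v):  # <u|v>, with conjugation on the bra
    return sum(ui.conjugate() * vi for ui, vi in zip(u, v))

SWAP = [[1, 0, 0, 0],
        [0, 0, 1, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1]]

O = [[2, 1, 0, 0],        # an arbitrary Hermitian observable
     [1, 3, 0, 1j],
     [0, 0, 1, 0],
     [0, -1j, 0, 2]]

psi = [0.5, 0.5j, 0.5, -0.5]          # normalized: sum of |c|^2 is 1

Ppsi = matvec(SWAP, psi)
lhs = inner(Ppsi, matvec(O, Ppsi))    # <P psi | O | P psi>
rhs = inner(psi, matvec(SWAP, matvec(O, matvec(SWAP, psi))))  # P^dag = P here

print(abs(lhs - rhs) < 1e-12)  # True
```

Both sides agree to machine precision, as they must: the right-hand side is just the left-hand side with the bra expanded.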
https://ncatlab.org/nlab/show/plethory
nLab plethory
Idea
The concept plethory derives from that of plethysm (Borger-Wieland 05, p. 2), a certain form of composition in the theory of symmetric functions.
In what follows, all rings are assumed to be commutative and unital.
Since the ring $\mathbb{Z}[x]$ of integer-coefficient polynomials in one variable is the free ring on $1$, we have an isomorphism $R \stackrel \sim \to \mathsf{Ring}(\mathbb{Z}[x], R)$ which sends an element $r \in R$ to evaluation at $r$, $p(x) \mapsto p(r)$. If we let $R$ also be $\mathbb{Z}[x]$, this gives exactly “substitution” or “composition” of polynomials. A plethory is a ring which carries a substitution structure. The ring $\Lambda$ of symmetric functions is another example.
The data of a plethory is also what is needed to represent a “natural operation” on the category of rings. For example $\mathsf{Ring}(\mathbb{Z}[x],-)$ is the identity functor on $\mathsf{Ring}$, and $\mathsf{Ring}(\Lambda,-)$ sends every ring to its ring of Witt vectors.
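The substitution structure on $\mathbb{Z}[x]$ from the motivating example can be sketched concretely. Below, polynomials are encoded as coefficient lists (an illustrative encoding, not nLab notation), and the check confirms that evaluating $p \circ q$ at $r$ agrees with evaluating $p$ at $q(r)$, i.e. that evaluation is compatible with substitution.

```python
# A concrete look at the motivating isomorphism: a ring map Z[x] -> R is
# evaluation at some r in R, and precomposing with another polynomial is
# substitution p(q(x)).  Polynomials are coefficient lists [a0, a1, ...].

def peval(p, r):
    return sum(c * r ** k for k, c in enumerate(p))

def pcompose(p, q):
    # p(q(x)) by Horner's rule over coefficient lists
    out = [0]
    for c in reversed(p):
        # out = out * q + c
        prod = [0] * (len(out) + len(q) - 1)
        for i, a in enumerate(out):
            for j, b in enumerate(q):
                prod[i + j] += a * b
        prod[0] += c
        out = prod
    return out

p = [1, 0, 1]   # 1 + x^2
q = [0, 2]      # 2x
r = 3
# evaluating p(q(x)) at r agrees with evaluating p at q(r):
print(peval(pcompose(p, q), r), peval(p, peval(q, r)))  # 37 37
```

Here $p \circ q = 1 + 4x^2$, and both routes give $1 + 4 \cdot 9 = 37$.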
Definitions
A biring is a (commutative) ring object in $\mathsf{Ring}^{op}$. The category of birings is $\mathsf{Ring}(\mathsf{Ring}^{op})^{op}$. The extra op ensures that a biring homomorphism is a ring homomorphism, not a reversed ring homomorphism.
The category of birings is equivalent to $\mathsf{LAdj}(\mathsf{Ring},\mathsf{Ring})$, the category of left adjoints $\mathsf{Ring} \to \mathsf{Ring}$, and also to $\mathsf{RAdj}(\mathsf{Ring},\mathsf{Ring})^{op}$, opposite of the category of right adjoints $\mathsf{Ring} \to \mathsf{Ring}$. Under this equivalence, the category of birings inherits a monoidal structure induced from endofunctor composition. The monoidal product is called the substitution product, denoted by $\odot$. The unit object is the ring of polynomials $\mathbb{Z}[x]$. A generators-and-relations description of $\odot$ can be found in (TW70).
A plethory is a monoid in $(\mathsf{Biring}, \odot, \mathbb{Z}[x])$. Equivalently, a plethory is a right adjoint comonad $\mathsf{Ring} \to \mathsf{Ring}$. Equivalently, a plethory is a left adjoint monad $\mathsf{Ring} \to \mathsf{Ring}$.
Plethory over a ring
For any (commutative) ring $k$, a $k$-plethory is a monoid object in the monoidal category of $k$-$k$-birings, with respect to the composition product. That is, it is a biring $P$ equipped with an associative map of birings $\circ: P \odot_k P \to P$ and unit $k \langle e \rangle \to P$.
In other words,
a $k$-plethory is a commutative k-algebra together with a comonad structure on the covariant functor it represents, much as a k-algebra is the same as a $k$-module that represents a comonad. So, just as a $k$-algebra is exactly the structure that knows how to act on a $k$-module, a $k$-plethory is the structure that knows how to act on a commutative $k$-algebra. (BW05)
References
The idea was introduced here, where it was called a “biring triple”:
• D. Tall and G. Wraith, Representable functors and operations on rings, Proc. London Math. Soc. 3 (1970), 619–643.
The term “plethory” was introduced here:

• J. Borger, B. Wieland, Plethystic algebra, Advances in Mathematics 194 (2005).
Last revised on June 14, 2021 at 05:44:48. See the history of this page for a list of all contributions to it.
http://zbmath.org/?q=an:1235.65083
# zbMATH — the first resource for mathematics
Calculation of eigenvalues and eigenfunctions of a discontinuous boundary value problem with retarded argument which contains a spectral parameter in the boundary condition. (English) Zbl 1235.65083
Summary: A discontinuous boundary-value problem with retarded argument which contains a spectral parameter in the boundary condition and with transmission conditions at the point of discontinuity is investigated. We obtained asymptotic formulas for the eigenvalues and eigenfunctions.
##### MSC:
65L15 Eigenvalue problems for ODE (numerical methods) 34B08 Parameter dependent boundary value problems for ODE 34K10 Boundary value problems for functional-differential equations
http://mathhelpforum.com/business-math/22707-present-value-1-per-week.html
# Thread: Present value of $1 per week

1. ## Present value of $1 per week
Hi folks,
I've been desperately trying to work out the underlying formula for the attached table. If someone could assist me in this, I would be grateful.
Thanks
2. Originally Posted by Scuzzie
Hi folks,
I've been desperately trying to work out the underlying formula for the attached table. If someone could assist me in this, I would be grateful.
Thanks
My first thought would be to graph it and see what it looks like. Have you done this yet?
-Dan
3. Hi Dan, no I haven't graphed it, because what I need is the underlying equation for the attached table; then I can adapt it to calculate different multiples of weeks and parts thereof.
$\sum_{k=1}^{n+1} \frac{1}{1.0011^k}$
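The sum above can be evaluated directly. The sketch below reads the weekly discount factor 1.0011 off the formula (i.e. a rate of 0.11% per week, an assumption about the attached table rather than something stated in the thread) and compares against the closed-form annuity value $\frac{1-(1+i)^{-(n+1)}}{i}$.

```python
# Present value of $1 per week for n+1 weeks at 0.11% per week,
# matching the summation given above (rate inferred, not stated).
def pv_per_week(n, rate=0.0011):
    return sum(1 / (1 + rate) ** k for k in range(1, n + 2))

# Closed-form geometric-series equivalent (standard annuity formula):
def pv_closed(n, rate=0.0011):
    return (1 - (1 + rate) ** -(n + 1)) / rate

print(round(pv_per_week(51), 4))   # PV of 52 weekly payments of $1
print(round(pv_closed(51), 4))     # same value via the closed form
```

The closed form is what you would adapt for "different multiples of weeks and parts thereof": replace the integer exponent with a fractional one.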
http://itsecureone.com/lktj2tzg/the-cosmic-microwave-background-appears-000313
# the cosmic microwave background appears
Posted on December 21, 2020
However, to figure out how long it took the photons and baryons to decouple, we need a measure of the width of the PVF. The acoustic oscillations arise because of a conflict in the photon–baryon plasma in the early universe. The curvature is a quantity describing how the geometry of a space differs locally from the one of the flat space.The curvature of any locally isotropic space (and hence of a locally isotropic universe) falls into one of the three following cases: . The cosmic microwave background (CMB) is thought to be leftover radiation from the Big Bang, or the time when the universe began. [46][52][100] Later analyses have pointed out that these are the modes most susceptible to foreground contamination from synchrotron, dust, and Bremsstrahlung emission, and from experimental uncertainty in the monopole and dipole. The actual temperature of the cosmic microwave background is 2.725 Kelvin. The largest inhomogeneous region detected in the cosmic microwave background map is known as the Cold Spot and has a very slightly lower temperature by about 70 microKelvins (a microKelvin being only a millionth of a degree). ( The radiation is isotropic to roughly one part in 100,000: the root mean square variations are only 18 µK, after subtracting out a dipole anisotropy from the Doppler shift of the background radiation. Explain Olbers’s Paradox And The Resolution.3.Name Two Methods To Measure/estimate The Ages Of Stars. As the universe expands, the CMB photons are redshifted, causing them to decrease in energy. In an ionized universe, such charged particles have been liberated from neutral atoms by ionizing (ultraviolet) radiation. 
term measures the mean temperature and "[107], Assuming the universe keeps expanding and it does not suffer a Big Crunch, a Big Rip, or another similar fate, the cosmic microwave background will continue redshifting until it will no longer be detectable,[108] and will be superseded first by the one produced by starlight, and perhaps, later by the background radiation fields of processes that may take place in the far future of the universe such as proton decay, evaporation of black holes and Positronium decay. As in any science, there is a relationship between theory and experiment in cosmology. Discovery of cosmic microwave background radiation, Cosmic background radiation of the Big Bang, List of cosmic microwave background experiments, Arcminute Cosmology Bolometer Array Receiver, "The Cosmic Microwave Background Radiation", "Clarifying inflation models: The precise inflationary potential from effective field theory and the WMAP data", "Cosmic Microwave Background Radiation Anisotropies: Their Discovery and Utilization", Cosmology II: The thermal history of the Universe, Ruth Durrer, "History of the 2.7 K Temperature Prior to Penzias and Wilson", The Cosmic Microwave Background Radiation (Nobel Lecture) by Robert Wilson 8 Dec 1978, p. 474, "Microwave Background in a Steady State Universe", Monthly Notices of the Royal Astronomical Society, "Converted number: Conversion from K to eV", "Detection of B-mode polarization in the Cosmic Microwave Background with data from the South Pole Telescope", "Scientists Report Evidence for Gravitational Waves in Early Universe", "NASA Technology Views Birth of the Universe", "Space Ripples Reveal Big Bang's Smoking Gun", "Gravitational waves: have US scientists heard echoes of the big bang? Because it fills all space, it is the greatest source of electromagnetic energy in the universe, far more than the light of all the stars. 
Together with other cosmological data, these results implied that the geometry of the universe is flat. = ℓ The baryons in such early Universe remained highly ionized and so were tightly coupled with photons through the effect of Thompson scattering. ( E-modes were first seen in 2002 by the Degree Angular Scale Interferometer (DASI). [51], Since decoupling, the temperature of the background radiation has dropped by a factor of roughly 1100[52] due to the expansion of the universe. 1949 – Ralph Alpher and Robert Herman re-re-estimate the temperature at 28 K. 1957 – Tigran Shmaonov reports that "the absolute effective temperature of the radioemission background ... is 4±3 K". The cosmic background radiation that is believed to be cornerstone of the Big Bang theory and a fundamental basis for the cosmological theory has become a central piece of astronomy. This theory asserts that the early universe was occupied by a hot, dense plasma of photons, electrons and baryons that was opaque to electromagnetic radiation. The estimates would yield very different predictions if Earth happened to be located elsewhere in the universe. Easy to use and portable, study sets in Cosmic Microwave Background are great for studying in the way that works for you, at the time that works for you. Jun 8, 2019 - "What I find cool about being a banned author is this: I'm writing books that evoke a reaction, books that, if dropped in a lake, go down not with a whimper but a splash." You are NOT looking at an explosion with the CMBR. Released in March 2013, this image contains a wealth of information about the properties and history of the Universe for cosmologists to decipher. Observationally, the present-day stellar IMF appears to have an almost universal profile, characterized by a power-law at large masses and flattening below a characteristic mass of ~1 Msolar. One method of quantifying how long this process took uses the photon visibility function (PVF). 
This cosmic background radiation image (bottom) is an all-sky map of the CMB as observed by the Planck mission. The results are broadly consistent with those expected from cosmic inflation as well as various other competing theories, and are available in detail at NASA's data bank for the Cosmic Microwave Background (CMB). WMAP's accurate measurements showed that the early universe was 63 percent dark matter, 15 percent photons, 12 percent atoms, and 10 percent neutrinos. In particular, the spectral radiance at different angles of observation in the sky contains small anisotropies, or irregularities, which vary with the size of the region examined. A number of experiments have measured these anisotropies, including DASI, WMAP, BOOMERanG, QUaD, the Planck spacecraft, the Atacama Cosmology Telescope, the South Pole Telescope and the QUIET telescope. Although WMAP provided very accurate measurements of the large-scale angular fluctuations in the CMB (structures about as broad in the sky as the Moon), it did not have the angular resolution to measure the smaller-scale fluctuations that had been observed by earlier ground-based interferometers. RELIKT-1, a Soviet cosmic microwave background anisotropy experiment on board the Prognoz 9 satellite (launched 1 July 1983), gave upper limits on the large-scale anisotropy.
The temperature variation in the CMB temperature maps at higher multipoles, ℓ ≥ 2, is considered to be the result of perturbations of the density in the early universe, before the recombination epoch. According to their calculations, the high temperature associated with the early universe would have given rise to a thermal radiation field, which has a unique distribution of intensity with wavelength (known as Planck's radiation law) that is a function only of the temperature. Although neutrinos are now a negligible component of the universe, they form their own cosmic background. Even though we cannot see it unaided, we are able to observe this early energy of the universe via the cosmic microwave background (CMB). It is an important source of data on the early universe because it is the oldest electromagnetic radiation in the universe, dating to the epoch of recombination. The surface of last scattering refers to the set of points in space at the right distance from us so that we are now receiving photons originally emitted from those points at the time of photon decoupling. Astrophysicist Michael Hippke of Sonneberg Observatory in Germany and Breakthrough Listen went looking for a possible message encoded in the CMB, translating temperature variations into a binary bitstream; what he recovered appears to be meaningless. The satellite transmitted an intensity pattern in angular projection at a wavelength of 0.57 cm after the subtraction of a uniform background at a temperature of 2.735 K. Bright regions at the upper right and dark regions at the lower left showed the dipole asymmetry.
Based on the 2013 data, the universe contains 4.9% ordinary matter, 26.8% dark matter and 68.3% dark energy. The WMAP team finds that the PVF is greater than half of its maximal value (the "full width at half maximum", or FWHM) over an interval of 115,000 years. The POLARBEAR team reported that its measured B-mode polarization was of cosmological origin (and not just due to dust) at a 97.2% confidence level. Although there were several previous estimates of the temperature of space, these suffered from two flaws. Even in the COBE map, it was observed that the quadrupole (ℓ = 2 spherical harmonic) has a low amplitude compared to the predictions of the Big Bang. The dipole anisotropy and others due to Earth's annual motion relative to the Sun and numerous microwave sources in the galactic plane and elsewhere must be subtracted out to reveal the extremely tiny variations characterizing the fine-scale structure of the CMBR background. In our universe's case, to the best of our knowledge, the leftover glow from the Big Bang is the cosmic microwave background (CMB). In particular, the foregrounds are dominated by galactic emissions such as bremsstrahlung, synchrotron, and dust that emit in the microwave band; in practice, the galaxy has to be removed, resulting in a CMB map that is not a full-sky map. When the CMB originated some 380,000 years after the Big Bang (a time generally known as the "time of last scattering", the period of recombination or decoupling), the temperature of the universe was about 3000 K. This corresponds to an energy of about 0.26 eV, which is much less than the 13.6 eV ionization energy of hydrogen.
The high degree of uniformity throughout the observable universe and its faint but measured anisotropy lend strong support for the Big Bang model in general and the ΛCDM ("Lambda Cold Dark Matter") model in particular. The most prominent of the foreground effects is the dipole anisotropy caused by the Sun's motion relative to the CMBR background. It would be better to measure something this important from space. Primordial gravitational waves are gravitational waves that could be observed in the polarisation of the cosmic microwave background, having their origin in the early universe. The CMB dipole represents the largest anisotropy, which is in the first spherical harmonic (ℓ = 1). On 5 February 2015, new data was released by the Planck mission, according to which the age of the universe is 13.799±0.021 billion years and the Hubble constant is 67.74±0.46 (km/s)/Mpc.[82] This glow is strongest in the microwave region of the radio spectrum. The intensity of the radiation corresponds to black-body radiation at 2.726 K, because red-shifted black-body radiation is just like black-body radiation at a lower temperature; the corresponding photon number density is $(2\zeta(3)/\pi^2)\,T_\gamma^3\approx 411\ \mathrm{cm^{-3}}$. The cosmic microwave background (CMB) is an almost-uniform background of radio waves that fills the universe. The galaxy orbits in the Local Group of galaxies, and the Local Group falls toward the Virgo Cluster of galaxies. A hint of a violation of parity symmetry was found in the cosmic microwave background radiation, the remnant light of the Big Bang. In June 2001, NASA launched a second CMB space mission, WMAP, to make much more precise measurements of the large-scale anisotropies over the full sky.
The first accurate measurements of the CMB were made with a satellite orbiting Earth. 2003 – E-mode polarization spectrum obtained by the CBI. The cosmic microwave background (CMB) radiation is a thermal, quasi-uniform black-body radiation which peaks at 2.725 K in the microwave regime at 160.2 GHz, corresponding to a 1.9 mm wavelength, as in Planck's law. Its discovery is considered a landmark test of Big Bang cosmology. The CMB photons are scattered by free charges such as electrons that are not bound in atoms. These measurements were able to rule out cosmic strings as the leading theory of cosmic structure formation, and suggested cosmic inflation was the right theory. The detailed analysis of CMBR data to produce maps, an angular power spectrum, and ultimately cosmological parameters is a complicated, computationally difficult problem. According to the map, subtle fluctuations in temperature were imprinted on the deep sky when the cosmos was about 370,000 years old. On 17 March 2014 it was announced that the BICEP2 instrument had detected the first type of B-modes, consistent with inflation and gravitational waves in the early universe at the level of r = 0.20+0.07−0.05, which is the amount of power present in gravitational waves compared to the amount of power present in other scalar density perturbations in the very early universe. The energy density in the CMB is only 4×10⁻¹⁴ J/m³. The accidental discovery of the CMB in 1965 by American radio astronomers Arno Penzias and Robert Wilson was the culmination of work initiated in the 1940s, and earned the discoverers the 1978 Nobel Prize in Physics. A full Bayesian analysis of the WMAP power spectrum demonstrates that the quadrupole prediction of Lambda-CDM cosmology is consistent with the data at the 10% level and that the observed octupole is not remarkable.
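Several of the numbers quoted in this article are tied together by black-body physics: the CMB temperature today, the roughly-1100 redshift factor, the ~3000 K recombination temperature, the 0.26 eV photon energy, and the ~411 photons/cm³ number density. A quick consistency check in plain Python (constants in eV units; assumes the standard T ∝ (1 + z) scaling):

```python
import math

# Physical constants in eV-based units
K_B    = 8.617333e-5   # Boltzmann constant, eV/K
HBAR_C = 1.973270e-7   # hbar * c, eV * m
ZETA_3 = 1.2020569     # Riemann zeta(3)

T0 = 2.725             # CMB temperature today, K
z_factor = 1100        # redshift factor quoted in the text

# Temperature scales as (1 + z): ~3000 K at last scattering
T_rec = T0 * z_factor
assert abs(T_rec - 3000) < 10

# Characteristic photon energy then (~0.26 eV, vs 13.6 eV to ionize
# hydrogen) and now (~0.234 meV)
assert abs(K_B * T_rec - 0.26) < 0.01
assert abs(K_B * T0 * 1e3 - 0.234) < 0.002

# Photon number density n = (2*zeta(3)/pi^2) * (k_B*T / (hbar*c))^3 -> ~411 cm^-3
n_cm3 = (2 * ZETA_3 / math.pi**2) * (K_B * T0 / HBAR_C) ** 3 * 1e-6
assert abs(n_cm3 - 411) < 2
```

All three quoted figures fall out of the single measured temperature 2.725 K, which is why the CMB spectrum is such a tight constraint on cosmological models.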
I briefly recall the main properties of the cosmic microwave background. Just as when looking at an object through fog, details of the object appear fuzzy; likewise, because last scattering did not happen instantaneously, the finest details of the CMB are blurred. Photon decoupling happened at a redshift of around z ≈ 1100, when the temperature of the primordial plasma had fallen to about 3000 K: protons and electrons combined to form neutral hydrogen atoms that the thermal radiation could no longer ionize, and the universe became transparent to light. The WMAP team places the "time" at which the photon visibility function P(t) has its maximum at about 372,000 years after the Big Bang. Today kT_γ is equivalent to 0.234 meV, the energy density of the background is about 0.25 eV/cm³ (4.005×10⁻¹⁴ J/m³), and there are roughly 400–500 photons per cm³.

The cosmic microwave background appears almost the same in every direction, with only very minor variations: the temperature is uniform at the level of one part in 10⁴ or 10⁵, and it appears different to observers at different redshifts because they are seeing it at earlier cosmic times. The most prominent foreground is the dipole anisotropy, around 3.3621 ± 0.0010 mK, caused by the peculiar motion of the observer: the Earth orbits the Sun, the Sun orbits the barycenter of the galaxy, the galaxy moves within the Local Group, and the Local Group continually falls toward the Virgo Cluster, all at speeds far less than the speed of light. Two kinds of polarization are distinguished, called E-modes and B-modes: the E-modes arise naturally from Thomson scattering off free electrons, while the first direct measurement of B-mode polarization at 150 GHz was published by the POLARBEAR collaboration, which, similar to BICEP2, observes a smaller patch of sky and is less susceptible to dust effects. The positions and heights of the acoustic peaks carry cosmological information; for example, the ratio of the odd peaks to the even peaks constrains the baryon density. During the later period of reionization, some CMB photons were again scattered by electrons that had been liberated from neutral atoms.

Because Earth's atmosphere is largely opaque at these wavelengths, the most precise measurements are made from space. COBE, launched by NASA in November 1989, carried the Differential Microwave Radiometer that first clearly confirmed the anisotropy; one issue that worried astronomers was whether Penzias and Wilson had merely stumbled onto local noise, but the background was confirmed to be genuine sky radiation with a thermal Planck spectrum. WMAP, launched in 2001, found the universe to be 72.6 percent dark energy, 22.8 percent dark matter, and 4.6 percent atoms, and Planck's maps, displayed on a color scale running from blue (coldest regions) to red (around 2.729 K), will continue to refine our understanding of cosmology. The relict radiation from the primeval fireball permeates the observable universe in every direction, so any proposed model of the universe must explain this radiation; as a result, most cosmologists consider the Big Bang model the best explanation for the cosmic microwave background, whose anisotropies seeded the growth of structure from minor variations in density to the present vast cosmic web of galaxy clusters.
http://lemon.cs.elte.hu/egres/open/Disjoint_spanning_in-_and_out-arborescences
Disjoint spanning in- and out-arborescences
Does there exist a value k such that in every k-arc-connected directed graph D=(V,A), for every node $v\in V$, there is a spanning in-arborescence and an arc-disjoint spanning out-arborescence rooted at v?
https://stats.stackexchange.com/questions/173583/random-forest-for-continuous-response-variable
# Random Forest for continuous response variable
I am running randomForest on a dataset which has a continuous dependent variable. Is there a way to get coefficients of the predictors as in linear regression?
• You can visualize your RF model structure with forestFloor and check how non-linear your fit is and what interactions there are. By learning structure you may be able to create a ridge regression model possible with some transformations of variables and interaction terms. – Soren Havelund Welling Sep 23 '15 at 5:58
In linear regression, you postulate a relationship of the form
$$y = \beta \cdot x + \epsilon$$
and find the coefficients $\beta$ that best fit this postulated relationship to your observed data.
In the absence of such a structured postulated form of the $y$ to $x$ relationship, no terse yet complete summary of the model is possible.
Generally, models that can be completely summarized with a finite, small vector in $R^n$ are referred to as parametric, and those that cannot are dubbed non-parametric. Random forest is a non-parametric model.
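For readers working in Python rather than R, a minimal sketch with scikit-learn (an analogue of the R randomForest setup; the dataset and parameters here are invented for illustration) shows what a forest offers instead of coefficients: per-feature importance scores.

```python
# Python analogue of the R randomForest workflow (illustrative data and
# parameters; assumes scikit-learn is installed). A random forest exposes
# no coefficient vector, but it does expose per-feature importances.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
# Continuous response: features 0 and 1 matter, feature 2 is pure noise
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * rng.normal(size=500)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

assert not hasattr(rf, "coef_")        # nothing like lm()'s coefficients
importances = rf.feature_importances_  # normalized to sum to 1
assert importances[0] > importances[2] and importances[1] > importances[2]
```

Importances rank variables but carry no sign or slope; for effect directions and magnitudes, partial-dependence plots (or, as the comment above suggests, tools like forestFloor in R) are the usual substitute for coefficients.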
https://physics.stackexchange.com/questions/78813/lightcone-singularity-of-a-3-point-function-in-cft
# Lightcone singularity of a 3 point function in CFT
I had a quick question regarding the title of the question. In e.g. 2D CFT (for simplicity), the three-point function of three operators with conformal dimensions $a$, $b$ and $c$ is given as
$$\langle\mathcal{O}_1(x_1)\mathcal{O}_2(x_2)\mathcal{O}_3(x_3)\rangle~=~\frac{c_{123}}{(x_1-x_2)^{a+b-c}(x_2-x_3)^{-a+b+c}(x_1-x_3)^{a-b+c}}$$
Now, I would expect this correlator to show UV divergences when any two of these operators are coincident. But from the RHS, it seems that e.g. for $c>a+b$, there's no light cone singularity for $x_1=x_2$. What am I missing here?
You need to refine your intuition a little bit. If you bring two operators together, you indeed get singular behaviour, which is taken into account by the OPE. The unit operator is the most singular, and the higher the scaling dimension of an operator $\phi$ in the $O_1 \times O_2$ OPE, the less singular its contribution will be. (In your case, the contribution of $O_3$ is regular if $c > a+b$).
In the limit $x_1 \rightarrow x_2$, any 3-pt function $$< O_1(x_1) O_2(x_2) \phi(x_3) >$$ will measure the overlap of the $O_1 \times O_2$ OPE with $\phi$ inserted 'far away' at $x_3$. According to the above paragraph, to see singular behaviour, you need $\phi$ to be a light operator. If $\phi$ is too high in the spectrum (as in your case), you will only measure a term that lives in the tail of the OPE so you'll get a small, regular result.
I hope that this is more or less what you expected - otherwise I can add more details.
EDIT to answer the remaining questions. You have hopefully learned that two-point functions are diagonal, so $$< \phi(x) \phi'(y) > = 0$$ unless $\phi = \phi'.$ So the 3-point function with $O_3$ can only measure the contribution of $O_3$ in the OPE - that's the overlap. It's blind to the presence of any other operators.
To see how singular the contribution of some operator $\phi$ is, just write the leading OPE term explicitly:
$$O_1(x_1) O_2(x_2) \sim \frac{1}{|x_1 - x_2|^z} \left\{\phi(x_2) + \text{descendants} \right\}$$ and we want to determine $z$. But you can just apply a dilatation $x \mapsto \lambda x$ and compare the left and right hand sides, and you'll see that $z = a+b-\Delta$ where $\Delta$ is the dimension of $\phi$. The unit operator has dimension zero, all other operators (in a unitary theory) have positive scaling dimensions.
• Sorry for the late reply. I just checked your answer. I should have thought about it. Getting singularity when two operators are coincident can always be read off from a two-point function, but not always from a three-point function; because as you said, it gives the overlap. Mathematically I understand that for heavy operator $\phi$, it gives a term at the tail of the OPE expansion. – user1349 Sep 29 '13 at 1:34
• So, the more overlap there is, the more singularity it should capture.. right? Also, can you add a few lines as to how to understand that the unit operators are more singular (has more overlap) and heavier operators have less overlap? Also, could you please give me a reference if you can? – user1349 Sep 29 '13 at 1:37
• For a reference, I'd just point to the CFT bible bi Di Francesco and friends - you just need to understand the OPE and the basics of 2- and 3-point functions, and you seem to know all of these things. – Vibert Sep 29 '13 at 11:48
• Yeah.. you are right.. time to open that book.. Thanks again for your time. – user1349 Sep 29 '13 at 11:57
http://mathhelpforum.com/algebra/105579-factorisation.html
1. ## Factorisation
(1 + x + x^2 + x^3)^2 - x^3
2. Hello anshulbshah
Originally Posted by anshulbshah
(1 + x + x^2 + x^3)^2 - x^3
See the attached graph, which shows that $f(x)=(1 + x + x^2 + x^3)^2 - x^3 =0$ doesn't have any real roots, and hence $(1 + x + x^2 + x^3)^2 - x^3$ doesn't have any linear factors.
$f(-1)=f(0)=1$, and there's a minimum turning point between these two values, but this minimum value is greater than zero. You could possibly prove this by calculus.
For $x>0$ and $x<-1$, $f(x)$ increases very rapidly, since the dominant term when the brackets are expanded is $x^6$.
it can be solved by grouping after expanding,
(1 + x + x^2 + x^3)^2 - x^3 = (x^2 + x + 1)(x^4 + x^3 + x^2 + x + 1)
4. Originally Posted by pacman
it can be solved by grouping after expanding,
(1 + x + x^2 + x^3)^2 - x^3 = (x^2 + x + 1)(x^4 + x^3 + x^2 + x + 1)
Thanks for this - I was thinking of a linear factor, of course!
5. Prove: (1 + x + x^2 + x^3)^2 - x^3 = (x^2 + x + 1)(x^4 + x^3 + x^2 + x + 1)
[(1 + x) + (x^2 + x^3)]^2 - x^3 = (1 + x)^2 + 2(1 + x)(x^2 + x^3) + (x^2 + x^3)^2 - x^3,
= (1 + 2x + x^2) + (2x^2 + 2x^3 +2x^3 + 2x^4) + (x^4 + 2x^5 + x^6) - x^3, rearranging
= (1) + (2x) + (x^2 + 2x^2) + (2x^3 +2x^3 - x^3) + (2x^4 + x^4) + (2x^5) + (x^6),
= 1 + 2x + 3x^2 + 3x^3 + 3x^4 + 2x^5 + x^6, then split into overlapping groups of terms with unit coefficients and apply factoring
= (1 + x + x^2) + (x + x^2 + x^3) + (x^2 + x^3 + x^4) + (x^3 + x^4 + x^5) + (x^4 + x^5 + x^6)
= (1 + x + x^2) + x(1 + x + x^2) + x^2(1 + x + x^2) + x^3(1 + x + x^2) + x^4(1 + x + x^2), factoring
= (1 + x + x^2)(1 + x + x^2 + x^3 + x^4).
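If you have sympy available, the identity (and the absence of a linear factor) can be verified symbolically:

```python
# Symbolic verification of the factorisation above (assumes sympy is installed).
from sympy import symbols, expand, factor_list, degree

x = symbols('x')
lhs = (1 + x + x**2 + x**3)**2 - x**3
rhs = (x**2 + x + 1) * (x**4 + x**3 + x**2 + x + 1)

# The two sides expand to the same polynomial
assert expand(lhs - rhs) == 0

# The irreducible factors over the rationals have degrees 2 and 4,
# so there is no linear factor -- consistent with f(x) having no real roots.
_, factors = factor_list(lhs)
assert sorted(degree(f, x) for f, _mult in factors) == [2, 4]
```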
https://www.jobilize.com/physics/section/phase-diagrams-phase-changes-by-openstax?qcr=www.quizover.com
# 13.5 Phase changes (Page 2/15)
Critical temperatures and pressures
| Substance | Critical temperature (K) | Critical temperature (ºC) | Critical pressure (Pa) | Critical pressure (atm) |
| --- | --- | --- | --- | --- |
| Water | 647.4 | 374.3 | $22.12\times{10}^{6}$ | 219.0 |
| Sulfur dioxide | 430.7 | 157.6 | $7.88\times{10}^{6}$ | 78.0 |
| Ammonia | 405.5 | 132.4 | $11.28\times{10}^{6}$ | 111.7 |
| Carbon dioxide | 304.2 | 31.1 | $7.39\times{10}^{6}$ | 73.2 |
| Oxygen | 154.8 | −118.4 | $5.08\times{10}^{6}$ | 50.3 |
| Nitrogen | 126.2 | −146.9 | $3.39\times{10}^{6}$ | 33.6 |
| Hydrogen | 33.3 | −239.9 | $1.30\times{10}^{6}$ | 12.9 |
| Helium | 5.3 | −267.9 | $0.229\times{10}^{6}$ | 2.27 |
## Phase diagrams
The plots of pressure versus temperature provide considerable insight into the thermal properties of substances. There are well-defined regions on these graphs that correspond to various phases of matter, so PT graphs are called phase diagrams. [link] shows the phase diagram for water. Using the graph, if you know the pressure and temperature you can determine the phase of water. The solid lines, the boundaries between phases, indicate temperatures and pressures at which the phases coexist (that is, they exist together in ratios depending on pressure and temperature). For example, the boiling point of water is 100 ºC at 1.00 atm. As the pressure increases, the boiling temperature rises steadily, reaching 374 ºC at a pressure of 218 atm. A pressure cooker (or even a covered pot) will cook food faster because the water can exist as a liquid at temperatures greater than 100 ºC without all boiling away. The curve ends at a point called the critical point, because at higher temperatures the liquid phase does not exist at any pressure. The critical point occurs at the critical temperature, as you can see for water from [link]. The critical temperature for oxygen is −118 ºC, so oxygen cannot be liquefied above this temperature.
Similarly, the curve between the solid and liquid regions in [link] gives the melting temperature at various pressures. For example, the melting point is 0 ºC at 1.00 atm, as expected. Note that, at a fixed temperature, you can change the phase from solid (ice) to liquid (water) by increasing the pressure. Ice melts from pressure in the hands of a snowball maker. From the phase diagram, we can also say that the melting temperature of ice falls with increased pressure. When a car is driven over snow, the increased pressure from the tires melts the snowflakes; afterwards the water refreezes and forms an ice layer.
At sufficiently low pressures there is no liquid phase; the substance can exist as either a gas or a solid. For water, there is no liquid phase at pressures below 0.00600 atm. The phase change from solid to gas is called sublimation. It accounts for large losses of snow pack that never make it into a river, the routine automatic defrosting of a freezer, and the freeze-drying process applied to many foods. Carbon dioxide, on the other hand, sublimates at the standard atmospheric pressure of 1 atm. (The solid form of CO₂ is known as dry ice because it does not melt. Instead, it moves directly from the solid to the gas state.)
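As a quick illustration of reading the critical-temperature table, here is a small sketch (my own addition, not part of the original text): a gas can be liquefied by compression alone only below its critical temperature.

```python
# Critical temperatures in kelvin, taken from the table above.
critical_temperature_K = {
    "water": 647.4, "sulfur dioxide": 430.7, "ammonia": 405.5,
    "carbon dioxide": 304.2, "oxygen": 154.8, "nitrogen": 126.2,
    "hydrogen": 33.3, "helium": 5.3,
}

def can_liquefy(substance, temperature_K):
    """True if pressure alone can liquefy the gas at this temperature."""
    return temperature_K < critical_temperature_K[substance]

print(can_liquefy("oxygen", 293.15))  # room temperature: False
print(can_liquefy("oxygen", 150.0))   # below -118.4 ºC: True
```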
a thick glass cup cracks when hot liquid is poured into it suddenly
because of the sudden contraction that takes place.
Eklu
railway crack has gap between the end of each length because?
For expansion
Eklu
yes
Aiyelabegan
Please, I really find it difficult solving equations in physics; can anyone help me out?
sure
Carlee
what is the equation?
Carlee
Sure
Precious
Fresnel's biprism spectrometer — how is it determined?
How do you study the Hall effect to calculate the Hall coefficient of a given semiconductor, and from it obtain the carrier density and carrier mobility?
Bala
what is the difference between atomic physics and momentum
Find the dimensional equations of work, power, and the moment of a force. Show your work.
Michelson Morley experiment
Calculate the final velocity attained when a ball is given a velocity of 2.5 m/s and an acceleration of 0.67 m/s², and travels for 10 s. Good luck!!!
2.68m/s
Doc
vf = vi + at = 2.5 + 0.67×10 = 2.5 + 6.7 = 9.2 m/s
babar
s = vi·t + ½at² = 58.5 m; equivalently s = v_avg × t, again giving vf = 9.2 m/s
babar
how 2.68
babar
v = u + at, where v = final velocity, u = initial velocity, a = acceleration, t = time
Eklu
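The thread's formula can be checked in a few lines. This sketch is my own addition, using the numbers from the question above:

```python
def final_velocity(u, a, t):
    """Constant-acceleration kinematics: v = u + a*t."""
    return u + a * t

v = final_velocity(u=2.5, a=0.67, t=10)
print(f"v = {v:.1f} m/s")  # v = 9.2 m/s, matching the worked answer above
```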
My project is on the sol-gel process; how do I prepare it? Please tell me.
Bala
The dimension of work and energy is ML²T⁻². Find the unit of work and energy, and hence derive it for work.
kg·m²·s⁻² (the joule)
Acquah
Two bodies P and Q, each of mass 1000 g, move in the same direction with speeds of 10 m/s and 20 m/s respectively. Calculate the impulse of P and Q, using Newton's 3rd law of motion.
the answer is 0.03n according to the 3rd law of motion if the are in same direction meaning they interact each other.
OBERT
definition for wave?
A disturbance that travels through a medium without causing a permanent change in the medium's displacement
Fagbenro
In physics, a wave is a disturbance that transfers energy through matter or space, with little or no associated mass transport (mass transfer). ... There are two main types of waves: mechanical and electromagnetic. Mechanical waves propagate through physical matter, whose substance is being deformed
Devansh
Note: LINEAR MOMENTUM. Linear momentum is defined as the product of a system’s mass multiplied by its velocity: p = mv.
what is physics
zalmia
Study of matter and energy
Fagbenro
physics is the science of matter and energy and their interactions
Acquah
physics is the technology behind air and matter
Doc
31. Calculate the initial (from rest) acceleration of a proton in a 5.00×10⁶ N/C electric field (such as created by a research Van de Graaff). Explicitly show how you follow the steps in the Problem-Solving Strategy for electrostatics.
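Following the strategy means listing the knowns, then applying F = qE and a = F/m. A sketch of the arithmetic, my own addition, assuming the standard proton charge and mass:

```python
# Step 1: list the knowns (standard proton constants assumed).
q = 1.602e-19   # proton charge, C
m = 1.673e-27   # proton mass, kg
E = 5.00e6      # field strength, N/C

# Step 2: electrostatic force on the proton.
F = q * E
# Step 3: Newton's second law gives the acceleration.
a = F / m
print(f"a = {a:.2e} m/s^2")  # on the order of 5e14 m/s^2
```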
A tennis ball is projected at an angle and attains a range of 78 m. If the velocity is 30 metres per second, calculate the angle.
Shimolla
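For the range question above, the level-ground range formula R = v² sin(2θ)/g can be inverted for the angle. A sketch, my own addition, assuming g = 9.8 m/s² and equal launch and landing heights:

```python
import math

# Level-ground projectile range: R = v**2 * sin(2*theta) / g,
# so theta = 0.5 * asin(R * g / v**2).
R = 78.0   # range, m
v = 30.0   # launch speed, m/s
g = 9.8    # gravitational acceleration, m/s^2 (assumed)

theta = 0.5 * math.degrees(math.asin(R * g / v**2))
print(f"theta = {theta:.1f} degrees")  # about 29 degrees; 90 - theta also works
```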
what friction
question on friction
Wisdom
the rubbing of one object or surface against another.
author
momentum is the product of mass and it's velocity.
Algayawi
what are bioelements?
Edina
Friction is a force that exists between two objects in contact, e.g. friction between the road and car tires.
Eklu
https://freethoughtblogs.com/reprobate/category/statistics/
# Setting up for Fundraising Round 2
Sorry, sorry, got lost in my day job for a bit there. It’s been a month since the fundraising deadline passed, though, and I owe you some follow-up. So, the big question: did we hit the fundraising goal? Let’s load the dataset to find out. [Read more…]
TL;DR: We’re pretty much on track, though we also haven’t hit the goal of pushing the fund past $78,890.69. Donate and help put the fund over the line!

With the short version out of the way, let’s dive into the details. What’s changed in the past week and change?

```python
import datetime as dt

import matplotlib.pyplot as pl
import numpy as np
import pandas as pd
import pandas.tseries.offsets as pdto

cutoff_day = dt.datetime(2020, 5, 27, tzinfo=dt.timezone(dt.timedelta(hours=-6)))

donations = pd.read_csv('donations.cleaned.tsv', sep='\t')
donations['epoch'] = pd.to_datetime(donations['created_at'])
donations['delta_epoch'] = donations['epoch'] - cutoff_day
donations['delta_epoch_days'] = donations['delta_epoch'].apply(lambda x: x.days)

# some adjustment is necessary to line up with the current total
donations['culm'] = donations['amount'].cumsum() + 14723

new_donations_mask = donations['delta_epoch_days'] > 0
print(f"There have been {sum(new_donations_mask)} donations since {cutoff_day}.")
```

    There have been 8 donations since 2020-05-27 00:00:00-06:00.

There’s been a reasonable number of donations after I published that original post. What does that look like, relative to the previous graph?

```python
pl.figure(num=None, figsize=(8, 4), dpi=150, facecolor='w', edgecolor='k')
pl.plot(donations['delta_epoch_days'], donations['culm'], '-', c='#aaaaaa')
pl.plot(donations['delta_epoch_days'][new_donations_mask],
        donations['culm'][new_donations_mask], '-', c='#0099ff')
pl.title("Defense against Carrier SLAPP Suit")
pl.xlabel("days since cutoff")
pl.ylabel("dollars")
pl.xlim([-365.26, donations['delta_epoch_days'].max()])
pl.ylim([55000, 82500])
pl.show()
```

That’s certainly an improvement in the short term, though the graph is much too zoomed out to say more. Let’s zoom in, and overlay the posterior.
```python
# load the previously-fitted posterior
flat_chain = np.loadtxt('starting_posterior.csv')

pl.figure(num=None, figsize=(8, 4), dpi=150, facecolor='w', edgecolor='k')
x = np.array([0, donations['delta_epoch_days'].max()])
for m, _, _ in flat_chain:
    pl.plot(x, m * x + 78039, '-r', alpha=0.05)
pl.plot(donations['delta_epoch_days'], donations['culm'], '-', c='#aaaaaa')
pl.plot(donations['delta_epoch_days'][new_donations_mask],
        donations['culm'][new_donations_mask], '-', c='#0099ff')
pl.title("Defense against Carrier SLAPP Suit")
pl.xlabel("days since cutoff")
pl.ylabel("dollars")
pl.xlim([-3, x[1] + 1])
pl.ylim([77800, 79000])
pl.show()
```

Hmm, looks like we’re right where the posterior predicted we’d be. My targets were pretty modest, though, consisting of an increase of 3% and 10%, so this doesn’t mean they’ve been missed. Let’s extend the chart to day 16, and explicitly overlay the two targets I set out.

```python
low_target = 78890.69
high_target = 78948.57
target_day = dt.datetime(2020, 6, 12, 23, 59, tzinfo=dt.timezone(dt.timedelta(hours=-6)))
target_since_cutoff = (target_day - cutoff_day).days

pl.figure(num=None, figsize=(8, 4), dpi=150, facecolor='w', edgecolor='k')
x = np.array([0, target_since_cutoff])
pl.fill_between(x, [78039, low_target], [78039, high_target],
                color='#ccbbbb', label='blog post')
pl.fill_between(x, [78039, high_target], [high_target, high_target],
                color='#ffeeee', label='video')
pl.plot(donations['delta_epoch_days'], donations['culm'], '-', c='#aaaaaa')
pl.plot(donations['delta_epoch_days'][new_donations_mask],
        donations['culm'][new_donations_mask], '-', c='#0099ff')
pl.title("Defense against Carrier SLAPP Suit")
pl.xlabel("days since cutoff")
pl.ylabel("dollars")
pl.xlim([-3, target_since_cutoff])
pl.ylim([77800, high_target])
pl.legend(loc='lower right')
pl.show()
```

To earn a blog post and video on Bayes from me, we need the line to be in the pink zone by the time it reaches the end of the graph.
For just the blog post, it need only be in the grayish area. As you can see, it’s painfully close to being in line with the lower of two goals, though if nobody donates between now and Friday it’ll obviously fall quite short. So if you want to see that blog post, get donating!

# 4.5 Questions for Alberta Health

One of the ways I’m coping with this pandemic is studying it. Over the span of months I built up a list of questions specific to the situation in Alberta, so I figured I’d fire them off to the PR contact listed in one of the Alberta Government’s press releases. That was a week ago. I haven’t even received an automated reply. I think it’s time to escalate this to the public sphere, as it might give those who can bend the government’s ear some idea of what they’re reluctant to answer. [Read more…]

# Fundraising Target Number 1

If our goal is to raise funds for a good cause, we should at least have an idea of where the funds are at.

       created_at                 amount  epoch                      delta_epoch           culm
    0  2017-01-24T07:27:51-06:00    10.0  2017-01-24 07:27:51-06:00  -1218 days +19:51:12  14733.0
    1  2017-01-24T07:31:09-06:00    50.0  2017-01-24 07:31:09-06:00  -1218 days +19:54:30  14783.0
    2  2017-01-24T07:41:20-06:00   100.0  2017-01-24 07:41:20-06:00  -1218 days +20:04:41  14883.0
    3  2017-01-24T07:50:20-06:00    10.0  2017-01-24 07:50:20-06:00  -1218 days +20:13:41  14893.0
    4  2017-01-24T08:03:26-06:00    25.0  2017-01-24 08:03:26-06:00  -1218 days +20:26:47  14918.0

Changing the dataset so the last donation happens at time zero makes it both easier to fit the data and easier to understand what’s happening. The first day after the last donation is now day one. Donations from 2017 don’t tell us much about the current state of the fund, though, so let’s focus on just the last year. The donations seem to arrive in bursts, but there have been two quiet portions. One is thanks to the current pandemic, and the other was during last year’s late spring/early summer.
It’s hard to tell what the donation rate is just by eyeball, though. We need to smooth this out via a model. The simplest such model is linear regression, a.k.a. fitting a line. We want to incorporate uncertainty into the mix, which means a Bayesian fit. Now, what MCMC engine to use, hmmm… emcee is my overall favourite, but I’m much too reliant on it. I’ve used PyMC3 a few times with success, but recently it’s been acting flaky. Time to pull out the big guns: Stan. I’ve been avoiding it because pystan’s compilation times drove me nuts, but all the cool kids have switched to cmdstanpy when I looked away. Let’s give that a whirl.

    CPU times: user 5.33 ms, sys: 7.33 ms, total: 12.7 ms
    Wall time: 421 ms

CmdStan installed. We can’t fit to the entire three-year time sequence; that just wouldn’t be fair given the recent slump in donations. How about the last six months? That covers both a few donation bursts and a flat period, so it’s more in line with what we’d expect in future.

    There were 117 donations over the last six months.

With the data prepped, we can shift to building the linear model. I could have just gone with Stan’s basic model, but flat priors aren’t my style. My preferred prior for the slope is the inverse tangent, as it compensates for the tendency of large slope values to “bunch up” on one another. Stan doesn’t offer it by default, but the Cauchy distribution isn’t too far off. We’d like the standard deviation to skew towards smaller values. It naturally tends to minimize itself when maximizing the likelihood, but an explicit skew will encourage this process along. Gelman and the Stan crew are drifting towards normal priors, but I still like a Cauchy prior for its weird properties. Normally I’d plunk the Gaussian distribution in to handle divergence from the deterministic model, but I hear using Student’s t instead will cut down the influence of outliers. Thomas Wiecki recommends one degree of freedom, but Gelman and co. find that it leads to poor convergence in some cases. They recommend somewhere between three and seven degrees of freedom, but skew towards three, so I’ll go with the flow here.

The y-intercept could land pretty much anywhere, making its prior difficult to figure out. Yes, I’ve adjusted the time axis so that the last donation is at time zero, but the recent flat portion pretty much guarantees the y-intercept will be higher than the current amount of funds. The traditional approach is to use a flat prior for the intercept, and I can’t think of a good reason to ditch that. Not convinced I picked good priors? That’s cool, there should be enough data here that the priors have minimal influence anyway. Moving on, let’s see how long compilation takes.

    CPU times: user 4.91 ms, sys: 5.3 ms, total: 10.2 ms
    Wall time: 20.2 s

This is one area where emcee really shines: as a pure Python library, it has zero compilation time. Both PyMC3 and Stan need some time to fire up an external compiler, which adds overhead. Twenty seconds isn’t too bad, though, especially if it leads to quick sampling times.

    CPU times: user 14.7 ms, sys: 24.7 ms, total: 39.4 ms
    Wall time: 829 ms

And it does! emcee can be pretty zippy for a simple linear regression, but Stan is in another class altogether. PyMC3 floats somewhere between the two, in my experience. Another great feature of Stan is the built-in diagnostics. These are really handy for confirming the posterior converged, and if not they can give you tips on what’s wrong with the model.

    Processing csv files: /tmp/tmpyfx91ua9/linear_regression-202005262238-1-e393mc6t.csv, /tmp/tmpyfx91ua9/linear_regression-202005262238-2-8u_r8umk.csv, /tmp/tmpyfx91ua9/linear_regression-202005262238-3-m36dbylo.csv, /tmp/tmpyfx91ua9/linear_regression-202005262238-4-hxjnszfe.csv

    Checking sampler transitions treedepth.
    Treedepth satisfactory for all transitions.

    Checking sampler transitions for divergences.
    No divergent transitions found.

    Checking E-BFMI - sampler transitions HMC potential energy.
    E-BFMI satisfactory for all transitions.

    Effective sample size satisfactory.

    Split R-hat values satisfactory all parameters.

    Processing complete, no problems detected.

The odds of a simple model with plenty of datapoints going sideways are pretty small, so this is another non-surprise. Enough waiting, though, let’s see the fit in action. First, we need to extract the posterior from the stored variables…

    There are 256 samples in the posterior.

… and now, free of its prison, we can plot the posterior against the original data. I’ll narrow the time window slightly, to make it easier to focus on the fit. Looks like a decent fit to me, so we can start using it to answer a few questions. How much money is flowing into the fund each day, on average? How many years will it be until all those legal bills are paid off? Since humans aren’t good at counting in years, let’s also translate that number into a specific date.

    mean/std/median slope = $51.62/1.65/51.76 per day
    mean/std/median years to pay off the legal fees, relative to 2020-05-25 12:36:39-05:00 = 1.962/0.063/1.955
    mean/median estimate for paying off debt = 2022-05-12 07:49:55.274942-05:00 / 2022-05-09 13:57:13.461426-05:00
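The payoff-date arithmetic above boils down to dividing the outstanding amount by the fitted donation rate. This sketch is my reconstruction rather than the post's actual code, and `remaining_debt` is a hypothetical stand-in chosen so the numbers land near the reported two years:

```python
import datetime as dt

slope = 51.76              # median dollars/day from the posterior above
remaining_debt = 37000.0   # hypothetical remaining legal fees, not from the post
last_donation = dt.datetime(2020, 5, 25, 12, 36)

days_left = remaining_debt / slope
payoff = last_donation + dt.timedelta(days=days_left)
print(f"{days_left / 365.25:.2f} years, paid off around {payoff.date()}")
```

A fuller treatment would divide by every slope sample in the posterior, yielding a distribution of payoff dates rather than a point estimate.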
Mid-May 2022, eh? That’s… not ideal. How much time can we shave off, if we increase the donation rate? Let’s play out a few scenarios.
    median estimate for paying off debt, increasing rate by 1% = 2022-05-02 17:16:37.476652800
    median estimate for paying off debt, increasing rate by 3% = 2022-04-18 23:48:28.185868800
    median estimate for paying off debt, increasing rate by 10% = 2022-03-05 21:00:48.510403200
    median estimate for paying off debt, increasing rate by 30% = 2021-11-26 00:10:56.277984
    median estimate for paying off debt, increasing rate by 100% = 2021-05-17 18:16:56.230752
Bumping up the donation rate by one percent is pitiful. A three percent increase will almost shave off a month, which is just barely worthwhile, and a ten percent increase will roll the date forward by two. Those sound like good starting points, so let’s make them official: increase the current donation rate by three percent, and I’ll start pumping out the aforementioned blog posts on Bayesian statistics. Manage to increase it by 10%, and I’ll also record them as videos.
As implied, I don’t intend to keep the same rate throughout this entire process. If you surprise me with your generosity, I’ll bump up the rate. By the same token, though, if we go through a dry spell I’ll decrease the rate so the targets are easier to hit. My goal is to have at least a 50% success rate on that lower bar. Wouldn’t that make it impossible to hit the video target? Remember, though, it’ll take some time to determine the success rate. That lag should make it possible to blow past the target, and by the time this becomes an issue I’ll have thought of a better fix.
Ah, but over what timeframe should this rate increase? We could easily blow past the three percent target if someone donates a hundred bucks tomorrow, after all, and it’s no fair to announce this and hope your wallets are ready to go in an instant. How about… sixteen days. You’ve got sixteen days to hit one of those rate targets. That’s a nice round number, for a computer scientist, and it should (hopefully!) give me just enough time to whip up the first post. What does that goal translate to, in absolute numbers?
    a 3% increase over 16 days translates to $851.69 + $78,039.00 = $78,890.69

Right, if you want those blog posts to start flowing you’ve got to get that fundraiser total to $78,890.69 before June 12th. As for the video…
    a 10% increase over 16 days translates to $909.57 + $78,039.00 = $78,948.57

… you’ve got to hit $78,948.57 by the same date.
# What’s the Plan?
I’ll admit, this fundraiser isn’t exactly twisting my arm. I’ve been mulling over how I’d teach Bayesian statistics for a few years. Overall, I’ve been most impressed with E.T. Jaynes’ approach, which draws inspiration from Cox’s Theorem. You’ll see a lot of similarities between my approach and Jaynes’, though I diverge on a few points. [Read more…]
# It’s Payback Time
I’m back! Yay! Sorry about all that, but my workload was just ridiculous. Things should be a lot more slack for the next few months, so it’s time I got back blogging. This also means I can finally put into action something I’ve been sitting on for months.
Richard Carrier has been a sore spot for me. He was one of the reasons I got interested in Bayesian statistics, and for a while there I thought he was a cool progressive. Alas, when it was revealed he was instead a vindictive creepy asshole, it shook me a bit. I promised myself I’d help out somehow, but I’d already done the obsessive analysis thing and in hindsight I’m not convinced it did more good than harm. I was at a loss for what I could do, beyond sharing links to the fundraiser.
Now, I think I know. The lawsuits may be long over, thanks to Carrier coincidentally dropping them at roughly the same time he came under threat of a counter-suit, but the legal bills are still there and not going away anytime soon. Worse, with the removal of the threat people are starting to forget about those debts. There have been only five donations this month, and four in April. It’s time to bring a little attention back that way.
One nasty side-effect of Carrier’s lawsuits is that Bayesian statistics has become a punchline in the atheist/skeptic community. The reasoning is understandable, if flawed: Carrier is a crank, he promotes Bayesian statistics, ergo Bayesian statistics must be the tool of crackpots. This has been surreal for me to witness, as Bayes has become a critical tool in my kit over the last three years. I suppose I could survive without it, if I had to, but every alternative I’m aware of is worse. I’m not the only one in this camp, either.
Following the emergence of a novel coronavirus (SARS-CoV-2) and its spread outside of China, Europe is now experiencing large epidemics. In response, many European countries have implemented unprecedented non-pharmaceutical interventions including case isolation, the closure of schools and universities, banning of mass gatherings and/or public events, and most recently, widescale social distancing including local and national lockdowns. In this report, we use a semi-mechanistic Bayesian hierarchical model to attempt to infer the impact of these interventions across 11 European countries.
Flaxman, Seth, Swapnil Mishra, Axel Gandy, H Juliette T Unwin, Helen Coupland, Thomas A Mellan, Tresnia Berah, et al. “Estimating the Number of Infections and the Impact of Non- Pharmaceutical Interventions on COVID-19 in 11 European Countries,” 2020, 35.
In estimating time intervals between symptom onset and outcome, it was necessary to account for the fact that, during a growing epidemic, a higher proportion of the cases will have been infected recently (…). Therefore, we re-parameterised a gamma model to account for exponential growth using a growth rate of 0·14 per day, obtained from the early case onset data (…). Using Bayesian methods, we fitted gamma distributions to the data on time from onset to death and onset to recovery, conditional on having observed the final outcome.
Verity, Robert, Lucy C. Okell, Ilaria Dorigatti, Peter Winskill, Charles Whittaker, Natsuko Imai, Gina Cuomo-Dannenburg, et al. “Estimates of the Severity of Coronavirus Disease 2019: A Model-Based Analysis.” The Lancet Infectious Diseases 0, no. 0 (March 30, 2020). https://doi.org/10.1016/S1473-3099(20)30243-7.
we used Bayesian methods to infer parameter estimates and obtain credible intervals.
Linton, Natalie M., Tetsuro Kobayashi, Yichi Yang, Katsuma Hayashi, Andrei R. Akhmetzhanov, Sung-mok Jung, Baoyin Yuan, Ryo Kinoshita, and Hiroshi Nishiura. “Incubation Period and Other Epidemiological Characteristics of 2019 Novel Coronavirus Infections with Right Truncation: A Statistical Analysis of Publicly Available Case Data.” Journal of Clinical Medicine 9, no. 2 (February 2020): 538. https://doi.org/10.3390/jcm9020538.
A significant chunk of our understanding of COVID-19 depends on Bayesian statistics. I’ll go further and argue that you cannot fully understand this pandemic without it. And yet thanks to Richard Carrier, the atheist/skeptic community is primed to dismiss Bayesian statistics.
So let’s catch two stones with one bird. If enough people donate to this fundraiser, I’ll start blogging a course on Bayesian statistics. I think I’ve got a novel angle on the subject, one that’s easier to slip into than my 201-level stuff and yet more rigorous. If y’all really start tossing in the funds, I’ll make it a video series. Yes yes, there’s a pandemic and potential global depression going on, but that just means I’ll work for cheap! I’ll release the milestones and course outline over the next few days, but there’s no harm in an early start.
Help me help the people Richard Carrier hurt. I’ll try to make it worth your while.
# Dear Bob Carpenter,
Hello! I’ve been a fan of your work for some time. While I’ve used emcee more and currently use a lot of PyMC3, I love the layout of Stan’s language and often find myself missing it.
But there’s no contradiction between being a fan and critiquing your work. And one of your recent blog posts left me scratching my head.
Suppose I want to estimate my chances of winning the lottery by buying a ticket every day. That is, I want to do a pure Monte Carlo estimate of my probability of winning. How long will it take before I have an estimate that’s within 10% of the true value?
This one’s pretty easy to set up, thanks to conjugate priors. The Beta distribution models our credibility of the odds of success from a Bernoulli process. If our prior belief is represented by the parameter pair $$(\alpha_\text{prior},\beta_\text{prior})$$, and we win $$w$$ times over $$n$$ trials, our posterior belief in the odds of us winning the lottery, $$p$$, is
\begin{align} \alpha_\text{posterior} &= \alpha_\text{prior} + w, \\ \beta_\text{posterior} &= \beta_\text{prior} + n - w \end{align}
You make it pretty clear that by “lottery” you mean the traditional kind, with a big payout that you’re highly unlikely to win, so $$w \approx 0$$. But in the process you make things much more confusing.
There’s a big NY state lottery for which there is a 1 in 300M chance of winning the jackpot. Back of the envelope, to get an estimate within 10% of the true value of 1/300M will take many millions of years.
“Many millions of years,” when we’re “buying a ticket every day?” That can’t be right. The mean of the Beta distribution is
$$\mathbb{E}[Beta(\alpha_\text{posterior},\beta_\text{posterior})] = \frac{\alpha_\text{posterior}}{\alpha_\text{posterior} + \beta_\text{posterior}}$$
So if we’re trying to get that within 10% of zero, and $$w = 0$$, we can write
\begin{align} \frac{\alpha_\text{prior}}{\alpha_\text{prior} + \beta_\text{prior} + n} &< \frac{1}{10} \\ 10 \alpha_\text{prior} &< \alpha_\text{prior} + \beta_\text{prior} + n \\ 9 \alpha_\text{prior} - \beta_\text{prior} &< n \end{align}
If we plug in a sensible-if-improper subjective prior like $$\alpha_\text{prior} = 0, \beta_\text{prior} = 1$$, then we don’t even need to purchase a single ticket. If we insist on an “objective” prior like Jeffrey’s, then we need to purchase five tickets. If for whatever reason we foolishly insist on the Bayes/Laplace prior, we need nine tickets. Even at our most pessimistic, we need less than a fortnight (or, if you prefer, much less than a Fortnite season). If we switch to the maximal likelihood instead of the mean, the situation gets worse.
\begin{align} \text{Mode}[Beta(\alpha_\text{posterior},\beta_\text{posterior})] &= \frac{\alpha_\text{posterior} - 1}{\alpha_\text{posterior} + \beta_\text{posterior} - 2} \\ \frac{\alpha_\text{prior} - 1}{\alpha_\text{prior} + \beta_\text{prior} + n - 2} &< \frac{1}{10} \\ 9\alpha_\text{prior} - \beta_\text{prior} - 8 &< n \end{align}
Now Jeffrey’s prior doesn’t require us to purchase a ticket, and even that awful Bayes/Laplace prior needs just one purchase. I can’t see how you get millions of years out of that scenario.
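The mean-based ticket counts above are easy to verify numerically. A minimal sketch, my own addition rather than anything from Carpenter's post, assuming zero wins throughout:

```python
# With w = 0 wins over n tickets, the posterior is Beta(a, b + n).
# Find the smallest n whose posterior mean drops below the threshold.
def tickets_needed_mean(a_prior, b_prior, threshold=0.1):
    n = 0
    while a_prior / (a_prior + b_prior + n) >= threshold:
        n += 1
    return n

print(tickets_needed_mean(0.0, 1.0))  # improper (0, 1) prior: 0
print(tickets_needed_mean(0.5, 0.5))  # Jeffreys prior: 5
print(tickets_needed_mean(1.0, 1.0))  # Bayes/Laplace prior: 9
```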
## In the Interval
Maybe you meant a different scenario, though. We often use credible intervals to make decisions, so maybe you meant that the entire interval has to pass below the 0.1 mark? This introduces another variable, the width of the credible interval. Most people use two standard deviations or thereabouts, but I and a few others prefer a single standard deviation. Let’s just go with the higher bar, and start hacking away at the variance of the Beta distribution.
\begin{align} \text{var}[Beta(\alpha_\text{posterior},\beta_\text{posterior})] &= \frac{\alpha_\text{posterior}\beta_\text{posterior}}{(\alpha_\text{posterior} + \beta_\text{posterior})^2(\alpha_\text{posterior} + \beta_\text{posterior} + 2)} \\ \sigma[Beta(\alpha_\text{posterior},\beta_\text{posterior})] &= \sqrt{\frac{\alpha_\text{prior}(\beta_\text{prior} + n)}{(\alpha_\text{prior} + \beta_\text{prior} + n)^2(\alpha_\text{prior} + \beta_\text{prior} + n + 2)}} \\ \frac{\alpha_\text{prior}}{\alpha_\text{prior} + \beta_\text{prior} + n} + \frac{2}{\alpha_\text{prior} + \beta_\text{prior} + n} \sqrt{\frac{\alpha_\text{prior}(\beta_\text{prior} + n)}{\alpha_\text{prior} + \beta_\text{prior} + n + 2}} &< \frac{1}{10} \end{align}
Our improper subjective prior still requires zero ticket purchases, as $$\alpha_\text{prior} = 0$$ wipes out the entire mess. For Jeffrey’s prior, we find
$$\frac{\frac{1}{2}}{n + 1} + \frac{2}{n + 1} \sqrt{\frac{1}{2}\frac{n + \frac 1 2}{n + 3}} < \frac{1}{10},$$
which needs 18 ticket purchases according to Wolfram Alpha. The awful Bayes/Laplace prior can almost get away with 27 tickets, but not quite. Both of those stretch the meaning of “back of the envelope,” but you can get the answer via a calculator and some trial-and-error.
I used the term “hacking” for a reason, though. That variance formula is only accurate when $$p \approx \frac 1 2$$ or $$n$$ is large, and neither is true in this scenario. We’re likely underestimating the number of tickets we’d need to buy. To get an accurate answer, we need to integrate the Beta distribution.
\begin{align} \int_{p=0}^{\frac{1}{10}} \frac{\Gamma(\alpha_\text{posterior} + \beta_\text{posterior})}{\Gamma(\alpha_\text{posterior})\Gamma(\beta_\text{posterior})} p^{\alpha_\text{posterior} - 1} (1-p)^{\beta_\text{posterior} - 1} \,dp > \frac{39}{40} \\ 40 \frac{\Gamma(\alpha_\text{prior} + \beta_\text{prior} + n)}{\Gamma(\alpha_\text{prior})\Gamma(\beta_\text{prior} + n)} \int_{p=0}^{\frac{1}{10}} p^{\alpha_\text{prior} - 1} (1-p)^{\beta_\text{prior} + n - 1} \,dp > 39 \end{align}
Awful, but at least for our subjective prior it’s trivial to evaluate. $$\text{Beta}(0,n+1)$$ is a Dirac delta at $$p = 0$$, so 100% of the integral is below 0.1 and we still don’t need to purchase a single ticket. Fortunately for both the Jeffreys and Bayes/Laplace priors, my “envelope” is a Jupyter notebook.
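For the Bayes/Laplace prior, at least, the notebook work is tiny: the posterior after $$n$$ losses is $$\text{Beta}(1, n+1)$$, whose CDF has the closed form $$1 - (1-x)^{n+1}$$. A sketch of the search (the helper names are mine, and the exact count can shift by one depending on how you treat the boundary):

```python
def beta_1_b_cdf(x, b):
    """CDF of Beta(1, b) at x: integrating the density b*(1-p)**(b-1)
    from 0 to x gives 1 - (1-x)**b."""
    return 1 - (1 - x) ** b

def losses_until_convinced(threshold=0.1, credence=39 / 40):
    """Losing tickets needed before P(p < threshold) exceeds the credence."""
    n = 0
    while beta_1_b_cdf(threshold, n + 1) <= credence:  # posterior is Beta(1, n+1)
        n += 1
    return n

print(losses_until_convinced())  # 35 with this strict-inequality convention
```

The Jeffreys posterior has no such closed form, which is where the actual integration earns its keep.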
Those numbers did go up by a non-trivial amount, but we’re still nowhere near “many millions of years,” even if Fortnite’s last season felt that long.
Maybe you meant some scenario where the credible interval overlaps $$p = 0$$? With proper priors, that never happens; the lower part of the credible interval always leaves room for some extremely small values of $$p$$, and thus never actually equals 0. My sensible improper prior has both ends of the interval equal to zero and thus as long as $$w = 0$$ it will always overlap $$p = 0$$.
## Expecting Something?
I think I can find a scenario where you’re right, but I also bet you’re sick of me calling $$(0,1)$$ a “sensible” subjective prior. Hope you don’t mind if I take a quick detour to the last question in that blog post, which should explain how a Dirac delta can be sensible.
How long would it take to convince yourself that playing the lottery has an expected negative return if tickets cost $1, there’s a 1/300M chance of winning, and the payout is$100M?
Let’s say the payout if you win is $$W$$ dollars, and the cost of a ticket is $$T$$. Then your expected earnings at any moment are an integral of a multiple of the entire Beta posterior.

$$\mathbb{E}(\text{Lottery}_{W}) = \int_{p=0}^1 \frac{\Gamma(\alpha_\text{posterior} + \beta_\text{posterior})}{\Gamma(\alpha_\text{posterior})\Gamma(\beta_\text{posterior})} p^{\alpha_\text{posterior} - 1} (1-p)^{\beta_\text{posterior} - 1} \, p W \,dp < T$$
I’m pretty confident you can see why that’s a back-of-the-envelope calculation, but this is a public letter and I’m also sure some of those readers just fainted. Let me detour from the detour to assure them that, yes, this is actually a pretty simple calculation. They’ve already seen that multiplicative constants can be yanked out of the integral, but I’m not sure they realized that if
$$\int_{p=0}^1 \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\Gamma(\beta)} p^{\alpha - 1} (1-p)^{\beta - 1} \,dp = 1,$$
then thanks to the multiplicative constant rule it must be true that
$$\int_{p=0}^1 p^{\alpha - 1} (1-p)^{\beta - 1} \,dp = \frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha + \beta)}$$
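If that identity feels like sleight of hand, it’s cheap to verify numerically for a concrete $$\alpha, \beta > 1$$; a quick sketch:

```python
import math

def beta_integral(a, b, steps=200_000):
    """Trapezoid approximation of the unnormalized Beta integral, for a, b > 1."""
    h = 1 / steps
    total = 0.0
    for i in range(1, steps):
        p = i * h
        total += p ** (a - 1) * (1 - p) ** (b - 1)
    return total * h  # the endpoints contribute 0 when a, b > 1

a, b = 2, 3
exact = math.gamma(a) * math.gamma(b) / math.gamma(a + b)  # 1! * 2! / 4! = 1/12
print(beta_integral(a, b), exact)
```

Both numbers agree to many decimal places, as they should.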
They may also be unaware that the Gamma function is an analytic continuation of the factorial, with $$\Gamma(a + 1) = a!$$. I say “an” because there’s an infinite number of functions that also qualify. To be considered a “good” analytic continuation the Gamma function must also duplicate the defining property of the factorial, that $$(a + 1)! = (a + 1)(a!)$$ for all valid $$a$$. Or, put another way, it must be true that

$$\frac{\Gamma(a + 1)}{\Gamma(a)} = a, \quad a > 0$$

Fortunately for me, the Gamma function is a good analytic continuation, perhaps even the best. This allows me to chop that integral down to size.

\begin{align} W \frac{\Gamma(\alpha_\text{prior} + \beta_\text{prior} + n)}{\Gamma(\alpha_\text{prior})\Gamma(\beta_\text{prior} + n)} \int_{p=0}^1 p^{\alpha_\text{prior} - 1} (1-p)^{\beta_\text{prior} + n - 1} p \,dp &< T \\ \int_{p=0}^1 p^{\alpha_\text{prior} - 1} (1-p)^{\beta_\text{prior} + n - 1} p \,dp &= \int_{p=0}^1 p^{\alpha_\text{prior}} (1-p)^{\beta_\text{prior} + n - 1} \,dp \\ \int_{p=0}^1 p^{\alpha_\text{prior}} (1-p)^{\beta_\text{prior} + n - 1} \,dp &= \frac{\Gamma(\alpha_\text{prior} + 1)\Gamma(\beta_\text{prior} + n)}{\Gamma(\alpha_\text{prior} + \beta_\text{prior} + n + 1)} \\ W \frac{\Gamma(\alpha_\text{prior} + \beta_\text{prior} + n)}{\Gamma(\alpha_\text{prior})\Gamma(\beta_\text{prior} + n)} \frac{\Gamma(\alpha_\text{prior} + 1)\Gamma(\beta_\text{prior} + n)}{\Gamma(\alpha_\text{prior} + \beta_\text{prior} + n + 1)} &< T \\ W \frac{\Gamma(\alpha_\text{prior} + \beta_\text{prior} + n) \Gamma(\alpha_\text{prior} + 1)}{\Gamma(\alpha_\text{prior} + \beta_\text{prior} + n + 1) \Gamma(\alpha_\text{prior})} &< T \\ W \frac{\alpha_\text{prior}}{\alpha_\text{prior} + \beta_\text{prior} + n} &< T \\ \frac{W}{T}\alpha_\text{prior} - \alpha_\text{prior} - \beta_\text{prior} &< n \end{align}

Mmmm, that was satisfying. The left-hand side of the second-to-last line is just the posterior mean of $$p$$, scaled by the payout. Anyway, for the Jeffreys prior you need to purchase $$n > 49,999,999$$ tickets to be convinced this lottery isn’t worth investing in, while the Bayes/Laplace prior argues for $$n > 99,999,998$$ purchases. Plug my subjective prior in, and the answer is $$n > -1$$: you never need to purchase a single ticket.

That’s optimal, assuming we know little about the odds of winning this lottery. The number of tickets we need to purchase is controlled by our prior. Since $$W \gg T$$, our best bet to minimize the number of tickets we need to purchase is to minimize $$\alpha_\text{prior}$$. The lowest we can go is $$\alpha_\text{prior} = 0$$. Almost all the “objective” priors I know of have it larger, and thus ask that you sink tens of millions of dollars into the lottery before giving up. That doesn’t sit well with our intuition. The sole exception is the Haldane prior of (0,0), which argues for $$n > 0$$ and thus asks for just a single ticket purchase. By stating $$\beta_\text{prior} = 1$$, my prior manages to shave off even that one ticket.
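The posterior-mean condition $$W \cdot \mathbb{E}[p] < T$$ is trivial to evaluate for any prior; a sketch, with my own helper name and the post’s $$W$$ and $$T$$, reporting the smallest whole number of tickets satisfying each bound:

```python
import math

W, T = 100_000_000, 1  # payout and ticket cost, in dollars

def tickets_to_quit(a_prior, b_prior):
    """Smallest whole n with W * E[p | n losses] < T.

    The posterior after n straight losses is Beta(a_prior, b_prior + n),
    whose mean is a_prior / (a_prior + b_prior + n), so we need
    n > (W/T) * a_prior - a_prior - b_prior.
    """
    threshold = (W / T) * a_prior - a_prior - b_prior
    return max(0, math.floor(threshold) + 1)

print(tickets_to_quit(0.5, 0.5))  # Jeffreys: 50,000,000
print(tickets_to_quit(1, 1))      # Bayes/Laplace: 99,999,999
print(tickets_to_quit(0, 0))      # Haldane: 1
print(tickets_to_quit(0, 1))      # subjective: 0
```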
Another prior that increases $$\beta_\text{prior}$$ further will shave off further purchases, but so far we’ve only considered the case where $$w = 0$$. What if we sink money into this lottery, and happen to win before hitting our limit? After a win, the subjective prior of $$(0,1)$$ with $$n$$ observed losses becomes equivalent to the Bayes/Laplace prior of $$(1,1)$$ updated on those same $$n$$ losses. Our assumption that $$p \approx 0$$ has been proven wrong, so the next best choice is to make no assumptions about $$p$$. At the same time, we’ve seen $$n$$ losses and we’d be foolish to discard that information entirely. A subjective prior with $$\beta_\text{prior} > 1$$ wouldn’t transform in this manner, while one with $$\beta_\text{prior} < 1$$ would be biased towards winning the lottery relative to the Bayes/Laplace prior.
My subjective prior argues you shouldn’t play the lottery, which matches the reality that almost all lotteries pay out less than they take in, but if you insist on participating it will minimize your losses while still responding well to an unexpected win. It lives up to the hype.
However, there is one way to beat it. You mentioned in your post that the odds of winning this lottery are one in 300 million. We’re not supposed to incorporate that into our math, it’s just a measuring stick to use against the values we churn out, but what if we constructed a prior around it anyway? This prior should have a mean of one in 300 million, and the $$p = 0$$ case should have zero likelihood. The best match is $$(1+\epsilon, 299999999\cdot(1+\epsilon))$$, where $$\epsilon$$ is a small number, and when we take a limit …
$$\lim_{\epsilon \to 0^{+}} \frac{100,000,000}{1}(1 + \epsilon) - (1 + \epsilon) - 299,999,999(1 + \epsilon) = -200,000,000 < n$$
… we find the only winning move is not to play. There are no Dirac deltas here, either, so unlike my subjective prior its credible interval has nonzero width. Eliminating the $$p = 0$$ case runs contrary to our intuition, however. A newborn that purchased a ticket every day of its life until it died on its 80th birthday would have a 99.99% chance of never holding a winning ticket. $$p = 0$$ is always an option when you live a finite amount of time.
The problem with this new prior is that it’s incredibly strong. If we didn’t have the true odds of winning in our back pocket, we could quite fairly be accused of putting our thumb on the scales. We can water down $$(1,299999999)$$ by dividing both $$\alpha_\text{prior}$$ and $$\beta_\text{prior}$$ by a constant value. This maintains the mean of the Beta distribution, and while the $$p = 0$$ case now has non-zero credence I’ve shown that’s no big deal. Pick the appropriate constant value and we get something like $$(\epsilon,1)$$, where $$\epsilon$$ is a small positive value. Quite literally, that’s within epsilon of the subjective prior I’ve been hyping!
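That the watering-down preserves the mean is one line of algebra:

$$\text{mean}\left[\text{Beta}\left(\tfrac{\alpha}{c}, \tfrac{\beta}{c}\right)\right] = \frac{\alpha/c}{\alpha/c + \beta/c} = \frac{\alpha}{\alpha + \beta} = \text{mean}[\text{Beta}(\alpha, \beta)]$$

and with a constant like $$c = 299,999,999$$ the informed prior $$(1, 299999999)$$ becomes roughly $$(3.3 \times 10^{-9}, 1)$$, within epsilon of $$(0,1)$$.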
## Enter Frequentism
So far, the only back-of-the-envelope calculations I’ve done that argued for millions of ticket purchases involved the expected value, but that was only because we used weak priors that are a poor match for reality. I believe in the principle of charity, though, and I can see a scenario where a back-of-the-envelope calculation does demand millions of purchases.
But to do so, I’ve got to hop the fence and become a frequentist.
If you haven’t read The Theory That Would Not Die, you’re missing out. Sharon Bertsch McGrayne mentions one anecdote about the RAND Corporation’s attempts to calculate the odds of a nuclear weapon accidentally detonating back in the 1950’s. No frequentist statistician would touch it with a twenty-foot pole, but not because they were worried about getting the math wrong. The problem was the math. As the eventually-published report states:
The usual way of estimating the probability of an accident in a given situation is to rely on observations of past accidents. This approach is used in the Air Force, for example, by the Directorate of Flight Safety Research to estimate the probability per flying hour of an aircraft accident. In cases of newly introduced aircraft types for which there are no accident statistics, past experience of similar types is used by analogy.
Such an approach is not possible in a field where there is no record of past accidents. After more than a decade of handling nuclear weapons, no unauthorized detonation has occurred. Furthermore, one cannot find a satisfactory analogy to the complicated chain of events that would have to precede an unauthorized nuclear detonation. (…) Hence we are left with the banal observation that zero accidents have occurred. On this basis the maximal likelihood estimate of the probability of an accident in any future exposure turns out to be zero.
For the lottery scenario, a frequentist wouldn’t reach for the Beta distribution but instead the Binomial. Given $$n$$ trials of a Bernoulli process with probability $$p$$ of success, the expected number of successes observed is
$$\bar w = n p$$
We can convert that to a maximal likelihood estimate by dividing the actual number of observed successes by $$n$$.
$$\hat p = \frac{w}{n}$$
In many ways this estimate can be considered optimal, as it is unbiased and has the least variance of any unbiased estimator. Thanks to the Central Limit Theorem, the Binomial distribution will approximate a Gaussian distribution to arbitrary accuracy as we increase $$n$$, which allows us to apply the analysis of the latter to the former. So we can use our maximal likelihood estimate $$\hat p$$ to calculate the standard error of that estimate.
$$\text{SEM}[\hat p] = \sqrt{ \frac{\hat p(1- \hat p)}{n} }$$
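As a concrete sketch (the numbers here are made up for illustration): with $$w = 3$$ wins in $$n = 1000$$ trials,

```python
import math

def mle_and_sem(w, n):
    """Maximal likelihood estimate of p for w successes in n Bernoulli trials,
    plus the standard error of that estimate."""
    p_hat = w / n
    sem = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat, sem

p_hat, sem = mle_and_sem(3, 1000)
print(p_hat)  # 0.003
print(sem)    # about 0.0017
```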
Ah, but what if $$w = 0$$? It follows that $$\hat p = 0$$, but this also means that $$\text{SEM}[\hat p] = 0$$. There’s no variance in our estimate? That can’t be right. If we approach this from another angle, plugging $$w = 0$$ into the Binomial distribution, it reduces to
$$\text{Binomial}(w | n,p) = \frac{n!}{w!(n-w)!} p^w (1-p)^{n-w} = (1-p)^n$$
The maximal likelihood of this Binomial is indeed $$p = 0$$, but it doesn’t resemble a Dirac delta at all.
Shouldn’t there be some sort of variance there? What’s going wrong?
We got a taste of this on the Bayesian side of the fence. Using the stock formula for the variance of the Beta distribution underestimated the true value, because the stock formula assumed $$p \approx \frac 1 2$$ or a large $$n$$. When we assume we have a near-infinite amount of data, we can take all sorts of computational shortcuts that make our life easier. One look at the Binomial’s mean, however, tells us that we can drown out the effects of a large $$n$$ with a small value of $$p$$. And, just as with the odds of a nuclear bomb accident, we already know $$p$$ is very, very small. That isn’t fatal on its own, as you correctly point out.
With the lottery, if you run a few hundred draws, your estimate is almost certainly going to be exactly zero. Did we break the [*Central Limit Theorem*]? Nope. Zero has the right absolute error properties. It’s within 1/300M of the true answer after all!
The problem comes when we apply the Central Limit Theorem and use a Gaussian approximation to generate a confidence or credible interval for that maximal likelihood estimate. As both the math and graph show, though, the probability distribution isn’t well-described by a Gaussian distribution. This isn’t much of a problem on the Bayesian side of the fence, as I can juggle multiple priors and switch to integration for small values of $$n$$. Frequentism, however, is dependent on the Central Limit Theorem and thus assumes $$n$$ is sufficiently large. This is baked right into the definitions. A p-value is the fraction of times you would calculate a test metric equal to or more extreme than the current one, assuming the null hypothesis is true and an infinite number of equivalent trials of the same random process. A confidence interval is a range of parameter values such that, when we repeat the maximal likelihood estimate on an infinite number of equivalent trials, the estimates fall in that range no less often than a fraction of our choosing. Frequentist statisticians are stuck with the math telling them that $$p = 0$$ with absolute certainty, which conflicts with our intuitive understanding.
For a frequentist, there appears to be only one way out of this trap: witness a nuclear bomb accident. Once $$w > 0$$, the math starts returning values that better match intuition. Likewise with the lottery scenario, the only way for a frequentist to get an estimate of $$p$$ that comes close to their intuition is to purchase tickets until they win at least once.
This scenario does indeed take “many millions of years.” It’s strange to find you taking a frequentist world-view, though, when you’re clearly a Bayesian. By straddling the fence you wind up in a world of hurt. For instance, you state this:
Did we break the [*Central Limit Theorem*]? Nope. Zero has the right absolute error properties. It’s within 1/300M of the true answer after all! But it has terrible relative error properties; its relative error after a lifetime of playing the lottery is basically infinity.
A true frequentist would have been fine asserting the probability of a nuclear bomb accident is zero. Why? Because $$\text{SEM}[\hat p = 0]$$ is actually a very good confidence interval. If we’re going for two sigmas, then our confidence interval should contain the maximal likelihood we’ve calculated at least 95% of the time. Let’s say our sample size is $$n = 36$$, the worst-case result from Bayesian statistics. If the true odds of winning the lottery are 1 in 300 million, then the odds of calculating a maximal likelihood of $$p = 0$$ are
p( MLE(hat p) = 0 ) = 0.999999880000007
About 99.99999% of the time, then, the confidence interval of $$0 \leq \hat p \leq 0$$ will be correct. That’s substantially better than 95%! Nothing’s broken here, frequentism is working exactly as intended.
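That notebook output is a one-liner to reproduce: it’s just the chance of 36 straight losses when $$p = \frac{1}{300\text{M}}$$.

```python
p = 1 / 300_000_000  # true odds of winning
n = 36               # ticket purchases

print((1 - p) ** n)  # about 0.99999988
```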
I bet you think I’ve screwed up the definition of confidence intervals. I’m afraid not, I’ve double-checked my interpretation by heading back to the source, Jerzy Neyman. He, more than any other person, is responsible for pioneering the frequentist confidence interval.
We can then tell the practical statistician that whenever he is certain that the form of the probability law of the X’s is given by the function $$p(E|\theta_1, \theta_2, \dots \theta_l)$$ which served to determine $$\underline{\theta}(E)$$ and $$\bar \theta(E)$$ [the lower and upper bounds of the confidence interval], he may estimate $$\theta_1$$ by making the following three steps: (a) he must perform the random experiment and observe the particular values $$x_1, x_2, \dots x_n$$ of the X’s; (b) he must use these values to calculate the corresponding values of $$\underline{\theta}(E)$$ and $$\bar \theta(E)$$; and (c) he must state that $$\underline{\theta}(E) < \theta_1^o < \bar \theta(E)$$, where $$\theta_1^o$$ denotes the true value of $$\theta_1$$. How can this recommendation be justified?
[Neyman keeps alternating between $$\underline{\theta}(E) \leq \theta_1^o \leq \bar \theta(E)$$ and $$\underline{\theta}(E) < \theta_1^o < \bar \theta(E)$$ throughout this paper, so presumably both forms are A-OK.]
The justification lies in the character of probabilities as used here, and in the law of great numbers. According to this empirical law, which has been confirmed by numerous experiments, whenever we frequently and independently repeat a random experiment with a constant probability, $$\alpha$$, of a certain result, A, then the relative frequency of the occurrence of this result approaches $$\alpha$$. Now the three steps (a), (b), and (c) recommended to the practical statistician represent a random experiment which may result in a correct statement concerning the value of $$\theta_1$$. This result may be denoted by A, and if the calculations leading to the functions $$\underline{\theta}(E)$$ and $$\bar \theta(E)$$ are correct, the probability of A will be constantly equal to $$\alpha$$. In fact, the statement (c) concerning the value of $$\theta_1$$ is only correct when $$\underline{\theta}(E)$$ falls below $$\theta_1^o$$ and $$\bar \theta(E)$$, above $$\theta_1^o$$, and the probability of this is equal to $$\alpha$$ whenever $$\theta_1^o$$ is the true value of $$\theta_1$$. It follows that if the practical statistician applies permanently the rules (a), (b) and (c) for purposes of estimating the value of the parameter $$\theta_1$$ in the long run he will be correct in about 99 per cent of all cases. […]
It will be noticed that in the above description the probability statements refer to the problems of estimation with which the statistician will be concerned in the future. In fact, I have repeatedly stated that the frequency of correct results tend to $$\alpha$$. [Footnote: This, of course, is subject to restriction that the X’s considered will follow the probability law assumed.] Consider now the case when a sample, E’, is already drawn and the calculations have given, say, $$\underline{\theta}(E’)$$ = 1 and $$\bar \theta(E’)$$ = 2. Can we say that in this particular case the probability of the true value of $$\theta_1$$ falling between 1 and 2 is equal to $$\alpha$$?
The answer is obviously in the negative. The parameter $$\theta_1$$ is an unknown constant and no probability statement concerning its value may be made, that is except for the hypothetical and trivial ones … which we have decided not to consider.
Neyman, Jerzy. “X — outline of a theory of statistical estimation based on the classical theory of probability.” Philosophical Transactions of the Royal Society of London. Series A, Mathematical and Physical Sciences 236.767 (1937): 348-349.
If there was any further doubt, it’s erased when Neyman goes on to analogize scientific measurements to a game of roulette. Just as knowing where the ball landed doesn’t tell us anything about where the gamblers placed their bets, “once the sample $$E’$$ is drawn and the values of $$\underline{\theta}(E’)$$ and $$\bar \theta(E’)$$ determined, the calculus of probability adopted here is helpless to provide answer to the question of what is the true value of $$\theta_1$$.” (pg. 350)
If a confidence interval doesn’t tell us anything about where the true parameter value lies, then its only value must come from being an estimator of long-term behaviour. And as I showed before, $$\text{SEM}[\hat p = 0]$$ estimates the maximal likelihood from repeating the experiment extremely well. It is derived from the long-term behaviour of the Binomial distribution, which is the correct distribution to describe this situation within frequentism. $$\text{SEM}[\hat p = 0]$$ fits Neyman’s definition of a confidence interval perfectly, and thus generates a valid frequentist confidence interval. On the Bayesian side, I’ve spilled a substantial number of photons to convince you that a Dirac delta prior is a good choice, and that prior also generates zero-width credible intervals. If it worked over there, why can’t it also work over here?
This is Jaynes’ Truncated Interval all over again. The rules of frequentism don’t work the way we intuit, which normally isn’t a problem because the Central Limit Theorem massages the data enough to align frequentism and intuition. Here, though, we’ve stumbled on a corner case where $$p = 0$$ with absolute certainty and $$p \neq 0$$ with tight error bars are both correct conclusions under the rules of frequentism. RAND Corporation should not have had any difficulty finding a frequentist willing to calculate the odds of a nuclear bomb accident, because they could have scribbled out one formula on an envelope and concluded such accidents were impossible.
And yet, faced with two contradictory answers or unaware the contradiction exists, frequentists side with intuition and reject the rules of their own statistical system. They strike off the $$p = 0$$ answer, leaving only the case where $$p \ne 0$$ and $$w > 0$$. Since reality currently insists that $$w = 0$$, they’re prevented from coming to any conclusion. The same reasoning leads to the “many millions of years” of ticket purchases that you argued was the true back-of-the-envelope conclusion. To break out of this rut, RAND Corporation was forced to abandon frequentism and instead get their estimate via Bayesian statistics.
On this basis the maximal likelihood estimate of the probability of an accident in any future exposure turns out to be zero. Obviously we cannot rest content with this finding. […]
… we can use the following idea: in an operation where an accident seems to be possible on technical grounds, our assurance that this operation will not lead to an accident in the future increases with the number of times this operation has been carried out safely, and decreases with the number of times it will be carried out in the future. Statistically speaking, this simple common sense idea is based on the notion that there is an a priori distribution of the probability of an accident in a given opportunity, which is not all concentrated at zero. In Appendix II, Section 2, alternative forms for such an a priori distribution are discussed, and a particular Beta distribution is found to be especially useful for our purposes.
It’s been said that frequentists are closet Bayesians. Through some misunderstandings and bad luck on your end, you’ve managed to be a Bayesian that’s a closet frequentist that’s a closet Bayesian. Had you stuck with a pure Bayesian view, any back-of-the-envelope calculation would have concluded that your original scenario demanded, in the worst case, that you’d need to purchase lottery tickets for a Fortnite.
# Rationality Rules DESTROYS Women’s Sport!!1!
I still can’t believe this post exists, given its humble beginnings.
The “women’s category” is, in my opinion, poorly named given our current climate, and so I’d elect a name more along the lines of the “Under 5 nmol/l category” (as in, under 5 nanomoles of testosterone per litre), but make no mistake about it, the “woman’s category” is not based on gender or identity, or even genitalia or chromosomes… it’s based on hormone levels and the absence of male puberty.
The above comment wasn’t in Rationality Rules’ latest transphobic video; it was just a casual aside by RR himself in the YouTube comment section. He’s obliquely doubled down via Twitter (hat tip to Essence of Thought):
Of course, just as I support trans men competing in all “men’s categories” (poorly named), women who have not experienced male puberty competing in all women’s sport (also poorly named) and trans women who have experienced male puberty competing in long-distance running.
To further clarify, I think that we must rename our categories according to what they’re actually based on. It’s not right to have a “women’s category” and yet say to some trans women (who are women!) that they can’t compete within it; it should be renamed.
The proposal itched away at me, though, because I knew it was testable.
There is a need to clarify hormone profiles that may be expected to occur after competition when antidoping tests are usually made. In this study, we report on the hormonal profile of 693 elite athletes, sampled within 2 h of a national or international competitive event. These elite athletes are a subset of the cross-sectional study that was a component of the GH-2000 research project aimed at developing a test to detect abuse with growth hormone.
Healy, Marie-Louise, et al. “Endocrine profiles in 693 elite athletes in the postcompetition setting.” Clinical endocrinology 81.2 (2014): 294-305.
The GH-2000 project had already done the hard work of collecting and analyzing blood samples from athletes, so checking RR’s proposal was no tougher than running some numbers. There’s all sorts of ethical guidelines around sharing medical info, but fortunately there’s an easy shortcut: ask one of the scientists involved to run the numbers for me, and report back the results. Aggregate data is much more resistant to de-anonymization, so the ethical concerns are greatly reduced. The catch, of course, is that I’d have to find a friendly researcher with access to that dataset. About a month ago, I fired off some emails and hoped for the best.
I wound up doing much, much better than the best. I got full access to the dataset!! You don’t get handed an incredible gift like this and merely use it for a blog post. In my spare time, I’m flexing my Bayesian muscles to do a re-analysis of the above paper, while also looking for observations the original authors may have missed. Alas, that means my slow posting schedule is about to slow to a crawl.
But in the meantime, we have a question to answer.
# What Do We Have Here?
Total Assigned-female Athletes = 239
Height, Mean = 171.61 cm
Height, Std.Dev = 7.12 cm
Weight, Mean = 64.27 kg
Weight, Std.Dev = 9.12 kg
Body Fat, Mean = 13.19 kg
Body Fat, Std.Dev = 3.85 kg
Testosterone, Mean = 2.68 nmol/L
Testosterone, Std.Dev = 4.33 nmol/L
Testosterone, Max = 31.90 nmol/L
Testosterone, Min = 0.00 nmol/L
Total Assigned-male Athletes = 454
Height, Mean = 182.72 cm
Height, Std.Dev = 8.48 cm
Weight, Mean = 80.65 kg
Weight, Std.Dev = 12.62 kg
Body Fat, Mean = 8.89 kg
Body Fat, Std.Dev = 7.20 kg
Testosterone, Mean = 14.59 nmol/L
Testosterone, Std.Dev = 6.66 nmol/L
Testosterone, Max = 41.00 nmol/L
Testosterone, Min = 0.80 nmol/L
The first step is to get a basic grasp on what’s there, via some crude descriptive statistics. It’s also useful to compare these with the original paper, to make sure I’m interpreting the data correctly. Excusing some minor differences in rounding, the above numbers match the paper.
The only thing that stands out from the above, to me, is the serum levels of testosterone. At least one source says the mean of these assigned-female athletes is higher than the normal range for their non-athletic cohorts. Part of that may simply be because we don’t have a good idea of what the normal range is, so it’s not uncommon for each lab to have their own definition of “normal.” This is even worse for those assigned female, since their testosterone levels are poorly studied; note that my previous link collected the data of over a million “men,” but doesn’t mention “women” once. Factor in inaccurate test results and other complicating factors, and “normal” is quite poorly-defined.
Still, Rationality Rules is either convinced those complications are irrelevant, or ignorant of them. And, to be fair, that 5nmol/L line implicitly sweeps a lot of them under the rug. Let’s carry on, then, and look for invalid data. “Invalid” covers everything from missing data, to impossible data, and maybe even data we think might be made inaccurate due to measurement error. I consider a concentration of zero testosterone as invalid, even though it may technically be possible.
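Those rules are simple to encode. A sketch, over a hypothetical list of serum testosterone values where `None` marks a missing measurement:

```python
def classify_testosterone(values):
    """Tally missing, zero, and suspiciously-low (<= 0.5 nmol/L) readings,
    treating everything else as valid."""
    counts = {"missing": 0, "zero": 0, "low": 0, "valid": 0}
    for t in values:
        if t is None:
            counts["missing"] += 1
        elif t == 0:
            counts["zero"] += 1
        elif t <= 0.5:
            counts["low"] += 1
        else:
            counts["valid"] += 1
    return counts

print(classify_testosterone([None, 0.0, 0.3, 2.7, 14.6]))
```

Note that the tallies below count a zero reading within both the "== 0" and "<= 0.5" buckets, whereas this sketch keeps the categories disjoint.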
Total Assigned-male Athletes w/ T levels >= 0 = 446
w/ T levels <= 0.5 = 0
w/ T levels == 0 = 0
w/ missing T levels = 8
that I consider valid = 446
Total Assigned-female Athletes w/ T levels >= 0 = 234
w/ T levels <= 0.5 = 5
w/ T levels == 0 = 1
w/ missing T levels = 5
that I consider valid = 229
Fortunately for us, the losses are pretty small. 229 datapoints is a healthy sample size, so we can afford to be liberal about what we toss out. Next up, it would be handy to see the data in chart form.
I've put vertical lines at both the 0.5 and 5 nmol/L cutoffs. There's a big difference between categories, but we can see clouds on the horizon: a substantial number of assigned-female athletes have greater than 5 nmol/L of testosterone in their bloodstream, while a decent number of assigned-male athletes have less. How many?
Segregating Athletes by Testosterone

| Concentration | aFab | aMab |
|---------------|------|------|
| > 5 nmol/L    | 19   | 417  |
| < 5 nmol/L    | 210  | 26   |
| = 5 nmol/L    | 0    | 3    |
8.3% of assigned-female athletes have > 5nmol/L
5.8% of assigned-male athletes have < 5nmol/L
4.4% of athletes with > 5nmol/L are assigned-female
11.0% of athletes with < 5nmol/L are assigned-male
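Those percentages fall straight out of the counts above; a minimal check:

```python
# Counts of valid athletes by testosterone level, from the table above
afab = {"over": 19, "under": 210, "at": 0}
amab = {"over": 417, "under": 26, "at": 3}

total_afab = sum(afab.values())  # 229 valid aFab athletes
total_amab = sum(amab.values())  # 446 valid aMab athletes

print(round(100 * afab["over"] / total_afab, 1))    # 8.3% of aFab over the line
print(round(100 * amab["under"] / total_amab, 1))   # 5.8% of aMab under it
print(round(100 * afab["over"] / (afab["over"] + amab["over"]), 1))     # 4.4%
print(round(100 * amab["under"] / (afab["under"] + amab["under"]), 1))  # 11.0%
```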
Looks like anywhere from 6-8% of athletes have testosterone levels that cross Rationality Rules' line. For comparison, maybe 1-2% of the general public has some level of gender dysphoria, though estimating exact figures is hard in the face of widespread discrimination and poor sex-ed in schools. Even that number is misleading, as the number of transgender athletes is substantially lower than 1-2% of the athletic population. The share of transgender athletes is irrelevant to this dataset anyway, as it was collected between 1996 and 1999, when no sporting agency had policies that allowed transgender athletes to openly compete.
That 6-8%, in other words, is entirely cisgender. This echoes one of Essence Of Thought's arguments: RR's 5nmol/L policy has far more impact on cis athletes than trans athletes, which could have catastrophic side-effects. Could is the operative word, though, because as of now we don't know anything about these athletes. Do >5nmol/L assigned-female athletes have bodies more like >5nmol/L assigned-male athletes than <5nmol/L assigned-female athletes? If so, then there's no problem. Equivalent body types are competing against each other, and outcomes are as fair as could be reasonably expected.
What, then, counts as an "equivalent" body type when it comes to sport?
# Newton's First Law of Athletics
One reasonable measure of equivalence is height. It's one of the stronger sex differences, and height is also correlated with longer limbs and greater leverage. Whether that's relevant to sports is debatable, but height and correlated attributes dominate Rationality Rules' list.
[19:07] In some events - such as long-distance running, in which hemoglobin and slow-twitch muscle fibers are vital - I think there's a strong argument to say no, [transgender women who transitioned after puberty] don't have an unfair advantage, as the primary attributes are sufficiently mitigated. But in most events, and especially those in which height, width, hip size, limb length, muscle mass, and muscle fiber type are the primary attributes - such as weightlifting, sprinting, hammer throw, javelin, netball, boxing, karate, basketball, rugby, judo, rowing, hockey, and many more - my answer is yes, most do have an unfair advantage.
Fortunately for both of us, most athletes in the dataset have a "valid" height, which I define as being at least 30cm tall.
Out of 693 athletes, 678 have valid height data.
The faint vertical lines are for the mean adult height of Germans born in 1976, which should be a reasonable comparison cohort for European athletes active between 1996 and 1999, while the darker lines are each category's mean. Athletes seem slightly taller than the reference average, but only by 2-5cm. The amount of overlap is also surprising, given that height is supposed to be a major sex difference; we actually saw less overlap with testosterone! Finally, the height distribution isn't quite Gaussian: there's a subtle bias towards the taller end of the spectrum.
Height is a pretty crude metric, though. You could pair any athlete with a non-athlete of the same height, and there's no way the latter would perform as well as the former. A better measure of sporting ability would be muscle mass. We shouldn't use the absolute mass, though: bigger bodies have more mass and need more force to accelerate than smaller bodies do, so height and muscle mass are correlated. We need some sort of dimensionless scaling factor which compensates.
And we have one! It's called the Body Mass Index, or BMI.
$$BMI = \frac w {h^2},$$
where $$w$$ is a person's mass in kilograms, and $$h$$ is a person's height in metres. Unfortunately, BMI is quite problematic. Partly that's because it is a crude measure of obesity. But part of that is because there are two types of tissue which can greatly vary, body fat and muscle, yet both contribute equally towards BMI.
That's all fixable. For one, some of the athletes in this dataset had their body fat measured. We can subtract that mass off, so their weight consists of tissues that are strongly correlated with height plus one that is fudgable: muscle mass. For two, we're not assessing these individuals' health, we only want a dimensionless measure of muscle mass relative to height. For three, we're not comparing these individuals to the general public, so we're not restricted to using the general BMI formula. We can use something more accurate.
The oddity is the appearance of that exponent 2, though our world is three-dimensional. You might think that the exponent should simply be 3, but that doesn't match the data at all. It has been known for a long time that people don't scale in a perfectly linear fashion as they grow. I propose that a better approximation to the actual sizes and shapes of healthy bodies might be given by an exponent of 2.5. So here is the formula I think is worth considering as an alternative to the standard BMI:
$$BMI' = 1.3 \frac w {h^{2.5}}$$
I can easily pop body fat into Nick Trefethen's formula, and get a better measure of relative muscle mass,
$$\overline{BMI} = 1.3 \frac{ w - bf }{h^{2.5}},$$
where $$bf$$ is total body fat in kilograms. Individuals with excess muscle mass, relative to what we expect for their height, will have a high $$\overline{BMI}$$, and vice-versa. And as we saw earlier, muscle mass is another of Rationality Rules' determinants of sporting performance.
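As a quick sanity check of the formula (a sketch with a hypothetical athlete's numbers, not data from the study):

```python
def adjusted_bmi(weight_kg, height_m, body_fat_kg):
    """Trefethen-style BMI with body fat subtracted: 1.3 * (w - bf) / h**2.5."""
    return 1.3 * (weight_kg - body_fat_kg) / height_m ** 2.5

# A hypothetical 70 kg, 1.75 m athlete carrying 10 kg of body fat:
print(round(adjusted_bmi(70, 1.75, 10), 2))  # ~19.25
```

As expected, shedding body fat at a fixed weight raises the adjusted value, since only the lean mass remains in the numerator.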
Time for more number crunching.
Out of 693 athletes, 227 have valid adjusted BMIs.
663 have valid weights.
241 have valid body fat percentages.
Total Assigned-female Athletes = 239
total with valid adjusted BMI = 86
Total Assigned-male Athletes = 454
total with valid adjusted BMI = 141
The bad news is that most of this dataset lacks any information on body fat, which really cuts into our sample size. The good news is that we've still got enough to carry on. It also looks like there's a strong sex difference, and the distribution is pretty clustered. Still, a chart would help clarify the latter point.
Whoops! There's more overlap and skew than I thought. Even in logspace, the results don't look Gaussian. We'll have to remember that for the next step.
# A Man Without a Plan is Not a Man ¶
Just looking at charts isn't going to settle this question; we need to do some sort of hypothesis testing. Fortunately, all the pieces I need are here. We've got our hypothesis, for instance:
Athletes with exceptional testosterone levels are more like athletes of the same sex but with typical testosterone levels, than they are of other athletes with a different sex but similar testosterone levels.
If you know me, you know that I'm all about the Bayes, and that gives us our methodology.
1. Fit a model to a specific metric for assigned-female athletes with less than 5nmol/L of serum testosterone.
2. Fit a model to a specific metric for assigned-male athletes with more than 5nmol/L of serum testosterone.
3. Apply the first model to the test group, calculating the overall likelihood.
4. Apply the second model to the test group, calculating the overall likelihood.
5. Sample the probability distribution of the Bayes Factor.
"Metric" is one of height or $$\overline{BMI}$$, while "test group" is one of assigned-female athletes with >5nmol/L of serum testosterone or assigned-male athletes with <5nmol/L of serum testosterone. The Bayes Factor is simply
$$\text{Bayes Factor} = \frac{ p(E \mid H_1) \cdot p(H_1) }{ p(E \mid H_2) \cdot p(H_2) } = \frac{ p(H_1 \mid E) }{ p(H_2 \mid E) },$$
which means we need two hypotheses, not one. Fortunately, I've phrased the hypothesis to make it easy to negate: athletes with exceptional testosterone levels are less like athletes of the same sex but with typical testosterone levels, than they are of other athletes with a different sex but similar testosterone levels. We'll call this new hypothesis $$H_2$$, and the original $$H_1$$. Bayes factors greater than 1 mean $$H_1$$ is more likely than $$H_2$$, and vice-versa.
Calculating all that would be easy if I was using Stan or PyMC3, but I ran into problems translating the former's probability distributions into charts, and I don't have any experience with the latter. My next choice, emcee, forces me to manually convolve two posterior distributions. Annoying, but not difficult.
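The convolution step can be sketched in plain Python: draw a parameter vector from each model's posterior sample, evaluate the test group's total log-likelihood under both, and record the difference. The toy posteriors and heights below are stand-ins, not the actual fitted models.

```python
import math
import random

def gauss_loglike(data, mu, sigma):
    """Total log-likelihood of data under a Gaussian(mu, sigma)."""
    return sum(-0.5 * ((x - mu) / sigma) ** 2
               - math.log(sigma * math.sqrt(2 * math.pi)) for x in data)

def bayes_factor_samples(test_group, posterior_1, posterior_2, n=1000, seed=42):
    """Sample the Bayes-factor distribution by pairing random posterior draws."""
    rng = random.Random(seed)
    log_bfs = []
    for _ in range(n):
        mu1, s1 = rng.choice(posterior_1)   # one draw from model 1's posterior
        mu2, s2 = rng.choice(posterior_2)   # one draw from model 2's posterior
        log_bfs.append(gauss_loglike(test_group, mu1, s1)
                       - gauss_loglike(test_group, mu2, s2))
    return log_bfs

# Toy (mu, sigma) posterior draws for two height models, and a small test group
# whose heights sit much closer to the first model.
post_f = [(171 + off, 7.1) for off in (-0.2, 0.0, 0.2)]
post_m = [(182.6 + off, 8.5) for off in (-0.2, 0.0, 0.2)]
test = [170, 168, 173, 175, 169]
log_bfs = bayes_factor_samples(test, post_f, post_m)
print(sum(1 for b in log_bfs if b > 0) / len(log_bfs))  # share favouring model 1
```

Because every pairing of posterior draws is evaluated on the same data, the spread of `log_bfs` directly reflects the parameter uncertainty in the two posteriors.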
# I'm a Model, If You Know What I Mean ¶
That just leaves one thing left: what models are we going to use? The obvious choice for height is the Gaussian distribution, as from previous research we know it's a great model.
Fitting the height of lT aFab athletes to a Gaussian distribution ...
0: (-980.322471) mu=150.000819, sigma=15.000177
64: (-710.417497) mu=169.639051, sigma=8.579088
128: (-700.539260) mu=171.107358, sigma=7.138832
192: (-700.535241) mu=171.154151, sigma=7.133279
256: (-700.540692) mu=171.152701, sigma=7.145515
320: (-700.552831) mu=171.139668, sigma=7.166857
384: (-700.530969) mu=171.086422, sigma=7.094077
ML: (-700.525284) mu=171.155240, sigma=7.085777
median: (-700.525487) mu=171.134614, sigma=7.070993
Alas, emcee also lacks a good way to assess model fitness. One crude metric is to look at the progression of the mean fitness; if it grows and then stabilizes around a specific value, as it does here, we've converged on something. Another is to compare the mean, median, and maximal likelihood of the posterior; if they're about equally likely, we've got a fuzzy caterpillar. Again, that's also true here.
As we just saw, though, charts are a better judge of fitness than a handful of numbers.
If you were wondering why I didn't make much of a fuss out of the asymmetry in the height distribution, it's because I've already seen this graph. A good fit isn't necessarily the best though, and I might be able to get a closer match by incorporating the sport each athlete played.
Assigned-female Athletes
sport below/above 171cm
Power lifting:        1 /  0
Football:             0 /  0
Swimming:            41 / 49
Marathon:             0 /  1
Canoeing:             1 /  0
Rowing:               9 / 13
Cross-country skiing: 8 /  1
Alpine skiing:       11 /  1
Weight lifting:       7 /  0
Judo:                 0 /  0
Bandy:                0 /  0
Ice Hockey:           0 /  0
Handball:            12 / 17
Track and field:     22 / 27
Basketball attracts tall people, unsurprisingly, while skiing seems to attract shorter people. This could be the cause of that asymmetry. It's no guarantee that I'll actually get a better fit, though, as I'm also dramatically cutting the number of datapoints to fit to. The model's uncertainty must increase as a result, and that may be enough to dilute out any increase in fitness. I'll run those numbers for the paper, but for now the Gaussian model I have is plenty good.
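For readers without emcee installed, the shape of a fit like the ones reported above can be sketched with a hand-rolled Metropolis sampler in pure Python. The toy heights below are synthetic; the real analysis uses emcee's ensemble sampler on the athlete data.

```python
import math
import random

def log_post(data, mu, sigma):
    """Log-posterior for a Gaussian model with flat priors (sigma > 0)."""
    if sigma <= 0:
        return -math.inf
    return sum(-0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma) for x in data)

def metropolis(data, n_steps=4000, seed=0):
    rng = random.Random(seed)
    mu, sigma = 150.0, 15.0                  # same diffuse start as the fits above
    lp = log_post(data, mu, sigma)
    chain = []
    for _ in range(n_steps):
        mu_p = mu + rng.gauss(0, 0.5)        # propose a small random step
        sig_p = sigma + rng.gauss(0, 0.25)
        lp_p = log_post(data, mu_p, sig_p)
        if math.log(rng.random()) < lp_p - lp:   # accept with the posterior ratio
            mu, sigma, lp = mu_p, sig_p, lp_p
        chain.append((mu, sigma))
    return chain[n_steps // 2:]              # discard the first half as warmup

# Toy "heights": 200 draws from N(171, 7).
rng = random.Random(1)
heights = [rng.gauss(171, 7) for _ in range(200)]
chain = metropolis(heights)
mu_hat = sum(m for m, _ in chain) / len(chain)
sig_hat = sum(s for _, s in chain) / len(chain)
```

The chain wanders from the diffuse starting point toward the data's mean and spread, mirroring the progression visible in the emcee output above.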
Fitting the height of hT aMab athletes to a Gaussian distribution ...
0: (-2503.079578) mu=150.000061, sigma=15.001179
64: (-1482.315571) mu=179.740851, sigma=10.506003
128: (-1451.789027) mu=182.615810, sigma=8.620333
192: (-1451.748336) mu=182.587979, sigma=8.550535
256: (-1451.759883) mu=182.676004, sigma=8.546410
320: (-1451.746697) mu=182.626918, sigma=8.538055
384: (-1451.747266) mu=182.580692, sigma=8.534070
ML: (-1451.746074) mu=182.591047, sigma=8.534584
median: (-1451.759295) mu=182.603231, sigma=8.481894
We get the same results when fitting the model to >5 nmol/L assigned-male athletes. The log likelihood, that number in brackets, is a lot lower for these athletes, but that number is roughly proportional to the number of samples. If we had the same degree of model fitness but doubled the number of samples, we'd expect the log likelihood to double. And, sure enough, this dataset has roughly twice as many assigned-male athletes as it does assigned-female athletes.
The updated charts are more of the same.
Unfortunately, adjusted BMI isn't nearly as tidy. I don't have any prior knowledge that would favour a particular model, so I wound up testing five candidates: the Gaussian, Log-Gaussian, Gamma, Weibull, and Rayleigh distributions. All but the first needed an offset parameter to get the best results, which has the same interpretation as last time.
Fitting the adjusted BMI of hT aMab athletes to a Gaussian distribution ...
0: (-410.901047) mu=14.999563, sigma=5.000388
384: (-256.474147) mu=20.443497, sigma=1.783979
ML: (-256.461460) mu=20.452817, sigma=1.771653
median: (-256.477475) mu=20.427138, sigma=1.781139
Fitting the adjusted BMI of hT aMab athletes to a Log-Gaussian distribution ...
0: (-629.141577) mu=6.999492, sigma=2.001107, off=10.000768
384: (-290.910651) mu=3.812746, sigma=1.789607, off=16.633741
ML: (-277.119315) mu=3.848383, sigma=1.818429, off=16.637382
median: (-288.278918) mu=3.795675, sigma=1.778238, off=16.637076
Fitting the adjusted BMI of hT aMab athletes to a Gamma distribution ...
0: (-564.227696) alpha=19.998389, beta=3.001330, off=9.999839
384: (-256.999252) alpha=15.951361, beta=2.194827, off=13.795466
ML : (-248.056301) alpha=8.610936, beta=1.673886, off=15.343436
median: (-249.115483) alpha=12.411010, beta=2.005287, off=14.410945
Fitting the adjusted BMI of hT aMab athletes to a Weibull distribution ...
0: (-48865.772268) k=7.999859, beta=0.099877, off=0.999138
384: (-271.350390) k=9.937527, beta=0.046958, off=0.019000
ML: (-270.340284) k=9.914647, beta=0.046903, off=0.000871
median: (-270.974131) k=9.833793, beta=0.046947, off=0.011727
Fitting the adjusted BMI of hT aMab athletes to a Rayleigh distribution ...
0: (-3378.099000) tau=0.499136, off=9.999193
384: (-254.717778) tau=0.107962, off=16.378780
ML: (-253.012418) tau=0.110751, off=16.574934
median: (-253.092584) tau=0.108740, off=16.532576
Looks like the Gamma distribution is the best of the bunch, though only if you use the median or maximal likelihood of the posterior. There must be some outliers in there that are tugging the mean around. Visually, there isn't too much difference between the Gaussian and Gamma fits, but the Rayleigh seems artificially sharp on the low end. It's a bit of a shame; the Gamma distribution is usually related to rates and variance, so we don't have a good reason for applying it here other than "it fits the best." We might be able to do better with a per-sport Gaussian distribution fit, but for now I'm happy with the Gamma.
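The three-parameter Gamma model has a simple log-density. Here's a sketch in pure Python, on the assumption that the fit output's `alpha`, `beta`, and `off` are shape, rate, and location respectively:

```python
import math

def gamma_logpdf(x, alpha, beta, off):
    """Log-density of a Gamma(shape=alpha, rate=beta) shifted right by `off`."""
    y = x - off
    if y <= 0:
        return -math.inf          # zero likelihood left of the offset
    return (alpha * math.log(beta) + (alpha - 1) * math.log(y)
            - beta * y - math.lgamma(alpha))

# Sanity check: Gamma(1, 1) with no offset is Exponential(1),
# so logpdf(2) should be exactly -2.
print(gamma_logpdf(2.0, 1.0, 1.0, 0.0))  # -2.0
```

The hard zero left of the offset is worth noticing: any data point below `off` gets negative-infinite log-likelihood, which is exactly the "zero percent likelihood" issue that shows up later in the Bayes-factor runs.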
Time to fit the other pool of athletes, and chart it all.
Fitting the adjusted BMI of lT aFab athletes to a Gamma distribution ...
0: (-127.467934) alpha=20.000007, beta=3.000116, off=9.999921
384: (-128.564564) alpha=15.481265, beta=3.161022, off=12.654149
ML : (-117.582454) alpha=2.927721, beta=1.294851, off=14.713479
median: (-120.689425) alpha=11.961847, beta=2.836153, off=13.008723
Those models look pretty reasonable, though the upper end of the assigned-female distribution could be improved on. It's a good enough fit to get some answers, at least.
# The Nitty Gritty ¶
It's easier to combine step 3, applying the model, with step 5, calculating the Bayes Factor, when writing the code. The resulting Bayes Factor has a probability distribution, as the uncertainty contained in the posterior contaminates it.
Summary of the BF distribution, for the height of >5nmol/L aFab athletes
n mean geo.mean 5% 16% 50% 84% 95%
19 10.64 5.44 0.75 1.76 5.66 17.33 35.42
Percentage of BF's that favoured the primary hypothesis: 92.42%
Percentage of BF's that were 'decisive': 14.17%
That looks a lot like a log-Gaussian distribution. The arithmetic mean fails us here, thanks to the huge range of values, so the geometric mean and median are better measures of central tendency.
The best way I can interpret this result is via an eight-sided die: our credence in the hypothesis that >5nmol/L aFab athletes are more like their >5nmol/L aMab peers than their <5nmol/L aFab ones is similar to the credence we'd place on rolling a one via that die, while our credence on the primary hypothesis is similar to rolling any other number except one. About 92% of the calculated Bayes Factors were favourable to the primary hypothesis, and about 14% of them crossed the 19:1 threshold, a close match for the asserted evidential bar in science.
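Because the Bayes factors span orders of magnitude, the sensible summaries are computed in log space. A sketch, fed with the percentile values from the summary above rather than the full sample:

```python
import math

def summarize_bfs(bfs):
    """Geometric mean, median, and threshold shares for Bayes-factor samples."""
    logs = sorted(math.log(b) for b in bfs)
    geo_mean = math.exp(sum(logs) / len(logs))
    mid = len(logs) // 2
    median = math.exp(logs[mid] if len(logs) % 2 else
                      (logs[mid - 1] + logs[mid]) / 2)
    favourable = sum(1 for b in bfs if b > 1) / len(bfs)   # BF > 1 favours H1
    decisive = sum(1 for b in bfs if b > 19) / len(bfs)    # the 19:1 bar
    return geo_mean, median, favourable, decisive

bfs = [0.75, 1.76, 5.66, 17.33, 35.42]   # the percentile values reported above
geo, med, fav, dec = summarize_bfs(bfs)
```

Averaging the logs and exponentiating is what makes the geometric mean robust to the handful of enormous factors that drag the arithmetic mean around.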
That's strong evidence for a mere 19 athletes, though not quite conclusive. How about the Bayes Factor for the height of <5nmol/L aMab athletes?
Summary of the BF distribution, for the height of <5nmol/L aMab athletes
n mean geo.mean 5% 16% 50% 84% 95%
26 4.67e+21 3.49e+18 5.67e+14 2.41e+16 5.35e+18 4.16e+20 4.61e+21
Percentage of BF's that favoured the primary hypothesis: 100.00%
Percentage of BF's that were 'decisive': 100.00%
Wow! Even with 26 data points, our primary hypothesis was extremely well supported. Betting against that hypothesis is like betting a particular person in the US will be hit by lightning three times in a single year!
That seems a little too favourable to my view, though. Did something go wrong with the mathematics? The simplest check is to graph the models against the data they're evaluating.
Nope, the underlying data genuinely is a better fit for the high-testosterone aMab model. But that good of a fit? In linear space, we multiply each of the individual probabilities to arrive at the Bayes factor. That's equivalent to raising the geometric mean to the nth power, where n is the number of athletes. Since n = 26 here, even a geometric mean barely above one can generate a big Bayes factor.
26th root of the median Bayes factor of the high-T aMab model applied to low-T aMab athletes: 5.2519
26th root of the Bayes factor for the median marginal: 3.6010
Note that the Bayes factor we generate by using the median of the marginal for each parameter isn't as strong as the median Bayes factor in the above convolution. That's simply because I'm using a small sample from the posterior distribution. Keeping more samples would have brought those two values closer together, but also greatly increased the amount of computation I needed to do to generate all those Bayes factors.
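The nth-root relationship is just algebra: the product of n per-athlete likelihood ratios equals their geometric mean raised to the nth power. A quick numeric check, with arbitrary made-up per-athlete ratios:

```python
import math

ratios = [5.2, 4.8, 6.1, 5.5]                 # arbitrary per-athlete likelihood ratios
n = len(ratios)
product = math.prod(ratios)                   # the overall Bayes factor
geo_mean = product ** (1 / n)
assert math.isclose(geo_mean ** n, product)   # geometric mean**n recovers the product
```

Since the geometric mean here exceeds one, the product grows exponentially in n, which is why even a modest per-athlete edge compounds into an astronomical overall Bayes factor.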
With that check out of the way, we can move on to $$\overline{BMI}$$.
Summary of the BF distribution, for the adjusted BMI of >5nmol/L aFab athletes
n mean geo.mean 5% 16% 50% 84% 95%
4 1.70e+12 1.06e+05 2.31e+02 1.60e+03 4.40e+04 3.66e+06 3.99e+09
Percentage of BF's that favoured the primary hypothesis: 100.00%
Percentage of BF's that were 'decisive': 99.53%
Percentage of non-finite probabilities, when applying the low-T aFab model to high-T aFab athletes: 0.00%
Percentage of non-finite probabilities, when applying the high-T aMab model to high-T aFab athletes: 10.94%
This distribution is much stranger, with a number of extremely high BF's that badly skew the mean. The offset contributes to this, with 7-12% of the model posteriors for high-T aMab athletes assigning a zero percent likelihood to an adjusted BMI. Those are excluded from the analysis, but they suggest the high-T aMab model poorly describes high-T aFab athletes.
Our credence in the primary hypothesis here is similar to our credence that an elite golfer will not land a hole-in-one on their next shot. That's surprisingly strong, given we're only dealing with four datapoints. More data may water that down, but it's unlikely to overcome that extreme level of credence.
Summary of the BF distribution, for the adjusted BMI of <5nmol/L aMab athletes
n mean geo.mean 5% 16% 50% 84% 95%
9 6.64e+35 2.07e+22 4.05e+12 4.55e+16 6.31e+21 7.72e+27 9.81e+32
Percentage of BF's that favoured the primary hypothesis: 100.00%
Percentage of BF's that were 'decisive': 100.00%
Percentage of non-finite probabilities, when applying the high-T aMab model to low-T aMab athletes: 0.00%
Percentage of non-finite probabilities, when applying the low-T aFab model to low-T aMab athletes: 0.00%
The hypotheses' Bayes factor for the adjusted BMI of low-testosterone aMab athletes is much better behaved. Even here, the credence is above three-lightning-strikes territory, pretty decisively favouring the hypothesis.
Our final step would normally be to combine all these individual Bayes factors into a single one. That involves multiplying them all together, however, and a small number multiplied by a very large one is an even larger one. It isn't worth the effort, the conclusion is pretty obvious.
# Truth and Consequences ¶
Our primary hypothesis is on quite solid ground: Athletes with exceptional testosterone levels are more like athletes of the same sex but with typical testosterone levels, than they are of other athletes with a different sex but similar testosterone levels. If we divide up sports by testosterone level, then, roughly 6-8% of assigned-male athletes will wind up in the <5 nmol/L group, and about the same share of assigned-female athletes will be in the >5 nmol/L group. Note, however, that it doesn't follow that 6-8% of those in the <5 nmol/L group will be assigned-male. About 41% of the athletes at the 2018 Olympics were assigned-female, for instance. If we fix the rate of exceptional testosterone levels at 7%, and assume PyeongChang's rate is typical, a quick application of Bayes' theorem reveals
\begin{align} p( \text{aMab} \mid \text{<5nmol/L} ) &= \frac{ p( \text{<5nmol/L} \mid \text{aMab} ) p( \text{aMab} ) }{ p( \text{<5nmol/L} \mid \text{aMab} ) p( \text{aMab} ) + p( \text{<5nmol/L} \mid \text{aFab} ) p( \text{aFab} ) } \\ {} &= \frac{ 0.07 \cdot 0.59 }{ 0.07 \cdot 0.59 + 0.93 \cdot 0.41 } \\ {} &\approx 9.8\% \end{align}
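That arithmetic is easy to verify with the same assumed rates (7% exceptional testosterone, 59% of athletes assigned male):

```python
def p_amab_given_low_t(p_exceptional=0.07, p_amab=0.59):
    """Bayes' theorem: share of <5 nmol/L athletes who are assigned male."""
    p_afab = 1 - p_amab
    num = p_exceptional * p_amab                 # aMab athletes below the line
    den = num + (1 - p_exceptional) * p_afab     # plus aFab athletes below it
    return num / den

print(round(100 * p_amab_given_low_t(), 1))  # ~9.8 (percent)
```

Varying the two input rates shows how sensitive the ~10% figure is to the assumptions: it is driven almost entirely by the 7% exceptional-testosterone rate.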
If all those assumptions are accurate, about 10% of <5 nmol/L athletes will be assigned-male, more-or-less matching the number I calculated way back at the start. In sports where performance is heavily correlated with height or $$\overline{BMI}$$, then, the 10% of assigned-male athletes in the <5 nmol group will heavily dominate the rankings. The odds of a woman earning recognition in this sport are negligible, leading many of them to drop out. This increases the proportion of men in that sport, leading to more domination of the rankings, more women dropping out, and a nasty feedback loop.
Conversely, about 5% of >5nmol/L athletes will be assigned-female. In a heavily-correlated sport, those women will be outclassed by the men and have little chance of earning recognition for their achievements. They have no incentive to compete, so they'll likely drop out or avoid these sports as well.
In events where physicality has less or no correlation with sporting performance, these effects will be less pronounced or non-existent, of course. But this still translates into fewer assigned-female athletes competing than in the current system.
But it gets worse! We'd also expect an uptick in the number of assigned-female athletes doping, primarily with testosterone inhibitors to bring themselves just below the 5nmol/L line. Alternatively, high-testosterone aFab athletes may inject large doses of testosterone to bulk up and remain competitive with their assigned-male competitors.
By dividing up testosterone levels into only two categories, sporting authorities are implicitly stating that everyone within those categories is identical. A number of athletes would likely go to court to argue that boosting or inhibiting testosterone should be legal, provided they do not cross the 5nmol/L line. If they're successful, then either the rules around testosterone usage would be relaxed, or sporting authorities would be forced to subdivide these groups further. This would lead to an uptick in testosterone doping among all athletes, not just those assigned female.
Notice that assigned-male athletes don't have the same incentives to drop out, and in fact the low-testosterone subgroup may even be encouraged to compete as they have an easier path to sporting fame and glory. Sports where performance is heavily correlated with height or $$\overline{BMI}$$ will come to be dominated by men.
# Let's Put a Bow On This One ¶
[1:15] In a nutshell, I find the arguments and logic that currently permit transgender women to compete against biological women to be remarkably flawed, and I’m convinced that unless quickly rectified, this will KILL women’s sports.
[14:00] I don’t want to see the day when women’s athletics is dominated by Y chromosomes, but without a change in policy, that is precisely what’s going to happen.
It's rather astounding. Transgender athletes are not a problem, on several levels; as I've pointed out before, they've been allowed to compete in the category they identify with for over a decade in some places, and yet no transgender athlete has come to dominate any sport. The Olympics has held the door open since 2004, and not a single transgender athlete has ever openly competed as a transgender athlete. Rationality Rules, like other transphobes, is forced to cherry-pick and commit lies of omission among a handful of examples, inflating them to seem more significant than they actually are.
In response to this non-existent problem, Rationality Rules' proposed solution would accomplish the very thing he wants to avoid! You don't get that turned around if you're a rational person with a firm grasp on the science.
No, this level of self-sabotage is only possible if you're a clueless bigot who's ignorant of the relevant science, and so frightened of transgender people that your critical thinking skills abandon you. The vast difference between what Rationality Rules claims the science says, and what his own citations say, must be because he knows that if he puts on a good enough act nobody will check his work. Everyone will walk away assuming he's rational, rather than a scared, dishonest loon.
It's hard to fit any other conclusion to the data.
# Ugh, Not Again
P-values are back in the news. Nature published an article, signed by 800 scientists, calling for an end to the concept of “statistical significance.” It ruffled my feathers, even though I agreed with its central thesis.
The trouble is human and cognitive more than it is statistical: bucketing results into ‘statistically significant’ and ‘statistically non-significant’ makes people think that the items assigned in that way are categorically different. The same problems are likely to arise under any proposed statistical alternative that involves dichotomization, whether frequentist, Bayesian or otherwise.
Unfortunately, the false belief that crossing the threshold of statistical significance is enough to show that a result is ‘real’ has led scientists and journal editors to privilege such results, thereby distorting the literature. Statistically significant estimates are biased upwards in magnitude and potentially to a large degree, whereas statistically non-significant estimates are biased downwards in magnitude. Consequently, any discussion that focuses on estimates chosen for their significance will be biased. On top of this, the rigid focus on statistical significance encourages researchers to choose data and methods that yield statistical significance for some desired (or simply publishable) result, or that yield statistical non-significance for an undesired result, such as potential side effects of drugs — thereby invalidating conclusions.
Nothing wrong there. While I’ve mentioned some Bayesian buckets, I tucked away a one-sentence counter-argument in an aside over here. Any artificial significant/non-significant boundary is going to promote the distortions they mention here. What got me writing this post was their recommendations.
What will retiring statistical significance look like? We hope that methods sections and data tabulation will be more detailed and nuanced. Authors will emphasize their estimates and the uncertainty in them — for example, by explicitly discussing the lower and upper limits of their intervals. They will not rely on significance tests. When P values are reported, they will be given with sensible precision (for example, P = 0.021 or P = 0.13) — without adornments such as stars or letters to denote statistical significance and not as binary inequalities (P < 0.05 or P > 0.05). Decisions to interpret or to publish results will not be based on statistical thresholds. People will spend less time with statistical software, and more time thinking.
This basically amounts to nothing. Journal editors still have to decide what to print, and if there is no strong alternative they’ll switch from an arbitrary cutoff of p < 0.05 to an ad-hoc arbitrary cutoff. In the meantime, they’re leaving flawed statistical procedures in place. P-values exaggerate the strength of the evidence, as I and others have argued. Confidence intervals are not an improvement, either. As I put it:
For one thing, if you’re a frequentist it’s a category error to state the odds of a hypothesis being true, or that some data makes a hypothesis more likely, or even that you’re testing the truth-hood of a hypothesis. […]
How does this intersect with confidence intervals? If it’s an invalid move to hypothesise[sic] “the population mean is Y,” it must also be invalid to say “there’s a 95% chance the population mean is between X and Z.” That’s attaching a probability to a hypothesis, and therefore a no-no! Instead, what a frequentist confidence interval is really telling you is “assuming this data is a representative sample, if I repeat my experimental procedure an infinite number of times then I’ll calculate a sample mean between X and Z 95% of the time.” A confidence interval says nothing about the test statistic, at least not directly.
In frequentism, the parameter is fixed and the data varies. It doesn’t make sense to consider other parameters, that’s a Bayesian move. And yet the authors propose exactly that!
We must learn to embrace uncertainty. One practical way to do so is to rename confidence intervals as ‘compatibility intervals’ and interpret them in a way that avoids overconfidence. Specifically, we recommend that authors describe the practical implications of all values inside the interval, especially the observed effect (or point estimate) and the limits. In doing so, they should remember that all the values between the interval’s limits are reasonably compatible with the data, given the statistical assumptions used to compute the interval. Therefore, singling out one particular value (such as the null value) in the interval as ‘shown’ makes no sense.
Much of what the authors proposed would be fixed by switching to Bayesian statistics. Their own suggestions invoke Bayesian ideas without realizing it. Yet they go out of their way to say nothing’s wrong with p-values or confidence intervals, despite evidence to the contrary. Their proposal is destined to fail, yet it got more support than the arguably-superior p < 0.005 proposal.
Maddening. Maybe it’s time I got out my poison pen and added my two cents to the scientific record.
https://mc-stan.org/docs/2_22/reference-manual/notation-for-samples-chains-and-draws.html
## 16.3 Notation for samples, chains, and draws
To establish basic notation, suppose a target Bayesian posterior density $$p(\theta | y)$$ given real-valued vectors of parameters $$\theta$$ and real- and discrete-valued data $$y$$.
An MCMC sample consists of a set of $$M$$ Markov chains, each consisting of an ordered sequence of $$N$$ draws from the posterior. The sample thus consists of $$M \times N$$ draws from the posterior.
### 16.3.1 Potential Scale Reduction
One way to monitor whether a chain has converged to the equilibrium distribution is to compare its behavior to other randomly initialized chains. This is the motivation for the Gelman and Rubin (1992) potential scale reduction statistic, $$\hat{R}$$. The $$\hat{R}$$ statistic measures the ratio of the average variance of samples within each chain to the variance of the pooled samples across chains; if all chains are at equilibrium, these will be the same and $$\hat{R}$$ will be one. If the chains have not converged to a common distribution, the $$\hat{R}$$ statistic will be greater than one.
Gelman and Rubin’s recommendation is that the independent Markov chains be initialized with diffuse starting values for the parameters and sampled until all values for $$\hat{R}$$ are below 1.1. Stan allows users to specify initial values for parameters and it is also able to draw diffuse random initializations automatically satisfying the declared parameter constraints.
The $$\hat{R}$$ statistic is defined for a set of $$M$$ Markov chains, $$\theta_m$$, each of which has $$N$$ samples $$\theta^{(n)}_m$$. The between-chain variance estimate is
$B = \frac{N}{M-1} \, \sum_{m=1}^M (\bar{\theta}^{(\bullet)}_{m} - \bar{\theta}^{(\bullet)}_{\bullet})^2,$
where
$\bar{\theta}_m^{(\bullet)} = \frac{1}{N} \sum_{n = 1}^N \theta_m^{(n)}$
and
$\bar{\theta}^{(\bullet)}_{\bullet} = \frac{1}{M} \, \sum_{m=1}^M \bar{\theta}_m^{(\bullet)}.$
The within-chain variance is averaged over the chains,
$W = \frac{1}{M} \, \sum_{m=1}^M s_m^2,$
where
$s_m^2 = \frac{1}{N-1} \, \sum_{n=1}^N (\theta^{(n)}_m - \bar{\theta}^{(\bullet)}_m)^2.$
The variance estimator is a mixture of the within-chain and cross-chain sample variances,
$\widehat{\mbox{var}}^{+}\!(\theta|y) = \frac{N-1}{N}\, W \, + \, \frac{1}{N} \, B.$
Finally, the potential scale reduction statistic is defined by
$\hat{R} \, = \, \sqrt{\frac{\widehat{\mbox{var}}^{+}\!(\theta|y)}{W}}.$
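The formulas above translate directly into code. Here is a minimal sketch for $$M$$ chains of equal length, without the chain-splitting refinement:

```python
import math
import random

def r_hat(chains):
    """Gelman-Rubin potential scale reduction for M same-length chains."""
    m, n = len(chains), len(chains[0])
    means = [sum(c) / n for c in chains]                     # per-chain means
    grand = sum(means) / m                                   # overall mean
    b = n / (m - 1) * sum((mu - grand) ** 2 for mu in means)         # between-chain
    w = sum(sum((x - mu) ** 2 for x in c) / (n - 1)
            for c, mu in zip(chains, means)) / m                     # within-chain
    var_plus = (n - 1) / n * w + b / n
    return math.sqrt(var_plus / w)

# Two well-mixed chains give R-hat near 1; shifting one chain apart inflates it.
rng = random.Random(0)
good = [[rng.gauss(0, 1) for _ in range(500)] for _ in range(2)]
bad = [good[0], [x + 5 for x in good[1]]]
```

With chains sampling the same distribution, the between-chain term is tiny and the ratio stays near one; the shifted pair drives $$B$$, and hence $$\hat{R}$$, well above one.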
### 16.3.2 Split R-hat for Detecting Non-Stationarity
Before Stan calculates the potential-scale-reduction statistic $$\hat{R}$$, each chain is split into two halves. This provides an additional means to detect non-stationarity in the individual chains. If one chain involves gradually increasing values and another involves gradually decreasing values, they have not mixed well, yet they can still have $$\hat{R}$$ values near unity. In this case, splitting each chain into two parts leads to $$\hat{R}$$ values substantially greater than 1, because the first half of each chain has not mixed with the second half.
### 16.3.3 Convergence is Global
A question that often arises is whether it is acceptable to monitor convergence of only a subset of the parameters or generated quantities. The short answer is “no,” but this is elaborated further in this section.
For example, consider the value lp__, which is the log posterior density (up to a constant).
It is thus a mistake to declare convergence in any practical sense if lp__ has not converged, because different chains are really in different parts of the space. Yet measuring convergence for lp__ is particularly tricky, as noted below.
#### 16.3.3.1 Asymptotics and transience vs. equilibrium
Markov chain convergence is a global property in the sense that it does not depend on the choice of function of the parameters that is monitored. There is no hard cutoff between pre-convergence “transience” and post-convergence “equilibrium.” What happens is that as the number of states in the chain approaches infinity, the distribution of possible states in the chain approaches the target distribution and in that limit the expected value of the Monte Carlo estimator of any integrable function converges to the true expectation. There is nothing like warmup here, because in the limit, the effects of initial state are completely washed out.
#### 16.3.3.2 Multivariate convergence of functions
The $$\hat{R}$$ statistic considers the composition of a Markov chain and a function, and if the Markov chain has converged then each Markov chain and function composition will have converged. Multivariate functions converge when all of their margins have converged, by the Cramér-Wold theorem.
The transformation from unconstrained space to constrained space is just another function, so it does not affect convergence.
Different functions may have different autocorrelations, but if the Markov chain has equilibrated then all Markov chain plus function compositions should be consistent with convergence. Formally, any function that appears inconsistent is of concern and although it would be unreasonable to test every function, lp__ and other measured quantities should at least be consistent.
The obvious difference in lp__ is that it tends to vary quickly with position and is consequently susceptible to outliers.
#### 16.3.3.3 Finite numbers of states
The question is what happens for finite numbers of states? If we can prove a strong geometric ergodicity property (which depends on the sampler and the target distribution), then one can show that there exists a finite time after which the chain forgets its initial state with a large probability. This is both the autocorrelation time and the warmup time. But even if you can show it exists and is finite (which is nigh impossible) you can’t compute an actual value analytically.
So what we do in practice is hope that the finite number of draws is large enough for the expectations to be reasonably accurate. Removing warmup iterations improves the accuracy of the expectations but there is no guarantee that removing any finite number of samples will be enough.
#### 16.3.3.4 Why inconsistent R-hat?
Firstly, as noted above, for any finite number of draws, there will always be some residual effect of the initial state, which typically manifests as some small (or large if the autocorrelation time is huge) probability of having a large outlier. Functions robust to such outliers (say, quantiles) will appear more stable and have better $$\hat{R}$$. Functions vulnerable to such outliers may show fragility.
Secondly, use of the $$\hat{R}$$ statistic makes very strong assumptions. In particular, it assumes that the functions being considered are Gaussian or it only uses the first two moments and assumes some kind of independence. The point is that strong assumptions are made that do not always hold. In particular, the distribution for the log posterior density (lp__) almost never looks Gaussian, instead it features long tails that can lead to large $$\hat{R}$$ even in the large $$N$$ limit. Tweaks to $$\hat{R}$$, such as using quantiles in place of raw values, have the flavor of making the samples of interest more Gaussian and hence the $$\hat{R}$$ statistic more accurate.
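The quantile-flavoured tweak can be illustrated as follows (a sketch of the idea only, not Stan's exact rank-normalization recipe): replacing heavy-tailed draws by their ranks in the pooled sample removes the outlier sensitivity before $$\hat{R}$$ is computed.

```python
import numpy as np

def rhat(chains):
    # Potential-scale-reduction formula, inlined for a self-contained demo.
    M, N = chains.shape
    means = chains.mean(axis=1)
    B = N / (M - 1) * np.sum((means - means.mean()) ** 2)
    W = chains.var(axis=1, ddof=1).mean()
    return np.sqrt(((N - 1) / N * W + B / N) / W)

rng = np.random.default_rng(2)
# Heavy-tailed (Cauchy) chains, all from the same distribution: a single
# extreme draw can dominate the sample variances and distort R-hat.
heavy = rng.standard_cauchy(size=(4, 2000))

# Replace each draw by its rank in the pooled sample; ranks are bounded
# and exchangeable here, so rank-based R-hat sits reliably near 1.
ranks = heavy.ravel().argsort().argsort().reshape(heavy.shape).astype(float)
print(rhat(heavy), rhat(ranks))
```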
#### 16.3.3.5 Final words on convergence monitoring
“Convergence” is a global property and holds for all integrable functions at once, but employing the $$\hat{R}$$ statistic requires additional assumptions and thus may not work for all functions equally well.
Note that if you just compare the expectations between chains, you can rely on the Markov chain asymptotics for Gaussian distributions and apply the standard tests.
https://zbmath.org/?q=an:1088.34075
# zbMATH — the first resource for mathematics
On the constructive inverse problem in differential Galois theory. (English) Zbl 1088.34075
The authors show how to construct differential equations over $$\mathbb{C}(x)$$ with Galois group $$G$$, where $$G^0$$ is of the form $$G_1\cdots G_r$$, all $$G_i$$ being simple groups of type $$A_l$$, $$C_l$$, $$D_l$$, $$E_6$$ or $$E_7$$. To this end, they give sufficient conditions for a differential equation to have a given semisimple group as its Galois group. They discuss criteria allowing one to reduce the inverse problem for arbitrary linear algebraic groups over $$\mathbb{C}(x)$$ to finding equivariant differential equations with given connected Galois groups over an arbitrary finite Galois extension $$K$$ of $$\mathbb{C}(x)$$.
##### MSC:
34M50 Inverse problems (Riemann-Hilbert, inverse differential Galois, etc.) for ordinary differential equations in the complex domain
12H05 Differential algebra
12H20 Abstract differential equations
http://carolabinder.blogspot.com/2015/05/the-limited-political-implications-of.html
## Monday, May 25, 2015
### The Limited Political Implications of Behavioral Economics
A recent post on Marginal Revolution contends that progressives use findings from behavioral economics to support the economic policies they favor, while ignoring the implications that support conservative policies. The short post, originally a comment by blogger and computational biologist Luis Pedro Coelho, is perhaps intentionally controversial, arguing that loss aversion is a case against redistributive policies and social mobility:
"Taking from the higher-incomes to give it to the lower incomes may be negative utility as the higher incomes are valuing their loss at an exaggerated rate (it’s a loss), while the lower income recipients under value it...
...if your utility function is heavily rank-based (a standard left-wing view) and you accept loss-aversion from the behavioral literature, then social mobility is suspect from an utility point-of-view."
Tyler Cowen made a similar point a few years ago, arguing that "For a given level of income, if some are moving up others are moving down... More upward — and thus downward — relative mobility probably means less aggregate happiness, due to habit formation and frame of reference effects."
I don't think loss aversion, habit formation, and the like make a strong case against (or for) redistribution or social mobility, but I do think Coelho has a point that economists need to watch out for our own confirmation bias when we go pointing out other behavioral biases to support our favorite policies. Simply appealing to behavioral economics, in general, or to loss aversion or any number of documented decision-making biases, rarely makes a strong case for or against broad policy aims or strategies. The reason is best summarized by Wolfgang Pesendorfer in "Behavioral Economics Comes of Age":
Behavioral economics argues that economists ignore important variables that affect behavior. The new variables are typically shown to affect decisions in experimental settings. For economists, the difficulty is that these new variables may be unobservable or even difficult to define in economic settings with economic data. From the perspective of an economist, the unobservable variable amounts to a free parameter in the utility function. Having too many such parameters already, the economist finds it difficult to utilize the experimental finding.
All economic models require making drastic simplifications of reality. Whether they can say anything useful depends on how well they can capture those aspects of reality that are relevant to the question at hand and leave out those that aren't. Behavioral economics has done a good job of pointing out some aspects of reality that standard models leave out, but not always of telling us exactly when these are more relevant than dozens of other aspects of reality we also leave out without second thought. For example, "default bias" seems to be a hugely important factor in retirement savings, so it should definitely be a consideration in the design of very narrow policies regarding 401(K) plan participation, but that does not mean we need to also include it in every macroeconomic model.
1. Their loss aversion point is nothing compared to decreasing marginal utility, and way more people. If you shift $98 billion from the Koch brothers, leaving them with just a mere $1 billion each, is this loss to them in utility, even including loss aversion, anything compared to the gain of giving close to 1 million of the poorest people $100,000 each, or spending that $100,000 each on Heckman-style early human development, and other investments, in their children?
But a big point too is future generations. If there is some loss aversion to breaking incredible income inequality, it's a one-time loss aversion, and then all future generations don't have to suffer the same horrendous income inequality. I'm sure there was a lot of loss aversion to King George and his court when the United States freed itself from England, but I think that was just a little outweighed by the benefits to the colonists and all of their descendants.
http://nrich.maths.org/public/leg.php?code=-68&cl=3&cldcmpid=1075
Search by Topic
Resources tagged with Visualising similar to Disappearing Square:
Muggles Magic
Stage: 3 Challenge Level:
You can move the 4 pieces of the jigsaw and fit them into both outlines. Explain what has happened to the missing one unit of area.
Framed
Stage: 3 Challenge Level:
Seven small rectangular pictures have one inch wide frames. The frames are removed and the pictures are fitted together like a jigsaw to make a rectangle of length 12 inches. Find the dimensions of. . . .
Coloured Edges
Stage: 3 Challenge Level:
The whole set of tiles is used to make a square. This has a green and blue border. There are no green or blue tiles anywhere in the square except on this border. How many tiles are there in the set?
Rolling Around
Stage: 3 Challenge Level:
A circle rolls around the outside edge of a square so that its circumference always touches the edge of the square. Can you describe the locus of the centre of the circle?
Sprouts
Stage: 2, 3, 4 and 5 Challenge Level:
A game for 2 people. Take turns joining two dots, until your opponent is unable to move.
The Old Goats
Stage: 3 Challenge Level:
A rectangular field has two posts with a ring on top of each post. There are two quarrelsome goats and plenty of ropes which you can tie to their collars. How can you secure them so they can't. . . .
Christmas Boxes
Stage: 3 Challenge Level:
Find all the ways to cut out a 'net' of six squares that can be folded into a cube.
Counting Triangles
Stage: 3 Challenge Level:
Triangles are formed by joining the vertices of a skeletal cube. How many different types of triangle are there? How many triangles altogether?
Screwed-up
Stage: 3 Challenge Level:
A cylindrical helix is just a spiral on a cylinder, like an ordinary spring or the thread on a bolt. If I turn a left-handed helix over (top to bottom) does it become a right handed helix?
All in the Mind
Stage: 3 Challenge Level:
Imagine you are suspending a cube from one vertex (corner) and allowing it to hang freely. Now imagine you are lowering it into water until it is exactly half submerged. What shape does the surface. . . .
Triangular Tantaliser
Stage: 3 Challenge Level:
Draw all the possible distinct triangles on a 4 x 4 dotty grid. Convince me that you have all possible triangles.
Isosceles Triangles
Stage: 3 Challenge Level:
Draw some isosceles triangles with an area of $9$cm$^2$ and a vertex at (20,20). If all the vertices must have whole number coordinates, how many is it possible to draw?
Take Ten
Stage: 3 Challenge Level:
Is it possible to remove ten unit cubes from a 3 by 3 by 3 cube made from 27 unit cubes so that the surface area of the remaining solid is the same as the surface area of the original 3 by 3 by 3. . . .
Tied Up
Stage: 3 Challenge Level:
In a right angled triangular field, three animals are tethered to posts at the midpoint of each side. Each rope is just long enough to allow the animal to reach two adjacent vertices. Only one animal. . . .
An Unusual Shape
Stage: 3 Challenge Level:
Can you maximise the area available to a grazing goat?
Speeding Boats
Stage: 4 Challenge Level:
Two boats travel up and down a lake. Can you picture where they will cross if you know how fast each boat is travelling?
Conway's Chequerboard Army
Stage: 3 Challenge Level:
Here is a solitaire type environment for you to experiment with. Which targets can you reach?
Wari
Stage: 4 Challenge Level:
This is a simple version of an ancient game played all over the world. It is also called Mancala. What tactics will increase your chances of winning?
Square Coordinates
Stage: 3 Challenge Level:
A tilted square is a square with no horizontal sides. Can you devise a general instruction for the construction of a square when you are given just one of its sides?
Intersecting Circles
Stage: 3 Challenge Level:
Three circles have a maximum of six intersections with each other. What is the maximum number of intersections that a hundred circles could have?
Dissect
Stage: 3 Challenge Level:
It is possible to dissect any square into smaller squares. What is the minimum number of squares a 13 by 13 square can be dissected into?
Painting Cubes
Stage: 3 Challenge Level:
Imagine you have six different colours of paint. You paint a cube using a different colour for each of the six faces. How many different cubes can be painted using the same set of six colours?
Convex Polygons
Stage: 3 Challenge Level:
Show that among the interior angles of a convex polygon there cannot be more than three acute angles.
Charting Success
Stage: 3 and 4 Challenge Level:
Can you make sense of the charts and diagrams that are created and used by sports competitors, trainers and statisticians?
Pattern Power
Stage: 1, 2 and 3
Mathematics is the study of patterns. Studying pattern is an opportunity to observe, hypothesise, experiment, discover and create.
Buses
Stage: 3 Challenge Level:
A bus route has a total duration of 40 minutes. Every 10 minutes, two buses set out, one from each end. How many buses will one bus meet on its way from one end to the other end?
Flight of the Flibbins
Stage: 3 Challenge Level:
Blue Flibbins are so jealous of their red partners that they will not leave them on their own with any other blue Flibbin. What is the quickest way of getting the five pairs of Flibbins safely to. . . .
Coordinate Patterns
Stage: 3 Challenge Level:
Charlie and Alison have been drawing patterns on coordinate grids. Can you picture where the patterns lead?
Bands and Bridges: Bringing Topology Back
Stage: 2 and 3
Lyndon Baker describes how the Mobius strip and Euler's law can introduce pupils to the idea of topology.
You Owe Me Five Farthings, Say the Bells of St Martin's
Stage: 3 Challenge Level:
Use the interactivity to listen to the bells ringing a pattern. Now it's your turn! Play one of the bells yourself. How do you know when it is your turn to ring?
3D Stacks
Stage: 2 and 3 Challenge Level:
Can you find a way of representing these arrangements of balls?
A Tilted Square
Stage: 4 Challenge Level:
The opposite vertices of a square have coordinates (a,b) and (c,d). What are the coordinates of the other vertices?
Jam
Stage: 4 Challenge Level:
To avoid losing think of another very well known game where the patterns of play are similar.
How Many Dice?
Stage: 3 Challenge Level:
On a standard die, the numbers 1, 2 and 3 are opposite 6, 5 and 4 respectively, so that opposite faces add to 7. If you make standard dice by writing 1, 2, 3, 4, 5, 6 on blank cubes you will find. . . .
Auditorium Steps
Stage: 2 and 3 Challenge Level:
What is the shape of wrapping paper that you would need to completely wrap this model?
More Pebbles
Stage: 2 and 3 Challenge Level:
Have a go at this 3D extension to the Pebbles problem.
Zooming in on the Squares
Stage: 2 and 3
Start with a large square, join the midpoints of its sides, you'll see four right angled triangles. Remove these triangles, a second square is left. Repeat the operation. What happens?
Sea Defences
Stage: 2 and 3 Challenge Level:
These are pictures of the sea defences at New Brighton. Can you work out what a basic shape might be in both images of the sea wall and work out a way they might fit together?
Triangles Within Triangles
Stage: 4 Challenge Level:
Can you find a rule which connects consecutive triangular numbers?
Reflecting Squarely
Stage: 3 Challenge Level:
In how many ways can you fit all three pieces together to make shapes with line symmetry?
Königsberg
Stage: 3 Challenge Level:
Can you cross each of the seven bridges that join the north and south of the river to the two islands, once and once only, without retracing your steps?
Picturing Triangle Numbers
Stage: 3 Challenge Level:
Triangle numbers can be represented by a triangular array of squares. What do you notice about the sum of identical triangle numbers?
When Will You Pay Me? Say the Bells of Old Bailey
Stage: 3 Challenge Level:
Use the interactivity to play two of the bells in a pattern. How do you know when it is your turn to ring, and how do you know which bell to ring?
Jam
Stage: 4 Challenge Level:
A game for 2 players
Tetrahedra Tester
Stage: 3 Challenge Level:
An irregular tetrahedron is composed of four different triangles. Can such a tetrahedron be constructed where the side lengths are 4, 5, 6, 7, 8 and 9 units of length?
Diagonal Dodge
Stage: 2 and 3 Challenge Level:
A game for 2 players. Can be played online. One player has 1 red counter, the other has 4 blue. The red counter needs to reach the other side, and the blue needs to trap the red.
Konigsberg Plus
Stage: 3 Challenge Level:
Euler discussed whether or not it was possible to stroll around Koenigsberg crossing each of its seven bridges exactly once. Experiment with different numbers of islands and bridges.
Travelling Salesman
Stage: 3 Challenge Level:
A Hamiltonian circuit is a continuous path in a graph that passes through each of the vertices exactly once and returns to the start. How many Hamiltonian circuits can you find in these graphs?
Frogs
Stage: 3 Challenge Level:
How many moves does it take to swap over some red and blue frogs? Do you have a method?
Dice, Routes and Pathways
Stage: 1, 2 and 3
This article for teachers discusses examples of problems in which there is no obvious method but in which children can be encouraged to think deeply about the context and extend their ability to. . . .
http://www.physicsforums.com/showthread.php?t=207053
## Friction of Rotating Object
If a cylinder is rotating with the circular end pressed against the ground, how can the work done by friction be calculated?
No real work is done by friction on the rolling cylinder, since the point of contact doesn't actually move. Give this a read for more information... http://www.physicsforums.com/showthread.php?t=150891
I mean the cylinder is standing upright and spinning about its vertical axis, with the force directed downward and the circular bottom face in contact with the ground. Heat will be produced and work will have to be done to turn the cylinder, so there must be a force of friction. Drawing this is so difficult.
$$M=\frac{2}{3} \mu_s PR$$
$$\mu_s$$ -coeff. static friction
If the system is steady state, then the work would be: $$w = \tau \theta$$ This would imply that you can measure the torque and the angular displacement.
Essentially, you perform an integration. Consider an infinitesimal slice of the cylinder's bottom with area $$da=r\,dr\,d\theta,$$ where r is its radial position and $\theta$ its angular position. Let the local normal force have magnitude dN; then, with $\mu$ being the coefficient of kinetic friction, the local frictional force is: $$d\vec{F}=-\mu\,dN\,\vec{i}_{\theta},$$ where $\vec{i}_{\theta}$ is the local unit vector in the direction of motion. The local torque contribution is: $$d\vec{\tau}=\vec{r}\times d\vec{F}=-\mu\,r\,dN\,\vec{i}_{r}\times\vec{i}_{\theta}=-\mu\,r\,dN\,\vec{k},$$ where $\vec{k}$ is the unit vector pointing upwards. Now, letting the normal force per unit area be some constant n, we have: $$d\vec{\tau}=-\mu\,n\,r^{2}\,dr\,d\theta\,\vec{k}.$$ The total torque is therefore: $$\vec\tau=\int_{0}^{2\pi}\int_{0}^{R}d\vec{\tau}=-\frac{2\pi n\mu R^{3}}{3}\vec{k}=-\frac{2}{3}\mu R\,nA\,\vec{k}=-\frac{2}{3}\mu R N\,\vec{k},$$ where R is the cylinder's radius and N the net normal force on the cylinder's bottom. Assuming a moment of inertia I, the angular acceleration is: $$\dot{\omega}=-\frac{2\mu R N}{3I}.$$ This is the constant rate at which the angular velocity $\omega$ decreases. Setting N=kmg and I=smR^{2} yields: $$\dot{\omega}=-\frac{k}{s}\frac{2g\mu}{R}$$
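The torque integral above is easy to check numerically. Below is my own sketch (not part of the thread), with made-up values for the friction coefficient, radius, and normal force, assuming the uniform pressure n = N/(πR²) used in the derivation.

```python
import numpy as np

# Numerically integrate the frictional torque d(tau) = mu * n * r^2 dr dtheta
# over the contact disc and compare with the closed form (2/3) * mu * R * N.
mu, R, N_force = 0.4, 0.05, 10.0           # friction coefficient, radius, normal force
n = N_force / (np.pi * R**2)               # uniform normal force per unit area

steps = 100_000
dr = R / steps
r_mid = (np.arange(steps) + 0.5) * dr      # midpoint-rule radii
torque = 2.0 * np.pi * mu * n * np.sum(r_mid**2) * dr   # theta-integral contributes 2*pi
print(torque, (2.0 / 3.0) * mu * R * N_force)           # the two agree
```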
Thanks but unfortunately it appears that to solve for work by this method will require a lot of research into angular stuff so I might have to change my experimental method.
Quote by Cyrus: $$M=\frac{2}{3} \mu_s PR$$, where $$\mu_s$$ is the coefficient of static friction, R the radius, and P the axial load.
Here is my derivation, using a simpler method. Think of a small point on the contacting surface of the cylinder. $$W=Fx$$, where the work W done on the point is the product of F, the friction force, and x, the distance the point travelled. We know $$F=\mu\Delta mg$$, where $$\Delta m$$ is the mass supported at the point. The point moves in circular motion, therefore $$x=r \theta$$, where r is the distance from the point to the rotating axis and $$\theta$$ is the angular displacement (arc length equals the radius times the angle in radians). Then $$W=\mu\Delta mgr \theta$$ This holds because the friction, which is constant, acts opposite to the motion of the point. And $$W=\mu\rho Agr\theta$$, where $$\rho$$ is the area density and A is the area of the point; we now replace A with the total area of all the points at the same distance from the rotating axis as the original point (these points form a circle). That circle has area $$A=2\pi r\Delta r$$ and $$\rho=\frac{M}{\pi R^{2}}$$, where M is the total mass of the cylinder and R is the radius of the contact surface. Substituting back into the formula: $$W=\mu\frac{M}{\pi R^{2}}2\pi r\Delta r\,gr\theta$$ It simplifies to $$W=\frac{2\mu Mg\theta}{R^{2}}r^{2}\Delta r$$ Finally, we can use integration to sum it from 0 to R: $$\sum W=\frac{2\mu Mg\theta}{R^{2}}\int^{R}_{0} r^{2}\,dr$$ and there's my answer: $$W=\frac{2}{3}\mu MgR\theta$$
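The shell-by-shell sum above can also be cross-checked numerically. This sketch (my addition; the parameter values are arbitrary) accumulates $\frac{2\mu Mg\theta}{R^2} r^2\,\Delta r$ over thin rings and compares the total against $\frac{2}{3}\mu MgR\theta$:

```python
def friction_work_numeric(mu, M, g, theta, R, shells=2000):
    """Sum the work over thin circular shells of width dr:
    dW = (2*mu*M*g*theta / R**2) * r**2 * dr."""
    dr = R / shells
    total = 0.0
    for i in range(shells):
        r = (i + 0.5) * dr          # midpoint radius of the shell
        total += (2 * mu * M * g * theta / R ** 2) * r ** 2 * dr
    return total

mu, M, g, theta, R = 0.3, 2.0, 9.81, 5.0, 0.1   # arbitrary values
closed_form = (2.0 / 3.0) * mu * M * g * R * theta

numeric = friction_work_numeric(mu, M, g, theta, R)
assert abs(numeric - closed_form) / closed_form < 1e-4
```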
I don't think it's wrong. He can do that. He used $$\vec{i}_{\theta}$$, which is the direction of motion.
https://allaboutindigos.com/x8dn8d8h/377f70-rotary-compressor-diagram
# rotary compressor diagram
An air compressor manual is kind of like a fine bourbon; it gets better with age. Our air compressor manual library might not be as old as Raiders of the Lost Ark, but it expands every day — however, finding a manual for a specific model can lead you on an Indiana Jones treasure hunt. Review any Quincy Compressor manual to find air compressor diagrams, technical specifications, specific product features and benefits, easy-to-follow operating instructions and much more.

In general, air compressors consist of an air pump, a motor or engine, and a tank to hold the compressed air. The two broad classes are positive displacement and roto-dynamic compressors; positive displacement compressors can be further divided into reciprocating and rotary types. The cycle of operation of a reciprocating air compressor is best shown on a pressure-volume (p-V) diagram, known as an indicator diagram for the compressor.

Rotary air compressors are regarded as the workhorses of the industrial marketplace. They are usually used for continuous operation in commercial and industrial applications and may be either stationary or portable. Most often, rotary compressors are arranged as a single-rotor unit with a driver, although both single- and multiple-rotor construction are used. The design of the rotor is the main item that distinguishes the different types of rotary compressors.

The two types or configurations of rotary compressors are the stationary-blade rotary compressor and the rotating-blade rotary compressor; Figure 7-12 is an example of the stationary blade type. The rotary compressor compresses gas because of the movement of the rotor in relation to the pump chamber. In the stationary-blade type, the only moving parts are a steel ring, an eccentric or cam, and a sliding barrier. The barrier separates the intake and exhaust ports; when the compressor starts to draw vapor from the evaporator, the barrier is held against the ring by a spring. As the cam spins, it carries the ring with it; the ring rolls on its outer rim around the wall of the cylinder, compressing the gas and passing it on to the condenser (see Figure 7-17). Figure 7-16 shows how the refrigerant vapor is brought from the evaporator as the exit port is opening; when the vapor has been compressed to a certain pressure, it leaves by way of the exhaust port on its way to the condenser. This type of compressor is used in some home refrigerators and air conditioners, but it is not used as much as the reciprocating hermetic type because there are some problems with lubrication.

The other type is the rotating-blade (rotary-vane) compressor (see Figure 7-19). The pump consists primarily of a rotor, stator, and blades — usually two spring-loaded roller blades mounted 180° apart, though designs with eight blades exist. The slotted rotor is eccentrically arranged within the stator, providing a crescent-shaped swept area between the intake and exhaust ports, and the roller is so mounted that it touches the cylinder at a point between the intake and discharge ports. Two consecutive vanes form one compartment, and due to the eccentric motion of the rotor the volume of each compartment keeps changing: as the rotor rotates, a volume of air V1 at pressure P1 is trapped between the vanes of the rotor and the casing, and the vapor is compressed as the next blade passes the contact point and the vapor space becomes smaller. The compression of the gas establishes a pressure difference in the system that creates a flow of refrigerant from one part of the system to the other.

Rotary screw air compressors are a newer, improved type; their name comes from the two counter-rotating helical screws that are the source of the compression, forcing the gas into a smaller space. In an oil-injected rotary-screw compressor, oil is injected into the compression cavities to aid sealing and provide cooling for the gas charge; the oil is then separated from the discharge stream, cooled, filtered and recycled. These machines are more expensive than traditional reciprocating models but have numerous benefits that are quickly making them the system of choice for service truck and van fleet managers. C-Series rotary screw compressors operate at a 100% duty cycle, are fully piped and wired for simple external connection of all utilities, and come with a low-sound enclosure as standard; open-frame duplex rotary screw compressors are built to be easy to service, maintain and operate.

A scroll compressor is another type of rotary compressor: a positive displacement machine that uses the compression action provided by two intermeshing spiral-shaped scrolls, in which one is fixed and the other orbiting (ASHRAE, 2004). Vapor-compression refrigeration uses a circulating liquid refrigerant as the medium which absorbs and removes heat from the space to be cooled and subsequently rejects that heat elsewhere; such systems have four components: a compressor, a condenser, a thermal expansion valve, and an evaporator. A hermetic compressor is one in which the electric motor and compressor are contained within the same pressure vessel, with the motor shaft integral with the compressor crankshaft; hermetically sealed rotary compressors are manufactured in large quantities, particularly in fractional-tonnage applications. Rotary compressors used with ammonia employ stainless steel tubing and working metal, with no copper or copper-alloy parts. When connecting three-phase rotary compressors, care should be taken to ensure that the direction of rotation is correct, as rotation occurs in one direction only; for wiring instructions, follow the diagram supplied with the compressor.
https://www.gradesaver.com/textbooks/math/algebra/algebra-2-1st-edition/chapter-12-sequences-and-series-cumulative-review-page-848/15
# Chapter 12 Sequences and Series - Cumulative Review - Page 848: 15
$360.$
#### Work Step by Step
For a 3x3 matrix: $\left[\begin{array}{rrr} a & b & c \\ d &e & f \\ g &h & i \\ \end{array} \right]$ The determinant is given by: $D=a(ei-fh)-b(di-fg)+c(dh-eg).$ Hence here $D=0(13\cdot(-1)-(-4)\cdot4)-5(10\cdot(-1)-(-4)\cdot(-5))+2(10\cdot4-13\cdot(-5))=0(3)-5(-30)+2(105)=360.$
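The arithmetic can be double-checked with a short script implementing the same cofactor expansion; the matrix entries below are read off from the computation above:

```python
def det3(m):
    """Determinant of a 3x3 matrix via cofactor expansion
    along the first row: a(ei - fh) - b(di - fg) + c(dh - eg)."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# Entries inferred from the step-by-step computation above
m = [[0, 5, 2],
     [10, 13, -4],
     [-5, 4, -1]]

print(det3(m))  # 360
```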
http://math.stackexchange.com/questions/124291/whats-the-probability-that-theres-at-least-one-ball-in-every-bin-if-2n-balls-a
# What's the probability that there's at least one ball in every bin if 2n balls are placed into n bins?
I've been working on this all day long. Here's what I've done until now. The denominator is easy. It's $n^{2n}$. I compute the numerator as follows.
All $n$ bins have at least one ball = $n$ bins must have one of the $2n$ balls each + the remaining $n$ balls are placed in any of the bins in any fashion.
Now I solve the first part. $n$ balls can be chosen out of $2n$ balls in $\binom{2n}{n}$ ways, and they can be placed in $n!$ ways in the $n$ bins. Hence multiplying them yields $(n+1)(n+2)\cdots(2n)$.
I have no clue how to proceed with the second part. Please help. Also please correct if I am wrong in the way I've proceeded so far.
But I guess I've done something wrong in my first step too. Because for n=2, this does not yield the correct result. Any hint would be of great help. – Ragavan N Mar 25 '12 at 15:52
The approach that you started on is a good one. The "total" count was right, but the count of the "favourable" cases was not. Let us generalize slightly. We throw $m$ (numbered) balls, one at a time, at a line of $x$ (numbered) buckets, and ask for the probability that none of the buckets ends up empty.
There are $x^m$ outcomes, all equally likely. We now need to count the number of outcomes for which none of the buckets is empty. There is unfortunately no known closed form for this number.
However, the number of ways of dividing a set of $m$ elements into $x$ non-empty subsets has a name. It is the Stirling number of the second kind. One common notation is $S(m,x)$. Another looks like a binomial coefficient symbol, with $\{\}$ instead of $()$.
So (by definition) we can divide our set of $m$ objects into $x$ non-empty subsets in $S(m,x)$ ways. For every such division, we can assign these subsets to the buckets, one subset to each bucket, in $x!$ ways. This gives us a total of $x!S(m,x)$ ways, and our probability is therefore $$\frac{x!S(m,x)}{x^m}.$$
For your particular problem, let $m=2n$ and $x=n$.
There is no known closed form for $S(m,x)$. Perhaps there is a closed form for the special case $S(2n,n)$ that you need. I do not see one immediately, but have not thought enough about it.
There are nice recurrences for the Stirling numbers of the second kind. We can also get pleasant expressions for them as sums, by using the principle of inclusion/exclusion. To give you a beginning on that, we start counting the number of ways to have no bucket empty.
There are $x^m$ ways to distribute the balls. How many have bucket $i$ empty? Clearly $(x-1)^m$. Which bucket will be empty can be chosen in $\binom{x}{1}$ ways. So as a first step to our count, we arrive at $x^m-\binom{x}{1}(x-1)^m$.
However, we have subtracted too much, since the cases where $i$ is empty and $j$ is empty have been subtracted twice from $x^m$. So for every (unordered) pair $i$, $j$, we must add back the $(x-2)^m$ assignments in which buckets $i$ and $j$ are empty. This gives us the new estimate $x^m-\binom{x}{1}(x-1)^m+\binom{x}{2}(x-2)^m$.
However, we have added back one too many times all the cases where $3$ of the buckets are empty. Continue. We end up with an attractive sum, that has no known closed form, and that is of course a close relative of $S(m,x)$.
Remark: Unfortunately, for the probability model in which each ball is equally likely to end up in any bucket, with independence between balls, a "Stars and Bars" approach won't work. Yes, we can get a count of the number of ways to distribute $m$ identical balls between $x$ buckets. We can also get a count of the number of ways to distribute so that no bucket is empty. But then we run into a fatal complication. The different ways to distribute $m$ identical balls among $x$ buckets are not equally likely.
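The inclusion-exclusion count developed above can be turned into a short numerical sketch (the helper name is hypothetical; exact rational arithmetic via the standard library's `fractions`):

```python
from fractions import Fraction
from math import comb

def prob_no_empty(m, x):
    """Probability that none of x buckets is empty when m balls are
    thrown independently, each uniformly at random."""
    # Inclusion-exclusion: the number of onto assignments (= x! * S(m, x))
    # is sum_k (-1)^k * C(x, k) * (x - k)^m.
    surjections = sum((-1) ** k * comb(x, k) * (x - k) ** m
                      for k in range(x + 1))
    return Fraction(surjections, x ** m)

# The original question: m = 2n balls into x = n buckets.
print(prob_no_empty(4, 2))  # 7/8 for n = 2
```

For $n=2$ this gives $7/8$, which matches a direct enumeration of the $2^4$ equally likely outcomes (only the two all-in-one-bucket outcomes leave a bucket empty).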
Awesome answer! I have posted a similar question here: math.stackexchange.com/questions/382115/… but we couldn't be able to find an answer – llnk May 5 '13 at 14:15
https://labs.tib.eu/arxiv/?author=Feng%20Tian
• ### Scripting Relational Database Engine Using Transducer(1805.04265)
May 11, 2018 cs.DB
We allow a database user to script a parallel relational database engine with a procedural language. Procedural code is executed as a user-defined relational query operator called a transducer. A transducer is tightly integrated with the relational engine, including the query optimizer and query executor, and can be executed in parallel like other query operators. With transducers, we can efficiently execute queries that are very difficult to express in SQL. As an example, we show how to run time series and graph queries, etc., within a parallel relational database.
• ### Effects of eccentricity on climates and habitability of terrestrial exoplanets around M dwarfs(1710.01405)
Oct. 3, 2017 astro-ph.EP
Eccentricity is an important orbital parameter. Understanding its effect on planetary climate and habitability is critical for us to search for a habitable world beyond our solar system. The orbital configurations of M-dwarf planets are always tidally-locked at resonance states, which are quite different from those around Sun-like stars. M-dwarf planets need to be investigated separately. Here we use a comprehensive three-dimensional atmospheric general circulation model to systematically investigate how eccentricity influences climate and habitability of M-dwarf exoplanets. The simulation results show that (1) the seasonal climatic cycles of such planets are very weak even for e = 0.4. It is unlikely that an aqua planet falls out of a habitable zone during its orbit. (2) The annual global mean surface temperature significantly increases with increased eccentricity, due to the decrease of the cloud albedo. Both the runaway greenhouse inner edge and moist greenhouse inner edge shift outward. (3) Planets in an eccentric orbit can be captured in other spin-orbit resonance states which lead to different climate patterns, namely eyeball pattern and striped-ball pattern. The striped-ball pattern has evidently higher surface temperatures due to the reduced planetary albedo. Near the outer edge, planets with p = 1.0 and 2.0 are more resistant to the snowball state due to more locally-concentrated stellar fluxes. Thus, planets with integer spin-orbit resonance numbers have wider habitable zones than those with half-integer spin-orbit resonance states. Above all, as a comparison to circular orbit, eccentricity shrinks the width of the habitable zone.
• ### Sampling with positive definite kernels and an associated dichotomy(1708.06016)
Aug. 20, 2017 math.FA
We study classes of reproducing kernels $K$ on general domains; these are kernels which arise commonly in machine learning models; models based on certain families of reproducing kernel Hilbert spaces. They are the positive definite kernels $K$ with the property that there are countable discrete sample-subsets $S$; i.e., proper subsets $S$ having the property that every function in $\mathscr{H}\left(K\right)$ admits an $S$-sample representation. We give a characterization of kernels which admit such non-trivial countable discrete sample-sets. A number of applications and concrete kernels are given in the second half of the paper.
• ### Reproducing kernels and choices of associated feature spaces, in the form of $L^{2}$-spaces(1707.08492)
July 26, 2017 math.PR, math.FA
Motivated by applications to the study of stochastic processes, we introduce a new analysis of positive definite kernels $K$, their reproducing kernel Hilbert spaces (RKHS), and an associated family of feature spaces that may be chosen in the form $L^{2}\left(\mu\right)$; and we study the question of which measures $\mu$ are right for a particular kernel $K$. The answer to this depends on the particular application at hand. Such applications are the focus of the separate sections in the paper.
• ### Metric duality between positive definite kernels and boundary processes(1706.09532)
June 29, 2017 math.PR, math.FA
We study representations of positive definite kernels $K$ in a general setting, but with view to applications to harmonic analysis, to metric geometry, and to realizations of certain stochastic processes. Our initial results are stated for the most general given positive definite kernel, but are then subsequently specialized to the above mentioned applications. Given a positive definite kernel $K$ on $S\times S$ where $S$ is a fixed set, we first study families of factorizations of $K$. By a factorization (or representation) we mean a probability space $\left(B,\mu\right)$ and an associated stochastic process indexed by $S$ which has $K$ as its covariance kernel. For each realization we identify a co-isometric transform from $L^{2}\left(\mu\right)$ onto $\mathscr{H}\left(K\right)$, where $\mathscr{H}\left(K\right)$ denotes the reproducing kernel Hilbert space of $K$. In some cases, this entails a certain renormalization of $K$. Our emphasis is on such realizations which are minimal in a sense we make precise. By minimal we mean roughly that $B$ may be realized as a certain $K$-boundary of the given set $S$. We prove existence of minimal realizations in a general setting.
• ### Reflection positivity and spectral theory(1705.05262)
June 6, 2017 math-ph, math.MP, math.FA, math.SP
We consider reflection-positivity (Osterwalder-Schrader positivity, O.S.-p.) as it is used in the study of renormalization questions in physics. In concrete cases, this refers to specific Hilbert spaces that arise before and after the reflection. Our focus is a comparative study of the associated spectral theory, now referring to the canonical operators in these two Hilbert spaces. Indeed, the inner product which produces the respective Hilbert spaces of quantum states changes, and comparisons are subtle. We analyze in detail a number of geometric and spectral theoretic properties connected with axiomatic reflection positivity, as well as their probabilistic counterparts; especially the role of the Markov property. This view also suggests two new theorems, which we prove. In rough outline: It is possible to express OS-positivity purely in terms of a triple of projections in a fixed Hilbert space, and a reflection operator. For such three projections, there is a related property, often referred to as the Markov property; and it is well known that the latter implies the former; i.e., when the reflection is given, then the Markov property implies O.S.-p., but not conversely. In this paper we shall prove two theorems which flesh out a much more precise relationship between the two. We show that for every OS-positive system $\left(E_{+},\theta\right)$, the operator $E_{+}\theta E_{+}$ has a canonical and universal factorization.
• ### Unbounded operators in Hilbert space, duality rules, characteristic projections, and their applications(1509.08024)
Jan. 18, 2017 math.FA
Our main theorem is in the generality of the axioms of Hilbert space, and the theory of unbounded operators. Consider two Hilbert spaces such that their intersection contains a fixed vector space D. It is of interest to make a precise linking between such two Hilbert spaces when it is assumed that D is dense in one of the two; but generally not in the other. No relative boundedness is assumed. Nonetheless, under natural assumptions (motivated by potential theory), we prove a theorem where a comparison between the two Hilbert spaces is made via a specific selfadjoint semibounded operator. Applications include physical Hamiltonians, both continuous and discrete (infinite network models), and operator theory of reflection positivity.
• ### Positive definite kernels and boundary spaces(1611.04185)
Nov. 13, 2016 math-ph, math.MP, math.FA
We consider a kernel based harmonic analysis of "boundary," and boundary representations. Our setting is general: certain classes of positive definite kernels. Our theorems extend (and are motivated by) results and notions from classical harmonic analysis on the disk. Our positive definite kernels include those defined on infinite discrete sets, for example sets of vertices in electrical networks, or discrete sets which arise from sampling operations performed on positive definite kernels in a continuous setting. Below we give a summary of main conclusions in the paper: Starting with a given positive definite kernel $K$ we make precise generalized boundaries for $K$. They are measure theoretic "boundaries." Using the theory of Gaussian processes, we show that there is always such a generalized boundary for any positive definite kernel.
• ### Dynamical properties of endomorphisms, multiresolutions, similarity-and orthogonality relations(1607.07229)
July 25, 2016 math.FA
We study positive transfer operators $R$ in the setting of general measure spaces $\left(X,\mathscr{B}\right)$. For each $R$, we compute associated path-space probability spaces $\left(\Omega,\mathbb{P}\right)$. When the transfer operator $R$ is compatible with an endomorphism in $\left(X,\mathscr{B}\right)$, we get associated multiresolutions for the Hilbert spaces $L^{2}\left(\Omega,\mathbb{P}\right)$ where the path-space $\Omega$ may then be taken to be a solenoid. Our multiresolutions include both orthogonality relations and self-similarity algorithms for standard wavelets and for generalized wavelet-resolutions. Applications are given to topological dynamics, ergodic theory, and spectral theory, in general; to iterated function systems (IFSs), and to Markov chains in particular.
• ### The MUSCLES Treasury Survey III: X-ray to Infrared Spectra of 11 M and K Stars Hosting Planets(1604.04776)
April 16, 2016 astro-ph.SR, astro-ph.EP
We present a catalog of panchromatic spectral energy distributions (SEDs) for 7 M and 4 K dwarf stars that span X-ray to infrared wavelengths (5 {\AA} - 5.5 {\mu}m). These SEDs are composites of Chandra or XMM-Newton data from 5 - ~50 {\AA}, a plasma emission model from ~50 - 100 {\AA}, broadband empirical estimates from 100 - 1170 {\AA}, HST data from 1170 - 5700 {\AA}, including a reconstruction of stellar Ly{\alpha} emission at 1215.67 {\AA}, and a PHOENIX model spectrum from 5700 - 55000 {\AA}. Using these SEDs, we computed the photodissociation rates of several molecules prevalent in planetary atmospheres when exposed to each star's unattenuated flux ("unshielded" photodissociation rates) and found that rates differ among stars by over an order of magnitude for most molecules. In general, the same spectral regions drive unshielded photodissociations both for the minimally and maximally FUV active stars. However, for ozone visible flux drives dissociation for the M stars whereas NUV flux drives dissociation for the K stars. We also searched for an FUV continuum in the assembled SEDs and detected it in 5/11 stars, where it contributes around 10% of the flux in the range spanned by the continuum bands. An ultraviolet continuum shape is resolved for the star {\epsilon} Eri that shows an edge likely attributable to Si II recombination. The 11 SEDs presented in this paper, available online through the Mikulski Archive for Space Telescopes, will be valuable for vetting stellar upper-atmosphere emission models and simulating photochemistry in exoplanet atmospheres.
• ### The MUSCLES Treasury Survey I: Motivation and Overview(1602.09142)
Feb. 29, 2016 astro-ph.SR, astro-ph.EP
Ground- and space-based planet searches employing radial velocity techniques and transit photometry have detected thousands of planet-hosting stars in the Milky Way. The chemistry of these atmospheres is controlled by the shape and absolute flux of the stellar spectral energy distribution, however, flux distributions of relatively inactive low-mass stars are poorly known at present. To better understand exoplanets orbiting low-mass stars, we have executed a panchromatic (X-ray to mid-IR) study of the spectral energy distributions of 11 nearby planet hosting stars, the {\it Measurements of the Ultraviolet Spectral Characteristics of Low-mass Exoplanetary Systems} (MUSCLES) Treasury Survey. The MUSCLES program consists of contemporaneous observations at X-ray, UV, and optical wavelengths. We show that energetic radiation (X-ray and ultraviolet) is present from magnetically active stellar atmospheres at all times for stars as late as M5. Emission line luminosities of \ion{C}{4} and \ion{Mg}{2} are strongly correlated with band-integrated luminosities. We find that while the slope of the spectral energy distribution, FUV/NUV, increases by approximately two orders of magnitude from early K to late M dwarfs ($\approx$~0.01~to~1), the absolute FUV and XUV flux levels at their corresponding habitable zone distances are constant to within factors of a few, spanning the range 10~--~70 erg cm$^{-2}$ s$^{-1}$ in the habitable zone. Despite the lack of strong stellar activity indicators in their optical spectra, several of the M dwarfs in our sample show spectacular flare emission in their UV light curves. Finally, we interpret enhanced $L(line)$/$L_{Bol}$ ratios for \ion{C}{4} and \ion{N}{5} as tentative observational evidence for the interaction of planets with large planetary mass-to-orbital distance ratios ($M_{plan}$/$a_{plan}$) with the transition regions of their host stars.
• ### Capture cross sections of 15N(n, {\gamma})16N at astrophysical energies(1602.04338)
Feb. 13, 2016 nucl-th
We have reanalyzed reaction cross sections of 16N on a 12C target. The nucleon density distribution of 16N, especially the surface density distribution, was extracted using the modified Glauber model. On the basis of dilute surface densities, the discussion of the 15N(n, {\gamma})16N reaction was performed within the framework of the direct capture reaction mechanism. The calculations agreed quite well with the experimental data.
• ### Nonuniform sampling, reproducing kernels, and the associated Hilbert spaces(1601.07380)
Jan. 27, 2016 math.PR, math.FA, math.SP
In a general context of positive definite kernels $k$, we develop tools and algorithms for sampling in reproducing kernel Hilbert space $\mathscr{H}$ (RKHS). With reference to these RKHSs, our results allow inference from samples; more precisely, reconstruction of an "entire" (or global) signal, a function $f$ from $\mathscr{H}$, via generalized interpolation of $f$ from partial information obtained from carefully chosen distributions of sample points. We give necessary and sufficient conditions for configurations of point-masses $\delta_{x}$ of sample-points $x$ to have finite norm relative to the particular RKHS $\mathscr{H}$ considered. When this is the case, and the kernel $k$ is given, we obtain an induced positive definite kernel $\left\langle \delta_{x},\delta_{y}\right\rangle _{\mathscr{H}}$. We perform a comparison, and we study when this induced positive definite kernel has $l^{2}$ rows and columns. The latter task is accomplished with the use of certain symmetric pairs of operators in the two Hilbert spaces, $l^{2}$ on one side, and the RKHS $\mathscr{H}$ on the other. A number of applications are given, including to infinite network systems, to graph Laplacians, to resistance metrics, and to sampling of Gaussian fields.
• ### Representations of the canonical commutation relations--algebra and the operators of stochastic calculus(1601.01482)
Jan. 8, 2016 math-ph, math.MP, math.OA, math.FA
We study a family of representations of the canonical commutation relations (CCR)-algebra (an infinite number of degrees of freedom), which we call admissible. The family of admissible representations includes the Fock-vacuum representation. We show that, to every admissible representation, there is an associated Gaussian stochastic calculus, and we point out that the case of the Fock-vacuum CCR-representation in a natural way yields the operators of Malliavin calculus. And we thus get the operators of Malliavin's calculus of variation from a more algebraic approach than is common. And we obtain explicit and natural formulas, and rules, for the operators of stochastic calculus. Our approach makes use of a notion of symmetric (closable) pairs of operators. The Fock-vacuum representation yields a maximal symmetric pair. This duality viewpoint has the further advantage that issues with unbounded operators and dense domains can be resolved much easier than what is possible with alternative tools. With the use of CCR representation theory, we also obtain, as a byproduct, a number of new results in multi-variable operator theory which we feel are of independent interest.
• ### Transfer Operators, Induced Probability Spaces, and Random Walk Models(1510.05573)
Oct. 19, 2015 math.FA
We study a family of discrete-time random-walk models. The starting point is a fixed generalized transfer operator $R$ subject to a set of axioms, and a given endomorphism in a compact Hausdorff space $X$. Our setup includes a host of models from applied dynamical systems, and it leads to general path-space probability realizations of the initial transfer operator. The analytic data in our construction is a pair $\left(h,\lambda\right)$, where $h$ is an $R$-harmonic function on $X$, and $\lambda$ is a given positive measure on $X$ subject to a certain invariance condition defined from $R$. With this we show that there are then discrete-time random-walk realizations in explicit path-space models; each associated to a probability measure $\mathbb{P}$ on path-space, in such a way that the initial data allows for spectral characterization: The initial endomorphism in $X$ lifts to an automorphism in path-space with the probability measure $\mathbb{P}$ quasi-invariant with respect to a shift automorphism. The latter takes the form of explicit multi-resolutions in $L^{2}$ of $\mathbb{P}$ in the sense of Lax-Phillips scattering theory.
• ### Noncommutative analysis, Multivariable spectral theory for operators in Hilbert space, Probability, and Unitary Representations(1408.1164)
Aug. 23, 2015 math.FA
Over the decades, Functional Analysis has been enriched and inspired on account of demands from neighboring fields, within mathematics, harmonic analysis (wavelets and signal processing), numerical analysis (finite element methods, discretization), PDEs (diffusion equations, scattering theory), representation theory; iterated function systems (fractals, Julia sets, chaotic dynamical systems), ergodic theory, operator algebras, and many more. And neighboring areas, probability/statistics (for example stochastic processes, Ito and Malliavin calculus), physics (representation of Lie groups, quantum field theory), and spectral theory for Schr\"odinger operators. We have strived for a more accessible book, and yet aimed squarely at applications; -- we have been serious about motivation: Rather than beginning with the four big theorems in Functional Analysis, our point of departure is an initial choice of topics from applications. And we have aimed for flexibility of use; acknowledging that students and instructors will invariably have a host of diverse goals in teaching beginning analysis courses. And students come to the course with a varied background. Indeed, over the years we found that students have come to the Functional Analysis sequence from other and different areas of math, and even from other departments; and so we have presented the material in a way that minimizes the need for prerequisites. We also found that well motivated students are easily able to fill in what is needed from measure theory, or from a facility with the four big theorems of Functional Analysis. And we found that the approach "learn-by-using" has a comparative advantage.
• ### Graph Laplacians and discrete reproducing kernel Hilbert spaces from restrictions(1501.04954)
Aug. 14, 2015 math.FA
We study kernel functions, and associated reproducing kernel Hilbert spaces $\mathscr{H}$ over infinite, discrete and countable sets $V$. Numerical analysis builds discrete models (e.g., finite element) for the purpose of finding approximate solutions to boundary value problems; using multiresolution-subdivision schemes in continuous domains. In this paper, we turn the tables: our object of study is realistic infinite discrete models in their own right; and we then use an analysis of suitable continuous counterpart problems, but now serving as a tool for obtaining solutions in the discrete world.
• ### Induced representations arising from a character with finite orbit in a semidirect product(1504.04925)
Aug. 11, 2015 math.FA
Making use of a unified approach to certain classes of induced representations, we establish here a number of detailed spectral theoretic decomposition results. They apply to specific problems from non-commutative harmonic analysis, ergodic theory, and dynamical systems. Our analysis is in the setting of semidirect products, discrete subgroups, and solenoids. Our applications include analysis and ergodic theory of Bratteli diagrams and their compact duals; of wavelet sets, and wavelet representations.
• ### Extensions of Positive Definite Functions: Applications and Their Harmonic Analysis(1507.02547)
July 9, 2015 math.CA, math.OA, math.FA, math.RT
We study two classes of extension problems, and their interconnections: (i) Extension of positive definite (p.d.) continuous functions defined on subsets in locally compact groups $G$; (ii) In case of Lie groups, representations of the associated Lie algebras $La\left(G\right)$ by unbounded skew-Hermitian operators acting in a reproducing kernel Hilbert space (RKHS) $\mathscr{H}_{F}$. Why extensions? In science, experimentalists frequently gather spectral data in cases when the observed data is limited, for example limited by the precision of instruments; or on account of a variety of other limiting external factors. Given this fact of life, it is both an art and a science to still produce solid conclusions from restricted or limited data. In a general sense, our monograph deals with the mathematics of extending some such given partial data-sets obtained from experiments. More specifically, we are concerned with the problems of extending available partial information, obtained, for example, from sampling. In our case, the limited information is a restriction, and the extension in turn is the full positive definite function (in a dual variable); so an extension if available will be an everywhere defined generating function for the exact probability distribution which reflects the data; if it were fully available. Such extensions of local information (in the form of positive definite functions) will in turn furnish us with spectral information. In this form, the problem becomes an operator extension problem, referring to operators in a suitable reproducing kernel Hilbert spaces (RKHS). In our presentation we have stressed hands-on-examples. Extensions are almost never unique, and so we deal with both the question of existence, and if there are extensions, how they relate back to the initial completion problem.
• The TMT Detailed Science Case describes the transformational science that the Thirty Meter Telescope will enable. Planned to begin science operations in 2024, TMT will open up opportunities for revolutionary discoveries in essentially every field of astronomy, astrophysics and cosmology, seeing much fainter objects much more clearly than existing telescopes. Per this capability, TMT's science agenda fills all of space and time, from nearby comets and asteroids, to exoplanets, to the most distant galaxies, and all the way back to the very first sources of light in the Universe. More than 150 astronomers from within the TMT partnership and beyond offered input in compiling the new 2015 Detailed Science Case. The contributing astronomers represent the entire TMT partnership, including the California Institute of Technology (Caltech), the Indian Institute of Astrophysics (IIA), the National Astronomical Observatories of the Chinese Academy of Sciences (NAOC), the National Astronomical Observatory of Japan (NAOJ), the University of California, the Association of Canadian Universities for Research in Astronomy (ACURA) and US associate partner, the Association of Universities for Research in Astronomy (AURA).
• ### Characterizing the Habitable Zones of Exoplanetary Systems with a Large Ultraviolet/Visible/Near-IR Space Observatory(1505.01840)
May 7, 2015 astro-ph.SR, astro-ph.EP
Understanding the surface and atmospheric conditions of Earth-size, rocky planets in the habitable zones (HZs) of low-mass stars is currently one of the greatest astronomical endeavors. Knowledge of the planetary effective surface temperature alone is insufficient to accurately interpret biosignature gases when they are observed in the coming decades. The UV stellar spectrum drives and regulates the upper atmospheric heating and chemistry on Earth-like planets, is critical to the definition and interpretation of biosignature gases, and may even produce false-positives in our search for biologic activity. This white paper briefly describes the scientific motivation for panchromatic observations of exoplanetary systems as a whole (star and planet), argues that a future NASA UV/Vis/near-IR space observatory is well-suited to carry out this work, and describes technology development goals that can be achieved in the next decade to support the development of a UV/Vis/near-IR flagship mission in the 2020s.
• ### Infinite networks and variation of conductance functions in discrete Laplacians(1404.4686)
March 8, 2015 math.FA
For a given infinite connected graph $G=(V,E)$ and an arbitrary but fixed conductance function $c$, we study an associated graph Laplacian $\Delta_{c}$; it is a generalized difference operator where the differences are measured across the edges $E$ in $G$; and the conductance function $c$ represents the corresponding coefficients. The graph Laplacian (a key tool in the study of infinite networks) acts in an energy Hilbert space $\mathscr{H}_{E}$ computed from $c$. Using a certain Parseval frame, we study the spectral theoretic properties of graph Laplacians. In fact, for fixed $c$, there are two versions of the graph Laplacian, one defined naturally in the $l^{2}$ space of $V$, and the other in $\mathscr{H}_{E}$. The first is automatically selfadjoint, but the second involves a Krein extension. We prove that, as sets, the two spectra are the same, aside from the point 0. The point zero may be in the spectrum of the second, but not the first. We further study the fine structure of the respective spectra as the conductance function varies, showing how the spectrum changes subject to variations in the function $c$.
• ### Infinite weighted graphs with bounded resistance metric(1502.02549)
Feb. 24, 2015 math.FA
We consider infinite weighted graphs $G$, i.e., sets of vertices $V$, and edges $E$ assumed countable infinite. An assignment of weights is a positive symmetric function $c$ on $E$ (the edge-set), conductance. From this, one naturally defines a reversible Markov process, and a corresponding Laplace operator acting on functions on $V$, voltage distributions. The harmonic functions are of special importance. We establish explicit boundary representations for the harmonic functions on $G$ of finite energy. We compute a resistance metric $d$ from a given conductance function. (The resistance distance $d(x,y)$ between two vertices $x$ and $y$ is the voltage drop from $x$ to $y$, which is induced by the given assignment of resistors when 1 amp is inserted at the vertex $x$, and then extracted again at $y$.) We study the class of models where this resistance metric is bounded. We show that then the finite-energy functions form an algebra of $\frac{1}{2}$-Lipschitz-continuous and bounded functions on $V$, relative to the metric $d$. We further show that, in this case, the metric completion $M$ of $(V,d)$ is automatically compact, and that the vertex-set $V$ is open in $M$. We obtain a Poisson boundary-representation for the harmonic functions of finite energy, and an interpolation formula for every function on $V$ of finite energy. We further compare $M$ to other compactifications; e.g., to certain path-space models.
• ### Generalized Gramians: Creating frame vectors in maximal subspaces(1501.07233)
Jan. 28, 2015 math.FA
A frame is a system of vectors $S$ in Hilbert space $\mathscr{H}$ with properties which allow one to write algorithms for the two operations, analysis and synthesis, relative to $S$, for all vectors in $\mathscr{H}$; expressed in norm-convergent series. Traditionally, frame properties are expressed in terms of an $S$-Gramian, $G_{S}$ (an infinite matrix with entries equal to the inner product of pairs of vectors in $S$); but still with strong restrictions on the given system of vectors in $S$, in order to guarantee frame-bounds. In this paper we remove these restrictions on $G_{S}$, and we obtain instead direct-integral analysis/synthesis formulas. We show that, in spectral subspaces of every finite interval $J$ in the positive half-line, there are associated standard frames, with frame-bounds equal the endpoints of $J$. Applications are given to reproducing kernel Hilbert spaces, and to random fields.
• ### Photochemical Escape of Oxygen from Early Mars(1501.04423)
Jan. 19, 2015 astro-ph.EP
Photochemical escape is an important process for oxygen escape from present Mars. In this work, a 1-D Monte-Carlo model is developed to calculate escape rates of energetic oxygen atoms produced from O2+ dissociative recombination (DR) reactions under 1, 3, 10, and 20 times the present solar XUV flux. We found that although the overall DR rates increase almost linearly with solar XUV flux, the oxygen escape rate increases from 1 to 10 times the present solar XUV flux but decreases when the flux is increased further. Analysis shows that the abundance of atomic species in the upper thermosphere of early Mars increases more rapidly than that of O2+ with increasing XUV flux. While the latter is the source of energetic O atoms, the former increases the collision probability and thus decreases the escape probability of energetic O. Our results suggest that photochemical escape may be a less important escape mechanism than previously thought for the loss of water and/or CO2 from early Mars.
https://openproblems.bio/tasks/dimensionality_reduction/
# Dimensionality reduction manifold preservation
Dimensionality reduction is one of the key challenges in single-cell data representation. Routine single-cell RNA sequencing (scRNA-seq) experiments measure cells in roughly 20,000-30,000 dimensions (i.e., features - mostly gene transcripts but also other functional elements encoded in mRNA, such as lncRNAs). Since its inception, scRNA-seq has grown steadily in the number of cells measured per experiment. Originally, cutting-edge SmartSeq experiments would yield a few hundred cells, at best. Now, it is not uncommon to see experiments that yield over 100,000 cells or even more than 1 million cells.
Each feature in a dataset functions as a single dimension. While each of the ~30,000 dimensions measured in each cell likely contributes to some sort of data structure (setting aside the roughly 75-90% data dropout per cell, another issue entirely), the overall structure of the data is diluted by the “curse of dimensionality”. In short, it’s difficult to visualize the contribution of each individual gene in a way that makes sense to the human eye, i.e., in two or three dimensions (at most). Thus, we need a way to reduce the dimensionality of the data for visualization and interpretation.
## The methods
### Principal component analysis (PCA)
Reference
Linear dimensionality reduction using Singular Value Decomposition of the data to project it to a lower dimensional space. The input data is centered but not scaled for each feature before applying the SVD.
It uses the scipy.sparse.linalg ARPACK implementation of the truncated SVD as provided by scanpy.
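The recipe above (center each feature without scaling, then a truncated SVD) can be sketched in a few lines of numpy. This is a toy illustration, not the scanpy implementation, and the array sizes are made up:

```python
import numpy as np

def pca_truncated_svd(X, n_comps=2):
    """Project X onto its top principal components.

    Mirrors the recipe described above: each feature is centered
    (but not scaled), then a truncated SVD gives the projection.
    """
    Xc = X - X.mean(axis=0)                # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_comps].T             # coordinates in PC space

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 30))             # 100 "cells", 30 "genes"
Z = pca_truncated_svd(X, n_comps=2)        # Z has shape (100, 2)
```

Because the data are centered first, the projected coordinates have zero mean, and the variance along the first component is at least that along the second.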
### t-distributed stochastic neighbor embedding (t-SNE)
Reference
t-SNE is a tool to visualize high-dimensional data. It converts similarities between data points to joint probabilities and tries to minimize the Kullback-Leibler divergence between the joint probabilities of the low-dimensional embedding and the high-dimensional data. t-SNE has a cost function that is not convex, i.e. with different initializations we can get different results.
It is highly recommended to use another dimensionality reduction method to reduce the number of dimensions to a reasonable amount (e.g. 50) if the number of features is very high. This will suppress some noise and speed up the computation of pairwise distances between samples. The implemented version first applies PCA with 50 dimensions before calling the function provided by scanpy.
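The cost function described above can be made concrete with a small numpy sketch: build joint probabilities $P$ from Gaussian affinities in the high-dimensional space and $Q$ from heavy-tailed Student-t affinities in the embedding, then measure the KL divergence between them. This illustrates only the objective, under a simplified fixed-bandwidth kernel (real t-SNE calibrates per-point bandwidths to a target perplexity), not the optimizer:

```python
import numpy as np

def joint_probs_gaussian(X, sigma=1.0):
    # pairwise squared distances -> Gaussian affinities -> normalize to sum 1
    D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    P = np.exp(-D / (2 * sigma ** 2))
    np.fill_diagonal(P, 0.0)
    return P / P.sum()

def joint_probs_student_t(Y):
    # heavy-tailed (Student-t, 1 d.o.f.) kernel used in the low-dim space
    D = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    Q = 1.0 / (1.0 + D)
    np.fill_diagonal(Q, 0.0)
    return Q / Q.sum()

def kl_divergence(P, Q, eps=1e-12):
    mask = P > 0
    return float((P[mask] * np.log(P[mask] / (Q[mask] + eps))).sum())

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 20))          # high-dimensional data
Y = rng.normal(size=(50, 2))           # a candidate 2-D embedding
cost = kl_divergence(joint_probs_gaussian(X), joint_probs_student_t(Y))
```

Gradient descent on `cost` with respect to `Y` is what the actual optimizer does; different initializations of `Y` lead to different local minima, which is the non-convexity mentioned above.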
### Uniform manifold approximation and projection (UMAP)
Reference
UMAP is an algorithm for dimension reduction based on manifold learning techniques and ideas from topological data analysis. The first phase consists of constructing a fuzzy topological representation, based on nearest neighbours. The second phase is simply optimizing the low dimensional representation to have as close a fuzzy topological representation as possible to the full-dimensional data as measured by cross entropy.
The implemented version first applies PCA with 50 dimensions and calculates a nearest-neighbour graph before calling the modified implementation in scanpy.
### densMAP
Reference
A modification of UMAP that adds an extra cost term in order to preserve information about the relative local density of the data.
The implemented version first applies PCA with 50 dimensions before calling the function from umap-learn.
Variants:
• The (logCPM-normalized, 1000 HVG) expression matrix
• 50 principal components
### Potential of heat-diffusion for affinity-based transition embedding (PHATE)
Reference
The five main steps of PHATE are:
1. Compute the pairwise distances from the data matrix.
2. Transform the distances to affinities to encode local information.
3. Learn global relationships via the diffusion process.
4. Encode the learned relationships using the potential distance.
5. Embed the potential distance information into low dimensions for visualization.
This implementation is from the phate package
Variants:
• The square-root CPM transformed expression matrix
• 50 principal components of the logCPM-normalised, 1000 HVG expression matrix
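The five steps above can be sketched end-to-end in numpy. This toy version uses a fixed-bandwidth Gaussian kernel and classical MDS for the final embedding, whereas the phate package uses adaptive kernels and selects the diffusion time automatically; all parameters below are illustrative:

```python
import numpy as np

def phate_sketch(X, t=3, n_comps=2, sigma=1.0, eps=1e-12):
    # 1. pairwise distances from the data matrix
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    # 2. distances -> affinities (Gaussian kernel) to encode local information
    K = np.exp(-(D ** 2) / (2 * sigma ** 2))
    # 3. diffusion: row-normalize to a Markov matrix and take t steps
    P = K / K.sum(axis=1, keepdims=True)
    Pt = np.linalg.matrix_power(P, t)
    # 4. potential distance: euclidean distance between -log diffusion rows
    U = -np.log(Pt + eps)
    PD = np.sqrt(((U[:, None, :] - U[None, :, :]) ** 2).sum(-1))
    # 5. embed the potential distances via classical MDS (eigendecomposition)
    n = PD.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (PD ** 2) @ J
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:n_comps]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 10))
Y = phate_sketch(X)                    # (60, 2) embedding
```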
### ivis
Reference
ivis is a machine learning library for reducing dimensionality of very large datasets using Siamese Neural Networks.
### NeuralEE
Reference
A neural network implementation of elastic embedding implemented in the NeuralEE package.
Variants:
• Scaled 500 HVGs from a logged expression matrix (no library size normalization)
• LogCPM-normalised, 1000 HVG expression matrix
### scvis
Reference
A neural network generative model that uses the t-SNE objective as a constraint implemented in the scvis package.
## The metrics
### Root mean square error
$$RMSE = \sqrt{ \sum_{i=1}^{n} \frac{(\hat{y}_i - y_i)^2}{n} }$$
Where $y_i$ is the sum of pairwise euclidean distances for each value embedded in low-dimensional space and $\hat{y_i}$ is the sum of pairwise euclidean distances for each value in the original, high-dimensional space. The goal, in terms of preserving this space, is to minimize the difference between these terms. Taking the root of the mean of the squares of all differences (the root mean square error, or $RMSE$) is a simple way to represent this as a scalar, which can then be compared across methods.
Kruskal’s stress, which underlies the now commonly-used multidimensional scaling (MDS), is based more or less on the RMSE. We can calculate and plot Kruskal’s stress to get an idea of where the majority of the distortion of the data’s high-dimensional topography occurs.
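A direct numpy translation of this metric, comparing the per-point sums of pairwise euclidean distances in the two spaces, might look like the following (the random arrays stand in for a dataset and a hypothetical embedding):

```python
import numpy as np

def pairwise_distance_sums(X):
    """For each point, the sum of its euclidean distances to all other points."""
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    return D.sum(axis=1)

def rmse_stress(X_high, X_low):
    # y_hat: distance sums in the original space; y: in the embedded space
    y_hat = pairwise_distance_sums(X_high)
    y = pairwise_distance_sums(X_low)
    return float(np.sqrt(((y_hat - y) ** 2).mean()))

rng = np.random.default_rng(3)
X = rng.normal(size=(40, 25))          # "high-dimensional" data
Y = rng.normal(size=(40, 2))           # a hypothetical 2-D embedding
stress = rmse_stress(X, Y)
```

A perfect (distance-preserving) embedding gives zero stress; any distortion of the pairwise distance structure pushes the value up.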
### Trustworthiness
Trustworthiness expresses to what extent the local structure in an embedding is retained. The trustworthiness is within [0, 1]. It is defined as
$$T(k) = 1 - \frac{2}{nk (2n - 3k - 1)} \sum^n_{i=1} \sum_{j \in \mathcal{N}_{i}^{k}} \max(0, (r(i, j) - k))$$
where for each sample i, $\mathcal{N}_{i}^{k}$ are its k nearest neighbors in the output space, and every sample j is its $r(i, j)$-th nearest neighbor in the input space. In other words, any unexpected nearest neighbors in the output space are penalised in proportion to their rank in the input space.
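The definition translates directly into a brute-force numpy implementation (sklearn.manifold also provides a `trustworthiness` function; the version below simply mirrors the formula above, so it is illustrative rather than optimized):

```python
import numpy as np

def trustworthiness(X, Y, k=5):
    """T(k) as defined above: penalise output-space neighbours that are
    far down the input-space ranking."""
    n = X.shape[0]
    DX = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))  # input space
    DY = np.sqrt(((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1))  # output space
    np.fill_diagonal(DX, np.inf)
    np.fill_diagonal(DY, np.inf)
    # r(i, j): rank of j among i's neighbours in the *input* space (1-based)
    ranks = DX.argsort(axis=1).argsort(axis=1) + 1
    penalty = 0.0
    for i in range(n):
        out_knn = DY[i].argsort()[:k]          # k nearest neighbours in output
        penalty += np.maximum(0, ranks[i, out_knn] - k).sum()
    return 1 - 2 * penalty / (n * k * (2 * n - 3 * k - 1))

rng = np.random.default_rng(4)
X = rng.normal(size=(30, 10))
Y = X @ rng.normal(size=(10, 2))       # a toy linear "embedding"
t = trustworthiness(X, Y, k=5)
```

Using the data as its own embedding yields the maximum score of 1, since every output-space neighbour is also an input-space neighbour.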
References: * “Neighborhood Preservation in Nonlinear Projection Methods: An Experimental Study” J. Venna, S. Kaski * “Learning a Parametric Embedding by Preserving Local Structure” L.J.P. van der Maaten
### Density preservation
Reference
The local density preservation metric that is part of the cost function for densMAP. Some parts of this are re-implemented here, as they are not exposed by the umap-learn package.
### NN Ranking
Reference
A set of metrics from the pyDRMetrics package. The implementation uses a slightly modified version of the original source code rather than the PyPI package that is now available.
• Continuity - Measures error of hard extrusions
• co-KNN size - Counts how many points are in both k-nearest neighbors before and after the DR
• co-KNN AUC - The area under the co-KNN curve
• Local continuity meta criterion - co-KNN size with baseline removal which favors locality more
• Local property metric - Summary of the local co-KNN
• Global property metric - Summary of the global co-KNN
## Example results
Above is a “complex heatmap”, which aims to show the regions that contribute the most stress. You can see that while a majority of the stress comes from the left side of the plot (as shown by the top of the complex heat map), the center of that left set of clusters does not contribute much to the stress, leading us to believe that by the measure of RMSE, the topology is relatively well-preserved. The stress mostly comes from the clusters at the top and bottom of that group of clusters spread across the second PC.
We performed principal component analysis, obtaining the first 50 components. We can then calculate the relative stress using the RMSE for each, in comparison to the original data, $y$. As one might suspect, the more components used, the lower the distortion of the original data.
We can make this comparison across multiple dimensionality reduction methods. We can see that t-SNE seems to distort the data the least, in terms of pairwise euclidean distances. This does not necessarily mean the data is best represented by t-SNE, however. There are multiple means of measuring the “goodness” of a dimensionality reduction; RMSE is simply one of them.
http://www.physicsforums.com/showpost.php?p=4259468&postcount=3
Thread: Altitude and air density
Quote by A.T. Gases are compressible. More pressure means they are compressed into a smaller volume. Liquids not so much. Water has about the same density in a column.
Oh, so at the surface of the Earth, because the pressure is greater, the air is more compressed, while higher up the pressure is less, so the compression is smaller?
But then again, with only the one formula, P = hρg, since both h and ρ decrease, the pressure will decrease disproportionately?
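The contrast in the quote can be sketched numerically: for an (incompressible) water column the density is constant, so P = ρgh is linear in depth, while for an isothermal gas the density is proportional to the pressure, and integrating dP/dh = −ρg gives exponential decay. The constants below are rough textbook values, used purely for illustration:

```python
import math

G = 9.81          # gravitational acceleration, m/s^2
RHO_WATER = 1000  # kg/m^3, roughly constant with depth
H_SCALE = 8500    # m, approximate scale height of an isothermal atmosphere
P0 = 101325       # Pa, sea-level atmospheric pressure

def water_pressure(depth_m):
    # incompressible: density constant, so pressure grows linearly with depth
    return RHO_WATER * G * depth_m

def air_pressure(altitude_m):
    # compressible, isothermal: density proportional to pressure,
    # so dP/dh = -rho*g = -(P / H_SCALE) integrates to exponential decay
    return P0 * math.exp(-altitude_m / H_SCALE)
```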
https://lillehem.com/site/unit-of-charge-04188c
We know that the charge carriers in conductors are free to move around and that charge on a conductor spreads itself out on the surface of the conductor. There are two types of electric charge: positive and negative (commonly carried by protons and electrons respectively). An object with no net charge is referred to as neutral.
The SI unit of electric charge is the coulomb, a derived SI unit represented by the symbol C. A coulomb is defined as the amount of charge transferred by a current of 1 ampere in 1 second, so the ampere-second is an equivalent unit; other units of charge include the abcoulomb (equal to 10 coulombs) and the ampere-minute (equal to 60 coulombs). A coulomb of charge is a very large charge. In electrostatics we therefore often work with charge in micro-coulombs ($$\text{1}$$ $$\text{μC}$$=$$\text{1} \times \text{10}^{-\text{6}}$$ $$\text{C}$$) and nanocoulombs ($$\text{1}$$ $$\text{nC}$$=$$\text{1} \times \text{10}^{-\text{9}}$$ $$\text{C}$$).
The basic unit of charge, called the elementary charge, e, is the amount of charge carried by one electron: $${q}_{e} = \text{1.6} \times \text{10}^{-\text{19}}\text{ C}$$. All other charges in the universe consist of an integer multiple of this charge, $$Q = n{q}_{e}$$. This is known as charge quantisation. The use of the elementary charge as a unit was promoted by George Johnstone Stoney in 1874 for the first system of natural units, called Stoney units; in some natural unit systems, such as the system of atomic units, e itself functions as the unit of electric charge. Millikan and Fletcher demonstrated the quantisation of charge by spraying oil droplets into the space between two charged plates and using what they knew about forces, and in particular the electric force, to determine the charge on an electron. This is now known as Millikan’s oil drop experiment.
Worked example: how many excess electrons does an object carrying a charge of $$-\text{1.92} \times \text{10}^{-\text{17}}$$ $$\text{C}$$ have? As each electron carries the same charge, the total charge must be made up of a certain number of electrons. To determine how many, we divide the total charge by the charge on a single electron: \begin{align*} N & = \frac{-\text{1.92} \times \text{10}^{-\text{17}}}{-\text{1.6} \times \text{10}^{-\text{19}}} \\ & = 120 \text{ electrons} \end{align*}
Worked example: I have $$\text{2}$$ charged metal conducting spheres on insulating stands which are identical except for having different charge. Sphere A has a charge of $$-\text{5}$$ $$\text{nC}$$ and sphere B has a charge of $$-\text{3}$$ $$\text{nC}$$. I bring the spheres together so that they touch each other, and afterwards move them apart so that they are no longer touching. What happens to the charge on the two spheres? When the two conducting spheres are brought together to touch, it is as though they become one single big conductor, and the total charge of the two spheres spreads out across the whole surface of the touching spheres. Before the spheres touch, the total charge is $$-\text{5}\text{ nC} + (-\text{3}\text{ nC}) = -\text{8}\text{ nC}$$; when they touch, they share out this $$-\text{8}$$ $$\text{nC}$$ across their whole surface. When the spheres are moved apart again, each one is left with half of the total original charge: $\frac{-\text{8}\text{ nC}}{2} = -\text{4}\text{ nC}$
Early knowledge of how charged substances interact is now called classical electrodynamics, and is still accurate for problems that do n… In the metre–kilogram–second and the SI systems, the unit of force (newton), the unit of charge (coulomb), and the unit of distance (metre) are all defined independently of Coulomb’s law, so the proportionality factor k is …
This modified article is licensed under a CC BY-NC-SA 4.0 license.
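Since any charge is an integer multiple of the elementary charge, counting excess electrons is a one-line division; a tiny sketch mirroring the worked example above:

```python
# Charge quantisation: any charge Q is an integer multiple of q_e.
Q_E = -1.6e-19          # charge on one electron, in coulombs

def excess_electrons(total_charge):
    # divide the total charge by the charge on a single electron
    n = total_charge / Q_E
    return round(n)

n = excess_electrons(-1.92e-17)
print(n)   # 120, matching the worked example
```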
https://scicomp.stackexchange.com/questions?tab=newest&page=159
# All Questions
8,182 questions
2k views
### Are open-source codes available to study protein folding?
I would like to test the influence of solvation parameters in implicit solvation models and wonder which codes are freely available as standalone programs for protein folding of small proteins, and ...
144 views
### Can quantum methods be applied to the protein-ligand docking problem?
In the problem of protein-ligand docking, most of the time people are happy if they can just predict the final conformation the ligand adopts into the protein's binding pocket. Most of the time one ...
252 views
### Adaptive mesh refinement with perfectly matched layers?
We have an adaptive mesh refinement (AMR) code for solving the elastic wave equation with frictional fault interfaces (based on Chombo for those that are interested). One of the things that we have ...
2k views
### How useful is PETSc for Dense Matrices?
Wherever I have seen, PETSc tutorial/documents etc. say that it is useful for linear algebra and usually specifies that sparse systems will benefit. What about dense matrices? I am concerned about ...
484 views
### Implementing a fair scheduling policy on Maui/Torque
We have Maui and Torque on our lab's UNIX cluster. Right now, all jobs are served by FIFO. We'd like to implement a more fair policy, but I have not successfully implemented it. The online ...
677 views
### Diffusion kernel “guide”
Diffusion kernels are kernels which "project" information about graphs into $R^n$ so that certain machine learning techniques can be performed. I have read through this paper and feel fairly ...
229 views
### Is it possible to use BLAS if I have a function rather than a matrix?
My matrix sizes have grown beyond what can fit on the RAM but I have a function which defines each element cheaply. Is it possible use BLAS (in Fortran or even in MATLAB) in such cases? If I had a ...
https://www.aimsciences.org/article/doi/10.3934/cpaa.2009.8.683
Article Contents
# Inequalities and the Aubry-Mather theory of Hamilton-Jacobi equations
• We provide a general framework of inequalities induced by the Aubry-Mather theory of Hamilton-Jacobi equations. This framework gives a sufficient condition on functions $f\in C^1(\mathbb R^n)$ and $g\in C(\mathbb R^n)$ in order that $f-g$ attains its minimum over $\mathbb R^n$ on the set $\{x\in \mathbb R^n \mid Df(x)=0\}$. As an application of this framework, we provide proofs of the arithmetic mean-geometric mean inequality, Hölder's inequality and Hilbert's inequality in a unified way.
Mathematics Subject Classification: Primary: 49L25; Secondary: 26D15.
https://zbmath.org/authors/?q=ai:palmer.martin
## Palmer, Martin
Author ID: palmer.martin
Published as: Palmer, Martin
External Links: MGP
Documents Indexed: 8 Publications since 2013
Reviewing Activity: 4 Reviews
Co-Authors: 5 Co-Authors with 5 Joint Publications; 200 Co-Co-Authors
### Co-Authors
3 single-authored
2 Miller, Jeremy A.
1 Adams, Colin C.
1 Cantero Morán, Federico
1 Hoste, Jim
1 Tillmann, Ulrike
### Serials
2 Homology, Homotopy and Applications
1 Transactions of the American Mathematical Society
1 Journal of Knot Theory and its Ramifications
1 Documenta Mathematica
1 The Quarterly Journal of Mathematics
1 Algebraic & Geometric Topology
1 Research in the Mathematical Sciences
### Fields
7 Algebraic topology (55-XX)
6 Manifolds and cell complexes (57-XX)
1 Category theory; homological algebra (18-XX)
1 $K$-theory (19-XX)
1 Group theory and generalizations (20-XX)
1 Global analysis, analysis on manifolds (58-XX)
### Cited by 22 Authors
5 Randal-Williams, Oscar
3 Kupers, Alexander
3 Miller, Jeremy A.
2 Galatius, Søren
2 Jabłonowski, Michał
2 Palmer, Martin
2 Tillmann, Ulrike
1 Beardsley, Jonathan
1 Bellingeri, Paolo
1 Bodin, Arnaud
1 Cantero Morán, Federico
1 Chen, Lei
1 Chuang, Joseph
1 Ebert, Johannes Felix
1 Guha, Anshul
1 Horel, Geoffroy
1 Krannich, Manuel
1 Lazarev, Andrey
1 Morava, Jack Johnson
1 Ramras, Daniel A.
1 Trojanowski, Łukasz
1 Wahl, Nathalie
### Cited in 16 Serials
3 Advances in Mathematics
2 Journal of Knot Theory and its Ramifications
2 Geometry & Topology
2 Research in the Mathematical Sciences
1 Israel Journal of Mathematics
1 Mathematical Proceedings of the Cambridge Philosophical Society
1 Journal of Geometry and Physics
1 Bulletin of the London Mathematical Society
1 Mathematische Annalen
1 Mathematische Zeitschrift
1 Proceedings of the American Mathematical Society
1 Transactions of the American Mathematical Society
1 Topology and its Applications
1 Annals of Mathematics. Second Series
1 Algebraic & Geometric Topology
1 Journal of Topology and Analysis
### Cited in 12 Fields
16 Manifolds and cell complexes (57-XX)
14 Algebraic topology (55-XX)
4 Category theory; homological algebra (18-XX)
4 Group theory and generalizations (20-XX)
2 Algebraic geometry (14-XX)
2 $K$-theory (19-XX)
1 Combinatorics (05-XX)
1 Associative rings and algebras (16-XX)
1 Topological groups, Lie groups (22-XX)
1 General topology (54-XX)
1 Global analysis, analysis on manifolds (58-XX)
1 Computer science (68-XX)
http://gap-packages.github.io/Semigroups/doc/chap11.html
### 11 Graph inverse semigroups
In this chapter we describe a class of semigroups arising from directed graphs.
The functionality in Semigroups for graph inverse semigroups was written jointly by Zak Mesyan (UCCS) and J. D. Mitchell (St Andrews).
#### 11.1 Creating graph inverse semigroups
##### 11.1-1 GraphInverseSemigroup
‣ GraphInverseSemigroup( E ) ( operation )
Returns: A graph inverse semigroup.
If E is a digraph (i.e. it satisfies IsDigraph (Digraphs: IsDigraph)), then GraphInverseSemigroup returns the graph inverse semigroup G(E) where, roughly speaking, elements correspond to paths in the graph E.
Let us describe E as a digraph E = (E^0, E^1, r, s), where E^0 is the set of vertices, E^1 is the set of edges, and r and s are functions E^1 -> E^0 giving the range and source of an edge, respectively. The graph inverse semigroup G(E) of E is the semigroup-with-zero generated by the sets E^0 and E^1, together with a set of variables {e^-1 ∣ e ∈ E^1}, satisfying the following relations for all v, w ∈ E^0 and e, f ∈ E^1:
(V)
vw = δ_v,w ⋅ v,
(E1)
s(e) ⋅ e = e ⋅ r(e) = e,
(E2)
r(e) ⋅ e^-1 = e^-1 ⋅ s(e) = e^-1,
(CK1)
e^-1 ⋅ f = δ_e,f ⋅ r(e).
(Here δ is the Kronecker delta.) We define v^-1 = v for each v ∈ E^0, and for any path y = e_1 ⋯ e_n (e_1, …, e_n ∈ E^1) we let y^-1 = e_n^-1 ⋯ e_1^-1. With this notation, every nonzero element of G(E) can be written uniquely as xy^-1 for some paths x, y in E, by the CK1 relation.
For a more complete description, see [MM16].
gap> gr := Digraph([[2, 5, 8, 10], [2, 3, 4, 5, 6, 8, 9, 10], [1],
> [3, 5, 7, 8, 10], [2, 5, 7], [3, 6, 7, 9, 10],
> [1, 4], [1, 5, 9], [1, 2, 7, 8], [3, 5]]);
<digraph with 10 vertices, 37 edges>
gap> S := GraphInverseSemigroup(gr);
<infinite graph inverse semigroup with 10 vertices, 37 edges>
gap> GeneratorsOfInverseSemigroup(S);
[ e_1, e_2, e_3, e_4, e_5, e_6, e_7, e_8, e_9, e_10, e_11, e_12,
e_13, e_14, e_15, e_16, e_17, e_18, e_19, e_20, e_21, e_22, e_23,
e_24, e_25, e_26, e_27, e_28, e_29, e_30, e_31, e_32, e_33, e_34,
e_35, e_36, e_37, v_1, v_2, v_3, v_4, v_5, v_6, v_7, v_8, v_9, v_10
]
gap> AssignGeneratorVariables(S);
gap> e_1 * e_1 ^ -1;
e_1e_1^-1
gap> e_1 ^ -1 * e_1 ^ -1;
0
gap> e_1 ^ -1 * e_1;
v_2
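To make the CK1-driven multiplication concrete, the products in the session above can be modelled with a rough Python sketch. This is illustrative only and is not part of the Semigroups package: the digraph, edge names, and helper functions are invented for the example, and vertex bookkeeping for empty paths is omitted. An element x y^-1 is stored as a pair of edge tuples (x, y), with 0 for the zero element.

```python
# Hypothetical digraph: edge name -> (source vertex, range vertex).
EDGES = {"e1": (2, 1), "e2": (3, 1)}

def is_path(p):
    """A tuple of edges is a path if consecutive edges compose."""
    return all(EDGES[p[i]][1] == EDGES[p[i + 1]][0] for i in range(len(p) - 1))

def multiply(a, b):
    """(x y^-1)(u v^-1): concatenate where the prefix rule allows, else 0."""
    (x, y), (u, v) = a, b
    if u[: len(y)] == y:                     # u = y.u'  ->  (x.u') v^-1
        cand = (x + u[len(y):], v)
    elif y[: len(u)] == u:                   # y = u.y'  ->  x (v.y')^-1
        cand = (x, v + y[len(u):])
    else:
        return 0                             # e^-1 f = 0 for e != f  (CK1)
    return cand if is_path(cand[0]) and is_path(cand[1]) else 0

e = (("e1",), ())      # the edge e_1
ei = ((), ("e1",))     # its inverse e_1^-1

multiply(ei, e)   # ((), ()) -- a vertex, matching e_1^-1 * e_1 = v_2 above
multiply(e, e)    # 0 -- e_1 e_1 is not a path in this digraph
```

As in the GAP session, e_1^-1 * e_1^-1 also comes out as 0, because the candidate path e_1 e_1 fails the composability check.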
##### 11.1-2 Range
‣ Range( x ) ( attribute )
‣ Source( x ) ( attribute )
Returns: A graph inverse semigroup element.
If x is an element of a graph inverse semigroup (i.e. it satisfies IsGraphInverseSemigroupElement (11.1-4)), then Range and Source give, respectively, the start and end vertices of x when viewed as a path in the digraph over which the semigroup is defined.
For a fuller description, see GraphInverseSemigroup (11.1-1).
gap> gr := Digraph([[], [1], [3]]);;
gap> S := GraphInverseSemigroup(gr);;
gap> e := S.1;
e_1
gap> Source(e);
v_2
gap> Range(e);
v_1
##### 11.1-3 IsVertex
‣ IsVertex( x ) ( operation )
Returns: true or false.
If x is an element of a graph inverse semigroup (i.e. it satisfies IsGraphInverseSemigroupElement (11.1-4)), then this attribute returns true if x corresponds to a vertex in the digraph over which the semigroup is defined, and false otherwise.
For a fuller description, see GraphInverseSemigroup (11.1-1).
gap> gr := Digraph([[], [1], [3]]);;
gap> S := GraphInverseSemigroup(gr);;
gap> e := S.1;
e_1
gap> IsVertex(e);
false
gap> v := S.3;
v_1
gap> IsVertex(v);
true
gap> z := v * e;
0
gap> IsVertex(z);
false
##### 11.1-4 IsGraphInverseSemigroup
‣ IsGraphInverseSemigroup( x ) ( filter )
‣ IsGraphInverseSemigroupElement( x ) ( filter )
Returns: true or false.
The category IsGraphInverseSemigroup contains any semigroup defined over a digraph using the GraphInverseSemigroup (11.1-1) operation. The category IsGraphInverseSemigroupElement contains any element contained in such a semigroup.
gap> gr := Digraph([[], [1], [3]]);;
gap> S := GraphInverseSemigroup(gr);
<infinite graph inverse semigroup with 3 vertices, 2 edges>
gap> IsGraphInverseSemigroup(S);
true
gap> x := GeneratorsOfSemigroup(S)[1];
e_1
gap> IsGraphInverseSemigroupElement(x);
true
##### 11.1-5 GraphOfGraphInverseSemigroup
‣ GraphOfGraphInverseSemigroup( S ) ( attribute )
Returns: A digraph.
If S is a graph inverse semigroup (i.e. it satisfies IsGraphInverseSemigroup (11.1-4)), then this attribute returns the original digraph over which S was defined (most likely the argument given to GraphInverseSemigroup (11.1-1) to create S).
gap> gr := Digraph([[], [1], [3]]);
<digraph with 3 vertices, 2 edges>
gap> S := GraphInverseSemigroup(gr);;
gap> GraphOfGraphInverseSemigroup(S);
<digraph with 3 vertices, 2 edges>
##### 11.1-6 IsGraphInverseSemigroupElementCollection
‣ IsGraphInverseSemigroupElementCollection ( category )
Every collection of elements of a graph inverse semigroup belongs to the category IsGraphInverseSemigroupElementCollection. For example, every graph inverse semigroup belongs to IsGraphInverseSemigroupElementCollection.
##### 11.1-7 IsGraphInverseSubsemigroup
‣ IsGraphInverseSubsemigroup ( filter )
IsGraphInverseSubsemigroup is a synonym for IsSemigroup and IsInverseSemigroup and IsGraphInverseSemigroupElementCollection.
See IsGraphInverseSemigroupElementCollection (11.1-6) and IsInverseSemigroup (Reference: IsInverseSemigroup).
gap> gr := Digraph([[], [1], [2]]);
<digraph with 3 vertices, 2 edges>
gap> S := GraphInverseSemigroup(gr);
<finite graph inverse semigroup with 3 vertices, 2 edges>
gap> Elements(S);
[ e_2^-1, e_1^-1, e_1^-1e_2^-1, 0, e_1, e_1e_1^-1, e_1e_1^-1e_2^-1,
e_2, e_2e_2^-1, e_2e_1, e_2e_1e_1^-1, e_2e_1e_1^-1e_2^-1, v_1, v_2,
v_3 ]
gap> T := InverseSemigroup(Elements(S){[3, 5]});;
gap> IsGraphInverseSubsemigroup(T);
true
https://gmatclub.com/forum/for-a-party-three-solid-cheese-balls-with-diameters-of-2-inches-4-in-207385.html
# For a party, three solid cheese balls with diameters of 2 inches, 4 in
Math Expert
Joined: 02 Sep 2009
Posts: 55271
For a party, three solid cheese balls with diameters of 2 inches, 4 in [#permalink]
20 Oct 2015, 02:57
For a party, three solid cheese balls with diameters of 2 inches, 4 inches, and 6 inches, respectively, were combined to form a single cheese ball. What was the approximate diameter, in inches, of the new cheese ball? (The volume of a sphere is $$\frac{4}{3}\pi r^3$$, where r is the radius.)
(A) 12
(B) 16
(C) $$\sqrt[3]{16}$$
(D) $$3\sqrt[3]{8}$$
(E) $$2\sqrt[3]{36}$$
Kudos for a correct solution.
CEO
Status: GMATINSIGHT Tutor
Joined: 08 Jul 2010
Posts: 2933
Location: India
GMAT: INSIGHT
Schools: Darden '21
WE: Education (Education)
Re: For a party, three solid cheese balls with diameters of 2 inches, 4 in [#permalink]
20 Oct 2015, 05:27
8
14
Bunuel wrote:
For a party, three solid cheese balls with diameters of 2 inches, 4 inches, and 6 inches, respectively, were combined to form a single cheese ball. What was the approximate diameter, in inches, of the new cheese ball? (The volume of a sphere is $$\frac{4}{3}\pi r^3$$, where r is the radius.)
(A) 12
(B) 16
(C) $$\sqrt[3]{16}$$
(D) $$3\sqrt[3]{8}$$
(E) $$2\sqrt[3]{36}$$
Kudos for a correct solution.
diameters of 2 inches, 4 inches, and 6 inches
i.e. radii of 1 inch, 2 inches, and 3 inches
Sum of their volumes = $$\frac{4}{3}\pi (1^3+2^3+3^3)$$ = $$\frac{4}{3}\pi (36)$$
Volume of New Ball = $$\frac{4}{3}\pi (R^3)$$ = $$\frac{4}{3}\pi (36)$$
i.e. $$R^3 = 36$$
i.e. $$R = \sqrt[3]{36}$$
i.e. $$Diameter = 2\sqrt[3]{36}$$
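The arithmetic above is easy to check numerically. Here is a small Python sketch (not from the thread): sum the three spheres' volumes, then invert V = (4/3)πR³ for the new radius and diameter.

```python
import math

diameters = [2, 4, 6]
# total volume = sum of (4/3)*pi*r^3 over the three balls
total_volume = sum(4 / 3 * math.pi * (d / 2) ** 3 for d in diameters)

# invert V = (4/3)*pi*R^3:  R^3 = 1^3 + 2^3 + 3^3 = 36
R = (3 * total_volume / (4 * math.pi)) ** (1 / 3)
new_diameter = 2 * R   # = 2 * cbrt(36), about 6.6 inches
```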
##### General Discussion
Intern
Joined: 29 Aug 2015
Posts: 12
For a party, three solid cheese balls with diameters of 2 inches, 4 in [#permalink]
Updated on: 21 Oct 2015, 09:22
On combining the 3 cheese balls, we get a combined volume of
(4/3) π (1+8+27)= 48π.
Equating 48π with the formula for vol.of sphere
48π = (4/3)πr³
r³ = 36
So diameter = 2∛36
Option E
Originally posted by HarrishGowtham on 20 Oct 2015, 08:06.
Last edited by HarrishGowtham on 21 Oct 2015, 09:22, edited 1 time in total.
CEO
Status: GMATINSIGHT Tutor
Joined: 08 Jul 2010
Posts: 2933
Location: India
GMAT: INSIGHT
Schools: Darden '21
WE: Education (Education)
Re: For a party, three solid cheese balls with diameters of 2 inches, 4 in [#permalink]
20 Oct 2015, 09:34
HarrishGowtham wrote:
On combining the 3 cheese balls, we get a combined volume of
(4/3) π (1+8+27)= 36π.
Equating 36π with the formula for vol.of sphere
36π = (4/3)πr³
r³ = 36
So diameter = 2∛36
Option E
The answer that you have calculated is correct however the Highlighted calculation seems to be flawed.
(4/3) π (1+8+27) is NOT equal to 36π.
and 36π is NOT equal to (4/3)πr³
I am sure the idea that you used is correct and it's just some typo.
Retired Moderator
Joined: 29 Apr 2015
Posts: 837
Location: Switzerland
Concentration: Economics, Finance
Schools: LBS MIF '19
WE: Asset Management (Investment Banking)
Re: For a party, three solid cheese balls with diameters of 2 inches, 4 in [#permalink]
20 Oct 2015, 09:38
Bunuel wrote:
For a party, three solid cheese balls with diameters of 2 inches, 4 inches, and 6 inches, respectively, were combined to form a single cheese ball. What was the approximate diameter, in inches, of the new cheese ball? (The volume of a sphere is $$\frac{4}{3}\pi r^3$$, where r is the radius.)
(A) 12
(B) 16
(C) $$\sqrt[3]{16}$$
(D) $$3\sqrt[3]{8}$$
(E) $$2\sqrt[3]{36}$$
Kudos for a correct solution.
The volume of the combined cheese ball is $$\frac{4}{3}\pi*36$$. This is based on the three single $$r^3$$ from each cheese ball with radius 1, 2 and 3.
Now the new diameter will be 2*r = $$2\sqrt[3]{36}$$, where $$r^3 = 36$$
Manager
Joined: 01 Mar 2015
Posts: 74
For a party, three solid cheese balls with diameters of 2 inches, 4 in [#permalink]
21 Oct 2015, 05:09
Bunuel wrote:
For a party, three solid cheese balls with diameters of 2 inches, 4 inches, and 6 inches, respectively, were combined to form a single cheese ball. What was the approximate diameter, in inches, of the new cheese ball? (The volume of a sphere is $$\frac{4}{3}\pi r^3$$, where r is the radius.)
(A) 12
(B) 16
(C) $$\sqrt[3]{16}$$
(D) $$3\sqrt[3]{8}$$
(E) $$2\sqrt[3]{36}$$
Kudos for a correct solution.
$$\frac{4}{3}\pi r^3 = \frac{1}{6}\pi d^3$$
equating the volumes of cheese balls, we get
$$\frac{1}{6}\pi d^3 = \frac{1}{6}\pi 2^3 + \frac{1}{6}\pi 4^3 + \frac{1}{6}\pi 6^3$$
=> $$d^3 = 2^3 + 4^3 + 6^3$$
=> $$d^3 = 2^3 ( 1 + 8 + 27)$$
=> $$d= 2\sqrt[3]{36}$$
Kudos, if you like the explanation
Manager
Joined: 12 Sep 2015
Posts: 75
For a party, three solid cheese balls with diameters of 2 inches, 4 in [#permalink]
28 Oct 2015, 06:31
v12345 wrote:
Bunuel wrote:
For a party, three solid cheese balls with diameters of 2 inches, 4 inches, and 6 inches, respectively, were combined to form a single cheese ball. What was the approximate diameter, in inches, of the new cheese ball? (The volume of a sphere is $$\frac{4}{3}\pi r^3$$, where r is the radius.)
(A) 12
(B) 16
(C) $$\sqrt[3]{16}$$
(D) $$3\sqrt[3]{8}$$
(E) $$2\sqrt[3]{36}$$
Kudos for a correct solution.
$$\frac{4}{3}\pi r^3 = \frac{1}{6}\pi d^3$$
equating the volumes of cheese balls, we get
$$\frac{1}{6}\pi d^3 = \frac{1}{6}\pi 2^3 + \frac{1}{6}\pi 4^3 + \frac{1}{6}\pi 6^3$$
=> $$d^3 = 2^3 + 4^3 + 6^3$$
=> $$d^3 = 2^3 ( 1 + 8 + 27)$$
=> $$d= 2\sqrt[3]{36}$$
Kudos, if you like the explanation
How do you get $$\frac{4}{3}\pi r^3 = \frac{1}{6}\pi d^3$$ ?
Math Revolution GMAT Instructor
Joined: 16 Aug 2015
Posts: 7372
GMAT 1: 760 Q51 V42
GPA: 3.82
For a party, three solid cheese balls with diameters of 2 inches, 4 in [#permalink]
29 Oct 2015, 02:51
Forget conventional ways of solving math questions. In PS, IVY approach is the easiest and quickest way to find the answer.
For a party, three solid cheese balls with diameters of 2 inches, 4 inches, and 6 inches, respectively, were combined to form a single cheese ball. What was the approximate diameter, in inches, of the new cheese ball? (The volume of a sphere is (4/3)πr³, where r is the radius.)
(A) 12
(B) 16
(C) ∛16
(D) 3∛8
(E) 2∛36
Since the diameters are 2, 4, 6 the radii are 1, 2, 3.
So the total volume is (4/3)*π*1^3 + (4/3)*π*2^3 + (4/3)*π*3^3 = (4/3)*π*(1^3 + 2^3 + 3^3) = (4/3)*π*36.
Let the radius of the new cheese ball be r. Then the volume of the new cheese ball is (4/3)*π*r^3.
So (4/3)*π*r^3 should be (4/3)*π*36. That means r = ∛36. So the diameter is 2∛36.
Verbal Forum Moderator
Status: Greatness begins beyond your comfort zone
Joined: 08 Dec 2013
Posts: 2291
Location: India
Concentration: General Management, Strategy
Schools: Kelley '20, ISB '19
GPA: 3.2
WE: Information Technology (Consulting)
Re: For a party, three solid cheese balls with diameters of 2 inches, 4 in [#permalink]
01 Nov 2015, 05:47
Let R be the radius of the single combined cheese ball
Sum of volumes of the 3 solid cheese balls = Volume of the single combined cheese ball
=> 4/3 * pi *( 1^(3) + 2^(3) + 3^(3) ) = 4/3 * pi * R^(3)
=> R^(3) = 36
Diameter = 2R = 2(36^(1/3))
Intern
Joined: 06 Sep 2015
Posts: 2
Re: For a party, three solid cheese balls with diameters of 2 inches, 4 in [#permalink]
28 Feb 2016, 02:15
My mistake was to calculate (1+2+3)³ instead of (1³+2³+3³). Can somebody explain why exactly that is wrong?
I see the mistake but I just can't find a sound explanation.
Intern
Joined: 10 Aug 2015
Posts: 32
Location: India
GMAT 1: 700 Q48 V38
GPA: 3.5
WE: Consulting (Computer Software)
Re: For a party, three solid cheese balls with diameters of 2 inches, 4 in [#permalink]
05 May 2016, 00:24
momomo wrote:
My mistake was to calculate (1+2+3)³ instead of (1³+2³+3³) - can somebody explicate on why exactly it is wrong?
I see the mistake but I just can't find a sound explanation..
Hi, the problem with your approach is that you combined the lengths of the radii to create one large sphere. But in reality we are combining the volumes of the spheres to create a larger sphere, and volume scales with the cube of the radius, so cubing the sum of the radii overstates it.
So we have to take the sum of their volumes, not the sum of their radii.
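A quick numeric check (illustrative) of why cubing the sum differs from summing the cubes; the common factor (4/3)π is dropped on both sides:

```python
# Cubing the sum of the radii is not the same as summing the cubed radii:
(1 + 2 + 3) ** 3        # 216 -- the volume of a single ball of radius 6, too big
1**3 + 2**3 + 3**3      # 36  -- the combined volume actually available
```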
Target Test Prep Representative
Affiliations: Target Test Prep
Joined: 04 Mar 2011
Posts: 2823
Re: For a party, three solid cheese balls with diameters of 2 inches, 4 in [#permalink]
05 May 2016, 04:52
Bunuel wrote:
For a party, three solid cheese balls with diameters of 2 inches, 4 inches, and 6 inches, respectively, were combined to form a single cheese ball. What was the approximate diameter, in inches, of the new cheese ball? (The volume of a sphere is $$\frac{4}{3}\pi r^3$$, where r is the radius.)
(A) 12
(B) 16
(C) $$\sqrt[3]{16}$$
(D) $$3\sqrt[3]{8}$$
(E) $$2\sqrt[3]{36}$$
Kudos for a correct solution.
Solution:
We first need to determine the volume of each individual cheese ball. We have 3 cheese balls of diameters of 2, 4, and 6 inches, respectively. Therefore, their radii are 1, 2 and 3 inches, respectively. Now let’s calculate the volume for each cheese ball.
Volume for 2-inch diameter cheese ball
(4/3)π(1)^3 = (4/3)π
Volume for 4-inch diameter cheese ball
(4/3)π(2)^3 = (4/3)π(8) = (32/3)π
Volume for 6-inch diameter cheese ball
(4/3)π(3)^3 = (4/3)π(27) = (108/3)π
Thus, the total volume of the large cheese ball is:
(4/3)π + (32/3)π + (108/3)π = (144/3)π = 48π
We can now use the volume formula to first determine the radius, and then the diameter, of the combined cheese ball.
48π = 4/3π(r)^3
48 x 3 = 4(r)^3
144 = 4(r)^3
36 = r^3
r = (cube root)√36
Thus, the diameter = 2*(cube root)√36
Manager
Joined: 21 Sep 2015
Posts: 75
Location: India
GMAT 1: 730 Q48 V42
GMAT 2: 750 Q50 V41
Re: For a party, three solid cheese balls with diameters of 2 inches, 4 in [#permalink]
10 Jun 2016, 09:11
(1³ + 2³ + 3³) = r³
r = ∛36
d = 2r
Director
Joined: 23 Feb 2015
Posts: 927
GMAT 1: 720 Q49 V40
Re: For a party, three solid cheese balls with diameters of 2 inches, 4 in [#permalink]
17 Nov 2016, 09:57
Bunuel wrote:
For a party, three solid cheese balls with diameters of 2 inches, 4 inches, and 6 inches, respectively, were combined to form a single cheese ball. What was the approximate diameter, in inches, of the new cheese ball? (The volume of a sphere is $$\frac{4}{3}\pi r^3$$, where r is the radius.)
(A) 12
(B) 16
(C) $$\sqrt[3]{16}$$
(D) $$3\sqrt[3]{8}$$
(E) $$2\sqrt[3]{36}$$
Kudos for a correct solution.
combined volume of 3 solid cheese balls=(4/3)*πr^3
---->(4/3)*π(1^3+2^3+3^3)
----> (4/3)*π(36)
Let, the radius of new ball=R,
So, the volume of new balls=(4/3)*πR^3
So, we can write,
(4/3)*πR^3=(4/3)*π(36)
---> R^3=36
---> R=∛36
So, diameter (2*R)=2∛36
So, the correct answer is E.
Manager
Joined: 03 Oct 2013
Posts: 84
Re: For a party, three solid cheese balls with diameters of 2 inches, 4 in [#permalink]
17 Nov 2016, 10:17
Key equation: Sum of individual volumes = Total volume
4/3*pi*(r1^3+r2^3+r3^3) = 4/3*pi*(R^3)
r1, r2 and r3 are known -> Solve for R.
Be careful, that the question is diameter do answer = 2*R.
Manager
Joined: 27 Dec 2016
Posts: 232
Concentration: Marketing, Social Entrepreneurship
GPA: 3.65
WE: Marketing (Education)
Re: For a party, three solid cheese balls with diameters of 2 inches, 4 in [#permalink]
18 Jun 2017, 16:45
GMATinsight wrote:
Bunuel wrote:
For a party, three solid cheese balls with diameters of 2 inches, 4 inches, and 6 inches, respectively, were combined to form a single cheese ball. What was the approximate diameter, in inches, of the new cheese ball? (The volume of a sphere is $$\frac{4}{3}\pi r^3$$, where r is the radius.)
(A) 12
(B) 16
(C) $$\sqrt[3]{16}$$
(D) $$3\sqrt[3]{8}$$
(E) $$2\sqrt[3]{36}$$
Kudos for a correct solution.
diameters of 2 inches, 4 inches, and 6 inches
i.e. radii of 1 inch, 2 inches, and 3 inches
Sum of their volumes = $$\frac{4}{3}\pi (1^3+2^3+3^3)$$ = $$\frac{4}{3}\pi (36)$$
Volume of New Ball = $$\frac{4}{3}\pi (R^3)$$ = $$\frac{4}{3}\pi (36)$$
i.e. $$R^3 = 36$$
i.e. $$R = \sqrt[3]{36}$$
i.e. $$Diameter = 2\sqrt[3]{36}$$
Simple question: why do we calculate the sum of their volumes? In my opinion, the total volume of the NEW cheese ball must be greater than the combined volume of the three small cheese balls.
Math Expert
Joined: 02 Sep 2009
Posts: 55271
Re: For a party, three solid cheese balls with diameters of 2 inches, 4 in [#permalink]
18 Jun 2017, 21:40
septwibowo wrote:
GMATinsight wrote:
Bunuel wrote:
For a party, three solid cheese balls with diameters of 2 inches, 4 inches, and 6 inches, respectively, were combined to form a single cheese ball. What was the approximate diameter, in inches, of the new cheese ball? (The volume of a sphere is $$\frac{4}{3}\pi r^3$$, where r is the radius.)
(A) 12
(B) 16
(C) $$\sqrt[3]{16}$$
(D) $$3\sqrt[3]{8}$$
(E) $$2\sqrt[3]{36}$$
Kudos for a correct solution.
diameters of 2 inches, 4 inches, and 6 inches
i.e. radii of 1 inch, 2 inches, and 3 inches
Sum of their volumes = $$\frac{4}{3}\pi (1^3+2^3+3^3)$$ = $$\frac{4}{3}\pi (36)$$
Volume of New Ball = $$\frac{4}{3}\pi (R^3)$$ = $$\frac{4}{3}\pi (36)$$
i.e. $$R^3 = 36$$
i.e. $$R = \sqrt[3]{36}$$
i.e. $$Diameter = 2\sqrt[3]{36}$$
Simple question : why we calculate the sum of their volumes? In my opinion, total volume of NEW cheese ball must be greater than total combined volume from three small cheese balls.
This does not make sense.
How can you make a ball larger in volume than the combined volume of three balls you are making it from?
Manager
Joined: 21 Jul 2017
Posts: 190
Location: India
GMAT 1: 660 Q47 V34
GPA: 4
WE: Project Management (Education)
Re: For a party, three solid cheese balls with diameters of 2 inches, 4 in [#permalink]
18 Aug 2017, 10:53
Tip: Volume remains same when a solid is transformed from one shape to another!
Sum of their volumes = (4/3)π(1³ + 2³ + 3³) = (4/3)π(36)
Volume of new ball = (4/3)π(R³) = (4/3)π(36)
i.e. R³ = 36
i.e. R = ∛36
i.e. Diameter = 2∛36
Senior Manager
Joined: 14 Feb 2017
Posts: 277
Location: Australia
Concentration: Technology, Strategy
GMAT 1: 560 Q41 V26
GMAT 2: 550 Q43 V23
GMAT 3: 650 Q47 V33
GPA: 2.61
WE: Management Consulting (Consulting)
Re: For a party, three solid cheese balls with diameters of 2 inches, 4 in [#permalink]
28 Nov 2018, 17:14
Easiest method to understand for me was:
1. Solve individual volumes by plugging in the radii (1, 2, 3).
2. Sum those volumes.
3. Set the sum equal to (4/3)πr³ and solve for r.
4. Recall we are asked to find the diameter: diameter = 2r = 2∛36.
https://stats.stackexchange.com/questions/88421/metropolis-algorithm
# Metropolis Algorithm
I am currently doing my statistics thesis on modelling football data, which requires a solid knowledge of Bayesian theory, especially MCMC methods. However, I have some minor questions regarding the Metropolis algorithm:
• Can any distribution be used as the proposal density?
• Also, are the initial parameter values (i.e. the first step of the Metropolis algorithm) chosen at random, or must there be some thinking behind the choice of values?
Thank you
• If you're just using a metropolis algorithm, you need a symmetric proposal distribution, where "symmetric" in this context means $p(a|b) = p(b|a)$ (i.e. the probability of moving from $a$ to $b$ is the same as the probability of moving from $b$ to $a$). If you want to use an asymmetric proposal, you need to use a metropolis-hastings algorithm, which is basically the exact same thing except that the acceptance ratio is multiplied by a factor that corrects for the asymmetry of the proposal distribution.
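For concreteness, here is a minimal random-walk Metropolis sketch in Python (illustrative only; the standard-normal target, step size, and seed are arbitrary choices for the example). Because the Gaussian proposal is symmetric, the acceptance ratio needs no Hastings correction.

```python
import math
import random

def log_target(x):
    # Example target: standard normal, up to an additive constant.
    return -0.5 * x * x

def metropolis(n_steps, x0=0.0, step=1.0, seed=1):
    random.seed(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        proposal = x + random.gauss(0.0, step)     # symmetric: p(a|b) == p(b|a)
        log_alpha = log_target(proposal) - log_target(x)
        if random.random() < math.exp(min(0.0, log_alpha)):
            x = proposal                           # accept with prob min(1, alpha)
        samples.append(x)
    return samples

samples = metropolis(20000)
```

With a reasonable step size, the sample mean and variance approach those of the target; in practice one would also discard an initial burn-in, which softens the dependence on the starting value x0.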
https://homework.cpm.org/category/CCI_CT/textbook/int3/chapter/2/lesson/2.2.5/problem/2-117
### Home > INT3 > Chapter 2 > Lesson 2.2.5 > Problem2-117
2-117.
An average school bus holds $45$ people. Make a complete graph showing the relationship between the number of students who need bus transportation and the number of buses required.
Let the $x$-axis represent the number of students.
There can't be a fraction of a bus, and all students must be on a bus.
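The resulting relationship is a step function, which can be sketched as ceiling division (illustrative; the function name and the 45-seat default are just for this example):

```python
import math

def buses_needed(students, capacity=45):
    # A partially filled bus still counts as one whole bus.
    return math.ceil(students / capacity)

buses_needed(45)   # 1
buses_needed(46)   # 2 -- one extra student forces a second bus
```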
https://gimopigopabu.dynalux-id.com/surface-processing-and-laser-assisted-chemistry-book-7835wk.php
Surface processing and laser assisted chemistry
# Surface processing and laser assisted chemistry
## proceedings of Symposium E on Surface Processing and Laser Assisted Chemistry of the 1990 E-MRS Spring Conference, Strasbourg, France, 29 May - 1 June 1990
Written in English
Edition Notes
ID Numbers Statement guest editors: I.W. Boyd, E. Fogarassy and M. Stuke. Series Applied surface science -- vol. 46 Contributions Boyd, I. W., Fogarassy, E., Stuke, M., European Materials Research Society. Spring Conference Open Library OL14353148M
The laser scanning was assisted by liquid precursor media such as methanol and 1,1,2-trichlorotrifluoroethane. By altering the processing parameters, such as incident laser energy, scanning speed, and irradiation medium, various surface structures were produced on areas of 1 mm². We analyzed the dependence of the. [32] N. Patra, K. Akash, S. Shiva, R. Gagrani, H. S. P. Rao, V. R. Anirudh, I. A. Palani, Parametric investigations on the influence of nano-second Nd3+:YAG laser wavelength and fluence in synthesizing NiTi nano-particles using liquid assisted laser ablation technique, Applied Surface Science, Volume , 15 March , Pages –
INTRODUCTION TO SURFACE CHEMISTRY AND CATALYSIS. GABOR A. SOMORJAI, Department of Chemistry, University of California, Berkeley, California. A Wiley-Interscience Publication, JOHN WILEY & SONS, INC. New York • Chichester • Brisbane • Toronto • Singapore. NCERT Class XII Chemistry: Chapter 5 – Surface Chemistry. National Council of Educational Research and Training (NCERT) Book for Class XII. Subject: Chemistry; Chapter: Chapter 5 – Surface Chemistry. After studying this Unit, you will be able to describe interfacial phenomena and their significance, and to define adsorption and classify it into physical and chemical adsorption.
Prof. Narendra B. Dahotre of materials science and engineering established the Laboratory of Laser Aided Additive and Subtractive Manufacturing (LAASM) at the University of North Texas (Denton, Texas). The state-of-the-art research facility houses multiple high-power infrared laser systems. These lasers are specifically designed and configured for efficient, reliable, cost-effective, precise processing. CO₂-laser micromachining and back-end processing enable rapid production of PMMA devices, using steps such as solvent-assisted glueing, melting, laminating and surface activation with a plasma asher. A solvent-assisted thermal bonding method proved to be the most time-efficient one. Using laser micromachining together with bonding, a three-layer polymer device was produced.
Purchase Surface Processing and Laser Assisted Chemistry, Volume 18 - 1st Edition. Print Book & E-Book. Get this from a library:
Surface processing and laser assisted chemistry: proceedings of Symposium E on Surface Processing and Laser Assisted Chemistry of the E-MRS spring conference, Strasbourg, France, 29 May-1 June [Ian W Boyd; E Fogarassy; M Stuke;].
The primary aim of Handbook of Liquids-Assisted Laser Processing is to present the essentials of previous research (tabulated data of experimental conditions and results), and help researchers develop new processing and diagnostics techniques (presenting data of liquids and a review of physical phenomena associated with LALP).
Engineers can use the handbook as a practical reference. The surface chemistry of polymers can be relatively easily modified by laser processing of the surface; changing the surface chemistry of a material will likely influence biocompatibility. Laser irradiation has been shown to induce changes to the local surface chemistry which will affect how the material behaves in a given application.
This book gives an overview of the fundamentals and applications of laser-matter interactions, in particular with regard to laser material processing. Special attention is given to laser-induced physical and chemical processes at gas-solid, liquid-solid, and solid-solid interfaces. Laser processing of materials commonly used ones.
Before embarking upon reviewing the current status of laser material processing, we will discuss the physics of laser–matter interaction and classify the differ-ent types of laser processing of materials. Finally, we will present a comprehensive update.
Laser assisted fabrication involves shaping of materials using a laser as a source of heat. It can be achieved by removal of materials (laser assisted cutting, drilling, etc.), deformation (bending, extrusion), joining (welding, soldering) and addition of materials (surface cladding or direct laser deposition).
Conventional laser processing can be performed, at least in principle, in an inert atmosphere. It includes scribing, cutting, drilling, and welding, but also annealing, recrystallization, and glazing.
Laser chemical processing, in contrast, is characterized by an overall change in the chemical composition of the material or of its surface.
Purchase Beam Processing and Laser Chemistry, Volume 12 - 1st Edition. Print Book & E-Book. Special attention is given to the latest developments in the use of ion, electron and photon beams, and on laser-assisted process chemistry.
Thin film and surface and interface topics are also treated. In Laser Processing and Chemistry, laser cutting or surface structuring can be applied; typically, ablation by ultrashort laser pulses reduces the heat-affected zones significantly.
Laser Processing and Chemistry gives an overview of the fundamentals and applications of laser-matter interactions, in particular with regard to laser material processing. Special attention is given to laser-induced physical and chemical processes at gas-solid, liquid-solid, and solid-solid interfaces.
Starting with the background physics, the book proceeds to examine applications of laser processing. Surface engineering is the sub-discipline of materials science which deals with the surface of solid matter.
It has applications to chemistry, mechanical engineering, and electrical engineering (particularly in relation to semiconductor manufacturing). Solids are composed of a bulk material covered by a surface.
The surface which bounds the bulk material is called the Surface phase. Hence, it is necessary to vary the surface properties of the material to enhance the wettability and bioactivity.
In the present contribution, the surface characteristics and properties of nylon 6,6 modified by $$\hbox{CO}_{2}$$ laser processing have been presented.
Driven by functionality and purity demand for applications of inorganic nanoparticle colloids in optics, biology, and energy, their surface chemistry has become a topic of intensive research interest.
Consequently, ligand-free colloids are ideal reference materials for evaluating the effects of surface adsorbates from the initial state for application-oriented nanointegration.
This research area aims to develop a single-step laser direct writing technique to achieve designable, highly efficient 3D micro/nanofabrication.
The new technique will combine the additive two-photon polymerization (TPP) and the subtractive multi-photon ablation (MPA) into a single-step fabrication process to achieve the comprehensive 3D micro/nanofabrication. Surface-assisted laser desorption ionization mass spectrometry techniques for application in forensics.
Guinan T(1), Kirkbride P(2), Pigou PE(2), Ronci M(1), Kobus H(2), Voelcker NH(1). Author information: (1) Mawson Institute, University of South Australia, Mawson Lakes, South Australia.
This book describes the basic mechanisms, theory, simulations and technological aspects of laser processing techniques.
It covers the principles of laser quenching, welding, cutting, alloying, selective sintering, ablation, etc. The main attention is paid to the quantitative description.
ME Laser Material Processing (instructor: Ramesh Singh), on process response: the cut can have a very narrow kerf width, giving a substantial saving in material.
(Kerf is the width of the cut opening.) The cut edges can be square and not rounded, as they are with most other cutting processes. Edited by key figures in 3D integration and written by top authors from high-tech companies and renowned research institutions, this book covers the intricate details of 3D process technology.
As such, the main focus is on silicon via formation, bonding and debonding, thinning, via reveal and backside processing, both from a technological and a materials science perspective.
Provides an in-depth understanding of the fundamentals of a wide range of state-of-the-art materials manufacturing processes. Modern manufacturing is at the core of industrial production from base materials to semi-finished goods and final products.
Over the last decade, a variety of innovative methods have been developed that allow for manufacturing processes that are more versatile.
The papers in this volume cover all aspects of laser assisted surface processing, ranging from the preparation of high-Tc superconducting layer structures to industrial laser applications for device fabrication.
The topics presented give recent results in organometallic chemistry and laser photochemistry, and novel surface characterization techniques.
Gold nanoparticle (AuNP)-assisted laser desorption ionization mass spectrometry (LDI-MS) emerged as an effective technique for the detection of analytes with high sensitivity.
The surface chemistry and the size of AuNPs are the crucial parameters for lowering the detection limits and increasing the selectivity of LDI-MS. Here we show that chemical-free, size-selected AuNPs can be obtained by laser ablation.
|
2021-05-08 05:04:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25630080699920654, "perplexity": 6946.672820454967}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988837.67/warc/CC-MAIN-20210508031423-20210508061423-00419.warc.gz"}
|
https://www.jobilize.com/algebra/section/extensions-the-parabola-by-openstax?qcr=www.quizover.com
|
# 8.3 The parabola (Page 7/11)
${\left(y+4\right)}^{2}=16\left(x+4\right)$
${y}^{2}+12x-6y+21=0$
${\left(y-3\right)}^{2}=-12\left(x+1\right),V:\left(-1,3\right);F:\left(-4,3\right);d:x=2$
${x}^{2}-4x-24y+28=0$
$5{x}^{2}-50x-4y+113=0$
${\left(x-5\right)}^{2}=\frac{4}{5}\left(y+3\right),V:\left(5,-3\right);F:\left(5,-\frac{14}{5}\right);d:y=-\frac{16}{5}$
${y}^{2}-24x+4y-68=0$
${x}^{2}-4x+2y-6=0$
${\left(x-2\right)}^{2}=-2\left(y-5\right),V:\left(2,5\right);F:\left(2,\frac{9}{2}\right);d:y=\frac{11}{2}$
${y}^{2}-6y+12x-3=0$
$3{y}^{2}-4x-6y+23=0$
${\left(y-1\right)}^{2}=\frac{4}{3}\left(x-5\right),V:\left(5,1\right);F:\left(\frac{16}{3},1\right);d:x=\frac{14}{3}$
${x}^{2}+4x+8y-4=0$
## Graphical
For the following exercises, graph the parabola, labeling the focus and the directrix.
$x=\frac{1}{8}{y}^{2}$
$y=36{x}^{2}$
$y=\frac{1}{36}{x}^{2}$
$y=-9{x}^{2}$
${\left(y-2\right)}^{2}=-\frac{4}{3}\left(x+2\right)$
$-5{\left(x+5\right)}^{2}=4\left(y+5\right)$
$-6{\left(y+5\right)}^{2}=4\left(x-4\right)$
${y}^{2}-6y-8x+1=0$
${x}^{2}+8x+4y+20=0$
$3{x}^{2}+30x-4y+95=0$
${y}^{2}-8x+10y+9=0$
${x}^{2}+4x+2y+2=0$
${y}^{2}+2y-12x+61=0$
$-2{x}^{2}+8x-4y-24=0$
For the following exercises, find the equation of the parabola given information about its graph.
Vertex is $\text{\hspace{0.17em}}\left(0,0\right);$ directrix is $\text{\hspace{0.17em}}y=4,$ focus is $\text{\hspace{0.17em}}\left(0,-4\right).$
${x}^{2}=-16y$
Vertex is $\text{\hspace{0.17em}}\left(0,0\right);\text{\hspace{0.17em}}$ directrix is $\text{\hspace{0.17em}}x=4,$ focus is $\text{\hspace{0.17em}}\left(-4,0\right).$
Vertex is $\text{\hspace{0.17em}}\left(2,2\right);\text{\hspace{0.17em}}$ directrix is $\text{\hspace{0.17em}}x=2-\sqrt{2},$ focus is $\text{\hspace{0.17em}}\left(2+\sqrt{2},2\right).$
${\left(y-2\right)}^{2}=4\sqrt{2}\left(x-2\right)$
Vertex is $\text{\hspace{0.17em}}\left(-2,3\right);\text{\hspace{0.17em}}$ directrix is $\text{\hspace{0.17em}}x=-\frac{7}{2},$ focus is $\text{\hspace{0.17em}}\left(-\frac{1}{2},3\right).$
Vertex is $\text{\hspace{0.17em}}\left(\sqrt{2},-\sqrt{3}\right);$ directrix is $\text{\hspace{0.17em}}x=2\sqrt{2},$ focus is $\text{\hspace{0.17em}}\left(0,-\sqrt{3}\right).$
${\left(y+\sqrt{3}\right)}^{2}=-4\sqrt{2}\left(x-\sqrt{2}\right)$
Vertex is $\text{\hspace{0.17em}}\left(1,2\right);\text{\hspace{0.17em}}$ directrix is $\text{\hspace{0.17em}}y=\frac{11}{3},$ focus is $\text{\hspace{0.17em}}\left(1,\frac{1}{3}\right).$
For the following exercises, determine the equation for the parabola from its graph.
${x}^{2}=y$
${\left(y-2\right)}^{2}=\frac{1}{4}\left(x+2\right)$
${\left(y-\sqrt{3}\right)}^{2}=4\sqrt{5}\left(x+\sqrt{2}\right)$
## Extensions
For the following exercises, the vertex and endpoints of the latus rectum of a parabola are given. Find the equation.
${y}^{2}=-8x$
${\left(y+1\right)}^{2}=12\left(x+3\right)$
## Real-world applications
The mirror in an automobile headlight has a parabolic cross-section with the light bulb at the focus. On a schematic, the equation of the parabola is given as $\text{\hspace{0.17em}}{x}^{2}=4y.\text{\hspace{0.17em}}$ At what coordinates should you place the light bulb?
$\left(0,1\right)$
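The bulb position can be sanity-checked numerically: for a parabola ${x}^{2}=4py$ with vertex at the origin, the focus sits at $\left(0,p\right).$ A minimal Python sketch (the helper name is illustrative):

```python
def focus_of_parabola(four_p):
    """Focus of x^2 = four_p * y (vertex at the origin) lies at (0, four_p / 4)."""
    return (0.0, four_p / 4)

# Headlight schematic: x^2 = 4y, so 4p = 4 and the bulb goes at (0, 1).
print(focus_of_parabola(4))
```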
If we want to construct the mirror from the previous exercise such that the focus is located at $\text{\hspace{0.17em}}\left(0,0.25\right),$ what should the equation of the parabola be?
A satellite dish is shaped like a paraboloid of revolution. This means that it can be formed by rotating a parabola around its axis of symmetry. The receiver is to be located at the focus. If the dish is 12 feet across at its opening and 4 feet deep at its center, where should the receiver be placed?
At the point 2.25 feet above the vertex.
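The receiver height follows from fitting ${x}^{2}=4py$ through the rim point $\left(6,4\right);$ a short Python check (helper name is illustrative):

```python
def receiver_height(width_ft, depth_ft):
    """Solve x^2 = 4*p*y for p using the rim point (width/2, depth)."""
    half_width = width_ft / 2
    return half_width ** 2 / (4 * depth_ft)

# Dish 12 ft across and 4 ft deep: p = 36 / 16 = 2.25 ft above the vertex.
print(receiver_height(12, 4))
```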
Consider the satellite dish from the previous exercise. If the dish is 8 feet across at the opening and 2 feet deep, where should we place the receiver?
A searchlight is shaped like a paraboloid of revolution. A light source is located 1 foot from the base along the axis of symmetry. If the opening of the searchlight is 3 feet across, find the depth.
0.5625 feet
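The depth comes from evaluating ${x}^{2}=4py$ at the rim $x=1.5$ with $p=1;$ a quick Python check (helper name is illustrative):

```python
def searchlight_depth(opening_ft, focal_length_ft):
    """Depth y of the paraboloid x^2 = 4*p*y at the rim x = opening/2."""
    x = opening_ft / 2
    return x ** 2 / (4 * focal_length_ft)

# Opening 3 ft, light source 1 ft from the base: depth = 1.5^2 / 4 = 0.5625 ft.
print(searchlight_depth(3, 1))
```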
If the searchlight from the previous exercise has the light source located 6 inches from the base along the axis of symmetry and the opening is 4 feet, find the depth.
An arch is in the shape of a parabola. It has a span of 100 feet and a maximum height of 20 feet. Find the equation of the parabola, and determine the height of the arch 40 feet from the center.
${x}^{2}=-125\left(y-20\right),$ height is 7.2 feet
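The arch equation and the quoted height can be reproduced in a few lines of Python (names are illustrative): the half-span point $\left(50,0\right)$ fixes the coefficient of the downward-opening parabola.

```python
def arch_height(x_ft, span_ft=100, max_height_ft=20):
    """Height of a parabolic arch x^2 = -k*(y - h), where the point
    (span/2, 0) fixes k = span^2 / (4*h)."""
    k = span_ft ** 2 / (4 * max_height_ft)  # 125 for this arch
    return max_height_ft - x_ft ** 2 / k

# 40 ft from the center: 20 - 1600/125 = 7.2 ft.
print(arch_height(40))
```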
If the arch from the previous exercise has a span of 160 feet and a maximum height of 40 feet, find the equation of the parabola, and determine the distance from the center at which the height is 20 feet.
An object is projected so as to follow a parabolic path given by $\text{\hspace{0.17em}}y=-{x}^{2}+96x,$ where $\text{\hspace{0.17em}}x\text{\hspace{0.17em}}$ is the horizontal distance traveled in feet and $\text{\hspace{0.17em}}y\text{\hspace{0.17em}}$ is the height. Determine the maximum height the object reaches.
2304 feet
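The maximum occurs at the vertex $x=-\frac{b}{2a};$ a short Python check (name is illustrative):

```python
def max_height(a, b):
    """Vertex height of the parabolic path y = a*x^2 + b*x."""
    x_vertex = -b / (2 * a)
    return a * x_vertex ** 2 + b * x_vertex

# y = -x^2 + 96x peaks at x = 48, giving a height of 2304 ft.
print(max_height(-1, 96))
```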
For the object from the previous exercise, assume the path followed is given by $\text{\hspace{0.17em}}y=-0.5{x}^{2}+80x.\text{\hspace{0.17em}}$ Determine how far along the horizontal the object traveled to reach maximum height.
show that the set of all natural numbers forms a semigroup under the composition of addition
what is the meaning
Dominic
explain and give four Example hyperbolic function
_3_2_1
felecia
⅗ ⅔½
felecia
_½+⅔-¾
felecia
The denominator of a certain fraction is 9 more than the numerator. If 6 is added to both terms of the fraction, the value of the fraction becomes 2/3. Find the original fraction. 2. The sum of the least and greatest of 3 consecutive integers is 60. What are the valu
1. (x + 6)/(x + 9 + 6) = 2/3, i.e. (x + 6)/(x + 15) = 2/3 (cross multiply): 3(x + 6) = 2(x + 15), 3x + 18 = 2x + 30 (−2x from both), x + 18 = 30 (−18 from both), x = 12. Test: (12 + 6)/(12 + 9 + 6) = 18/27 = 2/3
Pawel
2. (x) + (x + 2) = 60 2x + 2 = 60 2x = 58 x = 29 29, 30, & 31
Pawel
ok
Ifeanyi
on number 2 question How did you got 2x +2
Ifeanyi
combine like terms. x + x + 2 is same as 2x + 2
Pawel
x*x=2
felecia
2+2x=
felecia
×/×+9+6/1
Debbie
Q2 x+(x+2)+(x+4)=60 3x+6=60 3x+6-6=60-6 3x=54 3x/3=54/3 x=18 :. The numbers are 18,20 and 22
Naagmenkoma
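Pawel's answer to the fraction problem can be verified with Python's exact-arithmetic `fractions` module (variable names are illustrative):

```python
from fractions import Fraction

x = 12                                 # Pawel's solution for the numerator
original = Fraction(x, x + 9)          # denominator is 9 more; 12/21 reduces to 4/7
shifted = Fraction(x + 6, x + 9 + 6)   # add 6 to numerator and denominator

print(original, shifted)               # the shifted fraction equals 2/3
```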
Mark and Don are planning to sell each of their marble collections at a garage sale. If Don has 1 more than 3 times the number of marbles Mark has, how many does each boy have to sell if the total number of marbles is 113?
Mark = x,. Don = 3x + 1 x + 3x + 1 = 113 4x = 112, x = 28 Mark = 28, Don = 85, 28 + 85 = 113
Pawel
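A quick Python check of the marble answer:

```python
mark = 28
don = 3 * mark + 1  # Don has 1 more than 3 times Mark's marbles

# Together the boys should have 113 marbles to sell.
print(mark, don, mark + don)
```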
how do I set up the problem?
what is a solution set?
Harshika
find the subring of gaussian integers?
Rofiqul
hello, I am happy to help!
Abdullahi
hi mam
Mark
find the value of 2x=32
divide by 2 on each side of the equal sign to solve for x
corri
X=16
Michael
Want to review on complex number 1.What are complex number 2.How to solve complex number problems.
Beyan
yes i wantt to review
Mark
16
Makan
x=16
Makan
use the y -intercept and slope to sketch the graph of the equation y=6x
how do we prove the quadratic formular
Darius
hello, if you have a question about Algebra 2. I may be able to help. I am an Algebra 2 Teacher
thank you help me with how to prove the quadratic equation
Seidu
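One standard proof is completing the square; a sketch, assuming $a\ne 0:$

$a{x}^{2}+bx+c=0$

${x}^{2}+\frac{b}{a}x=-\frac{c}{a}$

${x}^{2}+\frac{b}{a}x+\frac{{b}^{2}}{4{a}^{2}}=\frac{{b}^{2}-4ac}{4{a}^{2}}$

${\left(x+\frac{b}{2a}\right)}^{2}=\frac{{b}^{2}-4ac}{4{a}^{2}}$

$x+\frac{b}{2a}=\pm \frac{\sqrt{{b}^{2}-4ac}}{2a}$

$x=\frac{-b\pm \sqrt{{b}^{2}-4ac}}{2a}$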
may God blessed u for that. Please I want u to help me in sets.
Opoku
what is math number
4
Trista
x-2y+3z=-3 2x-y+z=7 -x+3y-z=6
can you teacch how to solve that🙏
Mark
Solve for the first variable in one of the equations, then substitute the result into the other equations. Point form: (61/11, 41/11, −4/11). Equation form: x = 61/11, y = 41/11, z = −4/11
Brenna
(61/11,41/11,−4/11)
Brenna
x = 61/11, y = 41/11, z = −4/11
Brenna
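Brenna's solution can be reproduced with NumPy's linear solver (a sketch; the array layout mirrors the three equations):

```python
import numpy as np

# Coefficient matrix and right-hand side for:
#    x - 2y + 3z = -3
#   2x -  y +  z =  7
#   -x + 3y -  z =  6
A = np.array([[1.0, -2.0, 3.0],
              [2.0, -1.0, 1.0],
              [-1.0, 3.0, -1.0]])
b = np.array([-3.0, 7.0, 6.0])

x, y, z = np.linalg.solve(A, b)
print(x, y, z)  # close to 61/11, 41/11, -4/11
```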
Need help solving this problem (2/7)^-2
x+2y-z=7
Sidiki
what is the coefficient of -4x
-1
Shedrak
|
2021-05-18 02:25:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 61, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5121159553527832, "perplexity": 630.8957504650292}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991650.73/warc/CC-MAIN-20210518002309-20210518032309-00063.warc.gz"}
|