| url (string, 14–2.42k chars) | text (string, 100–1.02M chars) | date (string, 19 chars) | metadata (string, 1.06k–1.1k chars) |
|---|---|---|---|
http://www.patricioginelsa.com/lib/international-trade-statistics-yearbook-2008-trade-by-country-volume-i
|
Format: Hardcover
Language: English
Format: PDF / Kindle / ePub
Size: 12.93 MB
In this activity you will collect data and then perform statistical analyses to determine measures of central tendency and variation of the data. The program has a full-time and part-time option for domestic students. This textbook is available for free as a PDF on GitHub. Lang, Marginal modelling of categorical data from crossover experiments, 63--77; B. Khelifi, PariTOP: A goal programming-based software for real estate assessment, European Journal of Operational Research, 133, 362--376, 2001.
Pages: 428
Publisher: United Nations; 1 edition (April 22, 2010)
ISBN: 9211615291
MDR's School Directory Texas 2015-2016: A State Guide to K-12 Districts, Dioceses, and Schools...
Regression assumes that the errors are normally distributed, while the t statistics (and their p-values) are used to test hypotheses about the slope and intercept. The ordinary least squares (OLS) method yields regression coefficients for the slope (b1) and intercept (b0) that minimize the sum of squared residuals. Jd is a Past Chair of ASQ's Quality Management Division (QMD), a 24,000-member global professional organization. "Applications of Engineering Statistics in Manufacturing: Challenges and Opportunities". Introduction to principles of Bayesian and non-Bayesian statistical inference. Hypothesis testing and parameter estimation, sufficient statistics; exponential families. Kimber, Exploratory data analysis for possibly censored data from skewed distributions, 21--30; N. Logothetis, Box--Cox transformations and the Taguchi method, 31--48; B. Darvell, A weighting rule and an end point correction for moments of truncated distributions, 49--54; Helmut Schneider, Bin-Shan Lin and Colm O'Cinneide, Comparison of nonparametric estimators for the renewal function, 55--61; Niels Keiding, Per Kragh Andersen and Kirsten Frederiksen, Modelling excess mortality of the unemployed: choice of scale and extra-Poisson variability, 63--74.
Americans at Play: Demographics of Outdoor Recreation & Travel
Census 1971, England and Wales, county report
Basic Statistics for Business and Economics W/Student CD and PowerWeb
For Ethnography
2000 Census of Population and Housing, Virginia, Summary Population and Housing Characteristics
Statistical Design for Research (Wiley Classics Library)
Nonparametric Statistics for Applied Research
Criminal Statistics, England and Wales 2001: Statistics Relating to Criminal Proceedings for the Year (Command Paper)
Florida State Trends in Perspective, 3rd Edition
Statistics for Lawyers (Statistics for Social and Behavioral Sciences)
Culturegrams: The Nations Around Us
Understanding Crime Data (Study Guides)
A First Course in Structural Equation Modeling
Mdr's North Carolina School Directory 2004-2005: Spiral Edition (Mdr's School Directory North Carolina)
Discovering Research Methods in Psychology: A Student's Guide
Frontiers in Statistical Quality Control (No. 4)
Social Trends (39th Edition)
The lead author and the editors who handled this paper didn't have the necessary statistical expertise, which led to major consequences and cancelled clinical trials. Similarly, two economists, Reinhart and Rogoff, published a paper claiming that GDP growth was slowed by high government debt. Later it was discovered that there was an error in an Excel spreadsheet they used to perform the analysis. Candidates may receive transfer credit for up to 50% of the course for prior study if they can demonstrate that such study was completed at a recognised higher education institution within the last 10 years at the postgraduate level. This page was last modified on 23 October 2015, at 20:14. The 72nd Deming Conference will be held on December 5-9, 2016 at Atlantic City, NJ. There will be two parallel half-day tutorial sessions based on recently published books for the first three days, for a total of 12 tutorial sessions (December 5, 6, 7). Silver (eds.), Maximum Entropy and Bayesian Methods, Kluwer Academic, Dordrecht, 1996; Turlach, Xplore: An Interactive Statistical Computing Environment, Springer, Berlin, 1995; Harrell, F., Regression Modeling Strategies: With Applications to Linear Models, Logistic Regression, and Survival Analysis, Springer Verlag, 2001. Students will need to provide course descriptions and syllabi whenever possible. A minimum grade of "B" must have been received in the course and the course work must be no more than five years old. Expectations for satisfactory graduate-level performance are detailed in the Academic Policies section of this catalog. He is the Director of the Smart Manufacturing and Lean Systems Research Group.
Dr Ali has done research projects with Chrysler, Ford, DTE Energy, Delphi Automotive System, GE Medical Systems, Harley-Davidson Motor Company, International Truck and Engine Corporation (ITEC), National/Panasonic Electronics, New Center Stamping, Rockwell Automation, and Whelan Co. Runyon and A. Haber, 180--180; W. Ray, Book Reviews: Analysing Time Series: Proceedings of the International Conference Held in Guernsey, Channel Islands, in October 1979, by O. Anderson, 180--181; Lyman L. Davis and Betty Laby, Statistical Algorithms: Algorithm AS 161: Critical Regions of an Unconditional Non-Randomized Test of Homogeneity in $2 \times 2$ Contingency Tables, 182--189. For each example and exercise, there is a four-step process. Usually dispatched within 3 to 5 business days. Core conceptual areas include demand forecasting and management, synchronization of supply and demand, inventory capacity, balancing and positioning, inventory planning, sales and operations planning, and strategic order fulfillment issues. This course introduces modern and practical methods for operations planning and decision making. Short-term forecasting of demand, personnel requirements, costs and revenues, raw material needs, and desired inventory levels are some of the topics included.
|
2017-08-18 22:10:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2773100733757019, "perplexity": 6108.613011554116}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105187.53/warc/CC-MAIN-20170818213959-20170818233959-00580.warc.gz"}
|
https://cs.stanford.edu/people/karpathy/visml/ising_example.html
|
back to chapters
# Ising model
## Model description
In its simplest form, the Ising model consists of an $N \times N$ lattice of binary variables $x_i \in \{-1,+1\}$ that are locally connected horizontally and vertically with pairwise potentials. There can also be an external field applied to the variables that biases them toward a particular state. The total energy of the simple Ising model we consider here is defined as $$E = - J \sum_{(i,j) \in \mathcal{E}} x_i x_j - J_b \sum_{i \in V} b_i x_i$$ where the first sum runs over all edges $\mathcal{E}$ of the lattice and the second over all nodes $V$. Here $J$, $J_b$, and $b_i$ are the strength of the pairwise interactions, the strength of the external field, and the per-node desired binary values, respectively. The corresponding un-normalized probability distribution over states of the lattice is: $$\pi(x) = \exp\Big\{ J \sum_{(i,j) \in \mathcal{E}} x_i x_j + J_b \sum_{i \in V} b_i x_i \Big\}$$
## Sampling from the model
Samples can be drawn from this distribution with a simple Markov chain Monte Carlo scheme: given a sample $x$, produce a candidate new sample $x'$ by flipping a single variable ($x_i' = -x_i$). Next, compute the acceptance probability: $$\alpha(x'|x) = \min\left(1, \frac{\pi(x')}{\pi(x)}\right)$$ and let the next sample be $x'$ with probability $\alpha(x'|x)$, or repeat $x$ otherwise. (Strictly speaking, this flip-and-accept rule is the Metropolis algorithm; a true Gibbs sampler would resample each variable directly from its conditional distribution given its neighbors.) Clearly, if $\pi(x') > \pi(x)$, the state will transition to $x'$ with certainty. However, if $\pi(x') < \pi(x)$, the candidate will only be accepted with a probability that reflects how much worse it is.
The sampling process is illustrated in the interactive demo on the original page, where the potential strengths $J, J_b$ can be manipulated. When $J$ is positive, the low-energy states are smooth regions, as this minimizes the number of edges that connect nodes of different values. When $J$ is negative, the reverse happens: the model assigns higher probability to states with many edges whose endpoints disagree.
## Applications
This example is a special case of an Ising model, which is a special case of a pairwise Markov random field, which is a special case of a Markov random field (phew). These models are often used to "clean up" a set of raw, noisy measurements in various applications by incorporating more global knowledge, usually in the form of soft smoothness constraints between neighboring measurements.
## Pseudocode
// naive single-flip sampler for the Ising model
x = randomState()
while true:
    // calculate probability of this state and a proposal
    px = pi(x)   // pi is the un-normalized probability as defined above
    xnew = flipOneBit(x)
    pnew = pi(xnew)
    // calculate transition probability alpha
    transitionProbability = min(1, pnew/px)
    if uniformRandom(0,1) < transitionProbability:
        x = xnew   // transition to x'!
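The pseudocode above can be made concrete. Below is a minimal sketch in Python (not from the original page); the lattice size, `J`, `Jb`, the field `b`, and the sweep count are illustrative choices, and the acceptance ratio is computed from the local energy change rather than the full $\pi(x)$.

```python
import math
import random

def ising_sample(N=16, J=1.0, Jb=0.5, b=None, sweeps=200, seed=0):
    """Single-flip Metropolis sampler for the N x N Ising model above.

    All parameter defaults are illustrative. b holds the per-node desired
    values b_i (default: all +1).
    """
    rng = random.Random(seed)
    if b is None:
        b = [[1.0] * N for _ in range(N)]
    # random initial state, x_i in {-1, +1}
    x = [[rng.choice((-1, 1)) for _ in range(N)] for _ in range(N)]
    for _ in range(sweeps * N * N):
        i, j = rng.randrange(N), rng.randrange(N)
        # sum over the 4-neighborhood (free boundary conditions)
        s = 0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < N and 0 <= nj < N:
                s += x[ni][nj]
        # energy change from flipping x[i][j]; only local terms of E change
        dE = 2 * x[i][j] * (J * s + Jb * b[i][j])
        # Metropolis rule: alpha = min(1, pi(x')/pi(x)) = min(1, exp(-dE))
        if dE <= 0 or rng.random() < math.exp(-dE):
            x[i][j] = -x[i][j]
    return x
```

Because only the terms of $E$ involving the flipped variable change, the ratio $\pi(x')/\pi(x) = e^{-\Delta E}$ costs O(1) per proposal instead of a full pass over the lattice.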
|
2018-12-17 01:08:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8672309517860413, "perplexity": 479.9877747239161}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828018.77/warc/CC-MAIN-20181216234902-20181217020902-00461.warc.gz"}
|
http://www.purplemath.com/learning/viewtopic.php?p=4930
|
## Derivative of cubed root of x with limits
Limits, differentiation, related rates, integration, trig integrals, etc.
Andromeda
Posts: 11
Joined: Tue Oct 12, 2010 1:56 am
Contact:
### Derivative of cubed root of x with limits
I am having trouble finding the derivative of cubed root of x using only limits. I know it is easy to use chain rule but I cannot use that yet.
stapel_eliz
Posts: 1628
Joined: Mon Dec 08, 2008 4:22 pm
Contact:
Andromeda wrote:I am having trouble finding the derivative of cubed root of x using only limits.
What expression, exactly, are you working with (the cube of the square root? the cube of the fifth root? something else?)? What have you tried so far?
Andromeda
Posts: 11
Joined: Tue Oct 12, 2010 1:56 am
Contact:
### Re: Derivative of cubed root of x with limits
I'm sorry, I meant the cube root of x, as in cuberoot(x).
So far I tried finding the derivative with a limit as h approaches 0
lim h-->0[(cuberoot(a+h) - cuberoot(a)) / ((a+h) - a)]
With this, I always end up with an indeterminate form.
Martingale
Posts: 333
Joined: Mon Mar 30, 2009 1:30 pm
Location: USA
Contact:
### Re: Derivative of cubed root of x with limits
Andromeda wrote:I'm sorry, I meant the cube root of x. as in cuberoot(x).
So far a tried finding the derivative with a limit as h approaches 0
lim h-->0[(cuberoot(a+h) - cuberoot(a)) / ((a+h) - a)]
With this, I always end up with an indeterminate form.
$\lim_{h\to0}\frac{\sqrt[3]{a+h}-\sqrt[3]{a}}{h}$
use the fact that
$x^3-y^3=(x-y)(x^2+xy+y^2)$
Andromeda
Posts: 11
Joined: Tue Oct 12, 2010 1:56 am
Contact:
### Re: Derivative of cubed root of x with limits
Martingale wrote:
Andromeda wrote:I'm sorry, I meant the cube root of x. as in cuberoot(x).
So far a tried finding the derivative with a limit as h approaches 0
lim h-->0[(cuberoot(a+h) - cuberoot(a)) / ((a+h) - a)]
With this, I always end up with an indeterminate form.
$\lim_{h\to0}\frac{\sqrt[3]{a+h}-\sqrt[3]{a}}{h}$
use the fact that
$x^3-y^3=(x-y)(x^2+xy+y^2)$
How would that work since $\sqrt[3]{a+h}$ and $\sqrt[3]{a}$ are not cubed numbers?
Martingale
Posts: 333
Joined: Mon Mar 30, 2009 1:30 pm
Location: USA
Contact:
### Re: Derivative of cubed root of x with limits
Andromeda wrote:
Martingale wrote:
Andromeda wrote:I'm sorry, I meant the cube root of x. as in cuberoot(x).
So far a tried finding the derivative with a limit as h approaches 0
lim h-->0[(cuberoot(a+h) - cuberoot(a)) / ((a+h) - a)]
With this, I always end up with an indeterminate form.
$\lim_{h\to0}\frac{\sqrt[3]{a+h}-\sqrt[3]{a}}{h}$
use the fact that
$x^3-y^3=(x-y)(x^2+xy+y^2)$
How would that work since $\sqrt[3]{a+h}$ and $\sqrt[3]{a}$ are not cubed numbers?
let $x=\sqrt[3]{a+h}$ and $y=\sqrt[3]{a}$
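Carrying the hint through to the end, a worked sketch of the remaining algebra:

```latex
% Let x = \sqrt[3]{a+h} and y = \sqrt[3]{a}, so that x^3 - y^3 = (a+h) - a = h.
\lim_{h\to 0}\frac{\sqrt[3]{a+h}-\sqrt[3]{a}}{h}
  = \lim_{h\to 0}\frac{x-y}{x^3-y^3}
  = \lim_{h\to 0}\frac{1}{x^2+xy+y^2}
  = \frac{1}{3a^{2/3}}
% As h -> 0, x -> cuberoot(a) = y, so each of x^2, xy, y^2 tends to a^{2/3}.
```

This agrees with the power rule: $\frac{d}{dx} x^{1/3} = \frac{1}{3}x^{-2/3}$ (valid for $a \neq 0$).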
|
2016-10-27 12:54:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 12, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9569522142410278, "perplexity": 2219.7830918660047}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721278.88/warc/CC-MAIN-20161020183841-00306-ip-10-171-6-4.ec2.internal.warc.gz"}
|
https://www.datasciencecentral.com/implementing-linear-regression-with-golang/
|
# Implementing Linear Regression with Golang
Did you ever wonder why linear regression plays an important role in statistics and machine learning? It is one of the most commonly used and best-understood algorithms.
Regression is a statistical method for estimating relationships among variables. Linear regression is one of the most popular and simplest regression techniques, and a very good way to understand your data. Note that regression techniques are not 100% accurate, even if you use higher-order (nonlinear) polynomials. The key with regression, as with most machine learning techniques, is to find a good-enough model, not a perfect one.
This article is an excerpt from the book Mastering Go – Second Edition by Mihalis Tsoukalos. Mihalis runs through the nuances of Go with deep guides to types and structures, packages, concurrency, network programming, and compiler design. In this article, we will look at building machine learning systems in Go, from simple statistical regression to complex neural networks.
The idea behind linear regression is simple: you are trying to model your data using a first-degree equation. A first-degree equation can be represented as y = a x + b.
There exist many methods for finding the first-degree equation that best models your data – all of these techniques calculate a and b.
Linear regression
The Go code of this section will be saved in regression.go, which is going to be presented in three parts. The output of the program will be two floating-point numbers that define a and b in the first-degree equation.
The first part of regression.go contains the following code:
package main

import (
    "encoding/csv"
    "flag"
    "fmt"
    "os"
    "strconv"

    "gonum.org/v1/gonum/stat"
)

type xy struct {
    x []float64
    y []float64
}
The xy structure is used to hold the data and should change according to your data format and values.
The second part of regression.go is as follows:
func main() {
    flag.Parse()
    if len(flag.Args()) == 0 {
        fmt.Printf("usage: regression filename\n")
        return
    }
    filename := flag.Args()[0]
    file, err := os.Open(filename)
    if err != nil {
        fmt.Println(err)
        return
    }
    defer file.Close()
    r := csv.NewReader(file)
    records, err := r.ReadAll()
    if err != nil {
        fmt.Println(err)
        return
    }
    size := len(records)
    data := xy{
        x: make([]float64, size),
        y: make([]float64, size),
    }
The last part of regression.go is as follows:
    for i, v := range records {
        if len(v) != 2 {
            fmt.Println("Expected two elements")
            continue
        }
        if s, err := strconv.ParseFloat(v[0], 64); err == nil {
            data.y[i] = s
        }
        if s, err := strconv.ParseFloat(v[1], 64); err == nil {
            data.x[i] = s
        }
    }
    b, a := stat.LinearRegression(data.x, data.y, nil, false)
    fmt.Printf("%.4v x + %.4v\n", a, b)
    fmt.Printf("a = %.4v b = %.4v\n", a, b)
}
The data from the data file is read into the data variable. The function that implements the linear regression is stat.LinearRegression() and it returns two numbers, which are b and a, in that particular order.
At this point, it would be a good time to download the gonum package:
$ go get -u gonum.org/v1/gonum/stat
Executing regression.go with the input data stored in reg_data.txt will generate the following output:
$ go run regression.go reg_data.txt
0.9463 x + -0.3985
a = 0.9463 b = -0.3985
The two numbers returned are a and b from the y = a x + b formula.
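As a quick cross-check of these numbers, here is a small Python sketch (separate from the Go program) that applies the closed-form OLS formulas to the same data; recall that regression.go reads each CSV line as y,x:

```python
# Closed-form ordinary least squares on the reg_data.txt points.
# regression.go parses each "v0,v1" line as y = v0, x = v1.
rows = [(1, 2), (3, 4.0), (2.1, 3), (4, 4.2), (5, 5.1), (-5, -5.1)]  # (y, x)
ys = [r[0] for r in rows]
xs = [r[1] for r in rows]
n = len(rows)
mx = sum(xs) / n
my = sum(ys) / n
# slope a = Sxy / Sxx, intercept b = my - a * mx
sxy = sum((x - mx) * (y - my) for y, x in rows)
sxx = sum((x - mx) ** 2 for x in xs)
a = sxy / sxx
b = my - a * mx
print(f"{a:.4f} x + {b:.4f}")  # -> 0.9463 x + -0.3985
```

The values match the output of regression.go shown above.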
The contents of reg_data.txt are as follows:
$ cat reg_data.txt
1,2
3,4.0
2.1,3
4,4.2
5,5.1
-5,-5.1
Plotting data
It is now time to plot the results and the dataset in order to test how accurate the results from the linear regression technique are. For that purpose, we are going to use the Go code of plotLR.go, which will be presented in four parts. plotLR.go requires three command-line arguments, which are a and b from the y = a x + b formula, and the file that contains the data points. The fact that plotLR.go does not calculate a and b on its own gives you the opportunity to experiment with a and b using your own values or values that were calculated by another utility.
The first part of plotLR.go is as follows:
package main

import (
    "encoding/csv"
    "flag"
    "fmt"
    "image/color"
    "os"
    "strconv"

    "gonum.org/v1/plot"
    "gonum.org/v1/plot/plotter"
    "gonum.org/v1/plot/vg"
)

type xy struct {
    x []float64
    y []float64
}

func (d xy) Len() int {
    return len(d.x)
}

func (d xy) XY(i int) (x, y float64) {
    x = d.x[i]
    y = d.y[i]
    return
}
The Len() and XY() methods are needed for the plotting part, whereas the image/color package is needed for changing the colors in the output.
The second part of plotLR.go contains the following code:
func main() {
    flag.Parse()
    if len(flag.Args()) < 3 {
        fmt.Printf("usage: plotLR filename a b\n")
        return
    }
    filename := flag.Args()[0]
    file, err := os.Open(filename)
    if err != nil {
        fmt.Println(err)
        return
    }
    defer file.Close()
    r := csv.NewReader(file)
    a, err := strconv.ParseFloat(flag.Args()[1], 64)
    if err != nil {
        fmt.Println(a, "not a valid float!")
        return
    }
    b, err := strconv.ParseFloat(flag.Args()[2], 64)
    if err != nil {
        fmt.Println(b, "not a valid float!")
        return
    }
    records, err := r.ReadAll()
    if err != nil {
        fmt.Println(err)
        return
    }
This part of the program works with the command-line arguments and the reading of the data.
The third part of plotLR.go is as follows:
    size := len(records)
    data := xy{
        x: make([]float64, size),
        y: make([]float64, size),
    }
    for i, v := range records {
        if len(v) != 2 {
            fmt.Println("Expected two elements per line!")
            return
        }
        s, err := strconv.ParseFloat(v[0], 64)
        if err == nil {
            data.y[i] = s
        }
        s, err = strconv.ParseFloat(v[1], 64)
        if err == nil {
            data.x[i] = s
        }
    }
The last part of plotLR.go is as follows:
    line := plotter.NewFunction(func(x float64) float64 { return a*x + b })
    line.Color = color.RGBA{B: 255, A: 255}
    p, err := plot.New()
    if err != nil {
        fmt.Println(err)
        return
    }
    plotter.DefaultLineStyle.Width = vg.Points(1)
    plotter.DefaultGlyphStyle.Radius = vg.Points(2)
    scatter, err := plotter.NewScatter(data)
    if err != nil {
        fmt.Println(err)
        return
    }
    scatter.GlyphStyle.Color = color.RGBA{R: 255, B: 128, A: 255}
    p.Add(scatter, line)
    w, err := p.WriterTo(300, 300, "svg")
    if err != nil {
        fmt.Println(err)
        return
    }
    _, err = w.WriteTo(os.Stdout)
    if err != nil {
        fmt.Println(err)
        return
    }
}
The function that is going to be plotted is defined using the plotter.NewFunction() method. At this point, you should download some external packages by executing the following commands:
$ go get -u gonum.org/v1/plot
$go get -u gonum.org/v1/plot/plotter$ go get -u gonum.org/v1/plot/vg
Executing plotLR.go will generate the following kind of output:
$ go run plotLR.go reg_data.txt
usage: plotLR filename a b
$ go run plotLR.go reg_data.txt 0.9463 -0.3985
<?xml version="1.0"?>
<!-- Generated by SVGo and Plotinum VG -->
<svg width="300pt" height="300pt" viewBox="0 0 300 300"
xmlns="http://www.w3.org/2000/svg">
<g transform="scale(1, -1) translate(0, -300)">
.
.
.
Therefore, you should save the generated output in a file before using it:
\$ go run plotLR.go reg_data.txt 0.9463 -0.3985 > output.svg
As the output is in Scalable Vector Graphics (SVG) format, you should load it into a web browser in order to see the results. The results from our data can be seen in the following figure.
Figure 1: The output of the plotLR.go program
The plot also shows how accurately the data can be modeled by a linear equation.
Dive deeper into machine learning in Go, from foundational statistics techniques through simple regression and clustering to classification, neural networks, and anomaly detection, in the latest book Mastering Go – Second Edition by Mihalis Tsoukalos.
|
2022-05-28 01:47:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24248136579990387, "perplexity": 1250.014903762354}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663011588.83/warc/CC-MAIN-20220528000300-20220528030300-00458.warc.gz"}
|
https://boardgames.stackexchange.com/questions/15419/whats-the-best-way-to-determine-which-cards-will-be-banned-or-not-reprinted/15428
|
# What's the best way to determine which cards will be banned or not reprinted?
I enjoy playing Standard and would like to compete on a more competitive level, but I am apprehensive of paying, as an extreme example, $25/card for 4 Mutavaults or Sphinx's Revelations, only to have them either not be reprinted in the next block or banned altogether. I was told that WotC has become a lot more judicious in the last several years, so that they rarely need to ban a card anymore due to brokenness/overpower. This does not completely alleviate my concern, so I would like to know if there is any kind of resource that is known to fairly accurately predict which cards will stay around in the next block and which cards won't?
• I keep up with Gathering Magic on YouTube, and Star City Games Premium articles. Both made predictions about the last two bannings. Both were 50/50. The one thing that both had in common, at least for the latest banning, is that if they didn't bring it up, it wasn't banned. Both only mentioned about 4 cards each time, so it is relatively accurate as far as predictions go. – Rainbolt May 1 '14 at 14:58
• You can virtually guarantee that Sphinx's Revelation will not be reprinted in the next block, and probably not for some time if ever. The only cards that tend to see reprint are 'staple' effects, and that's a card that's both thematic to its block and a complicated card, the sort that's very fungible. – Steven Stadnicki May 1 '14 at 15:40
## 2 Answers
You have very little to worry about for standard in terms of banning cards. Standard is a very carefully designed format with a card pool small enough for Wizards to have a very good grasp of the format in their internal testing phases. Every time they have banned a card they have learned from it, and the bans in standard are very few and far between. The two specific cards you mentioned will not be banned in standard.
This is because there is not another opportunity to ban them before they leave the format, which brings me to the more important part of my answer.
Rotation
Players worried about the cost of cards in standard should worry far more about rotation than banning, because rotation is a much more likely thing for standard staples (heavily played, format-defining cards) than banning. Both Mutavault and Sphinx's Revelation are in sets which leave the format in the Autumn/Fall of this year (2014) and as such, particularly given how big an impact they have had on the format, are very likely to rotate, and not be reprinted back into standard.
Wizards tend to avoid having high-power cards which people play with/against a lot over a rotation period reprinted. There have been times where they have regretted reprinting staples, such as the Titans cycle of Magic 2011/Magic 2012 (the poster boy of which was Primeval Titan).
The vast majority of the time, when it comes to rotation, you should assume that every card of value is rotating, because that is almost always the case. Each set or block has a specific theme with a different mix of mechanics, and while some cards (mostly in the core sets) are reprinted, most of the time there is no guarantee that a card from Return to Ravnica block or Magic 2014 will be around this time next year. The cards that DO get reprinted tend to be very low value, as they have usually been reprinted many times before. Take Doom Blade as an example: even cards like that take a few seasons off, as Doom Blade did in Magic 2013.
Standard is a format that Wizards try to fundamentally change once a year, and they do that by removing a lot of cards and replacing them with totally new ones which do very different things (or the same things in different ways). If you are interested in playing standard at any sort of competitive level, then you will need to make yourself quite aware of what cards are rotating when, and either:
1. Accept that those cards will be worth much less than you paid for them when they rotate, or
2. Try to mitigate your losses by selling/trading them before rotation.
Eternal formats
One thing to note which does have some impact on the above is that while most staples will rotate out of standard, they will not become worthless. Older formats like Modern or Legacy often have a significant impact on the cost of cards. Mutavault is an example of a reprint of a staple in older formats, which is part of why its price was so high so early: it has a proven track record.
Similarly, some cards will not lose much (if any) value on rotation. Deathrite Shaman is a perfect example: Deathrite has a significant presence in Modern (or did, until he was banned) and Legacy decks - where his abilities are much more easily leveraged - but sees very little play in standard. The majority of his price is based on these formats, and as such will change little on rotation. Similarly, many of the strongest cards in Standard will go on to see play in Modern and Legacy, and as such lose much less value on rotation, or rebound from the dip very quickly.
Overall, bans are much less frequent, particularly in standard, than rotations. Additionally, the wide-reaching nature of rotations makes them much more important to consider than bans when looking at the investment you are willing to make.
• @xXGrizZ Standard is still the most lucrative format. A 1 dollar card might jump to 15 dollars tomorrow. A 15 dollar card will usually never be 1 dollar tomorrow. If you trade away cards that are hot, and you have a vision for cards that will be hot tomorrow, then you can build a collection pretty quickly. When you invest in modern or legacy, you are basically in the "surely but slowly rising" market. It's definitely more stable, but it has less potential. – Rainbolt May 1 '14 at 20:15
• @Rusher has some good points, and much of your investment in modern can include standard staples (Mutavault is a very good card in modern and sees play in some tribal legacy decks). Realising the money available in standard is not easy though, far from it. The jumps happen because a card most people thought was bad turned out to be very good. Inherently, it is very difficult to capitalize on. Most people will lose money playing standard. For a long time Legacy staples, although expensive, have maintained a constant upward trend. It's a choice of Safe vs Volatile, with modern bridging the two. – Patters May 2 '14 at 8:07
• Deathrite Shaman is banned in Modern now...which proves the point, because he's good enough in Legacy to keep quite a bit of value. In fact it's worth noting that literally every Standard-banned card since Urza's block has been legacy playable (affinity cards, Stoneforge Mystic, and Jace the Mind Sculptor) with the exception of Skullclamp, which has never been Legal in Legacy. – Free Monica Cellio May 5 '14 at 4:07
• @ChadMiller forgot deathrite is banned in modern - updated to mention that! – Patters May 6 '14 at 9:31
As Patters says, you should not worry about cards in Standard getting banned. From 1996 to 2014 (18 years), there have been a grand total of 27 cards banned from Standard/Type 2.
As far as reprints go, there is no perfectly accurate way to predict what will be reprinted and what won't. In general:
• If it's been reprinted before, the chances that it will be reprinted again are higher. (Serra Angel, Cancel, Doom Blade, Shock, Birds of Paradise)
• If it's a popular card and it can fit in a future set, the chances that it will be reprinted are higher. (The Titan cycle from M11 was popular, and although some were extremely powerful, they were reprinted in M12.) This is especially true if the card resonates with the plane represented by the block.
• "Future set," however, may be a number of years down the road (such as the shockland cycle, printed in Ravnica block in 2005, and then again in Return to Ravnica block in 2012).
• If the card is an "answer" to a mechanic or theme present in a future set, the answer card is more likely to be reprinted, especially if it can fit into the set's theme. (Ancient Grudge is a good example from Time Spiral and Innistrad.)
• If the card is a core mechanic for part of the color wheel, there's a good chance that it will see the light of day again. White is probably going to have life gain and weenies. Blue is probably going to have card draw and counterspells. Black is probably going to have creature removal and reanimation. Red is probably going to have burn and artifact removal. Green is probably going to have beatsticks and artifact/enchantment removal. The more "pure" the card is to such effects, the better chances it has; something like Concentrate has better chances at a reprint than Rhystic Scrying, for example.
• In a similar vein, a mono-colored card has better chances than a multicolored card. Outside multicolor-focused blocks like Return to Ravnica, multicolor cards are generally special in some way - such as the gods in Theros block - rather than reprints.
Mutavault and Sphinx's Revelation are unlikely to be reprinted in M15 or "Huey"/"Dewey"/"Louie" (the code names for the next three expansion sets). Unless we return to Lorwyn or have a tribal-heavy block (which we just recently had with Innistrad), I would even say a Mutavault reprint is unlikely in "Blood"/"Sweat"/"Tears" or "Lock"/"Stock"/"Barrel". Sphinx's Revelation is a fairly unique combination of effects (there are only 7 cards I find remotely similar, and several of those aren't W/U), and it is tied in with sphinxes, which aren't super common across the multiverse (though one of the Ravnican guilds is led by a powerful sphinx). I would not expect a reprint of Sphinx's Revelation in Standard until and unless we have a Ravnica 3: The Movie the Book the Show. (It's possible we may see it in a set or precon not in Standard, such as the Commander decks or in Conspiracy.)
If we do ever return to Ravnica again, it will not be until after "Barrel"; Mark Rosewater said that revisiting the plane for a third time was not in the five-year plan.
A topological space $X$ is connected if it cannot be written as the union of two disjoint nonempty open sets; equivalently, the only subsets of $X$ that are both open and closed are $\emptyset$ and $X$ itself. Otherwise $X$ is disconnected, and such a pair of open sets is called a separation of $X$. Two subsets $A$ and $B$ of $X$ are separated if neither meets the closure of the other; $X$ is connected exactly when it is not the union of two nonempty separated sets.

The basic examples live in the real line and the plane. A subset of $\mathbb{R}$ is connected if and only if it is an interval; in particular every $[a, b]$ is connected, while $(0, 1) \cup (1, 2)$ and $(0, 1) \cup (2, 3)$ are disconnected. Any convex set is connected, as are the circle, the sphere, and the punctured plane $\mathbb{R}^2 \setminus \{(0, 0)\}$. By contrast, two disjoint closed disks - say the unit disks centered at $1$ and at $4$ on the real axis - form a disconnected set. In Euclidean space, an open set is connected if and only if any two of its points can be joined by a broken line lying entirely in the set.

At the opposite extreme, a space whose only connected subsets are single points is called totally disconnected. The rationals $\mathbb{Q}$, with the subspace topology from $\mathbb{R}$, are totally disconnected, as is the Cantor set. A related but strictly stronger condition is being totally separated (any two points can be split by a partition of the space into two open sets); taking two copies of $\mathbb{Q}$ and identifying them at every point except zero yields, with the quotient topology, a space that is totally disconnected but not totally separated - indeed not even Hausdorff, although it is $T_1$.
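The interval case deserves the supremum argument the surrounding text gestures at ("much like the proof of the Intermediate Value Theorem"). A sketch, written out (this wording is mine, not the original article's):

```latex
\textbf{Claim.} The interval $[a,b]$ is connected.

\textbf{Proof sketch.} Suppose, for contradiction, that $[a,b] = U \cup V$ with
$U, V$ nonempty, disjoint, and open in $[a,b]$. Relabelling if necessary, assume
$a \in U$, and set
\[
  s \;=\; \sup \{\, x \in [a,b] : [a,x] \subseteq U \,\}.
\]
If $s \in U$, openness of $U$ gives an $\varepsilon > 0$ with
$(s-\varepsilon, s+\varepsilon) \cap [a,b] \subseteq U$; since some
$x > s - \varepsilon$ already satisfies $[a,x] \subseteq U$, it follows that
$[a, \min(s+\varepsilon/2,\, b)] \subseteq U$, which by the definition of the
supremum forces $s = b$ and hence $[a,b] \subseteq U$, so $V = \emptyset$ -- a
contradiction. If instead $s \in V$, openness of $V$ gives
$(s-\varepsilon, s] \subseteq V$, contradicting the definition of $s$. Hence no
such separation exists. $\qed$
```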
A path from $x$ to $y$ in $X$ is a continuous function $f : [0, 1] \to X$ with $f(0) = x$ and $f(1) = y$; $X$ is path-connected if every pair of its points is joined by a path. Every path-connected space is connected, but the converse fails. The standard counterexample is the topologist's sine curve
$$T = \{(0, 0)\} \cup \{\,(x, \sin(1/x)) : x \in (0, 1]\,\},$$
which is connected - it is contained in the closure of a connected set, and the closure of a connected set is connected - but not path-connected, since no path can reach $(0, 0)$ from the oscillating part. The deleted comb space and the extended long line are further examples. For well-behaved spaces the two notions agree: an open subset of a locally path-connected space (for instance, any open subset of $\mathbb{R}^n$) is connected if and only if it is path-connected.

The connected components of $X$ are its maximal connected subsets. Belonging to a common connected subset is an equivalence relation on $X$, and the components are its equivalence classes; they partition $X$, each component is closed, and every point lies in exactly one. A space is locally connected if it has a base of connected open sets, which holds if and only if every component of every open subset of $X$ is open; local connectedness neither implies nor is implied by connectedness. In a totally disconnected space every component is a one-point set. Removing a single point disconnects $\mathbb{R}$, and more generally any arc; on the other hand, removing even a countable infinity of points from $\mathbb{R}^2$ leaves a path-connected set.
Unions of connected sets are connected when the pieces overlap: if $\{A_i\}$ is a family of connected subsets with a point in common - or, more generally, a chain in which each set meets the next - then $\bigcup_i A_i$ is connected. Closures behave well too: if $A$ is connected and $A \subseteq B \subseteq \overline{A}$, then $B$ is connected. Intersections do not: there are pairs of connected sets in $\mathbb{R}^2$ whose intersection is disconnected (two circles meeting in two points, for example), and while the interior of a connected subset of $\mathbb{R}$ is connected (it is again an interval), this fails in $\mathbb{R}^2$ - the union of two closed disks tangent at a point is connected, but its interior is not.

Several stronger variants of connectedness are in use. A space is arc-connected if any two of its points are joined by an arc (an injective path); arc-connected implies path-connected, and in Hausdorff spaces the two coincide. A region $D$ is simply connected if it is path-connected and every simple closed curve lying in $D$ can be contracted to a point within $D$; an annulus is connected but not simply connected. Finally, for finite graphs the picture is combinatorial: a graph $G = (V, E)$ is connected exactly when every pair of vertices is joined by a path of edges, and a cut set of a connected graph is a set of edges whose deletion disconnects it. Connectedness and path-connectedness coincide for finite topological spaces.
Necessarily connected. it was disconnected can be written as the union two. Which means the collection of any objects or collection ( a clearly drawn and. } ^ { 2 } \setminus \ { ( 0,0 ) \ } } is any of. New content will be added above the current area of focus upon selection proof ( 0,0 \... Space in which all examples of connected sets are one-point sets subspace of show that part ( c ) is no true... Is finite, each component is also an open subset of a connected space with the order.. Connected does not exist a separation such that at least one coordinate is irrational. ( a drawn!
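The union property invoked above can be stated as a lemma; a sketch of the standard argument (LaTeX, assuming amsthm-style `lemma` and `proof` environments):

```latex
\begin{lemma}
Let $\{A_i\}_{i \in I}$ be a family of connected subsets of a topological
space $X$ with $\bigcap_{i \in I} A_i \neq \emptyset$. Then
$A = \bigcup_{i \in I} A_i$ is connected.
\end{lemma}
\begin{proof}[Sketch]
Suppose $A = U \cup V$ with $U, V$ open in $A$, disjoint, and nonempty.
Pick $p \in \bigcap_{i} A_i$; without loss of generality $p \in U$
(otherwise relabel $U$ and $V$). Each $A_i$ is connected and meets $U$
at $p$, so $A_i \subseteq U$ for every $i$, whence $V = \emptyset$,
a contradiction.
\end{proof}
```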
|
2021-04-19 05:42:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9074596762657166, "perplexity": 284.8124037609125}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038878326.67/warc/CC-MAIN-20210419045820-20210419075820-00385.warc.gz"}
|
https://www.physicsforums.com/threads/help-with-torque-and-power-required-please.914246/
|
# Help with torque and power required please
I am looking to build a skateboard-like device that I want to power with an electric-skateboard-like system. However, I only want it to go 1.1 m/s using 60 mm wheels, moving a 100 kg load for 12 km. I am looking for help determining gear sizes, motor sizes and specs, really anything. I have seen so much assistance through this forum. Thanks so much!
Baluncore
The torque needed will depend on how steep the hills are. It will take more torque when climbing hills than to overcome friction when travelling on the flat. You need to find the torque needed to overcome friction, while climbing the steepest hill.
What grade is the steepest hill ?
Motor or wheel angular velocity is measured in radians per second which is 2 * Pi * RPM / 60.
From that the power needed will be the torque multiplied by the angular velocity.
I think I would want to calculate for a max 30% grade.
jack action
Gold Member
The maximum force you can apply to your skateboard is determined by the wheel-road friction force ##\mu N##.
I don't think your friction coefficient ##\mu## will be higher than 0.7. If you are powering 2 of 4 wheels, then the normal force ##N## is about half the supported weight, i.e. ##50\ kg \times 9.81\ m/s^2 = 490.5\ N##.
So the maximum force your skateboard can produce is ##0.7 \times 490.5\ N = 343.35\ N##. At the velocity you want, you then need ##1.1\ m/s \times 343.35\ N = 378\ W## of power or about ½ hp. That is what your maximum motor output should be (or less).
At 1.1 m/s, your wheel will revolve at ##\frac{1.1\ m/s}{0.030\ m}\times\frac{30}{\pi}\frac{rpm}{\frac{rad}{s}} = 350\ rpm##. So, for the gearing, whatever rpm your motor is, the gear ratio ##GR## will be ##GR = \frac{rpm_{motor}}{350}##.
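The arithmetic above can be reproduced in a few lines (a sketch using the thread's assumed values: μ = 0.7, 50 kg on the two driven wheels, 30 mm wheel radius):

```python
import math

mu = 0.7                       # assumed tyre-road friction coefficient
weight_on_driven = 50 * 9.81   # N: half of the 100 kg load on 2 driven wheels
v = 1.1                        # m/s, target speed
wheel_radius = 0.030           # m, for 60 mm wheels

f_max = mu * weight_on_driven              # maximum tractive force, N
p_max = v * f_max                          # power at target speed, W
wheel_rpm = v / wheel_radius * 30 / math.pi  # wheel speed in rpm

print(f_max)      # ~343.35 N
print(p_max)      # ~377.7 W, about half a horsepower
print(wheel_rpm)  # ~350 rpm
```

The gear ratio then follows as motor rpm divided by the ~350 rpm wheel speed, as described above.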
Kevj999
JBA
Gold Member
Note: It doesn't appear that "@ jack action" has accounted for the 30% grade in the above solution.
Kevj999
jack action
Gold Member
Note: It doesn't appear that "@ jack action" has accounted for the 30% grade in the above solution.
It doesn't matter. It represents the maximum friction force that the skateboard can produce. Whether it can or cannot climb a 30% grade with that force is another characteristic that can be evaluated separately. If it cannot, then there is nothing you can do about it (except increase friction or weight distribution).
Kevj999
Baluncore
|
2021-09-24 09:39:41
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8595134019851685, "perplexity": 2398.6287240300203}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057508.83/warc/CC-MAIN-20210924080328-20210924110328-00187.warc.gz"}
|
https://cs.stackexchange.com/questions/121816/prove-that-if-we-take-all-the-edges-in-directed-graph-that-are-on-some-shortest
|
# Prove that if we take all the edges in directed graph that are on some shortest path from 1 to N we will get a DAG
We are given a directed weighted graph with $$N$$ nodes, $$M$$ edges, strictly positive edge weights (> 0), and possibly some cycles. Let's observe all the shortest paths from $$1$$ to $$N$$ in this graph: by finding the single-source shortest paths from $$1$$ in the normal graph and from $$N$$ in the reversed graph, we can check for each edge whether it belongs to some shortest path or not.
If we take all the edges that belong to some shortest path and build a separate graph from them, we will get a directed acyclic graph. How can we prove that this graph will never have a cycle? I haven't written many proofs on graphs before; I solved the problem, but I'm not sure why this always holds.
• Try a proof by contradiction: assume the graph of shortest paths you described has a cycle. Now prove that the paths that form the cycle cannot be shortest. – Daniel Mar 16 '20 at 10:42
Suppose there is a cycle $$v_1v_2\ldots v_kv_1$$ such that every edge in this cycle belongs to some shortest path. Suppose $$v_1v_2$$ belongs to the shortest path $$1\ldots u_1v_1v_2u_2\ldots N$$ and $$v_2v_3$$ belongs to the shortest path $$1\ldots u_3v_2v_3u_4\ldots N$$; then the weight of the path $$v_2v_3u_4\ldots N$$ is no greater than that of the path $$v_2u_2\ldots N$$.¹ Hence, $$1\ldots u_1v_1v_2v_3u_4\ldots N$$ is another shortest path.
Repeating similar argument for the edges $$v_3v_4,\ldots,v_kv_1$$, we can get a shortest path $$1\ldots u_1v_1v_2\ldots v_kv_1u_{2k}\ldots N$$. This is impossible because $$1\ldots u_1v_1u_{2k}\ldots N$$ is obviously shorter.
¹ Otherwise the weight of the path $$1\ldots u_3v_2v_3u_4\ldots N$$ is greater than that of the walk $$1\ldots u_3v_2u_2\ldots N$$, which contradicts the assumption that $$1\ldots u_3v_2v_3u_4\ldots N$$ is a shortest path.
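The construction described in the question — shortest distances from $$1$$, shortest distances to $$N$$ via the reversed graph, then an acyclicity check — can be sketched as follows (a minimal illustration; function and variable names are mine, not from the question):

```python
import heapq

def dijkstra(n, adj, src):
    # adj: dict node -> list of (neighbor, weight); weights are > 0
    dist = {v: float('inf') for v in range(1, n + 1)}
    dist[src] = 0
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

def shortest_path_dag_is_acyclic(n, edges):
    # edges: list of (u, v, w); nodes are 1..n; source is 1, sink is n
    adj = {v: [] for v in range(1, n + 1)}
    radj = {v: [] for v in range(1, n + 1)}
    for u, v, w in edges:
        adj[u].append((v, w))
        radj[v].append((u, w))
    d1 = dijkstra(n, adj, 1)    # distances from 1
    dn = dijkstra(n, radj, n)   # distances to n, via the reversed graph
    # edge (u, v, w) lies on a shortest 1->n path iff d1[u] + w + dn[v] == d1[n]
    dag = [(u, v) for u, v, w in edges if d1[u] + w + dn[v] == d1[n]]
    # Kahn's algorithm: the subgraph is acyclic iff a topological order exists
    indeg = {v: 0 for v in range(1, n + 1)}
    out = {v: [] for v in range(1, n + 1)}
    for u, v in dag:
        indeg[v] += 1
        out[u].append(v)
    queue = [v for v in indeg if indeg[v] == 0]
    seen = 0
    while queue:
        u = queue.pop()
        seen += 1
        for v in out[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return seen == n
```

On any input with positive weights the function should return True, which is exactly what the proof above establishes.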
|
2021-05-11 11:11:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 20, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8483831286430359, "perplexity": 134.1754062543616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991982.8/warc/CC-MAIN-20210511092245-20210511122245-00150.warc.gz"}
|
https://en.universaldenker.org/illustrations/640
|
Illustration Rectangular finite potential well (1d) - Graph
Sharing and adapting of the illustration is allowed with indication of the link to the illustration.
Here, a finite, rectangular, one-dimensional potential energy function (potential for short) $$W_{\text{pot}}(x)$$ is sketched as a function of position $$x$$. The potential well has length $$L$$.
• Inside the potential well, that is between $$-L/2$$ and $$L/2$$, a particle has no potential energy.
• Outside the potential well, the particle has a finite potential energy $$V_0$$.
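The sketched potential can be written as a piecewise function; a minimal Python sketch (the parameter names `v0` and `length` are illustrative, standing in for the $$V_0$$ and $$L$$ of the figure):

```python
def w_pot(x, v0=1.0, length=2.0):
    """Finite rectangular well: 0 inside [-L/2, L/2], V0 outside."""
    return 0.0 if abs(x) <= length / 2 else v0
```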
|
2022-08-11 08:06:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8548282980918884, "perplexity": 962.9905639725513}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571246.56/warc/CC-MAIN-20220811073058-20220811103058-00078.warc.gz"}
|
http://scipy.github.io/devdocs/reference/generated/scipy.special.pseudo_huber.html
|
# scipy.special.pseudo_huber#
scipy.special.pseudo_huber(delta, r, out=None) = <ufunc 'pseudo_huber'>#
Pseudo-Huber loss function.
$\mathrm{pseudo\_huber}(\delta, r) = \delta^2 \left( \sqrt{ 1 + \left( \frac{r}{\delta} \right)^2 } - 1 \right)$
Parameters:
delta : array_like
    Input array, indicating the soft quadratic vs. linear loss changepoint.
r : array_like
    Input array, possibly representing residuals.
out : ndarray, optional
    Optional output array for the function results.
Returns:
res : scalar or ndarray
    The computed Pseudo-Huber loss function values.
See Also:
huber : Similar function which this function approximates.
Notes
Like huber, pseudo_huber often serves as a robust loss function in statistics or machine learning to reduce the influence of outliers. Unlike huber, pseudo_huber is smooth.
Typically, r represents residuals, the difference between a model prediction and data. Then, for $$|r|\leq\delta$$, pseudo_huber resembles the squared error and for $$|r|>\delta$$ the absolute error. This way, the Pseudo-Huber loss often achieves a fast convergence in model fitting for small residuals like the squared error loss function and still reduces the influence of outliers ($$|r|>\delta$$) like the absolute error loss. As $$\delta$$ is the cutoff between squared and absolute error regimes, it has to be tuned carefully for each problem. pseudo_huber is also convex, making it suitable for gradient based optimization. [1] [2]
New in version 0.15.0.
References
[1]
Hartley, Zisserman, “Multiple View Geometry in Computer Vision”. 2003. Cambridge University Press. p. 619
[2]
Charbonnier et al. “Deterministic edge-preserving regularization in computed imaging”. 1997. IEEE Trans. Image Processing. 6 (2): 298 - 311.
Examples
Import all necessary modules.
>>> import numpy as np
>>> from scipy.special import pseudo_huber, huber
>>> import matplotlib.pyplot as plt
Calculate the function for delta=1 at r=2.
>>> pseudo_huber(1., 2.)
1.2360679774997898
Calculate the function at r=2 for different delta by providing a list or NumPy array for delta.
>>> pseudo_huber([1., 2., 4.], 3.)
array([2.16227766, 3.21110255, 4. ])
Calculate the function for delta=2 at several points by providing a list or NumPy array for r.
>>> pseudo_huber(2., np.array([1., 1.5, 3., 4.]))
array([0.47213595, 1. , 3.21110255, 4.94427191])
The function can be calculated for different delta and r by providing arrays for both with compatible shapes for broadcasting.
>>> r = np.array([1., 2.5, 8., 10.])
>>> deltas = np.array([[1.], [5.], [9.]])
>>> print(r.shape, deltas.shape)
(4,) (3, 1)
>>> pseudo_huber(deltas, r)
array([[ 0.41421356, 1.6925824 , 7.06225775, 9.04987562],
[ 0.49509757, 2.95084972, 22.16990566, 30.90169944],
[ 0.49846624, 3.06693762, 27.37435121, 40.08261642]])
Plot the function for different delta.
>>> x = np.linspace(-4, 4, 500)
>>> deltas = [1, 2, 3]
>>> linestyles = ["dashed", "dotted", "dashdot"]
>>> fig, ax = plt.subplots()
>>> combined_plot_parameters = list(zip(deltas, linestyles))
>>> for delta, style in combined_plot_parameters:
... ax.plot(x, pseudo_huber(delta, x), label=rf"$\delta={delta}$",
... ls=style)
>>> ax.legend(loc="upper center")
>>> ax.set_xlabel("$x$")
>>> ax.set_title(r"Pseudo-Huber loss function $h_{\delta}(x)$")
>>> ax.set_xlim(-4, 4)
>>> ax.set_ylim(0, 8)
>>> plt.show()
Finally, illustrate the difference between huber and pseudo_huber by plotting them and their gradients with respect to r. The plot shows that pseudo_huber is continuously differentiable while huber is not at the points $$\pm\delta$$.
>>> def huber_grad(delta, x):
...     grad = np.copy(x)
...     linear_area = np.argwhere(np.abs(x) > delta)
...     grad[linear_area] = delta * np.sign(x[linear_area])
...     return grad
|
2022-11-29 14:33:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4217681586742401, "perplexity": 9773.007493137748}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710698.62/warc/CC-MAIN-20221129132340-20221129162340-00761.warc.gz"}
|
https://www.tutorialspoint.com/python-find-product-of-index-value-and-find-the-summation
|
# Python – Find Product of Index Value and find the Summation
When it is required to find the summation of the products of index values and elements, the built-in ‘enumerate’ function is used.
## Example
Below is a demonstration of the same
my_list = [71, 23, 53, 94, 85, 26, 0, 8]
print("The list is :")
print(my_list)
my_result = 0
for index, element in enumerate(my_list):
    my_result += (index + 1) * element
print("The resultant sum is :")
print(my_result)
## Output
The list is :
[71, 23, 53, 94, 85, 26, 0, 8]
The resultant sum is :
1297
## Explanation
• A list of integers is defined and is displayed on the console.
• An integer variable, my_result, is initialized to 0.
• The built-in enumerate function is used to iterate through the list along with its indices.
• Each element is multiplied by its one-based position (index + 1), and the product is added to my_result.
• The final sum is the output that is displayed on the console.
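The same summation can be written more compactly with a generator expression (an alternative sketch, not from the original tutorial):

```python
my_list = [71, 23, 53, 94, 85, 26, 0, 8]
# Sum of each element times its one-based position.
my_result = sum((index + 1) * element
                for index, element in enumerate(my_list))
print(my_result)  # 1297
```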
Updated on 20-Sep-2021 08:24:34
|
2022-08-17 00:36:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3792414665222168, "perplexity": 1588.2675066510658}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572833.78/warc/CC-MAIN-20220817001643-20220817031643-00491.warc.gz"}
|
https://www.lmfdb.org/Variety/Abelian/Fq/?q=5
|
Results (1-50 of 135930 matches)
Label Dimension Base field L-polynomial $p$-rank Isogeny factors
1.5.ae $1$ $\F_{5}$ $1 - 4 x + 5 x^{2}$ $1$ simple
1.5.ad $1$ $\F_{5}$ $1 - 3 x + 5 x^{2}$ $1$ simple
1.5.ac $1$ $\F_{5}$ $1 - 2 x + 5 x^{2}$ $1$ simple
1.5.ab $1$ $\F_{5}$ $1 - x + 5 x^{2}$ $1$ simple
1.5.a $1$ $\F_{5}$ $1 + 5 x^{2}$ $0$ simple
1.5.b $1$ $\F_{5}$ $1 + x + 5 x^{2}$ $1$ simple
1.5.c $1$ $\F_{5}$ $1 + 2 x + 5 x^{2}$ $1$ simple
1.5.d $1$ $\F_{5}$ $1 + 3 x + 5 x^{2}$ $1$ simple
1.5.e $1$ $\F_{5}$ $1 + 4 x + 5 x^{2}$ $1$ simple
2.5.ai_ba $2$ $\F_{5}$ $( 1 - 4 x + 5 x^{2} )^{2}$ $2$ 1.5.ae$^{2}$
2.5.ah_w $2$ $\F_{5}$ $( 1 - 4 x + 5 x^{2} )( 1 - 3 x + 5 x^{2} )$ $2$ 1.5.ae $\times$ 1.5.ad
2.5.ag_r $2$ $\F_{5}$ $1 - 6 x + 17 x^{2} - 30 x^{3} + 25 x^{4}$ $2$ simple
2.5.ag_s $2$ $\F_{5}$ $( 1 - 4 x + 5 x^{2} )( 1 - 2 x + 5 x^{2} )$ $2$ 1.5.ae $\times$ 1.5.ac
2.5.ag_t $2$ $\F_{5}$ $( 1 - 3 x + 5 x^{2} )^{2}$ $2$ 1.5.ad$^{2}$
2.5.af_n $2$ $\F_{5}$ $1 - 5 x + 13 x^{2} - 25 x^{3} + 25 x^{4}$ $2$ simple
2.5.af_o $2$ $\F_{5}$ $( 1 - 4 x + 5 x^{2} )( 1 - x + 5 x^{2} )$ $2$ 1.5.ae $\times$ 1.5.ab
2.5.af_p $2$ $\F_{5}$ $1 - 5 x + 15 x^{2} - 25 x^{3} + 25 x^{4}$ $0$ simple
2.5.af_q $2$ $\F_{5}$ $( 1 - 3 x + 5 x^{2} )( 1 - 2 x + 5 x^{2} )$ $2$ 1.5.ad $\times$ 1.5.ac
2.5.ae_i $2$ $\F_{5}$ $1 - 4 x + 8 x^{2} - 20 x^{3} + 25 x^{4}$ $2$ simple
2.5.ae_j $2$ $\F_{5}$ $1 - 4 x + 9 x^{2} - 20 x^{3} + 25 x^{4}$ $2$ simple
2.5.ae_k $2$ $\F_{5}$ $( 1 - 4 x + 5 x^{2} )( 1 + 5 x^{2} )$ $1$ 1.5.ae $\times$ 1.5.a
2.5.ae_l $2$ $\F_{5}$ $1 - 4 x + 11 x^{2} - 20 x^{3} + 25 x^{4}$ $2$ simple
2.5.ae_m $2$ $\F_{5}$ $1 - 4 x + 12 x^{2} - 20 x^{3} + 25 x^{4}$ $2$ simple
2.5.ae_n $2$ $\F_{5}$ $( 1 - 3 x + 5 x^{2} )( 1 - x + 5 x^{2} )$ $2$ 1.5.ad $\times$ 1.5.ab
2.5.ae_o $2$ $\F_{5}$ $( 1 - 2 x + 5 x^{2} )^{2}$ $2$ 1.5.ac$^{2}$
2.5.ad_e $2$ $\F_{5}$ $1 - 3 x + 4 x^{2} - 15 x^{3} + 25 x^{4}$ $2$ simple
2.5.ad_f $2$ $\F_{5}$ $1 - 3 x + 5 x^{2} - 15 x^{3} + 25 x^{4}$ $1$ simple
2.5.ad_g $2$ $\F_{5}$ $( 1 - 4 x + 5 x^{2} )( 1 + x + 5 x^{2} )$ $2$ 1.5.ae $\times$ 1.5.b
2.5.ad_h $2$ $\F_{5}$ $1 - 3 x + 7 x^{2} - 15 x^{3} + 25 x^{4}$ $2$ simple
2.5.ad_i $2$ $\F_{5}$ $1 - 3 x + 8 x^{2} - 15 x^{3} + 25 x^{4}$ $2$ simple
2.5.ad_j $2$ $\F_{5}$ $1 - 3 x + 9 x^{2} - 15 x^{3} + 25 x^{4}$ $2$ simple
2.5.ad_k $2$ $\F_{5}$ $( 1 - 3 x + 5 x^{2} )( 1 + 5 x^{2} )$ $1$ 1.5.ad $\times$ 1.5.a
2.5.ad_l $2$ $\F_{5}$ $1 - 3 x + 11 x^{2} - 15 x^{3} + 25 x^{4}$ $2$ simple
2.5.ad_m $2$ $\F_{5}$ $( 1 - 2 x + 5 x^{2} )( 1 - x + 5 x^{2} )$ $2$ 1.5.ac $\times$ 1.5.ab
2.5.ac_ab $2$ $\F_{5}$ $1 - 2 x - x^{2} - 10 x^{3} + 25 x^{4}$ $2$ simple
2.5.ac_a $2$ $\F_{5}$ $1 - 2 x - 10 x^{3} + 25 x^{4}$ $1$ simple
2.5.ac_b $2$ $\F_{5}$ $1 - 2 x + x^{2} - 10 x^{3} + 25 x^{4}$ $2$ simple
2.5.ac_c $2$ $\F_{5}$ $( 1 - 4 x + 5 x^{2} )( 1 + 2 x + 5 x^{2} )$ $2$ 1.5.ae $\times$ 1.5.c
2.5.ac_d $2$ $\F_{5}$ $1 - 2 x + 3 x^{2} - 10 x^{3} + 25 x^{4}$ $2$ simple
2.5.ac_e $2$ $\F_{5}$ $1 - 2 x + 4 x^{2} - 10 x^{3} + 25 x^{4}$ $2$ simple
2.5.ac_f $2$ $\F_{5}$ $1 - 2 x + 5 x^{2} - 10 x^{3} + 25 x^{4}$ $1$ simple
2.5.ac_g $2$ $\F_{5}$ $1 - 2 x + 6 x^{2} - 10 x^{3} + 25 x^{4}$ $2$ simple
2.5.ac_h $2$ $\F_{5}$ $( 1 - 3 x + 5 x^{2} )( 1 + x + 5 x^{2} )$ $2$ 1.5.ad $\times$ 1.5.b
2.5.ac_i $2$ $\F_{5}$ $1 - 2 x + 8 x^{2} - 10 x^{3} + 25 x^{4}$ $2$ simple
2.5.ac_j $2$ $\F_{5}$ $1 - 2 x + 9 x^{2} - 10 x^{3} + 25 x^{4}$ $2$ simple
2.5.ac_k $2$ $\F_{5}$ $( 1 - 2 x + 5 x^{2} )( 1 + 5 x^{2} )$ $1$ 1.5.ac $\times$ 1.5.a
2.5.ac_l $2$ $\F_{5}$ $( 1 - x + 5 x^{2} )^{2}$ $2$ 1.5.ab$^{2}$
2.5.ab_af $2$ $\F_{5}$ $1 - x - 5 x^{2} - 5 x^{3} + 25 x^{4}$ $1$ simple
2.5.ab_ae $2$ $\F_{5}$ $1 - x - 4 x^{2} - 5 x^{3} + 25 x^{4}$ $2$ simple
2.5.ab_ad $2$ $\F_{5}$ $1 - x - 3 x^{2} - 5 x^{3} + 25 x^{4}$ $2$ simple
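For dimension 1, each L-polynomial in the table has the form $1 + a x + 5 x^{2}$, and the Weil bounds force its roots to have absolute value $5^{-1/2}$. A hedged sketch (not part of the LMFDB interface) checking this with the quadratic formula:

```python
import cmath

def roots_abs(a, q):
    # Roots of 1 + a*x + q*x**2 via the quadratic formula.
    disc = cmath.sqrt(a * a - 4 * q)
    r1 = (-a + disc) / (2 * q)
    r2 = (-a - disc) / (2 * q)
    return abs(r1), abs(r2)

q = 5
for a in range(-4, 5):  # the nine admissible traces for q = 5 in the table
    m1, m2 = roots_abs(a, q)
    assert abs(m1 - q ** -0.5) < 1e-12 and abs(m2 - q ** -0.5) < 1e-12
```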
|
2021-04-22 16:21:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9875102639198303, "perplexity": 3144.779238522845}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039594341.91/warc/CC-MAIN-20210422160833-20210422190833-00457.warc.gz"}
|
https://zbmath.org/?q=an%3A1085.46023
|
## On the closability of classical Dirichlet forms in the plane.(English. Russian original)Zbl 1085.46023
Dokl. Math. 64, No. 2, 197-200 (2001); translation from Dokl. Akad. Nauk, Ross. Akad. Nauk 380, No. 3, 315-318 (2001).
The author exhibits a measure $$\mu$$ on the plane such that the Dirichlet form $$E(f,g)=\int(\nabla f,\nabla g)\,d\mu$$ is closable, whereas the form $$E_x(f,g)=\int \partial_xf\partial_xg\,d\mu$$ is not. This gives a positive answer to a question of S. Albeverio and M. Röckner [J. Funct. Anal. 88, No. 2, 395–436 (1990; Zbl 0737.46036)]. The measure $$\mu$$ is the restriction of Lebesgue measure to an open subset of the unit square, and the construction is based on a Cantor set of positive measure.
### MSC:
46E35 Sobolev spaces and other spaces of “smooth” functions, embedding theorems, trace theorems
31C25 Dirichlet forms
46N20 Applications of functional analysis to differential and integral equations
47A07 Forms (bilinear, sesquilinear, multilinear)
47B37 Linear operators on special spaces (weighted shifts, operators on sequence spaces, etc.)
|
2022-10-04 23:25:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7524031400680542, "perplexity": 442.44957674713373}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00145.warc.gz"}
|
https://vamoshiszpanski.pl/crusherube/4723_21/
|
# How to calculate the 1:4:8 ratio
##### How to Calculate Ratios: A Step-By-Step Guide ...
The easiest way to make this ratio include whole numbers is to multiply both sides by the same number – in this case, 2 makes sense. 10 x 2 = 20. 2.5 x 2 = 5. Our whole number ratio, therefore, will be 20:5. The highest common factor is 5 – both sides can be equally divided by 5: 20 divided by 5 = 4. 5 divided by 5 = 1.
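The reduction above (divide both sides by the highest common factor) can be sketched in a few lines of Python, where `math.gcd` computes the highest common factor:

```python
from math import gcd

def simplify_ratio(a, b):
    # Divide both sides by the highest common factor.
    g = gcd(a, b)
    return a // g, b // g

print(simplify_ratio(20, 5))  # (4, 1)
```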
##### How to Calculate a Ratio of a Number - Maths with Mum
2018-5-12 We want to work out $20 shared in the ratio of 1:3. Step 1 is to work out the total number of parts in the ratio: 1 + 3 = 4, so the ratio 1:3 contains 4 parts in total. Step 2 is to divide the amount by the total number of parts in the ratio: $20 ÷ 4 = $5. Each of the four parts of the ratio is worth $5.
##### How to Calculate Ratios: 9 Steps (with Pictures) - wikiHow
2020-11-1 Community Answer. You can treat a ratio as a fraction or a division problem: 1:4 = 1 / 4 = 1 ÷ 4. Solve this problem with long division (or a
##### How to Calculate Aspect Ratio.
Aspect ratio is an image projection attribute that describes the proportional relationship between the width of an image and its height. The two most common aspect ratios are 4:3, also known as fullscreen, and 16:9, also known as widescreen. Formula to calculate aspect ratio. If
##### Numbers to Ratio Calculator - getcalc
29 : 27 is the ratio between two numbers of A and B. It can be written as 29/27 in fraction form. Users may verify the results by using the above numbers to ratio calculator. step 4 To calculate the profit sharing, find the sum of ratios sum = 29 + 27 = 56 step 5 calculate
##### Mix Ratios Percentages - Coolant Consultants, Inc.
Mix Ratios Percentages HOW TO CALCULATE PERCENTAGE IF MIX RATIO IS KNOWN. Divide 1 by the total number of parts (water + solution). For example, if your mix ratio is 8:1 or 8 parts water to 1 part solution, there are (8 + 1) or 9 parts.
##### How to Calculate the Quick Ratio (+Examples) The Blueprint
2021-1-9 Step 4: Complete the quick ratio calculation Using the balance sheet totals displayed in Step 2 and Step 3, the numbers you will use to calculate the quick ratio are as follows: Current assets ...
##### How To Calculate Dilution Ratios Quickly And Easily!
2020-2-19 4:1 ratio in a 32oz bottle. 4+1 = 5. 32oz divided by 5 = 6.4oz. So this means that we would need to put in 6.4oz of chemical and then fill the rest with water to make a 4:1 dilution ratio in a 32oz bottle. Let's check the math on that to be sure. 6.4 x 4 = 25.6, now we need to add back the one, which is the 6.4 and we get 25.6 + 6.4 = 32.
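The dilution arithmetic above generalizes to any ratio:1 mix; a small sketch (the function name is illustrative):

```python
def chemical_ounces(bottle_oz, ratio):
    # A ratio:1 dilution splits the bottle into (ratio + 1) equal parts;
    # one part is chemical, the rest is water.
    return bottle_oz / (ratio + 1)

print(chemical_ounces(32, 4))  # 6.4
```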
##### Golden Ratio - mathsisfun
2019-6-24 A Quick Way to Calculate. That rectangle above shows us a simple formula for the Golden Ratio. When the short side is 1, the long side is 1/2 + √5/2, so: The square root of 5 is approximately 2.236068, so the Golden Ratio is approximately 0.5 + 2.236068/2 = 1.618034. This is an easy way to calculate
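The quick calculation above as a one-line check:

```python
# Golden Ratio: (1 + sqrt(5)) / 2
phi = (1 + 5 ** 0.5) / 2
print(round(phi, 6))  # 1.618034
```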
##### Ratio Calculator (Converter) - How to Solve Ratios
Our ratio finder was developed to compute this comparison and determine the relationship between the numbers. How to Calculate Ratio (Step-by-Step): A ratio comprises two parts, a numerator and a denominator, just the same as a fraction. If we have two ratios and want to calculate the missing value in one of them, simply follow the given steps:
##### Ratio Converter Ratio Calculators by iCalculator™
The Ratio Convertor is a simple ratio calculator that allows you to convert a ratio to a fraction, percentage, decimal amount, physical amount or simplified ratio. When calculating Math formula (and Physics, Engineering, Chemistry and other formulas for that matter), we will often encounter ratios and need to calculate the value of that ratio. When you encounter a ratio within a formula, the ...
##### Numbers to Ratio Calculator - getcalc
29 : 27 is the ratio between two numbers A and B. It can be written as 29/27 in fraction form. Users may verify the results by using the above numbers-to-ratio calculator. Step 4: to calculate the profit sharing, find the sum of the ratios: sum = 29 + 27 = 56. Step 5: calculate
##### How to Calculate Aspect Ratio.
Aspect ratio is an image projection attribute that describes the proportional relationship between the width of an image and its height. The two most common aspect ratios are 4:3, also known as fullscreen, and 16:9, also known as widescreen. Formula to calculate aspect ratio. If
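Assuming the usual convention, the simplest-terms aspect ratio comes from dividing width and height by their greatest common divisor; a minimal Python sketch:

```python
from math import gcd

# Reduce width x height to its simplest aspect ratio, e.g. 1920x1080 -> 16:9.
def aspect_ratio(width, height):
    g = gcd(width, height)
    return width // g, height // g

assert aspect_ratio(1920, 1080) == (16, 9)   # widescreen
assert aspect_ratio(640, 480) == (4, 3)      # fullscreen
```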
##### Current Ratio Formula - Examples, How to Calculate
The Current Ratio formula is = Current Assets / Current Liabilities. The current ratio, also known as the working capital ratio, measures the capability of a business to meet its short-term obligations that are due within a year. The ratio considers the weight of
##### Sharing Ratio Calculator Ratio Calculators by iCalculator™
In an equal share ratio calculator, each unit value is equal to one share (u = 1), therefore the actual share can be calculated using the following formula: as = t ÷ u, where: as = Actual Share. Calculating an equal share ratio and the amount of goods, products, services, money etc. that each individual will receive is quite straightforward ...
##### Leverage Ratio Definition: Formula Calculation
$\$19.85 \text{ billion} \div \$4.32 \text{ billion} = 4.59$. Although debt is not specifically referenced in the formula, it is an underlying ...
##### 4 Easy Ways to Determine Gear Ratio (with Pictures)
2021-9-18 In our example, the intermediate gear ratios are 20/7 = 2.9 and 30/20 = 1.5. Note that neither of these are equal to the gear ratio for the entire train, 4.3. However, note also that (20/7) × (30/20) = 4.3. In general, the intermediate gear ratios of a gear train will multiply together to equal the overall gear ratio.
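The multiplication rule stated above is easy to verify in code; a short sketch using the 7-, 20- and 30-tooth example:

```python
# Intermediate gear ratios of a train multiply to the overall ratio.
teeth = [7, 20, 30]                      # driver, intermediate, driven
stage_ratios = [teeth[i + 1] / teeth[i] for i in range(len(teeth) - 1)]

overall = 1.0
for r in stage_ratios:
    overall *= r

# (20/7) * (30/20) == 30/7, about 4.3 as in the example
assert abs(overall - 30 / 7) < 1e-12
```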
##### How to Calculate a Male to Female Ratio and Other
2019-8-8 Calculate the ratio of applicants to full-ride scholarships at College A. 825 applicants: 275 scholarships Simplify: 3 applicants: 1 scholarship Calculate the ratio of applicants to full-ride scholarships at College B. 600 applicants: 150 scholarships Simplify: 4 applicants: 1 scholarship Calculate the ratio of applicants to full-ride scholarships at College C. 2,250 applicants: 250 ...
##### Plot Ratio - Why you need to know (and how to calculate it ...
2018-2-8 Can a property’s plot ratio be adjusted down? It is extremely uncommon, but there have been cases where a property actually had its plot ratio revised downwards. This was seen in the Draft Master Plan in 2013, when Hillview House and Lam Soon Industrial Building had their plot ratios reduced from 1.92 in Master Plan 2008 to 1
##### Ratios - mathcentre.ac.uk
2010-3-15 Similarly, a ratio 1/4 to 5/8 would be written as 2/8 to 5/8, and then as 2 to 5 in its simplest form: 1/4 : 5/8 → 2/8 : 5/8 → 2 : 5. Now it is very important in a ratio to use the same units for the numbers, as otherwise the ratio will be incorrect and the comparison will be wrong. Take this ratio: 15 pence to £3. The ratio is not 15 to 3 and then ...
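The unit-matching rule can be made concrete: convert both quantities to the same unit first, then simplify. A hypothetical helper in Python:

```python
from math import gcd

# 15 pence to £3: convert pounds to pence before forming the ratio.
def simplify_ratio(a, b):
    g = gcd(a, b)
    return a // g, b // g

pence = 15
pounds_in_pence = 3 * 100                # £3 = 300 pence
assert simplify_ratio(pence, pounds_in_pence) == (1, 20)   # not 15 : 3
```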
##### Ratio in Excel (Examples) How to Calculate Ratio in Excel?
2021-10-22 Calculate Ratio in Excel – Example #1. Calculating a ratio in Excel is simple, but we need to understand the logic of doing it. Here we have 2 parameters, A and B. A has a value of 10, and B has a value of 20, as shown below.
##### Aspect Ratio Calculator - 4:3, 16:9, 21:9 (Ratio calculator)
Calculate the Aspect Ratio (ARC) here by entering your in pixel or ratio . Change the image aspect ratio via this Ratio Calculator . The pixel aspect calculator makes it extremely easy to change any "W:H" format with custom a width or height.
##### Quick Ratio Formula Step by Step Calculation with
The previous year's quick ratio was 1.4 and the industry average is 1.7. Calculation of the acid test ratio: the acid test ratio is a measure of a firm's short-term liquidity and is calculated by dividing the summation of the most liquid assets like cash, cash equivalents, marketable securities or short-term investments, and current ...
##### Phenotypic Ratio - The Definitive Guide Biology Dictionary
2020-12-6 Phenotypic ratio is a term that describes the probability of finding the patterns and frequency of genetic trait outcomes in the offspring of organisms. A phenotype is an observable or measurable characteristic and is the result of expressed genes. For example, by noting the traits in a long-haired, pink-nosed and a short-haired, black-nosed guinea ...
http://tex.stackexchange.com/questions/25223/embed-latex-math-equations-into-microsoft-word
Embed LaTeX math equations into Microsoft Word
Let's say I have a (comparatively) lovely-looking document in LaTeX, full of lovingly typeset, (relatively) complex equations.
Now, let's say some Philistines come along one day and decide that the document has to be put into Microsoft Word (2007).
...after the usual mourning period associated with such events, let's say I value my job (more specifically, the bread it provides) enough to get all the text and tables formatted and references organised into the Word document. –...related questions here and here
Now I'm looking at the equations with fear and dread.
One option of course is to just lift screenshots from the original document, but this is painstaking if I need to refer to parts of the equation in the text. Also, I might need to edit equations on the fly.
Anyone know of a free application which allows embedding LaTeX math into MS Word?
I've looked at Aurora and TexPoint which do roughly what I want... they build LaTeX images from source and embed them into the Word document, allowing the source to be edited later... but both are commercial.
...any help in these troubling times will be greatly appreciated.
EDIT: Just a note that Aurora offers a 30-day free trial and is working out really nicely... but still, it's not free. Might be a good solution for those with short-term needs, or money.
-
I'm not sure, but Pandoc might help you. It doesn't support MSOffice format out of the box, but I know it can export to RTF or ODT, so you could use OpenOffice/LibreOffice to save them to the proper doc/docx format. – Paulo Cereda Aug 8 '11 at 17:52
Thanks! That seems like it would have been useful for the original conversion. – badroit Aug 8 '11 at 19:14
Word 2007 has better math typesetting than LaTeX, so there's no need to embed anything. Press Alt+= and have fun. – Philipp Aug 8 '11 at 20:27
@Philipp: "better than TeX" is a very blatant assertion, it has some nice features but nothing ground breaking and it has its share of problems too. – Khaled Hosny Aug 9 '11 at 1:28
If you're going completely free/open source, then I guess dropping MS Word for something like OpenOffice Writer might also be considered. For this, there's OOoLaTeX. From the OOoLaTeX SourceForge project webpage:
OOoLatex is a set of macros designed to bring the power of LaTeX into OpenOffice. It contains two main modules: the first one, Equation, allows to insert LaTeX equations into Writer and Impress documents as png or emf images while the second one, Expand, can be used for simpler equations to expand LaTeX code into appropriated symbol characters and insert them as regular text.
This should work as a cross-platform alternative.
Back to MS Word, a number of work-arounds exist using MS Powerpoint. Copy-and-paste the resulting equation (from Powerpoint) across the Office Suite.
The first is via TeX4PPT. The maintainer(s) suggest it provides an alternative to TeXPoint that is faster:
TeX4PPT is designed following the philosophy of TeXPoint, to enable PowerPoint to typeset sentences and equations using the power of TeX. It differs from TeXPoint in that it uses a native DVI to PowerPoint converter, providing extremely fast conversion. Additionally, the result is set using native truetype fonts under windows, providing the highest fidelity.
TeX4PPT seems to be a little lagging in up-to-date support, since "a compatible version for PP2007 will be forthcoming" (from the website).
The second is via Iguanatex. From the homepage:
IguanaTex is a PowerPoint plug-in which allows you to insert LaTeX equations into your PowerPoint presentation. It is distributed completely for free.
The third is via MyTeXPoint. From the homepage:
Free simplified version of TeXPoint. Partly compatible with the original TeXPoint. It has integrated screenshot tool to copy equations and pictures right from the screen. Supports Microsoft Powerpoint (tested with version 2007 and 2005). Compatible with Microsoft Office 2010.
If you're stuck with an old version of MS Word (for whatever reason), older - free - versions of TeXPoint still exist. I haven't tested any of the choices listed below, but it's worth a shot:
The last version of TeXPoint (v1.5.4) apparently works for all versions, but it is much older than the current, non-free version (v3.3.1), so it probably doesn't provide the latest functionality.
For a complete list of formula editors across many platforms and compatibility criteria (including compatibility with TeX), consider viewing the Wikipedia entry on formula editors.
-
Thanks for all the pointers! I actually have OpenOffice, but I can't use it in this case... editing a document collaboratively with MS-heads and the slightly different interpretation of formatting causes... issues. :) – badroit Aug 8 '11 at 19:22
@badroit: I just found MyTeXPoint and added it to my answer. It mentions "partial compatibility" with TeXPoint. – Werner Aug 8 '11 at 19:25
You should be able to cut and paste mathematics from your web browser to Word (or any of the Microsoft Office suite). Unfortunately, at present you have to make a small edit, but any text editor will do for that.
Given
x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}
Make a small html file that looks like
<!DOCTYPE html>
<html>
<script type="text/javascript"
src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML">
</script>
<title>tex texample</title>
<body>
$$x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}$$
</body>
</html>
View that in a web browser and select "show MathML as/MathML Code" from the right menu:
Select the MathML text from the popup window.
Normally you can paste MathML in to word but for various reasons you need to give Word a hint in this case, so first paste it into a text editor and add the line
<?xml version="1.0"?>
to the start:
Then cut out the edited text and paste it into Word (any version since 2007).
Note the result is a fully editable Word Math Zone, using scalable fonts, not an image.
I used MathJax in a web browser for the initial TeX to MathML conversion as it is the easiest to set up, there are other alternatives. Also, to make it simple, I described the process in terms of cutting and pasting, which works well for one or two expressions but clearly not if you are converting thousands, however the process can be automated in various ways.
-
Then, here is the one click solution: github.com/idf/LaTex2Word-Equation – Daniel 2 days ago
For the Mac, there is the wonderful LaTexiT application which allows you to quickly generate latex fragments and export them in a variety of formats, including PDF. You can store fragments in libraries, so keeping equations organized isn't too hard. This isn't quite the same as editing them directly from within the Word document, but it's pretty close.
I use this regularly for including LaTeX into Powerpoint (if I'm not using Beamer) and InDesign (which I use for posters.)
I don't know if there's an equivalent program for Windows.
-
I found a fairly new opensource project that might help you. It's called LaTeX in Word. According to description:
Latex in Word provides macros for Microsoft Word that allow the use of LaTeX input to create equations images in both inline and display modes without having to install any software on the local computer. All of the LaTeX processing happens on a remotes server. All the user needs is Microsoft Word!
If you really get ambitious, you can set up your own server for even faster equation editing. It requires a little work, but it's not too hard.
Similar macros for other word processors will hopefully be added in the future.
It's available in the project area at SourceForge. I think it's worth a shot.
BONUS: Some screenshots! =P
Seems to be a very interesting approach.
-
It would be nice if there was some actual documentation. It doesn't even say that it's Windows only, as far as I can see. Isn't Word 2007 quite old now? – Alan Munn Aug 8 '11 at 18:51
@Alan: indeed. =) Lack of documentation is a major issue. This project seems to be implemented as a set of VBA macros, so I suppose the 2007 version might work with Office 2010 as well. I can't tell if the docm file is also fully OpenOffice compliant (if so, there's a slight chance of a possible multiplatform support, but not official). =) On a sidenote, my dad is stuck with Word 2003. Yikes! – Paulo Cereda Aug 8 '11 at 19:00
Looks nice... precisely what I need... except for the server part. The people who don't like LaTeX would also not like me sending content from the doc to external parties. They really are no fun. (In general, requiring a server to build LaTeX fragments seems a bit weird... why not just hook up to a local Miktex or similar installation?) – badroit Aug 8 '11 at 19:19
@badroit: Seconded! The server part would scare any potential user. =P – Paulo Cereda Aug 8 '11 at 19:22
@badroit - Then try sourceforge.net/projects/texsword . It uses local MikeTex installation + handles equation numbering nicely. – Adam Ryczkowski Feb 27 '14 at 22:47
For me MathJax has been the way to go as per David Carlisle's suggestion. The one addition I would make is that Microsoft Word by default brings across the formatting of the page displaying the MathML code. I don't think the addition of <?xml version="1.0"?> was doing anything for me except making me go via an editor that doesn't have any formatting to copy.
Instead I have found it quicker to simply copy and paste the MathML from the "show MathML as/MathML Code" window then tap Ctrl then T (or alternately click the relevant buttons in the small menu that appears at the bottom right of the text you've pasted). This tells Word to only pay attention to the text itself at which point it realises that this is the code for a formula and displays it correctly. This is also quicker than going via a text editor.
Sorry for not making this a comment to David Carlisle's answer, apparently I lack the reputation to do so.
-
The above suggestions are really good. But there are similar options, like the LaTeX-style equation editor built into MS Word right from the 2007 version. I don't know about previous versions of Office. Here is a video (https://www.youtube.com/playlist?list=PLbTE-xLDPxtBP-TE2fS1MysSqFCkHh1N3 ) which gives details of most of the common features.
-
Can we define macros in Equation Editor? – Symbol 1 Sep 23 '14 at 16:42
You can't compare the possibilities of mathematics in LaTeX with MS Word. MS Word just can't do what LaTeX can ... – Kurt Sep 23 '14 at 16:51
https://tallerdeconstruccion.com/you-can-also-measure-distance-along-with-your
You can also measure distance with your fist or thumb
How? The fist takes up about $10$ degrees of view when held straight out. So, pacing backwards until the fist totally occludes the tree will give the distance of the adjacent edge of a right triangle. If that distance is $30$ paces, what is the height of the tree? Well, we need a few things. Suppose your pace is $3$ feet. Then the adjacent length is $90$ feet. The multiplier is the tangent of $10$ degrees, which for the sake of memory we will say is $1/6$ (a $5$ percent error). So the answer is approximately $15$ feet, since $90 \times 1/6 = 15$.
Similarly, you can use your thumb instead of your fist. To use your fist you would multiply the adjacent side by $1/6$; to use your thumb, by $1/30$, as that approximates the tangent of $2$ degrees.
This can be reversed. If you know the height of something a distance away that is covered by your thumb or fist, then you would multiply that height by the appropriate amount to find the distance.
Basic functions
The sine function is defined for all real $\theta$ and has a range of $[-1,1]$. Clearly, as $\theta$ winds around the $x$-axis, the position of the $y$ coordinate begins to repeat itself. We say the sine function is periodic with period $2\pi$. A graph will illustrate:
The graph shows two periods. The wavy aspect of the graph is why this function is used to model periodic motions, such as the amount of sunlight in a day, or the alternating current powering a computer.
From this graph (or by considering when the $y$ coordinate is $0$) we see that the sine function has zeros at any integer multiple of $\pi$, or $k\pi$ for $k$ in $\dots,-2,-1, 0, 1, 2, \dots$.
The cosine function is similar, in that it has the same domain and range, but is "out of phase" with the sine curve. A graph of both shows how the two are related:
The cosine function is just a shift of the sine function (or vice versa). We see that the zeros of the cosine function occur at points of the form $\pi/2 + k\pi$, $k$ in $\dots,-2,-1, 0, 1, 2, \dots$.
The tangent function does not have all $\theta$ in its domain; rather, those points where division by $0$ occurs are excluded. These occur when the cosine is $0$, or again at $\pi/2 + k\pi$, $k$ in $\dots,-2,-1, 0, 1, 2, \dots$. The range of the tangent function is all real $y$.
The tangent function is also periodic, but not with period $2\pi$; rather, just $\pi$. A graph will show this. Here we avoid the vertical asymptotes by keeping them out of the plot domain and layering several plots.
$r\theta = l$, where $r$ is the radius of a circle and $l$ the length of the arc formed by the angle $\theta$.
The two are related, since a full circle is $2\pi$ radians and 360 degrees. So to convert from degrees into radians it takes multiplying by $2\pi/360$, and to convert from radians to degrees it takes multiplying by $360/(2\pi)$. The deg2rad and rad2deg functions are available for this task.
In Julia, the functions sind, cosd, tand, cscd, secd, and cotd are available to simplify the task of composing the two functions (that is, sin(deg2rad(x)) is the same as sind(x)).
The sum-and-difference formulas
Consider the point on the unit circle $(x,y) = (\cos(\theta), \sin(\theta))$. In terms of $(x,y)$ (or $\theta$), is there a way to represent the angle found by rotating an additional $\theta$, that is, what is $(\cos(2\theta), \sin(2\theta))$?
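The answer is given by the double-angle identities, $\cos(2\theta) = \cos^2(\theta) - \sin^2(\theta)$ and $\sin(2\theta) = 2\sin(\theta)\cos(\theta)$. The text works in Julia, but the same numerical check in Python:

```python
import math

# Check cos(2t) = cos(t)^2 - sin(t)^2 and sin(2t) = 2 sin(t) cos(t)
# at many values of t; the identities hold for all real t.
def double_angle_ok(t, tol=1e-12):
    x, y = math.cos(t), math.sin(t)
    return (abs(math.cos(2 * t) - (x * x - y * y)) < tol
            and abs(math.sin(2 * t) - 2 * x * y) < tol)

assert all(double_angle_ok(k * 0.1) for k in range(100))
```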
https://wiki.ubc.ca/Course:MATH110/Archive/2010-2011/003/Math_Forum/Webwork_A4
# Course:MATH110/Archive/2010-2011/003/Math Forum/Webwork A4
## Q5
I have a question about Question 5 on WebWorks:
Find the equations of the horizontal asymptotes and the vertical asymptotes of . If there are no asymptotes of a given type, enter 'NONE'. If there is more than one asymptote of a given type, give a comma separated list (i.e.: 1, 2,...).
I have found the horizontal asymptote and it is correct.
But the vertical asymptote is wrong, even though I've checked it through subbing it back into the equation and graphing.
Does anyone know how to solve it or if they have the same problem? EllenTsang 23:31, 17 October 2010 (UTC)
Hi Ellen,
The vertical asymptote occurs when ${\displaystyle 3x^{2}+3x-6=0}$ . Try factoring this and see what you get! CharlyHuxford
## Q12
Does anyone have an idea on how to do Question 12 on the Webworks:
Find the values of c and d that make the following function continuous for all x.
f(x)=9x if x<1
cx^2+d if 1<x<2
3x if x>2
Sean Nugent
Because they want the function to be continuous, your limit when approaching from the left must equal your limit when approaching from the right side.
lim_{x→1⁻} 9x = lim_{x→1⁺} cx^2 + d and lim_{x→2⁻} cx^2 + d = lim_{x→2⁺} 3x
The first limit should be taken at x = 1, as this is where the first discontinuity could occur. You can substitute 1 in place of x and isolate either d or c. The next limit is at x = 2 (once again, where a discontinuity could occur, and they want it continuous...); solve for the opposite variable.
Following the rules of continuity, the limit when approaching from the left must equal the limit when approaching from the right, so you can put your two equations together, solve for c or d, and then substitute in to solve for the other.
So 9(1) = c(1)^2 + d --> 9 = c + d --> 9 - c = d. Do the same on the other side (3(2) = c(2)^2 + d --> 6 = 4c + d), solve, and then substitute.
Hopefully that helps!
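Matching the one-sided limits as described gives two linear equations, 9 = c + d at x = 1 and 6 = 4c + d at x = 2; a quick Python check (not part of the original thread):

```python
# Continuity at x = 1:  9(1) = c(1)^2 + d   ->  c + d = 9
# Continuity at x = 2:  3(2) = c(2)^2 + d   ->  4c + d = 6
def solve_cd():
    c = (6 - 9) / (4 - 1)    # subtract the equations to eliminate d
    d = 9 - c
    return c, d

c, d = solve_cd()
assert (c, d) == (-1.0, 10.0)
assert 9 * 1 == c * 1**2 + d     # continuous at x = 1
assert c * 2**2 + d == 3 * 2     # continuous at x = 2
```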
## Q20
A 0.4 ml dose of a drug is injected into a patient steadily for 0.5 seconds. At the end of this time, the quantity, ${\displaystyle Q}$, of the drug in the body starts to decay exponentially at a continuous rate of 0.35 percent per second. Using formulas, express ${\displaystyle Q}$ as a continuous function of time, ${\displaystyle t}$, in seconds.
I drew a diagram:
And I got:
${\displaystyle Q(t)={\tfrac {4}{5}}\ t}$ if ${\displaystyle 0\leq t\leq 0.5}$
I am trying to find what is Q(t) for when t is after 0.5:
${\displaystyle Q(t)=}$_______? if ${\displaystyle 0.5\leq t\leq \infty }$
I tried using the decay formula, ${\displaystyle a(1-r)^{t}}$.
And I put ${\displaystyle 0.4(1-0.35)^{t-0.5}}$ ...as in, to the power of ${\displaystyle t-0.5}$; I don't know why it doesn't show clearly here.
But it didn't work.
Does anyone know how to do this? EllenTsang
---
Oh, wow. I realized it's 0.0035, not 0.35.
So it's ${\displaystyle 0.4(1-0.0035)^{t-0.5}}$.
I hope nobody else spent ages on this question because of the same thing, haha! EllenTsang
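Putting Ellen's two pieces together, Q(t) can be written as a piecewise function; a small Python sketch of it:

```python
# Linear build-up to 0.4 ml over the first 0.5 s, then exponential decay
# at 0.35 percent (0.0035) per second.
def Q(t):
    if 0 <= t <= 0.5:
        return (4 / 5) * t
    return 0.4 * (1 - 0.0035) ** (t - 0.5)

assert Q(0) == 0
assert Q(0.25) == 0.2
assert abs(Q(0.5) - 0.4) < 1e-12   # the two pieces agree at t = 0.5
assert Q(100) < Q(1)               # decays over time
```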
https://www.sarthaks.com/103426/metallic-spheres-radii-respectively-melted-single-solid-sphere-radius-resulting-sphere
Metallic spheres of radii 6 cm, 8 cm, and 10 cm, respectively, are melted to form a single solid sphere. Find the radius of the resulting sphere
1 Answer
Best answer
Radius (r1) of 1st sphere = 6 cm
Radius (r2) of 2nd sphere = 8 cm
Radius (r3) of 3rd sphere = 10 cm
Let the radius of the resulting sphere be r.
The object formed by recasting these spheres will be same in volume as the sum of the volumes of these spheres.
Volume of 3 spheres = Volume of resulting sphere
(4/3)π(r1³ + r2³ + r3³) = (4/3)πr³
r³ = 6³ + 8³ + 10³ = 216 + 512 + 1000 = 1728, so r = 12
Therefore, the radius of the sphere so formed will be 12 cm.
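Since the (4/3)π factor cancels on both sides, the computation reduces to a cube root of the summed cubes; a small Python check:

```python
# r^3 = r1^3 + r2^3 + r3^3  ->  r = (6^3 + 8^3 + 10^3)^(1/3)
def merged_radius(radii):
    return round(sum(r ** 3 for r in radii) ** (1 / 3), 9)

assert 6**3 + 8**3 + 10**3 == 1728     # = 12^3
assert merged_radius([6, 8, 10]) == 12.0
```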
https://www.physicsforums.com/threads/modular-arithmetic-problem.588897/
# Modular Arithmetic Problem!
1. Mar 20, 2012
### Bipolarity
I'm trying to prove something in modular arithmetic that I came upon across my studies in comp sci. Consider a set of natural numbers $\{n_{1},n_{2},n_{3},\dots,n_{k}\}$
Consider two more natural numbers $m$ and $p$ such that
$$\Bigl(\sum^{k}_{i=1}n_{i}\Bigr) \bmod m = p$$
Now prove that
$((((n_{1} \bmod m + n_{2}) \bmod m + n_{3}) \bmod m + n_{4}) \bmod m + \dots + n_{k}) \bmod m = p$
All help would be appreciated.
BiP
2. Mar 20, 2012
### chiro
Hey bipolarity.
I'm just unsure about what the modulus term is. For example is it (a mod (m + n2)) or is it a mod m + n2 etc.
3. Mar 20, 2012
### rcgldr
I think he means:
$mod_m( ... (mod_m( mod_m(n_{1}) + n_{2})) ... + n_{k})$
4. Mar 20, 2012
### Bipolarity
Sorry! Thanks rcgldr for clarifying what I meant.
I meant the divisor in the modulus operator is always 'm'.
So basically you are adding the next number, then taking the remainder after dividing by m. Then again you are adding the next number, then taking the remainder after dividing by m. You do this until you run out of numbers, at which point you finally take the remainder after dividing by m. The final answer should be 'p'. But I'm trying to prove it.
BiP
5. Mar 21, 2012
### chiro
Thanks Bipolarity.
This looks like a good candidate for mathematical induction. The base case is trivial because for k = 1 both sides are just n1 MOD m, so it's proven for k = 1.
I think I know how you could do it but I'll ask you for your thoughts and any work that you have for solving the problem in your own terms (we encourage that highly here on PF).
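Before writing out the induction, an empirical check never hurts; a quick Python sketch of the claim (a reassurance, not a proof):

```python
from functools import reduce
import random

# Fold "add, then reduce mod m" across the n_i and compare with
# reducing the whole sum once.
def fold_mod(ns, m):
    return reduce(lambda acc, n: (acc + n) % m, ns, 0)

random.seed(0)
for _ in range(100):
    ns = [random.randrange(10**6) for _ in range(random.randrange(1, 10))]
    m = random.randrange(2, 1000)
    assert fold_mod(ns, m) == sum(ns) % m
```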
https://cdsweb.cern.ch/collection/CMS%20Detector%20Performance%20Summaries?ln=sk&as=1
# CMS Detector Performance Summaries
Recently added:
2020-02-18
12:44
Background measurements in the CMS DT chambers during LHC Run 2 /CMS Collaboration Characterizing background helps to understand its trend and sources, and to devise mitigation measures to prevent detector ageing, especially in view of luminosity increase. During LHC Run2, the background in the CMS Drift Tubes (DT) was monitored both by means of online anode current measurements and offline analysis. Results are shown in terms of trends vs LHC instantaneous luminosity and of spatial distributions.. CMS-DP-2020-011; CERN-CMS-DP-2020-011.- Geneva : CERN, 2020 - 9 p. Fulltext: PDF;
2020-02-18
12:43
Monitoring and Modelling of the Radiation Damage of the CMS Phase-1 Pixel Detector in the Barrel Region /CMS Collaboration This Detector Performance Summary shows the comparison between measurement and modelling of the development of sensor leakage currents as well as the depletion voltages for all the layers in the CMS Phase-1 Pixel detector in the barrel region after the end of LHC Run-2. In addition, a first rough measurement of the z dependence of the sensor leakage currents is presented, which was taken after the end of LHC Run-2.. CMS-DP-2020-010; CERN-CMS-DP-2020-010.- Geneva : CERN, 2020 - 14 p. Fulltext: PDF;
2020-02-18
12:42
Development of the CMS Phase-1 Pixel Online Monitoring System and the Distribution and Evolution of Temperatures and Sensor Leakage Currents /CMS Collaboration This Detector Performance Summary shows typical temperature readings and leakage current measurements in the CMS Phase-1 Pixel detector during proton-proton collisions as well as cosmic ray data taking. The impact of different CO$_{2}$ mass flows on the overall cooling performance is also presented.. CMS-DP-2020-009; CERN-CMS-DP-2020-009.- Geneva : CERN, 2020 - 31 p. Fulltext: PDF;
2020-02-18
12:42
Performance of a Soft Muon, Hard Jet and Moderate Missing Energy Trigger in 2018 Data /CMS Collaboration This note presents the performance of a dedicated High-Level Trigger (HLT) algorithm that requires a low transverse momentum $p_{\mathrm{T}}$ muon, a jet with transverse momentum $p_{\mathrm{T}}$ greater than 100 GeV and missing transverse momentum $p_{\mathrm{T}}^{\mathrm{miss}}$ greater than 80 GeV. The trigger algorithm targets events with ISR signatures with moderate $p_{\mathrm{T}}^{\mathrm{miss}}$ and soft muons, that are typical in SUSY models with very compressed mass spectra. [...] CMS-DP-2020-004; CERN-CMS-DP-2020-004.- Geneva : CERN, 2020 - 11 p. Fulltext: PDF;
2020-02-18
12:41
Effects of the electronic threshold on the performance of the RPC system of the CMS experiment /CMS Collaboration Resistive Plate Chambers (RPCs in the following) play a very important role as the dedicated system for muon triggering both in the barrel and in the endcap of the CMS experiment at the Large Hadron Collider. It is therefore of primary importance to tune the operating voltage and the electronic threshold of the front-end boards reading the signals from these detectors in order to optimize the RPC system performance. [...] CMS-DP-2020-003; CERN-CMS-DP-2020-003.- Geneva : CERN, 2020 - 11 p. Fulltext: PDF;
2020-02-04
15:11
B-Jet Trigger Performance in Run 2 /CMS Collaboration This study shows the performance of the online b-tagging run in 2017 (CSV) and 2018 (DeepCSV).. CMS-DP-2019-042; CERN-CMS-DP-2019-042.- Geneva : CERN, 2019 - 23 p. Fulltext: PDF;
2020-01-30
10:19
Identification of highly Lorentz-boosted heavy particles using graph neural networks and new mass decorrelation techniques /CMS Collaboration This note presents several new developments on machine learning (ML)-based identification of highly Lorentz-boosted heavy particles using jet substructure in CMS and their performance with the CMS Phase 1 detector. A new algorithm based on ParticleNet, a graph neural network using an unordered set of jet constituent particles as the input, has been developed and shows significantly improved performance. [...] CMS-DP-2020-002; CERN-CMS-DP-2020-002.- Geneva : CERN, 2020 - 13 p. Fulltext: PDF;
2019-12-16
15:15
Recording and reconstructing 10 billion unbiased b hadron decays in CMS /CMS Collaboration The CMS experiment has recorded a sample of 10 billion events containing the unbiased decays of b hadrons. The accumulation, processing, and validation of this data set were delivered without significant impact on the core physics programme of CMS. [...] CMS-DP-2019-043; CERN-CMS-DP-2019-043.- Geneva : CERN, 2019 - 18 p. Fulltext: PDF;
2019-11-27
10:26
DT hit efficiency measurements with reduced high voltage /CMS Collaboration Efficiency measurements were carried out on the CMS Drift Tubes (DT) applying anode High Voltage lower than nominal. The "HV scans", performed periodically with cosmic rays, allowed us to exclude detector ageing during Run 2, and also to assess the effect of lowering the Front End threshold. [...] CMS-DP-2019-041; CERN-CMS-DP-2019-041.- Geneva : CERN, 2019 - 8 p. Fulltext: PDF;
2019-11-12
15:19
ECAL 2018 refined calibration and Preshower Run 2 performance plots /CMS Collaboration Many physics analyses using the Compact Muon Solenoid (CMS) detector at the LHC require accurate, high resolution electron and photon energy measurements. Following the excellent performance achieved in Run I at center-of-mass energies of 7 and 8 TeV, the CMS electromagnetic calorimeter (ECAL) is operating at the LHC with proton-proton collisions at 13 TeV center-of-mass energy. [...] CMS-DP-2019-038; CERN-CMS-DP-2019-038.- Geneva : CERN, 2019 - 25 p. Fulltext: PDF;
http://math.stackexchange.com/questions/14284/how-to-get-nurbs-control-points-from-an-array-of-points-that-should-be-part-of-i
|
# How to get NURBS control points from an array of points that should lie on the curve defined by the control points we are searching for?
We are talking about non-uniform rational B-splines (NURBS). We have a simple 3-dimensional array like
{1,1,1}
{1,2,3}
{1,3,3}
{2,4,5}
{2,5,6}
{4,4,4}
which are points on a surface created by some B-spline.
How do we find the control points of the spline that created that surface? (I know it's a hard task because of the weights that need to be calculated, but I really hope it is solvable.)
-
I really do not know tag system on math.stackexchange so feel free to edit tags. – Kabumbus Dec 14 '10 at 12:15
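No answer is recorded here, but for the special case where all weights are equal (so the NURBS degenerates to a plain B-spline), the standard approach is a linear least-squares fit: choose a parameter value for each sample point and a knot vector, evaluate the B-spline basis functions at those parameters, and solve the normal equations for the control points. A self-contained curve-fitting sketch in Python (all names, and the choice of knots and parameters, are my own assumptions; recovering the weights of a genuinely rational NURBS is a nonlinear problem not handled here):

```python
def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion: value of the i-th degree-k basis function at t."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    val = 0.0
    d = knots[i + k] - knots[i]
    if d > 0.0:
        val += (t - knots[i]) / d * bspline_basis(i, k - 1, t, knots)
    d = knots[i + k + 1] - knots[i + 1]
    if d > 0.0:
        val += (knots[i + k + 1] - t) / d * bspline_basis(i + 1, k - 1, t, knots)
    return val

def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[r]] for r, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_control_points(samples, params, knots, degree):
    """Least-squares control points P minimising ||B P - Q||, per coordinate."""
    n_ctrl = len(knots) - degree - 1
    B = [[bspline_basis(i, degree, t, knots) for i in range(n_ctrl)] for t in params]
    # normal equations: (B^T B) P = B^T Q
    BtB = [[sum(row[i] * row[j] for row in B) for j in range(n_ctrl)]
           for i in range(n_ctrl)]
    dim = len(samples[0])
    per_coord = []
    for d in range(dim):
        Btq = [sum(B[r][i] * samples[r][d] for r in range(len(B)))
               for i in range(n_ctrl)]
        per_coord.append(solve_linear(BtB, Btq))
    return [tuple(per_coord[d][i] for d in range(dim)) for i in range(n_ctrl)]
```

If the samples lie exactly on a spline with the same knots and parameters, the fit recovers the control points exactly; with noisy or mismatched data it returns the best least-squares approximation.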
https://www.physicsforums.com/threads/ambiguity-of-electrostatic-polarization.611057/
|
# Ambiguity of electrostatic polarization?
1. Jun 3, 2012
### Jano L.
Hello everybody,
https://en.wikipedia.org/wiki/Polarization_density
especially the section at the end, where the writer claims the polarization is ambiguous.
In the example about Alice, the writer states that the pairing of +/- particles is ambiguous and hence the polarization is ambiguous. I think he incorrectly interprets the meaning of the polarization.
The writer even states that Alice can back up her strange pairing procedure by ascribing the crystal surface a non-zero density of (free!) charge. This is ridiculous. The crystal is a dielectric and there is no free charge. All polarization comes from displacements of the bound charges. There will be only bound surface charge.
I think the proper way to define the polarization of the crystal at $\mathbf x$ is to average the dipole moments $\boldsymbol{\mu}_k$ of the smallest neutral cells $k$ intersecting the averaging volume $V$ centred at $\mathbf x$. The polarization is then
$$\mathbf P(\mathbf x) = \frac{1}{V} \sum_k \boldsymbol{\mu}_k$$
What do you think: is this not an unambiguous definition?
http://openstudy.com/updates/50475a6ce4b0c3bb098602c4
|
## experimentX: Evaluate, given that $b > a$: $\int_0^\infty {x^{a-1} \over 1 + x^b}\; dx$
1. mukushla
*
2. experimentX
man ... I'm stuck at finding the simple residue.
3. satellite73
if my memory serves me, it is the numerator divided by the derivative of the denominator evaluated at the pole
4. experimentX
yeah, you are correct ... I had trouble fixing it. $\huge \lim_{z \rightarrow e^{i \pi \over b}} {(z - e^{i{\pi }\over b}) z^{a-1} \over 1 + z^b} = \lim_{z \rightarrow e^{i \pi \over b}} {z^{a-1} \over bz^{b-1}}$
5. experimentX
Somehow I got it fixed. I had this question on my exam paper ... man, I couldn't do it.
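Summing the residues over the poles in the upper half of the wedge contour leads to the known closed form $\int_0^\infty \frac{x^{a-1}}{1+x^b}\,dx = \frac{\pi}{b\sin(\pi a/b)}$ for $0 < a < b$. A quick numerical cross-check in Python (a sketch; the function names are mine):

```python
import math

def integral_numeric(a, b, n=4000):
    """Evaluate the integral of x^(a-1)/(1+x^b) over [0, inf) numerically,
    via the substitution x = t/(1-t), which maps [0, inf) onto [0, 1)."""
    def g(t):
        if t >= 1.0:
            return 0.0  # the transformed integrand vanishes at t = 1 when b > a + 1
        x = t / (1.0 - t)
        return x ** (a - 1) / (1.0 + x ** b) / (1.0 - t) ** 2
    h = 1.0 / n
    s = g(0.0) + g(1.0)  # composite Simpson's rule (n must be even)
    for j in range(1, n):
        s += (4 if j % 2 else 2) * g(j * h)
    return s * h / 3.0

def integral_residue(a, b):
    """Closed form from the residue computation, valid for 0 < a < b."""
    return math.pi / (b * math.sin(math.pi * a / b))

print(round(integral_numeric(2, 4), 6), round(integral_residue(2, 4), 6))  # 0.785398 0.785398
```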
https://www.gradesaver.com/textbooks/science/physics/college-physics-4th-edition/chapter-16-problems-page-615/47
|
College Physics (4th Edition)
$E = 12.7~N/C$
We can find the magnitude of the horizontal component of the electric field due to the charge at the bottom corner:
$E_x = \frac{kq}{r^2}$
$E_x = \frac{(9.0\times 10^9~N~m^2/C^2)(1.00\times 10^{-9}~C)}{(1.0~m)^2}$
$E_x = 9.0~N/C$
We can find the magnitude of the vertical component of the electric field due to the charge at the top corner:
$E_y = \frac{kq}{r^2}$
$E_y = \frac{(9.0\times 10^9~N~m^2/C^2)(1.00\times 10^{-9}~C)}{(1.0~m)^2}$
$E_y = 9.0~N/C$
We can find the magnitude of the electric field at point D due to the two point charges:
$E = \sqrt{E_x^2+E_y^2}$
$E = \sqrt{(9.0~N/C)^2+(9.0~N/C)^2}$
$E = 12.7~N/C$
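The arithmetic can be verified in a few lines of Python (a sketch; the variable names are mine):

```python
import math

k = 9.0e9      # Coulomb constant, N m^2 / C^2
q = 1.00e-9    # each point charge, C
r = 1.0        # distance from each charge to point D, m

Ex = k * q / r**2       # horizontal component (bottom-corner charge)
Ey = k * q / r**2       # vertical component (top-corner charge)
E = math.hypot(Ex, Ey)  # magnitude of the resultant field at D

print(round(Ex, 1), round(E, 1))  # 9.0 12.7
```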
https://www.tex4tum.de/harmonic-oscillation.html
|
# Harmonic Oscillation
Harmonic Oscillation is a special type of periodic motion where the restoring force $$F$$ on the moving object is directly proportional to the object's displacement magnitude $$x$$ and acts towards the object's equilibrium position. If $$F$$ is the only force acting on the system, the system is called a simple harmonic oscillator, and it undergoes simple harmonic motion: sinusoidal oscillations about the equilibrium point, with a constant amplitude and a constant frequency (which does not depend on the amplitude). If a frictional force (damping) proportional to the velocity is also present, the harmonic oscillator is described as a damped oscillator.
$F=ma=m{\ddot {x}}=-kx$
Solving this differential equation, we find that the motion is described by the function
$x(t)= A \cdot \cos(2\pi f_0 t - \varphi).$
with the amplitude $$A$$, the natural frequency $$f_0 = \frac{1}{2\pi}\sqrt{k/m}$$, and the phase shift $$\varphi$$.
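The closed-form solution can be cross-checked by integrating $m\ddot{x} = -kx$ numerically. A small velocity-Verlet sketch in Python (the parameter values are my own choice):

```python
import math

m, k = 1.0, 4.0              # mass (kg) and spring constant (N/m)
omega = math.sqrt(k / m)     # angular frequency; f0 = omega / (2*pi)
A = 0.5                      # amplitude: start at x = A with v = 0

n_steps, t_end = 1000, 1.0
dt = t_end / n_steps
x, v = A, 0.0
for _ in range(n_steps):
    acc = -k / m * x                  # Hooke's law
    x += v * dt + 0.5 * acc * dt * dt
    acc_new = -k / m * x
    v += 0.5 * (acc + acc_new) * dt   # velocity-Verlet update

# The simulated x should match the closed form A*cos(omega*t) closely.
print(abs(x - A * math.cos(omega * t_end)) < 1e-5)  # True
```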
http://codeforces.com/blog/entry/11461
|
### yarrr's blog
By yarrr, 5 years ago
Can we discuss the problems? (It's quiet here.)
If so, please explain how problems B, E, and F are solved.
» 5 years ago, # | ← Rev. 4 → 0
A: Select a node and keep toggling until all of the nodes become the same color. :)
C: DP with state (pos in string, knight1, knight2)
D: Exactly as gen said, ternary search on time.
G: Pure backtracking. One of our teammates' observations: if you arrange 36 nodes in a 6 x 6 grid and add edges between adjacent rows, from S to the whole top row, and from the bottom row to T, then there are only 6^6 ways of going from S to T, which is quite low. We are not sure if this is the worst graph (to simplify we took 38 nodes here) [but there is 2^6 as well; for 18*2 it makes 2^36, maybe].
H: Loop over all gcd values, and run a loop over the numbers to see if there is any segment where the gcd is that value. It's easy.
I: I am not sure how my teammates did it; my idea is that you can list all divisors of all numbers from 1 to 2e6 in O(n log n). Now we know: a + ... + b = (b - a + 1)(a + b) / 2. So for each of the divisors of the given n, see if you can find such a and b.
J: DP in two stages, one inter-group and one intra-group. Both are almost the same bitmask DP.
Unsolved:
B: no idea
E: I coded a DP but later found I overlooked a special permutation, giving WA.
F: For each number you need to keep track of the nearest numbers (previous and next) which are not coprime to it. Say we get an a-b pair (a, b are indices); if the a-b distance is > k, ignore it. Otherwise, add 1 on the segment-tree range (k - (b - a)) ~ a, and so on. If you then check for 0's in the segment tree, you will find the answer. Not sure if it is correct.
• » » 5 years ago, # ^ | ← Rev. 5 → 0 I don't understand how you solve H. Could you be more specific? I solved it using divide and conquer with a bitset and a little heuristic. But how do you solve it in 4 minutes??? In I: k = number of consecutive numbers, a = starting number, x = the number we want to get, so x = a*k + k*(k-1)/2. Then I just thought "for a to 10e6: for k to (while possible):" would pass XD, and it passed XD
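The enumeration sketched above for problem I can be written directly: for each length k, the identity x = a*k + k*(k-1)/2 fixes the starting number a, so all decompositions of x into consecutive positive integers fall out of one loop (a hedged sketch; the function name is mine):

```python
def consecutive_sum_decompositions(x):
    """All (a, k) with a + (a+1) + ... + (a+k-1) = x and a, k >= 1."""
    out = []
    k = 1
    while k * (k - 1) // 2 < x:        # need a = (x - k*(k-1)/2) / k >= 1
        rem = x - k * (k - 1) // 2
        if rem % k == 0:
            out.append((rem // k, k))
        k += 1
    return out

# 15 = 15 = 7+8 = 4+5+6 = 1+2+3+4+5
print(consecutive_sum_decompositions(15))  # [(15, 1), (7, 2), (4, 3), (1, 5)]
```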
• » » » 5 years ago, # ^ | ← Rev. 2 → +3
assume gcd = g
last = 0
for i = 0 to n - 1:
    if num[i] % g == 0:
        if last == 0:
            last = num[i] / g
        else:
            last = gcd(last, num[i] / g)
        if last == 1:
            "g can be gcd"
    else:
        last = 0
• » » 5 years ago, # ^ | ← Rev. 2 → 0 For G you can write a DP to calculate the worst-case graph for the number of shortest paths. The problem is to create a sequence of positive integers which sum to 34 and whose product is maximized. Here is the code in Python:
N = 34
dp = (N+1) * [(0,)]
dp[0] = (1,)
for j in range(1, N+1):
    for i in range(j):
        dp[j] = max(dp[j], (dp[i][0]*(j-i),) + dp[i][1:] + (j-i,))
print(dp[N])
The answer: (236196, 4, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3), or 236196 = 4 * (3 ^ 10)
• » » » 5 years ago, # ^ | 0 Yep, but at the same time I think choosing/not choosing gold is a factor; in this complete-type graph that factor is 1, since you can always come back. So making a worst-case graph is a bit difficult, right?
• » » » » 5 years ago, # ^ | 0 This is the worst-case graph for maximizing the number of shortest paths. It would give the worst-case runtime for a brute-force all-shortest-paths solution that used Dijkstra's algorithm to pay back the necessary nodes. There are many graphs you can make that require payback, but they reduce the number of shortest paths in the process. So yes, depending on the solution, constructing a worst-case graph is hard. :) A solution that brute-forces all shortest paths and then brute-forces all gold usage on all shortest paths should TLE. I don't think the data is strong enough to break that solution. :(
» 5 years ago, # | +5 A relatively complex, but non-hacky solution.
1) Divide the nodes into levels according to BFS order from node #1; you can merge all levels starting from the one containing node #2, for convenience.
2) Process levels in ascending order and apply the following DP: dp(i, P) is the maximum money we can get on the (shortest) path to i such that the current level of nodes has partitioning P. A partitioning of the nodes of a level means an arrangement of the nodes into sets, where all nodes of a set are connected internally and disconnected from nodes in other sets. Two nodes are connected if there exists a path between them in the graph induced by the nodes on the current and all previous levels, minus the nodes we have stolen gold from. Also, mark one set (or none) as special if the nodes in that set are also connected to node #1. Knowing dp(i, P), try all edges from i to j on the next level of nodes, try both stealing and not stealing the money from i, and update dp(j, P) accordingly.
3) Base case: dp(1, {{1}}) = 0
4) Result: max of dp(2, P) over all P's where node #2 is in the special set.
» 5 years ago, # | 0 Hi! I already have an opencup account. I tried "Sector admin tools -> Ejudge console" (via Google Translate XD), but didn't find the recent contest. Could you tell me where to submit this contest's problems after the contest?
• » » 5 years ago, # ^ | 0 http://opentrain.snarknews.info/~ejudge/team.cgi?contest_id=9914You can find all the problems from past 2013/2014 constests.
• » » » 5 years ago, # ^ | 0 Oh, I had visited this page but didn't realize it has the recent contest's problems… Thanks a lot!
https://tex.stackexchange.com/questions/256380/use-different-titles-and-styles-for-lists-and-sub-lists-of-acronyms-glossar
|
# Use different titles - and styles - for lists and sub-lists of acronyms (glossaries package)
I am trying to add one main list of abbreviations and two subsections that contain further sub-lists of abbreviations with different titles in the frontmatter of my document, using the "acronym" option of the glossaries package, with the titles appearing in the table of contents as "List of Abbreviations" (main title), "Time" (subsection), and "Cities" (subsection).
Is there a way to split the acronyms into a main set and subsets, each with its own title? I can see that this is possible for a glossary, for example using the \newglossary command and then specifying the "type" when defining a new term, but is there a similar function for acronyms? I am also trying to slightly change the style of the two acronym lists, for example by adding \texttt.
Thanks very much for any help, muuuuch appreciated!
\documentclass{article}
% Abbreviations
\usepackage[acronym,nonumberlist,toc]{glossaries}
%Remove the dot at the end of glossary descriptions
\renewcommand*{\glspostdescription}{}
\renewcommand*{\acronymname}{List of Abbreviations}
\renewcommand*\acronymname{Time}
\newacronym{utc}{UTC}{Coordinated Universal Time}
\renewcommand*\acronymname{Cities}
\newacronym{la}{LA}{Los Angeles}
\newacronym{ny}{NY}{New York}
\makeglossaries
\begin{document}
\frontmatter
\listoffigures % print list of figures
\listoftables % print list of tables
\printglossaries
\mainmatter
\chapter{Introduction}
% Use the acronyms
\gls{utc} is 3 hours behind \gls{adt}.
\gls{ny} is 3 hours ahead of \gls{la}.
\end{document}
I have something working OK, but I still cannot get it to print the main acronyms and then the others as sub-lists correctly:
\documentclass{article}
\usepackage{hyperref}
\usepackage[style=long, nonumberlist, nogroupskip]{glossaries}
\makenoidxglossaries
% The main title "List of Abbreviations with some acronyms should be printed first, followed by the subsections "Time" and "Cities"
%\setglossarysection{section}
\newglossary[alg1,nonumberlist, type=\acronymtype, section=subsection]{time}{acn1}{acr1}{Time}
\newglossary[alg2,nonumberlist,type=\acronymtype, section=subsection]{cities}{acn2}{acr2}{Cities}
% This entry is part of the main glossary
\newglossaryentry{orange}{name=orange, description={an orange coloured fruit},first={Orange}}
\newglossaryentry{utc}{type=time, name=\textsf{UTC}, description={Coordinated Universal Time},first={Coordinated Universal Time (UTC)}}
\newglossaryentry{la}{type=cities, name=\textrm{LA}, description={Los Angeles},first={Los Angeles (LA)}}
%: ----------------------- list of figures/tables/acronyms ------------------------
\begin{document}
\frontmatter
\listoffigures % print list of figures
\listoftables % print list of tables
\clearpage
\phantomsection
\printglossary[title=List of Abbreviations, toctitle=List of Abbreviations]
\printnoidxglossary[type=time]
\printnoidxglossary[type=cities]
\mainmatter
\chapter{Introduction}
% Use the acronyms
\gls{orange} is a main acronym, while \gls{utc} is part of a sub-list of acronyms called Time and \gls{la} is part of another sub-list of acronyms called Cities.
\end{document}
\newacronym internally uses \newglossaryentry and has an optional argument that can be used to add any extra keys that the entry requires. There are two possible approaches. The first is to use child entries:
\documentclass{book}
% Abbreviations
\usepackage[acronym,nonumberlist,
nopostdot,% Remove the dot at the end of glossary descriptions
style=tree,% use hierarchical style
toc]{glossaries}
\makeglossaries
\renewcommand*{\acronymname}{List of Abbreviations}
\newglossaryentry{time}{name=Time,description={}}
\newacronym[parent=time]{utc}{UTC}{Coordinated Universal Time}
\newglossaryentry{cities}{name=Cities,description={}}
\newacronym[parent=cities]{la}{LA}{Los Angeles}
\newacronym[parent=cities]{ny}{NY}{New York}
\begin{document}
\frontmatter
\printglossaries
\mainmatter
\chapter{Introduction}
% Use the acronyms
\gls{utc} is 3 hours behind \gls{adt}.
\gls{ny} is 3 hours ahead of \gls{la}.
\end{document}
This produces:
In this case the glossary style must be one that supports hierarchical entries, so I've chosen the tree style for the MWE. See the Predefined Glossary Styles table in the user manual for other options (the maximum level needs to be 1 or more, but should not be one of the homograph styles).
The second approach is to define different glossaries and use the type key. For example:
\documentclass{book}
\usepackage[nonumberlist,
nopostdot,% Remove the dot at the end of glossary descriptions
style=tree,% change as appropriate
toc]{glossaries}
\newglossary{time}{gls2}{glo2}{Time}
\newglossary{cities}{gls3}{glo3}{Cities}
\makeglossaries
\newglossaryentry{sample}{name=sample,description={an example}}
\newacronym[type=time]{utc}{UTC}{Coordinated Universal Time}
\newacronym[type=cities]{la}{LA}{Los Angeles}
\newacronym[type=cities]{ny}{NY}{New York}
\begin{document}
\frontmatter
\printglossary
\chapter{List of Abbreviations}
\setglossarysection{section}
\printglossary[type=time]
\printglossary[type=cities]
\mainmatter
\chapter{Introduction}
\gls{utc} is 3 hours behind \gls{adt}.
\gls{ny} is 3 hours ahead of \gls{la}.
\gls{sample} entry.
\end{document}
This puts the main glossary in a chapter (assuming you want that) but the two abbreviation glossaries are placed in sections. The result looks like:
The style no longer needs to be one that supports child entries, so you can change it to whatever is suitable.
https://support.bioconductor.org/p/109167/
|
Cannot install clusterProfiler
2
0
Entering edit mode
dtatarak • 0
@dtatarak-15870
Last seen 3.4 years ago
I am using the following code to install clusterProfiler:
source("https://bioconductor.org/biocLite.R")
BiocInstaller::biocLite("clusterProfiler")
I get the following error:
Error: package or namespace load failed for ‘AnnotationDbi’ in loadNamespace(j <- i[[1L]], c(lib.loc, .libPaths()), versionCheck = vI[[j]]):
there is no package called ‘bit’
Error : package ‘AnnotationDbi’ could not be loaded
ERROR: lazy loading failed for package ‘GO.db’
• removing ‘/Library/Frameworks/R.framework/Versions/3.5/Resources/library/GO.db’
The downloaded source packages are in
Warning message:
In install.packages(pkgs = doing, lib = lib, ...) :
installation of package ‘GO.db’ had non-zero exit status
I'm really not sure what to do here. When I search online, everyone pretty much says to do exactly what I did, and that it should install the required packages. I am new to R, and this is beyond me. Thanks for any help you can provide!
clusterprofiler biocinstaller • 2.0k views
0
Entering edit mode
@james-w-macdonald-5106
Last seen 11 hours ago
United States
It looks like you have AnnotationDbi installed, but are missing dependencies. Or actually, dependencies of dependencies, as bit isn't a direct dependency of AnnotationDbi. You probably need to re-install AnnotationDbi to get all the dependencies.
0
Entering edit mode
I tried that. It installs successfully, but I still get the exact same error when installing clusterProfiler.
0
Entering edit mode
Then install bit using biocLite
0
Entering edit mode
Guangchuang Yu ★ 1.2k
@guangchuang-yu-5419
Last seen 8 months ago
China/Guangzhou/Southern Medical Univer…
http://math.stackexchange.com/questions/35713/infinite-prime-proof-using-eulers-totient
|
# Infinite Prime Proof Using Euler's Totient
I need something explained or corrected: In my number theory class we proved that there are an infinite number of primes using Euler's Phi Totient. It went something like this:
Let $M = p_1 p_2 \dots p_n$ be the product of all primes. Consider $1 < A \le M$:
Some prime must divide $A$, call it $q$. Since $q$ must be one of the primes, $q$ must divide $M.$
So $\gcd(A,M) > 1$. Thus $\phi(M) = 1$ ...?
Which is not even and contradicts the theorem that $\phi(N)$ is even for $N>2.$ Therefore there exists an infinite number of primes.
I get confused by the statement "Thus $\phi(M) = 1$"....
Did I possibly copy this proof down wrong? Or am I missing something? Thank you in advance.
Edit: by "consider a such that..." I meant consider an integer '$a$'. I replaced it with $A$ to hopefully make it clearer.
Edit2: I'm sorry, I am not familiar with using the equation editor. This is not a homework assignment, just studying for my exam. I just want to be able to understand this or clarify it.
-
By definition, $\phi(M)$ is the number of numbers in $S=\{1,2,\dots,M\}$ that are relatively prime with $M$. The argument shows that if $1<A\le M$, then $A$ is not relatively prime with $M$, so there is only one element of $S$ relatively prime with $M$, namely 1, and $\phi(M)$ is therefore equal to 1.
-
Ohhhhhh... Thank you. There is only one element in S that is relatively prime with M... and it is 1. So $\phi(M) = 1$. That makes sense. – Eric Apr 28 '11 at 22:43
Note that $\phi(M)$ counts the number of integers in the interval $[1, M-1]$ which are relatively prime to $M$. We are told to use the fact that $\phi(M)$ is almost always even.
Since $1$ is relatively prime to $M$, that leaves $\phi(M)-1$ numbers in our interval, different from $1$, which are relatively prime to $M$.
But since $\phi(M)$ is even, it follows that $\phi(M)-1$ is odd, and in particular not equal to $0$, since $0$ is even! Thus there is a number $a \ne 1$, in our interval, such that $a$ is relatively prime to $M$. Any prime divisor $p$ of $a$ must be different from all the $p_i$ in the given list, since $a$ is relatively prime to $M$.
Seems like a bit too much machinery for this problem, especially since we can see that if $n>1$, the number $M-1$ is not equal to $1$, and is relatively prime to $M$.
-
In the proof I state that gcd(a,M) > 1 so i'm not saying that they are relatively prime. – Eric Apr 28 '11 at 22:40
I understand what you are saying now. Thank you! – Eric Apr 28 '11 at 22:47
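The evenness fact the proof leans on is easy to sanity-check numerically. A brute-force sketch (the `totient` helper below is written directly from the definition, not taken from a library):

```python
from math import gcd

def totient(n):
    # count the integers in 1..n that are coprime to n
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# phi(1) = phi(2) = 1; for every n > 2, phi(n) is even
assert totient(1) == 1 and totient(2) == 1
assert all(totient(n) % 2 == 0 for n in range(3, 200))

# phi(M) = 1 forces M <= 2, which is the contradiction the proof exploits
print([n for n in range(1, 200) if totient(n) == 1])  # -> [1, 2]
```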
https://jira.lsstcorp.org/browse/DM-1364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&showAll=true
# replace "bad data" flag in SdssCentroid
#### Details
• Type: Story
• Status: Done
• Resolution: Done
• Fix Version/s: None
• Component/s:
• Labels:
• Story Points:
3
• Sprint:
Science Pipelines DM-S15-1
• Team:
Data Release Production
#### Description
SdssCentroid has a "bad data" flag that doesn't actually convey any information about what went wrong. This should be replaced with one or more flags that provide more information.
#### Activity
Jim Bosch added a comment -
I'm modifying the tests for SdssCentroid on DM-1161 (as part of a change to the test utility code). Please ping me before starting this issue, unless DM-1161 has already been merged before you branch.
Perry Gee added a comment -
I'm working on this issue if you have anything to communicate about DM-1161
I am also wondering how you feel about the style for throwing out of lower level code, that is, whether to pass the flagHandler to the lowest level of algorithmic code, or to have measure() catch specific error from that code and translate it into the correct MeasurementError.
My preference is to pass the flagHandler so that the code all knows about the possible failure modes, but I am not sure that you agree.
Jim Bosch added a comment -
I'm working on this issue if you have anything to communicate about DM-1161
I've changed all the unit tests quite a bit on the DM-1161 branch, but that's the only way I imagine these could conflict. I don't think this will require a huge amount of new test code, so I think it will be okay.
I am also wondering how you feel about the style for throwing out of lower level code, that is, whether to pass the flagHandler to the lowest level of algorithmic code, or to have measure() catch specific error from that code and translate it into the correct MeasurementError.
I think it depends on where the lower-level code lives - if this is just some utility function that's called only by one measurement algorithm, then passing the FlagHandler is fine. If it's something shared by multiple algorithms, I'm not so sure (as those would have differently-configured FlagHandlers in general), and if it's code that might be called by things other than algorithms, we should definitely just catch and re-throw exceptions.
Perry Gee added a comment -
What I really hipChatted you about:
I wanted to know if you had resolved the weirdness about error handling and bubbling from SdssShapeImpl back up through the measurement framework. It is similar to the error issues in SdssCentroid, though slightly more complex.
The SdssShapeImpl code uses an enumeration to set a flags variable as I recall, and then assumes that the enumeration is the same as a second enumeration in SdssShape. I called out this ugliness at one time, but postponed fixing it during the port.
Now doMeasureCentroidImpl is just a pair of anonymous routines, not a class. And one strategy is to just fold this code into the SdssCentroid Algorithm itself, which would make it possible to get to the flagHandler from this routine. Not sure if this is allowed, or if the code should stay separated. But that would be the cleanest approach.
Another choice would be to make a CentroidImpl class and give it a flags variable like SdssShapeImpl has. I don't like this much, but would like to be consistent with whatever you are doing to SdssShape.
Another possibility is to use the old style retval system to return from a list of enumerated errors. I actually like this pretty well, assuming that I can't just absorb this method into the class.
--------------------------------------
One other question. Do you envision that when a MeasurementError is thrown, the error message could be customized by the thrower, so that the bad values which caused the throw could be published in the log? You may have this already somewhere, though I don't remember seeing it.
Jim Bosch added a comment -
On DM-1161 I've removed SdssShapeImpl entirely, and the function that computes the moments that used to modify an SdssShapeImpl now just modifies an SdssShapeResult, so it can set the final flags there directly. I didn't go further than that and fold all of the old routines into SdssShapeAlgorithm methods, because I thought it was more important to keep the real algorithmic code intact.
I think it probably makes sense to follow a similar path in SdssCentroid, though there you don't have a full Result object to fill (and I don't think you need one); I do think passing the FlagHandler and/or the record directly to those anonymous routines would be fine. If the anonymous routines don't need to know about all the failure modes, having them return a bool that determines whether to set a flag in the main routine would be fine too. And throwing a MeasurementError from within those routines would be fine too. I don't really have a preference between those options.
On custom error messages for MeasurementError: I think having custom messages would be fine, as long as it's understood that those messages can really only convey debug information. So if a MeasurementError is thrown with the same flag enum value in two different places, they better mean the same thing as far as the user is concerned.
Perry Gee added a comment -
I think I miscommunicated somewhere. I wasn't proposing to make the doMeasureCentroidImpl call a class method of the algorithm. It is fine to me that it is a hidden implementation detail. What I really wanted to know is if it could be intimately linked to the algorithm class, which to me is a consequence of having it take a flagHandler as input. That means we never plan to call it from outside the class.
Perry Gee added a comment -
I have pushed the simplest possible change to u/pgee/DM-1364, which is to throw a MeasurementError, with the debugging info attached to the normal doc for the specific error, from the underlying code in doMeasureCentroidImpl. I removed the generic BAD_DATA error, as it was not informative. Any unknown throws should be caught by the measurement framework, and expressed as a FAILURE.
I do not know how to reproduce the errors I added (NO_MAXIMUM, NO_2ND_DERIVATIVE, ALMOST_NO_2ND_DERIVATIVE) with test cases. But I did hardwire the code to check that the composition of the exception messages was being done correctly.
The anonymous doMeasureCentroidImpl routines in SdssCentroid.cc now take flagHandlers from SdssCentroidAlgorithm, which effectively means that they can only be called from SdssCentroidAlgorithm.
Perry Gee added a comment -
Please note the exchange with Jim which occurred on this ticket. I was trying to discover if it was desirable to create a pattern for separating the Impl and its error handling from the Algorithm code. I believe Jim is saying that the current structure of SdssCentroid is OK as is, and that the doMeasureCentroidImpl can accept a flagHandler as an input parameter.
I made the 3 errors thrown in this service routine into flags in the SdssCentroidAlgorithm FlagsDefinition, but also appended detailed values to the error message.
I also removed the try/catch in the measure() method, as well as the "BAD_DATA" flag, which seems uninformative.
Also note that only one of the templated doMeasureCentroidImpl methods is actually called here.
John Swinbank added a comment -
The code submitted here is fine, I think – I made one tiny suggestion on the pull request, but there's nothing substantive.
However, there are two other things to consider before you merge. One is that you re-write the commit message bearing in mind the advice on Confluence – the current message is too long for the initial line and also contains a misplaced apostrophe. We also don't normally mention the ticket number at the start of the message, although I don't think that's an important concern.
Secondly, I think it would be a good idea to think again about writing some tests. I note your comment above, but I don't see why writing some simple tests should be too difficult. To get you started, I hacked together a simple test of NO_2ND_DERIVATIVE:
def testFlagNo2ndDeriv(self):
    self.truth.defineCentroid("truth")
    centroid = self.truth[0].getCentroid()
    psfImage = self.calexp.getPsf().computeImage(centroid)
    # construct a box that won't fit the full PSF model
    bbox = psfImage.getBBox()
    bbox.grow(10)
    subImage = lsst.afw.image.ExposureF(self.calexp, bbox)
    subImage.getMaskedImage().getImage().getArray()[:] = 0
    subCat = self.measCat[:1]
    # we also need to install a smaller footprint, or NoiseReplacer complains
    # before we even get to measuring the centroid
    measRecord = subCat[0]
    newFootprint = lsst.afw.detection.Footprint(bbox)
    newFootprint.getPeaks().push_back(measRecord.getFootprint().getPeaks()[0])
    measRecord.setFootprint(newFootprint)
    # just measure the one object we've prepared for
    self.task.measure(subCat, subImage)
    self.assertTrue(measRecord.get("base_SdssCentroid_flag"))
    self.assertTrue(measRecord.get("base_SdssCentroid_flag_no_2nd_derivative"))
Note that "hacked" is really the appropriate word here – I just grabbed the extant testEdge() and butchered that into something that does the right thing; you should definitely clean it up before committing it. Doing something similar for the other flags shouldn't be too hard.
Perry Gee added a comment -
This is a perfectly reasonable request. Since I didn't really understand the reason for the almost_no_2nd_derivative error, I thought it might be better for someone who was actually writing this algorithm to come up with test cases. But I certainly can make an artificial test which produces the error by just looking at the failure case.
Jim Bosch added a comment -
While resolving conflicts between this and DM-1161 during a rebase, I noticed that the test for almost_no_2nd_derivative wasn't actually doing what it claimed to be doing; the test for that particular flag had been commented out, and the main failure flag was only set because the "edge" condition was being hit.
After a lot of fiddling with the test code and trying to reproduce that error condition, I've basically given up, and just removed that test in a commit on DM-1161. Unless Perry Gee actually has a prescription in hand for triggering it (and it just got left off this issue by mistake), I'm content to leave it that way.
I also went ahead and renamed the new flags on DM-1161 after resolving the conflicts and getting the rest of the test code working again with my changes to the test utility code. The schema naming conventions use underscores as a "group" separator, with camelCase used as a word separator - so it should be "base_SdssCentroid_flag_notAtMaximum" instead of "base_SdssCentroid_flag_not_at_maximum".
Robyn Allsman [X] (Inactive) added a comment -
I would like to fix this. Is it on master or still on a ticket branch?
Jim Bosch added a comment -
It's on master now; sorry. Probably best to just open a new ticket.
#### People
Assignee:
Perry Gee
Reporter:
Jim Bosch
Reviewers:
John Swinbank
Watchers:
Jim Bosch, John Swinbank, Perry Gee, Robyn Allsman [X] (Inactive)
https://gmatclub.com/forum/if-y-is-a-positive-integer-is-y-3-5-2-4-an-integer-252486.html
# If y is a positive integer, is (y^3 + 5)^2/4 an integer?
Senior Manager
Joined: 22 Nov 2016
Posts: 251
Location: United States
GPA: 3.4
30 Oct 2017, 08:20
If y is a positive integer, is $$\frac{(y^3 + 5)^2}{4}$$ an integer?
1) The square root of y has three prime factors.
2) Each prime factor of $$y^3$$ is greater than 5.
_________________
Kudosity killed the cat but your kudos can save it.
Math Expert
Joined: 02 Aug 2009
30 Oct 2017, 09:04
sasyaharry wrote:
If y is a positive integer, is $$\frac{(y^3 + 5)^2}{4}$$ an integer?
1) The square root of y has three prime factors.
2) Each prime factor of $$y^3$$ is greater than 5.
Hi..
$$\frac{(y^3 + 5)^2}{4}$$ will be an integer if y is ODD, as $$y^3+5$$ will then be EVEN and its SQUARE will be divisible by 4.
Let's look at the statements:
1) The square root of y has three prime factors.
If one prime factor is 2, the answer is NO, as $$y^3+5$$ will be ODD.
If all 3 prime factors are ODD, the answer is YES.
Insufficient.
2) Each prime factor of $$y^3$$ is greater than 5.
This means all prime factors are ODD, so $$y^3+5$$ will be EVEN.
The answer is YES.
Sufficient.
B
_________________
Absolute modulus :http://gmatclub.com/forum/absolute-modulus-a-better-understanding-210849.html#p1622372
Combination of similar and dissimilar things : http://gmatclub.com/forum/topic215915.html
Senior Manager
Joined: 22 Nov 2016
31 Oct 2017, 09:28
Some folks might find this helpful.
Suppose $$\frac{(y^3 + 5)^2}{4}$$ is some integer, say K. Then
$$(y^3 + 5)^2 = 4K$$; the RHS is EVEN, since 4K is a multiple of 2.
So $$(y^3 + 5)^2$$ is EVEN.
$$(y^3 + 5)^2$$ can be EVEN only if $$(y^3 + 5)$$ is EVEN (and the square of an even number is in fact divisible by 4).
$$(y^3 + 5)$$ can be EVEN only if $$y^3$$ is ODD.
$$y^3$$ is ODD only if y is ODD.
Hence the question boils down to: is y an odd integer?
Statement 1: Not sufficient, for the reasons described in the post above.
Statement 2: Each prime factor of $$y^3$$ is ODD.
A handy rule to remember is that $$N$$ and $$N^x$$ have the same prime factors.
Hence, if $$y^3$$ has only odd prime factors, $$y$$ also has only odd prime factors.
Since the only even prime is 2, the number y is odd. Sufficient.
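The parity argument above is easy to verify by brute force (a quick sketch, not part of the original solution):

```python
# (y**3 + 5)**2 / 4 is an integer exactly when y is odd:
# y odd  -> y**3 + 5 is even -> its square is divisible by 4
# y even -> y**3 + 5 is odd  -> its square is odd
for y in range(1, 101):
    is_integer = (y ** 3 + 5) ** 2 % 4 == 0
    assert is_integer == (y % 2 == 1)
print("checked y = 1..100")
```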
https://finesse.readthedocs.io/en/latest/developer/get_started/setup_fork.html
Before reading this you should read steps 1 - 4 in Contributing to the project. After these steps you will be ready to complete the following process:
1. After cloning, change directory to your local copy of your fork of the Finesse 3 repository:
cd finesse3
2. Now you want to link your repository to the upstream repository (the main Finesse 3 repo), so that you can fetch changes from trunk. To do this, run:
git remote add upstream git://git.ligo.org/finesse/finesse3.git
upstream here is an arbitrary name we use to refer to the main Finesse 3 repository. Note the use of git:// for the URL - this is a read-only URL which means that you cannot accidentally write to the upstream repository. You can only use it to merge into your fork.
3. Check that your remotes are set up correctly with:
git remote -v show
This should give you something similar to:
upstream git://git.ligo.org/finesse/finesse.git (fetch)
upstream git://git.ligo.org/finesse/finesse.git (push)
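These steps can be rehearsed safely in a scratch repository before touching your real clone (the `/tmp` path below is illustrative; in the real workflow you run the `git remote` commands inside your finesse3 clone):

```shell
# Throwaway demonstration of steps 2-3. The /tmp path is illustrative only.
set -e
rm -rf /tmp/finesse-demo && mkdir -p /tmp/finesse-demo && cd /tmp/finesse-demo
git init -q
git remote add upstream git://git.ligo.org/finesse/finesse3.git
git remote -v   # prints the upstream fetch and push URLs
```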
https://www.semanticscholar.org/paper/Occupation-Time-of-a-Randomly-Accelerated-Particle-Burkhardt/3cc891a41b16ab0c686af60b2a64fd1946c90b5d
|
# Occupation Time of a Randomly Accelerated Particle on the Positive Half Axis: Results for the First Five Moments
@article{Burkhardt2017OccupationTO,
title={Occupation Time of a Randomly Accelerated Particle on the Positive Half Axis: Results for the First Five Moments},
author={Theodore W. Burkhardt},
journal={Journal of Statistical Physics},
year={2017},
volume={169},
pages={730-743}
}
• T. Burkhardt
• Published 4 August 2017
• Mathematics
• Journal of Statistical Physics
In the random acceleration process a point particle is accelerated by Gaussian white noise with zero mean. Although several fundamental statistical properties of the motion have been analyzed in detail, the statistics of occupation times is still not well understood. We consider the occupation or residence time $T_+$ on the positive x axis of a particle which is randomly accelerated on the unbounded x axis for a time t. The first two moments of $T_+$ were recently derived by Ouandji…
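A quick Monte Carlo sketch of the process (a discretisation written here for illustration, not taken from the paper) shows the symmetry consequence that the mean occupation fraction $T_+/t$ is 1/2, since the process started from $x = v = 0$ is invariant under $x \to -x$:

```python
import random

def occupation_fraction(steps=2000, dt=1e-3):
    """Fraction of time x(t) > 0 for one discretised trajectory of x'' = eta(t)."""
    x = v = 0.0
    positive = 0
    for _ in range(steps):
        v += random.gauss(0.0, 1.0) * dt ** 0.5  # integrate the white noise
        x += v * dt                              # integrate the velocity
        positive += x > 0
    return positive / steps

# Average over many trajectories: the estimate clusters around 1/2,
# even though individual trajectories spend very unequal times on each side.
est = sum(occupation_fraction() for _ in range(200)) / 200
print(round(est, 2))
```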
4 Citations
• Prashant Singh
• Mathematics
Journal of Physics A: Mathematical and Theoretical
• 2020
We consider the motion of a randomly accelerated particle in one dimension under stochastic resetting mechanism. Denoting the position and velocity by x and v respectively, we consider two different
• Physics
Physical review. E
• 2020
The residence time problem for an arbitrary Markovian process describing nonlinear systems without a steady state and the noise-enhanced stability phenomenon is observed in the system investigated.
• Mathematics
Journal of Statistical Mechanics: Theory and Experiment
• 2022
A semi-Markov process is one that changes states in accordance with a Markov chain but takes a random amount of time between changes. We consider the generalisation to semi-Markov processes of the
• Mathematics
Journal of Statistical Physics
• 2021
We address the theory of records for integrated random walks with finite variance. The long-time continuum limit of these walks is a non-Markov process known as the random acceleration process or the
## References
In the random acceleration process, a point particle is accelerated according to $\ddot{x}=\eta(t)$, where the right hand side represents Gaussian white noise with zero mean. We begin with the case
• Mathematics, Physics
• 2016
The random acceleration model is one of the simplest non-Markovian stochastic systems and has been widely studied in connection with applications in physics and mathematics. However, the occupation
• Mathematics
• 2001
We investigate the distribution of the time spent by a random walker to the right of a boundary moving with constant velocity v. For the continuous-time problem (Brownian motion), we provide a simple
Consider a randomly accelerated particle moving on the half-line x>0 with a boundary condition at x = 0 that respects the scale invariance of the equations of motion under x→λ3x, v→λv, t→λ2t. If the
• Mathematics
Physical review letters
• 1995
The aim in this Letter is to present the exact analytical solution to the mean exit time out of an interval for the displacement of an undamped free particle under the influence of a random acceleration.
We study the one-dimensional Burgers equation in the inviscid limit for Brownian initial velocity (i.e. the initial velocity is a two-sided Brownian motion that starts from the origin x=0). We obtain
• Mathematics
• 2010
We study the random acceleration model, which is perhaps one of the simplest, yet nontrivial, non-Markov stochastic processes, and is key to many applications. For this non-Markov process, we present
• Mathematics
• 1985
The authors obtain, for a Brownian particle in a uniform force field, the mean and asymptotic first-passage times as functions of the particle's initial position and velocity, with the recurrence
• Mathematics
• 2000
We consider a particle which is randomly accelerated by Gaussian white noise on the line 0<x<1, with absorbing boundaries at x = 0,1. Denoting the initial position and velocity of the particle by x0
• Mathematics
• 2013
In this review, we discuss the persistence and the related first-passage properties in extended many-body nonequilibrium systems. Starting with simple systems with one or few degrees of freedom, such
http://math.stackexchange.com/questions/197799/what-are-the-limit-points-of-tan-mathbbn-in-mathbbr
|
# What are the limit points of $\tan(\mathbb{N})$ in $\mathbb{R}$?
I was working on an old worksheet problem here. It asks
Let $S=\{\tan(k):k=1,2,\dots\}$. Find the set of limit points of $S$ on the real line.
The answer is $(-\infty,\infty)$. Intuitively I feel that if we keep evaluating tangent at positive integer points, they will be so scattered over the real line that we could always construct some subsequence converging to any real number. How can this be made rigorous to get this purported conclusion? Thanks.
I don't have the energy to type out a full answer, but I'd start by thinking "what would it mean if the conclusion was false?" and then trying to show why that doesn't happen (roughly speaking, trying to show that there aren't any points that are "missed") – Ben Millwood Sep 17 '12 at 0:29
For each real number $x$ there is a unique integer $n_x$ such that $n_x\pi\le x<(n_x+1)\pi$; let $\hat x=x-n_x\pi\in[0,\pi)$, and observe that $\hat x$ is the unique element of $[0,\pi)$ such that $\tan\hat x=\tan x$. Thus, $\{\tan k:k\in\Bbb N\}=\{\tan\hat k:k\in\Bbb N\}$. Let $D=\{\hat k:k\in\Bbb N\}$. It suffices to show that $D$ is dense in $[0,\pi)$: the tangent function is continuous and maps $[0,\pi)$ onto $\Bbb R$, so $\tan[D]=\{\tan\hat k:k\in\Bbb N\}$ must then be dense in $\Bbb R$.
Note that for any $x,y\in\Bbb R$, $\hat x=\hat y$ iff $\frac{x}{\pi}-\frac{y}{\pi}\in\Bbb Z$. Thus, instead of showing that $D$ is dense in $[0,\pi)$, we can scale everything by a factor of $1/\pi$ and show that $D_0=\{\hat k/\pi:k\in\Bbb N\}$ is dense in $[0,1)$.
This is a nice application of the pigeonhole principle. Let $n$ be a positive integer, and divide $[0,1)$ into the $n$ subintervals $\left[\frac{k}n,\frac{k+1}n\right)$ for $k=0,\dots,n-1$. Two of the $n+1$ numbers $\frac{\hat k}{\pi}$ for $k=0,\dots,n$ must belong to the same one of these subintervals; say $$\frac{\hat k}{\pi},\frac{\hat\ell}{\pi}\in\left[\frac{i}n,\frac{i+1}n\right)\;,$$ where $0\le k<\ell\le n$ and $0\le i<n$. Then $$0<\left|\frac{\hat\ell}{\pi}-\frac{\hat k}{\pi}\right|<\frac1n\;.$$ Let $m=\ell-k$; then $\dfrac{\hat m}{\pi}\in\left[0,\dfrac1n\right)$ if $\hat\ell-\hat k>0$, and $\dfrac{\hat m}{\pi}\in\left[1-\dfrac1n,1\right)$ if $\hat\ell-\hat k<0$.
In the first case let $N$ be the smallest positive integer such that $\dfrac{N\hat m}{\pi}>1$, and in the second let $N$ be the smallest positive integer such that $N\left(1-\dfrac{\hat m}{\pi}\right)>1$. Then every point of $[0,1)$ is within $1/n$ of one of the multiples $\dfrac{\widehat{jm}}{\pi}$ for $j=1,\dots,N-1$. Thus, every $x\in[0,1)$ is within $1/n$ of some element of $D_0$, and since $n$ was arbitrary, $D_0$ is dense in $[0,1)$.
Thanks for such a clear answer! – Nastassja Sep 17 '12 at 1:54
Very clear. Thanks. – Bombyx mori Sep 17 '12 at 3:50
Why does each of the $n$ subintervals contain exactly one of the multiples $\dfrac{\widehat{jm}}{\pi}$ for $j=1,\dots,n$? I think this would only be the case if $\dfrac{\hat m}{\pi}\in\left[\dfrac{n-1}n\dfrac1n,\dfrac1n\right)$. Generally, you do get multiples in each of the subintervals (since $m$ is too small to skip one of the subintervals), but you need to take higher multiples than $j=n$, no? – joriki Sep 19 '12 at 9:22
@joriki: You’re right: I tried to make it simpler than it actually is. – Brian M. Scott Sep 19 '12 at 9:33
For convenience's sake we use $2\pi$ instead of $\pi$. You need to prove that the images of $a \mapsto a \pmod{2\pi}$, $a\in \mathbb{N}$, are equidistributed. This can be done by Weyl's criterion, modified to work $\pmod{2\pi}$ instead of $\pmod{1}$. We need to prove that $$\lim_{n\rightarrow \infty}\frac{1}{n}\sum^{n-1}_{0}e^{2ilx_{j}}=0, \forall l\in \mathbb{Z}\setminus\{0\}$$ with $x_{j}$ the image of $j$ under the quotient map. The summation is a geometric series with terms $e^{2il}, e^{2i2l},e^{2i3l}$, etc. So we have $$\sum^{n-1}_{0}e^{2ilx_{j}}=\frac{1-e^{2iln}}{1-e^{2il}}$$ which can be bounded by $$\left|\frac{1-e^{2iln}}{1-e^{2il}}\right|\le \frac{2}{|1-e^{2il}|}$$ which is finite since $l$ is fixed. Therefore the limit $$\lim_{n\rightarrow \infty}\frac{1}{n}\sum^{n-1}_{0}e^{2ilx_{j}}=0, \forall l\in \mathbb{Z}\setminus\{0\}$$ must be 0, and the original sequence is equidistributed.
+1. Note that You need to prove the images... are a dense set (equidistribution is a bonus here). – Did Sep 17 '12 at 6:09
It's not clear to me what you're referring to as "the limit". It seems that the first "the limit" is meant to refer to the limit of the part without $1/n$? In that case, it's wrong, since that part doesn't converge by itself; what you wrote is the analytic continuation of the limit to the unit circle (and the numerator should be $1$ because the sum starts at $j=0$). You need to use the formula for the partial sums of a geometric series and bound its absolute value to show that the limit of the full expression including $1/n$ is $0$. – joriki Sep 17 '12 at 8:22
Updated. Thanks for pointing out. – Bombyx mori Sep 17 '12 at 17:37
The function $\tan(x)$ is continuous and $2\pi$-periodic. Suppose I want to find a sequence that converges to the real number $y$. Let $y = \tan(x)$ for some real number $x$. Well, we know that the multiples of an irrational number are equidistributed modulo $1$. Thus, we pick some natural number $N_1$ so that $x + 2N_1\pi$ is really close to $0$ modulo $1$. This means that $x + 2N_1\pi$ is really close to some integer, say $M_1$. Then $\tan(x+2N_1\pi) = \tan(x) = y$ is very close to $\tan(M_1)$ by continuity, so $\tan(M_1)$ is really close to $y$. To get the next integer $M_2$, pick $N_2$ so that $x + 2N_2\pi$ is even closer to $0$ modulo $1$, which is to say that $M_2$ is even closer to $x + 2N_2\pi$ than $M_1$ was to $x + 2N_1\pi$. By continuity, this means the approximation $\tan(M_2)$ should be even closer to $y$ than $\tan(M_1)$. Repeat.
Incidentally, $\tan(x)$ is $\pi$-periodic. – Sasha Sep 17 '12 at 1:36
@Sasha: yeah, I guess my proof cause a misleading effect. – Bombyx mori Sep 17 '12 at 3:19
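As a numerical illustration of the density argument above (a sanity check, not part of any proof), the largest gap between the points $k \bmod \pi$ shrinks as more integers are taken, so $\tan k$ eventually enters every interval:

```python
import math

def max_gap(N):
    """Largest gap between consecutive points of {k mod pi : 1 <= k <= N}."""
    pts = sorted(k % math.pi for k in range(1, N + 1))
    gaps = [b - a for a, b in zip(pts, pts[1:])]
    gaps.append(pts[0] + math.pi - pts[-1])  # wrap-around gap on the circle
    return max(gaps)

# The maximal gap shrinks toward 0 as N grows, which is the density of D.
print(max_gap(100), max_gap(10000))
```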
https://puzzling.stackexchange.com/questions/40460/an-only-connect-wall
|
An “Only Connect” Wall
When I saw this post I became really excited to do one, so here is a connecting wall.
The 16 words below can be sorted into four connected groups of 4. The goal is to find the connection within each group.
Here is a text version
+--------------+--------------+--------------+--------------+
| Sweet | Face | Chatting | Don Juan |
+--------------+--------------+--------------+--------------+
| Instant | Win | Gold | Glory |
+--------------+--------------+--------------+--------------+
| Bald | Amorist | Conquest | Predator |
+--------------+--------------+--------------+--------------+
| America | Romeo | Exult | Space |
+--------------+--------------+--------------+--------------+
Note: I have no specific topic because when I watched a video of the show on BBC, all four topics were completely unrelated. This also may be too easy for you guys because I myself made up the connections.
• I'm sure every on would have gotten it, so +1 to everyone! – Jason_ Aug 11 '16 at 7:51
• Yep, on the show all the topics are unrelated. But they usually have things that could go into multiple categories, so you have to get all four of the actual categories before you can place things. – Deusovi Aug 11 '16 at 17:51
Group 1:
American Gold Bald Predator (eagles)
Group 2:
(My) Space Face (book) Instant (messaging) Chatting.
Credit goes to lois6b and AJ but a small change is in place:
Group 3:
Romeo (and Juliet), Sweet (love), Don Juan (the great lover), Amorist (synonym for lover)
Group 4:
Win, Conquest, Glory, Exult - winning (and enjoying it)
• Correct! one down. 2nd down. – Jason_ Aug 11 '16 at 7:38
• And we have a winner! – Jason_ Aug 11 '16 at 7:50
• Could you edit this answer to add in a detailed explanation as to why you think it is correct, citing the information in the question? Thanks! – user20 Aug 11 '16 at 16:04
• @Emrakul What's wrong with the present answers? I've added the missing words so that is clear what I mean. Stating the obvious isn't much help. – rhsquared Aug 11 '16 at 17:07
• @Radoslav As I've said before, answers without explanations may be removed. Thank for your edits, though! – user20 Aug 11 '16 at 17:45
I have two group:
Group 1:
Romeo, Amorist, Sweet, Don Juan. Romeo-Juliet a popular love story. Don Juan refers to a captivating man known as a great lover of women. Amorist refers to a person who is in love or who writes about love. And sweet love.
Group 2:
Conquest, Win, Glory, Exult. These are somehow related to the feeling of triumph.
• Both of these are correct! – Jason_ Aug 11 '16 at 7:49
• Could you edit this answer to add in a detailed explanation as to why you think it is correct, citing the information in the question? Thanks! – user20 Aug 11 '16 at 16:04
• @Emrakul added explanation. thanks. – A J Aug 11 '16 at 17:15
• @Emrakul Your comment is obsolete now. – A J Sep 12 '16 at 13:53
I think I have two groups: [EDITED]
Group 1 :
Romeo, Conquest, Don Juan, Amorist -> Love or relations related (Romeo and Juliet, Don Juan the flirter .. )
Group 2:
Win, gold sweet , Glory, Exult -> Winning (sweet victory, gold first place..)
• I will not lie I threw in ones that would fit in two, but if you find the others you will see that they belong elsewhere, almost got the first one – Jason_ Aug 11 '16 at 7:38
• gold i see now is not in the right place.as Radoslav said – lois6b Aug 11 '16 at 7:39
• @lois6b you can replace gold with sweet (success). – rhsquared Aug 11 '16 at 7:41
• @RadoslavHristov could be.. sweet victory – lois6b Aug 11 '16 at 7:44
• nope not sweet. – Jason_ Aug 11 '16 at 7:49
http://mathhelpforum.com/pre-calculus/92670-finding-tangent-line.html
|
# Thread: Finding a tangent line
1. ## Finding a tangent line
What would be the equations of the tangent lines to the graph at the following info:
$\displaystyle 4xy+x^2=5$ at the point $\displaystyle (1,1)$ and $\displaystyle (5,-1)$
If you can show me a step by step solution that would be easiest for me to follow.
2. Originally Posted by AMaccy
What would be the equations of the tangent lines to the graph at the following info:
$\displaystyle 4xy+x^2=5$ at the point $\displaystyle (1,1)$ and $\displaystyle (5,-1)$
If you can show me a step by step solution that would be easiest for me to follow.
Use "implicit differentiation" to find y': $\displaystyle 4y+ 4xy'+ 2x= 0$. $\displaystyle 4xy'= -4y- 2x$ and $\displaystyle y'= \frac{-4y-2x}{4x}$
At (1, 1), $\displaystyle y'= \frac{-4(1)- 2(1)}{4(1)}= \frac{-6}{4}= -\frac{3}{2}$. The tangent line is $\displaystyle y= -\frac{3}{2}(x- 1)+ 1$.
At (5, -1), $\displaystyle y'= \frac{-4(-1)- 2(5)}{4(5)}= \frac{4- 10}{20}= \frac{-6}{20}= -\frac{3}{10}$. The tangent line is $\displaystyle y= -\frac{3}{10}(x- 5)- 1$.
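The slopes above can be double-checked with a computer algebra system. Here is a short sympy sketch using the equivalent formula $y' = -F_x/F_y$ for $F(x,y) = 4xy + x^2 - 5$:

```python
import sympy as sp

x, y = sp.symbols('x y')
F = 4*x*y + x**2 - 5                   # the curve is F(x, y) = 0
dydx = -sp.diff(F, x) / sp.diff(F, y)  # implicit differentiation: y' = -F_x / F_y

print(dydx.subs({x: 1, y: 1}))    # -3/2
print(dydx.subs({x: 5, y: -1}))   # -3/10
```

Both values match the slopes obtained by differentiating term by term, so the tangent-line equations follow from point-slope form.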
http://math.stackexchange.com/questions/198181/show-that-omega-di-omega-if-d-omega-0
|
# Show that $\omega = d(I\omega)$ if $d\omega = 0$
Let $\omega = P\ dx + Q\ dy$ be a 1-form on $\mathbb{R}^2$. Also, define a 0-form $I\omega({\bf x}) = I\omega(x, y)$ by
$$I\omega({\bf x}) = \int_0^1 P(t {\bf x}) x + Q(t {\bf x}) y\ dt.$$
I would like to show that $\omega = d(I\omega)$, provided that $d\omega = 0$ (here, $d$ is the exterior derivative).
Here is my approach: $I\omega$ is a 0-form, therefore we get
$$d(I\omega) = D_1(I\omega) dx + D_2(I\omega) dy = \frac{\partial (I\omega)}{\partial x} dx + \frac{\partial (I\omega)}{\partial y} dy.$$
Since this is supposed to equal $\omega = P\ dx + Q\ dy$, it seems that it requires $P = D_1(I\omega)$ and $Q = D_2(I\omega)$. Also, since $d\omega = 0$ implies $D_1Q = D_2P$, we get
$$\begin{eqnarray} D_1(I\omega)({\bf x}) & = & \int_0^1 \frac{\partial}{\partial x} \left(P(t {\bf x}) x + Q(t {\bf x}) y\right) dt \\ & = & \int_0^1 D_1P(t{\bf x}) t x + P(t{\bf x}) + D_1Q(t{\bf x}) t y\ dt \\ & = & \int_0^1 D_1P(t{\bf x}) t x + P(t{\bf x}) + D_2P(t{\bf x}) t y\ dt \end{eqnarray}$$
Now, this is where I got stuck, because I fail to see how this expression should be equal to $P$?
$(x,y)$ is fixed; define $g(t):=P(tx,ty)$. Then $$\frac d{dt}g(t)=xD_1P(tx,ty)+yD_2P(tx,ty),$$ so $$D_1(I\omega)(x,y)=\int_0^1tg'(t)+g(t)dt=[tg(t)]_0^1-\int_0^1g(t)dt+\int_0^1g(t)dt=g(1).$$
Davide, are you sure this is right? I feel you might missed a factor. – Bombyx mori Sep 17 '12 at 21:11
I think it's right, but I'm not sure. Which factor do you think is missing? – Davide Giraudo Sep 18 '12 at 8:02
Very nice, thanks. – koletenbert Sep 19 '12 at 20:33
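For a concrete sanity check of the identity (an example chosen here, with $P = 2xy$ and $Q = x^2$, so that $d\omega = 0$ holds), sympy reproduces $d(I\omega) = \omega$:

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
# A concrete closed 1-form omega = P dx + Q dy (check: D1 Q = D2 P = 2x):
P = 2*x*y
Q = x**2

# I omega = integral_0^1 [ P(t x, t y) x + Q(t x, t y) y ] dt
Iw = sp.integrate(P.subs({x: t*x, y: t*y}) * x
                  + Q.subs({x: t*x, y: t*y}) * y, (t, 0, 1))

print(sp.simplify(sp.diff(Iw, x) - P))  # 0, i.e. D1(I omega) = P
print(sp.simplify(sp.diff(Iw, y) - Q))  # 0, i.e. D2(I omega) = Q
```

Here $I\omega = x^2 y$, whose exterior derivative is exactly $2xy\,dx + x^2\,dy = \omega$.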
https://www.physicsforums.com/threads/which-would-you-recomend.59886/
|
# Which would you recommend?
## Which would be better?
4 vote(s)
26.7%
2. ### Go directly to college and major in physics?
11 vote(s)
73.3%
1. Jan 15, 2005
### alex caps
I am a junior in high school and am starting to search for schools to go to. I know eventually I want to major in physics, or at least that's what I am thinking. Would it be better for me to go to an undergrad school for 2 years and get a bunch of different sciences and maths learned, then transfer to another school and major in physics, or should I try and go directly to a school to major in physics? I am wondering because it will help me decide which colleges to look for, thanks.
2. Jan 16, 2005
### ktpr2
if both cases end with "major in physics" then take the quickest route to achieving your goal. In this case it's "go directly to a school to major in physics."
3. Jan 16, 2005
### Kelvin
In Hong Kong, there's no choice for me.
But if I can choose, I still prefer "Go directly to college and major in physics"
4. Jan 16, 2005
### HallsofIvy
Staff Emeritus
While there might be special reasons for going to one school for two years and then transferring to another, generally it is far better to stay in the same school for your entire undergraduate degree. Again barring special reasons, I would recommend choosing the best college you can get into and completing all four years there.
5. Jan 16, 2005
### ktpr2
Ah, I suppose the implication was transfer to a better school, not just any school for some other reason. If you can't get into the school of your dreams, then yes, attempt to transfer after your sophomore year. One thing many students overlook is the quality of job fairs in the area and the career service support from the school.
6. Jan 17, 2005
### DaVinci
A major is an undergrad. I think you mean get an AA from a Community College first and then go to the university for your last two years. Is this correct?
I don't think it matters. Personally, I chose to go to a community college to get my AA first. The reason is simple: I took all of my calculus and physics at the CC, where my max class size was 30 students. When I took honors calc, I had 8 students in the class. At the university, calculus and physics courses are in stadium seating with anywhere from 150 to 300 students.
So, if you do not like teacher/student interaction and think you can teach yourself calculus and physics, just go to the university. If you would rather have smaller class sizes, more interaction with your professor, and more help available, go to a CC.
To top it off, each semester at a state university costs me around $2500. Each semester at the CC cost me about $700.
So, not only did I have smaller classes and personally feel that I learned more, I also saved $1800 a semester. The only drawback to that method is there are a whole lot more immature idiots at a community college that aren't really serious about school that you have to deal with. You may not have a problem with that, but after being in the Marines... my tolerance for immaturity is zero. :tongue2:
7. Jan 17, 2005
### franznietzsche
Not necessarily true; all my undergrad classes are 30 people (I'm a physics major). My largest class is 130, but that's anthropology. None of my physics classes will ever be more than 30. Same with calculus. It depends on which university. My costs are actually higher than that, but I'm living on campus: it's costing me about $15,000 a year, total. My tuition is only $1300 a quarter though (less than $5000 a year); books and housing make up most of the rest.
If you can afford the university right away, and there are no other factors pushing you towards the CC, go straight to the university. It's better to be at the same place for four years: you make connections with faculty, work on research, etc. All stuff that's important for getting in to graduate school.
https://cs-people.bu.edu/kaptchuk/pages/mpcfaq.html
|
Secure Multiparty Computation FAQ for Non-Experts
Welcome to our Secure Multiparty Computation (MPC) FAQ aimed at non-experts! This is a living document that we have compiled to support ongoing deployments of MPC analyzing the sensitive data of non-experts (eg. collaborations between Boston University and BWWC and Boston University and MMF). As such, we include only a brief overview of the mathematical techniques used in MPC systems. If you are a student or researcher who wants to learn about MPC, Pragmatic MPC might be a good place to start!
Big thank you to Mayank Varia who helped compile this FAQ. If you want to suggest any modifications or additional questions, please send a message to kaptchuk@bu.edu!
-- Gabe Kaptchuk
What is MPC?
Secure Multiparty Computation is a set of computational techniques that allows computing on data without allowing a data analyst to actually see that data. Concretely, this means that organizations can learn insights (eg. aggregate statistics) about data that is considered too private or sensitive to be shared. To achieve this seeming paradox, MPC distributes the trust typically placed in a single data analyst among multiple organizations/people, such that the privacy of the data is maintained as long as any one of the organizations/people behaves honestly.
What problems does MPC solve?
In cases where the data is considered to be private—either due to social consensus or legal confidentiality agreements—typical data analysis techniques (see below for more details) are unworkable. Data analysts with access to raw data can unacceptably invade the privacy of individuals with information contained in the data set, and a malicious data analyst could share data inappropriately. Moreover, it may be difficult to find an organization or data analyst willing to take on the responsibility (and potential liability) of safeguarding the sensitive data.
Using MPC techniques, the same reports can be created without designating a trusted party to perform the data analysis. All parties that participate in the MPC process see nothing about the data besides the output of the computation (ie. the report). This alleviates the responsibility (legal or otherwise) that comes with being a data analyst and increases the confidence of the individuals with information contained in the data that their data will remain private.
How does the MPC workflow differ from typical data analysis?
Typical data analysis requires trusting a single organization or analyst with the ability to access and interrogate the raw data. It generally follows this template:
1. Compile all the relevant data onto one computer (eg. surveying respondents or sharing access to previously gathered data)
2. The data analyst iteratively explores and analyzes the data
3. The data analyst produces a report containing insights about the data that is shared for public consumption
The most significant difference with MPC is that the data analyst is replaced with a pre-selected set of computational parties and that the raw data is never compiled in the clear. Concretely, the typical MPC workflow is as follows:
1. An organizer recruits a set of computational parties who agree to participate in the MPC protocol
2. The organizer and computational parties agree on a set of analyses that will be carried out on the data
3. Individuals (or external organizations) that hold data points submit their data to the computational parties encoded in a special format. This special format ensures that individual computational parties learn nothing about the data points
4. After the encoded data has been collected, the computational parties run the MPC protocol to compute the set of analyses agreed upon in (2). At the end of this protocol, the organizer learns exactly the output of these analyses (and nothing else about the data)
5. The organizer compiles these results into a report which is then circulated for public consumption (as appropriate)
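The “special format” in step (3) is, in many MPC systems, a secret sharing of each data point. The following Python sketch illustrates one simple variant, additive secret sharing over a prime modulus; the three-party setup and the income figures are invented for illustration, and real systems vary in their encodings:

```python
import random

PRIME = 2**31 - 1  # all arithmetic is done modulo a public prime

def share(secret, n_parties):
    """Split `secret` into n additive shares; any n-1 shares look uniformly random."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

# Step 3: each individual encodes their data point as three shares,
# sending one share to each computational party.
incomes = [50_000, 62_000, 47_500]
per_individual = [share(x, 3) for x in incomes]

# Step 4: each party sums the shares it holds (learning nothing on its own);
# only the combination of the three partial sums reveals the agreed-upon output.
party_sums = [sum(col) % PRIME for col in zip(*per_individual)]
print(reconstruct(party_sums))  # 159500
```

Note that each party only ever sees uniformly random-looking numbers; the true total appears only when the parties' partial results are combined at the end.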
What are the benefits of MPC?
MPC shifts the trust paradigm of data analysis from “Trust Me,” in which a single party assumes responsibility for the confidentiality of data, to “Trust Anyone,” in which the confidentiality of the data is maintained as long as any one of the computational parties acts honestly. This shift has a dramatic impact on the concrete privacy of the data:
1. Rogue data analysts cannot unilaterally release raw data publicly. While this worst-case scenario is rare in practice, the implications of such a release could be catastrophic.
2. If an external attacker wishes to see or steal the raw data, they must breach the information systems of multiple computational parties. As these computational parties will likely have differing security practices, this significantly raises the difficulty for attackers.
3. A data analyst will not be able to snoop through the raw data for data points that reveal personal information (eg. about someone that they know).
4. The data can only be used to perform computations that are mutually agreed upon by the computational parties. This means that anyone who contributes data has increased confidence that their data will only be used for the pre-specified purposes, effectively preventing mission creep and p-hacking.
Is MPC better than anonymization?
Anonymization, a technique in which names, addresses, and other directly identifying features are removed from a dataset, is the common approach for mitigating the privacy risks associated with data analysis. Unfortunately, there is significant documentation that anonymization (and its more powerful cousin, k-anonymity) simply doesn’t work. Latanya Sweeney famously identified Massachusetts Governor William Weld’s medical records in an anonymized dataset in 1997. As commercial databases have grown more powerful, reidentification attacks have only become easier.
MPC prevents the reidentification attacks that plague anonymization. Specifically, data analysts are not able to look at any individual records in the underlying data, and therefore cannot make inferences about any individual who contributes data.
What risks still remain when using MPC?
There are two main concerns stakeholders should have when using MPC: (1) the privacy implications of reports (eg. aggregate statistics or computed insights) produced from the data, and (2) whether the distributed trust assumptions made in the system seem reasonable.
1. Privacy Implications of Reports: Although MPC prevents data analysts from seeing the raw data, disclosure of private details is still possible if the output report is not chosen carefully. As a contrived example, consider an output report containing the average income of an organization with only one employee; although the function can be described as an aggregate statistic, the report directly reveals private information about an individual. Controlling the privacy implications of aggregate reports is complex, and formally reasoning about these privacy implications requires orthogonal privacy enhancing techniques like differential privacy.
2. Distributed Trust Assumptions: MPC is able to provide strong privacy guarantees by distributing trust among many computing parties (trusting anyone rather than being forced to trust someone). The privacy of the data can only be violated if all of these computing parties collude or are compromised by an attacker. Computational parties should, therefore, be selected with care to make this assumption realistic.
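Item (1) above mentions differential privacy as a way to reason formally about what a released aggregate can leak. As a hedged, self-contained sketch (the function names and clamping bounds here are our own, and real MPC deployments typically generate this noise inside the protocol rather than after it), here is the classic Laplace mechanism applied to an average:

```python
import math
import random

def laplace_noise(scale):
    # Inverse-CDF sampling of a Laplace(0, scale) random variate.
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def dp_average(values, lower, upper, epsilon):
    """Release the average of `values` with epsilon-differential privacy.

    Each value is clamped to [lower, upper], so changing any one person's
    value moves the average by at most (upper - lower) / n; noise scaled
    to that sensitivity masks each individual's contribution.
    """
    n = len(values)
    clamped = [min(max(v, lower), upper) for v in values]
    sensitivity = (upper - lower) / n
    return sum(clamped) / n + laplace_noise(sensitivity / epsilon)

noisy_avg = dp_average([50_000, 62_000, 47_500], 0, 200_000, epsilon=1.0)
```

Smaller values of epsilon give stronger privacy but noisier reports; choosing epsilon is a policy decision, not a purely technical one.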
Finally, we note that running MPC systems requires rendering intricate mathematics into software, which can result in imperfect implementations and security bugs. As such, it is important to ensure that the software is written carefully by professionals, following best practices for secure programming.
What can be computed using MPC?
In principle, MPC can be used to compute arbitrary functions of the input data, including all statistical tests. MPC is, however, slower than typical data analysis by orders of magnitude; computations that would usually take seconds will likely take minutes or hours. In its current state, most common statistical tests or simple aggregations are sufficiently computationally simple that MPC can be used efficiently. More computationally expensive tasks, like training complex machine learning models, remain active areas of MPC research and may be feasible in the near future.
Note that although the runtime of MPC may be slow compared to “normal” computation, most data analysis tasks that would use MPC can easily accommodate a few extra minutes (or hours) of computation time; the overhead of such computation is marginal relative to the length of the full project.
How difficult is it to use MPC?
MPC is mature enough as a technology to use today. However, the systems to deploy MPC are currently written in such a way that it requires support from experts in cryptography to use. It is an ongoing and active area of research to design MPC platforms that anyone can use.
Does MPC require special systems?
No. MPC can be run using typical computers and typical software systems. Current MPC deployments endeavor to meet users where they are: data can be submitted through a normal web browser and results can be viewed using familiar software programs like Excel. As such, no special systems are required to run MPC.
Who is using MPC now?
MPC has recently found adoption in a number of public-facing academic projects and industry efforts. Starting in 2016, Boston University and the Boston Women’s Workforce Council have used MPC to study the wage gap in Boston. DARPA has experimented with using MPC to prevent collisions between satellites run by nations wary of sharing satellite location data. Finally, there are a large number of early-stage startups using MPC to address a wide variety of issues, including improving companies’ cybersecurity and securing cryptocurrency assets; these companies have formed an industry group called the MPC Alliance.
What differences are there in the planning process when using MPC?
Preparing to analyze data with MPC requires preparation and a willingness to challenge the intuition that you might have built using typical data analysis techniques. We highlight several key differences below, but note that you might discover other differences, depending on your prior experiences.
1. Plan Analysis Ahead of Time: Although it is possible to compute any function of the data using MPC, not all functions of the data are equally fast or easy to compute. As such, understanding the planned scope of analysis is critical to choosing the specific MPC system. For developers creating an MPC system, it is much more important to know the types of analysis (eg. descriptive statistics, statistical tests, regressions, machine learning) than the exact data format.
2. Recruiting Computational Parties: MPC leverages computational parties to federate trust. Because computational parties are not a typical feature of a data analysis process, recruiting organizations/parties to serve as computational parties may be a foreign experience. Moreover, as MPC is not a well-known technique, recruiting computational parties may require significant communication efforts and education.
3. No Going Back: A common feature of MPC systems is unrecoverable decisions. For example, data may be encrypted under a cryptographic key; if this key is lost, the data can not be recovered. In other situations, passwords may be used to secure data, and if a password is forgotten there is no recovery process. As such, clear communication about the importance of key files and passwords is critical.
4. Data Exploration Weakens Privacy Guarantees: Typical data analysis approaches involve iterative, open-ended exploration of a data set before selecting the contents of the final report. In principle, similar exploration of the data is possible under MPC, as each step of the open-ended exploration can be computed in a privacy preserving way. Unfortunately, such a process substantially diminishes the meaningfulness of MPC’s privacy guarantees: the more aggregations of the data are released, the easier it becomes for a data analyst to make observations about the raw data. As such, we encourage the analysis plan to be designed and finalized before any computation has been executed.
5. Manual Data Cleaning is Difficult: Data cleaning and sanitization (ie. removing data that is improperly formatted) is a critical part of most data analysis efforts. Because MPC prevents data analysts from seeing the raw data, data cleaning is very difficult under MPC. As such, data should be cleaned before it is submitted to the MPC system. In the worst case, automated techniques for after-the-fact data cleaning (eg. outlier detection and removal) can be performed using MPC, but this is likely to be an error-prone process with a significant computational cost.
Does MPC guarantee participant anonymity?
No. By default, deployments of MPC do not aim to hide the identities of the parties that contribute data. If the identities of the contributing parties should be kept secret, methods that hide this information do exist and can be added. Please bring this up proactively with the designers of the MPC system if this property is required.
How does the math of MPC work?
There is no single way that MPC protocols work: the term “MPC” is a shorthand for a collection of techniques that accomplish a similar goal. Detailing the full set of techniques used to design MPC systems is beyond the scope of this FAQ, but we illustrate a simple MPC for privacy-preserving voting to show MPC’s basic tenets. There are many other security properties one might want from a voting system, but here we focus on preserving privacy:
Imagine the workers within a warehouse want to vote on unionization. In the case where the unionization effort fails, the owner of the warehouse may retaliate against the workers who voted to unionize. As such, no one (not even the union organizers) wants to learn how each worker voted, as the existence of such a record is inherently dangerous. The only output that should be learned is the final vote tally.
To compute the final vote tally in a privacy-preserving way, the workers use the following voting procedure:
• The workers start by standing in a large circle (facing the middle of the circle), each holding a blank slip of paper and a pencil
• The organizer begins by selecting a random number, which we will denote with $$r$$. This number could be any whole number greater than zero (eg. 3, 1694, 9327234, 43, 531, etc…). The organizer writes $$r$$ down on their slip of paper and privately passes it to their left.
• Whenever a worker receives a slip of paper (from their right), they write down a new number on their blank slip of paper and pass that new slip to their left. Let’s call the number on the slip of paper they received $$k$$. The number they write on the new slip of paper is determined as follows:
• In the case that the worker wants to vote for unionization, they add one to the received number and write the result on the new slip of paper. That is, they write the value $$k+1$$ on the new slip of paper.
• In the case that the worker wants to vote against unionization, they simply write the same number that they received on the new slip of paper. That is, they write the value $$k$$ on the new slip of paper.
• This process proceeds clockwise around the circle until the final slip of paper is passed to the organizer. Let’s call the number received by the organizer $$n$$. The organizer subtracts $$r$$ from $$n$$ and announces the result (ie. $$n-r$$). This is the number of workers who voted for unionization. To determine if the vote succeeded, everyone can check to see if $$n-r$$ is greater than half the number of workers who participated in the vote.
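The procedure above can be condensed into a few lines of Python (a single-process simulation; in the real protocol each addition happens on a separate worker's slip of paper, which is the whole point):

```python
import random

def union_vote(votes, max_r=10**6):
    """Simulate the paper-passing tally: True = a vote for unionization."""
    r = random.randint(1, max_r)  # the organizer's secret starting number
    slip = r
    for vote in votes:
        # Each worker sees only a masked running total, then either
        # adds one (a 'for' vote) or copies the number unchanged.
        slip += 1 if vote else 0
    # The organizer subtracts r from the final slip to unmask the tally.
    return slip - r

print(union_vote([True, False, True, True, False]))  # 3
```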
Why is this procedure privacy preserving? Notice that the number that each worker sees is totally random, because $$r$$ is selected randomly. Specifically, a worker knows nothing about how the workers before them voted. For example, let’s assume that the third worker receives the value 451. There are three possible scenarios:
1. The organizer randomly selected $$r=451$$ and the first two workers voted against unionization,
2. The organizer randomly selected $$r=450$$ and exactly one of the first two workers voted for unionization,
3. The organizer randomly selected $$r=449$$ and both prior workers voted for unionization.
Importantly, there is no way for the third worker to distinguish between these three scenarios! As such, we can conclude that the third worker learns nothing about the votes cast by the first two workers. The same arguments can be made about all of the workers.
The organizer sees the value $$n$$ at the end of the voting procedure. While the organizer can learn how many workers voted for unionization, they cannot determine how any individual worker voted. This is because they only see an aggregate number at the very end of the protocol, and the pro-unionization votes could have been added into the tally at any point around the circle.
As system designers compute more complex functions of the data (ie. more complicated than vote tallying), more complex techniques are required. If you are interested in learning more about the mathematics of MPC, there are many freely available courses and textbooks online, but their target audience is security researchers and MPC system designers.
Glossary
THIS IS A WORK IN PROGRESS
Throughout this FAQ, we use a number of terms to help explain MPC. These terms are not all standard in the technical literature. Below we provide a brief explainer of these terms. Additionally, we provide a glossary for technical terms commonly used in the technical MPC literature to help non-experts who have read multiple sources online.
We note that this is a living document, so this list may be incomplete. If you have any suggestions, just send an email to kaptchuk@bu.edu
• Computational Parties: The organizations or individuals that jointly execute the MPC protocol on encoded data, collectively replacing the single trusted data analyst of a typical workflow.
• Data Analyst: In typical (non-MPC) data analysis, the person or organization trusted with direct access to the raw data in order to explore it and produce reports.
FAQ template from here.
# Using proportions to solve percent problems
By | October 21, 2020
A percent problem can be written as a proportion: one number represents the part, the other represents the whole. A proportion is read as “x is to y as z is to w”: \frac{x}{y}=\frac{z}{w}. Proportions are used in problems involving changing numbers while keeping a ratio constant; to solve one, cross-multiply the terms to find the value of the unknown term. For example, solving 2/3 = 1.2/x by cross-multiplication gives 2x = 3.6, so x = 1.8. The same method handles similar-figure problems, such as finding the shadow length for a boy who is 3 feet tall and casts a shadow 7 feet long, and mixture problems: since 4 × 6 = 24, x = 6, so 6 liters should be mixed with 8 lemons.
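The cross-multiplication recipe can be captured in a small Python helper (the function name is ours, for illustration):

```python
def solve_proportion(a, b, c):
    """Solve a/b = c/x for the unknown x by cross-multiplication: a*x = b*c."""
    return b * c / a

# 2/3 = 1.2/x  ->  x = 1.8
print(round(solve_proportion(2, 3, 1.2), 4))  # 1.8

# Percent problems fit the same pattern: part/whole = percent/100.
# "What percent of 80 is 20?"  ->  80/20 = 100/p
print(solve_proportion(80, 20, 100))  # 25.0
```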
Game of Sloanes
Emily King recently launched an online competition to find the best packings of points in complex projective space. The so-called Game of Sloanes is concerned with packing $n$ points in $\mathbf{CP}^{d-1}$ for $d\in\{2,\ldots,7\}$ and for $n\in\{d+2,\ldots,49\}$. John Jasper, Emily King and I collaborated to make the baseline for this competition by curating various packings from the literature and then numerically optimizing sub-optimal packings. See our paper for more information:
J. Jasper, E. J. King, D. G. Mixon, Game of Sloanes: Best known packings in complex projective space
If you have a packing that improves upon the current leader board, you can submit your packing to the following email address:
asongofvectorsandangles [at] gmail [dot] com
In this competition, you can win money if you find a new packing that achieves equality in the Welch bound; see this paper for a survey of these so-called equiangular tight frames (ETFs).
Some news regarding the Paley graph
Let $\mathbb{F}_p$ denote the field with $p$ elements, and let $Q_p$ denote the multiplicative subgroup of quadratic residues. For each prime $p\equiv 1\bmod 4$, we consider the Paley graph $G_p$ with vertex set $\mathbb{F}_p$, where two vertices are adjacent whenever their difference resides in $Q_p$. For example, the following illustration from Wikipedia depicts $G_{13}$:
The purpose of this blog entry is to discuss recent observations regarding the Paley graph.
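For readers who want to experiment numerically, the definition translates directly into code. Here is a short Python sketch (our own, not from the post) that builds $G_p$ as adjacency sets:

```python
def paley_graph(p):
    """Return the adjacency sets of the Paley graph on F_p (p prime, p = 1 mod 4)."""
    assert p % 4 == 1
    residues = {(x * x) % p for x in range(1, p)}  # nonzero quadratic residues
    return {v: {(v + r) % p for r in residues} for v in range(p)}

G = paley_graph(13)
print(len(G[0]))  # each vertex has (p - 1) / 2 = 6 neighbours
```

Since $-1$ is a quadratic residue when $p\equiv 1\bmod 4$, the adjacency relation is symmetric, so this does define an undirected graph.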
Polymath16, fourteenth thread: Automated graph minimization?
This is the fourteenth “research” thread of the Polymath16 project to make progress on the Hadwiger–Nelson problem, continuing this post. This project is a follow-up to Aubrey de Grey’s breakthrough result that the chromatic number of the plane is at least 5. Discussion of the project of a non-research nature should continue in the Polymath proposal page. We will summarize progress on the Polymath wiki page.
The biggest development in the previous thread:
The method used for finding this graph is vaguely described here and here. It seems that the method is currently more of an art form than an algorithm. A next step might be to automate the art away, code up any computational speedups that are available, and then throw more computing power at the problem.
This is the thirteenth “research” thread of the Polymath16 project to make progress on the Hadwiger–Nelson problem, continuing this post. This project is a follow-up to Aubrey de Grey’s breakthrough result that the chromatic number of the plane is at least 5. Discussion of the project of a non-research nature should continue in the Polymath proposal page. We will summarize progress on the Polymath wiki page.
Interest in this project has spiked since approaching (and passing) our original deadline of April 15. For this reason, I propose we extend the deadline to October 15, 2019. We can discuss this in the Polymath proposal page.
Here are some recent developments:
I’m interested to see if this last point has legs!
Polymath16, twelfth thread: Year in review and future plans
This is the twelfth “research” thread of the Polymath16 project to make progress on the Hadwiger–Nelson problem, continuing this post. This project is a follow-up to Aubrey de Grey’s breakthrough result that the chromatic number of the plane is at least 5. Discussion of the project of a non-research nature should continue in the Polymath proposal page. We will summarize progress on the Polymath wiki page.
Activity on this project has slowed considerably, as we’ve gone 6 months without having to roll over to a new thread. As mentioned in the original thread, the deadline for this project is April 15, 2019, so we only have a couple of weeks remaining. Dömötör and Aubrey took the time to summarize the highlights of what we’ve accomplished in the last year (see below). While we don’t have a single killer result to publish, there are several branches of minor results that warrant publication. Feel free to comment on additional results that were missed in the summaries below, as well as possible venues for publication.
MATH 8610: Mathematics of Data Science
This spring, I’m teaching a graduate-level special topics course called “Mathematics of Data Science” at the Ohio State University. This will be a research-oriented class, and in lecture, I plan to cover some of the important ideas from convex optimization, probability, dimensionality reduction, clustering, and sparsity.
The current draft consists of a chapter on convex optimization. I will update the above link periodically. Feel free to comment below.
UPDATE #1: Lightly edited Chapter 1 and added a chapter on probability.
UPDATE #2: Lightly edited Chapter 2 and added a section on PCA.
UPDATE #3: Added a section on random projection.
UPDATE #4: Lightly edited Chapter 3. The semester is over, so I don’t plan to update these notes again until I teach a complementary special topics course next year.
UPDATE #5: As mentioned above, I’m teaching a complementary installment of this class this semester. I fixed several typos throughout, and I added a new section on embeddings from pairwise data.
UPDATE #6: Added a section on the clique problem.
UPDATE #7: Added a section on the Lovasz number.
UPDATE #8: Added a section on planted clique.
UPDATE #9: Added sections on maximum cut and minimum normalized cut.
UPDATE #10: Added a section on k-means clustering.
UPDATE #11: Started a chapter on compressed sensing.
UPDATE #12: Started a section on uniform guarantees.
UPDATE #13: Started a chapter on matrix analysis.
UPDATE #14: Started a section on matrix representations.
UPDATE #15: Started a section on spectral theory.
UPDATE #16: Added to the section on spectral theory.
UPDATE #17: Added more to the section on spectral theory.
UPDATE #18: Added even more to the section on spectral theory.
UPDATE #19: Finished the section on spectral theory and added a section on tensors.
UPDATE #20: Finished the section on tensors.
UPDATE #21: Added a section on random graphs.
A few paper announcements
This last semester, I was a long-term visitor at the Simons Institute for the Theory of Computing. My time there was rather productive, resulting in a few (exciting!) arXiv preprints, which I discuss below.
1. SqueezeFit: Label-aware dimensionality reduction by semidefinite programming.
Suppose you have a bunch of points in high-dimensional Euclidean space, some labeled “cat” and others labeled “dog,” say. Can you find a low-rank projection such that after projection, cats and dogs remain separated? If you can implement such a projection as a sensor, then that sensor collects enough information to classify cats versus dogs. This is the main idea behind compressive classification.
A neat application of the polynomial method
Two years ago, Boris Alexeev emailed me a problem:
Let $n \geq 2$. Suppose you have $n^2$ distinct numbers in some field. Is it necessarily possible to arrange the numbers into an $n\times n$ matrix of full rank?
Boris’s problem was originally inspired by a linear algebra exam problem at Princeton: Is it possible to arrange four distinct prime numbers in a rank-deficient $2\times 2$ matrix? (The answer depends on whether you consider $-2$ to be prime.) Recently, Boris reminded me of his email, and I finally bothered to solve it. His hint: Apply the combinatorial nullstellensatz. The solve was rather satisfying, and if you’re reading this, I highly recommend that you stop reading here and enjoy the solve yourself.
Polymath16, eleventh thread: Chromatic numbers of planar sets
This is the eleventh “research” thread of the Polymath16 project to make progress on the Hadwiger–Nelson problem, continuing this post. This project is a follow-up to Aubrey de Grey’s breakthrough result that the chromatic number of the plane is at least 5. Discussion of the project of a non-research nature should continue in the Polymath proposal page. We will summarize progress on the Polymath wiki page.
Here’s a brief summary of the progress made in the previous thread:
– Let w(k) denote the supremum of w such that $[0,w]\times\mathbb{R}$ is k-colorable. Then of course $w(1)=-\infty$ and $w(k)=\infty$ for every $k\geq 7$. Furthermore,
$\displaystyle{w(2)=0, \quad w(3)=\frac{\sqrt{3}}{2}, \quad w(4)\geq\sqrt{\frac{32}{35}}, \quad w(5)\geq\frac{13}{8}, \quad w(6)\geq \sqrt{3}+\frac{\sqrt{15}}{2}.}$
Colorings that produce these lower bounds are depicted here. The upper bound for k=3 is given here.
– The largest known k-colorable disks for k=2,3,4,5 are depicted here.
Presumably, we can obtain decent upper bounds on w(4) by restricting (a finite subset of) the ring $\mathbb{Z}[\omega_1,\omega_3,\omega_4]$ to an infinite strip.
# What is the pH of the final solution...?
## A $25 \cdot m L$ volume of $H B r \left(a q\right)$ at $0.050 \cdot m o l \cdot {L}^{-} 1$ concentration is mixed with a $10 \cdot m L$ volume of $K O H \left(a q\right)$ at $0.020 \cdot m o l \cdot {L}^{-} 1$ concentration. What is the $p H$ of the final solution?
Mar 23, 2017
A stoichiometric equation is required:
$H B r \left(a q\right) + K O H \left(a q\right) \rightarrow K B r \left(a q\right) + {H}_{2} O \left(l\right)$
We get (finally) $p H = 1.52$
#### Explanation:
And thus we need to find the amount of substance of both $\text{hydrobromic acid}$ and $\text{potassium hydroxide}$.
We use the relationship $\text{Concentration} = \text{Moles of solute} / \text{Volume of solution}$, OR
$\text{Concentration} \times \text{Volume} = \text{Moles of solute}$
$\text{Moles of HBr} = 25 \times {10}^{-} 3 \cancel{L} \times 0.050 \cdot m o l \cdot \cancel{{L}^{-} 1} = 1.25 \times {10}^{-} 3 \cdot m o l .$
$\text{Moles of KOH} = 10 \times {10}^{-} 3 \cancel{L} \times 0.020 \cdot m o l \cdot \cancel{{L}^{-} 1} = 0.200 \times {10}^{-} 3 \cdot m o l .$
Note that I converted the $m L$ volume to $L$ by using the relationship: $1 \cdot m L \equiv 1 \times {10}^{-} 3 \cdot L$
Clearly, the hydrobromic acid is in excess. And given 1:1 stoichiometry, there are $\left(1.25 - 0.200\right) \times {10}^{-3} \cdot m o l = 1.05 \times {10}^{-3} \cdot m o l$ of $H B r$ remaining in a total volume of $35 \times {10}^{-3} \cdot L$.
So $\left[H B r\right] = \frac{\left(1.25 \cdot m o l - 0.200 \cdot m o l\right) \times {10}^{-} 3}{35 \times {10}^{-} 3 L}$
$= \frac{1.05 \cdot m o l \times {10}^{-} 3}{35 \times {10}^{-} 3 L} = 0.030 \cdot m o l \cdot {L}^{-} 1$ with respect to $H B r$.
$p H = - {\log}_{10} \left[{H}_{3} {O}^{+}\right] = - {\log}_{10} \left(0.030\right) = 1.52$
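The same arithmetic can be checked with a short script; this is just a numerical restatement of the working above (variable names are my own), not part of the original answer:

```python
import math

# Volumes (L) and concentrations (mol/L) from the question
v_acid, c_acid = 25e-3, 0.050   # HBr
v_base, c_base = 10e-3, 0.020   # KOH

n_acid = v_acid * c_acid        # 1.25e-3 mol HBr
n_base = v_base * c_base        # 0.20e-3 mol KOH

# 1:1 stoichiometry: the excess strong acid fixes [H3O+]
n_excess = n_acid - n_base              # 1.05e-3 mol
conc_h = n_excess / (v_acid + v_base)   # 0.030 mol/L in 35 mL total

pH = -math.log10(conc_h)
print(round(pH, 2))  # 1.52
```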
http://en.wikipedia.org/wiki/Graph_product
Graph product
In mathematics, a graph product is a binary operation on graphs. Specifically, it is an operation that takes two graphs G1 and G2 and produces a graph H with the following properties:
• The vertex set of H is the Cartesian product V(G1) × V(G2), where V(G1) and V(G2) are the vertex sets of G1 and G2, respectively.
• Two vertices (u1, u2) and (v1, v2) of H are connected by an edge if and only if the vertices u1, u2, v1, v2 satisfy conditions of a certain type (see below).
The following are the most common graph products, with ∼ denoting "is connected by an edge to" and $\not\sim$ denoting non-connection. The operator symbols listed here are by no means standard, especially in older papers. In each case the condition given is the condition for (u1, u2) ∼ (v1, v2), and the dimension formula gives the vertex and edge counts of the product in terms of those of the factors.
• Cartesian product, $G \square H$: (u1 = v1 and u2 ∼ v2) or (u1 ∼ v1 and u2 = v2). Dimensions: $G_{V_1, E_1} \square H_{V_2, E_2} \rightarrow J_{(V_1 V_2), (E_2 V_1 + E_1 V_2)}$
• Tensor product (categorical product), $G \times H$: u1 ∼ v1 and u2 ∼ v2. Dimensions: $G_{V_1, E_1} \times H_{V_2, E_2} \rightarrow J_{(V_1 V_2), (2 E_1 E_2)}$
• Lexicographical product, $G \cdot H$ or $G[H]$: u1 ∼ v1, or (u1 = v1 and u2 ∼ v2). Dimensions: $G_{V_1, E_1} \cdot H_{V_2, E_2} \rightarrow J_{(V_1 V_2), (E_2 V_1 + E_1 V_2^2)}$
• Strong product (normal product, AND product), $G \boxtimes H$: (u1 = v1 and u2 ∼ v2), or (u1 ∼ v1 and u2 = v2), or (u1 ∼ v1 and u2 ∼ v2). Dimensions: $G_{V_1, E_1} \boxtimes H_{V_2, E_2} \rightarrow J_{(V_1 V_2), (V_1 E_2 + V_2 E_1 + 2 E_1 E_2)}$
• Co-normal product (disjunctive product, OR product), $G * H$: u1 ∼ v1 or u2 ∼ v2.
• Modular product: $(u_1 \sim v_1 \text{ and } u_2 \sim v_2)$ or $(u_1 \not\sim v_1 \text{ and } u_2 \not\sim v_2)$.
• Rooted product: see article. Dimensions: $G_{V_1, E_1} \cdot H_{V_2, E_2} \rightarrow J_{(V_1 V_2), (E_2 V_1 + E_1)}$
• Kronecker product: see article.
• Zig-zag product: see article.
• Replacement product: see article.
• Homomorphic product[1][3], $G \ltimes H$: $(u_1 = v_1)$ or $(u_1 \sim v_1 \text{ and } u_2 \not\sim v_2)$.
In general, a graph product is determined by any condition for (u1, u2) ∼ (v1, v2) that can be expressed in terms of the statements u1 ∼ v1, u2 ∼ v2, u1 = v1, and u2 = v2.
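As a concrete illustration, here is a minimal Python sketch of the Cartesian product defined above, on graphs represented as dictionaries mapping each vertex to its neighbour set (the representation and function name are my own, not from the article):

```python
def cartesian_product(g, h):
    """Cartesian product: (u1, u2) ~ (v1, v2) iff
    (u1 == v1 and u2 ~ v2) or (u1 ~ v1 and u2 == v2)."""
    verts = [(u1, u2) for u1 in g for u2 in h]
    adj = {v: set() for v in verts}
    for (u1, u2) in verts:
        for (v1, v2) in verts:
            if (u1 == v1 and v2 in h[u2]) or (v1 in g[u1] and u2 == v2):
                adj[(u1, u2)].add((v1, v2))
    return adj

# K2 □ K2 should be the four-cycle: 4 vertices, each of degree 2.
k2 = {0: {1}, 1: {0}}
c4 = cartesian_product(k2, k2)
print(len(c4), sorted(len(n) for n in c4.values()))  # 4 [2, 2, 2, 2]
```

The vertex and edge counts match the dimension formula: $2 \cdot 2 = 4$ vertices and $1 \cdot 2 + 1 \cdot 2 = 4$ edges.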
Mnemonic
Let $K_2$ be the complete graph on two vertices (i.e. a single edge). The product graphs $K_2 \square K_2$, $K_2 \times K_2$, and $K_2 \boxtimes K_2$ look exactly like the glyph representing the operator. For example, $K_2 \square K_2$ is a four cycle (a square) and $K_2 \boxtimes K_2$ is the complete graph on four vertices. The $G[H]$ notation for lexicographic product serves as a reminder that this product is not commutative.
2. ^ Bačík, R.; Mahajan, S. (1995). "Computing and Combinatorics". Lecture Notes in Computer Science 959. p. 566. doi:10.1007/BFb0030878. ISBN 3-540-60216-X.
https://plainmath.net/3214/the-two-way-table-below-describes-the-members-the-senate-recent-year-begin
# The two-way table below describes the members of the U.S. Senate in a recent year
Question
Two-way tables
The two-way table below describes the members of the U.S. Senate in a recent year.
$$\begin{array}{ccc} \hline &\text{Male}&\text{Female}\\ \text{Democrats}&47&13\\ \text{Republicans}&36&4\\ \hline \end{array}$$
If we select a U.S. senator at random, what's the probability that the senator is a Democrat?
2021-01-09
Let us determine the row/column total of each row/column, which is the sum of all counts in the row/column:
$$\begin{array}{c|cc|c} &\text{Male}&\text{Female}&\text{Total}\\ \hline \text{Democrats}&47&13&47+13=60\\ \text{Republicans}&36&4&36+4=40\\ \hline \text{Total}&47+36=83&13+4=17&60+40=100 \end{array}$$
The table contains 100 members in total (which is given in the bottom right corner of the table), while 60 of the 100 members are Democrats (since 60 is mentioned in the row ”Democrats” and in the column ”Total” of the given table).
The probability is the number of favorable outcomes divided by the number of possible outcomes:
$$P(Democrats)=\frac{\text{# of favorable outcomes}}{\text{# of possible outcomes}}=\frac{60}{100}=\frac{3}{5}=0.6=60\%$$
Result: $$\frac{3}{5}=0.6=60\%$$
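The same computation can be sketched in a few lines of code (the nested-dictionary layout and names are illustrative, not part of the original answer):

```python
# The two-way table as nested counts: party -> gender -> count
table = {
    "Democrats":   {"Male": 47, "Female": 13},
    "Republicans": {"Male": 36, "Female": 4},
}

total = sum(sum(row.values()) for row in table.values())  # 100 senators
democrats = sum(table["Democrats"].values())              # 60 Democrats

p = democrats / total  # favorable / possible outcomes
print(p)  # 0.6
```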
https://tex.stackexchange.com/questions/181411/changing-the-rules-of-line-breaks
# Changing the rules of line breaks
I used pgfplotstable to read the data from a file.
The columns have fixed widths. The third column contains usual text and the lines are broken nicely. The second column contains mathematical expressions, which don't have white spaces and can be quite long and surpass the width of the column. In that case I want the expression to break and resume on the next line.
At the moment, line breaks can only occur next to some special characters (like the minus sign, see the third row), but I want it to be able to occur next to some other characters (for example, the asterisk). So, for example, the first line could break after "-V_lm/(epsilon0_const*2*pi*r)" and the next line could have "1[F]*1[V/m]".
Does anybody know how to accomplish this?
Also the code (which is not really relevant to the problem, but just in case) :
\begin{table}[H]
\centering
\newcolumntype{C}{>{}p{60mm}}% a centered fixed-width-column
\pgfplotstabletypeset[
col sep=ampersand,
columns/Name/.style={verb string type},
columns/Expression/.style={column type=|C, verb string type},
columns/Description/.style={column type=|C, verb string type},
empty cells with={--}, % replace empty cells with ’--’
every last row/.style={after row=\bottomrule},
]{./equations.txt}
\caption{Implementation.}
\end{table}
The file I'm reading the data from:
Name&Expression&Description
normE&-V_lm/(epsilon0_const*2*pi*r)*1[F]*1[V/m]&Normal electric field (V/m)
T&T[1/K]&Temperature (K)
F&max(g_normE*1[m/V]*1e-9,1e-12)&product of local electric field strength and elementary charge (eV/nm)
• You probably need to show an example of the math markup (without the pgfplots table which isn't directly relevant. * is by default a \mathbin so tex will break after * as it does after - but it never breaks inside {} or inside \left\right May 31, 2014 at 10:51
• Oh no, perhaps that's not relevant, are you setting this as text (not math) ? May 31, 2014 at 10:52
• I'm sorry, yes, the math needs to be in text format (verb string type). The math expression needs to be copyable to another program, so I want it to be right as it is in the picture, only the line breaks are problematic. May 31, 2014 at 11:05
• It would still help to see the markup, especially if you are using verbatim as that affects how you can add linebreaks May 31, 2014 at 11:23
• @DavidCarlisle I added the source text file. May 31, 2014 at 14:06
Mico's just posted a url version, but if you want to control individual characters by hand:
\documentclass{article}
\usepackage{array}
\usepackage[T1]{fontenc}
\begin{document}
\newcolumntype{C}{>{\centering\arraybackslash}p{60mm}}% a centered fixed-width-column
{\catcode`\_=12
\begin{tabular}{lCC}
Name&Expression&Description\\
normE&-V_lm/(epsilon0_const*2*pi*r)*1[F]*1[V/m]&Normal electric field (V/m)\\
T&T[1/K]&Temperature (K)\\
F&max(g_normE*1[m/V]*1e-9,1e-12)&product of local electric field strength and elementary charge (eV/nm)
\end{tabular}
}
\bigskip
{\catcode`\_=12
\catcode`\*=\active
\def*{\string*\linebreak[0]}
\begin{tabular}{lCC}
Name&Expression&Description\\
normE&-V_lm/(epsilon0_const*2*pi*r)*1[F]*1[V/m]&Normal electric field (V/m)\\
T&T[1/K]&Temperature (K)\\
F&max(g_normE*1[m/V]*1e-9,1e-12)&product of local electric field strength and elementary charge (eV/nm)
\end{tabular}
}
\end{document}
• Thanks for the answer, I got your example to work, it's right what I wanted. But for some reason, I can't get it to work with pgfplotstable, I tried adding the \catcode`\*=\active \def*{\string*\linebreak[0]} before my \pgfplotstabletypeset[..], but it didn't work. I'll investigate this further... May 31, 2014 at 18:57
• Alright, so I made a Python script to format my table data and then copied it into my latex source file. And used your solution. Tedious, but worked, so I'll accept this. Jun 5, 2014 at 22:51
If I understand your objective correctly, the long mathstrings must not be processed by TeX and rendered as formulas but must be rendered in pure text mode. You could try encasing the long math strings in \url{...} directives, as is done in the following example. LaTeX usually finds decent line breaks for such strings.
(I've taken the liberty of transposing your pgfplotstable setup into a more readily recognizable tabular setup for the sake of this example.)
\documentclass{article}
\usepackage{array,booktabs}
\newcolumntype{L}[1]{>{\raggedright\arraybackslash}p{#1}}
\usepackage[hyphens]{url} % allow line breaks at hyphens
\begin{document}
\begin{table}
\centering
\begin{tabular}{@{} l *{2}{L{6cm}} @{}}
\toprule
Name &Expression&Description\\
\midrule
normE&\url{-V_lm/(epsilon0_const*2*pi*r)*1[F]*1[V/m]}
&Normal electric field (V/m)\\
T&\url{T[1/K]}&Temperature (K)\\
F&\url{max(g_normE*1[m/V]*1e-9,1e-12)}
&product of local electric field strength and elementary charge (eV/nm)\\
\bottomrule
\end{tabular}
\end{table}
\end{document}
• @Manuel - thanks for catching and fixing the typo
– Mico
May 31, 2014 at 18:35
• I just edited because it was less than a minute later than your edit. But I usually don't do that :P May 31, 2014 at 18:38
• Interesting idea and would solve my problem, but I have no idea how to adapt this into my example. Maybe by somehow tweaking this line columns/Expression/.style={column type=|C, verb string type},? Not sure... May 31, 2014 at 19:10
https://socratic.org/questions/how-do-you-simplify-abs-6-4-3-1
# How do you simplify abs( -6.4 - 3.1 )?
$9.5$
$- 6.4 - 3.1 = - 9.5$
Now the absolute value makes the value positive. So we're left with $9.5$ which is our answer.
https://fr.maplesoft.com/support/help/Maple/view.aspx?path=EssayTools%2FReduce
Reduce - Maple Help
EssayTools
Reduce
reduces an essay to core ideas
Calling Sequence Reduce( essay )
Parameters
essay - string or list or array of strings
Description
• The Reduce command breaks an essay up into sentences and sentence fragments that contain a logical idea. Words inside each idea are additionally reduced by applying the Lemma command, correcting spelling, and pruning conjunctions and definite articles.
• An attempt is made to identify the subject of the previous fragment so it can be carried forward into the new fragment replacing some pronouns like "it" and "they". This is heuristic based and may often pick the wrong subject.
• This function is part of the EssayTools package, so it can be used in the short form Reduce(..) only after executing the command with(EssayTools). However, it can always be accessed through the long form of the command by using EssayTools[Reduce](..).
Examples
> $\mathrm{with}\left(\mathrm{EssayTools}\right):$
> $\mathrm{Reduce}\left("The car was super fast. It was rocket screaming fast. Nothing else could touch it."\right)$
$\left[{"car be super fast"}{,}{"car be rocket screaming fast"}{,}{"nothing can touch touch"}\right]$ (1)
> $\mathrm{Reduce}\left("The tortoise and hare was a great story because it showed how an underdog can succeed with dedication and perseverance."\right)$
$\left[{"tortoise hare be great story"}{,}{"tortoise show how underdog can succeed dedication perseverance"}\right]$ (2)
Compatibility
• The EssayTools[Reduce] command was introduced in Maple 17.
http://seananderson.ca/2013/12/01/plyr.html
December 1, 2013
# plyr: Split-Apply-Combine for Mortals
Along with my earlier post on the reshape2 package, I will continue to post my course notes from Data Wrangling and Visualization in R, a graduate-level course I co-taught last semester at Simon Fraser University.
plyr is an R package that makes it simple to split data apart, do stuff to it, and mash it back together. This is a common data-manipulation step. Importantly, plyr makes it easy to control the input and output data format with a consistent syntax.
Or, from the documentation:
plyr is a set of tools that solves a common set of problems: you need to break a big problem down into manageable pieces, operate on each piece and then put all the pieces back together. It’s already possible to do this with split and the apply functions, but plyr just makes it all a bit easier…”
This is a very quick introduction to plyr. For more details see Hadley Wickham’s introductory guide The split-apply-combine strategy for data analysis. There’s quite a bit of discussion online in general, and especially on stackoverflow.com.
# Why use apply functions instead of for loops?
1. The code is cleaner (once you’re familiar with the concept). The code can be easier to code and read, and less error prone because you don’t have to deal with subsetting and you don’t have to deal with saving your results.
2. apply functions can be faster than for loops, sometimes dramatically.
# plyr basics
plyr builds on the built-in apply functions by giving you control over the input and output formats and keeping the syntax consistent across all variations. It also adds some niceties like error processing, parallel processing, and progress bars.
The basic format is two letters followed by ply(). The first letter refers to the format in and the second to the format out.
The three main letters are:
1. d = data frame
2. a = array (includes matrices)
3. l = list
So, ddply means: take a data frame, split it up, do something to it, and return a data frame. I find I use this the majority of the time since I often work with data frames. ldply means: take a list, split it up, do something to it, and return a data frame. This extends to all combinations. In the following table, the columns are the input formats and the rows are the output format:
object type data frame list array
data frame ddply ldply adply
list dlply llply alply
array daply laply aaply
I’ve ignored some less common format options:
1. m = multi-argument function input
2. r = replicate a function n times.
3. _ = throw away the output
For plotting, you might find the underscore (_) option useful. It will do something with the data (say add line segments to a plot) and then throw away the output (e.g., d_ply()).
# Base R apply functions and plyr
plyr provides a consistent and easy-to-work-with format for apply functions with control over the input and output formats. Some of the functionality can be duplicated with base R functions (but with less consistent syntax). Also, few R apply functions work directly with data frames as input and output and data frames are a common object class to work with.
Base R apply functions (from a presentation given by Hadley Wickham):
object type array data frame list nothing
array apply . . .
data frame . aggregate by .
list sapply . lapply .
n replicates replicate . replicate .
function arguments mapply . mapply .
# A general example with plyr
Let’s take a simple example. We’ll take a data frame, split it up by year, calculate the coefficient of variation of the count, and return a data frame. This could easily be done on one line, but I’m expanding it here to show the format a more complex function could take.
set.seed(1)
d <- data.frame(year = rep(2000:2002, each = 3),
count = round(runif(9, 0, 20)))
print(d)
# year count
# 1 2000 5
# 2 2000 7
# 3 2000 11
# 4 2001 18
# 5 2001 4
# 6 2001 18
# 7 2002 19
# 8 2002 13
# 9 2002 13
library(plyr)
ddply(d, "year", function(x) {
  mean.count <- mean(x$count)
  sd.count <- sd(x$count)
  cv <- sd.count/mean.count
  data.frame(cv.count = cv)
})
# year cv.count
# 1 2000 0.3985
# 2 2001 0.6062
# 3 2002 0.2309
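For comparison, the same split-apply-combine step can be sketched in plain Python (an analogy only — plyr itself is R, and this code is not part of the original post):

```python
import statistics
from collections import defaultdict

# The same toy data: (year, count) pairs
rows = [(2000, 5), (2000, 7), (2000, 11),
        (2001, 18), (2001, 4), (2001, 18),
        (2002, 19), (2002, 13), (2002, 13)]

# split: group counts by year
groups = defaultdict(list)
for year, count in rows:
    groups[year].append(count)

# apply + combine: coefficient of variation per year
cv = {year: round(statistics.stdev(c) / statistics.mean(c), 4)
      for year, c in groups.items()}
print(cv)  # {2000: 0.3985, 2001: 0.6062, 2002: 0.2309}
```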
# transform and summarise
It is often convenient to use these functions within one of the **ply functions. transform acts as it would normally as the base R function and modifies an existing data frame. summarise creates a new condensed data frame.
ddply(d, "year", summarise, mean.count = mean(count))
# year mean.count
# 1 2000 7.667
# 2 2001 13.333
# 3 2002 15.000
ddply(d, "year", transform, total.count = sum(count))
# year count total.count
# 1 2000 5 23
# 2 2000 7 23
# 3 2000 11 23
# 4 2001 18 40
# 5 2001 4 40
# 6 2001 18 40
# 7 2002 19 45
# 8 2002 13 45
# 9 2002 13 45
Bonus function: mutate. mutate works like transform but lets you build on columns.
ddply(d, "year", mutate, mu = mean(count), sigma = sd(count),
cv = sigma/mu)
# year count mu sigma cv
# 1 2000 5 7.667 3.055 0.3985
# 2 2000 7 7.667 3.055 0.3985
# 3 2000 11 7.667 3.055 0.3985
# 4 2001 18 13.333 8.083 0.6062
# 5 2001 4 13.333 8.083 0.6062
# 6 2001 18 13.333 8.083 0.6062
# 7 2002 19 15.000 3.464 0.2309
# 8 2002 13 15.000 3.464 0.2309
# 9 2002 13 15.000 3.464 0.2309
# Plotting with plyr
You can use plyr to plot data by throwing away the output with an underscore (_). This is a bit cleaner than a for loop since you don’t have to subset the data manually.
par(mfrow = c(1, 3), mar = c(2, 2, 1, 1), oma = c(3, 3, 0, 0))
d_ply(d, "year", transform, plot(count, main = unique(year), type = "o"))
mtext("count", side = 1, outer = TRUE, line = 1)
mtext("frequency", side = 2, outer = TRUE, line = 1)
# Nested chunking of the data
The basic syntax can be easily extended to break apart the data based on multiple columns:
baseball.dat <- subset(baseball, year > 2000) # data from the plyr package
x <- ddply(baseball.dat, c("year", "team"), summarize,
homeruns = sum(hr))
# year team homeruns
# 1 2001 ANA 4
# 2 2001 ARI 155
# 3 2001 ATL 63
# 4 2001 BAL 58
# 5 2001 BOS 77
# 6 2001 CHA 63
# Other useful options
## Dealing with errors
You can use the failwith function to control how errors are dealt with.
f <- function(x) if (x == 1) stop("Error!") else 1
safe.f <- failwith(NA, f, quiet = TRUE)
# llply(1:2, f)
llply(1:2, safe.f)
# [[1]]
# [1] NA
#
# [[2]]
# [1] 1
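The failwith idea translates directly to other languages; here is a plain-Python sketch of the same wrapper (my own analogy, not part of plyr):

```python
def failwith(default, f):
    """Wrap f so that any exception yields `default` instead of aborting."""
    def safe(*args, **kwargs):
        try:
            return f(*args, **kwargs)
        except Exception:
            return default
    return safe

f = lambda x: 1 / 0 if x == 1 else 1   # raises on x == 1
safe_f = failwith(None, f)

results = [safe_f(x) for x in [1, 2]]
print(results)  # [None, 1]
```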
## Parallel processing
In conjunction with a package such as doParallel you can run your function separately on each core of your computer. On a dual-core machine this can make your code up to twice as fast. Simply register the cores and then set .parallel = TRUE. Look at the elapsed time in these examples:
x <- c(1:10)
wait <- function(i) Sys.sleep(0.1)
system.time(llply(x, wait))
# user system elapsed
# 0.001 0.000 1.007
system.time(sapply(x, wait))
# user system elapsed
# 0.000 0.001 1.009
library(doParallel)
registerDoParallel(cores = 2)
system.time(llply(x, wait, .parallel = TRUE))
# user system elapsed
# 0.024 0.013 0.541
# So, why would I not want to use plyr?
plyr can be slow — particularly if you are working with very large datasets that involve a lot of subsetting. Hadley is working on this, and an in-development version of plyr, dplyr, can run much faster (https://github.com/hadley/dplyr). However, it's important to remember that typically the speed at which you can write code and understand it later is the rate-limiting step.
A couple of faster options:
Use a base R apply function:
system.time(ddply(baseball, "id", summarize, length(year)))
# user system elapsed
# 0.955 0.013 0.975
system.time(tapply(baseball$year, baseball$id,
function(x) length(x)))
# user system elapsed
# 0.021 0.000 0.022
Use the data.table package:
library(data.table)
dt <- data.table(baseball, key = "id")
system.time(dt[, length(year), by = list(id)])
# user system elapsed
# 0.010 0.000 0.011
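For readers outside R, the grouped count being benchmarked above is plain split-apply-combine, and can be sketched with the Python standard library. The records below are hypothetical toy data, not the actual plyr baseball dataset:

```python
from collections import Counter

# Toy stand-in for the baseball data: (player_id, year) records.
# (Hypothetical values -- not the plyr baseball dataset.)
records = [
    ("ansonca01", 1871), ("ansonca01", 1872), ("ansonca01", 1873),
    ("forceda01", 1871), ("forceda01", 1872),
    ("mathebo01", 1871),
]

# Equivalent of ddply(baseball, "id", summarize, length(year)):
# count how many year entries each id has.
seasons_per_id = Counter(pid for pid, _ in records)

print(dict(seasons_per_id))
# {'ansonca01': 3, 'forceda01': 2, 'mathebo01': 1}
```

The same trade-off applies here: a hash-based count like this is fast, but the declarative ddply call is often easier to read back later.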
https://groupprops.subwiki.org/wiki/Special:MobileDiff/32290
# Changes
## Linear representation theory of symmetric group:S3
22:06, 2 July 2011
| One-dimensional, factor through the determinant map || a homomorphism $\alpha: \mathbb{F}_q^\ast \to \mathbb{C}^\ast$ || $x \mapsto \alpha(\det x)$ || 1 || 1 || $q - 1$ || 1 || trivial representation
|-
| Tensor product of one-dimensional representation and the nontrivial component of permutation representation of $GL_2$ on the projective line over $\mathbb{F}_q$ || a homomorphism $\alpha: \mathbb{F}_q^\ast \to \mathbb{C}^\ast$ || $x \mapsto \alpha(\det x)\nu(x)$ where $\nu$ is the nontrivial component of permutation representation of $GL_2$ on the projective line over $\mathbb{F}_q$ || $q$ || 2 || $q - 1$ || 1 || [[#Standard representationof symmetric group:S3|standard representation]]
|-
| Induced from one-dimensional representation of Borel subgroup || $\alpha, \beta$ homomorphisms $\mathbb{F}_q^\ast \to \mathbb{C}^\ast$ with $\alpha \ne \beta$, where $\{ \alpha, \beta \}$ is treated as unordered. || Induced from the following representation of the Borel subgroup: $\begin{pmatrix} a & b \\ 0 & d \\\end{pmatrix} \mapsto \alpha(a)\beta(d)$ || $q + 1$ || 3 || $(q - 1)(q - 2)/2$ || 0 || --
| [[Sign representation]] || 1 || -3 || 2
|-
| [[Standard representation of symmetric group:S3|Standard representation]] || 1 || 0 || -1
|}
| [[Sign representation]] || $\mathbb{Z}$ -- the ring of integers || $\mathbb{Q}$ || $\{ 1,-1 \}$ || gives a representation over any ring; nontrivial for characteristic not equal to $2$
|-
| [[Standard representation of symmetric group:S3|Standard representation]] || $\mathbb{Z}$ -- the ring of integers || $\mathbb{Q}$ || $\{ 0, 1, -1 \}$ || gives an irreducible representation over any ring of characteristic not equal to $2$
|}
| Sign || 1 || 0 || 0
|-
| [[Standard representation of symmetric group:S3|Standard]] || 0 || 1 || 1
|}
| Sign || 0 || 1
|-
| [[Standard representation of symmetric group:S3|Standard]] || 1 || 1
|}
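The table fragments above concern the irreducible representations of symmetric group:S3 (trivial, sign, and standard, of degrees 1, 1 and 2). As a sanity check on those character values, a short Python script verifying the orthogonality relations of the S3 character table (conjugacy classes: identity, transpositions, 3-cycles, of sizes 1, 3, 2):

```python
from fractions import Fraction

# Character table of S3.  Columns: identity, transpositions, 3-cycles.
class_sizes = [1, 3, 2]          # class sizes; group order is 6
chars = {
    "trivial":  [1,  1,  1],
    "sign":     [1, -1,  1],
    "standard": [2,  0, -1],
}

def inner(chi, psi):
    """Inner product of characters: (1/|G|) * sum over classes of size * chi * psi."""
    return Fraction(sum(s * a * b for s, a, b in zip(class_sizes, chi, psi)), 6)

# Orthogonality: <chi, psi> = 1 if chi == psi (irreducible), else 0.
for name1, chi in chars.items():
    for name2, psi in chars.items():
        assert inner(chi, psi) == (1 if name1 == name2 else 0)

print("character table of S3 passes the orthogonality relations")
```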
https://homework.cpm.org/category/CCI_CT/textbook/int3/chapter/9/lesson/9.1.7/problem/9-91
### Home > INT3 > Chapter 9 > Lesson 9.1.7 > Problem 9-91
9-91.
1. Convert each of the following angle measures. Give exact answers.
There are multiple ways to approach these problems. Try different strategies to expand your personal toolkit.
Use the Hint from problem 9-90.
Let π = 180° and simplify.
Notice that you do not need a calculator if you divide 180 by 6 as your first step.
210°
Use the Hint or Help from problem 9-90.
$\text{Multiply } \frac{5 \pi}{3} \text{ by } \frac{180}{\pi}.$
Simplify by factoring out Giant Ones.
Use the second part of the Help from problem 9-90.
$\frac{\pi}{4}$
Use the Help from problem 9-90 or try the More Help:
$\text{Use the proportion } \frac{\pi}{180°} = \frac{r}{100°}.$
See parts (c) and (d).
See parts (a) and (b).
630°
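All of the hints above come down to the proportion π rad = 180°. As a sketch of that conversion, here is a short Python check using the angles that appear in the hints and answers (which parts of 9-91 they belong to is not shown, so the pairing is illustrative):

```python
import math

# pi radians = 180 degrees; math.degrees / math.radians apply that scaling.
print(math.degrees(5 * math.pi / 3))   # ~300 degrees (the 5*pi/3 hint)
print(math.degrees(math.pi / 4))       # ~45 degrees
print(math.radians(210))               # ~3.665 rad, i.e. 7*pi/6
print(math.degrees(7 * math.pi / 2))   # ~630 degrees
```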
https://jkcs.or.kr/journal/view.php?number=4208
J. Korean Ceram. Soc. > Volume 32(9); 1995 > Article
Journal of the Korean Ceramic Society 1995;32(9): 1047.
Dependence of Compaction Behavior of Spray-Dried Ferrite Powders on the Kinds and Concentrations of Binder Systems

홍대영, 변순천, 제해준1, 홍국선 (Department of Inorganic Materials Engineering, Seoul National University; 1Ceramics Division, Korea Institute of Science and Technology)

ABSTRACT
Mn-Zn ferrite granules were formed by spray-drying slurries containing different kinds and concentrations of binders at various temperatures. The slurry was made by the conventional ceramic processing route, that is, by mixing Fe2O3, MnO and ZnO powders (52 : 24 : 24 mol%), calcining and milling. The spray-dried granules were typically spherical. Their compaction behavior depended on the spray-drying temperature and on the kind and concentration of binder. At lower pressure the granules were displaced; at higher pressure they were deformed and fractured to fill the pores among the granules. The optimum binder concentration was 0.5 wt%. The granules containing 0.5 wt% PVA 205 deformed and fractured well, and their green density was higher than that of the others. At higher binder concentrations the granules deformed rather than fractured, so the green density was lowered by the remaining unfilled pores. The decomposition temperature and the heat released increased with increasing binder concentration. The compaction response of granules containing PVA 205 was more efficient than that of granules containing PVA 217 or PVA 117. Green density did not depend on the degree of hydrolysis of the binders. The compaction response of the granules spray-dried at 150 °C was the most efficient.

Key words: Spray drying, Ferrite, Compaction, Binder
https://artofproblemsolving.com/wiki/index.php?title=Mock_AIME_2_2006-2007_Problems/Problem_6&diff=46095&oldid=9794
# Difference between revisions of "Mock AIME 2 2006-2007 Problems/Problem 6"
## Problem
If $\tan 15^\circ \tan 25^\circ \tan 35^\circ =\tan \theta$ and $0^\circ \le \theta \le 180^\circ,$ find $\theta.$
## Solution
Writing each tangent as sine over cosine and applying the product-to-sum formulas gives: $$\frac{\sin 15^\circ\sin 25^\circ\sin 35^\circ}{\cos 15^\circ\cos 25^\circ\cos 35^\circ}=\frac{\sin 15^{\circ}(\cos 10^\circ-\cos 60^\circ)}{\cos 15^\circ(\cos 10^\circ+\cos 60^\circ)}$$ Multiply by $\frac{2}{2}$: $$\frac{2\sin 15^\circ\cos 10^\circ-\sin 15^\circ}{\cos 15^\circ+2\cos 15^\circ\cos 10^\circ}$$ Again use product to sum: $$\frac{\sin 5^\circ-\sin 15^\circ+\sin 25^\circ}{\cos 5^\circ+\cos 15^\circ+\cos 25^\circ}$$ Finally, use sum to product on the rightmost terms in the numerator and denominator: $$\frac{\sin 5^{\circ}+2\sin 5^\circ\cos 20^\circ}{\cos 5^\circ+2\cos 5^\circ\cos 20^\circ}=\frac{\sin 5^\circ(1+2\cos 20^\circ)}{\cos 5^\circ(1+2\cos 20^\circ)}=\tan 5^\circ$$ Thus, $\theta=\boxed{005}$.
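The chain of identities can also be sanity-checked numerically. A short Python verification (not part of the original solution):

```python
import math

# Numerical check of tan(15) * tan(25) * tan(35) = tan(5), angles in degrees.
deg = math.radians  # shorthand: degrees -> radians

lhs = math.tan(deg(15)) * math.tan(deg(25)) * math.tan(deg(35))
rhs = math.tan(deg(5))

# The product of tangents equals tan(5 degrees) to floating-point accuracy.
assert math.isclose(lhs, rhs, rel_tol=1e-12)
print(round(lhs, 6))  # ~0.087489
```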
https://www.nat-hazards-earth-syst-sci.net/19/353/2019/
Natural Hazards and Earth System Sciences, an interactive open-access journal of the European Geosciences Union
Nat. Hazards Earth Syst. Sci., 19, 353–368, 2019
https://doi.org/10.5194/nhess-19-353-2019
Research article | 14 Feb 2019
# Flood risk assessment due to cyclone-induced dike breaching in coastal areas of Bangladesh
Md Feroz Islam1, Biswa Bhattacharya2, and Ioana Popescu2,3
• 1Copernicus Institute, Department of Environmental Sciences, Utrecht University, 3584 CB Utrecht, the Netherlands
• 2IHE Delft Institute for Water Education, 2611 AX Delft, the Netherlands
• 3Faculty of Civil Engineering, Politehnica University of Timisoara, 300223 Timisoara, Romania
Correspondence: Md Feroz Islam (m.f.islam@uu.nl)
Abstract
Bangladesh, one of the most disaster-prone countries in the world, has a dynamic delta with 123 polders protected by earthen dikes. Cyclone-induced storm surges cause severe damage to these polders by overtopping and breaching the dikes. A total of 19 major tropical storms have hit the coast in the last 50 years, and the storm frequency is likely to increase due to climate change. This paper presents an investigation of the inundation pattern in a protected area behind dikes due to floods caused by storm surges and identifies possible critical locations of dike breaches. Polder 48 in the coastal region, also known as Kuakata, was selected as the study area. A HEC-RAS 1-D–2-D hydrodynamic model was developed to simulate inundation of the polder under different scenarios. Scenarios were developed by considering tidal variations, the angle of the cyclone at landfall, possible dike breach locations and sea level rise due to climate change according to the Fifth Assessment Report (AR5) of the Intergovernmental Panel on Climate Change (IPCC). A storm surge for a cyclone event with a 1-in-25-year return period was considered for all the scenarios. The primary objective of this research was to present a methodology for identifying the critical location of dike breaching, generating a flood risk map (FRM) and a probabilistic flood map (PFM) for the breaching of dikes during a cyclone. The critical location of the dike breach among the chosen possible locations was identified by comparing the inundation extent and damage due to flooding corresponding to the developed scenarios. A FRM corresponding to the breaching in the critical location was developed, which indicated that settlements adjacent to the canals in the polders were exposed to higher risk. A PFM was developed using the simulation results corresponding to the developed scenarios, which was used to recommend appropriate land use zoning to minimize the vulnerability to flooding.
The developed hydrodynamic model can be used to forecast inundation, to identify critical locations of the dike requiring maintenance and to study the effect of climate change on flood inundation in the study area.
The frequency and intensity of the cyclones around the world are likely to increase due to climate change, which will require resource-intensive improvement of existing or new protection structures for the deltas. The identification and prioritization of the maintenance of critical locations of dike breaching can potentially prevent a disaster. The use of non-structural tools such as land use zoning with the help of flood risk maps and probabilistic flood maps has the potential to reduce risk and damage. The method presented in this research can potentially be utilized for deltas around the world to reduce vulnerability and flood risk due to dike breaching caused by cyclone-induced storm surge.
1 Introduction
Bangladesh is situated in a low-lying delta of three major rivers: Ganges, Brahmaputra and Meghna. A total of 80 % of the country's land is located below 10 m a.m.s.l. (above mean sea level) (Heitzman and Worden, 1989), and it is formed of sediments carried by the above-mentioned rivers. The population of Bangladesh was about 131.5 million by the year 2000 (World Bank, 2018), of which about 49 % were living in coastal zones (Neumann et al., 2015). The coastal areas of Bangladesh are flooded frequently due to cyclone-induced storm surges and occasionally due to high water levels in the rivers caused by heavy rainfall in the upstream catchments of Ganges, Brahmaputra and Meghna. The coast was hit by five severe cyclones between 1995 and 2010, causing flooding, huge damage and loss of life (Dasgupta et al., 2014).
Bangladesh has 123 polders in the coastal area, each surrounded by earthen dikes, which are designed to protect the inland from flooding due to high tides. The existing crest level of these dikes is only adequate enough to protect the coastal area from cyclones with 5- to 12-year return periods (Islam et al., 2013). These dikes usually get damaged and sometimes breached by tropical cyclones of high intensity, which causes flooding inside the protected areas, damages to properties and loss of life. For example, Cyclone Sidr hit the coast of Bangladesh in 2007, affecting 8.9 million people and causing USD 1.7 billion of damages (GOB, 2008; Dasgupta et al., 2014). In 2009, Cyclone Aila affected 3.9 million people, with estimated damages of USD 270 million (EMDAT, 2009).
Crest levels of the coastal dikes were recently designed for an event with a 25-year return period under the Coastal Embankment Improvement Project (CEIP) (BWDB, 2013). A storm surge event with a 25-year return period was considered in this study for the generation of different scenarios. Under the CEIP, the crest levels of the dikes were designed considering wave actions, astronomical tides and the required freeboard. Raising the crest level was considered the only mitigating measure. Various studies on the coastal areas of Bangladesh (e.g. Karim and Mimura, 2008; IWM, 2005; Azam et al., 2004; Madsen and Jakobsen, 2004; CSPS, 1998; Flather, 1994) considered flooding only due to overtopping of the dikes during storm surges. The effects of breaching of the dikes due to piping and scouring on the landside during cyclones have not been studied. The coast of Bangladesh is frequently hit by severe cyclones (five cyclones between 1995 and 2010, Dasgupta et al., 2014). The Bangladesh Water Development Board (BWDB) is responsible for the operation and maintenance of these dikes and lacks the funds to conduct proper repair of damaged dikes subsequent to any severe cyclone. As a result the dikes remain vulnerable to breaching. Identifying the critical location(s) of dike breaching and prioritizing the repair of the critical location are likely to reduce the likelihood of breaching.
Moreover, non-structural measures for flood risk management such as land use zoning using a flood risk map (FRM) and a probabilistic flood map (PFM) to locate the vulnerable areas are currently unavailable for the coastal areas of Bangladesh. Flood zoning can be a useful risk mitigation measure as land use governs the exposure and may aggravate the hazard (Barredo and Engelen, 2010).
Furthermore, the intensity and frequency of these tropical cyclones are likely to increase in the future due to climate change. It is projected that by the year 2100, the frequency of the most intense cyclones will increase substantially and the intensity of tropical cyclones will increase by 2 % to 11 % due to global warming (Knutson et al., 2010). Flooding by tropical cyclones will also increase in the future as a result of sea level rise (SLR) (Woodruff et al., 2013). SLR and sea surface temperature (SST) will affect the cyclone-induced storm surge height in the Bay of Bengal (Karim and Mimura, 2008). With increasing SST, the storm surge height may increase from 21 % to 49 %, and with SLR, the flood depth due to storm surges may increase by 30 %–40 % (Karim and Mimura, 2008). The land subsidence in the delta will exacerbate the effect of SLR. By the year 2100 the annual estimated damage due to tropical cyclones may increase by USD 53 billion (Mendelsohn et al., 2012).
At present, a flood forecasting system is not available for the coastal region of Bangladesh. The BWDB, which is mandated to protect the area, does not have a clear picture about the inundation patterns corresponding to various climatic conditions. Moreover, identifying zones in the embankment critical to flooding in the polder will help BWDB in prioritizing their maintenance. This paper presents a methodology to identify the critical location of dike breach due to cyclones, generating a FRM and a PFM for the breaching of dikes by cyclone-induced storm surges. Different scenarios of storm surges were formulated by considering storms of different frequencies with varying tidal conditions, the angle of the cyclone at landfall and SLR. A cyclone event with a 25-year return period was considered in this research. A coastal polder (Polder 48) of southern Bangladesh was selected as the study area.
Resource-intensive adjustment of the protective structures for the deltas around the world will be required as the frequency and intensity of cyclones will increase with climate change. Along with structural measures, non-structural tools, such as land use zoning with the help of flood risk maps and probabilistic flood maps, have the potential to reduce risk and damage. The identification of the critical locations of breaching and intensification of the maintained effort for these locations can potentially prevent a disaster. The method presented in this research can be utilized for vulnerable deltas around the world, even though the coastal region of Bangladesh was selected as the case study area.
2 Study area
Polder 48, which was considered as the study area for this research, is surrounded by dikes and has a sea-facing dike of about 20 km length on the southern side of the polder. The polder is located on the south-western coast of the Bangladesh Delta (Fig. 1), stretching from 21°50′28″ N, 90°05′17″ E to 21°50′06″ N, 90°14′14″ E. The outline of the study area in Fig. 1 (also in Fig. 3) depicts the dike alignment of the study area as well. The area is also known as Kuakata, and it is in the administrative zone of the Kalapara Sub-district (Upazilla) of Patuakhali District. It has an area of 50.75 km2 with 24 240 inhabitants according to the 2011 census (BBS, 2012). Most of the inhabitants are farmers and fishermen (Nasreen et al., 2013). Shrimp culture and tourism are also part of the economic activities. The land use is classified by the Ministry of Land of Bangladesh into the following four classes: rice fields, settlements, shrimp ponds and water bodies (rivers and canals). The climate and agricultural practices of Kuakata are similar to those of the rest of Bangladesh. The average yearly rainfall in Kuakata is 2590 mm and the annual average temperature is 25.9 °C (Climate-Data, 2016). Rabi (November–February), Kharif-I (March–May) and Kharif-II (June–October) are the three seasons for growing crops (DAE, 2009). The elevation of 80 % of the area is 1.55 m below PWD, the vertical datum established by the Public Works Department of Bangladesh, which is 0.46 m below mean sea level. Land level surveys at different times have indicated that this polder is facing land subsidence. Brown and Nicholls (2015) reported the estimated mean subsidence rate of the Ganges–Brahmaputra–Meghna (GBM) delta to be 5.6 mm yr⁻¹, with an overall median of 2.9 mm yr⁻¹.
Figure 1. Location map of the study area, Polder 48 (Kuakata).
The area was severely affected by the recent storms Sidr, Aila and Mohasen in 2007, 2009 and 2013 respectively. For example, during Cyclone Sidr, 94 people died and 45 % of the crops were lost in the Kalapara Sub-district (Ahamed, 2012).
Figure 2. Methodological approach followed in this study.
Andharmanik, Galachipa and Khaprabhanga rivers are in the east, west and north of the study area respectively, whereas the Bay of Bengal is on the southern side of the study area (Fig. 1). Galachipa River is the widest among the rivers surrounding the area. On the southern side, the study area has a seashore of 20 km width, which is partly protected by the mangrove forest at several locations. There is a narrow sea beach on the south-western side of the area. The western part of the sea-facing dike was overtopped during Cyclone Sidr, causing flooding inside the polder (Hasegawa, 2008). The loss of livestock and food grains was such that it created partial deficiency of food in Kuakata (TANGO International, 2010). The average crest level of Polder 48 on the northern side is 4.5 m PWD, and on the southern side (sea-facing side), it is 6 m PWD (Islam et al., 2013). The existing embankments of 17 polders of the region, including Polder 48, were redesigned and rehabilitated during the first phase of the CEIP (Islam et al., 2013). The CEIP proposed a crest level of 7.36 m PWD for the dike of Polder 48 (Islam et al., 2013).
3 Methodology
The methodology followed is presented in Fig. 2 and described in the following sections.
## 3.1 Setting up of 1-D–2-D coupled model
In order to build a 1-D–2-D inundation model, field measurements (land level surveys, observed water levels, canal alignments and cross sections of the river and canals) and information from remote sensing (satellite imagery) were gathered (Fig. 3). The Institute of Water Modelling (IWM) of Bangladesh collected hydraulic, hydrologic and land-level data of the study area (along with other polders) in the framework of the feasibility study of the Coastal Embankment Improvement Project (CEIP). The IWM has kindly provided the measurement data for the study area.
The digital elevation model (DEM) was generated by combining the land-level surveys conducted by the IWM and FINNMAP. The land-level survey by the IWM (conducted in 2012) did not cover the whole study area. FINNMAP conducted a topographic survey of the study area in 1988 (MIWF, 1993). The differences in elevation between land surveys by IWM and FINNMAP indicated the land subsidence. An average subsidence was computed, which was used to update the elevations of the FINNMAP survey for the areas within Polder 48 for which survey data from the IWM were not available. The combined DEM has a resolution of 50 m. The same DEM was used for the simulations of the year 2100 without any corrections for further subsidence. Subsidence of the coast in the past has been reported by Brown and Nicholls (2015) and was verified with the survey data from the IWM and FINNMAP. Subsidence may continue in the future, but in the absence of scientific studies it was not considered for the future scenarios in this research. It is noteworthy that if subsidence continues, then the effect of the SLR may be increased, and the results reported in this research should be treated to some extent as underestimated values.
The bathymetry of the sea near the coast was collected from the Global Bathymetric Chart Of the Oceans (GEBCO) (Smith and Sandwell, 1997). The land use data were collected from the Ministry of Land of Bangladesh. MODIS reflectance data were used for the analysis of previous flood events. The methodology and equations suggested by Hoque et al. (2015) were used to analyse the MODIS reflectance data to determine flood extents during previous flood events. The intention was to utilize the flood extent generated from MODIS reflectance for calibration of the hydraulic model. However, no flood images from MODIS were available during the simulation period.
The river analysis tool HEC-RAS (version 5.0) from the US Army Corps of Engineers was used to develop the 1-D–2-D coupled inundation model. The flow in the river was modelled in 1-D, whereas the flow over the floodplain was modelled in 2-D. HEC-RAS 5.0 is a free tool which can simulate 1-D, 2-D and 1-D–2-D coupled models for steady and unsteady flow. The 2-D module of HEC-RAS provides the option to simulate flow of water either with the diffusion wave equation or with the full shallow water equation (St. Venant equation). The availability of irregular flexible mesh in HEC-RAS and the option for faster simulations led to the selection of HEC-RAS 5.0 as the modelling tool. Data utilized for developing the model and their sources are presented in Table 1.
Table 1. The data used in developing the mathematical model and their sources. IWM, GEBCO and JSCE stand for the Institute for Water Modelling, the General Bathymetric Chart of the Oceans and the Japan Society of Civil Engineers respectively.
The 1-D section of the model was developed and calibrated using the information shared by the IWM. The 1-D part of the developed model was calibrated for non-flood conditions as measured discharge and water level data during a cyclone event were unavailable. The model was simulated using discharge as the west boundary and water level as the east boundary conditions (Fig. 3). The calibrated 1-D model was then coupled with the 2-D model of flow over the floodplain using the DEM of the study area.
For the 1-D–2-D inundation model, a computational mesh with a flexible shape was developed in HEC-RAS (Fig. 3). HEC-RAS generates meshes with irregular shapes. The rectangular cells of the developed 2-D mesh had a resolution of 25 m, and the non-rectangular cells had areas ranging from 625 to 1282 m2. The roughness coefficient (Manning's n varying from 0.025 to 0.05) was provided according to the land use of each cell. A sensitivity analysis as suggested by Hall et al. (2005) was carried out by varying Manning's roughness coefficient n before the calibration of the 2-D inundation model.
Figure 3. Schematic diagram of the study area with location of control structures and gauges and the considered breach locations.
Building and calibrating the 1-D model was the preliminary step for developing the 1-D–2-D coupled model. The water bodies surrounding the study area were included in the 1-D model. The study area has Khaprabhanga River on the northern side and the sea on the southern side (Fig. 3). The connection of Khaprabhanga River with other rivers was not considered in the model. This was due to the fact that storm surges are observed during the pre- or post-monsoon periods, whereas fluvial floods are observed during the monsoon. Flow through rivers did not play a major role during the previous cyclones. The western and eastern side of the embankment have mangrove forests between the rivers and the embankment (Fig. 3).
For the river, the surveyed cross sections were used in the 1-D model (Fig. 4). The storm surge on the sea was conceptualized as a water surface profile in a 1-D channel on the southern side of the study area (Fig. 3). The GEBCO bathymetry (Fig. 5) was used for the channel. An alternative was to develop a 2-D model for the coastal hydrodynamics. However, as the coast of Bangladesh is flat and shallow a large area of the sea would have been included in the model. As the focus was on studying the inundation of Polder 48 and not the coast, we followed a simpler representation of the storm surges using a 1-D model. The synthetic water level data for boundaries of the model were generated by following the tidal water level pattern and the storm surge height considered for all the scenarios (Table 2). The water surface profile corresponding to each scenario (Table 2, discussed in Sect. 3.2) was considered as the profile in the 1-D model of the seaside (Fig. 6).
The dense canal network of 122 km, inside the study area, is connected with the Khaprabhanga River, which regulates the in- and outflow into the river network through a system of 13 control structures. The regulators remain closed during cyclones, making the canal network isolated. Therefore, the canal network inside the polder was not included in the 1-D model. However, the simulation of the overland flow consequent on breaching of the dike will be affected by the canal geometry, and therefore, the wider and larger canals were included in the DEM.
Figure 4A typical cross section of the Khaprabhanga River.
The geometry and propagation of a dike breach depend primarily on the storm surge height, the angle of landfall, soil properties and wave action. The coastal embankments of Bangladesh are usually earthen. The geometrical properties of the dike breach and the time required for breaching were calculated following the guidelines of the US Bureau of Reclamation. An S curve was used for the propagation of the breach with time (Oumeraci, 2006). As the breach geometry is determined by these factors rather than being independent, it was not treated as a separate parameter in the scenario development.
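The S-curve growth of the breach with time can be sketched as follows; the logistic form, the final width and the formation time are illustrative assumptions, not values from the US Bureau of Reclamation guidelines:

```python
import math

def breach_width(t, t_f, b_final):
    """S-curve (logistic) growth of breach width with time.

    t: time since breach initiation (h)
    t_f: total breach formation time (h)
    b_final: final breach width (m)
    The logistic shape is one common choice for the S curve
    (Oumeraci, 2006); the steepness k = 10/t_f, centring the
    curve at t_f/2, is an assumption made here.
    """
    k = 10.0 / t_f
    return b_final / (1.0 + math.exp(-k * (t - t_f / 2.0)))

# Hypothetical values: 60 m final width, 1.5 h formation time.
for t in (0.0, 0.75, 1.5):
    print(round(breach_width(t, 1.5, 60.0), 1))
```

At mid-formation the width is half the final value, and the curve flattens towards both ends, mimicking the slow initiation and stabilization phases of an earthen-dike breach.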
Figure 5Cross section for the 1-D network on the seaside.
To ensure model stability, a maximum spacing between the computational points was imposed, computed using Samuels' (1989) formula, presented in Eq. (1):

$\Delta x \le 0.15 \times D/S_{0}, \qquad \text{(1)}$

where Δx is the spacing between the computational points, D is the average bank-full depth of the channel and S0 is the average slope of the channel. The maximum spacing between cross sections was calculated to be 300 m. The river has a steeper bed slope than the longshore slope of the sea bathymetry and therefore requires a smaller Δx for stability; using the same Δx for the seaside channel reduces instability on the foreshore as well.
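A minimal sketch of this spacing criterion, with hypothetical depth and slope values chosen so that the result matches the 300 m reported above:

```python
def max_spacing(depth, slope, factor=0.15):
    """Maximum spacing between computational points after
    Samuels (1989): dx <= 0.15 * D / S0."""
    return factor * depth / slope

# Assumed values: D = 4 m and S0 = 0.002 are hypothetical inputs
# consistent with the 300 m maximum spacing reported in the text.
print(round(max_spacing(4.0, 0.002)))  # -> 300
```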
Figure 6Variation of water level at three locations (chainage 0, 12 940 and 20 300) along the 1-D channel on the seaside according to a specific scenario (out of 72 scenarios).
As suggested by Fromm (1961), the Courant number was kept less than or equal to 1.0 to maintain the stability of the numerical model by controlling the time step. The Courant number was calculated using Eq. (2):

$Cr = V \times \Delta t/\Delta x, \qquad \text{(2)}$

where Cr is the Courant number, V is the velocity, Δt is the time step and Δx is the spacing between the cross sections.
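The Courant criterion and the corresponding time-step limit can be expressed as short helpers; the 2 m s−1 velocity below is an assumed example value, not taken from the study:

```python
def courant(velocity, dt, dx):
    """Courant number Cr = V * dt / dx (Eq. 2)."""
    return velocity * dt / dx

def max_timestep(velocity, dx, cr_max=1.0):
    """Largest time step keeping Cr <= cr_max."""
    return cr_max * dx / velocity

# Hypothetical flow of 2 m/s on the 300 m grid:
dt = max_timestep(2.0, 300.0)       # 150 s
print(dt, courant(2.0, dt, 300.0))  # -> 150.0 1.0
```

Solving Eq. (2) for Δt at the stability limit gives the largest admissible time step for a given grid spacing and velocity.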
## 3.2 Cyclonic scenarios considered
Different scenarios were developed considering the probability of the occurrence of cyclones, the angle of landfall, SLR due to climate change, diurnal, semi-diurnal and seasonal variation of tides, locations of breaching of the dike and geometrical properties of the breach.
• Frequency of the cyclone. A cyclone with a 1-in-25-year return period was considered for all the scenarios, as this is used as the design criterion for the dikes (BWDB, 2013). A total of 19 previous cyclones under different tidal conditions were simulated by the IWM using a 2-D model of the Bay of Bengal. A statistical analysis of these model results was conducted to derive the storm surge height corresponding to a cyclone with a 25-year return period (Islam et al., 2013). Due to a lack of data, changes in the probability of the occurrence of cyclones in the future were not considered.
• Angle of landfall. The angle of landfall affects the height of storm surges: the surge height increases with the angle between the storm track and the coastline (Azam et al., 2004). The angle of attack also governs the wind speed, one of the parameters determining the height of cyclone-induced storm surges (Azam et al., 2013).
• Tides. The difference between the storm surge at high tide and low tide is 1.2 m for the study area (Azam et al., 2004). The average seasonal variation of the tidal range is 1.3 m.
• Sea level rise. The coast of Bangladesh may be severely affected by SLR: one-quarter of the land may be lost by 2100, directly affecting 3 million people (Ericson et al., 2005). The IPCC published its Fifth Assessment Report (AR5) in 2013. Among the scenarios considered in AR5, RCP2.6 (Representative Concentration Pathway 2.6) is the most optimistic and RCP8.5 the worst in terms of carbon emissions, temperature rise and SLR. The mean SLR at the end of the 21st century is estimated to be 0.40, 0.47, 0.48 and 0.63 m for RCP2.6, RCP4.5, RCP6.0 and RCP8.5 respectively (Stocker et al., 2013). For this study, RCP8.5 with an SLR of 0.63 m was used for developing the scenarios.
• Location of breach. The sections of the sea-facing dike of the study area protected by mangrove forest, sand dunes and a wide beach are least likely to be breached due to storm surges. The study considered breach locations with the least protection. The locations considered for dike breaching as well as the mangrove forest around the study area are shown in Fig. 3.
A scenario matrix consisting of 72 scenarios was generated by combining different phases of tides, angle of landfall, SLR and breach locations (Table 2). A single breach was considered for all the scenarios. The highest storm surge height among all the developed scenarios was 7.2 m PWD, obtained for an angle of landfall of 230°, a high tidal phase during spring tides and SLR, with dike breaching at any of the chosen locations. The highest storm surge height as the boundary condition, combined with breaching in the western, central and eastern parts of the dike, was considered the worst case and denoted Scenario S1, S2 and S3 respectively (Fig. 2). Flooding due to overtopping of the dikes was not considered, as the crest level (7.36 m PWD) was higher than the highest storm surge height (7.2 m PWD).
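Building a scenario matrix of this kind amounts to taking the Cartesian product of the factor levels. The levels below are hypothetical choices that merely reproduce the count of 72; the paper's actual factorization of tides, angles, SLR and breach locations may differ:

```python
from itertools import product

# Hypothetical factor levels (3 x 2 x 3 x 4 = 72), chosen only to
# reproduce the scenario count; angle values other than 230 are assumed.
breach_locations = ["west", "central", "east"]
slr = ["no SLR", "RCP8.5 (+0.63 m)"]
landfall_angles = [210, 230, 250]          # degrees
tidal_phases = ["spring high", "spring low",
                "neap high", "neap low"]

scenarios = list(product(breach_locations, slr,
                         landfall_angles, tidal_phases))
print(len(scenarios))  # -> 72
```

Each tuple in `scenarios` fully specifies one simulation run, which makes it straightforward to loop over the matrix and generate boundary conditions per scenario.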
Table 2Storm surge heights corresponding to the different scenarios considered. The bold values are the storm surge heights for the worst-case scenarios.
To identify the critical location of breaching, the HEC-RAS simulation results of the three worst-case scenarios, S1, S2 and S3, were compared based on the total flooded area and the estimated flood damage. Using the calculated damage and the probability of occurrence of the event, a risk map was generated for the critical locations of the sea-facing dike. A probabilistic flood map (PFM) was generated from the flood maps of the 72 scenarios (Table 2; Output 2 in Fig. 2). As the storm surge height suggested by Islam et al. (2013) corresponds to an event with a 25-year return period, the PFM generated in this study also corresponds to a 1-in-25-year return period.
## 3.3 Estimation of damage due to floods
A comprehensive damage calculation should involve both direct and indirect damage due to floods (Büchele et al., 2006). Direct damage is caused by physical contact of properties and human beings with floodwater. Indirect damage is caused by interruption of services, production and transportation and degradation of health due to floods. Due to a lack of data, only the direct damages to properties were calculated for the study area. The damage was considered a function of flood depth. The land use of the study area was classified by the Ministry of Land of Bangladesh as settlements, rice fields, shrimp ponds and water bodies (rivers/canals). Only the tangible damage was considered, and no environmental damage was calculated. Damage to the canal network was not considered. The damage in a flood event was calculated using Eq. (3).
$D = \sum_{i=1}^{n} f(x_{i}) \times A_{i}, \qquad \text{(3)}$
where D is the total direct tangible damage in a flood event, n is the total number of computational cells within the flooded area, xi is the flood depth of cell i, f(xi) is the damage function for the land use of flooded cell i and Ai is the area of cell i.
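The per-cell damage aggregation can be sketched as below, interpreting f(xi) as damage per unit area; the damage functions, land classes and cell size are placeholders for illustration, not the study's fitted curves:

```python
# Placeholder depth-damage functions (EUR per m2 of cell area);
# the real curves are the adapted ones shown in Fig. 7.
damage_funcs = {
    "settlement": lambda d: min(25.0 * d, 50.0),
    "rice":       lambda d: min(0.10 * d, 0.20),
    "shrimp":     lambda d: 0.09 if d > 2.0 else 0.0,
}

def total_damage(cells, cell_area=900.0):  # 30 m x 30 m grid assumed
    """Total direct damage: sum over flooded cells of the land-use
    damage function evaluated at the cell depth, times cell area.

    cells: iterable of (land_use, flood_depth_m) tuples."""
    return sum(damage_funcs[use](depth) * cell_area
               for use, depth in cells)

cells = [("settlement", 0.8), ("rice", 1.2), ("shrimp", 0.5)]
print(round(total_damage(cells), 1))  # -> 18108.0
```

In the study this lookup was performed in ArcGIS on the simulated depth raster and the land use raster; the sketch shows the same accumulation in plain Python.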
Depth–damage curves for the different land classes in the study area were developed by adapting depth–damage curves from the literature (Fig. 7). Reese et al. (2010) calculated flood damage as a percentage of the property value of buildings, categorized by construction material. The buildings of the study area are primarily built of timber because of its low cost and easy availability, so the depth–damage curve suggested by Reese et al. (2010) for timber buildings was used as the basis for the curve for settlements (residential areas). Simple Action for the Environment (SAFE) carried out research on the average value of properties in rural areas of Bangladesh (SAFE, 2011); these property values were used to update the damage values of Reese et al. (2010). Muktadir and Hasan (1985) reported that rural houses in Bangladesh are built around a large courtyard, leaving considerable open, unoccupied space around the buildings. The damage curve for the residential area was therefore applied to the buildings and not to the courtyards. A satellite image from Google Earth was downloaded and georeferenced, and the built-up and open areas were delineated manually in ArcGIS; this analysis indicated that about half of the settlement area is without buildings. Therefore, 50 % of the settlement area was considered to suffer no damage.
The cultivation of rice involves flooding the field with a few centimetres of water. However, if the water level rises and the rice plants are submerged, productivity decreases. The damage to rice also depends on the flood duration: if the plants are continuously under water for more than 2–3 days, the damage can reach 80 % (Chau et al., 2014). The simplified depth–damage curve for rice fields suggested by Chau et al. (2014), which neglects flow velocity and flood duration, was used in this study (Fig. 7).
Shrimp ponds are surrounded by embankments, so there is no damage until the flood level exceeds the embankment crest. Once it does, the shrimp escape, causing a loss of the total investment. To take this into account, the investment made by farmers was assessed using a study by Fatema et al. (2011), according to which the investment in a shrimp pond in the study area is about EUR 0.09 m−2. Based on local practice, the banks of the shrimp ponds were assumed to be 2 m above the adjacent land, and the depth–damage curve (Fig. 7) was modified accordingly.
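Depth–damage curves of this kind are typically applied as piecewise-linear interpolations between tabulated breakpoints. The breakpoints below are illustrative only, not the values of Reese et al. (2010) or Chau et al. (2014):

```python
def interp_damage(depth, points):
    """Piecewise-linear depth-damage curve.
    points: sorted (depth_m, damage_fraction) breakpoints."""
    if depth <= points[0][0]:
        return points[0][1]
    for (d0, f0), (d1, f1) in zip(points, points[1:]):
        if depth <= d1:
            return f0 + (f1 - f0) * (depth - d0) / (d1 - d0)
    return points[-1][1]

# Illustrative timber-building curve (damage as fraction of value).
timber = [(0.0, 0.0), (0.5, 0.1), (1.0, 0.4), (2.0, 0.8), (3.0, 1.0)]
print(round(interp_damage(0.75, timber), 3))  # -> 0.25

# Shrimp ponds: total loss only once the 2 m bank is overtopped.
shrimp = lambda d: 1.0 if d > 2.0 else 0.0
```

The threshold behaviour of the shrimp-pond curve contrasts with the gradual curves of the other land classes, which is why it is modelled as a step rather than an interpolation.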
Figure 7Depth–damage curves for different land use classes.
The damage calculations were carried out using ArcGIS. The simulated flood depth and land use for each grid cell were used as input, and the damage in each grid cell was computed using the depth–damage curve corresponding to that land use. The damage for each scenario was estimated using this procedure.
## 3.4 Calculation of flood risk and generation of risk map
Flood risk assessment is an essential part of risk management. Flood risk maps reveal the spatial distribution of risk and help identify areas requiring mitigation measures. To examine the spatial variation of risk, a flood risk analysis was carried out and a risk map was generated considering dike breaching at the critical locations. Van Manen and Brinkhuis (2005) and Klijn (2009), as part of the FLOODsite project, quantified the flood risk of dike failure for polders in the Netherlands, defining risk as the product of the probability of occurrence of the event and its consequences, following Helm (1996). Equation (4) was used to calculate the risk due to flooding:
$R = P_{\mathrm{F}} \times S, \qquad \text{(4)}$
where R is risk, PF the probability of occurrence of the flood hazard and S the consequences.
The exceedance probability (return period) of the cyclone-induced storm surge was used as the probability of occurrence of the hazard. The probability of flooding within a protected area is not the same as the probability of the hazard; it also depends on the probability of failure of the dike. This probability is difficult to compute, as dike failure also depends on dike maintenance, for which no information was available. Here we assumed that the probability of occurrence of the hazard and the probability of dike failure are the same.
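Under these assumptions, per-cell risk reduces to a constant annual exceedance probability (0.04 for a 1-in-25-year event) multiplied by the damage. A minimal sketch follows; the risk-class breaks used for mapping are assumptions for illustration, not thresholds from the study:

```python
# Risk per Eq. (4): R = P_F * S, with P_F the annual exceedance
# probability of the 1-in-25-year surge and S the damage (EUR).
P_F = 1.0 / 25.0

def cell_risk(damage_eur):
    """Expected annual damage (EUR per year) for one cell."""
    return P_F * damage_eur

def risk_class(risk, breaks=(100.0, 1000.0)):
    """Classify into low/medium/high; break values are assumed."""
    if risk < breaks[0]:
        return "low"
    if risk < breaks[1]:
        return "medium"
    return "high"

r = cell_risk(50000.0)    # hypothetical per-cell damage of EUR 50 000
print(r, risk_class(r))   # -> 2000.0 high
```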
## 3.5 Probabilistic flood map
Purvis et al. (2008) stated that a risk assessment for the most probable scenario cannot take into account the impact of low-probability scenarios, stressing the necessity of a probabilistic risk analysis. The equation suggested by Purvis et al. (2008) for probabilistic risk analysis was adjusted and used in this research to calculate the probability of flooding of each cell; consistent with the variable definitions below, it takes the form

$P_{i} = \sum_{j=1}^{M} P_{\mathrm{f}j} \times F_{ij}, \qquad \text{(5)}$

where $P_{i}$ is the probability of flooding at cell $i$; $P_{\mathrm{f}j}$ is the probability of reaching a certain storm surge level in simulation number $j$; $F_{ij}$ is a binary value indicating whether cell $i$ is flooded in simulation $j$; $j = 1, 2, \dots, M$, where $M$ is the number of scenarios considered (=72); and $i = 1, 2, \dots, N$ indexes the computational grid cells of the polder area, with $N$ the number of cells.
Equation (5) was used in this study to calculate the probability of flooding in each cell. The probabilistic flood map (PFM) was calculated using the results of all the scenarios.
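The per-cell flooding probability implied by the variables of Eq. (5) can be sketched as a weighted count of the scenarios in which the cell floods; the scenario weights, depth maps and 0.5 m threshold below are illustrative:

```python
# P_i = sum_j Pf_j * F_ij over the M scenarios, where F_ij is 1 if
# cell i floods deeper than the threshold in scenario j.

def flood_probability(depth_maps, weights, threshold=0.5):
    """depth_maps: list of per-cell flood-depth lists, one per scenario.
    weights: per-scenario probabilities Pf_j."""
    n = len(depth_maps[0])
    return [sum(w * (depths[i] > threshold)
                for w, depths in zip(weights, depth_maps))
            for i in range(n)]

maps = [[0.8, 0.2, 1.4],   # scenario 1 depths (m), 3 cells
        [0.6, 0.7, 0.3],   # scenario 2
        [0.1, 0.9, 2.0]]   # scenario 3
weights = [0.5, 0.3, 0.2]  # hypothetical Pf_j, summing to 1
probs = flood_probability(maps, weights)
print([round(p, 3) for p in probs])  # -> [0.8, 0.5, 0.7]
```

Applied to the study, `depth_maps` would hold the 72 simulated depth rasters flattened to cell lists, and the resulting `probs` field is the PFM.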
4 Results and discussion
The 1-D part of the developed 1-D–2-D model was calibrated by comparing observed and simulated discharges and water levels. The performance indicators used were the coefficient of determination (R2), the root mean square error (RMSE) and the mean absolute error (MAE): for discharge, values of 0.98, 2.15 m3 s−1 and 1.68 m3 s−1 were obtained; for water level, 0.98, 0.09 m and 0.08 m. The average discharge and water level over the simulation period were 5.68 m3 s−1 and 0.82 m respectively. The simulation period for calibration coincided with the surges of Cyclone Sidr (14 to 17 November 2007). The simulation results indicate that the sea-facing dike was overtopped and the area inside the polder was inundated, in line with the survey conducted by the Japan Society of Civil Engineers (JSCE) (Hasegawa, 2008).
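The three performance indicators can be computed as below on a small synthetic observed/simulated pair (the values are illustrative, not the calibration data); R2 here is one common coefficient-of-determination form:

```python
import math

def r2(obs, sim):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mo = sum(obs) / len(obs)
    ss_res = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mo) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

def rmse(obs, sim):
    """Root mean square error."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def mae(obs, sim):
    """Mean absolute error."""
    return sum(abs(o - s) for o, s in zip(obs, sim)) / len(obs)

# Synthetic water levels (m), for demonstration only.
obs = [4.0, 5.5, 6.0, 7.5]
sim = [4.2, 5.4, 6.3, 7.3]
print(round(rmse(obs, sim), 3), round(mae(obs, sim), 3))  # -> 0.212 0.2
```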
The coupled 1-D–2-D model could not be fully calibrated because no flood extent maps were available for recent cyclones. However, the 2-D part of the model was pseudo-calibrated using MODIS reflectance data, which were used to analyse the inundation extent, although cloud cover during the cyclones posed considerable challenges. The survey conducted by JSCE after Cyclone Sidr investigated flood extent and depth but provided the flood depth for only one location inside the study area; this location was used for the calibration of the 2-D model. The difference between the reported and simulated flood depths was 4.5 %. Prior to the calibration of the 2-D model, a sensitivity analysis of the roughness coefficient (Manning's n) was carried out. The analysis indicated that the inundation model is not highly sensitive to the roughness coefficient, with the areas of low flow (locations furthest from the dike breach) being the most sensitive. The sensitivity analysis was conducted for breaching in the western part of the dike only; breaching at the other locations was assumed to have similar effects, as the area inside the polder is flat and low-lying, with mostly farmland near the dike.
This 1-D–2-D model, which had limited calibration points, was then used to simulate the developed scenarios. The simulated results were used to analyse flood depth, extent and damage due to flooding, and the FRM and the PFM were generated from them.
## 4.1 Inundation corresponding to three worst-case scenarios
Among the simulated scenarios, the results of three worst-case scenarios (Scenario S1, S2 and S3) were compared to identify the critical location of breaching. The corresponding flood maps for the worst-case scenarios are presented in Fig. 8.
Figure 8Flood extent corresponding to three worst-case scenarios of dike breaching in the central, eastern and western section of the dike.
Flood extents corresponding to all the scenarios presented in Table 2 were compared to understand the effects of SLR, diurnal and seasonal tidal variation and the angle of the cyclone at landfall. The flood extents of the different scenarios considering breaching in the central part of the sea-facing dike are presented in Fig. 9.
Figure 9Comparison of flooded areas corresponding to different scenarios considering the breaching in the central part of the sea-facing dike.
Moreover, the flood extent for the three worst-case scenarios was computed separately for the different land classes (Table 3). The analysis of the flood extent for different flood depths, based on the considered land uses, is presented in Fig. 10. As described in Sect. 3.2, these scenarios combine the highest storm surge height among all the developed scenarios, 7.2 m PWD (Table 2), with breaching at the western, central and eastern parts of the dike (Scenarios S1, S2 and S3 respectively).
Table 3Flooded areas of different land classes corresponding to the three worst-case scenarios.
Figure 10Flooded areas for different ranges of flood depths corresponding to different scenarios.
## 4.2 Comparison of calculated damages
The damage due to flooding was calculated using the depth–damage curves for different land classes. The calculated damage for different land classes and damage for different flood depths corresponding to the three worst-case scenarios are presented in Table 4 and Fig. 11.
Table 4Calculated flood damages for different land classes corresponding to different scenarios.
Figure 11Variation of estimated flood damages with varying ranges of flood depths corresponding to Scenario S1, S2 and S3.
Figures 10 and 11 show the flooded area and the damage for different ranges of inundation depth respectively. The flooded area and the damage were highest for inundation depths of 0.5 to 1.0 m.
## 4.3 Risk map for the worst-case scenario
The flood risk map for breaching at the critical location of the dike is presented in Fig. 12. Comparison of the flooded area and the damage due to flooding for the three worst-case scenarios identified Scenario S1 as corresponding to the critical location of breaching; this identification is described in Sect. 4.5.
Figure 12Flood risk map corresponding to the breaching at the critical location of the dike. The following three classes of risk are shown: high, medium and low. The four land uses considered are shown as well.
## 4.4 Probabilistic flood map
Although inundation maps are widely used for spatial planning and flood mitigation measures, the uncertainty of mathematical modelling affects their output (Alfonso et al., 2016). To account for this uncertainty, the use of probabilistic flood maps has been suggested (Domeneghetti et al., 2013). The probabilistic flood map was calculated from the inundation maps of the 72 scenarios considered in the study, using a threshold flood depth of 0.5 m: the developed damage curves suggest that damage below this depth is minimal, and the threshold is also consistent with the widely accepted "living with floods" philosophy in Bangladesh. The threshold was used only in developing the PFM, not in the estimation of flood damage. The calculated probabilistic flood map is presented in Fig. 13. It indicates the likelihood of being flooded and can assist in planning future land use zoning, for example by restricting further development in the floodplains.
Figure 13Probabilistic flood map of the study area. Varying colours indicate probabilities of obtaining flood depths more than 0.5 m.
## 4.5 Discussion on the results
The flood extents of the 72 simulated scenarios were compared. The flood extent varies with the conditions of each scenario, such as the daily (high and low tide) and biweekly (spring and neap tide) tidal variation, sea level rise and the angle of landfall (Fig. 9).
Three worst-case scenarios (Scenario S1, S2 and S3) were compared by generating flood maps and calculating total flooded areas and total damages. The flood maps for Scenario S1, S2 and S3 (Fig. 8) demonstrated that a large area was flooded for all the breach locations. At least 25 % of the total area of Polder 48 was inundated for the three scenarios (Table 3). In the case of all three scenarios considered, the inundation area with flood depths from 0.5 to 1.0 m was larger than the inundation areas with other flood depths (Fig. 10). The inundation area with flood depths more than 1 m was largest for Scenario S3, due to the depressions close to the dikes (Fig. 10). The rice fields were flooded most, while the shrimp ponds were flooded least in all the scenarios (Table 3).
Flood risk was quantified from the damage due to floods (negative consequences) and the probability of occurrence. The total estimated damages due to flooding for Scenarios S1, S2 and S3 were EUR 10.7, 10.6 and 8.6 million respectively (Table 4). For all the scenarios, a 1-in-25-year cyclone event was considered. The damage in the settlements was greater than in the other land classes for all the scenarios (Table 4). The rice fields were flooded most but did not experience the highest damage (Tables 3 and 4), which is explained by the high damage values of settlements compared to rice fields (Table 4). The damage to crops depends on the flood depth, duration and overland flow velocity; for simplification, only the dependence on flood depth was used. As the probability of cyclones was the same for all the scenarios, the calculated damage governed the estimated flood risk, i.e., higher damage to the settlements translated into a higher risk of flooding. The primary economic activity of the inhabitants of the study area is farming (BBS, 2011), and most of the inhabitants are poor, with a poverty rate of 0.628 (Alamgir et al., 2018). Even though the estimated damage and risk of flooding for crops were much lower than for the other land uses, crop losses will affect the people living in the study area most, as they depend on rice farming for their livelihood (Nasreen et al., 2013). Hasan et al. (2004) found that the dependence of the inhabitants of Polder 48 on sea fishing is increasing due to the loss of crops to floods, loss of productivity, lack of jobs and poverty. Fishing in the coastal region of Bangladesh yields lower economic returns, deepening poverty (Hasan et al., 2004).
The damage was highest for flood depths of 0.1 to 0.5 m in all the scenarios (Fig. 11), while the damage for inundation depths below 0.1 m was insignificant. The damage functions are expressed per unit area (per m2), so even though the damage per unit area increases significantly for inundation deeper than 0.5 m according to the developed depth–damage curves (Fig. 7), the total damage can be lower at greater depths if the corresponding flood extent is smaller.
The generated PFM indicated that the areas adjacent to the sea-facing dike have a higher probability of flooding and that the rice fields are more prone to flooding (Fig. 13). Moreover, areas protected by mangrove forest might also be flooded if an unprotected section of the dike is breached (Fig. 13), stressing the importance of proper maintenance of the dike everywhere.
The damage due to flooding was highest for Scenario S1, resulting in the highest risk of flooding. Although the total flooded settlement area for Scenario S1 was smaller than for Scenario S2 (Table 3), the estimated damage to settlements was greater (Table 4), indicating that the settlements in Scenario S1 were exposed to greater flood depths and a higher risk of flooding. Furthermore, Scenario S1 produced a similar total damage with a smaller flood extent than Scenario S2 (Tables 3 and 4). Considering these facts, Scenario S1 was selected as the worst-case scenario, and the western part of the sea-facing dike was identified as the critical location for breaching during cyclones.
The scenarios including the effect of climate change (sea level rise) produced more damage than those without. Scenarios S1, S2 and S3 were associated with the highest storm surge height; under the same set of conditions without sea level rise, the storm surge height was 6.52 m PWD (Table 2). Without the climate change impact, the damage corresponding to breaching at the eastern, central and western locations of the dike was 23.3 %, 20.5 % and 21.7 % lower respectively, and the flood extent 30.1 %, 21.67 % and 27.21 % lower respectively, than with the climate change impact.
In the risk calculation, the probability of occurrence of the storm surge and the damage caused by inundation were taken into account. In the case of dike breaching, the probability of flooding was considered the same as the probability of occurrence of the storm surge. The risk map (Fig. 12) shows that the areas adjacent to the dike breach are at higher risk and that the risk decreases as the flood propagates towards the east.
Canals are used as a mode of transportation by the inhabitants of the area, and most of the economic activity and residential areas are near the canals. The risk analysis shows that the areas at highest risk are the settlements along the canals (Fig. 12). Therefore, although the canals play a crucial role in the economy and social life of the area, they also increase the risk of flooding and the probability of higher damage in the adjacent areas.
Land use planning plays an important role in reducing vulnerability to disasters (Burby, 1998), and probabilistic flood maps (PFMs) can be used for land use planning (Alfonso et al., 2016). For a better understanding of the areas at risk of flooding due to dike breaching, the risk map and the PFM were generated for the study area (Figs. 12, 13), using the results of the 72 scenarios of the scenario matrix. In both maps, the areas adjacent to the sea dike show a higher probability of flooding due to dike breaching, while the areas further inland show a lower probability. The existing land use indicates that the areas with a lower probability of flooding are mostly rice fields (Figs. 12, 13). Land use zoning and management based on the PFM can therefore reduce vulnerability.
5 Conclusions
A 1-D–2-D coupled model was developed to investigate the inundation pattern inside a polder due to the breaching of a dike by cyclone-induced storm surges. Different scenarios were formulated and simulated with this model, and the results were used to calculate the total flooded area and the damage due to flooding. The simulated results of the three worst-case scenarios, S1, S2 and S3, were compared based on the total flooded area and estimated damage, which led to the identification of the critical location of dike breaching during a cyclone. The flood risk map and the probabilistic flood map were generated from the results of the developed scenarios to identify the areas with higher risk and higher probability of flooding.
Flood inundation for the three worst-case scenarios, S1, S2 and S3, indicated that the maximum flooded area was obtained for breaching of the central part of the sea-facing dike, and the highest flood depth was also obtained for Scenario S2 (breaching in the central part). The damage for Scenarios S1 (breaching in the western part) and S2 was nearly equal. From these findings it can be concluded that the flood extent, flood depth and damage depend on the breach location. Moreover, the comparison of flood damage and flood extent identified Scenario S1 as the worst-case scenario and the western part of the sea-facing dike as the critical location for breaching.
The scenarios considering the effect of climate change (sea level rise) indicated that the flood extent and damage due to flooding will increase with sea level rise.
Flood risk was calculated as the product of the probability of occurrence of a flood event and its negative consequences (damage). The generated flood risk maps indicated that, for all the scenarios, the areas adjacent to the dike and to the canals inside the polder face a higher risk of flooding. Infrastructure and households are developed near the canals for better access to transportation and livelihoods, which increases vulnerability; similarly, developing land for infrastructure and housing on the landward side of the dikes increases vulnerability. The combination of increased vulnerability and greater flood depth results in an elevated risk of flooding due to dike breaching during a cyclone.
A comparison of the inundation maps of all 72 scenarios produced the probabilistic flood map, which indicated that the rice fields are the least likely areas to be flooded and the settlements the most likely. Although the inhabitants depend mostly on agriculture, the flooding of settlements will cause the most damage and force relocation.
Measured storm surge levels for previous cyclones were unavailable; therefore, synthetic water level time series were generated using the storm surge height presented by Islam et al. (2013) for a cyclone with a 25-year return period. Estimating the probability of flooding in a protected area is complicated, and it was assumed that the probabilities of storm surge occurrence and of dike breaching are the same. Only a limited number of field observations were available to compare with the results of the 2-D model; the limited scope for calibrating hydraulic models stresses the importance of field observations before, during and after flood events. As future land use data were not available, the current land use was also used for the future scenarios.
The primary objective of the research was to present a methodology for generating FRMs and PFMs for the breaching of dikes during a cyclone. Due to the lack of data on the existing conditions and the breach history of the dike, the probability of dike breaching could not be determined. Comprehensive surveys should be conducted to determine the physical condition of the existing embankments and their breach history; using such data, a joint probability of flooding due to storm surges and dike breaching may be considered in future studies. As the beach outside the sea-facing dike was not included in the 2-D model, the effect of the mangrove forest could not be determined. A single breach location was considered for all the scenarios developed; the probability of multiple dike breaches in a polder should be studied as well. Moreover, due to a lack of data, the storm surge heights of the present scenarios were also used for the future scenarios. As the sea surface temperature will change under climate change, the height and intensity of storm surges will be affected as well, and this change should be investigated in future research. Bathymetric data with a coarse grid resolution from GEBCO were used, as measured bathymetric data for the sea were not available. Furthermore, the study relied on the existing literature for developing the depth–damage curves; field surveys to generate these curves would provide a more reliable estimate of flood damage. The developed model depends on field measurements and reasoned assumptions, both of which may introduce errors. Only direct damages were included; the inclusion of indirect damages would provide more realistic estimates.
Bangladesh is a hazard-prone country, and cyclone-induced storm surges are one of many natural disasters that affect the coast of Bangladesh. The storm surges cause severe damage to the earthen embankments/dikes protecting the coastal polders. The methodology presented in this paper to develop the 1-D–2-D inundation model, the PFMs and the risk maps and to identify the critical locations for breaching can assist in better preparedness against flooding and help in damage reduction through land use zoning and management. At present, the PFM and FRM due to storm surges and breaching of the dikes are not available for the coastal polders.
Climate change will likely increase the frequency and intensity of cyclones around the world. This will call for large investments in the improvement of existing protection structures, or in new ones, for the deltas. The identification and prioritized maintenance of critical dike breach locations can potentially prevent a disaster. Non-structural tools such as land use zoning, with the help of flood risk maps and probabilistic flood maps, have the potential to reduce the risk and the damage due to dike breaching. The method presented in this research can potentially be applied to deltas around the world to reduce vulnerability and flood risk due to the breaching of dikes caused by cyclone-induced storm surges.
Data availability
The data used in this research were provided by the Institute of Water Modelling (IWM) for research purposes only. IWM is the owner of the data. Therefore, the authors do not have the authority to share the data publicly.
Author contributions
All the authors contributed to the conceptualization, development of methodology, and writing and editing of the manuscript. In addition, MFI carried out the model simulation and analysis, and BB and IP supervised the research.
Competing interests
The authors declare that they have no conflict of interest.
Acknowledgements
The support of the Institute of Water Modelling (IWM), Dhaka, Bangladesh, in providing the surveyed data is gratefully acknowledged.
Edited by: Bruno Merz
Reviewed by: Alex Curran and one anonymous referee
References
Ahamed, S., Rahman, M. M., and Faisal, M. A.: Reducing Cyclone Impacts in the Coastal Areas of Bangladesh: A Case Study of Kalapara Upazila, Journal of Bangladesh Institute of Planners, ISSN 2075-9363, 185–197, 2012.
Alamgir, M., Furuya, J., Kobayashi, S., Binte, M., and Salam, M.: Farmers' Net Income Distribution and Regional Vulnerability to Climate Change: An Empirical Study of Bangladesh, J. Climate, 6, 65, https://doi.org/10.20944/preprints201805.0306.v1, 2018.
Alfonso, L., Mukolwe, M. M., and Di Baldassarre, G.: Probabilistic Flood Maps To Support Decision-Making: Mapping the Value of Information, Water Resour. Res., 52, 1026–1043, 2016.
Azam, M. H., Samad, M. A., and Mahboob-Ul, K.: Effect of Cyclone Track And Landfall Angle on The Magnitude of Storm Surges Along the Coast of Bangladesh in the Northern Bay of Bengal, Coast. Eng. J., 46, 269–290, 2004.
Bangladesh Bureau of Statistics (BBS): Population Census 2011, available at: http://www.bbs.gov.bd/Census2011/Khulna/Khulna/Khulna_C01 (last access: 25 February 2016), 2012.
Bangladesh Water Development Board (BWDB): Technical Feasibility Studies and Detailed Design for Coastal Embankment Improvement Programme (CEIP) (Main Report), Ministry of Water Resources, Government of the People's Republic of Bangladesh, available at: http://bwdb.gov.bd/archive/pdf/364.pdf (last access: 20 August 2016), 2013.
Barredo, J. I. and Engelen, G.: Land use scenario modeling for flood risk mitigation, Sustainability, 2, 1327–1344, 2010.
Brown, S. and Nicholls, R. J.: Subsidence and human influences in mega deltas: the case of the Ganges-Brahmaputra-Meghna, Sci. Total Environ., 527, 362–374, 2015.
Burby, R. J. (Ed.): Cooperating with Nature: Confronting Natural Hazards with Land-Use, Planning for Sustainable Communities, Joseph Henry Press, 366 pp., 1998.
Büchele, B., Kreibich, H., Kron, A., Thieken, A., Ihringer, J., Oberle, P., Merz, B., and Nestmann, F.: Flood-risk mapping: contributions towards an enhanced assessment of extreme events and associated risks, Nat. Hazards Earth Syst. Sci., 6, 485–503, https://doi.org/10.5194/nhess-6-485-2006, 2006.
Center for Research on the Epidemiology of Disaster (EMDAT): The International Disaster Database, available at: http://www.emdat.be/database (last access: 16 May 2016), 2009.
Chau, V. N., Cassells, S. M., and Holland, J.: Measuring Direct Losses to Rice Production From Extreme Flood Events in Quang Nam Province, Vietnam, in: 2014 Conference (58th), 4–7 February 2014, Port Maquarie, Australia (No. 165813), Australian Agricultural and Resource Economics Society, 2014.
Climate-Data: Climate: Kuakata, available at: http://en.climatedata.org/location/969757/, last access: 11 March 2016.
Cyclone Shelter Preparatory Study (CSPS): Mathematical Modelling of Cyclone Surge and Related Flooding, Cyclone Risk Area Development Project, Vol. I., Bangladesh Disaster Preparedness Center, 1998.
Dasgupta, S., Huq, M., Khan, Z. H., Ahmed, M. M. Z., Mukherjee, N., Khan, M. F., and Pandey, K.: Cyclones in A Changing Climate, The Case of Bangladesh, Clim. Dev., 6, 96–110, 2014.
Domeneghetti, A., Vorogushyn, S., Castellarin, A., Merz, B., and Brath, A.: Probabilistic flood hazard mapping: effects of uncertain boundary conditions, Hydrol. Earth Syst. Sci., 17, 3127–3140, https://doi.org/10.5194/hess-17-3127-2013, 2013.
Ericson, J. P., Vorosmarty, C. J., Dingman, S. L., Ward, L. G., and Meybeck, M.: Effective Sea-Level Rise and Deltas, Causes of Change and Human Dimension Implications, Global Planet. Change, 50, 63–82, 2005.
Fatema, K., Miah, T. H., Mia, M., and Akteruzzaman, M.: Rice Versus Shrimp Farming in Khulna District of Bangladesh: Interpretations of Field-Level Data, Bangladesh J. of Agri. Economics, 34, 1–2, 2011.
Flather, R. A.: A storm surge prediction model for the Northern Bay of Bengal with application to the cyclone disaster in April 1991, J. Phys. Oceanogr., 24, 172–190, 1994.
Fromm, J. E.: Lagrangian Difference Approximations for Fluid Dynamics, No. LA-2535, Los Alamos National Lab Nm, 1961.
Government of Bangladesh (GOB): Damage, Loss and Needs Assessment for Disaster Recovery and Reconstruction, Cyclone Sidr, Bangladesh, Government of Bangladesh, 21, 27–28, 2008.
Hall, J. W., Tarantola, S., Bates, P. D. and Horritt, M. S.: Distributed Sensitivity Analysis of Flood Inundation Model Calibration, in: Hazard Classification & Danger Reach Studies for Dams, edited by: Harrington, B. W., J. Hydraul. Eng., 131, 117–126, 2005.
Hasan, M., Billah, M. M., and Roy, T. K.: Tourism and Fishing Community of Kuakata: A Remote Coastal Area of Bangladesh, Part 1, Support for University Fisheries Education and Research Project, Department for International Development, UK, 72 pp., 2004.
Hasegawa, K.: Features of Super Cyclone Sidr to Hit Bangladesh in Nov 07 and Measures for Disaster from Results of JSCE Investigation, in: Proceedings of the WFEO-JFES-JSCE joint international symposium on disaster risk management, Sendai, Japan, 51–59, 2008.
Heitzman, J. and Worden, R. L.: Bangladesh, A Country Study, Washington GPO, U.S. Government Publishing Office, 1989.
Helm, P.: Integrated risk management for natural and technological disasters, Tephra, 15, 4–13, 1996.
Hoque, M. A. A., Phinn, S., Roelfsema, C., and Childs, I.: Assessing Tropical Cyclone Damage Using Moderate Spatial Resolution Satellite Imagery, Cyclone Sidr, Bangladesh 2007, Proceedings of the 36th Asian Conference of Remote Sensing, 2015.
Institute of Water Modelling (IWM): Impact assessment of climate change on the coastal zone of Bangladesh, Final Report, Institute of Water Modelling, Dhaka, Bangladesh, 37, 2005.
Islam, M., Khan, M., Alam, R., Khan, M., and Nur-A-Jahan, I.: Adequacy Check of Existing Crest Level of Sea Facing Coastal Polders by the Extreme Value Analysis Method, IOSR J. of Mechanical and Civil Engg., 8, 89–96, 2013.
Karim, M. F. and Mimura, N.: Impacts of Climate Change and Sea-Level Rise on Cyclonic Storm Surge Floods in Bangladesh, Global Environ. Change, 18, 490–500, 2008.
Khan, A. E., Ireson, A., Kovats, S., Mojumder, S. K., Khusru, A., Rahman, A. and Vineis, P.: Drinking water salinity and maternal health in coastal Bangladesh: implications of climate change, Environ. Health Persp., 119, https://doi.org/10.1289/ehp.10028041328, 2011.
Klijn, F.: Flood risk assessment and flood risk management; an introduction and guidance based on experiences and findings of FLOODsite, (an EU-funded integrated project), Deltares, Delft, The Netherlands, 143, 2009.
Knutson, T. R., McBride, J. L., Chan, J., Emanuel, K., Holland, G., Landsea, C., Held, I., Kossin, J. P., Srivastava, A. K., and Sugi, M.: Tropical Cyclones and Climate Change, Nat. Geosci., 3, 157–163, 2010.
Madsen, H. and Jakobsen, F.: Cyclone induced storm surge and flood forecasting in the northern Bay of Bengal, Coast. Eng., 51, 277–296, 2004.
Mendelsohn, R., Dinar, A., and Williams, L.: The Distributional Impact of Climate Change on Rich and Poor Countries, Environ. Dev. Econ., 11, 159–178, 2006.
Mendelsohn, R., Emanuel, K., Chonabayashi, S., and Bakkensen, L.: The Impact of Climate Change on Global Tropical Cyclone Damage, Nat. Clim. Change, 2, 205, https://doi.org/10.1038/nclimate1357, 2012.
MIWF (Water Development and Flood Control): Comparison of Elevation Data from BWOB and FINNMAP, Flood Action Plan, FAP, 19, Ministry of Irrigation, Bangladesh, 1993.
Muktadir, M. A. and Hasan, D. M.: Traditional house form in rural Bangladesh: a case study for regionalism in architecture, Regional seminar on Architecture and the Role of Architects in Southern Asia, 19–23, 1985.
Nasreen, M. and Azad, M. A. K.: Climate Change and Livelihood in Bangladesh, Experiences of People Living in Coastal Regions, Proc. of Int. Conf. on Building Resilience, 1–25, 2013.
National Oceanic and Atmospheric Administration (NOAA): The Worst Natural Disasters by Death Toll, available at: http://docs.lib.noaa.gov/noaa_documents/NOAA_related_docs/death_toll_natural_disaster.pdf (last access: 8 March 2016), 2008.
Neumann, B., Vafeidis, A. T., Zimmermann, J., and Nicholls, R. J.: Future Coastal Population Growth and Exposure to Sea-Level Rise and Coastal Flooding-A Global Assessment, PLOS ONE, 10, https://doi.org/10.1371/journal.pone.0118571, 2015.
Oumeraci, H.: Breaching of Coastal Dikes: State of the Art, TU Braunschweig, Braunschweig, Germany, 178, 2006.
Parry, M., Canziani, O., and Palutikof, J. (Eds.): Climate change 2007: impacts, adaptation and vulnerability (Vol. 4), Cambridge, Cambridge University Press, 841 pp., 2007.
Purvis, M. J., Bates, P. D., and Hayes, C. M.: A Probabilistic Methodology to Estimate Future Coastal Flood Risk Due to Sea Level Rise, Coast. Eng., 55, 1062–1073, 2008.
Rahman, M. M.: Country report: Bangladesh, ADBI-APO workshop on climate change and its impact on agriculture, Seoul, Republic of Korea, 13–16, 2011.
Reese, S. and Ramsay, D.: RiskScape: Flood fragility methodology, Wellington, New, Zealand, National Institute of Water and Atmospheric Research, 42 pp., 2010.
Ritter, S. K.: Global Warming and Climate Change, Chem. Eng. News, 12, 11–21, 2009.
Samuels, P. G.: Backwater Lengths in Rivers, Proc. of the Inst. of Civil Engineers, 87, 571–582, 1989.
Sarraf, M., Dasgupta, S., and Adams, N.: The cost of adapting to extreme weather events in a changing climate, Bangladesh development series paper, 28, https://doi.org/10.1596/26890, 2011.
Sarwar, M. G. M.: Impacts of Sea Level Rise on the Coastal Zone of Bangladesh, available at: http://static.weadapt.org/placemarks/files/225/golam_sarwar.pdf (last access: 25 May 2016), 2005.
Shahid, S.: Probable impacts of climate change on public health in Bangladesh, Asia Pacific Journal of Public Health, 22, 310–319, 2010.
Simple Action for the Environment (SAFE): Case Study, Construction of Improved Rural House in Dinajpur, Bangladesh, Housing and Hazards, Simple Action for the Environment, 8 pp., 2011.
Smith, W. H. and Sandwell, D. T.: Global sea floor topography from satellite altimetry and ship depth soundings, Science, 277, 1956–1962, 1997.
Stocker, T. F., Qin, D., Plattner, G. K., Tignor, M., Allen, S. K., and Boschung, J.: Climate Change 2013, in: The Physical Science Basis, Working Group 1 (WG1) Contribution to the Intergovernmental Panel on Climate Change (IPCC) 5th Assessment Report (AR5), Cambridge, United Kingdom and New York, NY, 2013.
TANGO International: An Assessment of Livelihood Recovery, DAP Emergency Program, Cyclone Sidr Response, Save the Children Bangladesh, 2010.
Van Manen, S. E. and Brinkhuis, M.: Quantitative Flood Risk Assessment for Polders, Reliability engineering & system safety, 90, 229–237, 2005.
Woodruff, J. D., Irish, J. L., and Camargo, S. J.: Coastal Flooding by Tropical Cyclones and Sea-Level Rise, Nature, 504, 44, https://doi.org/10.1038/nature12855, 2013.
World Bank: Vulnerable Twenty, Ministers Call for More Action and Investment in Climate Resiliency and Low-Emissions Development, available at: http://www.worldbank.org/ (last access: 22 February 2016), 2015.
World Bank: Bangladesh Data, available at: https://data.worldbank.org/country/bangladesh, last access: 30 July 2018.
https://blender.stackexchange.com/questions/166952/adding-a-new-material-to-a-grease-pencil
|
# Adding a new material to a grease pencil
I have added a grease pencil and need to add a new material slot in Python. I have used this script for active objects; however, I am missing something with the grease pencil.
import bpy

activeObject = bpy.context.active_object
mat = bpy.data.materials.new(name="MaterialName")
activeObject.data.materials.append(mat)
bpy.context.object.active_material.diffuse_color = (0.121583, 0.144091, 0.8, 0.729885)
https://www.freemathhelp.com/forum/threads/soroban-where-are-you.116176/
|
# Soroban...WHERE ARE YOU?
Status
Not open for further replies.
##### Full Member
Soroban, where are you? Does anyone know what happened to him?
#### topsquark
##### Full Member
Soroban hasn't been seen for a few years now. As he was scaling back the number of posts he contributed, I'm assuming he simply "retired" from the Forums.
-Dan
##### Full Member
Soroban hasn't been seen for a few years now. As he was scaling back the number of posts he contributed, I'm assuming he simply "retired" from the Forums.
-Dan
He is greatly missed. According to Soroban, he was 75 back in 2006 when I first met him at FMH. I hope he is still with us (on earth)....
#### mmm4444bot
##### Super Moderator
Staff member
I don't know anything about Soroban's postings at other sites, but he chose to leave the freemathhelp forum rather than agree to stop doing student's homework here. (He even advertised to students that he would do their homework.) We gave him ample opportunity to provide tutoring, instead, but he wasn't interested. Eventually, moderators began editing or deleting his posts. He stopped posting here sometime after that.
#### pka
##### Elite Member
I don't know anything about Soroban's postings at other sites, but he chose to leave the freemathhelp forum rather than agree to stop doing student's homework here. (He even advertised to students that he would do their homework.) We gave him ample opportunity to provide tutoring, instead, but he wasn't interested. Eventually, moderators began editing or deleting his posts. He stopped posting here sometime after that.
Thank you, for that bit of information. I had many a confrontation with him over that issue.
##### Full Member
I don't know anything about Soroban's postings at other sites, but he chose to leave the freemathhelp forum rather than agree to stop doing student's homework here. (He even advertised to students that he would do their homework.) We gave him ample opportunity to provide tutoring, but he wasn't interested. Eventually, moderators began editing or deleting his posts. He stopped posting here sometime after that.
Soroban was a great help to me back in 2006. He is a nice person, a retired math professor.
##### Full Member
Thank you, for that bit of information. I had many a confrontation with him over that issue.
Why on earth would you or anyone else here confront Soroban? He is one of the nicest, smartest retired math professors I have ever met. I wish him the best of the very best. Thank you, SOROBAN, for helping me with geometry in 2006.
#### mmm4444bot
##### Super Moderator
Staff member
Why on earth would you or anyone else here confront Soroban?
You're not paying attention, again.
#### topsquark
##### Full Member
I don't know anything about Soroban's postings at other sites, but he chose to leave the freemathhelp forum rather than agree to stop doing student's homework here. (He even advertised to students that he would do their homework.) We gave him ample opportunity to provide tutoring, instead, but he wasn't interested. Eventually, moderators began editing or deleting his posts. He stopped posting here sometime after that.
I didn't call him on it but he was doing that over at MHF as well.
-Dan
##### Full Member
I didn't call him on it but he was doing that over at MHF as well.
-Dan
He is a nice man.
https://www.physicsforums.com/threads/lorenzian-and-gaussian-pdf-function-fitting-with-matlabs-nlinfit-confidence-interval.358309/
|
# MATLAB Lorenzian and gaussian pdf-function fitting with Matlabs nlinfit, confidence interval
1. Nov 27, 2009
### deccard
I have data that I want to fit to both Gaussian and Lorentzian (Cauchy) distributions. I have been using Matlab's nlinfit as follows:
gaus = @(p,xdata) (p(1)/(sqrt(2*pi*p(2)))*exp(-(xdata-p(3)).^2/(2*p(2)))+min).*weights;
[g_pfit,g_residual,g_J]=nlinfit(data(:,1), data(:,2).*weights, gaus, [4030 2 -5]);
g_ci=nlparci(g_pfit,g_residual,'jacobian',g_J,'alpha',0.317);
loren = @(p,xdata) p(1)./(pi*p(2)*(1+((xdata-p(3))./p(2)).^2))+p(4);
[l_pfit,l_residual,l_J]=nlinfit(data(:,1), data(:,2), loren, [10030 0.5 -5 4]);
l_ci=nlparci(l_pfit,l_residual,'jacobian',l_J,'alpha',0.317);
The strange thing, however, is that my data is more Gaussian-shaped, and the Gaussian curve is, by eye, a much better fit. Still I get smaller errors for the width of the Lorentzian fit than for the Gaussian (using the nlparci function).
Why would I get smaller errors for the width of the Lorentzian curve than for the width of the Gaussian curve, which is a better fit?
deccard
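For context on why this can happen: nlparci derives its intervals from the asymptotic covariance estimate cov ≈ mse · inv(JᵀJ), so the interval width measures how sensitive the model is to each parameter near the optimum, not how good the overall fit is; a worse-fitting model can therefore still report narrower intervals for some parameters. A minimal NumPy sketch of that formula (illustrative only, using a simple linear model rather than the poster's data):

```python
import numpy as np

# Standard errors from the Jacobian, as nlinfit/nlparci compute them:
# cov ~= mse * inv(J^T J), se = sqrt(diag(cov)).
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 + 0.5 * x + rng.normal(0.0, 0.1, x.size)  # model: a + b*x plus noise

J = np.column_stack([np.ones_like(x), x])         # Jacobian w.r.t. (a, b)
beta, *_ = np.linalg.lstsq(J, y, rcond=None)      # least-squares estimate
resid = y - J @ beta
mse = resid @ resid / (x.size - beta.size)        # residual variance estimate
se = np.sqrt(np.diag(mse * np.linalg.inv(J.T @ J)))
print(beta, se)  # se is driven by J and the residuals, not by the fit's visual quality
```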
https://www.electricalexams.co/kirchhoffs-voltage-law-mcq/
|
# Kirchhoff’s Voltage Law (KVL) MCQ
1. Find the value of v if v1 = 20V and the value of the current source is 6A.
A. 10V
B. 12V
C. 14V
D. 16V
The current through the 10 ohm resistor = v1/10 = 2A.
Applying KCL at node 1:
i5 = i10+i2. i2 = 6-2 = 4A.
Thus the drop in the 2 ohm resistor = 4 × 2 = 8V.
v1 = 20V;
hence v2 = 20 - (drop across the 2 ohm resistor) = 20 - 8 = 12V.
v2 = v, since they are connected in parallel.
v = 12V.
2. In the circuit shown in the figure, find the current flowing through the 8 Ω resistance.
1. 0.25 A
2. 0.50 A
3. 0.75 A
4. 0.10 A
Let the voltage across the 8 Ω resistance be 'V' volts.
∴ Current across the 8 Ω is given by
I = V/8
Now by applying KCL at the node we get
$$\frac{V - 5}{2}+\frac{V + 3}{4}+\frac{V}{8}=0$$
4V - 20 + 2V + 6 + V = 0
7V = 14, so V = 2 V
Now current flowing through the 8 Ω resistance is
I = 2/8
I = 0.25 A
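The node equation above can also be checked mechanically. A small Python sketch (illustrative, not part of the original solution) that solves the same linear equation with exact fractions:

```python
from fractions import Fraction

# KCL node equation: (V - 5)/2 + (V + 3)/4 + V/8 = 0, i.e. a*V + b = 0
a = Fraction(1, 2) + Fraction(1, 4) + Fraction(1, 8)  # coefficient of V: 7/8
b = Fraction(-5, 2) + Fraction(3, 4)                  # constant term: -7/4
V = -b / a                                            # node voltage (volts)
I = V / 8                                             # current through the 8 ohm resistor
print(V, I)  # 2 and 1/4, i.e. I = 0.25 A
```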
3. Calculate the current A by using Kirchhoff’s current law
A. 5A
B. 10A
C. 15A
D. 20A
KCL states that the total current leaving the junction is equal to the current entering it. In this case, the current entering the junction is 5A + 10A = 15A.
4. In the figure shown, the current 𝑖 (in ampere) is __________
1. -1 Amp
2. 5 Amp
3. 2 Amp
4. -2 Amp
Apply KCL at node V1, we get:
$$\frac{V_1 - 0}{1} + \frac{V_1 - 8}{1} + \frac{V_1 - 0}{1} + \frac{V_1 - 8}{1} = 0$$
4V1 – 16 = 0
V1 = 4 V
Again, applying KCL, we can write:
$$i + \frac{0 - V_1}{1} + 5 = 0$$
i = V1 − 5 = 4 − 5 = −1 Amp
5. By using Kirchhoff’s current law calculate the current across the 20-ohm resistor.
A. 20A
B. 1A
C. 0.67A
D. 0.33A
Assume the lower terminal of the 20 ohm resistor is at 0 V and the upper terminal at V volts; applying KCL, we get
V/10 + V/20 = 1, so V = 20/3 V.
So current through 20 ohm
= V/20 = (20/3)/20
= 1/3 A = 0.33 A.
6. The total charge q(t), in the coulombs, that enters the terminal of an element is:
$$q(t) = \left\{ \begin{array}{ll} 0 & t < 0\\ 2t & 0 \le t \le 2\\ 3 + e^{-2(t - 2)} & t > 2 \end{array} \right.$$
Determine the current at t = 5 s.
1. 0 A
2. 2 A
3. -2e⁻⁶ A
4. 3 + e⁻⁶ A
Electric current, i = Rate of transfer of electric charge.
i(t) = dQ/dt
Calculation:
At t = 5 s, the third case applies.
$$i = \frac{dQ}{dt} = \frac{d}{dt}\left( 3 + e^{-2(t - 2)} \right)$$
$$i = e^{-2(t - 2)}\frac{d}{dt}\left[ -2(t - 2) \right]$$
$$i = e^{-2(t - 2)}(-2)$$
$$i = -2e^{-2(t - 2)}$$
Put the value of t = 5, then we get,
i = −2e⁻⁶ A ≈ −4.96 × 10⁻³ A
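The differentiation above can be sanity-checked numerically. A short Python sketch (illustrative only) that evaluates i = dq/dt at t = 5 s with a central difference:

```python
import math

def q(t):
    # total charge entering the terminal, in coulombs
    if t < 0:
        return 0.0
    if t <= 2:
        return 2.0 * t
    return 3.0 + math.exp(-2.0 * (t - 2.0))

def current(t, h=1e-6):
    # i = dq/dt, approximated by a central difference
    return (q(t + h) - q(t - h)) / (2.0 * h)

i5 = current(5.0)
analytic = -2.0 * math.exp(-6.0)  # -2 e^{-6} A, about -0.00496 A
```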
7. Calculate the value of I3, if I1 = 2A and I2 = 3A by applying Kirchhoff’s current law
A. -5A
B. 5A
C. 1A
D. -1A
According to KCL, I1+I2+I3 = 0.
Hence I3 = -(I1+I2) = -5A.
8. What would be the correct equation representing Kirchhoff’s Current Law (KCL) at node a for the given network?
1. i1 – i2 + i3 – i4 = 0
2. i1 + i2 – i3 + i4 = 0
3. i1 – i2 – i3 + i4 = 0
4. i1 – i2 = 0
By applying KCL, at node a
i1 – i2 – i3 + i4 = 0
9. Find the value of i2, i4, and i5 if i1 = 3A, i3 = 1A and i6 = 1A by applying Kirchhoff’s current law
A. 2,-1,2
B. 4,-2,4
C. 2,1,2
D. 4,2,4
At junction a: i1-i3-i2 = 0. i2 = 2A.
At junction b: i4+i2-i6 = 0. i4 = -1A.
At junction c: i3-i5-i4 = 0. i5 = 2A.
10. In the circuit shown in the following figure, calculate the value of the unknown resistance R when the current in-branch OA is zero.
1. 5 Ω
2. 3 Ω
3. 12 Ω
4. 10 Ω
Given that the current through AO is zero,
nodes A and O have the same potential,
Hence, VBA = VBO …. (1)
Also, VAC = VOC …. (2)
VAC = 4(3I) volts
VOC = IR
From equation (2),
4 × 3I = IR, i.e. 12I = IR
∴ R = 12 Ω
https://tex.stackexchange.com/questions/344918/koma-script-scrlttr2-toaddress-lines-are-near/345017
|
# KOMA Script scrlttr2 toaddress lines are near
I can't find the variable to set the distance between two lines in the toaddress field. In my letter the lines are clearly closer to each other than in the rest of the letter.
Here is a minimal example:
\setkomavar{toname}{%
  Somebody in toname
}
\setkomavar{toaddress}{%
  Somebody in toaddress\\
  Code-Townname
}
Can you see the difference between the normal distance between toname and toaddress and the second line of toaddress called Code-Townname?
You are right. I tested your file and it looks quite nice, although I had a slight feeling (but didn't really see it) that it's a bit the same. So I played around and found out that it's a combination of fontsize > 11 and the \large variable I use in the toaddress box.
have a look to this file:
% compile with lualatex
\documentclass[%
fontsize=12pt, % <-----Increase this value to see it more clearly
paper=a4,
parskip=full,
enlargefirstpage=off,
fromalign=right,
fromphone=off,
fromrule=off,
foldmarks=no,
pagenumber=false,
refline=nodate,
]{scrlttr2}
\usepackage{lipsum}
\usepackage[british]{babel}
\addtokomafont{toname}{\large}    % <----- I added these two lines to get a larger
\addtokomafont{toaddress}{\large} %        toname and toaddress
\setkomavar{fromname}{John Doe}
\setkomavar{location}{\usekomavar{date}}
\setkomavar{toname}{SOMEBODY in toname}
\begin{document}
\begin{letter}{}
\opening{TesT opening,}
\lipsum
\closing{mfg}
\end{letter}
\end{document}
I can decrease my fontsize to 11pt to solve it, but I would like to know if there is a value to separate the two lines in \addtokomafont{toaddress} a bit...
• Can you provide a minimal working example that shows the problem? – Johannes_B Dec 20 '16 at 15:57
• The example needs to be compilable for us to reproduce the screenshot on our machine. It is possible that you have changed the font, that something in KOMA-script is broken or that something was broken in the past. – Johannes_B Dec 20 '16 at 17:38
• Maybe there is some space added between name and address by the class, not sure, no time to look at it right now. – Johannes_B Dec 20 '16 at 17:38
Update
I have reported the bug to Markus Kohm, so it is already fixed in the current prerelease (v3.22.2564) of KOMA-Script. You can install this version from the KOMA-Script website.
\documentclass[
fontsize=14pt,
DIV=calc
]{scrlttr2}[2016/01/21]% needs version 3.22.2564 or newer
\usepackage{lipsum}
\setkomavar{toname}{SOMEBODY in toname}
\begin{document}
\begin{letter}{}
\opening{Hey}
\end{letter}
\end{document}
If the font of toname and toaddress should be set/changed in the same way, use the font element addressee instead:
\documentclass[
fontsize=14pt,
DIV=calc
]{scrlttr2}
\usepackage{lipsum}
\addtokomafont{addressee}{\large}
%\addtokomafont{toname}{\large} % <----- no longer needed; addressee sets both
\setkomavar{toname}{SOMEBODY in toname}
\begin{document}
\begin{letter}{}
\opening{Hey}
\end{letter}
\end{document}
Nevertheless the font elements toname and toaddress do not work as expected. As you can see in the follwing example, changing only the font element toname affects the element toaddress in the address field too:
\documentclass[
fontsize=14pt,
DIV=calc
]{scrlttr2}
\usepackage{lipsum}
\addtokomafont{toname}{\large}
\setkomavar{toname}{SOMEBODY in toname}
\begin{document}
\begin{letter}{}
\opening{Hey}
\end{letter}
\end{document}
Result:
As a workaround for versions older than 3.22.2564 you can patch command \@addrfield to solve both problems:
\documentclass[
fontsize=14pt,
DIV=calc
]{scrlttr2}
\usepackage{lipsum}
\setkomavar{toname}{SOMEBODY in toname}
\usepackage{xpatch}
\makeatletter
\xpatchcmd{\@addrfield}
  {\usekomafont{toname}{\usekomavar{toname}\\}}
  {{\usekomafont{toname}{\usekomavar{toname}\\}}}
  {}{\PatchFailed}
\xpatchcmd{\@addrfield}
  {\usekomafont{toaddress}{\usekomavar{toaddress}}}
  {{\usekomafont{toaddress}{\usekomavar{toaddress}}}}
  {}{\PatchFailed}
\makeatother
\begin{document}
\begin{letter}{}
\opening{Hey}
\end{letter}
\end{document}
Result:
https://bt.gateoverflow.in/637/gate2020-22
To facilitate mass transfer from a gas to a liquid phase, a gas bubble of radius $r$ is introduced into the liquid. The gas bubble then breaks into $8$ bubbles of equal radius. Upon this change, the ratio of the interfacial surface area to the gas phase volume for the system changes from $3/r$ to $3n/r$. The value of $n$ is _____________________.
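A worked sketch of the answer (added for illustration; it is not part of the original question): volume conservation fixes the new bubble radius, and the sphere surface-to-volume ratio does the rest.

```latex
% Volume conservation: 8 bubbles of radius r' hold the gas of one of radius r
8 \cdot \tfrac{4}{3}\pi r'^{3} = \tfrac{4}{3}\pi r^{3}
  \;\Rightarrow\; r' = \tfrac{r}{2},
\qquad
% each sphere (and hence the whole system) has A/V = 3/\text{radius}
\frac{A}{V} = \frac{3}{r'} = \frac{6}{r} = \frac{3n}{r}
  \;\Rightarrow\; n = 2.
```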
https://www.physicsforums.com/threads/vector-calculus-derivation.636439/
Vector Calculus Derivation
1. Sep 16, 2012
hjel0743
I was reading a paper and came across this equation:
F_magnetic = μ_0 (M · ∇) H
Is this the correct expansion below? (I'm not too experienced with vectors operating on the gradient operator)
F_magnetic = μ_0 [(M_x ∂H/∂x) i + (M_y ∂H/∂y) j + (M_z ∂H/∂z) k]
_____________
My reasoning partially comes from this thread: https://www.physicsforums.com/showthread.php?t=157380
2. Sep 16, 2012
chiro
Hey hjel0743 and welcome to the forums.
Just for clarification, is M a constant vector and H some kind of function?
3. Sep 16, 2012
hjel0743
Thanks for the reply chiro! I imagine I'll be here a few more times before my thesis is done... M is a function of H actually, and H is a function of the vector r, representing the radius.
The equation as initially written, describes the force on a particle by a magnetic field. H is the "H-field" and M is the magnetization.
4. Sep 17, 2012
qbert
$$({\bf M} \cdot \nabla) {\bf H} = ( ({\bf M} \cdot \nabla H_x) \widehat{i} + ({\bf M} \cdot \nabla H_y) \widehat{j} + ({\bf M} \cdot \nabla H_z) \widehat{k}).$$
Where, for example,
$${\bf M} \cdot \nabla H_x = M_x \frac{\partial H_x}{\partial x} + M_y \frac{\partial H_x}{\partial y} + M_z \frac{\partial H_x}{\partial z}.$$
5. Sep 17, 2012
chiro
Assuming your M is a function of x,y,z (in vector form you have M = (Mx,My,Mz) where Mx,My,Mz map R^3 to R for each component) then
del(M) = (d/dx . Mx + d/dy . My + d/dz . Mz)H (I'm assuming everything is Cartesian not a general tensor)
= (dMx/dx + dMy/dy + dMz/dz) H.
Now this will give you the product of two functions but if H is a vector (like M with each component have some transformation from R^3 -> R) then this means you use the scalar form of a*v = (a*v1,a*v2,a*v3) which means if H = (Hx,Hy,Hz) then the whole thing is equal to
(dMx/dx + dMy/dy + dMz/dz) * <Hx,Hy,Hz>
Now M . grad(Hx) = <Mx,My,Mz> . <dHx/dx,dHx/dy,dHx/dz>
= Mx*dHx/dx + My.dHx/dy + Mz.dHx/dz
So they both look the same when they are expanded out, so I imagine you are right in your assertion. (It's been a while since I've done this kind of thing myself).
If I've made a mistake please let me know!
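qbert's component formula can be sanity-checked numerically with central finite differences; the particular fields M and H below are arbitrary smooth choices made up for the check, not taken from the thread:

```python
# Numerical check of (M·∇)H = (M·∇H_x, M·∇H_y, M·∇H_z)
# using central finite differences on sample vector fields.

def H(x, y, z):
    # arbitrary test field (assumption, for illustration only)
    return (x*y, y*z, z*x)

def M(x, y, z):
    # arbitrary magnetization-like field
    return (x, y*y, z)

def m_dot_grad_H(x, y, z, h=1e-6):
    Mx, My, Mz = M(x, y, z)
    def dH(axis):
        # central difference of the full vector H along one axis
        p, q = [x, y, z], [x, y, z]
        p[axis] += h
        q[axis] -= h
        return tuple((a - b) / (2*h) for a, b in zip(H(*p), H(*q)))
    dHdx, dHdy, dHdz = dH(0), dH(1), dH(2)
    # M_x ∂H/∂x + M_y ∂H/∂y + M_z ∂H/∂z, componentwise
    return tuple(Mx*a + My*b + Mz*c for a, b, c in zip(dHdx, dHdy, dHdz))

# at (1, 2, 3): M·∇H_x = M_x·y + M_y·x = 1·2 + 4·1 = 6, and so on
val = m_dot_grad_H(1.0, 2.0, 3.0)  # ≈ (6.0, 18.0, 6.0)
```

Each component agrees with qbert's scalar expansion of M·∇H_i.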
https://avidemia.com/pure-mathematics/the-notation-of-the-differential-calculus/
We have already explained that what we call a derivative is often called a differential coefficient. Not only a different name but a different notation is often used; the derivative of the function $$y = \phi(x)$$ is often denoted by one or other of the expressions $D_{x}y,\quad \frac{dy}{dx}.$ Of these the last is the most usual and convenient: the reader must however be careful to remember that $$dy/dx$$ does not mean ‘a certain number $$dy$$ divided by another number $$dx$$’: it means ‘the result of a certain operation $$D_{x}$$ or $$d/dx$$ applied to $$y = \phi(x)$$’, the operation being that of forming the quotient $$\{\phi(x + h) - \phi(x)\}/h$$ and making $$h \to 0$$.
Of course a notation at first sight so peculiar would not have been adopted without some reason, and the reason was as follows. The denominator $$h$$ of the fraction $$\{\phi(x + h) - \phi(x)\}/h$$ is the difference of the values $$x+h$$, $$x$$ of the independent variable $$x$$; similarly the numerator is the difference of the corresponding values $$\phi(x + h)$$, $$\phi(x)$$ of the dependent variable $$y$$. These differences may be called the increments of $$x$$ and $$y$$ respectively, and denoted by $$\delta x$$ and $$\delta y$$. Then the fraction is $$\delta y/\delta x$$, and it is for many purposes convenient to denote the limit of the fraction, which is the same thing as $$\phi'(x)$$, by $$dy/dx$$. But this notation must for the present be regarded as purely symbolical. The $$dy$$ and $$dx$$ which occur in it cannot be separated, and standing by themselves they would mean nothing: in particular $$dy$$ and $$dx$$ do not mean $$\lim\delta y$$ and $$\lim\delta x$$, these limits being simply equal to zero. The reader will have to become familiar with this notation, but so long as it puzzles him he will be wise to avoid it by writing the differential coefficient in the form $$D_{x}y$$, or using the notation $$\phi(x)$$, $$\phi'(x)$$, as we have done in the preceding sections of this chapter.
In Ch. VII, however, we shall show how it is possible to define the symbols $$dx$$ and $$dy$$ in such a way that they have an independent meaning and that the derivative $$dy/dx$$ is actually their quotient.
The theorems of § 113 may of course at once be translated into this notation. They may be stated as follows:
(1) if $$y = y_{1} + y_{2}$$, then $\frac{dy}{dx} = \frac{dy_{1}}{dx} + \frac{dy_{2}}{dx};$
(2) if $$y = ky_{1}$$, then $\frac{dy}{dx} = k\frac{dy_{1}}{dx};$
(3) if $$y = y_{1}y_{2}$$, then $\frac{dy}{dx} = y_{1}\frac{dy_{2}}{dx} + y_{2}\frac{dy_{1}}{dx};$
(4) if $$y = \dfrac{1}{y_{1}}$$, then $\frac{dy}{dx} = -\frac{1}{y_{1}^{2}}\, \frac{dy_{1}}{dx};$
(5) if $$y = \dfrac{y_{1}}{y_{2}}$$, then $\frac{dy}{dx} = \biggl(y_{2}\frac{dy_{1}}{dx} - y_{1}\frac{dy_{2}}{dx}\biggr) \bigg/ y_{2}^{2};$
(6) if $$y$$ is a function of $$x$$, and $$z$$ a function of $$y$$, then $\frac{dz}{dx} = \frac{dz}{dy}\, \frac{dy}{dx};$
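A quick illustration of rule (6), added here as a worked example (it is not part of the original text): let $$z = y^{2}$$ and $$y = x^{3} + 1$$; then

```latex
\frac{dz}{dx} = \frac{dz}{dy}\,\frac{dy}{dx}
             = 2y \cdot 3x^{2}
             = 6x^{2}(x^{3} + 1) = 6x^{5} + 6x^{2},
```

which agrees with differentiating $$z = (x^{3} + 1)^{2} = x^{6} + 2x^{3} + 1$$ directly.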
Example XL
1. If $$y = y_{1}y_{2}y_{3}$$ then $\frac{dy}{dx} = y_{2}y_{3}\, \frac{dy_{1}}{dx} + y_{3}y_{1}\, \frac{dy_{2}}{dx} + y_{1}y_{2}\, \frac{dy_{3}}{dx},$ and if $$y = y_{1}y_{2} \dots y_{n}$$ then $\frac{dy}{dx} = \sum_{r=1}^{n} y_{1}y_{2} \dots y_{r-1}y_{r+1} \dots y_{n}\, \frac{dy_{r}}{dx}.$ In particular, if $$y = z^{n}$$, then $$dy/dx = nz^{n-1}(dz/dx)$$; and if $$y = x^{n}$$, then $$dy/dx = nx^{n-1}$$, as was proved otherwise in EX. XXXIX. 3.
2. If $$y = y_{1}y_{2}\dots y_{n}$$ then $\frac{1}{y}\, \frac{dy}{dx} = \frac{1}{y_{1}}\, \frac{dy_{1}}{dx} + \frac{1}{y_{2}}\, \frac{dy_{2}}{dx} + \dots + \frac{1}{y_{n}}\, \frac{dy_{n}}{dx}.$ In particular, if $$y = z^{n}$$, then $$\dfrac{1}{y}\, \dfrac{dy}{dx} = \dfrac{n}{z}\, \dfrac{dz}{dx}$$.
https://de.zxc.wiki/wiki/Abbildungsfehler
# Imaging errors (aberrations)
In optics, imaging errors or aberrations are deviations from the ideal optical image produced by an optical system such as a camera or telescope lens or an eyepiece, which cause a blurred or distorted image. "Aberration" comes from the Latin "aberrare", literally "to stray, to go astray".
Imaging errors can be treated within the framework of geometric optics, which investigates how a bundle of rays emanating from a given object point behaves after passing through the system. Ideally the rays intersect again in a single point. Because of the imaging errors, the result is only a more or less narrow constriction of the bundle, which may also lie in the wrong place (as with distortion or field curvature).
Opticians such as Eustachio Divini (1610–1685) tried to minimize the aberrations of microscopes and telescopes constructively, by trial and error. In the middle of the 19th century, Seidel and Petzval began to investigate aberrations mathematically. As early as 1858, Maxwell argued that a perfect image of a spatially extended object is possible only in the trivial case of imaging by plane mirrors. After several partial results, Carathéodory finally gave a rigorous proof of this in 1926.
The imaging errors of a simple system consisting of a single lens or mirror are usually unacceptably large; such systems are suitable only for illumination. It is, however, possible to reduce the aberrations to an arbitrarily small residual by combining several lenses made of different types of glass, or several mirrors, sometimes using aspherical surfaces. By means of an optimization calculation, the degrees of freedom of the system (in particular the distances between surfaces and the surface curvatures) are chosen so that the overall imaging error is minimal. This is called the correction of the errors, or of the optical system.
This correction process is computationally very intensive. All the aberrations described here are superimposed, and any change in the optical system affects all of them in a generally non-linear way. The only exception: systems that image using mirrors alone have no chromatic errors.
With the help of image processing, the distortion (using methods similar to rectification) and the color fringes arising from lateral chromatic aberration at edges in the motif can be compensated afterwards. In digital camera systems and digital compact cameras, these methods are increasingly applied automatically in the firmware.
## Monochromatic aberrations
### Spherical aberration
Second (lowest) order spherical aberration
The picture shows how the red incident rays are reflected at a spherical concave mirror. The aberration made visible by the green rays is called a catacaustic.
The spherical aberration, also called aperture error or spherical shape error, is a sharpness error: axially parallel incident rays, or rays emanating from the same object point on the optical axis, do not have the same back focal distance after passing through the system and therefore do not come together in one point. In general, the further from the axis a ray passes through the system, the greater the deviation. For reasons of symmetry, the back focal distance $s$ of the refracted ray is an even function of the entrance height:

$s = s_0 + \sum_{k=1}^{\infty} w_{2k}\, a^{2k}$

Here $a$ is the distance from the optical axis at which the ray enters the system, $w_{2k}$ indicates the strength of the spherical aberration of order $2k$, and $s_0$ is the paraxial back focal distance of the refracted ray.
Lenses with spherical aberration deliver a soft image with sharp, but low-contrast details, to which only the rays near the axis contribute. The off-axis rays create halos at light-dark transitions.
Motifs in front of and behind the plane of maximum sharpness are drawn differently out of focus. There are lenses whose spherical aberration can be continuously adjusted over a wide range in order to adjust the blur in front of and behind the focus and the sharpness in the focus.
With a system containing only spherical refractive or reflecting surfaces, a real image completely free of spherical aberration cannot be achieved (see aplanatic imaging). With an aspherical surface of a lens or mirror, the spherical aberration can be corrected completely. However, grinding a spherical surface is much simpler and therefore cheaper than grinding aspherically curved surfaces. The widespread use of spherical surfaces rests on the fact that they are considerably cheaper to manufacture, while their aberrations can be reduced effectively by combining several lenses. The cost of aspherically ground lenses is put into perspective in systems with many lenses, since the same image quality can then be achieved with fewer lenses.
Meanwhile there are methods of producing high-quality aspheres as pressed parts (molding), which is significantly cheaper. Smaller lenses can be pressed directly; larger ones are produced by reshaping a spherical lens of equal volume. The size is limited by two problems: only a few types of glass are suitable for reshaping, and reshaped lenses tend to inhomogeneities due to internal stresses arising during the shaping process.
Small plastic lenses are manufactured inexpensively using the injection molding or injection compression molding process, but are not suitable for systems with higher demands on the image quality, such as camera lenses. You can also cast a plastic layer on a spherical glass lens and press it into an aspherical shape. This technology can also be used for photo lenses.
With the Foucault knife-edge test, spherical aberration can be detected easily with simple means. Today, interferometric methods are common in the mass production of optical parts.
If the spherical aberration limits the resolution , this can be increased by stopping down to the critical aperture .
On reflection at a spherical concave mirror an imaging error occurs; the resulting caustic is called a catacaustic.
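The height-dependence of the back focal distance can be made concrete for a concave spherical mirror; the closed-form expression used below is the classic textbook result for a single reflection, sketched here purely for illustration:

```python
import math

# A ray parallel to the axis at height h, reflected by a concave
# spherical mirror of radius R, crosses the axis at a distance
#   f(h) = R - R / (2·cos θ),  with sin θ = h/R  (θ = angle of incidence)
# from the mirror vertex; as h → 0 this tends to the paraxial focus R/2.
def focal_distance(R, h):
    theta = math.asin(h / R)
    return R - R / (2 * math.cos(theta))

R = 1.0
paraxial = R / 2
# marginal rays focus closer to the mirror: f(h) - R/2 is negative
# and its magnitude grows with the ray height h
lsa = {h: focal_distance(R, h) - paraxial for h in (0.01, 0.1, 0.3, 0.5)}
```

The spread of these crossing points along the axis is exactly the longitudinal spherical aberration described above.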
### Astigmatism
Astigmatism: Objects that are outside the optical axis are shown blurred. The reason is the different focal lengths in the meridional (M) and sagittal plane (S).
Astigmatism is an aberration of oblique rays. A bundle of rays incident obliquely is refracted to different degrees in the meridional and sagittal planes. In the direction of the meridional plane (M) the lens appears foreshortened, which results in a shorter focal length.
As a result, object points are imaged not as points (at B_M and B_S) but as focal lines, each lying in the respective other plane. In front of and behind the two focal planes an oval appears instead of a circle, since every bundle of rays in a plane becomes an ellipse with a different opening angle at each point. If a screen is held behind the sagittal focal plane, an oval with its long semi-axis in the meridional direction (red) is seen; analogously, in front of the meridional focal plane the oval has its longer semi-axis in the sagittal direction (green). In between there is a position where a point is imaged as a blurred disc, the circle of least confusion.
Astigmatism is characterized by the astigmatic difference, the distance between the focal lines. This distance increases with greater inclination of the incident bundle to the optical axis, with increasing lens thickness, and with the lens power and geometry. Biconvex or biconcave lenses, for example, have particularly strong astigmatism in contrast to meniscus lenses. To correct the astigmatism of the eye, a deliberate opposite astigmatism is produced with spectacle lenses, which compensates this imaging error.
An optical system can be designed to reduce or prevent astigmatism. Such optics are called anastigmats. The designation is only of historical significance, since this defect occurs in modern lenses only with serious manufacturing errors. An exception are the Schiefspiegler, a group of astronomical telescopes in which the error is specially corrected.
An imaging error similar to astigmatism can occur in mirror telescopes used in amateur astronomy, which are often focused by axially shifting the main mirror. This can lead to small tilts, as a result of which the image of the stars is no longer point-shaped, but appears somewhat oblong when focused from the extra or intrafocal side horizontally or vertically.
### Coma
Coma on a converging lens
Image of a star drawn out into a tail. At the bottom left, for comparison, the diffraction disk of an error-free (e.g. near-axis) image.
Coma (asymmetry error, from Latin coma 'hair; tail of a comet') arises from a superposition of two imaging errors for bundles of rays incident at an angle to the optical axis: spherical aberration, which also acts on bundles parallel to the axis, and the astigmatism of oblique bundles. Instead of a sharp diffraction disk, an image point with a "tail" directed toward the edge of the optics is produced, which gives the phenomenon its name. The appearance can be reduced by blocking the marginal rays, but the astigmatism of oblique bundles remains.
Coma can occur with both lens and mirror optics. Optical systems in which both spherical aberration and coma are completely corrected are called aplanats .
### Field curvature
Main article: Petzval field curvature
A stage micrometer at low microscopic magnification (fourfold objective); The curvature of the field can be seen especially on the right edge of the image from the blurring of the scaling.
If optics have a curvature of the field of view, the image is not generated on a plane, but on a curved surface - it is therefore a so-called positional error. The position of the ray intersection along the optical axis is then dependent on the image height, i.e. the further the object and thus image points are away from the axis, the more the image point is shifted in the axial direction (typically forwards, towards the objective).
Thus, on a flat projection surface, the image of a flat object cannot be shown sharply over the entire surface. If you focus on the center of the image, the edge is out of focus and vice versa.
Field curvature is found not only in lenses but also in other optical components, e.g. eyepieces or projectors. However, like most other aberrations, it can be kept below the tolerance threshold by a special arrangement of the lenses (flat-field optics).
Flat field optics are also required for scanners for laser engraving in order to process flat surfaces.
With some special cameras, on the other hand, the field curvature is compensated for by pressing the photographic film against a curved surface, for example with the Baker-Nunn satellite camera.
In digital cameras , curved image sensors can be used to compensate for image errors.
### Distortion
Geometric distortion
Distortion is a positional error and means that the image height (distance of an image point from the center of the image) depends in a non-linear way on the height of the corresponding object point. One can also say: the image scale depends on the height of the object point. The image center is the point where the optical axis intersects the image plane. This is usually the center of the image, but shift lenses and view cameras also allow the optical axis to be shifted from the center of the image. The image center is also called the center of distortion or the point of symmetry of the distortion .
Distortion has the effect that straight lines that do not intersect the optical axis, i.e. whose image does not go through the center of the image, are shown curved.
If the image scale decreases with increasing height, this is called barrel distortion . Then a square with outwardly curved sides is shown, so it looks like a barrel (name). The reverse is called pincushion distortion . Then the square looks like a sofa cushion. Wavy distortion can also occur when different orders of distortion overlap. Straight lines are then curved to both sides like wavy lines.
Wide-angle lenses of retrofocus construction (back focal distance greater than the focal length) tend toward barrel distortion, and telephoto lenses (overall length smaller than the focal length) toward pincushion distortion.
So-called fisheye lenses have a strong barrel-shaped distortion. This is intended, on the one hand, to achieve a larger image angle (180 degrees and more are only possible through distortion) and, on the other hand, to use distortion for image design.
In binoculars, especially those with wide-angle eyepieces, a pincushion distortion is often desirable in order to avoid the unpleasant globe effect when the binoculars are panned. The physical basis for this is the so-called "angle condition", which should be fulfilled for binoculars (in contrast to the "tangent condition" for photographic lenses).
## Chromatic aberration
Chromatic aberration
The refractive index of optical glass depends on the wavelength $\lambda$ of the incident light. This phenomenon is called dispersion. It is the cause of chromatic aberration.
### Lateral chromatic aberration
Lateral chromatic aberration, enlarged section
The refractive index of the lenses of an optical system influences the image scale , which therefore depends on the wavelength. The partial images that are formed by light of different wavelengths are therefore of different sizes. This effect is called lateral chromatic aberration . It causes color fringes at the edges of the image motif, if these do not run radially, and a blurring of the image. The width of the color fringes is proportional to the distance from the center of the image.
### Longitudinal chromatic aberration
Longitudinal chromatic aberration: red color fringes in front of the actual focal plane, green behind it
The back focal length of the system, and thus the position of the image behind the last surface of the system, also depends on the refractive index of the lenses and thus on the wavelength of the light. As a result, the partial images of the different colors cannot all be focused sharply at the same time, because they lie at different positions. This is called longitudinal chromatic aberration. The result is a blur that does not depend on the image height.
### Gaussian error
The dispersion of the optical glasses causes the remaining aberrations to vary with the wavelength. If the coma is corrected for green light, it can still be present for red and blue light. This effect can significantly influence the quality of a lens and must be taken into account when designing high-quality systems. In the case of spherical aberration, this effect is referred to as Gaussian error, the designation often being extended to the other errors.
Achromat
Apochromat
### Achromat
If glasses with significantly different Abbe numbers are combined in a system, the chromatic error can be greatly reduced. Specifically, an achromat is a lens system in which the variation of the back focal length with wavelength vanishes for one wavelength.
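For a thin contact doublet, the achromat condition can be written down explicitly: the element powers must satisfy φ₁/V₁ + φ₂/V₂ = 0, where V is the Abbe number. The sketch below uses typical textbook crown/flint values, not data from a specific glass catalogue:

```python
# Thin contact doublet: split a total power phi_total so that the
# combined power is the same at the two reference wavelengths, i.e.
#   phi1 + phi2 = phi_total   and   phi1/V1 + phi2/V2 = 0.
def achromat_powers(phi_total, V1, V2):
    phi1 = phi_total * V1 / (V1 - V2)
    phi2 = -phi_total * V2 / (V1 - V2)
    return phi1, phi2

# crown (V1 ≈ 60) + flint (V2 ≈ 36), 100 mm combined focal length
phi1, phi2 = achromat_powers(1 / 0.100, 60.0, 36.0)
f1, f2 = 1 / phi1, 1 / phi2  # element focal lengths in metres
```

The positive (crown) element is stronger than the combination, and the negative (flint) element takes back the excess power while cancelling the dispersion.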
### Apochromat
So-called apochromatically corrected objectives (apochromats) represent a further development. For these, glasses with an unusual dispersion behavior are used, whereby the secondary spectrum can also be corrected. In the classic version, these are calculated in such a way that the back focal lengths match at three wavelengths (e.g. red, green and blue), which means that the longitudinal color error is also very small for all other wavelengths of visible light. A reference to systems corrected in this way is usually the abbreviation APO on the lenses. As a rule, they are significantly more expensive than just achromatically corrected products.
## Technically related aberrations
The errors described above follow from the mathematical laws of imaging. In addition, nothing in technology can be manufactured with perfect accuracy, and in optical systems, too, the real dimensions and properties deviate from the values specified in the design:
As a result of these deviations, an optical system falls more or less short of the image quality corresponding to its design. A deviation from rotational symmetry can mean that the image quality depends not only on the distance from the image center but also noticeably on the direction, i.e. the quality at the left edge of the image may be noticeably worse than at the right. This is known as a centering error.
When designing a lens, it makes sense to include sensitivity to manufacturing errors in the optimization process. The accuracy with which the components must be manufactured in order to achieve a sufficient image quality is an important cost factor. In a finished construction of an optical system, the designer not only has to specify the target values for the geometry and glass properties, but also the permissible deviations .
Changes in environmental parameters, especially temperature, also cause deviations in shapes, dimensions and refractive indices. The components of an optical instrument expand when heated. In large astronomical instruments, noticeable bending of the telescope can result from one-sided heating (as well as from its own weight). The refractive index of glass also changes with temperature. Among camera lenses, those with long focal length and good correction are particularly sensitive to temperature changes. They are therefore often given a white coating so that they are heated less by solar radiation.
Turbulence and temperature differences in the air layers of the Earth's atmosphere also cause disturbing aberrations, especially at long focal lengths and with distant objects. In astronomy in particular, the atmosphere limits the resolution of a telescope; this limitation is generally referred to as air turbulence, in astronomy as seeing. Furthermore, with stars near the horizon and with atmospheric halo phenomena, a vertical color fringe can occur, because astronomical refraction depends slightly on the wavelength of the light. To correct these influences, adaptive optics are used, or the telescopes are stationed outside the Earth's atmosphere (space telescopes).
### Axial astigmatism
Imperfect lenses that are not rotationally symmetric about the optical axis can image even axially parallel bundles astigmatically. An object point is then imaged as a line (lengthwise or crosswise), depending on the focus. This defect plays an important role in ophthalmic optics and in electron optics. The simplest form of axial astigmatism can be corrected by combination with a cylinder lens of suitable power and axis direction (a cylinder lens in spectacles, a stigmator in the electron microscope). The manufacture of glass lenses for visible light is now so mature that noticeable axial astigmatism no longer occurs.
## References
1. Barbara I. Tshisuaka: Eustachio Divini. In: Werner E. Gerabek , Bernhard D. Haage, Gundolf Keil , Wolfgang Wegner (eds.): Enzyklopädie Medizingeschichte . De Gruyter, Berlin 2005, ISBN 3-11-015714-4 , p. 316.
2. ^ Next Generation: Changes in recording technology , film-tv-video.de, News - Reports, June 9, 2010, accessed on December 26, 2015
3. Bernd Leuschner: Opening error of a planoconvex lens (PDF; 161 kB), Laboratory for Device Technology, Optics and Sensor Technology, Beuth University of Technology Berlin.
4. Japanese Patent Application Number 2016-197661 - Domed Sensor with Non-Spherical Shape ( December 1, 2016 memento on the Internet Archive ), Toshiba, filed April 3, 2015, published November 24, 2016, accessed December 1, 2016.
## Literature
• Eugene Hecht: optics. 4th revised edition, Oldenbourg Wissenschaftsverlag, Munich et al. 2005, ISBN 3-486-27359-0 .
https://mathoverflow.net/questions/162908/embedding-a-linearly-ordered-free-monoid-into-a-linearly-ordered-group
# Embedding a linearly ordered free monoid into a linearly ordered group
A linearly ordered (shortly, l.o.) monoid is a triple $\mathbb M = (M, \cdot, \le)$ for which $(M, \cdot)$ is a (multiplicatively written) monoid and $\le$ is a total order on $M$ such that $xy < xz$ and $yx < zx$ for all $x,y,z \in M$ with $y < z$. In particular, $\mathbb M$ is called an l.o. group if $(M, \cdot)$ is, well, a group, and an l.o. free monoid if $(M, \cdot)$ is the free monoid on an alphabet $X$.
What is known about the following question?
(Q) Let $\mathbb M = (M, \cdot, \le)$ be a linearly ordered free monoid. Does there always exist an embedding of $\mathbb M$ into a linearly ordered group? In more plain words: do there always exist a linearly ordered group $\mathbb G = (G, \cdot, \le)$ and a (monoid) monomorphism $f: (M, \cdot) \to (G, \cdot)$ such that $f(x) < f(y)$ for all $x,y \in M$ with $x < y$?
The answer is affirmative in the case when $\le$ is the lexicographic order induced on $M$ by any well-ordering of the underlying alphabet (this can be proved, e.g., by the "Magnus trick").
But what about the rest? Any reference?
• Any reference for what you call "Magnus trick"? Apr 9, 2014 at 16:01
• As for free groups on finite alphabets, see Section 5 in: D. M. Kim and D. Rolfsen, An Ordering for Groups of Pure Braids and Fibre-type Hyperplane Arrangements, Canad. J. Math. 55 (2002), 822-838 (and the references therein). I don't know if the proof of the general statement: "All free groups are linearly orderable" by the same method (which is as simple or difficult, it is up to you, as the finite case), is explicitly written down somewhere. Does anybody know? Apr 9, 2014 at 17:03
• For the record: An alternative proof that any free group (and hence any free monoid) is linearly orderable can be found in K. Iwasawa, On linearly ordered groups, J. Math. Soc. Japan 1 (1948), 1-9. Yet, Iwasawa's approach doesn't help much with the OP (as far as I can tell). Apr 9, 2014 at 17:37
I suspect this is false, although I don't have a proof. The Thompson group $F$ is generated by $A, B$, which are piecewise-linear homeomorphisms of the interval which change slope at dyadic points, and piecewise have slopes in $2^\mathbb{Z}$. As described in Theorem 4.6 of Cannon-Floyd-Parry, the submonoid generated by $A,B$ is free. The group is linearly ordered (called bi-ordered in the literature), and the space of bi-orderings has been classified. I suspect that some of these bi-orderings, when restricted to the submonoid generated by $A, B$, do not extend to the free group generated by $A, B$. I would try one of the 8 isolated bi-orderings of the Thompson group, and see if it can be extended to the free group generated by $A,B$. If it can't, then one can detect this in a ball of finite radius in the free group.
It suffices to show this for l.o. free monoids on finite alphabets, as follows from the Compactness Theorem in logic - which can be found in any text on First Order Logic. [This principle can be applied to a range of similar problems.]
Indeed, let $\mathbb M = (M, \cdot, \le)$ be a linearly ordered free monoid on the alphabet $X$, and consider the first-order language $\mathcal{L} = (\circ, \preceq, \{x_m|m \in M\})$ consisting of a binary function symbol $\circ$ to represent multiplication, a binary relation symbol $\preceq$ representing ordering, plus an individual constant symbol $x_m$ for each element $m$ of the set $M$.
Then let $T$ be the $\mathcal{L}$-theory having the following axioms:
1. the usual axioms for linearly ordered groups, expressed using $\circ$ and $\preceq$
2. $x_m \ne x_n$ for all $m, n\in M$ with $m\ne n$ [these axioms ensure that $M$ naturally injects as a subset of any model of $T$ via the interpretation of the $x_m$ constants]
3. $x_m \preceq x_n$ for all $m, n\in M$ with $m\le n$ [these make this injection an embedding of l.o. sets]
4. $x_m \circ x_n = x_{m\cdot n}$ for all $m, n\in M$ [which make the embedding a monoid homomorphism].
So any model $\mathbb G = (G, \cdot, \le, \{g_m|m \in M\})$ of $T$ will provide a desired l.o. group, with $m\mapsto g_m$ (being the interpretation of the constant $x_m$ in $\mathbb G$) giving the desired embedding $\mathbb M\to \mathbb G$. And conversely.
By the Compactness Theorem, the theory $T$ admits a model iff every finite subset $\Delta$ of the axioms of $T$ does. Now only finitely many symbols $x_m$ can occur in sentences belonging to such a $\Delta$, and the finitely many elements $m\in M$ so involved can be expressed as words in a finite subalphabet $Y\subseteq X$. This $Y$ generates a l.o. free submonoid $\mathbb S$ of $\mathbb M$ that contains all the $m\in M$ for which $x_m$ figures in statements occurring in $\Delta$. And so any l.o. group that "extends" $\mathbb S$ (in the sense of the OP) will be a model of $\Delta$.
Hence it is enough to consider free l.o. monoids on finite alphabets.
[Incidentally, by the same token, if all finitely generated free groups are linearly orderable, so are all free groups.]
https://ximera.osu.edu/calcwithreview/calculusWithReview/limitLaws/digInLimitLaws
We give basic laws for working with limits.
In this section, we present a handful of rules called the Limit Laws that allow us to find limits of various combinations of functions.
True or false: If $f$ and $g$ are continuous functions on an interval $I$, then $f\pm g$ is continuous on $I$.
True or false: If $f$ and $g$ are continuous functions on an interval $I$, then $f/g$ is continuous on $I$.
We can generalize the example above to get the following theorems.
Where is $f(x) = \frac {x^2-3x+2}{x-2}$ continuous?
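The answer (continuous for all real numbers except $x=2$) can be checked with a computer algebra system; here is a quick sketch using SymPy (the use of SymPy is my choice, not part of the original page):

```python
import sympy as sp

x = sp.symbols('x')
f = (x**2 - 3*x + 2) / (x - 2)

# Away from x = 2, f simplifies to x - 1, so the limit at 2 exists...
assert sp.limit(f, x, 2) == 1
# ...but f itself is undefined at x = 2 (a 0/0 form), so f is not continuous there.
assert f.subs(x, 2) is sp.nan
```

So the function agrees with the continuous function $x-1$ everywhere it is defined, yet fails to be continuous at $x=2$ simply because it has no value there.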
Back in the theorem on continuity we mentioned a big list of functions that were continuous. We mention them again in the following statement. We will study some of these functions in more detail in later sections. For now, we focus only on the fact that they are continuous.
Now, we give basic rules for how limits interact with composition of functions.
Because the limit of a continuous function is the same as the function value, we can now pass limits inside continuous functions.
Many of the Limit Laws and theorems about continuity in this section might seem like they should be obvious. You may be wondering why we spent an entire section on these theorems. The answer is that these theorems will tell you exactly when it is easy to find the value of a limit, and exactly what to do in those cases.
The most important thing to learn from this section is whether the limit laws can be applied for a certain problem, and when we need to do something more interesting. We will begin discussing those more interesting cases in the next section. For now, we end this section with a question:
### A list of questions

[A series of interactive exercises followed here, each asking whether a given limit can be computed directly by the limit laws, with an occasional "Compute:" prompt; the limit expressions themselves were not captured in this extraction.]
https://www.research.ed.ac.uk/en/publications/atlas-ibi-jet-identification-performance-and-efficiency-measureme
# ATLAS $b$-jet identification performance and efficiency measurement with $t\bar{t}$ events in $pp$ collisions at $\sqrt{s} = 13$ TeV
Research output: Contribution to journalArticlepeer-review
## Abstract
The algorithms used by the ATLAS Collaboration during Run 2 of the Large Hadron Collider to identify jets containing $b$-hadrons are presented. The performance of the algorithms is evaluated in the simulation and the efficiency with which these algorithms identify jets containing $b$-hadrons is measured in collision data. The measurement uses a likelihood-based method in a sample highly enriched in $t\bar{t}$ events. The topology of the $t \to W b$ decays is exploited to simultaneously measure both the jet flavour composition of the sample and the efficiency in a transverse momentum range from 20 GeV to 600 GeV. The efficiency measurement is subsequently compared with that predicted by the simulation. The data used in this measurement, corresponding to a total integrated luminosity of 80.5 fb$^{-1}$, were collected in proton-proton collisions during the years 2015 to 2017 at a centre-of-mass energy $\sqrt{s}=$ 13 TeV. By simultaneously extracting both the efficiency and jet flavour composition, this measurement significantly improves the precision compared to previous results, with uncertainties ranging from 1% to 8% depending on the jet transverse momentum.
Original language: English
Article number: 970
Journal: European Physical Journal C
Volume: C79
Issue: 11
DOI: https://doi.org/10.1140/epjc/s10052-019-7450-8
Status: Published - 25 Nov 2019
https://tug.org/pipermail/macostex-archives/2022-January/058129.html
# [OS X TeX] CocoAspell still not working for me
Themis Matsoukas via MacOSX-TeX macosx-tex at email.esm.psu.edu
Tue Jan 25 20:08:42 CET 2022
> On Jan 25, 2022, at 11:34 AM, Herbert Schulz <herbs at wideopenwest.com> wrote:
>
> Howdy,
>
> Sigh... I was mistaken about this. The list is for commands only: e.g., it has {}[] for begin (for \begin{...}[...] so the first argument is the environment name and only an optional argument that isn't checked follows). I'd worry about changing that to {}[]{} since \begin is used for all environments and I'm not convinced it won't mess something up.
>
> I guess the problem with tabular never really bothered me since I don't have it marking all errors as I type. I do a spelling check using Cmd-; when I'm ready to typeset and just type another Cmd-; when I hit that quirky thing.
>
> PS: there are way too many ways of creating tables using different packages I'm not sure how you would take care of them all.
It is not just tabular.
All my du, dv, dn, dt... are flagged in every equation. Same with the contents of \cite, \citep, \citet, which contain nonsensical strings. I’ve added these commands to cocoAspell’s filters but I don't see any effect. I am working on a paper with 140+ references, and I spend most of the time hitting ignore until I catch a real typo. And when I open the file again I have to do the same, because cocoAspell does not remember the ignored words.
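For what it's worth, GNU Aspell's TeX filter mode can be told to skip the arguments of specific commands with `add-tex-command` lines in its configuration (see the Aspell manual; `p` below marks a parameter to be skipped rather than checked). Whether cocoAspell actually honors such entries is exactly what appears broken here, but the entries one would try look like this:

```
# Hypothetical entries; check the Aspell manual for the exact parameter syntax.
# Skip the arguments of natbib-style citation commands:
add-tex-command cite p
add-tex-command citep p
add-tex-command citet p
```

That would stop Aspell from spell-checking the citation keys, though it does nothing for math-mode fragments like du or dt.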
Themis
----------- Please Consult the Following Before Posting -----------
TeX FAQ: http://www.tex.ac.uk/faq
List Reminders and Etiquette: https://sites.esm.psu.edu/~gray/TeX/
List Archives: http://dir.gmane.org/gmane.comp.tex.macosx
https://email.esm.psu.edu/pipermail/macosx-tex/
TeX on Mac OS X Website: http://mactex-wiki.tug.org/
List Info: https://email.esm.psu.edu/mailman/listinfo/macosx-tex
https://physics.aps.org/synopsis-for/10.1103/PhysRevLett.104.010502
# Synopsis: Opening the gate to quantum computation
The demonstration of entanglement between two neutral atoms would be a key step toward using them for quantum computation.
Entanglement lies at the heart of quantum computation. Entangling neutral atoms is attractive because they interact weakly with their environment, but by the same token, they are difficult to entangle compared with strongly interacting ions. Papers appearing simultaneously in Physical Review Letters demonstrate how separate groups are achieving entanglement between two neutral atoms by using a method called Rydberg blockade.
The underlying idea behind Rydberg blockade is that the state of one atom (the control) determines whether the other (the target) can be excited into a high-energy state. This effective coupling turns the two atoms collectively into a two-level system.
Tatjana Wilk and colleagues at the Institut d’Optique (CNRS and Université Paris-Sud) in France use the Rydberg blockade to entangle ${}^{87}\text{Rb}$ atoms that are held a few microns apart by optical tweezers. In a separate paper, Larry Isenhower and colleagues at the University of Wisconsin, US, report similar methods to create a two-qubit controlled-NOT (CNOT) gate between ${}^{87}\text{Rb}$ atoms. The CNOT gate then serves as a means to achieve entanglement between the atoms.
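As a textbook illustration of why a CNOT gate suffices to create entanglement (this calculation is generic, not taken from the papers discussed): a Hadamard on the control qubit followed by a CNOT maps the product state |00⟩ to the Bell state (|00⟩ + |11⟩)/√2, which is maximally entangled. A small NumPy sketch:

```python
import numpy as np

# Single-qubit Hadamard and identity, and the two-qubit CNOT
# (control = first qubit, target = second qubit; basis |00>,|01>,|10>,|11>).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

ket00 = np.array([1, 0, 0, 0], dtype=float)      # the product state |00>
state = CNOT @ np.kron(H, I) @ ket00

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)       # (|00> + |11>)/sqrt(2)
assert np.allclose(state, bell)
```

The resulting state cannot be written as a tensor product of single-qubit states, which is precisely the entanglement the experiments aim to certify.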
Both groups report the preparation of quantum states with an accuracy (or, fidelity) that is near the threshold needed to prove entanglement; correcting for losses associated with atoms that fell out of the optical traps suggests that the remaining pairs of atoms are entangled with a fidelity well over the threshold. Although there is still work to be done in fine tuning the methods reported by both groups, the papers collectively show important progress toward quantum processing with neutral atoms. – Jessica Thomas and Sonja Grondalski
https://www.physicsforums.com/threads/metric-tensor-for-a-sphere.509397/
# Metric Tensor for a Sphere
#### thehangedman
Does anyone know what the metric tensor looks like for a 2 dimensional sphere (surface of the sphere)?
I know that it's coordinate dependent, so suppose you have two coordinates: one like "latitude", 0 at the south pole and πR at the north pole, and the other like "longitude", 0 on one meridian and πR on the opposite side (here, 2πR gives you the same location as 0).
I've searched online and can't find a simple example of this basic metric tensor... :-(
The other one I'm curious about is the surface of a hyperboloid (again, think of the 2-D surface of a shape in 3 dimensions). What is the metric on THAT surface?
Any type of help is greatly appreciated...
#### George Jones
Staff Emeritus
Gold Member
Does anyone know what the metric tensor looks like for a 2 dimensional sphere (surface of the sphere)?
The standard metric is
$$ds^2 = R^2 \left( d\theta^2 + \sin^2\theta \, d\phi^2 \right).$$
#### HallsofIvy
Homework Helper
Note: George Jones is using the physics notation which takes $\phi$ as the "longitude" and $\theta$ as "co-latitude", the opposite of mathematics notation.
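To see the metric in use, one can integrate $ds$ along a sampled curve numerically. A quick sketch (the curves and step counts are my own choices): the equator should have length $2\pi R$, and a meridian from pole to pole should have length $\pi R$.

```python
import numpy as np

def sphere_length(theta, phi, R=1.0):
    """Arc length of a curve on the sphere, given sampled coordinate arrays
    theta(t), phi(t), using ds^2 = R^2 (dtheta^2 + sin^2(theta) dphi^2)."""
    dtheta = np.diff(theta)
    dphi = np.diff(phi)
    theta_mid = (theta[:-1] + theta[1:]) / 2  # midpoint rule for sin(theta)
    ds = R * np.sqrt(dtheta**2 + np.sin(theta_mid)**2 * dphi**2)
    return ds.sum()

R = 2.0
t = np.linspace(0, 2 * np.pi, 10_001)
equator = sphere_length(np.full_like(t, np.pi / 2), t, R)
assert abs(equator - 2 * np.pi * R) < 1e-6

s = np.linspace(0, np.pi, 10_001)
meridian = sphere_length(s, np.zeros_like(s), R)
assert abs(meridian - np.pi * R) < 1e-6
```

The same routine applied to a circle of constant co-latitude $\theta_0$ gives the familiar shrinking circumference $2\pi R \sin\theta_0$.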
https://www.vtmarkets.com/blog/2021/12/16/6179/
# Daily market analysis
###### December 16, 2021
Market Focus
After the Federal Reserve announced that it will accelerate the reduction of its monthly asset purchases in the context of rising inflation, the three major indexes posted their best gains in a week on Wednesday. The Fed stated that it will reduce its bond purchases by $30 billion per month starting in January, twice the $15 billion per month announced in November. The timetable for raising interest rates has been moved up: the Fed is expected to raise rates up to three times next year, and three more times in 2023, bringing its benchmark rate to 1.6%. Risk sentiment improved after the statement, because positioning ahead of the Fed meeting had been tense over the prospect of tightening and the move had largely been priced in. At the close, the Dow Jones Industrial Average rose 1.1% to 35,927.44 points, the S&P 500 rose 1.63% to 4,709.85, and the Nasdaq Composite added 2.1%.
In the S&P 500, the only losing sector was energy: lingering worries about oversupply, together with Omicron's threat to travel and energy demand, kept oil prices falling and the sector under pressure. Devon Energy, Occidental Petroleum and Diamondback Energy fell more than 2%. On the other hand, the biggest winner among the index's sectors was, unsurprisingly, the rate-sensitive technology sector. With interest rates unchanged, technology rocketed higher: Nvidia and AMD led the sector, followed by Alphabet, Microsoft, Facebook and Apple, driving the index's performance.
Main Pairs Movement:
Before the Fed's monetary policy decision was released, market participants nervously positioned for tighter policy, which pushed the DXY to almost annual highs. Rates were then left unchanged, and the only news was a faster reduction of bond purchases starting in January 2022, so the dollar index turned south and closed at 96.33.
EUR/USD is close to the 1.1300 level but still below it ahead of the European Central Bank meeting on Thursday. The ECB will announce its monetary policy decision, but the market generally expects current policy to be maintained, which means the euro is unlikely to get extra impetus.
GBP/USD closed at 1.32582, also staying at recent levels without any breakthrough. The UK will announce its PMI later on Thursday, which may provide some strength for the pound.
Gold hit a multi-month low of 1,752 and then rebounded to around $1,778 per ounce. Crude oil prices have risen with the stock market, and WTI is currently trading at approximately $71.50 per barrel.
Technical Analysis:
XAUUSD (4-Hour Chart)
Gold price struggles to rebound, hovering below 1770, as focus shifts to the FOMC meeting. Gold is undermined since the market expects the hawkish Fed to speed up its tapering and rate hikes. From the technical analysis, as of writing, gold bears now target the descending wedge around 1765. Gold will re-confirm a bearish outlook if the wedge is breached to the downside. Alternatively, any recovery needs to clear stiff resistance at 1770, followed by 1789, in order to turn the trend from bearish to bullish on the 4-hour chart. However, it looks like bears are still in control, as the RSI indicator has not reached oversold territory, suggesting a continuation of selling pressure. The next relevant support is at 1761.
Resistance: 1770, 1789, 1805
Support: 1761
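For readers who want to see what the RSI referenced throughout this analysis actually computes: it compares average gains to average losses over a lookback window, conventionally 14 periods. A minimal sketch using simple averages (charting packages typically use Wilder's smoothing instead, so values will differ slightly):

```python
def rsi(prices, period=14):
    """Relative Strength Index over the last `period` price changes,
    using simple averages of gains and losses (0 = oversold, 100 = overbought)."""
    changes = [b - a for a, b in zip(prices, prices[1:])]
    window = changes[-period:]
    avg_gain = sum(c for c in window if c > 0) / period
    avg_loss = sum(-c for c in window if c < 0) / period
    if avg_loss == 0:
        return 100.0  # no losses in the window: maximal RSI
    rs = avg_gain / avg_loss
    return 100 - 100 / (1 + rs)

# A steadily rising series pins RSI at 100; a steadily falling one pins it at 0.
assert rsi(list(range(1, 20))) == 100.0
assert rsi(list(range(20, 1, -1))) == 0.0
```

Readings below roughly 30 are conventionally called oversold and above 70 overbought, which is the threshold language used in the chart commentary above.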
GBPUSD (4-Hour Chart)
GBPUSD declined toward 1.3200 after the hot UK inflation report, as US dollar demand was renewed. From a technical perspective, the outlook for the currency pair turns bearish on the 4-hour chart as it trades below its 20 and 50 simple moving averages, indicating near-term weakness. Since the RSI has not yet reached oversold territory, sellers remain in control; thus, GBPUSD is expected to head toward its immediate support at 1.3163. Furthermore, the MACD is turning negative as of writing, meaning the pair has essentially shifted from buying to selling. On the upside, GBPUSD bulls need to climb above the static level at 1.3321 to reclaim positive momentum. Further price action will hinge on today's FOMC meeting and tomorrow's ECB meeting.
Resistance: 1.3321, 1.3419, 1.3499
Support: 1.3163
EURUSD (4-Hour Chart)
EURUSD declined, trading at merely 1.1250, as the US dollar's momentum picked up. From a technical aspect, EURUSD looks set to test its immediate support at 1.1233, followed by 1.1186. The near-term outlook remains neutral as the RSI lacks directional strength, holding steadily slightly below the 50 mark. The support level at 1.1233 could be breached after the US Fed's announcement later; if so, EURUSD will turn bearish in the near term. On the upside, the currency pair needs to extend further north above the acceptance level at 1.1357 to turn the outlook upward.
Resistance: 1.1357, 1.1462, 1.1548
Support: 1.1233, 1.1186
Economic Data
| Currency | Data | Time (GMT+8) | Forecast |
|---|---|---|---|
| USD | FOMC Economic Projections | 03:00 | N/A |
| USD | FOMC Statement | 03:00 | N/A |
| USD | Fed Interest Rate Decision | 03:00 | 0.25% |
| USD | FOMC Press Conference | 03:00 | N/A |
| NZD | GDP (Q3) YoY | 05:45 | -4.5% |
| GBP | Composite PMI | 17:30 | 57.6 |
| GBP | Manufacturing PMI | 17:30 | 58.1 |
| GBP | Services PMI | 17:30 | 58.5 |
| EUR | ECB Monetary Policy Statement | 20:45 | N/A |
| EUR | ECB Interest Rate Decision (Dec) | 20:45 | N/A |
| USD | Building Permits (Nov) | 21:30 | 1.663M |
| USD | Initial Jobless Claims | 21:30 | 200K |
| USD | Philadelphia Fed Manufacturing Index (Dec) | 21:30 | 30 |
https://www.math.uzh.ch/?id=ve_mfs_sem_vor0&key1=0&key2=1147&key3=4097
# Talk
Module: MAT772 Geometry Seminar
## Cusp regions for parabolic ends of hyperbolic manifolds
Talk by Prof. Dr. John R. Parker
Speaker invited by: Prof. Dr. Viktor Schroeder
Date: 19.09.18, Time: 15.45-16.45, Room: ETH HG G 43
A hyperbolic manifold or orbifold can be written as the quotient of hyperbolic space by a discrete group of isometries. A cusp end of the orbifold corresponds to parabolic elements in the group. A consequence of discreteness is that these cusp ends contain regions of a certain shape. In dimensions two and three this is classical. More complicated things can happen in higher dimensions. In this talk I will survey the classical results, then I will discuss some more recent results in dimension four which show how continued fractions and Diophantine approximation come into play.
https://brilliant.org/problems/composing-30/
# Composing 30
Probability Level 3
How many ways can the number $30$ be written as an ordered sum of 2s and 5s?
Details and assumptions
There are four ways to write 12 as an ordered sum: $2+2+2+2+2+2=12,$ $2 + 5 + 5 = 12,$ $5 + 2 + 5 = 12$, and $5 + 5 + 2 = 12$. The ordered sum can use 0 5's, and it can also use 0 2's.
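One standard way to count such ordered sums (a sketch, not the site's official solution): letting $f(n)$ be the number of ordered sums of $n$ using parts 2 and 5, conditioning on the first part gives the recurrence $f(n) = f(n-2) + f(n-5)$.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def f(n):
    """Number of ordered sums of n using parts 2 and 5."""
    if n < 0:
        return 0
    if n == 0:
        return 1  # the empty sum
    # The first part is either a 2 or a 5.
    return f(n - 2) + f(n - 5)

assert f(12) == 4  # matches the four orderings listed above
```

As a cross-check, one can group compositions by how many 5s they use and count arrangements with binomial coefficients; both approaches agree.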
https://www.math-worksheet.org/graphing-linear-equations
### Graphing linear equations
We can easily graph any equation given in slope-intercept form.
DEFINITION: The slope-intercept form of a linear equation is given by
$$y = mx + b$$
where $$m$$ is the slope of the line, and $$(0, b)$$ is the point where the line intercepts the $$y$$-axis (that is, $$b$$ is the $$y$$-intercept).
Essentially, when you have a line in slope intercept form, the slope and a point are given to you straight away. You only need to plot the given point, and use the slope to plot another point. Then connect the dots. It’s that easy.
EXAMPLE: Sketch the graph of the line whose equation is $$y = - {\Large \frac{1}{2}x} + 2$$.
SOLUTION: Here we have an equation in slope intercept form, with $$m = - \Large \frac{1}{2}$$ (that is, the slope of the line is $$- \Large \frac{1}{2}$$), and $$b = 2$$ (that is, the $$y$$-intercept is 2). We begin by plotting the $$y$$-intercept at the point $$\left( {0,2} \right)$$.
Next, we use the slope to find our next point. The slope is $$- \Large \frac{1}{2}$$, so we move one unit down (because of the negative slope) and two units to the right. This puts us at the point $$\left( {2,1} \right)$$. Then we connect the dots.
EXAMPLE: Sketch the graph of the line whose equation is $$y = - {\Large \frac{7}{5}x} - 5$$.
SOLUTION: Here we are given another line in slope-intercept form, with $$m = - \Large \frac{7}{5}$$, and $$b = - 5$$. Then the first point we plot is the $$y$$-intercept at $$\left( {0,-5} \right)$$.
Then since the slope is $$- \Large \frac{7}{5}$$, we move down 7 units, and right 5 units. Then connect the dots, and we’re finished!
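Both worked examples follow the same recipe: plot $$(0, b)$$, then move by the slope's rise and run to get a second point. A small helper makes the recipe explicit (the function is our illustration, not part of the worksheet):

```python
from fractions import Fraction

def two_points(m, b):
    """Given slope m and y-intercept b, return the y-intercept point (0, b)
    and a second point reached by moving `run` units right and `rise` units
    up (down when the slope is negative)."""
    m = Fraction(m)
    return (0, b), (m.denominator, b + m.numerator)

# y = -1/2 x + 2: start at (0, 2), move down 1 and right 2 to (2, 1)
p0, p1 = two_points(Fraction(-1, 2), 2)
```

For the second example, `two_points(Fraction(-7, 5), -5)` gives `(0, -5)` and `(5, -12)`: down 7 and right 5, exactly as in the solution above.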
Sketch the graph of each line.
This free worksheet contains 10 assignments each with 24 questions with answers.
https://rujec.org/article_preview.php?id=49756
Public expenditure for agricultural sector in Russia: Does it promote growth?
Olga V. Shik
‡ HSE University, Moscow, Russia
Corresponding author: Olga V. Shik (shikolga@gmail.com)
© 2020 Non-profit partnership “Voprosy Ekonomiki”. This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY-NC-ND 4.0), which permits copying and distributing the article for non-commercial purposes, provided that the article is not altered or modified and the original author and source are credited.
Citation: Shik OV (2020) Public expenditure for agricultural sector in Russia: Does it promote growth? Russian Journal of Economics 6(1): 42–55. https://doi.org/10.32609/j.ruje.6.49756
# Abstract
This paper presents the findings of the agriculture public expenditure review (PER) for the Russian Federation. It reviews the policy instruments and historical trends in the volumes and composition of budget support and investigates their role in recent agricultural growth. The paper also analyzes the effect of public spending in 2006–2017 on growth in agriculture using a fixed effects model and finds a positive effect. Support for general services is the most efficient form of agricultural spending, but in the Russian agricultural budget subsidies to individual producers prevail. While the prevalence of subsidies in the budget benefits the largest and most successful producers, this was part of the strategy to create strong value chains in order to compete with imports. However, the efficiency of investment support is decreasing. The paper explores the distribution of support between the national and sub-national levels of the budget system, and finds that the regionalization of support leads to market disintegration and efficiency losses.
# Keywords
agricultural policy, budget support to agriculture, general services, subsidies, regional development.
JEL classification: H71, H72, Q18.
# 1. Introduction
An ambitious agrifood export expansion plan requires improving the efficiency of support in order to achieve long term growth in agriculture. At the same time, the information on budget support levels and structure is limited and presented in such a way that does not allow us to analyze the trends in composition of support, its alignment with the policy goals and its effect on the sector’s performance. This paper presents the findings of the agriculture public expenditure review (PER) for the Russian Federation. We review the policy instruments and historical trends in the volumes and composition of budget support and investigate their role in recent agricultural growth.
The objective of this study was to look at the level and structure of public expenditure for Russian agriculture and investigate if the government’s claim that it was a major factor in recent growth in agriculture is supported by empirical evidence. The study reveals three main areas where the loss of efficiency of support to agriculture occurs and where the improvement of allocation of funds can be beneficial for growth in agriculture.
First, focusing on the goals to increase production and exports, most public funds are allocated to support individual producers. At the same time, the measures that benefit the sector as a whole — the general services1 — are overlooked, underfinanced and unpredictable.
Second, as a consequence of supporting agriculture mostly in the form of the subsidies, policy benefits are unequally distributed among different types of producers. Agricultural policy supports investment projects by the larger and more efficient producers, as well as compensating losses of the least successful, keeping them in business for social reasons.
Third, the source of efficiency loss is the regionalization of support. While the general services support is financed at the federal level, support to producers individually is financed from the sub-national (regional) budgets, which affects market integration and promotes unfair competition. We look into the differences between national and sub-national spending and the impact of power distribution between the levels of budgeting system on the efficiency of public spending.
# 2. Level of support to agriculture
All official expenditure reports by the Government cover only a short time period (3–4 years) and report expenditures in nominal values, claiming an unprecedented growth of the level of support in recent years. However, when we look at a longer time period (since 2006) and at the data in real values, it turns out that the level of support actually decreased compared to 2006–2008: the value of support decreased by 3% in constant prices, the share of agricultural spending in total budget expenditure decreased by 24%, and the ratio of support to agriculture to GDP decreased by 18% (Fig. 1).
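The nominal-vs-real comparison above amounts to deflating each year's nominal spending by a cumulative price index. A toy sketch of the arithmetic (the numbers are illustrative, not the actual Russian budget series):

```python
# Deflate a nominal spending series to constant base-year prices by dividing
# by the cumulative price index. All numbers are illustrative only.
nominal = [100.0, 112.0, 121.0]      # spending in current prices, years 0..2
inflation = [1.00, 1.08, 1.07]       # year-on-year price indices (hypothetical)

deflator, real = 1.0, []
for spend, factor in zip(nominal, inflation):
    deflator *= factor               # cumulative index relative to year 0
    real.append(spend / deflator)

# Nominal spending grew 21% over two years; in constant prices it grew far less.
```

With high inflation, a series that "grows" every year in nominal terms can be flat or shrinking in real terms, which is exactly the discrepancy the review points to.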
Support for agriculture in Russia is high compared to other countries; it holds 5th place among the countries for which the OECD measures the level of support.2 At the same time, the gap in support levels between Russia and its competitors is wide, and support per person and per unit of agricultural land is much lower in Russia: $5,300 per square km of agricultural land in Russia vs. $23,000 in the US and $58,000 in the EU; $310 per rural inhabitant in Russia vs. $851 in the EU and $1,604 in the US. At the same time, the level of support as a share of GDP, which reflects the burden that support to agriculture places on the economy as a whole, is higher in Russia (0.8%) than in the US and the EU (0.5% and 0.6%3).
# 3. Public expenditure and production growth
Agricultural production growth has been the main goal of Russian agricultural policy since 2006,4 and this was the main indicator of the policy efficiency in the internal policy monitoring system conducted by the Ministry of Agriculture. The growth in agriculture was more pronounced than that of the rest of the economy (Fig. 2), however more evidence is required to confirm the relationship between this growth and budget expenditure on agriculture.
Recent research confirms that there is a positive causal relationship between the size of budget support to agriculture and the outcomes in terms of production growth. Thus, Gardner (2005) discovered a positive relationship between budget support and value added per worker in agriculture. López and Galinato (2007) used panel data for Latin American and Caribbean countries and found that the elasticity of agricultural output with respect to public expenditure was 0.18–0.2.
At the same time research suggests that it is not the level but the composition of support that matters for economic growth in the sector. Support for individual producers is often found inefficient, the efficiency of such measures tends to decrease with time and, in some cases, these have a negative effect on performance in agriculture. Individual producers’ subsidies tend to crowd out investment in public goods, which is more efficient for agricultural growth (World Bank, 2009).
Research based on the Russian data also suggests that in general, public support for agriculture has a positive effect on production and profits; however, it is not the most important factor in economic growth.5
Svetlov et al. (2019) analyzed the effect of public support on agricultural producers’ income in 14 regions of Russia using a two-stage regression model and found a positive impact in the majority of regions; in three regions, however, the effect was negative. The researchers conclude that while, in general, subsidies increase incomes and promote growth, the differences in the level of support do not explain all the variation in outcomes measured by financial indicators or production growth.
In this study we analyzed the effect of public support on economic growth using a fixed effects model estimated on panel data for 77 regions of Russia for 2006–2017. The fixed effects specification controls for all regional characteristics that do not change over time, such as climate and the quality of governance and institutions. We can therefore attribute the differences in the outcome, measured as the growth of per capita agricultural output, to changes in the variables in question, i.e. public support from the federal and regional budgets.
Following the methodology described in López and Galinato (2007), we estimate the following model:

$$\ln g_{it} = \beta_1 L(E_{it}) + \beta_2 L(F_{it}) + \beta_3 L(T_{it}) + \beta_4 L(Y_{it}) + \beta_5 z_{it} + \beta_6 k_{it} + \beta_q q_{it} + \mu_i + \varepsilon_{it},$$

where:
- $g_{it}$ — agricultural production growth (million rubles at constant 2006 prices) per agricultural worker, in logs;
- $E_{it}$ — budget support to agriculture per worker (thousand rubles at constant 2006 prices, log);
- $F_{it}$ — share of the federal intra-budget transfer in budget support (%);
- $T_{it}$ — trade openness index (exports plus imports divided by the gross regional product, GRP; %);
- $Y_{it}$ — non-agricultural GRP per capita (thousand rubles at constant 2006 prices, log);
- $L$ — lag operator;
- $z_{it}$ — agricultural land area (thousand ha per person, log);
- $k_{it}$ — capital (fixed assets in agriculture; thousand rubles at constant 2006 prices, log);
- $q_{it}$ — agricultural price index (%);
- $\mu_i$ — regional fixed or random effects;
- $\varepsilon_{it}$ — error term.
The study confirmed that there is a positive relationship between budget support for agriculture and economic growth in agriculture (Table 1). We demonstrate that in order to achieve a 1% increase in output, budget support has to be increased by 10%. We also found that federal intra-budget transfers are more efficient than regional spending: production grows faster where the share of federal funds in the agricultural budget is larger.
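The fixed-effects logic can be illustrated on synthetic data: demeaning every variable within each region removes the time-invariant regional effects, and OLS on the demeaned data recovers the coefficient of interest. This is only a sketch loosely shaped like the paper's setting (77 regions, 12 years, an elasticity near 0.11); it is not the authors' data or code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_years = 77, 12          # shaped like the paper's panel
beta_true = 0.11                     # elasticity close to the reported estimate

region = np.repeat(np.arange(n_regions), n_years)
alpha = rng.normal(0.0, 1.0, n_regions)[region]      # time-invariant region effects
x = rng.normal(0.0, 1.0, region.size) + 0.5 * alpha  # "support", correlated with effects
y = beta_true * x + alpha + rng.normal(0.0, 0.1, region.size)

def within(v, g):
    """Subtract each group's mean: the within (fixed-effects) transformation."""
    group_mean = np.bincount(g, weights=v) / np.bincount(g)
    return v - group_mean[g]

x_w, y_w = within(x, region), within(y, region)
beta_hat = (x_w @ y_w) / (x_w @ x_w)   # OLS on demeaned data = FE estimator
print(round(beta_hat, 3))              # close to 0.11
```

Because the regressor is correlated with the region effects here, pooled OLS on the raw data would be badly biased upward, which is exactly why the fixed-effects transformation matters in this setting.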
Table 1. The impact of public expenditure on agricultural growth in Russia. Dependent variable: log of agricultural production growth (million rubles at constant 2006 prices) per agricultural worker. Standard errors in parentheses.

| Variable | Fixed effects | Random effects |
| --- | --- | --- |
| Land | 0.8947*** (0.0968) | 0.5425*** (0.0854) |
| Capital | 0.0687* (0.0621) | 0.2306*** (0.0688) |
| Non-agricultural GRP | 0.1499* (0.0734) | 0.0808* (0.0565) |
| Budget support | 0.1091*** (0.0340) | 0.1227*** (0.0384) |
| Federal transfer’s share in support | 0.1475*** (0.0516) | 0.2055*** (0.0482) |
| Trade openness | –0.0012* (0.0005) | –0.0005 (0.0005) |
| Price indices | –0.0816 (0.0542) | –0.0151 (0.0626) |
| Constant | | 1.6039*** (0.3494) |
| R² | 0.7145 | 0.6500 |
| N of observations | 847 | 847 |
| Hausman test (p-value) | 0.00 | |
Therefore, we conclude that the budget support has a positive influence on agricultural production growth, but it is not the only factor and the instruments used for support matter. Most likely, the impact of different types of subsidies will vary, and the data to investigate this would be beneficial for finding the most efficient support instruments.
# 4. Support is shifting from the general services to subsidies to producers
From 2012 until 2019, the main policy goal in the State Program was to increase the volume of production (for import substitution), and therefore the majority of support programs were directed at increasing production. About 40–50% of the funds were allocated to programs aimed at production expansion. Rural development support received 4.7% of the funds, and support directed at small farmers 4% of the agricultural budget.
The structure of support has been relatively stable since 2006, with 15–30% of the funds allocated to investment support through mid- and long-term credit support programs. Other subsidies to producers, especially purchased-input subsidies (feed, seeds, fertilizers, diesel fuel), have always been among the main policy instruments.
Export expansion became the main goal of agricultural policy in 2019, but until then only 0.24% of the budget went to export enhancement. The budget share of key services for exporters, such as phytosanitary and veterinary services, declined from 8.4% of all general services spending in 2006 to 3.3% in 2017. Research and development expenditure was declining until 2018, and education’s share in the budget was stable at 10%, mainly financing the recurrent administrative costs of agricultural colleges.
The structure of support and the choice of policy instruments is much more important for achieving the growth in agriculture than the level of budget expenditure. Recent research has demonstrated that general services support contributes most to the long-term competitiveness and growth in agriculture. The results show that a shift of 10 percentage points of the agricultural budget from individual producers’ support to general services, maintaining total spending constant, leads to approximately a 5% increase in agricultural value added per capita (Anríquez et al., 2016).
Fig. 3 demonstrates how the budget expenditure shifted from the general services towards support to producers individually in the past 12 years. In the Russian budget, the share of support to general services in the agricultural budget decreased from 48% in 2006 to 29% in 2017.
This issue is not specific for Russia; the World Bank noted that this is a common issue for most countries included in their PER studies (World Bank, 2011). However, some countries have a larger share of their budget allocated to general services’ support: in Canada, Chile and Australia more than half of the agricultural budget goes to general services, and in Costa Rica and New Zealand — 85% and 94% respectively (Fig. 4). Among main Russian trade partners, only China increased the general services support considerably, and most importantly, this increase happened almost entirely in support of research, development and innovations.
Among the various general services support programs, research and development brings the highest rates of return. The average rate of return to public investment in agricultural R&D is 43% (Alston et al., 2000), much higher than typical rates of return on private investment projects (see also Mogues et al., 2012). At the same time, in Russia only 3.1% of the agricultural budget goes to R&D financing, which corresponds to 11–14% of general services support. The US spends 22% of its general services support on R&D, Israel 43%, and Brazil 77%. In Russia, support for R&D has been declining in constant prices over the past 8 years, and in 2017 it was only half of the 2009 level, while support through subsidies to production and inputs was increasing. The recent shift in stated policy objectives from production growth to export expansion requires redirecting funds to research, development and innovation in order to increase the international competitiveness of Russian agriculture.
# 5. Direct support to the largest and most successful producers is part of the growth promotion and export expansion strategy
The focus on subsidies to producers as a main policy instrument is reflected in distribution of support among the types of agricultural producers as they tend to benefit larger producers disproportionally.
Despite the special quotas allocated in the subsidy programs for small farmers, they receive nearly no budget support. Only about 4% of federal budget funds are allocated to programs for small farmers. In 2016, only 2.1% of small farmers used budget support, and this share decreased further to 1.6% in 2017.
The unequal distribution of support between economic agents has been consistently criticized by policy analysts (Shagayda et al., 2017; World Bank, 2006). The government responded by introducing limits on the maximum subsidized credit available per firm and by cancelling some federal subsidies in regions where production is highly profitable. As a result, in 2017 the distribution of support by profitability level was fairly equal: regardless of profitability, the share of support in revenues was about 5% for all participants. At the same time, the largest share of budget funds (28%) was allocated to companies with the lowest positive profitability. Moreover, 15% of the subsidies went to loss-making companies, and despite this considerable support, those 2,300 producers remained loss-making. The distribution of subsidies among agricultural producers by profitability group reflects the dual goals of the policy, which aims, on the one hand, to support loss-making farms for mostly social reasons and, on the other, to support investment in the most successful areas with the goal of import substitution and export expansion.
Unequal distribution of support benefitting the largest and most successful producers is not exclusive to Russia. In the EU, the 20% of farms with the highest income receive up to 80% of subsidies; in the US in 1995–2006, 10% of farms received 74% of the subsidies. Farms in the highest income group received on average $36 thousand per farm per year, while in the bottom income group it was only $700 (World Bank, 2011).
The situation in Russia, however, is different because support for the largest and most successful farms is part of the strategy of increasing investments in agriculture with the aim of creating value chains for export expansion. At the same time, there is no convincing evidence that budget support played a key role in this process.
Unlike the subsidies, support to the general services creates benefits to producers equally without benefitting the most successful producers. At the same time, the subsidies play a less and less important role in stimulating investment in agriculture, the trade policy being a major factor. This is another argument in favor of shifting the funds towards the general services support.
# 6. Regionalization of budget support to agriculture and its impact on the efficiency of public expenditure
The level of sub-national budget support to agriculture varies significantly across the regions. Thus, in 2017 the share of agriculture in regional budgets varied from 0.7% in Kemerovo region to 15% in Bryansk Region. Fifty percent of subnational support to agriculture was provided in Central and Volga Federal Districts. However, the Far Eastern District received the highest support per ha and per rural inhabitant (Fig. 5).
Regional budget expenditure, both on support to producers and on rural development, is highly concentrated. In 2018, forty percent of all credit subsidies were provided in 5 regions (Belgorod Region, Bryansk Region, Voronezh Region, Kursk Region and the Republic of Tatarstan). Thirty percent of the rural development program funds went to 5 regions: Rostov Region, Republic of Bashkortostan, Republic of Daghestan, Republic of Tatarstan and Republic of Sakha (Yakutia). There is no correlation between regional budget support and agricultural output: the highest ratio of budget support to gross agricultural output was in Chukotka Autonomous Area (over 100%) and the lowest in Krasnodar Territory (less than 2%). While many regions allocate a greater share of budget funds to general services programs than the federal budget does, on average less than 10% of regional budgets goes to general services support.
In 2004, the powers were redistributed between the federal and regional levels of the budget system, providing regions the rights to introduce and implement agricultural policy programs. There is evidence that this stimulates market disintegration and leads to sub-optimal efficiency of budget spending. Support to agricultural producers from the regional budgets provides advantages to producers in richer regions and creates unfair competition.
The intra-budget agricultural policy consists of two components. The first focuses on support for the most efficient and financially viable projects, as discussed in the previous section. Those projects are usually located in the areas most climatically favorable for agricultural production, so those regions received a higher share of support during the period of study. The second is the pursuit of each region’s self-sufficiency6 in agrifood products. This strategy is supplemented by the export development strategy, which aims at the participation of every region in export value chains and at support for agricultural producers in regions with the least developed agricultural sectors; these regions often have the least favorable climatic conditions for agriculture and therefore have no potential to become competitive.
Decentralization of support slightly decreased since 2010; the regional budget’s share of total support was 28% in 2018 (Fig. 6). However, the majority of federal funds end up in the regional budgets in the form of intra-budget transfers and as a result the federal government controls only 37% of the agricultural budget. The share of federal intra-budgetary transfers for agricultural support programs varies across the regions. There is no correlation between the level of federal transfers for agricultural support and total level of support to agriculture in the region. In the poorer regions the share of federal support is higher, due to the limited ability to implement and finance regional programs.
In most cases, the most financially stable regions are those with less developed agriculture, as they are located in areas climatically less favorable for agricultural production. Over the past 12 years this imbalance has somewhat decreased, with increased financial stability in agricultural regions. However, the pattern still holds, as demonstrated by Fig. 7: the richer the region, the smaller the role of agriculture in its economy.
The increased role of the regional governments in implementing the agricultural support programs provides benefits to the richest regions and therefore stimulates the shift of production towards the least climatically favorable areas, potentially creating efficiency losses. The richest regions have more financial capacity to support investment projects in agriculture from regional funds, also, regional lobbying forces work to attract a larger budget share from the federal funds. We looked at all those forces at play to see how this affects the development of agriculture.
In order to study the consequences of the regionalization of support, we looked at the redistribution of the agricultural production between the groups of regions according to their location in Fig. 7. We identified four groups as follows. Group 1: regions with high financial capacity and less developed agriculture;7 group 2: high financial capacity and well-developed agriculture; group 3: low financial capacity and well-developed agriculture; and group 4: poor, non-agricultural regions.
We expected production to shift to the regions in group 1, since the current policy promotes higher subsidies in the richer regions, but the data do not support this. In spite of the policy stimulus, the greatest development occurred in agriculture in groups 2 and 3, the regions where agriculture was a major part of the economy at the beginning of the period in question. Agricultural production grew by about 30% in 12 years in the two groups combined. However, if we look not only at production but also at income distribution, we see that profits from livestock production were higher in the regions with the largest budgets (Table 2). This is an effect of the policy aimed at supporting the most successful projects.
Table 2. Trends in Russia’s regional support and agricultural production by group, 2005–2017 (%).

| Indicator | Group 1: high financial capacity / no developed agriculture | Group 2: high financial capacity / developed agriculture | Group 3: low financial capacity / developed agriculture | Group 4: low financial capacity / no developed agriculture |
| --- | --- | --- | --- | --- |
| Support to agriculture, average growth rate | 0 | 0.02 | 0.02 | 0.01 |
| Agricultural output growth, 2017/2005 | 8 | 28 | 32 | 19 |
| Group’s share in profit from crop production | –40 | 0.6 | 39 | –62 |
| Group’s share in profit from livestock production | –35 | 90 | 25 | – a) |
More and more programs are structured in the way that benefits the regions with the larger budgets, i.e. the consolidation of the various subsidies in the “Joint Subsidy” in 2017, new rules of the subsidized credit support since 2018. Therefore, we are likely to see more redistribution effect in the next few years.
Both budget support and production growth were the highest in the groups of regions where the share of agriculture in GRP was higher at the beginning of the period of study, regardless of the regions’ budget size, reflecting the government’s strategy to promote investment in the high-potential areas. At the same time, support and output increase in group 4, poor non-agricultural regions, reflects the second regional development strategy, the one aimed at self-sufficiency in agricultural products for each region, and promoting investment in the least developed regions with poor agro-climatic conditions.
We looked at other factors potentially affecting inter-regional distribution of budget support and agricultural production among the regions: budget size, lobbying capacity of the local authorities and agroclimatic conditions, and found that favorable agro-climatic conditions were the only significant factor of agricultural production growth. Average values of support and production and the difference in means between groups of regions are presented in Table 3. Both production and subsidies grew faster in the regions with the most favorable agro-climatic conditions for agriculture, irrespective of the GRP level, budget size and lobbying index. On average, in the favorable climate group, agricultural production growth was 3 percentage points faster, and budget support per capita growth was 1.9 percentage points faster than in the rest of the country. Despite regional government’s efforts to support agriculture in the richest regions, the production is shifting to the regions where it is the most efficient economically (the same trend was described in Uzun and Lerman, 2017).
Table 3. Effect of lobbying capacity, financial capacity and agro-climatic conditions on average growth of support and production in Russia, 2006–2017.

| Regions’ characteristics | Average budget support growth rate | Average agricultural output growth rate |
| --- | --- | --- |
| High lobbying index a) | 0.77 | 1.11 |
| Low lobbying index | 1.47 | 1.13 |
| Difference (standard error in brackets) | –0.7 (1.22) | –0.02 (0.67) |
| High financial capacity b) | 0.27 | 0.46 |
| Low financial capacity | 1.98 | 1.91 |
| Difference (standard error in brackets) | –1.70 (1.18) | –1.44 (0.64)* |
| Favorable agro-climatic conditions c) | 2.36 | 3.24 |
| Unfavorable agro-climatic conditions | 0.45 | 0.15 |
| Difference (standard error in brackets) | 1.91 (1.27) | 3.09 (0.62)*** |
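The "difference" rows of Table 3 are ordinary differences in group means with a two-sample standard error. A minimal sketch of that computation, using toy growth rates chosen so the group means match the agro-climatic row of the table (the individual values are invented; the paper's region-level data are not reproduced here):

```python
import numpy as np

def mean_diff(a, b):
    """Difference in group means and its two-sample (Welch) standard error."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    diff = a.mean() - b.mean()
    se = np.sqrt(a.var(ddof=1) / a.size + b.var(ddof=1) / b.size)
    return diff, se

# Toy growth rates; group means equal 3.24 and 0.15 as in Table 3
favorable = [3.1, 3.5, 2.8, 3.6, 3.2]
unfavorable = [0.2, -0.1, 0.4, 0.1]
diff, se = mean_diff(favorable, unfavorable)   # diff = 3.09
```

Dividing the difference by its standard error gives the t-statistic behind the significance stars in the table.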
We also investigated differences in production and support growth rates among the regions in the most favorable agro-climatic group and found large differences (Fig. 8). Thus, the Republic of Tatarstan demonstrated 26% growth in agricultural output over 12 years, while output in the Republic of Bashkortostan declined by 14%. Voronezh and Kursk Regions demonstrated growth rates very different from other regions with similar conditions (average budget support growth rates of 0.11% and 0.13%, versus an average of only 0.02% for the same climatic group; agricultural output growth of 6.8% and 6.3%, compared to an average of 3.2%).
Therefore, we can conclude that, in general, the distribution of budget support among Russian regions was not a major factor in the production decisions, which were defined by market forces. At the same time, we see that the profits grew much faster in the richer regions, and among the regions with similar climatic conditions we see considerable inequality in the production and budget support allocations.
# 7. Conclusion
This agricultural public expenditure review demonstrated that although budget support had a positive effect on agricultural growth, it is not the only factor contributing to growth in agriculture; the structure of support and its distribution between different types of producers and between the levels of the budget system also matter. During the period of study, budget funds shifted from support for general services to support for individual producers. The share of general services support in the agricultural budget decreased from 48% in 2006 to 29% in 2017. The instruments of support that are most efficient at promoting growth in agriculture, such as research, development and innovation support, are underfinanced (3.1% of budget funds in 2017).
Shifting the support from subsidizing individual producers to providing general services would redistribute the benefits of the policy away from the largest and most successful producers. During the period of study, supporting the most successful producers was part of the import substitution and export expansion strategy, and it played its role in ensuring the competitiveness of those producers on world markets, while the productivity of the rest of Russian agriculture remains low. Support for general services benefits all producers equally and will promote the innovative development required to ensure long-term international competitiveness.
The distribution of support between the federal and regional budgets leads to market disintegration and reduces the efficiency of budget spending. The policy encourages a shift of production to the regions with the least developed agriculture and the largest budgets. However, other factors proved stronger than the regionalization of support, and production shifted to the regions with the best agro-climatic conditions. At the same time, the regions are now receiving more and more capacity to support producers directly, and the inefficiency of this strategy will inevitably lead to a sub-optimal spatial distribution of production. Moreover, the regions within the most favorable climatic group receive very different levels of subsidies, so the competition between them is unfair. It is recommended to legally restrict the application of trade-distorting policy measures at the regional level to ensure market integration, which is important for long-term growth in agriculture.
# References
• Alston J., Chan-Kang C., Marra M., Pardey P., Wyatt T. (2000). A meta-analysis of rates of return to agricultural R&D (Research Report 133). Washington, DC: International Food Policy Research Institute.
• Anríquez G., Foster W., Ortega J., Falconi C., de Salvo C. (2016). Public expenditures and the performance of Latin American and Caribbean agriculture. IDB Working Paper, No. IDB-WP-722, Inter-American Development Bank.
• Mogues T., Yu B., Fan S., McBride L. (2012). The impacts of public investment in and for agriculture: Synthesis of the existing evidence. IFPRI Discussion Paper, No. 1217, International Food Policy Research Institute.
• Nefedova T. G. (2012). Major trends for changes in the socioeconomic space of rural Russia. Izvestiya Rossiiskoi Akademii Nauk. Seriya Geograficheskaya, 3, 5–21 (in Russian). https://doi.org/10.15356/0373-2444-2012-3-
• OECD (2016). OECD’s producer support estimate and related indicators of agricultural support: Concepts, calculations, interpretation and use (The PSE manual). Paris: Organisation for Economic Co-operation and Development.
• Romanenko I. A., Evdokimova N. E. (2014). Applying mathematical models to solving the problem of efficient land use taking into account the regions' agropotential. Zemleustroystvo, Kadastr i Monitoring Zemel, 11 (119), 55–60 (in Russian).
• Sedik D., Lerman Z., Shagayda N., Uzun V., Yanbykh R. (2017). Agricultural and rural policies in Russia. In W. H. Meyers & T. Johnson (Eds.), Handbook of international trade and agricultural policies. Vol. I: Policies for agricultural markets and rural economic activity (pp. 120–138). Singapore: World Scientific Publishing.
• Shagayda N., Uzun V., Gataulin E., Yanbykh R. (2015). Evaluation of agricultural producers' support and developing the mechanisms of synchronization of federal and regional policies in view of the Russian WTO membership. Moscow: RANEPA (in Russian).
• World Bank (2009). Mexico: Agriculture and rural development public expenditure review. Washington, DC.
• World Bank (2011). How do we improve public expenditure in agriculture? Washington, DC.
• World Bank (2017). Russia: Policies for agri-food sector competitiveness and investment. Washington, DC.
• Uzun V., Gataulina E., Saraikin V., Karlova N. (2014). The methods of estimating the effect of agricultural policy on the development of agriculture. Moscow: RANEPA (in Russian).
• Uzun V., Lerman Z. (2017). Outcomes of agrarian reform in Russia. In S. Gomez y Paloma, S. Mary, S. Langrell, & P. Ciaian (Eds.), The Eurasian wheat belt and food security (pp. 81–101). Switzerland: Springer. https://doi.org/10.1007/978-3-319-33239-0_6
1 The support to general services includes the programs which bring benefits to agricultural sector as a whole and not to individual producers, such as research and development, education, inspection services, infrastructure development programs, marketing and promotion and other support programs increasing the potential of the whole sector.
2 OECD monitors the level of support to agriculture in member countries as well as other countries around the world using a number of indicators of support. The Total Support Estimate (TSE) measures support to agricultural producers, to general services, and budget transfers to consumers.
3 Average in 2015–2017, www.oecd.stat.
4 Setting the production expansion as the main policy goal does not take into account its potential negative effect on the farmers’ incomes in the absence of adequate demand; it also promotes expansion of input use without consideration of the environmental impact.
5 The additional profitability due to subsidies in 2010–2012 compared to 2007–2009 was 12%, in the absence of the subsidies agriculture would have been loss-making in 36 regions (Uzun et al., 2014).
6 According to the Meeting of the Government of the Russian Federation on support to agrifood complex on February 7, 2018 (https://www.vestifinance.ru/articles/97421).
7 The regions were allocated to one of the 4 groups according to their financial capacity index and share of agriculture in GRP compared to an average value for the Russian Federation.
https://homework.cpm.org/cpm-homework/homework/category/CC/textbook/CCA2/chapter/Ch4/lesson/4.2.1/problem/4-76
4-76.
Multiply or divide the rational expressions below. Write each answer in simplified form.
1. $\frac { ( x - 3 ) ^ { 2 } } { 2 x - 1 } \cdot \frac { 2 x - 1 } { ( 3 x - 14 ) ( x + 6 ) } \cdot \frac { x + 6 } { x - 3 }$
Multiply across the numerator and denominator.
Look for factors that simplify to one.
$\frac{(x-3)(x-3)(2x-1)(x+6)}{(2x-1)(3x-14)(x+6)(x-3)}$
$\frac{(x-3)}{(x-3)}=\frac{(2x-1)}{(2x-1)}=\frac{(x+6)}{(x+6)}=1$
$\frac{(x-3)}{(3x-14)}$
2. $\frac { 4 x ^ { 2 } + 5 x - 6 } { 3 x ^ { 2 } + 5 x - 2 } \div \frac { 4 x ^ { 2 } + x - 3 } { 6 x ^ { 2 } - 5 x + 1 }$
Convert the quotient to a product.
$\frac{(4x^{2}+5x-6)}{(3x^{2}+5x-2)}\cdot \frac{(6x^{2}-5x+1)}{(4x^{2}+x-3)}$
Use a generic rectangle to factor each trinomial.
$4x^2+5x−6$
Put the first and last terms in opposite corners.
The products of the diagonals need to be equal. In this case, find two missing terms whose product is $−24x^2$; for this expression their sum must also be $5x$ (the terms are $8x$ and $−3x$).
Write the missing terms in the blank corners of the rectangle.
Find the greatest common factor of each row and column. Write the factors on the outside of the rectangle.
The side 'lengths' of the rectangle are the factors of the trinomial.
$4x^2+5x−6=(4x−3)(x+2)$
You can use this method to factor the other three trinomials. Then, complete the division problem. If you need more help, see problem 3-78 (c).
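If you want to check the work without redoing the algebra, the simplification in part (1) and the generic-rectangle factorisation can be spot-checked numerically. The Python sketch below is not part of the lesson; it just verifies the two identities at a few sample points (which is enough to pin down low-degree polynomials), using exact rational arithmetic.

```python
from fractions import Fraction

def product(x):
    # The three factors from part (1), multiplied across
    return ((x - 3)**2 / (2*x - 1)
            * (2*x - 1) / ((3*x - 14) * (x + 6))
            * (x + 6) / (x - 3))

def simplified(x):
    # The claimed simplified form (x - 3)/(3x - 14)
    return (x - 3) / (3*x - 14)

# Check at sample points away from the excluded values x = 3, 1/2, 14/3, -6
for x in (Fraction(7), Fraction(10), Fraction(-5)):
    assert product(x) == simplified(x)

# The generic-rectangle factorisation 4x^2 + 5x - 6 = (4x - 3)(x + 2):
# agreement at three points is enough, since both sides are quadratics
for x in (Fraction(0), Fraction(1), Fraction(-7)):
    assert 4*x**2 + 5*x - 6 == (4*x - 3) * (x + 2)

print("both identities verified")
```

Using `Fraction` keeps the comparisons exact, so there is no floating-point tolerance to worry about.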
https://gmatclub.com/forum/the-perimeter-of-which-of-the-above-triangles-can-be-determined-from-246591.html
# The perimeter of which of the above triangles can be determined from
Math Expert
Joined: 02 Sep 2009
Posts: 41688
The perimeter of which of the above triangles can be determined from [#permalink]
08 Aug 2017, 10:30
The perimeter of which of the above triangles can be determined from the information given?
(A) I only
(B) II only
(C) I and II only
(D) II and III only
(E) I, II and III
Attachment: 2017-08-08_2125_001.png (figure with triangles I–III; not included in this extract)
BSchool Forum Moderator
Joined: 26 Feb 2016
Posts: 1332
Location: India
WE: Sales (Retail)
Re: The perimeter of which of the above triangles can be determined from [#permalink]
08 Aug 2017, 11:37
When one side has been given,
In an equilateral triangle where angles are equal, sides are also equal.
Hence, perimeter = 3*5 = 15
Though the right-angled triangle looks like its perimeter can be found,
since the hypotenuse is 5, it could be the Pythagorean triplet (3, 4, 5), where the perimeter would be 12.
However, the sides of the triangle could also be $$\sqrt{5}$$ and $$\sqrt{20} = 2\sqrt{5}$$.
The perimeter of that triangle is $$5 + 3\sqrt{5}$$. We cannot get a unique perimeter.
Hence, Option B (II only) is the answer.
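To make the ambiguity in triangle III concrete, here is a quick numeric check (mine, not part of the thread): two different right triangles share hypotenuse 5, so knowing only the hypotenuse cannot pin down one perimeter, while the equilateral triangle's perimeter is fixed.

```python
import math

# Two right triangles with the same hypotenuse (5) but different legs
legs_a = (3.0, 4.0)                      # the (3, 4, 5) Pythagorean triplet
legs_b = (math.sqrt(5), math.sqrt(20))   # 5 + 20 = 25, so hypotenuse is also 5

for a, b in (legs_a, legs_b):
    assert math.isclose(a*a + b*b, 5**2)  # both satisfy Pythagoras with c = 5

perimeter_a = sum(legs_a) + 5            # 12
perimeter_b = sum(legs_b) + 5            # 5 + 3*sqrt(5), about 11.708
assert not math.isclose(perimeter_a, perimeter_b)  # same data, different perimeters

# The equilateral triangle (II) is fully determined: all three sides equal 5
print(3 * 5)
```
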
https://glosa.fias.fr/glosalist/405_raw
# Re: [glosalist] Re: Too much plainness
Saluta Laslo e Holo plu Hetero-pe,
At 08:44 AM 2/23/04 +0100, you wrote:
Saluta Igor, The big problem is that the people generally can’t ride a horse. They either fall to the one side of the horse, or fall to the another side of it. Only a few people can find the middle of the way. *** Yes, it’s a problem with both horses and designed languages: I’d say most people hang on tight to the horse, or language, they’ve got - that is, if they don’t lose their hold, and fall off.
I’m also a speaker of Esperanto. I have no problem to express me even in a very sophisticated mode using that language. Nevertheless I think that Esp. make abuse of ending-formatives. The Esp texts you can read many times slowlier than the English texts. I have made lots of experiments concerning this affair. Spite of my good knowlidge of Esp. I don’t really like it. I’d like to get a language that combines the best features of the experimented planned languages. *** I sympathise with you Laslo, and agree that yet another design specification could be derived, and a better middle way might be found.
And I think, it would be easier to make the best language starting from the Glosa as starting from Esp. *** And I will agree that this is true - for a number of reasons. But I would also question that it was good policy for a different number of reasons. While Glosa might not be the best of all possible languages, and you might actually be the genius to find the language that works best for the greatest number of the world’s population, it seems that you will have an uphill job on your hands finding the people prepared, and able, to work with you to derive the optimal language - whatever its ultimate specification might be. And, on top of that, the members of such a group need to mobilise powerful forces to get the newly designed language, no matter how good it may be, into a trial and promotion situation. I suspect that the majority of people on this List are not actually language developers, and I must say that at 67, I do not have enough good years in me to be there for the long haul - even if such a mid-grey language was developed, and did look like paying off financially. For Glosa to go ahead it needs the input of very creative people writing imaginative instructional material, and it also needs very catchy marketing ideas. That is, with losing no time in development, Glosa still needs the input of massive human potential if it is to get off the ground. But things are slow, and even though Ron and Wendy were a pair of geniuses, still they were unable to achieve a solid result in a quarter of a century. A major problem in the Planned Language business seems to be that there is no money in it; and, no matter how bright the person at the top might be, they still have to rely on volunteers for the development of their creation.
Esp has already a very big amount of fiction and a great number of old lag users, who don’t want any changes while they are living, but the Glosa scarcely has users and fiction, so it’s possibility to be improved is incomparable to the Esp. *** The Esperantists, of course, count their mass of literature including Esperanto originals, plus their armies of fluent speakers, as their major strengths. And in terms of establishment they are right. It is just unfortunate that their foundation stone is so grammatically ornate. I, too, became convinced of the negative value of the “old lag” element when I investigated the Distributed Language Translation project reading, in 1977, a report from 1972 explaining how a computer-based holding language would be based on Esperanto. It would allow the speedy distribution of information around the world via this intermediate, modified Esperanto - to be translated into the various target languages at the consoles of end-users. And they would have had it, too, had not the researchers been brought back into line by the Esperantists demanding a humanly-readable, very Esperanto-like “Distributed Language.” With hobbled researchers, and dwindling results, funding for the DLT project was withdrawn around 1980.
My first reaction was to introduce possesive pronouns. But really the Glosa need not it, if it used word class endings. But at this point, you have fallen to the other side of the horse, and protested against to introduce of all kind of endings like in Esp. But nobody were speaking about that. I proposed only endings for the part of speech and nothing more. *** Were I immortal, and the time-scale immaterial, I would seriously look at the possibilities with you. However, I am mortal, and must wonder where the next meal is coming from - before I run out of time, and have no further need for food - or language. I appreciate the Glosa specification, and plan to put any remaining resources I have into promoting and writing for, it. Having observed how long the developmental process takes - using volunteer labour, I would agree that a Glosa Mach IV (of the mid-grey variety) should take no more than five to ten years to reach design completion. Unfortunately, I lack the combination of time and resources to allow me to work full-time on such a project.
What is even more worrying is the ethical side. Were such a development of Glosa possible, and provably beneficial, it should have the approval and support of Wendy Ashby. There could be ethical objections to a hostile take-over bid, no matter how technically superior the resulting product might be.
Sorry for the capital letters, I only have put a quotation from an earlier message of Robin:
“DON’T OVERHEAT THE LISTENER’S BRAIN BY FORCING THEM INTO PLAYING MIND-GAMES WITH EVERY WORD. “
The above opinion is valid even in case of a too plainness expressing. So, either the phrase is too formatived by endings, or it is too plainness, that don’t helps the understanding. *** I had better watch what I say, in case the words be used against me. Seriously, though, you are right about “simplicity” in linguistic design bringing its own level of difficulty for users. Until one learns the knack of “thinking in Glosa,” the constant mental translations and need for finding the right word can be so mind-numbing as to stop people from attempting to speak or write in Glosa. One aspect of Glosa’s ‘plainness’ which causes difficulty is the lack of metaphor in Glosa. While there might be a strong case for avoiding non-literal language in a designed language for global use, it does make it hard for the learner, who has to find that one right word to express the required concept. And this is a case for having at least one Greek root and one Latin root for most concepts.
So, do words function as particular parts of speech because they are labelled that way, or because of their location in the sentence? If your thinking and culture are based on a clearly- labelled language, then you might have quite a bit of trouble in managing a language whose grammar is syntax-based. However, If we did proper scientific trials using well-written, syntax-based instructional materials, we might find that speakers whose first language was highly inflected, could discover that using a very different, but well-taught, language medium was quite refreshing.
The whole area of Interlinguistics remains inadequately researched.
Even in the English some words are marked by word class endings. It helps you very much on the speedy undestanding. It could give you a larger manner of expressing. It could give you to operate more unimpeded when speaking or writing, while it damaged nothing.
There is not only black and white, but there is also the gray. *** Mainly adverbs with “-ly” and participles as adjectives, with “-ed” or “-en.” Then there are the category endings like “-ment”, “-er” and “-ite,” but these are like the category affixes of Glosa EG ~-pe.~
When I visited Professor M.A.K.Halliday years ago, he said, "Yes, but these designed languages have not had the years of use to knock off their rough edges." And I am reminded of this by your mention of the 'gray' of a language that suits all-comers. All too often designed languages impose a very narrow regime of grammar, possibly over-inflected, or seemingly plain by being devoid of inflection.
Saluta,
https://www.exampaper.com.sg/questions/a-maths/celebrating-100-posts-of-blog-building-with-polynomials
# Celebrating 100 Posts of Blog-Building With Polynomials
Tuition given in the topic of A-Maths Tuition Questions from the desk of at 8:56 pm (Singapore time)
When this blog was started, Miss Loi had harboured a grandiose vision of the day she clocked her first 100 posts.
Due to poor planning, Novena residents will never get to see the likes of this
In her mind she pictured a glorious day of great celebration and festivities, a day of receiving homages from dignitaries around the world, a day of fireworks lighting up the skies and endless processions of performers flooding the streets around Novena!
Hence when a casual glance at the blog stats today revealed that this IS the 100th post, posted on a day less than two weeks away from the O-Levels, anti-climactic was probably too mild a word to describe the magical moment.
With no time to engage street performers or call the fireworks company at such short notice, Miss Loi will just have to make do with a polynomial question (that has frequently appeared in the past three years) to mark this joyous occasion:
The cubic polynomial f(x) is such that the coefficient of x³ is 1 and the roots of f(x) = 0 are -2, 1+√3 and 1-√3.
1. Express f(x) as a cubic polynomial in x with integer coefficients.
2. Find the remainder when f(x) is divided by x-3.
3. Solve the equation f(-x)=0.
If you’ve been following the recent series of questions here, you would’ve noticed the trend: they’re all pretty straightforward once you know how to start. That’s what O-Level maths questions are all about – basically testing your ability to choose the best approach from your collection of approaches.
Given the high standards of Jφss Sticks commentators these days, no relevant equations shall be put up this time to spoil the fun, as a simple hint should be enough for you to yell “Orrrhhh! So simple!”.
And when you eventually solve the question, do sit back, enjoy the imaginary fireworks and wonder how on earth could so many students contrive to get zero marks for this!
### Revision Exercise
To show that you have understood what Miss Loi just taught you from the centre, you must:
1. xinyun commented in tuition class
2007
Oct
9
Tue
9:38am
2. Miss Loi Friend Miss Loi on Facebook @MissLoi commented in tuition class
2007
Oct
9
Tue
10:17am
2
Thanks xinyun. And how did you celebrate yours?
Sorry for calling off the parade and fireworks display ... the police just rejected Miss Loi's public entertainment license application ... sigh ... 🙁
3. HORNY ANG MOH commented in tuition class
2007
Oct
9
Tue
12:30pm
3
Congratulation! ( try to imageing the sound of popping champage ) On this milestone! It is ok if not firework or parade! If I am in s'pore I don't mind asking u out for dinner to chelebrate on ur achievement! Have a nice day!
4. Miss Loi Friend Miss Loi on Facebook @MissLoi commented in tuition class
2007
Oct
9
Tue
5:22pm
4
HAM, you don't mind but wouldn't your gf mind you dining around with other girls in S'pore???
Now that this 100th post milestone had quietly come and gone (sadly without the promised fanfare), time to think of another grand vision for this blog's anniversary. Maybe we'll see a fly-past of aeroplanes by then!
5. kiroii commented in tuition class
2007
Oct
9
Tue
5:44pm
5
hmm this seems easy nvm i'll let 123 solve it since he succinctly lamented he din get to solve de last one
6. 123 commented in tuition class
2007
Oct
9
Tue
6:56pm
6
Haha, thnx but.... this question seems abit the complicated @.@
Still thinking though
7. 123 commented in tuition class
2007
Oct
9
Tue
7:17pm
7
Er,,,,,,,,
a) f(x) = (x+2)[x-(1+√3)][x-(1-√3)]
= (x+2)(x-1)² - √3²
= (x+2)(x² - 2x + 1) - 3
= x³ - 3x - 1
8. 123 commented in tuition class
2007
Oct
9
Tue
7:20pm
8
OOps wrong -.-
Er,,,,,,,,
a) f(x) = (x+2)[x-(1+√3)][x-(1-√3)]
= (x+2)(x² - 2x + 1 - 3)
= x³ - 6x - 2
9. 123 commented in tuition class
2007
Oct
9
Tue
7:22pm
9
OOOOOOOOOOOOOppppps wrong again -.-!!
Er,,,,,,,,
a) f(x) = [x-(-2)][x-(1+√3)][x-(1-√3)]
= (x+2)(x² - 2x + 1 - 3)
= x³ - 6x - 4
Ahhh you finally got it right on the nth attempt. Have edited your workings for better readability.
When you see the keyword roots, you'll know that Part 1 tests your understanding of the Factor Theorem: x-a is a factor of f(x) where f(a) = 0. And x=a is a root of the equation. Which you've rightly applied.
Unfortunately some students got too excited and instead formed an arbitrary ax3+bx2+cx+d, whereby they tried to sub in the root values and solve the resultant simultaneous equations. That's mathematical suicide for you.
10. 123 commented in tuition class
2007
Oct
9
Tue
7:25pm
10
f(3) = 3³ - 6(3) - 4
= 5
Yes Part 2 tests your understanding of the Remainder Theorem i.e. If f(x) is divided by (x-a), the remainder is f(a).
Unfortunately, many Last-Minute Buddha Foot Huggers who quickly flipped through the pages of their textbooks missed this. And proceeded to waste their time in the exam doing long division!
And do note that Part 2 requires your answer from Part 1 to be correct in order for it to be correct. So this is a one die all die question. And you should pay extra attention to these kind of questions when you're checking for careless mistakes during your exam.
Just making a lucky guess for part (iii), is the answer 2, -1+√3, and -1-√3.????
Yes you're lucky. Some day, if you're taking A-Level Math, you'll be taught that f(-x) is actually a reflection of f(x) about the y-axis.
But for now, some students will get stuck in this coz they've never seen it in their textbooks.
Instead of just sitting there and staring blankly at the question, simply sub in -x into the original function to get f(-x), i.e.
f(x) = (x+2)[x-(1+√3)][x-(1-√3)] = 0
f(-x) = [(-x)+2][(-x)-(1+√3)][(-x)-(1-√3)] = 0
(-x+2)(-x-1-√3)(-x-1+√3)=0
(x-2)[x-(-1-√3)][x-(-1+√3)]=0
x=2, x=-1-√3, x=-1+√3
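As a quick sanity check (not part of the original solution), all three parts can be verified numerically with a few lines of Python:

```python
import math

def f(x):
    return x**3 - 6*x - 4        # part (1): f(x) = x³ - 6x - 4

# Part (1): the three given roots should all satisfy f(x) = 0
for root in (-2, 1 + math.sqrt(3), 1 - math.sqrt(3)):
    assert math.isclose(f(root), 0, abs_tol=1e-9)

# Part (2): by the Remainder Theorem, the remainder on division by (x - 3) is f(3)
assert f(3) == 5

# Part (3): the roots of f(-x) = 0 are the negatives of the roots of f(x) = 0
for root in (2, -1 - math.sqrt(3), -1 + math.sqrt(3)):
    assert math.isclose(f(-root), 0, abs_tol=1e-9)

print("all three parts verified")
```
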
11. 123 commented in tuition class
2007
Oct
9
Tue
7:25pm
11
Ops, i forgot, Happpy 100th post...hope there will be more 100ths to come ^.^
12. kiroii commented in tuition class
2007
Oct
9
Tue
11:09pm
12
hmm should be correct i got de same ans also
13. Miss Loi Friend Miss Loi on Facebook @MissLoi commented in tuition class
2007
Oct
10
Wed
11:06am
13
123, Miss Loi has marked your answers. Next time try getting it right on your first attempt, like what you should do in your actual exams.
And thanks for your happy wishes 😀
https://discourse.mc-stan.org/t/example-hierarchical-models-with-3-levels-of-hierarchy/18323
# Example hierarchical models with 3+ levels of hierarchy?
Are there any example hierarchical models anywhere with 3 or more levels of hierarchy? For example, one score from each of multiple students, each in precisely one of multiple schools, each in precisely one of multiple districts?
Embarrassing as it is, I’m having trouble wrapping my head around implementing something like that and I’d like to see how some canonical examples approach it to help get past what I’m sure is going to be a head-smacking block.
I just coded up a quick example using brms. Does this help you?
# number of students/scores
N_g <- 50
# number of schools
N_j <- 20
# number of districts
N_k <- 10
# total cases
N <- N_g * N_j * N_k
# intercept (e.g. IQ scores)
alpha <- 100
student_vec <- 1:N
school_vec <- rep(1:N_j, each=N_g, times = N_k)
district_vec <- rep(1:N_k, each=N_j*N_g)
# random intercept for schools
r_j <- rnorm(N_j, 0, 10)
# random intercepts for districts
r_k <- rnorm(N_k, 0, 5)
# linear predictor
mu <- alpha + r_j[school_vec] + r_k[district_vec]
# noise
sigma <- 5
# simulate data
y <- rnorm(N, mu, sigma)
# create data frame
d <- data.frame(
y=y,
school = school_vec,
district = district_vec
)
# fit the model (can take a few minutes depending on N)
fit <- brms::brm(
y ~ 1 + (1 | school) + (1 | district),
data=d,
cores=4
)
summary(fit)
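For readers less at home in R, the `rep(..., each =, times =)` index bookkeeping above is usually the confusing part of multi-level models. This plain-Python sketch (an illustration, not part of the original reply) rebuilds the same grouping vectors and checks that they line up:

```python
# Reconstruct the grouping indices from the R example above.
N_g, N_j, N_k = 50, 20, 10            # students per school, schools, districts
N = N_g * N_j * N_k

# R: rep(1:N_j, each = N_g, times = N_k)
school_vec = [j
              for _ in range(N_k)
              for j in range(1, N_j + 1)
              for _ in range(N_g)]

# R: rep(1:N_k, each = N_j * N_g)
district_vec = [k for k in range(1, N_k + 1) for _ in range(N_j * N_g)]

assert len(school_vec) == N and len(district_vec) == N
# First block: N_g students, all in school 1 of district 1
assert school_vec[:N_g] == [1] * N_g
assert district_vec[:N_j * N_g] == [1] * (N_j * N_g)
print("index vectors line up")
```

Note that, as in the R code, school labels 1–20 repeat in every district, so `school` is effectively crossed with `district` here; for a strictly nested three-level model you would give each school a unique ID (or use an interaction grouping such as `(1 | district:school)` in the formula).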
And the output from a representative run:
Family: gaussian
Links: mu = identity; sigma = identity
Formula: y ~ 1 + (1 | school) + (1 | district)
Data: d (Number of observations: 10000)
Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
total post-warmup samples = 4000
Group-Level Effects:
~district (Number of levels: 10)
Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
sd(Intercept) 3.72 1.06 2.29 6.17 622 1.01
~school (Number of levels: 20)
Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
sd(Intercept) 11.44 1.91 8.25 15.86 512 1.00
Population-Level Effects:
Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
Intercept 100.73 2.81 95.08 106.36 369 1.00
Family Specific Parameters:
Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
sigma 4.99 0.03 4.93 5.06 1389 1.00
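In model notation, the generative process simulated in the R code above is (with the hyperparameter values hard-coded in the simulation):

```latex
y_{ijk} \sim \mathcal{N}\left(\alpha + r^{\mathrm{school}}_{j} + r^{\mathrm{district}}_{k},\; \sigma\right),
\qquad
r^{\mathrm{school}}_{j} \sim \mathcal{N}(0, 10),
\qquad
r^{\mathrm{district}}_{k} \sim \mathcal{N}(0, 5)
```

with \alpha = 100 and \sigma = 5; brms then estimates the two group-level SDs rather than fixing them, which is why the posterior summaries above recover values close to 10, 5, 100, and 5.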
2 Likes
Thanks!!
Where the SD among students within school i is \sigma_i, I’m hoping to partially pool the values as log(\sigma_i) \sim N(\mu_{schools\sigma},\sigma_{schools\sigma})
Then, where the SD among schools within district j is \sigma_j, I’m hoping to partially pool the values as log(\sigma_j) \sim N(\mu_{districts\sigma},\sigma_{districts\sigma})
Does brms permit that kind of complexity? (I’m still not really familiar with it; more a raw-Stan person)
So you want heteroskedastic variances, varying by school and district?
brms can handle a variety of distribution-specific parameter formulas. For a model with a Gaussian likelihood, you could do something like:
fit <- brms::brm(
brms::bf(
y ~ 1 + (1 | school) + (1 | district),
sigma ~ 1 + (1 | school) + (1 | district)
),
data=d
)
although this is a much more difficult beast to fit, in my experience. I have a two-level hierarchical ordinal probit model with heteroskedastic variances here, if you're interested: https://github.com/ConorGoold/GooldNewberry_modelling_shelter_dog_behaviour/blob/master/Stan_full_model.stan
2 Likes
Wow, that’s remarkably simple! (Specification-wise; I take your point that the geometry may reflect a difficult space to sample)
Yes, brms is excellent! Although I’m not actually a frequent user, tending towards raw Stan too.
I would probably code up those types of models in raw Stan anyway, because they will likely need careful tuning when it comes to fitting on real data sets with other complexities.
1 Like
Hm, on actually inspecting the generated Stan code, I don’t think this gets at the kind of nested hierarchy with heterogeneity I was seeking to implement. The specification:
brms::bf(
y ~ 1 + (1 | school) + (1 | district)
, sigma ~ 1 + (1 | school) + (1 | district)
)
allows for school and district random effects on the magnitude of variability among students, but still expresses that there is a single SD conveying how schools vary from one another in their mean within each district. The idea I was trying to implement would have a separate SD per district to capture variability across districts in how schools’ means vary, and likewise a separate SD per district to capture variability across districts in how schools’ SDs vary.
To be concrete, here’s some pseudocode of the generative process I’m thinking of (easiest to parse if you start at the innermost for loop):
for(this_district in districts){
this_district_mean_school_mean = rnorm(1,mean_district_mean_school_mean,sd_district_mean_school_mean)
this_district_sd_school_mean = exp(rnorm(1,mean_district_logsd_school_mean,sd_district_logsd_school_mean))
this_district_mean_school_logsd = rnorm(1,mean_district_mean_school_logsd,sd_district_mean_school_logsd)
this_district_sd_school_logsd = rnorm(1,mean_district_sd_school_logsd,sd_district_sd_school_logsd)
for(this_school in schools_in_this_district){
this_school_mean = rnorm(1,this_district_mean_school_mean,this_district_sd_school_mean)
this_school_sd = exp(rnorm(1,this_district_mean_school_logsd,this_district_sd_school_logsd))
for(this_student in students_in_this_school){
score = rnorm(1,this_school_mean,this_school_sd)
}
}
}
Do you mean something like this:
data {
int N_districts;
int N_schools;
int N_students;
int school_in_district_idx[N_schools];
int student_in_school_idx[N_students];
vector[N_students] score;
}
parameters {
real top_mu;
real<lower=0> top_sigma;
vector[N_districts] district_mu;
vector<lower=0>[N_districts] district_sigma;
vector[N_schools] school_mu;
vector<lower=0>[N_schools] school_sigma;
vector[N_students] student_mu;
real<lower=0> sigma;
}
model {
top_mu ~ std_normal();
top_sigma ~ std_normal();
district_mu ~ normal(top_mu, top_sigma);
for (s in 1:N_schools) {
int idx = school_in_district_idx[s];
school_mu[s] ~ normal(district_mu[idx],district_sigma[idx]);
}
for (k in 1:N_students) {
int idx = student_in_school_idx[k];
student_mu[k] ~ normal(school_mu[idx],school_sigma[idx]);
}
score ~ normal(student_mu, sigma);
}
Yes, though I think you’ve left off the specification of partial pooling for district_sigma.
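For concreteness, one way to add that missing pooling in the model block would be a lognormal hyperprior over the per-district SDs (a sketch only; the hyperparameter names here are hypothetical and would need their own declarations and priors):

```stan
// hypothetical sketch: partially pool the per-district SDs of school means
// (mu_log_dsigma and tau_log_dsigma must be declared in parameters
//  and given their own priors)
district_sigma ~ lognormal(mu_log_dsigma, tau_log_dsigma);
```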
I’m coding this in raw Stan myself right now (with some more complexity in design plus my reduced-computation trick), so I’ll post back when I’m done. I am nonetheless still curious if brms can do it out-of-the-box.
1 Like
FYI, I’ve been playing with this a bunch and have discovered that it really matters whether you employ the centered or non-centered parameterization, and independently so for each level of the hierarchy. For example, I’m finding that with the simulation space I’m exploring right now, the schools have to be non-centered and the students have to be centered.
I’ll be posting a mini case-study on this shortly.
@betanalpha made a nice figure giving some guidance when centered/non-centered parameterization work well.
1 Like
Yeah, it’s on my to-do to sit down and put some serious thought into how geometry of the 3-level case I’m exploring relates to that. I think the idea will be that you can have one layer of the hierarchy for which the data strongly inform and one layer where it doesn’t?
I have not thought deeply about this; my only intuition is that the fewer groups one describes with a higher-level prior, the less informed the estimation of that prior is.
On the other hand, if this higher-level prior itself has an (even) higher-level prior, its estimation could also be informed by other groups on the same level.
I don’t have a strong intuition and would probably resort to simulations to figure this out and sharpen/clarify my intuition…
https://www.jse.ac.cn/EN/Y1998/V36/I1/8
J Syst Evol ›› 1998, Vol. 36 ›› Issue (1): 8-18.
• Research Articles •
### The Structural Features of Leaf Epidermis in Oryza and Their Systematic Significance
ZHANG Zhi-Yun, LU Bao-Rong, WEN Jie
• Published:1998-01-10
Abstract: The rice genus (Oryza L.) belongs to the grass family (Poaceae) and contains 24 annual or perennial species, including two cultivated rice species, i.e., the Asian rice (O. sativa L.) and African rice (O. glaberrima Steud.), and 22 wild species distributed throughout the tropics of the world. Species in this genus have been extensively studied by scientists with different approaches, including morphological characterization and cytological and molecular investigations. The leaf epidermis is an important morphological character which has been studied for taxonomic identification and studies on systematic relationships of species, particularly in grasses. In this study, morphological features of the leaf epidermis of 23 rice species were observed through light microscopy. The results showed that some characters of the rice leaf epidermis had significant diversity between species and these characters were valuable for identifying Oryza species, and for assessing systematic relationships in the genus. For example, O. schlechteri, O. ridleyi, O. longiglumis, O. granulata, and O. meyeriana had elliptic stomatal complexes, whereas the other species had rhombic stomatal complexes. In most cases, papillae on the surface of the epidermis were variable in size and distribution between species. The size of papillae varied from small (1.5~4.4 µm in diameter) and medium-sized (9~18 µm) to large (21~30 µm), and the pattern of papillary size and distribution was very useful for identification of rice species. In addition, the number and location of the small papillae in stomatal complexes were particularly different between species. Based on the following combinations of leaf-epidermal characters, i.e., the size and distribution of papillae on the abaxial surface of the epidermis, the number and location of the small papillae in stomatal complexes, and the shape of stomatal complexes, the 23 studied Oryza species could be divided into three major groups.
The first group comprises O. longiglumis, O. ridleyi, O. meyeriana, and O. granulata. In these species, neither large nor medium-sized papillae, and in some cases only extremely rare small papillae, were found on the surfaces of the epidermis, and there were no small papillae found in stomatal complexes. All species in the first group had elliptic stomatal complexes. The second group consists of O. brachyantha, diploid and tetraploid O. officinalis, O. minuta, O. eichingeri, O. punctata, O. latifolia, O. alta, O. grandiglumis, O. rhizomatis, and O. australiensis. In these species usually no large papillae were observed, but medium-sized and densely populated small papillae were found to cover the surface of the epidermis, and at least four small papillae were found in the stomatal complexes (in guard cells) of most species. The third group contains O. sativa, O. nivara, O. rufipogon, O. longistaminata, O. glumaepatula, O. meridionalis, O. barthii, O. glaberrima and O. schlechteri. The abaxial leaf epidermis of these species was usually covered with large, medium-sized, and small papillae. In addition, more than 4 (usually 6~8) small papillae were found in the guard cells and/or subsidiary cells of the stomatal complexes. Most species in the second and third groups had rhombic stomatal complexes. These results agree mostly with previous reports on the biosystematic studies of rice species applying other methodologies.
https://www.beatthegmat.com/47-verbal-40-quant-retake-gmat-advice-t296340.html
## 47 Verbal/40 Quant- Retake? Gmat Advice
bailey.rise Newbie | Next Rank: 10 Posts
Joined
05 Sep 2017
Posted:
2 messages
#### 47 Verbal/40 Quant- Retake? Gmat Advice
Tue Sep 05, 2017 6:51 am
Hi there,
I just took the GMAT, scoring a 47 in Verbal and a 40 in Quant- total score of 700. I'm looking to apply to the top schools- would you recommend retaking the test? I've already taken the test one time previously, and studied pretty intensely to get to this point.
Thank you for the advice! Much appreciated.
### GMAT/MBA Expert
DavidG@VeritasPrep Legendary Member
Joined
14 Jan 2015
Posted:
2667 messages
Followed by:
120 members
1153
GMAT Score:
770
Tue Sep 05, 2017 7:55 am
bailey.rise wrote:
Hi there,
I just took the GMAT, scoring a 47 in Verbal and a 40 in Quant- total score of 700. I'm looking to apply to the top schools- would you recommend retaking the test? I've already taken the test one time previously, and studied pretty intensely to get to this point.
Thank you for the advice! Much appreciated.
Three relevant questions here:
1) How strong is the rest of your application?
2) How does that 700 compare to old practice tests?
3) Does it feel as though you're missing some low-hanging fruit when you take the test?
If the rest of your application is strong, you were averaging 680 on old practices, and it really feels as though you've maximized your score, that's one thing. But if you have some weaknesses in your application, you've hit as high as, say, 740 on old practice tests, and it feels as though you still miss questions because of careless mistakes or timing issues, well, that's quite another.
_________________
Veritas Prep | GMAT Instructor
bailey.rise Newbie | Next Rank: 10 Posts
Joined
05 Sep 2017
Posted:
2 messages
Tue Sep 05, 2017 8:13 am
Thank you so much for the response! In comparison to old practice tests, this was the highest score I had gotten. On my previous official GMAT exam, one month ago, I scored a 660 (42 quant, 39 verbal ). At home, on my two GMAT prep practice tests this month, I scored a 640 (38 quant, 40 verbal) and 690 (39 quant, 45 verbal). Def no low hanging fruit- taking it again would mean investing a lot of sweat, blood and tears into getting the quant score up, as the highest I have ever been able to get it was a 42.
The rest of the application is pretty strong. But even with a strong application, is it even worth applying to school like Harvard and MIT with such a low quant score?
https://www.ctan.org/ctan-ann/id/mailman.3967.1513970422.5216.ctan-ann@ctan.org
# CTAN Update: tikz-timing
Date: December 22, 2017 8:20:06 PM CET
Martin Scharrer submitted an update to the tikz-timing package.

Version: 0.7f 2017-12-20
License: lppl
Summary description: Easy generation of timing diagrams as TikZ pictures

Announcement text:
Fixed TDS.ZIP file to include libraries again. Updated documentation about some styles.
The package’s Catalogue entry can be viewed at https://ctan.org/pkg/tikz-timing (Caution: At the time when this message is posted this page is not quite up to date due to a bug in the software behind the website. We hope that this problem will soon be solved.) The package’s files themselves can be inspected at http://mirror.ctan.org/graphics/pgf/contrib/tikz-timing/
Thanks for the upload. For the CTAN Team Petra Rübe-Pugliese
We are supported by the TeX users groups. Please join a users group; see https://www.tug.org/usergroups.html .
## tikz-timing – Easy generation of timing diagrams as TikZ pictures
This package provides macros and an environment to generate timing diagrams (digital waveforms) without much effort. The TikZ package is used to produce the graphics. The diagrams may be inserted into text (paragraphs, \hbox, etc.) and into tikzpictures. A tabular-like environment is provided to produce larger timing diagrams.
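A minimal usage sketch (the timing characters below are standard tikz-timing codes: H high, L low, Z high impedance, D data, X don't care, C clock; check the package manual for the full set):

```latex
\documentclass{article}
\usepackage{tikz-timing}
\begin{document}
% inline timing diagram: 2 ticks high, 2 low, 2 high-impedance
Inline: \texttiming{2H 2L 2Z}

% tabular-like environment for multi-signal diagrams
\begin{tikztimingtable}
  clk  & 10C            \\ % clock transitions
  data & 2D{A} 2D{B} 2X \\ % data values A and B, then don't-care
\end{tikztimingtable}
\end{document}
```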
Package: tikz-timing
Version: 0.7f 2017-12-20
Copyright: 2009–2017 Martin Scharrer
Maintainer: Martin Scharrer
https://www.edaboard.com/threads/is-there-such-a-function.56603/
# Is there such a function?
Status
Not open for further replies.
#### alitavakol
##### Member level 2
Is there a function satisfying these two conditions?
1. The function is differentiable everywhere in (a, b).
2. The derivative of the function is discontinuous somewhere in (a, b).
Here (a, b) is an open interval: a < x < b.
Prove it or give a counterexample.
#### kalyanram
Hi,
It's not possible to have such a function.
~Kalyan.
#### jayc
##### Member level 3
Sure, there is. Just take the triangle function on the interval [-1, 1]:
$\Lambda(x) = \left\{ \begin{array}{l l} 1+x & \quad \mbox{if }x \leq 0\\ 1-x & \quad \mbox{if }x >0\\ \end{array} \right.$
The derivative to this function is not continuous:
$\frac{d\Lambda}{dx}(x) = \left\{ \begin{array}{l l} 1 & \quad \mbox{if }x \leq 0\\ -1 & \quad \mbox{if }x >0\\ \end{array} \right.$
The derivative exists everywhere in the interval but is discontinuous at x=0.
Last edited by a moderator:
#### zajbanlik
##### Member level 1
jayc said:
Sure, there is. Just take the triangle function on the interval [-1, 1]:
$\Lambda(x) = \left\{ \begin{array}{l l} x & \quad \mbox{if }x \leq 0\\ 1-x & \quad \mbox{if }x >0\\ \end{array} \right.$
The derivative to this function is not continuous:
$\frac{d\Lambda}{dx}(x) = \left\{ \begin{array}{l l} 1 & \quad \mbox{if }x \leq 0\\ -1 & \quad \mbox{if }x >0\\ \end{array} \right.$
The derivative exists everywhere in the interval but is discontinuous at x=0.
Hi
Are you sure about that? Your function is discontinuous there; there shouldn't be any value for the derivative at the point 0.
Regards
Last edited by a moderator:
#### jayc
##### Member level 3
The function is continuous and the derivative exists everywhere in the interval. Since we defined the function to be $x$ on the interval $x\leq 0$, the corresponding derivative also exists on the same interval. Therefore, the derivative at x=0 is 1.
jayc
Last edited by a moderator:
##### Newbie level 6
jayc,
What's your interval? Have you drawn that function? It's discontinuous in any open interval that includes 0.
#### steve10
##### Full Member level 3
According to Calculus, a function has derivative at a point iff it is differentiable. Therefore, the question becomes, find a function that has derivative everywhere in (a,b) but the derivative is not continuous at some point in (a,b). Here are two examples (both for interval (-1,1)):
1. f(x)=x^2 * sin(1/x) when x>0 or <0, f(0)=0.
This is a continuous function and it can be easily proved that the function has derivative everywhere. Actually, when x>0 or < 0,
f'(x)=2x * sin(1/x) -cos(1/x)
while when x=0,
f'(0)=0.
Notice that the derivative at x=0 should be obtained separately from the definition of the derivative, which is
lim(Δx->0) (f(Δx)-f(0))/Δx=0
Obviously, f'(x) exists everywhere in (-1,1) but is not continuous at 0 (it becomes oscillatory as x->0).
2. f(x)=x^2 * sin(1/(x^2)) when x>0 or <0, f(0)=0.
This function is similar to the previous one, and you can prove (similarly) that the derivative exists everywhere in (-1,1). However, not only is the derivative discontinuous, but it is even also unbounded around 0.
### alitavakol
Points: 2
#### jayc
##### Member level 3
jayc,
What's your interval? Have you drawn that function? It's discontinuous in any open interval that includes 0.
Wow, I'm sorry. Now I realize my mistake. I meant for it to be a triangle function and I missed the 1+x for x<=0. When I drew it in my head, it looked like a triangle =) The function I meant to write was this
$\Lambda(x) = \left\{ \begin{array}{l l} 1+x & \quad \mbox{if }-1 < x \leq 0\\ 1-x & \quad \mbox{if }0<x<1\\ 0 & \quad \mbox{otherwise} \end{array} \right.$
The derivative to this function is not continuous:
$\frac{d\Lambda}{dx}(x) = \left\{ \begin{array}{l l} 1 & \quad \mbox{if }x \leq 0\\ -1 & \quad \mbox{if }x >0\\ \end{array} \right.$
The derivative exists everywhere in the interval but is discontinuous at x=0.
Last edited by a moderator:
##### Newbie level 6
jayc said:
The derivative to this function is not continuous:
$\frac{d\Lambda}{dx}(x) = \left\{ \begin{array}{l l} 1 & \quad \mbox{if }x \leq 0\\ -1 & \quad \mbox{if }x >0\\ \end{array} \right.$
The derivative exists everywhere in the interval but is discontinuous at x=0.
jayc,
The derivative does not exist at x=0. That the derivative exists means that both the right derivative and the left derivative exist and are equal. In your case, both the right and the left derivatives exist at x=0 but they are not equal! So, the derivative does not exist.
Last edited by a moderator:
#### eecs4ever
##### Full Member level 3
derivative does not exist.... :|
This is not possible:
Suppose such a function exists, call it f(x)
and suppose f'(x) is discontinuous at A;
=> lim f'(x) as x --> A- from the left side
is NOT EQUAL TO lim f'(x) as x --> A+ from the right side.
The above statement is true since f'(x) is discontinuous at A.
But wait! The statement also implies that f'(x) is not differentiable at A.
Why?
Because the definition of differentiability of f(x) implies that
lim f'(x) is equal when you approach from both sides for all x in the interval.
See the definition given above in previous posts.
There you go... it's not possible.
#### steve10
##### Full Member level 3
Eecs4ever,
WHen you start the proof by contradiction, you assume that "f'(x) is discontinous at A", which is fine. However, you are running some logic issues later. Here is the faulty part of your proof:
eecs4ever said:
...
but wait! , the statement also implies that f'(x) is not differentiable at A.
why?
because the definition of differentiability of f(x) implies that
lim f'(x) is equal when you approach from both sides for all x in the interval.
...
The differentiability of f(x) at A does NOT imply "lim f'(x) is equal when you approach from both sides for all x in the interval ...", which is implied by the continuity of the derivative f'(x) at A. Instead, the differentiability of f(x) at A IMPLIES
lim(Δx->0+) (f(A+Δx)-f(A))/Δx=lim(Δx->0-) (f(A+Δx)-f(A))/Δx
which is different from "lim f'(x) is equal when you approach from both sides". The difference is that one limit process is taken for f(x) while the other is for f'(x).
Besides, your following argument is the same as above which is incorrect:
eecs4ever said:
...
=> lim f'(x) as x --> A- from left side
is NOT EQUAL TO lim f'(x) as x---> A+ from right side
...
#### eecs4ever
##### Full Member level 3
To the best of my knowledge, i seem to remember what mainroad said is right.
"That the derivative exists means that both the right derivative and the left derivative exist and are equal."
the limit from the right side is:
"lim x->A+ f'(x) " = = lim(Δx->0+) (f(A+Δx)-f(A))/Δx
#### steve10
##### Full Member level 3
There is nothing wrong with what mainroad said. I guess here we are running into a concept issue. Here are the important concepts:
1. the derivative exists at A:
left derivative = lim(Δx->0-) (f(Δx+A)-f(A))/Δx = lim(Δx->0+) (f(Δx+A)-f(A))/Δx = right derivative
(notice that the formula says nothing abou the continuity)
2. the derivative is continuous at A:
left limit = lim (Δx->0-) f'(Δx+A) = lim (Δx->0+) f'(Δx+A) = right limit
(notice that, when you use f(x) or f(Δx+A), you have implicitly assumed that the derivative exists).
We all know that #2 is stronger than #1; therefore, #2 => #1. alitavakol's question is whether we can go from #1 to #2, i.e. whether #1 => #2. My two examples say that we cannot.
As I mentioned in the previous post, when you talked about the right and the left derivatives, you actually used #2, instead of #1. Check again what you said, "...lim f'(x) is equal when you approach from both sides for all x in the interval", which is exactly what #2 tells. #2 is not about right or left derivatives, but it is about continuity of derivative. You should use #1.
#### alitavakol
##### Member level 2
Only Steve10 gave the correct answer. Thanks to him.
#### zox11
##### Newbie level 5
There is no such function!
Please, this is math, not word games.
I have found this somewere
"Thus there is a link between continuity and differentiability: If a function is differentiable at a point, it is also continuous there. Consequently, there is no need to investigate for differentiability at a point, if the function fails to be continuous at that point."
Also
https://en.wikipedia.org/wiki/Derivative
I may be wrong. I did not read the question carefully. Sorry.
##### Newbie level 6
I think you are right by the following:
zox11 said:
I may be wrong. I did not read the question carefully. Sorry.
But now you may want to read it more carefully and post a right one.
#### zox11
##### Newbie level 5
I am not sure anymore. I do not want to write something wrong ,again.
But, for me, I know the answer, or think I know.
Thanks.
https://rebound.readthedocs.io/en/latest/ipython/VariationalEquationsWithChainRule.html
# Using Variational Equations With the Chain Rule (iPython)¶
Variational equations can be used to calculate derivatives in an $$N$$-body simulation. More specifically, given a set of initial conditions $$\alpha_i$$ and a set of variables at the end of the simulation $$v_k$$, we can calculate all first order derivatives
$\frac{\partial v_k}{\partial \alpha_i}$
as well as all second order derivatives
$\frac{\partial^2 v_k}{\partial \alpha_i\partial \alpha_j}$
For this tutorial, we work with a two planet system.
We first choose the semi-major axis $$a$$ of the outer planet as an initial condition (this is our $$\alpha_i$$). At the end of the simulation we output the velocity of the star in the $$x$$ direction (this is our $$v_k$$).
To do that, let us first import REBOUND and numpy.
import rebound
import numpy as np
The following function takes $$a$$ as a parameter, integrates the two-planet system, and returns the star's velocity at the end of the simulation.
def calculate_vx(a):
sim = rebound.Simulation()
sim.add(m=1.) # star
sim.add(primary=sim.particles[0],m=1e-3, a=1) # inner planet
sim.add(primary=sim.particles[0],m=1e-3, a=a) # outer planet
sim.integrate(2.*np.pi*10.) # integrate for ~10 orbits
return sim.particles[0].vx # return star's velocity in the x direction
calculate_vx(a=1.5) # initial semi-major axis of the outer planet is 1.5
0.0004924175842478658
If we run the simulation again, with a different initial $$a$$, we get a different velocity:
calculate_vx(a=1.51) # initial semi-major axis of the outer planet is 1.51
0.000750246684761206
We could now run many different simulations to map out the parameter space. This is a very simple example of a typical use case: fitting a radial velocity datapoint.
However, we can be smarter than simple running an almost identical simulation over and over again by using variational equations. These will allow us to calculate the derivate of the stellar velocity at the end of the simulation. We can take derivative with respect to any of the initial conditions, i.e. a particles’s mass, semi-major axis, x-coordinate, etc. Here, we want to take the derivative with respect to the semi-major axis of the outer planet. The following function does exactly that:
def calculate_vx_derivative(a):
    sim = rebound.Simulation()
    sim.add(m=1.)                                   # star
    sim.add(primary=sim.particles[0],m=1e-3, a=1)   # inner planet
    sim.add(primary=sim.particles[0],m=1e-3, a=a)   # outer planet
    v1 = sim.add_variation()                        # add a set of variational particles
    v1.vary(2,"a")                                  # initialize the variational particles
    sim.integrate(2.*np.pi*10.)                     # integrate for ~10 orbits
    return sim.particles[0].vx, v1.particles[0].vx  # return star's velocity and its derivative
Note the two new functions. sim.add_variation() adds a set of variational particles to the simulation. All variational particles are by default initialized to zero. We use the vary() function to initialize them to a variation that we are interested in. Here, we initialize the variational particles corresponding to a change in the semi-major axis, $$a$$, of the particle with index 2 (the outer planet).
calculate_vx_derivative(a=1.5)
(0.0004924175842478302, 0.026958628196580445)
We can use the derivative to construct a Taylor series expansion of the velocity around $$a_0=1.5$$:
$v(a) \approx v(a_0) + (a-a_0) \frac{\partial v}{\partial a}$
a0=1.5
va0, dva0 = calculate_vx_derivative(a=a0)
def v(a):
    return va0 + (a-a0)*dva0
print(v(1.51))
0.000762003866214
Compare this value with the explicitly calculated one above. They are almost the same! But we can do even better by using second order variational equations to calculate second order derivatives.
def calculate_vx_derivative_2ndorder(a):
    sim = rebound.Simulation()
    sim.add(m=1.)                                   # star
    sim.add(primary=sim.particles[0],m=1e-3, a=1)   # inner planet
    sim.add(primary=sim.particles[0],m=1e-3, a=a)   # outer planet
    v1 = sim.add_variation()
    v1.vary(2,"a")
    # The following lines add and initialize second order variational particles
    v2 = sim.add_variation(order=2, first_order=v1)
    v2.vary(2,"a")
    sim.integrate(2.*np.pi*10.)                     # integrate for ~10 orbits
    # return star's velocity and its first and second derivatives
    return sim.particles[0].vx, v1.particles[0].vx, v2.particles[0].vx
Using a Taylor series expansion to second order gives a better estimate of v(1.51).
a0=1.5
va0, dva0, ddva0 = calculate_vx_derivative_2ndorder(a=a0)
def v(a):
    return va0 + (a-a0)*dva0 + 0.5*(a-a0)**2*ddva0
print(v(1.51))
0.000755071182773
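The improvement from first to second order can be illustrated on any smooth function, without running an N-body simulation. In the sketch below, math.sin stands in for the expensive simulation output (a made-up toy, not REBOUND-specific); the second-order expansion error shrinks by several orders of magnitude for the same step of 0.01:

```python
import math

def taylor1(f, df, a0, a):
    # first-order Taylor expansion of f around a0
    return f(a0) + (a - a0) * df(a0)

def taylor2(f, df, ddf, a0, a):
    # second-order Taylor expansion of f around a0
    return taylor1(f, df, a0, a) + 0.5 * (a - a0)**2 * ddf(a0)

a0, a = 1.5, 1.51
exact = math.sin(a)
err1 = abs(taylor1(math.sin, math.cos, a0, a) - exact)
err2 = abs(taylor2(math.sin, math.cos, lambda x: -math.sin(x), a0, a) - exact)
```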
Now that we know how to calculate first and second order derivatives of positions and velocities of particles, we can simply use the chain rule to calculate more complicated derivatives. For example, instead of the velocity $$v_x$$, you might be interested in the quantity $$w\equiv(v_x - c)^2$$ where $$c$$ is a constant. This is something that typically appears in a $$\chi^2$$ fit. The chain rule gives us:
$\frac{\partial w}{\partial a} = 2 \cdot (v_x-c)\cdot \frac{\partial v_x}{\partial a}$

The variational equations provide the $$\frac{\partial v_x}{\partial a}$$ part, the ordinary particles provide $$v_x$$.
def calculate_w_derivative(a):
    sim = rebound.Simulation()
    sim.add(m=1.)                                   # star
    sim.add(primary=sim.particles[0],m=1e-3, a=1)   # inner planet
    sim.add(primary=sim.particles[0],m=1e-3, a=a)   # outer planet
    v1 = sim.add_variation()                        # add a set of variational particles
    v1.vary(2,"a")                                  # initialize the variational particles
    sim.integrate(2.*np.pi*10.)                     # integrate for ~10 orbits
    c = 1.02                                        # some constant
    w = (sim.particles[0].vx-c)**2
    dwda = 2.*v1.particles[0].vx * (sim.particles[0].vx-c)
    return w, dwda                                  # return w and its derivative
calculate_w_derivative(1.5)
(1.039395710603212, -0.05496905171588172)
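The chain-rule formula itself is easy to sanity-check with a finite difference. In the sketch below, a toy function v(a) = a² stands in for the simulation output (a made-up stand-in, for illustration only); the analytic dw/da agrees with a central difference:

```python
# Toy stand-in for the simulation: v(a) = a**2, so dv/da = 2a.
c = 1.02

def v(a):
    return a**2

def dvda(a):
    return 2 * a

def w(a):
    return (v(a) - c)**2

def dwda(a):
    # the chain rule from the text: dw/da = 2 (v - c) dv/da
    return 2 * (v(a) - c) * dvda(a)

a0, eps = 1.5, 1e-6
fd = (w(a0 + eps) - w(a0 - eps)) / (2 * eps)   # central finite difference
```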
Similarly, you can also use the chain rule to vary initial conditions of particles in a way that is not supported by REBOUND by default. For example, suppose you want to work in some fancy coordinate system, using $$h\equiv e\sin(\omega)$$ and $$k\equiv e \cos(\omega)$$ variables instead of $$e$$ and $$\omega$$. You might want to do that because $$h$$ and $$k$$ variables are often better behaved near $$e\sim0$$. In that case the chain rule gives us:
$\frac{\partial p(e(h, k), \omega(h, k))}{\partial h} = \frac{\partial p}{\partial e}\frac{\partial e}{\partial h} + \frac{\partial p}{\partial \omega}\frac{\partial \omega}{\partial h}$

where $$p$$ is any of the particle's initial coordinates. In our case the derivatives of $$e$$ and $$\omega$$ with respect to $$h$$ are:
$\frac{\partial \omega}{\partial h} = -\frac{k}{e^2}\quad\text{and}\quad \frac{\partial e}{\partial h} = \frac{h}{e}$
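These two partial derivatives can be checked numerically with central finite differences. The sketch below (plain Python, independent of REBOUND) uses the same convention as the code that follows, e = sqrt(h² + k²) and omega = atan2(k, h):

```python
import math

def e_of(h, k):
    # eccentricity from the (h, k) variables
    return math.sqrt(h*h + k*k)

def omega_of(h, k):
    # argument of pericenter, matching omega = np.arctan2(k, h) below
    return math.atan2(k, h)

h, k = 0.1, 0.2
e = e_of(h, k)
d = 1e-7
de_dh = (e_of(h + d, k) - e_of(h - d, k)) / (2 * d)          # should be h/e
domega_dh = (omega_of(h + d, k) - omega_of(h - d, k)) / (2 * d)  # should be -k/e**2
```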
With REBOUND, you can easily implement this. The following function calculates the derivative of the star's velocity with respect to the outer planet's $$h$$ variable.
def calculate_vx_derivative_h():
    h, k = 0.1, 0.2
    e = float(np.sqrt(h**2+k**2))
    omega = np.arctan2(k,h)
    sim = rebound.Simulation()
    sim.add(m=1.)                                   # star
    sim.add(primary=sim.particles[0],m=1e-3, a=1)   # inner planet
    sim.add(primary=sim.particles[0],m=1e-3, a=1.5, e=e, omega=omega) # outer planet
    v1 = sim.add_variation()
    dpde = rebound.Particle(simulation=sim, particle=sim.particles[2], variation="e")
    dpdomega = rebound.Particle(simulation=sim, particle=sim.particles[2], m=1e-3, a=1.5, e=e, omega=omega, variation="omega")
    v1.particles[2] = h/e * dpde - k/(e*e) * dpdomega
    sim.integrate(2.*np.pi*10.)                     # integrate for ~10 orbits
    # return star's velocity and its first derivatives
    return sim.particles[0].vx, v1.particles[0].vx
calculate_vx_derivative_h()
(-0.0006022810748296454, 0.002107215810994136)
Note that in the above function, there are expressions such as h/e * dpde. h/e is just a number, but dpde is actually a particle structure. REBOUND multiplies each cartesian component of that particle with the number h/e. Similarly, the particles are subtracted componentwise when using the - operator.
We can use the v1.particles[i] = ... syntax to directly set a variational particle’s initial conditions.
http://www.doe.mass.edu/mcas/student/2017/question.aspx?GradeID=10&SubjectCode=mth&QuestionID=59044
# Massachusetts Comprehensive Assessment System
Question 20: Open-Response

Reporting Category: Number and Quantity

Standard: 10.N.2 - Simplify numerical expressions, including those involving positive integer exponents or the absolute value, e.g., $3(2^4 - 1) = 45$, $4|3 - 5| + 6 = 14$; apply such simplifications in the solution of problems. (AI.N.2)

Standard: CCSS.Math.Content.7.EE.B.3 - Solve multi-step real-life and mathematical problems posed with positive and negative rational numbers in any form (whole numbers, fractions, and decimals), using tools strategically. Apply properties of operations to calculate with numbers in any form; convert between forms as appropriate; and assess the reasonableness of answers using mental computation and estimation strategies. For example: If a woman making $25 an hour gets a 10% raise, she will make an additional 1/10 of her salary an hour, or $2.50, for a new salary of $27.50. If you want to place a towel bar 9 3/4 inches long in the center of a door that is 27 1/2 inches wide, you will need to place the bar about 9 inches from each edge; this estimate can be used as a check on the exact computation.

Stuart wrote the expression shown below.

$16 + 8^2 \div 4 - 4$

What is the value of Stuart's expression? Show or explain how you got your answer.

In your Student Answer Booklet, insert one set of parentheses into Stuart's expression so that the value of the expression is undefined. Show or explain how you got your answer.

Talia wrote the expression shown below.

$(16 + 8^2) \div 4 \cdot 2 - 4$

Talia found the value of her expression using the following steps:

Step 1: $(16 + 64) \div 4 \cdot 2 - 4$
Step 2: $80 \div 4 \cdot 2 - 4$
Step 3: $80 \div 8 - 4$
Step 4: $10 - 4$
Step 5: $6$

Is the value that Talia found for her expression correct? Explain your reasoning.

Talia removed the set of parentheses from her expression to create the new expression shown below.

$16 + 8^2 \div 4 \cdot 2 - 4$

What is the value of Talia's new expression? Show or explain how you got your answer.
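The arithmetic in this item can be checked by evaluating the expressions under standard operator precedence (exponents first, then division and multiplication left to right, then addition and subtraction). A quick sketch, not part of the exam materials:

```python
# Stuart's expression: 16 + 8^2 ÷ 4 - 4
stuart = 16 + 8**2 / 4 - 4            # 16 + 16 - 4

# Inserting parentheses as 16 + 8^2 ÷ (4 - 4) forces division by zero,
# so the expression becomes undefined.
try:
    16 + 8**2 / (4 - 4)
    undefined = False
except ZeroDivisionError:
    undefined = True

# Talia's expression: ÷ and · apply left to right, so her Step 3
# (80 ÷ 8) is incorrect; the correct value is 20 · 2 - 4.
talia = (16 + 8**2) / 4 * 2 - 4

# Without the parentheses: exponent first, then ÷ and · left to right.
talia_new = 16 + 8**2 / 4 * 2 - 4     # 16 + 32 - 4
```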
### Scoring Guide and Sample Student Work

Select a score point in the table below to view the sample student response.
| Score | Description |
| --- | --- |
| 4 | The student response demonstrates an exemplary understanding of the Number and Quantity concepts involved in solving multi-step mathematical problems and applying properties of operations to calculate with numbers in any form. The student assesses the evaluation of a given expression. |
| 3 | The student response demonstrates a good understanding of the Number and Quantity concepts involved in solving multi-step mathematical problems and applying properties of operations to calculate with numbers in any form. Although there is significant evidence that the student was able to recognize and apply the concepts involved, some aspect of the response is flawed. As a result the response merits 3 points. |
| 2 | The student response demonstrates a fair understanding of the Number and Quantity concepts involved in solving multi-step mathematical problems and applying properties of operations to calculate with numbers in any form. While some aspects of the task are completed correctly, others are not. The mixed evidence provided by the student merits 2 points. |
| 1 | The student response demonstrates a minimal understanding of the Number and Quantity concepts involved in solving multi-step mathematical problems and applying properties of operations to calculate with numbers in any form. |
| 0 | The student response contains insufficient evidence of an understanding of the Number and Quantity concepts involved in solving multi-step mathematical problems and applying properties of operations to calculate with numbers in any form to merit any points. |
Note: There are 2 sample student responses for Score Point 4.
https://codeahoy.com/learn/programmingpatterns/ch34/
# Checklists
In many aspects of life, both personal and professional, it is necessary to determine if a set of criteria have been satisfied/accomplished (e.g., goals have been reached, courses have been taken). One common way to do this is to use a checklist.
## Motivation
Suppose you’re getting ready to go on vacation; you probably have a number of things that you want to remember to pack (e.g., shirts, socks, pants, and skirts). So, you decide to write a program to help you ensure that you don’t forget anything. However, unlike the situations considered in Chapter 12 on bit flags, the program will be provided with the checklist dynamically at run-time (i.e., it isn’t known when the program is written and compiled).
## Review
To increase the flexibility of the program, you decide to represent the criteria that need to be satisfied/accomplished as a String[] named checklist, and you populate that array before you start working on the tasks. Then, as you accomplish a task, you enter it into another String[] named accomplished. Each time you accomplish a task you want to be able to determine whether or not you are done (e.g., whether you have completed all of the tasks in the checklist).
Of course, it’s easy to compare a single element of checklist with a single element of accomplished using the equals() method in the String class. However, in and of itself, that doesn’t solve the problem of determining whether or not you are done. Clearly, the equals() method must be invoked iteratively.
You can determine whether any given element of checklist has been accomplished by comparing it with each element in accomplished. For example, you can determine whether element index of checklist has been accomplished as follows:
boolean done = false;
for (int a = 0; a < accomplished.length; a++) {
    if (accomplished[a].equals(checklist[index])) {
        done = true;
        break;
    }
}
At the end of this loop, done will contain true if and only if checklist[index] has been accomplished.
This loop can be used to determine whether a single criterion has been satisfied/accomplished. To determine if all of the criteria have been satisfied/accomplished this loop needs to be nested inside of another loop. Unfortunately, there are many ways to do this incorrectly.
For example, the following implementation returns false at the first discrepancy between the two arrays, which may just be a result of a difference in how the two are ordered:
for all elements in accomplished {
    for all elements in checklist {
        if the accomplished element does not equal the checklist element {
            return false
        }
    }
}
return true
As another example, the following implementation returns true as soon as it determines that one item on the checklist has been accomplished:
for all elements in accomplished {
    assign false to the accumulator named checked
    for all elements in checklist {
        if the accomplished element equals the checklist element {
            assign true to the accumulator named checked
            break
        }
    }
    if checked is true then return true
}
return false
In short, there are many incorrect ways to think about the problem. To get the right answer, you must think carefully about the way the loops are nested, the Boolean expression in the if statement, the way in which break statements are used, and where return statements are located.
## The Pattern
For this problem, there are two variants of the pattern. In the first variant, you only want the method to return true when all of the items on the checklist have been accomplished. You can solve this variant with a single boolean accumulator as follows:
private static boolean checkFor(String[] checklist, String[] accomplished) {
    boolean checked;
    for (int c = 0; c < checklist.length; c++) {
        checked = false;
        for (int a = 0; a < accomplished.length; a++) {
            if (checklist[c].equals(accomplished[a])) {
                checked = true;
                break;
            }
        }
        if (!checked) return false; // An item was not accomplished
    }
    return true; // All items were accomplished
}
Note that this algorithm breaks out of the inner loop as soon as it determines that the checklist item of interest has been accomplished. It returns false as soon as it determines that any checklist item was not satisfied/accomplished. Hence, if both loops terminate normally then all of the items on the checklist must have been accomplished and the method returns true.
In the second variant of the pattern, you want the method to return true when at least needed elements of the checklist have been accomplished. For this variant, you can use an int accumulator named count that keeps track of the number of items on the checklist that have been accomplished, as follows:
private static boolean checkFor(String[] checklist, String[] accomplished,
                                int needed) {
    int count = 0;
    for (int c = 0; c < checklist.length; c++) {
        for (int a = 0; a < accomplished.length; a++) {
            if (checklist[c].equals(accomplished[a])) {
                ++count;
                if (count >= needed) return true;
                else break;
            }
        }
    }
    return false; // Not enough items were accomplished
}
Again, this algorithm can break out of the inner loop when it determines that the checklist item of interest has been accomplished. In addition, it can return early when count reaches needed. However, in this case, an early return means the checklist has been satisfied/accomplished. Hence, if both loops terminate normally then the method returns false.
Note that both implementations could be improved by checking to ensure that accomplished.length is at least as large as necessary to satisfy/accomplish the checklist (i.e., at least as large as checklist.length in the first variant and at least as large as needed in the second variant). This improvement was omitted for the sake of simplicity.
## Examples
It is useful at this point to consider some examples involving both variants above. In all of these examples, checklist contains the elements "Shirts", "Socks", "Pants", and "Skirts".
### The Inflexible Variant
First, suppose that accomplished contains the elements "Shirts", "Socks", "Pants", "Dresses", and "Shoes". In outer iteration 0, the method checks to see if "Shirts" has been accomplished. In inner iteration 0, the method determines that it has been and breaks out of the inner loop. In outer iteration 1, the method checks to see if "Socks" has been accomplished. In inner iteration 0, the method determines that it hasn't been, but in inner iteration 1 it determines that it has been and breaks out of the inner loop. The iterations then continue in the same fashion for "Pants" and "Skirts".
Since checklist[3] is not an element of accomplished, the local variable checked is never assigned the value true, and the method returns false.
Now, suppose that accomplished contains the elements "Socks", "Shirts", "Skirts", and "Pants". In outer iteration 0, the method checks to see if "Shirts" has been accomplished. In inner iteration 0, the method determines that it hasn't been, but in inner iteration 1 it determines that it has and breaks out of the inner loop. In outer iteration 1, the method checks to see if "Socks" has been accomplished. In inner iteration 0, the method sees that it has been and breaks out of the inner loop. The remaining iterations continue in the same fashion.
In this case, checked is assigned the value true in every outer iteration, and the method returns true.
### The Flexible Variant
Now consider the first example above but with the second variant of the method, when 2 is passed into the formal parameter named needed (because, apparently, this person is fully-dressed when wearing any two items in the checklist). In outer iteration 0, the method checks to see if "Shirts" has been accomplished. In inner iteration 0, the method determines that it has been, increases count to 1, and breaks out of the inner loop. In outer iteration 1, the method checks to see if "Socks" has been accomplished. In inner iteration 0, the method determines that it hasn’t been, but in inner iteration 1 it determines that it has been, increases count to 2, determines that count is greater than or equal to needed, and returns true.
Now, consider an example in which accomplished contains the elements "Dresses" and "Shirts". These iterations will proceed as follows:
In inner iteration 1 of outer iteration 0 the method determines that "Shirts" have been packed, and increases count to 1. However, none of the inner iterations for outer iteration 1 correspond to "Socks", none of the inner iterations for outer iteration 2 correspond to "Pants", and none of the inner iterations for outer iteration 3 correspond to "Skirts". So, count is never increased to 2, and the method returns false.
Though some thought was given to the efficiency of the algorithms above, many issues were ignored, and none were considered formally. If you take a course on data structures and algorithms, you will consider these kinds of issues in detail. For example, it is interesting to ask whether it is better to sort checklist and/or accomplished, either partially or completely. It is also interesting to ask whether it is possible to create a more efficient algorithm when the checklist consists of sequential integers.
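As a point of comparison outside the chapter's Java setting, the first variant is exactly a subset test and the second a bounded intersection count, so hash-based sets make both direct. A Python sketch (used here for brevity; the same idea carries over to Java's HashSet):

```python
def check_for(checklist, accomplished):
    # Variant 1: every checklist item must appear in accomplished.
    return set(checklist) <= set(accomplished)

def check_for_needed(checklist, accomplished, needed):
    # Variant 2: at least `needed` checklist items must appear.
    return len(set(checklist) & set(accomplished)) >= needed

checklist = ["Shirts", "Socks", "Pants", "Skirts"]
packed = ["Shirts", "Socks", "Pants", "Dresses", "Shoes"]
```

With the examples from this section, check_for(checklist, packed) is False (no "Skirts"), while check_for_needed(checklist, packed, 2) is True.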
https://stats.stackexchange.com/questions/384109/likelihood-function-when-only-max-1-le-i-le-nx-i-is-observed-and-n-is-par
# Likelihood function when only $\max_{1\le i\le N}X_i$ is observed and $N$ is parameter
Let $$X_1,X_2,\ldots,X_N$$ be i.i.d random variables having $$\text{Exp}(1)$$ distribution where $$N$$ is unknown. Suppose only $$T=\max\{X_1,X_2,\ldots,X_N\}$$ is observed.
I have to derive a most powerful test for testing $$H_0:N=5$$ versus $$H_1:N=10$$.
So $$N$$ is my parameter of interest. The joint distribution of $$X_1,\ldots,X_N$$ is of the form
$$f_N(x_1,\ldots,x_N)=\exp\left({-\sum_{i=1}^N x_i}\right)\mathbf1_{{x_1,\ldots,x_N>0}}$$
But this cannot be my likelihood function since $$x_1,\ldots,x_N$$ is not observed.
I am not sure how to express the above joint density as a function of $$t$$, the observed value of $$T$$. If I can write the joint density as some $$f_N(t)$$, then that would be my likelihood function given $$t$$.
I understand that the test is of the form $$\varphi(t)=\mathbf1_{\lambda(t)>k}$$
, where $$\lambda$$ is the likelihood ratio $$\lambda(t)=\frac{f_{H_1}(t)}{f_{H_0}(t)}$$
Any hint would be much appreciated.
As an aside to the actual question, I am curious if it is possible to find MLE of $$N$$.
• Your joint distribution does not depend on the individual $x_i:$ it depends only on their sum, which is observed.
– whuber
Dec 21 '18 at 16:09
• @whuber Yes it depends only on the sum. So $0<x_1,\ldots,x_N\le t\implies 0<\sum_{i=1}^N x_i\le Nt$. Does this imply $\sum_{i=1}^N x_i$ is observed? It only bounds the sum. Dec 21 '18 at 16:25
• There's a missing step here: under your model assumptions, you can easily derive the distribution of $T.$
– whuber
Dec 21 '18 at 16:49
• @whuber Somehow did not realize that I am supposed to use the distribution of $T$. That makes it a standard problem. Dec 21 '18 at 17:28
Hint: the likelihood for $$T$$ is $$f_{T}(t ; N) = N(1-e^{-t})^{N-1}e^{-t}.$$ To verify this, first find the cdf of $$T$$, and then differentiate. \begin{align*} F_T(t) &= P(T \le t)\\ &= [F_{X_i}(t)]^N. \end{align*}
Regarding your other question as to whether there is an MLE for this: yes there is, but the likelihood is only defined on $$\mathbb{N}^{+}$$. This means that you cannot take a derivative and set it equal to zero.
If $$N=5$$ then $$\Pr(X_1 \le x\ \&\ \cdots \ \&\ X_N\le x) = \left( 1-e^{-x} \right)^5,$$ so you have a density function $$\frac d {dx} \left( \left( 1-e^{-x} \right)^5 \right) = 5\left( 1 - e^{-x} \right)^4 \cdot e^{-x}.$$ And similarly if $$N=10.$$ So the likelihood function is $$\begin{cases} L(5) = 5(1-e^{-x})^4 \cdot e^{-x} \\[8pt] L(10) = 10(1-e^{-x})^9 \cdot e^{-x} \end{cases}$$ where $$x$$ is the observed maximum value, and the ratio is $$\frac{L(5)}{L(10)} = \frac 1 {2(1-e^{-x})^5}.$$ A small value of this ratio favors the alternative hypothesis $$N=10.$$ Equivalently, a large value of the observed maximum favors the alternative. Given the probability distribution of the maximum assuming the null hypothesis $$N=5,$$ you can find the critical value as a function of the level of the test.
As an aside to the actual question, I am curious if it is possible to find MLE of $$N$$.
This is quite straightforward, and simply requires you to derive the log-likelihood function and then use standard (discrete or continuous) calculus techniques to maximise. Using rules for the distribution of order statistics, the sampling distribution for $$T$$ is:
$$f_T(t) = N e^{-t} (1-e^{-t})^{N-1}.$$
The log-likelihood is:
$$\ell_t(N) = \ln N - t + (N-1)\ln(1-e^{-t}).$$
Now, maximising this function can be done either by using discrete calculus (i.e., using difference operators), or it can be done by treating $$N$$ as continuous and maximising using continuous calculus, and then discretising the result. For simplicity, we will do the latter. Taking $$N$$ as a real variable, the log-likelihood has corresponding score function and Fisher information (essentially the first and second derivatives, but with the sign reversed on the second) given by:
\begin{aligned} s_t(N) &\equiv \frac{d \ell_t}{dN}(N) = \frac{1}{N} + \ln(1-e^{-t}), \\[12pt] I_t(N) &\equiv - \frac{d^2 \ell_t}{dN^2}(N) = \frac{1}{N^2} >0. \\[12pt] \end{aligned}
Since $$I_t(N) > 0$$, the log-likelihood is a strictly concave function of $$N$$, so it has a unique MLE at its sole critical point $$s_t(\hat{N}(t)) = 0$$. This gives you the continuous MLE:
$$\hat{N}(t) = -\frac{1}{\ln(1-e^{-t})}$$

(note that $$\ln(1-e^{-t}) < 0$$, so this value is positive).
This will generally not be an integer value, so you obtain the corresponding discrete MLE by looking at the two integers either side of this value; the bigger one is the discrete MLE (which is almost surely unique).
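As a numerical cross-check (not part of the original answer), the continuous maximiser can be compared against a brute-force maximisation of the log-likelihood over integer $$N$$. Since $$\ln(1-e^{-t})$$ is negative, the continuous maximiser is $$-1/\ln(1-e^{-t})$$; for an observed maximum of $$t=3$$ this is about 19.6, and the discrete MLE turns out to be 20:

```python
import math

def loglik(N, t):
    # log-likelihood of N given the observed maximum t
    return math.log(N) - t + (N - 1) * math.log(1 - math.exp(-t))

t = 3.0
N_cont = -1.0 / math.log(1 - math.exp(-t))          # continuous maximiser
N_disc = max(range(1, 200), key=lambda N: loglik(N, t))  # brute force over integers
```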
https://learn.careers360.com/ncert/question-the-inner-diameter-of-a-circular-well-is-3-point-5-m-it-is-10-m-deep-find-i-its-inner-curved-surface-area/
# The inner diameter of a circular well is 3.5 m. It is 10 m deep. Find (i) its inner curved surface area.
Q 7: The inner diameter of a circular well is $3.5\ m$. It is $10\ m$ deep. Find
(i) its inner curved surface area.
Given,
The inner diameter of the circular well = $d = 3.5\ m$
Depth of the well = $h = 10\ m$
We know,
The curved surface area of a cylinder = $2\pi rh$
$\therefore$ The curved surface area of the well = $2\times\frac{22}{7}\times\frac{3.5}{2}\times10$
$= 44 \times 0.25 \times 10 = 110\ m^2$
Therefore, the inner curved surface area of the circular well is $110\ m^2$
https://blender.stackexchange.com/questions/119358/rotation-copied-every-second-circle
# rotation copied every second circle
My question is pretty simple.
How do I make an object copy the rotation of another object at the same speed, but only every second revolution? I know I can animate that using keyframes, but I need to have it in a constraint.
• Do you mean rotate half as fast, or stop completely every other rotation? Sep 27 '18 at 21:49
• Stop completely every other rotation. Sep 28 '18 at 10:03
You could add a scripted driver to the desired rotation axis of the object with the intermittent rotation, using the following expression:
var*(int((var/(2.0*pi))%2))
where the variable var, (or whatever you want to call it) refers to the relevant rotation (X,Y,or Z) of the driving object.
Here, the Z rotation of the triangle is being driven by the expression, in which var refers to the Z rotation of the square.
EDIT in response to your comment:
We can make the expression a little more general by implicitly casting True and False to 1 and 0:
var*(int((var/(2.0*pi))%3)==1)
In this version, the number where the 3 is determines the number of turns in the cycle, and the number where the 1 is says which of the turns to follow, with 0 as the last, counting back. So this example would make the driven object follow the driver on the second of every three turns.
So, in order to have alternating rotations, in the example below, the green triangle uses:
var*(int((var/(2.0*pi))%2)==1)
..and the yellow triangle uses:
var*(int((var/(2.0*pi))%2)==0)
Any more than this - say, "the second and fifth out of every seven".. and it would probably be tidier to write a little function, and add it to the driver_namespace.
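As an illustration of what such a little function might look like, here is a plain-Python sketch (outside Blender; the name follow_turns and its signature are my own, illustrative choices):

```python
from math import pi

def follow_turns(var, turns_to_follow, cycle):
    """Copy the driving rotation `var` (in radians) only during the
    listed turn indices of each `cycle`-turn period; otherwise hold 0.

    Turn indices count from 0, so turns_to_follow={1, 4} with cycle=7
    means "the second and fifth out of every seven turns".
    """
    turn_index = int((var / (2.0 * pi)) % cycle)
    return var if turn_index in turns_to_follow else 0.0

# Equivalent to the expression var*(int((var/(2.0*pi))%2)==1):
# follow_turns(var, {1}, 2)
```

In Blender this would be registered via `bpy.app.driver_namespace` (e.g. `bpy.app.driver_namespace['follow_turns'] = follow_turns`) so the driver expression can call it.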
• That's what I was looking for. Thank you very much! Sep 29 '18 at 15:02
• @Michal.. I've made an edit to cover this.. Sep 29 '18 at 17:53
• Consider div instead of mod in this case: var * (var // twopi). Mod is handy for mapping, e.g. mapping onto 0, 1 for an offset on an orbit; div (//) will give you the whole number that you are calculating with the int-divide-modulus combo Sep 29 '18 at 20:13
• Sorry re confusion, was pointing out that revs or laps can be calculated using div not that mod and int were always avoidable. The ternary operator is another (often more) readable way eg var if 1.0 <= (var // twopi) % 5 <= 2.0 else 0 Ternary op can be nested. Sep 30 '18 at 3:06
• As pointed out often its easier to make a method in the driver namespace to avoid the single line madness. Akin to print pages '[1, 3, 4-6] Would like to see a subexpression in drivers akin to math surface to cut down long expressions.. . Mapping equation to 0 or 1 and driving a copy rotation constraint influence is another way that can be less complex equation wise. Is the square above slightly leading the triangle? 8UV's too man.. envious lol. Sep 30 '18 at 3:06
http://tonbak.wordpress.com/category/iran/
## Persian Tar and Tonbak in Dastgah e Chahargah
1 Apr
This is the full concert of my brother Parham and me in the Persian mode Dastgah-e-Chahargah, in Tehran, Iran.
http://www.parhamnassehpoor.com
## Peyman Nasehpour at Mathematics Genealogy Project Website
1 Apr
The mission of Mathematics Genealogy Project is to compile information about all the mathematicians of the world. They earnestly solicit information from all schools who participate in the development of research level mathematics and from all individuals who may know desired information.
Recently they listed me here: http://www.genealogy.ams.org/id.php?id=151682
Also, when I searched for the page of the late Mohsen Hachtroudi, surprisingly I didn't find it; so, by searching different resources, I gathered the information, submitted it to the Math. Genea. Proj. website, and now he has a page:
http://www.genealogy.ams.org/id.php?id=24177
I have also noticed that Gholamhossein Mossaheb is not listed there, but up to now I have not found a good resource from which to submit his information.
## Shajarian to congratulate Persian new year
20 Mar
Maestro Mohammad Reza Shajarian offers congratulations for Nowruz (the Persian New Year).
18 Mar
Happy Nowruz
## In remembrance of professor Mohsen Hachtroudi
10 Mar
When I was a student of professor Abdollah Anwar, learning philosophy and logic, he would from time to time mention the late professor Mohsen Hachtroudi and his knowledge of different branches of science and art. Yesterday I decided to search the Internet for some information about him. First I found the title of his thesis, which he had written under the supervision of the very famous French mathematician Élie Joseph Cartan.
The first point that I noticed was that he had romanized his name as Mohsen Hachtroudi in his Ph.D. thesis. I searched the title of his thesis and found that some mathematicians have referred to it in their works. One of the mathematicians who mentions professor Hachtroudi's thesis several times in his works is Joel Merker.
What seemed to me more interesting is that the name of Hachtroudi appears in the title of Merker’s recent work:
Vanishing Hachtroudi curvature and local equivalence to the Heisenberg sphere
The biography of professor Mohsen Hachtroudi at wikipedia
Here is a blog devoted to professor Hachtroudi:
Mohsen Hashtroudi
Professor Hachtroudi’s Ph.D. thesis under the supervision of Elie Cartan:
Hachtroudi, M.: Les espaces d’éléments à connexion projective normale,
Actualités Scientifiques et. Industrielles, vol. 565, Paris, Hermann,
1937. (Info from Joel Merker’s works)
Related keywords: Hachtrudi, Hachtroodi, Hashtrudi, Hashtroudi, Hashtroodi
## Mousavi and Karrubi under House Arrest
20 Feb
Iranian opposition leaders Mir-Hossein Mousavi, his wife Zahra Rahnavard, and Mehdi Karroubi have been put under house arrest after a protest call on Feb. 14, 2011.
Mir Hossein Mousavi and Zahra Rahnavard under House Arrest
Mehdi Karroubi under House Arrest
## My PhD Thesis is now online at the website of the University of Osnabrück
18 Feb
My PhD thesis with the title “Content Algebras and Zero-Divisors” is now online at the website of the University of Osnabrück:
Institutionelles repOSitorium der Universität Osnabrück: Content Algebras and Zero-Divisors
Abstract: This thesis concerns two topics. The first topic, that is related to the Dedekind-Mertens Lemma, the notion of the so-called content algebra, is discussed in chapter 2. Let $R$ be a commutative ring with identity and $M$ be a unitary $R$-module and $c$ the function from $M$ to the ideals of $R$ defined by $c(x) = \cap \lbrace I \colon I \text{~is an ideal of~} R \text{~and~} x \in IM \rbrace$. $M$ is said to be a \textit{content} $R$-module if $x \in c(x)M$, for all $x \in M$. The $R$-algebra $B$ is called a \textit{content} $R$-algebra, if it is a faithfully flat and content $R$-module and it satisfies the Dedekind-Mertens content formula. In chapter 2, it is proved that in content extensions, minimal primes extend to minimal primes, and zero-divisors of a content algebra over a ring which has Property (A) or whose set of zero-divisors is a finite union of prime ideals are discussed. The preservation of diameter of zero-divisor graph under content extensions is also examined. Gaussian and Armendariz algebras and localization of content algebras at the multiplicatively closed set $S^ \prime = \lbrace f \in B \colon c(f) = R \rbrace$ are considered as well.
In chapter 3, the second topic of the thesis, that is about the grade of the zero-divisor modules, is discussed. Let $R$ be a commutative ring, $I$ a finitely generated ideal of $R$, and $M$ a zero-divisor $R$-module. It is shown that the $M$-grade of $I$ defined by the Koszul complex is consistent with the definition of $M$-grade of $I$ defined by the length of maximal $M$-sequences in $I$.
Chapter 1 is a preliminarily chapter and dedicated to the introduction of content modules and also locally Nakayama modules.
Supervisor: Prof. Dr. Winfried Bruns
Important keywords and phrases: modules, commutative rings and algebras, content module, content algebra, weak content algebra, very few zero-divisor, zero-divisor graph, Gaussian algebra, Armendariz algebra, minimal prime ideal, property (A), Gauss’ lemma, Dedekind-Mertens lemma, ZD-module, grade, local cohomology module, homological dimension, McCoy’s property, semigroup ring, semigroup module, locally Nakayama module.
Alternative page for my PhD Thesis:
Content Algebras and Zero-Divisors
https://www.math.lsu.edu/REU/
LSU | Mathematics
# Research Experience for Undergraduates (REU)
## Structure of LSU REU in Mathematics
We have had an REU here since the summer of 1993, with funding from LEQSF and NSF. For the summer of 2015, the total budgeted to each student is approximately \$5,650, comprising a \$4,000 cash stipend, with the balance covering 8 weeks of housing on campus. The multiplicity of directors, Hoffman (algebraic geometry), Morales (number theory), and Stoltzfus (braid/knot theory), ensures that participants receive plenty of individual attention.
### Eligibility
US citizens and permanent residents who will be enrolled in a bachelor's degree program in both Spring and Fall of 2015. Preference will be given to students who will have completed two to three years of undergraduate mathematics, including a course in abstract and/or advanced linear algebra with some experience writing proofs. Participants are expected to devote full time to the program, precluding other course work and/or outside employment.
## No Summer 2016 REU: Not recommended for funding.
### Preliminary REU 2016 Announcement:
Pending a successful grant renewal application, the LSU REU returns in the summer of 2016 with the following themes:
Prospective 2016 Themes: Invariants in Constructive Galois Theory, Arithmetic Algebraic Geometry and Knot theory
Tentative Summer 2016 Program Dates Sun. 5 June - Fri. 29 Jul, 2016.
### Previous REU 2015 Announcement:
Refreshed by their sabbatical leaves, the LSU REU returns in the summer of 2015 with three faculty mentors Neal Stoltzfus, Jorge Morales, and William Hoffman.
Prospective 2015 Themes: Invariants in Galois Theory, Geometry and Knot theory
Summer 2015 Program Dates Sun. 7 June - Fri. 31 Jul, 2015.
2015 Themes: Invariants in Galois Theory, Geometry and Knot theory
We will explore the interaction of several areas of mathematics centering around braids and knots, group actions & Galois theory, graphs and polyhedra, as well as modular and related functions. Specifically, our proposed projects will be individually and collaboratively designed. We plan to propose problems in the combinatorics of polynomial functions of graphs, knots and their bounding Seifert surfaces, Galois groups and group actions and relationships with modular functions. The structure of graphs embedded on surfaces (dessins, 2-dimensional ribbon/fat graphs) is potentially useful in all three areas. More details are found under General Information and particularly under Topics (for 2012) below.
### Application Forms: The Online Application is available at MathPrograms.org. Your complete application consists of the following:
1. Completed application form (online at MathPrograms.org).
2. Recommendations from two mathematicians (coordinated by MathPrograms.org).
3. Copy of your college transcript (a scanned copy in PDF format can be attached and uploaded to the application site).
https://stackoverflow.com/questions/14051715/markdown-native-text-alignment/40862915
# Markdown native text alignment
Does markdown support native text alignment without using HTML + CSS?
• For GitHub Flavored Markdown, <p align=center> works. (from this answer below) – Ulysse BN Sep 18 '19 at 16:09
Native markdown doesn't support text alignment without HTML + CSS.
• Therefore wrap your text in <p style="text-align: center;"> and </p> which should work in any markdown – SDJMcHattie Mar 31 '16 at 13:36
• Github users: inline styles do not work on github and are not included as extended features of Github Flavored Markdown. This is all github supports as of Jan 2017. There are many online markdown testers that say they comply with GFM and show things like inline styles working, but github markdown [pretty much]* doesn't support HTML/CSS at the moment. *<br> works so there might be some hidden tags that work. – Govind Rai Jan 10 '17 at 23:22
• Github users: As of 6/8/2017, Diego Vinícius's answer below successfully centers text in markdown files. Just wrap your text in a p tag with align set to center, like so: <p align="center">centered text</p> – Kröw Jun 8 '17 at 8:10
• BTW, which would be better to use, <div> or <p>? A p is a paragraph, so maybe div would be a more neutral and better alternative? – VasiliNovikov Jun 24 '17 at 8:19
• @SDJMcHattie This doesn't work when converting .md to .pdf. – Marc Le Bihan Oct 20 '19 at 5:45
In order to center text in md files you can use the <center> tag, like in HTML:
<center>Centered text</center>
• This method is deprecated in html 5. – user5147563 Mar 7 '17 at 20:29
• This method works on SquareSpace Markdown blocks, as of August 15th, 2018. – ikjadoon Aug 16 '18 at 1:40
I know this isn't markdown, but <p align="center"> worked for me, so if anyone figures out the markdown syntax instead I'll be happy to use that. Until then I'll use the HTML tag.
• the align attribute has been deprecated since HTML 4 and obsolete since HTML 5. – Jindrich Vavruska May 27 '17 at 18:15
• While the above answers did not work, this method successfully centered some text for me on github. – Kröw Jun 8 '17 at 8:06
• Just tested on github: it works for text, it doesn't work for images. – xtian Jun 21 '17 at 14:28
• if html tags work, you can't align with the p tag or simple align; give a try with <div style="margin: 0 auto;"> with your image inside the div – Diego Vinícius Jun 22 '17 at 14:45
• @JindrichVavruska what should be used with Markdown these days instead? – Hi-Angel Mar 28 at 15:27
The div element has its own alignment attribute, align.
<div align="center">
my text here.
</div>
• Best solution. We can use "justify" in place of "center". Applies to everything inside the div without distorting anything. – impopularGuy Apr 29 '20 at 4:31
• To prevent inline MarkDown syntax within the div from breaking and failing to render, consider adding a blank line after the opening <div tag. I have found that to prevent that. – Tomáš Hübelbauer Mar 26 at 21:57
It's hacky but if you're using GFM or some other MD syntax which supports building tables with pipes you can use the column alignment features:
|| <!-- empty table header -->
|:--:| <!-- table header/body separator with center formatting -->
| I'm centered! | <!-- cell gets column's alignment -->
This works in marked.
• How would one go around this to apply for a heading? If I use a plain "#" inside the "|" it appears verbatim. – nilon Aug 7 '17 at 3:32
• This only appears to work for text. I'm trying to center an image. – Vince Apr 22 '18 at 21:04
In GitHub you need to write:
<p align="justify">
Lorem ipsum
</p>
• I like to start longer README.md files with an "Index" listing. I put this at the end of each section in case readers wanna pop back up to the index <p align="right">[Index](#index)</p> Works great :) – MmmHmm Apr 12 '18 at 4:11
• This solution works, but some styles like italic text are lost. – JavDomGom May 6 '20 at 12:58
• @JavDomGom try adding a blank line after the opening <div tag, then your inline element styles should be preserved. – Tomáš Hübelbauer Mar 26 at 21:55
For Markdown Extra you can use custom attributes:
# Example text {style=text-align:center}
This works for headers and blockquotes, but not for paragraphs, inline elements and code blocks.
A shorter version (but not supported in HTML 5):
# Example text {align=center}
• @AlmostPitt As mentioned, it's a Markdown Extra specific feature, it's not likely to work elsewhere. – user5147563 Oct 11 '19 at 9:29
For Python-Markdown with the attr_list extension the syntax is a little different:
{: #someid .someclass somekey='some value' }
Example:
[Click here](http://exmaple.com){: .btn .btn-primary }
I was trying to center an image and none of the techniques suggested in answers here worked. A regular HTML <img> with inline CSS worked for me...
<img style="display: block; margin: auto;" alt="photo" src="{{ site.baseurl }}/images/image.jpg">
This is for a Jekyll blog hosted on GitHub
A qualified 'yes' using table syntax. For example, you can center-align plain text as follows:
| |
| :-: |
| Excerpts from Romeo and Juliet (arr. V. Borisovsky) |
This yields:
Excerpts from Romeo and Juliet (arr. V. Borisovsky)
Note that you can still use Markdown inside an HTML block. For example:
<div style="font-style: italic; text-align: center;" markdown="1">
## Excerpts from Romeo and Juliet (arr. V. Borisovsky)
### Sergei Prokofiev
#### Timothy Ridout, viola ∙ Frank Dupree, piano
</div>
I found it pretty useful to use LaTeX syntax in Jupyter notebook cells, like:

$$\text{This is some centered text}$$
To center align, surround the text you wish to center align with arrows (-> <-) like so:
-> This is center aligned <-
https://www.physicsforums.com/threads/velocity-over-time-with-constant-power.779680/
# Homework Help: Velocity over time with constant power
Tags:
1. Nov 2, 2014
### watarok
For a car with constant power, how will its velocity change over time?
Since power (P) is the derivative of the kinetic energy (Ek), I've found that the equation for the velocity as a function of time is √(2*P*t/m). Is this correct? Would the graph then be a root graph?
2. Nov 2, 2014
### elegysix
so P = F*v, which means v = P/F. And F = m dv/dt... therefore v dv = (P/m) dt, which leads to $v=\sqrt{2Pt/m}$, so yeah I think you've got it right.
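A numerical sanity check of the closed form (my own illustration, not from the thread): since P = dE/dt, accumulating kinetic energy in small time steps and converting back to velocity should reproduce v = √(2Pt/m).

```python
from math import sqrt

def v_closed_form(P, m, t):
    """Velocity at time t for constant power P, starting from rest."""
    return sqrt(2 * P * t / m)

def v_from_energy(P, m, t, steps=100_000):
    """Accumulate kinetic energy E at constant power, then v = sqrt(2E/m)."""
    dt = t / steps
    E = 0.0
    for _ in range(steps):
        E += P * dt          # dE = P dt
    return sqrt(2 * E / m)

P, m, t = 50_000.0, 1_200.0, 10.0    # e.g. a 50 kW car of 1200 kg after 10 s
print(v_closed_form(P, m, t))         # ~28.87 m/s
print(v_from_energy(P, m, t))         # matches to floating-point accuracy
```

The graph of v(t) is indeed a square-root curve: quadrupling the elapsed time only doubles the speed.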
https://twiki.cern.ch/twiki/bin/view/CMSPublic/PhysicsResultsSUS13008?cover=print;rev=4;sortcol=2;table=11;up=0
# Abstract
Results are reported from a search for new physics processes in events with three leptons and at least one b-tagged jet. The analysis is based on a sample of proton-proton collisions at a center-of-mass energy of 8 TeV with an integrated luminosity of , collected by the CMS detector at the LHC. The event selection focuses on signatures for supersymmetry (SUSY) that include multiple W or Z bosons and b-jets in the final state, and the b-jet multiplicity is used to define low-background search regions. The standard model background contributions are determined using well-established techniques that are based on control samples in the data. The observed yields in the data are consistent with the background predictions, and the results are used to obtain upper limits on the production cross sections for several SUSY event topologies defined in the framework of simplified models.
# Detailed documentation
The documentation can be found at SUS-13-008.
# Tables and Plots from SUS-13-008
( click on plot to get .pdf )
## Introduction
Figures Abbreviated Caption
Figure 1a: Processes targeted by this analysis include gluino-pair production with subsequent decay to four top quarks and two lightest SUSY particles (LSP) () via off-shell top squark.
Figure 1b: Processes targeted by this analysis include gluino-pair production with subsequent decay to four top quarks and two lightest SUSY particles (LSP) () via on-shell top squark.
Figure 2a: Processes targeted by this analysis include direct sbottom-pair production with decay to two top quarks, two W and two LSP ().
Figure 2b: Processes targeted by this analysis include gluino-pair production with decay to two bottom quarks, two top quarks, two W bosons, and two LSP via on-shell bottom squark ().
Figure 3: Process targeted by this analysis includes direct sbottom-pair production with decay to two b-quarks, two Z and two LSP ().
## Event Selection and Backgrounds
Tables and Figures Abbreviated Caption
Figure 4: Distribution of events after the baseline event selection in data in the MET vs. HT plane for On-Z (left) and Off-Z (right) category. The requirement has not been applied to illustrate the background population at low MET.
Table 1: Binning defining the baseline selection and the search regions of the analysis. All the combinations of these requirements are used to create the 60 search regions (SR). For no extra jet multiplicity binning is added.
Figure 5: Jet multiplicity distributions for diboson events in WZ (left) and ZZ (right) control regions in data and simulated event samples.
## Results
Tables and Figures Abbreviated Caption
Table 2: Predicted total background and observed data yields as a function of MET for events with no Z candidate present (Off-Z). Upper limits (68% CL) are quoted when there are not enough events in data and simulation to derive an expected number of background events.
Table 3: Predicted total background and observed data yields as a function of MET for events with a Z candidate present (On-Z). Upper limits (68% CL) are quoted when there are not enough events in data and simulation to derive an expected number of background events.
Figure 6a: Observed data events and predicted SM background as a function of number of jets, MET, HT, and number of b-jets are shown for events that do not contain an opposite-sign-same-flavour pair that is a Z boson candidate. The last bin in the histograms includes overflow events. The shaded bands correspond to the estimated uncertainties on the background which are calculated on the per bin basis.
Figure 6b: Observed data events and predicted SM background as a function of number of jets, MET, HT, and number of b-jets are shown for events that contain an opposite-sign-same-flavour pair that is a Z boson candidate. The last bin in the histograms includes overflow events. The shaded bands correspond to the estimated uncertainties on the background which are calculated on the per bin basis.
Figure 7: Predicted total background and observed data yields as a function of MET for events that do not contain an opposite-sign-same-flavour pair that is a Z boson candidate (Off-Z): (a) and (b) . The shaded bands correspond to the estimated uncertainties on the background. The dashed histograms show an expected yield for the A1 model with particle masses and . The dotted histograms show an expected yield for the B1 model with particle masses and .
Figure 8: Predicted total background and observed data yields as a function of MET for events that contain an opposite-sign-same-flavour pair that is a Z boson candidate (On-Z): (a) and (b) . The shaded bands correspond to the estimated uncertainties on the background. The dashed histograms show an expected yield for the A1 model with particle masses and . The dotted histograms show an expected yield for the B1 model with particle masses and .
## Interpretation
Tables and Figures Abbreviated Caption
Table 4: Systematic uncertainties on the signal acceptance.
Figures Abbreviated Caption
Figure 9: The 95% CL upper limits on the (left) model A1 and (right) model A2 scenario cross sections (fb) derived using the CLs method. The solid (black) contours show the observed exclusions assuming the NLO+NLL cross sections, along with the one standard deviation theory uncertainties. The dashed (red) contours present the corresponding expected results, along with the one standard deviation experimental uncertainties.
Figure 10: The 95% CL upper limits on the model B2 scenario cross sections (fb) derived using the CLs method. In the model B2, it is assumed that with (left) or (right) . The solid (black) contours show the observed exclusions assuming the NLO+NLL cross sections, along with the one standard deviation theory uncertainties.The dashed (red) contours present the corresponding expected results, along with the one standard deviation experimental uncertainties.
Figure 11a: The 95% CL upper limits on the model B1 scenario cross sections (fb) derived using the CLs method. The limits are computed for the following scenarios within the model B1: . The solid (black) contours show the observed exclusions assuming the NLO+NLL cross sections, along with the one standard deviation theory uncertainties. The dashed (red) contours present the corresponding expected results, along with the one standard deviation experimental uncertainties.
Figure 11bc: The 95% CL upper limits on the model B1 scenario cross sections (fb) derived using the CLs method. The limits are computed for the following scenarios within the model B1: (left) or (right) . The solid (black) contours show the observed exclusions assuming the NLO+NLL cross sections, along with the one standard deviation theory uncertainties. The dashed (red) contours present the corresponding expected results, along with the one standard deviation experimental uncertainties. The deviation of the observed limit from the expected one is evaluated to be at the level of two standard deviations experimental uncertainties. The exclusion plots with additional information in them can be found at the link.
Figure 12: The 95% CL upper limits on the model C1 scenario cross sections (fb) derived using the CLs method. The solid (black) contours show the observed exclusions assuming the NLO+NLL cross sections, along with the one standard deviation theory uncertainties. The dashed (red) contours present the corresponding expected results, along with the one standard deviation experimental uncertainties.
### Electronic version of interpretations and acceptance maps
The root files contain the efficiencies for the signal regions used in the limits for the given model (as well as the total efficiency for the sum of these SR), the cross section limits, and the 6 exclusion contours.
Model Specification Link to the file
A1 T13lb.root
A2 T53lb.root
B1 T63lb.root
B1 T6x053lb.root
B1 T6x083lb.root
B2 , T71503lb.root
B2 , T73003lb.root
C1 T6bbZZ3lb.root
Figures and Tables Abbreviated Caption
Performance in MC of the method for non-prompt leptons background estimation: muons. Red line in the ratio plots corresponds to a fit with a constant.
Performance in MC of the method for non-prompt leptons background estimation: electrons. Red line in the ratio plots corresponds to a fit with a constant.
The complete list of search regions (SR) used in the analysis. Signal regions 30-59 are the same as 0-29 except that the off-Z dilepton mass requirement is inverted.
## Distributions in data
The kinematical distributions for the Off-Z data selected with and requirements. The last bin in the histograms includes overflow events. The shaded bands correspond to the estimated uncertainties on the background which are calculated on the per bin basis.
The kinematical distributions for the data selected with and requirements. The last bin in the histograms includes overflow events. The shaded bands correspond to the estimated uncertainties on the background which are calculated on the per bin basis.
The kinematical distributions for the data selected with and requirements. The last bin in the histograms includes overflow events. The shaded bands correspond to the estimated uncertainties on the background which are calculated on the per bin basis.
The kinematical distributions for the On-Z data selected with and requirements. The last bin in the histograms includes overflow events. The shaded bands correspond to the estimated uncertainties on the background which are calculated on the per bin basis.
## Additional plots for the model B1 exclusion
Figures Abbreviated Caption
The 95% CL upper limits on the model B1 scenario cross sections (fb), derived using the CLs method. The limits are computed for the following scenario within the model B1: . The solid (black) contours show the observed exclusions assuming the NLO+NLL cross sections, along with the one-standard-deviation theory uncertainties. The dashed (red) contours present the corresponding expected results, along with (left) the two-standard-deviation or (right) the one- and two-standard-deviation experimental uncertainties.
## The most sensitive search regions
Figures Abbreviated Caption
The most sensitive search regions for (left) model A1 and (right) model A2.
The most sensitive search regions for the model B2. In the model B2, it is assumed that with .
The most sensitive search regions for the model B1: .
The most sensitive search regions for the model B1: (left) or (right) .
The most sensitive search regions for the model C1.
SR SR All SR Abbreviated Caption
Acceptance maps for two most sensitive search regions and for all used SR for model A1 (T1tttt).
Acceptance maps for two most sensitive search regions and for all used SR for model A2 (T5tttt).
Acceptance maps for two most sensitive search regions and for all used SR for model C1 (T6bbZZ).
Acceptance maps for two most sensitive search regions and for all used SR for model B1 (T6ttWW).
Acceptance maps for two most sensitive search regions and for all used SR for model B1 (T6ttWW).
Acceptance maps for two most sensitive search regions and for all used SR for model B1 (T6ttWW).
Acceptance maps for two most sensitive search regions and for all used SR for model B2 (T7btW) with .
Acceptance maps for two most sensitive search regions and for all used SR for model B2 (T7btW) with .
## Cut Flow plots
Tables and Figures Abbreviated Caption
B-tagged jet multiplicity for 3l off-Z events: on the left for gluino pair production (T1tttt model), on the right for sbottom pair production (T6ttWW model).
HT distribution for 3l off-Z events after a b-tagged jet multiplicity cut: on the left for gluino pair production (T1tttt model) after requiring at least 2 b-jets, on the right for sbottom pair production (T6ttWW model) after requiring at least 1 b-jet.
Jet multiplicity for 3l off-Z events with at least 2 b-tags (left, T1tttt model) or at least 1 b-tag (right, sbottom pair production, T6ttWW model).
MET distribution for 3l off-Z events with at least 4 jets: on the left, gluino pair production (T1tttt model) with a requirement of at least 2 b-jets; on the right, sbottom pair production (T6ttWW model) with at least 1 b-jet.
## Event displays
Event display of one event with 3 leptons and 3 b-tagged jets.
This topic: CMSPublic > PhysicsResults > PhysicsResultsSUS > PhysicsResultsSUS13008
Topic revision: r4 - 2013-05-24 - LesyaShchutska
https://proofwiki.org/wiki/Definition:Complement_of_Relation
# Definition:Complement of Relation
## Definition
Let $\mathcal R \subseteq S \times T$ be a relation.
The complement of $\mathcal R$ is the relative complement of $\mathcal R$ with respect to $S \times T$:
$\relcomp {S \times T} {\mathcal R} := \set {\tuple {s, t} \in S \times T: \tuple {s, t} \notin \mathcal R}$
If the sets $S$ and $T$ are implicit, then $\complement \paren {\mathcal R}$ can be used.
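For finite $S$ and $T$ the definition can be applied mechanically. A short Python sketch (the particular sets $S$, $T$ and relation $R$ below are illustrative choices, not part of the entry):

```python
from itertools import product

S = {1, 2, 3}
T = {'a', 'b'}
R = {(1, 'a'), (2, 'b')}          # a relation R ⊆ S × T

# Relative complement of R with respect to S × T:
# all ordered pairs of S × T that are not in R.
complement_R = set(product(S, T)) - R

# R and its complement partition S × T.
assert R | complement_R == set(product(S, T))
assert R & complement_R == set()
```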
## Also denoted as
An alternative to $\relcomp {S \times T} {\mathcal R}$ is $\overline {\mathcal R}$ which is more compact and convenient, but the context needs to be established so that it does not get confused with other usages of the overline notation.
Specific conventional symbols used to denote certain frequently-encountered relations often consist of lines in various configurations, for example $=$, $\le$, $\equiv$, and adding an overline to these can only make for confusion.
In these cases, it is conventional to draw a line through the symbol, for example:
$\ne$ for $\complement \paren =$
$\not \le$ for $\complement \paren \le$
$\not \equiv$ for $\complement \paren \equiv$
and so on.
Some authors use $\mathcal R'$ to denote the complement of $\mathcal R$, but $'$ is already heavily overused.
## Linguistic Note
The word complement comes from the idea of complete-ment, it being the thing needed to complete something else.
It is a common mistake to confuse the words complement and compliment. Usually the latter is mistakenly used when the former is meant.
https://www.physicsforums.com/threads/a-tractable-baker-campbell-hausdorff-formula.524004/
# A tractable Baker-Campbell-Hausdorff formula
1. Aug 24, 2011
### arkobose
1. Let A and B be two matrices, and $\lambda$ be a continuous parameter.
2. Now, define a function $f(\lambda) \equiv e^{\lambda A}e^{\lambda B}$. We need to show that $\frac{df}{d\lambda} = \left\{A + B + \frac{\lambda}{1!}[A, B] + \frac{\lambda^2}{2!}[A, [A, B]] + ... \right \}f$
Once this is shown, setting $\lambda = 1$, and $[A, [A, B]] = [B, [A, B]] = 0$ gives us a Baker-Campbell-Hausdorff formula.
3. I had shown this result quite a while ago, but now I have forgotten completely what I had done. This time, I tried differentiating $f(\lambda)$ w.r.t. the argument, and then using the commutation relation I was able to get the first two terms on the R.H.S., but thereafter I got stuck. The very minimal hint would be all that I need.
Thank you!
Last edited: Aug 24, 2011
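The special case quoted above, where $[A,[A,B]] = [B,[A,B]] = 0$, can be checked numerically. A minimal sketch (assuming NumPy; the strictly upper-triangular 3×3 matrices are an illustrative choice for which the commutator $[A,B]$ is central and all exponentials truncate exactly):

```python
import numpy as np

def expm_nilpotent3(M):
    # Exact exponential for a 3x3 strictly upper-triangular matrix: M^3 = 0,
    # so the exponential series terminates after the quadratic term.
    return np.eye(3) + M + (M @ M) / 2.0

A = np.array([[0., 1., 0.], [0., 0., 0.], [0., 0., 0.]])
B = np.array([[0., 0., 0.], [0., 0., 1.], [0., 0., 0.]])

comm = A @ B - B @ A   # [A, B]; here it commutes with both A and B

lhs = expm_nilpotent3(A) @ expm_nilpotent3(B)   # e^A e^B
rhs = expm_nilpotent3(A + B + 0.5 * comm)       # e^(A + B + [A,B]/2)

assert np.allclose(lhs, rhs)
```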
2. Aug 28, 2011
### diazona
Without seeing your exact steps I can't say much, but you may need to expand out an exponential or two and work out some commutators term-by-term.
3. Aug 28, 2011
### arkobose
I solved it. Thanks anyway!
http://www.physicsforums.com/showthread.php?t=199998
# Questions on salt water vs fresh water...
by jim
Tags: fresh, salt, water
**jim:** I live in an area that is experiencing a long drought, so I got to thinking about the differences between fresh and salt water, and I have a few questions I'd like your thoughts on. Just what happens to salt in salt water? Do the molecules of water and salt combine, or is the salt simply suspended in the water? If it is just suspended, what makes it stay suspended? Why doesn't it settle to the bottom of the container? Are the water molecules heavier than the salt molecules, or vice versa, or are they approximately the same weight? Thanks for any light that you could shine on these questions!

**Reply:** Salt in the water is dissolved. To extract the salt you need to leave the water in a salt pan to dry; if you dry the salty water, you will have salt left.

**Androcles:** It dissolves. Sand is a suspension; salt is a solution. Thoroughly mix salt and sand, add water, and pour it through a coffee filter. Evaporate off the water and you've separated the salt and sand (you can speed up the evaporation by boiling the water). It doesn't settle to the bottom of the container because it is a solution and not a suspension. As for which molecules are heavier: which is heavier, a ton of old iron or a ton of feathers?

**Reply:** He might have meant the mass of a single molecule of salt versus a single H2O.

**Timo Nieminen:** Why don't air molecules all settle to the ground? Not a trivial question! The simple answer is that the air isn't cold enough: the molecules are all moving around (the temperature tells you the average kinetic energy of the molecules). If they're moving around, they can hardly be lying on the ground. Stuff suspended in water behaves the same way. Why do heavier things settle to the bottom? Their kinetic energy is the same as that of lighter things, and since KE = (1/2)mv^2, the thermal motion of heavier things is much slower, so they can settle. A fancier answer is that a dilute suspension in water obeys the ideal gas law, just like a gas does. Also, there are no salt molecules in solution, just Na and Cl ions. A water molecule is H2O, so its mass is easy to look up on a periodic table. What's more important, the weight or the density?

**CWatters:** They don't combine, they mix; if they combined it would be a compound (see http://www.answers.com/topic/solution). The salt doesn't settle because the density is the same or similar. It's the density that matters, not the weight.
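The kinetic-energy argument above can be made quantitative: setting the average thermal energy (3/2)k_B T equal to (1/2)mv^2 gives the rms thermal speed, which is hundreds of m/s for a water molecule but negligible for a sand grain. A rough sketch (the 100 micron quartz grain is an illustrative assumption, not taken from the thread):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 293.0            # room temperature, K

def v_rms(mass_kg):
    # rms thermal speed from (3/2) k_B T = (1/2) m v^2
    return math.sqrt(3.0 * k_B * T / mass_kg)

m_water = 18.0e-3 / 6.022e23                             # one H2O molecule, ~3e-26 kg
m_grain = 2650.0 * (4.0 / 3.0) * math.pi * (50e-6) ** 3  # 100 um quartz grain, kg

print(v_rms(m_water))   # ~640 m/s: never settles
print(v_rms(m_grain))   # ~3e-6 m/s: settles under gravity
```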
**Michael:** The salt you are most familiar with is sodium chloride. It dissociates into sodium ions and chloride ions, each of which interacts with the water due to its ionic nature. This means that salt water really isn't water with salt in it; it is water with the constituents of salt in it. Once the concentration of sodium and chlorine in the water is too high, they can precipitate out. Sand, on the other hand, stays pretty much intact, so a mixture of sand and water is just a bunch of sand particles in water; it doesn't interact chemically with the water to any great extent. When you stop mixing a sand/water suspension, the sand and water will separate (or at least the sand will drop to the bottom), leaving wet sand.

**jim:** Would you be referring to the mass or the size of the individual molecules when you refer to density? Salt molecules seem much more densely packed in a crystalline state than those of water, and to my untrained mind it would seem that the water molecules are more dense than the salt molecules dispersed throughout them. I know it is a childish question, but just what do you mean by density? These things bring two more questions to mind: what is it about water that makes salt crystals separate from one another? And why do salt molecules form crystals when the salt water evaporates, instead of just a fine powder or dust? I am aware of filters used to filter salts out of salt water; I assume this works because the H2O molecules are smaller in size than the salt molecules. But it would seem that not all salt water solutions are equal. I saw a re-run of a show last night on cable (Deep Planet) that may play into my questions. Scientists in a specially built submersible were taking pictures and film of the ocean floor at great depths (10,000 feet or so). At one point, they came across a "pond" in the ocean: an actual pond with a shore, containing water of a much greater density than the water they were already in. The film maker said that when they tried to enter the pond, the submersible simply bounced off it; the water in the undersea pond was too dense to be penetrated. In fact, their attempt to enter it created waves across its surface that rippled out to its shores. He also referred to this sub-sea pond's water as "brine". Seeing as how they were already in the sea (a brine solution), how would this pond be filled with salt water denser than the water it itself sits in? Shouldn't it reach equilibrium with the surrounding sea water through natural means? A pond under the sea: just one more reason that I am not sure we really understand the nature of salt water at all, although we may understand some of its properties. Thanks for your help!
P: n/a Dear jim: "jim" wrote in message news:Orn2j.2754$k27.68@bignews2.bellsouth.net... ... > Salt molecules seem much more densely packed when > in a crystaline state than those of water. And, to my > untrained mind, it would seem that the water molecules > are more dense than the salt molecules dispersed > throughout them. No. The density of salt water is higher than the density of fresh water. ... > These things bring 2 more questions to mind...what is it > about water that makes salt crystals seperate from one > another? The water molecule is polar, with a slight negative charge on one side. The hydrogens don't bond diametrically opposite each other on the oxygen. So the oxygen atom is attractive to the positively charged sodium, and the hydrogen atoms are attractive to the chloride. > Why do salt molecules form crystals when the salt > water evaporates instead of just a fine powder or dust? Within its ability to move in the disappearing fluid, salt crystals are at a slightly lower energy state, than would be dispersed NaCl molecules. David A. Smith P: n/a "Timo A. Nieminen" writes: > Just what happens to salt in salt water? > Do the molecules of water and salt combine or is the salt simply > suspended in the water? If it is just suspended, what makes it stay > suspended? Why doesn't it settle to the bottom of the container? If you centrifuge salt solutions at high speed, a concentration gradient indeed appears. Biologists use this with cesium chloride (a particularly heavy ion) to set up a density gradient for separation of biomolecules by density. P: n/a "Herman Family" wrote in message news:N5N1j.21249$B25.19492@news01.roc.ny... > > "jim" wrote in message > news:wma1j.4013$F87.3448@bignews6.bellsouth.net... >>I live in an area that is experiencing a long drought. So, I got to >> thinking about the differences in fresh and salt water and have a few >> questions that I'd like your thoughts on. >> >> Just what happens to salt in salt water? 
>> >> Do the molecules of water and salt combine or is the salt simply >> suspended in the water? >> >> If it is just suspended, what makes it stay suspended? >> >> Why doesn't it settle to the bottom of the container? >> >> Are the water molecules heavier than the salt molecules or vice versa or >> are they approximately the same weight? >> >> Thanks for any light that you could shine on these questions! >> >> jim >> > > The salt you are most familiar with is sodium chloride. The salt will > dissociate into sodium ions and chloride ions, each of which interact with > the water due to their ionic nature. This means that salt water really > isn't water with salt in it, it is water with the constituents of salt in > it. Once there is too high a concentration of sodium and chlorine in > water, > they can precipitate out. So the sodium and chloride ions form bonds with the water molecules because the attraction of the ions to H2O is stronger than their attraction to one another, and when there are no more H2O "connections" available for more sodium or chloride ions to attach themselves to, the sodium and chloride ions simply re-connect to each other via their weaker attraction and drift to the bottom? Did I get that right? > > Sand, on the other hand, stays pretty much intact, so a mixture of sand > and > water is just a bunch of sand particles in water. It doesn't interact > chemically to any great extent with the water. When you stop mixing the > sand/water suspension, the sand and water will separate (or at least the > sand will drop to the bottom). It will be wet sand. Got it. The sand doesn't break apart, it is just sand, no matter where in the suspension it is located - but salt actually changes its molecular makeup by separating into sodium and chloride in the presence of water that has not already been saturated to its limit with salt. Ok.....well, idea #1 won't work, but idea #2 still has some life in it. 
I've got to purchase a few tools from a school supply company and start playing "mad person" (if I were a scientist it would sound so much cooler). Thanks for your help! P: n/a N:dlzc D:aol T:com (dlzc) wrote: > Dear jim: > > > > The water molecule is polar, with a slight negative charge on one > side. The hydrogens don't bond diametrically opposite each other > on the oxygen. So the oxygen atom is attractive to the > positively charged sodium, and the hydrogen atoms are attractive > to the chloride. > > >>Why do salt molecules form crystals when the salt >>water evaporates instead of just a fine powder or dust? > Extremely fast water evaporation will tend to form fine powder or dust. It is a salt nucleation process from supersaturated ion solution. Once nucleated crystals are formed, the ions from solution will tend to go to already formed salt crystals, but will tend to form many new crystals if ions are in a supersaturated condition created by rapid water evaporation, making the energetics favorable for creating new additional crystals rather than migrating to existing crystals. Essentially, it takes more energy to form a sodium chloride crystal from one sodium ion and one chloride ion than attaching those same ions to an existing sodium chloride crystal. The effect is more pronounced in less water soluble crystals such as calcium carbonate. > > Within its ability to move in the disappearing fluid, salt > crystals are at a slightly lower energy state, than would be > dispersed NaCl molecules. > > David A. Smith > Energy concept in terms of free energy (kcal/g mole), for sodium chloride crystal -> sodium+ ion in water + chloride- ion in water: the free energy of the crystal is -91.79, of the sodium ion in water -62.59, and of the chloride ion in water -31.35. Comparing: -91.79 > -62.59 + (-31.35), i.e. -91.79 > -93.94. More negative free energy indicates a tendency for the reaction to go in that direction. 
Stumm and Morgan Aquatic Chemistry goes into this type of thing in detail Richard P: n/a "Tom Knight" wrote in message news:vuyzlx2clvh.fsf@shaggy.csail.mit.edu... > "Timo A. 
Nieminen" writes: >> Just what happens to salt in salt water? >> Do the molecules of water and salt combine or is the salt simply >> suspended in the water? If it is just suspended, what makes it stay >> suspended? Why doesn't it settle to the bottom of the container? > > If you centrifuge salt solutions at high speed, a concentration > gradient indeed appears. Biologists use this with cesium chloride (a > particularly heavy ion) to set up a density gradient for separation of > biomolecules by density. Then why not use a device similar to a Dyson vacuum (which spins the dirt out of air) to seperate the salt and water? P: n/a Dear Richard Saam: "Richard Saam" wrote in message news:bJH2j.156733$kj1.98434@bgtnsc04-news.ops.worldnet.att.net... > N:dlzc D:aol T:com (dlzc) wrote: ... >> Within its ability to move in the disappearing >> fluid, salt crystals are at a slightly lower energy >> state, than would be dispersed NaCl molecules. > > Energy Concept in terms of free energy (kilo > cal/g mole) Sodium crystal -> sodium+ ion in > water & chloride- ion in water He was asking why, when the water evaporates, fewer single crystals are formed, rather than "salt dust". Your supplied information does not answer that question. > Free energy Free energy Free energy > Sodium crystal Sodium ion in water Chloride ion in water > -91.79 > -62.59 + -31.35 > > -91.79 > -93.94 > > More negative free energy indicates > a tendency for the reaction to go in that direction. > > Stumm and Morgan > Aquatic Chemistry > goes into this type of thing in detail David A. Smith
https://yateks.com/application-of-oil-analysis-technology-in-helicopter/
# Application of Oil Analysis Technology in Helicopter
Oil analysis technology can play a monitoring role in helicopters, helping ensure safe flight and, above all, protecting the lives and property of personnel. Its main functions are as follows:
1. Troubleshooting of helicopters. By comparing measurements against threshold values, possible abnormal wear and failures can be predicted. This avoids major accidents and reduces the failure rate. Condition monitoring of helicopters provides operational safety and improves helicopter availability. This is the most important function of oil analysis and monitoring.
2. Correctly determine the service life of the oil. The factors that affect the oil-change interval mainly include the quality of the oil itself, the sulfur content of the fuel, the working environment, and fuel consumption. The usual approaches are changing oil at fixed intervals or changing it based on measured oil quality. Fixed-interval oil changes have two drawbacks: if the oil is changed too late, it can cause significant damage to the helicopter; if it is changed too early, oil that could still be used is treated as waste oil, resulting in waste.
3. Study the wear law and explore the wear mechanism. Understand the effect of material types, heat treatment methods, and machining accuracy on wear, and provide a basis for improving designs.
4. Correctly stipulate the running-in specification of the helicopter. Most previous running-in specifications used empirical data and lacked a scientific basis. Using oil analysis technology in bench tests can scientifically determine the running-in specification, effectively shortening the running-in period and saving resources.
5. Determine the contamination of the oil. The entry of water, sand, and dust into the oil is very harmful to the helicopter. Through oil analysis and monitoring, the degree of oil contamination can be detected, so that countermeasures can be taken in time.
6. Understand the loss of additives in the oil, so that additives can be replenished in time or the oil replaced with new oil.
7. Correctly formulate the overhaul cycle of helicopters, save maintenance costs, reduce spare-parts inventory, greatly reduce operating costs, and extend the service life of helicopters.
Yateks has many years of experience in oil condition analysis. Its oil products include online oil analysis systems and offline oil laboratory equipment, with successful applications in many fields. If you have needs, you can contact us at any time.
https://solvedlib.com/best-fit-line-then-write-down-an-equation-for-the-data-plotted,1382298
#### Similar Solved Questions
##### The illustration to the left represents a mixture of nitrogen (blue) and oxygen (red) molecules. If the molecules in the above illustration react to form NO according to the equation N2 + O2 → 2 NO, the limiting reagent is ___, the number of NO molecules formed is ___, and the number of molecules in excess is ___.
##### Determine whether the statement is true or false. Explain your answer. The circulation of a vector field $\mathbf{F}$ around a closed curve $C$ is defined to be $\int_{C}(\operatorname{curl} \mathbf{F}) \cdot \mathbf{T} d s$
##### This is a joint probability distribution. Please find: P(Gender = Male) = ? P(Wealth = Rich) = ? P(Wealth = Rich AND Gender = Female) = ? P(Wealth = Rich | Gender = Female) = ? Thank you

| Gender | Hours Worked | Wealth | Probability |
| --- | --- | --- | --- |
| Female | < 40.5 | Poor | 0.2531 |
| Female | < 40.5 | Rich | 0.0246 |
| Female | > 40.5 | Poor | ... |
##### An irregularly shaped metal was weighed by the following difference: watch glass + metal 56.7813 g; watch glass 35.4725 g. The volume of the metal was determined by placing the metal in a graduated cylinder that had water in it and measuring the volume difference: graduated cylinder + water + metal 14.15 mL; graduated cylinder + water 11.25 mL. Determine the density to the correct number of significant figures.
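As a quick check of the arithmetic in the density question above (the three-significant-figure limit comes from the volume measurement):

```python
# Mass of metal by difference, volume by water displacement
mass = 56.7813 - 35.4725   # g
volume = 14.15 - 11.25     # mL
density = mass / volume    # g/mL
print(round(density, 2))   # -> 7.35
```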
##### NT is taking two PERCOCET tablets each containing 7.5 mg of oxycodone and 325 mg of acetaminophen every 4 hours to manage his pain. His physician wants to switch him to oxymorphone extended-release tablets (OPANA ER) for improved pain control. Oxymorphone extended-release tablets are available in st...
##### Question 1 (10 pts): Suppose you conduct an experiment where you flip two coins. Define a random variable x_i as follows: x_i = 1 if H, 0 if T, for i = 1, 2. Now define another random variable y as follows: y = 1 if x_1 + x_2 = 1, 0 else. That is, y = 1 if exactly one of the two coins is heads and 0 otherwise. Assume that the chance of either coin coming up heads is 0.5. (a) (5 pts) Find the probability distribution of y. (b) (5 pts) Use the distribution you found in part (a) to find the mathematical expectation of y.
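The distribution in the coin question above can be found by enumerating the four equally likely outcomes:

```python
from itertools import product

# Enumerate the four equally likely outcomes of two fair coin flips (1 = heads)
outcomes = list(product([0, 1], repeat=2))
y = [1 if sum(o) == 1 else 0 for o in outcomes]

p_y1 = sum(y) / len(y)   # P(y = 1): two of four outcomes have exactly one head
e_y = p_y1               # E[y] of a Bernoulli variable equals P(y = 1)
print(p_y1, e_y)  # -> 0.5 0.5
```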
##### Problem 15.50. A guitar string is 90.0 cm long and has a mass of 3.08 g. From the bridge to the support post the length is 60.0 cm, and the string is under a tension of 532 N. What are the frequencies of the fundamental and first two overtones? Enter your answers numerically.
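A numeric sketch for the string problem above, assuming (as in the standard treatment) that the linear mass density uses the full 90.0 cm string while only the 60.0 cm span between bridge and post vibrates:

```python
import math

m = 3.08e-3      # string mass (kg)
L_total = 0.900  # total string length (m)
L = 0.600        # vibrating length, bridge to support post (m)
T = 532.0        # tension (N)

mu = m / L_total          # linear mass density (kg/m)
v = math.sqrt(T / mu)     # transverse wave speed on the string
freqs = [n * v / (2 * L) for n in (1, 2, 3)]  # fundamental and first two overtones
print([round(f) for f in freqs])  # -> [329, 657, 986]
```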
##### The presence of a hydrogenosome (rather than mitochondria) is characteristic of what protozoan group? (a) kinetoplastids (b) parabasalids (c) apicomplexans (d) loboseans
##### You want to rent a furnished one-bedroom apartment in Champaign, IL. You know that the monthly rent amount follows a normal distribution with a standard deviation of $250. Suppose that the monthly rent for a random sample of 75 apartments is obtained. Find the probability that the average monthly rent for the apartments in the sample is within $50 of the overall mean. A sample of 75 apartments has a sample mean of $950. Construct a confidence interval for the overall mean.
##### Statement of Cash Flows (Indirect Method). Use the following information regarding the Fremont Corporation to prepare a statement of cash flows using the indirect method: Accounts payable increase $14,000; Accounts receivable increase $7,000; Accrued liabilities decrease $5,000; Amortization expense $31,00...
##### (a) Graph the function h(x) = (1 − x) + 9. (b) Find the domain and range of the following function: h(x) = −2 + √(1 − x).
##### What kind of information would you cover while counseling her at this time of pregnancy (especially concerning the postpartum phase)? A.K. is a 32-year-old female, G3P2, currently at week 28 of gestation. Her pre-pregnancy weight was 65 kg and her height is 161 cm. Her current weight is 78 kg. ...
##### Question 39 of 50: Jin is 34 years old. He contributed $3,750 to a Roth IRA in 2015 and $2,500 in 2016. In 2018, he withdrew the entire balance, which had grown to $6,913. The amount of Jin's withdrawal that is taxable and subject to penalty is: (a) $0 taxable and $663 subject to penalty (b) $663 taxable ...
##### A U.S. Senate Committee has 14 members. Assuming party affiliation is not a factor in selection, how many different committees are possible from the 100 U.S. senators?
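The committee count asked for above is a straightforward binomial coefficient, C(100, 14):

```python
import math

# Number of ways to choose 14 senators from 100 when order does not matter
committees = math.comb(100, 14)
print(committees)
```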
##### Describe how Six Sigma first developed and evolved over time. When did it enter the health...
##### Question 9 (3 pts). The Laplace transform of the piecewise continuous function f(t) = 4 for 0 < t < 3, and f(t) = 2 for t > 3, is given by: (a) L{f} = (2/s)(1 − 3e^(−3s)), s > 0; (b) L{f} = (2/s)(2 − e^(−3s)), s > 0; (c) L{f} = (2/s)(3 − e^(−3s)), s > 0; (d) None of them; (e) L{f} = (1/s)(1 − 2e^(−3s)), s > 0.
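A numeric sanity check for the Laplace transform question above, comparing a direct quadrature of the defining integral against the candidate (2/s)(2 − e^(−3s)):

```python
import math

def f(t):
    # piecewise function from the question: 4 on 0 < t < 3, then 2
    return 4.0 if t < 3 else 2.0

def laplace_numeric(s, upper=60.0, n=200000):
    # trapezoidal approximation of the integral of f(t)*exp(-s*t) on [0, upper];
    # the tail beyond upper is negligible for the s values tested
    h = upper / n
    total = 0.5 * (f(0.0) + f(upper) * math.exp(-s * upper))
    for i in range(1, n):
        t = i * h
        total += f(t) * math.exp(-s * t)
    return total * h

def candidate(s):
    # option (b): (2/s)(2 - e^{-3s})
    return (2.0 / s) * (2.0 - math.exp(-3.0 * s))

for s in (0.5, 1.0, 2.0):
    assert abs(laplace_numeric(s) - candidate(s)) < 1e-3
print("option (b) matches the numeric transform")
```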
##### The nuclear envelope reforms during: (a) anaphase (b) telophase (c) prophase (d) interphase (e) metaphase
##### If vectors A and B have magnitudes 12 and 15, respectively, and the angle between the two when they are drawn starting from the same point is 110 degrees, what is the scalar product of these two vectors? a. -76 b. -62 c. -90 d. -47 e. -170
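The scalar (dot) product above follows directly from |A||B|cos θ:

```python
import math

# Dot product from magnitudes and the included angle: A · B = |A||B| cos(theta)
A, B, theta_deg = 12.0, 15.0, 110.0
dot = A * B * math.cos(math.radians(theta_deg))
print(round(dot))  # -> -62, matching choice (b)
```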
##### I'm not sure how to use plot to display the magnitude of the amplitude. Generate a Bode plot for the following frequency response system using MATLAB. Can you guess the MATLAB function which generates a Bode plot? Repeat (No. 8), but this time use this function to plot the amplitude and phase response of the system. Use h...
##### Question 2: Identifying problems in electron microscopy data (12 marks). The figure below shows two images taken from a paper describing a cryo-EM study of a protein complex. Panel A shows a cryo-electron micrograph of the complex, and panel B shows class averages generated by averaging together individual images of the complex. Explain why it may not be possible to obtain a 3D reconstruction of the complex based on the data shown in panel B above. (b) The images in panel B below are class averages of a p...
##### Question No. 1: Ventura Company, Quality Cost Report for Years 2017 and 2018

| | 2017 Amount | 2017 Percentage | 2018 Amount | 2018 Percentage |
| --- | --- | --- | --- | --- |
| Prevention cost | 650,000 | 1.30% | 1,000,000 | 2.00% |
| Appraisal cost | 1,200,000 | 2.40% | 1,500,000 | 3.00% |
| Internal failure cost | 2,000,00... | | | |
##### The marginal cost, in dollars per unit, for a company's portable heater is C′(q) = 48 − 0.03q + 0.00002q², and the marginal revenue is R′(q) = 44 − 0.007q. Find the area A between the graphs of these functions for 0 ≤ q ≤ 130. (Note that C′(q) > R′(q) for 0 ≤ q ≤ 130. Round your answer to two decimal places.)
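The area in the marginal cost/revenue question above is the integral of C′(q) − R′(q) = 4 − 0.023q + 0.00002q² over [0, 130], which has an elementary antiderivative:

```python
# Antiderivative of D(q) = C'(q) - R'(q) = 4 - 0.023*q + 0.00002*q**2
def F(q):
    return 4 * q - 0.023 * q**2 / 2 + 0.00002 * q**3 / 3

area = F(130) - F(0)
print(round(area, 2))  # -> 340.3
```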
##### Let X be a continuous random variable with a standard normal distribution. a. Verify that P(−2 < X < 2) > 0.75. b. Compute E(|X|).
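Both parts of the standard-normal question above can be checked numerically; E(|X|) for a standard normal is the known closed form √(2/π):

```python
import math

def Phi(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

p = Phi(2) - Phi(-2)             # P(-2 < X < 2)
e_abs = math.sqrt(2 / math.pi)   # E|X| for a standard normal
print(round(p, 4), round(e_abs, 4))  # -> 0.9545 0.7979
assert p > 0.75
```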
##### Using THREE (3) practical examples from your own cultural background, demonstrate the assertion that culture is...
##### A water rocket has been constructed using a ~liter water bottle for its compression chamber. The bottle mouth has an inner radius of 0.01 m. In preparation for the rocket's launch, the air in the rocket is pressurized to a gauge pressure of Y psi. Before pressurization, 40% of the bottle's volume was filled with water. The mass of the empty rocket (everything except water and added air) is 0.15 kg. The additional air added to the bottle during "inflation" adds another .005 kg to the total mass of the ro...
##### Two parameters of particular relevance during expression with viral vectors are the MOI and the time of infection (TOI). Time of infection refers to the cell concentration at which virus is added to the culture. The TOI should be late enough to allow for sufficient accumulation of cells, but should be early enough for nutrients to remain in abundant concentration to sustain recombinant protein production. The MOI utilized defines the fraction of the population that is infected at the TOI. At MOI hig...
##### The angles of a triangle have the ratio 3:2:1. What is the measure of the smallest angle?
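The triangle-angle question above reduces to splitting 180° in the ratio 3:2:1:

```python
# Angles in ratio 3:2:1 must sum to 180 degrees
ratio = [3, 2, 1]
unit = 180 / sum(ratio)             # degrees per ratio unit
angles = [r * unit for r in ratio]  # [90.0, 60.0, 30.0]
print(min(angles))  # -> 30.0
```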
https://es.mathworks.com/help/stats/generalizedlinearmixedmodel.fixedeffects.html
# fixedEffects
Estimates of fixed effects and related statistics
## Syntax
`beta = fixedEffects(glme)`
`[beta,betanames] = fixedEffects(glme)`
`[beta,betanames,stats] = fixedEffects(glme)`
`[___] = fixedEffects(glme,Name,Value)`
## Description
`beta = fixedEffects(glme)` returns the estimated fixed-effects coefficients, `beta`, of the generalized linear mixed-effects model `glme`.
`[beta,betanames] = fixedEffects(glme)` also returns the names of estimated fixed-effects coefficients in `betanames`. Each name corresponds to a fixed-effects coefficient in `beta`.
`[beta,betanames,stats] = fixedEffects(glme)` also returns a table of statistics, `stats`, related to the estimated fixed-effects coefficients of `glme`.
`[___] = fixedEffects(glme,Name,Value)` returns any of the output arguments in previous syntaxes using additional options specified by one or more `Name,Value` pair arguments. For example, you can specify the confidence level, or the method for computing the approximate degrees of freedom for the t-statistic.
## Input Arguments
Generalized linear mixed-effects model, specified as a `GeneralizedLinearMixedModel` object. For properties and methods of this object, see `GeneralizedLinearMixedModel`.
### Name-Value Arguments
Specify optional pairs of arguments as `Name1=Value1,...,NameN=ValueN`, where `Name` is the argument name and `Value` is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.
Before R2021a, use commas to separate each name and value, and enclose `Name` in quotes.
Significance level, specified as the comma-separated pair consisting of `'Alpha'` and a scalar value in the range [0,1]. For a value α, the confidence level is 100 × (1 – α)%.
For example, for 99% confidence intervals, you can specify the confidence level as follows.
Example: `'Alpha',0.01`
Data Types: `single` | `double`
Method for computing approximate degrees of freedom, specified as the comma-separated pair consisting of `'DFMethod'` and one of the following.
| Value | Description |
| --- | --- |
| `'residual'` | The degrees of freedom value is assumed to be constant and equal to n – p, where n is the number of observations and p is the number of fixed-effects coefficients. |
| `'none'` | The degrees of freedom value is set to infinity. |
Example: `'DFMethod','none'`
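For instance (again assuming `glme` is a fitted `GeneralizedLinearMixedModel` object), setting `'DFMethod'` to `'none'` reports the degrees of freedom as infinite, so the t-tests effectively become z-tests:

```matlab
% Compute fixed-effects statistics without a degrees-of-freedom
% correction; the DF column of stats is reported as Inf.
[~,~,stats] = fixedEffects(glme,'DFMethod','none');
```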
## Output Arguments
Estimated fixed-effects coefficients of the fitted generalized linear mixed-effects model `glme`, returned as a vector.
Names of fixed-effects coefficients in `beta`, returned as a table.
Fixed-effects estimates and related statistics, returned as a dataset array that has one row for each of the fixed effects and one column for each of the following statistics.
| Column Name | Description |
| --- | --- |
| `Name` | Name of the fixed-effects coefficient |
| `Estimate` | Estimated coefficient value |
| `SE` | Standard error of the estimate |
| `tStat` | t-statistic for a test that the coefficient is 0 |
| `DF` | Estimated degrees of freedom for the t-statistic |
| `pValue` | p-value for the t-statistic |
| `Lower` | Lower limit of a 95% confidence interval for the fixed-effects coefficient |
| `Upper` | Upper limit of a 95% confidence interval for the fixed-effects coefficient |
When fitting a model using `fitglme` and one of the maximum likelihood fit methods (`'Laplace'` or `'ApproximateLaplace'`), if you specify the `'CovarianceMethod'` name-value pair argument as `'conditional'`, then `SE` does not account for the uncertainty in estimating the covariance parameters. To account for this uncertainty, specify `'CovarianceMethod'` as `'JointHessian'`.
When fitting a GLME model using `fitglme` and one of the pseudo likelihood fit methods (`'MPL'` or `'REMPL'`), `fixedEffects` bases the fixed effects estimates and related statistics on the fitted linear mixed-effects model from the final pseudo likelihood iteration.
## Examples
`load mfr`
This simulated data is from a manufacturing company that operates 50 factories across the world, with each factory running a batch process to create a finished product. The company wants to decrease the number of defects in each batch, so it developed a new manufacturing process. To test the effectiveness of the new process, the company selected 20 of its factories at random to participate in an experiment: Ten factories implemented the new process, while the other ten continued to run the old process. In each of the 20 factories, the company ran five batches (for a total of 100 batches) and recorded the following data:
• Flag to indicate whether the batch used the new process (`newprocess`)
• Processing time for each batch, in hours (`time`)
• Temperature of the batch, in degrees Celsius (`temp`)
• Categorical variable indicating the supplier (`A`, `B`, or `C`) of the chemical used in the batch (`supplier`)
• Number of defects in the batch (`defects`)
The data also includes `time_dev` and `temp_dev`, which represent the absolute deviation of time and temperature, respectively, from the process standard of 3 hours at 20 degrees Celsius.
Fit a generalized linear mixed-effects model using `newprocess`, `time_dev`, `temp_dev`, and `supplier` as fixed-effects predictors. Include a random-effects term for intercept grouped by `factory`, to account for quality differences that might exist due to factory-specific variations. The response variable `defects` has a Poisson distribution, and the appropriate link function for this model is log. Use the Laplace fit method to estimate the coefficients. Specify the dummy variable encoding as `'effects'`, so the dummy variable coefficients sum to 0.
The number of defects can be modeled using a Poisson distribution:

$\text{defects}_{ij} \sim \text{Poisson}(\mu_{ij})$

This corresponds to the generalized linear mixed-effects model

$\log(\mu_{ij}) = \beta_0 + \beta_1 \, \text{newprocess}_{ij} + \beta_2 \, \text{time\_dev}_{ij} + \beta_3 \, \text{temp\_dev}_{ij} + \beta_4 \, \text{supplier\_C}_{ij} + \beta_5 \, \text{supplier\_B}_{ij} + b_i,$
where
• ${\text{defects}}_{ij}$ is the number of defects observed in the batch produced by factory $i$ during batch $j$.
• ${\mu }_{ij}$ is the mean number of defects corresponding to factory $i$ (where $i=1,2,...,20$) during batch $j$ (where $j=1,2,...,5$).
• ${\text{newprocess}}_{ij}$, ${\text{time}\text{_}\text{dev}}_{ij}$, and ${\text{temp}\text{_}\text{dev}}_{ij}$ are the measurements for each variable that correspond to factory $i$ during batch $j$. For example, ${\text{newprocess}}_{ij}$ indicates whether the batch produced by factory $i$ during batch $j$ used the new process.
• ${\text{supplier}\text{_}\text{C}}_{ij}$ and ${\text{supplier}\text{_}\text{B}}_{ij}$ are dummy variables that use effects (sum-to-zero) coding to indicate whether company `C` or `B`, respectively, supplied the process chemicals for the batch produced by factory $i$ during batch $j$.
• ${b}_{i}\sim N\left(0,{\sigma }_{b}^{2}\right)$ is a random-effects intercept for each factory $i$ that accounts for factory-specific variation in quality.
```
glme = fitglme(mfr,'defects ~ 1 + newprocess + time_dev + temp_dev + supplier + (1|factory)', ...
    'Distribution','Poisson','Link','log','FitMethod','Laplace','DummyVarCoding','effects');
```
Compute and display the estimated fixed-effects coefficient values and related statistics.
```
[beta,betanames,stats] = fixedEffects(glme);
stats
```
```
stats =

    Fixed effect coefficients: DFMethod = 'residual', Alpha = 0.05

    Name                  Estimate     SE          tStat       DF    pValue        Lower        Upper
    {'(Intercept)'}          1.4689     0.15988      9.1875    94    9.8194e-15       1.1515       1.7864
    {'newprocess' }        -0.36766     0.17755     -2.0708    94      0.041122     -0.72019    -0.015134
    {'time_dev'   }       -0.094521     0.82849    -0.11409    94       0.90941      -1.7395       1.5505
    {'temp_dev'   }        -0.28317      0.9617    -0.29444    94       0.76907      -2.1926       1.6263
    {'supplier_C' }       -0.071868    0.078024     -0.9211    94       0.35936     -0.22679     0.083051
    {'supplier_B' }        0.071072     0.07739     0.91836    94       0.36078    -0.082588      0.22473
```
The returned results indicate, for example, that the estimated coefficient for `temp_dev` is –0.28317. Its large $p$-value, 0.76907, indicates that it is not a statistically significant predictor at the 5% significance level. Additionally, the confidence interval boundaries `Lower` and `Upper` indicate that the 95% confidence interval for the coefficient for `temp_dev` is [–2.1926, 1.6263]. This interval contains 0, which supports the conclusion that `temp_dev` is not statistically significant at the 5% significance level.
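To inspect the coefficient estimates and their names directly, you can also use the first two output arguments (a minimal sketch, assuming `glme` is the model fitted above):

```matlab
% betanames pairs each element of beta with its coefficient
% name, in the same row order as stats.
[beta,betanames] = fixedEffects(glme);
disp(betanames)
disp(beta)
```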