url: string, lengths 14 to 2.42k
text: string, lengths 100 to 1.02M
date: string, length 19
metadata: string, lengths 1.06k to 1.1k
https://plainmath.net/high-school-geometry/82284-i-m-not-great-at-mathematics-so-i-m-sure
Cierra Castillo 2022-07-13 I'm not great at mathematics, so I'm sure this is trivial to most. I have been searching around, however, and have not been able to find how to figure out the incircle of a circle sector, or, in other words, the point inside a sector that is furthest away from the radii and the arc. The simpler the solution, the better. Charlee Gentry Expert Assume the sector angle is $2\theta$ and the radius is 1, with center at $(0,0)$ and one end of the arc at $(1,0)$. Then the angle bisector of the sector, on which the center $O$ of the incircle must lie, is at angle $\theta$ from the $x$ axis. Writing $r$ for the distance from $(0,0)$ to $O$, we have $O=(r\cos\theta, r\sin\theta)$, and the incircle's radius is $1-r$ (tangency to the arc). Tangency to the radii requires $r\sin\theta = 1-r$, so $r = \frac{1}{1+\sin\theta}$. Of course the whole diagram may have to be rescaled and rotated depending on how your sector is situated.
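The construction above is easy to turn into code. Here is a minimal Python sketch (the function name and return convention are my own); it computes the incircle center and radius for a sector of half-angle theta with the bisector at angle theta from the x-axis:

```python
import math

def sector_incircle(theta, R=1.0):
    """Incircle of a sector of radius R and half-angle theta (radians),
    centered at the origin with bisector at angle theta from the x-axis.

    Tangency to the arc gives d + r = R, where d is the distance of the
    incircle center O from the origin; tangency to the bounding radii
    gives r = d*sin(theta). Solving gives d = R/(1 + sin(theta)).
    """
    d = R / (1.0 + math.sin(theta))   # distance from origin to O
    r = R - d                         # incircle radius = R*sin(theta)/(1+sin(theta))
    center = (d * math.cos(theta), d * math.sin(theta))
    return center, r
```

For a half-disk (theta = pi/2) this gives a circle of radius 1/2 centered at (0, 1/2), as expected.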
2023-02-02 20:43:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 35, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8553670644760132, "perplexity": 181.7301800206318}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500041.18/warc/CC-MAIN-20230202200542-20230202230542-00241.warc.gz"}
https://www.tutorialspoint.com/how-to-create-a-data-frame-in-r-with-repeated-rows-by-a-sequence-of-number-of-times-or-by-a-fixed-number-of-times
# How to create a data frame in R with repeated rows by a sequence of number of times or by a fixed number of times? There are times when duplicated rows in a data frame are required; mainly, they are used to extend the data size instead of collecting more raw data. This saves time, but it introduces some bias, so it is not recommended. Even though it is not recommended, it sometimes becomes necessary, for example, when it is impossible to collect more raw data. If we do so, then we must state it in our analysis report. In R, we can use the rep function with seq_len and nrow to create a data frame with repeated rows. ## Example Consider the below data frame df − > x<-1:10 > y<-letters[1:10] > df<-data.frame(x,y) Creating a new data frame in which the rows are printed once more after the original rows − > df[rep(seq_len(nrow(df)), times = 2), ] x y 1 1 a 2 2 b 3 3 c 4 4 d 5 5 e 6 6 f 7 7 g 8 8 h 9 9 i 10 10 j 1.1 1 a 2.1 2 b 3.1 3 c 4.1 4 d 5.1 5 e 6.1 6 f 7.1 7 g 8.1 8 h 9.1 9 i 10.1 10 j Creating a new data frame in which the duplicate rows are printed one by one − > df[rep(seq_len(nrow(df)), each = 2), ] x y 1 1 a 1.1 1 a 2 2 b 2.1 2 b 3 3 c 3.1 3 c 4 4 d 4.1 4 d 5 5 e 5.1 5 e 6 6 f 6.1 6 f 7 7 g 7.1 7 g 8 8 h 8.1 8 h 9 9 i 9.1 9 i 10 10 j 10.1 10 j Repeating each row by a sequence of numbers − > df[rep(seq_len(nrow(df)), times = 1:10), ] x y 1 1 a 2 2 b 2.1 2 b 3 3 c 3.1 3 c 3.2 3 c 4 4 d 4.1 4 d 4.2 4 d 4.3 4 d 5 5 e 5.1 5 e 5.2 5 e 5.3 5 e 5.4 5 e 6 6 f 6.1 6 f 6.2 6 f 6.3 6 f 6.4 6 f 6.5 6 f 7 7 g 7.1 7 g 7.2 7 g 7.3 7 g 7.4 7 g 7.5 7 g 7.6 7 g 8 8 h 8.1 8 h 8.2 8 h 8.3 8 h 8.4 8 h 8.5 8 h 8.6 8 h 8.7 8 h 9 9 i 9.1 9 i 9.2 9 i 9.3 9 i 9.4 9 i 9.5 9 i 9.6 9 i 9.7 9 i 9.8 9 i 10 10 j 10.1 10 j 10.2 10 j 10.3 10 j 10.4 10 j 10.5 10 j 10.6 10 j 10.7 10 j 10.8 10 j 10.9 10 j Published on 10-Aug-2020 12:20:16
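For comparison, the three repetition patterns above can be mimicked in plain Python, with a list of (x, y) tuples standing in for the data frame (a sketch of the idea, not the R semantics):

```python
# Rows of the toy data frame: x = 1..10, y = 'a'..'j'
df = list(zip(range(1, 11), "abcdefghij"))

# times = 2: the whole block of rows repeated after itself
block_twice = df * 2

# each = 2: every row duplicated in place, one after the other
each_twice = [row for row in df for _ in range(2)]

# times = 1:10: the i-th row repeated i times
by_seq = [row for i, row in enumerate(df, start=1) for _ in range(i)]
```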
2021-10-24 19:48:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28749361634254456, "perplexity": 10735.441813373216}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587593.0/warc/CC-MAIN-20211024173743-20211024203743-00356.warc.gz"}
http://mathhelpforum.com/calculus/7634-calculus-hw-1-differenciable.html
# Thread: Calculus HW #1 differentiable 1. ## Calculus HW #1 differentiable 1. Write the precise definition for a function f(x) defined for all values of x to be a differentiable function. 2. Originally Posted by Nimmy 1. Write the precise definition for a function f(x) defined for all values of x to be a differentiable function. The derivative of a function $f$ is the function $f'$ defined, at every value $x$ in the domain for which $\lim_{\Delta x\to 0}\frac{f(x+\Delta x)-f(x)}{\Delta x}$ exists, to be the value of that limit. The function $f$ is differentiable for all values of $x$ when this limit exists at every $x$.
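The limit in the definition can be illustrated numerically (using f(x) = x² as an assumed example): the difference quotient at x = 3 approaches f'(3) = 6 as Δx shrinks.

```python
def diff_quotient(f, x, dx):
    """The quotient (f(x + dx) - f(x)) / dx from the definition above."""
    return (f(x + dx) - f(x)) / dx

f = lambda x: x ** 2

# For f(x) = x**2 the quotient at x = 3 equals 6 + dx, so it tends to f'(3) = 6.
approx = [diff_quotient(f, 3.0, 10.0 ** -k) for k in range(1, 6)]
```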
2018-02-20 06:52:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9888571500778198, "perplexity": 338.7656239326116}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812880.33/warc/CC-MAIN-20180220050606-20180220070606-00010.warc.gz"}
https://community.wolfram.com/groups/-/m/t/769704
# Monte Carlo Simulation and log Returns Posted 3 years ago 4548 Views | 5 Replies | 1 Total Likes | I am simulating log returns for an asset using the R command: rnorm(1000, 0, .1) This should give me 1,000 normally distributed returns with mean 0 and standard deviation of 10%. If I take those as log returns and use the cumulative sum of the changes, I will often run into a cumulative change in the price of the asset of less than -100%. This means the asset has not only lost all of its value but is now a liability, which does not make a lot of sense in the real world. Is there a common practice for summing the returns and adjusting for changes greater than 100%? 5 Replies Posted 3 years ago Hi Jesse, Do you mean that this set of random numbers, taken from a distribution with mean 0 and stddev 0.1, is a sequential set of log returns? So each number in the sequence is Log(Vf/Vi), where Vi and Vf are the initial and final values for a period? If so, then the log return of the series is the sum of the components. But a value of zero means the return is E^0, which is 1 -- no gain, no loss. Finite values in the sequence always represent a fractional gain or loss, which can never zero out the value. A period in which the investment loses all value is represented by a log return of Log(0/Vi), meaning Log(0). This is not defined, but might be defined informally as -Infinity, which takes the sum to -Infinity, which represents a loss of all value. Kind regards, David Posted 3 years ago Yes, thank you. I was misinterpreting the summed value. Posted 3 years ago Easy to do! To make a note here, also for myself: So, when you assume that returns are normally distributed (and you have a clean conscience about that), you mean that $$\frac{P_1-P_0}{P_0},\frac{P_2-P_1}{P_1},\frac{P_3-P_2}{P_2},...$$ are normally distributed, which is fine!
This goes well in line with Geometric Brownian Motion ideas. But now, when you take the logs of these returns and sum, what do you actually get? $$\log \left(\frac{P_1-P_0}{P_0}\right) +\log \left(\frac{P_2-P_1}{P_1}\right) +\log \left(\frac{P_3-P_2}{P_2}\right)+ ...$$ Nothing useful! What you actually want is the continuous-time return: $$\log\left( \frac{P_t}{P_0} \right)$$ which can be computed from: $$\log \left(\frac{P_1}{P_0}\right) +\log \left(\frac{P_2}{P_1}\right) +\log \left(\frac{P_3}{P_2}\right)+ ...$$ But, to get this, you need to take logs of (returns + 1). Now, we can illustrate all this with some Geometric Brownian Motion, which, as I hinted above, assumes that relative changes in asset prices are normally distributed: $$dS_t=\mu S_t dt+\sigma S_t dW_t$$ Here I have found an example where the seemingly continuous-time return is even lower than -2. data = RandomFunction[ GeometricBrownianMotionProcess[0, .1, 100], {0, 100, .001}]; ListLinePlot[data, Filling -> Axis, PlotRange -> All, ImageSize -> Large] Log[data["Values"][[2 ;;]]/data["Values"][[;; -2]]] // Total -2.31902 same as: Log[data["Values"][[-1]]/data["Values"][[1]]] You need to be very careful how you interpret this result: -2.31902. It is the instantaneous return r multiplied by the time t (in our case 100). So, I can compute the time-t price by the formula: $$S_t = S_0 e^{r t}$$ and it implies a smooth price journey. You cannot use it to judge the holding-period return R: $$R= e^{r t}-1$$ which under our assumptions will never be below -1. In fact, it will never reach -1.
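The point about taking logs of (returns + 1) is easy to verify numerically. The sketch below (plain Python standing in for the R/Mathematica code in the thread) compounds normally distributed simple returns and checks that the summed log1p returns reproduce log(P_t/P_0):

```python
import math
import random

random.seed(0)
# Simple (relative) returns, as in rnorm(1000, 0, .1)
returns = [random.gauss(0.0, 0.1) for _ in range(1000)]

# Compound the price path from P_0 = 1.
price = 1.0
for r in returns:
    price *= (1.0 + r)

# Continuous-time return: sum of log(1 + r), which equals log(P_t / P_0).
log_total = sum(math.log1p(r) for r in returns)

# The holding-period return exp(log_total) - 1 can approach but never reach -1,
# because every factor (1 + r) stays positive.
holding_period_return = math.exp(log_total) - 1.0
```

Summing the raw returns (or logs of the raw returns) instead of log1p values is exactly the misinterpretation discussed above.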
2019-04-25 16:01:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6869760751724243, "perplexity": 810.746950209807}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578727587.83/warc/CC-MAIN-20190425154024-20190425180024-00214.warc.gz"}
https://gmatclub.com/forum/the-financial-aid-office-of-a-certain-college-has-agreed-to-provide-277543.html
# The financial aid office of a certain college has agreed to provide Senior Manager Harshgmat, joined 08 Jun 2013, posted 29 Sep 2018, 03:19. Difficulty: 25% (medium). Question stats: 90% (01:40) correct, 10% (02:25) wrong, based on 31 sessions. The financial aid office of a certain college has agreed to provide a $9,000 grant to every student whose total family income (not including the grant) is $22,500 or less. However, for every $40 in family income above this figure, the grant is reduced by $10. If Cedric's award is reduced 40%, what is his family income? A) $26,100 B) $27,000 C) $31,500 D) $36,900 E) $44,100 Manager pandeyashwin, posted 29 Sep 2018, 11:04: Cedric's award is reduced 40% = 9000*.4 = 3600. For every $40 above, $10 is reduced. Therefore, his family income was 22500 + (3600*40) = 36900 Intern, posted 01 Oct 2018, 04:44: pandeyashwin wrote: "Therefore, his family income was 22500 + (3600*40) = 36900" It should be 22500 + (3600*40/10) = 36900. Intern, posted 01 Oct 2018, 05:30: Cedric's award = $$60/100(9000) = 5,400$$. For every $$40$$ higher than $$22,500$$ the grant gets reduced by $$10$$: here the grant got reduced by $$3,600$$, i.e., $$3600/10 = 360$$ times, by $$40$$. So, the family income must be $$22,500 + (360*40) = 36,900$$. Ans - D.
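The arithmetic in the replies above can be checked directly:

```python
grant = 9000
threshold = 22500

reduction = 0.40 * grant           # the grant is reduced by $3,600
# Each $10 of reduction corresponds to $40 of family income above $22,500.
extra_income = (reduction / 10) * 40
family_income = threshold + extra_income   # answer choice D
```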
2018-10-22 22:55:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4884977340698242, "perplexity": 7444.709666723761}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583515555.58/warc/CC-MAIN-20181022222133-20181023003633-00013.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/elementary-and-intermediate-algebra-concepts-and-applications-6th-edition/chapter-6-rational-expressions-and-equations-6-1-rational-expressions-6-1-exercise-set-page-380/70
# Chapter 6 - Rational Expressions and Equations - 6.1 Rational Expressions - 6.1 Exercise Set - Page 380: 70 An example of how to prove this is below. #### Work Step by Step One way is to multiply $b-a$ by $-1$ as follows: $$-1(b-a) = -b-(-a) = -b+a = a-b$$ Thus, since multiplying one expression by $-1$ yields the other expression, the two expressions are opposites.
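A quick numeric spot-check of the claim, with arbitrarily chosen sample values:

```python
# Multiplying b - a by -1 should give a - b for any a and b.
samples = [(3, 7), (-2, 5), (0.5, 0.25)]
checks = [-1 * (b - a) == a - b for a, b in samples]
```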
2019-12-07 07:36:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7698765993118286, "perplexity": 951.5586494221552}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540496492.8/warc/CC-MAIN-20191207055244-20191207083244-00051.warc.gz"}
https://webwork.libretexts.org/webwork2/html2xml?answersSubmitted=0&sourceFilePath=Library/PCC/BasicMath/UnitConversion/UnitConversionTime80.pg&problemSeed=1234567&courseID=anonymous&userID=anonymous&course_password=anonymous&showSummary=1&displayMode=MathJax&problemIdentifierPrefix=102&language=en&outputformat=libretexts
Do the following unit conversion. Use decimals in your answer if needed. $\displaystyle{ 72 \text{ seconds} = }$ $\displaystyle{ \text{ hours} }$
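For reference, the conversion divides by 3600, since there are 60 seconds per minute and 60 minutes per hour:

```python
seconds = 72
hours = seconds / (60 * 60)   # 72 s = 0.02 h
```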
2021-08-06 03:14:57
{"extraction_info": {"found_math": true, "script_math_tex": 2, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3253522515296936, "perplexity": 1947.2524336887961}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152112.54/warc/CC-MAIN-20210806020121-20210806050121-00395.warc.gz"}
https://socratic.org/questions/how-does-the-frequency-of-a-wave-affect-the-wavelength
# How does the frequency of a wave affect the wavelength? A specific type of wave in a specific medium has a constant speed. For example, electromagnetic waves have a speed of $3.0 \times 10^{8}\ \mathrm{m\,s^{-1}}$ in a vacuum, whereas in glass they have a speed of approximately $2.0 \times 10^{8}\ \mathrm{m\,s^{-1}}$. Wave speed, frequency and wavelength are related by this equation: $v = f\lambda$. If the medium does not change then $v$ is a constant and wavelength and frequency are inversely proportional.
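A short sketch of the relation λ = v / f; the frequency value here is an assumed example, roughly in the visible-light range:

```python
v_vacuum = 3.0e8    # m/s, speed of light in vacuum
v_glass = 2.0e8     # m/s, approximate speed in glass
f = 6.0e14          # Hz, assumed example frequency

# lambda = v / f: at fixed frequency, the wavelength scales with the speed.
wavelength_vacuum = v_vacuum / f
wavelength_glass = v_glass / f
```

At the same frequency, the slower medium gives the shorter wavelength, which is the inverse-proportionality statement above with the roles of v and f swapped.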
2019-08-21 22:13:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 4, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6790159940719604, "perplexity": 221.28351027498206}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316549.78/warc/CC-MAIN-20190821220456-20190822002456-00112.warc.gz"}
https://algorithmia.com/blog/using-h2o-ai-to-classify-domains-in-production
Modern cyber attacks, such as Botnets and Ransomware, are becoming increasingly dependent on (seemingly) randomly generated domain names. Those domains are used as a way to establish Command & Control with their owners, which is a technique called Domain Fluxing. The recent WannaCry ransomware was famously stopped simply by registering one of those domain names. The ability to quickly classify a domain name as *safe* or *malicious* is a critical task in the cybersecurity world. It can help alert security experts of any suspicious activity or even block that activity. Such a system has two requirements: • It needs to be accurate; you don’t want to block your users from accessing safe websites • It needs to be scalable, able to handle thousands of transactions per second There are plenty of approaches to this problem, especially in the academic world (S. Yadav – 2010, J. Munro – 2013). The fine folks at H2O.ai also have an excellent code sample we found here. This blog post will briefly describe how H2O’s implementation works and how you can deploy and scale it on Algorithmia. ### How it works The classifier is a logistic regression model trained on a pre-processed labeled dataset. The dataset contains a domain name on each line and a class label, “legit” or “dga”. For the pre-processing stage, a Python script was created to extract features from each domain name and feed it as an input to the model. The extracted features are: • Shannon entropy • Character length of the domain name • Proportion of vowel to non-vowel characters • Number of common words found in the domain name (from this word list) After extracting those features from each domain in the dataset, H2O’s H2OGeneralizedLinearEstimator was used to train the model and print out the confusion matrix.
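Three of the four features listed above are simple to compute. The sketch below is a rough Python version of the idea (not the actual H2O pre-processing script; the common-word count is omitted because it depends on the word list):

```python
import math
from collections import Counter

VOWELS = set("aeiou")

def domain_features(domain):
    """Extract length, Shannon entropy, and vowel proportion for a domain name."""
    n = len(domain)
    counts = Counter(domain)
    # Shannon entropy of the character distribution, in bits.
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    p_vowels = sum(1 for ch in domain if ch in VOWELS) / n
    return {"length": n, "entropy": entropy, "p_vowels": p_vowels}
```

Random-looking DGA domains tend to have higher entropy and a lower vowel proportion than dictionary-based names, which is what gives the model its signal.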
Here’s the code used for training: print('\nModel: Logistic regression with regularization') model = H2OGeneralizedLinearEstimator(model_id='MaliciousDomainModel', family='binomial', alpha=0, Lambda=1e-5) model.train(x=['length', 'entropy', 'p_vowels', 'num_words'], y='malicious', training_frame=train, validation_frame=valid) You can take a look at the entire script used for training here. ### Deploying to Algorithmia ##### Note: this demo uses an outdated method for deploying H2O models using Jython. For a more up-to-date example which loads a POJO directly in Java, see https://github.com/algorithmiaio/sample-apps/tree/master/algo-dev-demo/h2o The great thing about H2O is the ability to extract trained models as POJO files for high-performance classification. Those POJO files are easy to deploy to Algorithmia as regular Java algorithms. You can take a look at the extracted POJO file here – keep in mind this is mostly computer-generated. One caveat about this algorithm is that the pre-processing stage is done with Python and the classification stage is done with Java. Typically this would be an easy task to do with Algorithmia, since Algorithmia enables chaining of algorithms from different programming languages. However, because this algorithm was originally developed outside of Algorithmia, the author chose to use Jython to run the pre-processing Python script. The downside is that it complicates the runtime environment; the upside, however, is that everything can run within the same compute node, achieving much lower latency. Our objective is to run the algorithm as-is without many changes to the original code. We created a simple wrapper that points to the location of the Jython JAR file and the trained model. You can see our code here. ### Scaling the H2O.ai model At this stage the algorithm was running smoothly on Algorithmia with an average runtime of 10 ms. One trick we’ve learned from scaling similar pipelines is to enable batch scoring.
In this case, classifying 10 domains takes only 17 ms. Now our algorithm can take a single domain name (a single string) or a batch of domain names (an array of strings). ### Performance metrics So how does it scale? It scales pretty well. Algorithmia works with the concept of “Slots”. A compute node in the Algorithmia cluster can hold a configurable number of Slots, which are Docker containers initialized just-in-time to fulfill an incoming request. When an API call is made, the request is routed to a compute node; that compute node assigns it to a Slot, loads the algorithm (or model) into that Slot, feeds it the JSON input, and returns the JSON output all the way back to the client that made the API call. The Algorithmia cluster makes intelligent decisions as to which Slots to leave “loaded” (i.e., in memory) to process additional requests, and which to “evacuate” (i.e., destroy the container) to release those resources for another API call from another user. An initialized Slot is never shared across users or algorithms, ensuring complete memory isolation in a multi-tenant environment. You can read more about how all this works from our blog post on Building an OS for AI. In our benchmark above, we make 50 parallel API calls, each classifying 10 domain names in batches, over and over. That’s 10 transactions per API call. The Algorithmia cluster assigns those incoming API calls across the available compute nodes, resulting in a fully horizontally distributed experience. The client making those calls does not need to configure anything or do any devops planning ahead of time, such as launching servers or warming up containers. From the benchmark, we go from 0 tps (transactions per second) to 4,750 tps in a couple of seconds – completely devops-free. ### Conclusion This was a great exercise to show how Algorithmia supports H2O models right out of the box, and, more importantly, the level of production readiness and scale you can achieve in a multi-tenant environment.
If you have an H2O model you’d like to productionize, send us a note on info@algorithmia.com and we’ll be happy to help!
2021-10-15 22:41:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1921735405921936, "perplexity": 2644.0429381568983}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323583087.95/warc/CC-MAIN-20211015222918-20211016012918-00128.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-6th-edition/chapter-5-section-5-7-factoring-by-special-products-exercise-set-page-309/19
## Intermediate Algebra (6th Edition) $4(4x+5)(4x-5)$ Factoring out the $GCF= 4$ results in $4(16x^2-25)$. Using $a^2-b^2=(a+b)(a-b)$, the factoring of the difference of two squares, then, \begin{array}{l} 4(16x^2-25) \\= 4(4x+5)(4x-5) .\end{array}
2019-03-26 01:00:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5762317776679993, "perplexity": 3781.967708424146}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912204736.6/warc/CC-MAIN-20190325234449-20190326020449-00152.warc.gz"}
https://zbmath.org/?q=an:0391.10034
# zbMATH — the first resource for mathematics Sum of digits to different bases and mutual singularity of their spectral measures. (English) Zbl 0391.10034 ##### MSC: 11K06 General theory of distribution modulo $1$ 11K16 Normal numbers, radix expansions, Pisot numbers, Salem numbers, good lattice points, etc. 11K65 Arithmetic functions in probabilistic number theory 11K55 Metric theory of other algorithms and expansions; measure and Hausdorff dimension
2021-03-09 02:03:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6151865720748901, "perplexity": 4077.8158936867994}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178385534.85/warc/CC-MAIN-20210308235748-20210309025748-00026.warc.gz"}
https://pure.au.dk/portal/da/publications/on-regularity-of-the-logarithmic-forward-map-of-electrical-impedance-tomography(b0961a95-075b-44aa-9d02-9b5157214414).html
# Institut for Matematik ## On regularity of the logarithmic forward map of electrical impedance tomography Publication: journal article, research, peer-reviewed • Henrik Garde • Nuutti Hyvönen, Department of Mathematics and Systems Analysis, Aalto University, Finland • Topi Kuutela, Department of Mathematics and Systems Analysis, Aalto University, Finland This work considers properties of the logarithm of the Neumann-to-Dirichlet boundary map for the conductivity equation in a Lipschitz domain. It is shown that the mapping from the (logarithm of the) conductivity, i.e., the (logarithm of the) coefficient in the divergence term of the studied elliptic partial differential equation, to the logarithm of the Neumann-to-Dirichlet map is continuously Fréchet differentiable between natural topologies. Moreover, for any essentially bounded perturbation of the conductivity, the Fréchet derivative defines a bounded linear operator on the space of square-integrable functions living on the domain boundary, although the logarithm of the Neumann-to-Dirichlet map itself is unbounded in that topology. In particular, it follows from the fundamental theorem of calculus that the difference between the logarithms of any two Neumann-to-Dirichlet maps is always bounded on the space of square-integrable functions. All aforementioned results also hold if the Neumann-to-Dirichlet boundary map is replaced by its inverse, i.e., the Dirichlet-to-Neumann map. Original language: English. SIAM Journal on Mathematical Analysis 52(1), 197-220 (24 pages). ISSN 0036-1410. https://doi.org/10.1137/19M1256476. Published 2020. ID: 196368823
http://vcws.profsvarka24.ru/10583/52
Internal respiration definition

Respiration is the act of respiring: the inhalation and exhalation of air; breathing. More broadly, respiration is the process by which an organism exchanges gases with its environment — the exchange of the respiratory gases (oxygen and carbon dioxide) between the organism and the medium in which it lives, and between the cells of the body and the tissue fluid that bathes them.

Breathing takes place in the lungs. The respiratory system provides oxygen to body tissues for cellular respiration, removes the waste product carbon dioxide, and helps maintain acid–base balance. This physiological definition differs from the biochemical one, which refers to the metabolic process by which an organism obtains energy (in the form of ATP and NADPH) by oxidising nutrients and releasing waste products.

External respiration is the exchange of O2 and CO2 between the external environment and the cells of the body. It is efficient because alveoli and capillaries have very thin walls and are very abundant: the lungs contain about 300 million alveoli with a total surface area of about 75 square meters.

Internal respiration is the intracellular use of O2 to make ATP. It is also called cellular respiration: a series of chemical reactions within cells whereby food molecules such as glucose are broken down — "burned" in the presence of oxygen — and converted into carbon dioxide and water, with the released energy trapped in the form of ATP for use by all the energy-consuming activities of the cell. From prokaryotic bacteria and archaeans to eukaryotic protists, fungi, plants, and animals, all living organisms undergo respiration; in prokaryotic cells it occurs in the cytosol and around the plasma membrane.

Cellular respiration has three main stages: glycolysis, the citric acid (TCA) cycle, and electron transport. Glycolysis occurs in the cell's cytoplasm and splits glucose into two molecules; if oxygen is present, the cell can then take advantage of aerobic respiration via the TCA cycle and oxidative phosphorylation to produce much more usable energy in the form of ATP than any anaerobic pathway. Respiration is accordingly of two types, aerobic and anaerobic; respiration in the presence of oxygen, as in humans, is aerobic. In both types it is glucose (a carbohydrate molecule) that undergoes the reactions. The overall aerobic reaction may be written:

C6H12O6 + 6 O2 + 6 H2O → 12 H2O + 6 CO2 (+ energy as ATP)

A person's respirations are their breaths. Bradypnea, or slow respiration, is a respiratory rate of less than 12 breaths per minute in adults. Respiration can be suppressed by narcotics, whether used for medical purposes or illegally, and in sleep apnea people often have episodes of apnea and a decreased breathing rate mixed with episodes of an elevated breathing rate.
2021-05-15 17:49:54
https://testbook.com/question-answer/the-value-of-5-5-times-5-left-5-div-5--5daff80bf60d5d3b5cee11bc
# The value of $$5 - 5 \times 5 + \left( {5 \div 5\;of\;5} \right) \times 5 - \left( {5\frac{4}{5} \div \frac{{58}}{{30}}\;{\text{of}}\;\frac{3}{2}} \right) \div 2$$ is:

1. (-21)
2. (-16)
3. (-20)
4. (-15)

Option 3 : (-20)

## Detailed Solution

Follow the BODMAS rule to solve this question, in the order given below:

Step 1: parts of the expression enclosed in 'Brackets' must be solved first, and inside a bracket,

Step 2: any 'Of' or 'Exponent' must be solved next,

Step 3: then the parts that contain 'Division' and 'Multiplication' are calculated,

Step 4: and last but not least, the parts that contain 'Addition' and 'Subtraction'.

$$\Rightarrow 5 - 5 \times 5 + \left( {5 \div 5\;of\;5} \right) \times 5 - \left( {5\frac{4}{5} \div \frac{{58}}{{30}}\;{\text{of}}\;\frac{3}{2}} \right) \div 2$$

$$\Rightarrow 5 - 25 + \left( {5 \div 25} \right) \times 5 - \left( {\frac{{29}}{5} \div \left( {\frac{{58}}{{30}} \times \frac{3}{2}} \right)} \right) \div 2$$

⇒ (-20) + (1/5) × 5 - (29/5 ÷ 29/10) ÷ 2

⇒ (-20) + 1 - 2 ÷ 2

⇒ (-20) + 1 - 1

⇒ (-20)
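The evaluation can be double-checked with exact rational arithmetic. This is just a sanity check, not part of the original solution; the one interpretive step is that 'of' binds like multiplication before the division to its left:

```python
from fractions import Fraction as F

# "a of b" is evaluated before the division to its left, i.e. as a bracketed product
result = (
    F(5) - F(5) * 5                           # 5 - 5*5
    + (F(5) / (F(5) * 5)) * 5                 # (5 / (5 of 5)) * 5
    - (F(29, 5) / (F(58, 30) * F(3, 2))) / 2  # (29/5 / (58/30 of 3/2)) / 2
)
print(result)  # -20
```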
2021-12-05 18:15:31
https://www.transtutors.com/questions/approximated-net-realizable-value-method-avignon-parfum-compagnie-makes-three-produ-563421.htm
# (Approximated net realizable value method) Avignon Parfum Compagnie makes three products that can either be sold, or processed further and then sold.

The cost associated with the Avignon joint process is $120,000.

| Product | Units of Output | Sales Price at Split-Off | Separate Costs after Split-Off | Final Sales Price |
| --- | --- | --- | --- | --- |
| Product 1 | 7,500 | $3.00 | $1.00 | $4.25 |
| Product 2 | 10,000 | 2.00 | 0.50 | 3.00 |
| Product 3 | 12,500 | 2.00 | 0.75 | 3.00 |

Per unit, Product 1 weighs 3 ounces, Product 2 weighs 2 ounces, and Product 3 weighs 3 ounces. Assume that all additional processing is undertaken.

a. Allocate the joint cost based on the units of output, weight, and approximated net realizable values at split-off.

b. Assume all products are additionally processed and completed. At the end of the period, the inventories are as follows: Product 1, 500 units; Product 2, 1,000 units; Product 3, 1,500 units. Determine the values of the inventories based on answers obtained in part (a).

ANSWER (b)

METHOD 1: Units of output (30,000 units in total, so the $120,000 joint cost works out to $4 per unit)

| Product | Units | Joint Cost | Joint Cost per Unit | Further Cost per Unit | Total Cost per Unit | Stock Valuation |
| --- | --- | --- | --- | --- | --- | --- |
| Product 1 | 7,500 | 30,000 | 4 | 1 | 5 | 2,500 |
| Product 2 | 10,000 | 40,000 | 4 | 0.5 | 4.5 | 4,500 |
| Product 3 | 12,500 | 50,000 | 4 | 0.75 | 4.75 | 7,125 |

METHOD 2: Weight in ounces

| Product | Weight (oz) | Joint Cost | Joint Cost per Unit | Further Cost per Unit | Total Cost per Unit | Stock Valuation |
| --- | --- | --- | --- | --- | --- | --- |
| Product 1 | 22,500 | 33,750 | 4.5 | 1 | 5.5 | 2,750 |
| Product 2 | 20,000 | 30,000 | 3 | 0.5 | 3.5 | 3,500 |
| Product 3 | 37,500 | 56,250 | 4.5 | 0.75 | ... | ... |
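The two allocations in the answer (by units of output and by weight) can be reproduced with a short script. This is a sketch using only the figures from the problem; the variable names and helper function are my own:

```python
joint_cost = 120_000

# Per product: (units of output, weight per unit in oz,
#               further cost per unit, ending inventory in units)
products = {
    'Product 1': (7_500, 3, 1.00, 500),
    'Product 2': (10_000, 2, 0.50, 1_000),
    'Product 3': (12_500, 3, 0.75, 1_500),
}

def allocate(weights):
    """Split the joint cost in proportion to the given weights."""
    total = sum(weights.values())
    return {p: joint_cost * w / total for p, w in weights.items()}

# Method 1: allocate by units of output ($4.00 of joint cost per unit)
by_units = allocate({p: u for p, (u, oz, extra, end) in products.items()})

# Method 2: allocate by total weight ($1.50 of joint cost per ounce)
by_weight = allocate({p: u * oz for p, (u, oz, extra, end) in products.items()})

# Part (b), method 1: ending inventory valued at
# (joint cost per unit + further cost per unit) * units on hand
inventory = {
    p: (by_units[p] / u + extra) * end
    for p, (u, oz, extra, end) in products.items()
}
print(by_units)   # {'Product 1': 30000.0, 'Product 2': 40000.0, 'Product 3': 50000.0}
print(inventory)  # {'Product 1': 2500.0, 'Product 2': 4500.0, 'Product 3': 7125.0}
```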
2020-11-26 09:46:34
http://mathoverflow.net/questions/87346/ito-formula-for-discontinuous-function
Ito formula for discontinuous function

To use the classical Ito formula $$f(t,B_t) - f(0,B_0) = \int\limits_0^t f'_s(s,B_s)ds + \frac 12\int\limits_0^t f''_{xx}(s,B_s)ds + \int\limits_0^t f'_x(s,B_s)dB_s$$ $f(t,x)$ needs to be $C^{1,2}([0,\infty)\times\mathbb R)$. Is there any possibility to use it if $f(t,x)$ is piecewise continuously differentiable in $t$ and twice continuously differentiable in $x$? That is, there exist $t_i$, $i=1...n$, such that $f(t,x)\in C^{1,2}((t_i,t_{i+1})\times\mathbb R)$, and $f$ and its derivatives $f^\prime_t$, $f^\prime_x$ and $f^{\prime\prime}_{xx}$ have jump discontinuities at the $t_i$. - If $f$ is continuous in $t$ that still works. If not, you need to add the term $\sum_i \Delta_t f(t_i,B_{t_i})$ to the right hand side, where $\Delta_t f(t,B_t) = f(t_+,B_t)-f(t_-,B_t)$ and the sum goes over those $i$ where $t_i \le t$. I guess. You mean that if $f$ is continuous and its derivatives $f^\prime_t$, $f^\prime_x$ and $f^{\prime\prime}_{xx}$ have jump discontinuities, Ito's lemma still works? If I got you right, could you please give me some book references on that? –  niyazets Feb 6 '12 at 14:45
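Spelled out, the answer's suggestion amounts to the following piecewise version, with the integrals understood piecewise over the intervals of smoothness; this is a transcription of the comment, not a sourced theorem statement:

$$f(t,B_t) - f(0,B_0) = \int\limits_0^t f'_s(s,B_s)ds + \frac 12\int\limits_0^t f''_{xx}(s,B_s)ds + \int\limits_0^t f'_x(s,B_s)dB_s + \sum_{i:\ t_i \le t}\left(f(t_i+,B_{t_i}) - f(t_i-,B_{t_i})\right)$$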
2014-04-17 07:03:28
https://mathoverflow.net/questions/219833/norm-of-derivative-of-rank-one-projector
# Norm of derivative of rank one projector

I asked this question on math.stack but I got no answer, so I try here. Let $\phi(t)$ be a solution of the nonlinear Schroedinger equation $$i\partial_t\phi(t)=-\Delta\phi(t)+(V*|\phi|^2)\phi(t)$$ inside the Hilbert space $L^2(\mathbb{R}^d)$. I know $\phi(t)$ is also supposed to live in $L^\infty\cap L^2$; however, I'm not sure this is really important for the problem. Let's define the projector onto $\phi(t)$ as $$p(t):=\left|\phi(t)\right\rangle\left\langle\phi(t)\right|.$$ In the paper http://arxiv.org/abs/0907.4313 I found a formula stating, without proof, that $$\|\nabla p(t)\|=\|\nabla\phi(t)\|.$$ I was trying to figure out how to prove this, but I got really confused. I think by definition and integration by parts I can write $$\|\nabla\phi\|^2=\sum_{i=1}^d\|\nabla^i\phi(t)\|^2=\int d^dx\,\overline{\phi_t(x)}(-\Delta)\phi_t(x),$$ even though I'm not completely sure I'm using the correct formula for a vector-valued $L^2$ function. But I really can't figure out how to write the norm of the vector of operators $\|\nabla p(t)\|$ to try to compare them. Could anybody help me? I haven't checked the paper, but $\nabla p(t)$ most likely means the operator $\psi \mapsto \nabla \phi \cdot \langle \phi | \psi\rangle$, and since by assumption $\phi$ has $L^2$ norm one (you are using it to define a projector), the equality follows. To be more detailed, $\|\nabla p(t) \psi\|^2 = \|\nabla\phi\|^2 \, |\langle \phi | \psi\rangle|^2 \leq \|\nabla \phi\|^2 \|\phi\|^2 \|\psi\|^2$, so the squared operator norm is bounded above by $\|\phi\|^2 \|\nabla\phi\|^2 = \|\nabla\phi\|^2$. To show that this bound is attained, just plug in $\psi = \phi$.
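In finite dimensions the claim is easy to check numerically: the operator (spectral) norm of a rank-one map $\psi \mapsto g\,\langle \phi | \psi\rangle$ is $\|g\|\,\|\phi\|$. A small NumPy sketch, where the vectors are arbitrary stand-ins rather than actual solutions of the equation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Finite-dimensional stand-ins: phi is a unit vector, g plays the role of grad(phi)
phi = rng.normal(size=8) + 1j * rng.normal(size=8)
phi /= np.linalg.norm(phi)
g = rng.normal(size=8) + 1j * rng.normal(size=8)

# Rank-one operator |g><phi| : psi -> g * <phi, psi>
A = np.outer(g, phi.conj())

# Its spectral norm equals ||g|| * ||phi|| = ||g||, since ||phi|| = 1
assert np.isclose(np.linalg.norm(A, 2), np.linalg.norm(g))
```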
2019-11-12 06:01:21
http://www.textbook.ds100.org/ch/10/eda_example.html
# 10.5. Example: Sale Prices for Houses

In this final section, we carry out an exploratory analysis using the ideas from this chapter to guide our investigations and help us understand the visualizations that we make. Although EDA typically begins in the data wrangling stage, for demonstration purposes the data we work with in this section have already been partially cleaned so that we can focus on exploring the features of interest. Note also that we will not discuss creating the visualizations in much detail; that topic is covered in the Data Visualization chapter.

First, we consider the scope of the data (see Chapter 2 for more on scope).

Scope. These data were scraped from the San Francisco Chronicle (SFChron) website. The SFChron published weekly data on the sale of houses in the San Francisco Bay Area. The data are a census of homes sold during this time. That is, the population consists of all sales of houses from April 2003 to December 2008. Since we are working with a census, the population matches the access frame, and the sample consists of the entire population.

Granularity. Each record represents a sale of a home in the SF Bay Area during the above specified time period. This means that if a home was sold twice during this time, it will have two records in the dataset. And if a home in the Bay Area was not sold during this time, it will not appear in the dataset.

File Type. When we inspect the data file sfhousing.csv with CLI tools (see Section X), we find that there are over 500,000 rows in the dataset and that the file indeed consists of comma-separated values.
# The file has 521494 lines
!wc data/sfhousing.csv

521494 2801383 47630469 data/sfhousing.csv

# The file is 45M large, which is reasonable to read into pandas
!du -shH data/sfhousing.csv

45M data/sfhousing.csv

!head -n 4 data/sfhousing.csv

county,city,zip,street,price,br,lsqft,bsqft,year,date,datesold
Alameda County,Alameda,94501,1001 Post Street,689000,4,4484,1982,1950,2004-08-29,NA
Alameda County,Alameda,94501,1001 Santa Clara Avenue,880000,7,5914,3866,1995,2005-11-06,NA
Alameda County,Alameda,94501,1001 Shoreline Drive \#102,393000,2,39353,1360,1970,2003-09-21,NA

With this, we expect that we can read the file into a DataFrame:

# Some rows in the csv have extra commas, but since there are only a few, we
# drop them when reading in the data.
sfh_all

b'Skipping line 30550: expected 11 fields, saw 12\n'
b'Skipping line 343819: expected 11 fields, saw 12\n'

county city zip street ... bsqft year date datesold
0 Alameda County Alameda 94501.00 1001 Post Street ... 1982.00 1950.00 2004-08-29 NaN
1 Alameda County Alameda 94501.00 1001 Santa Clara Avenue ... 3866.00 1995.00 2005-11-06 NaN
2 Alameda County Alameda 94501.00 1001 Shoreline Drive \#102 ... 1360.00 1970.00 2003-09-21 NaN
3 Alameda County Alameda 94501.00 1001 Shoreline Drive \#108 ... 1360.00 1970.00 2004-09-05 NaN
... ... ... ... ... ... ... ... ...
521487 Sonoma County Windsor 95492.00 9992 Wallace Way ... 1158.00 1993.00 2005-05-15 NaN
521488 Sonoma County Windsor 95492.00 9998 Blasi Drive ... NaN NaN 2008-02-17 NaN
521489 Sonoma County Windsor 95492.00 9999 Blasi Drive ... NaN NaN 2008-02-17 NaN
521490 Sonoma County Windsor 95492.00 999 Gemini Drive ... 1092.00 1973.00 2003-09-21 NaN

521491 rows × 11 columns

Feature Types. This dataset does not have an accompanying codebook, but we can determine the features and their types by inspection.
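The read call itself is elided in the extract above. A stand-in showing the "skip the bad lines" behavior on a toy file is sketched below; modern pandas spells the option `on_bad_lines='skip'`, while the book's `Skipping line …` warnings came from the older `error_bad_lines`/`warn_bad_lines` flags (an assumption on my part, not taken from the text):

```python
import io
import pandas as pd

# Toy stand-in for data/sfhousing.csv: the third data line has an extra field
csv_text = (
    "county,city,price\n"
    "Alameda County,Alameda,689000\n"
    "Sonoma County,Windsor,540000,EXTRA\n"
    "Marin County,Novato,820000\n"
)

# on_bad_lines='skip' drops malformed rows instead of raising
# (older pandas: error_bad_lines=False, warn_bad_lines=True)
df = pd.read_csv(io.StringIO(csv_text), on_bad_lines='skip')
```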
sfh_all.info()

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 521491 entries, 0 to 521490
Data columns (total 11 columns):
 #   Column    Non-Null Count   Dtype
---  ------    --------------   -----
 0   county    521491 non-null  object
 1   city      521491 non-null  object
 2   zip       521462 non-null  float64
 3   street    521479 non-null  object
 4   price     521491 non-null  float64
 5   br        421343 non-null  float64
 6   lsqft     435207 non-null  float64
 7   bsqft     444465 non-null  float64
 8   year      433840 non-null  float64
 9   date      521491 non-null  object
 10  datesold  52102 non-null   object
dtypes: float64(6), object(5)
memory usage: 43.8+ MB

Based on the names of the fields, we expect the primary key to consist of the combination of county, city, zip, street address, and date (if the house was sold more than once in the time period). Sale price is our focus in this investigation, so we begin by examining it. To develop your intuition about distributions, make a guess about the shape of the sale price distribution.

## 10.5.1. Understanding Price

A starting guess is that the distribution is highly skewed to the right with a few expensive houses sold. The summary statistics shown below confirm this skewness. The median is closer to the lower quartile than to the upper quartile. Also, the maximum is more than 35 times as large as the median.

# This option stops scientific notation for pandas
pd.set_option('display.float_format', '{:.2f}'.format)
display_df(sfh_all[['price']].describe(), rows=8)

price
count     521491.00
mean      635443.11
std       393968.53
min        22000.00
25%       410000.00
50%       555000.00
75%       744000.00
max     20000000.00

We might ask whether the $20m sale price is simply an anomalous value or whether there are many houses that sold at such a high price. We can zoom in on the right tail of the distribution and compute a few high percentiles.
percs = [95, 97, 98, 99, 99.5, 99.9]
prices = np.percentile(sfh_all['price'], percs, interpolation='lower')
pd.DataFrame({'price': prices}, index=percs)

price
95.00    1295000.00
97.00    1508000.00
98.00    1707000.00
99.00    2110000.00
99.50    2600000.00
99.90    3950000.00

We see that $$99.9\%$$ of the houses sold for under $$\$4M$$, so the $$\$20M$$ sale is indeed a rarity. Let's examine the histogram of sale prices below $$\$4M$$. Fewer than 1 in 1,000 sales exceeded $$\$4M$$. Is the distribution skewed?

under_4m = sfh_all[sfh_all['price'] < 4_000_000]
sns.histplot(data=under_4m, x='price', binwidth=100000)

<AxesSubplot:xlabel='price', ylabel='Count'>

We can confirm that the sale price, even without the top 0.1%, remains highly skewed to the right, with a single mode around \$1m. Next, we plot the histogram of the logarithm-transformed sale price, which is roughly symmetric:

log_prices = under_4m.assign(log_price=np.log10(under_4m['price']))
sns.histplot(data=log_prices, x='log_price', binwidth=0.1)

<AxesSubplot:xlabel='log_price', ylabel='Count'>

## 10.5.2. What Next?

Now that we have an understanding of the distribution of sale price, let's consider the so-what questions posed in the previous section. Why might the data shape matter? Do you have reason to expect that subgroups of the data have different distributions? What comparison might bring added value to the investigation?

An initial attempt to answer the first question is that models and statistics based on symmetric distributions tend to have more robust and stable properties than those based on highly skewed distributions. We address this issue more in the modeling sections of the book. For this reason, we primarily work with the log-transformed sale price. And we might also choose to limit our analysis to sale prices under \$6m, since the super-expensive houses may behave quite differently.

To begin to answer the second and third questions, we turn to our knowledge of the housing market in this time period.
Sale prices for houses rose rapidly in the mid '00s, and then the bottom fell out of the market (Reference). For this reason, the distribution of sale price in, say, 2004 might be quite different than in 2008, right before the crash. To explore this further, we can examine the behavior of prices over time. Or we can fix time and examine the relationships between price and the other features of interest, essentially controlling for a time effect. Both approaches are potentially worthwhile, and we proceed with both.

Another factor to consider is location. You may have heard the expression: there are three things that matter in property: location, location, location. Comparing price across cities might bring added value to our investigation.

One approach to EDA is to narrow our focus. In this way we can control for particular features, such as time. We do this by first limiting the data to sales made in one calendar year, 2004, so rising prices should have a limited impact on the distributions and relationships that we examine. To limit the influence of the very expensive and large houses, we also restrict ourselves to sales below \$4m and houses smaller than 12,000 ft^2. This subset still contains large and expensive houses, but not outrageously so. Later, we further restrict our exploration to a few cities of interest.

def subset(df):
    return df.loc[(df['price'] < 4_000_000) & (df['bsqft'] < 12_000)]

sfh = sfh_all.pipe(subset)
sfh

county city zip street ... bsqft year date datesold
0 Alameda County Alameda 94501.00 1001 Post Street ... 1982.00 1950.00 2004-08-29 NaN
1 Alameda County Alameda 94501.00 1001 Santa Clara Avenue ... 3866.00 1995.00 2005-11-06 NaN
2 Alameda County Alameda 94501.00 1001 Shoreline Drive \#102 ... 1360.00 1970.00 2003-09-21 NaN
... ... ... ... ... ... ... ... ...
521484 Sonoma County Windsor 95492.00 998 Polaris Drive ... 1196.00 1973.00 2007-08-05 NaN
521487 Sonoma County Windsor 95492.00 9992 Wallace Way ... 1158.00 1993.00 2005-05-15 NaN
521490 Sonoma County Windsor 95492.00 999 Gemini Drive ... 1092.00 1973.00 2003-09-21 NaN

443935 rows × 11 columns

For this subset, the shape of the distribution of sale price remains the same: price is still highly skewed to the right. We continue to work with this subset to address the question: are there any potentially important features to create comparisons with/against?

## 10.5.3. Examining other features

In addition to the date of the sale and the location of the house, which we identified earlier as features of interest, a few other features that might be important to our investigation are the size of the house, lot (or property) size, and number of bedrooms. We explore the distributions of these features and their relationship to sale price.

What might we expect the distributions of building and lot size to look like? Since the size of the property is likely related to its price, it seems reasonable to guess that these features are also skewed to the right. The cell below shows the distribution of building size (on the left), and we confirm our intuition. The distribution is unimodal with a peak at about 1,500 ft^2, and many houses are over 2,500 ft^2 in size. The log-transformed building size is nearly symmetric, although it maintains a slight skew. The same is the case for the distribution of lot size.

fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(10, 4))
sns.histplot(data=sfh, x='bsqft', binwidth=200, ax=ax1)
ax1.set_xlabel('Building size (ft^2)')

log_bsq = sfh.assign(log_bsqft=np.log10(sfh['bsqft']))
sns.histplot(data=log_bsq, x='log_bsqft', binwidth=0.05, ax=ax2)
ax2.set_xlabel('Building size (ft^2, log10)')
plt.tight_layout()

What might the relationship between building and property size look like? Given that they are both skewed distributions, we will want to plot the points on a log scale.
In the next cell, the scatter plot on the left is in the original units, and it is difficult to discern the relationship because of the skewness of the two distributions. Most of the points are crowded into the bottom left of the plotting region. The scatterplot on the right reveals a few interesting features: there is a horizontal line along the bottom of the scatter plot where it appears that many houses have the same lot size no matter the building size; and there appears to be a slight positive log-log linear association between lot and building size.

fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(10, 4))
sns.scatterplot(data=sfh, x='bsqft', y='lsqft', alpha=0.1, s=20, ax=ax1)

loglog = sfh.assign(log_bsqft=np.log10(sfh['bsqft']),
                    log_lsqft=np.log10(sfh['lsqft']))
sns.scatterplot(data=loglog, x='log_bsqft', y='log_lsqft', alpha=0.1, s=20, ax=ax2)
plt.tight_layout()

Let's look at some lower quantiles of lot size to try to figure out this value:

percs = [0.5, 1, 1.5, 2, 2.5, 3]
lots = np.percentile(sfh['lsqft'].dropna(), percs, interpolation='lower')
pd.DataFrame({'lot_size': lots}, index=percs)

lot_size
0.50    436.00
1.00    436.00
1.50    436.00
2.00    612.00
2.50    791.00
3.00    871.00

We found something interesting! About 1.5% of the houses have a lot size of 436 ft^2. What does 436 mean? This is an avenue of investigation worth pursuing, which we've left as an exercise for the reader.

Another measure of house size is the number of bedrooms. Since this is a discrete quantitative variable, we can treat it as a qualitative feature and make a bar plot. What do you expect this distribution to look like? Houses in the Bay Area tend to be on the smaller side, so we venture to guess that the distribution will have a peak at three bedrooms and be skewed to the right, with a few houses having 5 or 6 bedrooms. In the following cell, the bar plot confirms that we generally had the right idea. However, we find that there are some houses with as many as 60 bedrooms!
plt.figure(figsize=(12, 2))
sns.countplot(data=sfh, x='br')
plt.xticks(rotation=45);

We transform the number of bedrooms into an ordinal feature by reassigning all values larger than 8 to 8+, and recreate the bar plot for the transformed data. We can see that even after lumping together all of the houses with 8+ bedrooms, they do not amount to many. With this transformation, the rest of the distribution is easier to see. The distribution is nearly symmetric with a peak at 3; nearly the same proportion of houses have 2 or 4 bedrooms, and nearly the same have 1 or 5. There is asymmetry present, with a few houses having 6 or more bedrooms.

eight_up = sfh.loc[sfh['br'] >= 8, 'br'].unique()
new_bed = sfh['br'].replace(eight_up, 8)
sns.countplot(data=sfh.assign(br=new_bed), x='br')

<AxesSubplot:xlabel='br', ylabel='count'>

In EDA, we should also investigate relationships between features and explore relationships between pairs of variables for different subgroups. As mentioned in the ch:eda_guidelines section, we examine the distribution of a feature across subgroups to look for unusual observations in pairs of features and within subgroups. For example, we found the unusual value of 436 ft^2 for lot size, and saw that this small lot size appeared for many building sizes.

Before we proceed, we'll save the transformations done thus far into sfh.

def log_vals(sfh):
    return sfh.assign(log_price=np.log10(sfh['price']),
                      log_bsqft=np.log10(sfh['bsqft']),
                      log_lsqft=np.log10(sfh['lsqft']))

def clip_br(sfh):
    eight_up = sfh.loc[sfh['br'] >= 8, 'br'].unique()
    new_bed = sfh['br'].replace(eight_up, 8)
    return sfh.assign(br=new_bed)

sfh = (sfh_all
       .pipe(subset)
       .pipe(log_vals)
       .pipe(clip_br)
      )
sfh

county city zip street ... datesold log_price log_bsqft log_lsqft
0 Alameda County Alameda 94501.00 1001 Post Street ... NaN 5.84 3.30 3.65
1 Alameda County Alameda 94501.00 1001 Santa Clara Avenue ... NaN 5.94 3.59 3.77
2 Alameda County Alameda 94501.00 1001 Shoreline Drive \#102 ... NaN 5.59 3.13 4.59
... ... ... ... ... ... ... ... ...
521484 Sonoma County Windsor 95492.00 998 Polaris Drive ... NaN 5.54 3.08 3.91
521487 Sonoma County Windsor 95492.00 9992 Wallace Way ... NaN 5.64 3.06 3.79
521490 Sonoma County Windsor 95492.00 999 Gemini Drive ... NaN 5.51 3.04 3.89

443935 rows × 14 columns

## 10.5.4. Delving Deeper into Relationships

We begin by examining how the distribution of price changes for houses with different numbers of bedrooms. The box plot in the following cell shows that the median sale price increases with the number of bedrooms from 1 to 5 bedrooms; for the largest houses (those with 6, 7, and 8+ bedrooms), there is nearly the same distribution of log-transformed sale price.

sns.boxplot(data=sfh, x='br', y='log_price');

We would expect that houses with one bedroom are smaller than houses with, say, 4 bedrooms. We might also guess that houses with 6 or more bedrooms are similar in size. To dive deeper, we consider the normalization (a kind of transformation) that divides price by building size to give us the price per square foot. Is this constant for all houses? In other words, is price primarily determined by size, and does the relationship between size and price stay the same across different sizes of house?

The following cell creates two scatter plots. The one on the left shows price against building size (both log-transformed), and the plot on the right shows price per square foot (log-transformed) against building size, coloring the points according to the number of bedrooms in the house. In addition, each plot has an added smooth curve that reflects the local average price (or price per square foot) for buildings of roughly the same size. What do you see?
from statsmodels.nonparametric.smoothers_lowess import lowess

color1 = sns.color_palette()[1]
color2 = sns.color_palette()[2]

fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(10, 4))
sns.scatterplot(data=sfh, x='log_bsqft', y='log_price', alpha=0.1, s=20, ax=ax1)
xs = np.linspace(2.5, 4, 100)
curve = lowess(sfh['log_price'], sfh['log_bsqft'], frac=1/10,
               xvals=xs, return_sorted=False)
ax1.plot(xs, curve, color=color1, linewidth=3)

ppsf = sfh.assign(
    ppsf=sfh['price'] / sfh['bsqft'],
    log_ppsf=lambda df: np.log10(df['ppsf']))
sns.scatterplot(data=ppsf, x='bsqft', y='log_ppsf', hue='br', legend=False,
                alpha=0.05, s=20, ax=ax2)
xs = np.linspace(200, 6_000, 100)
curve = lowess(ppsf['log_ppsf'], ppsf['bsqft'], frac=1/10,
               xvals=xs, return_sorted=False)
ax2.plot(xs, curve, color=color2, linewidth=3)
plt.tight_layout()

The lefthand plot shows what we expect: larger houses cost more. We also see that there is roughly a log-log linear association. The righthand plot in this figure is interestingly nonlinear. We see that smaller houses cost more per square foot than larger ones, and the price per square foot for larger houses (houses with many bedrooms) is relatively flat.

We mentioned earlier that we also want to consider location. There are house sales from over 150 different cities in this dataset. Some cities have a handful of sales and others have thousands. We narrow down the dataset further and examine relationships for a few cities. Before we proceed, we'll save the price per square foot transforms into sfh:

def compute_ppsf(sfh):
    return sfh.assign(
        ppsf=sfh['price'] / sfh['bsqft'],
        log_ppsf=lambda df: np.log10(df['ppsf']))

sfh = (sfh_all
       .pipe(subset)
       .pipe(log_vals)
       .pipe(clip_br)
       .pipe(compute_ppsf)
      )

county city zip street ... log_bsqft log_lsqft ppsf log_ppsf
0 Alameda County Alameda 94501.00 1001 Post Street ... 3.30 3.65 347.63 2.54
1 Alameda County Alameda 94501.00 1001 Santa Clara Avenue ... 3.59 3.77 227.63 2.36

2 rows × 16 columns

## 10.5.5. Fixing Time and Location

We examine data for some cities in the East Bay: Richmond, El Cerrito, Albany, Berkeley, Walnut Creek, Lamorinda (which is a combination of Lafayette, Moraga, and Orinda, three neighboring bedroom communities), and Piedmont. We start by combining cities to create Lamorinda:

def make_lamorinda(sfh):
    return sfh.replace({
        'city': {
            'Lafayette': 'Lamorinda',
            'Moraga': 'Lamorinda',
            'Orinda': 'Lamorinda',
        }
    })

sfh = (sfh_all
       .pipe(subset)
       .pipe(log_vals)
       .pipe(clip_br)
       .pipe(compute_ppsf)
       .pipe(make_lamorinda)
      )

county city zip street ... log_bsqft log_lsqft ppsf log_ppsf
0 Alameda County Alameda 94501.00 1001 Post Street ... 3.30 3.65 347.63 2.54
1 Alameda County Alameda 94501.00 1001 Santa Clara Avenue ... 3.59 3.77 227.63 2.36

2 rows × 16 columns

The following box plot of log sale price for these cities shows that Lamorinda and Piedmont tend to have more expensive homes and Richmond has the least expensive, but there is overlap in sale price for all areas.

cities = ['Richmond', 'El Cerrito', 'Albany', 'Berkeley',
          'Walnut Creek', 'Lamorinda', 'Piedmont']
sns.boxplot(data=sfh.query('city in @cities'), x='city', y='log_price')
plt.xticks(rotation=45);

Next, we'll make a plot showing the log price per ft^2 against the building size. How does this plot help you think about the importance of location to home value?

four_cities = ['Berkeley', 'Lamorinda', 'Piedmont', 'Richmond']
sns.lmplot(data=sfh.query('city in @four_cities'), x='bsqft', y='log_ppsf',
           hue='city', scatter_kws={'s': 20, 'alpha': 0.1}, ci=False);
2022-12-06 07:46:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1956060528755188, "perplexity": 4835.827778970545}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711074.68/warc/CC-MAIN-20221206060908-20221206090908-00361.warc.gz"}
http://au.mathworks.com/help/fuzzy/psigmf.html?nocookie=true
# psigmf

Product of two sigmoidal membership functions

## Syntax

```
y = psigmf(x,[a1 c1 a2 c2])
```

## Description

The sigmoid curve plotted for the vector x depends on two parameters a and c as given by

$f(x;a,c)=\frac{1}{1+e^{-a(x-c)}}$

psigmf is simply the product of two such curves plotted for the values of the vector x:

f1(x; a1, c1) × f2(x; a2, c2)

The parameters are listed in the order [a1 c1 a2 c2].

## Examples

### Product of Two Sigmoidal Membership Functions

```
x = 0:0.1:10;
y = psigmf(x,[2 3 -5 8]);
plot(x,y)
xlabel('psigmf, P=[2 3 -5 8]')
ylim([-0.05 1.05])
```
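For readers without MATLAB, the formula above is easy to reproduce; this is a hedged Python rendering of the same membership function (my own sketch, not MathWorks code):

```python
import math

def sigmf(x, a, c):
    # f(x; a, c) = 1 / (1 + exp(-a * (x - c)))
    return 1.0 / (1.0 + math.exp(-a * (x - c)))

def psigmf(x, a1, c1, a2, c2):
    # Product of two sigmoidal membership functions, parameters [a1 c1 a2 c2].
    return sigmf(x, a1, c1) * sigmf(x, a2, c2)

# Same example as above: x = 0:0.1:10 with parameters [2 3 -5 8].
ys = [psigmf(x / 10, 2, 3, -5, 8) for x in range(0, 101)]
```

With a1 > 0 and a2 < 0 the product rises near c1 and falls near c2, giving the open-at-neither-end plateau the MATLAB plot shows.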
2014-12-19 00:46:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8129295706748962, "perplexity": 4918.310444882328}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802768089.153/warc/CC-MAIN-20141217075248-00126-ip-10-231-17-201.ec2.internal.warc.gz"}
https://www.coursehero.com/file/p786vno/Suppose-the-current-exchange-rate-for-the-polish-zloty-is-Z278-The-expected/
Suppose the current exchange rate for the Polish zloty is Z2.78. The expected exchange rate in three years is Z2.86. What is the difference in the annual inflation rates for the U.S. and Poland over this period?

Problem 31-11 The International Fisher Effect.

Problem 31-12 Spot vs Forward rates
Suppose the spot and three-month forward rates for the yen are ¥79.35 and ¥78.76, respectively. What would you estimate is the difference between the annual inflation rates of the U.S. and Japan?

MC algo 31-19 Covered Interest Arbitrage
The spot rate between the U.K. and the U.S. is £.7624/$, while the one-year forward rate is £.7542/$. The risk-free rate in the U.K. is 4.63 percent and the risk-free rate in the United States is 2.76 percent. How much in profit can you earn on $12,000 utilizing covered interest arbitrage?

MC algo 31-20 Interest Rate Parity
The one-year forward rate for the Swiss franc is SF1.1665/$. The spot rate is SF1.1776/$. The interest rate on a risk-free asset in Switzerland is 3.11 percent. If interest rate parity exists, what is the one-year risk-free rate in the U.S.?

MC algo 31-21 Interest Rate Parity
The spot rate between the Japanese yen and the U.S. dollar is ¥107.87/$, while the one-year forward rate is ¥108.42/$. The one-year risk-free rate in the U.S. is 2.79 percent. If interest rate parity exists, what is the one-year risk-free rate in Japan?

MC algo 31-22 Interest Rate Parity
Assume interest rate parity holds. The one-year risk-free rate in the U.S. is 3.06 percent and the one-year risk-free rate in Japan is 3.45 percent. The spot rate between the Japanese yen and the U.S. dollar is ¥111.81/$. What is the one-year forward exchange rate?

MC algo 31-22 Interest Rate Parity
The one-year risk-free rate in the U.S. is 2.64 percent and the one-year risk-free rate in Mexico is 4.44 percent. The one-year forward rate between the Mexican peso and U.S. dollar is MXN12.24/$. What is the spot exchange rate? Assume interest rate parity holds.
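Several of these problems rest on the same relation, covered interest rate parity: for a rate quoted as quote-currency per unit of base currency, F = S × (1 + r_quote)/(1 + r_base). A hedged sketch applied to the yen forward-rate problem above (my own arithmetic, not the textbook's answer key):

```python
def forward_rate(spot, r_quote, r_base):
    # Covered interest rate parity: F = S * (1 + r_quote) / (1 + r_base),
    # with rates expressed as decimals and the quote in quote-currency per
    # unit of base currency (here: yen per dollar).
    return spot * (1 + r_quote) / (1 + r_base)

# Spot ¥111.81/$, one-year risk-free rates of 3.45% in Japan and 3.06% in the U.S.
f = forward_rate(111.81, 0.0345, 0.0306)   # roughly ¥112.23/$
```

The higher yen interest rate implies the yen trades at a forward discount: more yen per dollar forward than spot, as the calculation shows.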
2022-01-22 17:29:15
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.812845766544342, "perplexity": 3656.0007341654086}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303868.98/warc/CC-MAIN-20220122164421-20220122194421-00500.warc.gz"}
https://blog.sourcepole.ch/2011/04/10/manually-sort-images/
# Visually sorting images under Linux

To visually sort images under Linux doesn't seem to be a trivial task. I duckduckgo'ed for a long time and had a look at various image and file managing applications before finding gthumb. And even there, you first need to create a "catalog" and within the catalog a "library" which will finally allow you to manually sort your images. All of which is not documented.

Once you've sorted your images, you'd possibly want to export the sorting? Again, no trace of any help or documentation: gthumb catalogs are saved under $HOME/.local/share/gthumb/catalogs/foobar.catalog.

Tomáš Pospíšek
2019-06-18 20:34:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2826490104198456, "perplexity": 3040.0579615047495}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998813.71/warc/CC-MAIN-20190618183446-20190618205446-00135.warc.gz"}
https://hmolpedia.com/page/318
# 318

A summary of π in relation to the number 318, which is equivalent, via Greek isopsephy, to the words: "Helios", the Greek name of the sun; "Theta", the name of the Greek letter Θ; and "Eliezer", the chief servant of Abraham (who is the Judaic patriarch rescript of the Egyptian sun god Ra).[1]

In numbers, 318 is the numerical equivalent, via isopsephy (or gematria), of the words "Helios", the Greek name of the sun, "Theta", the name of the eighth letter Θ of the Greek alphabet, the letter being the Egyptian-Greco symbol of the sun, and "Eliezer", the chief servant of Abraham (the Judaic patriarch rescript of the Egyptian sun god Ra), a number which is symbolic of the diameter d of the divine solar sun god circle, with a circumference C of 1000, the ratio of which, via the formula for the circumference of a circle:

$\pi = \frac{C}{d}$

yields an accurate 5-digit calculation of π.

## Isopsephy

In what is called 'isopsephy'[2], a learning technique where early Greeks used pebbles arranged in patterns to learn arithmetic and geometry, and to make coded ciphers, a technique passed along to the Hebrews, it is found that the following three words: Theta, Helios, and Eliezer, are each "numerically equivalent", according to what is called their "gematria value", to 318, which is symbolic of the diameter of the so-called divine solar circle (as discussed below) or monad of the universe:

| Word/Name | Letter values | Sum | Derived terms |
|---|---|---|---|
| Θ (Th-) | 9 | 9 (≈ Ennead) | Theogony (Hesiod, 800BC), theology, thermal, thermo-, thermodynamics, think, thought, Thanatos |
| Θῆτα (Theta) | 9 + 8 + 300 + 1 | 318 | Name of the eighth letter of the Greek alphabet. |
| Ηλιος (Helios) | 8 + 30 + 10 + 70 + 200 | 318 | Greek name of the sun; Heliopolis. |
| אֱלִיעֶזֶר (Eliezer) | | 318 | Abraham's main servant, who he intended to leave all his wealth to. |

A depiction of the Egyptian city of Heliopolis, aka Helio-Polis, meaning "city of the sun", the term Helios being code for the number 318.
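The letter sums in the table can be checked mechanically. Below is a hedged sketch using the standard Greek isopsephy letter values (only the letters needed here, accents dropped), together with the circle ratio the article builds on:

```python
# Standard Greek isopsephy (gematria) values for the letters used above.
values = {'η': 8, 'λ': 30, 'ι': 10, 'ο': 70, 'ς': 200,
          'θ': 9, 'τ': 300, 'α': 1}

def isopsephy(word):
    # Sum the numeric value of each letter in the word.
    return sum(values[ch] for ch in word)

helios = isopsephy('ηλιος')   # 8 + 30 + 10 + 70 + 200
theta = isopsephy('θητα')     # 9 + 8 + 300 + 1

# The circle claim: circumference 1000, diameter 318.318.
ratio = 1000 / 318.318
```

Both words sum to 318, and the quotient 1000/318.318 reproduces the 5-digit value 3.14151 quoted below.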
Theta is the name of the Greek letter Θ, which is equal to the number "9" in the Greek numeric system, and which is symbolic of the 9 gods of the Heliopolis Ennead.[3] Helios is the Greek name of the sun; e.g., the Greeks referred to the main Egyptian sun god city as "Heliopolis", or city of the sun, a city defined by the Egyptians by the following hieroglyph:[4]

The hieroglyph shows, according to Wallis Budge (Ѻ), an obelisk with a cross on it, the water symbol or god Nun symbol, a sun with a cross in it and/or a sun inside of an eclipse, and a half-circle, which is bread. The name is variously translated as: An, Anu, Junu, or Iunu. Junu is mentioned in the Pyramid Texts as 'House of Ra'. Heliopolis is called On in Hebrew.

### Bible

In the Bible (Genesis 14:14), it is stated that Abraham (aka Ab-Ra-ham), the Judaic rescript of the Egyptian sun god Ra, had a trained army of 318 servants; and that Eliezer, whose name is numerically equivalent to 318, was Abraham's main servant, who he intended to leave all his wealth to.

## Mathematics

A circle with a diameter of 318.318 units, according (Ѻ) to the formula $C = 2\pi r$, has a circumference of 1000, which the Greeks equated with the monad or perfect unity.[1]

Mathematically, the number 318 is found to have several relations in respect to a circle with a circumference of 1,000 units. Firstly, the reciprocal of π:

$\frac{1}{\pi} = 0.318$

Secondly, in Egypt, the Rhind Papyrus, dated around 1650 BC but copied from a document dated to 1850 BC, has a formula for the area of a circle that treats $\pi$ as $\left(\frac{16}{9}\right)^2 \approx 3.16$.

Thus, according to the standard formula relating the circumference $C$ of a circle to its diameter $d$:

$C = \pi d$

we have the following formula for $\pi$:

$\pi = \frac{C}{d}$

according to which we have the following symbolism:

$\frac{C}{d} = \frac{1000}{318.318} = 3.14151$

In 250BC, Archimedes proved, using a 96-sided regular polygon, that π falls within the following range: 3.1408 < π < 3.1429. Likewise, Ptolemy, in his Almagest (150AD), gave a value for π of 3.1416.

## 1000

A depiction of Theta (318) and Helios (318), by Daniel Gleason (1998), showing three thetas: ΘΘΘ, or the number 999 (≈1,000), each Θ being both the Egyptian symbol of the sun and the Greek number "9", representative of the nine gods of the Heliopolis Ennead, namely: Atum-Ra (Adam-Abraham), Shu (Joshua), Tefnut, Geb (Joseph), Nut, Osiris (Lazarus), Isis (Mary), Set (Devil), Nephthys (Mary Magdalene).

The significance of the 1000, as the unit size of the so-called divine solar circle, is said to be that it represents the "divine monad", a thousand being equivalent, in the Greek ratio system, to "1", as David Fideler (1993) sees things:

"Helios, 318, the Greek name of the sun, is derived from the ratio of the circle, for the reciprocal of π is .318. In other words, a circle measuring 1000 units in circumference (representing unity) will have a diameter of 318 units. In music, 0.666 is the string ratio of the perfect fifth, while 0.888 is the string ratio of the whole tone. The Greeks did not use the decimal point at all, and, in every instance where gematria values are based on mathematical ratios, the 'decimal point' has been moved over exactly three places. In other words, while we define these ratios in relation to '1', we conclude that the Greeks defined these ratios in relation to '1000', which represents the same principle, the monad or unity, the ineffable first cause."

— David Fideler (1993), Jesus Christ, Sun of God: Ancient Cosmology and Early Christian Symbolism (pg. 84) [1]

### Jesus

Alternatively, the significance of the 1000, according to Daniel Gleason (1998), is that it is a cypher or code for Jesus being a sun god:

"The solar symbolism between the numbers '1000' and '318' was well known in ancient Greece. The product of the initials of Jesus Christ, in Greek numerals (IX), is equal to the number '1000'. The solar symbolism in the name 'Jesus Christ' was apparent to anyone in antiquity who spoke Greek."

— Daniel Gleason (1998), "Theta – Helios (318), the Sun" [5]

In short, in the Attic Greek number system[6], the initials of Jesus Christ (JC), or Ιησούς Χριστός (ΙΧ) in Greek, have as their product "I" (iota), which equals 1, times "X" (chi), which equals 1,000, namely 1,000.[7]

A diagram showing that the ratio of the numerical equivalent of the Greek name for Jesus Christ (2368) divided by the numerical equivalent of the Hebrew name for Jesus Christ (754) yields $\pi$, or 3.141.[8]

On a related note, as pointed out by Leo Tavares (2020), if one divides the Greek numerical equivalent of Jesus Christ, 2368 (888 + 1480), this being a cypher name for the circumference C of the divine solar circle, by the Hebrew numerical equivalent of Jesus Christ, 754 (391 + 363), this being a cypher name for the diameter of the divine solar circle, we get a close approximation of π = 3.141.[8]

## Quotes

The following are related quotes:

"A circle with a circumference equal to 'divine unity', the number '1' or any power of one such as 1x10x10x10 = 1,000 units, by calculation has a diameter that rounds down to 318 units. Consider the solar symbolism of this amazing discovery. In the 1st century AD, the value of π was supposedly not known to five decimal places. But if the 5th decimal place of π is set equal to the number '1', a circle with a circumference of 1000 units by calculation has a diameter equal to the gematria value of Helios (318) on each side of the decimal point!"

— Daniel Gleason (1998), "Theta – Helios (318), the Sun" [5]

## References

1. Fideler, David. (1993). Jesus Christ, Sun of God: Ancient Cosmology and Early Christian Symbolism (318, 6+ pgs). Quest Books.
2. Isopsephy – Wikipedia.
3. Theta (subdomain) – Hmolpedia 2020.
4. Heliopolis – Hmolpedia.
5. Gleason, Daniel. (1998). "Theta Helios (318)" (Ѻ), The Sacred Geometry Mysteries of Jesus Christ, Jesus8880.com.
6. Attic numerals – Wikipedia.
7. Gleason, Daniel. (1998). "The 'Sign' of Jesus Christ" (Ѻ), The Sacred Geometry Mysteries of Jesus Christ, Jesus8880.com.
8. Tavares, Leo. (2020). "The Proof is in the Pi: Part 2" (Ѻ), Mathematical Monotheism, Google Sites.
2021-09-24 02:36:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 9, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5627439618110657, "perplexity": 5082.115666498121}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057496.18/warc/CC-MAIN-20210924020020-20210924050020-00529.warc.gz"}
https://pypi.org/project/textlines/0.0.1/
Sparklines for text. text_lines counts your words, paragraphs, pages and emits a short summary. The text doesn't need to be here, but I'm trying to write a new paragraph.
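The one-line description above is all the documentation the package provides. Purely as a hedged illustration of the described behavior (this is my own sketch, not the actual textlines API), a word/paragraph/page counter might look like:

```python
def summarize(text, words_per_page=300):
    # Count whitespace-separated words, blank-line-separated paragraphs,
    # and an approximate page count (ceiling of words / words_per_page).
    words = len(text.split())
    paragraphs = len([p for p in text.split('\n\n') if p.strip()])
    pages = max(1, -(-words // words_per_page))
    return {'words': words, 'paragraphs': paragraphs, 'pages': pages}

s = summarize("One two three.\n\nFour five.")
```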
2022-09-28 14:15:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31148913502693176, "perplexity": 6797.577362842554}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00404.warc.gz"}
http://mathoverflow.net/revisions/62043/list
4 edited tags

3 added 3 characters in body

This is a re-post on a previous question I asked. My first question was too vague to warrant detailed responses. Really, I have two specific questions to ask.

1) Let $\sigma = (A; \{0,1\}; +, \times)$ be a signature. Form the language $L(\sigma)$ over $\sigma$. Let $T$ be the theory of commutative rings and let $M$ be a model of this theory. We can realize localization in the model $M$ by specifying a class of formulas in our language $$K = \{s_x \ \mid \ x \in A - (0)\}, \quad \mbox{where}\ s_x = [\exists x, \ x = x]$$ and then for each $x$ defining a formula $s_{x}^{-1} = \exists y, xy = 1$. Adding $s_{x}^{-1}$ to our theory $T$ (call it $T_x$), and then taking a model $N$ of $T_x$ with the property that there is a monomorphism $M \rightarrow N$, will realize $N$ as a localization of $M$. My first question is whether or not this is the right way for a logician to think about localization of a commutative ring?

2) It seems to me that it should be possible to extend this construction to other languages by specifying an appropriate class $K$ and formulas $s_{x}^{-1}$. In particular, this should work for non-commutative rings. In summary, what can be said about localization in a first order language?

Edit: Actually, in 1), I still have a problem. Specifying a monomorphism $M \rightarrow N$ is not accurate because $M$ may not be integral. Actually, I need to specify a map $M \rightarrow N$ by a universal property.

2 added 223 characters in body

This is a re-post on a previous question I asked. My first question was too vague to warrant detailed responses. Really, I have two specific questions to ask.

1) Let $\sigma = (A; \{0,1\}; +, \times)$ be a signature. Form the language $L(\sigma)$ over $\sigma$. Let $T$ be the theory of commutative rings and let $M$ be a model of this theory.
We can realize localization in the model $M$ by specifying a class of formulas in our language $$K = \{s_x \ \mid \ x \in A - (0)\}, \quad \mbox{where}\ s_x = [\exists \ x = x]$$ and then for each $x$ defining a formula $s_{x}^{-1} = \exists y, xy = 1$. Adding $s_{x}^{-1}$ to our theory $T$ (call it $T_x$), and then taking a model $N$ of $T_x$ with the property that there is a monomorphism $M \rightarrow N$, will realize $N$ as a localization of $M$. My first question is whether or not this is the right way for a logician to think about localization of a commutative ring?

2) It seems to me that it should be possible to extend this construction to other languages by specifying an appropriate class $K$ and formulas $s_{x}^{-1}$. In particular, this should work for non-commutative rings. In summary, what can be said about localization in a first order language?

Edit: Actually, in 1), I still have a problem. Specifying a monomorphism $M \rightarrow N$ is not accurate because $M$ may not be integral. Actually, I need to specify a map $M \rightarrow N$ by a universal property.

1
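As a concrete (purely algebraic, not model-theoretic) sanity check of what the formulas $s_x^{-1}$ assert: in $\mathbb{Z}$ localized at the powers of 2, every element of the multiplicative set acquires an inverse. A hedged sketch using Python's Fraction type as a stand-in for the localized ring:

```python
from fractions import Fraction

# Localize Z at the multiplicative set S = {1, 2, 4, 8, 16} (powers of 2);
# the localized ring consists of the dyadic rationals a / 2^k.
S = [2 ** k for k in range(5)]

# The defining property added to the theory: each s in S satisfies
# "exists y, s * y = 1" inside the localization.
inverses = {s: Fraction(1, s) for s in S}
checks = [s * inverses[s] == 1 for s in S]
```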
2013-05-21 13:59:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.762971043586731, "perplexity": 122.4988917450285}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700074077/warc/CC-MAIN-20130516102754-00014-ip-10-60-113-184.ec2.internal.warc.gz"}
https://tel.archives-ouvertes.fr/tel-00004129
# Stabilité et filtration de Harder-Narasimhan

Abstract: Introduced on algebraic manifolds, the notion of stability has been generalized to the case of Kähler manifolds and then to any compact complex manifold using Gauduchon's metrics. The behavior of non-semi-stable fiber bundles (or coherent sheaves) had only been studied in the algebraic case and had been described by means of the notion of Harder-Narasimhan filtration (HNF). In the present work, we carry on with this study for any compact complex (possibly non-Kähler) manifold. For any complex vector bundle, we prove the existence of a subsheaf of maximal degree. This subsheaf arises as a limit in the sense of "weakly holomorphic subbundles". This notion was first introduced by Uhlenbeck and Yau in their study of the Kobayashi-Hitchin correspondence and actually provides us with the "good notion" of convergence. In this context, we prove the existence of a HNF. We then generalize these results to the case of torsion-free coherent sheaves, which leads to important convergence questions resulting from the non-compactness of the basis (the set where the sheaf is locally free). We also show how to apply these methods to families of fiber bundles (or flat families of torsion-free sheaves) over a deformation of compact complex manifolds to get existence theorems for limit subsheaves similar to Bishop's theorem. In the same way, we get a new proof of the openness property of stability under deformation. This proof does not use the difficult Kobayashi-Hitchin correspondence. In a second part, we study simplicity and stability conditions for the tangent bundle of a compact complex surface of the class $VII$. In particular, we obtain an example of a deformation of a surface with a global spherical shell illustrating the non-openness of the non-semi-stability property under deformation.
Document type: Theses. Cited literature [36 references].
https://tel.archives-ouvertes.fr/tel-00004129
Contributor: Laurent Bruasse
Submitted on: Friday, January 9, 2004 - 10:11:12 AM
Last modification on: Thursday, October 11, 2018 - 1:19:41 AM
Long-term archiving on: Friday, April 2, 2010 - 7:58:06 PM

### Identifiers

HAL Id: tel-00004129, version 1

### Citation

Laurent Bruasse. Stabilité et filtration de Harder-Narasimhan. Mathématiques [math]. Université de Provence - Aix-Marseille I, 2001. Français. ⟨tel-00004129⟩
2021-03-03 05:10:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.613510251045227, "perplexity": 791.5418526796378}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178365454.63/warc/CC-MAIN-20210303042832-20210303072832-00361.warc.gz"}
https://qmcpy.org/2020/08/31/safe-handling-of-qmc-points/?shared=email&msg=fail
# Safe Handling of QMC Points

A Quasi-Monte Carlo construction of $n$ points in $d$ dimensions may look like IID points, but they must be used with a bit more care. Because QMC can give errors that are $o(1/n)$ as $n\to\infty$, changing or ignoring even one point can change the estimate by an amount much larger than the error would have been and worsen the convergence rate. As a result, certain practices that fit quite naturally and intuitively with MC points are very detrimental to QMC performance. Operations like burn-in, thinning and even using a round number sample size, like a power of ten, can degrade QMC effectiveness or even make it converge to the wrong answer. The safe way to use QMC points is to take all $n$ points produced, after applying a randomization to avoid singularities and to support uncertainty quantification.

### Introduction

This note arose from a discussion of quasi-Monte Carlo (QMC) and randomized quasi-Monte Carlo (RQMC) software during and following the plenary tutorial at MCQMC 2020 by Fred Hickernell. Common ways of handling IID points can fail to work for (R)QMC points. A longer discussion of this point is at https://arxiv.org/abs/2008.08051.

QMC sampling methods provide a set of $n$ points in $[0,1]^d$ that we can use instead of a sample of $\mathcal{U}[0,1]^d$ points. We can apply transformations to them to simulate non-uniform distributions and domains other than the unit cube. Then the resulting points can be used to estimate an expectation or just to explore the input to a function. If the points are $\boldsymbol{x}_1,\dots,\boldsymbol{x}_n\in[0,1]^d$ we may estimate $\mu=\int_{[0,1]^d}f(\boldsymbol{x})\rm d\boldsymbol{x}$ by $$\hat\mu =\frac1n\sum_{i=1}^nf(\boldsymbol{x}_i),$$ just as we would have done with $\boldsymbol{x}_i\overset{\text{iid}}{\sim}\mathcal{U}[0,1]^d$. The function $f(\cdot)$ subsumes transformations as well as the integrand of interest in the transformed space.

Plain QMC points are deterministic. Randomizing them in one of several possible ways makes them individually uniformly distributed while preserving the low discrepancy structure that makes them valuable for integration. The resulting RQMC methods allow uncertainty quantification via replication. If it is important to be accurate, then it must also be important to know that you were accurate and to show that you were accurate. A plain $t$-test based confidence interval, or better yet, a bootstrap $t$-confidence interval for $\mu$, then lets one estimate accuracy. Bootstrap-$t$ works very well even with a modest number of replicates. We might want a modest number $R$ of replicates because the root mean squared error (RMSE) decreases proportionally to $1/\sqrt{R}$ as the number of replicates increases, but often faster than $1/\sqrt{n}$ as the number of sample points increases. The work involved is proportional to $nR$.

A second reason to randomize is that QMC points are really designed for Riemann integrable functions. Those are necessarily bounded. If $\hat\mu\to\mu$ whenever the star discrepancy of $\boldsymbol{x}_1,\dots,\boldsymbol{x}_n$ converges to zero, then it must hold that $f$ is Riemann integrable. That is, if $f$ is not Riemann integrable, as for instance it would be if it were unbounded, then there are sequences of inputs with vanishing star discrepancy for which $\hat\mu-\mu$ does not converge to zero. It is safer to randomize. Nested uniform scrambles ensure that $\hat\mu\to\mu$ with probability one under the weak condition that $f\in L^{1+\epsilon}[0,1]^d$ for some $\epsilon>0$. That is, $\int_{[0,1]^d}|f(\boldsymbol{x})|^{1+\epsilon}\rm d\boldsymbol{x}<\infty$, and $f$ is measurable.

Because (R)QMC points look so similar to plain IID points, many users and software implementations handle (R)QMC points in inefficient or even unsafe ways that would be no problem for IID points.
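To illustrate the replication idea, here is a hedged sketch (not QMCPy code) of the simplest RQMC scheme, a Cranley-Patterson random shift of the one-dimensional grid $\{i/n\}$: each replicate applies one uniform shift, modulo 1, to all $n$ points, so each point is uniform marginally while the low-discrepancy structure is preserved.

```python
import math
import random
import statistics

def rqmc_estimates(f, n, reps, seed=1):
    # Cranley-Patterson rotation: one uniform shift u per replicate,
    # applied (mod 1) to every point of the grid {i/n}.
    rng = random.Random(seed)
    estimates = []
    for _ in range(reps):
        u = rng.random()
        pts = [(i / n + u) % 1.0 for i in range(n)]
        estimates.append(sum(f(x) for x in pts) / n)
    return estimates

# Integrate f(x) = x^2 over [0,1]; the true value is 1/3.
est = rqmc_estimates(lambda x: x * x, n=64, reps=10)
mu_hat = statistics.mean(est)
se = statistics.stdev(est) / math.sqrt(len(est))
```

The replicate mean and its standard error are exactly the ingredients of the $t$-interval described above, with $R=10$ here.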
### Sample sizes (R)QMC points are usually constructed as a finite sequence of points for a specific sample size $n$ such as $n=2^m$ or $n=p$ for a large prime number $p$. If one uses only a round number such as $1000$ of them, then the result will ordinarily be much less effective than using them all and can possibly even fail to sample a portion of $[0,1]^d$. Those $1000$ points might easily be less effecttive than using a smaller sequence of $512$ points. As for antibiotics, one should use the whole sequence. ### Skipping or burn-in For IID points, we do as well with $\boldsymbol{x}_{B+1},\dots,\boldsymbol{x}_{B+n}$ for any $B\ge0$.  Taking $B>0$ is a kind of burn-in that actually has an advantage in Markov chain Monte Carlo, where the points may only approach their desired distribution. For RQMC points, skipping even one observation can make the rate of convergence worse.  In the case of scrambled nets, taking $B=1$ can turn the RMSE from approximately $O(n^{-3/2})$ to approximately $O(n^{-1})$. The reason that people often skip the first point is that this first point is often equal to $(0,0,\dots,0)$. Such a point is then problematic when $f$ maps $[0,1]^d$ onto $\mathbb{R}^d$, as it would when using a transformation to induce a Gaussian distribution before evaluating the quantity of interest.  The point at the origin can map to an infinite point or even result in `not a number’. If one uses RQMC then that first point ends up with the $\mathcal{U}[0,1]^d$ distribution, as do all the others.  That avoids the problem of singularities at least mathematically.  One might still hit a singularity in a floating point representation if one is extremely unfortunate. That possibility is also there with QMC,  and plain QMC does not have the same assurance of avoiding singularities that RQMC has. ### Thinning In MCMC one often takes every $k$’th point for reasons of storage or computational efficiency. 
In IID sampling, taking every $k$th point would be statistically equivalent to taking an equal number of consecutive points. If we use $\boldsymbol{x}_{ki}$ for integer $k>1$ and $i=1,\dots,n$ in (R)QMC, the result can be disastrously bad. For instance, the van der Corput sequence in $[0,1]$ alternates between values in $[0,1/2)$ and values in $[1/2,1)$. Taking every second point would ignore half of the domain! The first component of a Sobol' sequence is ordinarily the van der Corput sequence. Thinning (R)QMC points can be extremely dangerous. It should not be done without some very careful mathematical explanation of why it might be acceptable in some special setting.

### van der Corput sequences

These are for $d=1$, so $x_i\in[0,1]$. Any $2^m$ consecutive points of the van der Corput sequence are a digital net and hence have some good discrepancy properties. The same holds for generalizations of van der Corput to bases $b>2$: there, any $b^m$ consecutive points are a digital net. So van der Corput points are an exception: if we use burn-in, we still get a digital net and thus still get low discrepancy. We should take care with the chosen sample size, preferring $n$ to be a power of $b$. If the powers of $b$ are too far apart for our purposes, then an integer multiple of a power of $b$ is next best. That only makes a difference when $b>2$. We should not thin van der Corput sequences.

### Halton sequences

Halton sequences are somewhat robust to burn-in and to using round sample sizes. Each of the $d$ component variables of a Halton sequence is a van der Corput sequence in a different base; usually the first $d$ prime numbers are used. For modestly large $d$, the specially good values of $n$ are so large and so far apart that we can consider that there simply are no specially good sample sizes. Think of making $n$ divisible by a power of the product of the first $d$ prime numbers: even the first such value may be too large to use.
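To see how quickly those specially good Halton sample sizes grow, one can compute the product of the first $d$ prime bases. The snippet below (a simple trial-division prime generator, written for illustration) shows that even $d=10$ pushes the first candidate past six billion.

```python
from math import prod

def first_primes(d):
    """First d primes by trial division against the primes found so far."""
    primes, candidate = [], 2
    while len(primes) < d:
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

# A "specially good" n would be divisible by the product of all d bases.
smallest_good_n = {d: prod(first_primes(d)) for d in (2, 5, 10)}
# d=2: 6,  d=5: 2310,  d=10: 6469693230
```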
When no feasible sample sizes are very good, then maybe there is no particular harm in using a power of ten. Halton sequences start at the origin, which is problematic as described above. We can easily skip that point in Halton sequences because there are no especially good ranges. It may even be advantageous to use a very large burn-in for the Halton sequence, because the initial points for large $d$ have unpleasant striping artifacts. It is, however, safer to randomize the Halton sequence. Scrambling the Halton sequence counters those striping artifacts more surely than a burn-in would. It also moves the point at the origin to a uniformly distributed random point. This is another instance where RQMC is safer and more effective than plain QMC.

##### Art B. Owen

Art Owen is a statistician with an interest in quasi-Monte Carlo sampling.
https://tex.stackexchange.com/questions/296532/alternate-position-of-x-tick-labels
# Alternate position of x tick labels

Is it possible to automatically change the position of every other x tick label in pgfplots? This is my scenario: I have a plot with symbolic x tick labels loaded from a table. Those labels are too wide to fit next to each other. What I would like is for every other label to be automatically shifted a bit downwards.

Minimal working code for the "bad" example:

```latex
\documentclass[a4paper]{article}
\usepackage{pgfplots}
\pgfplotsset{compat=1.5}

\begin{filecontents}{data.txt}
label x y
foobarbaz 0 1
foobarbaz 1 1
foobarbaz 2 1
foobarbaz 3 1
foobarbaz 4 1
\end{filecontents}

\begin{document}
\begin{tikzpicture}
\begin{axis}[
    ybar,
    x=1cm,
    xtick=data,
    xticklabels from table={data.txt}{label},
]
\end{axis}
\end{tikzpicture}
\end{document}
```

For the "good" one I manually added `\raisebox` to every other label:

```latex
\begin{filecontents}{data.txt}
label x y
foobarbaz 0 1
\raisebox{-4ex}{foobarbaz} 1 1
foobarbaz 2 1
\raisebox{-4ex}{foobarbaz} 3 1
foobarbaz 4 1
\end{filecontents}
```

However, since my actual data contains many more lines, I don't want to do it manually like this. Is there a way to do this automatically with pgfplots?

You could use

```latex
x tick label style={yshift={-mod(\ticknum,2)*1em}}
```

Code:

```latex
\documentclass[a4paper]{article}
\usepackage{pgfplots}
\pgfplotsset{compat=1.5}

\begin{filecontents}{data.txt}
label x y
foobarbaz 0 1
foobarbaz 1 1
foobarbaz 2 1
foobarbaz 3 1
foobarbaz 4 1
\end{filecontents}

\begin{document}
\begin{tikzpicture}
\begin{axis}[
    ybar,
    x=1cm,
    xtick=data,
    xticklabels from table={data.txt}{label},
    x tick label style={yshift={-mod(\ticknum,2)*1em}},
]
\end{axis}
\end{tikzpicture}
\end{document}
```
http://crypto.stackexchange.com/questions/6370/implementations-of-ntru-tls?answertab=active
# Implementations of NTRU TLS

Has anyone come across any implementations of NTRU TLS? I'm working on a project for uni that does quantum-secure encryption. It relies on a mix of NTRU and AES, but I can't find an implementation of NTRU TLS anywhere. (CyaSSL claims to have one, but it needs a license to compile; I've contacted them and am waiting to hear back.)

Specifically, the following cipher suites:

```
TLS_NTRU_NSS_WITH_RC4_128_SHA
TLS_NTRU_NSS_WITH_3DES_EDE_CBC_SHA
TLS_NTRU_NSS_WITH_AES_128_CBC_SHA
TLS_NTRU_NSS_WITH_AES_256_CBC_SHA
TLS_NTRU_RSA_WITH_RC4_128_SHA
TLS_NTRU_RSA_WITH_3DES_EDE_CBC_SHA
TLS_NTRU_RSA_WITH_AES_128_CBC_SHA
TLS_NTRU_RSA_WITH_AES_256_CBC_SHA
```

Or – even better – has anyone encountered any type of SSL/TLS implementation that doesn't rely on the discrete logarithm problem, factoring prime numbers, or elliptic curves? I've scoured the internet and come up with nothing.
https://math.stackexchange.com/questions/3229626/help-with-inequality-with-one-unknown
# Help with inequality with one unknown

Please could you help me solve the inequality $$\sqrt{x-9}\,(2^{x-8}+3^{x-9}-9)\geq 0$$

- It's not clear what the argument of the radical is. – lulu
- Is it $$\sqrt{x-9}(2^{x-8}+3^{x-9}-9)\geq 0$$? – Dr. Sonnhard Graubner
- What did you try? Did you try $x=9,10,11$ for example? – Dietrich Burde
- Yes, this is it. – ramhat lubumba
- $x=9$ is one solution; to solve $2^{x-8}+3^{x-9}-9\geq 0$ you will need a numerical method. – Dr. Sonnhard Graubner

Let $x = t+9$; then the radical needs $t \geqslant 0$. Clearly $t=0$ is a solution; for the others, we are left to solve $2\cdot 2^t+3^t\geqslant 9$. Note that the left-hand side is increasing and starts from $3<9$, so there is a unique $a>0$ such that $t \in [a, \infty)$ are all solutions. The full solution set is therefore $$x\in \{9\}\cup[a+9, \infty),$$ where $2\cdot 2^a+3^a=9$. To find $a$ you will need numerical methods; it is about $1.288$.
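The numerical step mentioned in the comments and the answer can be carried out with a few lines of bisection (the bracket $[0,2]$ and the tolerance below are arbitrary illustrative choices):

```python
def g(t):
    # Increasing in t: g(0) = 2 + 1 - 9 < 0 and g(2) = 8 + 9 - 9 > 0
    return 2 * 2**t + 3**t - 9

lo, hi = 0.0, 2.0
while hi - lo > 1e-12:
    mid = (lo + hi) / 2
    if g(mid) < 0:
        lo = mid
    else:
        hi = mid
a = (lo + hi) / 2    # about 1.288, as stated in the answer
```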
http://msp.org/jomms/2015/10-3/p09.xhtml
#### Vol. 10, No. 3, 2015

ISSN: 1559-3959

Dynamic conservation integrals as dissipative mechanisms in the evolution of inhomogeneities

### Xanthippi Markenscoff and Shailendra Pal Veer Singh

Vol. 10 (2015), No. 3, 331–353
DOI: 10.2140/jomms.2015.10.331

##### Abstract

By the application of Noether's theorem, conservation laws in linear elastodynamics are derived by invariance of the Lagrangian functional under a class of infinitesimal transformations. The recent work of Gupta and Markenscoff (2012) providing a physical meaning to the dynamic $J$-integral as the variation of the Hamiltonian of the system due to an infinitesimal translation of the inhomogeneity if linear momentum is conserved in the domain, is extended here to the dynamic $M$- and $L$-integrals in terms of the "if" conditions. The variation of the Lagrangian is shown to be equal to the negative of the variation of the Hamiltonian under the above transformations for inhomogeneities, which provides a physical meaning to the dynamic $J$-, $L$- and $M$-integrals as dissipative mechanisms in elastodynamics. We prove that if linear momentum is conserved in the domain, then the total energy loss of the system per unit scaling under the infinitesimal scaling transformation of the inhomogeneity is equal to the dynamic $M$-integral, and if linear and angular momenta are conserved then the total energy loss of the system per unit rotation under the infinitesimal rotational transformation is equal to the dynamic $L$-integral.

##### Keywords

elastodynamics, conservation laws, Noether's theorem, dissipative mechanism, dynamic $J$-integral, dynamic $L$-integral, dynamic $M$-integral, inhomogeneity
http://mathhelpforum.com/number-theory/149465-if-today-tuesday-what-day-will-129-days.html
# Math Help - if today is Tuesday, what day will it be in 129 days?

## If today is Tuesday, what day will it be in 129 days?

1. Sun=0, Mon=1, ....

   $2+129\equiv x \ \mbox{(mod 7)}\rightarrow 131\equiv x \ \mbox{(mod 7)}$

   Here is what I did, and I obtained the correct answer; however, I think this should have been done another way.

   $131-\left \lfloor \frac{131}{7} \right \rfloor\cdot 7=5$

   $131\equiv 5 \ \mbox{(mod 7)}$

   In 129 days, it will be Friday.

2. Every 7 days, you get back to Tuesday. Since $129 = 18\cdot 7 + 3$, that means it will be $3$ days after Tuesday. So the day you'll be on is Friday.

3. To explain Prove It's method a bit further: you're seeking $r$ such that $0\leq r <7$ and $129=7q+r$. Here $q,r$ can be found through the division algorithm.
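The same arithmetic in code, using the Sun=0 convention from the first post:

```python
days = ["Sunday", "Monday", "Tuesday", "Wednesday",
        "Thursday", "Friday", "Saturday"]

today = days.index("Tuesday")       # 2 in the Sun=0 convention
q, r = divmod(129, 7)               # division algorithm: 129 = 7*18 + 3
answer = days[(today + 129) % 7]    # three days after Tuesday: Friday
```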
http://new.calwhine.com/american-farm-dzkljp/how-does-the-presence-of-charged-ion-effect-solubility-88395b
They are summarized in the table below . A combination of salts in an aqueous solution will all ionize according to the solubility products, which are equilibrium constants describing a mixture of two phases.If the salts share a common cation or anion, both contribute to the concentration of the ion and need to be included in concentration calculations. Does the temperature you boil water in a kettle in affect taste? The presence of either Ca 2+ (aq) or F – (aq) in a solution reduces the solubility of CaF 2, shifting the solubility equilibrium of CaF 2 to the left: This reduction in solubility is another application of the common-ion effect. What effect does the presence of a common ion have on. $CaF_2 \leftrightarrow Ca^{2+} + 2F^-$, (a) If the solubility in pure water is s, then, $K_{sp} = {[Ca^{2+}]}{[F^-]}^2$. Public domain. The Effect of Ionic Strength . New York: Houghton Mifflin. ion effect," by the occurrence of a chemical reaction involving one of the ions of the salt, or by a change in the activity coefficients of the ions of the salt. (Molarity) What is the solubility of M(OH)2 in a 0.202M solution of M(NO3)2 ? A starch-iodine titration will be used to determine the concentration of iodate ion in each solution. The increase in I increases the rate constant for reactions between ions of the same charge and decreases it when the ions are oppositely charged. The common ion effect also plays a role in the regulation of buffers. The very pure and finely divided precipitate of calcium carbonate that is generated is used in the manufacture of toothpaste. This attractive force is usually called an ion-dipole force. Salts of singly charged versions of these are soluble (HCO 3-, ... Common Ion Effect. This particular resource used the following sources: http://www.boundless.com/ Because electrolytes generally consist of ions in solution, they are also known as ionic solutions. 
If the concentration of dissolved lead(II) chloride is s mol dm-3, then: [Pb 2+] = s mol dm-3 [Cl-] = 2s mol dm-3. Please Explain … However, it is not easy to estimate the relative magnitudes of these two forces or to quantitatively predict water solubilities of electrolytes. CC BY-SA 3.0. http://commons.wikimedia.org/wiki/File:Lithium_hydroxide_with_carbonate_growths.JPG ISBN 0-618-37206-7. Why is (H2O2) known as hydrogen peroxide and not hydrogen dioxide? The solubility of a given salt in a given solvent depends on temperature (definitely) and pressure (maybe). H20 molecules and the ions of the solid, tends to bring ions into the solution. For example, for silver chloride, this is another slightly soluble compound, but adding acid does not affect the solubility of silver chloride. For organic compounds, charge make a huge difference, at least for compounds with more than 4–8 carbons. Therefore, we decrease the solubility of lead two chloride due to the presence of our common ion. Bases "Zn(OH)"_2 is a sparingly soluble base. Force of attraction between oppositely charged ions. In general, the solubility of a slightly soluble salt is decreased by the presence of a second solute that furnishes a common ion. So a common ion decreases the solubility of our slightly soluble compounds. This is because Le Chatelier’s principle states the reaction will shift toward the left (toward the reactants) to relieve the stress of the excess product. Get answers by asking now. With ionic participants, the magnitude of the electrolyte effect increases with charge. The solubility of an ionic compound in a solution which already contains one of the ions in that compound is reduced. In the water treatment process, sodium carbonate salt is added to precipitate the calcium carbonate. Counterions are the mobile ions in ion exchange polymers and colloids. When equilibrium is shifted toward the reactants, the solute precipitates. 
The solubility of the sodium salt of REV 3164 in a buffered medium was much lower than that in an unbuffered medium. Solubility is the property of a solid, liquid or gaseous chemical substance called solute to dissolve in a solid, liquid or gaseous solvent.The solubility of a substance fundamentally depends on the physical and chemical properties of the solute and solvent as well as on temperature, pressure and presence of other chemicals (including changes to the pH) of the solution. Decreasing the pH increases the solubility of sparingly soluble bases and basic salts. since fluoride ions are in NaF as well as in CaF2. NaCl (sodium +1) is more soluble than MgCl2 (magnesium +2) Ion size – larger ions = more soluble - metallic ions are smaller than their atoms (lose valence electrons) - non-metallic ions are larger than their atoms (gain valence electrons) - small ions bond more closely together than … When a common ion is added, the rule is that the solubility is reduced. Factors Affecting Solubility of Ionic Substances Ion charge – smaller ion charge = more soluble e.g. limestoneAn abundant rock of marine and fresh-water sediments; primarily composed of calcite (CaCO₃); it occurs in a variety of forms, both crystalline and amorphous. Many sparingly soluble compounds have solubilities that depend on pH. 1, Fig. When one of these solids dissolves in water, the ions that form the solid are released into solution, where they become associated with the polar solvent molecules. School University of South Carolina; Course Title CHEM 111; Uploaded By BaronHarePerson36. Pages 45. The common-ion effect is a term that describes the decrease in solubility of an ionic compound when a salt that contains an ion that already exists in the chemical equilibrium is added to the mixture. Here are two common examples. Calculate the molar solubility of a compound in solution containing a common ion. Small ionic size also help in the solubility of an ion. 
The effects of salts can be formally classified into two types, that is, nonspecific and specific effects. This is because the positive ends of water molecules (the H atoms) are going to be attracted towards any negative ions, and the negative ends (the O atoms) to any positive ions. Compounds containing the sulfate ion, SO42-, are water‑soluble except with barium ions, Ba2+, and lead(II) ions, Pb2+. Put these values into the solubility product expression, and do the sum. At any given ionic strength, the activity coefficients of ions of the same charge are approximately equal. The value of the solubility product is temperature-dependent and is generally found to increase with increasing temperature. Compounds containing the sulfate ion, SO42-, are water‑soluble except with barium ions, Ba2+, and lead(II) ions, Pb2+. If our prediction is valid, we can simplify the solubility-product equation: s2 = $\frac{3.90 \times 10^{-11}}{0.40}$ = 9.75 x 10-11. This can affect solubility in unexpected ways, sometimes causing a precipitate to form when you didn't expect it. The solubility and the dissolution rate of the sodium salt of an acidic drug (REV 3164; 7-chloro-5-propyl-1H,4H-[1,2,4]triazolo[4,3-alpha]quinoxaline-1,4-dione) decreased by the effect of common ion present in aqueous media. Your Citation. Calculate the concentration of the Cu2+ ion in a solution that is initially 0.10 M Cu2+ and 1.0 M NH3. An electrolyte is a substance that contains free ions and behaves as an electrically conductive medium. When that is not possible, you can use the following guidelines for predicting whether some substances are soluble or insoluble in water. Addition of a common ion will always operate directly through the solubility product expression to decrease the solubility. The solubility of insoluble substances can be decreased by the presence of a common ion. Calculate the solubility of calcium phosphate [Ca3(PO4)2] in 0.20 M CaCl2. Bases "Zn(OH)"_2 is a sparingly soluble base. 
Boundless Learning The Common Ion Effect and Solubility Introduction: Potassium hydrogen tartrate (cream of tartar), KHC 4 H 4 O 6, is a weak acid, that is not very soluble in water.Its solubility equilibrium in water is: KHC 4 H 4 O 6 (s) K + (aq) + HC 4 H 4 O 6 - (aq). precipitateA solid that exits the liquid phase of a solution. Activity Coefficients of Ions, 2. Wikimedia *Response times vary by subject and question complexity. Because of this polarity, water molecules will arrange themselves such that the negatively charged oxygen atom will attract the positively charged sodium (Na +) ion, and the positively charged hydrogen atom will attract the negatively charged chloride (Cl –) ion. Chemistry The common-ion effect can be used to separate compounds or remove impurities from a mixture. Adding an additional amount of one of the ions of the salt generally leads to increased precipitation of the salt, which reduces the concentration of both ions of the salt until the solubility equilibrium is reached. This is because Le Chatelier’s principle states the reaction will shift toward the left (toward the reactants) to relieve the stress of the excess product. In this laboratory, you will observe the effect of the presence of a common ion on the molar solubility and K sp of potassium hydrogen tartrate, or KHT. In areas where water sources are high in chalk or limestone, drinking water contains excess calcium carbonate CaCO3. AgCl will be our example. CC BY-SA 3.0. http://en.wikibooks.org/wiki/Chemical_Principles/Solution_Equilibria:_Acids_and_Bases%23Common-Ion_Effect CC BY-SA 3.0. http://en.wiktionary.org/wiki/limestone Median response time is 34 minutes and may be longer for new subjects. For example, ... Recall that the magnitude of attractive electrostatic interactions is greatest for small, highly charged ions. Ion-pair formation can have a major effect on the measured solubility of a salt. 
The solubility of a given salt in a given solvent depends on temperature (definitely) and pressure (maybe). Wikibooks to d bst of my knowledge...i wud take a guess dat....d more positively charged a species is...d more number of water molucules it can attract...bcos water is a polar solvent...so greater d magnitude of positive charge on d ion....d more number of ions of water it'll attract and thus bcom increasingly soluble...n smaller d size of d ion...greater will b its degree of hydration...i.e....it can occupy more number of water molecules around it....so it'll b more soluble... CaCl2-->Ca2+ + 2(Cl1-) TiO2-->Ti4+ + 2(O2-) ZnO-->Zn2+ + O2- NiCl2-->Ni2+ + 2(Cl1-). Introduction The solubility products K sp 's are equilibrium constants in hetergeneous equilibria (i.e., between two different phases). The common-ion effect is used to describe the effect on an equilibrium involving a substance that adds an ion that is a part of the equilibrium. Scientists take advantage of this property when purifying water. Explain In Terms Of Equilibrium Explain In Terms Of Equilibrium This problem has been solved! ? The glycerol turns back into an alcohol (addition of the green H's). Ion exchange resins are polymers with a net negative or positive charge. Through the addition of common ions, the solubility of a compound generally decreases due to a shift in equilibrium. For e.g, barium sulfate even though being an ionic compound is insoluble because of the difficulty of solubilizing sulfate ions by the water molecules. When you dissolve in CaCl2 which contains the common ion Ca^2+, the equilibrium will again shift to left reducing the solubility of CaCl2. If the presence of spectator ions result in the equilibrium shifting left, less ions would be produced and the solubility of the solid would decrease. In order for an ion to dissolve in water it must cause some ordering or structure in the water molecules. 
We have reduced the solubility of AgCl drammatically by adding the common ion, from 1.30 x10-5M to 8.5x10-11M. Notice that Ksp doesn't change, Ksp is still 1.6 times 10 to the negative five but the molar solubility has been affected by the presence of our common ion. For example, the measured K sp for calcium sulfate is 4.93 × 10 −5 at 25°C. If this is the predominant factor, then the compound may be highly soluble in water. However, it is not easy to estimate the relative magnitudes of these two forces or to quantitatively predict water solubilities of electrolytes. Force of attraction between oppositely charged ions This force tends to keep the ions in the solid state. The resin has a higher affinity for highly charged countercations, for example by Ca 2+ (calcium) in the case of water … It will shift the above equilibrium to the left reducing the solubility of Ca(OH)2. KHT(s) dissociates into the potassium ion, K+(aq), PbCl 2 (s) Pb 2+ (aq) + 2 Cl-(aq) If we add some NaCl (or … The ions in the compound attract each other, and the water molecules attract the ions. It all involves the application of Le Châtelier's Principle. the common-ion effect. CC BY-SA 3.0. http://en.wiktionary.org/wiki/precipitate precipitateTo come out of a liquid solution into solid form. The solubility of a salt in water can be influenced by the presence of other electrolytes in several ways: by a "common ion effect," by the occurrence of a chemical reaction involving one of the ions of the salt, or by a change in the activity coefficients of the ions of the salt. At the end, when all the NaCl dissolves, the sodium (Na To determine the molar solubility and Ksp of Ca(OH)2. b. In this laboratory, you will observe the effect of the presence of a common ion on the molar solubility and K sp of potassium hydrogen tartrate. 
How does the presence of a common ion affect the solubility of a salt? Adding a common ion decreases the solubility of a sparingly soluble solute, as predicted by Le Chatelier's principle: the solubility of AgCl in NaCl(aq), for example, is much lower than its solubility in pure water. AgCl is an ionic substance, and the tiny amount that dissolves dissociates completely into Ag+ and Cl- ions; silver nitrate (AgNO3) likewise dissociates into Ag+ and NO3-, so it supplies a common ion (Ag+) to a saturated AgCl solution. The solubility product expression tells us why: the equilibrium concentrations of the cation and the anion are inversely related. As the concentration of the anion increases, the maximum concentration of the cation that can remain in solution before precipitation occurs decreases, and vice versa, so that Ksp stays constant. Note that Ksp itself does not change when a common ion is added; only the molar solubility of the salt does.

A worked example: calculate the solubility of CaF2 in 0.10 M CaCl2. Here the calcium ion concentration is the sum of the contributions from the 0.10 M calcium chloride and from the calcium fluoride whose solubility s we are seeking. With such a small solubility product for CaF2, we can predict s << 0.10 moles per liter and simplify the equation by taking [Ca2+] ≈ 0.10 M, which gives s = 9.9 × 10^-6 M. If we go back and compare, only 4.7 percent as much CaF2 will dissolve in 0.10 M CaCl2 as in pure water (where s = 2.1 × 10^-4 M):

$\frac{(9.9 \times 10^{-6})}{2.1 \times 10^{-4}}$ x 100 = 4.7%,

so the approximation that s is small compared to 0.10 M was reasonable. Different common ions have different effects on the solubility of a solute depending on the stoichiometry of the balanced equation: fluoride is more effective than calcium as a common ion because it enters the solubility product expression to the second power. In 0.10 M NaF the 2s term is likewise << 0.10 moles per liter, and only 0.0019 percent as much CaF2 dissolves as in pure water. The same reasoning applies to other salts, for instance calcium phosphate [Ca3(PO4)2] in 0.20 M CaCl2, or the molar solubility of Ca(OH)2 in the presence of added Ca2+. Only at much higher concentrations of the counter ions is the opposite effect observed, when the solubility increases due to the formation of complexes.
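The worked CaF2 numbers above can be checked numerically. Here is a minimal sketch in Python; the Ksp value of 3.9 × 10^-11 for CaF2 is an assumed handbook value (it is not stated in the text, but it reproduces the solubilities quoted above):

```python
# Common-ion effect on CaF2 (Ksp = [Ca2+][F-]^2), assuming Ksp ~ 3.9e-11
# at 25 degC -- an assumed handbook value, not taken from the text above.
KSP = 3.9e-11

# Pure water: Ksp = (s)(2s)^2 = 4 s^3
s_water = (KSP / 4) ** (1 / 3)

# 0.10 M CaCl2 (common Ca2+): Ksp ~ (0.10)(2s)^2, assuming s << 0.10 M
s_cacl2 = (KSP / (4 * 0.10)) ** 0.5

# 0.10 M NaF (common F-): Ksp ~ (s)(0.10)^2, assuming 2s << 0.10 M
s_naf = KSP / 0.10**2

print(f"pure water  : {s_water:.2e} M")
print(f"0.10 M CaCl2: {s_cacl2:.2e} M")
print(f"0.10 M NaF  : {s_naf:.2e} M")
```

Fluoride suppresses the solubility far more strongly than calcium because [F-] is squared in the Ksp expression, consistent with the 4.7% versus 0.0019% figures quoted above.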
The salt (diverse-ion) effect works in the opposite direction: adding an "inert" salt that shares no ion with the precipitate, such as NaCl added to saturated CaSO4, slightly increases the solubility. The added ions raise the ionic strength of the solution, which lowers the activity coefficients of the dissolved ions (Debye-Huckel theory); since the solubility product is fixed in terms of activities, the equilibrium concentrations must rise to compensate. The activity coefficient of a given ion describes its effective behavior in all equilibria in which it participates. At any given ionic strength, the activity coefficients of ions of the same charge are approximately equal, and the small variations among them can be correlated with the effective diameter of the hydrated ions: highly charged ions bind solvent more tightly and have larger effective sizes than larger or less highly charged ions. For reactions between ions, the analogous kinetic salt effect, studied in detail by Bronsted and Bjerrum, is described by the Bronsted-Bjerrum equation. At high concentrations of a counter ion, solubility can also increase through complex formation, as when Cu2+ dissolves in aqueous NH3 or amphoteric Zn(OH)2 dissolves in excess hydroxide.

pH matters as well. Decreasing the pH increases the solubility of sparingly soluble bases and basic salts, and increasing the pH has the opposite effect. For example, the solubility of PbF2 is pH dependent: it increases as the solution becomes more acidic, because F- is the conjugate base of a weak acid. For a molecular solute, changing the pH of the solution changes the charge state of the solute; if the pH is such that the molecule carries no net electric charge, the solute often has minimal solubility and precipitates out of solution. This chemistry is how a fatty acid is converted to its soluble sodium salt (a soap) by a basic solution of NaOH.

Whether a compound dissolves at all is the result of a competition between two forces: the ion-dipole attraction between polar water molecules and the ions, which tends to bring ions into solution as hydrated ions, and the electrostatic attraction between oppositely charged ions, which tends to keep the ions in the solid state. The magnitude of the attractive electrostatic interactions is greatest for small, highly charged ions (e.g., CO3 2-, S 2-, PO4 3-), and the greater the charge of an ion, the more water molecules it can attract around itself. Because it is not easy to estimate the relative magnitudes of these two forces quantitatively, empirical guidelines are used to predict whether a substance is soluble in water: compounds of group 1 (1A) metallic cations and the ammonium cation, NH4+, are soluble no matter what the anion is; compounds containing acetate, C2H3O2-, or nitrate, NO3-, are soluble; compounds containing carbonate, phosphate, hydroxide, or sulfide are insoluble except with group 1 metallic ions and ammonium. Compounds that dissolve only slightly, such as CaSO4 (Ksp = 4.93 x 10^-5 at 25 degrees C, corresponding to a solubility of 7.02 x 10^-3 M if dissolution were the only equilibrium involved), are called slightly soluble.

The common-ion effect is exploited in practice to separate compounds or remove impurities from a solution by precipitation, and it plays a role in the regulation of buffers. In the laboratory it can be studied by determining the molar solubility of Ca(OH)2 in the presence of added Ca2+, or of calcium iodate in water versus an aqueous solution of potassium iodate. Related contexts include hard water (where water sources rich in limestone carry dissolved calcium carbonate, CaCO3) and cation-exchange resins, which consist of an anionic polymer whose mobile Na+ counterions exchange with the cations in solution.
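The opposite, diverse-ion (salt) effect described above can also be sketched numerically with the Debye-Huckel limiting law. All numbers here are illustrative: the limiting law is only qualitative above an ionic strength of roughly 0.01, and the ionic-strength contribution of the dissolved CaSO4 itself is ignored for simplicity.

```python
# Sketch of the salt effect on CaSO4 using the Debye-Huckel limiting law.
# Assumptions: 25 degC (A = 0.509), Ksp = 4.93e-5 treated as a thermodynamic
# (activity-based) constant, and ionic strength from the added inert salt only.
import math

A = 0.509        # Debye-Huckel constant for water at 25 degC
KSP = 4.93e-5    # CaSO4 solubility product

def gamma(z, ionic_strength):
    """Single-ion activity coefficient, limiting law: log10(g) = -A z^2 sqrt(I)."""
    return 10 ** (-A * z**2 * math.sqrt(ionic_strength))

# Without added salt (activities ~ concentrations): Ksp = s^2
s0 = math.sqrt(KSP)

# With inert salt at ionic strength I: Ksp = (g*s)(g*s) for the two
# doubly charged ions, so s = sqrt(Ksp) / g.
I = 0.01
s_salt = math.sqrt(KSP) / gamma(2, I)

print(f"no added salt: {s0:.2e} M")
print(f"I = {I}     : {s_salt:.2e} M (solubility increased)")
```

The increase in solubility with ionic strength is the behavior the text attributes to lowered activity coefficients; a real calculation would use an extended Debye-Huckel or Davies equation and iterate on the total ionic strength.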
https://zenodo.org/record/3491100/export/dcite4
There is a newer version of this record available. Software Open Access # NHERI-SimCenter/pelicun: pelicun v2.0.0 ### DataCite XML Export <?xml version='1.0' encoding='utf-8'?> <identifier identifierType="DOI">10.5281/zenodo.3491100</identifier> <creators> <creator> <affiliation>Stanford University</affiliation> </creator> </creators> <titles> <title>NHERI-SimCenter/pelicun: pelicun v2.0.0</title> </titles> <publisher>Zenodo</publisher> <publicationYear>2019</publicationYear> <dates> <date dateType="Issued">2019-10-15</date> </dates> <resourceType resourceTypeGeneral="Software"/> <alternateIdentifiers> <alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/3491100</alternateIdentifier> </alternateIdentifiers> <relatedIdentifiers> <relatedIdentifier relatedIdentifierType="URL" relationType="IsSupplementTo">https://github.com/NHERI-SimCenter/pelicun/tree/v2.0.0</relatedIdentifier> <relatedIdentifier relatedIdentifierType="DOI" relationType="IsVersionOf">10.5281/zenodo.2558557</relatedIdentifier> <relatedIdentifier relatedIdentifierType="URL" relationType="IsPartOf">https://zenodo.org/communities/nheri-simcenter</relatedIdentifier> </relatedIdentifiers> <version>v2.0.0</version> <rightsList> <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights> </rightsList> <descriptions> <description descriptionType="Abstract">&lt;p&gt;Probabilistic Estimation of Losses, Injuries, and Community resilience Under Natural disasters&lt;/p&gt; &lt;p&gt;&lt;strong&gt;What is it?&lt;/strong&gt;&lt;/p&gt; &lt;p&gt;&lt;code&gt;pelicun&lt;/code&gt; is a Python package that provides tools for assessment of damage and losses due to natural hazards. It uses a stochastic damage and loss model that is based on the methodology described in FEMA P58 (FEMA, 2012). 
While FEMA P58 aims to assess the seismic performance of a building, with &lt;code&gt;pelicun&lt;/code&gt; we&amp;nbsp;provide a more versatile, hazard-agnostic tool that estimates losses for several types of assets in the built environment.&lt;/p&gt; &lt;p&gt;Detailed documentation of the available methods and their use is available at &lt;a href="http://pelicun.readthedocs.io"&gt;http://pelicun.readthedocs.io&lt;/a&gt;&lt;/p&gt; &lt;p&gt;&lt;strong&gt;What can I use it for?&lt;/strong&gt;&lt;/p&gt; &lt;p&gt;&lt;code&gt;pelicun&lt;/code&gt;&amp;nbsp;quantifies&amp;nbsp;losses from an earthquake or hurricane scenario in the form of &lt;em&gt;decision variables&lt;/em&gt;. This functionality is typically utilized for performance-based engineering and&amp;nbsp;regional risk assessment. There are several steps of&amp;nbsp;performance assessment that &lt;code&gt;pelicun&lt;/code&gt; can help with:&lt;/p&gt; &lt;ul&gt; &lt;li&gt; &lt;p&gt;&lt;strong&gt;Describe the joint distribution of asset (e.g. building) response.&lt;/strong&gt; The response of a structure or other type of asset to an earthquake or hurricane wind is typically described by so-called &lt;em&gt;engineering demand parameters&lt;/em&gt; (EDPs). &lt;code&gt;pelicun&lt;/code&gt; provides methods that take a finite number of EDP vectors and find a multivariate distribution that describes the joint distribution of EDP data well. You can control the type of target distribution, apply truncation limits and censor part of the data to consider detection limits in your analysis. Alternatively, you can choose to use your EDP vectors as-is without resampling from a fitted distribution.&lt;/p&gt; &lt;/li&gt; &lt;li&gt; &lt;p&gt;&lt;strong&gt;Define the damage and loss model of a building.&lt;/strong&gt; The component damage and loss data from the first two editions of FEMA P58 and the HAZUS earthquake and hurricane models for buildings are provided with &lt;code&gt;pelicun&lt;/code&gt;.
This makes it easy to define building components without having to collect and provide&amp;nbsp;all the data manually. The stochastic damage and loss model is designed to facilitate modeling correlations between several parameters of the damage and loss model.&lt;/p&gt; &lt;/li&gt; &lt;li&gt; &lt;p&gt;&lt;strong&gt;Estimate component damages.&lt;/strong&gt; Given a damage and loss model and the joint distribution of EDPs, &lt;code&gt;pelicun&lt;/code&gt; provides methods to estimate the amount of damaged components and the number of cases with collapse.&lt;/p&gt; &lt;/li&gt; &lt;li&gt; &lt;p&gt;&lt;strong&gt;Estimate consequences.&lt;/strong&gt; Using information about collapse&amp;nbsp;and component damages, the following consequences can be estimated with the loss model: reconstruction cost and time, unsafe placarding (red tag),&amp;nbsp;injuries&amp;nbsp;and fatalities.&amp;nbsp;&lt;/p&gt; &lt;/li&gt; &lt;/ul&gt; &lt;p&gt;&lt;strong&gt;Why should I use it?&lt;/strong&gt;&lt;/p&gt; &lt;ol&gt; &lt;li&gt;It is free and it always will be.&amp;nbsp;&lt;/li&gt; &lt;li&gt;It is open source. You can always see what is happening under the hood.&lt;/li&gt; &lt;li&gt;It is efficient. The loss assessment calculations in &lt;code&gt;pelicun&lt;/code&gt; use &lt;code&gt;numpy&lt;/code&gt;, &lt;code&gt;scipy&lt;/code&gt;, and &lt;code&gt;pandas&lt;/code&gt;&amp;nbsp;libraries to efficiently propagate uncertainties and provide detailed results quickly.&lt;/li&gt; &lt;li&gt;You can trust it. Every function in &lt;code&gt;pelicun&lt;/code&gt; is tested after every commit. See the Travis-CI and Coveralls badges at the top for more info.&amp;nbsp;&lt;/li&gt; &lt;li&gt;You can extend it. If you have other methods that you consider better than the ones we already offer, we encourage you to fork the repo and extend &lt;code&gt;pelicun&lt;/code&gt; with your approach. 
You do not need to share your extended version with the community, but if you are interested in doing so, contact us and we are more than happy to merge your version with the official release.&lt;/li&gt; &lt;/ol&gt; &lt;p&gt;&lt;strong&gt;Major changes&amp;nbsp;in v2.0:&lt;/strong&gt;&lt;/p&gt; &lt;ul&gt; &lt;li&gt;Migrated to the latest version of Python, numpy, scipy, and pandas see setup.py for required minimum versions of those tools.&lt;/li&gt; &lt;li&gt;Python 2.x is no longer supported.&lt;/li&gt; &lt;li&gt;Improve DL input structure to &lt;ul&gt; &lt;li&gt;make it easier to define complex performance models&lt;/li&gt; &lt;li&gt;make input files easier to read&lt;/li&gt; &lt;li&gt;support custom, non-PACT units for component quantities&lt;/li&gt; &lt;li&gt;support different component quantities on every floor&lt;/li&gt; &lt;/ul&gt; &lt;/li&gt; &lt;li&gt;Updated FEMA P58 DL data to use ea for equipment instead of units such as KV, CF, AP, TN.&lt;/li&gt; &lt;li&gt;Added FEMA P58 2nd edition DL data.&lt;/li&gt; &lt;li&gt;Supported EDP inputs in standard csv format.&lt;/li&gt; &lt;li&gt;Add a function that produces SimCenter DM and DV json output files.&lt;/li&gt; &lt;li&gt;Add a differential evolution algorithm to the EDP fitting function to do a better job at finding the global optimum.&lt;/li&gt; &lt;li&gt;Enhance DL_calculation.py to handle multi-stripe analysis (significant contributions by Joanna Zou): &lt;ul&gt; &lt;li&gt;recognize stripe_ID and occurrence rate in BIM/EVENT file&lt;/li&gt; &lt;li&gt;fit a collapse fragility function to empirical collapse probabilities&lt;/li&gt; &lt;li&gt;perform loss assessment for each stripe independently and produce corresponding outputs&lt;/li&gt; &lt;/ul&gt; &lt;/li&gt; &lt;/ul&gt; &lt;p&gt;&lt;strong&gt;Major changes&amp;nbsp;in v1.2:&lt;/strong&gt;&lt;/p&gt; &lt;ul&gt; &lt;li&gt;Support for HAZUS hurricane wind damage and loss assessment&lt;/li&gt; &lt;li&gt;Add HAZUS hurricane DL data for wooden houses&lt;/li&gt; 
&lt;li&gt;Move DL resources inside the pelicun folder so that they come with pelicun when it is pip installed&lt;/li&gt; &lt;li&gt;Add various options for EDP fitting and collapse probability estimation&lt;/li&gt; &lt;li&gt;Improved the way warning messages are printed to make them more useful&lt;/li&gt; &lt;/ul&gt;</description> <description descriptionType="Other">This material is based upon work supported by the National Science Foundation under Grant No. 1612843. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.</description> </descriptions> </resource>
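As an aside, a record like the DataCite export above can be read with the Python standard library alone. This is a sketch on a trimmed-down stand-in record (the real export carries an XML namespace, omitted here for brevity):

```python
# Minimal sketch: extracting a few fields from a DataCite-style XML record.
# The record string is a stand-in based on the export above, not the full file.
import xml.etree.ElementTree as ET

record = """<resource>
  <identifier identifierType="DOI">10.5281/zenodo.3491100</identifier>
  <titles><title>NHERI-SimCenter/pelicun: pelicun v2.0.0</title></titles>
  <publicationYear>2019</publicationYear>
  <version>v2.0.0</version>
</resource>"""

root = ET.fromstring(record)
fields = {
    "doi": root.findtext("identifier"),
    "title": root.findtext("titles/title"),
    "year": root.findtext("publicationYear"),
    "version": root.findtext("version"),
}
print(fields)
```

For namespaced DataCite XML, the same `findtext` calls would take a namespace map, e.g. `root.findtext("d:identifier", namespaces={"d": "..."})`.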
https://www.physicsforums.com/threads/gauss-law-concentric-spheres.835063/
Gauss Law: Concentric Spheres

1. Sep 29, 2015 SarahAlbert

1. The problem statement, all variables and given/known data
Two concentric spheres have radii a and b with b > a. The region between them is filled with charge of constant density. The charge density is zero everywhere else. Find E at all points and express it in terms of the total charge Q. Do your results reduce to the correct values as a -> 0?

2. Relevant equations
Flux = EA

3. The attempt at a solution
My line of thinking: The flux is independent of the radius because with two concentric spheres it only depends on the charge enclosed by the sphere. The flux is then the same for both areas and it is equal to $Q/\epsilon_0$. I'm not given what the charge density is, so isn't my answer simply $Q/\epsilon_0$? Any help would be appreciated. Thank you. I'm not looking for the solution, I guess I need to know if my line of thinking is correct in any way?

2. Sep 29, 2015 Orodruin Staff Emeritus

Is the charge enclosed by a sphere independent of the radius of the sphere in this case? What if r < a or a < r < b?

3. Sep 29, 2015 SarahAlbert

a <= r <= b (a is less than or equal to r, which is less than or equal to b)

4. Sep 29, 2015 Orodruin Staff Emeritus

No, you are being asked to find the field at any point. This means that $r$ can be either larger or smaller than $a$ or $b$. The question in my previous post still stands:

5. Sep 29, 2015 SarahAlbert

All the examples I've read say that yes, the charge enclosed is independent of the sphere. However, in those cases the charge was enclosed by the small sphere. In this case the charge is between a and b, so I don't believe the charge enclosed is independent of the radius, because it's located in the region between a and b.

6. Sep 29, 2015 Orodruin Staff Emeritus

So take it step by step and treat one case at a time. You have three different cases: $r < a$, $a < r < b$, and $r > b$. Start with $r < a$. What is the enclosed charge of a sphere with a radius less than $a$?

7.
Sep 29, 2015 SarahAlbert

Isn't it zero? According to the problem, even though it is constant between b and a, it is zero everywhere else.

8. Sep 29, 2015 Orodruin Staff Emeritus

Yes, it is zero. What does this imply for the electric field? When you have answered this question you can go on with the case of $r > b$ and then finally $a < r < b$.

9. Sep 29, 2015 SarahAlbert

Thank you so much for all your help by the way!
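For reference, here is where these steps lead (a sketch, not part of the original thread): with uniform charge density $\rho$ between the spheres, the total charge is $Q = \frac{4}{3}\pi\rho\,(b^3 - a^3)$, and applying Gauss's law to each case gives

$$E(r) = \begin{cases} 0, & r < a, \\[4pt] \dfrac{Q}{4\pi\epsilon_0 r^2}\,\dfrac{r^3 - a^3}{b^3 - a^3}, & a \le r \le b, \\[4pt] \dfrac{Q}{4\pi\epsilon_0 r^2}, & r > b. \end{cases}$$

As $a \to 0$ this reduces to the field of a uniformly charged ball, $E = Qr/(4\pi\epsilon_0 b^3)$ inside and $E = Q/(4\pi\epsilon_0 r^2)$ outside, which is the consistency check the problem asks for.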
https://www.groundai.com/project/transitions-in-the-quantum-computational-power/
Transitions in the quantum computational power

Tzu-Chieh Wei (C. N. Yang Institute for Theoretical Physics and the Department of Physics and Astronomy, State University of New York at Stony Brook, Stony Brook, NY 11794-3840, USA); Ying Li (Centre for Quantum Technologies, National University of Singapore, 2 Science Drive 3, Singapore; Department of Materials, University of Oxford, Parks Road, Oxford OX1 3PH, United Kingdom); Leong Chuan Kwek (Centre for Quantum Technologies, National University of Singapore, 2 Science Drive 3, Singapore; National Institute of Education and Institute of Advanced Studies, Nanyang Technological University, 1 Nanyang Walk, Singapore)

July 11, 2019

Abstract: We construct two spin models on lattices (both two- and three-dimensional) to study the capability of quantum computational power as a function of temperature and the system parameter. There exists a finite region in the phase diagram such that the thermal equilibrium states are capable of providing a universal fault-tolerant resource for measurement-based quantum computation. Moreover, in such a region the thermal resource states on the 3D lattices can enable topological protection for quantum computation. The two models behave similarly in terms of quantum computational power. However, they have different properties in terms of the usual phase transitions. The first model has a first-order phase transition only at zero temperature, whereas there is no transition at all in the second model. Interestingly, the transition in the quantum computational power does not coincide with the phase transition in the first model.

pacs: 03.67.Ac, 03.67.Lx, 05.70.Fh, 75.10.Jm

I Introduction

Transitions in phases of matter, such as the melting of ice and the boiling of water, are common in everyday life Thermodynamics ().
They also occur at zero temperature, where the properties of the system are governed not by thermal effects but by quantum mechanical fluctuations Sachdev (). Tremendous understanding has been gained of transitions in phases of matter. Recently, ideas from quantum information and computation NielsenChuang00 () have given rise to new perspectives on examining phases of matter, such as topological phases and their classification Wen (). Moreover, from the viewpoint of computational universality in measurement-based quantum computation (MBQC) GoChua (); NielsenLeungChilds (); Oneway (); Oneway2 (); RaussendorfWei12 (), a few works have suggested that resource states can emerge from certain quantum phases of matter DohertyBartlett (); Miyake (); BartlettBrennenMiyakeRenes (); ElseSchwarzBartlettDoherty (); ElseBartlettDoherty (); FujiiNakataOhzekiMurao () and that the transition in the quantum computational capability results in a new notion of phase transitions GrossEisertEtAl (); Browne (); BarrettBartlettDohertyJenningsRudolph (); Darmawan (). Here, we construct two models and investigate whether their ground states and thermal states provide a universal quantum computational resource for MBQC. As we shall see, both models exhibit similar 'phase diagrams' in terms of quantum computational power, in both two and three dimensions. Working in 3D offers the possibility of topological protection in carrying out quantum computation, even at higher temperatures than in 2D. The two models are natural extensions of a symmetric model that we considered previously Thermal (), and the asymmetric parameter introduced here can be used to study its effect on computational universality, as well as the possibility of tuning the system through a quantum phase transition. They are exactly solvable, and thus also allow us to study and compare with the usual transitions in phases of matter.
The first model has a first-order phase transition only at zero temperature, and this transition does not coincide with the transition in the quantum computational power. Moreover, even though there is no phase transition at any finite temperature, there is a region at finite temperature that supports universal quantum computation. The second model does not have a phase transition at zero temperature but has a transition in quantum computational power at both zero and finite temperatures. The remainder of the paper is organized as follows. In Sec. II we introduce the two models, which are defined on any trivalent lattice in either two or three dimensions. We focus on the ground-state properties as well as the phase diagram at finite temperatures. In Sec. III we discuss the zero-temperature quantum computational capability and show the existence of a range of the system parameter where the ground state can provide a useful resource for universal MBQC. In Sec. IV we turn to finite temperatures and consider the thermal effects on quantum computational universality. We use the techniques of fault-tolerant quantum computation (FTQC) to map out regions in the phase diagram where FTQC can still be carried out by using thermal states for universal MBQC. The corresponding phase diagrams of quantum computational power are obtained for both models in both two and three dimensions. It is worth mentioning that the 3D models provide topological protection and hence the transition temperature in QC power is higher than that in 2D. We make concluding remarks in Sec. V.

II. Two model Hamiltonians

We have previously constructed a model Hamiltonian whose thermal states can be used for universal MBQC even without turning off the Hamiltonian Thermal (). The idea is to take a small unit of a few spins, e.g., one spin-3/2 at the center coupled to three outer spin-1/2 particles that interact with it via the Heisenberg interaction; see Fig. 1.
Then we stack up many such units to form a higher-dimensional structure, e.g., the decorated 2D honeycomb or other trivalent lattices, or even 3D lattices, and then “glue” or map two smaller spins (i.e. spin-1/2 particles) from neighboring units to a single larger spin; see e.g. Fig. 1. Each merged spin, which we shall refer to as a bond particle, possesses a Hilbert space of dimension 4 (i.e. two copies of a qubit) and hence is equivalent to a spin-3/2 entity. One advantage of this approach is that the ground state and its spectral gap can be readily solved and checked. As we shall see, the exactly solvable Hamiltonians thereby constructed allow for fault-tolerant, universal quantum computation with thermal states, and even with topological protection in three dimensions Thermal (). There was no free parameter in the Hamiltonians in Ref. Thermal (). It was not clear whether such quantum computational universality occurred only for the specific Hamiltonian or could be extended to a region in a phase diagram. Here we use as building blocks two different types of interactions beyond the Heisenberg interaction to allow a free system parameter: the XXZ interaction and an additional on-site anisotropic term, and we investigate the relation between the statistical-mechanical and quantum computational features of the resultant two- and three-dimensional models as the system parameter and the temperature vary. (Note that the upper-case operator is a spin operator for the center particle of larger spin magnitude, whereas the lower-case one is a spin-1/2 operator, i.e., ‘half’ of the degree of freedom in a bond particle, and will be denoted by A or B later. These interactions might be engineered in cold atoms or trapped ions.) It turns out to be useful to relate the ground-state wavefunctions of the two models if we parameterize the z-coupling anisotropy by 1+δ in the first model, so that the Heisenberg point is at δ = 0. We thus arrive at two spin models.
The Hamiltonian for model I consists of two types of interactions:

V_line = S^x_c A^x_b + S^y_c A^y_b + (1+δ) S^z_c A^z_b,   (1)
V_dash = S^x_c B^x_b + S^y_c B^y_b + (1+δ) S^z_c B^z_b,   (2)

where the A’s and B’s are two independent spin-1/2 operators for the two virtual qubits of a bond particle. For model II,

V_line = S^x_c A^x_b + S^y_c A^y_b + S^z_c A^z_b,   (3)
V_dash = S^x_c B^x_b + S^y_c B^y_b + S^z_c B^z_b,   (4)
V_c = −d_z (S^z_c)²,   (5)

where V_c is a local term on the center particles. These two models can be placed on two- and three-dimensional lattices; see e.g. the hexagonal lattice in Fig. 1 and the 3D lattice in Fig. 3c.

II.1 Model 1: XXZ interaction in a building block

Consider the XXZ interaction for each unit. The Hamiltonian within each unit can be exactly solved. For , the ground-state energy is (see Fig. 2) and the ground-state wavefunction for a unit (which is unique and gapped) is

|Ψ(δ)⟩ = N₀(δ) [ −(|3/2,−3/2⟩ − |−3/2,3/2⟩) + ((−2δ + √(9+4δ²))/3) (|1/2,−1/2⟩ − |−1/2,1/2⟩) ],   (6)

where N₀(δ) is a normalization constant such that the wavefunction is properly normalized, and the symbol |m, M⟩ denotes the joint state of the center spin-3/2 (with S^z eigenvalue m) and the three outer virtual spin-1/2 particles (with total s^z eigenvalue M). The ground-state wavefunction for the whole 2D system is simply a product of |Ψ(δ)⟩ over all units (modulo appropriate merging). For , , which is a four-spin GHZ state. Because of the merging of outer spin-1/2 particles across two units, such entanglement is useful for quantum computation, as explained in Refs. GoChua () and VerstraeteCirac (). As δ approaches 0, the interaction reduces to the Heisenberg interaction within a unit and universal quantum computation can be done on such a two-dimensional structure Thermal (). For , the ground states are doubly degenerate: and , each of which is ferromagnetic within the unit. The ground-state energy is . At a small but finite temperature (smaller than the gap above the ground space), the thermal state will be approximately an equal mixture of the two degenerate ground states, possessing no entanglement.
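Since each unit lives in a space of dimension 4 × 2³ = 32, the exact solvability claimed above is easy to check numerically. The sketch below (our own illustration, not taken from the paper) builds the model-1 unit Hamiltonian and diagonalizes it at the Heisenberg point δ = 0, where angular-momentum addition gives a unique singlet ground state of energy ½[J(J+1) − S(S+1) − L(L+1)] with J = 0, S = L = 3/2, i.e., −15/4, and a gap of exactly 1 in units of the coupling.

```python
import numpy as np

def spin_ops(s):
    """Return (Sx, Sy, Sz) matrices for a spin-s particle."""
    d = int(round(2 * s)) + 1
    m = s - np.arange(d)                      # magnetic quantum numbers, descending
    sz = np.diag(m)
    sp = np.zeros((d, d))                     # raising operator S+
    for k in range(1, d):
        sp[k - 1, k] = np.sqrt(s * (s + 1) - m[k] * (m[k] + 1))
    return (sp + sp.T) / 2, (sp - sp.T) / 2j, sz

def unit_hamiltonian(delta):
    """XXZ unit of model 1: center spin-3/2 coupled to three outer spin-1/2's."""
    S, s = spin_ops(1.5), spin_ops(0.5)
    dims = [4, 2, 2, 2]
    def embed(op, site):
        mats = [np.eye(d) for d in dims]
        mats[site] = op
        out = mats[0]
        for mm in mats[1:]:
            out = np.kron(out, mm)
        return out
    H = np.zeros((32, 32), dtype=complex)
    for i in (1, 2, 3):                       # the three outer spins
        for a in (0, 1, 2):                   # x, y, z components
            w = (1 + delta) if a == 2 else 1.0  # anisotropy on the z coupling
            H += w * embed(S[a], 0) @ embed(s[a], i)
    return H

evals = np.linalg.eigvalsh(unit_hamiltonian(0.0))
E0, gap = evals[0], evals[1] - evals[0]
```

The same routine with δ ≠ 0 can be used to trace the level crossing responsible for the first-order transition discussed below.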
Therefore, for , the whole system is not useful for universal quantum computation due to the lack of entanglement. The ground-state energy has a discontinuity in its first-order derivative with respect to δ (see Fig. 2), i.e., there is a first-order phase transition. As the ground state in the ferromagnet-like phase cannot enable universal quantum computation, one is led to inquire whether universal quantum computation is possible for and whether the emergence of such computational power coincides with the phase transition.

II.2 Model 2: Heisenberg interaction with an on-site anisotropic term

In this section we consider the interaction of the form given in Eqs. (3)-(5). As Pauli operators square to identity, there is no need to add such an on-site term for spin-1/2 particles. As the model is exactly solvable, the ground-state energy for a unit consisting of one center spin-3/2 and three outer spin-1/2 particles can be obtained for the whole range of d_z; see Fig. 2. Furthermore, the ground state (which is unique and gapped) is

|Ψ(d_z)⟩ = N₁(d_z) [ −(|3/2,−3/2⟩ − |−3/2,3/2⟩) + ((−2d_z + √(9+4d_z²))/3) (|1/2,−1/2⟩ − |−1/2,1/2⟩) ],   (7)

where N₁(d_z) is a normalization constant such that the wavefunction is properly normalized. We see that the ground-state wavefunction and its energy are of the same form as in model 1. Hence, the computational power of the two models at zero temperature will be the same in the corresponding range. However, in contrast with model 1, this model does not have a phase transition in the state of matter. As this model contains the Heisenberg point, which is universal for MBQC, one is led to inquire whether the whole phase is universal (as there is no phase transition), as opposed to the first model.

III. Creating a 2D cluster state from ground states

We shall first consider the range of the parameters for the two models where the ground state within a unit is of the same form:

|Ψ(a)⟩ ∼ −(|3/2,−3/2⟩ − |−3/2,3/2⟩) + (1/a)(|1/2,−1/2⟩ − |−1/2,1/2⟩).   (8)

For Model 1: the relation of a to δ (for ) is given by a⁻¹ = (−2δ + √(9+4δ²))/3.
(9) For Model 2: the relation of a to d_z (for the whole range of d_z) is given by a⁻¹ = (−2d_z + √(9+4d_z²))/3. (10) Since the two models possess the same form of the ground-state wavefunction in the appropriate range of the parameters, we can treat the quantum computational universality at zero temperature on an equal footing. We note, however, that at finite temperatures the region of quantum computational universality will differ due to the different structures in the excited states and their energies. This will be treated in the next section. The case a = 1 reduces to the Heisenberg interaction, whose use for MBQC has been shown and detailed in Ref. Thermal (); this corresponds to δ = 0 (or d_z = 0). Examining the wavefunction (8), we see that we can recover the a = 1 wavefunction if we can apply the following operation on the center spin-3/2 particle:

D(a) = diag(1, a, a, 1),   (11)

in the basis of |3/2⟩, |1/2⟩, |−1/2⟩, and |−3/2⟩. However, such a filtering operation cannot be realized with unit probability of success. This is because to implement a filtering operation such as D(a), one needs to include another element representing the unsuccessful filtering so that the completeness relation holds. The solution is to use a generalized measurement that can incorporate the filtering. For a = 1, the filtering is not needed and a generalized measurement has been used Thermal () so that a GHZ state can be obtained within each unit. The POVM elements (for spin-3/2’s) were first constructed in Refs. WeiAffleckRaussendorf11 (); Miyake11 (),

F̃_x = √(2/3) (|3/2⟩_x⟨3/2| + |−3/2⟩_x⟨−3/2|),   (12a)
F̃_y = √(2/3) (|3/2⟩_y⟨3/2| + |−3/2⟩_y⟨−3/2|),   (12b)
F̃_z = √(2/3) (|3/2⟩_z⟨3/2| + |−3/2⟩_z⟨−3/2|).   (12c)

For general a, we use a deformed POVM with elements F_α ∝ F̃_α D(a) (the proportionality constants are to be determined below) to act on the center particle so as to distill a GHZ state. The reason this works can be illustrated by the example of F_z. First D(a) restores the wavefunction back to the a = 1 case. Then F̃_z filters out the GHZ state.
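Equations (9) and (10) give the same map from the microscopic parameter to the deformation parameter a of the common ground-state form (8). A minimal sketch (function names are ours) makes the correspondence explicit and checks that both models sit at a = 1 at the symmetric (Heisenberg) point:

```python
import math

# Eqs. (9)-(10): deformation parameter a of the ground-state form (8).
def a_of_delta(delta):
    """Model 1: a^{-1} = (-2*delta + sqrt(9 + 4*delta^2)) / 3."""
    return 3.0 / (-2.0 * delta + math.sqrt(9.0 + 4.0 * delta ** 2))

def a_of_dz(dz):
    """Model 2: the same functional form with delta -> d_z."""
    return 3.0 / (-2.0 * dz + math.sqrt(9.0 + 4.0 * dz ** 2))
```

Note that the denominator is always positive (since √(9+4δ²) > |2δ|), so a is well defined and increases monotonically with the parameter.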
If we choose to encode the effective qubit for the center particle by and , and the virtual spin-1/2’s by the usual definition and , then the resultant GHZ state for the z outcome is

|GHZ⟩ = (1/√2)(|0000⟩ + |1111⟩).   (13)

As the wavefunction is symmetric under rotation, the x and y outcomes simply produce the GHZ state in the x and y bases, respectively. By imposing the completeness relation, we find the proportionality constants. This can be verified easily by direct calculation, which yields

F†_x F_x + F†_y F_y = diag(1/(3a²), 1, 1, 1/(3a²)),   (14)
F†_z F_z = diag(1 − 1/(3a²), 0, 0, 1 − 1/(3a²)).   (15)

In order for the above expressions to remain non-negative, such a construction is valid only when a² ≥ 1/3. We note that a similar construction of a POVM was first used in Ref. Darmawan () in the context of a deformed AKLT model. The POVM elements give rise to three possible outcomes α ∈ {x, y, z}, and any of them is a good outcome. This effectively generates a product of GHZ states among all units. However, the GHZ states are in different bases, depending on the outcome α. To fix the outcome basis to the z basis, we can imagine applying a unitary transformation to the post-POVM state if the outcome is x or y. This is equivalent to using a different measurement basis for the effective qubits. For outcomes x and y, we perform the operations U_y and U_x, respectively, where

U_y = exp[i(π/2)(S^y_c + s^y_1 + s^y_2 + s^y_3)],   (16)
U_x = exp[−i(π/2)(S^x_c + s^x_1 + s^x_2 + s^x_3)],   (17)

so that the resultant GHZ state is always of the form (13). Then a measurement on the bond particle (i.e., a joint measurement on the two virtual qubits) can be used to induce a controlled-Z (CZ) gate between two center particles Cai10 (); Thermal (); WeiRaussendorfKwek11 (). The result is a cluster state, a universal resource state. To summarize, we have thus shown that for a² ≥ 1/3, the ground state is universal for MBQC. For a² < 1/3, we need to use a different POVM Darmawan (), i.e.,

F′_x = √3 F̃_x D(a),  F′_y = √3 F̃_y D(a),   (18)
F′_z = diag(0, √(1−3a²), √(1−3a²), 0).
(19) One can easily verify the completeness relation via a direct calculation that yields

F′†_x F′_x + F′†_y F′_y = diag(1, 3a², 3a², 1),   (20)
F′†_z F′_z = diag(0, 1−3a², 1−3a², 0).   (21)

In this case, the z outcome is not desirable, i.e., it needs to be regarded as an error (specifically a qubit loss), as a GHZ state cannot be obtained. The outcomes from F′_x and F′_y still yield a perfect GHZ state. To arrive at the same GHZ state (13) as in the case of a² ≥ 1/3, we further perform the operations U_y and U_x for outcomes x and y, respectively. A site with the undesirable outcome is equivalent to a leakage out of the logical qubit space (or a qubit loss), but it can be removed without affecting neighboring center sites by performing measurements on the surrounding bond particles so as to disentangle the unit (the center spin and the three virtual qubits) from the neighboring ones. Thus the qubit loss rate corresponds to the probability of obtaining a z outcome,

p_delete = (1 − 3a²)/(1 + a²).   (22)

If the fraction of remaining sites is smaller than the site percolation threshold (which depends on the lattice, such as honeycomb, cross, and square-octagon), then there is not sufficient connection in the remaining network and thus no two-dimensional graph state can be distilled Browne (). Fortunately, it turns out that there is a finite range of a below 1/√3 such that the remaining sites still possess enough connection, i.e., the corresponding graph resides in the supercritical phase of percolation. For universal MBQC, it is thus required that 1 − p_delete exceed the site percolation threshold. This gives the threshold values of a for the honeycomb, square-octagon, and cross lattices, respectively. For the honeycomb lattice, the threshold translates to a critical value of the system parameter. Therefore, at zero temperature, there is a transition in the quantum computational power in both models and, in the first model, it occurs before the system reaches its phase transition as δ decreases. The exact location of the transition point in the quantum computational power depends on the underlying lattice, due to the connection to percolation.
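The diagonal forms (20)-(22) can be verified in a few lines. The sketch below (our own check, with a chosen arbitrarily in the filtering regime a² < 1/3) confirms that the deformed POVM resolves the identity and that the loss rate p_delete vanishes at the boundary a = 1/√3 and grows as a decreases:

```python
import numpy as np

def completeness_check(a):
    """Verify Eqs. (20)-(21): the deformed POVM resolves the identity."""
    AxAy = np.diag([1.0, 3 * a**2, 3 * a**2, 1.0])          # F'_x, F'_y part
    Az = np.diag([0.0, 1 - 3 * a**2, 1 - 3 * a**2, 0.0])    # F'_z part
    return np.allclose(AxAy + Az, np.eye(4))

def p_delete(a):
    """Eq. (22): probability of the undesirable z outcome (a qubit loss)."""
    return (1 - 3 * a**2) / (1 + a**2)
```

Comparing 1 − p_delete(a) against the site percolation threshold of the chosen lattice then locates the zero-temperature transition in computational power.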
Connection between percolation and quantum computation has been previously explored, such as site percolation in noisy cluster states Browne () and bond percolation in nondeterministic gates for cluster-state preparation Kieling (). The above analysis shows that for model 1 the transition in the quantum computational power (on the honeycomb lattice) does not coincide with the transition in the phase of matter. Moreover, even though model 2 possesses only one phase of matter, the quantum computational universality exists only in part of the phase. In the following we shall investigate the finite-temperature effect on the quantum computational universality and determine the corresponding ‘phase diagram’ in terms of quantum computational capability.

IV. Thermal states and fault tolerance: two and three dimensions

Because the Hamiltonian for both models can be divided into units independent of one another, the free energy at finite temperatures is non-singular, and thus there is no phase transition at any finite temperature. As we shall see below, the region with universal quantum computational power extends up to certain finite temperatures. At a finite temperature, the system is not in the exact ground state but in a thermal state. This means that the production of a GHZ state in each unit (and thus the global cluster state) is faulty. Therefore, whether the ‘phase’ of universal quantum computational power exists depends on how one can deal with errors. In particular, the ‘phase’ boundary will depend on the error rates and the thresholds for fault-tolerant quantum computation (FTQC). Our goal is to establish the existence of a nonzero-temperature region in which universal MBQC is possible rather than to pinpoint the absolute boundary of such a region. In the following we describe in detail the error analysis and how the ‘phase diagram’ of the computational power is obtained.
For those readers who wish to skip the details, the ‘microscopic’ construction is in Fig. 3 and the resultant phase diagrams for both models are in Fig. 4 for 2D and in Fig. 5 for 3D. For each set of particles, the thermal state reads

ρ_T = e^{−H/T} / Tr e^{−H/T},   (23)

where H is the Hamiltonian of the four spins, including one spin-3/2 and three spin-1/2 particles, and T is the temperature. As the input state is a thermal state, the output state after the POVM and the associated unitary operations is a noisy GHZ state. If a² ≥ 1/3, the output state is

ρ_GHZ = U_y F_x ρ_T F†_x U†_y + U_x F_y ρ_T F†_y U†_x + F_z ρ_T F†_z,   (24)

and the success probability is unity. If a² < 1/3, the output state is

ρ_GHZ = p_s⁻¹ (U_y F′_x ρ_T F′†_x U†_y + U_x F′_y ρ_T F′†_y U†_x),   (25)

where the success probability p_s is less than unity due to the ‘loss’ of logical qubits. The ideal GHZ state (13) is the common eigenstate of the stabilizer elements X₀X₁X₂X₃, Z₀Z₁, Z₁Z₂, and Z₂Z₃ (this set denoted by {K}) with the same eigenvalue +1. Here, X₀ and Z₀ are Pauli operators of the center qubit, and similarly for the other three qubits. In order to use fault-tolerant quantum computing (FTQC) theory to analyze the computational power, we convert imperfections in the noisy GHZ state into Pauli errors by randomly performing stabilizer operations, which results in

ρ′_GHZ = ∏_{K∈{K}} (1/2)([𝟙] + [K]) ρ_GHZ,   (26)

where [K]ρ ≡ KρK†. Here, {K} is the set of the above stabilizer generators. Such randomization can be effectively performed by updating the basis of the ensuing single-particle measurements rather than by actively applying the K’s. The state is thus diagonal in the basis of stabilizers and can be written as

ρ′_GHZ = Σ_{σ∈{σ}} p_σ [σ] |GHZ⟩⟨GHZ|,   (27)

where the σ’s are Pauli operators listed in Table 1, each corresponding to a common eigenstate of the stabilizers, and p_σ is the probability of the corresponding Pauli error. If the eigenvalue of a stabilizer is −1 in such an eigenstate, there is an error in the state. Note that for convenience of notation we use X to denote the Pauli x operator and Z the Pauli z operator, and one could equally attribute the eigenvalue −1 to a different but equivalent error assignment.
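The twirl of Eq. (26) can be illustrated directly on a 4-qubit GHZ state. The sketch below is our own toy demonstration (the noisy input and the generator set X₀X₁X₂X₃, Z₀Z₁, Z₁Z₂, Z₂Z₃ are assumptions based on the standard GHZ stabilizers): averaging over the 16-element stabilizer group makes the state commute with every stabilizer, i.e., diagonal in the stabilizer eigenbasis, while leaving the GHZ fidelity untouched.

```python
import numpy as np
from itertools import product

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

def kron(ops):
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

# Assumed stabilizer generators of the 4-qubit GHZ state (qubit 0 = center):
gens = [kron([X, X, X, X]), kron([Z, Z, I2, I2]),
        kron([I2, Z, Z, I2]), kron([I2, I2, Z, Z])]

ghz = np.zeros(16)
ghz[0] = ghz[15] = 1 / np.sqrt(2)            # (|0000> + |1111>)/sqrt(2)

# A hypothetical noisy input: GHZ mixed with a slightly perturbed copy.
rng = np.random.default_rng(0)
pert = ghz + 0.1 * rng.standard_normal(16)
pert /= np.linalg.norm(pert)
rho = 0.8 * np.outer(ghz, ghz) + 0.2 * np.outer(pert, pert)

# Eq. (26): twirl over the group generated by gens (products of subsets).
rho_t = np.zeros((16, 16))
for bits in product([0, 1], repeat=4):
    g = np.eye(16)
    for b, K in zip(bits, gens):
        if b:
            g = g @ K
    rho_t += g @ rho @ g / 16.0
```

Since every group element fixes |GHZ⟩, each term ⟨GHZ| g ρ g |GHZ⟩ equals ⟨GHZ| ρ |GHZ⟩, which is why the twirl preserves the fidelity while killing the off-diagonal (coherent) error terms.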
Similarly, the eigenvalue of corresponds to an error. Therefore, error probabilities can be obtained from the diagonal elements of . As seen above, in addition to single-qubit errors, some errors occur simultaneously, such as and . In our numerical results, we find that only the type of correlated errors is significant (see e.g. Table 1), and other correlated errors are negligible even at the transition point of the computational power, i.e., the FTQC threshold. Actually, these other correlated errors constitute less than of the overall errors. Therefore, only the errors , , and will be taken into account in the following. We can construct a 2D cluster state on the square lattice from the models sitting on the honeycomb lattice, as well as a 3D cluster state from the models on the lattice proposed in Ref. Fujii (), modified from a construction in Ref. Thermal () (see Fig. 3). In both cases, each qubit of the cluster state corresponds to two GHZ states. The procedure for obtaining a 2D cluster state on a square lattice is explained in detail in the Appendix, and is easily adapted to the 3D case. Moreover, the effect of errors on the cluster state can be analyzed straightforwardly, and is summarized in Table 2. We describe them now. On the two GHZ states, each error is propagated to an error on the corresponding cluster-state qubit. We label the spin-1/2 bond particle measured for fusing two GHZ states as qubit-1. Then, each error is propagated to a correlated error on two neighboring cluster-state qubits (see Table 2), and errors are propagated to independent errors on neighboring cluster-state qubits. Similarly, each is propagated to a correlated error on the corresponding cluster-state qubit and two neighboring cluster-state qubits, and errors are propagated to a correlated error on the corresponding cluster-state qubit and one neighboring cluster-state qubit. Other types of errors on GHZ states have been neglected as they rarely occur.
Therefore, on the final cluster state, the total probability of phase errors on each qubit is

p_z ≃ 2(p_{Z0} + 2p_{X1} + p_{X2} + p_{X3} + 3p_{Z0X1} + 2p_{Z0X2} + 2p_{Z0X3}),   (28)

where p_{Z0}, p_{Xi}, and p_{Z0Xi} are the probabilities of the errors Z₀, Xᵢ, and Z₀Xᵢ on each GHZ state, respectively. The overall factor of 2 comes from the use of two units to build one qubit of the cluster state. On the final cluster state, there exist (i) correlated errors on some pairs of qubits connected to the same qubit, (ii) correlated errors on each pair of directly connected qubits, and (iii) correlated errors on some trimers formed by connected qubits. All contributions of correlated errors to each single qubit have been included in p_z. Furthermore, because a cluster-state qubit is missing if one or two GHZ states are not successfully generated, the cluster qubits suffer a loss rate p_l. The 2D cluster state on a square lattice can tolerate qubit loss up to a certain rate Browne (). With a tolerable loss rate, a 2D graph-state network can be identified from the cluster state with qubit loss, which can be converted to a new 2D cluster state on a hexagonal lattice without qubit loss. The structure of the new cluster state Browne (), i.e., the average length k(p_l) of the path between nodes on the network, depends on the loss rate (such a relation has been worked out numerically in Ref. Browne ()). The errors on each path may affect the two qubits, corresponding to the two connected nodes, on the final hexagonal lattice. Therefore, on the new cluster state, the probability of errors can be estimated as p′_z ≈ 3 k(p_l) p_z. Here the factor 3 is due to the three paths connected to each node of the network. The thresholds for FTQC on the 2D cluster state are more stringent than the thresholds for FTQC on one-dimensional circuit architectures by a factor of approximately Raussendorf ().
Because the one-dimensional-architecture thresholds (for the circuit model) are approximately Stephens08 (); Stephens09 (), we thus use 10⁻⁷ as the corresponding threshold of the 2D cluster-state model without qubit loss to estimate the phase boundary for the transition in quantum computational power. Therefore, the threshold of the 2D models can be estimated as

p′_z ≈ 10⁻⁷ ⇒ p_z ≈ (1/3) × 10⁻⁷ / k(p_l).   (29)

We numerically solve for the temperature such that the above equation holds to determine the ‘phase’ boundary. The resultant ‘phase diagrams’ for both models are shown in Fig. 4. On 3D cluster states, one can encode quantum information in topological codes, and hence error rates much higher than the 2D threshold are tolerable. Without qubit loss, the error-rate threshold of 3D cluster states is 2.93% for independent phase-flip errors if the minimum-weight perfect-matching algorithm is used to find the likely distribution of errors. On the 3D cluster state obtained from the construction in Fig. 3(c), there are both independent errors and correlated errors. By choosing the arrangement of particles as shown in Fig. 3(d), the correlations occur between errors either on directly connected qubits or on two qubits oppositely connected to the same qubit. The correlations of errors on directly connected qubits can be neglected due to the error-correction algorithm, and the other type of correlations may affect the threshold but not significantly RaussendorfAP (); LiNJP (). Numerical evidence suggests that the threshold decreases approximately linearly with the probability of qubit loss, and loss can be tolerated up to 24.9% Barrett (). As shown in Ref. Barrett (), the threshold of the 3D models can therefore be approximated as

p_l/24.9% + p_z/2.93% ≈ 1.   (30)

Below this critical line, errors are correctable and the resource state can be used for universal quantum computation. This relation is then used to estimate the phase boundary for quantum computational power, as shown in Fig. 5.
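The two threshold criteria above reduce to simple arithmetic. A minimal sketch (function names ours; the numbers 24.9% and 2.93% are taken from Eq. (30), and the 2D formula from Eq. (29)):

```python
def correctable_3d(p_loss, p_z):
    """Eq. (30): linear loss-error tradeoff for the 3D cluster state."""
    return p_loss / 0.249 + p_z / 0.0293 < 1.0

def pz_threshold_2d(k_of_pl):
    """Eq. (29): tolerable per-qubit phase-error rate on the 2D cluster
    state, given the average path length k(p_l) of the loss-reduced
    network (so longer paths mean a smaller tolerable error rate)."""
    return (1.0 / 3.0) * 1e-7 / k_of_pl
```

Solving p_z(T, δ) against these bounds, with p_z computed from the thermal state of a unit, traces out the ‘phase’ boundaries of Figs. 4 and 5.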
V. Concluding remarks

We have worked out the ‘phase diagrams’ of the quantum computational power for two different models in both two and three dimensions. Our initial guess was that such a transition might coincide with that in phases of matter Darmawan (). However, we find instead that quantum computational universality is more intricate and may not persist at all points of a given phase of matter. The first model has a first-order phase transition at zero temperature but no phase transitions at nonzero temperatures. This isolated transition point is not located at the boundary of the quantum computational power. Said equivalently, in this model the transition in the quantum computational power does not coincide with the transition in phases of matter. Such a non-coincidence was already hinted at in Ref. Darmawan (), where the quantum computational universality is likely to disappear at a certain point in the valence-bond-solid phase. The second model does not have a phase transition at all but has a transition in quantum computational power at both zero and finite temperatures. The region with quantum computational capability for both models survives to higher temperatures in 3D than in 2D. We also note that the ability to keep the interaction on while performing quantum computation is not a general feature of the models. It requires that all excited eigen-energies, measured from the ground energy, be rational relative to one another Thermal (). This only occurs at δ = 0 (d_z = 0), at which the models possess the highest symmetry. Incidentally, the closer the system is to the symmetric point, the higher the temperature to which the quantum computational power appears to be sustained. The only other model known to possess such a feature is actually the cluster-state model itself Oneway (). In our discussion of the FTQC thresholds, we have considered error sources coming from the thermal effect, as we are interested in the computational-power ‘phase diagram’ of the states themselves.
By doing so we have assumed that measurements and other operations are perfect. These other errors can be included in the FTQC and, as long as their error rates are small enough, they can also be corrected by the error-correction algorithm, but they will of course reduce the tolerable temperature. Acknowledgment. We thank J. Eisert for useful discussions. This work was supported by the National Science Foundation under Grants No. PHY 1314748 and No. PHY 1333903 (TCW) and by the National Research Foundation & Ministry of Education of Singapore (YL & LCK).

Appendix A: Generation of cluster states and error analysis

In this appendix we describe how the merging and CZ gates are implemented by measuring bond particles. We also discuss the effect of errors on qubits. To simplify notation, we will omit the overall normalization. We assume that the POVM’s on all center particles have been carried out and these particles have effectively become qubits. We illustrate how to obtain a cluster state on the square lattice, but the procedure is easily adapted to the bcc lattice. (I) First let us consider how to merge two GHZ states. This will be done by measuring the two virtual qubits that form a bond particle. Denote the other qubits not involved by an underline. The two virtual qubits will be measured in a particular basis for the associated spin-3/2 bond particle. For example, one outcome will project the two pairs of GHZ states accordingly. Other outcomes are equivalent to this up to a logical Pauli operation and translate to a basis change in the final cluster-state qubit. The resulting state is a six-qubit GHZ state. (II) Second, to further shrink this to a five-qubit GHZ state we measure one of the center spins in a suitable basis, and the resultant state for the remaining five qubits is a five-qubit GHZ state, up to a logical Pauli Z correction, which can be accounted for by a basis change in a later measurement step.
(III) Next we consider how to achieve the operation of CZ on two center qubits of two GHZ pairs. This again is done by measuring an associated bond particle (i.e. two virtual qubits) in a suitable basis. It is equivalent to applying a CZ gate between the two virtual qubits, followed by a measurement in the appropriate basis. The CZ operation between the two virtual qubits transforms the state accordingly. Supposing a particular outcome is obtained from measuring the two virtual qubits (i.e. the bond particle), the remaining spins are projected such that a CZ gate has effectively been applied between the two center spins. If all the bond particles are measured so as to induce CZ gates between neighboring center spins, as in (III), then the center spins will form a cluster state on the original honeycomb lattice at the end of the procedure. The consideration of a faulty cluster state on the honeycomb lattice could be carried out, as done for the square lattice by Browne et al. Browne (), to extract the corresponding ratio, but doing so is beyond the scope of the present paper. Instead we use the result obtained in Ref. Browne () for the faulty square-lattice cluster state to estimate the region where FTQC can still be carried out. To do this, we should aim to convert our spin network to a cluster state on a square lattice. We note that although this may underestimate the region of universality, our goal is to show the existence of such a region at both zero and non-zero temperatures. To convert our original network of spins on the honeycomb lattice (see e.g. Fig. 6) to a cluster state on a square lattice, we group two units of spin blocks as shown in Fig. 3 to generate one logical qubit of the cluster state. We label the spins as shown in Fig. 6c. Virtual spins 1 and 1’ are used to merge two GHZ states. Center spin 0 will be removed so as to shrink the 6-qubit GHZ to a 5-qubit GHZ state.
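Step (I) above can be checked on a statevector. The sketch below is a simplified toy version of ours: it fuses two Bell pairs (two-qubit GHZ states, standing in for the four-qubit ones) by projecting the two ‘bond’ qubits onto (|00⟩ + |11⟩)/√2, and verifies that the surviving qubits end up in a GHZ (Bell) state with outcome probability 1/4; the other outcomes are equivalent up to Pauli corrections, as stated in the text.

```python
import numpy as np

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)

# Two GHZ pairs on qubits (0,1) and (2,3); qubits 1 and 2 play the role
# of the bond particle to be measured, as in step (I).
state = np.kron(bell, bell).reshape(2, 2, 2, 2)      # indices: q0 q1 q2 q3

# Project qubits 1 and 2 onto <B| = (<00| + <11|)/sqrt(2):
B = bell.reshape(2, 2)
merged = np.einsum('abcd,bc->ad', state, B.conj())   # contract the bond qubits
prob = np.linalg.norm(merged) ** 2                   # probability of this outcome
merged = merged.reshape(4) / np.linalg.norm(merged)  # post-measurement state
```

The resulting `merged` state on qubits 0 and 3 is again (|00⟩ + |11⟩)/√2, i.e., the two small GHZ states have been fused into one spanning the outer qubits.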
The remaining virtual qubits 2,3,2’,3’ will combine with their partner virtual qubits to enact CZ gates on the center qubit 0’ with neighboring such center qubits. The result will be a cluster state on a square lattice. However, at finite temperatures thermal errors occur and the result is a faulty cluster state. We thus summarize the effect of (single-spin) errors on the logical qubits of the cluster state in Table 2 for reference. References • (1) E. Fermi, Thermodynamics, Dover Publications (New York, 2011). • (2) S. Sachdev, Quantum Phase Transitions, Cambridge University Press (Cambridge, 1999). • (3) M. Nielsen and I. Chuang, Quantum Computation and Quantum Information, Cambridge University Press (Cambridge, 2000). • (4) X.-G. Wen, Quantum Field Theory of Many-Body Systems, Oxford University Press (Oxford, 2004). • (5) D. Gottesman and I. L. Chuang, Nature 402, 390 (1999). • (6) M. A. Nielsen, Phys. Lett. A 308, 96 (2003); D. W. Leung, Int. J. Quantum Inform. 2, 33 (2004); A. M. Childs, D. W. Leung, and M. A. Nielsen, Phys. Rev. A 71, 032318 (2005). • (7) R. Raussendorf and H. J. Briegel, Phys. Rev. Lett. 86, 5188 (2001); R. Raussendorf, D. E. Browne, H. J. Briegel, Phys. Rev. A 68, 022312 (2003). • (8) H. J. Briegel, D. E. Browne, W. Dür, R. Raussendorf, and M. Van den Nest, Nature Phys. 5, 19 (2009). • (9) R. Raussendorf and T.-C. Wei, Annu. Rev. Condens. Matter Phys. 3, 239-261 (2012). • (10) A. C. Doherty and S. D. Bartlett, Phys. Rev. Lett 103, 020506 (2009). • (11) A. Miyake, Phys. Rev. Lett. 105, 040501 (2010). • (12) S. D. Bartlett, G. K. Brennen, A. Miyake, and J. M. Renes, Phys. Rev. Lett. 105, 110502 (2010). • (13) D. V. Else, I. Schwarz, S. D. Bartlett, and A. C. Doherty, Phys. Rev. Lett. 108, 240505 (2012). • (14) D. V. Else, S. D. Bartlett, and A. C. Doherty, New J. Physics 14, 113016 (2012). • (15) K. Fujii, Y. Nakata, M. Ohzeki, and M. Murao, Phys. Rev. Lett. 110, 120502 (2013). • (16) D. Gross, J. Eisert, N. Schuch, and D. Perez-Garcia, Phys. 
Rev. A 76, 052315 (2007). • (17) D. E. Browne, M. B. Elliott, S. T. Flammia, S. T. Merkel, A. Miyake, and A. J. Short, New. J. Phys. 10, 023010 (2008). • (18) S. D. Barrett, S. D. Bartlett, A. C. Doherty, D. Jennings, and T. Rudolph, Phys. Rev. A 80, 062328 (2009). • (19) A.S. Darmawan, G.K. Brennen and S.D. Bartlett, New. J. Phys. 14, 013023 (2012). • (20) Y. Li, D. E. Browne, L. C. Kwek, R. Raussendorf, and T.-C. Wei, Phys. Rev. Lett. 107, 060501 (2011). • (21) F. Verstraete and J. I. Cirac, Phys. Rev. A 70 060302(R) (2004). • (22) K. Kieling, T. Rudolph, and J. Eisert, Phys. Rev. Lett. 99, 130501 (2007). • (23) J.-M. Cai, A. Miyake, W. Dür, and H. J. Briegel, Phys. Rev. A 82, 052309 (2010). • (24) T.-C. Wei, I. Affleck, and R. Raussendorf, Phys. Rev. Lett. 106, 070501 (2011). • (25) A. Miyake, Ann. Phys. (Leipzig) 326, 1656 (2011). • (26) T.-C. Wei, R. Raussendorf, and L. C. Kwek, Phys. Rev. A 84, 042333 (2011). • (27) K. Fujii, and T. Morimae, Phys. Rev. A 85, 010304(R)(2012). • (28) R. Raussendorf, IJQI 7, 1053 (2009). • (29) A. M. Stephens, A. G. Fowler, and L. C. L. Hollenberg, Quantum Inf. Comput. 8, 330 (2008). • (30) A. M. Stephens, and Z. W. E. Evans, Phys. Rev. A 80, 022313 (2009). • (31) R. Raussendorf, J. Harrington, and K. Goyal, Annals of Phys. 321, 2242 (2006). • (32) Y. Li and S. C. Benjamin, New. J. Phys. 14, 093008 (2012). • (33) S. D. Barrett, and T. M. Stace, Phys. Rev. Lett. 105, 200502 (2010).
The feedback must be of minimum 40 characters and the title a minimum of 5 characters
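The linking step described above — neighboring center qubits joined by CZ gates into a square-lattice cluster state — can be sketched for the smallest case. This is a generic illustration, not the paper's specific virtual-qubit construction: two qubits prepared in |+⟩ and joined by a CZ gate form a two-qubit cluster state, which is stabilized by X ⊗ Z and Z ⊗ X.

```python
import numpy as np

# Two qubits in |+> joined by a CZ gate: the elementary step by which
# neighboring qubits are linked into a cluster state.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
CZ = np.diag([1, 1, 1, -1])
cluster = CZ @ np.kron(plus, plus)   # (|00> + |01> + |10> - |11>) / 2

# The resulting state is stabilized by K1 = X (x) Z and K2 = Z (x) X.
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1, -1])
assert np.allclose(np.kron(X, Z) @ cluster, cluster)
```

On the full lattice, one such stabilizer is attached to every site, which is what makes single-spin errors on a faulty cluster state detectable in the first place.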
https://eccc.weizmann.ac.il/keyword/16353/
Under the auspices of the Computational Complexity Foundation (CCF)

Reports tagged with tensor:

TR09-007 | 9th January 2009
Eli Ben-Sasson, Michael Viderman

#### Tensor Products of Weakly Smooth Codes are Robust

We continue the study of {\em robust} tensor codes and expand the class of base codes that can be used as a starting point for the construction of locally testable codes via robust two-wise tensor products. In particular, we show that all unique-neighbor expander codes and all locally correctable codes, …

TR12-068 | 25th May 2012
Manuel Arora, Gábor Ivanyos, Marek Karpinski, Nitin Saxena

#### Deterministic Polynomial Factoring and Association Schemes

The problem of finding a nontrivial factor of a polynomial $f(x)$ over a finite field $\mathbb{F}_q$ has many known efficient, but randomized, algorithms. The deterministic complexity of this problem is a famous open question even assuming the generalized Riemann hypothesis (GRH). In this work we improve the state of the …

TR18-086 | 23rd April 2018
Joseph Swernofsky

#### Tensor Rank is Hard to Approximate

Revisions: 1

We prove that approximating the rank of a 3-tensor to within a factor of $1 + 1/1852 - \delta$, for any $\delta > 0$, is NP-hard over any finite field. We do this via reduction from bounded occurrence 2-SAT.
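For the rank problem in the last report: a rank-1 3-tensor is an outer product of three vectors, and the rank of a tensor is the smallest number of such terms summing to it. A minimal sketch with arbitrary illustrative vectors (not taken from any of the papers above):

```python
import numpy as np

# A rank-1 3-tensor is an outer product a (x) b (x) c.
a, b, c = np.array([1.0, 2.0]), np.array([3.0, 1.0]), np.array([0.0, 5.0])
rank1 = np.einsum('i,j,k->ijk', a, b, c)   # entry (i,j,k) = a[i]*b[j]*c[k]

# Adding a second rank-1 term gives a tensor of rank at most 2; deciding
# the exact rank of a given 3-tensor is the NP-hard problem referenced above.
d, e, f = np.array([0.0, 1.0]), np.array([1.0, 1.0]), np.array([2.0, 0.0])
rank2 = rank1 + np.einsum('i,j,k->ijk', d, e, f)
print(rank2.shape)  # (2, 2, 2)
```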
https://plainmath.net/algebra-ii/54993-proof-that-positive-integrals-equal-equal-prove-there-similar-argument
piarepm 2022-01-19

Proof that $\frac{2}{3} < \log(2) < \frac{7}{10}$. The positive integrals $\int_0^1 \frac{2x(1-x)^2}{1+x^2}\,dx = \pi - 3$ and $\int_0^1 \frac{x^4(1-x)^4}{1+x^2}\,dx = \frac{22}{7} - \pi$ prove that $3 < \pi < \frac{22}{7}$. Is there a similar argument for the following $\log(2)$ inequality? $\frac{2}{3} < \log(2) < \frac{7}{10}$

Jim Hunt (Expert)

There are positive integrals that relate $\log(2)$ to its first four convergents $0, 1, \frac{2}{3}, \frac{7}{10}$:

$\int_0^1 \frac{2x}{1+x^2}\,dx = \log(2)$

$\int_0^1 \frac{(1-x)^2}{1+x^2}\,dx = 1 - \log(2)$

$\int_0^1 \frac{x^2(1-x)^2}{1+x^2}\,dx = \log(2) - \frac{2}{3}$

$\int_0^1 \frac{x^4(1-x)^2}{1+x^2}\,dx = \frac{7}{10} - \log(2)$

Since the last two integrands are positive on $(0,1)$,

$-\int_0^1 \frac{x^2(1-x)^2}{1+x^2}\,dx < 0 < \int_0^1 \frac{x^4(1-x)^2}{1+x^2}\,dx$

$\frac{2}{3} - \log(2) < 0 < \frac{7}{10} - \log(2)$

$\frac{2}{3} < \log(2) < \frac{7}{10}$

A similar set is available with denominators $(1+x)$:

$\int_0^1 \frac{1}{1+x}\,dx = \log(2)$

$\int_0^1 \frac{x}{1+x}\,dx = 1 - \log(2)$

$\frac{1}{2}\int_0^1 \frac{x^2(1-x)}{1+x}\,dx = \log(2) - \frac{2}{3}$

$\frac{1}{2}\int_0^1 \frac{x^5(1-x)}{1+x}\,dx = \frac{7}{10} - \log(2)$

and series versions are given by …
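The two key identities behind the bounds can be checked numerically. A short sketch using composite Simpson's rule (the function name and step count are arbitrary choices, not part of the argument):

```python
from math import log

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

# Both integrands are positive on (0, 1), and the integrals evaluate to
# log(2) - 2/3 and 7/10 - log(2), giving 2/3 < log(2) < 7/10.
i1 = simpson(lambda x: x**2 * (1 - x)**2 / (1 + x**2), 0, 1)   # = log 2 - 2/3
i2 = simpson(lambda x: x**4 * (1 - x)**2 / (1 + x**2), 0, 1)   # = 7/10 - log 2
print(i1 > 0 and i2 > 0)
```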
http://www.avisynth.nl/index.php/PSNR
# PSNR

PSNR stands for Peak Signal-to-Noise Ratio. It is used as a measure of video quality and is expressed in decibels. It is defined as

PSNR = 10 · log₁₀( MAX_I² / MSE )

where I is the reference image, K is the image under test, MAX_I is the maximum possible pixel value (255 for 8-bit samples), and MSE is the Mean Squared Error between the two:

MSE = (1/M) · Σ_j Σ_k [ I(j,k) − K(j,k) ]²

where M is the number of pixels in a frame (width · height). The double-Σ term states that (j,k) runs over all the pixels, summing the square of the difference between reference image I and test image K.
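A direct transcription of the two formulas as a sketch, with pixel values held in nested lists for clarity (a real implementation would operate on image arrays):

```python
import math

def mse(ref, test):
    """Mean squared error between two equal-sized images (lists of pixel rows)."""
    m = sum(len(row) for row in ref)          # M = width * height
    return sum((i - k) ** 2
               for ri, rk in zip(ref, test)
               for i, k in zip(ri, rk)) / m

def psnr(ref, test, max_i=255):
    """Peak signal-to-noise ratio in decibels; infinite for identical images."""
    e = mse(ref, test)
    return math.inf if e == 0 else 10 * math.log10(max_i ** 2 / e)

ref  = [[52, 55], [61, 59]]                   # tiny 2x2 example frames
test = [[52, 54], [61, 60]]
print(round(psnr(ref, test), 2))              # 51.14
```

Note that PSNR diverges as MSE approaches zero, which is why identical frames are conventionally reported as infinite PSNR.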
https://diabetesjournals.org/care/article/32/12/2149/25997/Addressing-Literacy-and-Numeracy-to-Improve
OBJECTIVE

Diabetic patients with lower literacy or numeracy skills are at greater risk for poor diabetes outcomes. This study evaluated the impact of providing literacy- and numeracy-sensitive diabetes care within an enhanced diabetes care program on A1C and other diabetes outcomes.

RESEARCH DESIGN AND METHODS

In two randomized controlled trials, we enrolled 198 adult diabetic patients with most recent A1C ≥7.0%, referred for participation in an enhanced diabetes care program. For 3 months, control patients received care from existing enhanced diabetes care programs, whereas intervention patients received enhanced programs that also addressed literacy and numeracy at each institution. Intervention providers received health communication training and used the interactive Diabetes Literacy and Numeracy Education Toolkit with patients. A1C was measured at 3 and 6 months follow-up. Secondary outcomes included self-efficacy, self-management behaviors, and treatment satisfaction.

RESULTS

At 3 months, both intervention and control patients had significant improvements in A1C from baseline (intervention −1.50 [95% CI −1.80 to −1.02]; control −0.80 [−1.10 to −0.30]). In adjusted analysis, there was greater improvement in A1C in the intervention group than in the control group (P = 0.03). At 6 months, there were no differences in A1C between intervention and control groups. Self-efficacy improved from baseline for both groups. No significant differences were found for self-management behaviors or satisfaction.

CONCLUSIONS

A literacy- and numeracy-focused diabetes care program modestly improved self-efficacy and glycemic control compared with standard enhanced diabetes care, but the difference attenuated after conclusion of the intervention.

Patients, particularly those with poorer literacy or numeracy skills, may have difficulty interpreting and acting on abstract or complex health information related to chronic illness care (1). Approximately 90 million adults in the U.S.
have basic or below basic literacy skills and >110 million have limited numeracy skills (2). Low literacy is common among patients with diabetes and has been associated with less knowledge about diabetes and worse glycemic control (3,5). In a randomized trial of a multifaceted diabetes disease management program that included literacy-sensitive interventions, we found that patients' literacy status was an independent predictor of improvement in glycemic control. Patients with lower literacy showed a greater improvement in glycemic control than patients with higher literacy, suggesting that applying literacy-sensitive communication methods could lead to improved diabetes outcomes (6). However, there have been few additional studies (7) and no randomized trials specifically examining the role of a literacy- and numeracy-sensitive intervention for patients with diabetes. Numeracy, or the ability to use numbers in daily life, is an important but understudied component of literacy (8). Health-related numeracy includes understanding measurement, estimation, time, risk interpretation, and multistep operations and the ability to identify which math skills need to be applied to solve problems (8,9). Numeracy has been associated with asthma control, nutrition label comprehension, and obesity (10,12). Numeracy may play an integral role in successful diabetes self-management because quantitative skills are often required for tasks such as blood glucose monitoring, carbohydrate counting, and medication administration. In a cross-sectional study, we found a significant association between diabetes-related numeracy skills and glycemic control (3). However, to date, the role of providing numeracy-sensitive interventions in diabetes care has not been evaluated. The objective of this study was to assess the impact of addressing both literacy and numeracy as part of an enhanced multidisciplinary diabetes care program, compared with usual delivery of that program. 
Outcome measures included glycemic control, patient-reported self-efficacy, self-management behaviors, and treatment satisfaction. We hypothesized that intervention participants who received the literacy- and numeracy-sensitive program would lower their A1C significantly more than control group participants. This study included two coordinated randomized controlled trials performed at two academic medical centers from April 2006 until June 2008. The institutional review boards from Vanderbilt University Medical Center (VUMC) and the University of North Carolina (UNC) Chapel Hill approved the trials, and written consent was obtained from all participants. Eligible patients were aged 18–80 years, English-speaking, with type 1 or type 2 diabetes, and most recent A1C ≥7.0% and were referred by their physician for participation in their local enhanced diabetes care program. Exclusion criteria were a preexisting diagnosis of severe cognitive impairment or corrected visual acuity of <20/50 using a Rosenbaum Screener (Prestige Medical, Northridge, CA). Subjects received $50 for participation. ### Randomization Among patients referred to the enhanced diabetes care program at each trial site, those who consented were then randomly assigned to the control or intervention condition. Random assignment was concealed, computer-generated, and performed at each site using random blocks of four, six, and eight assignments. Although research assistants collecting patient measures were not notified of a patient's assignment, this was not a masked study because only specified providers were trained to deliver the intervention. ### Control and intervention conditions Patients assigned to the control condition were referred to “usual care” in the local enhanced diabetes care program (supplementary Table A1, available in an online appendix at http://care.diabetesjournals.org/cgi/content/full/dc09-0563/DC1). 
This included one to six face-to-face visits in a diabetes care program over a period of 3 months. At VUMC, this program included visits with a diabetes nurse practitioner (>80% also were certified diabetes educators [CDEs]) and a registered dietitian CDE within the Eskind Diabetes Center. At UNC, this program included visits with a nurse practitioner CDE and a registered dietitian within the General Medicine Clinic. To avoid contamination issues, control patients were assigned to receive care only from these program staff, and these staff did not provide care to any intervention patients. Patients assigned to the intervention condition were also referred to the local enhanced diabetes care program. Program staff delivering the intervention each received one to two didactic training sessions (1–2 h each) about health literacy, numeracy, and clear communication techniques (13) before the start of the trial. Intervention staff also used the Diabetes Literacy and Numeracy Education Toolkit (DLNET) (14) to facilitate literacy and numeracy-sensitive diabetes education and management. The DLNET (available at http://www.mc.vanderbilt.edu/diabetes/drtc/preventionandcontrol/tools.php) is a customizable toolkit of 24 instructive modules about diabetes self-management activities, including blood glucose monitoring, nutrition management, foot care, and administration of medications including insulin. The toolkit was designed using clear communication principles, such as simple sentences with text at a sixth-grade reading level, bulleting for key points, color coding, pictures, and step-by-step instructions. The intervention was delivered in two to six sessions over a 3-month period. At VUMC, the intervention was delivered by an advanced diabetes management nurse practitioner and CDE registered dietitians, whereas at UNC the intervention was delivered by a CDE pharmacist and a dietitian. 
To avoid contamination issues, intervention patients were assigned to receive care only from these program staff, and intervention staff did not provide care to any control patients. Throughout the study, all control and intervention patients continued to receive usual care from their primary care or diabetes specialty providers. ### Measures A1C was collected at baseline, at 3 months (at the conclusion of the intervention), and at 6 months (3 months after completion of the intervention). A1C measurements were performed at the laboratories of the respective institutions, which were not aware of the patients' study status. Literacy was assessed using the Rapid Estimate of Adult Literacy in Medicine (REALM), a well-validated measure of reading ability that correlates with reading comprehension (15). If the patient scored less than a sixth-grade reading level by REALM, then the remainder of the instruments were administered orally to ensure that the survey questions were understood by the patient. All subjects were given the option of oral administration if desired. Diabetes-related numeracy skills were measured with the validated Diabetes Numeracy Test (DNT) at VUMC and the shortened DNT-15 at UNC (available at http://www.mc.vanderbilt.edu/diabetes/drtc/preventionandcontrol/tools.php) (16). Diabetes self-management activities were assessed by patient self-report and with the validated Summary of Diabetes Self-Care Activities scale (17). Patient-perceived self-efficacy of diabetes self-management behaviors was assessed using the validated Perceived Diabetes Self-Management Scale (18) and satisfaction with the validated Diabetes Treatment Satisfaction Questionnaire (19). Diabetes-related numeracy, diabetes self-care behaviors, self-efficacy, and satisfaction were assessed at baseline and at the 6-month interval. ### Statistical analyses Descriptive statistics were calculated as median (interquartile range) or frequency and percentage for categorical variables. 
We compared patient characteristics by intervention status at baseline using Wilcoxon's rank-sum tests for continuous variables and Pearson's χ2 tests for categorical variables. For all analyses we present the results for each trial site separately and then also for the two sites combined. All randomly assigned participants were included in the intention-to-treat analyses. For our primary outcome, we used Wilcoxon's rank-sum tests to compare change in A1C between intervention and control groups from baseline to 3 months (after the completion of the enhanced diabetes education and management program) and also from baseline to 6 months (to assess additional effects on glycemic control 3 months after the intervention had been completed). Secondary analyses included comparison between intervention and control groups of patient diabetes care self-efficacy, self-management behaviors and satisfaction with diabetes care from enrollment to 6-month follow-up, using Wilcoxon's rank-sum test. Within each group, changes in measures from baseline to 3 or 6 months follow-up were also examined using Wilcoxon's signed-rank test. Nonparametric 95% confidence limits are presented with the median improvement measures for A1C, self-efficacy, and satisfaction. We also performed multivariable models to assess the independent effect of the intervention on A1C at 3 and 6 months follow-up. Adjustment variables determined a priori included age, sex, race, study site, diabetes type, income status, baseline diabetes numeracy score, and baseline A1C. To assess the change in A1C by group status, using all available data, we performed a multivariable model using an ordinary least squares regression method with correction for intrasubject correlation among repeated measures of A1C via a bootstrap estimation method (20,21). Because of the high number of referring physicians (36 at VUMC and 57 at UNC), clustering by primary physician was also accounted for by nonparametric bootstrap methods. 
We included the interval of evaluation time (3 and 6 months) as a factor covariate along with a cross-product term with the study group status (control or intervention) to assess whether change in A1C from baseline to 3 months or to 6 months differed between the two study arms. Patients with no measure of A1C after baseline were excluded from the analyses (n = 14). As a sensitivity analysis, multiple imputation methods were used to impute missing A1C data points at 3 and 6 months with available baseline covariates, and these calculations generated similar results (21). For each study site, we estimated that a sample size of 86 patients (43 control and 43 intervention) was needed, based upon 80% power with a two-tailed α of 0.05 and an SD of 1.5, to detect a 1 percentage point greater improvement in A1C in the intervention group than in the control group. The final sample size was inflated to include a dropout rate of 15–20%. We have studied multiple end points of interest in these studies. We report both negative and positive results, and no adjustments were made for multiple tests. Statistical analyses were performed using R 2.7.2 (http://www.r-project.org), STATA (version 9.2; StataCorp, College Station, TX), and SAS (version 9.1; SAS Institute, Cary, NC).

Of the 622 patients referred, 514 were eligible and a total of 198 enrolled in the two trials. Complete data were available for evaluation for 184 (93%). Details of enrollment by study site are shown in Fig. 1. Overall, patients were a median of 52 (interquartile range 42–59) years old, 36% were male, and 43% were African American. Almost half (49%) had a high school education or less, and almost 40% of patients had a literacy level below the ninth grade. Performance on the DNT suggested diabetes-related numeracy deficits, with a median score of 59% (26–86%). The median baseline A1C was 9.1% in both intervention and control groups.
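The stated sample-size calculation can be approximately reproduced with the standard normal-approximation formula for comparing two means; this is an illustrative sketch, since the authors' exact method is not specified:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for a two-sample mean comparison."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-tailed critical value (~1.96)
    z_beta = z.inv_cdf(power)            # ~0.84 for 80% power
    return ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

# Detect a 1-point A1C difference with SD 1.5, two-tailed alpha 0.05, 80% power:
n = n_per_group(delta=1.0, sd=1.5)
print(n)  # 36 per arm; inflating for the reported 15-20% dropout gives
          # roughly 42-44, in line with the 43 per arm (86 per site) enrolled
```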
Baseline patient characteristics were similar between intervention and control groups except at VUMC, where the intervention group had a higher proportion of patients with type 2 diabetes and a lower average DNT score (Table 1).

Figure 1. Study flow diagram.

Table 1. Baseline patient characteristics by group status and trial site

| | VUMC control | VUMC intervention | UNC control | UNC intervention | Overall control | Overall intervention |
|---|---|---|---|---|---|---|
| n | 53 | 52 | 46 | 47 | 99 | 99 |
| Age (years) | 45 (31−59) | 49.5 (41−57) | 56 (51−60) | 53 (48.5−58.5) | 53 (40−59.5) | 52 (45−59) |
| Men (%) | 38 | 40 | 33 | 34 | 35 | 37 |
| African American (%) | 21 | 29 | 72 | 57 | 44 | 42 |
| Income <$20,000/year (%) | 25 | 33 | 67 | 72 | 45 | 52 |
| Education ≤12th grade (%) | 37 | 33 | 61 | 67 | 48 | 49 |
| Private insurance (%) | 74 | 75 | 26 | 23 | 52 | 51 |
| Type 2 diabetes (%) | 74 | 90* | 100 | 100 | 86 | 95* |
| Years of diabetes diagnosis | 6 (1−12) | 6.5 (2−12.3) | 9 (5−14.8) | 8 (4.5−16) | 8 (3−13) | 8 (3−15) |
| Monitor blood glucose ≤1 time/day (%) | 25 | 35 | 49 | 39 | 36 | 37 |
| Insulin use (%) | 60 | 50 | 78 | 70 | 69 | 59 |
| Insulin >2 times/day | 32 (69) | 26 (77) | 36 (42) | 35 (40) | 68 (54) | 61 (55) |
| Adjusts insulin for blood glucose | 30 (70) | 26 (73) | 35 (17) | 36 (22) | 65 (42) | 62 (44) |
| Adjusts insulin for carbohydrates | 30 (43) | 26 (31) | 34 (0) | 35 (0) | 64 (20) | 61 (13) |
| Hypoglycemic episodes in the prior month (%) | 30 | 25 | 16 | 16 | | |
| Previous diabetes education (%) | 62 | 69 | 76 | 74 | 69 | 71 |
| Tobacco use (%) | 21 | 27 | 26 | 28 | 23 | 27 |
| BMI (kg/m²) | 34.4 (27.2−40.1) | 34.4 (30.1−39.1) | 35.6 (31.7−41.3) | 36.9 (29.9−40.6) | 35.5 (30.2−41.3) | 35.2 (29.9−40.0) |
| Systolic blood pressure (mmHg) | 138 (126−144) | 133 (119−142) | 131 (121−150) | 142 (126−150) | 136 (123−146) | 136 (121−146) |
| Diastolic blood pressure (mmHg) | 79 (72−84) | 76 (70−85) | 72.5 (68−82) | 77 (69−84) | 76 (69−83) | 76 (69.5−84.5) |
| REALM score [0–66] | 65 (62−66) | 64 (62−65) | 59 (44−65) | 54 (39−64) | 63 (57−66) | 63 (46−65) |
| REALM score <9th grade level (%) | 19 | 18 | 54 | 62 | 35 | 39 |
| Diabetes Numeracy Test score (%) | 83 (65−90) | 69 (44−84)† | 33 (13−51) | 33 (13−70) | 60 (36−86) | 55 (21.5−81) |
| Self-efficacy: PDSMS (8−40) | 24.0 (22.0−27.0) | 24.5 (20.3−28.8) | 26.0 (23.0−33.0) | 24.0 (19.5−30.0) | 25.0 (22.0−29.5) | 24.0 (20.0−29.0) |
| Satisfaction: DTSQ (0–36) | 27.5 (21.0−32.0) | 29.0 (26.0−33.0) | 31.5 (26.0−34.8) | 30.0 (26.0−32.5) | 29.0 (23.3−34.0) | 29.5 (26.0−33.0) |
| A1C (%), baseline | 8.6 (7.3−9.7) | 8.5 (7.5−10.7) | 9.8 (8.5−10.3) | 9.2 (8.6−10.9) | 9.1 (7.6−10.2) | 9.1 (7.8−10.8) |

Data are %, n (%), or median (interquartile range). *P < 0.05 comparing intervention vs. control by either χ² or Wilcoxon rank-sum tests, as appropriate. DTSQ, Diabetes Treatment Satisfaction Questionnaire; PDSMS, Perceived Diabetes Self-Management Scale.

There were several differences in patient characteristics between the two sites. At UNC, the patients were more likely to be older and African American and to have lower annual income, less educational attainment, lower literacy, and lower diabetes-related numeracy scores compared with participants at VUMC. UNC participants also had a longer duration of diabetes and were more likely to use insulin and to have a higher baseline A1C. There was no significant difference between control and intervention groups in the average number of patient visits during the 3-month enhanced-care program period within each site (VUMC mean 3.8 [95% CI 3.5–4.1]; UNC 2.6 [2.3–2.9]), although VUMC participants overall had significantly more encounters than UNC participants in both intervention (P < 0.001) and control (P < 0.001) groups. For intervention participants, visits with the dietitian were longer than those with the nurse practitioner or pharmacist (mean 49 [46–52] and 40 [38–42] minutes, respectively; P < 0.001). For intervention participants, the most commonly used sections of the DLNET included general information about diabetes including glucose testing (88%), exercise (83%), general nutrition (77%), and foot care (63%). Specific nutritional guidelines, such as use of the plate method (35%) or carbohydrate counting (16%), were also delivered. Approximately 80% of participants were instructed on the use of the DLNET logbooks to track self-care medication and dietary management.
After completion of the intervention and 3 additional months of observation, there was no difference between the control and intervention groups in the mean number of provider visits at VUMC (1.0 [0.8–1.2]); however, at UNC, control patients had slightly more provider visits than did intervention patients (1.1 [0.8–1.5] vs. 0.1 [0.03–0.2]; P < 0.001). ### Glycemic control At the completion of the 3-month enhanced diabetes care program, the intervention and control groups at each site had significant decreases in their A1C compared with baseline values (VUMC, intervention median −1.60 [95% CI −2.07 to −1.00], control −1.00 [−1.81 to −0.40]; UNC, intervention −1.40 [−1.75 to −0.75], control −0.30 [−1.06 to −0.10]) (Table 2). In unadjusted analysis, improvement in A1C from baseline was greater in the intervention groups than in the respective control groups at each site (VUMC −0.5 [−1.20 to 0.20]; UNC −0.8 [−1.50 to −0.20]), although only values for the UNC site were statistically significant (P = 0.014). Overall, when all patients from both sites were combined, there was greater improvement in A1C in the intervention group than in the control group (median difference in A1C −0.70 [95% CI −1.10 to −0.20]; P = 0.005). In analyses combining all patients and adjusting for previously described variables, the intervention group continued to demonstrate a significantly greater improvement in A1C than the control group at the 3-month time period (P = 0.03) (Table 2). 
Table 2. Change in A1C, self-efficacy, and satisfaction by study group from baseline

| | VUMC intervention | VUMC control | P* | UNC intervention | UNC control | P* | Combined intervention | Combined control | P* |
|---|---|---|---|---|---|---|---|---|---|
| Change in A1C, baseline to 3 months | −1.60 (−2.07 to −1.00) | −1.00 (−1.81 to −0.40) | 0.121 | −1.40 (−1.75 to −0.75) | −0.30 (−1.06 to −0.10) | 0.014 | −1.50 (−1.80 to −1.02) | −0.80 (−1.10 to −0.30) | 0.005§ |
| Change in A1C, baseline to 6 months | −1.15 (−1.43 to −0.77) | −1.20 (−2.22 to −0.70) | 0.657 | −0.75 (−1.40 to −0.20) | −0.55 (−1.30 to −0.29) | 0.732 | −1.05 (−1.30 to −0.70) | −0.90 (−1.30 to −0.53) | 1.0 |
| Change in self-efficacy (PDSMS), baseline to 6 months | +8.0 (3.0 to 8.5) | +4.0 (1.0 to 7.2) | 0.324 | +5.0 (2.0 to 6.0) | +1.0 (−1.7 to 2.7) | 0.030 | +5.0 (3.0 to 7.0) | +2.0 (1.0 to 4.0) | 0.018 |
| Change in satisfaction (DTSQ), baseline to 6 months | +2.0 (1.0 to 5.0) | +3.0 (2.0 to 6.4) | 0.584 | +2.0 (0.4 to 3.0) | +0.5 (0.0 to 1.7) | 0.474 | +2.0 (1.0 to 3.0) | +2.0 (1.0 to 3.0) | 0.836 |

Data are median (95% CI). *P value determined by Wilcoxon rank-sum test comparing intervention and control. P < 0.05 for paired comparison of 3- or 6-month value with baseline value using Wilcoxon signed-rank test. P = 0.056 for comparison of intervention vs. control in a repeated-measures model using all available 3- and 6-month data, adjusted for age, sex, race, type of diabetes, income, baseline Diabetes Numeracy Test score, and baseline A1C level, including accounting for physician cluster and examination of an interaction term with time. §P = 0.030 for comparison of intervention vs. control in a repeated-measures model using all available 3- and 6-month data, adjusted for age, sex, race, type of diabetes, income, baseline Diabetes Numeracy Test score, and baseline A1C level, including accounting for physician cluster and examination of an interaction term with time. DTSQ, Diabetes Treatment Satisfaction Questionnaire; PDSMS, Perceived Diabetes Self-Management Scale.

At 6 months follow-up, which was 3 months after completion of the enhanced care programs, patients continued to demonstrate significant improvements in A1C compared with baseline. However, neither unadjusted nor adjusted analyses showed statistically significant differences in improvement of A1C between intervention and control groups at 6 months (Table 2).

### Self-efficacy, self-management behaviors, and satisfaction

At 6 months, self-efficacy of diabetes self-management scores showed significant improvements from baseline in all groups except for the UNC control group (Table 2). There was a statistically significant improvement in Perceived Diabetes Self-Management Scale scores between intervention and control groups for the UNC site (P = 0.029) and for the combined sites (P = 0.018). However, in analyses adjusted for age, sex, race, diabetes type, income, diabetes-related numeracy, and baseline A1C, the differences did not remain statistically significant. Patient-reported self-management behaviors did not show any significant change from baseline, nor were there any statistically significant differences found between intervention and control groups at either site or overall.
Satisfaction with diabetes care was high in all groups at baseline; small improvements were seen from baseline to the 6-month follow-up but did not differ between intervention and control groups (Table 2).

### Conclusions

This study demonstrates that a literacy- and numeracy-focused diabetes intervention may contribute to improving glycemic control and diabetes self-management self-efficacy. However, the impact of the literacy- and numeracy-focused program on glycemic control was modest compared with that of an already strong enhanced diabetes care program in the control group. In addition, although patients continued to have improved glycemic control compared with baseline values, the intervention did not show sustained benefits above the control setting 3 months after completion of the program.

Training diabetes providers in improved health communication skills may help to improve patient understanding of health information and self-management behavior. The DLNET used in this study provides a useful, comprehensive, customizable resource to facilitate diabetes education and management. Patients often desire diabetes materials developed for low literacy skills (22). The DLNET uses text at the sixth-grade literacy level, as opposed to much of the existing health information, including materials specific to diabetes, which is often written at a higher reading level (23), and it also incorporates many other principles of clear communication (24). The DLNET can be used as a core element of both initial and ongoing diabetes patient education programs aimed at counseling patients of all skill levels.

Although we found that intervention group participants had an improvement in their glycemic control during the period of intervention delivery, this differential improvement was not sustained after the program concluded. One explanation may be the level of patient interaction with the health care system during the enhanced diabetes care program and the subsequent observation period.
Although the total number of visits did not differ between intervention and control groups over the entire 6 months, patients in both groups saw a health provider more often during the 3 months of the intervention than during the observation period that followed. This result suggests that successful reduction in A1C may require a persistent level of intervention over time, and it may also suggest that our program performs better as a disease management program than as a self-care training program.

Other explanations for the absence of a difference between intervention and control groups at the 6-month interval, as well as the modest difference at the 3-month interval, are differential loss to follow-up and the highly active control arms in this study. Patients in the control group were less likely to complete the study, and those who did not complete it may have had worse glycemic control. In addition, patients in the control arms participated in an enhanced diabetes care program that provided diabetes management beyond what is usually provided by diabetes physicians, including multiple visits with other providers experienced in addressing physiological and social factors associated with glycemic control.

The effectiveness of the intervention also differed between the two study sites. Study participants in the control arm at UNC had much less improvement in A1C than all other study groups. This difference may be explained, in part, by different measured and unmeasured patient characteristics or by differing provider management practices at each study site.

Patient self-efficacy of diabetes self-management and satisfaction improved for all groups. Because nearly all patients reported an improvement, we were unable to demonstrate a significant difference between the intervention and control groups in this study.
Participation in the trial itself may have contributed to the improvement in both self-efficacy and satisfaction for control group patients.

There are several limitations to this study. First, the study was performed and initially powered as two separate, yet coordinated, randomized trials; however, because of the similar hypotheses and design, the decision to analyze combined results of the two trials was made before the completion of data collection at either site. Second, at one of the two sites (VUMC), there were significant differences between intervention and control groups in several patient characteristics. This unequal randomization could result in residual confounding; to address this possibility, we performed analyses adjusting for potential confounding variables, and the findings were consistent with the unadjusted results. Third, some patients (n = 30; 15%) did not complete evaluation of the primary outcome at one of the two designated time intervals. Although this limits cross-sectional evaluations at those times, we used ordinary least squares regression models with multiple imputation to use all data points for participants in the study and to minimize the potential bias of missing information. Fourth, many patients declined participation, which may limit the generalizability of our findings, as the sample may not fully represent all patients with diabetes. Finally, this trial was not adequately powered to evaluate differences in the effect of the intervention by patient literacy or numeracy status.

Among patients with diabetes, literacy and numeracy are important characteristics that have been associated with glycemic control and may play a significant role in the optimization of diabetes care. Use of materials designed to facilitate diabetes education and empower patients to effectively self-manage their condition, within an environment that applies clear communication principles, is a fundamental component of comprehensive diabetes care.
Strategies to enhance effective communication between patients and providers when transferring health literacy- and numeracy-sensitive information need to be further studied to identify ways to improve care for patients with diabetes.

Clinical trial registry nos. NCT00311922 and NCT00469105, clinicaltrials.gov.

The funding sources did not have any involvement in the design and conduct of the study; collection, management, analysis, and interpretation of the data; or preparation, review, or approval of the manuscript.

The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.

This research was funded with support from the American Diabetes Association (Novo Nordisk Clinical Research Award), the Pfizer Clear Health Communication Initiative, and the Vanderbilt Diabetes Research and Training Center (National Institute of Diabetes and Digestive and Kidney Diseases [NIDDK] 5P60-DK-020593). R.L.R. is also currently supported by an NIDDK Career Development Award (5K23-DK-065294). K.C. is supported by an NIDDK Career Development Award (K23-DK-080952) and by the National Kidney Foundation. T.A.E. is supported by grants K24-DK-77875 and P60-DK-020593 from the NIDDK. No other potential conflicts of interest relevant to this article were reported.

Parts of this study were presented in abstract form at the 68th Scientific Sessions of the American Diabetes Association, San Francisco, California, 6–10 June 2008, and at the 31st annual meeting of the Society of General Internal Medicine, Pittsburgh, Pennsylvania, 9–12 April 2008.

Thank you to Kathleen Wolff, MSN, BC-ADM, BC-FNP, for contributions to the development of the materials and delivery of the intervention; to Victoria Hawk, MPH, RD, for contributions to data collection; and to Matt Kennon and Shari Barto for assistance with manuscript preparation.

### References

1. Ad Hoc Committee on Health Literacy for the Council on Scientific Affairs, American Medical Association. Health literacy: report of the Council on Scientific Affairs. JAMA 1999;281:552–557
2. Kutner M, Greenberg E, Baer J. A First Look at the Literacy of America's Adults in the 21st Century. Washington, DC, National Center for Education Statistics, U.S. Department of Education, 2005
3. Cavanaugh K, Huizinga MM, Wallston KA, T, Shintani A, Davis D, Gregory RP, Fuchs L, Malone R, Cherrington A, Pignone M, DeWalt DA, Elasy TA, Rothman RL. Association of numeracy and diabetes control. Ann Intern Med 2008;148:737–746
4. Gazmararian JA, Williams MV, Peel J, Baker DW. Health literacy and knowledge of chronic disease. Patient Educ Couns 2003;51:267–275
5. Schillinger D, Grumbach K, Piette J, Wang F, Osmond D, Daher C, Palacios J, Sullivan GD, Bindman AB. Association of health literacy with diabetes outcomes. JAMA 2002;288:475–482
6. Rothman RL, DeWalt DA, Malone R, Bryant B, Shintani A, Crigler B, Weinberger M, Pignone M. Influence of patient literacy on the effectiveness of a primary care-based diabetes disease management program. JAMA 2004;292:1711–1716
7. Wallace AS, Seligman HK, Davis TC, Schillinger D, Arnold CL, Bryant-Shilliday B, Freburger JK, Dewalt DA. Literacy-appropriate educational materials and brief counseling improve diabetes self-management. Patient Educ Couns 2009;75:328–333
8. Rothman RL, Montori VM, Cherrington A, Pignone MP. Perspective: the role of numeracy in health care. J Health Commun 2008;13:583–595
9. Golbeck AL, Ahlers-Schmidt CR, Paschal AM, Dismuke SE. A definition and operational framework for health numeracy. Am J Prev Med 2005;29:375–376
10. Apter AJ, Cheng J, Small D, Bennett IM, Albert C, Fein DG, George M, Van Horne S. Asthma numeracy skill and health literacy. J Asthma 2006;43:705–710
11. Huizinga MM, Beech BM, Cavanaugh KL, Elasy TA, Rothman RL. Low numeracy skills are associated with higher BMI. Obesity (Silver Spring) 2008;16:1966–1968
12. Rothman RL, Housam R, Weiss H, Davis D, Gregory R, T, Shintani A, Elasy TA. Patient understanding of food labels: the role of literacy and numeracy. Am J Prev Med 2006;31:391–398
13. Kripalani S, Weiss BD. Teaching about health literacy and clear communication. J Gen Intern Med 2006;21:888–890
14. Wolff K, Cavanaugh K, Malone R, Hawk V, Gregory BP, Davis D, Wallston K, Rothman RL. The Diabetes Literacy and Numeracy Education Toolkit (DLNET): materials to facilitate diabetes education and management in patients with low literacy and numeracy skills. Diabetes Educ 35:233–236, 238–241, 244–245
15. Davis TC, Long SW, Jackson RH, Mayeaux EJ, George RB, Murphy PW, Crouch MA. Rapid estimate of adult literacy in medicine: a shortened screening instrument. Fam Med 1993;25:391–395
16. Huizinga MM, Elasy TA, Wallston KA, Cavanaugh K, Davis D, Gregory RP, Fuchs LS, Malone R, Cherrington A, Dewalt DA, Buse J, Pignone M, Rothman RL. Development and validation of the Diabetes Numeracy Test (DNT). BMC Health Serv Res 2008;8:96
17. Toobert DJ, Hampson SE, Glasgow RE. The summary of diabetes self-care activities measure: results from 7 studies and a revised scale. Diabetes Care 2000;23:943–950
18. Wallston KA, Rothman RL, Cherrington A. Psychometric properties of the Perceived Diabetes Self-Management Scale (PDSMS). J Behav Med 2007;30:395–401
19. C. Diabetes treatment satisfaction questionnaire. In Handbook of Psychology and Diabetes: A Guide to Psychological Measurement in Diabetes Research and Practice. Chur, Switzerland, 1994, p. 111–132
20. Feng Z, McLerran D, Grizzle J. A comparison of statistical methods for clustered data analysis with Gaussian error. Stat Med 1996;15:1793–1806
21. Harrell FE. Regression Modeling Strategies: With Applications to Linear Models, Logistic Regression and Survival Analysis. New York, Springer, 2001
22. Hill-Briggs F, Renosky R, Lazo M, Bone L, Hill M, Levine D, Brancati FL, Peyrot M. Development and pilot evaluation of literacy-adapted diabetes and CVD education in urban, diabetic African Americans. J Gen Intern Med 2008;23:1491–1494
23. Hill-Briggs F, Smith AS. Evaluation of diabetes and cardiovascular disease print patient education materials for use with low-health literate populations. Diabetes Care 2008;31:667–671
24. Doak CC, Doak LG, Root JH. Teaching Patients with Low Literacy Skills.
https://stats.meta.stackexchange.com/tags/votes/hot
# Tag Info

## Hot answers tagged votes

**31**

Let's look at the data. First up is a plot of acceptance rates versus the number of answers, with a weighted GAM smooth superimposed (courtesy ggplot2). It shows $11,609$ users. The strongest signal is a strong "learning effect" for those with $25$ or fewer answers. (There are many possible explanations: it's not necessarily due to learning to write high-...

**27**

When a person in 2017 is looking at an answer that may have been written in 2010 (say), it's not that person's responsibility to try to figure out if the answer might have been correct at some time many years previously. Either the answer is correct right now or it isn't. The correct actions when faced with an incorrect answer are already laid out by ...

**26**

TL;DR Vote early and often. Deploy your daily votes constructively to help people use our site effectively and well. I'm sure people have different systems for reading posts and voting on them. Please bear in mind the constructive role played by voting, which I think is the concern being expressed here: Upvotes, when they are merited, encourage people to ...

**26**

In short: I think that the bottom line is that in terms of correlations, there is not a clear effect. It differs per user, and if we would scale the number of equations by the size of the post then actually the post scores become lower for more equations. To see if there are causal effects one might still do some alternative experiments, but the ...

**24**

As far as I can verify, it was me who made the upvote. Out of fun and out of curiosity. I also thought it was inconsequential since the answer would get flagged and removed soon and since I would remove the upvote a few minutes later anyway. I didn't care that the answer was just posted, nor did I know that there is a badge for upvotes. I can't answer the ...

**24**

I'm all in favor of option 3: ask SE to lower the threshold for closing questions from 5 to 3 votes. (Importantly, the same should apply to the threshold for reopening a question. This is also the case at SO.) In addition, at SO, qualified users of 3000 rep minimum can cast up to 50 close or reopen votes per day. Here, we only get 24 close (and presumably ...

**20**

I have only a little to add to @NickCox's answer. Down-votes do tend to be sticky, which is a pity when the down-voted poster makes the effort to improve their post. I don't see any reason to suppose this is due to anything more than people's disinclination to keep on returning to a post they've down-voted to see whether it's been improved. There are some ...

**20**

I can answer this from the perspective of someone who has given clumps of up-votes to users at various points in time, often for old obscure answers. As Tim correctly points out in his answer, this generally starts off either when a profile piques my interest through a good question or answer in the main thread, or simply from browsing through profiles on ...

**19**

EDIT: Here is a summary added also in reaction to various comments (some now deleted). Downvoting is not constructive (and not intended to be). It should, mostly, express a view that a post is not useful, although whatever other reasons or motives people may hide remain indiscernible. Downvoting is not informative unless people explain why they downvote. ...

**19**

My best guess: someone found your answer that s/he liked, up-voted it, clicked your profile and found other answers s/he liked. There is no algorithm that bounces questions in clusters. What is bounced on the main site is the questions with no accepted answer and the ones that were edited (either question or answer).

**18**

CV's a democracy of a kind, so many standard political points arise. The first lessons in politics include learning that many other people are very confident in telling you that you should be voting this way or you shouldn't be voting that way. Excuse me: they're my votes, or not. Within the rules, I vote as I like. (I don't impute or infer attempts to offend, ...

**18**

I'll frame this more widely in these terms: people here may readily disagree with (a) someone else's upvote or (b) the OP's acceptance of an answer. Clearing (b) out of the way: An OP's acceptance of an answer is their exercise of their privilege. In principle, they are free to accept an answer they find helpful and need pay absolutely no attention to the ...

**18**

Votes on this site are highly noisy, and so over-analyzing one-offs like this is a waste of energy in my opinion (especially because the criteria for what deserves an up/downvote are completely individual specific). I don't hear you complaining about upvotes on ancient posts that may or may not still be relevant. Just from my own posts, I can see that ...

**18**

Some users on this site feel compelled to provide input on things they don't understand, and this sounds like a more nuanced version of that... Obviously you can vote however you want but, since you asked... If you are sincerely unsure about whether or not the answer is correct, I don't know why you'd upvote it: that adds noise to the system and could mislead ...

**18**

To clarify: programming questions are not automatically off-topic here. Instead, they are off-topic if they don't need "statistical expertise to understand or answer"; as we see under Programming in the help/on-topic: if it needs statistical expertise to understand or answer, ask it here. I am not criticizing or challenging your response to programming ...

**15**

For reasons I hope are obvious, (1) these "simple checks, heuristics or algorithms" are in place and (2) their details--even their nature--are not publicized. When you think you have been a victim of "revenge" or "serial" downvoting, please do not post the usual "why the downvote?" comments. Instead, flag one of the downvoted posts and use the "Custom" ...

**15**

I'd agree that there is at best a weak positive correlation. But what is effort? It's not just the effort that went into an individual answer. The effort of following the forum over a few years needs to enter the accounting. So effort is obvious short-term effort $+$ whatever long-term effort is pertinent. Over time on the forum, you learn at least ...

**15**

I think there are several (confounding) factors here. The biggest, by far, is the number of views a thread gets. If no one views a given thread, there are obviously no opportunities for people to vote. Moreover, every view isn't necessarily another actual opportunity for people to vote anyway; many of the views may come from the same people (who may have ...

**15**

Voting is an important signal, but for many askers only acts as a signal at all when given before they get an answer they like (because they then don't log in until they have another question). If a user asks a good question, there's no loss in upvoting it immediately. Why wait? If a user asks a badly formed/somewhat mangled question, there's some argument ...

**14**

As well as, or instead of, taking any of the actions @NickCox has discussed, we can flag or vote to close the common questions as duplicates (preferably soon after they're asked), thus ensuring they're linked to a good answer.

**14**

After actively observing CV for some time, I'd say that the recipe for a question to get noticed and highly upvoted is: Make it general rather than narrow. Use a short, meaningful, but "catchy" title. Make it nicely formatted, use code formatting and $\TeX$. It should consist of a few sentences and should not be one sentence: an overly long question would ...

**14**

Include any XKCD drawing.

**13**

See this blog post I wrote for the CV site, Voting behavior and accumulation of old votes on CV (I know I need to work on my titles). As of 2012, there was non-trivial accumulation of upvotes for older questions; they are somewhat invisible to regular interaction though. Here I have updated the query to return the aggregate counts of Vote Day - Post Day. ...
**13**

We have a boilerplate close reason for such questions: Self-study questions (including textbook exercises, old exam papers, and homework) that seek to understand the concepts are welcome, but those that demand a solution need to indicate clearly at what step help or advice are needed. For help writing a good self-study question, please visit the meta ...

**13**

1. To what extent is this a common experience of new answerers? Do we lose a noticeable number of potentially active participants this way? I think I felt similarly when I started participating, and it was years before I became a regular user. As with most communities, this one has some unusual norms that take a while to learn - especially since there's no ...

**12**

Some personal thoughts/methods about this important issue: voting. 1- Voting on answers: In this thread, Peter Flom gives an answer about the low answer ratio that was found/audited in CV (at that time): "...I think that is partially a function of the nature of statistics and the questions we get..." Many users agreed with him and I believe this ...

**12**

The first clustering algorithm you will learn about is k-means. That is nice and good, but unfortunately people will sometimes think that k-means is the One Tool to solve all their clustering problems and neglect finding out about drawbacks and alternatives to k-means. In such cases, I find both Anony-Mousse's and David Robinson's answers to the question ...

**12**

I agree that this is common. I don't find it especially troubling on the whole. If a question is unacceptable on any ground, it doesn't belong. If you throw something out, it is secondary why you do. At the same time, good (concise, precise, informative) feedback will tell the OP what was wrong (and help them to do it less often in the future). In ...

**12**

Do not agree with the position that very early voting is unjustifiable. It can be a reasonable voting pattern. In many cases some questions show genuine research effort and/or tackle a very interesting problem. Heck, some of them I am curious about myself! I will obviously upvote that as soon as I finish reading it and understand the basic issue. If that ...

**12**

I'm sorry for any new user who has been discouraged in genuine attempts to ask a question. I agree there's a problem with what appears to be an amount of fairly indiscriminate downvoting. If you're posting the best question you can - and try to act on any feedback you do get - I'd encourage you to avoid deleting your questions (at least if they only get a ...

Only top voted, non community-wiki answers of a minimum length are eligible
http://solidwebstrategies.info/formule-de-parseval-34/
## Formule de Parseval

From a French-English glossary of statistical terms on the source page: … : Krige's formula; formule de Parseval : Parseval's equation; formule de Rodrigues : Rodrigues formula; fractal : fractal; fractile : quantile; fréquence cumulée : …

"If one applies them to the closed one-parameter commutative group of rotations of a circle, our ideas contain a proof of Parseval's formula." (translated from the French)

Informally, the identity asserts that the sum of the squares of the Fourier coefficients of a function is equal to the integral of the square of the function. Then [4] [5] [6]. Let (e_n) be an orthonormal basis of H. The identity is related to the Pythagorean theorem in the more general setting of a separable Hilbert space, as follows. Thus suppose that H is an inner-product space. (Zygmund, Antoni, Trigonometric Series, 2nd ed.)

The interpretation of this form of the theorem is that the total energy of a signal can be calculated by summing power-per-sample across time or spectral power across frequency.

### Parseval's Theorem — from Wolfram MathWorld

This is directly analogous to the Pythagorean theorem, which asserts that the sum of the squares of the components of a vector in an orthonormal basis is equal to the squared length of the vector. This general form of Parseval's identity can be proved using the Riesz-Fischer theorem. In mathematical analysis, Parseval's identity, named after Marc-Antoine Parseval, is a fundamental result on the summability of the Fourier series of a function. (Titchmarsh, E., The Theory of Functions, 2nd ed.)
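The displayed formula that should follow "Then [4] [5] [6]" did not survive extraction. In its standard textbook form (stated here from the usual formulation, not recovered from this page), Parseval's identity reads:

```latex
% Parseval's identity: for an orthonormal basis (e_n) of a separable
% Hilbert space H and any x in H,
\sum_{n} \left| \langle x, e_n \rangle \right|^2 = \|x\|^2 .

% Classical Fourier-series form: for f \in L^2[-\pi,\pi] with
% Fourier coefficients c_n = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(t)\, e^{-int}\, dt,
\sum_{n=-\infty}^{\infty} |c_n|^2 = \frac{1}{2\pi} \int_{-\pi}^{\pi} |f(t)|^2 \, dt .
```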
Parseval's theorem can also be expressed as follows: …

The assumption that B is total is necessary for the validity of the identity. Parseval's theorem is closely related to other mathematical results involving unitary transformations. A similar result is the Plancherel theorem, which asserts that the integral of the square of the Fourier transform of a function is equal to the integral of the square of the function itself.

Alternatively, for the discrete Fourier transform (DFT), the relation becomes

$$\sum_{n=0}^{N-1} |x[n]|^2 = \frac{1}{N} \sum_{k=0}^{N-1} |X[k]|^2 ,$$

where $X[k]$ is the length-$N$ DFT of $x[n]$.

Although the term "Parseval's theorem" is often used to describe the unitarity of any Fourier transform, especially in physics, the most general form of this property is more properly called the Plancherel theorem.

## Parseval's identity

More generally, Parseval's identity holds in any inner-product space, not just separable Hilbert spaces. For discrete-time signals, the theorem becomes

$$\sum_{n=-\infty}^{\infty} |x[n]|^2 = \frac{1}{2\pi} \int_{-\pi}^{\pi} \left| X(e^{i\varphi}) \right|^2 \, d\varphi ,$$

where $X(e^{i\varphi})$ is the discrete-time Fourier transform of $x[n]$.

Geometrically, it is the Pythagorean theorem for inner-product spaces. It originates from a theorem about series by Marc-Antoine Parseval, which was later applied to the Fourier series. Let B be an orthonormal basis of H. (References cited on the page, fragmentary: Hazewinkel, Michiel, ed.; Dean, Numerical Analysis, 2nd ed.; Advanced Calculus, 4th ed.)
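The DFT form of Parseval's theorem, $\sum_{n=0}^{N-1} |x[n]|^2 = \frac{1}{N} \sum_{k=0}^{N-1} |X[k]|^2$, can be checked numerically with a naive $O(N^2)$ DFT in pure Python (a sketch for illustration, not an efficient FFT):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform: X[k] = sum_n x[n] e^{-2*pi*i*k*n/N}."""
    n_pts = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / n_pts)
                for n in range(n_pts))
            for k in range(n_pts)]

x = [1.0, 2.0, -1.5, 0.5, 3.0, -2.0, 0.0, 1.0]
X = dft(x)
energy_time = sum(abs(v) ** 2 for v in x)            # sum of |x[n]|^2
energy_freq = sum(abs(v) ** 2 for v in X) / len(x)   # (1/N) sum of |X[k]|^2
print(energy_time, energy_freq)  # both equal (21.5, up to rounding)
```

The two energies agree to floating-point precision for any input signal, which is exactly the DFT statement of the theorem.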
http://openstudy.com/updates/5111b9a1e4b09cf125bde029
## lopus

Hello fellows, I'm researching MathJax and I would like to know if anyone has worked with it; I would like to do something very similar to the "equations" box.

1. UnkleRhaukus: $\boxed{V=lbh}$ I'm not sure I understand your question.
2. lopus: The equation button in OpenStudy is made with MathJax. Do you know MathJax?
3. UnkleRhaukus: Oh, I've been typing it all out by hand.
4. SWAG: $$\Huge\color{red}\ast$$$$\huge\color{purple}\ast$$$$\Large\color{blue}\ast$$$$\large\color{green}\ast$$$$\color{yellow}\ast$$$$\small\color{grey}\ast$$$$\Tiny\color{goldenrod}\ast$$$$\tiny\color{turquoise}\ast$$$$\Tiny\color{goldenrod}\ast$$$$\small\color{grey}\ast$$$$\color{yellow}\ast$$$$\large\color{green}\ast$$$$\Large\color{blue}\ast$$$$\huge\color{purple}\ast$$$$\Huge\color{red}\ast$$$$\huge\color{purple}\ast$$$$\Large\color{blue}\ast$$$$\large\color{green}\ast$$$$\color{yellow}\ast$$$$\small\color{grey}\ast$$$$\Tiny\color{goldenrod}\ast$$$$\tiny\color{turquoise}\ast$$$$\Tiny\color{goldenrod}\ast$$$$\small\color{grey}\ast$$$$\color{yellow}\ast$$$$\large\color{green}\ast$$$$\Large\color{blue}\ast$$$$\huge\color{purple}\ast$$$$\Huge\color{red}\ast$$$$\huge\color{purple}\ast$$$$\Large\color{blue}\ast$$$$\large\color{green}\ast$$$$\color{yellow}\ast$$$$\small\color{grey}\ast$$$$\Tiny\color{goldenrod}\ast$$$$\tiny\color{turquoise}\ast$$$$\Tiny\color{goldenrod}\ast$$$$\small\color{grey}\ast$$$$\color{yellow}\ast$$$$\large\color{green}\ast$$$$\Large\color{blue}\ast$$$$\huge\color{purple}\ast$$$$\Huge\color{red}\ast$$
https://myaptitude.in/jee/maths?start=100
# JEE Maths Questions

### A man saves Rs. 200 in each of the first three months of his service

A man saves Rs. 200 in each of the first three months of his service. In each of the subsequent months his saving increases by Rs. 40 more than the saving of the immediately previous month. His total saving from the start of service will be Rs. 11040 after

1. 18 months
2. 19 months
3. 20 months
4. 21 months

### If α ≠ β but α^2 = 5α - 3 and β^2 = 5β - 3, then the equation having α/β and β/α as its roots is

1. 3x^2 - 19x - 3 = 0
2. 3x^2 - 19x + 3 = 0
3. x^2 - 5x + 3 = 0
4. 3x^2 + 19x - 3 = 0

### Let a and b be roots of the equation px^2 + qx + r = 0, p ≠ 0

If p, q, r are in A.P. and 1/a + 1/b = 4, then the value of |a - b| is

1. 2√17 / 9
2. √61 / 9
3. √34 / 9
4. 2√13 / 9

### The domain of the function f(x) = 1/√(|x| - x) is

1. (0, ∞)
2. (-∞, 0)
3. (-∞, ∞)
4. (-∞, ∞) - {0}

### The function f(x) = log (x + √(x^2 + 1)) is

1. an even function
2. an odd function
3. neither an even nor an odd function
4. a periodic function

### Let R = {(1, 3), (4, 2), (2, 4), (2, 3), (3, 1)} be a relation on the set A = {1, 2, 3, 4}

The relation R is

1. not symmetric
2. transitive
3. reflexive
4. a function

### If a ∈ R and the equation -3(x - [x])^2 + 2(x - [x]) + a^2 = 0 has no integral solution

If a ∈ R and the equation -3(x - [x])^2 + 2(x - [x]) + a^2 = 0 (where [x] denotes the greatest integer ≤ x) has no integral solution, then all possible values of a lie in the interval

1. (-1, 0) ∪ (0, 1)
2. (-2, -1)
3. (1, 2)
4. (-∞, -2) ∪ (2, ∞)

### Let A and B be two sets containing four and two elements respectively

Then the number of subsets of the set A × B, each having at least three elements, is

1. 510
2. 256
3. 275
4. 219

### If A, B and C are three sets such that A ∩ B = A ∩ C and A ∪ B = A ∪ C, then

1. B = C
2. A = C
3. A ∩ B = φ
4. A = B

### A function f from the set of natural numbers to integers

A function f from the set of natural numbers to integers defined by f(n) = (n - 1)/2 when n is odd and f(n) = -n/2 when n is even is

1. one-one and onto both
2. one-one but not onto
3. neither one-one nor onto
4. onto but not one-one

### For real x, let f(x) = x^3 + 5x + 1. Then

1. f is one-one and onto R
2. f is onto R but not one-one
3. f is neither one-one nor onto R
4. f is one-one but not onto R

### If ((1 + i)/(1 - i))^x = 1, then

1. x = 4n, where n is any positive integer
2. x = 2n, where n is any positive integer
3. x = 4n + 1, where n is any positive integer
4. x = 2n + 1, where n is any positive integer

### If |z - 4| < |z - 2|, its solution is given by

1. Re(z) > 3
2. Re(z) > 0
3. Re(z) < 0
4. Re(z) > 2

### If (1 – p) is a root of the quadratic equation x^2 + px + (1 – p) = 0, then its roots are

1. 0, -1
2. 1, 1
3. 0, 1
4. 2, 1

### If a, b, c are distinct positive real numbers and a^2 + b^2 + c^2 = 1, then ab + bc + ca is

1. greater than 1
2. equal to 1
3. less than 1
4. any real number

### If A^2 – A + I = 0, then the inverse of A is

1. A - I
2. A
3. I + A
4. I - A

### Numbers greater than 1000 but less than 4000 formed using the digits 0, 2, 3, 4

The number of numbers greater than 1000 but less than 4000 that can be formed using the digits 0, 2, 3, 4 (repetition allowed) is

1. 105
2. 125
3. 128
4. 625

### Let S(k) = 1 + 3 + 5 + ... + (2k – 1) = 3 + k^2. Then which of the following is true?

1. principle of mathematical induction can be used to prove the formula
2. S(k) implies S(k + 1)
3. S(k) implies S(k - 1)
4. S(1) is correct

### Let f(2) = 4 and f′(2) = 4. Then lim(x→2) (x f(2) − 2 f(x)) / (x − 2) is given by

1. -4
2. 2
3. 3
4. -2

### Two particles start simultaneously from the same point and move along two straight lines

Two particles start simultaneously from the same point and move along two straight lines, one with uniform velocity u and the other from rest with uniform acceleration f. Let α be the angle between their directions of motion. The relative velocity of the second particle w.r.t. the first is least after a time

1. t = (u sin α)/f
2. t = (f cos α)/u
3. t = u sin α
4. t = (u cos α)/f

### The integral of |sin x| from 0 to 10π

∫ from 0 to 10π of |sin x| dx is

1. 8
2. 10
3. 18
4. 20

### The area of the region bounded by the parabola (y – 2)^2 = x – 1, the tangent to the parabola at the point (2, 3) and the x-axis is

1. 3
2. 6
3. 9
4. 12

### The order and degree of the differential equation (1 + 3dy/dx)^(2/3) = 4d^3y/dx^3 are

1. (3, 3)
2. (3, 1)
3. (1, 2)
4. (1, 2/3)

### In a class of 100 students there are 70 boys whose average marks in a subject are 75

If the average marks of the complete class is 72, then what is the average of the girls?

1. 74
2. 65
3. 68
4. 73

### The median of a set of 9 distinct observations is 20.5. If each of the largest 4 observations of the set is increased by 2, then the median of the new set

1. is two times the original median
2. is increased by 2
3. remains the same as that of the original set
4. is decreased by 2

### A problem in mathematics is given to three students A, B, C whose respective probabilities of solving the problem are 1/2, 1/3 and 1/4

The probability that the problem is solved is

1. 1/3
2. 1/2
3. 3/4
4. 2/3

### Events A, B, C are mutually exclusive events such that P(A) = (3x + 1)/3, P(B) = (x - 1)/4, P(C) = (1 - 2x)/4

The set of possible values of x are in the interval

1. [1/3, 1/2]
2. [1/3, 13/3]
3. [0, 1]
4. [1/3, 2/3]

### The number of solutions of tan x + sec x = 2 cos x in [0, 2π) is

1. 0
2. 1
3. 2
4. 3

### The negation of the statement "If I become a teacher, then I will open a school" is

1. I will not become a teacher or I will open a school
2. Either I will not become a teacher or I will not open a school
3. Neither I will become a teacher nor I will open a school
4. I will become a teacher and I will not open a school

### A triangle with vertices (4, 0), (-1, -1), (3, 5) is

1. right angled but not isosceles
2. neither right angled nor isosceles
3. isosceles and right angled
4. isosceles but not right angled

### If the two circles (x - 1)^2 + (y - 3)^2 = r^2 and x^2 + y^2 - 8x + 2y + 8 = 0 intersect in two distinct points, then

1. 2 < r < 8
2. r = 2
3. r > 2
4. r < 2

### A plane which passes through the point (3, 2, 0) and the line (x - 4)/1 = (y - 7)/5 = (z - 4)/4 is

1. 2x - y + z = 5
2. x + 2y - z = 1
3. x - y + z = 1
4. x + y + z = 5

### The number of ways of selecting 15 teams from 15 men and 15 women

The number of ways of selecting 15 teams from 15 men and 15 women, such that each team consists of a man and a woman, is

1. 1880
2. 1120
3. 1240
4. 1960

### If 2 + 3i is one of the roots of the equation 2x^3 – 9x^2 + kx – 13 = 0, k ∈ R, then the real root of this equation

1. does not exist
2. exists and is equal to 1/2
3. exists and is equal to -1/2
4. exists and is equal to 1

### Let the sum of the first three terms of an A.P. be 39 and the sum of its last four terms be 178

If the first term of this A.P. is 10, then the median of the A.P. is

1. 26.5
2. 28
3. 29.5
4. 31

### If the coefficients of three successive terms in the binomial expansion of (1 + x)^n are in the ratio 1 : 7 : 42, then the first of these terms in the expansion is the

1. 6th
2. 7th
3. 8th
4. 9th

### What is the sum of the squares of the roots of the equation x^2 + 2x - 143 = 0?

1. 170
2. 180
3. 190
4. 290

### If the difference between the roots of ax^2 + bx + c = 0 is 1, then which one of the following is correct?

1. b^2 = a(a + 4c)
2. a^2 = b(b + 4c)
3. a^2 = c(a + 4c)
4. b^2 = a(b + 4c)

### If α and β are the roots of the equation x^2 - q(1 + x) - r = 0, then what is (1 + α)(1 + β) equal to?

1. 1 - r
2. q - r
3. 1 + r
4. q + r

### If $$A = \begin{bmatrix}1 & 2 \\2 & 3 \end{bmatrix}$$ and $$B = \begin{bmatrix}1 & 0 \\1 & 0 \end{bmatrix}$$, then what is the determinant of AB?

1. 0
2. 1
3. 10
4. 20

### A and B are two matrices such that AB = A and BA = B. What is B^2 equal to?

1. B
2. A
3. I
4. -I

### If the 2nd, 5th and 9th terms of a non-constant A.P. are in G.P., then the common ratio of this G.P. is

1. 4/3
2. 1
3. 7/4
4. 8/5

### Let P be the point on the parabola y^2 = 8x which is at a minimum distance from the centre C of the circle x^2 + (y + 6)^2 = 1

Then the equation of the circle, passing through C and having its centre at P, is

1. x^2 + y^2 – x + 4y – 12 = 0
2. x^2 + y^2 – x/4 + 2y – 24 = 0
3. x^2 + y^2 – 4x + 9y + 18 = 0
4. x^2 + y^2 – 4x + 8y + 12 = 0

### The system of linear equations x + λy – z = 0, λx – y – z = 0, x + y – λz = 0 has a non-trivial solution for

1. exactly one value of λ
2. exactly two values of λ
3. exactly three values of λ
4. infinitely many values of λ

### The eccentricity of the hyperbola whose length of the latus rectum is equal to 8 and the length of its conjugate axis is equal to half of the distance between its foci is

1. 4/√3
2. 2/√3
3. √3
4. 4/3

### If the standard deviation of the numbers 2, 3, a and 11 is 3.5, then which of the following is true?

1. 3a^2 – 32a + 84 = 0
2. 3a^2 – 34a + 91 = 0
3. 3a^2 – 23a + 44 = 0
4. 3a^2 – 26a + 55 = 0

### The integral $$\int \dfrac{2x^{12}+5x^9}{(x^5+x^3+1)^3} dx$$ is equal to

1. $$\dfrac{-x^5}{(x^5+x^3+1)^2} + C$$
2. $$\dfrac{x^{10}}{2(x^5+x^3+1)^2} + C$$
3. $$\dfrac{x^5}{2(x^5+x^3+1)^2} + C$$
4. $$\dfrac{-x^{10}}{2(x^5+x^3+1)^2} + C$$

### If the line (x - 3)/2 = (y + 2)/(-1) = (z + 4)/3 lies in the plane lx + my – z = 9, then l^2 + m^2 is equal to

1. 18
2. 5
3. 2
4. 26

### If 0 ≤ x < 2π, then the number of real values of x which satisfy the equation cos x + cos 2x + cos 3x + cos 4x = 0 is

1. 5
2. 7
3. 9
4. 3
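The Vieta's-formulas question above (sum of the squares of the roots of x^2 + 2x - 143 = 0) can be checked in two lines. This snippet is an added illustration, not part of the question bank: with α + β = -2 and αβ = -143, we use α^2 + β^2 = (α + β)^2 - 2αβ.

```python
# Vieta's formulas for x^2 + 2x - 143 = 0: s = alpha + beta, p = alpha * beta
s, p = -2, -143
sum_of_squares = s * s - 2 * p  # alpha^2 + beta^2 = (alpha + beta)^2 - 2*alpha*beta
print(sum_of_squares)  # 290, i.e. option 4
```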
http://math.stackexchange.com/questions/271834/what-properties-of-busy-beaver-numbers-are-computable
# What properties of busy beaver numbers are computable?

The busy beaver function $\text{BB}(n)$ describes the maximum number of steps that an $n$-state Turing machine can execute before it halts (assuming it halts at all). It is not a computable function because computing it allows you to solve the halting problem.

Are functions like $\text{BB}(n) \bmod 2$, or more generally $\text{BB}(n) \bmod m$ for a modulus $m$, computable? Computing these functions doesn't solve the halting problem, so the above argument doesn't apply.

- This seems like it might well depend sensitively on the details of your machine setup. – Chris Eagle Jan 7 '13 at 0:01
- Some discussion on this question: scottaaronson.com/blog/?p=46 – Dan Brumleve Jan 7 '13 at 0:44
- A variation: can it be shown that $\text{BB}(n)$ is composite infinitely often? This version is seemingly less sensitive to the encoding. – Dan Brumleve Jan 7 '13 at 4:02
- 1-D BB Turing machines are hard to visualize, so I made a page for 2-D Turing Machine BBs. Once a 1-D Turing machine becomes predictable, it can be classified as halting or infinite. Thus, the point of predictability is the important point. This rarely happens elegantly. The champions tend to be machines that can be extended forward as they get into temporary predictable behaviors. – Ed Pegg Feb 26 '13 at 15:51
- – Andrés E. Caicedo Jul 23 '13 at 16:21

There are two types of Busy Beaver. The original definition is the maximum number of $1$s that an $n$-state, 2-symbol Turing machine can leave on a blank tape (consisting entirely of $0$s) after halting; the function for this is $\Sigma(n)$. The other definition is the maximum number of steps (or moves) that an $n$-state, 2-symbol Turing machine can take on a blank tape before halting; the function for this is $\text{S}(n)$. It actually doesn't matter which definition we're using because both are uncomputable. The machine that generates $\Sigma(n)$ and the machine that generates $\text{S}(n)$ don't have to be the same. For ease of discussion I will call them both $\text{BB}(n)$, as you have done.

It's easy to prove that the general case of $\text{BB}(n) \bmod m$ is also uncomputable. For a sufficiently large $m$, $\text{BB}(n) \bmod m = \text{BB}(n)$, therefore it's not computable either. It might be possible to calculate $\text{BB}(n) \bmod m$ for certain small values up to a limit, but we don't have enough information to say either way. If $m=1$, then absolutely! Yes, it can be computed, but this is a boring answer that tells us nothing useful about $\text{BB}(n)$. I'm assuming you're interested in non-trivial cases.

- Your argument doesn't have the right quantifiers in it. For fixed $m$, knowing $BB(n) \bmod m$ for all $n$ does not tell you $BB(n)$ for all $n$, so you can't use the fact that $BB(n)$ is uncomputable to conclude that $BB(n) \bmod m$ is uncomputable. In fact, 1) any finite sequence of $BB(n)$s is computable in a tautological sense, and 2) as pointed out in the comments, $BB(n) \bmod m$ is highly sensitive to details of encodings: you might choose dumb encodings with the property that it's always $0$, for example, and so trivially computable. – Qiaochu Yuan Feb 7 at 3:19
- If $m$ is smaller than $BB(n)$, then this is true. However, there are an infinite number of values for $m$ larger than $BB(n)$. This holds true for all finite values of $n$. If you know all the values in advance it is trivial to produce a formula that works for all of them. $n^3-6.5n^2+15.5n-9 \bmod m$ works for any $m$ and all $\Sigma(n)$ up to 4. – CJ Dennis Feb 7 at 4:21
- Again, you're not using the right quantifiers. Since $BB(n)$ grows arbitrarily large, for fixed $m$ there will always be some $n_0$ such that $BB(n) > m$ for $n > n_0$. So for any fixed $m$, reducing $\bmod m$ necessarily causes you to lose information about the busy beaver numbers in such a way that you can't conclude that the sequence $BB(n) \bmod m$ is uncomputable from this argument. (Again, independently, your argument also can't work because it's possible to choose a dumb encoding relative to which $BB(n) \equiv 0 \bmod m$, in which case the sequence is trivially computable.) – Qiaochu Yuan Feb 7 at 4:54
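To make the definitions concrete, here is a brute-force sketch (an added illustration, not from the thread) that enumerates every 1-state, 2-symbol machine with an explicit halt state and recovers S(1) = Σ(1) = 1. The 100-step cap is an assumption that happens to be safe for n = 1; for larger n, no fixed cap can decide halting in general, which is exactly the uncomputability being discussed.

```python
from itertools import product

def run(machine, max_steps=100):
    """Simulate a 1-state, 2-symbol Turing machine from an all-0 tape.

    Returns (steps, ones_left) if the machine halts within max_steps,
    else None. The halting transition itself counts as a step.
    """
    tape, pos, state, steps = {}, 0, 'A', 0
    while state != 'H':
        if steps >= max_steps:
            return None
        write, move, nxt = machine[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += 1 if move == 'R' else -1
        state = nxt
        steps += 1
    return steps, sum(tape.values())

def busy_beaver_1state():
    """Brute-force S(1) and Sigma(1) over all 64 one-state machines."""
    best_steps = best_ones = 0
    cells = list(product([0, 1], ['L', 'R'], ['A', 'H']))  # (write, move, next)
    for t0, t1 in product(cells, repeat=2):
        machine = {('A', 0): t0, ('A', 1): t1}
        result = run(machine)
        if result is not None:
            best_steps = max(best_steps, result[0])
            best_ones = max(best_ones, result[1])
    return best_steps, best_ones

print(busy_beaver_1state())  # (1, 1): S(1) = 1 and Sigma(1) = 1
```

With one state, any machine that does not halt on its first step can only march off in one direction over fresh 0s, so every halting machine halts in exactly one step; the enumeration confirms this.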
https://nationalhomeemploymentassociation.com/spiral-starecase-ypgf/pie-chart-worksheets-pdf-c0d36e
# Pie Chart Worksheets (PDF)

A pie chart is useful for showing how a set is a composite of many differently sized parts. The arc length of each section is proportional to the quantity it represents, usually resulting in a shape similar to a slice of pie; indeed, the chart is named thus because of its resemblance to a pie (yes, the kind that you eat). A variant, the pie donut chart, visualizes the percentage of parts of the whole and looks like a ring divided into sectors. Pie charts show how data sets relate to one another, but a single pie chart only provides information about one particular moment in time; if you are comparing two or more pie charts, your overall comment will not describe a trend over time.

Use these pie chart interpretation worksheets to see how well children can read and interpret data in pie charts. They are designed for students of grade 3 through grade 7 and are a fantastic way to test how well your students are doing on this subject. The "Basic Pie Graphs" require students to have a basic understanding of fractions, one of the keystone mathematical skills for which early exposure makes all the difference. Pie charts are a great way for children of different abilities to take part in a lesson: in a lesson on pie charts I like to start with just giving the students tables of angles to draw correctly on a pie chart, and, using a SMART board, it is easy to illustrate to kids that 1/3 is the same as 3/9 by shading a pie. Use this innovative worksheet to encourage children to apply their skills, and use these worksheets to master the 11+ exam maths questions on this subject (with answers, timing and PDF download).

Sample questions: Nigel sells bottles of drinks, and the percentage of drinks sold on a day is shown on a pie chart in which each sector is clearly marked (one marked sector represents 50%). "How Katie allocates her wages is shown in the table below": given the money she spent on petrol, on clothes, on food and on other items, draw an accurate pie chart to show this information. Read the pie graph and answer the questions, e.g. a) What is the most common method of travel? Another sample shows the percentage of oxygen, nitrogen and other gases in the air, and another shows that people prefer to use smartphones and laptops to go online, with a difference of 3 percent between the two.

RESPONSIBILITY PIE: we often blame ourselves for some feared future event that might happen. The "Lifestyle Balance Pie & Worksheet" was prepared and written for SMART Recovery® by Jim Braastad.
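Since the worksheets start from tables of angles, here is a short added sketch (the travel-survey counts below are made-up illustration data, not from any worksheet) of how such a table is produced: each category's sector angle is its fraction of the total times 360°.

```python
def sector_angles(counts):
    """Angle in degrees of each pie-chart sector: fraction of total * 360."""
    total = sum(counts.values())
    return {k: 360 * v / total for k, v in counts.items()}

travel = {"car": 12, "bus": 9, "walk": 6, "cycle": 3}  # hypothetical survey data
angles = sector_angles(travel)
print(angles)  # car gets 12/30 of 360 = 144 degrees, and the angles sum to 360
```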
http://mathhelpforum.com/calculus/183240-cosine-trig-limit.html
# Math Help - Cosine Trig Limit 1. ## Cosine Trig Limit limx->0 cosx-1/2x ok i kinda dont know where to start 2. ## Re: Cosine Trig Limit Originally Posted by purplec16 limx->0 cosx-1/2x ok i kinda dont know where to start It should be limx->0 {cosx-1}/2x. 3. ## Re: Cosine Trig Limit Originally Posted by purplec16 limx->0 cosx-1/2x ok i kinda dont know where to start edit: post put in a spoiler now it apparent what the OP means. Spoiler: What is that second term? You need to clean up the brackets, I can think of three different expressions here $\displaystyle \lim_{x \to 0} \cos(x) - \dfrac{x}{2}$ is direct substitution $\displaystyle \lim_{x \to 0} \cos(x) - \dfrac{1}{2x}$. Think about what happens when you divide by a number smaller than 1 - the cos(x) term will be negligible in comparison to the 1/(2x) term $\displaystyle \lim_{x \to 0} \cos \left(x - \dfrac{x}{2}\right) = \lim_{x \to 0} \cos \left(\frac{x}{2}\right)$ which is direct subbing again 4. ## Re: Cosine Trig Limit Is this $\displaystyle \lim_{x \to 0}\frac{\cos{x} - 1}{2x}$? If so \displaystyle \begin{align*} \lim_{x \to 0}\frac{\cos{x} - 1}{2x} &= \frac{1}{2}\lim_{x \to 0}\frac{\cos{x} - 1}{x} \\ &= \frac{1}{2}\lim_{x \to 0}\frac{(\cos{x} - 1)(\cos{x} + 1)}{x(\cos{x} + 1)} \\ &= \frac{1}{2}\lim_{x \to 0}\frac{\cos^2{x} - 1}{x(\cos{x} + 1)} \\ &= \frac{1}{2}\lim_{x \to 0}\frac{-\sin^2{x}}{x(\cos{x} + 1)} \\ &= -\frac{1}{2}\lim_{x \to 0}\frac{\sin{x}}{x} \cdot \lim_{x \to 0}\frac{\sin{x}}{\cos{x} + 1}\\ &= -\frac{1}{2}\cdot 1 \cdot \frac{0}{2} \\ &= 0 \end{align*} 5. 
## Re: Cosine Trig Limit Normally, I complain about people using L'Hopital's rule when simpler methods will work but I think that is the best way to handle this one: $\lim_{x\to 0}\frac{cos(x)-1}{2x}= \lim_{x\to 0}\frac{-sin(x)}{2}= 0$ Or use a power series: $cos(x)= 1- (1/2)x^2+ (1/24)x^4+ \cdot\cdot\cdot$ so $\frac{cos(x)-1}{2x}= \frac{-(1/2)x^2+ (1/24)x^4+ \cdot\cdot\cdot}{2x}= -(1/4)x+ (1/48)x^3+ \cdot\cdot\cdot$ which goes to 0 as x goes to 0. 6. ## Re: Cosine Trig Limit Originally Posted by HallsofIvy Normally, I complain about people using L'Hopital's rule when simpler methods will work but I think that is the best way to handle this one: $\lim_{x\to 0}\frac{cos(x)-1}{2x}= \lim_{x\to 0}\frac{-sin(x)}{2}= 0$ Or use a power series: $cos(x)= 1- (1/2)x^2+ (1/24)x^4+ \cdot\cdot\cdot$ so $\frac{cos(x)-1}{2x}= \frac{-(1/2)x^2+ (1/24)x^4+ \cdot\cdot\cdot}{2x}= -(1/4)x+ (1/48)x^3+ \cdot\cdot\cdot$ which goes to 0 as x goes to 0. Or just multiply top and bottom by the top's conjugate, like I did... 7. ## Re: Cosine Trig Limit Originally Posted by HallsofIvy Normally, I complain about people using L'Hopital's rule when simpler methods will work but I think that is the best way to handle this one: $\lim_{x\to 0}\frac{cos(x)-1}{2x}= \lim_{x\to 0}\frac{-sin(x)}{2}= 0$ Alternatively, one could just use the definition of the derivative. $\lim_{x \to 0} \frac{\cos(x) - 1}{x} = \left. \frac d {dx} \cos(x) \right|_{x = 0} = -\sin(0) = 0.$ I guess this is probably begging the question a little bit, though, since we effectively need to know this limit to calculate the derivatives of sin and cos in the first place. If you already have $\frac{\sin(x)}{x} \to 1$ as $x \to 0$ and you are working with the geometric definitions (not the power series definitions), then Prove It's proof seems best so far. 8.
## Re: Cosine Trig Limit Originally Posted by purplec16 limx->0 cosx-1/2x ok i kinda dont know where to start Please use brackets to make it obvious what the limit is of, as it is everyone helping you may be wasting their time by making the wrong guess at what you are really trying to ask. CB
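Assuming the intended expression is (cos x - 1)/(2x), a quick numerical check (an added illustration, not from the thread) agrees with the computed limit of 0; by the power series worked out above, the quotient behaves like -x/4 near 0.

```python
import math

def f(x):
    """The quotient (cos x - 1) / (2x), whose limit at 0 is being computed."""
    return (math.cos(x) - 1) / (2 * x)

for x in (0.1, 0.01, 0.001):
    print(x, f(x))  # values shrink roughly like -x/4, consistent with limit 0
```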
https://ee.gateoverflow.in/1623/gate-electrical-2021-question-1
Let $p$ and $q$ be real numbers such that $p^{2}+q^{2}=1$. The eigenvalues of the matrix $\begin{bmatrix} p & q\\ q& -p \end{bmatrix}$ are

1. $1$ and $1$
2. $1$ and $-1$
3. $j$ and $-j$
4. $pq$ and $-pq$

SOLUTION: For a $2\times 2$ matrix the characteristic equation is $\lambda^{2}-(\text{trace})\lambda+\det=0$. Here the trace is $p+(-p)=0$ and the determinant is $(p)(-p)-q^{2}=-(p^{2}+q^{2})=-1$, so $\lambda^{2}-1=0$, giving eigenvalues $1$ and $-1$ (option 2).
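As a quick numerical cross-check (an added sketch, not part of the original question), the eigenvalues can be computed from the characteristic polynomial of a general 2×2 matrix; p = 0.6, q = 0.8 is one arbitrary choice satisfying p^2 + q^2 = 1.

```python
import math

def eig_2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] from lambda^2 - (trace)*lambda + det = 0."""
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)
    return ((tr + disc) / 2, (tr - disc) / 2)

p, q = 0.6, 0.8  # any real pair with p^2 + q^2 = 1 gives the same result
print(eig_2x2(p, q, q, -p))  # approximately (1.0, -1.0)
```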
http://www.lama.univ-savoie.fr/pagesmembres/vuillon/articles.html
## Articles

#### B. Colange, L. Vuillon, S. Lespinats and D. Dutykh Interpreting Distortions in Dimensionality Reduction by Superimposing Neighbourhood Graphs. Paper presented at the IEEE VIS 2019 Conference.
To perform visual data exploration, many dimensionality reduction methods have been developed. These tools allow data analysts to represent multidimensional data in a 2D or 3D space, while preserving as much relevant information as possible. Yet, they cannot preserve all structures simultaneously and they induce some unavoidable distortions. Hence, many criteria have been introduced to evaluate a map's overall quality, mostly based on the preservation of neighbourhoods. Such global indicators are currently used to compare several maps, which helps to choose the most appropriate mapping method and its hyperparameters. However, those aggregated indicators tend to hide the local repartition of distortions. Thereby, they need to be supplemented by local evaluation to ensure correct interpretation of maps. In this paper, we describe a new method, called MING, for "Map Interpretation using Neighbourhood Graphs". It offers a graphical interpretation of pairs of map quality indicators, as well as local evaluation of the distortions. This is done by displaying on the map the nearest neighbours graphs computed in the data space and in the embedding. Shared and unshared edges exhibit reliable and unreliable neighbourhood information conveyed by the mapping. By this means, analysts may determine whether proximity (or remoteness) of points on the map faithfully represents similarity (or dissimilarity) of original data, within the meaning of a chosen map quality criterion. We apply this approach to two pairs of widespread indicators: precision/recall and trustworthiness/continuity, chosen for their wide use in the community, which will allow an easy handling by users.

#### A. Gheeraert, L. Pacini, V. S. Batista, L. Vuillon, C. Lesieur and I. Rivalta Exploring Allosteric Pathways of a V-Type Enzyme with Dynamical Perturbation Networks. The Journal of Physical Chemistry B.
Elucidation of the allosteric pathways in proteins is a computational challenge that strongly benefits from the combination of atomistic molecular dynamics (MD) simulations and coarse-grained analysis of the complex dynamical network of chemical interactions based on graph theory. Here, we introduce and assess the performance of the dynamical perturbation network analysis of allosteric pathways in a prototypical V-type allosteric enzyme. Dynamical atomic contacts obtained from MD simulations are used to weight the allosteric protein graph, which involves an extended network of contacts perturbed by the effector binding in the allosteric site. The outcome showed good agreement with previously reported theoretical and experimental extended studies and it provided recognition of new potential allosteric spots that can be exploited in future mutagenesis experiments. Overall, the dynamical perturbation network analysis proved to be a powerful computational tool, complementary to other network-based approaches that can assist the full exploitation of allosteric phenomena for advances in protein engineering and rational drug design.

#### E. Domenjoud, B. Laboureix and L. Vuillon Facet Connectedness of Arithmetic Discrete Hyperplanes with Non-Zero Shift. LNCS, International Conference on Discrete Geometry for Computer Imagery.
We present a criterion for an arithmetic discrete hyperplane to be facet connected when θ is the connecting thickness. We encode the shift μ in a numeration system associated with the normal vector and we describe an incremental construction of the plane based on this encoding. We deduce a connectedness criterion and we show that when the Fully Subtractive algorithm applied to the normal vector has a periodic behaviour, the encodings of shifts μ for which the plane is connected may be recognised by a finite state automaton.

#### A. Frosini and L. Vuillon Tomographic reconstruction of 2-convex polyominoes using dual Horn clauses. Theoretical Computer Science.
Among the very many research interests of Maurice Nivat, a special role, according to the produced literature, was played by the study of the algorithmic and combinatorial aspects of connected finite discrete sets of points called polyominoes. In particular, he addressed the problem of a faithful reconstruction of some subclasses of them by imposing convexity constraints. The present study fits in this research line, and relies on a well-known algorithm that Maurice Nivat and co-authors defined in 1996 for the reconstruction of hv-convex polyominoes from orthogonal projections in polynomial time. Here, we consider a hierarchy on this class of polyominoes, and we continue a longstanding research effort on the reconstruction of its first levels by specializing the above-mentioned result. The algorithm we propose is based on the possibility of characterizing the elements of the second level of the hierarchy by a logic formula belonging to the Dual-Horn class, and so polynomially solvable. Some related open problems are also presented.

#### R. Dorantes-Gilardi, L. Bourgeat, L. Pacini, L. Vuillon and C. Lesieur In proteins, the structural responses of a position to mutation rely on the Goldilocks principle: not too many links, not too few. Physical Chemistry Chemical Physics, vol. 20 (39), (2018) 25399-25410.
A disease has distinct genetic and molecular hallmarks such as sequence variants that are likely to produce the alternative protein structures accountable for individual responses to drugs and disease development. Thus, to set up customized therapies, the structural influences of amino acids on one another need to be tracked down. Using network-based models and classical analysis of amino acid and atomic packing in protein structures, the influence of first-shell neighbors on the structural fate of a position upon mutation is revisited.
Regardless of the type and position in a structure, amino acids satisfy, on average over their neighbors, a low and similar number of atomic interactions, the average being called the neighborhood watch (Nw). The structural tolerance of a position to mutation depends on the modulation of the composition and/or proximity of neighbors to maintain the same Nw, before and after mutation, at every position. Changes upon mutation of the number of atomic interactions at the level of individual pairs (wij) are structurally tolerated but influence structural dynamics. Robust, fragile and rescue interactions can be identified with Nw and wij, offering a framework to classify sequence variants according to position-dependent structural changes.

#### A. Casagrande and L. Vuillon, Sciences humaines et sociales et méthodes du numérique, un mariage heureux ?. Les Cahiers du numérique, vol. 13 (3), (2017) 115-136.
A new research field recently emerged through a manifesto written in 2010: the digital humanities. Their goal is to combine the humanities and social sciences (SHS) with digital methods. At first sight, however, this marriage seems difficult, since each discipline has its own semantic universe and its own methodologies: communication is not always straightforward. Despite these difficulties, we wish to show in this article that digital methods, when attentive to the research questions of the humanities and social sciences, can bring additional legitimacy to SHS research results. We present an example of joint research between researchers in mathematics and computer science and researchers in psychology, which consisted in carrying out a negative control on an experiment about memory.

#### C. Reutenauer and L. Vuillon, Palindromic Closures and Thue-Morse Substitution for Markoff Numbers, Uniform Distribution Theory 12 (2017), no. 2, 25-35.
We state a new formula to compute the Markoff numbers using iterated palindromic closure and the Thue-Morse substitution. The main theorem shows that for each Markoff number m, there exists a word v ∈ {a, b}∗ such that m − 2 is equal to the length of the iterated palindromic closure of the iterated antipalindromic closure of the word av. This gives a new recursive construction of the Markoff numbers by the lengths of the words involved in the palindromic closure. This construction interpolates between the Fibonacci numbers and the Pell numbers.

#### L. Vuillon, D. Dutykh and F. Fedele, Some special solutions to the Hyperbolic NLS equation. Communications in Nonlinear Science and Numerical Simulation (2018).
The Hyperbolic Nonlinear Schrödinger equation (HypNLS) arises as a model for the dynamics of three-dimensional narrow-band deep water gravity waves. In this study, the symmetries and conservation laws of this equation are computed. The Petviashvili method is then exploited to numerically compute bi-periodic time-harmonic solutions of the HypNLS equation. In physical space they represent non-localized standing waves. Non-trivial spatial patterns are revealed and an attempt is made to describe them using symbolic dynamics and the language of substitutions. Finally, the dynamics of a slightly perturbed standing wave is numerically investigated by means of a highly accurate Fourier solver.

#### I. Gambini and L. Vuillon, Tiling the space by polycube analogues to Fedorov's polyhedra, Fundamenta Informaticae, 146 (2) (2016), 197–209.
We investigate polycubes that are minimal in terms of volume and that tile the space R3 like Fedorov's polyhedra. In fact the 5 Fedorov polyhedra are convex polyhedra that tile the space by translation, and we construct geometrical discrete objects formed by unions of cubes with the same number of faces as Fedorov's polyhedra.

#### M. Achoch, R. Dorantes-Gilardi, C. Wymant, G. Feverati, K. Salamatian, L. Vuillon and C. Lesieur, Protein structural robustness to mutations: an in silico investigation, Physical Chemistry Chemical Physics, (2016).
Proteins possess qualities of robustness and adaptability to perturbations such as mutations, but occasionally fail to withstand them, resulting in loss of function. Herein, the structural impact of mutations is investigated independently of the functional impact. Primarily, we aim at understanding the mechanisms of structural robustness, a prerequisite for functional integrity. The structural changes due to mutations propagate from the site of mutation to residues much more distant than typical scales of chemical interactions, following a cascade mechanism. This can trigger dramatic changes or subtle ones, consistent with a loss of function and disease or the emergence of new functions. Robustness is enhanced by changes producing alternative structures, in good agreement with the view that proteins are dynamic objects fulfilling their functions from a set of conformations. This result, robust alternative structures, is also coherent with epistasis or rescue mutations, or more generally, with non-additive mutational effects and compensatory mutations. To achieve this study, we developed the first algorithm, referred to as Amino Acid Rank (AAR), which follows the structural changes associated with mutations from the site of the mutation to the entire protein structure and quantifies the changes so that mutations can be ranked accordingly. Assessing the paths of changes opens the possibility of assuming secondary mutations for compensatory mechanisms.

#### A. Casagrande, E. Loza-Aguirre and L. Vuillon, Improving strategic scanning information analysis: an optimized measure for information proximity evaluation. In Third International Conference on Enterprise Systems (IEEE conference), Basel, 2015.
Strategic scanning activities become less effective when faced with managing information overload.
This paper introduces the "nearness measure" (NM), a measure for information proximity evaluation. We also present and analyze the principles and properties of a graphical representation of the measure. We compare the measure to the usual cosine similarity (CS). Even though the algorithm that calculates NM between texts has a higher complexity than CS, NM with a minimization phase expresses an analysis of the documents from the general to the specific, which is the opposite of what CS does. Thus, a manager using NM would require less time to explore collected information.

#### E. Domenjoud, X. Provençal and L. Vuillon Palindromic language of thin discrete planes'' Theoretical Computer Science 624 (2016) 101–108.
We work on the Reveillès hyperplane P(v, 0, ω) with normal vector v in Rd, shift μ = 0 and thickness ω in R. Such a hyperplane is connected as soon as ω is greater than some critical value, called the connecting thickness of v with null shift. In the case where v satisfies the so-called Kraaikamp and Meester criterion, at the connecting thickness the hyperplane has very specific properties. First of all, the adjacency graph of the voxels forms a tree. This tree appeared in many works both in discrete geometry and in discrete dynamical systems. In addition, it is well known that for a finite coding of length n of discrete lines, the number of palindromes in the language is exactly n + 1. We extend this notion of language to labeled trees and we compute the number of distinct palindromes. In fact, for our voxel adjacency trees with n letters we show that the number of palindromes in the language is also n + 1. This result establishes a first link between combinatorics on words, palindromic languages, voxel adjacency trees and the connecting thickness of Reveillès hyperplanes. It also provides a better understanding of the combinatorial structure of discrete planes.

#### X. Provençal and L. Vuillon Discrete segments of Z3 constructed by synchronization of words'' Discrete Applied Mathematics, Volume 183, 11 March 2015, Pages 102-117.
We study a natural and naive composition algorithm that takes three input words written on two-letter alphabets and synchronizes them into a word on a three-letter alphabet. We show that in the case where the three input words are compatible Christoffel words, the algorithm provides a synchronization of the letters that allows the geometrical interpretation of the input words to be inherited by the output word, forming a 3D discrete line segment. A second approach is considered by applying our composition algorithm to words defined by stripes meeting at a corner of a discrete plane. We show that, under certain conditions, the output of the algorithm corresponds to the normal vector of the plane.

#### L. Vuillon and C. Lesieur From local to global changes in proteins: a network view'' Current Opinion in Structural Biology, Volume 31, April 2015, Pages 1–8.
To fulfill the biological activities in living organisms, proteins are endowed with dynamics, robustness and adaptability. The three properties co-exist because they allow global changes in structure to arise from local perturbations (dynamics). Robustness refers to the ability of the protein to incur such changes without suffering loss of function; adaptability is the emergence of a new biological activity. Since loss of function may jeopardize the survival of the organism and lead to disease, adaptability may occur through the combination of two local perturbations that together rescue the initial function. The review highlights the relevance of computational network analysis to understand how a local change produces global changes.

#### C. Lesieur and L. Vuillon From Tilings to Fibers – Bio-mathematical Aspects of Fold Plasticity'' Oligomerization of Chemical and Biological Compounds, Dr. Claire Lesieur (Ed.), ISBN: 978-953-51-1617-2, InTech, DOI: 10.5772/58577.
Protein oligomers are made by the association of protein chains via intermolecular amino acid interactions (interactions between subunits) forming so-called protein interfaces. This chapter proposes mathematical concepts to investigate the shape constraints on the protein interfaces in order to promote oligomerization. First, we focus on tiling the plane (2 dimensions) by translation with abstract shapes. Using the fundamental Theorem of Beauquier-Nivat, we show that the shapes of the tiles must be either like a square or like a hexagon to tile the whole plane. Second, we look in more detail at the tiling of a cylinder and discuss its relevance in constructing protein fibers. The universality of such "building" properties is investigated through biological examples. This chapter is written four-handed by a mathematician and a biologist in order to present bio-mathematical aspects of fiber constructions.

#### G. Feverati, M. Achoch, L. Vuillon and C. Lesieur Intermolecular β-Strand Networks Avoid Hub Residues and Favor Low Interconnectedness: A Potential Protection Mechanism against Chain Dissociation upon Mutation'' PLOS ONE 10.1371/journal.pone.0094745.
Altogether, few protein oligomers undergo a conformational transition to a state that impairs their function and leads to diseases. But when it happens, the consequences are not harmless and the so-called conformational diseases pose serious public health problems. Notorious examples are Alzheimer's disease and some cancers, associated with a conformational change of the amyloid precursor protein (APP) and of the p53 tumor suppressor, respectively. The transition is linked with the propensity of β-strands to aggregate into amyloid fibers. Nevertheless, a huge number of protein oligomers associate chains via β-strand interactions (intermolecular β-strand interface) without ever evolving into fibers.
We analyzed the layout of 1048 intermolecular β-strand interfaces looking for features that could provide the β-strands resistance to conformational transitions. The interfaces were reconstructed as networks with the residues as the nodes and the interactions between residues as the links. The networks followed an exponential decay degree distribution, implying an absence of hubs and nodes having few links. Such a layout provides robustness to changes. Few links per node do not restrict the choices of amino acids capable of making an interface and maintain high sequence plasticity. Few links reduce the "bonding" cost of making an interface. Finally, few links moderate the vulnerability to amino acid mutation because they entail limited communication between the nodes. This confines the effects of a mutation to few residues instead of propagating them to many residues via hubs. We propose that intermolecular β-strand interfaces are organized in networks that tolerate amino acid mutation to avoid chain dissociation, the first step towards fiber formation. This is tested by looking at the intermolecular β-strand network of the p53 tetramer.

#### A. Casagrande, H. Lesca and L. Vuillon Un outil pour surmonter la surcharge d'information de la veille stratégique'' Colloque VSST, Nancy (France), 23-25 October 2013, 17p.
In this communication, we propose the concept of "neighboring information" and we show its usefulness in the process of anticipative strategic scanning (VAS), facing the problem of information overload notably caused by the use of the Internet. We present a prototype software tool designed to implement the concept, as well as an application case. The concept is particularly useful when strategic scanning is oriented toward the exploitation of information with an anticipative character, such information being generally drowned in large volumes of data. We experiment with our prototype on the issue of CO2 valorization and we thus show that this tool provides a real time saving in making the collected information usable by decision makers.

#### G. Feverati, C. Lesieur and L. Vuillon SYMMETRIZATION: RANKING AND CLUSTERING IN PROTEIN INTERFACES'' Mathematics of Distances and Applications.
Purely geometric arguments are used to extract information from three-dimensional structures of oligomeric proteins, which are very common biological entities stably made of several polypeptide chains. They are characterized by the presence of an interface between adjacent amino acid chains and can be investigated with the approach proposed here. We introduce a method, called symmetrization, that allows one to rank interface interactions on the basis of inter-atomic distances and of the local geometry. The lowest level of the ranking has been used previously with interesting results. Now, we need to complete this picture with a careful analysis of the higher ranks, which are introduced here for the first time, in a proper mathematical setup. The interface finds a very nice mathematical abstraction in the notion of weighted bipartite graph, where the inter-atomic distance provides the weight. Thus, our approach is partially inspired by graph theory decomposition methods but with an emphasis on "locality", namely the idea that structures constructed by the symmetrization adapt to the local scales of the problem. This is an important issue as the known interfaces may present major differences in relation to their size, their composition and the local geometry. Thus, we looked for a local method that can autonomously detect the local structure. The physical neighborhood is introduced by the concept of cluster of interactions. We discuss the biological applications of this ranking and our previous fruitful experience with the lowest symmetrized level.
An example is given, using the prototypical cholera toxin.

#### E. Domenjoud and L. Vuillon Geometric palindromic closure'' Uniform Distribution Theory 7 (2012), no. 2, 109-140.
We define, through a set of symmetries, an incremental construction of geometric objects in Zd. This construction is directed by a word over the alphabet {1,...,d}. These objects are composed of d disjoint components linked by the origin and enjoy the nice property that each component, as well as the global object, has a central symmetry. This construction may be seen as a geometric palindromic closure. Among other objects, we get a 3-dimensional version of the Rauzy fractal. For dimension 2, we show that our construction codes the standard discrete lines and is equivalent to the well-known palindromic closure in combinatorics on words.

#### A. Blondin Massé, G. Paquin, H. Tremblay and L. Vuillon On Generalized Pseudostandard Words Over Binary Alphabets'' Journal of Integer Sequences, Vol. 16 (2013), Article 13.2.11.
In this paper, we study generalized pseudostandard words over a two-letter alphabet, which extend the classes of standard Sturmian, standard episturmian and pseudostandard words, allowing different involutory antimorphisms instead of the usual palindromic closure or a fixed involutory antimorphism. We first discuss pseudoperiods, a useful tool for describing words obtained by iterated pseudopalindromic closure. Then, we introduce the concept of the normalized directive bi-sequence (Θ, w) of a generalized pseudostandard word, that is, the one that exactly describes all its pseudopalindromic prefixes. We show that a directive bi-sequence is normalized if and only if its set of factors does not intersect a finite set of forbidden ones. Moreover, we provide a construction to normalize any directive bi-sequence.
Next, we present an explicit formula, generalizing the one introduced by Justin for the standard episturmian words, that recursively computes the next prefix of a generalized pseudostandard word in terms of the previous one. Finally, we focus on generalized pseudostandard words having complexity 2n, also called Rote words. More precisely, we prove that the normalized bi-sequences describing Rote words are completely characterized by their factors of length 2.

#### G. Feverati, M. Achoch, J. Zrimi, L. Vuillon and C. Lesieur Beta-Strand Interfaces of Non-Dimeric Protein Oligomers Are Characterized by Scattered Charged Residue Patterns'' PLoS ONE 7(4): e32558.
Protein oligomers are formed either permanently, transiently or even by default. The protein chains are associated through intermolecular interactions constituting the protein interface. The protein interfaces of 40 soluble protein oligomers of stoichiometries above two are investigated using a quantitative and qualitative methodology, which analyzes the x-ray structures of the protein oligomers and considers their interfaces as interaction networks. The protein oligomers of the dataset share the same geometry of interface, made by the association of two individual β-strands (β-interfaces), but are otherwise unrelated. The results show that the β-interfaces are made of two interdigitated interaction networks. One of them involves interactions between main chain atoms (backbone network) while the other involves interactions between side chain and backbone atoms or between only side chain atoms (side chain network). Each one has its own characteristics, which can be associated with a distinct role. The secondary structure of the β-interfaces is implemented through the backbone networks, which are enriched with the hydrophobic amino acids favored in intramolecular β-sheets (MCWIV).
The intermolecular specificity is provided by the side chain networks via positioning different types of charged residues at the extremities (arginine) and in the middle (glutamic acid and histidine) of the interface. Such a charge distribution helps discriminate between sequences of intermolecular β-strands, of intramolecular β-strands and of β-strands forming β-amyloid fibers. This might open new avenues for drug design and the development of predictive tools. Moreover, the β-strands of the cholera toxin B subunit interface, when produced individually as synthetic peptides, are capable of inhibiting the assembly of the toxin into pentamers. Thus, their sequences contain the features necessary for β-interface formation. Such β-strands could be considered as 'assemblons', independent associating units, by homology to the foldons (independent folding units). Such a property would be extremely valuable in terms of assembly inhibitory drug development.

#### I. Gambini and L. Vuillon How many faces can polycubes of lattice tilings by translation of R3 have?'' Electronic Journal of Combinatorics, P199, 2011.
We construct a class of polycubes that tile the space by translation in a lattice-periodic way and show that for this class the number of surrounding tiles cannot be bounded. The first construction is based on polycubes with an L-shape but with many distinct tilings of the space. Nevertheless, we are able to construct a class of more complicated polycubes such that each polycube tiles the space in a unique way and such that the number of faces is 4k + 8 where 2k + 1 is the volume of the polycube. This shows that the number of tiles that surround the surface of a space-filler cannot be bounded.

#### I. Gambini and L. Vuillon Non lattice periodic tilings of R3 by single polycubes'' To appear in Theoretical Computer Science, 2012.
In this paper, we study a class of polycubes that tile the space by translation in a non-lattice-periodic way.
More precisely, we construct a family of tiles indexed by integers with the property that Tk is a tile having k ≥ 2 as its anisohedral number. That is, k copies of Tk are assembled by translation in order to form a metatile. We prove that this metatile is lattice periodic while Tk is not a lattice periodic tile.

#### A. Frosini, S. Rinaldi, K. Tawbe and L. Vuillon Reconstruction of 2-convex polyominoes'' LAMA research report.
A polyomino P is called 2-convex if for every two cells belonging to P, there exists a monotone path included in P with at most two changes of direction. This paper studies the tomographical aspects of 2-convex polyominoes from their horizontal and vertical projections and gives an algorithm that reconstructs all 2-convex polyominoes in polynomial time.

#### D. Jamet, G. Paquin, G. Richomme and L. Vuillon On the fixed points of the iterated pseudopalindromic closure'' Theoretical Computer Science 412, Issue 27, 2974-2987, 2011, special issue "Combinatorics on Words (WORDS 2009), 7th International Conference on Words".
First introduced in the study of the Sturmian words, the iterated palindromic closure was generalized to pseudopalindromes. This operator allows one to construct words with infinitely many pseudopalindromic prefixes, called pseudostandard words. We provide here several combinatorial properties of the fixed points under the iterated pseudopalindromic closure.

#### A. Blondin Massé, S. Brlek, S. Labbé and L. Vuillon Palindromic complexity of codings of rotations'' Theoret. Comput. Sci. 412 (2011) 6455-6463.
We study the structure of infinite words obtained by coding rotations on partitions of the unit circle by inspecting the return words. The main result is that every factor of a coding of rotations on two intervals has at most 4 complete return words, where the bound is realized only for a finite number of factors.
As a byproduct we obtain that when the partition consists of two intervals, the corresponding word is full, that is, it realizes the maximal palindromic complexity. We also provide a combinatorial proof for the special case of complementary-symmetric Rote sequences by considering both the palindromes and the antipalindromes occurring in them.

#### E. Domenjoud, D. Jamet, D. Vergnaud and L. Vuillon Enumeration Formula for (2, n)-Cubes in Discrete Planes'' Discrete Applied Mathematics 160, 15 (2012) 2158-2171.
We compute the number of local configurations of size 2 × n on naive discrete planes using combinatorics on words, 2-dimensional Rote sequences and Berstel-Pocchiola diagrams.

#### F. De Carli, A. Frosini, S. Rinaldi and L. Vuillon How to construct convex polyominoes on DNA Wang tiles?'' LAMA research report.
In this article, we describe a general method for constructing various shapes of convex polyominoes using DNA Wang tiles. We recall the basic definitions and notations of two-dimensional languages and tiling systems and some basic definitions on polyominoes, in particular the definitions of convex, directed-convex, and parallelogram polyominoes. We describe the algorithm to transform tiles of a tiling system into labelled Wang tiles. We show explicitly the set of labelled Wang tiles that allows us to construct convex polyominoes. We give an example of a parallelogram polyomino built on labelled Wang tiles. The last part concerns the transformation of labelled Wang tiles into DNA Wang tiles. Moreover, we show that it is possible to control the size of the polyominoes to be constructed by means of a DNA strand.

#### K. Tawbe, F. Cotton and L. Vuillon Evolution of Brain Tumor and Stability of Geometric Invariants'' International Journal of Telemedicine and Applications, Volume 2008 (2008), Article ID 210471, 12 pages.
This paper presents a method to reconstruct and to calculate geometric invariants on brain tumors.
The geometric invariants considered in the paper are the volume, the area, the discrete Gauss curvature, and the discrete mean curvature. The volume of a tumor is an important aspect that helps doctors to make a medical diagnosis. And as doctors seek a stable calculation, we propose to prove the stability of some invariants. Finally, we study the evolution of brain tumors as a function of time over two or three years, depending on the patient, with MR images every three or six months.

#### P. Domosi, G. Horvath and L. Vuillon On Shyr-Yu Theorem'' Theoretical Computer Science, Volume 410 (2009).
An alternative proof of the Shyr-Yu Theorem is given. Some generalizations are also considered using fractional root decompositions and fractional exponents of words.

#### G. Paquin and L. Vuillon A characterization of balanced episturmian sequences'' Electronic Journal of Combinatorics, Volume 14 (2007).
It is well-known that Sturmian sequences are the non ultimately periodic sequences that are balanced over a 2-letter alphabet. They are also characterized by their complexity: they have exactly $(n+1)$ distinct factors of length $n$. A natural generalization of Sturmian sequences is the set of infinite episturmian sequences. These sequences are not necessarily balanced over a $k$-letter alphabet, nor are they necessarily aperiodic. In this paper, we characterize balanced episturmian sequences, periodic or not, and prove Fraenkel's conjecture for the special case of episturmian sequences. It appears that balanced episturmian sequences are all ultimately periodic and they can be classified into 3 families.

#### L. Vuillon Editor of the Special Issue of TCS on Combinatorics of the Discrete Plane and Tilings'' Theoretical Computer Science, Volume 319, Issues 1-3, Pages 1-484 (10 June 2004).

#### S. Brlek, S. Dulucq, A. Ladouceur and L. Vuillon Combinatorial properties of smooth infinite words'' Theoretical Computer Science, Volume 352, Issues 1-3, Pages 306-317 (2006).
We describe some combinatorial properties of an intriguing class of infinite words connected with the one defined by Kolakoski, defined as the fixed point of the run-length encoding $\Delta$. It is based on a bijection on the free monoid over $\Sigma =\{ 1,2\}$ that shows some surprising mixing properties. All words contain the same finite number of square factors, and consequently they are cube-free. This suggests that they have the same complexity, as confirmed by extensive computations. We further investigate the occurrences of palindromic subwords. Finally we show that there exist smooth words obtained as fixed points of substitutions (realized by transducers), as in the case of $K$.
#### C. Frougny and L. Vuillon Coding of two-dimensional constraints of finite type by substitutions'' JALC, 2005, volume 10 (4), pages 465-482.
We give an automatic method to generate transition matrices associated with two-dimensional constraints of finite type by using squared substitutions of constant dimension.
#### A. Frosini, M. Nivat and L. Vuillon An introductive analysis of periodical discrete sets from a tomographical point of view'' Theoretical Computer Science, Volume 347, Issues 1-2, 30 November 2005, Pages 370-392.
In this paper we introduce a new class of binary matrices whose entries show periodical configurations, and we furnish a first approach to their analysis from a tomographical point of view. In particular we propose a polynomial-time algorithm for reconstructing matrices with a special periodical behavior from their horizontal and vertical projections. We succeeded in our aim by using reductions involving polyominoes which can be characterized by means of 2-SAT formulas.
#### I. Gambini and L. Vuillon An algorithm for deciding if a polyomino tiles the plane by translations'' Theoret. Informatics Appl. 41, 147-155 (2007).
For polyominoes coded by their boundary word, we describe a quadratic $O(n^2)$ algorithm in the boundary length $n$ which improves on the naive $O(n^4)$ algorithm. The techniques used come from algorithmics, discrete geometry and combinatorics on words.
#### F. Chavanon, M. Latapy, M. Morvan, E. Rémila and L. Vuillon Graph encoding of 2D-gon tilings'' Theoretical Computer Science, Volume 346, Issues 2-3, 28 November 2005, Pages 226-253.
2D-gon tilings with parallelograms are a model used in physics to study quasicrystals, and they are also important in combinatorics for the study of aperiodic structures. In this paper, we study the graph induced by the adjacency relation between tiles. This relation can be used to encode 2D-gon tilings simply and efficiently for algorithmic manipulation. We show for example how it can be used to sample random 2D-gon tilings.
#### B. Durand and L. Vuillon Editors of the Special Issue of TCS on Tilings of the Plane'' Theoretical Computer Science, Volume 303, Issues 2-3, Pages 265-554 (15 July 2003).
#### L. Vuillon Balanced words'' Bull. Belg. Math. Soc. Simon Stevin 10 (2003), no. 5, 787-805.
This article presents a survey about balanced words. The balance property comes from combinatorics on words and is used as a characteristic property of the well-known Sturmian words. The main goal of this survey is to study various generalizations of this notion, with applications and open problems in number theory and in theoretical computer science. We also prove a new result about the balance property of hypercubic billiard words.
#### A. Del Lungo, A. Frosini, M. Nivat and L. Vuillon Discrete tomography: reconstruction under periodicity constraints'' Automata, Languages and Programming, 29th International Colloquium, ICALP 2002, Malaga, Spain, July 8-13, 2002, Proceedings, Springer, Lecture Notes in Computer Science, volume 2380 (2002), 38-56.
We study the reconstruction problem on some new classes consisting of binary matrices with periodicity properties, and we propose a polynomial-time algorithm for reconstructing these binary matrices from their orthogonal discrete X-rays.
#### P. Hubert and L. Vuillon Complexity of cutting words on regular tilings'' European Journal of Combinatorics, Volume 28, Issue 1, Pages 429-438, 2007.
We show that the complexity of a cutting word $u$ in a regular tiling by a polyomino $Q$ is equal to $P_n(u)= (p+q-1)n +1$ for all $n \geq 0,$ where $P_n(u)$ counts the number of distinct factors of length $n$ in the infinite word $u$ and where the boundary of $Q$ is constructed by $2p$ horizontal and $2q$ vertical unit segments.
#### J. Berstel and L. Vuillon Coding rotations on intervals'' Theoretical Computer Science 281 (2002), 99-107.
We show that the coding of a rotation by $\alpha$ on m intervals with rationally independent lengths can be recoded over m Sturmian words of angle $\alpha.$ More precisely, for a given m, a universal automaton is constructed such that the edge indexed by the vector of values of the ith letter on each Sturmian word gives the value of the ith letter of the coding of rotation.
#### C. Magnien, H. D. Phan and L. Vuillon Characterization of lattices induced by (extended) chip firing games'' Discrete Mathematics and Theoretical Computer Science Proceedings AA (DM-CCG), 2001, 229-244.
The Chip Firing Game (CFG) is a discrete dynamical model used in physics, computer science and economics. It is known that the set of configurations reachable from an initial configuration can be ordered as a lattice. We first present a structural result about this model, which allows us to introduce some useful tools for describing those lattices. Then we establish that the class of lattices that are the configuration space of a CFG is strictly between the class of distributive lattices and the class of upper locally distributive (ULD) lattices.
Finally we propose an extension of the model, the coloured Chip Firing Game, which generates exactly the class of ULD lattices.
#### L. Vuillon On the number of return words in infinite words constructed by interval exchange transformations'' Pure Mathematics and Applications, Volume 18 (2007), Issue No. 3-4
In this article, we count the number of return words in some infinite words with complexity $2n+1$. We also consider some infinite words given by codings of rotation and interval exchange transformations on $k$ intervals. We prove that the number of return words over a given word $w$ for these infinite words is exactly $k.$
#### J. Justin and L. Vuillon Return words in Sturmian and episturmian words'' Theoretical Informatics and Applications 34 (2000) 343-356.
Considering each occurrence of a word $w$ in a recurrent infinite word, we define the set of return words of $w$ to be the set of all distinct words beginning with an occurrence of $w$ and ending just before the next occurrence of $w$ in the infinite word. We give a simpler proof of the recent result (of the second author) that for each factor $w$ of a Sturmian word there exist exactly two return words of $w.$ Then, considering episturmian infinite words, which are a natural generalization of Sturmian words, we study the position of the occurrences of any factor in such infinite words and we determine the return words. Finally, we apply these results in order to obtain a kind of balance property of episturmian words and to calculate the recurrence function of these words.
#### I. Fagnot and L. Vuillon Generalized balances in Sturmian words'' Discrete Applied Mathematics, Volume 121, Issues 1-3, (2002), 83-101.
One of the numerous characterizations of Sturmian words is based on the notion of balance.
An infinite word $x$ on the alphabet $\{0,1\}$ is balanced if, given two factors $w$ and $w'$ of $x$ having the same length, the difference between the number of 0's in $w$ (denoted by $|w|_0$) and that of $w'$ is at most 1, i.e. $||w|_0 - |w'|_0| \le 1$. It is well known that a word is Sturmian if and only if it is balanced. In this paper, the balance notion is generalized by considering the number of occurrences of a word $u$ in $w$ (denoted by $|w|_u$) and in $w'$. The following theorem is obtained: let $x$ be a Sturmian word and let $u$, $w$ and $w'$ be three factors of $x$; if $|w|=|w'|$, then $||w|_u - |w'|_u| \le |u|$. We also give another balance property, called equilibrium. This notion permits us to give a new characterization of Sturmian words. The proofs use word graphs and return word techniques.
#### L. Vuillon A characterisation of Sturmian words by return words'' European Journal of Combinatorics (2001) 22, 263-275.
We present a new characterization of Sturmian sequences using return words. Considering each occurrence of a word $w$ in a recurrent sequence, we define the set of return words over $w$ to be the set of all distinct words beginning with an occurrence of $w$ and ending with the next occurrence of $w$ in the sequence. It is shown that a sequence is a Sturmian sequence if and only if for each non-empty word $w$ appearing in the sequence the cardinality of the set of return words over $w$ is equal to two.
#### V. Berthé and L. Vuillon Palindromes and two-dimensional Sturmian sequences'' J. Autom. Lang. Comb., 6 (2001), 121-138.
This paper introduces a two-dimensional notion of palindrome for rectangular factors of double sequences: these palindromes are defined as centrosymmetric factors. This notion provides a characterization of two-dimensional Sturmian sequences in terms of two-dimensional palindromes, generalizing to double sequences the results in the article of X. Droubay and G. Pirillo.
#### V. Berthé and L. Vuillon Suites doubles de faible complexité'' Journal de Théorie des Nombres de Bordeaux 12 (2000), 179-208.
We give a geometric representation of uniformly recurrent double sequences with rectangular complexity function $mn+n$. We show that these sequences code a $Z^2$-action defined by two irrational rotations on the unit circle. The proof relies on a study of double sequences whose rows are Sturmian sequences with the same language.
#### J. Mairesse and L. Vuillon Optimal Sequences in a Heap Model with Two Pieces'' Theoret. Comput. Sci. 270 1-2 (2002), 525-560.
In a heap model, solid blocks, or pieces, pile up according to the Tetris game mechanism. An optimal sequence is an infinite sequence of pieces minimizing the asymptotic growth rate of the heap. In a heap model with two pieces, we prove that there always exists an optimal sequence which is either periodic or Sturmian. We completely characterize the cases where the optimal sequence is periodic and the ones where it is Sturmian. The proof is constructive, providing an explicit optimal sequence. We also consider the model where the successive pieces are chosen at random, independently and with some given probabilities. We study the expected growth rate of the heap. For a model with two pieces, the rate is either computed explicitly or given as an infinite series. We show an application to a system of two processes sharing a resource, and we prove that a greedy schedule is not always optimal.
#### L. Vuillon Local configurations in discrete planes'' Bull. Belg. Math. Soc. 6 (1999), 625-636.
We study the number of local configurations in a discrete plane. We convert this problem into a computation of a double sequence complexity. We compute the number $C(n,m)$ of distinct $n\times m$ patterns appearing in a discrete plane. We show that $C(n,m)=nm$ for all $n$ and $m$ positive integers.
The coding of this sequence by a $Z^2$-action on the one-dimensional torus gives information about the structure of a discrete plane. Furthermore, this sequence is a generalized Rote sequence with complexity $P(n,m)=2nm$ for all $n$ and $m$ positive integers and with a symmetric complementary language for rectangular words.
#### V. Berthé and L. Vuillon Tilings and rotations: a two-dimensional generalization of Sturmian sequences'' Discrete Math. 223 (2000), 27-53.
We study a two-dimensional generalization of Sturmian sequences corresponding to an approximation of a plane: these sequences are defined on a three-letter alphabet and code a two-dimensional tiling obtained by projecting a discrete plane. We show that these sequences code a $Z^2$-action generated by two rotations on the unit circle. We first deduce a new way of computing the rectangle complexity function. Then we provide an upper bound on the number of frequencies of rectangular factors of given size.
#### L. Vuillon Combinatoire des motifs d'une suite sturmienne bidimensionnelle'' Theoret. Comput. Sci. 209 (1998), no. 1-2, 261-285.
We study a generalization of Sturmian sequences by constructing a "pleated surface", given by the approximation of a plane by three kinds of square faces oriented along the three coordinate planes. To this surface we associate, by projection, a tiling of the plane by three kinds of rhombi. We define a complexity function on this tiling by counting the number of distinct patterns in a window of given size. By studying the extensions of the patterns, we give the explicit form of this function for a triangular window and for a parallelogram-shaped window.
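Several of the one-dimensional results listed above can be checked empirically on the Fibonacci word, the prototypical Sturmian sequence: factor complexity $p(n)=n+1$, the balance property, and the theorem that every non-empty factor has exactly two return words. The following Python sketch is ours, not taken from any of the papers, and the helper names are illustrative:

```python
# Empirical check of three Sturmian characterizations on a long prefix of the
# Fibonacci word, generated by iterating the substitution 0 -> 01, 1 -> 0.

def fibonacci_word(n_iter=20):
    """Return a prefix of the Fibonacci word."""
    w = "0"
    for _ in range(n_iter):
        # apply the substitution 0 -> 01, 1 -> 0 simultaneously
        w = w.replace("0", "0_").replace("1", "0").replace("_", "1")
    return w

def factors(w, n):
    """All distinct factors of length n occurring in w."""
    return {w[i:i + n] for i in range(len(w) - n + 1)}

def return_words(w, u):
    """Distinct return words over the factor u: each begins at an occurrence
    of u and ends just before the next occurrence of u."""
    occ = [i for i in range(len(w) - len(u) + 1) if w.startswith(u, i)]
    return {w[occ[k]:occ[k + 1]] for k in range(len(occ) - 1)}

w = fibonacci_word()
for n in range(1, 10):
    fs = factors(w, n)
    assert len(fs) == n + 1                     # complexity p(n) = n + 1
    zeros = [f.count("0") for f in fs]
    assert max(zeros) - min(zeros) <= 1         # balance <= 1
print(return_words(w, "010"))                   # exactly two return words
```

On a sufficiently long prefix, each small factor yields exactly two distinct return words, consistent with the characterization of Sturmian words by return words proved above.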
## Habilitation :
Habilitation à diriger des recherches, Université Paris VII, 28 November 2001: Mots infinis, suites doubles et pavages'' (Infinite words, double sequences and tilings)
## Thèse :
From 1993 to 1996, PhD thesis at the Institut de Mathématiques de Luminy (Aix-Marseille II), under the supervision of Professor G. RAUZY:
#### Contribution à l'étude des pavages et des surfaces discrétisées'' (Contribution to the study of tilings and discretized surfaces)
The thesis was funded by the CNRS / MRE / DRET from September 1994 to September 1996: Docteur Ingénieur fellowship.
https://a2i2.deakin.edu.au/2021/04/22/phd-opportunity-an-improved-methodology-for-evaluating-the-risk-of-orbital-debris-impacts-on-spacecraft-structures/
PhD opportunity: An improved methodology for evaluating the risk of orbital debris impacts on spacecraft structures News / Shannon Ryan / April 22, 2021 Join a large and diverse group of students, and work with world-class researchers on some of the most exciting AI problems. This project will utilise machine learning and adaptive optimisation to make spacecraft safer from orbital pollution, in collaboration with NASA’s Orbital Debris Program Office (https://orbitaldebris.jsc.nasa.gov/). The ideal candidate will be based at A2I2 in Waurn Ponds and will study under the supervision of A/Prof Santu Rana and Dr Shannon Ryan at A2I2. Background The space environment continues to become increasingly polluted with man-made debris. Small fragments of such debris travel at hypersonic speeds and, as a result, pose a significant risk to the safe operation of spacecraft. Indeed, micrometeoroid and orbital debris impact presents the #1 risk to the International Space Station (and prior to its retirement, the Space Shuttle Orbiter). All manned space missions and some robotic missions require a risk assessment to quantify the risk of debris impact. Part of this assessment uses equations which describe the protective capability of the spacecraft walls – referred to as ballistic limit equations. The current state-of-the-art equations are semi-analytical equations that are limited in accuracy, statistical relevance, and in their application to different types of potential space debris materials and shapes. Scope The goal of this project is to utilise machine learning and adaptive optimisation to develop new methods for evaluating the protective capability of spacecraft walls impacted by micrometeoroid and orbital debris particles. Scholarships Scholarships are available for local students and onshore international students currently staying in Australia.
The 2021 stipend is $28,600 (tax-exempt) plus attractive HDR funding to cover the cost of presenting at major international conferences in the field. The qualified candidate will also receive a tuition-fee waiver and can claim up to $1,500 for relocation expenses. For applicants All applications will go through a rigorous assessment process and shortlisted applicants will be interviewed. Qualification: 4-year undergraduate degree or master's degree in computer science, machine learning, artificial intelligence, electrical engineering, or similar disciplines. Skills: Python (preferred), R, Julia Research experience: Preferred – machine learning model development and applications, including artificial neural networks, symbolic learning, transfer learning, and Bayesian Optimisation. Interested applicants should email applications to: Dr Trang Tran, HDR Coordinator, at trang.tran@deakin.edu.au The application should include: – Resume (or CV); – A2I2 Expression of Interest (access via this link: https://bit.ly/3ssl0AI ); and – other supporting documents (if available): Degree certificates; Academic transcripts; Published papers; Research proposal; and Referral reports.
http://mathhelpforum.com/geometry/128447-proving-law-sines.html
# Math Help - Proving the Law of Sines
1. ## Proving the Law of Sines
Hey Everyone, I have a question here that asks: Prove the Law of Sines for triangle ABC: sinA/a = sinB/b = sinC/c. We have looked at all the side and angle axioms and up to Euclidean geometry; any help here would be greatly appreciated. Thanks guys.
2. Originally Posted by GreenDay14 Hey Everyone, I have a question here that asks: Prove the Law of Sines for triangle ABC: sinA/a = sinB/b = sinC/c. We have looked at all the side and angle axioms and up to Euclidean geometry; any help here would be greatly appreciated. Thanks guys.
Hi GreenDay14. If we do not have the triangle altitude on a particular base, we can simply split the triangle into back-to-back right-angled triangles, using any of the 3 sides as base. Hence, there are 3 ways to write the triangle area: $A=\frac{1}{2}ab\sin C=\frac{1}{2}ac\sin B=\frac{1}{2}bc\sin A$ Hence $b\sin C=c\sin B$, $a\sin C=c\sin A$, $a\sin B=b\sin A$. Hence $\frac{b}{\sin B}=\frac{c}{\sin C}$ or $\frac{\sin B}{b}=\frac{\sin C}{c}$, etc.
3. You can prove the Law of Sines by starting with a triangle consisting of vectors A, B, and C. This means that A+B+C=0. Then take the cross product of both sides with vector A to get the first part of the relation. Do the same thing with vector B to get the second part of the relation.
4. thanks a lot for the help guys.
5. Hello, GreenDay14! Prove the Law of Sines for triangle ABC: $\frac{\sin A}{a} \:=\: \frac{\sin B}{b} \:=\: \frac{\sin C}{c}$ Here is the classic textbook proof.
[Diagram: triangle $ABC$ with the altitude $CD = h$ dropped from $C$ to side $AB$, meeting it at $D$.]
Draw altitude $CD$ to side $AB.$ Call it $h.$ In right triangle $CDA\!:\;\;\sin A \:=\:\frac{h}{b} \quad\Rightarrow\quad h \:=\:b\sin A$ [1] In right triangle $CDB\!:\;\;\sin B \:=\:\frac{h}{a} \quad\Rightarrow\quad h \:=\:a\sin B$ [2] Equate [1] and [2]:
$b\sin A \:=\:a\sin B \quad\Rightarrow\quad \boxed{\frac{\sin A}{a} \:=\:\frac{\sin B}{b}}$ In a similar fashion, we can prove that $\frac{\sin B}{b} \:=\:\frac{\sin C}{c}$.
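The identity is also easy to sanity-check numerically. The sketch below (the vertex coordinates are an arbitrary illustrative choice) recovers the angles of a triangle via the law of cosines and compares the three ratios sin(A)/a, sin(B)/b, sin(C)/c:

```python
import math

# Triangle with three arbitrarily chosen vertices (illustration only).
A_pt, B_pt, C_pt = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Side a is opposite vertex A, side b opposite B, side c opposite C.
a = dist(B_pt, C_pt)
b = dist(A_pt, C_pt)
c = dist(A_pt, B_pt)

def angle(p, q, r):
    """Angle at vertex p of triangle pqr, via the law of cosines."""
    u, v, w = dist(q, r), dist(p, r), dist(p, q)
    return math.acos((v**2 + w**2 - u**2) / (2 * v * w))

A = angle(A_pt, B_pt, C_pt)
B = angle(B_pt, A_pt, C_pt)
C = angle(C_pt, A_pt, B_pt)

ratios = [math.sin(A) / a, math.sin(B) / b, math.sin(C) / c]
print(ratios)  # all three ratios should agree
```

All three printed ratios agree to machine precision, as the proof above guarantees.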
https://liusida.github.io/2017/09/24/use-tensorflow-to-compute-gradient/
# Use Tensorflow to Compute Gradient
In most TensorFlow tutorials, we use minimize(loss) to automatically update the parameters of the model. In fact, minimize() is an integration of two steps: computing gradients, and applying the gradients to update parameters. Let's take a look at an example: $Y = (100 - 3W - B)^2$ What is the gradient with respect to W and B when W=1.0, B=1.0? We can calculate them by hand: let $$N = 100 - 3W - B$$, so that $$Y = N^2$$ $\frac{\partial{Y}}{\partial{W}} = \frac{\partial{Y}}{\partial{N}} \cdot \frac{\partial{N}}{\partial{W}} = 2N \cdot (-3) = 18W + 6B - 600 = -576$ $\frac{\partial{Y}}{\partial{B}} = \frac{\partial{Y}}{\partial{N}} \cdot \frac{\partial{N}}{\partial{B}} = 2N \cdot (-1) = 6W + 2B - 200 = -192$ OK, now let's use TensorFlow to compute that:

import tensorflow as tf
# make an example:
# Y = (100 - W*X - B)^2
X = tf.constant(3.)
W = tf.Variable(1.)
B = tf.Variable(1.)
Y = tf.square(100 - W*X - B)
# the learning rate here is not about gradient computing;
# it only takes effect when the gradients are applied
Ops = tf.train.GradientDescentOptimizer(learning_rate=0.001)
grads_and_vars = Ops.compute_gradients(Y)
# we can modify the gradients here and then:
# Op_update = Ops.apply_gradients(grads_and_vars)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(grads_and_vars))

Run it, and we get: [(-576.0, 1.0), (-192.0, 1.0)] So next time your professor asks you to implement back-propagation for some complex network by yourself, maybe this trick can help you double-check your implementation. Hooray! About Sida Liu I am currently a M.S. graduate student in Morphology, Evolution & Cognition Laboratory at University of Vermont. I am interested in artificial intelligence, artificial life, and artificial environment.
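As a side note (not part of the original post), the analytic values can also be double-checked without TensorFlow: a central finite difference around $(W, B) = (1, 1)$ reproduces the same numbers, and for this quadratic $Y$ the central difference is exact up to rounding.

```python
# Central-difference approximation of dY/dW and dY/dB at (W, B) = (1, 1).

def Y(W, B):
    return (100.0 - 3.0 * W - B) ** 2

h = 1e-5
dW = (Y(1.0 + h, 1.0) - Y(1.0 - h, 1.0)) / (2 * h)
dB = (Y(1.0, 1.0 + h) - Y(1.0, 1.0 - h)) / (2 * h)
print(dW, dB)  # close to -576 and -192, matching TensorFlow's answer
```

This kind of finite-difference check is a standard way to validate any gradient computation, hand-derived or automatic.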
http://www.physics.utah.edu/~detar/phys6720/assignments/assign06.html
# Physics 3730/6720 Assignment 6
For each of the following exercises create the specified files with your answer(s). Submit your homework using the course submit utility as usual.
#### Exercise 1. Monte Carlo simulations of radioactive decay
For this exercise you will write a code called decay.py that simulates the radioactive decay of a bunch of atoms. You will do this using random numbers, so this "experiment" can be called a Monte Carlo simulation. The inputs to the code are (in this order, one per line):
• Number of atoms N.
• Simulation time interval dt.
• Duration of the experiment T.
• Random number seed seed.
The code should start by seeding the random number generator. Then, during a time interval dt, it should "visit" every atom once. (Do this with a for loop with one iteration per surviving atom.) The probability that an atom decays during that time interval is dt/tau, where tau is the mean lifetime (as long as this ratio is much less than one). So for each atom, it should get a random number r on the interval [0,1). If that number is less than dt/tau, then the atom decays. Otherwise, it does not. Do this for all of the atoms during that time interval. As you go along, count the number of decays. Reduce the number of atoms by that number. Then go on to the next time interval and repeat the whole process. Your code should print two numbers per line: the time and the number of atoms that decayed in the time interval at that time. To be precise, print the time at the end of the interval.
#### Exercise 2. Exponential decay
Run your program with the input file ~p6720/examples/radioactive/indecay. Plot the output number of decays vs time (use points), and then, in the same plot, plot the theoretical prediction for the number of decays (use lines), (N*dt/tau) exp(-t/tau). Convert the plot to a pdf file called decay.pdf for submitting. You may use gnuplot or (if you are brave) pyplot for making the plot from your calculation.
If you use gnuplot, you can combine two plots by separating the plot specifications with a comma. That is, if plot A and plot B make two separate plots, then plot A, B combines them into one plot. To plot the function above, write it as a function of x, not t. If you need to convert from Postscript to pdf, use the Unix utility epstopdf. If you use pyplot, save the figure to a pdf file using plt.savefig("figure.pdf").
#### Exercise 3. Poisson distribution
Run your program with the input file ~p6720/examples/radioactive/inpoisson. Check that the number of atoms has not changed very much during this experiment. Create a histogram that shows the number of time intervals in your experiment that had 0 decays, the number that had 1 decay, etc. You may use the course hist utility or try your hand with matplotlib.pyplot.hist. You should arrange so that the histogram bins are centered on the integers. The theoretical expectation is that the histogram is proportional to a Poisson distribution, based on the expected average number of decays in the interval dt, namely d = N*dt/tau. The theoretical expression is N(k) = M d^k e^(-d) / k!, where M = T/dt is the number of time intervals in your experiment and k! is the factorial k(k-1)(k-2)...1. So on the same plot as your histogram, plot this prediction (as points). The easiest way to do this is to write a small Python code that writes out k and N(k) for integer k from 0 to 20 or so. Note that math.factorial(k) evaluates the factorial. Put the output numbers in a file, so you can use it as input to gnuplot. Convert the plot to a pdf file called poisson.pdf for submitting.
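For reference, the simulation loop described in Exercise 1 might be sketched as below. This is only one possible shape: the decay constant tau here is a placeholder (the course input files presumably set it), and the argument handling is simplified to a function call rather than reading one value per line from stdin:

```python
import random

def simulate(n_atoms, dt, total_time, seed, tau=1.0):
    """Monte Carlo decay: each surviving atom decays with probability dt/tau
    in each time interval. Returns a list of (time, decays) pairs, where the
    reported time is the end of the interval."""
    random.seed(seed)
    p = dt / tau  # per-interval decay probability (valid only if p << 1)
    steps = int(round(total_time / dt))
    results = []
    for i in range(1, steps + 1):
        # visit every surviving atom once during this interval
        decays = sum(1 for _ in range(n_atoms) if random.random() < p)
        n_atoms -= decays
        results.append((i * dt, decays))
    return results

if __name__ == "__main__":
    for t, d in simulate(n_atoms=1000, dt=0.01, total_time=0.5, seed=1234):
        print(t, d)
```

The expected number of decays per interval starts near N*dt/tau, which is exactly the d used for the Poisson comparison in Exercise 3.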
https://solvedlib.com/jessica-purchased-a-home-on-january-1-2018-for,385262
2023-03-27 16:17:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6928033828735352, "perplexity": 6379.552134617751}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948673.1/warc/CC-MAIN-20230327154814-20230327184814-00135.warc.gz"}
https://math.stackexchange.com/questions/2559063/counit-unit-adjunction-and-hom-set-adjunction-not-always-identical
I practice identifying adjoint functors on simple categories. Now I came across a case where it seems I have a counit-unit adjunction, but not a hom-set adjunction. Is this possible? For the concrete example: $$F: \mathcal{C} \to \mathcal{D},~~ G: \mathcal{C} \leftarrow \mathcal{D} \\ a, b, b', ~h: a \to b, ~h': b' \to b \in \mathcal{C} \\ Fa, Fb, ~Fh: Fa \to Fb \in \mathcal{D} \\ Fb' = Fb, ~Fh' = 1_{Fb}$$ I can work out $\eta$ and $\varepsilon$: $$\eta_a = 1_a, ~~\eta_b = 1_b, ~~\eta_{b'} = h' \\ \varepsilon_{Fd} = 1_{Fd}, ~~\varepsilon_{Fe} = 1_{Fe}$$ and they fulfill the triangle identities. But I can't work out the bijections for $\Phi_{Fa,b'}: Hom(GFa,b') \cong Hom(Fa,Fb')$. $Hom(Fa,Fb')$ contains $Fh$, but $Hom(GFa,b')$ seems to be the empty set.

In your adjunction, $G$ is the right adjoint and $F$ is the left adjoint, so the bijection of Hom-sets you ask for is incorrect. There would instead be a bijection $\mathrm{Hom}(a,GFb')\cong\mathrm{Hom}(Fa,Fb')$, and indeed there is.
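For reference, here is a sketch of the general hom-set formulation being compared against: for an adjunction $F \dashv G$, a map out of $Fa$ in $\mathcal{D}$ corresponds to a map into $Gd$ in $\mathcal{C}$.

```latex
% Hom-set form of the adjunction F ⊣ G: a natural bijection
\Phi_{a,d}\colon \operatorname{Hom}_{\mathcal{D}}(Fa,\, d)
  \;\cong\; \operatorname{Hom}_{\mathcal{C}}(a,\, Gd),
  \qquad \Phi(g) = Gg \circ \eta_a .
% Taking d = Fb' gives the instance relevant to this question:
\operatorname{Hom}_{\mathcal{D}}(Fa,\, Fb')
  \;\cong\; \operatorname{Hom}_{\mathcal{C}}(a,\, GFb').
```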
2020-02-17 07:55:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9612272381782532, "perplexity": 410.15086255235667}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875141749.3/warc/CC-MAIN-20200217055517-20200217085517-00551.warc.gz"}
https://www.gamedev.net/forums/topic/163357-transparency-been-buggin-me/
Transparency been buggin' me

Hello! Here's a little question about transparency in DX9. I'm imagining it won't take long to answer - I'm sure I've just missed something important in the theory. Imagine you have a particle system that shoots out particles, which are billboarded square-shaped primitives, in random directions. As the particles get closer to their death, they fade - there is a direct correlation between the life they have left and the particle's material's alpha component: Particle.Material.a = Particle.LifeLeft; So you get a bunch of fading particles, becoming more and more translucent as time passes. When you look at some particles floating around in front of the others, you can see the 'back' particle through the front one, which is the desired effect. However, some particles cannot be seen through some other particles. I know this is all down to the render order, and the way that the first particles to be rendered just blend their alpha components with the back buffer. Any particles rendered after that, behind the first particles, cannot be seen through them. This is to do with my simple rendering loop:
for( UINT i=0; i<MAX_NUM_PARTICLES; i++ )
    Particle[i].Render();

I know one solution would be to create a sorted list of all the particles in the system and render them from the back of the scene to the front. My question is: is there a simpler way to, say, look at the scene, including all of its objects and their transparency properties, as a whole, so that translucent objects will always reveal the objects behind them, regardless of the render-order of the objects in the scene? Maybe there's some way of rendering a scene twice, once with all the objects in place, then again with the transparency and translucency, using the first rendering as the source? I'd like to hear if there are any standard ways of doing this that I have missed, or if anyone has created a handy work-around. Thanks! [edited by - GazzyG on June 19, 2003 1:35:10 PM]

Share on other sites
Have you tried to use the alpha test? Probably it can resolve your problem...

Share on other sites
What alpha test do you mean?

Share on other sites
G'day! The only way I know to completely skip sorting is to use additive blending. Since everything is just summed, the order doesn't matter. You would have to redo your particle art and handling to look right with that method, but it'd work. Stay Casual, Ken Drunken Hyena

Share on other sites
http://www.gamedev.net/community/forums/topic.asp?topic_id=162801 or search the word "D3DRS_ALPHAFUNC" on the GameDev forum... there are some interesting posts...

Share on other sites
Thanks for your suggestions, they're all appreciated! The one method I have found of rendering all particles in my system without artifacts appearing from particles being rendered in front of others is to use the renderstate: SetRenderState( D3DRS_ZFUNC, D3DCMP_ALWAYS ); This makes the z-test for rendering always return TRUE, so every translucent/transparent pixel can expose whatever pixels are behind it. This is fine for a simple demo of a particle system. Unfortunately, it's no good if you want to render a whole scene. So I decided to try sorting the particles, using the 'qsort' function like they used in the DirectX 9 Billboard sample program. I altered the code of the callback function that qsort uses for comparing values so that it now compares the distances of the two particles from the camera, not simply their z-values (that would only be of any use for a fixed-position camera, as far as I can make out).
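For readers trying the additive-blending route suggested above, a typical D3D9 render-state setup is sketched below. This is only a sketch: pDevice is an assumed valid IDirect3DDevice9* and the exact states may need tuning for your art.

```cpp
// Sketch only: additive blending for particles under Direct3D 9.
// pDevice is assumed to be a valid IDirect3DDevice9*.
pDevice->SetRenderState( D3DRS_ALPHABLENDENABLE, TRUE );
pDevice->SetRenderState( D3DRS_SRCBLEND,  D3DBLEND_ONE );
pDevice->SetRenderState( D3DRS_DESTBLEND, D3DBLEND_ONE );  // src + dest: summing is order-independent
pDevice->SetRenderState( D3DRS_ZWRITEENABLE, FALSE );      // still z-test against the scene, but don't z-write
```

Because addition commutes, the draw order of the particles stops mattering, at the cost of overlapping particles brightening toward white.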
However, even with this distance-from-camera sorting routine I'm still getting artifacts. The artifacts exist only for the first few frames that each particle is around, making it appear that my sort routine is getting it wrong to start with, but then, if you'll excuse the bad pun, sorts itself out. Here's the code of the callback function that compares two particles:

int _cdecl ParticleSortCB( const VOID* arg1, const VOID* arg2 )
{
    CParticle *p1 = (CParticle*)arg1;
    CParticle *p2 = (CParticle*)arg2;

    FLOAT sx1 = p1->m_vecPos.x - vecCamera.x;
    FLOAT sy1 = p1->m_vecPos.y - vecCamera.y;
    FLOAT sz1 = p1->m_vecPos.z - vecCamera.z;
    DOUBLE d1 = (sx1*sx1) + (sy1*sy1) + (sz1*sz1);

    FLOAT sx2 = p2->m_vecPos.x - vecCamera.x;
    FLOAT sy2 = p2->m_vecPos.y - vecCamera.y;
    FLOAT sz2 = p2->m_vecPos.z - vecCamera.z;
    DOUBLE d2 = (sx2*sx2) + (sy2*sy2) + (sz2*sz2);

    // This was the original, simple comparison that works perfectly
    // (for a fixed position camera):
    // FLOAT d1 = p1->m_vecPos.z;
    // FLOAT d2 = p2->m_vecPos.z;

    if (d1 < d2)
        return +1;
    return -1;
}

It uses a^2+b^2+c^2=d^2 Pythagoras to calculate the distance of the particle from the camera. If the original comparison used in this callback function works perfectly, how come this one doesn't? I've used DOUBLEs for extra precision, just in case that's the problem, and algorithms don't get much more straightforward than Pythag, so can anyone see what's going wrong? Thanks again!

Share on other sites
You want to leave the z read normal (ZFUNC_LESSEQUAL or whatever) but disable ZWriting when doing your particle system. This way the particles are still affected by the scene but don't overwrite each other.

Share on other sites
Yah, but you still end up with particles appearing 'in front' of particles that they shouldn't when you disable the z-testing. This is not too tragic if the particles are small, but when you have expanding smoke trails, it can look awful.
The alpha test stuff only really works for fully transparent areas of the sprite. As soon as you try partial transparency it all falls back to having to depth-sort the scene. There is a CG example of depth-independent transparency in the NVIDIA SDK that you might want to look at. I solved the problem using the x^2+y^2+z^2=d^2 sorting algorithm so didn't spend much time checking out how they do it, but you may find it useful.
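For anyone landing here later: with a compiler that supports C++11 lambdas, the back-to-front sort can be sketched with std::sort instead of qsort. The type and member names below are illustrative, not the poster's actual classes.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Minimal stand-ins for the thread's particle types (names are illustrative).
struct Vec3 { float x, y, z; };
struct Particle { Vec3 pos; };

// Squared distance from the camera -- no sqrt needed, since we only compare.
static float DistSq(const Vec3& p, const Vec3& cam) {
    float dx = p.x - cam.x, dy = p.y - cam.y, dz = p.z - cam.z;
    return dx * dx + dy * dy + dz * dz;
}

// Sort particles back-to-front (farthest first) so alpha blending layers correctly.
void SortBackToFront(std::vector<Particle>& particles, const Vec3& cam) {
    std::sort(particles.begin(), particles.end(),
              [&cam](const Particle& a, const Particle& b) {
                  return DistSq(a.pos, cam) > DistSq(b.pos, cam);
              });
}
```

One qsort pitfall worth checking in code like the callback above: if the array holds pointers to particles rather than the particles themselves, the callback receives pointers to those pointers, and the cast must be *(CParticle**)arg1 - getting that wrong produces exactly the kind of intermittent mis-sorting described in the thread.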
2018-06-25 18:07:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2209978997707367, "perplexity": 2494.131610759207}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267868237.89/warc/CC-MAIN-20180625170045-20180625190045-00315.warc.gz"}
https://docs.betonquest.org/2.0.0-DEV/Documentation/Conversations/
# Conversations

## General Information🔗

Each conversation must define the name of the NPC (some conversations may not be bound to any NPC, so it's important to specify it even though an NPC will have a name) and his initial options.

```yaml
conversations:
  conversationName:
    quester: Name
    first: option1, option2
    stop: 'true'
    final_events: event1, event2
    interceptor: simple
    NPC_options:
      option1:
        text: Some text in default language
        events: event3, event4
        conditions: condition1, !condition2
        pointers: reply1, reply2
      option2:
        text: '&3This ends the conversation'
    player_options:
      reply1:
        text:
          en: Text in English
          pl: Tekst po polsku
        event: event5
        condition: '!condition3'
        pointer: option2
      reply2:
        text: 'Text containing '' character'
```

Note
Configuration files use YAML syntax. Google it if you don't know anything about it. The main rule is that you must use two spaces instead of tabs when going deeper into the hierarchy tree. If you want to write the ' character, you must double it and surround the whole text with another pair of ' characters. When writing true or false it also needs to be surrounded with '. If you want to start a line with the & character, the whole line needs to be surrounded with '. You can check if the file is correct using this tool.

- conversations defines the section as a conversation section.
- ConversationName is the name of the conversation, which you then assign to an NPC in the npcs section. Alternatively, you can combine conversations and ConversationName and use conversations.ConversationName instead.
- quester is the name of the NPC. It should be the same as the name of the NPC this conversation is assigned to for greater immersion, but it's your call.
- first are pointers to the options the NPC will use at the beginning of the conversation. He will choose the first one that meets all conditions. You define these options in the NPC_options branch.
- final_events are events that will fire on conversation end, no matter how it ends (so you can create e.g. guards attacking the player if he tries to run).
You can leave this option out if you don't need any final events.
- stop determines whether the player can move away from the NPC while in this conversation (false) or is stopped every time he tries to (true). If enabled, it will also suspend the conversation when the player quits, and resume it after he joins back in. This way he will have to finish his conversation no matter what. The value needs to be wrapped in ' characters! You can modify the distance at which the conversation is ended / the player is moved back with the max_npc_distance option in the config.yml.
- interceptor optionally sets a chat interceptor for this conversation. Multiple interceptors can be provided in a comma-separated list, with the first valid one used.
- NPC_options is a branch with texts said by the NPC.
- player_options is a branch with options the player can choose.
- text defines what will be displayed on screen. If you don't want to set any events/conditions/pointers for an option, just skip them. Only text is always required.
- conditions are names of conditions which must be met for this option to display, separated by commas.
- events is a list of events that will fire when an option is chosen (either by the NPC or the player), defined similarly to conditions.
- pointer is a list of pointers to the opposite branch (from the NPC branch it will point to options the player can choose from when answering, and from the player branch it will point to different NPC reactions).

When an NPC wants to say something he will check the conditions for the first option (in this case option1). If they are met, he will choose it. Otherwise, he will skip to the next option (note: the conversation ends when there are no options left to choose). After choosing an option the NPC will execute any events defined in it, say it, and then the player will see the options defined in the player_options branch to which the pointers setting points, in this case reply1 and reply2. If the conditions for a player option are not met, the option is simply not displayed, similar to texts from the NPC.
The player will choose the option he wants, and it will point back to another NPC text, which points to the next player options and so on. If there are no possible options for the player or the NPC (either from not meeting any conditions or from not being defined) the conversation ends. If the conversation ends unexpectedly, check the console - it could be an error in the configuration. This can and will be a little confusing, so you should name your options, conditions and events in a way which you will understand in the future. Don't worry though: if you make some mistake in the configuration, the plugin will tell you in the console when testing a conversation.

## Cross-conversation pointers🔗

If you want to create a conversation with multiple NPCs at once or split a huge conversation into smaller, more focused files, you can point to NPC options in other conversations. Just type the pointer as conversation.npc_option. Keep in mind that you can only cross-point to NPC options. This means that you can use those pointers only in the first starting options and in all player options. Using them in NPC options will throw errors.

Warning
This does not work across packages yet.

## Conversation variables🔗

You can use variables in the conversations. They will be resolved and displayed to the player when he starts a conversation. A variable generally looks like this: %type.optional.arguments%. Type is a mandatory argument; it defines what kind of variable it is. Optional arguments depend on the type of the variable, i.e. %npc% does not have any additional arguments, but %player% can also have display (it will look like this: %player.display%). You can find a list of all available variable types in the "Variables List" chapter.

Note
If you use a variable incorrectly (for example trying to get a property of an objective which isn't active for the player, or using %npc% in a message event), the variable will be replaced with an empty string ("").
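For illustration, a hypothetical NPC option using the %player% variable might look like this (the option name greeting is made up for this sketch):

```yaml
NPC_options:
  greeting:
    text: 'Welcome back, %player%! What can I do for you?'
```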
## Translations🔗

As you can see in this example conversation, there are additional messages in other languages. That's because you can translate your conversations into multiple languages. The players will be able to choose their preferred one with the /questlang command. You can translate every NPC/player option and the quester's name. You do it like this:

```yaml
quester:
  en: Innkeeper
  pl: Karczmarz
  de: Gastwirt
```

As said before, the same rule applies to all options and the quester's name. The player can choose only from the languages present in messages.yml, and if there is no translation to this language in the conversation, the plugin will fall back to the default language, as defined in config.yml. If that one is not defined either, there will be an error. You can also translate journal entries, quest cancelers and message events; more about that later.

## Conversation displaying🔗

BetonQuest provides different conversation styles, so called "conversationIO's". They all look different, but the biggest difference is the way the user interacts with them.

menu: A modern conversation style that works with some of Minecraft's native controls. All options can be found in the compatibility section. This is a video of it in action:

simple: A chat output. The user has to write a number into their chat to select an option.

tellraw: Also a chat output. The user can click on the options instead of typing them.

slowtellraw: The same as tellraw but the NPC's text is printed line by line, delayed by 0.5 seconds.

chest: A chest GUI with clickable buttons where the NPC's text and options will be shown as item lore. You can change the option's item to something other than ender pearls by adding a prefix to that option's text. The prefix is a name of the material (like in the items section) inside curly braces, with an optional damage value after a colon. Example of such option text: {diamond_sword}I want to start a quest!.

You can control the colors of conversation elements in the config.yml file, in the conversation_colors section.
Here you must use the names of the colors.

BetonQuest uses the menu conversationIO by default. If ProtocolLib is not installed, the chest IO will be used. You can, however, change the conversationIO that is used by setting the default_conversation_IO option in the config.yml file. In case you want to use a different type of conversation display for just one specific conversation, you can add a conversationIO: <type> setting to the conversation file, at the top of the YAML hierarchy (the same level as quester or first options).

## Chat Interceptors🔗

While engaged in a conversation, it can be distracting when messages from other players or system messages interfere with the dialogue. A chat interceptor provides a method of intercepting those messages and sending them after the conversation has ended. You can specify the default chat interceptor by setting default_interceptor inside the config.yml. Additionally, you can overwrite the default for each conversation by setting the interceptor key inside your conversation file.

The default configuration of BetonQuest sets the default_interceptor option to packet,simple. This means that it first tries to use the packet interceptor. If that fails, it falls back to using the simple interceptor.

BetonQuest adds the following interceptors: simple, packet and none:

- The simple interceptor works with every Spigot server but only supports very basic functionality and may not work with plugins like Herochat.
- The packet interceptor requires the ProtocolLib plugin to be installed. It will work well in any kind of situation.
- The none interceptor is an interceptor that won't intercept messages. That sounds useless until you have a conversation that you want to be excluded from interception. In this case you can just set interceptor: none inside your conversation file.

Conversations also support the concept of inheritance. Any option can include the key extends with a comma-delimited list of other options of the same type.
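The per-conversation overrides described above could be sketched like this. The quester name and option layout are made up for illustration; the keys conversationIO and interceptor are the ones named in the text:

```yaml
# Hypothetical conversation file using per-conversation overrides.
quester: Innkeeper
conversationIO: chest   # use the chest GUI just for this conversation
interceptor: none       # exclude this conversation from chat interception
first: start
NPC_options:
  start:
    text: 'What can I do for you?'
```

Both keys sit at the top of the YAML hierarchy, on the same level as quester and first, and only affect this one conversation; everything else keeps using the defaults from config.yml.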
The first option that does not have any false conditions will have its text, pointers and events merged with the extending option. The extended option may itself extend other options. Infinite loops are detected.

```yaml
NPC_options:
  ## Normal Conversation Start
  start:
    text: 'What can I do for you'
    extends: tonight, today
  tonight: # Always false
    condition: random 0-1
    text: ' tonight?'
```
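The example above references a `today` option that is not shown in the excerpt. Assuming it mirrors `tonight` without the failing condition, it might look like the hypothetical sketch below. Since `tonight`'s condition is always false, `start` would merge with `today` instead, displaying "What can I do for you today?":

```yaml
  # Hypothetical 'today' option completing the excerpt above —
  # no condition, so it is the first extend target that can apply.
  today:
    text: ' today?'
```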
https://meta.miraheze.org/wiki/Community_noticeboard/Archive_22
# Community noticeboard/Archive 22

## Setting up my own wiki

Hello, I'm 54nd60x from English Wikipedia. As seen here, I want to create my own wiki to experiment with the MediaWiki software. I don't have a specific category for my planned wiki as I am still in the beginning of this stage and I want to experiment on my own first, as there is still a lot I don't know about the MediaWiki software. I know that wikis can be created in other places as well, like Fandom, but I don't have the developer rights and some interface pages I cannot edit. I want to be the founder of a wiki and don't know if Special:RequestWiki is most appropriate. Can you help me with this please? Thanks. 54nd60x (talk) 09:04, 16 July 2021 (UTC)

Welcome to Miraheze. I would say you want to check out Test wiki and request administrator rights, as I think that's exactly in the field you're looking for, and when you have a wiki with a clear scope, consider putting in a request for that. As I'm aware, new requests that are strictly for testing are preferred to consider the given link first. Otherwise, if you do have a structured idea, Special:RequestWiki is the way to go. Hope it helps. --Raidarr (talk) 10:26, 16 July 2021 (UTC)

Please note I have procedurally moved this thread from Meta:Administrators' noticeboard to the community noticeboard, where it is now in scope. Thanks. Dmehus (talk) 17:34, 18 July 2021 (UTC)

## Quick question: Is there a Miraheze equivalent to Fantendo?

Is there a Miraheze equivalent to Fantendo? I want to post fan ideas but don't want to join Fandom after hearing the bad things (unless Fantendo is one of the friendlier Fandom communities). Glubbfubb (talk) 23:43, 15 July 2021 (UTC)

There are places to create fanfiction and ideas (see Gazetteer of wikis), but I don't believe there is one exactly the way you mention. Not advertised anyways. One could always be made though.
Otherwise Fantendo could work out alright if it feels decent (or naturally, if I've taken too long with this answer and you've gone there anyways). --Raidarr (talk) 19:31, 19 July 2021 (UTC)

## Broken show/hide buttons on infoboxes

On the wiki I am admin on, we have imported templates which use show/hide drop-down sections/lists, but for some time they do not seem to have worked, and both the button and the information within are simply not displayed at all (example). Is there a fix for this? SpookyBoy (talk) 16:05, 18 July 2021 (UTC)

This honestly looks like a bit too much of a mess (and is probably why there hasn't been much comment), but if it's any support and if you would be able to create a simple custom version, MediaWiki has a built-in hider class that I know works: mw:Manual:Collapsible_elements. In the meantime I can spread the question around, depending on whether you'd like to use the simpler answer or try to make the imported Wikipedia content work. --Raidarr (talk) 19:29, 19 July 2021 (UTC)

## I need help

Hello, I would like some volunteers to help me in Mario Wiki. Greetings. Anthony8IA (talk) 17:51, 18 July 2021 (UTC)

For long term exposure I suggest including the wiki in the Gazetteer of wikis, including a note that the wiki is in Spanish. Hope it helps. --Raidarr (talk) 00:06, 20 July 2021 (UTC)

## Vandal until proved innocent?

I joined about an hour ago and:
• tried to fix an infobox on a public wiki, but was told that my edits need moderation, so I can't see if they have worked;
• read these noticeboards, which implied that Test Wiki is without restrictions - so I tried to test a template, but it has already deleted my module!
• reading the Farmer log, requests to create a wiki to achieve the above are being declined.
Miraheze has received praise, which is why I joined, but if it operates a closed shop I might have made my last posting?
PercyUK (talk) 18:20, 19 July 2021 (UTC)

PercyUK I wasn't sure whether or not that Module you created should stay or remain deleted, so I restored it, and will only let the Consuls decide what to do with it. DarkMatterMan4500 (talk) (contribs) 18:22, 19 July 2021 (UTC)

As it's for testing, I wouldn't see why it's not allowed. Agent Isai Talk to me! 18:25, 19 July 2021 (UTC)

@PercyUK: It seems maybe DarkMatterMan4500 didn't see it was something made for testing. In the future, in your descriptions, add that it's for testing so that it's not mixed up in housekeeping. Additionally, your test wiki is being declined because we have that test wiki for testing. Sorry for the inconvenience and mix up! Agent Isai Talk to me! 18:23, 19 July 2021 (UTC)

This has been largely answered, but I'll put a three-point summary:
1. Note that Miraheze by design lets local administrators set up however they wish; a wiki may be more protected than usual, certain pages may have special protections, or they would like to ensure newcomers are not vandals as a rule, therefore initial edits would need confirmation. Early on you would likely be confirmed, as it would be clear you're not the type they're protecting against. If not, it would be a strange wiki.
2. Looks like a case of DarkMatterMan being overzealous, as I see no reason from the outset why that template should be deleted. An explicit 'test' description should not be needed, as it is a test wiki with open leeway for creation. A template is not an unusual case with extra steps, as strongly implied in the TestWiki testing policy.
3. As much as this probably sucks to hear after being shafted with 2., the public test wiki is the intended place for testing to be done. Apologies for the poor initial impression, I'm afraid it looks like a round of bad luck :/ --Raidarr (talk) 19:22, 19 July 2021 (UTC)

@PercyUK: Sincere apologies for any inconveniences caused by any action you've seen.
I believe when DarkMatterMan4500 deleted your module, they thought it was one of the random spammers who just come to wikis to cause chaos, but trust me, now they know you're not, and they stated that they've already undeleted it since it shouldn't remain deleted. Welcome to Miraheze (if you're new) and I hope DarkMatterMan4500 will fix your abuse filter stuff soon. Ugochimobi (talk) 05:32, 20 July 2021 (UTC)

## Does anyone use the Tabs extension?

Does anyone use the Tabs Extension and can link to a working example? This extension causes me some headache; things do not work as expected, but this might be my fault. I would really appreciate a link to a usage example, thank you very much --Lily talk and I will listen · Lilypond Wiki 19:16, 14 July 2021 (UTC)

FAMEPedia does. 19:20, 14 July 2021 (UTC)

Thank you for your feedback, do you know a specific page which uses this extension? Famepedia has several thousand pages, and I do not know a way to find out where a specific extension is used. Maybe there is, --Lily talk and I will listen · Lilypond Wiki 20:17, 14 July 2021 (UTC)

The main page. 20:19, 14 July 2021 (UTC)

@Bukkit: Thank you for the hint, I was looking for "tabs" (there is a slight difference between "tabs" and "tabber"; they are different extensions), thus I could not find "tabs" on the main page. Tabber seems to work better anyway, so I think I will use tabber for my templates, greetings, --Lily talk and I will listen · Lilypond Wiki 06:46, 15 July 2021 (UTC)

We use both tabs and tabber on this page on All The Tropes. --Robkelk (talk) 14:28, 15 July 2021 (UTC)

Thank you @Robkelk: greetings --Lily talk and I will listen · Lilypond Wiki 15:11, 15 July 2021 (UTC)

We use the dropdown functionality for our page credits, like on the top of this page. Hb1290 (talk) 00:13, 21 July 2021 (UTC)

## Another import inquiry

After launching my next custom namespace, I'm about to do yet another import from ByetHost (my relaunched site's penultimate one).
The original content had page titles structured like [[Snippet:0001]], and on import, they should land as subpages in the new namespace ([[Corpus:Snippets/0001]]). Is it possible to port them that way? --Routhwick (talk) 08:22, 20 July 2021 (UTC)

If I got you correctly, you want to import pages like [[Snippet:0001]] but you want them to come into your wiki as subpages of [[Corpus:Snippets]]. Ugochimobi (talk) 08:35, 20 July 2021 (UTC)

Yep. (Looks like you had double-posting issues, too.) --Routhwick (talk) 08:41, 20 July 2021 (UTC)

Hi Routhwick, if you are going to import from another wiki, there is an option: "import as a subpage of the following page". You need to click that option and select the namespace (Corpus) and the page (Snippets) into which you want to port them. ~ Mazzaz (talk) 08:44, 20 July 2021 (UTC)

As for the double posting issue, it's mainly caused by network connection for me. So yes, you could do what you want by using Special:Import. Ugochimobi (talk) 08:45, 20 July 2021 (UTC)

## Does Skin.css overwrite Common.css, or is it the other way around?

I know this question sounds weird and most likely obvious, but I basically wondered which CSS has the "higher" priority. For example, if I change the settings for ".infobox" in monobook.css, would it end up overwriting the settings from common.css for the MonoBook skin, or would it end up keeping the settings from common.css and ignoring the settings in monobook.css? Just to make it clear, I specifically mean for the MonoBook skin in that case. I am aware that monobook.css would not end up overwriting other skins. Smashidachi (talk) 09:43, 20 July 2021 (UTC)

Okay, nice question you know. First of all, your wiki's Common.css comes out to be the CORE CSS of your wiki; the same applies to your wiki's Common.js. They both stand as a CORE setting for your wiki. Now, elaboratively, if you have styling in your wiki's Common.css, it's basically the styling for your wiki, regardless of the skin the user is using.
If you now put a different styling in a particular Skin.css, it works ONLY FOR THAT PARTICULAR SKIN. So, summarily, the Common.css is a CORE styling for your wiki, while the Skin.css is a styling for that particular skin. I sincerely hope this helps. Ugochimobi (talk) 10:15, 20 July 2021 (UTC)

It does, thank you ^^ Smashidachi (talk) 10:20, 20 July 2021 (UTC)

It's alright. Ugochimobi (talk) 10:41, 20 July 2021 (UTC)

## global.js does not work

Scripts on my global.js page don't work on any of my wikis. How can I resolve this? -Yahya (talk) 16:43, 20 July 2021 (UTC)

@Yahya: What error are you receiving? Agent Isai Talk to me! 17:04, 20 July 2021 (UTC)

## Missing persons and runaways wiki

If you would like to help out, leave a message. Thanks. Herdtoxplain (talk) 00:25, 21 July 2021 (UTC)

## Requested Move 20 Jul '21

⇒⇒You are not allow(ed) to post comments - Commenting is not allowed for this article. With this <comments=voting plus/> code enforced since March 6, that disclaimer still remains unharmed. This is especially noticeable at Awesome Characters Wiki since the comments must be placed at the bottom. It should be changed into the latter disclaimer. Crappy and Incredible Games might have to keep the former wording. Would it be a good choice? Yes, the conversation about this was discussed last April. 2607:FB90:AA2B:EA81:10E6:10B2:CAAB:618A 00:51, 21 July 2021 (UTC)

## A few questions…

• About restoring a bureaucrat: after removing the bureaucrat, user rights can no longer be changed, so I would like to restore that ability - how can I do this? Also, how do I prevent ordinary users from being able to change user rights?
• About adding a comment section: how should I do this? 新居浜ありす (talk) 16:25, 21 July 2021 (UTC)

For #1, you can ask a Steward to restore the bureaucrat. For #2, I'm sure that can be resolved once the bureaucrat is back. For #3, for the comments section, enable Extension:Comment and then use the parser hook tag '<comment></comment>', or enable Extension:CommentStream. Ugochimobi (talk) 20:43, 21 July 2021 (UTC)

## How do I get the wiki text from this API request?
I'm trying to create code to get the wiki text of a page using this code:

```javascript
var api = new mw.Api();
var mwJsApi_wikitext;
var wikitext_reqPrm = {
    'action': 'query',
    'prop': 'revisions',
    'titles': 'MediaWiki_JavaScript_API',
    'rvslots': '*',
    'rvprop': 'content',
    'formatversion': '2'
};
var wgPageName = mw.config.get('wgPageName');
if (wgPageName == 'MediaWiki_JavaScript_API') {
    $(mwJsApiClk).insertAfter('#firstHeading');
    api.get(wikitext_reqPrm).done(function(data){console.log(data.query.pages)});
}
```

I'm using the console to find the variables I need to access the source of the page, but after I get to data.query.pages I get a 0, and if I use data.query.page.0, the code breaks, so I have no idea how I'm supposed to get past the 0. Тишина (talk) 23:11, 21 July 2021 (UTC)

@Тишина: I strongly suggest you ask this on Discord or IRC, where a user who's more knowledgeable on the subject will be able to help you. Agent Isai Talk to me! 23:18, 21 July 2021 (UTC)

@Тишина: Try data.query.pages[0]. Joritochip (talk) 23:35, 21 July 2021 (UTC)

@Joritochip: Thanks! That works. Тишина (talk) 05:32, 22 July 2021 (UTC)

## Article count stuck at 0

On my wiki here, I noticed that the article count is stuck at 0. When the wiki was first created, I enabled the CommentStream extension and saved it, but immediately changed my mind and enabled the Comments extension and saved it instead. Since then, the count has remained at 0, despite the wiki having 3 articles at the time I wrote this. Is this a technical glitch? (Also, do you have a story you want to share? The wiki needs contributors, and it's the perfect place to share it. wink) Tali64³ (talk) 01:04, 22 July 2021 (UTC)

To get an article count, go to additional settings and look for links. Iron Sword 23 (talk) 01:08, 22 July 2021 (UTC)

@Tali64³: This is because none of those pages contain a link to another page (see mw:Manual:Article count for more details).
If you want the article count to include all pages, go to your wiki settings and change "Article Count Method" (under "Links") from "Link" to "Any". — 01:14, 22 July 2021 (UTC)

On the device I edit on, it only works in the mobile view. Does it work in the desktop view for anyone? Tali64³ (talk) 01:20, 22 July 2021 (UTC)

@Tali64³: What only works in the mobile view? — 01:31, 22 July 2021 (UTC)

The article count. Tali64³ (talk) 01:40, 22 July 2021 (UTC)

@Tali64³: Did you change the setting? If you did, try editing the articles and see if that fixes it. — 01:41, 22 July 2021 (UTC)

It works now. Thank you! Tali64³ (talk) 02:23, 22 July 2021 (UTC)

Special:Statistics is not instantly updated. It can take a while because of how it's processed. ~ RhinosF1 - (chat)· acc· c - (on) 22:26, 22 July 2021 (UTC)

## ManageWiki UX Change (SRE Request for Community Input)

## How do you display the logo icon on new Vector?

It's been a while since I have edited on my website, and I noticed there have been a few changes to the way Vector looks when not using legacy. I was wondering how you get the main logo icon to display next to the title, just as it does here on Meta and on MediaWiki, when unchecking legacy in Preferences → Appearance. Thanks. Borderman (Talk) 22:23, 22 July 2021 (UTC)

Modern Vector splits the logo into 3 parts. All (icon, wordmark, etc.) should be available in ManageWiki. ~ RhinosF1 - (chat)· acc· c - (on) 22:25, 22 July 2021 (UTC)

When you have the "Icon ($wgIcon)" set, after disabling Legacy, the logo without a tagline (Icon ($wgIcon)) and the wiki name will display at the top-left. Ugochimobi (talk) 22:35, 22 July 2021 (UTC)

@RhinosF1: I was just looking in there, and whilst they are labelled clearly, I'm not too sure exactly how I'm supposed to fill it in. Am I to fill in the wordmark and the icon, and if so, how? I am somewhat rusty as I haven't been around for a long time.
So, can I just copy and paste the line from Logo ($wgLogo) in ManageWiki into Icon ($wgIcon) for the smaller logo to appear? Borderman (Talk) 22:57, 22 July 2021 (UTC)

Just copy and paste the static image link of the image you want to use as the Logo without Tagline ($wgIcon) and save. By then you should have exactly this. Ugochimobi (talk) 08:36, 23 July 2021 (UTC)

Thanks. I tried this last night and it worked. I just need to create the Wordmark, but I don't have any software to create SVG text images, unless someone can direct me to a free version somewhere. Borderman (Talk) 10:02, 23 July 2021 (UTC)

You can use Inkscape I think. Ugochimobi (talk) 10:22, 23 July 2021 (UTC)

I have looked at it and it looks like there's a lot to learn; could take me some time. Thanks for the heads up though. Borderman (Talk) 22:03, 23 July 2021 (UTC)

## I need a little help

I was trying to upload a file here on Meta just for constructive purposes only; for some reason it won't let me. FranciscoLol2009 (talk) 13:34, 22 July 2021 (UTC)

I'm not sure why you're facing issues uploading here on Meta though, because others can. Ugochimobi (talk) 11:54, 23 July 2021 (UTC)

Unconfirmed users can't. Looking now at logs. ~ RhinosF1 - (chat)· acc· c - (on) 12:32, 23 July 2021 (UTC)

Can you please expand on what you mean by 'constructive purposes'?
~ RhinosF1 - (chat)· acc· c - (on) 12:34, 23 July 2021 (UTC)

I was trying to make a template of "This user is aspie" for my profile, because I'm an aspie, and when I tried to upload a picture related to Asperger's Syndrome, it didn't let me (Asperger's Syndrome is a type of Autism though). FranciscoLol2009 (talk) 14:10, 23 July 2021 (UTC)

You need to be autoconfirmed to upload images; I'm sure you're not, so just wait until 4 days have passed since you created your account. Ugochimobi (talk) 07:26, 24 July 2021 (UTC)

Nvm, it already let me. FranciscoLol2009 (talk) 14:39, 24 July 2021 (UTC)

It's alright. Ugochimobi (talk) 14:41, 24 July 2021 (UTC)

## Recent Vector update broke my skin - any suggestions on unbreaking it?

A wiki I run had its skin break with the recent Vector update. Now some elements pop in late and the buttons are visually broken. Any suggestions on updating it so that they work as intended? Thanks in advance! --Amelia (talk) 17:23, 25 July 2021 (UTC)

Forgot to mention the things that broke:
- The header tabs that display the namespace of the article and its talk page start off as white for a split second and pop in as purple.
- The popup display when hovering over a link is white now instead of purple as it used to be.
- The footer at the bottom of articles that displays when the edit was done + copyright information is much smaller than it used to be.
- The line underneath 'MYTHOLOGY', 'PRODUCTION', etc. on the sidebar used to be a dark purple but is now gray (#AAA or #CCC).
Any idea on how to fix these?

@Amelia: The new Vector update broke many wikis' CSS, unfortunately. What I did to debug it was use a live CSS editor. You can also solicit help on our Discord or IRC, where our talented volunteers can assist you in fixing the CSS. Thanks! Agent Isai Talk to me!
18:20, 25 July 2021 (UTC)

## SRE Security Disclosure

Hello,

A member of Miraheze SRE was made aware, as part of ongoing monitoring of spam reports while investigating Outlook delivery issues, of a number of spam emails received by some Hotmail users that had passed through our mail server. After confirming that our mail server had processed the email, Miraheze started an investigation immediately. We found that the 'guest' LDAP account used to allow public access to Icinga was able to send email from arbitrary addresses, and this had been exploited. We have audited all access available and there is no evidence that any user information or other additional access could have been gained via the account. A number of emails were sent as part of spam campaigns though.

Please know that the security of your information and your privacy are very important to us. In the light of our commitment to transparency, relevant actionables and lessons will be shared with you for community feedback. If you have any questions regarding this incident, please ask below or email tech(at)miraheze.org.

Thanks, RhinosF1 (Miraheze) (talk) Miraheze, Technical Team 20:27, 25 July 2021 (UTC)

## My wiki is not on the web?

My wiki is not on the internet? How can I find my wiki (https://badtoys3d.miraheze.org/wiki/Main_Page)? I searched for it on the web, but no link. Help me! Nitheesh Yevan (talk) 08:40, 26 July 2021 (UTC)

Nitheesh Yevan If you mean that you have used a search engine and have tried to find your wiki there, this is normal. Search engines are not able to immediately crawl very new websites and create entries; this takes time. Maybe this article will help. --DeeM28 (talk) 08:56, 26 July 2021 (UTC)

Hello there! You can get your wiki to the top of the internet by activating the WikiSEO extension. This extension allows your wiki to show up in search engines, which include but are not limited to Google, Bing, Yandex, etc. Read more at the MediaWiki documentation of the extension.
If, after reading the documentation, you have any further questions, please feel at home. Hope this helps. Ugochimobi (talk) 08:57, 26 July 2021 (UTC)

Hello, I would very much like to download the wiki I made. I put so much time into it, and now I need information that is stored on the wiki's pages. It is inactive I think. The name was darkfallnewdawn.miraheze.org Khamset (talk) 10:25, 26 July 2021 (UTC)

Have you tried Special:DataDump on that wiki? ~ RhinosF1 - (chat)· acc· c - (on) 10:31, 26 July 2021 (UTC)

Does not work. Khamset (talk) 15:56, 26 July 2021 (UTC)

Could you elaborate on how it does not work? Agent Isai Talk to me! 16:27, 26 July 2021 (UTC)

Yes, it says the wiki does not exist. It hasn't been edited for over a year now, I think. Khamset (talk) 18:07, 26 July 2021 (UTC)

@Khamset: According to SECTION 1 of the DORMANCY POLICY, a wiki that has been dormant for a minimum of 180 days/6 months will be marked as DELETED, and after 2 more weeks it'll become eligible for PERMANENT DELETION. This implies that your wiki has been permanently deleted, if the one year (1yr) you mentioned is accurate. Courtesy ping to @RhinosF1, Reception123, and Universal Omega: Ugochimobi (talk) 14:04, 27 July 2021 (UTC)

If it was public then Reception will be able to recover it via a phab task. ~ RhinosF1 - (chat)· acc· c - (on) 14:28, 27 July 2021 (UTC)

@Khamset: Per RhinosF1 above, if your wiki was a public one it could still be recovered by filing a Phabricator task requesting it back. If it was private, then it can't be recovered. Ugochimobi (talk) 15:22, 27 July 2021 (UTC)

## Page Forms Questions

How can I configure a form so that a field does NOT display on the page if the user does not click away from the default value (e.g. "None" for a dropdown or radiobutton)? I want that field to not show up on the final page, but I want it to always be viewable under "edit with form." Thank you.
ParentRatings (talk) 07:13, 27 July 2021 (UTC)

Also, is there a way to change what is displayed on the page so that it is different from the form field? For example, if I am editing with a form and have the field Color as a dropdown with red, blue, green: if I select red, then on the page it will display "Color: red". Is there a way to change the output so that it displays "Red Color is Selected" instead of "Color: red" but keep the form the same? Hope the question makes sense. I've searched the PageForms documentation and haven't been able to find anything to help me answer these questions. ParentRatings (talk) 20:15, 27 July 2021 (UTC)

## Miraheze Limited Seeking New Director of Site Reliability Engineering

Miraheze Limited are currently seeking a new Director of Site Reliability Engineering to take charge and lead Miraheze's technical infrastructure, growth and budgeting. Anyone interested in the role can find more information at blog.miraheze.org. For the Board, Owen (talk) 13:51, 27 July 2021 (UTC)

## Missing Wiki - help needed to find it!

Hello all, I have been the editor of soilproject.miraheze.org, but we did not frequently update the wiki. I've just checked in and realised that the site is missing - possibly due to the MetaWiki upgrade. I would like some help please to figure out how to find and reinstall the data for this wiki! Does anyone have a clue how I can do this? My best, Huiying Crumb (talk) 16:52, 14 July 2021 (UTC)

Hi, in accordance with our Dormancy Policy, your wiki was deleted for inactivity. At this point the database has also been deleted, and thus we can't restore your wiki. I apologize for any trouble this may have caused you. MacFan4000 (Talk Contribs) 16:57, 14 July 2021 (UTC)

@Crumb: As MacFan4000 pointed out, your wiki seems to have been deleted pursuant to the Dormancy Policy. After 6 months of no activity, your wiki is closed and deleted. However, it may be possible to restore your wiki. First, request the wiki again.
Secondly, open a Phabricator ticket asking if it would be possible to restore the data on your wiki. Usually it's possible to restore the wiki, however there is no guarantee. Hope this helps! Agent Isai Talk to me! 17:26, 14 July 2021 (UTC)

Internet Archive has archived only one version of the main page; was this a private wiki? So there is no hope to find the site in this archive, --Lily talk and I will listen · Lilypond Wiki 19:19, 14 July 2021 (UTC)

Thanks very much all, I've followed @AgentIsai's suggestions and put in a request - via the wiki request page and Phabricator. Let's see how it goes. Thank you! @Lily, it wasn't a private wiki. We've had the site go dormant before (small team) and managed to successfully retrieve it via the request wiki page. Crumb (talk) 07:18, 28 July 2021 (UTC)

## I cannot log in to my old account at Nonciclopedia

When Nonciclopedia and 伪基百科 were hosted by Wikia, my old account name was Bhenry1990. But after several years, when I try to return to 伪基百科, which is hosted by Miraheze now, I cannot log in to my old account there because it doesn't exist there anymore. I cannot recreate my old account either, because Nonciclopedia, which is hosted by Miraheze, has the same account name. Neither can I log in to my old account at Nonciclopedia, nor can I receive a reset password mail from Nonciclopedia. So I created another account to ask Miraheze and Nonciclopedia: what happened to old Wikia account names after Nonciclopedia moved to Miraheze? Did Nonciclopedia freeze that account, or did someone register that account without making any new contribution? I want to get my old account name back, thanks. Bhenry2021 (talk) 12:46, 27 July 2021 (UTC)

Amazing having you here on Miraheze. Firstly, Wikia is in no way related to Miraheze; the two don't share the same database, the same Board, or the same project at all.
Your Bhenry1990 account on Wikia is in no way the same as the Bhenry1990 on Miraheze; someone else owns the account with the username Bhenry1990 (if the account exists in the first place). So, in a nutshell, you cannot log in to a Wikia account on Miraheze because Miraheze is not Wikia. And lest I forget, the Nonciclopedia and 伪基百科 on Wikia are not the Nonciclopedia and 伪基百科 on Miraheze. Ugochimobi (talk) 15:58, 27 July 2021 (UTC)

Now I have to figure out whether Bhenry1990 was created automatically by previous contributions when Nonciclopedia moved to Miraheze, or someone else registered it. It seems like the edits at unpedia.miraheze.org are forked from アンサイクロペディア, and the edits at zh.gyaanipedia.com are forked from 偽基百科. I have my old account at アンサイクロペディア and 偽基百科. Maybe their forked edits merged into the same account as Nonciclopedia's Bhenry1990. Bhenry2021 (talk) 03:05, 28 July 2021 (UTC)

Yes, another important thing you need to know is that when, for example, a page was imported from the Nonciclopedia on Wikia to the Nonciclopedia on Miraheze, and the "Bhenry1990" user contributed to that page, if the full revision history of the page was imported, the Bhenry1990 user comes along with the page. But that doesn't mean that the account was created, though. But if you try to create an account with that username in question and it says the username is already in use, then someone else has used it here on Miraheze. Hope that helps. ;-) Ugochimobi (talk) 08:17, 28 July 2021 (UTC)

## SRE Statement - User Logouts (T7701)

On 27th July, we logged all users out following a report by a trusted volunteer that an OAuth2 REST endpoint was incorrectly caching some responses, resulting in some users being shown as logged in to the incorrect account if 2 requests were made in quick succession. This did not expose any private user information. We took action to prevent caching of all API and REST pages and logged all users out by resetting tokens in case any other pages were leaked.
We have no evidence that anything outside of OAuth2 was impacted, and access to this endpoint was limited to only one application, which was run by said volunteer. We have already spoken to you directly if you were impacted. Please know that the security of your information and your privacy are very important to us. In the light of our commitment to transparency, relevant actionables and lessons will be shared with you for community feedback. If you have any questions regarding this incident, please ask on the talk page or email tech(at)miraheze.org.

Thanks, RhinosF1 (Miraheze) (talk) Miraheze, Technical Team 08:59, 28 July 2021 (UTC)

Well done, though one question; pardon the ignorance, but is there any potential issue regarding leaks on a third party (i.e., custom) domain in the time of the issue? --Raidarr (talk) 10:26, 28 July 2021 (UTC)

All custom domains are treated in the same way by our backends. ~ RhinosF1 - (chat)· acc· c - (on) 10:28, 28 July 2021 (UTC)

## Putting Infobox on right (Again)

https://companyballfanon.miraheze.org/wiki/Template:Companyball_infobox/doc TheAnimeMapper2020 (talk) 21:11, 29 July 2021 (UTC)

Did you happen to copy this template from a different wiki? If so, you will need to copy their MediaWiki:Common.css page to your wiki. Agent Isai Talk to me! 21:25, 29 July 2021 (UTC)

It doesn't work. TheAnimeMapper2020 (talk) 23:38, 29 July 2021 (UTC)

Try inserting the following into your Common.css:

```css
.infobox {
    float: right;
}
```

Agent Isai Talk to me! 23:48, 29 July 2021 (UTC)

Still doesn't work — Preceding unsigned comment added by TheAnimeMapper2020 (talkcontribs) 07:44, 31 July 2021 (UTC)

Could you clarify how it "still doesn't work"? You asked that the infoboxes be placed on the right; from a quick check on your wiki, they appear to be on the right. Agent Isai Talk to me! 05:37, 1 August 2021 (UTC)

Hi there! Yesterday I decided to move my K-pop wiki from Fandom to Miraheze, but in preparing everything, I ran into a problem.
Fandom is licensed CC-BY-SA, but Miraheze is not, so it would be incompatible with the Wikipedia license, which is CC-BY-SA-3.0 (and which is where I have info from my Fandom wiki). So I was wondering if it is possible to license my wiki on Miraheze as CC-BY-SA to make it compatible. It should be noted that my wiki is in Spanish. I thank you all. Black Mamba (talk) 16:53, 1 August 2021 (UTC)

Hello, to do this you will need to file a Phabricator task requesting it. Thank you! Agent Isai Talk to me! 20:01, 1 August 2021 (UTC)

## MediaWiki:Common.css is not loaded in new wiki

I recently used {{help me}} on my own talk page to seek a solution and was referred here. I can transpose the content here if desired. The question is already entered here: User_talk:Philoserf#MediaWiki:Common.css. Philoserf (talk) 18:36, 2 August 2021 (UTC)

I solved the issue by using a bit of this and a bit of this and some trial and error. —¿philoserf? (talk) 00:10, 4 August 2021 (UTC)

False hope. I solved my own font choice via the "Shared CSS/JavaScript for all wikis" at Meta. The identical "MediaWiki:Common.css" does not get applied when I blank the shared CSS. —¿philoserf? (talk) 00:15, 4 August 2021 (UTC)

Solved. After logging out and clearing my browser cache, I have what I desired. —¿philoserf? (talk) 00:23, 4 August 2021 (UTC)

## Two global interwiki admins completely inactive

I have been thinking about this for the past few days: two of the global interwiki administrators are inactive. AlvaroMolina made his last edit on 26 September 2019 and last log action on 14 January 2019, while 黑底屍 made his last edit on 1 June 2019 and last log action on 26 February 2019. While we don't have any provision to remove the interwiki administrator flag because of inactivity, I guess we can't remove them from the user group. What else can we do to remove the flag because of inactivity?
Maybe we can discuss this on the Community noticeboard or in an RfC to proceed to remove those groups from the inactive users, or perhaps we should do nothing at all! ~ Mazzaz (talk) 08:00, 20 July 2021 (UTC)

Mazzaz, I have looked thoroughly at the policies on interwiki administrators and I couldn't find any specifying that inactive interwiki admins would be revoked of their rights. Maybe it's a lacuna that needs to be filled up, or it's just normal policy. Ugochimobi (talk) 14:43, 24 July 2021 (UTC)

If there isn't a policy, I support making one in the name of access control and standardization. --Raidarr (talk) 11:59, 25 July 2021 (UTC)

@Raidarr: Yes! We need to close that loophole. I wonder why there was never any. Ugochimobi (talk) 17:03, 25 July 2021 (UTC)

Interwiki administrator is a relatively minor role. Historically, it's been granted, in some cases, to users who are bureaucrats across multiple wikis. This is one of those cases; Shaunak Chakraborty could be a similar example. I personally don't see the need to be too strict for such a minor user group, as it's quite possible the user will return to active status soon, and they do speak multiple languages (including Asian languages, of which we're in short supply). Dmehus (talk) 23:00, 25 July 2021 (UTC)

@Dmehus: It's nothing bad though. I got your point. Ugochimobi (talk) 09:22, 26 July 2021 (UTC)

While I agree with Doug that it's a small role, it could potentially be abused should the user become compromised and add malicious links to the global interwiki table. A spambot could then abuse that to add links onto wiki pages without tripping the abuse filter. Agent Isai Talk to me! 16:06, 4 August 2021 (UTC)

## Limit on [itex] tags or just a bug?

For several pages on my wiki, we would need lists of several hundred different equations (up to maybe one to three hundred), but [itex] seems to stop rendering them well before that.
I am not sure whether this is an intentional limit or a bug, or maybe to do with the extension's settings (currently rendering them as PNGs; maybe SVG would work? I can't change that). Here's a picture of the error: https://imgur.com/a/QADHXIj TGR (talk) 11:22, 21 July 2021 (UTC)

@TGReddy: Hi, thanks for your feedback. Did you enable mw:Extension:Math? And were the images uploaded on your wiki, or from c:Wikimedia Commons, or our very own Commons? Ugochimobi (talk) 13:56, 21 July 2021 (UTC)

@Ugochimobi I have enabled the extension, and the image is from my wiki, specifically https://googology.miraheze.org/wiki/List_of_functions. TGR (talk) 18:45, 21 July 2021 (UTC)

So is it by any chance working now? Ugochimobi (talk) 18:51, 21 July 2021 (UTC)

@Ugochimobi I always had the extension enabled; the error still occurs after a lot of [itex] tag use. TGR (talk) 07:56, 22 July 2021 (UTC)

@TGReddy: Over here, the problem no longer persists on that page. Everything works fine now. Ugochimobi (talk) 10:26, 22 July 2021 (UTC)

Look at the bottom of the page; the error is still there. TGR (talk) 14:26, 22 July 2021 (UTC)

Please split the page up. One to three hundred is probably past the rate limits. ~ RhinosF1 - (chat)· acc· c - (on) 11:45, 25 July 2021 (UTC)

The limit is ~120 requests per 10 seconds for Math, so I wouldn't go above 120. ~ RhinosF1 - (chat)· acc· c - (on) 11:47, 25 July 2021 (UTC)

Is there no way to go beyond this limit? FANDOM seems to have no such limit. TGR (talk) 23:37, 26 July 2021 (UTC)

Consider the issue closed: I disabled the Math extension and imported a separate JS file that handles more equations; it is now working fine. TGR (talk) 15:44, 27 July 2021 (UTC)

@TGReddy: Nice! Sounds great. Ugochimobi (talk) 15:47, 27 July 2021 (UTC)

{{Ping|TGReddy}} I have done the same on my wiki after I read this thread. It is working fine (with some exceptions, which I handled with the Math extension). I tried to copy the JavaScript code into my own wiki, but that did not work out.
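The ~120-requests-per-10-seconds figure quoted earlier in this thread suggests a simple client-side workaround when pre-rendering many equations: send requests in batches and wait out the window between them. A minimal sketch; the `render` callback, batch size, and window length are illustrative assumptions, not part of the Math extension.

```python
import time

def render_throttled(equations, render, per_window=100, window_seconds=10):
    """Render all equations, pausing between batches so that no more than
    per_window requests are issued within each window_seconds interval."""
    results = []
    batches = [equations[i:i + per_window]
               for i in range(0, len(equations), per_window)]
    for i, batch in enumerate(batches):
        results.extend(render(eq) for eq in batch)
        if i < len(batches) - 1:      # pause only if more batches remain
            time.sleep(window_seconds)
    return results
```

Keeping `per_window` a little below the quoted limit leaves headroom for any other requests the page makes at the same time.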
Maybe the code is too long or too complex; any ideas why? Thank you, greetings, --Lily talk and I will listen · Lilypond Wiki 20:29, 3 August 2021 (UTC)

Copy and paste what's in https://googology.miraheze.org/wiki/MediaWiki:Common.js into your Common.js; otherwise I'm not sure what's wrong. Make sure to use $$ and $$ to delimit the math you want. TGR (talk) 07:01, 7 August 2021 (UTC)

@TGReddy: Thanks for your tip, I have already done this. What I wanted was to insert the whole JavaScript code, because webpages tend to vanish over time. Greetings, --Lily talk and I will listen · Lilypond Wiki 07:45, 7 August 2021 (UTC)

That won't work; I tried already. The site I have chosen should not vanish for a very long time, however. TGR (talk) 15:17, 7 August 2021 (UTC)

## Background for a wiki

How do I change the background of my wiki? The wiki in question is Wiki for Media of Good, and I am intending a space/cosmic/galaxy/universe background. FreezingTNT (talk) 00:50, 29 July 2021 (UTC)

    #body {
        background-image: url(Insert image url starting with "static.miraheze.org");
    }
    #mw-page-base {
        background: transparent;
    }

Hb1290 (talk) 10:57, 29 July 2021 (UTC)

Hi. So here is what I have so far. However, there's a jarring line of black at the top of the giant square thing with the article contents, and the background thing isn't a transparent version of the color I'm intending. Help? EDIT: I also need help making the respective "Message", "Discussion", "Read", "Edit source" and other tabs both the same color as the background and transparent.
@Hb1290: FreezingTNT (talk) 04:31, 31 July 2021 (UTC)

When you right-click on the element you want to change and choose "inspect" (or something similar; my browser is not in English), you can check the id of the element, e.g. ca-view, and change the background and other properties by editing either MediaWiki:Common.css or MediaWiki:Vector.css, as described by Hb1290. --Lily talk and I will listen · Lilypond Wiki 20:39, 3 August 2021 (UTC)

Hi. Try adding background-size: cover to your background image CSS. Some other tweaks to consider:

    /* Transparent tabs, top of page */
    background: transparent;
    color: #fff;

    /* White text for sidebar, user tools and footer */
    #p-personal a {
        color: #fff !important;
    }
    #mw-panel a {
        color: #fff !important;
    }
    .mw-footer li {
        color: #fff;
    }

Hb1290 (talk) 03:51, 4 August 2021 (UTC)

@Hb1290: So I tried using your stuff, and it doesn't look so good. Hence why I promoted you to administrator on my wiki. FreezingTNT (talk) 01:31, 5 August 2021 (UTC)

Just finished working on it. It's looking quite nice now. Hb1290 (talk) 10:50, 5 August 2021 (UTC)

If I may suggest, you might want to consider the bottom link button text and the sidebar contrast (i.e., where on certain resolutions the brightest part of the background overlaps with the white nav text). Links prove to be rather dark on that background as well. --Raidarr (talk) 13:36, 5 August 2021 (UTC)

@Hb1290: I'm working on a series of infoboxes, starting with this one for movie articles. I'm trying to get this file to show up on my infobox template. Help? FreezingTNT (talk) 19:30, 13 August 2021 (UTC)

That's a simple fix. First of all, you need to reupload the file to the wiki you're working on.
Then, instead of using the static.miraheze URL, just use the file name without the namespace, like this: Image=Lionkingposter.jpg Hb1290 (talk) 23:51, 16 August 2021 (UTC)

I mean as in a way of showing the image without actually having to download the original and put it onto my wiki. FreezingTNT (talk) 22:12, 17 August 2021 (UTC)

@Hb1290: I'm trying to make it so the white-colored words on my wiki are this shade of purple: #d240f1. Help? After all, you changed it from black to white. FreezingTNT (talk) 19:51, 4 September 2021 (UTC)

Easily done. However, I think this may cause some readability issues in places. Hb1290 (talk) 00:38, 5 September 2021 (UTC)

Confirmed that these choices are making for some very poor readability. --Raidarr (talk) 00:44, 5 September 2021 (UTC)

@Hb1290: Then try making them appear bold. FreezingTNT (talk) 01:20, 6 September 2021 (UTC)

Done. Also made some other tweaks for better readability. Hb1290 (talk) 23:29, 6 September 2021 (UTC)

@Hb1290: Also, the top black thing on the user stuff shouldn't be that long and should only reach near the username. FreezingTNT (talk) 02:53, 8 September 2021 (UTC)

Sorted. Looking quite good now :) Hb1290 (talk) 23:44, 8 September 2021 (UTC)

## CentralAuth

I'll be sure to repost this on the Discord later if it doesn't get an answer, but why is it that some deleted wikis don't appear in Special:CentralAuth but others do? Like, I get that it's when you were blocked on a deleted wiki and all, but why only in that situation? – 00:19, 4 August 2021 (UTC)

Wikis are deleted in two stages: marked as deleted in ManageWiki, and actually dropped from the database. Wikis marked as deleted but not dropped will show as 'miraheze.org' until they are dropped. ~ RhinosF1 - (chat)· acc· c - (on) 12:28, 5 August 2021 (UTC)

They will get deleted once an SRE member drops the database of the wikis.
01:13, 4 August 2021 (UTC)

## Infobox glitch

Hello, I have imported the basic infobox for my wiki, but there is one small glitch. It displays <templatestyles src="Module:Navbar/styles.css"></templatestyles> right on the infobox, but it is working otherwise. Here's a page: https://shawsnightmare.miraheze.org/wiki/Former_Scout_Boy Any ideas on why this is happening? Mickey96 (talk) 07:23, 5 August 2021 (UTC)

This happened to me in the past. You'll need to change the content model of the page Module:Navbar/styles.css to sanitized CSS. ~ Mazzaz (talk) 08:05, 5 August 2021 (UTC)

Mickey96, in addition to what I said above, you'll also need to enable the TemplateStyles extension. ~ Mazzaz (talk) 08:25, 5 August 2021 (UTC)

## Server

I have a new server on Miraheze for my wiki, but I want to export over 1k pages from the old one. How do I do this? Miller2007 (talk) 19:00, 5 August 2021 (UTC)

I am a system admin; it's my wiki. I have all the pages from the old Fandom-hosted wiki on file and I want to import them to my new wiki. — Preceding unsigned comment added by Miller2007 (talkcontribs) 19:50, 5 August 2021 (UTC)

How do I upload the file? Miller2007 (talk) 21:09, 5 August 2021 (UTC)

Click here to make a task. If you're asked to log in, log in with your Miraheze account. When you open the page, at the top left corner there should be a picture of a cloud with an arrow pointing up; click that and upload the dump. In the description, specify that you want that dump imported to millerpediawiki. Agent Isai Talk to me! 21:22, 5 August 2021 (UTC)

I have uploaded a logo, but why is it zoomed in? What px should it be? https://millerpedia.miraheze.org/wiki/Millerpedia Miller2007 (talk) 12:26, 6 August 2021 (UTC)

Hello, your logo appears normal to me. This might be because of your local browser cache. Try resetting it and then refresh the page. Agent Isai Talk to me! 15:53, 6 August 2021 (UTC)

That's because I've changed it as advised by Percy below.
Miller2007 (talk) 16:54, 6 August 2021 (UTC)

Ah, I see. Glad it worked! Agent Isai Talk to me! 17:06, 6 August 2021 (UTC)

On one of the Help pages it says the logo has to be no larger than 160x160. As your logo is bigger, it uses the center square. I would use the Rotherham United crest set in a 160px square for the logo, and use the current design as part of your Main Page. PercyUK (talk) 14:55, 6 August 2021 (UTC)

## Host pre-existing wiki

Hello, everyone. I want to start a wiki, but at first as a local wiki (I intend to use MediaWiki in Docker on WSL). After I am done, I would like to upload the wiki to Miraheze. Is it possible to do such a migration? Is it easy to do?

Welcome to Miraheze, we're glad to have you here! It is absolutely possible to import a wiki to Miraheze; all you need to do is export all your pages using your wiki's Special:Export. Doing this will generate a file ending in .xml. If the file is small enough, you can import it on your wiki using Special:Import. Otherwise, you can ask a system administrator to import it for you. Hope this helps. Agent Isai Talk to me! 04:25, 7 August 2021 (UTC)

Thank you, @Agent_Isai. As always, your replies are clear and quick. Thank you for all the assistance. I'm really starting to enjoy editing wikis. 😉 DarkPaladin125 (talk) 04:34, 7 August 2021 (UTC)

## Miraheze and WMF's Foundation Election

One of the 2021 Wikimedia Foundation Board of Trustees candidates mentioned a possible future partnership between the Wikimedia Foundation and Miraheze. Here's an excerpt of his statement:

The boundaries of our projects are tightly guarded. Any new communities can only launch after a long and difficult process, and only if they are similar to existing projects and rooted in similar values. We need a diaspora of projects, such as an ad-free, open wiki farm with stable funding and consistent community safety.
One potential strategy for resilience would be to support external wiki farms such as Miraheze, provide grants, and help improve safety. This allows communities to grow around multiple centers so different people can find their niche, and avoids the fragility and boringness of central control. It seems important to our resilience that a diversity of wiki farms continue to coexist so that we don't end up recreating the monolithic Wikipedia experience.

I've interviewed him directly to ask for further details; you can see the transcript of our conversation here. What's your opinion, as a member of the Miraheze community, regarding this news? Altilunium (talk) 06:45, 7 August 2021 (UTC)

I believe it is great news. Miraheze follows closely in the steps of the WMF: naming global administrators Stewards, naming system administration Site Reliability Engineering, establishing a Trust and Safety department, using Phabricator as the issue tracker; the list goes on. I believe that the WMF providing grants to Miraheze would be very nice and beneficial to the community and would truly help all projects on Miraheze grow. One thing I am wondering, though, is whether the WMF could use these grants to somehow meddle with community decisions. Though this is unlikely, it is a question that has to be asked. Agent Isai Talk to me! 06:55, 7 August 2021 (UTC)

I'd strongly encourage Adam to reach out to us. It's a fantastic idea. ~ RhinosF1 - (chat)· acc· c - (on) 07:47, 7 August 2021 (UTC)

It's a self-awareness rarely seen in large-ish volunteer communities that have found their scale and niche. May it continue, and hopefully avoid the caveat Agent indicates may be possible regarding conflict in shot-calling. --Raidarr (talk) 10:09, 8 August 2021 (UTC)

Hi, thank you for the invitation. It's nice to see all the positive comments so far! Of course, optimism comes easily when someone randomly waves imaginary money around <3. I completely agree with , grants can come at a steep price to independence.
This would be a major consideration when setting up any such grantmaking program, and I'm not sure the problem has ever been solved in a perfect way. My understanding is that the minimum requirements would be:

* an unrestricted grant ("no strings attached"), and
* a stable amount, spread out over several years.

It also matters whether the grantmaking organization is itself independent, how performance is evaluated, and so on. More insidiously, INCITE! Women of Color Against Violence warns of pressure from inside your own organization to conform to a grantmaker's expectations, for example to rewrite your mission statement in terms that will hit desirable, external bullet points.

The Miraheze content policy looks like a mature policy. I see that it's inspired by Wikipedia's but includes clauses that read like scars from growing too quickly, e.g. on forking, hate speech, a non-commercial main purpose, etc. I also see that the policy has been continually evolving.

I've been thinking about the minimum standards we would expect of each organization in order to qualify for our imaginary open wiki farm grant, and it also relates to questions of independence. For example, imagine that we started the grant program with a relaxed standard, but then a future Board decides to enforce CC-0 free-license compatibility, and anyone on Miraheze with "fair use" book covers or text snippets is suddenly in jeopardy, or the whole farm loses its funding. On the other side of that question, I would expect most Wikimedians to feel strongly that we ask for standards closely resembling what you have here, but not necessarily identical.

Relying on one big grant would be dangerous no matter what; maybe cap it below 50% of your total revenue or take other measures to not collapse in the worst-case scenario. Setting up a thing like this would be a negotiation, and hopefully the "cost" of what you give up by participating is known up front and does not change in the future.
One question for now: in addition to cash, what else is lacking that some kind of larger "federation" or grantmaker could help you with? Feel free to go all "genie of the lamp" / "fairy godmother" scale: for example, better software support for wiki farms, interwiki entries, global data protection lawyers, a global conference of wiki farms... —Adamw (talk) 21:56, 8 August 2021 (UTC)

@Adamw: First off, welcome to Miraheze! Indeed, you'll always find great optimism when money is involved, especially when it's about helping a project you truly care about. No one at Miraheze is paid; we're all volunteers who care deeply about Miraheze and its well-being, which is why this opportunity is so exciting.

Now, regarding your last question: more than anything, Miraheze could use more volunteers, especially those who could assist with MediaWiki. Our current SRE is a somewhat small team tasked with managing 4292 wikis. While they are highly competent and knowledgeable about MediaWiki, issues do backlog from time to time because they sometimes don't have any time to spare to dedicate to Miraheze. Apart from that, as I said, they're highly knowledgeable about MediaWiki, so better software support isn't really too needed; interwiki entries are managed by our interwiki administrators, who also know their way around the interwiki table; GDPR, I would say, isn't something that is of worry to Miraheze, as they comply accordingly with requests; and a global conference of wiki farms, well, maybe one day... ;) Agent Isai Talk to me! 23:10, 8 August 2021 (UTC)

I am against endangering Miraheze's community-centered approach in exchange for money. To me this reads as an attempt by Wikimedia to absorb the project. Let them create their own free wiki farm if they feel so inclined. More competition is healthy, not less. They certainly have the resources for it. If they don't want such control, they can just let in on it.
The fact that they want to keep and expand their "values" to others is worrying. NimoStar (talk) 00:09, 9 August 2021 (UTC)

@NimoStar: Not necessarily; the proposal never mentions absorbing it, just providing it with grants, as the WMF appears to do with other projects. Agent Isai Talk to me! 00:19, 9 August 2021 (UTC)

@NimoStar: Hi there. Trust me, some users might call this move the WMF is taking "absorption" or similar, but I do think their aim is nothing more than providing grants to Miraheze, just like to others. I just hope nothing goes wrong at the end of the day; I mean, I'm sure nothing is expected to go wrong. Ugochimobi (talk) 09:56, 9 August 2021 (UTC)

As long as (per Adamw's comments) there is transparency and a clear process to maintain independence and clear expectations, there is little risk here. Ideally this is all framed as the WMF approving of the Miraheze approach and encouraging more of 'the same' as far as attitude goes, even if not every detail is going to line up. It can be done properly. If suggesting funding in a specific direction, I would probably offer it for engineering, or even a professional developer, accountable to the community, to add support. A paid position could be dangerous, but might be something to consider, since that is the strongest push as far as officially needed positions go. Just my 2c. --Raidarr (talk) 14:19, 9 August 2021 (UTC)

Back when I was more involved (I was Miraheze CTO at the time) I wrote the initial content policy. Some of the scars you see come from my own experience with LocalWiki, its predecessor Wiki Spot, and its founding member Davis Wiki. All open wiki farms have the same sorts of problems, it turns out. It wasn't really inspired by Wikipedia much at all, at least not directly. But we had pretty firmly decided that we were essentially noncommercial, though willing to tolerate some quasi-commercial activity, like a parts description for an electronic device for a small store, or a fan-centered wiki.
We've also been quite willing to accept arbitrary licenses in the past, with a strong preference for CC licenses, but there are reasons for exceptions. Financially, we have decent community support, but effectively Miraheze has been kept afloat by my own donations; I'm by far the largest donor, if you only consider money. Certainly our SREs have given far more in terms of the value of their time. But I am a little bit unemployed right now (is the WMF hiring software engineers?), so Miraheze will have to rely on the reserve for now.

I'm not on the board now, but I think that we as a community would be willing to accommodate changes to the content policy if they were presented well. But ultimately it's the whole community's policy, and they get to decide. We've taken a grant from Wikimedia Indonesia before, and I'm sure the current leadership would be happy to participate in a Wikimedia grant program again. -- Labster (talk) 01:42, 12 August 2021 (UTC)

## Problem with the infobox on mobile

Hello, I have a problem with the infobox on mobile (I come from Taiwan). On desktop, the format, width, border, and background color of the infobox display normally, but when I view it on mobile, none of the above display normally as they do on desktop. We have also imported Mobile.css but it is still useless. Hope you guys can help me solve this problem, thanks! Damian Lee (talk) 07:30, 8 August 2021 (UTC)

Damian Lee, hello. Could you please share more details, like a link to the wiki and any page where the issue can be seen? ~ Mazzaz (talk) 08:43, 8 August 2021 (UTC)

https://jinghe.miraheze.org/wiki/%E6%90%9E%E6%90%9E%E9%8E%AE%E5%9C%8B @Mazzaz, this is one of our pages with a country infobox. Damian Lee (talk) 11:37, 8 August 2021 (UTC)

## Can't find VisualEditor tab after setting extension

I'm the admin of NewterraWIKI (newterra.miraheze.org).
This morning when I tried to start editing a page with VisualEditor as always, I couldn't find the "Edit" tab, only "Edit source". After hours of trying to fix the problem and adjusting the settings, I still can't get it back. It's really urgent, please help. :( Ranelai2002 (talk) 18:15, 8 August 2021 (UTC)

Hi there! Did you by chance enable the VisualEditor extension? Ugochimobi (talk) 22:41, 8 August 2021 (UTC)

I have no idea; the extension seems to be enabled, but I can't find the tab for it. Ranelai2002 (talk) 18:26, 10 August 2021 (UTC)

This is a common bug which is under investigation. What I've seen some do to resolve this is clear their cookies and log back in. Others also disable VisualEditor and then re-enable it. Agent Isai Talk to me! 22:59, 8 August 2021 (UTC)

Thanks a lot for your advice. We've already tried this before; however, the problem still exists. :( Ranelai2002 (talk) 18:25, 10 August 2021 (UTC)

## How to embed archive.org video?

How do I embed archive.org video? I only know how to embed YouTube video and upload videos as mp4, but how do I embed archive.org video on sonicthehedgehog? --"zany" (My talk) 15:29, 10 August 2021 (UTC)

What is the archive.org video that you wish to embed? ~ El Komodos Drago (talk to me) 13:21, 13 August 2021 (UTC)

## Files on fair use

Can I upload files under fair use on my wiki? Angelo Pisani (talk) 21:48, 11 August 2021 (UTC)

As long as you are certain that it'd fall under fair use as defined in UK law, then go ahead and do so freely. Agent Isai Talk to me! 22:02, 11 August 2021 (UTC)

The files are album covers of singers. Angelo Pisani (talk) 10:27, 12 August 2021 (UTC)

If they are being used next to information on the album or singer, then I would consider that fair use. (This is not legal advice; I am not a lawyer or otherwise qualified to give legal advice.) ~ El Komodos Drago (talk to me) 13:00, 13 August 2021 (UTC)

Hello.
I believe that greatcharacterswiki should not get deleted, for the following reasons:

1. The wiki's been contributed to a lot: there have been a lot of edits, often unbiased and clean.
2. Even if it was biased, it doesn't have to be completely deleted: I believe it might as well just be split off or made independent of the Reception Wikis.
3. It could be readopted by other users: I believe the wiki can be adopted by another user, because then it could change somewhat.

"zany" (My talk) 18:29, 12 August 2021 (UTC)

Besides getting in contact with an admin, I don't believe there is anything you can do to prevent the wiki from being deleted. Because the wiki is private, I can't even find out who is an admin and who isn't, though I suggest you try Matthew The Guy, MatthewThePrep, Agent Joestar, DarkMatterMan4500, DeciduousWater534, DuchessTheSponge, Freyja Trichet, MarioMario456, and PlantyB0i. ~ El Komodos Drago (talk to me) 13:14, 13 August 2021 (UTC)

This has been eating away at me for a few days now, and I'm just sick and tired of people making requests to re-open wikis that have been closed for good reasons. DarkMatterMan4500 (talk) (contribs) 13:16, 13 August 2021 (UTC)

Were these wikis set to private as part of trying to delete them? Because they were public on 26th April this year. ~ El Komodos Drago (talk to me) 13:42, 13 August 2021 (UTC)

Yes, they have been. Meaning they are un-adoptable. DarkMatterMan4500 (talk) (contribs) 13:46, 13 August 2021 (UTC)

Sorry, but my bureaucrat flag was taken away :( 23:39, 13 August 2021 (UTC)

Plus, we don't need these wikis anymore. They were ultra-biased, and it would have been extremely hard to make these wikis as unbiased as possible. Even Matthew The Guy (the founder) grew to hate those wikis. 23:41, 13 August 2021 (UTC)

If they were not closed over potential Miraheze policy concerns, it would be reasonable to hand them off to interested community members rather than the admins deciding for them for their own reasons.
--Raidarr (talk) 13:50, 13 August 2021 (UTC)

Support ~ El Komodos Drago (talk to me) 14:10, 13 August 2021 (UTC)

If the admins do not put the wiki up for adoption, then the wiki XML can be taken from archive.org and the wiki recreated. ~ El Komodos Drago (talk to me) 16:19, 13 August 2021 (UTC)

Due to the manual closure, this may not be possible per the Dormancy Policy. I don't think there is precedent for a case where a wiki that still has invested community members is closed, set to private, and set to be deleted for good. Thus, any action would be a specific case for the Stewards' noticeboard. --Raidarr (talk) 12:00, 14 August 2021 (UTC)

A small point of clarification here: that is a convention among at least a plurality of Stewards that existed before I became a Steward. There are good reasons for it; that being said, for example, presumably, if there is a discussion locally or on the wiki's companion Discord server among the community members to close the wiki, this prevents other members from usurping that local community process. There are other reasons, too, such as the case where a user contributed nearly all of the edits to the wiki, or it's their personal wiki containing their own information which they've shared publicly. However, you are correct in the latter part of what you say: this may indeed be a special case warranting an exception. Thus, I'm going to ask DuchessTheSponge to link me to a community discussion, whether on-wiki or on a Discord server, where significant contributors to the wiki agreed to its closure. Thanks. Dmehus (talk) 04:21, 16 August 2021 (UTC)

I'll take the liberty for them, given they didn't do so below and I have been observing it locally. My position of 'nothing to be done' changed due to the number of people still apparently interested and a policy quotation, though I stand by my statement that the conversation is best directed to the SN, where the topic is also being discussed.
Here is where the suggestion was made; a handful of users offered support, Duchess made the call, and most discussion aside from here on Meta has taken place there. --Raidarr (talk) 13:21, 16 August 2021 (UTC)

Strongest oppose I had good reason to request their closure. We are not going to revive those wikis. I'm sorry. TigerBlazer (talk) 15:10, 14 August 2021 (UTC)

We don't want you to revive the wikis; we want you to put them up for adoption so other people can. I understand if the admins want to walk away from the wiki, but the sensible thing to do (especially given the significant number of users who have commented on the noticeboards that they wish it to be revived) would be to put it up for adoption. ~ El Komodos Drago (talk to me) 00:55, 16 August 2021 (UTC)

Strongest oppose As the current leader of Qualitipedia, I agree with what TigerBlazer said. DuchessTheSponge (talk) 07:59, 16 August 2021 (UTC)
https://support.bioconductor.org/p/85750/
Question: Reference paper or resource for limma::diffSplice and edgeR::diffSpliceDGE methods?

maltethodberg wrote:

I have recently obtained very promising results using diffSplice and diffSpliceDGE from limma and edgeR, respectively. I was surprised to find that neither method has a cited reference, despite being included in the main limma paper and in both the edgeR and limma user guides. DEXSeq, by comparison, has a separate reference in addition to DESeq/DESeq2. This meant that I had to piece together what the method actually does from the help files for diffSplice/topSplice and diffSpliceDGE/topSpliceDGE.

As far as I can tell, diffSplice works directly from the model fitted in a normal limma/edgeR analysis, unlike DEXSeq, which fits a separate model including the exons (although it still uses the dispersion estimation from DESeq2). As I understand it, the F-statistic tests whether any exon logFC is different from any other, yielding a single gene-level p-value. The exon-level test asks whether each exon has a logFC different from the average across the gene. These exon-level p-values are then corrected using the Simes method, before the lowest p-value among exons is used to represent the gene.

I am unfamiliar with the Simes method for correcting p-values. Conceptually, the approach seems similar to DEXSeq's approach with perGeneQvalue, where p-values are defined first at the exon level and then aggregated at the gene level (asking whether at least one exon-level p-value is significant in the gene). Intuitively, how is aggregating exon-level p-values using the Simes method different from using DEXSeq's perGeneQvalue? Does it possibly relate to the comment that "The exon-level tests are not recommended for formal error rate control" from the help files?

Any insight or pointers to resources are much appreciated.

Tags: limma, edgeR, DEXSeq, diffSplice

Answer (Charity Law):

I'm glad to hear that you are finding promising results using diffSplice and diffSpliceDGE from limma/edgeR. It is true that neither of the methods has a cited reference as yet, but we are hoping to write something up for it in the near future.

It's not clear to me how DEXSeq's perGeneQvalue function works, so I can't comment much on the similarities between that and diffSplice's gene-level tests. Both diffSplice and diffSpliceDGE offer two gene-level tests: one using an F-test and the other using Simes correction. In practice, the main difference between the two is that the F-test is better at picking out genes where evidence of differential splicing comes from several exons (such that there are many exons with logFCs that differ from the rest), whereas the Simes correction is better at picking out genes where fewer exons are affected. For example, if there is a gene where the logFC in only one exon is very different from the rest, then the Simes method would pick this out better than the F-test.

"The exon-level tests are not recommended for formal error rate control" because our tests look at overall changes in exon expression patterns between groups. The expression of individual exons can be affected by the expression of multiple transcripts containing that exon for that gene. Depending on how the transcript-level expression translates into exon-level counts, looking at exon-level tests can be misleading and have inaccurate error rate control. This is why we don't recommend it.

Comment (maltethodberg): Thanks for your reply. I haven't done any systematic investigation, but on the face of it the F-test does indeed seem to mainly find differential splicing in genes with many exons, whereas the Simes correction seems more stable across different numbers of exons. With regard to the exon-level test: I'm actually not using RNA-Seq data, but rather looking at expression from different promoters of the same gene. In that case there is no uncertainty in the quantification of counts, since each transcript uniquely uses a single promoter. Would that mean that the error rate is controlled in this case?

Reply (Gordon Smyth): No, it has nothing to do with uncertainty of quantification. Regardless of the nature of your data, it is not statistically correct to apply FDR control at a lower level (promoters or exons) when the ultimate aim is to interpret results at a higher level (genes). Simply looking for genes in which any exon has a low p-value will tend to select genes with a large number of exons, just by chance. The Simes method has the effect of making the minimum p-value for each gene uniformly distributed, regardless of the number of exons in that gene. See my reply to your other comment.

Answer (Yunshun Chen):

The Simes method was introduced and described in the following paper:

R. J. Simes. An improved Bonferroni procedure for multiple tests of significance. Biometrika, 73(3):751-754, 1986.

Simes' method controls the family-wise error rate in the weak sense, i.e., only when all null hypotheses are true (no exons within the gene are differentially used). I'm not sure how DEXSeq's perGeneQvalue works, though.

Comment (maltethodberg): Interesting. So what motivated the choice of this particular statistic over something more common like the Benjamini-Hochberg correction? Does it have to do with the fact that the p-values can be correlated, as described in the introduction of the paper?

Reply (Gordon Smyth): Actually, the Simes method is just as well known in mathematical statistics circles as Benjamini-Hochberg. In fact, Simes and BH are essentially the same algorithm, just used for slightly different purposes. We use Simes simply because it is the most statistically powerful adjustment method that gives the required result, which is weak FWER control within a gene. We then apply BH to the gene-level Simes-adjusted p-values. If you want to understand this approach, you could look at this paper: although the setting is different, the principles are the same. The article shows that applying the BH algorithm to window-level p-values fails to give correct FDR control at the region level. We solve this problem by using the Simes method to aggregate the window-level p-values for each region, then applying BH to the region-level Simes p-values. This process controls the FDR correctly at the region level, whereas other methods do not. Best wishes, Gordon
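The Simes aggregation discussed in this thread is short enough to sketch directly. The following is a rough Python illustration of the rule (not the actual limma/edgeR implementation; the function name is my own): sort the n exon-level p-values and take the minimum of n·p_(i)/i.

```python
def simes(pvalues):
    """Simes gene-level p-value: min over sorted p-values of n * p_(i) / i.

    Under the global null (no exon differentially used), this value is
    uniformly distributed regardless of the number of exons, so genes with
    many exons are not favoured just by chance.
    """
    n = len(pvalues)
    return min(n * p / (i + 1) for i, p in enumerate(sorted(pvalues)))

# Example: three exon-level p-values for one gene.
# Candidates are 3*0.01/1, 3*0.04/2, 3*0.9/3; the minimum is 0.03.
print(round(simes([0.04, 0.9, 0.01]), 6))  # 0.03
```

Genome-wide, one would compute this once per gene and then apply Benjamini-Hochberg across the gene-level Simes p-values, as described in the replies above.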
http://mathhelpforum.com/number-theory/63153-legendre-symbol-print.html
# Legendre symbol

• Dec 3rd 2008, 03:35 PM bigb

Show that the Legendre symbol of $y^3 \equiv 2 \pmod 7$ is $-1$. I know how to work quadratic problems, but cubic is something I have never come across. Any ideas? I assume you can use the quadratic law of reciprocity, but I have no idea how to go about doing it.

• Dec 3rd 2008, 05:35 PM ThePerfectHacker

It is $+1$, not $-1$. If $y^3 \equiv 2 \pmod 7$ then $\left[ (y/7) \right]^3 = (y^3/7) = (2/7) = 1 \implies (y/7) = 1$.

• Dec 3rd 2008, 09:30 PM bigb

The book says that the cubic congruence $y^3 \equiv 2 \pmod 7$ not being solvable means that 2 is a cubic nonresidue modulo 7 (please verify this), but you're saying it is solvable. I am not sure now.

• Dec 4th 2008, 03:10 AM NonCommAlg

2 is not a cubic residue modulo 7, because if $y^3 \equiv 2 \pmod 7$, then $\gcd(y,7)=1$, and hence $1 \equiv y^6 \equiv 4 \pmod 7$, which is nonsense! (By the way, the theory of cubic residues is a mess compared to the nice and clean theory of quadratic residues!)

• Dec 4th 2008, 06:01 AM ThePerfectHacker

Your problem was not clear. What I wrote above was my interpretation of it, because of your title "Legendre symbol". There is no such thing as a Legendre symbol for cubic residues, but there is an analogue called the cubic residue symbol.

As for the theory of cubic residues being a mess: maybe it is just because $\mathbb{Z}[i]$ looks nicer than $\mathbb{Z}[\omega]$. (Thinking)
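Both claims in the thread are easy to verify by brute force: the nonzero cubes modulo 7 are only 1 and 6, so 2 is a cubic nonresidue, while the Legendre symbol $(2/7)$ is $+1$ since $2 \equiv 3^2 \pmod 7$. A throwaway check (my own sketch, not from the thread):

```python
# Cubic residues mod 7: the set of y^3 mod 7 for nonzero y.
cubes = {y**3 % 7 for y in range(1, 7)}
print(sorted(cubes))  # [1, 6] -- so 2 is a cubic nonresidue mod 7

# Quadratic residues mod 7: 2 = 3^2 mod 7, so the Legendre symbol (2/7) = +1,
# consistent with Euler's criterion 2^((7-1)/2) = 8 = 1 (mod 7).
squares = {y**2 % 7 for y in range(1, 7)}
print(2 in squares, pow(2, 3, 7))  # True 1
```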
https://www.physicsforums.com/threads/contour-integration-with-a-branch-cut.905859/
# Contour integration with a branch cut

1. Feb 28, 2017, mercenarycor

1. The problem statement, all variables and given/known data

Evaluate $\int_{-1}^{1} \frac{dx}{\sqrt{1-x^2}\,(a+bx)}$, with $a > b > 0$.

2. Relevant equations

Cauchy's integral formula: $f(z_0) = \frac{1}{2\pi i}\oint \frac{f(z)}{z-z_0}\,dz$

3. The attempt at a solution

I have absolutely no idea what I'm doing. I'm taking Mathematical Methods, and this chapter is making absolutely no sense to me. I understand enough to tell I'm supposed to do contour integration on this with a branch cut on the singularity, but actually doing it is another thing. Also, I have no idea what to do with the second term in the denominator. If you can explain this to me, I would be grateful; and please, try to dumb it down. I can't even figure out how to find residues. The farthest I got was

$K = \int_{-1}^{1} \frac{dx}{\sqrt{1-x^2}\,(a+bx)} + \lim_{r\to\infty} \int_0^{\pi} \frac{r e^{i\theta}\,d\theta}{\sqrt{1-r^2 e^{2i\theta}}\,(a+b r e^{i\theta})}$

I stopped there, however, because I'm fairly certain I'm embarking on several hours of barking up the wrong tree.

2. Feb 28, 2017, strangerep

Have you had a proper course, or part-course, on contour integration? Without that, this will be very difficult. I could tell you that you're supposed to place a branch cut between $x = \pm 1$ and use a "dog bone contour" (aka "dumbbell contour"), but that won't be much help if you don't know how to do easier contour integrals and compute basic residues. [Google for "dog bone contour" to see what this looks like.]

3. Mar 1, 2017, vela (Staff Emeritus)

The OP is taking a math methods course and is learning about contour integration and complex analysis right now.

4. Mar 1, 2017, vela (Staff Emeritus)

As strangerep suggested, you should probably go back to easier problems first and get a handle on those before tackling this one. You could try finding a similar but simpler example in your textbook (one singularity, single branch cut) and asking questions about that first.
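For reference, the dog-bone contour calculation should reproduce the standard closed form $\int_{-1}^{1} \frac{dx}{\sqrt{1-x^2}\,(a+bx)} = \frac{\pi}{\sqrt{a^2-b^2}}$, which also follows from the real substitution $x = \cos\theta$. A quick numerical sanity check of that target value (my own sketch, not part of the thread):

```python
import math

def integral(a, b, n=200_000):
    """Numerically evaluate int_{-1}^{1} dx / (sqrt(1-x^2) (a+bx)).

    The substitution x = cos(theta) removes the endpoint singularities:
    the integral becomes int_0^pi d(theta) / (a + b*cos(theta)).
    Midpoint rule; assumes a > b > 0.
    """
    h = math.pi / n
    return sum(h / (a + b * math.cos((k + 0.5) * h)) for k in range(n))

a, b = 2.0, 1.0
approx = integral(a, b)
exact = math.pi / math.sqrt(a * a - b * b)
print(approx, exact)  # both are approximately 1.8138, i.e. pi / sqrt(3)
```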
https://www.gamedev.net/forums/topic/508609-outputchracter/
# outputCharacter

## Recommended Posts

Hi, I'm trying to output a value which a variable is carrying. Here is how I'm trying to do it:

void *font = GLUT_BITMAP_9_BY_15;
char s[100];

void outputCharacter(float x, float y, float z, char *string)
{
    int len, i;
    glRasterPos3f(x, y, z);
    len = (int) strlen(string);
    for (i = 0; i < len; i++)
    {
        glutBitmapCharacter(font, string[i]);
    }
}

void RenderScene(void)
{
    sprintf_s(s, "%d", disp);
    outputCharacter(-((gridspace * n) / 2 - (gridspace / 2)) - 42,
                    (gridspace * n) / 2 - (gridspace / 2) - 9, 0, s);

    // Restore transformations
    glPopMatrix();

    // Flush drawing commands
    glutSwapBuffers();
    glutPostRedisplay();
}

Obviously these are just bits and pieces, so you won't be able to compile it. The variable which carries the value that I want is called 'disp'. It is a big number (a pressure value of up to 500,000 Pa), but it just displays a zero at the moment. I double-checked that disp is carrying the correct value of 500 kPa.
https://totallydisconnected.wordpress.com/2021/05/27/comparing-local-langlands-correspondences/
# Comparing local Langlands correspondences At least six people have independently asked me some variant of the question: What are the prospects for showing that the Fargues-Scholze construction of L-parameters is compatible with other constructions of the local Langlands correspondence? In this post I’ll briefly lay out the answer as I see it. For reductive groups $G$ over finite extensions $F/\mathbf{Q}_p$, the situation is complicated, since the status of LLC is complicated. 1. $\mathrm{GL}_n$ and $D_{1/n}^{\times}$. Compatibility for these groups is known and already proved in Fargues-Scholze, and follows from the realization of local Langlands and local Jacquet-Langlands in the cohomology of the Lubin-Tate tower. 2. Any inner form of $\mathrm{GL}_n$. Compatibility here is Theorem 1.0.3 in H.-Kaletha-Weinstein. 3. $\mathrm{SL}_n$ and inner forms. Compatibility should follow from the previous two points, but I guess it’s not completely trivial. Someone should write it down. 4. $\mathrm{GSp}_4$ and $\mathrm{Sp}_4$, and their unique inner forms. Compatibility for these groups has been proved by my student Linus Hamann. His preprint should be available very soon, and I’ll write a detailed blog post about it at that time. The arguments here rely on a number of special features of the group $\mathrm{GSp}_4$. 5. Split $\mathrm{SO}_{2n+1}$ and closely related groups. Partial results here are definitely possible by extending Hamann’s arguments, but it’s not clear to me whether complete results can be expected. I’ll say more about this when I write about Hamann’s paper. 6. Unitary groups. Partial results should be possible by combining some aspects of Hamann’s methods with recent works of Nguyen and Bertoloni-Meli–Nguyen. 7. $\mathrm{GSp}_{2n}$ and $\mathrm{Sp}_{2n}$ and their inner forms, $n>2$. This seems out of reach. 8. Even special orthogonal groups. I’m frankly confused about what’s going on here. Is there even an unambiguous LLC? In any case, this also seems hard. 
9. Exceptional groups. There’s no “other” LLC here. Go home. (OK, for $G_2$ there’s a very cool recent paper of Harris-Khare-Thorne.) 10. General groups splitting over a tame extension, $p$ not too small. Here Kaletha has given a general construction which attaches a supercuspidal L-packet to any supercuspidal L-parameter. Compatibility of this construction with Fargues-Scholze might be approachable by purely local methods, but it seems to require substantial new ideas. An extremely weak partial result – constancy on Kaletha’s packets of the FS map from reps to L-parameters – is probably within reach, using the main results in H.-Kaletha-Weinstein. The key point in many of the above situations is the following. Let’s say a group $G$ is accessible if it admits a geometric conjugacy class of minuscule cocharacters $\mu$ such that 1. The pair $(G,\mu)$ is totally Hodge-Newton reducible in the sense of Chen-Fargues-Shen. 2. Any L-parameter $\varphi: W_F \to \phantom{}^L G$ can be recovered up to isomorphism from the composition $r_{\mu} \circ \varphi$. (In practice one asks for slightly weaker versions of this.) 3. The local Shimura varieties attached to the local Shimura datum $(G,\mu,b)$ (with $b \in B(G,\mu)$ the unique basic element) uniformize the basic locus in a global Shimura variety of abelian type. For groups satisfying this condition, there is hope. Very roughly, condition 2. implies that the FS construction is incarnated in the cohomology of a single local Shimura variety, whose cohomology can also be tightly related to the cohomology of a global Shimura variety using conditions 1. and 3. One then needs to know enough about the cohomology of these global Shimura varieties, namely that it realizes the “other” LLC you care about. Of course, this short outline veils substantial technical difficulties. 
It turns out that $\mathrm{GL}_{n}$, $\mathrm{GU}_n$, $\mathrm{GSp}_4$, and $\mathrm{SO}_{2n+1}$ are all accessible, and this accounts for the definitive results in scenarios 1.-4. above and my optimism in scenarios 5.-6. On the other hand, $\mathrm{GSp}_{2n}$ is not accessible for $n>2$, and neither is $\mathrm{SO}_{2n}$ for $n>3$, and no exceptional groups are accessible. Hence my pessimism in scenarios 7.-9. For reductive groups over finite extensions $F/\mathbf{F}_{p}((t))$, the situation is completely different. Here Genestier-Lafforgue have constructed a local Langlands correspondence for all groups, uniquely characterized by its compatibility with V. Lafforgue’s construction of global Langlands parameters. It is an extremely attractive problem to compare the Genestier-Lafforgue LLC with the Fargues-Scholze LLC. This should absolutely be within reach! After all, both constructions are realized in the cohomology of moduli spaces of shtukas, so the only “real” task should be to physically relate the moduli spaces of shtukas used by GL with those used by FS. This is probably not trivial: the spaces used by FS are local and totally canonical, while those used by GL seem to depend on a globalization and some auxiliary choices in a messy way. Nevertheless, I’d be surprised if this comparison is still an open problem two years from now.
https://www.physicsforums.com/threads/bragg-diffraction.779784/
# Bragg Diffraction

1. Nov 3, 2014, Skeptic.

1. The problem statement, all variables and given/known data

A beam of 3.55 keV X-rays is directed at a crystal. As the angle of incidence is increased from zero, a first strong interference maximum is found when the beam makes an angle of 18.0° with the planes of the crystal. From this I calculated $d = 5.67\times 10^{-10}\,\mathrm{m}$ (the distance between adjacent planes).

(c) Find the longest wavelength for which two interference maxima would be produced.

2. Relevant equations

$2d\sin \theta = n\lambda$

3. The attempt at a solution

I set $n = 2$, since we're looking for the second interference maximum, so then $d\sin\theta = \lambda$. I was confused by where $\theta$ comes from here, but to find the maximum wavelength I thought I would set $\theta = 90°$, though this doesn't make physical sense to me.

2. Nov 3, 2014, BvU

Look up a picture. And wonder what to do with the 18° in the problem description. Randomly picking $\theta=\pi/4$ indeed doesn't make sense :)

3. Nov 3, 2014, Skeptic.

Where does $\theta = \pi/4$ come into it? I chose $\theta = \pi/2$ to maximise $d\sin\theta$. And isn't the 18° only applicable to the earlier part of the question, where you're effectively given $\lambda$ as $\lambda = hc/E$? When the wavelength changes, so will the angle for the first maximum; part (c) tells us we're calculating a new value for $\lambda$. Not sure how well I explained myself there. Sorry if it's incomprehensible!

4. Nov 3, 2014, ehild

It happens when the incident ray falls perpendicularly on the crystal plane and reflects exactly backwards. (In the picture, the rays are shifted for clarity.)

5. Nov 4, 2014, BvU

My mistake. Still, 90 degrees was a random choice, apparently to maximize. But the thing to do is to make it 'fit'.

6. Nov 5, 2014, nasu

90 degrees is the maximum angle that can be measured by a diffractometer. This corresponds to back-reflection (see ehild's drawing), or $2\theta = 180°$. In this case the diffraction condition is $2d = n\lambda$. If you want to have the second order at this maximum angle, then you have $d = \lambda$. The first order will be at $2d\sin\theta = d$, or $\sin\theta = 1/2$. If $\lambda$ is larger than $d$, you get the first order but not the second order: it would require an angle with $\sin\theta > 1$. If $\lambda$ is larger than $2d$ you don't get any peak at all. Of course, in practice the range of angles is less than 0-180°, with restrictions at both ends.
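Putting nasu's condition together with the numbers in the problem gives a quick check (my own sketch; CODATA constants, so the result differs slightly from the problem's rounded $5.67\times 10^{-10}\,\mathrm{m}$): compute $\lambda = hc/E$, apply Bragg's law at the first maximum to get $d$, and then the longest wavelength giving two maxima is $\lambda_{\max} = d$.

```python
import math

h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # J per eV

# Wavelength of the 3.55 keV beam, lambda = h*c / E.
lam = h * c / (3550 * eV)                      # about 3.49e-10 m

# Bragg's law at the first maximum: 2 d sin(18 deg) = 1 * lambda.
d = lam / (2 * math.sin(math.radians(18.0)))   # about 5.65e-10 m

# Longest wavelength giving two maxima: second order at back-reflection,
# 2d = 2*lambda_max, i.e. lambda_max = d.
lam_max = d
print(lam, d, lam_max)
```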
https://math.stackexchange.com/questions/2274388/does-the-existence-of-lim-x-to-0fx-imply-the-existence-of-lim-x-to0
# Does the existence of $\lim_{x\to 0^+}f'(x)$ imply the existence of $\lim_{x\to0^+}f(x)$? Let us say that $f(x)$ is differentiable on $(0,\infty)$. Does the existence of $\lim_{x\to 0^+}f'(x)$ imply the existence of $\lim_{x\to0^+}f(x)$ ? I think it should be true, but I can't seem to prove it. • The fundamental theorem of calculus might have something to say about that. – Arthur May 10 '17 at 8:18 • @Arthur Not really. Don't know about the integrability of $f'$. – MathematicsStudent1122 May 10 '17 at 8:19 • @MathematicsStudent1122 We know the antiderivative of $f'$ exists on $(0,\infty)$ and $f'$ is bounded on, say, $(0,1)$. You're certain we can't leverage something from that? – Arthur May 10 '17 at 8:27 • @Arthur No. See this – MathematicsStudent1122 May 10 '17 at 8:29 • Cool. I think I've seen it before somewhere, but I didn't remember it. – Arthur May 10 '17 at 8:31 Note that since the limit exists, $f'$ is locally bounded near $x=0$, hence $f$ is uniformly continuous on $(0, \delta)$ for some $\delta$, hence $f$ can be continuously extended to $x=0$. This implies the claim. • @ashpool I tried playing around with the mean value theorem. Problem is that $f$ isn't defined at $x=0$. Though, there's probably a simpler solution. – MathematicsStudent1122 May 10 '17 at 8:49 • (I'm assuming the domain is $\mathbb{R}_{>0}$, since that's what the problem suggests) – MathematicsStudent1122 May 10 '17 at 8:59
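The one-line accepted answer compresses a standard argument; written out (my own sketch of the usual mean value theorem reasoning, not from the thread):

```latex
% f' bounded near 0  =>  f uniformly continuous near 0  =>  the limit exists.
% Since \lim_{x\to 0^+} f'(x) exists, pick \delta > 0 and M with
% |f'(x)| \le M for all x \in (0,\delta).
% For any x, y \in (0,\delta), the mean value theorem gives \xi between x and y with
\[
  |f(x) - f(y)| \;=\; |f'(\xi)|\,|x - y| \;\le\; M\,|x - y|,
\]
% so f is M-Lipschitz on (0,\delta), hence uniformly continuous there.
% Uniform continuity makes (f(x_n)) Cauchy for every sequence x_n \to 0^+,
% and any two such sequences interlace to a common limit, so
% \lim_{x\to 0^+} f(x) exists.
```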
https://gecogedi.dimai.unifi.it/paper/323/
# Algebraic surfaces with infinitely many twistor lines

created by altavilla on 11 Mar 2019

Preprint. Inserted: 11 Mar 2019. Last updated: 11 Mar 2019. Year: 2019. ArXiv: 1902.00010

Abstract: We prove that a reduced and irreducible algebraic surface in $\mathbb{CP}^{3}$ containing infinitely many twistor lines cannot have odd degree. Then, exploiting the theory of quaternionic slice regularity and the normalization map of a surface, we give constructive existence results for even degrees.
https://gigaom.com/tag/landisgyr/
- **The U.K.’s smart meter plan kicks into high gear.** By the end of the decade almost all British homes are supposed to get smart, digital, connected utility meters installed, and the plan is seeing some large contracts handed out to vendors like Landis+Gyr and Telefonica.…
- **Landis+Gyr snaps up Ecologic Analytics for grid big data.** Meter giant Landis+Gyr has snapped up smart meter data management company Ecologic Analytics, the companies announced on Tuesday. Ecologic Analytics has been around for over a decade, and Landis+Gyr was already a minority shareholder in the firm. Terms of the deal were not disclosed.…
- **Cleantech investing drops by a third and embraces efficiency.** Cleantech venture investments dropped by a third in the second quarter of this year compared to the same quarter last year, according to the Cleantech Group. However, energy efficiency technologies such as LED lighting and energy management software are still getting some love from private investors.…
- **Why The Future of Greentech Needs to Sound Awesome.** Greentech feels like it's hit a slump recently, but as Saul Griffith recently said: the future of the planet needs to sound awesome for kids, while also being combined with science-based realism. Here are 7 reasons why I've been worried lately, followed by 7 things to still get excited about.…
- **PG&E Replacing 1,600 Broken Smart Meters.** Utility PG&E has hit another snag with its smart meter roll-out. This afternoon, the company announced it will replace 1,600 of its smart meters, which were manufactured by Landis+Gyr, because of a defect that causes the miscalculation of customer energy bills.…
- **Smart Meters: Cheap in the South, but Coasts Will Cost.** A study has found that Southern utilities could pay back the costs of smart meters a lot faster and more easily than East and West coast utilities. How do regional grid differences play out in real life?…
- **Brazil: The Next Hot Smart Meter Market.** Brazil hopes to install 62 million smart meters by 2020, and companies like Silver Spring Networks, Landis+Gyr and Echelon are targeting the market.…
- **Despite Hurdles, Smart Meters Still Ramping Up Fast.** Despite some setbacks, the U.S. smart meter push is continuing at a stimulus-fueled pace. Pike Research reported Monday that more than 90 U.S. utilities have 57.9 million smart meters planned and on the way. That's 7.9 million more than eMeter counted up.…
- Look out, smart meter startups with IPO dreams: the granddaddy of power metering is plugging into the public markets. Elster Group, the German electric, gas and water metering giant founded in 1848, announced this week that it was filing to go public.…
- **Cisco’s Smart Grid Plans For Arch Rock.** There’s been plenty of digital ink spilled about Cisco’s purchase of Arch Rock, and its partnership with Itron. But there are other aspects to Cisco’s big smart meter push that bear some study, including the future of Arch Rock’s data center tech.…
- **Report: PG&E’s Smart Meters Work, but Outreach Lacking.** The official verdict is out: Pacific Gas & Electric’s smart meter technology has been working properly, but its customer service hasn’t. That’s the conclusion of a state-ordered report released Thursday from independent analysts at the Structure Group.…
- **Proprietary Smart Grid Tech Will Reign For Years.** Despite all of the rhetoric around, and government support for, a U.S. standards-based smart grid, proprietary communications technology will reign supreme for years to come, according to a report out from Pike Research.…
- **Nice Meters: Oncor Rolls Out Nearly 250K Smart Meters.** I’ve been staring at this awesome smart meter photo for so long, I’m starting to feel like a frat boy drooling over…
- **Skype Access Makes It Easy To Go Boingo.** Skype, which has a long-standing relationship with Boingo, is making it simpler (and easier) to get access to Boingo hotspots around…
- **Warner Premiere and Bryan Singer Do H+.** Warner Premiere, a digital content production division at Warner Bros., will team up with director Bryan Singer’s (The Usual Suspects) production company,…
- **Landis+Gyr Inks Four-Year Smart Meter Contract With PG&E.** The deal represents about half of the 5 million smart meters PG&E plans to install throughout Northern and Central California.…
- **Eka Scores Smart Meter Partner Landis+Gyr.** The quiet workhorse of home energy management will be the wireless networks that will collect and deliver important energy usage data over…
- **Texas Utility to Spend $690M on Smart Meter Roll Out.** Sometimes we get wrapped up in detailing bleeding-edge innovations that startups are developing to help monitor home energy use. But, first and…
http://tailieu.vn/doc/de-thi-olympic-sinh-vien-the-gioi-nam-1999-287442.html
# 1999 International Mathematical Olympiad for University Students (exam problems)

Shared by: Trần Bá Trung4 | File type: PDF | Pages: 6

## Description

"The 1999 International Mathematical Olympiad for university students" is a great arena for students around the world to meet, exchange ideas, and demonstrate their ability to learn and do mathematics. Since then, these world student olympiads have continuously expanded in scale. The competition is an important event for the student mathematics movement at universities…

## Text content

6th INTERNATIONAL COMPETITION FOR UNIVERSITY STUDENTS IN MATHEMATICS
Keszthely, 1999.
Problems and solutions on the first day

**Problem 1.** a) Show that for any $m \in \mathbb{N}$ there exists a real $m \times m$ matrix $A$ such that $A^3 = A + I$, where $I$ is the $m \times m$ identity matrix. (6 points)

b) Show that $\det A > 0$ for every real $m \times m$ matrix satisfying $A^3 = A + I$. (14 points)

**Solution.** a) The diagonal matrix
$$A = \lambda I = \begin{pmatrix} \lambda & & 0\\ & \ddots & \\ 0 & & \lambda \end{pmatrix}$$
is a solution of the equation $A^3 = A + I$ if and only if $\lambda^3 = \lambda + 1$, because $A^3 - A - I = (\lambda^3 - \lambda - 1)I$. This equation, being cubic, has a real solution.

b) It is easy to check that the polynomial $p(x) = x^3 - x - 1$ has a positive real root $\lambda_1$ (because $p(0) < 0$) and two conjugate complex roots $\lambda_2$ and $\lambda_3$ (one can check the discriminant of the polynomial, which is $\left(\frac{-1}{3}\right)^3 + \left(\frac{-1}{2}\right)^2 = \frac{23}{108} > 0$, or the local minimum and maximum of the polynomial).

If a matrix $A$ satisfies the equation $A^3 = A + I$, then its eigenvalues can only be $\lambda_1$, $\lambda_2$ and $\lambda_3$. The multiplicities of $\lambda_2$ and $\lambda_3$ must be equal, because $A$ is a real matrix and its characteristic polynomial has only real coefficients. Denoting the multiplicity of $\lambda_1$ by $\alpha$ and the common multiplicity of $\lambda_2$ and $\lambda_3$ by $\beta$,
$$\det A = \lambda_1^{\alpha} \lambda_2^{\beta} \lambda_3^{\beta} = \lambda_1^{\alpha}\,(\lambda_2 \lambda_3)^{\beta}.$$
Because $\lambda_1$ and $\lambda_2\lambda_3 = |\lambda_2|^2$ are positive, the product on the right side has only positive factors, so $\det A > 0$.
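Problem 1 can also be checked concretely for $m = 3$: the companion matrix of $p(x) = x^3 - x - 1$ satisfies $A^3 = A + I$ by the Cayley–Hamilton theorem, and its determinant is $1 > 0$. A quick sketch in plain Python (the matrix and helper functions below are ours, not part of the original solution):

```python
# Companion matrix of p(x) = x^3 - x - 1; its characteristic polynomial is p,
# so by Cayley-Hamilton it satisfies A^3 = A + I, illustrating part a) for m = 3.
A = [[0, 0, 1],
     [1, 0, 1],
     [0, 1, 0]]

def matmul(X, Y):
    """Multiply two 3x3 integer matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

I = [[1 if i == j else 0 for j in range(3)] for i in range(3)]

A3 = matmul(A, matmul(A, A))
A_plus_I = [[A[i][j] + I[i][j] for j in range(3)] for i in range(3)]
assert A3 == A_plus_I  # A^3 = A + I

def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

assert det3(A) == 1  # positive, consistent with part b)
```

Here $\det A = 1$ is exactly the (sign-adjusted) constant term of $p$, in line with part b).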
**Problem 2.** Does there exist a bijective map $\pi : \mathbb{N} \to \mathbb{N}$ such that
$$\sum_{n=1}^{\infty} \frac{\pi(n)}{n^2} < \infty\,?$$
(20 points)

**Solution 1.** No. For, let $\pi$ be a permutation of $\mathbb{N}$ and let $N \in \mathbb{N}$. We shall argue that
$$\sum_{n=N+1}^{3N} \frac{\pi(n)}{n^2} > \frac{1}{9}.$$
In fact, of the $2N$ numbers $\pi(N+1), \dots, \pi(3N)$ only $N$ can be $\le N$, so that at least $N$ of them are $> N$. Hence
$$\sum_{n=N+1}^{3N} \frac{\pi(n)}{n^2} \ge \frac{1}{(3N)^2}\sum_{n=N+1}^{3N} \pi(n) > \frac{1}{9N^2}\cdot N \cdot N = \frac{1}{9}.$$

**Solution 2.** Let $\pi$ be a permutation of $\mathbb{N}$. For any $n \in \mathbb{N}$, the numbers $\pi(1), \dots, \pi(n)$ are distinct positive integers, thus $\pi(1) + \dots + \pi(n) \ge 1 + \dots + n = \frac{n(n+1)}{2}$. By this inequality,
$$\sum_{n=1}^{\infty}\frac{\pi(n)}{n^2} \ge \sum_{n=1}^{\infty}\bigl(\pi(1)+\dots+\pi(n)\bigr)\left(\frac{1}{n^2}-\frac{1}{(n+1)^2}\right) \ge \sum_{n=1}^{\infty}\frac{n(n+1)}{2}\cdot\frac{2n+1}{n^2(n+1)^2} = \sum_{n=1}^{\infty}\frac{2n+1}{2n(n+1)} \ge \sum_{n=1}^{\infty}\frac{1}{n+1} = \infty.$$

---

6th INTERNATIONAL COMPETITION FOR UNIVERSITY STUDENTS IN MATHEMATICS
Keszthely, 1999.
Problems and solutions on the second day

**Problem 1.** Suppose that in a not necessarily commutative ring $R$ the square of any element is $0$. Prove that $abc + abc = 0$ for any three elements $a, b, c$. (20 points)

**Solution.** From $0 = (a+b)^2 = a^2 + b^2 + ab + ba = ab + ba$ we have $ab = -(ba)$ for arbitrary $a, b$, which implies
$$abc = a(bc) = -(bc)a = -b(ca) = (ca)b = c(ab) = -(ab)c = -abc.$$

**Problem 2.** We throw a dice (which selects one of the numbers $1, 2, \dots, 6$ with equal probability) $n$ times. What is the probability that the sum of the values is divisible by $5$? (20 points)

**Solution 1.** For all nonnegative integers $n$ and modulo-$5$ residue classes $r$, denote by $p_n^{(r)}$ the probability that after $n$ throws the sum of the values is congruent to $r$ modulo $5$. It is obvious that $p_0^{(0)} = 1$ and $p_0^{(1)} = p_0^{(2)} = p_0^{(3)} = p_0^{(4)} = 0$. Moreover, for any $n > 0$ we have
$$p_n^{(r)} = \sum_{i=1}^{6} \frac{1}{6}\, p_{n-1}^{(r-i)}. \tag{1}$$
From this recursion we can compute the probabilities for small values of $n$ and can conjecture that $p_n^{(r)} = \frac{1}{5} + \frac{4}{5\cdot 6^n}$ if $n \equiv r \pmod 5$ and $p_n^{(r)} = \frac{1}{5} - \frac{1}{5\cdot 6^n}$ otherwise. From (1), this conjecture can be proved by induction.

**Solution 2.** Let $S$ be the set of all sequences of digits $1, \dots, 6$ of length $n$. We create collections of these sequences: let a collection contain the sequences of the form
$$\underbrace{66\dots6}_{k}\,X\,Y_1\dots Y_{n-k-1},$$
where $X \in \{1,2,3,4,5\}$ and $k$ and the digits $Y_1, \dots, Y_{n-k-1}$ are fixed. Then each collection consists of $5$ sequences, and the digit sums of the sequences in a collection form a complete residue system mod $5$.

Except for the sequence $66\dots6$, each sequence is an element of exactly one collection. This means that the number of sequences whose digit sum is divisible by $5$ is $\frac{1}{5}(6^n - 1) + 1$ if $n$ is divisible by $5$, and $\frac{1}{5}(6^n - 1)$ otherwise. Thus, the probability is $\frac{1}{5} + \frac{4}{5\cdot 6^n}$ if $n$ is divisible by $5$, and $\frac{1}{5} - \frac{1}{5\cdot 6^n}$ otherwise.

**Solution 3.** For an arbitrary positive integer $k$ denote by $p_k$ the probability that the sum of the values is $k$. Define the generating function
$$f(x) = \sum_{k=1}^{\infty} p_k x^k = \left(\frac{x + x^2 + x^3 + x^4 + x^5 + x^6}{6}\right)^{n}.$$
(The last equality can easily be proved by induction.)

Our goal is to compute the sum $\sum_{k=1}^{\infty} p_{5k}$. Let $\varepsilon = \cos\frac{2\pi}{5} + i\sin\frac{2\pi}{5}$ be the first 5th root of unity. Then
$$\sum_{k=1}^{\infty} p_{5k} = \frac{f(1) + f(\varepsilon) + f(\varepsilon^2) + f(\varepsilon^3) + f(\varepsilon^4)}{5}.$$
Obviously $f(1) = 1$, and $f(\varepsilon^j) = \frac{\varepsilon^{jn}}{6^n}$ for $j = 1, 2, 3, 4$. This implies that $f(\varepsilon) + f(\varepsilon^2) + f(\varepsilon^3) + f(\varepsilon^4)$ is $\frac{4}{6^n}$ if $n$ is divisible by $5$, and $\frac{-1}{6^n}$ otherwise. Thus, $\sum_{k=1}^{\infty} p_{5k}$ is $\frac{1}{5} + \frac{4}{5\cdot 6^n}$ if $n$ is divisible by $5$, and $\frac{1}{5} - \frac{1}{5\cdot 6^n}$ otherwise.

**Problem 3.** Assume that $x_1, \dots, x_n \ge -1$ and $\sum_{i=1}^{n} x_i^3 = 0$. Prove that $\sum_{i=1}^{n} x_i \le \frac{n}{3}$. (20 points)

**Solution.** The inequality
$$0 \le x^3 - \frac{3}{4}x + \frac{1}{4} = (x+1)\left(x - \frac{1}{2}\right)^2$$
holds for $x \ge -1$. Substituting $x_1, \dots, x_n$ and summing, we obtain
$$0 \le \sum_{i=1}^{n} x_i^3 - \frac{3}{4}\sum_{i=1}^{n} x_i + \frac{n}{4} = 0 - \frac{3}{4}\sum_{i=1}^{n} x_i + \frac{n}{4},$$
so $\sum_{i=1}^{n} x_i \le \frac{n}{3}$.

*Remark.* Equality holds only in the case when $n = 9k$, $k$ of the $x_1, \dots, x_n$ are $-1$, and $8k$ of them are $\frac{1}{2}$.

**Problem 4.** Prove that there exists no function $f : (0, +\infty) \to (0, +\infty)$ such that
$$f^2(x) \ge f(x+y)\bigl(f(x) + y\bigr)$$
for any $x, y > 0$. (20 points)

**Solution.** Assume that such a function exists. The initial inequality can be written in the form
$$f(x) - f(x+y) \ge f(x) - \frac{f^2(x)}{f(x)+y} = \frac{f(x)\,y}{f(x)+y}.$$
Obviously, $f$ is a decreasing function. Fix $x > 0$ and choose $n \in \mathbb{N}$ such that $n f(x+1) \ge 1$. For $k = 0, 1, \dots, n-1$ we have
$$f\left(x + \frac{k}{n}\right) - f\left(x + \frac{k+1}{n}\right) \ge \frac{f\left(x + \frac{k}{n}\right)\cdot\frac{1}{n}}{f\left(x + \frac{k}{n}\right) + \frac{1}{n}} \ge \frac{1}{2n}.$$
The addition of these inequalities gives $f(x+1) \le f(x) - \frac{1}{2}$. From this it follows that $f(x + 2m) \le f(x) - m$ for all $m \in \mathbb{N}$. Taking $m \ge f(x)$, we get a contradiction with the condition $f(x) > 0$.

**Problem 5.** Let $S$ be the set of all words consisting of the letters $x, y, z$, and consider an equivalence relation $\sim$ on $S$ satisfying the following conditions: for arbitrary words $u, v, w \in S$,
(i) $uu \sim u$;
(ii) if $v \sim w$, then $uv \sim uw$ and $vu \sim wu$.
Show that every word in $S$ is equivalent to a word of length at most 8. (20 points)

**Solution.** First we prove the following lemma: if a word $u \in S$ contains at least one of each letter, and $v \in S$ is an arbitrary word, then there exists a word $w \in S$ such that $uvw \sim u$.

If $v$ consists of a single letter, say $x$, write $u$ in the form $u = u_1 x u_2$, and choose $w = u_2$. Then
$$uvw = (u_1 x u_2)\, x\, u_2 = u_1\bigl((x u_2)(x u_2)\bigr) \sim u_1 (x u_2) = u.$$
In the general case, let the letters of $v$ be $a_1, \dots, a_k$. Then one can choose words $w_1, \dots, w_k$ such that $(u a_1) w_1 \sim u$, $(u a_1 a_2) w_2 \sim u a_1$, …, $(u a_1 \dots a_k) w_k \sim u a_1 \dots a_{k-1}$. Then
$$u \sim u a_1 w_1 \sim u a_1 a_2 w_2 w_1 \sim \dots \sim u a_1 \dots a_k w_k \dots w_1 = u v\, (w_k \dots w_1),$$
so $w = w_k \dots w_1$ is a good choice.

Consider now an arbitrary word $a$ which contains more than 8 letters. We shall prove that there is a shorter word which is equivalent to $a$. If $a$ can be written in the form $uvvw$, its length can be reduced by $uvvw \sim uvw$. So we can assume that $a$ does not have this form. Write $a$ in the form $a = bcd$, where $b$ and $d$ are the first and last four letters of $a$, respectively. We prove that $a \sim bd$.

It is easy to check that $b$ and $d$ contain all three letters $x$, $y$ and $z$, otherwise their length could be reduced. By the lemma there is a word $e$ such that $b(cd)e \sim b$, and there is a word $f$ such that $def \sim d$. Then we can write
$$a = bcd \sim bc(def) \sim bc(dedef) = (bcde)(def) \sim b(def) \sim bd.$$

*Remark.* Of course, it is enough to give for every word of length 9 an equivalent shorter word. Assuming that the first letter is $x$ and the second is $y$, it is easy (but a little long) to check that there are 18 words of length 9 which cannot be written in the form $uvvw$. For five of these words there is a short solution, for example
$$xyxzyzx\,zy \sim xy\,xzyz\,xzyzy \sim xyx\,zy\,zy \sim xyxzy.$$
In the remaining 13 cases we need more steps. The general algorithm given by the solution works for these cases as well, but needs very long words. For example, to reduce the length of the word $a = xyzyxzxyz$, we set $b = xyzy$, $c = x$, $d = zxyz$, $e = xyxzxzyxyzy$, $f = zyxyxzyxzxzxzxyxyzxyz$. The longest word in the algorithm was
$$bcdedef = xyzyxzxyzxyxzxzyxyzyzxyzxyxzxzyxyzyzyxyxzyxzxzxzxyxyzxyz,$$
which is of length 46. This is not the shortest way: reducing the length of the word $a$ can be done, for example, by the following steps:
$$xyzyxzx\,yz \sim xyzyxz\,xyzy\,z \sim xyzyxzxy\,zyx\,yzyz \sim xyzyxz\,xyzyxz\,yx\,yz\,yz \sim xy\,zyx\,zyx\,yz \sim xyzyxyz.$$
(The last example is due to Nayden Kambouchev from Sofia University.)

**Problem 6.** Let $A$ be a subset of $\mathbb{Z}_n = \mathbb{Z}/n\mathbb{Z}$ containing at most $\frac{1}{100}\ln n$ elements. Define the $r$th Fourier coefficient of $A$ for $r \in \mathbb{Z}_n$ by
$$f(r) = \sum_{s \in A} \exp\left(\frac{2\pi i}{n}\, s r\right).$$
Prove that there exists an $r \ne 0$ such that $|f(r)| \ge \frac{|A|}{2}$. (20 points)

**Solution.** Let $A = \{a_1, \dots, a_k\}$. Consider the $k$-tuples
$$\left(\exp\frac{2\pi i a_1 t}{n}, \dots, \exp\frac{2\pi i a_k t}{n}\right) \in \mathbb{C}^k, \qquad t = 0, 1, \dots, n-1.$$
Each component lies on the unit circle $|z| = 1$. Split the circle into 6 equal arcs; this induces a decomposition of the $k$-tuples into $6^k$ classes. By the condition $k \le \frac{1}{100}\ln n$ we have $n > 6^k$, so there are two $k$-tuples in the same class, say for $t_1 < t_2$. Set $r = t_2 - t_1$. Then
$$\operatorname{Re}\exp\frac{2\pi i a_j r}{n} = \cos\left(\frac{2\pi a_j t_2}{n} - \frac{2\pi a_j t_1}{n}\right) \ge \cos\frac{\pi}{3} = \frac{1}{2}$$
for all $j$, so
$$|f(r)| \ge \operatorname{Re} f(r) \ge \frac{k}{2}.$$
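The recursion (1) in Solution 1 of Problem 2 of the second day (the dice problem) is easy to run mechanically, and doing so confirms the closed form obtained in all three solutions for small $n$. A sketch in plain Python with exact rational arithmetic (the function name is ours):

```python
from fractions import Fraction

def residue_probs(n):
    """Exact distribution of (sum of n dice) mod 5, via the recursion (1)."""
    p = [Fraction(1), Fraction(0), Fraction(0), Fraction(0), Fraction(0)]  # n = 0
    for _ in range(n):
        # p_n^(r) = (1/6) * sum over die faces i = 1..6 of p_{n-1}^(r-i)
        p = [sum(p[(r - i) % 5] for i in range(1, 7)) / 6 for r in range(5)]
    return p

# Check: P(sum ≡ r) = 1/5 + 4/(5*6^n) if n ≡ r (mod 5), else 1/5 - 1/(5*6^n).
for n in range(1, 13):
    p = residue_probs(n)
    for r in range(5):
        if (n - r) % 5 == 0:
            assert p[r] == Fraction(1, 5) + Fraction(4, 5 * 6**n)
        else:
            assert p[r] == Fraction(1, 5) - Fraction(1, 5 * 6**n)
```

With exact fractions there is no floating-point slack, so the check is a genuine verification of the conjectured formula for these $n$.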
---

*Problems and solutions on the first day (continued)*

**Problem 3.** Suppose that a function $f : \mathbb{R} \to \mathbb{R}$ satisfies the inequality
$$\left|\sum_{k=1}^{n} 3^k \bigl(f(x + ky) - f(x - ky)\bigr)\right| \le 1 \tag{1}$$
for every positive integer $n$ and for all $x, y \in \mathbb{R}$. Prove that $f$ is a constant function. (20 points)

**Solution.** Writing (1) with $n - 1$ instead of $n$,
$$\left|\sum_{k=1}^{n-1} 3^k \bigl(f(x + ky) - f(x - ky)\bigr)\right| \le 1. \tag{2}$$
From the difference of (1) and (2),
$$\bigl|3^n \bigl(f(x + ny) - f(x - ny)\bigr)\bigr| \le 2,$$
which means
$$\bigl|f(x + ny) - f(x - ny)\bigr| \le \frac{2}{3^n}. \tag{3}$$
For arbitrary $u, v \in \mathbb{R}$ and $n \in \mathbb{N}$ one can choose $x$ and $y$ such that $x - ny = u$ and $x + ny = v$, namely $x = \frac{u+v}{2}$ and $y = \frac{v-u}{2n}$. Thus, (3) yields
$$|f(u) - f(v)| \le \frac{2}{3^n}$$
for an arbitrary positive integer $n$. Because $\frac{2}{3^n}$ can be arbitrarily small, this implies $f(u) = f(v)$.

**Problem 4.** Find all strictly monotonic functions $f : (0, +\infty) \to (0, +\infty)$ such that $f\left(\frac{x^2}{f(x)}\right) \equiv x$. (20 points)

**Solution.** Let $g(x) = \frac{x}{f(x)}$, and write $g^n(x)$ for $\bigl(g(x)\bigr)^n$. Since $\frac{x^2}{f(x)} = x g(x)$, the equation says $f(xg(x)) = x$, hence $g(xg(x)) = \frac{xg(x)}{f(xg(x))} = g(x)$. By induction it follows that $g(xg^n(x)) = g(x)$, which can be written in the form
$$f(xg^n(x)) = xg^{n-1}(x), \qquad n \in \mathbb{N}. \tag{2}$$
On the other hand, substituting $x$ by $f(x)$ in $f\left(\frac{x^2}{f(x)}\right) = x$ and using the injectivity of $f$, we get $\frac{f^2(x)}{f(f(x))} = x$, hence $g(f(x)) = g(x)$. Since $f(x) = \frac{x}{g(x)}$, this reads $g\left(\frac{x}{g(x)}\right) = g(x)$, and again by induction
$$f\left(\frac{x}{g^n(x)}\right) = \frac{x}{g^{n+1}(x)}, \qquad n \in \mathbb{N}. \tag{1}$$
Set $f^{(m)} = \underbrace{f \circ f \circ \dots \circ f}_{m\ \text{times}}$. It follows from (1) and (2) that
$$f^{(m)}(xg^n(x)) = xg^{n-m}(x), \qquad m, n \in \mathbb{N}. \tag{3}$$
Now, we shall prove that $g$ is a constant. Assume $g(x_1) < g(x_2)$. Then we may find $n \in \mathbb{N}$ such that $x_1 g^n(x_1) \le x_2 g^n(x_2)$. On the other hand, if $m$ is even then $f^{(m)}$ is strictly increasing, and from (3) it follows that $x_1 g^{n-m}(x_1) \le x_2 g^{n-m}(x_2)$ for all even $m$. But when $n$ is fixed, the opposite inequality holds for all sufficiently large $m$. This contradiction shows that $g$ is a constant, i.e. $f(x) = Cx$, $C > 0$. Conversely, it is easy to check that the functions of this type verify the conditions of the problem.

**Problem 5.** Suppose that $2n$ points of an $n \times n$ grid are marked. Show that for some $k > 1$ one can select $2k$ distinct marked points, say $a_1, \dots, a_{2k}$, such that $a_1$ and $a_2$ are in the same row, $a_2$ and $a_3$ are in the same column, …, $a_{2k-1}$ and $a_{2k}$ are in the same row, and $a_{2k}$ and $a_1$ are in the same column. (20 points)

**Solution 1.** We prove the more general statement that if at least $n + k$ points are marked in an $n \times k$ grid, then the required sequence of marked points can be selected.

If a row or a column contains at most one marked point, delete it. This decreases $n + k$ by 1 and the number of marked points by at most 1, so the condition remains true. Repeat this step until each row and column contains at least two marked points. Note that the condition implies that there are at least two marked points, so the whole set of marked points cannot be deleted.

We define a sequence $b_1, b_2, \dots$ of marked points. Let $b_1$ be an arbitrary marked point. For any positive integer $n$, let $b_{2n}$ be another marked point in the row of $b_{2n-1}$ and $b_{2n+1}$ another marked point in the column of $b_{2n}$. Let $m$ be the first index for which $b_m$ is the same as one of the earlier points, say $b_m = b_l$, $l < m$.

If $m - l$ is even, the line segments $b_l b_{l+1}$, $b_{l+1} b_{l+2}$, …, $b_{m-1} b_l = b_{m-1} b_m$ are alternately horizontal and vertical. So one can choose $2k = m - l$, and $(a_1, \dots, a_{2k}) = (b_l, \dots, b_{m-1})$ or $(a_1, \dots, a_{2k}) = (b_{l+1}, \dots, b_m)$ if $l$ is odd or even, respectively.

If $m - l$ is odd, then the points $b_l = b_m$, $b_{l+1}$ and $b_{m-1}$ are in the same row/column. In this case choose $2k = m - l - 1$. Again, the line segments $b_{l+1} b_{l+2}$, $b_{l+2} b_{l+3}$, …, $b_{m-1} b_{l+1}$ are alternately horizontal and vertical, and one can choose $(a_1, \dots, a_{2k}) = (b_{l+1}, \dots, b_{m-1})$ or $(a_1, \dots, a_{2k}) = (b_{l+2}, \dots, b_{m-1}, b_{l+1})$ if $l$ is even or odd, respectively.

**Solution 2.** Define the graph $G$ in the following way: let the vertices of $G$ be the rows and the columns of the grid. Connect a row $r$ and a column $c$ with an edge if the intersection point of $r$ and $c$ is marked.

The graph $G$ has $2n$ vertices and $2n$ edges. As is well known, if a graph with $N$ vertices contains no cycle, it can have at most $N - 1$ edges. Thus $G$ does contain a cycle. A cycle is an alternating sequence of rows and columns, and the intersection of each neighbouring row and column is a marked point. The required sequence consists of these intersection points.

**Problem 6.** a) For each $1 < p < \infty$ find a constant $c_p < \infty$ for which the following statement holds: if $f : [-1, 1] \to \mathbb{R}$ is a continuously differentiable function satisfying $f(1) > f(-1)$ and $|f'(y)| \le 1$ for all $y \in [-1, 1]$, then there is an $x \in [-1, 1]$ such that $f'(x) > 0$ and
$$|f(y) - f(x)| \le c_p\, f'(x)^{1/p}\, |y - x| \quad \text{for all } y \in [-1, 1].$$
(10 points)

b) Does such a constant also exist for $p = 1$? (10 points)

**Solution.** (a) Let $g(x) = \max(0, f'(x))$. Then
$$0 < \int_{-1}^{1} f'(x)\,dx = \int_{-1}^{1} g(x)\,dx + \int_{-1}^{1} \bigl(f'(x) - g(x)\bigr)\,dx,$$
so we get
$$\int_{-1}^{1} |f'(x)|\,dx = \int_{-1}^{1} g(x)\,dx + \int_{-1}^{1} \bigl(g(x) - f'(x)\bigr)\,dx < 2\int_{-1}^{1} g(x)\,dx.$$
Fix $p$ and $c$ (to be determined at the end). Given any $t > 0$, choose for every $x$ such that $g(x) > t$ an interval $I_x = [x, y]$ such that $|f(y) - f(x)| > c\, g(x)^{1/p} |y - x| > c\, t^{1/p} |I_x|$, and choose disjoint $I_{x_i}$ that cover at least one third of the measure of the set $\{g > t\}$. For $I = \bigcup_i I_i$ we thus have
$$c\, t^{1/p} |I| \le \int_I |f'(x)|\,dx \le \int_{-1}^{1} |f'(x)|\,dx < 2\int_{-1}^{1} g(x)\,dx;$$
so
$$|\{g > t\}| \le 3|I| < \frac{6}{c}\, t^{-1/p} \int_{-1}^{1} g(x)\,dx.$$
Integrating the inequality, we get
$$\int_{-1}^{1} g(x)\,dx = \int_0^1 |\{g > t\}|\,dt < \frac{6}{c}\cdot\frac{p}{p-1}\int_{-1}^{1} g(x)\,dx;$$
this is a contradiction, e.g., for $c_p = \frac{6p}{p-1}$.

(b) No. Given $c > 1$, denote $\alpha = 1/c$ and choose $0 < \varepsilon < 1$ such that $\left(\frac{1+\varepsilon}{2\varepsilon}\right)^{-\alpha} < \frac{1}{4}$. Let $g : [-1, 1] \to [-1, 1]$ be continuous and even, with $g(x) = -1$ for $|x| \le \varepsilon$ and $0 \le g(x) < \alpha\left(\frac{|x|+\varepsilon}{2\varepsilon}\right)^{-\alpha-1}$ for $\varepsilon < |x| \le 1$, chosen such that
$$\int_{\varepsilon}^{1} g(t)\,dt > -\frac{\varepsilon}{2} + \int_{\varepsilon}^{1} \alpha\left(\frac{t+\varepsilon}{2\varepsilon}\right)^{-\alpha-1} dt = -\frac{\varepsilon}{2} + 2\varepsilon\left(1 - \left(\frac{1+\varepsilon}{2\varepsilon}\right)^{-\alpha}\right) > \varepsilon.$$
Let $f(x) = \int_{-1}^{x} g(t)\,dt$. Then
$$f(1) - f(-1) \ge -2\varepsilon + 2\int_{\varepsilon}^{1} g(t)\,dt > 0.$$
If $\varepsilon < x < 1$ and $y = -\varepsilon$, then
$$|f(x) - f(y)| \ge 2\varepsilon - \int_{\varepsilon}^{x} g(t)\,dt \ge 2\varepsilon - \int_{\varepsilon}^{x} \alpha\left(\frac{t+\varepsilon}{2\varepsilon}\right)^{-\alpha-1} dt = 2\varepsilon\left(\frac{x+\varepsilon}{2\varepsilon}\right)^{-\alpha} > \frac{g(x)\,|x-y|}{\alpha} = \frac{f'(x)\,|x-y|}{\alpha},$$
and symmetrically for $-1 < x < -\varepsilon$ and $y = \varepsilon$. Since $\frac{1}{\alpha} = c$ and $c > 1$ was arbitrary, no constant works for $p = 1$.
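Solution 2 of Problem 5 of the first day is effectively an algorithm: build the bipartite graph on rows and columns, find a cycle, and read off the marked points. A sketch in plain Python (all names are ours; the depth-first search explores simple paths, which is cheap at this size):

```python
def find_alternating_points(points, n):
    """Given 2n distinct marked cells (row, col) of an n x n grid, return marked
    points a_1, ..., a_2k (k > 1) such that a_1, a_2 share a row, a_2, a_3 share
    a column, ..., and a_2k, a_1 share a column -- the cycle of Solution 2.
    Vertices 0..n-1 are rows, n..2n-1 are columns; each marked cell is an edge."""
    adj = {v: [] for v in range(2 * n)}
    for (r, c) in points:
        adj[r].append((n + c, (r, c)))
        adj[n + c].append((r, (r, c)))

    def dfs(v, arrived_by, path_vertices, path_cells):
        for (w, cell) in adj[v]:
            if cell == arrived_by:          # don't reuse the edge we came in on
                continue
            if w in path_vertices:          # found a cycle: w -> ... -> v -> w
                i = path_vertices.index(w)
                return path_cells[i:] + [cell]
            found = dfs(w, cell, path_vertices + [w], path_cells + [cell])
            if found:
                return found
        return None

    for start in range(2 * n):
        cells = dfs(start, None, [start], [])
        if cells:
            if cells[0][0] != cells[1][0]:  # rotate so a_1 and a_2 share a row
                cells = cells[1:] + cells[:1]
            return cells
    return None  # cannot happen: 2n edges on 2n vertices force a cycle
```

Because the graph is bipartite and two distinct cells cannot share both a row and a column, any cycle found has even length at least 4, so the returned sequence always has $k > 1$.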
https://crypto.stackexchange.com/questions/33420/why-is-not-there-any-ideal-s-box/33436
# Why isn't there any ideal S-box?

I read about linear cryptanalysis and about S-boxes, and learned that there is no ideal S-box (an "ideal" S-box meaning a random S-box, in other words an S-box whose input/output bit biases are all zero). Then I read about the implementation of S-boxes in this post and understood that S-boxes are implemented with lookup tables. I also read this post (e-sushi's answer) about the non-linearity and randomness aspects of S-boxes:

> And you can trust in the fact that the chances that you’ll manage to create a good s-box randomly by using your current criteria are very minimal… very, very minimal!

But I don't understand: why is there no random (ideal) S-box? Why can't lookup tables implement a random S-box? What is the restriction (limitation)?

• I thought the whole point of DES's choice of S-box was that "a random S-box" is unlikely to have zero "bias of input and output bits". ​ ​ – user991 Mar 5 '16 at 21:24

It is important to understand that although a very large random function will only have linear biases with very low probability, this is simply not true of small random functions. If you choose a small random function, then it is unlikely that you will get one that is suitable for block cipher constructions. In addition, it is not enough to construct an S-box with low linear biases; one must also take into account differential cryptanalysis, and more.

Having said all of the above, this does raise an interesting question: can we even define an ideal S-box? This doesn't necessarily mean we could find one; for example, the AES S-box has an 8-bit input and an 8-bit output. This means that there are $(2^8)^{2^8} = 2^{2048}$ possible functions of this type, and this cannot be enumerated. Nevertheless, I would be interested to know if an "ideal" construction even exists, in terms of our best cryptanalysis knowledge.

• I am a layman and hence not at all sure whether the following paper could have some relevance in the present context: K.
Nyberg, Perfect nonlinear S-boxes, EUROCRYPT '91, pp. 378–386. – Mok-Kong Shen Mar 6 '16 at 13:14
• Indeed, it seems that this is very relevant. By the abstract, they prove that to construct such an S-box the number of input bits must be at least twice the number of output bits. This means that it can be relevant for a Feistel construction but not for an SPN construction. In addition, note that this just covers linear cryptanalysis; there is also differential cryptanalysis and other techniques that must be taken into account. – Yehuda Lindell Mar 6 '16 at 15:41
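The answer's point that a *small* random function almost surely has large linear biases can be checked directly. A sketch in plain Python (the normalization below, the maximum over nonzero mask pairs of |Pr[a·x = b·S(x)] − 1/2|, is one standard convention; the function names are ours):

```python
import random

def linear_bias(sbox, nbits):
    """Largest |Pr[a.x = b.S(x)] - 1/2| over all nonzero mask pairs (a, b)."""
    size = 1 << nbits
    parity = lambda v: bin(v).count("1") & 1
    worst = 0
    for a in range(1, size):            # input mask
        for b in range(1, size):        # output mask
            matches = sum(parity(a & x) == parity(b & sbox[x])
                          for x in range(size))
            worst = max(worst, abs(matches - size // 2))
    return worst / size                 # bias, in [0, 1/2]

random.seed(0)
sbox = list(range(16))
random.shuffle(sbox)                    # a random 4-bit bijective S-box
assert linear_bias(sbox, 4) >= 0.25
```

Every 4-bit permutation has bias at least 1/4 (the classified "optimal" 4-bit S-boxes attain exactly 1/4), so an all-zero-bias "ideal" S-box in the question's sense is impossible at this size; larger random S-boxes still have nonzero, merely smaller, biases.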
https://math.stackexchange.com/questions/1185250/find-e-subseteq-mathbbr-such-that-liminf-delta-to-0-fracme-cap-delt
# Find $E\subseteq\mathbb{R}$ such that $\liminf_{\delta\to 0}\frac{m(E\cap(-\delta,\delta))}{2\delta}=\alpha$

Problem: Let $\alpha$ and $\beta$ be such that $0\leq\alpha\leq\beta\leq 1$. Find a measurable set $E\subseteq\mathbb{R}$ such that
$$\liminf_{\delta\to 0}\frac{m(E\cap(-\delta,\delta))}{2\delta}=\alpha\quad\text{and}\quad\limsup_{\delta\to 0}\frac{m(E\cap(-\delta,\delta))}{2\delta}=\beta,$$
where $m$ is the Lebesgue measure. (Taken from Rudin's Real and Complex Analysis, Chapter 7, Exercise 2.)

I tried many things, but every attempt fails. All I can get is a set $E$ corresponding to $\alpha=0$ and $\beta=1$. When $0<\alpha\leq\beta<1$, I don't know how to construct $E$.

---

I'd actually say that the $0<\alpha<\beta<1$ case is a little easier, in that you can actually realise the limits infinitely often. Let's start at $\delta = 1$, and we'll require that $x\in E$ iff $-x\in E$, so we only have to work with
$$f(\delta) = \frac{m(E\cap (0, \delta))}{\delta}.$$
Furthermore, assume $f(1) = \alpha$.

Now, we want this to be a minimum for $f$ near $0$, so we want some $x_1$ such that $E\cap(x_1, 1) = \emptyset$, and we may as well also try to ensure that $f(x_1) = \beta$. Since $m(E\cap(0, 1)) = \alpha$, we want $f(x_1) = m(E\cap(0, x_1))/x_1 = \alpha/x_1 = \beta$, and clearly $x_1 = \frac{\alpha}{\beta}$ satisfies that equation.

Next we want $f(x_1)$ to be a maximum, and the easiest way to ensure that is with a $y_1$ such that $(y_1, x_1) \subseteq E$; we can also require that $f(y_1) = \alpha$. Now,
$$f(y_1) = \frac{m(E\cap (0, y_1))}{y_1} = \frac{\alpha - \alpha/\beta + y_1}{y_1} = \alpha,$$
and therefore
$$y_1 = \frac{\alpha(1-\beta)}{\beta(1-\alpha)}.$$
Then we want an interval such that $(x_2, y_1)\cap E = \emptyset$ and $f(x_2) = \beta$. I claim that $x_2 = \frac{\alpha}{\beta} y_1$ has the right value.

More generally, if we let
$$E\cap(0, 1) = \bigcup_{n\in \mathbb{N}} \left( \frac{\alpha^n(1-\beta)^n}{\beta^n(1-\alpha)^n}, \frac{\alpha^n(1-\beta)^{n-1}}{\beta^n(1-\alpha)^{n-1}} \right),$$
then $f(x)$ will have a minimum of $\alpha$ and a maximum of $\beta$, and each value will be achieved an infinite number of times, one after the other, as $x\to 0$.

This still leaves the cases $0 = \alpha < \beta < 1$, $0 < \alpha < \beta = 1$ and $0 \leq \alpha = \beta \leq 1$. The first two of these are pretty straightforward, and one is the complement of the other (i.e. take complements of $E$ to solve the other case). The last is a bit tricky: you can use a set very much like the one I have described here, but you need the max and min to converge to one another, whilst your $x_n$ and $y_n$ vary enough to eventually go to $0$. If you have difficulties, leave a comment and I'll show you how to do it.

I expect that this is all discussed in the context of Lebesgue's density theorem, but if not, you should familiarise yourself with it, just so you know how freakish the relationship between $0$ and $E$ is here. http://en.wikipedia.org/wiki/Lebesgue%27s_density_theorem
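The construction above can be sanity-checked numerically. With $\alpha = 0.3$, $\beta = 0.7$ and $q = \frac{\alpha(1-\beta)}{\beta(1-\alpha)}$, the $n$-th interval of $E\cap(0,1)$ simplifies to $\bigl(q^n,\; q^n\cdot\frac{1-\alpha}{1-\beta}\bigr)$, and one expects $f(q^n) = \alpha$ and $f\bigl(q^n\cdot\frac{1-\alpha}{1-\beta}\bigr) = \beta$. A sketch in plain Python (truncating the union at a depth where the remaining intervals are negligible):

```python
alpha, beta = 0.3, 0.7
q = alpha * (1 - beta) / (beta * (1 - alpha))  # ratio between consecutive intervals
scale = (1 - alpha) / (1 - beta)               # the n-th interval is (q**n, q**n * scale)

def f(x, depth=200):
    """m(E ∩ (0, x)) / x for E ∩ (0,1) = union of (q**n, q**n * scale), n >= 1."""
    total = 0.0
    for n in range(1, depth + 1):
        lo, hi = q ** n, q ** n * scale
        total += max(0.0, min(hi, x) - min(lo, x))  # length of (lo, hi) ∩ (0, x)
    return total / x

# The density ratio oscillates: minima alpha at x = q**n, maxima beta at q**n * scale.
for n in range(1, 8):
    assert abs(f(q ** n) - alpha) < 1e-9
    assert abs(f(q ** n * scale) - beta) < 1e-9
```

The same loop with other $0 < \alpha < \beta < 1$ behaves identically, which matches the claim that the ratio attains its $\liminf$ and $\limsup$ infinitely often as $x \to 0$.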
https://www.physicsforums.com/threads/integral-related-proof.595675/
# Integral-related proof

1. Apr 11, 2012

### Ocifer

1. The problem statement, all variables and given/known data

$C$ is a positively oriented simple closed curve. Show that the following integral is always positive:
$$\oint_C P\,dx + Q\,dy$$

2. Relevant equations

3. The attempt at a solution

I am actually given a particular $P$ and $Q$, but I would just like a hint on how to proceed. The integral is presented in such a way that I am tempted to use Green's theorem, but I'm not sure whether that would make it any better. Basically, I can show it without a problem if I pick particular curves $C$ and parameterize them. Can anyone give me a hint on how to do this in the general case, when I don't know $C$ exactly?
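Since the post's particular $P$ and $Q$ are not given, here is a purely illustrative numerical check of the Green's-theorem route with the hypothetical choice $P = -y$, $Q = x$: then $Q_x - P_y = 2 > 0$, so by Green's theorem the line integral equals twice the enclosed area and is therefore positive for any positively oriented simple closed curve. A sketch in plain Python on the unit circle:

```python
import math

# Illustrative choice (NOT the thread's P and Q): P = -y, Q = x,
# so Q_x - P_y = 2, and Green's theorem gives  ∮ P dx + Q dy = 2 * Area.
def line_integral(n=10000):
    """∮_C P dx + Q dy over the positively oriented unit circle, t in [0, 2π)."""
    total = 0.0
    for k in range(n):
        t = 2 * math.pi * k / n
        x, y = math.cos(t), math.sin(t)
        dxdt, dydt = -math.sin(t), math.cos(t)   # derivatives of the parameterization
        total += (-y * dxdt + x * dydt) * (2 * math.pi / n)
    return total

# The unit disk has area π, so the integral should be 2π, and in particular > 0.
assert abs(line_integral() - 2 * math.pi) < 1e-6
```

The same check on any other positively oriented closed parameterization again gives twice the enclosed area, which is the content of the Green's-theorem hint: positivity of the double integral of $Q_x - P_y$ does the work for every admissible $C$ at once.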
https://kops.uni-konstanz.de/handle/123456789/52867
## Ultracentrifugation Techniques for the Ordering of Nanoparticles

2021 · Journal article · Published

##### Published in
Nanomaterials ; 11 (2021), 2. - 333. - MDPI. - eISSN 2079-4991

##### Abstract
A centrifugal field can provide an external force for the ordering of nanoparticles. Especially with the knowledge from in-situ characterization by analytical (ultra)centrifugation, nanoparticle ordering can be rationally realized in preparative (ultra)centrifugation. This review summarizes the work back to the 1990s, where intuitive use of centrifugation was achieved for the fabrication of colloidal crystals, up to the very recent work where analytical (ultra)centrifugation is employed to tailor-make concentration gradients for advanced materials. This review is divided into three main parts. In the introduction part, the history of ordering microbeads in gravity is discussed; with the size of particles reduced to nanometers, a centrifugal field becomes necessary. In the next part, the research on the ordering of nanoparticles in analytical and preparative centrifugation in recent decades is described. In the last part, the applications of the functional materials fabricated from centrifugation-induced nanoparticle superstructures are briefly discussed.

540 Chemistry

##### Cite This
ISO 690: XU, Xufeng, Helmut CÖLFEN, 2021. Ultracentrifugation Techniques for the Ordering of Nanoparticles. In: Nanomaterials. MDPI. 11(2), 333. eISSN 2079-4991.
Available under: doi: 10.3390/nano11020333 BibTex @article{Xu2021-01-27Ultra-52867, year={2021}, doi={10.3390/nano11020333}, title={Ultracentrifugation Techniques for the Ordering of Nanoparticles}, number={2}, volume={11}, journal={Nanomaterials}, author={Xu, Xufeng and Cölfen, Helmut}, note={Article Number: 333} } RDF <rdf:RDF xmlns:dcterms="http://purl.org/dc/terms/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:bibo="http://purl.org/ontology/bibo/" xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#" xmlns:foaf="http://xmlns.com/foaf/0.1/" xmlns:void="http://rdfs.org/ns/void#" xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2021-02-17T09:24:58Z</dc:date> <dc:language>eng</dc:language> <dcterms:issued>2021-01-27</dcterms:issued> <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2021-02-17T09:24:58Z</dcterms:available> <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/29"/> <dc:creator>Xu, Xufeng</dc:creator> <dcterms:title>Ultracentrifugation Techniques for the Ordering of Nanoparticles</dcterms:title> <dc:contributor>Cölfen, Helmut</dc:contributor> <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/52867"/> <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/29"/> <foaf:homepage rdf:resource="http://localhost:8080/"/> <dcterms:abstract xml:lang="eng">A centrifugal field can provide an external force for the ordering of nanoparticles. Especially with the knowledge from in-situ characterization by analytical (ultra)centrifugation, nanoparticle ordering can be rationally realized in preparative (ultra)centrifugation. 
This review summarizes the work back to the 1990s, where intuitive use of centrifugation was achieved for the fabrication of colloidal crystals to the very recent work where analytical (ultra)centrifugation is employed to tailor-make concentration gradients for advanced materials. This review is divided into three main parts. In the introduction part, the history of ordering microbeads in gravity is discussed and with the size of particles reduced to nanometers, a centrifugal field is necessary. In the next part, the research on the ordering of nanoparticles in analytical and preparative centrifugation in recent decades is described. In the last part, the applications of the functional materials, fabricated from centrifugation-induced nanoparticle superstructures are briefly discussed.</dcterms:abstract> <dcterms:hasPart rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/52867/1/Xu_2-161na5d24tant7.pdf"/> <void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/> <dc:contributor>Xu, Xufeng</dc:contributor> <dc:creator>Cölfen, Helmut</dc:creator> <dspace:hasBitstream rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/52867/1/Xu_2-161na5d24tant7.pdf"/> </rdf:Description> </rdf:RDF>
2023-03-20 13:26:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5234829187393188, "perplexity": 5342.951397415704}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943483.86/warc/CC-MAIN-20230320114206-20230320144206-00689.warc.gz"}
https://amva4newphysics.wordpress.com/2016/08/13/lost-in-unfolded-space-last-episode-thou-shalt-not-unfold/
by Andrea Giammanco

This is the end of my unfolding series, whose previous episodes can be found here, here, and here. This post discusses when we should apply unfolding to our data, and when not.

There is an interesting thing about the unfolding community: when you ask an expert for practical advice, the expert typically starts by asking why you think you need unfolding, and will in general discourage you from doing it. Some time ago, a mini-workshop on unfolding techniques was organized by a group inside CMS. I was quite interested in attending because my recent research interests in the top quark realm dragged me (kicking and screaming) into the unfolding business. Thanks to the wealth of data delivered by the LHC, top quark physics has recently entered the statistical regime where it becomes appealing to perform differential measurements (of cross sections or of other quantities, like asymmetries, as a function of kinematic properties), while other communities, e.g. soft-QCD experts, have been measuring things differentially for generations.

The first speaker presented the key recommendations of the CMS Statistics Committee (a board of senior scientists with competence on various statistical problems, whose mandate is to check the statistical soundness of all our analysis procedures and give advice when any analyst requests it). Recommendation number one: “We recommend to avoid unfolding when it is not deemed compulsory”. The second talk was entirely devoted to the list of conceptual and practical issues with unfolding. The speaker, who has authored several papers on unfolding techniques, strongly discouraged unfolding and gave suggestions on how to avoid it altogether (e.g., he remarked that if you have a parameterized theoretical model you can just fit its parameters to data in the smeared space.)
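That smeared-space alternative (fold the parameterized model through the detector response and fit the parameter directly to the smeared data, with no unfolding at all) can be sketched in a few lines. Everything below is invented for illustration: the linear one-parameter model, the 8-bin toy response matrix, and the 70%/15% migration probabilities bear no relation to any real CMS analysis.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy truth model: a linear shape in x with a single free slope parameter.
x = np.linspace(-1, 1, 8)

def truth(theta, n_events=10_000.0):
    shape = 1.0 + theta * x              # hypothetical one-parameter model
    return n_events * shape / shape.sum()

# Toy response matrix: R[j, i] = P(reconstructed in bin j | true bin i);
# each true bin leaks symmetrically into its neighbours.
n = len(x)
R = np.zeros((n, n))
for i in range(n):
    R[i, i] = 0.7
    if i > 0:
        R[i - 1, i] = 0.15
    if i < n - 1:
        R[i + 1, i] = 0.15
R /= R.sum(axis=0)                       # columns are probabilities

theta_true = 0.3
data = R @ truth(theta_true)             # noise-free (Asimov) smeared pseudo-data

# Fit theta directly in the smeared space: fold the model, compare to data.
def chi2(theta):
    pred = R @ truth(theta)
    return np.sum((data - pred) ** 2 / pred)

fit = minimize_scalar(chi2, bounds=(-1.0, 1.0), method="bounded")
print(f"fitted slope: {fit.x:.4f}")      # recovers theta_true = 0.3
```

With real data one would minimize a proper likelihood and profile the systematic uncertainties, but the structure of the fit is the same: the model is smeared, never the data unsmeared.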
The third talk, before giving several practical recipes (how to choose the optimal regularization criterion, how to treat systematic uncertainties, etc.), provided further examples where unfolding is a very bad idea. Then came the fourth speaker, who has long experience of unfolding from various experiments. And – guess what? – his first advice on unfolding was: “DO NOT DO IT” (in capital letters.) But then he presented and demonstrated a nice idea for “partial unfolding” (i.e., identifying only the degrees of freedom that our data are able to give info about, and unfolding only those, while fixing the others to the model.)

The internal wikis of recommendations also follow the same approach: they start by discouraging the reader from unfolding, then warn of the many pitfalls of unfolding, and finally give you practical recipes for unfolding. I have never witnessed this attitude anywhere else, so I am not sure of the right metaphor to use. Maybe, before recruiting for the Crusades, the medieval preachers started by reminding everyone that thou shalt not kill, before elaborating on how to genocide the infidels?

So when should you abstain from unfolding?

When you want to search for new physics (or, more generally, be sensitive to unexpected features in the data.) The reason was already explained in my previous post. In short: any conceivable unfolding method necessarily biases towards the initial model, and anyway the sensitivity to deviations between data and expectation is decreased in the unfolded space with respect to the smeared space, because of binning effects (to minimize off-diagonal terms in the migration matrix, which are the source of all unfolding problems, the bin width cannot go much below the resolution) and because any attempt to properly cover the bias with a proper uncertainty will further reduce the sensitivity to the unexpected.

When you want to extract a parameter of the theory.
A recent example from the CMS top quark group makes the point nicely: this analysis extracted the top quark pole mass from a least-squares fit to a variable suggested by a theory paper, twice: by fitting the smeared-space distribution, and the unfolded-space distribution. The first is significantly more precise, for the same reasons as above. (On the other hand, unfolding was not a pointless exercise in this case: although the procedure diluted the information on the parameter to which that variable is sensitive, the differential cross section as a function of that variable is interesting per se.)

Above: The $\rho_S$ variable in the smeared space (left) and in the unfolded space (right), from CMS-PAS-TOP-13-006.

Another example is this analysis, to which I personally contributed (although I did not put my hands in the unfolding machinery myself.) Yes, I sinned: this analysis ends with the extraction of a parameter from an unfolded distribution. The parameter is the forward-backward asymmetry (in an appropriate rest frame) of muons from the decay of top quarks produced singly by a charged-current process mediated by the weak interaction (see the Feynman diagram on the left, which shows the leading diagram for single top quark production by the weak interaction). This quantity can range in principle anywhere between -1/2 and +1/2, and under some assumptions it corresponds to exactly half of the degree of polarization of the top quark. The Standard Model predicts almost 100% polarization of the top quark produced this way, because of the fundamental property of the charged-current weak interaction that it only involves left-handed fermions and is blind to right-handed ones. (The opposite for anti-fermions.)
This property had been actively used in previous studies of single top quark production at Tevatron and LHC, for example as an input to multi-variate techniques, but this was always pointed out as a conceptual weakness of those measurements, as it introduced a bias of the measured cross section towards the Standard Model assumptions. That’s why my own research program in the early years of LHC running has included the measurement of the inclusive cross section of this process by the exploitation of a kinematic property that does not correlate significantly with polarization, and the first measurement of the differential cross section as a function of a variable that maximally correlates with polarization (i.e., the paper linked above.) You can see the smeared- and unfolded-space distributions below: Above:  angular variable related to single top polarization in the smeared space (left) and in the unfolded space (right), from JHEP 1604 (2016) 073. The normalized differential cross section (right panel in this figure), due in particular to the coarse binning that was necessary to make unfolding behave nicely, was not as sensitive to the asymmetry parameter as a simple template fit to the smeared data could have been. But for a template fit we should have assumed some model (e.g., a linear relationship between production rate and this variable, as in the Standard Model but with a free parameter), while here, in the first measurement ever of this distribution, we are providing much more: we are showing for the first time (instead of assuming) that the relationship is indeed linear. (To be fair, there is no theory model that predicts anything different from linear. The Standard Model tells you that it is linear and also tells you the slope, while many hypothetical New Physics processes that could give the same final state would feature a complete lack of polarization in the production vertex, and therefore a null slope: just a flat dependence. 
A significant deviation from the SM slope would hint at a possible admixture of the weak-interaction production with some of those hypothetical mechanisms. But what if everybody is wrong and there is, for example, a concavity in that distribution?)

Incidentally: as a cross-check we also extracted the same asymmetry by a simple $2\times 2$ matrix inversion. What was done in practice was literally what I described in the simple example of episode 1. No regularization is needed with two bins, and it was performed analytically, which I found very refreshing as I live in a world dominated by numerical methods. Interestingly, it turned out to yield a less precise determination of the forward-backward asymmetry. Less statistical power comes from the smaller “lever arm” of a measurement with two bins with respect to several bins. But it must also be remarked that the bias towards our expectation gets larger too: the migration probability gets integrated over the (expected) underlying distribution within the bin and is therefore highly sensitive to the model (hence larger systematics are obtained, as estimated by varying the model parameters).

Now let’s go back to recommendation number one: “We recommend to avoid unfolding when it is not deemed compulsory”. So far I have elaborated on why in many cases one should avoid unfolding. But when is unfolding “compulsory”? We say that you should not unfold when you are interested in new physics, e.g., when your goal is to set constraints on the couplings of an extension of the Standard Model (like some Effective Field Theory that could manifest itself through deformations of the predicted shapes of some observables, rather than through spectacular bumps.)
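The two-bin matrix-inversion cross-check described above is simple enough to write out completely. The numbers here are made up (an 80%/20% migration probability and hypothetical true counts), and the factor 1/2 in the asymmetry definition matches the ±1/2 range quoted earlier for the muon forward-backward asymmetry.

```python
import numpy as np

# Migration matrix: M[j, i] = P(reconstructed in bin j | generated in bin i).
eps = 0.2                                  # invented migration probability
M = np.array([[1.0 - eps, eps],
              [eps, 1.0 - eps]])

true_counts = np.array([3000.0, 7000.0])   # hypothetical (backward, forward)
reco_counts = M @ true_counts              # what the detector would record

# Analytic two-bin "unfolding" is just a matrix inversion, no regularization:
unfolded = np.linalg.solve(M, reco_counts)

def asymmetry(counts):
    backward, forward = counts
    return 0.5 * (forward - backward) / (forward + backward)

print(asymmetry(reco_counts))   # diluted by the smearing: 0.12
print(asymmetry(unfolded))      # recovers the true value:  0.2
```

In reality the matrix elements themselves come from simulation and depend on the assumed underlying distribution within each bin, which is exactly the model bias discussed above.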
On the other hand, if you are a theorist, you may simply have no choice: “raw” data from the LHC experiments are not open, although a fraction of them may be released after some years of embargo, while unfolded data are usually disclosed here as soon as the corresponding paper is published. Similar considerations apply to the extraction of parameters of the Standard Model. For example, the most avid users of our differential measurements are probably the small teams that extract the Parton Distribution Functions (PDF) from the public data. Sure, a single experimental collaboration could possibly fit PDFs to its raw data, but would not have access to the raw data of other experiments (not more than theorists). The best discriminating power is achieved by combining data coming from different accelerators ($pp$, $p\bar p$, $ep$, fixed-target experiments.) Some of those data were collected decades ago but are still relevant for PDF fits, because they were performed at different energies and probed different kinematic ranges. Side note: Usually, reanalyzing raw data from past experiments eventually becomes unfeasible even for the former members of those experiments. One of the few success stories (that probably started the slow movement of the HEP community towards the “open data” concept) is the resurrection of the data and of the analysis software of the JADE experiment, that took data at the PETRA collider at DESY between 1979 and 1986, using $e^+ e^-$ collisions in a range that we now call “low energy”, and was crucial for the establishment of QCD as the theory of strong interactions. In the late 90’s, just before the last backup got erased, someone realized that there would have been a lot to be learned about QCD by reanalyzing those data, profiting from the theory advances and from analysis methods that had been developed in the meantime. The heroic story of how those data were recovered is narrated, for expert readers, here and here. End of side note. 
Another use case, and an obvious one (although for some reason it is not often discussed), is the mere comparison of different experiments. ATLAS and CMS are very different detectors, our event reconstruction algorithms are different, our selections are optimized independently; so, even if we measure the same underlying distributions, we “fold” them very differently. Only unfolding allows one to compare the spectra in a meaningful way, like for example:

Ideally one would combine these two spectra (which is in the plans, but it takes time to do it right), then compare to the theory predictions and see how good they are. And one needs unfolded data for that. But even without a combination we already like to show this plot around, as agreement among the experiments has a value per se. For example, a discrepancy in shape would hint that some systematic effect is unaccounted for somewhere.

Historically, when the first measurement of this distribution was made public, there was no Next-to-Next-to-Leading-Order (NNLO) QCD prediction yet, and it was observed that the discrepancy of CMS data with Next-to-Leading-Order (NLO) calculations had a different direction with respect to what most “educated guesses” expected. (By the way, the computation of differential spectra at NNLO in QCD – which was achieved for the first time in the top-pair case only quite recently – is so heavy that we have to agree with the authors beforehand about the exact bins to be used, because it takes weeks or months for their machines to deliver.) Only when ATLAS released their unfolded spectra at the same energy (an independent data set, a very different detector, and an unfolding technique from the other major school of thought) did we get more confident that the reason was not an unaccounted systematic, a bug in our code… or a figment of our unfolding’s imagination!
2017-08-21 00:44:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 6, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.593335747718811, "perplexity": 1001.6645044351058}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886107065.72/warc/CC-MAIN-20170821003037-20170821023037-00257.warc.gz"}
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition/chapter-7-trigonometric-identities-and-equations-7-2-verifying-trigonometric-identities-7-2-exercises-page-667/53
## Precalculus (6th Edition)

Published by Pearson

# Chapter 7 - Trigonometric Identities and Equations - 7.2 Verifying Trigonometric Identities - 7.2 Exercises - Page 667: 53

#### Answer

$\frac{\cos\alpha}{\sec\alpha}+\frac{\sin\alpha}{\csc\alpha}=\sec^2\alpha-\tan^2\alpha$

#### Work Step by Step

Simplify the left side. Dividing by $\frac{1}{\cos\alpha}$ is the same as multiplying by $\cos\alpha$ (and likewise for $\frac{1}{\sin\alpha}$):

$\frac{\cos\alpha}{\sec\alpha}+\frac{\sin\alpha}{\csc\alpha}$
$=\frac{\cos\alpha}{\frac{1}{\cos\alpha}}+\frac{\sin\alpha}{\frac{1}{\sin\alpha}}$
$=\cos\alpha\cdot\cos\alpha+\sin\alpha\cdot\sin\alpha$
$=\cos^2\alpha+\sin^2\alpha$
$=1$

Simplify the right side using the Pythagorean identity $1+\tan^2\alpha=\sec^2\alpha$:

$\sec^2\alpha-\tan^2\alpha$
$=1$

Since both sides are equal to $1$, they are equal to each other, and the identity is proven.
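As an independent check (not part of the textbook's solution), the identity can be verified symbolically with sympy, which reduces both sides to the same expression:

```python
import sympy as sp

a = sp.symbols('alpha')

lhs = sp.cos(a) / sp.sec(a) + sp.sin(a) / sp.csc(a)
rhs = sp.sec(a) ** 2 - sp.tan(a) ** 2

# Both sides reduce to 1, as in the step-by-step work above.
print(sp.simplify(lhs), sp.simplify(rhs))
print(sp.simplify(lhs - rhs))
```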
2019-02-17 05:36:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8188599944114685, "perplexity": 983.661138027361}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247481624.10/warc/CC-MAIN-20190217051250-20190217073250-00152.warc.gz"}
https://quant.stackexchange.com/questions/33785/markov-property-for-stochastic-differential-equation
# Markov property for stochastic differential equations

Consider the stochastic differential equation:
\begin{equation*} d X(u)=\beta(u,X(u))d u+\gamma(u,X(u))d W(u). \end{equation*}
Suppose $X(T)$ is the solution of the above stochastic differential equation with initial condition $X(t)=x$ and $h(x)$ is a Borel-measurable function. Denote $$g(t,x)=E^{t,x}h(X(T)).$$ We assume $E^{t,x}|h(X(T))|<\infty$.

Now let $X(u)$ be the solution of the above stochastic differential equation with initial condition given at time $0$. By the Markov property of $X(t)$, there exists a function $g(t,x)$ such that $$E[h(X(T))|\mathcal{F}(t)]=g(t,X(t)).$$

My question is: are those two $g(t,x)$ the same? We want to use the Feynman-Kac equation, but I am not sure whether it holds for the first definition $$g(t,x)=E^{t,x}h(X(T)),$$ since the proof of the Feynman-Kac equation needs $g(t,X(t))$ to be a martingale, and I am not sure that $g(t,X(t))$ is a martingale here.

• Have a look at Shreve's Stochastic Calculus for Finance II Theorem 6.3.1. It states that $\mathbb{E}[h(X(T))|\mathcal{F}(t)] = \mathbb{E}^{t,x}[h(X(T))]$ (without proof) – zer0hedge Apr 20 '17 at 17:06
• A conditional expectation of the form $Y_t = \Bbb{E} [ f(X_T) \vert \mathcal{F}_t ]$ (with no explicit dependence on $t$ inside $f$) will always be a martingale by the tower property, right? Just compute $\Bbb{E}[Y_t \vert \mathcal{F}_s ]$ to convince yourself that it is indeed equal to $Y_s$. Of course this is assuming the process is adapted to the filtration – Quantuple Apr 20 '17 at 17:07
• @zer0hedge But in the proof of the Kolmogorov backward equation we use $g(t,x)=E^{t,x}h(X(T))=\int^{\infty}_0h(y)p(t,T,x,y)d y,$ that is the first definition, and we still use the martingale property of $g(t,x).$ – A.Oreo Apr 21 '17 at 3:59
• @Quantuple Yes, I am sure $E[h(X(T))|\mathcal{F}(t)]=g(t,X(t))$ is a martingale; my question is whether $g(t,x) = E^{t,x}h(X(T))$ is a martingale. Those two $g(t,x)$ are different, and we use the martingale property of the latter $g(t,x)$ to prove the Kolmogorov backward equation.
– A.Oreo Apr 21 '17 at 4:04 • "Note that there is nothing random about $g(t,x)$; it is an ordinary (actually, Borel-measurable) function of the two dummy variables t and x" Shreve, p.266 – zer0hedge Apr 21 '17 at 6:01 Here, we assume that \begin{align*} g(t, x) = \mathbb{E}\left(h(X_T) \mid X_t = x \right). \end{align*} Note that, by Shiryaev, $g(t, x)$ is a Borel measurable function such that, for any Borel measurable set $A$, \begin{align*} \int_{\{X_t \in A\}} h(X_T) d\mathbb{P} &= \int_A g(t, x) \mathbb{P}_{X_t}(dx), \end{align*} where $\mathbb{P}_{X_t}(dx)$ is the Lebesgue-Stieltjes measure generated by the distribution function of $X_t$, that is, for any Borel measurable set $B$, \begin{align*} \mathbb{P}_{X_t}(B) = \mathbb{P}(X_t \in B). \end{align*} It can also be shown that (see Page 196 of Shiryaev, starting with indicator and simple functions, then, by monotone convergence theorem, to all positive functions, and, by decomposition, to all integrable measurable functions), \begin{align*} \int_A g(t, x) \mathbb{P}_{X_t}(dx) = \int_{\{X_t \in A\}} g(t, X_t) d\mathbb{P}. \end{align*} That is, \begin{align*} \int_{\{X_t \in A\}} h(X_T) d\mathbb{P} = \int_{\{X_t \in A\}} g(t, X_t) d\mathbb{P}. \end{align*} In other words, \begin{align*} g(t, X_t) = \mathbb{E}(h(X_T) \mid X_t) = \mathbb{E}(h(X_T) \mid \mathcal{F}_t), \end{align*} by the Markov property. Moreover, $\{g(t, X_t), \, 0\le t \le T \}$ is obviously a martingale.
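A quick simulation illustrates the martingale property concretely for a case with a closed form (this example is not from the question; the coefficients are arbitrary). For geometric Brownian motion $dX = \mu X\,du + \sigma X\,dW$ with $h(x) = x$, we have $g(t,x) = E^{t,x}h(X(T)) = x e^{\mu(T-t)}$, so $\mathbb{E}[g(t, X_t)]$ must equal $x_0 e^{\mu T}$ at every $t$:

```python
import numpy as np

rng = np.random.default_rng(0)

mu, sigma, x0, T = 0.05, 0.2, 1.0, 1.0   # arbitrary GBM parameters
n_paths, n_steps = 200_000, 50
dt = T / n_steps

X = np.full(n_paths, x0)
means = []
for k in range(n_steps + 1):
    t = k * dt
    means.append(np.mean(X * np.exp(mu * (T - t))))  # sample mean of g(t, X_t)
    if k < n_steps:
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        X *= np.exp((mu - 0.5 * sigma**2) * dt + sigma * dW)  # exact GBM step

# Every entry of `means` should be close to x0 * exp(mu * T):
print(min(means), max(means), x0 * np.exp(mu * T))
```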
2019-06-16 18:34:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.999066174030304, "perplexity": 481.45110634165115}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998291.9/warc/CC-MAIN-20190616182800-20190616204800-00198.warc.gz"}
https://www.physicsforums.com/threads/finding-the-mininum-value-of-the-coffecitent-of-friction.462736/
# Finding the minimum value of the coefficient of friction

## Homework Statement

1. Suppose a hanging 1.0 kg lab mass is attached to a 4.0 kg block on the table (the picture).
a. If the coefficient of kinetic friction, $\mu_k$, is 0.2, what is the acceleration?
b. What would be the minimum value of the coefficient of static friction, $\mu_s$, in order for the block to remain motionless?

## Homework Equations

a = F/m
$\mu$ = frictional force / normal force

## The Attempt at a Solution

a) 0.2 = friction/50 (I got 50 from 5 [the two blocks] multiplied by gravity)... friction = 10
a = 10/5
a = 2 m/s²
b) idk how to figure this out?

You need to look at both blocks as a system.

Friction <------------------(mass of both blocks)------------------> Force due to gravity on the hanging block

Keep in mind that a = f/m isn't really an equation. F = ma is the sum of all forces on an object or system. With that being said, you can now use that equation as follows: F = ma, Fg - Ff = ma .... and I'm sure you can finish the rest.

I meant a = $\Sigma F / m$, that's a real equation.
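Following the mentor's setup, the numbers can be checked directly. This is a sketch: it assumes g = 9.8 m/s² and uses the fact that only the 4.0 kg block on the table contributes to the normal force (not all 5 kg, which is the error in the attempt above).

```python
g = 9.8                      # m/s^2
m_hang, m_block = 1.0, 4.0   # kg
mu_k = 0.2

# (a) System of both blocks: Fg - Ff = (m_hang + m_block) * a,
#     with Ff = mu_k * (normal force) = mu_k * m_block * g.
a = (m_hang * g - mu_k * m_block * g) / (m_hang + m_block)
print(a)                     # ~0.392 m/s^2

# (b) Motionless: static friction must balance the hanging weight,
#     mu_s * m_block * g >= m_hang * g  =>  mu_s >= m_hang / m_block.
mu_s_min = m_hang / m_block
print(mu_s_min)              # 0.25
```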
2020-02-25 00:52:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47303467988967896, "perplexity": 2060.155563602548}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145989.45/warc/CC-MAIN-20200224224431-20200225014431-00056.warc.gz"}
https://www.physicsforums.com/threads/gaussian-integral.311591/
# Gaussian Integral

1. May 3, 2009

### csnsc14320

1. The problem statement, all variables and given/known data

Solve: $$I_n = \int_{0}^{\infty} x^n e^{-\lambda x^2} dx$$

2. Relevant equations

3. The attempt at a solution

So my teacher gave a few hints regarding this. She first said to evaluate when n = 0, then consider the cases when n = even and n = odd, comparing the even cases to the p-th derivative of $I_0$. For the $I_0$ case, I evaluated it and obtained $$I_0 = \frac{1}{2} \sqrt{\frac{\pi}{\lambda}}$$ Now, for the "p-th" derivative of $I_0$, I got $$\frac{d^p}{d \lambda^p} I_0 = \frac{\prod_{k=1}^p (1 - 2k)}{2^{p+1}} \sqrt{\pi} \lambda^{-\frac{(2p + 1)}{2}}$$ I don't see how this relates to n = 2p (the even case), where $$I_{2p} = \int_0^\infty x^{2p} e^{- \lambda x^2} dx$$ And even when I do figure this out, does this all combine into one answer, or is it kind of like a piecewise answer? Any help with what to do with the even/odd cases would be greatly appreciated. Thanks

2. May 3, 2009

### nickjer

Is this what you are asking? $$\frac{\partial}{\partial \lambda} I_0 = \frac{\partial}{\partial \lambda} \int_{0}^{\infty} e^{-\lambda x^2} dx = \int_{0}^{\infty} -x^2 e^{-\lambda x^2} dx = -I_2$$ You can apply this recursively p times to get $$(-1)^p I_{2p}$$

3. May 4, 2009

### csnsc14320

Oh yeah, I see the pattern if you take the derivative of $I_0$ in integral form instead of what it actually is. I also did it for the odds and got nice cases for both :D thanks.
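The closed forms discussed in this thread are easy to check numerically (a quick sketch; the value of lambda is arbitrary):

```python
import numpy as np
from scipy.integrate import quad

lam = 1.7  # arbitrary positive lambda

def I(n):
    """Numerically evaluate I_n = integral of x^n * exp(-lam*x^2), x from 0 to inf."""
    val, _ = quad(lambda x: x**n * np.exp(-lam * x**2), 0, np.inf)
    return val

# n = 0 case:  I_0 = (1/2) * sqrt(pi / lam)
print(I(0), 0.5 * np.sqrt(np.pi / lam))

# even case, via d/d(lam) I_0 = -I_2:  I_2 = (sqrt(pi)/4) * lam**(-3/2)
print(I(2), 0.25 * np.sqrt(np.pi) * lam ** -1.5)

# odd case, via the substitution u = x^2:  I_1 = 1 / (2 * lam)
print(I(1), 1.0 / (2.0 * lam))
```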
2017-08-17 09:29:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7785932421684265, "perplexity": 914.4265476677756}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886102993.24/warc/CC-MAIN-20170817073135-20170817093135-00341.warc.gz"}
https://www.physicsforums.com/threads/need-help-plotting-with-mathematica.319314/
# Need help plotting with Mathematica!

1. Jun 11, 2009

### jtassilo

Hi, I want to use Mathematica to predict the spectrum of compounds with different conformers. The input consists of 4 values: frequency (of each vibration and every conformer), intensity, polarizability and total energy (of the conformer). I now want to plot the frequency vs the intensity.
1. How do I get Mathematica to plot only a vs b if my input is of the format (a, b, c, d)?
2. Next I want to weight each frequency peak using the Boltzmann distribution. So I basically want to get a graph plotting frequency vs Intensity*exp(-d/kT)/sum of(exp(-d/kT)).
3. After weighting each peak height with the Boltzmann distribution I want to broaden the frequencies using a Gaussian profile, because in real spectra the peaks have a certain width and are bell shaped and do not consist of single points. The Gaussian function is a function of c (the polarizability). So in the end I would like to plot a*Function of c vs b*Function of d.
I would greatly appreciate any help/suggestions.

2. Jun 11, 2009

### Staff: Mentor

I do not understand what you are asking. If a and b are each separate inputs to your function then they are independent of each other.

3. Jun 11, 2009

### jtassilo

Let me rephrase my first question: Is it possible to plot in Mathematica only x vs y if your input file consists of (x, y, z, a), and you don't want to reformat your input file? If the input file is of the format (x,y) you can use the command ListPlot[input]; what is the command if your input file has a different format?

4. Jun 11, 2009

### Staff: Mentor

ListPlot[input[[All,{1,2}]]]

5. Jun 12, 2009

### jtassilo

Hm, I'm getting the following message after typing ListPlot[data[[All, {1, 2}]]]: {x1, y1, z1, b1}, {x2, y2, z2, b2} is not a list of numbers or pairs of numbers. (Of course x1 is a specific number, which I abbreviate here with x1.)

6. Jun 12, 2009

### Staff: Mentor

You are really not describing your problem very well.
What do you get when you do the following:
Dimensions[data]
MatrixQ[data]
MatrixQ[data, NumericQ]
If your data is not an n-row by 4-column matrix of numbers then what is it?

7. Jun 12, 2009 ### jtassilo
First of all, thanks for your help and patience! My input is an Excel file consisting of 4 columns and 29 rows: {{a1, b1, c1, d1}, {a2, b2, c2, d2}, ... When I typed the commands, I got {1, 29, 4}, False, False. I would now like to 1. plot the first column versus the second (plot a vs b) and 2. modify the values of the first 2 columns by functions that depend on values in the other two columns.

8. Jun 12, 2009 ### Staff: Mentor
OK, you have an extra "layer", perhaps representing the worksheets in Excel. To trim that off use: data = data[[1]]; Then you should get Dimensions[data] == {29, 4} and the other two commands should give you True. Then ListPlot[data[[All, {1, 2}]]] should work. To get the other information in there try something like data2 = Map[{f[#[[3]], #[[4]]], g[#[[3]], #[[4]]], #[[3]], #[[4]]} &, data] where f and g are the functions that determine your new values for a and b respectively.

9. Jun 16, 2009 ### jtassilo
It worked, thanks! I have one last question: I am trying to plot several Gauss functions. My input is: g = {a1, a2, a3, ...} Plot[Exp[-0.5*((x - g)/2)^2], {x, 0, 1200}] This plots multiple Gauss functions. The Gauss functions overlap though, and I want to add all the Gauss functions together (basically they interfere positively). I tried the following command (it didn't work): Plot[Sum[Exp[-0.5*((x - g)/2)^2], {x, 0, 1200}], {x, 0, 1000}] Do you know what command might work for adding the functions?

10. Jun 16, 2009 ### Staff: Mentor
Try: I am not sure it will work.

11. Jun 16, 2009 ### jtassilo
Yes, it works great; it does however not work when I try to use Map: Plot[Apply[Plus, Thread[Map[{ #[[2]]....} &, data]]], {x, 0, 1200}] Any suggestions why Plot does not work with Map? The normal Plot[Map[{ #[[2]]....} &, data]]], {x, 0, 1200}] command works.

12.
Jun 16, 2009 ### Staff: Mentor
The expression: Threads the function Map over the list that appears in the first argument. This means that it evaluates to {Map[#[[2]] &, data], Map[ ... &, data], ...} which is probably not what you wanted to do.

13. Jun 17, 2009 ### jtassilo
Ok, this here is hopefully my last question: I'm doing a dynamic plot using a rather complex equation (the problem does not occur with simple equations). When I hit shift-enter, only the slider is returned as output and below it says "$Aborted". The strange thing is that when I use the slider it shows the graph. I can see how the graph changes when I move the mouse on the slider, but as soon as I stop pressing on the slider, the graph disappears and is replaced by "$Aborted". I read the help files concerning Dynamics and Aborted, but I haven't found a command that changes this problem. Since the calculation takes Mathematica quite some time I am currently thinking that my computer might be too slow. Any suggestions are appreciated.

14. Jun 17, 2009 ### Staff: Mentor
If it is easy to post the code then I will try it out. I know that there is an abbreviated rendering while you are dynamically adjusting the setting, but I have not run into that specific error before.

15. Jun 18, 2009 ### jtassilo
Here is the input: My other input is named data = {{a1, b1, c1, d1, e1, f1}, {a2, ..}}
Panel[Column[{Row[{Slider[Dynamic[T], {1, 350, 1}], Dynamic[T]}], Dynamic[Plot[ Apply[Plus, data[[All, 1]]*(1879421 - data[[All, 1]])^3/(1 - Exp[-data[[All, 1]]*100*299792458*6.626*10^-34/(1.38*10^-23*T)])* data[[All, 2]]* data[[All, 6]]* Exp[- data[[All, 5]]*4.3597*10^-18/(1.38*10^-23 T)]/ Total[data[[All, 6]]* Exp[-data[[All, 5]]*4.3597*10^-18/(1.38*10^-23*T)]]/( data[[All, 1]]*(2 Pi)^0.5)* Exp[-0.5*((x - data[[All, 1]])/2)^2]]], {x, 0, 1000}, ImageSize -> {{400, 1000}}, AxesLabel -> {v [cm^-1], I [10^-10 m^2/sr]}, PlotRange -> All, PlotLabel -> Pentane T [K]]]}]]
Last edited: Jun 18, 2009

16.
Jun 18, 2009 ### jtassilo
Also note, please, that if the beginning of the input (Thread[1.097455759580536*10^-5 is replaced by a smaller number such as Thread[1.097455759580536*10^-105, the whole script does not work anymore, and the Gauss function is approximated by a triangle. I think that it has something to do with the significant figures Mathematica uses, but the help file isn't really helpful in explaining how to fix this problem.

17. Jun 18, 2009 ### jtassilo
I am certain now that the $Aborted message has to do with the calculating capacity. When I change the number of input rows from 29 to, for example, 1400, the error message is displayed even for simple equations.

18. Jun 18, 2009 ### Dale ### Staff: Mentor
Well, if it has to do with numerical precision then I won't be able to help with dummy data. However, there were a few non-numerical things that I noticed:
1) Use Manipulate instead of Panel[...{Slider[Dynamic,...]. It is a much cleaner interface and I think that T may be getting stuck.
2) Get rid of the call to the Thread function. All of the functions inside of it are "Listable", meaning that they are automatically threaded over their arguments.
3) Wrap your function to be plotted in Evaluate: e.g. Plot[Evaluate[Apply[Plus,...]]],...] This will reduce the number of computations that have to be done at run time.
4) With my dummy data I was not getting any $Aborted messages, but I had to get rid of the ImageSize -> {{400, 1000}} option, which was really messing up the display for some reason.

19. Jun 19, 2009 ### jtassilo
Hey, here's another question: Is there a way to export the graph I made not as a bitmap or PDF, but as a set of points (x, y) with a predefined grid/accuracy? (Of course not a dynamic graph, but a normal graph.)

20. Jun 19, 2009 ### Staff: Mentor
Sure, just use Rasterize[graph]. That will return a Graphics object where the first element is a Raster object.
The first element of the Raster object, in turn, is a matrix containing the RGB color values at each point in the predefined grid.

21. Jun 29, 2009 ### jtassilo
Hm, I couldn't get it to work... Let's say I want a list or table with x and y values for the function y = sin x. Is there any way to do that?

22. Jun 29, 2009 ### Staff: Mentor
Sure, just use Table for that. Table[{x, Sin[x]}, {x, xmin, xmax, dx}]

23. Jun 30, 2009 ### jtassilo
Great, thanks for the tip! And I have another question concerning loop commands: I want to import several files (c1, c2, ...). Is there a command that allows me to import c1, c2, ... at once, like: Do[Import["ci.txt"], {i, imin, imax, di}]? I've been trying several loop commands but couldn't get it to work. Also, is it possible to somehow use loop commands to simplify the following expression (substituting c1 and c2 by ci again): Evaluate[Apply[Plus, Exp[-(x - c1[[All, 1]])^2] + Exp[-((x - c2[[All, 1]]))^2] + ...]]?

24. Jun 30, 2009 ### Staff: Mentor
I usually put the filenames in a list of strings called filenames. Then you can use the following command: dat = Map[Import[#] &, filenames]; You can even specify options in the Import command if they will all use the same options.
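Not part of the original thread, but for readers following along outside Mathematica, the pipeline discussed above (select the frequency and intensity columns, weight the intensities by the Boltzmann distribution, broaden each peak with a Gaussian, then sample the summed spectrum on a grid) can be sketched in Python. The column layout, the toy data, and the kT value are illustrative assumptions, not values from the thread:

```python
import math

# Toy stand-in for the 29x4 spectrum data discussed in the thread:
# each row is (frequency, intensity, polarizability, energy).
data = [
    (100.0, 1.0, 0.5, 0.00),
    (250.0, 2.0, 0.7, 0.01),
    (400.0, 0.5, 0.3, 0.02),
]

kT = 0.01  # Boltzmann factor, in the same (arbitrary) units as the energies


def boltzmann_weights(energies, kT):
    """exp(-E/kT) for each conformer energy, normalized to sum to 1."""
    factors = [math.exp(-e / kT) for e in energies]
    total = sum(factors)
    return [f / total for f in factors]


def spectrum(x, data, kT, width=2.0):
    """Sum of Gaussians centered at each frequency, each scaled by the
    Boltzmann-weighted intensity -- the 'adding the overlapping Gauss
    functions together' step the thread asks about."""
    weights = boltzmann_weights([row[3] for row in data], kT)
    return sum(
        w * row[1] * math.exp(-0.5 * ((x - row[0]) / width) ** 2)
        for w, row in zip(weights, data)
    )


# Sample the broadened spectrum on a grid, analogous to
# Table[{x, f[x]}, {x, xmin, xmax, dx}] in the thread.
grid = [(x, spectrum(x, data, kT)) for x in range(0, 500, 10)]
```

The same structure also mirrors the `Map[Import[#] &, filenames]` advice: a list comprehension over filenames would replace the explicit `Do` loop.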
2018-07-18 21:01:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20494844019412994, "perplexity": 3074.558750544947}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590329.25/warc/CC-MAIN-20180718193656-20180718213656-00316.warc.gz"}
https://www.numerade.com/questions/find-the-mode-or-modes-for-each-list-of-numbers-161513151413111514/
# Find the mode or modes for each list of numbers.$16,15,13,15,14,13,11,15,14$

## 15

### Video Transcript

This question presents you with a list of numbers and asks you to find the mode. The numbers presented are 16, 15, 13, 15, 14, 13, 11, 15 and 14. Now, we have a lot of numbers and several of them repeat. So the way that I like to do these is set up a little table and, as I go through the list, check off how many of each value I see. I know that 16 is my highest value here and 11 is my lowest value, so I'm going to just count them down from 16 to 11, and I'll go through and check them off as I see them. So I have one 16, then one 15, one 13, another 15, a 14, another 13, an 11, another 15 and another 14. That's all of my data, and I can see that the value I had the most of was 15. Now, the mode, we know, is the value that shows up the most in the data. And so here, 15 is our mode; that's your final answer.

University of Oklahoma
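The check-off table in the transcript is exactly what a frequency counter automates. As a quick sanity check (this Python snippet is an addition, not part of the original solution):

```python
from collections import Counter

numbers = [16, 15, 13, 15, 14, 13, 11, 15, 14]

# Tally how many times each value appears, just like the
# check-off table built in the transcript.
counts = Counter(numbers)
highest = max(counts.values())

# The mode is every value attaining the highest tally;
# here only 15 (which appears three times) qualifies.
modes = [value for value, count in counts.items() if count == highest]
```

`modes` comes out as `[15]`, agreeing with the answer above; a list with two equally frequent values would yield both modes.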
2021-03-04 18:39:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5672672390937805, "perplexity": 1802.6427496468987}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178369512.68/warc/CC-MAIN-20210304174506-20210304204506-00167.warc.gz"}
https://math.stackexchange.com/questions/1244655/pythagorean-triples-is-every-positive-integer-gt-2-part-of-at-least-one-p
# Pythagorean Triples: Is every positive integer $\gt 2$ part of at least one Pythagorean triple?

I was doing some basic number theory problems from Rosen and came across this problem:

Show that every positive integer $\gt 2$ is part of at least one Pythagorean triple

My Solution (partial):

Case 1:

• Let there be an integer $t \ge 3$
• Suppose $t$ is of the form $2^{j}$ for $j > 1$
• Let $m = 2^{j-1}$ and $n = 1$
• So, $2mn = t$ and hence $t$ belongs to a Pythagorean triple

Case 2:

• Let $t = 2n + 1$
• WLOG, let $m = n + 1$
• Then $m$ and $n$ have opposite parity
• Also, $m > n$
• So, $m^{2} - n^{2} = 2n + 1 = t$, so $t$ belongs to a Pythagorean triple

My Problem: Can someone help me out? I do not know if I am correct; I am all thumbs. Even a hint would suffice...

• The first case should deal with $2t$, not $2^t$ – Asvin Apr 21 '15 at 9:33
• You haven't checked the case when $t$ is even and not a power of $2$. Also your proof of case 2 uses redundant steps "then $m,n$ have opposite parity" and "$m>n$". – user26486 Apr 21 '15 at 11:21
• Thanks @Asvin, I got it :) – pranav Apr 21 '15 at 11:58

Using the characterisation of these triples, it suffices to show that any such number can be written as $m^2-n^2$, $2mn$ or $m^2+n^2$ with some numbers $m>n$. The case $m^2-n^2$ covers "the most" numbers (only those $\equiv 2 \mod 4$ remain), the rest is covered by $2mn$.

• Hi @MooS, could you please explain the last line of your answer in a bit more detail... would be very grateful :) – pranav Apr 21 '15 at 9:21
• You should show that every odd number can be written as $m^2-n^2$. Any even number $2d$ greater than $2$ can be written as $2d=2 \cdot d \cdot 1$, hence is of the form $2mn$. – MooS Apr 21 '15 at 9:22
• Hi @MooS, I have tried something out and have accordingly edited the question, please have a look and tell me if I am correct or not... would be grateful...
:) – pranav Apr 21 '15 at 9:29
• Note that $(m^2-n^2,2mn,m^2+n^2)$ only gives the primitive triples. For example it does not give the triple $(3,0,3)$. – punctured dusk Apr 25 '15 at 7:37
• This is not relevant in this context. Note that those triples are only primitive if $m$ and $n$ are co-prime. Furthermore I do not think triples $(n,0,n)$ would be valid in this exercise, since that would make everything trivial. – MooS Apr 25 '15 at 7:46

I. Yes. Proof without words: $$(\color{brown}{2m})^2+(m^2-1)^2 = (m^2+1)^2$$ $$(\color{brown}{2m+1})^2+(2m^2+2m)^2 = (2m^2+2m+1)^2$$

II. Higher. To prove it for quadruples is easier since the even and odd cases can be combined into a single identity, $$n^2+(n+1)^2+(n^2+n)^2 = (n^2+n+1)^2$$ and for quintuples, $$n^2 + (n-2)^2 + (2n+1)^2 + (3n^2+2)^2 = (3n^2+3)^2$$

If $n$ is an odd integer, let $m = \frac{n^2 - 1}2$; then $m+1$, $m$ and $n$ are a Pythagorean triple ($n^2 = 2m+1$). If $n$ is even, let $m = \frac{n^2 - 4}4$; then $m+2$, $m$, and $n$ are a Pythagorean triple ($n^2 = 4m+4$).

If $t$ is odd and $t\geq 3$ then $m=(t+1)/2$ and $n=(t-1)/2$ are positive integers with $m^2-n^2=t.$ And $(m^2-n^2,2mn, m^2+n^2)=(t,2mn, m^2+n^2)$ is a P. triple. If $t$ is even and $t\geq 4$ let $m= t/2$ and $n=1$. Then $m,n$ are positive integers with $m>n$, so $(m^2-n^2,2mn,m^2+n^2)=(m^2-n^2,t,m^2+n^2)$ is a P. triple.

For primitive triples, side $$A$$ can be any odd number $$>2$$ and side $$B$$ can be any integer multiple of $$4$$. If we include multiples like $$(6,8,10)$$, then the even numbers that are not multiples of $$4$$ are included. Therefore, every $$n>2\land n\in\mathbb{N}$$ is part of at least one Pythagorean triple.
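The constructions in the answers above are easy to check mechanically. A Python sketch (an addition, not part of the original page) of the two cases — odd $t$ gives $t = m^2 - n^2$ with $m = (t+1)/2$, $n = (t-1)/2$; even $t$ gives $t = 2mn$ with $m = t/2$, $n = 1$:

```python
def triple_containing(t):
    """Return a Pythagorean triple (a, b, c) with a^2 + b^2 = c^2
    that contains t, for any integer t > 2."""
    if t % 2 == 1:                      # odd case: t = m^2 - n^2
        m, n = (t + 1) // 2, (t - 1) // 2
    else:                               # even case: t = 2mn
        m, n = t // 2, 1
    return (m * m - n * n, 2 * m * n, m * m + n * n)


# Every integer from 3 to 100 really is a member of its triple.
checks = [(t, triple_containing(t)) for t in range(3, 101)]
```

For example, both `triple_containing(3)` and `triple_containing(4)` come out as `(3, 4, 5)`.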
2019-10-23 00:40:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 7, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8944845795631409, "perplexity": 177.9306622680004}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987826436.88/warc/CC-MAIN-20191022232751-20191023020251-00152.warc.gz"}
https://www.infoq.com/articles/iterative-continuous?utm_source=articles_about_JUnit&utm_medium=link&utm_campaign=JUnit
Iterative, Automated and Continuous Performance

Our industry has learned that if we deliver intermediate results and revisit functional requirements we can avoid delivering the wrong system late. We have learned that if we perform unit and functional tests on a regular basis, we will deliver systems with fewer bugs. And though we are concerned with the performance of our applications, we rarely test for performance until the application is nearly complete. Can the lessons of iterative, automated and continuous that we've applied to functional testing apply to performance as well?

Today, we may argue that a build that completes with unit testing should be performed on an hourly, daily, or weekly basis. We may argue on 100% coverage vs. 50% coverage. We may argue and discuss and ponder about specific details of the process. But we all pretty much agree that performing automated builds completed with unit testing on a regularly scheduled basis is a best practice. Yet our arguments regarding performance testing tend to be limited to: don't.

Premature or Just in Time

There are several reasons why performance testing gets put off to the end. Many of these reasons are very similar to why we rarely, if ever, automate the testing of our applications. Setting up an automated build takes time, effort and commitment. To justify to the business that it is in their best interest to make this commitment is simply difficult. After all, we are programmers and we are expected to crank out features and not spend our time testing. Testing is for the testers. Writing unit tests takes time, time which is better spent developing features, and so on. However, we have been able to sneak this into our development process, as organizing test code into unit tests only formalized what we were already doing. Thus the incremental investment needed to support this formalization wasn't all that large.
Once businesses started to see the benefits, things have only gotten better. As much as one might believe that extending this to performance testing would be a natural progression, it simply hasn't happened. The investment needed to support performance testing is viewed as being much larger and the potential benefits are seen as being much smaller. After all, we can't do performance testing on a system that is under development, as there is nothing to test; after all, isn't performance just a matter of more or better hardware? There are a couple of reasons why the investment is viewed as being larger for performance testing. Unlike unit testing, performance testing isn't something that developers are already doing. This implies a new activity rather than the formalization of something that is already being done. Yet the unit testing that we do today is much more than the informal testing done prior to unit testing becoming a formal discipline. In this regard, there is a difference between the perceived and the actual investment needed to introduce formal performance testing into the development cycle. There are other arguments against early performance testing: it is premature, there is nothing to test, very little can be gained by it, it is micro-performance tuning, it is too granular to be useful as we can only performance test complete systems, setting up a performance test is too complex and takes too much time, the process is fickle, and so on. These reasons are not without substance. If you talk to a manager from almost any performance testing group, you'll hear that the biggest consumer of time is just getting the application operational in a test environment. This task can be so arduous that it actually limits the number of applications they can test. Some have whispered to me that they can performance test less than 50% of all the applications they've deployed. There is no question that one should almost always avoid premature optimizations.
This is especially true if the optimization is complex, time consuming to implement, and the corresponding returns are unknown. For example, if we are sorting a list, quite often a simple bubble sort is all we'll really need. We only need more complex sorts if the sort time is critical and the quantity of data warrants it. If we don't have a good handle on either of these requirements, implementing a more complex sorting strategy would be premature optimization.

Testing Components

With continuous performance testing we need to focus on the more granular aspects of our systems, components and frameworks. Just as is the case with unit testing, we can only expect to find certain classes of problems when we test these artifacts in isolation. A case in point is contention between components or misuse of frameworks resulting in response times higher than expected; these are things that will only come out in a full integration test. However, understanding how much CPU, memory, disk and network I/O we need can help us predict and take preventive action (rather than apply a premature optimization). On the question of cost there is no doubt; performance testing will add to the cost of development. Unlike functional testing, performance testing is not something that developers regularly do, so there isn't a clear path to formalization as there was with functional testing. However, there are two types of costs being considered: the direct cost of the effort and the hidden cost of having to fix all of the performance problems as they randomly appear in the final build. The immediate economic reward (in terms of both money and time/schedule) is to performance test only at the end of the project's development cycle. But this is a false economic reward. It is said that with less testing you need fewer man hours to develop your application. Yet it does nothing to account for risk.
You may have more money in your pocket if you drive with no auto insurance, but if you ever get into an accident you've lost. Given the number of "car wrecks" we witness in this industry, not testing is like driving without insurance.

Mocks for Performance

But there are things we can do to help reduce costs. Developers create mocks and other things needed to unit test. While the mocks will most likely not include the things that are needed for a performance test, in most cases they can be easily modified to do so. Take the mock for a credit card service found in Listing 1.

public class MockCreditAuthorizationServiceProvider
        implements CreditAuthorizationServiceProvider {

    private double rejectPercentage;
    private Random random;

    public MockCreditAuthorizationServiceProvider() {
        // set the rejectPercentage from a property
        random = new Random();
    }

    public void authorize(AuthorizationRequest request) {
        if (random.nextDouble() > rejectPercentage)
            request.authorize();
        else
            request.deny();
    }
}

Listing 1. Mock credit card authorization with denial simulation

The mock is set up for functional testing. It adheres to the functional requirements and it should validate a transaction according to some adjustable rate. This mock is good enough to test the functional requirements for the handling of both accepted and rejected credit card authorizations. However, to test for performance we also need to mock the service level agreements that we have with the authorization service. The mock must not only authorize; it must do so in the time it normally takes to perform an authorization. If the system will only consider 5 authorization requests at a time, then this also needs to be encoded into the mock. These requirements have been added to our original mock as seen in Listing 2.
public class MockCreditAuthorizationServiceProvider
        implements CreditAuthorizationServiceProvider {

    private double rejectPercentage;
    private Random random;
    private Exponential serviceDistribution;

    public MockCreditAuthorizationServiceProvider() {
        // set the rejectPercentage and meanServiceTime from a property
        random = new Random();
        this.serviceDistribution = new Exponential(meanServiceTime);
    }

    public void authorize(AuthorizationRequest request) {
        try {
            Thread.sleep(this.serviceDistribution.nextLong());
        } catch (InterruptedException e) {}
        if (random.nextDouble() > rejectPercentage)
            request.authorize();
        else
            request.deny();
    }
}

Listing 2. Mock credit card authorization with denial and service time simulation

Yet another not so insignificant challenge is simply getting the application running in a suitable testing environment. But this has also been an issue for those doing functional testing, and they've worked out a solution: do it continuously. The obvious solution for those wanting to do performance testing is to piggyback off that effort.

Tooling

In the beginning we had JUnit, a neat little tool that helped us organize our tests, execute them and show us the results. We had ANT, a tool written in anger at the complexities of Make. From these humble beginnings we are witnessing an explosion of tools to support continuous builds and unit testing. Yet there is seemingly little support for continuous performance testing. While it is true that none of the existing tools advertise support for performance testing, it does exist. As the lack of advertising may suggest, this support is limited. The first limitation is in the type of testing supported. Currently we have ANT, Maven, and CruiseControl, which, by virtue of their integration with ANT, all have plug-ins to support the automated running of Apache JMeter. Apache JMeter came out of the need to performance test HTTP servers and applications.
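The same idea translates directly to other languages. As an illustration (not from the original article), here is a Python sketch of a mock that honors both the functional contract and a simulated exponential service-time distribution; the class and parameter names are invented for this example:

```python
import random
import time


class MockCreditAuthorizationService:
    """Test double that simulates both the reject rate and the
    exponentially distributed service time of the real provider."""

    def __init__(self, reject_percentage=0.05, mean_service_time=0.01):
        self.reject_percentage = reject_percentage
        self.mean_service_time = mean_service_time  # seconds

    def authorize(self, request):
        # Sleep for an exponentially distributed interval so load tests
        # observe realistic response times rather than instant replies.
        time.sleep(random.expovariate(1.0 / self.mean_service_time))
        if random.random() > self.reject_percentage:
            request['status'] = 'authorized'
        else:
            request['status'] = 'denied'
        return request
```

As in the Java listings, the reject rate and mean service time would normally come from configuration so the same mock serves both functional and performance runs.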
It supports other types of testing, but this is limited to a few well defined components that include JMS, WS, and JDBC. However, Apache JMeter is quite extensible, and if we are to test our components this is exactly what we'd have to do: extend Apache JMeter. Not an ideal solution in many cases. The only other choice is to hand roll our own stress testing harness. Once again, a less than desirable solution. While tool support may be weak, we expect that it will improve over time just as tool support for continuous testing has improved over time.

Case in Point

Should lack of tool support delay a push to continuous performance testing? The answer will depend a bit upon how adventurous your organization is willing to get. But before completely dismissing the idea of introducing it, consider this. There are a few organizations that have instituted a continuous performance program, and though the evidence may be anecdotal, the results have been promising. In one case, the end product was composed of the efforts of 6 different development teams. The performance tuning team asked that each of the teams run performance tests during the development process. The component with the most performance difficulties was delivered by the one team that did not comply with the request.

Conclusion

Dave Thomas writes about broken windows. Just as continuous builds and unit testing fix "broken windows", continuous performance testing will also work to fix "broken performance windows". Kent Beck has described continuous testing of automated builds using the analogy of driving a car. As you move down the road your eyes tell you what micro-adjustments need to be made in order to stay in the center of your lane. You wouldn't think of driving with your eyes closed, opening them only for a second to see where you are, for fear of missing a curve or drifting out of your lane. When you are first learning it is hard, but it becomes easier over time.
What they are saying is that by being iterative, automated, and continuous you are developing with eyes wide open.

• Maybe if you stop calling it performance testing.... by William Louth

Hi Kirk, The real problem is that people fail to see that performance is engineered and not tested. Most people write code and then test. Why would this be any different for performance? If you have not designed and developed for performance then you are effectively making a judgement call on the risks associated with the eventual delivery of a poorly performing (and possibly scaling) solution.
Performance testing in the early stages can be useful to validate performance models and to monitor the resource consumption behavioral changes of the software under construction across releases, even into production, but testing should be based on both a software and a system execution model; otherwise all you are proving is that you can bring down a system with a particular load. Hey, that is easy, but what does one really learn of the underlying behavior? Not much, especially as most tools, including those you referred to, do not offer any correlation with activities, paths, resource usage and service delays. Most continuous performance initiatives fail because the incentives and controls (or lack thereof) invite developers to set parameters that pass the tests. What typically happens is that management focuses more on passing the tests, even if they bear little resemblance to reality, and less on extracting knowledge and engineering best practices. The kind of automation I see divorces the engineer from the engineering activity. It becomes a competition focused on beating the clock under the tested conditions, ignoring the implications for different workload patterns. Performance testing works when people first start with performance engineering. Performance engineering != tuning or premature optimization. regards, William

• Re: Maybe if you stop calling it performance testing.... by Kirk Pepperdine

Hi William, You make some very good points and clarifications in your comments. I talked to a project manager just prior to authoring this article and I asked his opinion regarding performance testing during development. He thought it was a complete waste of time because of the bias that I know that you face on a day to day basis. In reviewing the article, Heinz commented that this was something that we'd never see.
I guess this means we need more case studies from people like yourself that can clearly demonstrate the benefits of testing for performance early on in the application's development lifecycle. -- Kirk

• Re: Maybe if you stop calling it performance testing.... by William Louth

Hi Kirk, Yes indeed, performance testing during development can be a complete waste, especially when the application is largely incomplete, likely to change drastically, and the team has very little data whatsoever regarding the workload patterns, deployment topology and system hardware. We test to validate and verify the software, but without a model testing is meaningless. What is its purpose in terms of performance testing? This is where I am reminded of the line in Batman Begins where Bruce's father asks him: why do we fall? So "we can LEARN" to get back up. Most performance testing activity I see conducted pays very little attention to the knowledge acquired, and this is probably because the tools and processes create a disconnect between the test cases and the underlying software execution model, and the results are simply boolean logic. Knowledge acquisition is largely absent or given lip service. The level of performance engineering applied to a project should reflect the risks - risks viewed from a business perspective and not by the development team or project manager. If there are sufficiently high risks then one must construct models and ensure there is sufficient instrumentation in place to collect the required data to validate the model and monitor the software. You can construct a large proportion of the software execution model by not testing but simply tracing and metering possible resource usage ranges for system and component boundaries. From this you can already make assessments of whether the eventual software will meet performance objectives.
Too many round trips between client, server, and database (or message backend) are not going to deliver sub-second response times no matter what you throw at it. Performance testing in the sense of load generators is important when constructing the system execution model, which focuses more on bottlenecks related to concurrency. Performance testing is viewed negatively today because testing teams rarely extract knowledge of the execution model and make it available across the complete life cycle. Performance testers have the potential to provide enormous benefit in resolving issues in operations, where there is very little application knowledge, if only they focused less on the load and more on the flow. Understanding the execution flow and relating it to incidents in production allows them to better pinpoint possibly faulty or overloaded components and systems not easily detected by existing low-level health monitors. regards, William • Refactoring to improve performance.... What level of performance is a requirement for your project? Will your architecture/design support it? If not, by using this approach you will have to refactor it in later. The type of refactoring needed to fix performance issues is usually large and extremely expensive. So there is a cost trade-off: pay less now and, if the performance target is not met, refactor later for much more time/money, or build it in upfront and pay the price of design carry. Testing is a great tool to let you know when things are broken, but many of the cross-cutting issues are not simple to refactor in later. In an Agile setting the customer/product owner should make this decision after being fully informed of the technical costs of both solutions. • Testing isn't the only thing that's needed! 
Well, yes, you *do* need to have continuous performance testing, both single-user (performance "unit testing") and multi-user load and scalability testing (performance "integrated testing"). But there are lots of *other* things you need to be doing! 1. Up-front software performance engineering. Build performance into the application, build metrics into the application. Do a Google search for "SPE Software Performance Engineering". 2. Modeling!! Testing takes time -- lots of it. Modeling is fast and will give you a good idea of whether you're going to sink or float. Try a Google search for "Guerrilla Capacity Planning". 3. Post-deployment performance monitoring. You have to continuously monitor resource usage on your servers and the response time your customers are seeing. Performance engineering is a full-time job -- there's no point in the application life cycle where you *don't* have to pay attention to performance. If you "just test", you aren't doing it all. And if you aren't even testing, well .... • Re: Testing isn't the only thing that's needed! Hi Edward, It is very refreshing to hear someone else discuss performance engineering with an emphasis on modeling and data collection. I was starting to think I was the omega man of SPE, at least on Java-related websites such as InfoQ and TSS. --------------------------------------------------------------------------- [All] I hope the following graphic posted on my blog shows the major activities of the discipline, which, as has been pointed out, do not necessarily take up as much time as the testing activity. blog.jinspired.com/?p=38 It is important to note that SPE spans the complete life cycle and ensures that during the construction of the software the monitoring concerns of operations are already factored in. 
The benefits of SPE are not confined solely to the performance of the system once one starts to see it as a knowledge acquisition exercise that provides a common and sufficiently high-level view of the software and system behavior to be used in verifying system behavior, validating assumptions, and performing root cause analysis of issues arising during all phases, especially in PRODUCTION, where this knowledge becomes paramount for fast problem resolution. Kind regards, William Louth JXInsight Product Architect CTO, JINSPIRED www.jinspired.com • Re: Testing isn't the only thing that's needed! Performance engineering is a full-time job -- there's no point in the application life cycle where you *don't* have to pay attention to performance. If you "just test", you aren't doing it all. And if you aren't even testing, well .... Agreed, and these are topics that I've covered in other publications. That said, they are more accepted than continuous performance testing. And William, I don't think you'll find that they've been ignored on InfoQ or TSS, they just need someone to write about them. And if you're keen..... ;-) Kirk • Re: Testing isn't the only thing that's needed! Hi Kirk, I am always writing about software performance engineering (instrumentation, data collection, performance models, metric analysis, ...). It just happens to be on my blog, which for some strange reason has never been referenced (linked to) by a well-known Java performance tuning site that publishes related articles each month. Maybe this is indicative of the "performance testing" and "ad hoc tuning & troubleshooting" mentality prevalent in the industry. regards, William • Re: Testing isn't the only thing that's needed! 
It's refreshing to hear people actually talk about complete life-cycle performance engineering. For example, Neil Gunther is so enamored of the back-of-the-envelope modeling approach that he gives short shrift to the rest of the tools and techniques, with comments like: "You, as the performance analyst or planner, only have to shine the light in the right place and then stand back while others flock to fix it." www.perfdynamics.com/Manifesto/gcaprules.html Now I'm personally a big fan of queuing models, and Gunther appears to be the only one outside of a university who teaches that approach. But the problem with queuing models is that they're difficult to understand. Difficult to understand means difficult to validate and test. As far as I'm concerned, if you're talking about an application development framework like, for example, Rails, it ought to come with a complete set of performance engineering tools built in. It ought to be able to measure the end-to-end response time users are seeing, it ought to be able to look at system and process resource usage on the servers and integrate it for capacity management purposes, etc. I shouldn't have to go buy a separate performance monitoring tool set. But that's exactly what I have to do for many Java-based frameworks. I've lost count of how many of those tools are out there for Java applications. And they're expensive. You can spend hundreds of thousands of dollars on these tool sets. • Re: Testing isn't the only thing that's needed! Hi Edward, Well, I must be fortunate, because Neil actually published a performance analysis report on virtualization/hyperthreading referencing JXInsight as the tool that was used to collect profiled traces. I have Neil's book and it is very accessible, providing a good introduction to queueing systems for the purpose of performance modeling. 
I do not think that a framework like Rails also has to be a performance monitoring tool. I think it should provide specific extension points into the framework and derived applications that enable various levels of performance monitoring (metrics, traces, diagnostics, and metering) to be introduced by other companies (open source or commercial). This ensures that innovation in performance management can continue while keeping the Rails team focused on other important aspects that cannot be readily designed and delivered by others. Maybe in the Ruby/Rails world your suggestion might work because there is effectively only one dominant framework, but in the Java world this is not the case. Thus we need a set of APIs that work up and down the technology/framework stack and across platforms and middleware. It would be wasteful for each framework to offer its own custom diagnostics API set (but I am sure this will happen; such is life in the Java world). Not all Java monitoring solutions are expensive. JXInsight is a far superior offering (IMHO) to most of the +100,000 USD Java performance monitoring & problem diagnostics solutions and yet it is very affordable. regards, William • Re: Testing isn't the only thing that's needed! I wanted to point out that a performance model need not be a very sophisticated queuing-based model. A software execution model, which focuses on detailing the (common) paths/flows of execution through a system, can be invaluable during the early stages in identifying potential bottlenecks before any load/stress testing is performed. A software execution model can be as simple as a catalog of use cases with a corresponding set of mapped process- and component-level execution flows that include typical resource usage/consumption ranges. 
From this it should be possible to identify the number of possible round trips between client->server->data(grid|base|bus) as well as the number of component boundaries crossed. This information not only helps with performance engineering during development, it can also help with defining deployment topologies as well as identifying possible failure points in a production application for operations to investigate when an alert or incident is reported. regards, William • Re: Testing isn't the only thing that's needed! Hi Edward, Not all Java monitoring solutions are expensive. JXInsight is a far superior offering (IMHO) to most of the +100,000 USD Java performance monitoring & problem diagnostics solutions and yet it is very affordable. Agreed, you don't need to spend tens of thousands of dollars on this. My performance tuning course is 100% open source. This isn't to say that products like JXInsight are not worth the money. Once you get going you may find the investment in a commercial tool to be worthwhile. More to the point, you can get quite a bit done with the OSS tools that are available. Kirk • Re: Testing isn't the only thing that's needed! My performance tuning course is 100% open source. Should say: based 100% on open source tools - Kirk • Re: Testing isn't the only thing that's needed! Just to be clear, none of the open source tools above would actually help you derive a performance model like the one I described above. They may facilitate testing (generating a load) during a tuning activity, but they are not a performance engineering solution. To understand the concepts of SPE one does not even need to use a tool, though it does help in setting the context for various activities. SPE is much MORE than performance testing or tuning. 
JXInsight is not just a performance tuning tool; it is a comprehensive runtime analysis solution that offers problem diagnostics via object & request runtime state imaging (JXInsight Diagnostics), distributed profiling & tracing (JXInsight Trace), remote system & component state inspection (JXInsight JVMInsight), extensible resource metering (JXInsight Probes), service management monitoring (JXInsight Metrics), and true database transaction analysis (JXInsight Transact - JDBInsight). Performance is admittedly an important aspect of execution flow, but it is not the only one. Understanding the state changes caused by flows, especially exception-generating flows, is important for reliability and availability as well as system test verification (delta debugging). regards, William • Re: Maybe if you stop calling it performance testing.... Hello, I agree with your point regarding performance engineering. That is why validating your architecture continuously during development is so important. Will brought up good points like massive DB calls. There would be much more to add. This is definitely not premature optimization. It definitely makes no sense to use micro benchmarks to test for production performance. Every phase in the lifecycle has to test for different types of problems. As many performance and, I think, almost all scalability problems are architectural problems, solving them always means changes to the architecture. Doing this late in the development process increases risk and cost. My experience is that a lot of performance problems (about 50 percent) can already be found during development and CI testing. This is also what our customers are telling us. On the other side, I fully agree that automation is the key here. 
If I have to do everything manually, performance testing gets too expensive. That is why automation and integration into an existing toolchain are so important. Delivering reports that automatically validate architectural rules based on testing transactions helps people identify potential problems fast. While implementing performance management (I call it that as it has a wider scope than just testing) imposes some costs, it saves you a lot of money later on. If testing cycles and production troubleshooting can be minimized you get your money back fast and even save a lot. I know a number of companies which are doing it already, and they say they profit from continuously managing performance across the application lifecycle. - Alois
https://brilliant.org/problems/fastidious-2/
Fastidious (Number Theory, level pending). \(N+a^2+b^2\) is a three-digit number which is divisible by 5. \(a=10x+y\) and \(b=10x+z\), where \(z\) is a prime number, and \(x\) and \(y\) are natural numbers. If \(a+b=31\), then find the value of \(N.\)
https://simondobson.org/blog/index-23.html
# Big, or just rich? The current focus on "big data" may be obscuring something more interesting: it's often not the pure size of a dataset that's important. The idea of extracting insight from large bodies of data promises significant advances in science and commerce. Given a large dataset, "big data" techniques cover a number of possible approaches: • Look through the data for recurring patterns (data mining) • Present a summary of the data to highlight features (analytics) • (Less commonly) Identify automatically from the dataset what's happening in the real world (situation recognition) There's a wealth of UK government data available, for example. Making it machine-readable means it can be presented in different ways, for example geographically. The real opportunities seem to come from cross-overs between datasets, though, where they can be mined and manipulated to find relationships that might otherwise remain hidden, for example the effects of crime on house prices. Although the size and availability of datasets clearly makes a difference here -- big open data -- we might be confusing two issues. In some circumstances we might be better looking for smaller but richer datasets, and for richer connections between them. Big data is a strange name to start with: when is data "big"? The only meaningful definition I can think of is "a dataset that's large relative to the current computing and storage capacity being deployed against it" -- which of course means that big data has always been with us, and indeed always will be. It also suggests that data might become less "big" if we become sufficiently interested in it to deploy more computing power to processing it. The alternative term popular in some places, data science, is equally tautologous, as I can't readily name a science that isn't based on data. (This isn't just academic pedantry, by the way: terms matter, if only to distinguish what topics are, and aren't, covered by big data/data science research.) 
It's worth reviewing what big data lets us do. Having more data is useful when looking for patterns, since it makes the pattern stand out from the background noise. Those patterns in turn can reveal important processes at work in the world underlying the data, processes whose reach, significance, or even existence may be unsuspected. There may be patterns in the patterns, suggesting correlation or causality in the underlying processes, and these can then be used for prediction: if pattern A almost always precedes pattern B in the dataset, then when I see a pattern A in the future I may infer that there's an instance of B coming. The statistical machine learning techniques that let one do this kind of analysis are powerful, but dumb: it still requires human identification and interpretation of the underlying processes to conclude that A causes B, as opposed to A and B simply occurring together through some acausal correlation, or being related by some third, undetected process. A data-driven analysis won't reliably help you to distinguish between these options without further, non-data-driven insight. Are there cases in which less data is better? Our experience with situation recognition certainly suggests that this is the case. When you're trying to relate data to the real world, it's essential to have ground truth, a record of what actually happened. You can then make a prediction about what the data indicates about the real world, and verify that this prediction is true or not against known circumstances. Doing this well over a dataset provides some confidence that the technique will work well against other data, where your prediction is all you have. In this case, what matters is not simply the size of the dataset, but its relationship to another dataset recording the actual state of the world: it's the richness that matters, not strictly the size (although having more data to train against is always welcome). 
Moreover, rich connections may help with the more problematic part of data science, the identification of the processes underlying the dataset. While there may be no way to distinguish causality from correlation within a single dataset -- because they look indistinguishably alike -- the patterns of data points in the one dataset may often be related to patterns and data points in another dataset in which they don't look alike. So the richness provides a translation from one system to another, where the second provides discrimination not available in the first. I've been struggling to think of an example of this idea, and this is the best I've come up with (and it's not all that good). Suppose we have tracking data for people around an area, and we see that person A repeatedly seems to follow person B around. Is A following B? Stalking them? Or do they live together, or work together (or even just close together)? We can distinguish between these alternatives by having a link from people to their jobs, homes, relationships and the like. There's a converse concern, which is that poor discrimination can lead to the wrong conclusions being drawn: classifying person B as a potential stalker when he's actually an innocent who happens to follow a similar schedule. An automated analysis of a single dataset risks finding spurious connections, and it's increasingly the case that these false-positives (or -negatives, for that matter) could have real-world consequences. Focusing on connections between data has its own dangers, of course, since we already know that we can make very precise classifications of people's actions from relatively small, but richly connected, datasets. Maybe the point here is that focusing exclusively on the size of a dataset masks both the advantages to be had from richer connections with other datasets, and the benefits and risks associated with smaller but better-connected datasets. 
Looking deeply can be as effective (or more so) as looking broadly.

# Some improvements to SleepySketch

It's funny how even early experiences change the way you think about a design. Two minor changes to SleepySketch have been suggested by early testing. The first issue is obvious: milliseconds are a really inconvenient way to think about timing, especially when you're planning on staying asleep for long periods. A single method in SleepySketch to convert from more programmer-friendly days/hours/minutes/seconds times makes a lot of difference. The second issue concerns scheduling -- or rather regular scheduling. Most sampling and communication tasks occur on predictable schedules, say every five hours. In an actor framework, that means the actor instance (or another one) has to be re-scheduled after the first has run. We can do this within the definition of the actor, for example using the post() action:

```cpp
class PeriodicActor : public Actor {
    void post();
    void behaviour();
};

...

void PeriodicActor::post() {
    Sleepy.scheduleIn(this, Sleepy.expandTime(0, 5));
}
```

(This also demonstrates the expandTime() function to re-schedule after 0 days and 5 hours, incidentally.) Simple, but bad design: we can't re-use PeriodicActor on a different schedule. If we add a variable to keep track of the repeating period, we'd be mixing up "real" behaviour with scheduling; more importantly, we'd have to do that for every actor that wants to run repeatedly. A better way is to use an actor combinator that takes an actor and a period, and creates an actor that first re-schedules itself to run after the given period and then runs the underlying actor. (We do it this way so that the period isn't affected by the time the actor actually takes to run.)

```cpp
Actor *a = new RepeatingActor(new SomeActor(), Sleepy.expandTime(0, 5));
Sleepy.scheduleIn(a, Sleepy.expandTime(0, 5));
```

The RepeatingActor runs the behaviour of SomeActor every 5 hours, and we initially schedule it to run in 5 hours. 
We can actually encapsulate all of this by adding a method to SleepySketch itself:

```cpp
Sleepy.scheduleEvery(new SomeActor(), Sleepy.expandTime(0, 5));
```

to perform the wrapping and initial scheduling automatically. Simple sleepy sketches can now be created at set-up, by scheduling repeating actors, and we can define the various actors and re-use them in different scheduling situations without complicating their own code.

A simple radio survey establishes the ranges that the radios can manage. The 2mW XBee radios we've got have a nominal range of 100m -- but that's in free air, with no obstructions like bushes, ditches, and houses, and not when the radio is enclosed in a plastic box to protect it from the elements. There's a reasonable chance that these obstacles will reduce the real range significantly. A radio survey is fairly simple to accomplish. We load software that talks to a server on the base station -- something as simple as possible, like sending a single packet with a count every ten seconds -- and keep careful track of the return values coming back from the radio library. We then use the only output device we have -- an LED -- to indicate the success or failure of each operation, preferably with an indication of why it failed if it did. (Three flashes for unsuccessful transmission, five for no response received, and so forth.) We then walk away from the base station, watching the behaviour of the radio. When it starts to get errors, we've reached the edge of the effective range. With two sensor motes, we can also check wireless mesh networking. If we place the first mote in range of the base station, we should then be able to walk further and have the second mote connect via the first, automatically. That's the theory, anyway... (One extra thing to improve robustness: if the radios lose connection or get power-cycled, they can end up on a different radio channel to the co-ordinator. To prevent this, the radio needs to have an ATJV1 command issued to it. 
The easiest way to do this is at set-up, through the advanced settings in X-CTU.) The results are fairly unsurprising. In an enclosure, in the field, with the base station inside a house (and so behind double glazing and suchlike), the effective range of the XBees is about 30--40m -- somewhat less than half the nominal range, and not really sufficient to reach the chosen science site: another 10--20m would be fine. On the other hand, the XBees mesh together seamlessly: taking a node out of range and placing another between it and the base station connects the network with no effort. This is somewhat disappointing, but that's what this project is all about: the practicalities of sensor networking with cheap hardware. There are several options to improve matters. A higher-powered radio would help: the 50mW XBee has a nominal range of 1km and so would be easily sufficient (and could probably be run at reduced transmission power). A router node halfway between the base station and the sensors could extend the network, at the cost of an additional non-sensing component. Better antennas on the 2mW radios might help too, especially if they could be placed outside the enclosure. It's also worth noting that the radio segment is horrendously hard to debug with only a single LED for signalling. Adding more LEDs might help, but it's still a very poor debugging interface, even compared to printing status messages to the USB port.

# Sleepy sketches

Keeping the microcontroller asleep as much as possible is a key goal for a sensor system, so it makes sense to organise the entire software process around that. The standard Arduino software model is, well, standard: programs ("sketches") are structured in terms of a setup() function that runs once when the system restarts and a loop() function that is run repeatedly. This suggests that the system spends its time running, which possibly isn't all that desirable: a sensor system typically tries to stay in a low-power mode as much as possible. 
The easiest way to do this is to provide a programming framework that handles the sleeping, and where the active bits of the program are scheduled automatically. There are at least two ways to do this. The simplest is a library that lets loop() sleep, either directly or indirectly. This is good for simple programs and not so good for more complicated ones, as it means that loop() encapsulates all the program's logic in a single block. A more modern and compositional approach is to let program fragments request when they want to run, and have a scheduler handle the sleeping, waking up, and execution of those fragments. That lets (for example) one fragment decide at run-time to schedule another. If we adopt this approach, we have to worry about the fact that one fragment might lock out another. A desktop system might use threads; this is more problematic for a microcontroller, but an alternative is to force all fragments to execute only for a finite amount of time, so that the scheduler always gets control back. This might lead to a fragment not running when it asked (if other fragments were still running), but if we assume that the system spends most of its time asleep anyway, there will be plenty of catch-up time. Doing this results in an actor system, where the fragments are actors that are scheduled from an actor queue. Turning this into code, we get the SleepySketch library: a library for building Arduino sketches that spend most of their time sleeping. There are a few wrinkles that need to be taken care of for running on a resource-constrained system. Firstly, the number of actors available is fixed at start-up (defaulting to 10), so that we can balance RAM usage. (With only 2k to play with, we need to be careful.) 
Secondly, we use a class to manage the sleeping functionality in different ways: a BusySleeper that uses the normal delay() function (a busy loop) with no power-saving functions, a HeavySleeper that uses the watchdog timer to shut the system down as far as possible, and possibly some other intermediate strategies. Actors are provided by sub-classing the Actor class and providing a behaviour. We also allow pre- and post-behaviour actions to define families of actors, for example sensor observers. We separate the code for an actor from its scheduling. The standard library uses singleton classes quite a lot, so for example the Serial object represents the USB connection from an Arduino to its host computer and is the target for all methods. We use the same approach and define a singleton, Sleepy. The program structure then looks something like this. If we assume that we've defined an actor class PingActor, then we can do the following:

```cpp
void setup() {
    Serial.begin(9600);
    Sleepy.begin(new HeavySleeper());
    Sleepy.scheduleIn(new PingActor("Ping!"), 10000);
}

void loop() {
    Sleepy.loop();
}
```

The setup() code initialises the serial port and the sleepy sketch using a HeavySleeper, and then schedules an actor to run in 10000ms. The loop() code runs the actors while there are actors remaining to schedule. If the PingActor instance just prints its message, then there will be no further actors to execute and the program will end; alternatively the actor could schedule further actors to be run later, and the sketch will pick them up. The sketch will remain asleep for as long as possible (probably for over 9s between start-up and the first ping), allowing for some fairly significant power saving. This is a first design, now just about working. It's still not as easy as it could be, however, and needs some testing to make sure that the power savings do actually materialise. 
# Understanding Arduino sleep modes: the watchdog timer

The Arduino has several sleep modes that can be used to reduce power consumption. The most useful for sensor networks is probably the one that uses the watchdog timer.

Powering-down the Arduino makes a lot of sense for a sensor network: it saves battery power, allowing the system to survive for longer. Deciding when to power the system down is another story, but in this post we'll concentrate on documenting the mechanics of the process. The details are necessarily messy and low-level. (I've been greatly helped in writing this post by the data sheet for the Atmel ATmega328P microcontroller that's used in the Arduino Uno, as well as by a series of blog posts by Donal Morrissey that also deal with other sleep modes for the Atmel.)

### Header files and general information

To use the watchdog timer, a sketch needs to include three header files:

```cpp
#include <avr/sleep.h>
#include <avr/power.h>
#include <avr/wdt.h>
```

These provide definitions for various functions and variables needed to control the watchdog timer and manage some of the other power functions.

### Power modes

A power (or sleep) mode is a setting for the microcontroller that allows it to use less power in exchange for disabling some of its functions. Since a microcontroller is, to all intents and purposes, a small computer on a chip, it has a lot of sub-systems that may not be needed all the time. A power mode lets you shut these unneeded sub-systems down. The result saves power but reduces functionality.

Power modes are pretty coarse control mechanisms, and can shut down more than you intend. If your project is basically software-driven, with the Arduino making all the decisions, then a "deep" power-saving mode is ideal; on the other hand, if you rely on hardware-based signals at all, a "deep" sleep will probably ignore your hardware and the Arduino may never wake up.
The watchdog timer is used to manage the "power-down" mode, the deepest sleep mode with the biggest power savings.

### Watchdog timer

The Arduino's watchdog timer is a countdown timer that's driven by its own oscillator on the microcontroller. It's designed to run even when all the other circuitry is powered down, meaning that the microcontroller is drawing as little power as possible without actually being turned off completely.

Why "watchdog" timer? The basic function of a watchdog timer is to "bite" after a certain period, where "biting" means raising an interrupt, re-setting the system, or both. A typical use of a watchdog is to make a system more robust to software failures. Since the watchdog is handled by the microcontroller's hardware, independent of any program being run, it will still bite even if the software gets stuck in an infinite loop (for example). Some designers set the watchdog ahead of complex operations, so that if the operation fails, the system will reset in a short amount of time and end up back in a known-good configuration. At the end of a successful operation, the program disables the watchdog (before it bites) and carries on. Of course this assumes that the operation completes before the watchdog bites, which means the programmer needs to have a good idea of how long it will take.

### Setting the time-out period

It's as well to understand how watchdog timers on microcontrollers work. Typically they have a fairly coarse resolution, counting a fixed number of timer ticks before "biting" and performing some function. In the case of the Arduino, the watchdog timer is driven by the internal oscillator running at 128KHz and counts off some multiple of ticks before biting. This value -- the number of ticks counted -- is referred to as the "prescaler" for the timer. The prescaler is controlled by the values of four bits in the watchdog timer's control register, WDTCSR.
To set them up, you pick the prescaler value you want and set the appropriate bits. If the bits contain a number $i$, then the watchdog will bite after $(2048 \times 2^i) / 128000$ seconds. So $i = 0$ means the watchdog bites after 16ms; $i = 1$ produces a delay of 32ms; and so on up to $i = 9$ (the largest value allowed), where the watchdog bites after about 8s. The word "about" is important here: the oscillator's exact frequency depends on the supply voltage to the chip and some other factors, meaning that you should be conservative about relying on the delay time.

Writing the appropriate value of $i$ into the control register involves representing $i$ as a four-digit binary number and then writing these bits into four bits of the register -- and unfortunately these bits aren't consecutive. If $i = 7$, for example, then this is 0b0111 in binary, so we write 1 into bits WDP0, WDP1 and WDP2, and 0 into bit WDP3 and all the other bits:

```cpp
WDTCSR = (1 << WDP0) | (1 << WDP1) | (1 << WDP2);
```

A phrase of the form (1 << WDP0) simply takes a binary digit 1 and shifts it left into bit position WDP0. The | symbols logically OR these bits together to generate the final bit mask that is assigned to the control register.

Actually there's a little bit more to it than this, as we can't change the watchdog's configuration arbitrarily. Instead we have to notify the chip that its configuration is about to be changed, by setting two other bits in the control register and then performing the updates we want:

```cpp
WDTCSR |= (1 << WDCE) | (1 << WDE);
```

Setting WDCE enables changes in configuration to be made in the next few processor cycles, i.e. immediately. Setting WDE resets the timer. Finally we enable the watchdog timer interrupts by setting bit WDIE. When the watchdog timer bites, the microcontroller executes an interrupt handler, re-starts the main program, and clears WDIE. Any further interrupts, if the timer is re-enabled, will then cause a system reset.
```cpp
WDTCSR |= (1 << WDIE);
```

So the complete code for setting up the watchdog timer to bite in 2s is:

```cpp
set_sleep_mode(SLEEP_MODE_PWR_DOWN);              // select the "power-down" sleep mode
MCUSR &= ~(1 << WDRF);                            // reset status flag
WDTCSR |= (1 << WDCE) | (1 << WDE);               // enable configuration changes
WDTCSR = (1 << WDP0) | (1 << WDP1) | (1 << WDP2); // set the prescaler = 7
WDTCSR |= (1 << WDIE);                            // enable interrupt mode
sleep_enable();                                   // enable the sleep mode ready for use
sleep_mode();                                     // trigger the sleep

/* ...time passes... */

sleep_disable();                                  // prevent further sleeps
```

### Interrupt handler

What happens when the watchdog bites? It causes an interrupt that has to be handled before the program can continue. The interrupt could be used for all sorts of things, but there's often no point in worrying about it; it still has to be there, though, to prevent the microcontroller just resetting. The following code installs a dummy interrupt handler:

```cpp
ISR( WDT_vect ) {
  /* dummy */
}
```

WDT_vect identifies the watchdog timer's interrupt vector. While this might seem like a waste of time, it's important to have an interrupt handler, as the default behaviour of the watchdog timer is to reset the microcontroller, which we want to avoid. It's also worth noting that, once enabled, the watchdog timer will keep biting, so the interrupt handler will be called repeatedly. (Put a print statement in the handler to see.) This doesn't cause any problems.

# Permutation City (Subjective Cosmology #2)

## Greg Egan

1994

Finished on Fri, 12 Jul 2013 12:34:19 -0700.   Rating 2/5.

# The edge of computer science

Where does mathematics end and computer science begin?

I don't seem to have posted to this blog recently, so let's re-start with a big question: where is the edge of computer science? That is to say, what separates it from maths? How do mathematicians and computer scientists see problems differently, and indeed select differently what constitutes an interesting problem?
The literature on network science is full of papers analysing such processes. Typically the analysis is both analytic and numerical. That is to say, a mathematical model is developed that describes the state of the network after lots of time has passed (its equilibrium behaviour); and numerical simulation is then performed by creating a large number of networks, running the spreading processes on them, and seeing whether the results obtained match the analytical model. (It was an unexpected mis-match between analytical and numerical results that led us to the main result reported in our paper.)

Typically the community finds analytical results more interesting than numerical results, and with good reason: an analytic result provides both a final, closed-form solution that can be used to describe any network with particular statistical properties, without simulation; and it often also provides insight into why a given equilibrium behaviour occurs. These are the sorts of general statements that can lead to profound understanding of wide ranges of phenomena.

There's a sting in the tail of analysis, however, which is this. In order to be able to form an analytic model, the process being run over the network has to be simple enough that the mathematics converges properly. A typical analysis might use a probabilistic re-wiring function, for example, where nodes are re-wired with a fixed probability, or one that varies only slowly. Anything more complex than this defeats analysis, and as a result one never encounters anything other than simple spreading processes in the literature.

As a computer scientist rather than a mathematician I find that unsatisfying, and I think my dissatisfaction may actually define the boundary between computing and mathematics. The boundary is the halting problem -- or, more precisely, sustaining your interest in a problem once you've hit it.
The halting problem is one of the definitive results in computer science, and essentially says that there are some problems for which it's impossible to predict ahead of time whether they'll complete with a solution or simply go on forever. Put another way, there are some problems where the only way to determine the solution is to run a piece of code that computes it, and that code may or may not deliver a solution. Put yet another way, there are problems for which the code that computes the solution is the most concise description available.

What this has to do with complex systems is the following. When a computer scientist sees a problem, they typically try to abstract it as far as possible. So on encountering a complex network, our first instinct is to build the network and then build the processes running on it as separate descriptions that can be defined independently. That is, we don't limit what kind of functions can hang off each node to define the spreading process: we just allow any function -- any piece of code -- and then run the dynamics of the network with that code defining what happens at each node at each timestep.

The immediate consequence of this approach is that we can't say anything a priori about the macroscopic properties of the spreading process, because to do so would run into the fact that there isn't anything general one can say about an arbitrary piece of code. The generality we typically seek precludes our making global statements about behaviour.

Mathematicians don't see networks this way, because they want to make precisely the global statements that the general approach precludes -- and so don't allow arbitrary functions into the mix. Instead they use functions that aggregate cleanly, like fixed-probability infection rates, about which one can make global statements.
One way to look at this is that well-behaved functions allow one to make global statements about their aggregate behaviour without having to perform any computation per se: they remain within an envelope whose global properties are known. A mathematician who used an ill-behaved function would be unable to perform analysis, and that's precisely what they're interested in doing, even though by doing so they exclude a range of possible network behaviours. In fact, it's worse than that: the range of behaviours excluded is infinite, and contains a lot of networks that seem potentially very interesting, for example those whose behaviours depend on some transmitted value, or on one computed from values spread by the process.

So a mathematician's interest in a problem (at least as represented in most of the complex systems literature) is lost at precisely the point that a computer scientist's interest kicks in: where the question is about the behaviour of arbitrary computations.

The question this leads to is, which model do real-world networks follow more closely? Are they composed of simple, well-behaved spreading processes? Or do they more resemble arbitrary functions hanging off a network of relationships, whose properties can only be discovered numerically? And what effect does the avoidance of arbitrary computation have on the choice of problems to which scientists apply themselves?

Perhaps the way forward here is to try to find the boundary of the complexity of functions that remain analytic when used as part of a network dynamics, to get the best of both worlds: global statements about large categories of networks, without needing numerical simulation of individual cases. Such a classification would have useful consequences for general computer science as well.
A lot of the problems in systems design come from the arbitrariness of code and its interactions, and from the fact that we're uncomfortable restricting that generality a priori because we don't know what the consequences will be for the re-usability and extensibility of the systems being designed. A more nuanced understanding of behavioural envelopes might help.

# Mussolini

## Richard J.B. Bosworth

2002

I've wanted to read a biography of Mussolini for a while, and this one is very good. A reviewer quote on the cover describes it as "lucid, elegant, and a pleasure to read," and I'd have to agree. It's somewhat more "literary" than some biographies, and as a result doesn't always cover the historical context as well as one might like: the author's description of the March on Rome, for example, is extremely brief despite its significance for Mussolini's rise. In this way it's not like many Hitler biographies (for example Hitler or Hitler and Stalin: Parallel Lives), which are as much about events as personality. One also has to get past the author's repetition of words like "euphonious" and "lucubrations", which gets tiresome after a while. Having said all that, this is an excellent biography, full of insight and pointers to other sources (with over 80 pages of footnotes), and is a good overview of the career of someone often written off too quickly.

Finished on Wed, 10 Jul 2013 01:52:26 -0700.   Rating 4/5.

# Representing samples

Any sensor network has to represent sampled data somehow. What would be the most friendly format for doing so?

Re-usable software has to take an extensible view of how to represent data, since the exact data being represented may change over time. There are several approaches that are often taken, ranging from abstract classes and interfaces (for code-based solutions) to formats such as XML (for data-based approaches). Neither of these is ideal for a sensor network, for a number of reasons.
A typical sensor network architecture will use different languages on the sensors and the base station, with the former prioritising efficiency and compactness and the latter emphasising connectivity to the internet and interfacing with standard tools. Typically we find C or C++ on the sensors and Java, JavaScript, Processing, or some other language on the base station. (Sometimes C or C++ there too, although that's increasingly rare for new applications.) It's therefore tricky to use a language-based approach to defining data, as two different versions of the same structure would have to be defined and -- more importantly -- kept synchronised across changes.

That suggests a data-based approach, but these tend to fall foul of the need for a compact and efficient encoding sensor-side. Storing, generating, and manipulating XML or RDF, for example, would typically be too complex and too memory-intensive for a sensor. These formats also aren't really suitable for in-memory processing -- unsurprisingly, as they were designed as transfer encodings, not primary data representations. Even though they might be attractive, not least for their friendliness to web interactions and the Semantic Web, they aren't really usable directly.

There are some compromise positions, however. JSON is a data notation derived initially from JavaScript (and usable directly within it) but which is sufficiently neutral to be used as an exchange format in several web-based systems. JSON essentially lets a user form objects with named fields, whose values can be strings, numbers, arrays, or other objects. (Note that this doesn't include code-valued fields, which is how JSON stays language-neutral: it can't encode computations, closures, or other programmatic features.) JSON's simplicity and commonality have raised the possibility of using it as a universal transport encoding: simpler than XML, but capable of integration with RDF, ontologies, and the Semantic Web if desired.
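For illustration, a single sensor sample in JSON might look something like the following (the field names and values here are invented, not a proposed standard):

```json
{
  "sensor": "temperature-ambient",
  "time": "2013-07-12T10:15:00Z",
  "value": 18.5,
  "units": "celsius",
  "location": { "lat": 56.34, "lon": -2.79 }
}
```

A dataset could then simply be an object carrying some metadata plus an array of such samples.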
There are several initiatives in this direction: one I came across recently is JSON-LD (JSON for Linked Data), which seeks to integrate JSON records directly into the linked open data world. This raises the possibility of using JSON to define the format of sensor data samples, sample collections (datasets), and the like, and linking those descriptions directly to ontological descriptions of their contents and meaning.

There are some problems with this, of course. Foremost, JSON isn't very compact, and so would require more storage and wireless bandwidth than a binary format. However, one approach might be to define samples etc. in JSON format and then either use them directly (server-side) or compile them to something more static but more efficient for use sensor-side and for exchange. This would retain the openness without losing performance.

# Temperature sensors working

Temperature sensing using digital temperature sensors is easy to get working. The temperature sensing part of the project requires three sensors for ambient, high-up, and low-down measurement. The DS18B20 temperature sensor seems well-suited for the job.

Three DS18B20 temperature sensors sharing a OneWire bus, standard (rail) power mode

Hooking up a OneWire bus for the three sensors lets them share a single microcontroller pin -- which isn't important for hardware reasons in this project, but it also saves some microcontroller RAM, which might be. The circuit is very simple, with the three sensors sharing power and ground lines and with a common data line pulled up to the power rail through a 4.7K resistor. The DQ line is attached to one of the Arduino's digital lines. The OneWire library is then used to instantiate a protocol handler for that line, which is passed to the temperature control library to manage the interaction with the devices, including the conversion from raw to "real" temperature values.
The resulting code is almost comically simple:

```cpp
#include <OneWire.h>
#include <DallasTemperature.h>

OneWire onewire(8);                  // OneWire bus on pin 8
DallasTemperature sensors(&onewire);

void setup(void) {
  Serial.begin(9600);
  sensors.begin();
}

void loop(void) {
  sensors.requestTemperatures();
  for(int i = 0; i < 3; i++) {
    float c = sensors.getTempCByIndex(i);
    Serial.print("Sensor ");
    Serial.print(i);
    Serial.print(" = ");
    Serial.print(c);
    Serial.println("C");
  }
  delay(5000);
}
```

That's it! The temperature library packages everything up nicely, including the conversion and the interaction with the OneWire protocol (which is quite fiddly).

One potential problem for the future is that access to the sensors is by index, not by any particular identifier, and it's not clear whether the ordering is always the same: does the sensor closest to the microcontroller always appear as index 0, for example? If not, then we'll have to identify which sensor is which somehow to sample the temperature from the correct place, or run each one on a different OneWire bus instance.

There's also an interesting point about parasite power mode, which is where the DS18B20 draws its power from the data bus rather than from a dedicated power rail. This might make power management easier, since the sensor would be unpowered when not being used, such as when the Arduino is asleep. This suggests it's probably worth looking into parasite power a bit more.
https://www.physicsforums.com/threads/electric-field-strength.240259/
# Electric field strength

ntk

## Homework Statement

Two plates each have an area of 80 cm² and are placed facing one another in a vacuum. If the top plate carries a positive charge of 25nC and the bottom plate carries a charge of -25nC, find the electric field strength between them. Permittivity of free space = 8.9 * 10^-12

## Homework Equations

Electric field strength = surface charge density / permittivity of free space

## The Attempt at a Solution

Should I take the difference of the charge between the two plates, which is 50nC, and divide it by the surface area to find the charge density, or just use the charge of one plate, which is 25nC? Thank you.

$$E = \frac{\sigma}{2\epsilon_0}$$
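One way to untangle the question is superposition, sketched here using the problem's rounded value for $\epsilon_0$. Each plate alone produces a field $\sigma/2\epsilon_0$; between the plates the two contributions point the same way and add, so it is the charge magnitude of a single plate that enters $\sigma$:

$$\sigma = \frac{Q}{A} = \frac{25\times 10^{-9}\,\mathrm{C}}{80\times 10^{-4}\,\mathrm{m}^2} = 3.125\times 10^{-6}\,\mathrm{C/m^2}$$

$$E = \frac{\sigma}{2\epsilon_0} + \frac{\sigma}{2\epsilon_0} = \frac{\sigma}{\epsilon_0} = \frac{3.125\times 10^{-6}}{8.9\times 10^{-12}} \approx 3.5\times 10^{5}\,\mathrm{V/m}$$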
http://math.stackexchange.com/questions/448511/determinant-with-levi-civita-symbol
# Determinant with Levi-Civita Symbol From Schaum's Outline in Tensor Calculus If $A = [a_{ij}]_{nn}$ is any square matrix, then define $\text{ det } A = \epsilon_{i_1i_2i_3...i_{n-1}i_n}a_{1 \, \cdot \, i_1}a_{2 \, \cdot \, i_2}...a_{(n - 1) \, \cdot \, i_{n - 1}}a_{n \, \cdot \, i_n}$. I can check this by expanding the product and sum in full, but what's the derivation or motivation behind this formula? I tried to find something on the Internet about this.
http://www.idryman.org/blog/2012/08/05/grand-central-dispatch-vs-openmp/
# Grand Central Dispatch vs OpenMP

In 2009 Apple released a new task parallelism technology called Grand Central Dispatch (GCD). Apple worked hard on tuning GCD, stating that only 15 instructions are required to queue up a work unit in GCD, while creating a traditional thread could easily require several hundred instructions.

The main advantage of GCD is that the programmer/user is not required to choose the number of threads that best suits the application. This can save programmers a lot of time, because getting the most power out of CPUs requires a lot of measurement. With GCD, the operating system decides for you; it just works.

Most GCD documentation provided by Apple focuses on user applications: background processes, asynchronous callbacks, non-blocking UI, dispatched IO, etc. GCD and C/Obj-C blocks work pretty well in those scenarios, but we want a more general comparison between GCD and traditional thread models. Which is faster? Currently no one has made a general benchmark for this.

I wanted to use an industry-standard benchmark for GCD vs threads, and ended up picking the Conjugate Gradient (CG) computation from the NAS Parallel Benchmark (NPB), maintained by NASA, as my benchmark model. I compare against the OpenMP implementation of the CG problem. OpenMP is a shared-memory threading API which is much easier to use than POSIX threads; however, it still requires the programmer/user to pick the thread count at run time or compile time. NASA only provides Fortran code, so I use Ohio's C implementation.

## Benchmark result

The result is quite promising! Problem sizes in NPB are predefined and indicated as different classes:

• Class W: vector size: 7000, iterations: 15 (90's workstation size, now likely too small)
• Class A: vector size: 14000, iterations: 15
• Class B: vector size: 75000, iterations: 75

I tested OpenMP with different thread counts, and it performs differently at different problem sizes.
It is not obvious how to choose the right thread count for a given problem, and the GCD implementation beats them all.

## Bottleneck implementation

The OpenMP implementation looks like this:

The code other than the bottleneck is basically vector initialization, copying, multiplication, and norm computation. I tested all of these, but they don't make a big difference between OpenMP, GCD, and BLAS1 functions.

The GCD implementation looks much like the original code:

What great news! It is much easier than I thought to transfer the original code into GCD.

## Parallel reduction in OpenMP, GCD, and BLAS

As I concluded before, there isn't a big difference between these three. The implementations are:

I think it makes no difference because these operations are all one-dimensional BLAS1 problems.

#### Note on cache line size

I thought that cache line size would matter when I started implementing the GCD version of parallel reduction. It turned out that you just need to give it a large enough size for the compiler to generate SIMD optimization. Note that you can get the CPU cache line size with the command sysctl -n hw.cachelinesize from the shell.

## Conclusion

I think the best practice so far is to use BLAS whenever you can. It's cleaner and highly optimized in libraries developed by Apple, Intel, and other HPC foundations. For operations that BLAS doesn't support, GCD is a good choice and easy to migrate to. The pros and cons go as follows:

#### OpenMP over GCD

• Supported by gcc, while clang doesn't support it.
• Can be used from C, C++, and Fortran (and many more?)
• Much existing numerical code uses OpenMP
• Easier to get started with, e.g. #pragma omp parallel for
• Better syntax for reduction: reduction(+:sum)

#### GCD over OpenMP

• Much easier to tune performance.
• Tighter logic construction: everything is encapsulated in blocks.
• No separate thread spawns and private variables as in OpenMP.
• Far fewer parameters to adjust at compile time and run time.
• Highly optimized at all kinds of problem sizes.
• Works on iOS (OpenMP does not)

I think the greatest advantage you can gain from GCD is that it is highly optimized at different problem sizes, because the operating system takes over task load balancing. It surprised me that on the class W problem, the OpenMP version with 16 threads is twice as slow compared to the 1-thread version. With GCD, you no longer need to handle this unexpected result! Cheers.
https://thirumal.blog/category/algebra/
# Proof for the divisibility test of 3

This post is a bit mathematical, and talks about the divisibility test of 3 we all learnt in high school. As I'm not aiming for brevity but for completeness and ease of explanation, let me remind you of the theorem used to test the divisibility of a number by 3, and then move on to its proof.

To test the divisibility of a number N by 3, recursively sum the digits that make up the number until you reach a manageable number whose divisibility by 3 can be inferred trivially. For example, to test the divisibility of 3423 by 3, we add up 3+4+2+3 and check whether that sum is divisible by 3. We know that 12 is divisible by 3, hence the number 3423 is divisible by 3 as well. If we did not know that 12 was divisible by 3, we would have summed up the digits of 12 and checked whether that sum is divisible by 3, and so on.

### Proof for theorem:

The main crux of the proof is representing the number as a specific set of additions and multiplications, and then your work is almost done. If we have a number $N$ with $n$ digits $a_{1}a_{2} \ldots a_{n}$, then the number $N$ can also be represented as follows

$N = a_1 \cdot 10^{n-1} + a_2 \cdot 10^{n-2} + \ldots + a_n \cdot 1$

With our proof in mind, let's separate out the sum of all the digits of $N$, i.e. $a_1 + a_2 + \ldots + a_n$, and see what we get.

$N = \left(\underbrace{99\ldots9}_{(n-1)\text{-times}} \cdot a_1 + \underbrace{99\ldots9}_{(n-2)\text{-times}} \cdot a_2 + \ldots + 9 \cdot a_{n-1} \right) + \left( a_1 + a_2 + \ldots + a_n \right)$

Now if we can prove that the first parenthesised group -- the one built entirely from numbers of the form 99…9 -- is divisible by 3, then our job is done (we'll know that $N$ is divisible by 3 exactly when the sum of its digits is).

### Corollary #1: A number with just 9 as its digits is divisible by 3

### Proof for Corollary #1:

We know that any number which is divisible by 9 is divisible by 3 as well $\left(3 \cdot 3 = 9\right)$.
So a number $M$ which has $m$ digits, all of them 9, can be represented as

$\underbrace{99\ldots9}_{m\text{-times}} = 9 \cdot 10^{m-1} + 9 \cdot 10^{m-2} + \ldots + 9$

On further simplification we get

$\underbrace{99\ldots9}_{m\text{-times}} = 9 \left(10^{m-1} + 10^{m-2} + \ldots + 1\right)$

which is visibly divisible by 9, and hence by 3. This proves the corollary. Each term in the first parenthesised group above is such an all-9s number times a digit, so the whole group is divisible by 3. With the corollary proven we can conclude that a number is divisible by 3 exactly when the sum of its digits is divisible by 3.

Acknowledgement: I first saw this proof on the One Mathematical Cat website, but the instructor only proved it for the specific case of 3 digits. This was my effort at generalizing the proof to an arbitrary number of digits.
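The recursive digit-summing test itself is easy to try out in code. Here is a small Python sketch (not from the original post; the function names are my own) that reduces a number by repeated digit sums and checks the final single digit:

```python
def digit_sum(n):
    """Sum of the decimal digits of a non-negative integer."""
    return sum(int(d) for d in str(n))

def divisible_by_3(n):
    """Apply the digit-sum test recursively until a single digit remains,
    then check that digit directly."""
    n = abs(n)
    while n >= 10:          # keep reducing: 3423 -> 12 -> 3
        n = digit_sum(n)
    return n in (0, 3, 6, 9)

print(divisible_by_3(3423))   # True, matching the worked example
```

Comparing `divisible_by_3(n)` with `n % 3 == 0` over the first few thousand integers is a quick sanity check of the theorem.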
2019-06-18 03:21:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 15, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7600952982902527, "perplexity": 160.01739693014028}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998605.33/warc/CC-MAIN-20190618023245-20190618045245-00265.warc.gz"}
https://physics.stackexchange.com/questions/333323/why-is-time-evolution-operator-unitary
# Why is the time-evolution operator unitary?

When we shift the system's time from $t=0$ to a later time $t$, we can define the following operator $\hat{U}$:

$$\hat{U} = e^{- i \hat{H} t / \hbar} \, .\tag{1}$$

Many documents (as far as I have read, almost all of them) assume that $\hat{H}$ is the Hamiltonian, with $\hat{H} = \hat{H}^\dagger$, in order to prove that $\hat{U}$ is unitary. I don't understand why we can say that $\hat{H}$ in eq. (1) is the Hamiltonian. As far as I can tell, $\hat{H}$ in $(1)$ is just some operator at this point, and there is no reasonable context for concluding that it is the Hamiltonian we know. Could anyone please tell me the reason?

1st point of view: If you accept the Schrödinger equation

$$\mathrm i\hbar\, \partial_t \psi = \hat H \psi$$

with self-adjoint $\hat H$, then your equation (1) follows directly and $\hat U$ is unitary.

2nd point of view: Time evolution must have the following properties:

• $\hat U$ must be norm-preserving so that probability is conserved.
• $\hat U$ should be invertible so that information is conserved.

Those two properties together imply that $\hat U$ is unitary. If you add the fact that the $\hat U(t)$ should form a group, your equation (1) follows, and it implies Schrödinger's equation with self-adjoint $\hat H$.

• Thank you for your answer. Is the statement "$\hat{U}$ must be norm-preserving" trivial? In some documents I saw "time evolution must conserve probability", but I'm not convinced by the claim. I do understand that $\int _{-\infty} ^{\infty} |\psi (x, t)|^2 dx = 1$ at all times, but I don't understand why $|\psi (x, t)|$ itself is conserved at all times. – ynn May 15 '17 at 15:05
• $|\psi(x, t)|$ is not the norm; that integral expression is the norm. You're correct that $|\psi(x, t)|$ can change when the time-evolution operator acts on the wave function, but the norm cannot. – Señor O May 15 '17 at 15:29
• Thank you. Now I understand the meaning of "probability must be conserved," thanks to you.
– ynn May 15 '17 at 15:42
• Does invertibility need to be assumed separately from norm preservation? Doesn't norm preservation (for the entire Hilbert space) alone guarantee unitarity? – tparker Jul 6 '18 at 17:57
• @tparker Not if the Hilbert space is infinite-dimensional, see math.stackexchange.com/a/900311/224757 . Technically, we only need norm-preservation (isometry) and surjectivity. – Noiralef Jul 6 '18 at 20:20

The assumption is that the wave function is a probability amplitude. In particular, it's a vector that is normalized. In Dirac's notation, this is the statement:

$$\langle \psi |\psi\rangle = 1.$$

This can be made more concrete with:

\begin{align} \mathrm{ordinary\ vectors\ } &\sum_{i} \psi^\star_i \psi_i = 1, \\ \mathrm{wave\ functions\ } &\int \psi^\star(x) \psi(x) \operatorname{d}x = 1,\ \mathrm{or} \\ \mathrm{even\ wave\ functionals\ } & \int \left[\mathcal{D}\phi(x)\right] \Psi^\star[\phi(x)] \Psi[\phi(x)] = 1. \end{align}

Don't worry if that last one is cryptic; it's for when you're dealing with quantum field theory. The important point is that the wave function is confined to exist in only a part of the vector space, like how unit vectors are confined to lie on the surface of a sphere. Transformations that respect this constraint are called unitary. Thus that constraint means that every allowed transformation of $|\psi\rangle$ is unitary. Rotations, spatial translations, reflections, etc. all must respect the requirement that the wave function remains normalized.

The rest follows from the requirement that time translation is a continuous change in $|\psi\rangle$ and that quantum mechanics maps onto classical mechanics on average (see: the correspondence principle). That means that $\hat{H}$, the generator of time translations in quantum mechanics, has to correspond with the generator of time translations in classical mechanics, the Hamiltonian.

There is one exception I know of to the unitarity requirement.
That is time reflection. Time reflection is anti-unitary. For details, see the Wikipedia article on $T$-symmetry.

• Thank you for your answer. What you wrote for me is not easy, but it seems substantive. In particular, the part "has to correspond with the generator of time translations in classical mechanics, the Hamiltonian" would help me a lot. Perhaps that's not rigorous, but such reasoning is easy to accept for beginners in QM like me. – ynn May 15 '17 at 15:28

An intuitive approach would be to notice that the adjoint $U^{\dagger}(t)$ is the same as $U(-t)$. Thus, if

$U(t)|\psi(0) \rangle = |\psi(t)\rangle$

and

$U^{\dagger}(t)|\psi(0) \rangle = |\psi(-t) \rangle$

then

$U^{\dagger}(t)U(t) |\psi(0)\rangle = |\psi(0)\rangle$

meaning $U^{\dagger}U = I$, the requirement for a unitary operator.

• Thank you for your answer. I don't understand the intuitive approach you told me about. I believe $A^\dagger = ({}^t A)^*$. I have no idea why the adjoint of the time-evolution operator $\hat{U}$ corresponds to the time-reversing operator. (Perhaps I'm at a more elementary stage; I'm really a beginner in QM.) – ynn May 15 '17 at 15:13
• Oh, it's super simple: since $U(t) = e^{-iHt/\hbar}$, then $U^{\dagger}(t) = e^{+iHt/\hbar}$, which is the same as you would get by using $-t$ as the argument of $U$: $U(-t) = e^{-iH(-t)/\hbar} = e^{+iHt/\hbar} = U^{\dagger}(t)$ – Señor O May 15 '17 at 15:16
• That's what I wanted to ask in this post. I think you assume $H^\dagger = H$. I don't know the reason for that, which is why I posted this question. – ynn May 15 '17 at 15:20
• That's true just by the definition of the Hamiltonian. If you put a non-Hermitian operator in place of $H$, then you're correct that $U$ would not be unitary. – Señor O May 15 '17 at 15:23
• Thanks to you and the other helpers, now I understand what I wanted to know. Thank you again. – ynn May 15 '17 at 15:44

## protected by Qmechanic♦ May 15 '17 at 17:26
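For concreteness, the claim that a Hermitian generator yields a unitary $U$ is easy to verify numerically. The following sketch (my own illustration, not from any answer above; it sets $\hbar = 1$ and uses a random $4\times 4$ Hermitian matrix) builds $U = e^{-iHt}$ from the eigendecomposition of $H$ and checks unitarity and norm preservation:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random Hermitian "Hamiltonian" (units with hbar = 1).
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2                 # H = H^dagger by construction

# For Hermitian H, exp(-iHt) follows from the eigendecomposition:
# H = V diag(E) V^dagger  =>  U = V diag(exp(-iEt)) V^dagger.
E, V = np.linalg.eigh(H)
t = 0.7
U = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T

# U^dagger U = I, so U is unitary...
assert np.allclose(U.conj().T @ U, np.eye(4))
assert np.allclose(U @ U.conj().T, np.eye(4))

# ...and the norm of any state is preserved.
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
assert np.isclose(np.linalg.norm(U @ psi), np.linalg.norm(psi))
```

Replacing H by a non-Hermitian matrix makes both assertions fail, matching the comment thread above.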
2019-10-23 13:17:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9433601498603821, "perplexity": 474.70473733347967}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987833766.94/warc/CC-MAIN-20191023122219-20191023145719-00131.warc.gz"}
https://math.stackexchange.com/questions/1275489/multiplicative-structure-in-the-cohomological-leray-serre-spectral-sequence-p
# Multiplicative structure in the cohomological Leray-Serre spectral sequence — please elucidate a proof

Let $\pi \colon X \to B$ be a fibration with $B$ a path-connected CW complex. Write $B^p$ for the $p$-th skeleton of $B$ and set:

• $X_p = \pi^{-1}(B^p)$,
• $F_p^m = \ker [H^m(X) \to H^m(X_{p-1})]$, the kernel of a map induced by the inclusion $X_{p-1} \to X$.

A. Hatcher, in his spectral sequences book, writes on p. 25 that:

The cup product in $H^∗(X; R)$ restricts to maps $F^m_p \times F^n_s \to F^{m+n}_{p+s}$.

The argument is that $F^m_p$ can be regarded as the image of the map $H^m(X, X_{p−1}) \to H^m(X)$ via the exact sequence of the pair $(X, X_{p−1})$, and then uses the commutativity of a certain diagram (bottom of p. 26 in the mentioned book), part of which is a map $H^{m+n}(X\times X, X_p \times X \cup X \times X_s) \to H^{m+n}(X\times X, (X\times X)_{p+s})$. My question is: what sort of map is this? I think it is induced by the inclusion $(X\times X)_{p+s} \to X_p \times X \cup X \times X_s$, and I think the proof requires this map to be a monomorphism (otherwise I don't see how to obtain commutativity of the diagram). But how does one obtain this last statement?
2019-07-21 07:06:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9754100441932678, "perplexity": 70.72796151120613}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526931.25/warc/CC-MAIN-20190721061720-20190721083720-00553.warc.gz"}
https://codereview.stackexchange.com/questions/30506/time-calculator-for-a-given-speed-and-file-size
# Time calculator for a given speed and file size

I wrote this little program in C++. It calculates the time needed for a given transfer speed (for example, at a speed of 1024 B/s and a file size of 1 MB, it'll take 17 minutes and 4 seconds to finish). The code works, but I'm not sure if it's proper. Can you tell me if it's okay or not?

#include <iostream>

int main() {
    int transmission_speed; //speed of the transmission in bytes per seconds
    int file_size_mb;       //file's size in MBs

    std::cout << "Enter the speed of the transmission in bytes per seconds: ";
    std::cin >> transmission_speed;
    std::cout << "Enter the file's size in megabytes: ";
    std::cin >> file_size_mb;

    int file_size_b = file_size_mb * 1024 * 1024;
    int seconds_needed = file_size_b / transmission_speed;
    int days_needed = (seconds_needed / 3600) / 24;
    seconds_needed -= days_needed * 86400;
    int hours_needed = seconds_needed / 3600;
    seconds_needed -= hours_needed * 3600;

    std::cout << "Days needed: " << days_needed << std::endl;
    std::cout << "Hours needed: " << hours_needed << std::endl;
    std::cout << "Seconds needed: " << seconds_needed << std::endl;
    return(0);
}

• This code is incredibly clean and easy to read. However, you might want to eliminate the output for days_needed when days_needed is 0. Think about that: it will make the program output cleaner, as long as the output format is not fixed like in a contest or homework. – Shane Hsu Aug 30 '13 at 12:08

## 4 Answers

It's a very simple little program; however, there are a few comments to make:

• All of the code is in main. Of course, for a program of this size, that doesn't really matter too much; however, you should prefer to break things up into self-contained functions where possible. For example, there should be a separate function here to actually do the calculations.
• Use the correct data type for what you need. Can time ever be negative here? The answer is no, so I'd prefer unsigned over int.
• You should be somewhat careful about overflow.
Any file size over 2048 MB (2 GB) will overflow an int (assuming a 4-byte int; in actuality, an int is only technically required to be at least 2 bytes). Using an unsigned will change this to 4 GB; however, if you expect any file sizes larger than that, you should look at another datatype (std::uint64_t perhaps).

• Magic numbers. It's not so bad when dealing with time, because it's generally fairly obvious here, but numbers like 86400 shouldn't be shown as-is. They should be a named constant, such as const unsigned seconds_in_day = 86400.

I'd suggest breaking this up into a main function and a required_time(unsigned transmission_rate, unsigned file_size) function.

• or unsigned const seconds_in_day = 24*60*60;. – Martin York Aug 30 '13 at 20:02

I see very few things to change; your code is easy to read.

• You're not validating the input. What if someone enters something that's not a number? I'd suggest you create a function that takes the prompt string and returns the entered integer. That function can do the validation, which isn't completely trivial. Two good examples of how to do the validation:
• (Very minor) You put a comment on the first two variable declarations, but not on the following ones. In my opinion, you chose sufficiently descriptive variable names, so the comments aren't necessary. But if you do put comments on some (or if your local coding style guidelines enforce that), do it consistently.
• This "looks" strange:

int days_needed = (seconds_needed / 3600) / 24;
seconds_needed -= days_needed*86400;

You should be dividing by the same constant(s) that you use in the multiplication (or vice versa). (These constants are easy to identify; I don't think giving them names is worth the effort. Just be systematic in how you use them.)

• return(0); is unusual; it looks like a function call. return is not a function call, it's a statement. I would prefer return 0; instead.
• Your code appears to have forgotten about minutes :-) The code looks good, though. Did you mean to leave out the minutes needed? All the calculations you're doing are on int. You may consider using float in some places, e.g. the file size in MB may be 3.4 MB, or the transmission speed may be 5.9 B/s.

• thanks :D after I posted the question I noticed the "missing minutes", but I thought it's not important :) – mitya221 Aug 30 '13 at 11:42
• Why would floats help here? – Mat Aug 30 '13 at 11:45
• @Mat, imagine a float user input going into an integer variable. It may change the figures and hence the calculation output. – Vivek Jain Aug 30 '13 at 11:56

In addition to all the helpful suggestions made by others... Using a console interactively to get input is so old style!! It's more convenient to have the input data in a file and just read it directly, without the lines for the prompt. Just use:

std::cin >> transmission_speed;
std::cin >> file_size_mb;

Then, you can use:

cat input.txt | ./program

This is a very small oversight, but the number of seconds computed in the line

int seconds_needed = file_size_b / transmission_speed;

will be off by 1 second if file_size_b % transmission_speed != 0. You should add

if ( file_size_b % transmission_speed != 0 ) ++seconds_needed;
2019-07-16 23:54:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29985311627388, "perplexity": 2826.7244383243874}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524972.66/warc/CC-MAIN-20190716221441-20190717003441-00238.warc.gz"}
https://www.sunclipse.org/?p=503
# In Happier News, the ArXivotubes

Luciano da Fontoura Costa, "Communities in Neuronal Complex Networks Revealed by Activation Patterns" (arXiv:0801.4684):

Recently, it has been shown that the communities in neuronal networks of the integrate-and-fire type can be identified by considering patterns containing the beginning times for each cell to receive the first non-zero activation. The received activity was integrated in order to facilitate the spiking of each neuron and to constrain the activation inside the communities, but no time decay of such activation was considered. The present article shows that, by taking into account exponential decays of the stored activation, it is possible to identify the communities also in terms of the patterns of activation along the initial steps of the transient dynamics. The potential of this method is illustrated with respect to complex neuronal networks involving four communities, each of a different type (Erdős–Rényi, Barabási–Albert, Watts–Strogatz, as well as a simple geographical model). Though the consideration of activation decay has been found to enhance the separation of the communities, too intense decays tend to yield less discrimination.

The "simple geographical model" is one I've played with myself, since it's so easy to implement (and serves as a null hypothesis for some problems of interest). Throw $$N$$ nodes into a box of $$d$$ dimensions, and connect two nodes if they are closer than some fixed threshold. In this case, the box was 2D, but a 3D version is just as easy to implement.
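The model really is only a few lines of code. Here is a Python/NumPy sketch (my own, with made-up parameter values) that scatters $$N$$ nodes in a $$d$$-dimensional unit box and links every pair closer than a fixed threshold:

```python
import numpy as np

def geographic_graph(n, d, threshold, seed=0):
    """Scatter n nodes uniformly in a d-dimensional unit box and connect
    every pair closer than `threshold` (the simple geographical model)."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(size=(n, d))
    # Pairwise Euclidean distances via broadcasting.
    diff = pts[:, None, :] - pts[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    # Boolean adjacency matrix; mask out the diagonal (no self-loops).
    adj = (dist < threshold) & ~np.eye(n, dtype=bool)
    return pts, adj

pts, adj = geographic_graph(n=100, d=2, threshold=0.15)
print(adj.sum() // 2, "edges")   # each undirected edge is counted twice in adj
```

Swapping d=2 for d=3 gives the 3D version mentioned above; only the threshold needs retuning to keep a comparable edge density.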
2022-07-07 01:56:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5297762155532837, "perplexity": 747.0315976454484}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104683020.92/warc/CC-MAIN-20220707002618-20220707032618-00030.warc.gz"}
https://www.r-bloggers.com/hierarchical-models-with-rstan-part-1/
# Hierarchical models with RStan (Part 1)

November 10, 2016 By

(This article was first published on R – biologyforfun, and kindly contributed to R-bloggers)

Real-world data sometimes show complex structure that calls for the use of special models. When data are organized in more than one level, hierarchical models are the most relevant tool for data analysis. One classic example: when you record student performance from different schools, you might record student-level variables (age, ethnicity, social background) as well as school-level variables (number of students, budget). In this post I will show how to fit such models using RStan. As there is much to say and try about such models, I restrict myself in this post to a rather simple example; I will extend this to more complex situations in later posts.

## A few words about RStan:

If you don't know anything about STAN and RStan, make sure to check out this webpage. In a few words, RStan is an R interface to the STAN programming language that lets you fit Bayesian models. A classical workflow looks like this:

1. Write a STAN model file ending with a .stan
2. In R, fit the model using the RStan package, passing the model file and the data to the stan function
3. Check the model fit; a great way to do it is to use the shinystan package

## First example with simulated data:

Say that we recorded the response of 10 different plant species to rising temperature and nitrogen concentration. We measured the biomass of 10 individuals per species along a gradient of both temperature and nitrogen concentration, and we would like to know how these two variables affect plant biomass. In a hierarchical model we let the regression parameters vary between the species; this means that, for example, species A might have a more positive slope between temperature and biomass than species B.
Note, however, that we do not fit a separate regression to each species; rather, the regression parameters for the different species are themselves fitted to a statistical distribution. In mathematical terms this example can be written:

$\mu_{ij} = \beta_{0j} + \beta_{1j} * Temperature_{ij} + \beta_{2j} * Nitrogen_{ij}$

$\beta_{0j} \sim N(\gamma_{0},\tau_{0})$

… (same for all regression coefficients) …

$y_{ij} \sim N(\mu_{ij},\sigma)$

for observations i: 1 … N and species j: 1 … J. This is how such a model looks in STAN:

/*A simple example of a hierarchical model*/
data {
  int N; //the number of observations
  int J; //the number of groups
  int K; //number of columns in the model matrix
  int id[N]; //vector of group indices
  matrix[N,K] X; //the model matrix
  vector[N] y; //the response variable
}
parameters {
  vector[K] gamma; //population-level regression coefficients
  vector[K] tau; //the standard deviation of the regression coefficients
  vector[K] beta[J]; //matrix of group-level regression coefficients
  real sigma; //standard deviation of the individual observations
}
model {
  vector[N] mu; //linear predictor
  //priors
  gamma ~ normal(0,5); //weakly informative priors on the regression coefficients
  tau ~ cauchy(0,2.5); //weakly informative priors, see section 6.9 in STAN user guide
  sigma ~ gamma(2,0.1); //weakly informative priors, see section 6.9 in STAN user guide
  for(j in 1:J){
    beta[j] ~ normal(gamma,tau); //fill the matrix of group-level regression coefficients
  }
  for(n in 1:N){
    mu[n] = X[n] * beta[id[n]]; //compute the linear predictor using the relevant group-level regression coefficients
  }
  //likelihood
  y ~ normal(mu,sigma);
}

You can copy/paste the code into an empty text editor and save it as a .stan file.
Now we turn to R:

#load libraries
library(rstan)
library(RColorBrewer)
#where the STAN model is saved
setwd("~/Desktop/Blog/STAN/")

#simulate some data
set.seed(20161110)
N<-100 #sample size
J<-10 #number of plant species
id<-rep(1:J,each=10) #index of plant species
K<-3 #number of regression coefficients
#population-level regression coefficients
gamma<-c(2,-1,3)
#standard deviation of the group-level coefficients
tau<-c(0.3,2,1)
#standard deviation of individual observations
sigma<-1
#group-level regression coefficients
beta<-mapply(function(g,t) rnorm(J,g,t),g=gamma,t=tau)
#the model matrix
X<-model.matrix(~x+y,data=data.frame(x=runif(N,-2,2),y=runif(N,-2,2)))
y<-vector(length = N)
for(n in 1:N){
  #simulate response data
  y[n]<-rnorm(1,X[n,]%*%beta[id[n],],sigma)
}

#run the model
m_hier<-stan(file="hierarchical1.stan",data=list(N=N,J=J,K=K,id=id,X=X,y=y))

The MCMC sampling takes place (it took about 90 seconds per chain on my computer), and then I got this warning message:

"Warning messages: 1: There were 61 divergent transitions after warmup. Increasing adapt_delta above 0.8 may help. See 2: Examine the pairs() plot to diagnose sampling problems"

Here is an explanation for this warning: "For some intuition, imagine walking down a steep mountain. If you take too big of a step you will fall, but if you can take very tiny steps you might be able to make your way down the mountain, albeit very slowly. Similarly, we can tell Stan to take smaller steps around the posterior distribution, which (in some but not all cases) can help avoid these divergences." from here.

This issue occurs quite often in hierarchical models with limited sample size; the simplest solution is to re-parameterize the model, in other words to re-write the equations so that the MCMC sampler has an easier time sampling the posterior distribution.
Below is a new STAN model with a non-centered parameterization (see Section 22.6 in the STAN user guide):

parameters {
  vector[K] gamma; //population-level regression coefficients
  vector[K] tau; //the standard deviation of the regression coefficients
  //implementing Matt's trick
  vector[K] beta_raw[J];
  real sigma; //standard deviation of the individual observations
}
transformed parameters {
  vector[K] beta[J]; //matrix of group-level regression coefficients
  //computing the group-level coefficients, based on the non-centered parametrization of section 22.6 in the STAN (v2.12) user's guide
  for(j in 1:J){
    beta[j] = gamma + tau .* beta_raw[j];
  }
}
model {
  vector[N] mu; //linear predictor
  //priors
  gamma ~ normal(0,5); //weakly informative priors on the regression coefficients
  tau ~ cauchy(0,2.5); //weakly informative priors, see section 6.9 in STAN user guide
  sigma ~ gamma(2,0.1); //weakly informative priors, see section 6.9 in STAN user guide
  for(j in 1:J){
    beta_raw[j] ~ normal(0,1); //fill the matrix of group-level regression coefficients
  }
  for(n in 1:N){
    mu[n] = X[n] * beta[id[n]]; //compute the linear predictor using the relevant group-level regression coefficients
  }
  //likelihood
  y ~ normal(mu,sigma);
}

Note that the data block is identical in the two cases. We turn back to R:

#re-parametrize the model
m_hier<-stan(file="hierarchical1_reparam.stan",data=list(N=N,J=J,K=K,id=id,X=X,y=y))
#no more divergent iterations, we can start exploring the model
#a great way to start is to use the shinystan library
#library(shinystan)
#launch_shinystan(m_hier)
#model inference
print(m_hier,pars=c("gamma","tau","sigma"))

Inference for Stan model: hierarchical1_reparam.
4 chains, each with iter=2000; warmup=1000; thin=1;
post-warmup draws per chain=1000, total post-warmup draws=4000.
          mean se_mean   sd  2.5%   25%   50%  75% 97.5% n_eff Rhat
gamma[1]  1.96    0.00 0.17  1.61  1.86  1.96 2.07  2.29  2075    1
gamma[2] -0.03    0.02 0.77 -1.53 -0.49 -0.04 0.43  1.55  1047    1
gamma[3]  2.81    0.02 0.49  1.84  2.52  2.80 3.12  3.79   926    1
tau[1]    0.34    0.01 0.21  0.02  0.19  0.33 0.46  0.79  1135    1
tau[2]    2.39    0.02 0.66  1.47  1.94  2.26 2.69  4.04  1234    1
tau[3]    1.44    0.01 0.41  0.87  1.16  1.37 1.65  2.43  1317    1
sigma     1.04    0.00 0.09  0.89  0.98  1.04 1.10  1.23  2392    1

Samples were drawn using NUTS(diag_e) at Thu Nov 10 14:11:41 2016. For each parameter, n_eff is a crude measure of effective sample size, and Rhat is the potential scale reduction factor on split chains (at convergence, Rhat=1).

The regression parameters were all decently estimated except for the second slope coefficient (the simulated value was -1). All MCMC samples for all coefficients can easily be extracted and used to compute whatever your interest is:

#extract the MCMC samples
mcmc_hier<-extract(m_hier)
str(mcmc_hier)

#plot average response to explanatory variables
X_new<-model.matrix(~x+y,data=data.frame(x=seq(-2,2,by=0.2),y=0))
#get predicted values for each MCMC sample
pred_x1<-apply(mcmc_hier$gamma,1,function(beta) X_new %*% beta)
#now get the median and 95% credible intervals
pred_x1<-apply(pred_x1,1,quantile,probs=c(0.025,0.5,0.975))
#same stuff for the second explanatory variable
X_new<-model.matrix(~x+y,data=data.frame(x=0,y=seq(-2,2,by=0.2)))
pred_x2<-apply(mcmc_hier$gamma,1,function(beta) X_new %*% beta)
pred_x2<-apply(pred_x2,1,quantile,probs=c(0.025,0.5,0.975))

Here we basically generated new model matrices where only one variable moves at a time; this allows us to get the model prediction for the effect of, say, temperature on plant biomass under average nutrient conditions.
These predictions were obtained by multiplying the model matrix with the coefficients for each MCMC sample (the first apply command); from these samples we can then get the median with 95% credible intervals (the second apply command). Now we can plot this (code for the plots is at the end of the post).

Another important plot is the variation in the regression parameters between the species; again this is easily done using the MCMC samples:

#now we can look at the variation in the regression coefficients between the groups using caterpillar plots
ind_coeff<-apply(mcmc_hier$beta,c(2,3),quantile,probs=c(0.025,0.5,0.975))
df_ind_coeff<-data.frame(Coeff=rep(c("(Int)","X1","X2"),each=10),
                         LI=c(ind_coeff[1,,1],ind_coeff[1,,2],ind_coeff[1,,3]),
                         Median=c(ind_coeff[2,,1],ind_coeff[2,,2],ind_coeff[2,,3]),
                         HI=c(ind_coeff[3,,1],ind_coeff[3,,2],ind_coeff[3,,3]))
gr<-paste("Gr",1:10)
df_ind_coeff$Group<-factor(gr,levels=gr)
#we may also add the population-level median estimate
pop_lvl<-data.frame(Coeff=c("(Int)","X1","X2"),Median=apply(mcmc_hier$gamma,2,quantile,probs=0.5))
ggplot(df_ind_coeff,aes(x=Group,y=Median))+geom_point()+
  geom_linerange(aes(ymin=LI,ymax=HI))+coord_flip()+
  facet_grid(.~Coeff)+
  geom_hline(data=pop_lvl,aes(yintercept=Median),color="blue",linetype="dashed")+
  labs(y="Regression parameters")

The cool thing about using STAN is that we can extend or modify the model in many ways. This will be the topic of future posts, which will include: crossed and nested designs, multilevel modelling, non-normal distributions and much more. Stay tuned!
Code for the first plot:

cols<-brewer.pal(10,"Set3")
par(mfrow=c(1,2),mar=c(4,4,0,1),oma=c(0,0,3,5))
plot(y~X[,2],pch=16,xlab="Temperature",ylab="Response variable",col=cols[id])
lines(seq(-2,2,by=0.2),pred_x1[1,],lty=2,col="red")
lines(seq(-2,2,by=0.2),pred_x1[2,],lty=1,lwd=3,col="blue")
lines(seq(-2,2,by=0.2),pred_x1[3,],lty=2,col="red")
plot(y~X[,3],pch=16,xlab="Nitrogen concentration",ylab="Response variable",col=cols[id])
lines(seq(-2,2,by=0.2),pred_x2[1,],lty=2,col="red")
lines(seq(-2,2,by=0.2),pred_x2[2,],lty=1,lwd=3,col="blue")
lines(seq(-2,2,by=0.2),pred_x2[3,],lty=2,col="red")
mtext(text = "Population-level response to the two\nexplanatory variables with 95% CrI",side = 3,line = 0,outer=TRUE)
legend(x=2.1,y=10,legend=paste("Gr",1:10),ncol = 1,col=cols,pch=16,bty="n",xpd=NA,title = "Group\nID")

Filed under: Informatic Language, R and Stat Tagged: Bayesian, R, STAN, Statistics
2019-02-21 05:20:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 6, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6506341099739075, "perplexity": 8533.152977499787}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247500089.84/warc/CC-MAIN-20190221051342-20190221073342-00060.warc.gz"}
http://math.stackexchange.com/questions/85790/determining-tax-percentage
# Determining tax percentage

I'm working a problem, attempting to find an income tax rate that changes depending on the gross paycheck amount. Some data points: $800 gross = 11% taxed; $1500 gross = 16% taxed; $2000 gross = 20% taxed. How would I go about finding the relationship between the gross paycheck amount and the percentage taxed? Beyond that, how will I then be able to apply that relationship mathematically in order to obtain other data points ($1000 gross paycheck, $500, etc.) and get an accurate tax percentage for them as well? I have a feeling the answer lies somewhere in the statistics field, but I'm no math expert.

- Your problem has no unique solution; there is an infinite number of tax regulations that will generate your output. – Listing Nov 26 '11 at 15:32
- Well, I'm not looking for something that's exactly perfect in its output, but plotting the data points, it does seem to follow a trend. So hopefully from that I can ascertain a polynomial that will give me accurate answers within a given percentage. – Mike S Nov 27 '11 at 13:37
- It is possible to find a polynomial that returns exact values for a finite number of points. If the relationship is truly a polynomial of degree n (or less), then everything is fine. However, if the polynomial is of degree > n, then any interpolation or extrapolation may not provide an accurate value. – Emmad Kareem Jan 26 '12 at 2:29

## 1 Answer

As Listing says, there is no unique solution. One approach would be to plot tax paid against gross income, draw a smooth curve and estimate any interpolated values. So for a gross income of $1000, tax might be about $130, i.e. about 13%. If you are feeling brave, you might try to extrapolate outside the points you know (the tax on a gross income of $0 might be $0), but you are more likely to be wrong.

- The curve is looking exponential, so I should have enough data to solve for a regular polynomial, I would assume. That may give me an answer that will help me at least estimate within a given means of accuracy. – Mike S Nov 27 '11 at 13:35
- @Mike: an exponential curve would result in tax paid being above gross income, which is rare and difficult to collect. – Henry Nov 27 '11 at 23:26
- You have an excellent point. It must be painfully clear I really have no idea what I'm talking about. – Mike S Nov 28 '11 at 14:46
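One concrete way to carry out the interpolation suggested in the answer is to fit the unique quadratic through the three known points. This is only a sketch: as the comments note, infinitely many curves pass through these points, so the values it produces are estimates, not the actual tax schedule.

```python
import numpy as np

# The three known data points from the question
gross = np.array([800.0, 1500.0, 2000.0])
rate = np.array([11.0, 16.0, 20.0])  # tax percentage

# Fit the unique quadratic through the three points.
# NOTE: this is only one of infinitely many curves consistent
# with the data -- a pure interpolation sketch, not a tax law.
coeffs = np.polyfit(gross, rate, deg=2)

for g in (500.0, 1000.0):
    print(g, round(float(np.polyval(coeffs, g)), 2))
# $1000 gross comes out near 12.36%, close to the answer's
# eyeballed estimate of about 13%.
```

Extrapolating this quadratic far outside the 800–2000 range would suffer exactly the accuracy problem Emmad Kareem warns about.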
2014-04-24 17:37:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40588298439979553, "perplexity": 1055.4325268926812}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00607-ip-10-147-4-33.ec2.internal.warc.gz"}
https://www.spectroom.com/1025504165-american-astronomer-clyde-tombaugh-dies
# 17/01/1997 American astronomer Clyde Tombaugh dies

### NMSU gives closer look into Clyde Tombaugh's work

Tombaugh is known for his discovery of the dwarf planet Pluto in 1930. Pluto was considered a planet until its reclassification in 2006. It was the first object from the Kuiper belt ever observed. The Kuiper belt is a disc of small objects extending from the orbit of Neptune (30 astronomical units, AU) out to a distance of around 50 AU.
2020-06-03 07:01:21
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8104760050773621, "perplexity": 2636.104155012846}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347432237.67/warc/CC-MAIN-20200603050448-20200603080448-00460.warc.gz"}
http://www.helpteaching.com/questions/Math
# Math Questions - All Grades

Create printable tests and worksheets from Math questions. Select questions to add to a test using the checkbox above each question. Remember to click the add selected questions to a test button before moving to another page. The Math questions below are in the following grade levels: Pre-K, Kindergarten, Grades 1-12, College, Continuing Education.

Grade 4 :: Place Value by lisca
How is eight thousand, seventy-six written in standard form? 8067 8076 8706 8760

Grade 6 :: Range Median Mean Mode by NatalyTeach1
The test results on a math exam were: 77, 66, 70, 83, 88, 64, 93 and 99. What is the median test score? 70 82 86 80

Grade 6 :: Range Median Mean Mode by gkalecia
The mode is 7 for this set of numbers: 7, 9, 2, 7, 3, 7, 8, 9, and 7. True False

Grade 6 :: Exponents by lovelle53
Write 6 x 6 x 6 x 6 x 6 in exponential form. $5^6$ $6^6$ $6^5$ $6^4$

Grade 6 :: Range Median Mean Mode by NatalyTeach1
Find the mode for the following set of data: 44, 42, 47, 34, 44, 55, 33, 44, 66. 55 34 42 44

Grade 6 :: Exponents by lovelle53
What is the value of $2^4 + 3^3$? 17 43 58 125

Grade 6 :: Range Median Mean Mode by NatalyTeach1
Find the mean for the following set of data. Round to the nearest whole number. 68, 71, 67, 104, 118, 109, 64, 101, 157, and 171. 103 132 105 112

Grade 6 :: Range Median Mean Mode by allieckinz
What is the definition of median? The difference of the largest and smallest numbers. The one in the middle. The average of the group. The one that appears the most.

Grade 7 :: Collecting and Interpreting Data by Ureesha
An observation that involves numbers or measurements is a: qualitative observation, quantitative observation, quality observation, qualitive observation

Grade 6 :: Range Median Mean Mode by sstaudt
Sally went bowling with her friends on Saturday. She bowled 5 games and her scores were 149, 183, 149, 193, and 147. What is the range of her scores?
2 44 46 149

Grade 6 :: Range Median Mean Mode by NatalyTeach1
Benjamin scored 9, 7, 6, 10, 12, 9 and 17 points in 7 basketball games. Find the mean for the above set of data. 12 11 10 13

Grade 1 :: Basic Shapes by Kalasri
A triangle has ____ sides. four two zero three

Grade 6 :: Exponents by lovelle53
$2^5$ is the same as 2 x 5. True False

Grade 6 :: Range Median Mean Mode by hjpaquin
During the carnival, tickets were sold for the raffle. Carnival workers took count of sales every 30 minutes. The numbers of tickets sold were: 123, 145, 110, 256, 123. What was the median number of tickets sold? 123 146 256 110

Grade 6 :: Range Median Mean Mode by sstaudt
The range of prices of television sets at the B&Q Electronic Store is $500.00. If the lowest price for a television is $350.00, what is the highest price for a television? $850.00 $500.00 $400.00 $350.00

Grade 5 :: Range Median Mean Mode by RosieQ
The MODE is the number that appears the least in a set of data. True False

Grade 9 :: Exponents by SheilaH
Simplify $b^5 * b^2$. $b^{25}$ $b^7$ $b^3$ $b^{10}$

Grade 6 :: Range Median Mean Mode by Piddydink
Jerome wants to spend a day at a theme park. He made the following list of single-day admission prices for different theme parks: $42, $69.95, $38, $74.95, $73, $75, $49.95, $32.50. What is the median admission price? $49.95 $59.95 $69.95 $73

Grade 8 :: Exponents by Sharon_Grabowski
Which one of the following is true? $4^{-2} < 4^{-3}$ $5^{-2} > 2^{-5}$ $(-2)^4 = 2^{-4}$ $5^2(5)^{-2} > 2^5(2)^{-5}$

Grade 7 :: Represent and Determine Probability by Andrew
A fair coin will be flipped 3 times. What is the probability that the coin will land on tails exactly once? $1/8$ $1/3$ $3/8$ $5/8$
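As a quick sanity check (added here, not part of the original question bank), several of the range/median/mode answers above can be verified with Python's standard statistics module:

```python
from statistics import median, mode

# Median of the exam scores (answer choices: 70, 82, 86, 80)
scores = [77, 66, 70, 83, 88, 64, 93, 99]
print(median(scores))  # sorted middle pair is (77, 83) -> 80.0

# Mode of the data set (answer choices: 55, 34, 42, 44)
data = [44, 42, 47, 34, 44, 55, 33, 44, 66]
print(mode(data))  # 44 appears three times

# Range of Sally's bowling scores (answer choices: 2, 44, 46, 149)
bowling = [149, 183, 149, 193, 147]
print(max(bowling) - min(bowling))  # 193 - 147 = 46
```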
2014-04-18 08:18:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1966855823993683, "perplexity": 1096.9243126730278}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00464-ip-10-147-4-33.ec2.internal.warc.gz"}
https://math.libretexts.org/TextMaps/Calculus_Textmaps/Map%3A_Calculus_(Guichard)/03%3A_Rules_for_Finding_Derivatives
It is tedious to compute a limit every time we need to know the derivative of a function. Fortunately, we can develop a small collection of examples and rules that allow us to compute the derivative of almost any function we are likely to encounter. Many functions involve quantities raised to a constant power, such as polynomials and more complicated combinations like $$y=(\sin x)^4$$. So we start by examining powers of a single variable; this gives us a building block for more complicated examples.
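As an illustrative aside (not part of the original text), the rule this section builds toward — the power rule, d/dx xⁿ = n·xⁿ⁻¹ — can be checked numerically against the limit definition of the derivative, without computing the limit symbolically each time:

```python
# Compare the power rule d/dx x^n = n * x^(n-1) with a
# finite-difference approximation of the limit definition.
def numeric_derivative(f, x, h=1e-6):
    # central difference approximates the limit as h -> 0
    return (f(x + h) - f(x - h)) / (2 * h)

n = 3
f = lambda x: x ** n
x0 = 2.0
approx = numeric_derivative(f, x0)
exact = n * x0 ** (n - 1)  # power rule: 3 * 2^2 = 12
print(approx, exact)
```

The two values agree to several decimal places, which is exactly the convenience the section promises: a rule that replaces a limit computation.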
2017-07-21 08:52:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8352389335632324, "perplexity": 91.01856768058231}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423764.11/warc/CC-MAIN-20170721082219-20170721102219-00577.warc.gz"}
https://datascience.stackexchange.com/questions/32986/transforming-words-in-sentences-into-vector-form-to-prepare-a-model
# Transforming words in sentences into vector form to prepare a model

I want to build a simple classifier that classifies whether a text is a question or just a simple message. I understand logistic regression and can work to create a simple neural network. I have the labeled input data in English, Japanese, Korean, and Thai. How could I transform this data before I feed it into the classifier?

• Take a look at the Tf-Idf scheme scikit-learn.org/stable/modules/generated/… – Ankit Seth Jun 12 '18 at 6:07
• @AnkitSeth Could you please elaborate more on this? – Suhail Gupta Jun 12 '18 at 6:31
• It is basically a scheme to convert words to numeric form. For each document, it will take the frequency of a particular word in that document and the number of documents which contain that word, and find a numerical equivalent of that word. You can see the working of Tf-Idf in detail on tfidf.com – Ankit Seth Jun 12 '18 at 6:53
• @AnkitSeth Okay. Does it use some kind of pre-trained model? Also, after I get the output as {u'boy': '1.6931471805599454', u'good': '1.6931471805599454', u'this': '1.2876820724517808', u'is': '1.0', u'very': '1.2876820724517808', u'strange': '1.6931471805599454', u'suhail': '1.6931471805599454', u'nice': '1.6931471805599454'}, should I use these values as an input to the classifier? – Suhail Gupta Jun 12 '18 at 7:48
• No, it does not use a pre-trained model. Now, your features are these words - "boy", "good", "this", "is" etc. - and the values are the numbers you got. Yes, you can use these values as input to the classifier. Create a dataframe of this and pass that frame into your model. The columns of the frame should be these words and the number of rows should be the number of documents/texts you have. – Ankit Seth Jun 12 '18 at 9:41
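A minimal sketch of the tf-idf scheme described in the comments, without scikit-learn. The idf variant used here, log(N/df) + 1, is an assumption: smoothing conventions differ between implementations (scikit-learn's default is slightly different), but with it a term unique to one of two documents gets weight ln 2 + 1 ≈ 1.693 — the same flavor of value quoted in the comment thread:

```python
import math
from collections import Counter

def tfidf(docs):
    """Per-document tf-idf weights: tf = raw term count,
    idf = log(N / df) + 1 (one common smoothing variant)."""
    n_docs = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    # document frequency: in how many docs does each term appear
    df = Counter()
    for toks in tokenized:
        for term in set(toks):
            df[term] += 1
    weights = []
    for toks in tokenized:
        tf = Counter(toks)
        weights.append({t: c * (math.log(n_docs / df[t]) + 1)
                        for t, c in tf.items()})
    return weights

docs = ["is this a question", "this is a simple message"]
w = tfidf(docs)
print(w[0]["question"])  # unique to doc 0 -> highest idf, ~1.693
print(w[0]["is"])        # appears in every doc -> idf collapses to 1.0
```

Each dict in the result is one row of the dataframe the last comment describes: terms as columns, documents as rows.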
2021-04-12 07:27:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3095206320285797, "perplexity": 1524.1768190833486}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038066613.21/warc/CC-MAIN-20210412053559-20210412083559-00633.warc.gz"}
https://www.shaalaa.com/question-bank-solutions/simplify-the-following-and-express-in-the-form-a-ib-5-7i4-3i-5-7i4-3i-algebra-of-complex-numbers_163436
# Simplify the following and express in the form a + ib: (5+7i)/(4+3i) + (5+7i)/(4-3i) - Mathematics and Statistics

Sum

Simplify the following and express in the form a + ib: (5 + 7i)/(4 + 3i) + (5 + 7i)/(4 - 3i)

#### Solution

(5 + 7i)/(4 + 3i) + (5 + 7i)/(4 - 3i)
= (5 + 7i)[1/(4 + 3i) + 1/(4 - 3i)]
= (5 + 7i)[(4 - 3i + 4 + 3i)/((4 + 3i)(4 - 3i))]
= (5 + 7i)[8/(16 - 9i²)]
= (5 + 7i)[8/(16 - 9(-1))]      ...[∵ i² = -1]
= 8(5 + 7i)/25
= (40 + 56i)/25
= 40/25 + (56/25)i
= 8/5 + (56/25)i

Concept: Algebra of Complex Numbers

#### APPEARS IN

Balbharati Mathematics and Statistics 1 (Commerce) 11th Standard Maharashtra State Board
Chapter 3 Complex Numbers
Miscellaneous Exercise 3 | Q 3. (x) | Page 43
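The arithmetic can be double-checked with Python's built-in complex numbers (a verification added here, not part of the textbook solution; Python writes the imaginary unit as j):

```python
# Verify (5+7i)/(4+3i) + (5+7i)/(4-3i) = 8/5 + (56/25)i
z = (5 + 7j) / (4 + 3j) + (5 + 7j) / (4 - 3j)
expected = complex(8 / 5, 56 / 25)  # the book's answer, ~1.6 + 2.24i
print(z, expected)
```

The two imaginary parts of the summands cancel to a real denominator, which is why the final answer's denominator is 25 = 4² + 3².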
2022-09-25 08:29:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5021619200706482, "perplexity": 14244.346916111303}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00365.warc.gz"}
https://glowing.com/community/topic/72057594042343203/ovulating-helpp
# Ovulating?? Helpp!!

Not sure what this means. 1 of these was this morning. The other was last night. I just started using these, i had the clear blue advanced opk and im just getting a flashing smiley everyday. So am i ovulating? Ovulated? Or About to? I had ewcm on the 2nd. Now i just have creamy cm. If it helps to add, i had a chemical that lasted 12/19-12/21. My period was SUPPOSED to show 12/15. And it says i was supposed to ovulate on the 29th.. but since i had the chemical it moved to the 2nd? Luckily i BD enough from the 29-2 lol?
2019-09-18 12:22:37
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8438085317611694, "perplexity": 2824.7716800206995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573284.48/warc/CC-MAIN-20190918110932-20190918132932-00031.warc.gz"}
https://questioncove.com/updates/4fc10d9ee4b0964abc830454
Mathematics

OpenStudy (anonymous): How many permutations of 4 letters can be made out of the letters of the word "EXAMINATION"? 6 years ago

OpenStudy (anonymous): $\frac{ {_{11}P_{4}} }{2!*2! } = 1980$ 6 years ago

OpenStudy (anonymous): "EXAMINATION" has 11 letters in total: 1 E, 1 X, 2 A, 1 M, 2 I, 2 N, 1 O, 1 T. The answer is the coefficient of $$x^4$$ in $4! (1+x)^5 (1+x+\frac {x^2}{2!})^3$, which is $$2454$$. 6 years ago
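The exponential-generating-function count in the second answer can be verified by brute force (a check added here, not part of the original thread): generate every 4-position permutation of the 11 letters and deduplicate the resulting letter sequences.

```python
from itertools import permutations

# Count distinct 4-letter arrangements of the letters of "EXAMINATION":
# 11P4 = 7920 position-permutations collapse to fewer distinct words
# because A, I, and N each appear twice.
distinct = set(permutations("EXAMINATION", 4))
print(len(distinct))  # 2454
```

This confirms the generating-function answer of 2454 (and shows that simply dividing ₁₁P₄ by repeated-letter factorials, as in the first answer, undercounts).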
2019-03-24 01:06:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19108159840106964, "perplexity": 8245.845495006157}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203123.91/warc/CC-MAIN-20190324002035-20190324024035-00150.warc.gz"}
http://www.physicsforums.com/showthread.php?t=704258
# Question of Obsession

by CubicFlunky77 Tags: obsession

P: 26 I am a 22-year-old male in the U.S. My fascination (or obsession) with math/computational neuroscience/set theory/analysis & topology topics has reached nearly unprecedented levels in my life, to the point of posing a detriment to my personal life (insomnia, pickiness, and periodic rage). I never took differential equations or Calc. III as far as the academic institution's agenda is concerned, since I was forced into the med-school track for the conventional "get-money-while-being-snobby-and-looking-smart" reason. Since this description does not match my life's ambition, I returned to math after reaching my senior year as a Bio undergrad.

I feel I am on the right path, but the harder I focus on what I wish to pursue, the more obsessed I become. For example, I thought yesterday was Monday since that was the last time I remembered studying. Then my mom recently told me that I'd been studying consistently for two consecutive weeks for well over 15 hours a day (typically from 11 a.m. to 7-9 a.m.; sleeping, then repeating). When I am tired, I do go home, after living in my school's math department over the aforementioned time period. As far as time goes, I have completely lost track of it. It is extremely annoying, since I feel that as soon as I start studying I have to stop after what seems to be 3 minutes to me when it has actually been several hours. The only reason I even stop is because I have to eat and sleep to keep going. When I tell the school psychiatrists/mental health folks, they seem to be more surprised at the notion of a black math major than directing their resources and attention towards alleviating the issue.

O.K. So now to my question: Is there a cause for a "mental-issue" concern when I try to solve a problem in D.Eq. such as this: $\int \frac {1}{y} dy \leftrightarrow \int \frac {1}{1 + x} dx$ by doing this:

Mentor P: 25,911 I suggest that you seek professional help.
We cannot help you with something this severe online. The best of luck to you.

Mentor P: 5,336 Please, please heed Evo's advice and consult a doctor. Losing track of time in such a significant way is not healthy, and neither is such obsessive behaviour.
2014-04-16 13:39:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23531296849250793, "perplexity": 1209.717505617564}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00638-ip-10-147-4-33.ec2.internal.warc.gz"}