url
stringlengths
14
2.42k
text
stringlengths
100
1.02M
date
stringlengths
19
19
metadata
stringlengths
1.06k
1.1k
http://v0dro.in/blog/2015/08/16/summary-of-work-this-summer-for-gsoc-2015/
# Summary of Work This Summer for GSOC 2015 Over this summer, as a part of Google Summer of Code 2015, daru received a lot of upgrades and new features which have made it a pretty robust tool for data analysis in pure ruby. Of course, a lot of work still remains to bring daru up to par with the other data analysis solutions on offer today, but I feel the work done this summer has put daru on that path. The new features led to the inclusion of daru in many of SciRuby’s gems, which use daru’s data storage, access and indexing features for storing and carrying around data. Statsample, statsample-glm, statsample-timeseries and statsample-bivariate-extensions are all now compatible with daru and use Vector and DataFrame as their primary data structures. Daru’s plotting functionality, which interfaced with nyaplot for creating interactive plots directly from the data, was also significantly overhauled. Also, new gems developed by other GSOC students, notably Ivan’s GnuplotRB gem and Alexej’s mixed_models gem, both accept data from daru data structures. Do see their repo pages for interesting ways of using daru. The work on daru is also proving to be quite useful for other people, which led to a talk/presentation at DeccanRubyConf 2015, one of the three major ruby conferences in India. You can see the slides and notebooks presented at the talk here. Given the current interest in data analysis and the need for a viable solution in ruby, I plan to take daru much further. Keep watching the repo for interesting updates :) In the rest of this post I’ll elaborate on all the work done this summer. ## Pre-mid term submissions Daru as a gem before GSOC was not exactly user friendly. There were many cases, particularly the iterators, that required some thinking before anybody could use them. This is against the design philosophy of daru, or even ruby in general, where surprising programmers with the behaviour of ubiquitous constructs is usually frowned upon by the community. 
So the first thing I did mainly concerned overhauling daru’s many iterators for both Vector and DataFrame. For example, the #map iterator from Enumerable returns an Array no matter what object you call it on. This was not the case before, where #map would return a Daru::Vector or Daru::DataFrame. This behaviour was changed, and now #map returns an Array. If you want a Vector or a DataFrame of the modified values, you should call #recode on the Vector or DataFrame. Each of these iterators also accepts an optional argument, :row or :vector, which defines the axis over which iteration is to be carried out. So now there are #each, #map, #map!, #recode, #recode!, #collect, #collect_matrix, #all?, #any?, #keep_vector_if and #keep_row_if. To iterate over elements along with their respective indexes (or labels), you can likewise use #each_row_with_index, #each_vector_with_index, #map_rows_with_index, #map_vector_with_index, #collect_rows_with_index, #collect_vector_with_index or #each_index. I urge you to go over the docs of each of these methods to utilize the full power of daru. Apart from this there was also quite a bit of refactoring involved for many methods (courtesy Alexej). This has made daru much faster than previous versions. The next (major) thing to do was making daru compatible with statsample. This was essential since statsample is a very important tool for statistics in ruby, and it was using its own Vector and Dataset classes, which weren’t very robust as computation tools and were very difficult to use when it came to cleaning or munging data. So I replaced statsample’s Vector and Dataset classes with Daru::Vector and Daru::DataFrame. This involved a significant amount of work on both statsample and daru: statsample because many constructs had to be changed to make them compatible with daru, and daru because there was a lot of essential functionality in these classes that had to be ported to daru. 
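The #map/#recode distinction can be illustrated with a minimal stand-in class. This is a hypothetical sketch, not daru's actual implementation; the real classes are Daru::Vector and Daru::DataFrame from the daru gem:

```ruby
# A hypothetical stand-in (NOT daru's real code) mimicking the semantics
# described above: #map always returns an Array, #recode keeps the container.
class MiniVector
  include Enumerable
  attr_reader :data

  def initialize(data)
    @data = data
  end

  # Yield each element; Enumerable builds #map, #all?, #any?, etc. on this,
  # and Enumerable#map always returns a plain Array.
  def each(&block)
    return enum_for(:each) unless block_given?
    @data.each(&block)
    self
  end

  # #recode returns a new vector of the modified values instead.
  def recode(&block)
    MiniVector.new(@data.map(&block))
  end
end

v = MiniVector.new([1, 2, 3])
v.map { |x| x * 2 }      # => [2, 4, 6], a plain Array
v.recode { |x| x * 2 }   # => a MiniVector holding [2, 4, 6]
```

The design point is the same one the post describes: iteration borrowed from Enumerable behaves exactly as a rubyist expects, and a separate, explicitly named method is used when the container type should be preserved.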
Porting code from statsample to daru improved daru significantly. There were a whole host of statistics methods in statsample that were imported into daru, and you can now use all of them from daru. Statsample also works well with rubyvis, a great tool for visualization. You can now do that with daru as well. Many new methods for reading and writing data to and from files were also added to daru. You can now read and write data to and from CSV, Excel, plain text files or even SQL databases. In effect, daru is now completely compatible with statsample (and all the other statsample extensions). You can use daru data structures for storing data and pass them to statsample for performing computations. The biggest advantage of this approach is that the analysed data can be passed around to other scientific ruby libraries (some of which are listed above) that use daru as well. Since daru offers in-built functions to better ‘see’ your data, better visualization is possible. See these blogs and notebooks for a complete overview of daru’s new features. Also see the notebooks in the statsample README for using daru with statsample. ## Post-mid term submissions Most of the time after the mid-term submissions was spent in implementing the time series functions for daru. I implemented a new index, the DateTimeIndex, which can be used for indexing data on time stamps. It enables users to query data based on time stamps. Time stamps can either be specified with precise ruby DateTime objects or as strings, which will lead to retrieval of all the data falling under that time. For example, specifying ‘2012’ returns all data that falls in the year 2012. See detailed usage of DateTimeIndex in conjunction with other daru constructs in the daru README. An essential utility in implementing DateTimeIndex was DateOffset, a new set of classes that offset dates based on certain rules or business logic. 
It can advance or lag a ruby DateTime to the nearest day, or any day of the week, or the end or beginning of the month, etc. DateOffset is an essential part of DateTimeIndex and can also be used as a standalone utility for advancing/lagging DateTime objects. This blog post elaborates more on the nuances of DateOffset and its usage. The last thing done in the post-mid-term period was complete compatibility with statsample-timeseries, which was created by Ankur Goel during GSOC 2013. It offers many useful functions for analysis of time series data. It now works with daru containers. See some use cases here. That's all, as far as I can remember.
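The idea behind date offsetting can be sketched with Ruby's standard library Date class alone. This is a concept illustration only, not daru's DateOffset API; the helper names below are made up (see the daru README for the real interface):

```ruby
require 'date'

# Advance a date to the last day of its month.
# (Date.new accepts -1 as "last day of the month".)
def end_of_month(date)
  Date.new(date.year, date.month, -1)
end

# Advance a date to the strictly next occurrence of a given day of the
# week (0 = Sunday .. 6 = Saturday).
def next_weekday(date, wday)
  diff = (wday - date.wday) % 7
  diff = 7 if diff.zero? # already on that weekday: jump a full week ahead
  date + diff
end

d = Date.new(2015, 8, 16)   # a Sunday
end_of_month(d)             # => 2015-08-31
next_weekday(d, 1)          # => 2015-08-17, the following Monday
```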
2019-03-19 01:40:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24583733081817627, "perplexity": 2908.011729952351}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912201882.11/warc/CC-MAIN-20190319012213-20190319034213-00357.warc.gz"}
https://xianblog.wordpress.com/tag/r/
## the most patronizing start to an answer I have ever received Posted in Books, Kids, R, Statistics, University life on April 30, 2015 by xi'an Another occurrence [out of many!] of a question on X validated where the originator (primitivus petitor) was trying to get an explanation without the proper background. On either Bayesian statistics or simulation. The introductory sentence to the question was about “trying to understand how the choice of priors affects a Bayesian model estimated using MCMC” but the bulk of the question was in fact failing to understand an R code for a random-walk Metropolis-Hastings algorithm for a simple regression model provided in an introductory blog by Florian Hartig. And even more precisely about confusing the R code dnorm(b, sd = 5, log = T) in the prior with rnorm(1,mean=b, sd = 5, log = T) in the proposal… “You should definitely invest some time in learning the bases of Bayesian statistics and MCMC methods from textbooks or on-line courses.” X So I started my answer with the above warning. Which sums up my feelings about many of those X validated questions, namely that primitivi petitores lack the most basic background to consider such questions. Obviously, I should not have bothered with an answer, but it was late at night after a long day, a good meal at the pub in Kenilworth, and a broken toe still bothering me. So I got this reply from the primitivus petitor that it was a patronizing piece of advice and he prefers to learn from R code than from textbooks and on-line courses, having “looked through a number of textbooks”. Good luck with this endeavour then! 
## scale acceleration Posted in pictures, R, Statistics, Travel, University life on April 24, 2015 by xi'an Kate Lee pointed me to a rather surprising inefficiency in matlab, exploited in Sylvia Frühwirth-Schnatter’s bayesf package: running a gamma simulation by rgamma(n,a,b) takes longer and sometimes much longer than rgamma(n,a,1)/b, the latter taking advantage of the scale nature of b. I wanted to check on my own whether or not R faced the same difficulty, so I ran an experiment [while stuck in a Thalys train at Brussels, between Amsterdam and Paris…] using different values for a [click on the graph] and a range of values of b. To no visible difference between both implementations, at least when using system.time for checking.

```r
a=seq(.1,4,le=25)
for (t in 1:25) a[t]=system.time(
  rgamma(10^7,.3,a[t]))[3]
a=a/system.time(rgamma(10^7,.3,1))[3]
```

Once arrived home, I wondered about the relevance of the above comparison, since rgamma(10^7,.3,1) forces R to use 1 as a scale, which may differ from using rgamma(10^7,.3), where 1 is known to be the scale [does this sentence make sense?!]. So I reran an even bigger experiment as

```r
a=seq(.1,4,le=25)
for (t in 1:25) a[t]=system.time(
  rgamma(10^8,.3,a[t]))[3]
a=a/system.time(rgamma(10^8,.3))[3]
```

and got the graph below. Which is much more interesting because it shows that some values of a are leading to a loss of efficiency of 50%. Indeed. (The most extreme cases correspond to a=0.3, 1.1, 5.8. No clear pattern emerging.) Update As pointed out by Martyn Plummer in his comment, the C function behind the R rgamma function and Gamma generator does take into account the scale nature of the second parameter, so the above time differences are not due to this function but rather to whatever my computer was running at the same time…! Apologies to anyone I scared with this void warning! 
## simulating correlated Binomials [another Bernoulli factory] Posted in Books, Kids, pictures, R, Running, Statistics, University life on April 21, 2015 by xi'an This early morning, just before going out for my daily run around The Parc, I checked X validated for new questions and came upon that one. Namely, how to simulate X a Bin(8,2/3) variate and Y a Bin(18,2/3) such that corr(X,Y)=0.5. (No reason or motivation provided for this constraint.) And I thought of the following (presumably well-known) resolution, namely to break the two binomials as sums of 8 and 18 Bernoulli variates, respectively, and to use some of those Bernoulli variates as being common to both sums. For this specific set of values (8,18,0.5), since 8×18=12², the solution is 0.5×12=6 common variates. (The probability of success does not matter.) While running, I first thought this was a very artificial problem because of this occurrence of 8×18 being a perfect square, 12², and cor(X,Y)×12 an integer. A wee bit later I realised that all positive values of cor(X,Y) could be achieved by randomisation, i.e., by deciding the identity of a Bernoulli variate in X with a Bernoulli variate in Y with a certain probability ϖ. For negative correlations, one can use the (U,1-U) trick, namely to write both Bernoulli variates as $X_1=\mathbb{I}(U\le p)\quad Y_1=\mathbb{I}(U\ge 1-p)$ in order to minimise the probability they coincide. I also checked this result with an R simulation

```r
> z=rbinom(10^8,6,.66)
> x=z+rbinom(10^8,2,.66)
> y=z+rbinom(10^8,12,.66)
> cor(x,y)
[1] 0.5000539
```

Searching on Google gave me immediately a link to Stack Overflow with an earlier solution with the same idea. And a smarter R code. 
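The same shared-Bernoulli construction can be re-checked outside R; here is a sketch in Ruby, with a naive hand-rolled rbinom stand-in and a much smaller sample size than the R run above:

```ruby
# Naive stand-in for R's rbinom: number of successes in n Bernoulli(p) trials.
def rbinom(rng, n, p)
  (1..n).count { rng.rand < p }
end

# Simulate `trials` pairs (X, Y) built on 6 shared Bernoulli variates,
# so X ~ Bin(8, 2/3), Y ~ Bin(18, 2/3) and cor(X,Y) = 6/sqrt(8*18) = 0.5.
def correlated_binomials(trials, seed: 42)
  rng = Random.new(seed)
  xs, ys = [], []
  trials.times do
    z = rbinom(rng, 6, 2.0 / 3)        # the 6 common variates
    xs << z + rbinom(rng, 2, 2.0 / 3)  # X = common + 2 extra
    ys << z + rbinom(rng, 12, 2.0 / 3) # Y = common + 12 extra
  end
  mx = xs.sum(0.0) / trials
  my = ys.sum(0.0) / trials
  cov  = xs.zip(ys).sum(0.0) { |x, y| (x - mx) * (y - my) } / trials
  sd_x = Math.sqrt(xs.sum(0.0) { |x| (x - mx)**2 } / trials)
  sd_y = Math.sqrt(ys.sum(0.0) { |y| (y - my)**2 } / trials)
  cov / (sd_x * sd_y)
end

correlated_binomials(50_000)  # comes out close to 0.5
```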
## reis naar Amsterdam Posted in Books, Kids, pictures, Running, Statistics, Travel, University life, Wines on April 16, 2015 by xi'an On Monday, I went to Amsterdam to give a seminar at the University of Amsterdam, in the department of psychology. And to visit Eric-Jan Wagenmakers and his group there. And I had a fantastic time! I talked about our mixture proposal for Bayesian testing and model choice without getting hostile or adverse reactions from the audience, quite the opposite, as we later discussed this new notion for several hours in the café across the street. I also had the opportunity to meet with Peter Grünwald [who authored a book on the minimum description length principle], who pointed out a minor inconsistency of the common parameter approach, namely that the Jeffreys prior on the first model did not have to coincide with the Jeffreys prior on the second model. (The Jeffreys prior for the mixture being unavailable.) He also wondered about a more conservative property of the approach, compared with the Bayes factor, in the sense that the non-null parameter could get closer to the null-parameter while still being identifiable. Among the many persons I met in the department, Maarten Marsman talked to me about his thesis research, Plausible values in statistical inference, which involved handling the Ising model [a non-sparse Ising model with O(p²) parameters] by an auxiliary representation due to Marc Kac and getting rid of the normalising (partition) constant by the way. (Warning, some approximations involved!) And he showed me a simple probit example of the Gibbs sampler getting stuck as the sample size n grows. Simply because the uniform conditional distribution on the parameter concentrates faster (in 1/n) than the posterior (in 1/√n). This does not come as a complete surprise as data augmentation operates in an n-dimensional space. Hence it requires more time to get around. 
As a side remark [still worth printing!], Maarten dedicated his thesis as “To my favourite random variables, Siem en Fem, and to my normalizing constant, Esther”, from which I hope you can spot the influence of at least two of my book dedications! As I left Amsterdam on Tuesday, I had time for an enjoyable dinner with E-J’s group, an equally enjoyable early morning run [with perfect skies for sunrise pictures!], and more discussions in the department. Including a presentation of the new (delicious?!) Bayesian software developed there, JASP, which aims at non-specialists [i.e., researchers unable to code in R, BUGS, or, God forbid!, STAN]. And about the consequences of mixture testing in some psychological experiments. Once again, a fantastic time discussing Bayesian statistics and their applications, with a group of dedicated and enthusiastic Bayesians! ## Le Monde puzzle [#905] Posted in Books, Kids, R, Statistics, University life on April 1, 2015 by xi'an A recursive programming Le Monde mathematical puzzle: Given n tokens with 10≤n≤25, Alice and Bob play the following game: the first player draws an integer 1≤m≤6 at random. This player can then take 1≤r≤min(2m,n) tokens. The next player is then free to take 1≤s≤min(2r,n-r) tokens. The player taking the last tokens is the winner. There is a winning strategy for Alice if she starts with m=3 and if Bob starts with m=2. Deduce the value of n. 
Although I first wrote a brute force version of the following code, a moderate amount of thinking leads to the conclusion that the player facing n remaining tokens after an adversary choice of m tokens such that 2m≥n always wins by taking the n remaining tokens:

```r
optim=function(n,m){
  outcome=(n<2*m+1)
  if (n>2*m){
    for (i in 1:(2*m))
      outcome=max(outcome,1-optim(n-i,i))
  }
  return(outcome)
}
```

```r
> subs=rep(0,16)
> for (n in 10:25) subs[n-9]=optim(n,3)
> for (n in 10:25) if (subs[n-9]==1) subs[n-9]=1-optim(n,2)
> subs
 [1] 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0
> (10:25)[subs==1]
[1] 18
```

Ergo, the number of tokens is 18! ## Le Monde puzzle [#902] Posted in Books, Kids, Statistics, University life on March 8, 2015 by xi'an Another arithmetics Le Monde mathematical puzzle: From the set of the integers between 1 and 15, is it possible to partition it in such a way that the product of the terms in the first set is equal to the sum of the members of the second set? Can this be generalised to an arbitrary set {1,2,..,n}? What happens if instead we only consider the odd integers in those sets? I used brute force by looking at random for a solution,

```r
pb <- txtProgressBar(min = 0, max = 100, style = 3)
for (N in 5:100){
  sol=FALSE
  while (!sol){
    k=sample(1:N,1,prob=(1:N)*(N-(1:N)))
    pro=sample(1:N,k)
    sol=(prod(pro)==sum((1:N)[-pro]))
  }
  setTxtProgressBar(pb, N)}
close(pb)
```

and while it took a while to run the R code, it eventually got out of the loop, meaning there was at least one solution for all n’s between 5 and 100. (It does not work for n=1,2,3,4, for obvious reasons.) For instance, when n=15, the integers in the product part are either 3,5,7, 1,7,14, or 1,9,11. 
Jean-Louis Fouley sent me an explanation: when n is odd, n=2p+1, one solution is (1,p,2p), while when n is even, n=2p, one solution is (1,p-1,2p). A side remark on the R code: thanks to a Cross Validated question by Paulo Marques, on which I thought I had commented on this blog, I learned about the progress bar function in R, setTxtProgressBar(), which makes running R code with loops much nicer! For the second question, I just adapted the R code to exclude even integers:

```r
while (!sol){
  k=1+trunc(sample(1:N,1)/2)
  pro=sample(seq(1,N,by=2),k)
  cum=(1:N)[-pro]
  sol=(prod(pro)==sum(cum[cum%%2==1]))
}
```

and found a solution for n=15, namely 1,3,15 versus 5,7,9,11,13. However, there does not seem to be a solution for all n’s: I found solutions for n=15,21,23,31,39,41,47,49,55,59,63,71,75,79,87,95… ## amazing Gibbs sampler Posted in Books, pictures, R, Statistics, University life on February 19, 2015 by xi'an When playing with Peter Rossi’s bayesm R package during a visit of Jean-Michel Marin to Paris, last week, we came up with the above Gibbs outcome. The setting is a Gaussian mixture model with three components in dimension 5 and the prior distributions are standard conjugate. In this case, with 500 observations and 5000 Gibbs iterations, the Markov chain (for one component of one mean of the mixture) has two highly distinct regimes: one that revolves around the true value of the parameter, 2.5, and one that explores a much broader area (which is associated with a much smaller value of the component weight). What we found amazing is the Gibbs ability to entertain both regimes, simultaneously.
2015-05-04 00:57:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6691534519195557, "perplexity": 1403.5702451047428}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430452285957.31/warc/CC-MAIN-20150501035125-00023-ip-10-235-10-82.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/kinematics-of-three-runners-in-a-race.693614/
# Kinematics of three runners in a race 1. May 25, 2013 ### bkraabel 1. The problem statement, all variables and given/known data Runners A, B, and C run a 100-m race, each at a constant speed. Runner A takes first place, beating runner B by 10 m. Runner B takes second place, beating runner C by 10 m. By what time interval does runner A beat runner C? 2. Relevant equations d = 100 m, Δd = 10 m, Δt_i = time for runner i to travel 100 m, v_i = speed of runner i 3. The attempt at a solution Can this problem be solved without knowing the speed of one of the runners? Here are a few equalities for this problem: v_a = d/Δt_a, v_b = (d - Δd)/Δt_a, v_c = (v_b·Δt_b - Δd)/Δt_b 2. May 25, 2013 ### barryj Try assuming a couple of different velocities for runner A and then work out the time difference. Does the time difference vary with the velocity of runner A? This should answer your question. 3. May 25, 2013 ### bkraabel Note that runner A beats runner C by more than 20 m. 4. May 25, 2013 ### rude man Yikes, that's right! 5. May 25, 2013 ### bkraabel I tried assuming a few speeds for runner A and found that the interval between runners A and C depends on the speed of runner A. In other words, there is no unique solution with the given information. 6. May 25, 2013 ### voko The problem has six unknowns (3 speeds and 3 times) and five equations. It is fairly trivial to express any five unknowns in terms of the other one. 7. May 25, 2013 ### rude man Right all around. At least I got the 'no unique solution' part right ... 8. May 27, 2013 ### nil1996 A beats C by a time period of 20/v_C, where v_C = velocity of C. 9. May 27, 2013 ### voko This assumes that A beats C by 20 meters. This is not given. 
What is given is that B beats C by 10 meters, which happens when B is at the finish line. When A is at the finish line, B is not there, so you can't assume that he is 10 meters ahead of C yet.
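The suggestion in post #2 (assume a speed for A, work out the time gap) can be carried out numerically; a quick sketch, with the two speeds for runner A chosen arbitrarily:

```ruby
# Time by which A beats C, for an assumed speed of runner A (m/s).
# The point is that the answer changes with v_a: no unique solution.
def a_beats_c_by(v_a)
  d   = 100.0              # race distance, metres
  t_a = d / v_a            # A's finishing time
  v_b = (d - 10.0) / t_a   # B has covered 90 m when A finishes
  t_b = d / v_b            # B's finishing time
  v_c = (d - 10.0) / t_b   # C has covered 90 m when B finishes
  d / v_c - t_a            # A-C time gap
end

a_beats_c_by(10.0)  # ≈ 2.35 s
a_beats_c_by(8.0)   # ≈ 2.93 s, a different gap for a different v_a
```

Algebraically, v_c = 0.81·v_a, so the gap is t_a·(1/0.81 − 1) ≈ 0.235·t_a, which scales with A's (unknown) finishing time, consistent with the "no unique solution" conclusion in the thread.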
2018-03-23 09:26:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33836543560028076, "perplexity": 1910.7666641078135}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648205.76/warc/CC-MAIN-20180323083246-20180323103246-00093.warc.gz"}
https://www.mpboardsolutions.com/mp-board-class-7th-maths-solutions-chapter-8-ex-8-2-english-medium/
# MP Board Class 7th Maths Solutions Chapter 8 Comparing Quantities Ex 8.2

Question 1. Convert the given fractional numbers to percents. Solution:

Question 2. Convert the given decimal fractions to percents. (a) 0.65 (b) 2.1 (c) 0.02 (d) 12.35 Solution: (a) 0.65 = 65% (b) 2.1 = 210% (c) 0.02 = 2% (d) 12.35 = 1235%

Question 3. Estimate what part of the figures is coloured and hence find the percent which is coloured. Solution: (i) Here, 1 part out of 4 equal parts is shaded, which represents the fraction $$\frac{1}{4}$$ = 25%. (ii) Here, 3 parts out of 5 equal parts are shaded, which represents the fraction $$\frac{3}{5}$$ = 60%. (iii) Here, 3 parts out of 8 equal parts are shaded, which represents the fraction $$\frac{3}{8}$$ = 37.5%.

Question 4. Find: (a) 15% of 250 (b) 1% of 1 hour (c) 20% of ₹ 2500 (d) 75% of 1 kg Solution: (a) 15% of 250 = 37.5 (b) 1% of 1 hour = 1% of 60 minutes = 0.6 minutes = 36 seconds (c) 20% of ₹ 2500 = ₹ 500 (d) 75% of 1 kg = 750 g

Question 5. Find the whole quantity if (a) 5% of it is 600 (b) 12% of it is ₹ 1080 (c) 40% of it is 500 km (d) 70% of it is 14 minutes (e) 8% of it is 40 litres Solution: Let the whole quantity be x. (a) x = 600 × 100/5 = 12000 (b) x = 1080 × 100/12 = ₹ 9000 (c) x = 500 × 100/40 = 1250 km (d) x = 14 × 100/70 = 20 minutes (e) x = 40 × 100/8 = 500 litres

Question 6. Convert given percents to decimal fractions and also to fractions in simplest forms: (a) 25% (b) 150% (c) 20% (d) 5% Solution: (a) 25% = 0.25 = $$\frac{1}{4}$$ (b) 150% = 1.5 = $$\frac{3}{2}$$ (c) 20% = 0.2 = $$\frac{1}{5}$$ (d) 5% = 0.05 = $$\frac{1}{20}$$

Question 7. In a city, 30% are females, 40% are males and the remaining are children. What percent are children? Solution: It is given that 30% are females and 40% are males. Children = 100% – (40% + 30%) = 100% – 70% = 30%

Question 8. Out of 15,000 voters in a constituency, 60% voted. Find the percentage of voters who did not vote. Can you now find how many actually did not vote? Solution: Percentage of voters who voted = 60% Percentage of those who did not vote = 100% – 60% = 40% Number of people who did not vote = 40% of 15000 = $$\frac{40}{100} \times 15000 = 6000$$ Therefore, 6000 people did not vote.

Question 9. Meeta saves ₹ 400 from her salary. If this is 10% of her salary, what is her salary? Solution: Let Meeta’s salary be ₹ x. Given that, 10% of x = 400, so x = 400 × 100/10 = 4000. Therefore, Meeta’s salary is ₹ 4000.

Question 10. 
A local cricket team played 20 matches in one season. It won 25% of them. How many matches did they win? Solution: Number of games won = 25% of 20 $$=\frac{25}{100} \times 20=5$$ Therefore, the team won 5 matches.
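The exercises above all reduce to two operations: taking a percentage of a whole, and recovering a whole from a known percentage of it. A short Ruby illustration (not part of the textbook):

```ruby
# Recover the whole quantity when `percent`% of it equals `part` (Question 5).
def whole_from_percent(percent, part)
  part * 100.0 / percent
end

# Take `percent`% of a whole (Questions 8 and 10).
def percent_of(percent, whole)
  whole * percent / 100.0
end

whole_from_percent(5, 600)    # => 12000.0  (Question 5a)
whole_from_percent(12, 1080)  # => 9000.0   (Question 5b, rupees)
whole_from_percent(70, 14)    # => 20.0     (Question 5d, minutes)
percent_of(40, 15_000)        # => 6000.0   (Question 8: non-voters)
percent_of(25, 20)            # => 5.0      (Question 10: matches won)
```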
2022-10-01 08:15:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6635302901268005, "perplexity": 5600.033881678748}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00775.warc.gz"}
http://www.gradesaver.com/textbooks/science/physics/fundamentals-of-physics-extended-10th-edition/chapter-8-potential-energy-and-conservation-of-energy-problems-page-202/4b
# Chapter 8 - Potential Energy and Conservation of Energy - Problems: 4b Work = -1.51 Joules #### Work Step by Step $F_{g}$ acts downwards, so only vertical displacements are taken into the calculation. At its highest point, the ball is perpendicular to the starting point, therefore it has only traveled the vertical distance L. Since gravity acts downwards and the ball traveled upwards, the displacement must be negative: L = -0.452 m. Work = $F_{g}$ $\times$ L, where $F_{g}$ = 0.341 kg $\times$ 9.8 m/$s^{2}$ = 3.3418 N. Work = 3.3418 N $\times$ (-0.452 m) = -1.51 Joules
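The arithmetic in the step-by-step can be checked in a few lines of Ruby (an illustration; the result rounds to the stated -1.51 J):

```ruby
m = 0.341    # mass of the ball, kg
g = 9.8      # gravitational acceleration, m/s^2
l = -0.452   # vertical displacement, m (negative: the ball rises while gravity pulls down)

f_g  = m * g    # weight, about 3.3418 N
work = f_g * l  # about -1.5105 J, i.e. -1.51 Joules after rounding
```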
2018-02-19 12:19:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8411692976951599, "perplexity": 2451.377330876798}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812584.40/warc/CC-MAIN-20180219111908-20180219131908-00357.warc.gz"}
https://yourgolfermagazine.co.uk/blog/cuq2oq.php?97feb0=proof-calculator-discrete-math
Order theory is the study of partially ordered sets, both finite and infinite. Theoretical computer science includes areas of discrete mathematics relevant to computing. The set of objects studied in discrete mathematics can be finite or infinite. Many questions and methods concerning differential equations have counterparts for difference equations. Discrete probability distributions arise in the mathematical description of probabilistic and statistical problems in which the values that might be observed are restricted to being within a pre-defined list of possible values.

A proof is a rigorous mathematical argument which unequivocally demonstrates the truth of a given proposition. It contains a sequence of statements, the last being the conclusion, which follows from the previous statements. A mathematical statement that has been proven is called a theorem. Common proof techniques include: • Direct proof • Contrapositive • Proof by contradiction • Proof by cases. For example, if $n=2k$ is even then $n+1 = 2k+1$ is odd, and $(n+1)^2 = 4k^2+4k+1$ is odd. There is some debate among mathematicians as to just what constitutes a proof. Although many mathematicians regard computer-assisted proofs as valid, some purists do not, particularly in cases which cannot be verified "by hand." There are several computer systems currently under development for automated theorem proving, among them THEOREMA (https://www.risc.uni-linz.ac.at/research/theorema/description/). As Hardy put it (pp. 15-16), "all physicists, and a good many quite respectable mathematicians, are contemptuous about proof. I have heard Professor Eddington, for example, maintain that proof, as pure …" A page of proof-related humor is maintained by Chalmers.

Although topology is the field of mathematics that formalizes and generalizes the intuitive notion of "continuous deformation" of objects, it gives rise to many discrete topics; this can be attributed in part to the focus on topological invariants, which themselves usually take discrete values. Discrete algebras include: boolean algebra, used in logic gates and programming; relational algebra, used in databases; discrete and finite versions of groups, rings and fields, which are important in algebraic coding theory; and discrete semigroups and monoids, which appear in the theory of formal languages. Graph theory, the study of graphs and networks, is often considered part of combinatorics, but has grown large enough and distinct enough, with its own kind of problems, to be regarded as a subject in its own right. In computer science, graphs can represent networks of communication, data organization, computational devices, the flow of computation, etc. Number theory is concerned with the properties of numbers in general, particularly integers. Partition theory studies various enumeration and asymptotic problems related to integer partitions, and is closely related to q-series, special functions and orthogonal polynomials. The Fulkerson Prize is awarded for outstanding papers in discrete mathematics.

References: Chalmers, D. "Philosophical Humor." Derbyshire, J. Prime Obsession: Bernhard Riemann and the Greatest Unsolved Problem in Mathematics. Epstein, D. and Levy, S. "Experimentation and Proof in Mathematics." Notices Amer. Math. Soc. 42, 670-674, 1995. Garnier, R. and Taylor, J. 100% Mathematical Proof. Hardy, G. H. "Mathematical Proof." Mind 38, 1-25, 1929. Hardy, G. H. Ramanujan: Twelve Lectures on Subjects Suggested by His Life and Work, 3rd ed. Polya, G. How to Solve It: A New Aspect of Mathematical Method, 2nd ed. Polya, G. Mathematics and Plausible Reasoning. Solow, D. How to Read and Do Proofs: An Introduction to Mathematical Thought Process, 2nd ed. New York: Wiley, 1990. Vakil, R. A Mathematical Mosaic: Patterns and Problem Solving. "Simpsons Math," https://www.cs.appstate.edu/~sjg/simpsonsmath/blackboard.html
Brady V United States Quimbee, Parastatals Under The Presidency, Find more Mathematics widgets in Wolfram|Alpha. According to Hardy (1999, pp. Rode Ntg2 Vs Ntg3, Aurelius Zoticus, for c Operations research provides techniques for solving practical problems in engineering, business, and other fields — problems such as allocating resources to maximize profit, and scheduling project activities to minimize risk. From MathWorld--A Thanks for contributing an answer to Mathematics Stack Exchange! Frick Collection Fragonard Room, Benito Masilevu, So $n= 2k$, for some integer $k$. Concepts such as infinite proof trees or infinite derivation trees have also been studied,[17] e.g. Is Tesla Stock A Good Buy Reddit, "Simpsons Math." Geometric Distribution Allenby, R. Numbers For example, an assignment where p More formally, discrete mathematics has been characterized as the branch of mathematics dealing with countable sets[4] (finite sets or sets with the same cardinality as the natural numbers). Many questions and methods concerning differential equations have counterparts for difference equations. All rights are reserved. Assoc. Princeton, Research Institute for Symbolic Computation. Order theory is the study of partially ordered sets, both finite and infinite. Shahid Online Net, A Book With No Words, Fauna Goddess, 1: Induction and Analogy in Mathematics. Shoe Descriptions, https://www.risc.uni-linz.ac.at/research/theorema/description/. Vakil, R. A Mathematical Mosaic: Patterns and Problem Solving. Pólya, G. Mathematical Discovery: On Understanding, Learning, and Teaching Problem Solving, 2 vols. The Influence Netflix Rotten Tomatoes, Concepts such as infinite proof trees or infinite derivation trees have also been studied,[17] e.g. Run Math Playground, 1: Induction and Analogy in Mathematics. Builds the Affine Cipher Translation Algorithm from a string given an a and b value ... 
the calculator will use the Chinese Remainder Theorem to find the lowest possible solution for x in each modulus equation. Liberty Hdx 220, English Pronunciation In Use Intermediate 2nd Edition, South Bend Lions Facebook, Zoom Ms-70cdr Vs Ms-50g, Tupac Quotes About Death, Gilda Dent, Outro Youtube, What Is Gdp, Gnp And Nnp, Nelson Mandela Achievements,
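The Chinese Remainder Theorem solver mentioned above is easy to sketch in code. The following is an illustrative Python implementation, not the calculator's actual code; the function name `crt` is my own, and it relies on Python 3.8+'s three-argument `pow(b, -1, m)` for the modular inverse:

```python
from math import gcd

def crt(congruences):
    """Return the lowest non-negative x with x = a (mod m) for every
    (a, m) pair, or raise ValueError if the system has no solution."""
    x, n = 0, 1                      # solution so far, valid modulo n
    for a, m in congruences:
        g = gcd(n, m)
        if (a - x) % g:
            raise ValueError("no solution")
        # Choose t so that x + n*t = a (mod m).
        t = ((a - x) // g * pow(n // g, -1, m // g)) % (m // g)
        x += n * t
        n = n // g * m               # lcm of the moduli seen so far
    return x % n

print(crt([(2, 3), (3, 5), (2, 7)]))  # → 23
```

The classic example x ≡ 2 (mod 3), x ≡ 3 (mod 5), x ≡ 2 (mod 7) yields the lowest solution 23, and this version also handles non-coprime moduli when the system is consistent.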
https://www.biostars.org/p/9533084/
Running STAR

Chris (8 weeks ago):

Hi bioinformaticians,

How can I verify that the index files I created are correct? When I start running the alignment with:

    STAR --runThreadN 8 \
         --genomeDir /home/doanc2/hg38/hg38_index \

I got this error:

    EXITING because of FATAL ERROR: could not open genome file /home/doanc2/hg38/hg38_index//genomeParameters.txt
    SOLUTION: check that the path to genome files, specified in --genomeDir is correct and the files are present, and have user read permsissions

Thank you so much!

Comment (ATpoint): What is the output of ls /home/doanc2/hg38/hg38_index/ and what was the command line to create the index?

Reply (Chris): The output of ls /hg38_index/:

    chrLength.txt      chrStart.txt      geneInfo.tab  SA_1   SA_2  SA_5  SA_8  transcriptInfo.tab
    chrNameLength.txt  exonGeTrInfo.tab  Log.out       SA_10  SA_3  SA_6  SA_9
    chrName.txt        exonInfo.tab      SA_0          SA_11  SA_4  SA_7  sjdbList.fromGTF.out.tab

Command line to create the index:

    STAR --runThreadN 8 \
         --runMode genomeGenerate \
         --genomeDir /home/doanc2/hg38/hg38_index \
         --genomeFastaFiles /home/doanc2/hg38/Homo_sapiens.GRCh38.dna_sm.primary_assembly.fa \
         --sjdbGTFfile /home/doanc2/hg38/Homo_sapiens.GRCh38.107.gtf \
         --sjdbOverhang 99

Thank you!

Reply (ATpoint): It seems incomplete. Did any errors come up, or did the indexing job get killed? Any log messages or errors?

Reply (Chris): You are right. The indexing job got killed. But I still got 27 GB created, so I tried to align. I am rerunning the indexing to let you know the exact error.

Reply (ATpoint): Yes, rerun. Indexing is an all-or-nothing thing; you cannot do anything with an incomplete index.

Reply (Chris): The last time I ran it, I got this: client_loop: send disconnect: Broken pipe, after running for quite a long time. So I guess my inputs were incorrect.

Reply (ATpoint): How was this run and on which machine? Broken pipe is usually related to disconnection of a terminal session from a remote host.
No, if inputs were wrong it would not even start building the index -- STAR is smart enough to check that up front.

Reply (Chris): This was run on a cluster using ssh. I am still waiting for the rerun:

    Aug 01 10:03:54 ..... started STAR run
    Aug 01 10:03:54 ... starting to generate Genome files
    Aug 01 10:05:27 ..... processing annotations GTF
    Aug 01 10:06:57 ... starting to sort Suffix Array. This may take a long time...
    Aug 01 10:08:03 ... sorting Suffix Array chunks and saving them to disk...

Thank you so much!

Reply (ATpoint): Try to either submit it to the cluster scheduler, if one exists, or at least run it via something like screen. These logs look OK; just wait until it finishes. Be sure, though, that you are not running this on the head node but really on a dedicated cluster node.

Reply (Chris):

    Aug 01 13:02:12 ..... started STAR run
    Aug 01 13:02:12 ... starting to generate Genome files
    Aug 01 13:03:46 ..... processing annotations GTF
    Aug 01 13:05:17 ... starting to sort Suffix Array. This may take a long time...
    Aug 01 13:06:22 ... sorting Suffix Array chunks and saving them to disk...
    client_loop: send disconnect: Broken pipe

I made sure the script doesn't run on a head node by submitting it to a grid engine. It ran for about 120 minutes, which I think should be about 30 minutes, and stopped with the error above.

Reply (Chris): Hi ATpoint, STAR has now run for almost a day. This seems incorrect.

    job-ID  prior    name  user    state  submit/start at      queue         slots  ja-task-ID
    101045  0.60500  star  doanc2  r      08/01/2022 17:32:17  all.q@fenn07  8
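The "client_loop: send disconnect: Broken pipe" failures in this thread are consistent with the SSH session dying and taking the foreground job with it. A generic way to avoid that (besides screen/tmux or a proper scheduler submission) is to detach the job with nohup. This is an illustrative sketch, not advice from the thread itself: the sleep/echo pair stands in for the real STAR --runMode genomeGenerate invocation, and star_index.log is a made-up log name.

```shell
# Detach a long-running job from the terminal so a dropped SSH
# connection cannot kill it. 'sleep 2; echo ...' stands in for the
# real STAR genomeGenerate command; output goes to a log file.
nohup sh -c 'sleep 2; echo "indexing finished"' > star_index.log 2>&1 &
job=$!
# In a real session you would log out here; the job keeps running.
# We wait only so this demo can inspect the log afterwards.
wait "$job"
cat star_index.log
```

With the real command substituted in, you can log out, reconnect later, and check the log (or STAR's own Log.out) for completion.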
https://www.gradesaver.com/textbooks/math/algebra/elementary-algebra/chapter-4-proportions-percents-and-solving-inequalities-chapter-4-review-problem-set-page-183/15
## Elementary Algebra (published by Cengage Learning)

### Chapter 4 - Proportions, Percents, and Solving Inequalities - Chapter 4 Review Problem Set - Page 183: 15

#### Answer

The car will use 30 gallons of gas on a 615-mile trip.

#### Work Step by Step

Let x represent the amount of gas the car will use on a 615-mile trip. Now, let's set up a proportion in which one ratio compares the amounts of fuel used and the other ratio compares the corresponding distances traveled:

$\frac{18}{x}$ = $\frac{369}{615}$

To solve this equation, we equate the cross products:

18 $\times$ 615 = x $\times$ 369

11070 = 369x

Divide both sides by 369:

30 = x

The car will use 30 gallons of gas on a 615-mile trip.
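The cross-multiplication above can be sanity-checked in a couple of lines of Python (the variable names are mine, not part of the textbook solution):

```python
# Proportion: 18 gallons / x gallons = 369 miles / 615 miles
gallons_known, miles_known, miles_trip = 18, 369, 615

cross_product = gallons_known * miles_trip   # 18 * 615 = 11070
x = cross_product / miles_known              # 11070 / 369
print(x)  # → 30.0
```

The division is exact (369 × 30 = 11070), so the float result is exactly 30.0.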
http://codereview.stackexchange.com/questions/19460/is-this-a-proper-way-of-loading-views-in-php
As a learning exercise, I'm developing my own PHP framework. I'm looking for a way to "load views" (kinda like CodeIgniter does it), without polluting my general scope. I came up with the following, basic, example:

    $data['test'] = 'Hello world';
    $str = load_view($data, true);
    echo $str;

    function load_view($data, $store = false)
    {
        extract($data);
        ob_start();
        include('view.php');
        if ($store)
            return ob_get_clean();
        else
            ob_end_flush();
    }

view.php:

    <p><?php echo $test; ?></p>

I could also use load_view($data), which would output the contents of view.php immediately.

Edit: I'm mostly worried about performance. As Peter pointed out, I'm aware that the function should be a class method that's separate from the logic.

I agree that it's a better pattern to wrap your view logic in a class. Here is some code I whipped together tonight - it represents the simplest class I could devise that contains the minimum logic that I require in a view:

- Includes - ability to include other views
- Captures - ability to easily capture content within your view
- Layouts - ability to inject data into a re-usable layout template
- Fetching - ability to fetch view output instead of sending it to the output buffer
- Data - ability to access the resulting data once the view is finished

Refactored per conversation in comments regarding passing $data by reference.

View.php:

    <?php
    /**
     * Simple view class that supports includes, capturing, and layouts, as well
     * as retrieving rendered view content and resulting data.
     *
     * *NOTE* When a view uses a layout, the output of the view is ignored,
     * as the view is expected to use capture() to send data to the layout.
     *
     * @author David Farrell <DavidPFarrell@gmail.com>
     */
    class View implements ArrayAccess
    {
        /**
         * View file to include
         * @var string
         */
        private $file;

        /**
         * View data
         * @var array
         */
        private $data;

        /**
         * Layout to include (optional)
         * @var string
         */
        private $layout;

        /**
         * Constructor
         *
         * @param string $file file to include
         */
        public function __construct($file)
        {
            $this->file = $file;
        }

        /**
         * render - renders the view using the given data
         *
         * @param array $data
         * @return void
         */
        public function render($data)
        {
            $this->data = $data;
            $this->layout = null;

            ob_start();
            include ($this->file);

            // If we did not set a layout
            if (null === $this->layout)
            {
                // flush view output
                ob_end_flush();
            }
            // We set a layout
            else
            {
                // Ignore view output
                ob_end_clean();
                // Include the layout
                $this->include_file($this->layout);
            }
        }

        /**
         * fetch - fetches the view result instead of sending it to the output buffer
         *
         * @param array $data
         * @return string The rendered view content
         */
        public function fetch($data)
        {
            ob_start();
            $this->render($data);
            return ob_get_clean();
        }

        /**
         * get_data - returns the view data
         *
         * @return array
         */
        public function get_data()
        {
            return $this->data;
        }

        /**
         * include_file - used by view to include sub-views
         *
         * @param string $file
         * @return void
         */
        protected function include_file($file)
        {
            $v = new View($file);
            $v->render($this->data);
            $this->data = $v->get_data();
        }

        /**
         * set_layout - used by view to indicate the use of a layout.
         *
         * If a layout is selected, the normal output of the view will be
         * discarded. The only way to send data to the layout is via
         * capture()
         *
         * @param string $file
         * @return void
         */
        protected function set_layout($file)
        {
            $this->layout = $file;
        }

        /**
         * capture - used by view to capture output.
         *
         * When a view is using a layout (via set_layout()), the only way to pass
         * data to the layout is via capture(), but the view can use capture()
         * to capture text any time, for any reason, even if the view is not using
         * a layout
         *
         * @return void
         */
        protected function capture()
        {
            ob_start();
        }

        /**
         * end_capture - used by view to signal the end of a capture().
         *
         * The content of the capture is stored under $name
         *
         * @param string $name
         * @return void
         */
        protected function end_capture($name)
        {
            $this->data[$name] = ob_get_clean();
        }

        /* ArrayAccess methods */
        public function offsetExists($offset) { return isset($this->data[$offset]); }
        public function offsetGet($offset) { return $this->data[$offset]; }
        public function offsetSet($offset, $value) { $this->data[$offset] = $value; }
        public function offsetUnset($offset) { unset($this->data[$offset]); }
    }

run.php:

    <?php
    require "View.php";

    $v = new View('view_main_simple.php');
    $fetch = $v->fetch(array('message' => 'Hello, world'));
    print("Fetch result: {$fetch}\n");

    $v = new View('view_main_complex.php');
    $v->render(array('one' => 1, 'two' => 2, 'rows' => array('a','b','c')));
    $data = $v->get_data();
    print("\n");
    var_export($data);

view_main_simple.php:

    The message is: <?php echo $this['message'] ?><br/>

view_main_complex.php:

    <?php $this->set_layout('view_layout.php') ?>
    <?php $this->capture() ?>
    one=<?php echo $this['one'] ?><br/>
    <?php $this->include_file('view_include.php') ?>
    three=<?php echo $this['three'] ?><br/>
    <?php $this->end_capture('body') ?>

view_include.php:

    two=<?php echo $this['two'] ?><br/>
    <?php $this['three'] = 3 ?>
    <ul>
    <?php foreach ($this['rows'] as $row) { ?>
        <li><?php echo $row ?></li>
    <?php } ?>
    </ul>

view_layout.php:

    <html>
    <body>
    <pre>
    <?php echo $this['body'] ?>
    </pre>
    </body>
    </html>

Program output:

    Fetch result: The message is: Hello, world<br/>

    <html>
    <body>
    <pre>
    one=1<br/>
    two=2<br/>
    <ul>
        <li>a</li>
        <li>b</li>
        <li>c</li>
    </ul>
    three=3<br/>
    </pre>
    </body>
    </html>
    array (
      'one' => 1,
      'two' => 2,
      'rows' =>
      array (
        0 =>
        'a',
        1 => 'b',
        2 => 'c',
      ),
      'three' => 3,
      'body' => '
    one=1<br/>
    two=2<br/>
    <ul>
        <li>a</li>
        <li>b</li>
        <li>c</li>
    </ul>
    three=3<br/>
    ',
    )

Comments:

- Do not use passing by reference: schlueters.de/blog/archives/125-Do-not-use-PHP-references.html Ha-ha, my answer is similar to yours but I have -1 point. :) – Peter Kiss Dec 11 '12 at 18:57
- Greetings @PeterKiss! You'll notice that the public functions render() and fetch() do NOT accept references, meaning that they act on a copy of the passed-in $data - I then use references internally so that modifications to MY $data can be maintained across includes and layouts - I then offer that (possibly) modified $data back to the caller via getData(). So I never alter the user's personal $data, but do (carefully) manage references within my own scope for the benefit of the view (and the user, if they choose) - so I'll take my +1 back now please :) – David Farrell Dec 11 '12 at 19:27
- There is no need for that hack. If you assign $data to $this->data in render then do_render doesn't need any parameter. And why are there both render and do_render? do_render is private, yet you are still calling it as public: (new View($file))->do_render($this->data); – Peter Kiss Dec 11 '12 at 19:55
- @PeterKiss - Your statement isn't fully correct, as it doesn't address my need to allow includes to modify $data - but you are right in that I don't need to pass $data around by reference - I already had all the tools I needed, namely the getData() function (now renamed to get_data() for consistency). I have refactored the class to no longer pass $data by reference, merged do_render() into render(), and now reassign my local $data based on the result of the included view. – David Farrell Dec 11 '12 at 20:27

One thing to note is that in MVC proper, views get their own data from the model. The view should be passed a model and call methods on it to access enterprise data. This makes the common extract($data); method redundant.
A simplistic view API may be:

    $view = new View($model);
    echo $view->render();

If you bring templates into the equation, you could use:

    $view = new View($model, $template);

but the important thing to note is that the view does not get fed data by the controller in MVC. And although many web "MVC" frameworks take this approach, it is technically not MVC but PAC. I don't have enough rep to post images, but see the image of MVC on the Wikipedia article: you'll see the controller never interacts with the view. For more information on the MVC architecture see: http://st-www.cs.illinois.edu/users/smarch/st-docs/mvc.html and http://www.itu.dk/courses/VOP/E2005/VOP2005E/8_mvc_krasner_and_pope.pdf

I don't see any huge problem with what you have - specifically the use of extract(). I doubt you'll notice any difference in either memory use or execution time. The caveats you should be aware of are:

1. What happens if your $data array contains a key "store"? It will overwrite your $store argument.
2. Although it would be unusual for view files to initialize variables, it's possible they could. In such cases, you'll have variable name collisions.

A big advantage I see in passing the rendered view back as a string is that you can use views inside other views or layouts. Second, if you wish to test your controllers, you can assert things about the returned view. You might consider simply removing the second argument and always returning a value - that would mean one less variable (to get collided with) and would leave you with one return type.

Comment: +1. As a simplistic method the code is fine - there's no value in the store argument though - just use $foo = load_view($bar); or echo load_view($bar) as appropriate. It's also exactly how CakePHP does it. Should point at the warnings for extract() though - it can be dangerous if used on untrusted data without using e.g. EXTR_SKIP. – AD7six Dec 11 '12 at 9:33

No, this is an ugly way to do it.

## extract() (PHP4 stuff in 2012/2013?)
The problem with extract() is that it creates variables in the global hyperspace (oh God, why), where other variables can already exist (locally, too!), so you can have name collisions and overwrite old values. Besides this, you lose control over your code, and it becomes hard to maintain (debug and fix, too) and to extend.

## load_view()

It does not always return a value: in PHP this is okay, but in general programming it's a bad habit. In one case I do one thing, and in another case I do something completely different in the same function? And why am I in a function at all? How about a non-static class method, where I know everything in my small environment? Here is a small example of the idea:

## View class

    <?php
    class View
    {
        private $_file;
        public $Data;

        public function __construct($viewFile, array $data)
        {
            $this->_file = $viewFile;
            $this->Data = $data;
        }

        public function Render()
        {
            require $this->_file;
        }
    }

## Usage

In the view file:

    <ul>
    <?php foreach ($this->Data["rows"] as $object) {
        echo "<li>";
        echo $object->Name;
        echo "</li>";
    } ?>
    </ul>

The output buffer handling is never part of the view itself. There has to be some kind of ViewEngine which can handle this: one thing, one responsibility. The problem with the example is that it's missing a whole lot of infrastructure! In my MVC experiment I have Controller, ControllerBuilder, ActionInvoker, abstract ActionResult, ViewResult and a lot of other stuff that is necessary to get things done. In my default IView implementation I have a Model "property" besides the ViewData, I can map a lot of helpers like Html or Url and any others, and only then come the rest: RenderPartial(), RenderSection(), Section(), SectionStart(), SectionEnd(), Layout(); these point to my current IViewEngine implementation, which handles all this.

Comment: Honestly, I don't see how your example is better. Yes, you have put it in a class, which I would do as well, but didn't for the sake of keeping things simple.
Your method also doesn't provide a way to return the output of the view as a string; it merely echoes it when calling the Render method. – Bram Dec 10 '12 at 15:24

- Also, I feel like it's very unclean to use stuff like $this->Data["rows"] inside a view file. $this has no logical context there in my opinion; it would make much more sense to use $rows (that's why I used the extract function). Unless that's a huge memory hog / is bad for performance. – Bram Dec 10 '12 at 15:45
- Made an edit to my answer to explain my thoughts. – Peter Kiss Dec 10 '12 at 17:54
- @PeterKiss extract() only imports into the scope it's used in. In this case, he's using it inside a function, so it is not importing the variables into the global scope. – Rob Apodaca Dec 11 '12 at 0:40
- -1. What control? How is the use of extract() PHP4? Note that extract() has a number of flags. I think taking a simple method and then saying the problem is that it doesn't look like your MVC class structure isn't helpful. Forcing $this->Data['x'] instead of $x only means more verbose view files; it doesn't offer benefits over the use of extract() where extract() is used in non-global scope. – AD7six Dec 11 '12 at 9:26
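For comparison with the extract()-plus-output-buffering approach in the question, the same idea - expose dict keys as template variables without polluting the caller's scope, and capture whatever the template prints - can be sketched in Python. This is my own illustrative analog, not from the thread; load_view mirrors the PHP function's name, and the throwaway view script and the "test" key are made up:

```python
import contextlib
import io
import os
import tempfile

def load_view(path, data, store=False):
    """Run a Python 'view' script with the keys of `data` exposed as
    variables (the analog of PHP's extract()) and capture anything it
    prints (the analog of ob_start()/ob_get_clean())."""
    with open(path) as f:
        src = f.read()
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        # exec() receives a *copy* of data as its namespace, so the
        # caller's scope is never polluted by the view's variables.
        exec(src, dict(data))
    out = buf.getvalue()
    if store:
        return out
    print(out, end="")

# Usage: a throwaway 'view' that prints markup built from its variables.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as view:
    view.write('print("<p>" + test + "</p>")')

html = load_view(view.name, {"test": "Hello world"}, store=True)
print(html)  # → <p>Hello world</p>
os.unlink(view.name)
```

The same caveats raised in the answers apply here: a key named "store" would shadow nothing (it lives in the exec namespace, not the function's), but exec on untrusted templates is as dangerous as extract() on untrusted data.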
http://anarcho-mercantilist.blogspot.com/2009_01_01_archive.html
## Friday, January 23, 2009

### Against Punishment - Even Nonviolent Punishment

I want to apologize for misrepresenting BrainPolice's position on punishment. I generally agree with his opinion on punishment. I view punishment as a "pragmatic measure to deter crimes that psychopaths plan to commit." While I do hold that the current statist society constantly misapplies punitive measures, I believe that even in anarchy, punishment must exist to deter certain crimes. I disagree that boycotts, as promoted by Stefan Molyneux (pages 118-119), are an effective measure to deter crime.

I disbelieve in the "indoctrination of children" and the "operant conditioning" pseudoscience promoted by the behaviorist school of psychology. In fact, I oppose punishment more than even the most radical detractors of corporal punishment do. I differ from them because I oppose not only corporal punishment, but all types of punishment inflicted on children (unless the child psychopathically committed a crime). I view punishment as an irrational act, and it suffers from principal-agent problems. Children can easily avoid being punished by their parents, just as some drug dealers can avoid being kidnapped by the state. I do not hold the Freudian view, as Stefan Molyneux and Danny Shahar do, that childhood experiences primarily determine behavior in adult life.

## Tuesday, January 20, 2009

### Limitations on Self-Defense

In response to BrainPolice's view on self-defense, I posted my view on self-defense below. Previous posts related to this topic include The Fallacies of Moral Subjectivism and Subjective Property. I will articulate my view below:

Some individuals use straw-man arguments against those who oppose deadly force for trespassing, painting them as "anti-victim" or "anti-defense." They often misinterpret the definition of "punishment," and so they see the libertarian law of proportional punishment as promoting aggression.
Too often, they conflate "punishment" with "self-defense." Punishment is the infliction of a deterrent only after a crime has been committed, while self-defense is the use of force while a crime is being committed. Thus, libertarian law does not forbid imposing "unproportional" force in situations of self-defense.

Detractors of the libertarian law of proportional punishment argue that since value is subjective, no one can accurately determine "proportional" compensation. However, as we will argue that property boundaries are also subjective, these arguments present a straw man. The law of proportional punishment is better rephrased as: compensatory damages should be proportional. The law of proportionality, however, places no limit on the amount of punitive damages. Whether a "crime" has occurred is also subjective, but not to the extent of valuation. For example, an individual may mistakenly take someone's house for a shopping store and walk in, and in a certain cultural context the owner may shoot him; but if someone merely walks up a person's yard or walkway, the owner does not have the right to kill him, since in the social context yards and walkways lead to his door so that visitors can communicate with him.

Self-defense, unlike punishment, does not follow the proportionality law. The owner has the right to use deadly force against an aggressor who knows he is intentionally being mischievous and is harming you. If the individual knows that he is intentionally being mischievous, knows the risk that the owner might shoot him, and aggresses against the property anyway, then shooting him is certainly justified.

If an individual places a sign in front of his property that says "all trespassers will be shot," it is still possible that someone might think they are permitted to walk up his walkway or into his yard to communicate with the owner.
Individuals who do not know how to read English are more likely to ignore the sign. Even if the warning is very obvious to the common man, what if a colorblind person didn't notice it? So it depends on the intentionality of the aggressor. If a robber points his unloaded gun at you, the aggressor is clearly acting intentionally, so it is justified to shoot him. However, suppose a random, confused individual walks onto your lawn; he might think he is on commonly owned property. In that case, the homeowner does not have the right to use deadly force against him. If someone "breaks into" the house by breaking doors or windows, you clearly have the right to use deadly force, as the aggressor is clearly acting intentionally: he destroyed your property.

Suppose you own a house that has automatic doors and large windows displaying a variety of goods inside. With both of these, your house shows the common features of a shopping store. Many individuals who look at your house would, going by its appearance, confuse it with a shopping store. If the "shoppers" go inside your house planning to buy things, you do not have the right to shoot them, because the look and feel of your house resembles a shopping store too closely, thus displaying an "implicit contract" granting individuals permission to enter. Whether such an "implicit contract" allowing individuals to enter exists is therefore too subjective.

Suppose an individual grows up in a culture that permits individuals to step onto your front yard. When he moves to a different society whose culture does not tolerate anyone walking on the front yard, he will likely be shot. Suppose an individual who does not know English misinterprets your building as some public resort and enters; he, too, will likely be shot. Thus, we have shown that property boundaries may potentially have subjective interpretations. Everyone has the right of self-ownership.
Thus everyone has the right to die. So if an individual enters another's property and assumes the risk that the homeowner will shoot him in self-defense, the homeowner may shoot. But due to vague property boundaries, no one can be sure whether the individual is behaving mischievously. So we will have a "common law" that establishes and objectively defines these boundaries. We will have different arbitrators, because, due to cultural and social differences, not all arbitrators are suited to judge under different cultural standards of aggression. For example, if some cultures define "hate speech" as a form of aggression while others do not consider it aggressive, different societies will construct different variants of common law that classify specific cases such as "hate speech" as aggressive or not. Different versions of common law will likewise define the "implicit contract" of the shopping store, as exemplified above. This will deter arbitrary, subjective interpretations of aggression and thus give individuals greater confidence. For example, if the common law approved the use of deadly force against mere trespassers walking on the front yard, then very few people would walk outside, for fear of unintentionally stepping on someone's property and being met with deadly force. Therefore, a common law system will spring up in anarchy that forbids deadly force in cases of trespassing where no greater threat or aggression is involved, to make people less worried about unintentionally stepping on another's property. Let us assume that a contract exists between the store owner and a shopper permitting the store owner to use any amount of force, or, in a different case, that the aggressor fully knows he is behaving aggressively and clearly understands the risk he takes in committing the crime.
Thus, following this logic, the store owner can legitimately use deadly force against an individual who steals bubble gum, if the robber clearly knows that he faces serious consequences, including being shot. However, bubble gum theft too often occurs unintentionally; such a rule would likely increase fear and might make some people avoid shopping in stores altogether. It might also be the case that a shopper brought his own gum to the store, so that the store owner mistakenly confuses the shopper's bubble gum with his own. This would further instill fear, so individuals would likely avoid any action that might be misinterpreted by others. A solution to these risks is a common law specifying the property boundaries, the legitimate amounts of force in each given situation, and the agreed amounts of compensation in the event of property torts. Without common law defining, or "de-subjectifying," the values of various goods, proportional compensation and punishment are not possible. Common law is the means of objectively defining the boundaries, the cultural conventions (see the walkway example), and the standards for judging the "implicit contract," so that individuals are not subject to varying interpretations of boundaries and of the rules defining fraud. In addition, because individuals voluntarily agree on a legal system that they chose themselves, this will resolve the cultural problems and will not confuse non-English speakers. Finally, the legal system in an anarchic society will be competitive, and the individual may choose to live in a society enforcing whatever laws he wishes.

The distinction between property damage and no property damage cannot be justified. The value of property and of coercion is valued subjectively. Thus, it is impossible to tell objectively whether property damage has occurred. Some people would consider flying an airplane above some land to be coercion, and some would consider air pollution to be damage.
Suppose one throws away some stuff that "looks" useless. The owner of the product, however, has knowledge of it and thus the ability to estimate its value; he may see it as highly valuable. That would cause conflict. If an owner decides to leave a highly valued object outside, he is implicitly accepting the risk of a parachutist landing on it, a bird landing on it, or an airplane crashing into it. Thus, the best way to prevent conflict is to protect the object, such as by building a wall or moving it inside a house. Since value is subjective, it is very important to protect objects that are subjectively valued as important; otherwise, the court might value the object lower. Another effect of applying this logic is proportional punishment. If a parachutist lands on the important object and breaks it, he does not deserve to die. Instead, he should offer compensation. In any case, the owner of the object is implicitly accepting the more likely risks, such as birds and crashing airplanes. But if the owner puts the object inside a building or builds a wall, the intruder deserves a much more severe punishment for breaking in. Humans hold many biased assumptions, such as the assumption that every person owns the space directly above his land. This is false: one can fly airplanes or build houses that overlap that space. These are legitimate so long as they do not directly damage property. But due to subjectivity, there is no clear line to draw. Suppose one builds a fence around someone's house to block the occupants in so that they starve to death. This is sometimes illegitimate and sometimes legitimate. So commonly agreed laws would form in an anarchist society to prevent such things. Voluntary associations would form to prevent them.

## Monday, January 19, 2009

### Libertarian Labeling and Stereotyping

The last time I commented on Polycentric Order, I decided to do a critique of the "vulgar libertarian" concept.
When I saw the term "vulgar libertarian" showing up in many of their previous posts, I eventually decided to take a shot at stating my objection to the term. After posting my criticism, the audience reacted furiously, and I won the "troll of the year" award. I think I deserve the blame for not expressing my intentions clearly enough. Since then, in the midst of the confusion, and at the risk of losing the prized, patient subscribers of my blog, I have decided to overcome the miscommunication. The deteriorating post quality here, as you all may know, presents a significant barrier to rational understanding. From time to time, we post these little, obnoxious, "anarcho-semantic" ramblings. This time, we will attempt to "solve" the mysteries of the "left"-libertarian game.

This article contains seven sections, each devoted to a distinct motive. To start, the introductory section addresses FSK's comment, to which I never replied. Second, we will investigate the logical fallacies that David Z. and BrainPolice have put into action. Third, we will begin a rambling section discrediting the "troll." Fourth, we will examine an amazing confusion that BrainPolice, as well as other market anarchists, has attempted to "refute." Fifth, we will show that I am not paranoid in believing that some of the "left"-libertarians have played Devil's Advocate in staging a propaganda show. Sixth, we will argue that I am more "left" than even the most vocal "left"-libertarians on the net. And finally, we will analyze the confusions between different libertarian circles.

## You cannot arbitrarily interpret the non-aggression principle

FSK has posted about the state in the past few months: "Violent protests are a waste of time, because the State has superior resources. Violent protests create sympathy for the State, and violent protests are a violation of the Non-Aggression principle." I disliked FSK's statement.
I commented, claiming that no one can interpret the non-aggression principle. FSK responded, "You are a troll," without any explanation. I criticized his statement because, as said above, no one can "interpret the non-aggression principle." It is impossible to "interpret" the non-aggression principle, because doing so begs the question: to determine whether an action violates the principle, one must first define whether the action constitutes aggression. It is circular reasoning to "interpret" that a violent overthrow contradicts the non-aggression principle while, at the same time, the non-aggression principle defines a defensive overthrow of the state as non-aggressive. You can only define which actions fall into the aggressive and non-aggressive types, not interpret them. Suppose some consider libel and defamation to be aggressive acts. They will therefore interpret the non-aggression principle incorrectly, claiming that libel and slander are aggressive. To resolve that, you have to define the non-aggression principle so as to tolerate libel and defamation. Murray Rothbard defined slander as a non-aggressive act in his Ethics of Liberty. There exist multiple interpretations of whether a violent overthrow of the state constitutes an aggressive act. Some would say no, arguing that it is justifiable to use defensive violence to overthrow the criminal organization; they argue that it is their right to defend themselves against the statist aggressors, in proportion to the damage caused. Others would say yes, it contradicts the non-aggression principle, since they interpret the principle differently. In summary, FSK should have defined the non-aggression principle as forbidding an aggressive overthrow of the state, not the other way around. The latter is circular reasoning.

## I am not a "Vulgar Libertarian"

Let us say that you showed an argument to me.
If I criticize the accuracy of a few specific details in your argument, you might conclude that I reject the general idea of the argument. But I will respond that objecting to specific parts of an argument, such as particular facts and inaccuracies, does not imply that I reject its "general idea." You might, however, see my objections to the specific parts of your argument as straw men against the "general idea." We will call this behavior the "double-strawman fallacy."

I have experienced the double-strawman fallacy in many cases. At one time, David Z. posted an example of the monetization of debt, showing the value of the dollar falling. I responded to a specific factual inaccuracy: that the consumer price index (CPI) does not accurately represent the valuation of the dollar. David, however, thought that I rejected the "general idea" of his argument that the value of the currency has gone down. So he deleted my comment. Due to the double-strawman fallacy, David thought that my criticism of the CPI meant I rejected the "general idea" of the whole blog post, when I actually believed the opposite.

Let us go back to BrainPolice's blog. I responded to his usage of "vulgar libertarian" in his most recent article at the time, because I was confused by his definition of "vulgar libertarian." BrainPolice used the term in his post entitled Positive and Negative Liberty: BrainPolice said: In this context, a libertarian can consistantly [sic] advocate concepts such as mutual aid and cooperative management. This usually devolves into vulgar and thin libertarians denying the reconciliation vs. neo-artistotileans [sic] and left-libertarians defending such a reconciliation. I do not know how BrainPolice defines the term "vulgar libertarian." I critiqued his usage of the term. I did not mean to criticize his whole statement quoted above, only his usage of the term.
BrainPolice used the double-strawman fallacy when he criticized my "vulgar libertarian" statement. When I responded to BrainPolice questioning the definition of the term, he assumed that I disagreed with the "general idea" of the statement: that "mutual aid and co-ops are consistent with libertarian legal theory." Just because I questioned the "vulgar libertarian" concept does not mean that I oppose his overall statement. In fact, I recognize the right of anyone to mutually aid one another and to form co-ops, and I even believe that co-ops may sometimes benefit the workers.

## Am I a "Troll"?

I do not understand trolls. I do not understand how some individuals will purposefully make "false claims" to "provoke" responses. I do not understand why a "troll" would waste his time writing posts to provoke responses. Usually, most of the people who are identified as "trolls" are not actually trolls. They did not intend to cause conflict, nor did they intend to post false information. Most of the time, those who accuse others of trolling disagree with the "troll's" opinions so strongly that the troll seems to be intentionally making false claims. Even if the "troll" is just sharing his honest opinion, when others find his opinions so strongly "fringe," they fall under the illusion that the "troll" is intentionally making "fringe" statements, simply because they disagree with him so much. This occurs frequently. Most of the time, I see people identified as "trolls" when their opinions are merely "fringe." The trolls are just sharing an opinion with which the majority of users disagree. If there is no opposition party, nor any defenders of the fringe opinions held by the "troll," even a small minority can convince everyone on the message board that he is a troll. Neither do I understand how some get "offended" by messages posted by "trolls" or by messages that seem to contain errors.
Why wouldn't an individual simply ignore the messages posted by "trolls" if he does not want to be offended? I do not understand how mere text displayed on a computer monitor can offend people. Additionally, I do not understand how the "troll" enjoys "mischievously" "forcing" others to respond to his message. If the "troll" thought he was actually being "mischievous," wouldn't he feel guilty about "abusing" everyone else? Wouldn't his guilt discourage him from "trolling" on message boards anymore? An objection to the above is that he is simply a "psychopath" who feels no empathy or guilt for his actions, and so can mischievously "provoke" responses continuously. This seems a reasonable objection, since in reality some people do behave like that. But in my experience, in the vast majority of cases, the person accused of being a "troll" is wrongfully convicted. The "troll" is just sharing opinions that happen to be "fringe" to the majority of users, with no opposition party or defenders of his "fringe" opinions present. The "troll" who is honestly sharing his opinions will therefore be wrongfully convicted simply because the majority disagrees with him. There are objections to this claim as well. Some will say that they can tell whether a troll is "indeed" behaving "mischievously" or not. However, as we discussed above, many individuals fall under the illusion that the troll is being "mischievous" merely because they strongly disagree with his opinions. Often, people mistakenly see a person as behaving "mischievously" or as intentionally "provocative" simply when they disagree with his opinions. We call this cognitive bias the "dissident-troll bias." Due to the "collective reinforcement" of everyone on the message board, the one posting "fringe opinions" is more likely to be identified as a "troll," simply because more people disagree with those opinions.
My point is that "trolls" are wrongfully convicted in the majority of cases. Anyway, let us go back to the topic. I cannot even discern why anyone would spread "fringe" ideas around the web in the first place. It does not work, others will ignore the messages, and it wastes the person's time.

## BrainPolice on "Economic Rent"

When I criticized the market anarchists' assumption that "economic rent" will fall to zero, BrainPolice replied with a red herring fallacy. Red herring fallacies are common when debating with words that have multiple definitions, and BrainPolice fell into exactly that trap. He did not even know what "economic rent" means. The terms "economic rent" and "rent" denote completely different concepts. BrainPolice confused "economic rent" with "rent," so he interpreted my statement as "market anarchists oppose rent." He proceeded, as usual, to refute that claim. BrainPolice thought that I was criticizing Benjamin Tucker's prediction that rent (in the non-economic sense) will fall to zero. However, I was actually writing about economic rent, not rent. If you look up "economic rent" in a dictionary, you will find a meaning similar to the "return on investment" from a capitalist's deferred time preference. Indeed, Benjamin Tucker actually rejected economic rent. Market anarchists also seem to hold an incorrect view of economic rent: they predict that non-entrepreneurial income earned by firms will fall to zero. While market anarchists correctly reject the claim that rent (in the non-economic sense) will fall to zero, they seem confused about economic rent, as the mutualists were. Homesteading abandoned property according to the Lockean theory is logical, but many self-identified market anarchists support kinds of "homesteading" that might cause shortages. For example, they may oppose rent for some kinds of land and apartment buildings.
Their belief is based on historical examples of feudal societies stealing in the guise of "rent." Feudal landlords did not compete, so they could collude in setting rents. But with today's many landowners, it is almost impossible to form a cartel to set high rent prices. In non-feudal societies, however, land should not be "homesteadable" by those paying rent on it. If a society prohibits rent, then owners will not offer the service in the first place. For example, if apartment rent is prohibited by "homesteading," then investors will have no incentive to build apartments for people in the first place. "Homesteading" an apartment is a form of rent control, so it would cause shortages. If people are free to "homestead" arbitrary land, numerous questions arise. How much land may they homestead? If they mix their labor with large pieces of land, should that be allowed? What is the maximum amount of land they may possess? They could "cheat" by "homesteading" large amounts of land using only low-quality labor, until no land is available. Land, like other resources, is a scarce natural resource and is therefore subject to constraints. To put land to its best use, it should not be "stolen"; marginal utility must be allowed to operate.

Murray Rothbard showed that the capital value of a good equals its annual rental price divided by the natural interest rate. For example, if one rents a good priced at $1,000 per year and the natural interest rate is 5%, then the capital value of the good is $1,000/0.05 = $20,000. This relationship is an effect of entrepreneurship. If an entrepreneur sees that he can build an apartment for $20,000 and later collect $1,000 per year, he might do it. But his decision depends on the alternatives: it is equivalent to lending $20,000 to a bank and receiving $1,000 one year later as interest (at a rate of 5%).
If he estimates that his interest from investing in apartments is greater than $1,000, he will invest in apartments; if it is the other way around, he will invest in a bank. Going back to the apartment example, would a person actually pay $20,000 rather than rent at $1,000 per year? They are equal: the person can choose to buy at $20,000 and sell later, or pay $1,000 per year. If that person is poor, he might instead borrow $20,000 from a bank at 5% interest to buy the apartment. He then has to pay back the $20,000 plus the 5% interest, which totals $21,000. Thus, borrowing from a bank to buy the apartment and renting the apartment at $1,000 per year cost the same. Ultimately, abolishing rent would leave the poor person without an apartment, because he may not want to take the risk of borrowing that much, or the bank may not approve the loan. It thus appears that the self-identified market anarchists do not understand the relationship between interest and rent.

Another problem with "homesteading" land is the perverse incentives it creates. If an entrepreneur buys unused land to build a football field, should he have to tolerate others taking it? If the field is unoccupied, others could pollute it. What if he dumps garbage on the unused land just to prevent theft? Such a rule would encourage malinvestment, such as polluting land to prevent theft or building extraneous houses on it to prevent intrusion. David Z. had refuted the market anarchist misconception that non-entrepreneurial corporate income is "unnatural." In my blog comment to BrainPolice, I was just trying to make criticisms of the market anarchists' objection to economic rent similar to those David Z. made. It was my fault that I did not define "economic rent" at the time of my writing. I should have defined it, since many individuals do not know what "economic rent" is. In any case, however, I correctly claimed that the market anarchists object to economic rent.
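The rent-and-interest arithmetic above is simple enough to sanity-check with a short script. This is only a sketch of the example's figures ($1,000 annual rent, a 5% natural interest rate); the function name `capital_value` is my own choice, not anyone's terminology. Exact fractions avoid floating-point rounding:

```python
from fractions import Fraction

# The capitalization relationship used in the example above:
#   capital value of a good = annual rent / natural interest rate.

def capital_value(annual_rent, interest_rate):
    """Capitalized price of a good given its annual rental income."""
    return annual_rent / interest_rate

annual_rent = Fraction(1000)      # dollars per year (figure from the example)
interest_rate = Fraction(5, 100)  # 5% natural interest rate (from the example)

price = capital_value(annual_rent, interest_rate)
print(price)  # 20000

# Renting and borrowing-to-buy cost the same per year:
# one year's interest on the purchase price equals one year's rent.
assert price * interest_rate == annual_rent

# Borrowing $20,000 at 5% for one year: repay principal plus interest.
total_repaid = price * (1 + interest_rate)
print(total_repaid)  # 21000
```

The same equivalence is what the text argues: the $1,000 yearly rent and the $1,000 yearly interest on a $20,000 loan are the same cost.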
## I am not Paranoid

I have a long-held suspicion that the "left"-libertarians are trying to redefine capitalism and anarcho-capitalism to mean what they want them to mean. When I accused BrainPolice of attempting this, he said that I was wrong. Anarcho-Mercantilist said: I see BrainPolice and Brad Spangler attempting to redefine anarcho-capitalism as a vulgar, conservative, political and feudalistic plutocracy. BrainPolice tried to deceptively redefine anarcho-capitalism by writing an article that emotionally associates anarcho-capitalists with conservatives. That trademark tactic did not work on me. I still consider myself an anarcho-capitalist. BrainPolice said: Nonesense. [sic] Your continual attempt to portray me and others as engaging in a propaganda campaign when we are merely logically extending libertarian principles is a joke. If the "left"-libertarians aren't truly running a propaganda campaign, then why do they claim that we should normatively redefine capitalism and anarcho-capitalism as a fascist, plutocratic, and paternalistic theology? Why should we redefine the terms when we can use them as-is, according to their dictionary definitions? Why do the "left"-libertarians go through all of this? (I do not mean all "left"-libertarians): ...and so on... All of the links above lead to articles written by people who want to normatively change the definitions of capitalism, anarcho-capitalism, and the other terms they dislike. They give no explanation of why we should adopt these new definitions. The best they can do is appeal to some historical and ancient definitions of the terms. The historical meanings, however, have since changed. So it is now proper to use these terms according to their dictionary definitions. If the "left"-libertarians say that we should normatively redefine the terms, then they are handling them as trademarks.
They are doing exactly what trademark owners do: they want to redefine and "purify" the definitions of these terms into what they want them to mean. All trademark redefinitions are, indeed, propaganda campaigns. So it is illogical to argue that redefining capitalism isn't a propaganda campaign while doing the same with trademarks is. I liked B.K. Marcus' observation that capitalism is historically defined as the "private ownership of the means of production." Most dictionaries, and even Karl Marx, define capitalism that way. The anti-capitalists oppose capitalism simply because they oppose the private ownership of the means of production. Iain McKay, the main author of An Anarchist FAQ, defined capitalism the usual way, as the "private ownership of the means of production." McKay posted on Usenet that private property will result in huge disparities of wealth. It is also strange that the "left"-libertarians like to use the term "free market" while at the same time objecting to "capitalism." The term "free market" has roughly the same meaning and connotations as "capitalism." I do not understand why they treat these two terms so differently. This suggests that most of the anti-capitalists simply oppose the private ownership of the means of production.

## I am more on the "left" than the "left"-libertarians

If I used BrainPolice's definition of "vulgar libertarian," then, paradoxically, I am much less "vulgar" than most of the self-identified "left"-libertarians. For example, IMHO, David Z. has a more "vulgar" personality than I do. David Z., a "left"-libertarian, acted like a "vulgar libertarian" when he unconsciously defended the U.S. health care system by denying that the U.S. already has socialized health care. Even though he opposes the U.S. health care system, he never mentioned the corporatist privileges of the U.S. system. He denied that the U.S. already has socialized health care in his blog post.
I posted a critique of the "vulgar libertarian" concept, as promoted by the "left"-libertarians, on another of my blogs, Proprietary Anarchy. It seems the "vulgar libertarian" concept has no meaning to it, since the "left"-libertarians call me a "vulgar" libertarian when they themselves have more "vulgar" tendencies. I was therefore curious about how the "left"-libertarians define "vulgar libertarian," so I asked BrainPolice to define the term. But by his definition, I am, indeed, less "vulgar" than most of the "left"-libertarians. IMHO, I am also less "vulgar" than John Petrie. John once "defended" the recession as a "natural" "correction" in a blog post. But the recession isn't any "correction" at all. The recession would not occur if the state were suddenly abolished. This "recession" is not a "correction" of the economy; it is caused by continuous taxation and regulation of the economy. If the state suddenly vanished, this recession would turn into the largest economic euphoria in history. But the criminal organization currently hampers the potential boom into a recession. So there isn't anything "natural" about it. I have already discussed this in my 15 Mistakes by Austrian Economists post. I have also refuted John's prediction that the collapse of the U.S. auto industry is "natural." I commented that the auto industry would expand in a free market, because there would be no taxation and regulations hampering its expansion. Even if the state currently privileges the auto industry, the industry would expand even more if the state were abolished. This is because in a free market there would be no criminal levels of theft, nor barriers to entry for potential automobile manufacturers. Because individuals would also be at least five times richer in a free market, they would buy more automobiles, which would expand the industry.
(By 'auto industry', I mean the auto industry in general, not the Big Three cronies.) This is one reason why I object to identifying myself as a "left"-libertarian. The "left"-libertarians join alliances with the anarcho-collectivists. They arrogantly claim that I am more "vulgar" than they are, when, as shown above, it is entirely the opposite. If you want to see how the terms "left" and "right" should be defined, look at the article called Normative Semantics of Left and Right. That article actually defends the "left," if you use the term "left" correctly. But I still do not identify with the "left," because I reject the whole political spectrum. Because no formal definition of leftism and rightism exists, people often criticize the other wing using equivocation fallacies. Left-wingers criticize the right wing's cultural-conservative stereotype, while right-wingers criticize the left's economic stereotypes. They do not even argue about the same thing. Often, the left and right will arbitrarily switch the definitions of the terms between these stereotypes, resulting in a confusing, convoluted argument with no concrete definitions of left and right. The real way to avoid the equivocation fallacy, as suggested by Overcoming Bias, is to taboo your words. For example, instead of saying "I oppose capitalism," we can say "I oppose the private ownership of the means of production." Instead of uttering "I oppose 'left'-libertarianism," I can say "I oppose allying with the libertarian socialists" or "I support the legal possibility of corporations existing in an anarchistic legal system."
Kevin Carson misinterpreted my position on the legality of corporations in a free market: Carlton Hobbs recently challenged the tendency of mainstream libertarians, free marketers and anarcho-capitalists to favor the capitalist corporation as the primary model of ownership and economic activity, and to assume that any future free market society will be organized on the pattern of corporate capitalism. I never claimed that the corporate legal entity would function as the predominant model in anarchy. I do, however, defend the legal possibility of such a legal entity existing in the free market. In fact, I envision an anarcho-capitalist society as radically different from the current fascist state. I predict that self-employment and small enterprises will function as the predominant business models in anarchy. The median worker will earn five times more wealth, especially without interventions such as the apprenticeship system, regulations, legal barriers, and extortion. Many workers will become so much wealthier that they can afford to invest their own capital. In a free society, the prevalence of authoritarian management will shrink to virtual nonexistence. Workers will have greater autonomy and a working environment similar to that of independent contractors. I speculate that interest rates will fall and innovation will greatly increase. Semantic disagreements play a role in every branch of science, not just in political ideologies. The anarcho-capitalists and mutualists would have more in common if we resolved many of the semantic barriers. This does not, however, mean that anarcho-capitalists should defend, support, or even ally with mutualists. Not even close. My opinion is that semantics plays at least some role. Some "left"-libertarians get "offended" by the word "capitalism" because some mutualists, who ally with Rothbardian "left"-libertarians, oppose the private ownership of the means of production.
Brad Spangler did indeed stage a propaganda show when he advocated "General Semantics" to make definitions more appealing to those who oppose the private ownership of the means of production. Anyway, I hope this is the last time I make a post on semantics. This is getting tiresome! Let us refocus our energies on countering the real enemy: the state. Forget about this conflict, and let's start our agorist revolution!

## Friday, January 16, 2009

### The Messy Definitions of "Left-Wing" and "Right-Wing"

On the web, libertarians frequently conflict with each other over the definitions of a few terms. We observe that these conflicts start from the mistaken and confusing definitions of the terms left-wing and right-wing. In an attempt to resolve this conflict, we normatively propose more consistent definitions of these two terms. While no definition of these terms accurately reflects the historical usage, our normative approach attempts to solve this problem by devising an original definition, along with a brief historical analysis of how the definitions came to their present meanings. Libertarians and agorists mistakenly label themselves as revolutionary or leftist. This leads to great problems and frequently causes confusion of the terms left and right. Unlike our previous attempts, we will find non-empirical definitions of the terms. This will provide a consistent treatment of them. In an attempt to clarify the definitions, we wrote a series of arguments formulating consistent definitions of radical, revolutionary, left-wing, and right-wing. The first section starts by distinguishing between the terms radical and revolutionary; in it, we will refute the common misconception that these two terms are synonyms. In the second section, we will devise simpler and more consistent definitions of the terms left and right, building on the conclusions of the first section.
Additionally, we will analyze the common mistakes and confusions of the terms, especially pertaining to the minarchists and the agorists. In concluding the argument, we will propose a fourth section, dealing with the historical mix-up and the origins of the confusions. Lastly, we will provide an appendix exemplifying the application of the newly defined consistent terms, while refuting the popular misconceptions of the meanings of these terms.

## The Terms Radical and Revolutionary

Popular opinion often conflates the terms radical and revolutionary. While, on the surface, these terms appear similar, we will argue that a significant distinction exists. We notice that the term left-wing shares ideas with both radical and revolutionary. In order to isolate the distinguishing attributes of each of these three terms, we must find concise definitions for each. We will use the Merriam-Webster Dictionary to define the terms right-wing and left-wing.(webster-left) In contrast to other dictionaries, the Merriam-Webster Dictionary appears more historically accurate. We justify this from its accurate entry on the definition of mercantilism.(webster-mercantilism) According to the commonly-accepted definitions, right-wing means the status quo or the status quo ante. Left-wing means radical. The terms status quo and status quo ante appear as opposites of revolutionary and radical. However, we find this false. To demonstrate this, we will argue that there can exist revolutionaries who support the status quo ante. We will use Burkean conservatism, the traditional form of conservatism, to prove this. Those who opposed the American Revolution and the French Revolution, labeled Burkean conservatives, oppose radical change. Burkean conservatism adores experience as one of the primary tenets of its reformist philosophy. It claims that experience, or historical implementation, demonstrates the feasibility of an ideology.
Therefore, Burkean conservatism only accepts ideologies successfully implemented in real life. A past or current implementation of an ideology demonstrates its feasibility. Burkean conservatives advocate a certain status quo or status quo ante of the ideology implemented. Thus, we can consistently define right-wing as the status quo or the status quo ante. Karl Hess, in an attempt to define the terms left and right, labeled Joseph Stalin as right-wing. We will, below, refute that claim. Authoritarian socialists do not want to revert to the status quo or the status quo ante. The ideologues claim their ideology as radical because it continuously tries to invent innovative solutions to implement a workable version of communism. As we showed theory as much more important than implementation,(TheoryAndImplementation) we should view intent as much more important than result. We could all agree that the result of communism pertains to the status quo and the status quo ante, since all implementations of communism result in the same old failure. But the intent of communism, as an ideology, pertains to its radicalness and opposes the status quo and the status quo ante. Communists want to implement new workable versions of communism that resemble none of the historical attempts at communism. Therefore, we should define communism as opposing the status quo and the status quo ante. We have argued that the theory or intent, not the implementation or result, determines the fundamental attributes of any philosophy. Because of the radical intent of communism, we must classify communism as left-wing, in the above sense. Right-wing means the status quo or the status quo ante. Left-wing means radical ideas. Therefore, we should not claim left-wing authoritarianism, in the radical authoritarianism sense, as an oxymoron. Both of the terms, even formally defined, contain ambiguity. The left-wing originally meant the opposition to the status quo.
The left-wing includes the counter-revolutionary authoritarians and the radical revolutionary authoritarians. However, labeling an ideology as radical does not necessarily imply that the ideology counts as revolutionary. Two types of radicalism exist: the revolutionary type and the reformist type. Radical reformists tend to hold radical intentions and principles, but do not necessarily support drastic revolution in practice. Contrastingly, radical revolutionaries tend to support radical intentions and drastic revolution in practice. These two terms, however, tend to get conflated, since the terms radical and revolutionary often get associated. Those who have radical beliefs tend to support revolutionary means to achieve them, and those who support revolution tend to hold a radical opposition to the status quo. But we should emphasize that we should not view these two terms, radical and revolutionary, as necessarily correlated. Some radicals oppose revolution, and some revolutionaries do not hold radical beliefs. The political anarcho-capitalists and the democratic socialists hold radical views but oppose revolution. Conversely, the counter-revolutionary reactionaries and the theocrats support revolution but do not hold radical views. By the above conclusions, we have found a more consistent and concise definition of radical and revolutionary.

• The term radical means philosophies whose intent supports ideologies not implemented in the past or in the current situation. It therefore opposes the status quo or the status quo ante, the defining ideas of right-wing.
• The term revolutionary means those who favor direct and immediate methods in implementing the ideology in practice, as opposed to politics or reformism. The term revolutionary does not necessarily correlate with radicalness or left-wing ideology.
• The term left-wing, if defined consistently, merely means a synonym for radical.
• The term right-wing, if defined consistently, merely means a synonym for the status quo or the status quo ante.

## The Terms Left-Wing and Right-Wing

We can formulate left-wing and right-wing versions of libertarianism. The right-wing libertarian supports the status quo ante, or the old model of classical liberalism implemented after the American Revolution. The left-wing libertarian opposes the status quo and the status quo ante, and therefore opposes the American Revolution and all of the quasi-libertarian societies implemented in the past. The left-wing libertarian supports radical notions of libertarianism never implemented in the past, and denies that any currently-existing society counts as libertarian. The left-wing libertarians, for example, deny Medieval Iceland or Somalia as libertarian societies. We should emphasize that the term left-wing, a synonym for radical, does not necessarily imply revolutionary. Hence, the left-wing libertarian, by definition, may support reformism. Coincidentally, some of the self-identified "left"-libertarians, such as Kevin A. Carson, Roderick T. Long, and Charles Johnson, actually do support reformism.(TheLeftLibertarianStrategy) We should also discredit the term radical right or right-wing radical as an oxymoron. Right-wing means a philosophy based on the status quo or the status quo ante. The term left-wing means the opposite. Left-wing philosophies do not build on the status quo or the status quo ante. Left-wing philosophies build on innovative, radical ideas not implemented in history. Therefore, the term radical right joins two words meaning opposites. Most of the self-identified "agorists" actually do not, themselves, practice agorism. We can label them as theoretical agorists. Theoretical agorists include David Z., FSK, and others in the "left"-libertarian spectrum.
Theoretical agorists frequently impose guilt trips on other theoretical agorists for not practicing agorism, even though they themselves do not practice agorism in the first place. While most self-identified "agorists" do not practice agorism, they do have the same radical beliefs as the practical agorists. Hence, we can label both the theoretical and the practical agorists as radical libertarians. However, we cannot label the theoretical agorists as revolutionary libertarians. The theoretical agorists do not practice direct action, but impose guilt trips on others for not practicing agorism. The practical agorists, in contrast, do practice agorism. So we can only label the practical agorists, not the theoretical agorists, as revolutionary libertarians. In summary, we should label theoretical agorists as non-revolutionary libertarians and practical agorists as revolutionary libertarians. But we can label both the theoretical and practical types as radical libertarians.

## "Left"-Libertarianism and Politics

Historically, the agorist movement classified anyone who supports reformism or political methods as right-wing. The "left"-libertarians often claim that radicals, by definition, must oppose politics. We will discredit this. Some libertarians may have radical intentions and may strongly oppose the status quo, but resort to using political methods and reformism anyway. They believe that reformism and politics can bring about this process, without revolution. As we often conclude that using reformism and politics does not change anything other than the status quo, the result or implementation appears to coincide with the right-wing, a term meaning the status quo. However, the determination of a philosophy depends on its intentions more than its implementation. A "left"-libertarian may strongly oppose the status quo but resort to political methods and reformism.
Because the "left"-libertarians intended to exert radicalness, radical political libertarianism falls in the left-wing. We should note that political libertarianism does not necessarily oppose the left, so there may exist political libertarians that may fall in the right. Similarly, as discussed above, some right-libertarians support a non-political counter-revolution to go back two hundred years ago. But these libertarians, even if they oppose politics, do not fall in the left, since they oppose radicalism. ## Problems with Secularism and Traditionalism A secularist can have traditional beliefs, since traditional values does not require religion. But the Merriam-Webster Dictionary defines left-wing as supporting secularism.(webster-secular) Therefore, in order to classify a person as having left-wing beliefs, he or she must hold radical (but not necessary revolutionary) ideas, and also support secular values. As we see secularism, by definition, as compatible to traditionalism, the secular left, by definition, does not necessarily oppose traditionalism. Similarly, a traditionalist attempts to believe in traditional values, such as morals, ethics, and religion. The Merriam-Webster Dictionary defines right-wing, in addition to supporting the status quo or the status quo ante, as espousing traditionalism.(webster-traditional) But, as showed above, traditionalism does not necessitate religion. Concepts generally associated with traditionalism include hierarchy, authority, patriotism, nationalism, paternalism(paternalism-def) and religion. So according to the above argument, both the left and the right both may hold traditional values. The left and the right only differ by religion, not tradition. ## Leftism, socialism and economic interventionism Prior to Karl Marx, both of the minarchists and the economic planners identify themselves as socialists. However, as the Marxist movement enlarged, the definitions changed. 
Because the Marxists had always used socialism to refer to economic planning, socialism got associated with economic planning, and the denotation of socialism then changed to represent only economic planning. Marxists oppose religion and support economic planning. Therefore, the term left-wing, after the Marxist movement, had come to mean something resembling Marxism, combining opposition to religion with support for economic planning. Contemporary dictionaries define left-wing as a secularist, economic interventionist ideology, possibly due to the Marxists' identification with the left. However, we should simply reject the terms left and right as package-deals. The left does not really support revolution, and many of the "left"-libertarians, as said above, support reformism. As argued above, the left also does not oppose traditionalism. The population has redefined left-wing as supporting socialism and economic interventionism, mostly because of the Marxist movement. This turns the terms left and right into watered-down propagandistic euphemisms.

## Appendix: Authoritarianism Example

We will briefly use the terms radical and revolutionary to demonstrate their usage. We will use authoritarianism as the example. Two branches of authoritarianism:

• Revolutionary authoritarianism
• Anti-revolutionary or status quo authoritarianism

### Revolutionary Authoritarianism

Two branches of revolutionary authoritarianism, both of which support revolution:

• Counter-revolutionary or status quo ante authoritarianism
• Radical (innovative) revolutionary authoritarianism

### Anti-Revolutionary Authoritarianism

Examples (from an anarcho-capitalist POV):

• The U.S. Republican Party
• The U.S. Democratic Party

### Counter-Revolutionary Authoritarianism

Some authoritarians propose the status quo ante, by revolution. We also call this counter-revolutionary authoritarianism or status quo ante authoritarianism.
Examples (from an anti-minarchist anarcho-capitalist POV):

• The counter-revolution reactionaries
• Theocracy
• Monarchism
• Paleoconservatism
• Ron Paul conservatism

### Radical Revolutionary Authoritarianism

Some authoritarians do not propose to go back to any time period. They wish to implement an innovative version of authoritarianism that did not exist in the past. Examples:

• Jacobinism
• Stalinism
• Maoism
• Trotskyism
• Nazism
• Fascism

## The Historical Meanings of "Left-Wing" and "Right-Wing"

• The original left-wing consisted of those who sat on the left side of the parliament, and the right-wing of those who sat on the right side. Because there is only one dimension, no one can be arranged optimally.
• The original left-wing consisted of the Democratic-Republicans, who opposed social, religious and military intervention and opposed spending. They supported states' rights.
• The original right-wing consisted of the Whigs, who were religious and supported social, religious and military intervention and supported spending. They opposed states' rights.
• President Abraham Lincoln defined the right-wing as supporting social and religious interventionism.
• Since the election of Woodrow Wilson, the left-wing has supported social intervention such as prohibition.
• Since the elections of Woodrow Wilson and Warren G. Harding, the left-wing has supported military intervention and the right-wing has opposed military intervention.
• The elections of Herbert Hoover and FDR defined the left-wing as the pro-spending party and the right-wing as the anti-spending party.
• The Barry Goldwater campaign and the Civil Rights Act in the 1960s made the right-wing religious, as the Southern Democrats distrusted the Democratic Party for supporting the act. The right-wing got redefined as supporting social interventionism.
• The Ronald Reagan campaign defined the left-wing as opposing spending and the right-wing as supporting spending, all at the same amount of economic interventionism.
• Ronald Reagan defined the left as supporting environmentalism and the right as opposing environmentalism.
• The two Iraq wars defined the left-wing as opposing military intervention and the right-wing as military interventionist. The energy issue has led both the left-wing and the right-wing to support environmentalism.

These events caused the vague notions of the left-wing and the right-wing as we use them today. How would the myriad political ideologies fit in just a one-dimensional spectrum? The political spectrum did not get defined that way; the definition derives inductively from the empirical arrangement of the French parliament. Groups of individuals sat next to each other, while constrained by the one-dimensional parliament, in a way that reduced tension between their neighbors. Because of the spatial closeness of the neighbors, the left-right spectrum forces one to join alliances with neighbors. The left-right spectrum arbitrarily forces a diverse array of ideologies to jam into a one-dimensional spectrum. Many left-libertarians, particularly the ones who hold membership in the Alliance of the Libertarian Left, would cite Karl Hess' or the left-libertarian's definition. However, this does not provide a solution, because his inductive definition does not reflect the dynamic political spectrum. Words such as "left-libertarian" and "right-libertarian" seem vague.

## Ramifications

No political "spectrum" can measure the "similarity" of one's ideology compared to others. The two-dimensional political spectrum does not improve on the one-dimensional one. To give an accurate representation of one's political views, it would require thousands of dimensions. But with thousands of dimensions, it would become impossible to compare your results with others'. You cannot simply "match" your thousands of dimensions with another's thousands. Doing so would not give accurate results, since matching more dimensions does not necessarily mean that you hold closer views to him.
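The dimension-matching objection can be made concrete with a small sketch. The following Python example (illustrative only; the respondents, issues, and weights are invented for this sketch, not taken from the original post) shows two agreement measures disagreeing: counting raw matched dimensions ranks one respondent closer, while weighting the dimensions by how much each issue matters ranks the other closer.

```python
# Illustrative sketch: raw dimension-matching vs. issue-weighted agreement.
# All respondents, dimensions, and weights here are invented for the example.

def raw_matches(a, b):
    """Count the dimensions on which two respondents give the same answer."""
    return sum(1 for x, y in zip(a, b) if x == y)

def weighted_agreement(a, b, weights):
    """Sum the weights of the dimensions on which the respondents agree."""
    return sum(w for x, y, w in zip(a, b, weights) if x == y)

# Ten yes/no "dimensions"; the first issue is weighted far above the rest.
weights = [10] + [1] * 9

me    = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
ally  = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]   # agrees only on the heavy issue
rival = [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]   # agrees on every light issue

print(raw_matches(me, ally), raw_matches(me, rival))            # 1 9
print(weighted_agreement(me, ally, weights),
      weighted_agreement(me, rival, weights))                   # 10 9
```

By raw count the rival looks nine times "closer," yet under the invented weights the ally agrees on more of what matters — which is the point: a match count alone carries no information about which dimensions matter.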
The "dimensions" of the political spectrum are also picked arbitrarily, so it will give biased and "unmatchable" results. A solution that we propose is neither left nor right. It is a simpler axis, one that Kel Weaver has proposed. It is simply anarchist or non-anarchist. No definition of left-wing and right-wing that encompasses all its usages exist. Criticisms of leftism or rightism pertains to a specific stereotype listed below. "Leftie" stereotypes anti-capitalism anti-corporation anti-corporatism anti-propertarianism anti-racism anti-religion blank slate class conflict collectivism cultural determinism dialecticism economic interventionism egalitarianism environmentalism equality forced integration group rights majority rule moral relativism multiculturalism nurture over nature pacifism pluralism political correctness polycentrism radical feminism religious intolerance reverse discrimination secularism social determinism tabula rasa tolerance welfare state Right-Wing stereotypes anti-democracy authoritarianism capitalism colonialism conservatism cultural conservatism elitism family values fascism feudalism fundamentalism genetic determinism inegalitarianism inequality intolerance landlordism militarism monarchism monocentrism monoculturalism nationalism nature over nurture neo-Lockean theory of property paternalism plutocracy pro-corporation pro-corporatism protectionism psychological nativism religion social hierarchy social order status quo ante status quo traditionalism traditionalist authoritarianism uniformity warfare state Because no formal definition of leftism and rightism exist, people often criticize the other wing using the equivocation fallacies. Left-wingers criticize the right-wing's cultural conservative stereotype when the right-wingers criticize the left's economic stereotypes. They do not even argue about the same thing. 
Often, the left and right will randomly switch the definitions of the terms to mean one of the above stereotypes, resulting in a confusing, convoluted argument with no concrete definitions of the terms left and right. I responded to the series of comments at Polycentric Order. Cork said:

> Does it make any sense for left-libertarians to bash Hoppe but praise Proudhon, when the latter is far more culturally conservative?

Brainpolice used the term "left" in the sense of cooperative property rights, not to mean culturally liberal. He has posted a video defining "left" and "right." Cork replies to Brainpolice without understanding what he had meant. Cork said:

> I don't want to give the impression that Proudhon is all bad, just because he had some nutty beliefs on cultural and political issues. For a proto-socialist, he was decent and relatively libertarian (well...at least in theory). He has some excellent quotes.

Brainpolice did not define "left-wing" and "right-wing" to distinguish cultural differences; he used them to distinguish economic differences, and in this case, property rights. He used "left-wing" to mean those supporting "loose" property rights and "right-wing" to mean those opposing the "loose" property rights held by mutualists. Brainpolice said:

> Furthermore, how can we possibly forget Gary North, who openly advocated a Christian theocracy as an anarcho-capitalist model? So spare me in your attempt to act like the libertarian right has some kind of highground.

Brainpolice has, once again, used the equivocation fallacy. He equivocated the term "right-wing" to represent theocracy, when he previously used the same term to represent those who "oppose" mutualist property rights. Thus, I do not identify myself as either "left" or "right," because the usage of these terms often gives rise to equivocation fallacies. Scott said:

> Cork is clearly trolling.

Affixing the adverb "clearly" to this sentence does not necessarily affirm the accuracy of the predicate.
Many individuals often abuse the word "clearly." For example, statists will say that "the tax evader has clearly harmed people," "we clearly need a State," or "the drug dealer has clearly committed a 'crime'." I oppose labeling, using, or calling yourself, or any other person, "left," "right," "anti-left" or "anti-right." This will instigate confusion. For example, by "anti-left," do you mean that you oppose their economic stance or that you oppose their cultural liberalism? Well, the self-described "leftists" will interpret your statement as opposing both. Therefore, they will call you a "right-libertarian," and a culturally conservative bigot. The discussion will go on and on and move in a convoluted circle. As shown, the usage of the terms "left" and "right" leads to equivocation fallacies. Therefore, we propose eliminating each and every instance of "left" and "right." As with E-Prime, a term such as "to be" can easily get misused. Let us look at the similar case in E-Prime:

> In principle, if not in practice, we agree that in some instances one could use forms of "to be" (in its auxiliary, existence, and location modes) without causing appreciable "semantic damage". Even so, most English teachers would agree that most of us overuse and misuse the verb, and that even a 75% reduction in its use would improve our writing and speaking skills. But why go to the extreme of trying to eliminate it totally? Because for better or worse, it looks like only an all-or-nothing approach to this problem works successfully. De Morgan, Santayana, Korzybski, and many general semanticists have warned against misuses of the verb like the "is" of identity, yet they continued to misuse it themselves!

So even if we attempt to use "left" carefully, it can easily get misused.
For example, we identified four different definitions of "left-libertarianism":

• Charles Johnson defines "left-libertarianism" as "thick" libertarianism muddled with feminism and other cultural values, as supposedly shared by the Democratic Party. However, Johnson committed the "hasty generalization" fallacy in doing so.
• Brainpolice defines "left"-libertarianism as cultural liberalism with anarchistic economics. However, on a YouTube video, he mysteriously redefines "left" and "right" as an economic spectrum.
• FSK defines a "left"-libertarian as an agorist.
• Brad Spangler defines left-libertarianism as a revolutionary type of libertarianism while opposing "vulgar libertarianism." This implies that left-libertarians must advocate mutualism or neo-mutualism.

We defined "left"-libertarianism a long time ago, and many replied that we did not represent all "left-libertarians." I do not represent all of them because we defined "left-libertarianism" differently. All statists, minarchists included, propose aggression against others. Statists only tinker with the Leviathan and parliamentarianistically waste their energies on political compromise. The terms left-wing and right-wing remain highly vague, and no single definition exists. They are highly contextual, and can have completely different meanings hinging on the speaker's ideology and alliances.

## Bibliography

\begin{thebibliography}{9}
\bibitem{webster-mercantilism} Anarcho-Mercantilism over Market Anarchism. 2008. \emph{Anarcho-Mercantilist.} \url{http://anarcho-mercantilist.blogspot.com/2008/10/anarcho-mercantilism-over-market.html}
\bibitem{webster-radical} Radical. 2009. \emph{Merriam-Webster Online.} \url{http://www.merriam-webster.com/dictionary/radical}
\bibitem{webster-revolution} Revolution. 2009. \emph{Merriam-Webster Online.} \url{http://www.merriam-webster.com/dictionary/revolution}
\bibitem{webster-left} Left. 2009.
\emph{Merriam-Webster Online.} \url{http://www.merriam-webster.com/dictionary/left}
\bibitem{webster-right} Right. 2009. \emph{Merriam-Webster Online.} \url{http://www.merriam-webster.com/dictionary/right}
\bibitem{webster-secular} Secular. 2009. \emph{Merriam-Webster Online.} \url{http://www.merriam-webster.com/dictionary/secular}
\bibitem{webster-traditional} Traditional. 2009. \emph{Merriam-Webster Online.} \url{http://www.merriam-webster.com/dictionary/traditional}
\bibitem{TheoryAndImplementation} We will discuss this in another article.
\bibitem{TheLeftLibertarianStrategy} The Left-Libertarian Strategy. 2008. \emph{Anarcho-Mercantilist.} \url{http://anarcho-mercantilist.blogspot.com/2008/12/left-libertarian-strategy.html}
\bibitem{paternalism-def} Multiple, confusing definitions of \emph{paternalism} exist. In this case we mean "to aggress others for their own good."
\end{thebibliography}

On FSK's blog:

> If you're on your own blog, ignoring or calling out trolls is usually sufficient. For example, "[anarcho-mercantilist]" (see above) describes himself as "anti left-libertarian". If that's your philosophy, then why are you trolling here? If you believe that agorists are full of ****, then why are you wasting time reading this?
# Two Wires Lie Perpendicular To The Plane Of The Paper

Two wires lying in the plane of this page carry equal currents in opposite directions, as shown. Unless specified, consider any one without loss of generality. Now let's look at some examples that use these ideas to find lines that are either parallel or perpendicular to a given line passing through a specific point. The angle a plane makes with the vertical plane follows in a similar manner. The figure shows, in cross section, two conductors that carry currents perpendicular to the plane of the figure.

## A small circular loop of conducting wire has radius a and carries current I.

If we find a vector normal to the given plane, and this vector lies in the plane we seek, then we can construct that plane. This calculator finds and plots equations of lines parallel and perpendicular to a given line passing through a given point.

### These two vectors will lie completely in the plane since we formed them from points that were in the plane.

How to calculate the angle between two planes. At a point midway between the wires, the two current-carrying wires are perpendicular to each other. We'll check for parallel, check for perpendicular, then look at the angle between them. The calculator will generate a step-by-step explanation of how to obtain the result. Now, we know that the cross product of two vectors will be orthogonal to both of these.
The coordinate plane, also known as the Cartesian plane, has a horizontal x-axis number line and a vertical y-axis number line; both axes are perpendicular to each other and cross at the origin. And when a line is perpendicular to a plane, then every plane containing the line is perpendicular to that plane. The calculator will generate a step-by-step explanation of how to obtain the result. It shows how to find the perpendicular distance from a point to a line, and a proof of the formula. (BTW – we don't really need to say 'perpendicular' because the distance from a point to a line always means the shortest distance.) This is a great problem because it uses all these things that we have. Then, the magnetic field strength (B) at a point midway between the wires will be. In practice, it's usually easier to work out $\bf n$ in a given example rather than try to set up some general equation for the plane. What you need to do is find the magnetic field generated by one wire at the location of the other. The magnetic field acts perpendicular to the plane of the paper. It could mean either of the two. Well, the normal vector to the given plane should lie within the plane you're after.
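The midpoint-field question above can be sketched numerically. The standard result for a long straight wire gives B = μ0·I/(2πr); at the midpoint between two wires carrying equal currents in opposite directions, the two contributions point the same way and add, while for parallel currents they cancel. The current and separation values below are invented for illustration, not taken from the page.

```python
# Illustrative sketch of the field midway between two long straight wires.
# The 5.0 A current and 0.10 m separation are invented example values.
from math import pi

MU_0 = 4 * pi * 1e-7  # vacuum permeability, T·m/A

def field_from_wire(current, distance):
    """Magnitude of B from an infinite straight wire at the given distance."""
    return MU_0 * current / (2 * pi * distance)

separation = 0.10          # m, between the two wires
current = 5.0              # A, in each wire
r = separation / 2         # midpoint sits this far from each wire

b_one = field_from_wire(current, r)

# Opposite (antiparallel) currents: the two midpoint fields point the same
# direction and add. Parallel currents: they oppose and cancel at the midpoint.
b_antiparallel = 2 * b_one
b_parallel = 0.0

print(f"{b_one:.2e} T per wire, {b_antiparallel:.2e} T total (antiparallel)")
```

With these example numbers each wire contributes 2×10⁻⁵ T at the midpoint, so opposite currents give 4×10⁻⁵ T there and parallel currents give zero.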
# Vacuum Tubes

The first electronic amplification of sound was done with Vacuum Tubes. We have pre-amp, power and rectifier vacuum tubes among other types in New Old Stock from the golden era of American & European manufacturing as well as current production tubes from today.

#### JJ Electronics

JJ vacuum tubes and capacitors are widely used in guitar and audio amplifiers, recording studio equipment and in a variety of applications for audiophiles.

#### Apex Matched

Custom-designed and custom-built by industry experts with ideal tube matching in mind. Apex tubes are matched more closely and more accurately than you'll find anywhere else. Visit https://www.apexmatching.com for more information.

Vacuum Tube - E83CC Frame Grid, JJ Electronics

Alternative to the 12AX7 or ECC83. The JJ Electronic E83CC is a high quality dual triode that rivals the NOS version of this tube. The E83CC was originally designed as an audio-specific dual triode with very low noise and microphonics. The JJ E83CC stays true to the original design with the signature thick frame grid and boxed plate. The rigid frame allows for tighter tolerances and closer construction, which results in lower noise and increased stability. The JJ Electronic E83CC is a great choice for high fidelity amplifiers as well as guitar amps and studio equipment. The E83CC has the same pinout and amplification factor as a standard 12AX7 and can be used in any 12AX7 or ECC83 position.

Guitarist Description: This frame grid tube has an incredibly rich yet balanced tone. The E83CC works well for clean sounds but also breaks up with a satisfying warmth and plenty of articulation.
$21.95

Vacuum Tube Set - 12AX7 Sampler, 8 total tubes

Package of 8. One of each of the following: JJ Electronic 12AX7, Mullard 12AX7, Tube Amp Doctor premium selected 12AX7, Sovtek 12AX7WA, JJ Electronic ECC803S, Chinese high-grade 12AX7B, Electro-Harmonix 12AX7, Tung-Sol 12AX7. $109.92

Vacuum Tube Sampler Set - EL84, 9 tubes total

Package of 9. Sampler set of EL84 vacuum tubes. Contains one each of the following: Electro-Harmonix EL84, Genalex Gold Lion EL84, JJ Electronic EL84, Mullard Reissue EL84, Sovtek EL84, Tung-Sol EL84, JJ Electronic EL844, Sovtek EL84M, Tube Amp Doctor EL84. $141.45

Vacuum Tube - 6SN7, JJ Electronics

The JJ Electronic 6SN7 is the classic double triode with a rugged construction that offers the high reliability that JJ is known for. The JJ 6SN7 features the outstanding linearity and sound of the original tubes. This tube is an excellent replacement for Hi-Fi amplifiers and preamps and will replace any 6SN7, 6SN7GTA, 6SN7GTB, or 6SN7GT. $16.50

Vacuum Tube - 12BH7-A, JJ Electronics

The JJ Electronic 12BH7-A is a faithful reproduction of the original tube. Its rugged construction offers high reliability. The JJ 12BH7-A can be used as a replacement for any 12BH7 tube. $16.95

Vacuum Tube - 5881, JJ Electronics

The JJ Electronic 5881 is very similar to the 6L6GC but with a plate dissipation of 23W rather than 30W. Another physical difference is the shorter bulb. These tubes have a soft and balanced response that sounds great in both guitar amps and hifi equipment. The JJ 5881 will work in any amp that uses 6L6 tubes. Starting at $18.95

Vacuum Tube - 12AX7S, JJ Electronics, Mid Gain

A new take on the classic 12AX7 preamp tube. While it uses the same rugged build and thick glass as other JJ preamp tubes, this tube differs in construction from the typical 12AX7 in that it features a medium-long plate, which provides a dynamic and open sound with a rich forward sounding mid range. This tube offers low noise and low microphonics.
$12.45

Vacuum Tube - EL84 / 6BQ5, Tung-Sol Reissue

Due to many requests by fans, the Tung-Sol EL84/6BQ5 has been reissued. The Tung-Sol EL84 / 6BQ5 was a popular OEM tube for American-made EL84 / 6BQ5 guitar amplifiers, such as those built by Gibson and Harmony. Due to its rugged construction and conservative ratings, it is the most reliable current production EL84 type available for use in Vox AC-30 type amplifiers. In addition, its linearity and tone characteristic make it the ideal replacement for EL84 hi-fi amplifiers, such as those built by Fisher, Scott, and Harman-Kardon. Tube measurements are as follows:

• Height: 2.75” (without pins)*
• Diameter: 0.87”*

*Note: Due to the method of production in regards to the glass, measurements may vary slightly. Starting at $14.65

Vacuum Tube - 12AX7 / ECC83, JJ Electronics, package of 3

Package of 3. This tube has a well balanced, colorful tone with strongly defined lows, mids and smooth highs. It allows for more clean head-room than higher gain 12AX7s. When pushed into overdrive it is smooth and strong, offering clean distortion with well balanced lows and mids. The JJ 12AX7 is well suited for all types of music and playing styles. It is also highly recommended for studio pre-amps and hi-fi gear. $32.85

Vacuum Tube - 6L6WGS, Black Plate, Short Bottle, Made in China

6L6WGS - Black Plate, Short Bottle, Made in China. Starting at $17.30

Vacuum Tube - 6L6WGC, Black Plate, Made in China

6L6WGC - Black Plate, Made in China. Starting at $17.30

Vacuum Tube - 5751, JJ Electronics

A dual triode that can be used in place of most 12AX7 circuits. The rugged build of this tube and reduced gain (30% less than a 12AX7) can improve headroom and give better control in high gain stages, allowing users to more easily dial in sweet spots.
The JJ 5751 features a smooth and balanced tone and a detailed response with low noise and low microphonics. $13.95

Vacuum Tube - 5881, Tung-Sol Reissue

Russian made reissue of the original Tung-Sol. The 5881 is the electrical equivalent to types 6L6 and 6L6G except that the plate and screen dissipation ratings have been increased approximately 20 percent. It embodies a complete mechanical redesign which results in greater resistance to shock and vibration. Starting at $20.95

Vacuum Tube - 5Y3 S, JJ Electronics, Rectifier

The JJ Electronic 5Y3S is a ruggedly built octal rectifier tube with a directly heated cathode. This tube’s solid construction and thick glass envelope offer high reliability. The JJ 5Y3S will work in any 5Y3 position. $15.50

Vacuum Tube - 6L6, Electro-Harmonix

Superior power beam tetrode with large plate dimensions and improved grid structure for increased power handling capabilities. Excellent tone and performance compared to most other 6L6 or KT66 types. Starting at $21.75

Vacuum Tube - 6L6GC, JJ Electronics

The JJ Electronic 6L6GC is the classic pentode built with the rugged construction that only JJ could deliver. At 30W, this tube delivers plenty of headroom. The highs are clear and responsive with warm mids and tight solid lows. Known for their reliability and articulation, the JJ 6L6GC is a great option for guitar amplifiers and Hi-Fi stereo equipment. Starting at $17.50

Vacuum Tube - 6L6GC STR, Tung-Sol Reissue

Tung-Sol 6L6GC “STR” beam-power tetrode tube. Made in Russia. The Tung-Sol legacy continues with the introduction of the new 6L6GC STR. The ultimate in musical tone and smooth overdrive, the STR delivers the sound that established Tung-Sol as the benchmark of quality. Built to the same “Special Tube Request” specs of leading amplifier manufacturers of the 1960s, the 6L6GC STR is a rugged and reliable power tube for use in the most demanding guitar amplifier circuits.
Starting at $23.50

Vacuum Tube - 6L6WXT+, Sovtek

Higher in gain than the 5881WXT, with full base. Sovtek is proud to announce the release of their new 6L6WXT+ vacuum tube. Modelled after the vintage RCA 6L6GC "blackplate," the Sovtek 6L6WXT+ features larger plate dimensions and improved grid structure for increased power handling capabilities. The 6L6WXT+ also features mica spacers with metal springs to eliminate tube rattle and microphonics. The Sovtek 6L6WXT+ yields a 20% higher output than the Sovtek 5881WXT and offers superior tone and overall performance to any 6L6 or KT66. Starting at $22.50

Vacuum Tube - 6V6, JJ Electronics

This 6V6 has the standard rugged construction we’ve come to expect from JJ, which allows it to handle much higher plate voltages than the typical 6V6 tube. The JJ 6V6 can even be used in place of a 6L6 in some amplifiers. Overall this tube has a warm and balanced tone with incredible separation and response. These tubes can cover a spectrum of sounds and styles all within the same amplifier. Starting at $14.95

Vacuum Tube - 6V6GT, Chinese

Chinese 6V6GT. Available in Singles, Matched Pairs or Matched Quads - please select from list. Starting at $13.02

Vacuum Tube - 6V6GT, Electro-Harmonix

Octal power tube (Max Plate Watts = 14W). Beam power tetrode with a specially developed cathode coating, careful alignment of the grid, tri-alloy plate material for flawless performance up to 475 volts. Perfect for high plate voltage amplifiers like the Fender Deluxe Reverb. Starting at $18.95

Vacuum Tube - 6V6GT, Tung-Sol Reissue

Totally Tweed! The preferred OEM tube of '50s-era Fender Tweed Champ and Deluxe amplifiers. The Tung-Sol 6V6 has a geometry designed to safely handle the higher voltages used in guitar amps, plus heavier plate and grid materials. The result: better mids and bottom while keeping the smooth top of the classic 6V6's. The Tung-Sol 6V6 breaks up evenly from low E to high up the neck. Blues players will LOVE these tubes for the way they sing!
Starting at $21.95

Vacuum Tube - 7027A, JJ Electronics

The JJ Electronic 7027A offers the same reliability and tone as the JJ 6L6GC but with the correct pinout for vintage amplifiers that use 7027 tubes, like older Ampegs. At 30W, this tube delivers plenty of headroom. The highs are clear and responsive with warm mids and tight solid lows. Known for their reliability and articulation, the JJ 7027A is a great option for guitar amplifiers and Hi-Fi stereo equipment. Starting at $21.95

Vacuum Tube - 7591, JJ Electronics

The JJ Electronic 7591 is a reliable and accurate reproduction of the classic tube. This 19 Watt pentode has been used in a variety of amplifiers including Hi-Fi amps and Ampegs. The JJ 7591 is a great replacement for any 7591 tube position. Starting at $17.95
https://ncatlab.org/nlab/show/compact+Lie+algebra
# nLab compact Lie algebra

# Contents

## Definition

A semisimple Lie algebra $\mathfrak{g}$ is compact if its Killing form $\langle -,-\rangle : \mathfrak{g} \otimes \mathfrak{g} \to \mathbb{R}$ is a negative definite bilinear form.

## Properties

###### Proposition

The Lie algebra of a compact semisimple Lie group is compact.

A proof is spelled out for instance in (Woit, theorem 1).

## References

For instance

• Peter Woit, Topics in Representation Theory: The Killing Form, Reflections and Classification of Root Systems (pdf)

Last revised on September 14, 2011 at 18:35:53.
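For illustration (this example is not part of the original page): the Killing form of $\mathfrak{su}(2)$ can be computed directly and is negative definite, so $\mathfrak{su}(2)$ is compact in the above sense.

```latex
% Killing form of su(n): B(X,Y) = 2n * tr(XY); for su(2), B(X,Y) = 4 tr(XY).
% Elements of su(2) are traceless anti-Hermitian matrices, i.e. X^\dagger = -X,
% so for any nonzero X:
B(X,X) \;=\; 4\,\mathrm{tr}(X X)
       \;=\; -4\,\mathrm{tr}\big(X X^{\dagger}\big)
       \;=\; -4 \sum_{i,j} \lvert X_{ij} \rvert^{2} \;<\; 0
% Hence B is negative definite, consistent with SU(2) being a compact Lie group.
```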
https://investeringarfymd.web.app/7548/15658.html
So, it acts as a cathode.

12. The electrode having a lower reduction potential acts as an anode and vice versa.

13. Remember that the Table of Standard Reduction Potentials lists E° values for substances in their standard states; for aqueous solutions this refers to a concentration of 1 mol L⁻¹. For the reduction of silver ions in aqueous solution to silver metal, the reduction reaction equation given in the Table of Standard Reduction Potentials is Ag⁺(aq) + e⁻ → Ag(s).

Chemically, the oxidation–reduction potential (aka ORP or redox potential) is defined as the tendency for a molecule to acquire electrons. It involves two components known as a redox pair during the electron transfer process, of which the oxidizing one (Ox) attracts electrons and then becomes the reducing one (Red).

The standard reduction potential of the standard hydrogen electrode is always taken as 0.00 V; the standard reduction potential of the Cu²⁺/Cu couple, for example, is +0.34 V.
The standard electrode potentials are measured under standard conditions:

• temperature = 25 °C (≈ 298 K)
• pressure = 100 kPa
• concentration of species in aqueous solution = 1 mol L⁻¹

This is a list of standard reduction potentials because all the reactions are given as reduction equations, that is, a species gains electrons.
The overall cell potential is the reduction potential of the reductive half-reaction minus the reduction potential of the oxidative half-reaction (E°cell = E°cathode − E°anode).

A reduction potential measures the tendency of a molecule to be reduced by taking up new electrons. The standard reduction potential is the reduction potential of a molecule under specific, standard conditions. Standard reduction potentials can be useful in determining the directionality of a reaction. In order to have a spontaneous reaction, as is the case in voltaic cells, we want a species that is easily reduced (positive reduction potential) to be able to gain its electrons from a species that is easily oxidized (negative reduction potential).

From Wikipedia, the free encyclopedia: Redox potential (also known as oxidation/reduction potential, 'ORP', pe, or E₀') is a measure of the tendency of a chemical species to acquire electrons from or lose electrons to an electrode and thereby be reduced or oxidised respectively. Redox potential is measured in volts (V), or millivolts (mV).

What is Reduction Potential? The electrode potential is termed the oxidation potential if oxidation takes place at the electrode. Reduction involves gain of electrons, so the tendency of an electrode to gain electrons is called its reduction potential.

10. Reduction potential is the tendency of an electrode to gain electrons or get reduced.

11. The electrode having a higher reduction potential has a higher tendency to gain electrons. So, it acts as a cathode.
In the examples we used earlier, zinc's electrode reduction potential is −0.76 V and copper's is +0.34 V. So, if an element or compound has a negative standard electrode reduction potential, it means it forms ions easily. The more negative the value, the easier it is for that element or compound to form ions (be oxidised, and be a reducing agent).

A more complete list is provided in Tables P1 or P2. Figure 3: A galvanic cell can be used to determine the standard reduction potential of Ag⁺.
http://openstudy.com/updates/4f2cbd62e4b0571e9cba3cd4
## anonymous 4 years ago

im stuck on this question: x^2+12x+35 divided by 3x+15

1. dumbcow

Factor top and bottom... see what cancels

2. anonymous

Here.

3. ash2326

we have $\frac{x^2+12x+35}{3x+15}$

let's take out 3 from the denominator; we get $\frac{x^2+12x+35}{3(x+5)}$

now let's divide $x^2+12x+35$ by $x+5$ using synthetic division:

    -5 | 1   12   35
       |     -5  -35
       -------------
         1    7    0

so $\frac{x^2+12x+35}{x+5}=x+7$ and we get $\frac{x+7}{3}$

4. Preetha

Briana, did you understand these explanations?
https://leimao.github.io/blog/CPP-Lambda-Expression/
### Lei Mao

Machine Learning, Artificial Intelligence, Computer Science.

# C++ Lambda Expressions

### Introduction

A C++ lambda expression defines an anonymous function object (a closure) that could be passed as an argument to a function. Recently I realized that my knowledge about the usages of lambda expressions had been quite limited. In this blog post, I would like to document the usages of lambda expressions which I knew and did not know.

### Examples

```cpp
/*
 * lambda_expression.cpp
 */
#include <iostream>

int main()
{
    double rate = 0.01;
    // Capture by reference
    auto f0 = [&rate](){rate *= 2;};
    // Capture variables in the function body by reference by default
    auto f1 = [&](){rate *= 2;};
    // Capture by const value
    auto f2 = [rate](double x){return (1 + rate) * x;};
    // Specify return type
    auto f3 = [rate](double x)->double{return (1 + rate) * x;};
    auto f4 = [rate](double x)->int{return (1 + rate) * x;};
    // Compile time polymorphism
    // Generic lambda parameters (auto) are a C++14 feature
    auto f5 = [](auto x){return 2 * x;};
    // Template parameter lists on lambdas are a C++20 feature
    auto f6 = []<typename T>(T x){return 2 * x;};
    auto f7 = []<typename T>(T x)->T{return 2 * x;};
    // Capture by mutable value
    // This causes error because rate is immutable
    // auto f8 = [rate](double x){rate += 1.0;};
    // Rate is now mutable
    // But the mutation would not change the value outside the lambda expression scope
    auto f8 = [rate]() mutable {rate += 1.0; return rate;};

    f0();
    std::cout << rate << std::endl;
    f1();
    std::cout << rate << std::endl;
    std::cout << f2(10) << std::endl;
    std::cout << f3(10) << std::endl;
    std::cout << f4(10) << std::endl;
    std::cout << f8() << std::endl;
    std::cout << rate << std::endl;
}
```

To compile the program, please run the following command in the terminal. (Note that f6 and f7 use lambda template parameter lists, which require C++20; the remaining examples only need C++14.)

```
$ g++ lambda_expression.cpp -o lambda_expression --std=c++20
```

### Usages

#### Typical Usages

Typically, a lambda expression consists of three parts: a capture list [], an optional parameter list () and a body {}, all of which can be empty.
One of the simplest lambda expressions is

```cpp
[](){}
```

Since the parameter list () is optional, actually the simplest lambda expression is

```cpp
[]{}
```

A lambda expression can be passed to or assigned to function pointers (only if it captures nothing) and std::function, or passed to a function template.

#### Capture List

The capture list is used to make the variables outside the lambda expression accessible inside the lambda expression, via copy or reference. This is somewhat similar to a C++ functor, which could also change the inner state of the function object.

```cpp
// Capture by reference
auto f0 = [&rate](){rate *= 2;};
// Capture variables in the function body by reference by default
auto f1 = [&](){rate *= 2;};
// Capture by const value
auto f2 = [rate](double x){return (1 + rate) * x;};
// Capture by mutable value
// This causes error because rate is immutable
// auto f8 = [rate](double x){rate += 1.0;};
// Rate is now mutable
// But the mutation would not change the value outside the lambda expression scope
auto f8 = [rate]() mutable {rate += 1.0; return rate;};
```

#### Parameter List

The parameter list is just like the parameter list for ordinary functions. To allow compile-time polymorphism, we could use a templated lambda or auto as the type for arguments and let the compiler deduce the type at compile time.

```cpp
auto f5 = [](auto x){return 2 * x;};
auto f6 = []<typename T>(T x){return 2 * x;};
```

#### Function Body

The function body has nothing special.

#### Return Type

By default, the return type of the lambda expression is deduced by the compiler at compile time. We could also optionally include the return type in the lambda expression to ask the compiler to double-check it and cast to that type for us.

```cpp
auto f3 = [rate](double x)->double{return (1 + rate) * x;};
auto f4 = [rate](double x)->int{return (1 + rate) * x;};
auto f7 = []<typename T>(T x)->T{return 2 * x;};
```

### Conclusions

I actually did not know how to use the capture list previously. Making good use of it would make life a lot easier in some scenarios.
http://www.purplemath.com/learning/viewtopic.php?f=7&t=290
## need help on -2X - 4 < 10, (2/3)X + (1/6)X = 2

Simplification, evaluation, linear equations, linear graphs, linear inequalities, basic word problems, etc.

lchilds
Posts: 4
Joined: Tue Mar 24, 2009 6:47 pm

### need help on -2X - 4 < 10, (2/3)X + (1/6)X = 2

-2X - 4 < 10
(2/3)X + (1/6)X = 2

stapel_eliz
Posts: 1738
Joined: Mon Dec 08, 2008 4:22 pm

lchilds wrote: (2/3)X + (1/6)X = 2

This is a linear equation. To solve, a good first step would be to multiply through by "6" to clear the denominators, and then combine the "like" terms on the left-hand side. Divide through by "5" to complete the solution.

lchilds wrote: -2X - 4 < 10

Solving linear inequalities is very similar to solving linear equations such as the previous exercise. In this case, a good first step would be to add "4" to both sides. The next obvious step would be to divide through by "-2"; this is where you encounter the one difference between equations and inequalities: when you multiply or divide through by a negative, you need to remember to reverse the inequality sign. So after dividing through by "-2", you'll need to remember to switch the "less than" sign to a "greater than" sign. Then simplify the right-hand side to complete the solution.

If you get stuck, please reply showing how far you have gotten. Thank you!

lchilds
Posts: 4
Joined: Tue Mar 24, 2009 6:47 pm

### Re: need help on -2X - 4 < 10, (2/3)X + (1/6)X = 2

when you say multiply through by 6 do you mean 6(2/3)x + 6(1/6)x? i am not really understanding this problem

stapel_eliz
Posts: 1738
Joined: Mon Dec 08, 2008 4:22 pm

lchilds wrote: when you say multiply through by 6 do you mean 6(2/3)x+6(1/6)x

To "multiply through" is to multiply on everything on either side of the equation or inequality. In this case:

. . . . .$6\left(\frac{2}{3}x\, +\, \frac{1}{6}x\right)\, =\, 6(2)$

. . . . .$6\left(\frac{2}{3}x\right)\, +\, 6\left(\frac{1}{6}x\right)\, =\, 12$

. . . . .$4x\, +\, 1x\, =\, 12$

...and so forth.
lchilds wrote: i am not really understanding this problem

A good way to gain some understanding might be to study the lesson on solving linear equations. By reading the explanations and doing all of the steps in the worked examples, you should be well on your way!
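For reference, carrying the tutor's steps through to the end gives the following (worked out here; the final answers are not stated in the thread itself):

```latex
% Linear equation: multiply through by 6, combine like terms, divide by 5.
\frac{2}{3}x + \frac{1}{6}x = 2
\;\Rightarrow\; 4x + x = 12
\;\Rightarrow\; 5x = 12
\;\Rightarrow\; x = \frac{12}{5}

% Linear inequality: add 4, then divide by -2, flipping the inequality sign.
-2x - 4 < 10
\;\Rightarrow\; -2x < 14
\;\Rightarrow\; x > -7
```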
https://www.gradesaver.com/textbooks/math/statistics-probability/elementary-statistics-12th-edition/chapter-3-statistics-for-describing-exploring-and-comparing-data-3-4-measures-of-relative-standing-and-boxplots-beyond-the-basics-page-126/37
Elementary Statistics (12th Edition)

In order to create a box and whisker plot, we use a box and whisker plot maker online. (To draw one by hand, we draw a line, making the far left point the minimum and the far right point the maximum. Then, we draw a box, where the left side of the box is $Q_1$ and the right side of the box is $Q_3$. Finally, we draw a vertical line through the box: this will be the median.) However, with a modified box and whisker plot, the outliers are graphed on the line separately instead of being included in the data set. Hence, when the outliers are graphed, we can see that there is actually a woman that is older than all of the men who won Best Actor.
https://www.doubtnut.com/question-answer-chemistry/give-balanced-equation-for-the-conversions-a-and-b-metallic-carbonate-aoversetbano32rarr-white-preci-643394119
# Give balanced equations for the conversions A and B.

Metallic carbonate A --Ba(NO₃)₂--> white precipitate B --dil. HCl--> precipitate dissolves.

Updated On: 17-04-2022

Transcript: Let's start the question. The question says: give balanced equations for the conversions A and B, where a metallic carbonate A reacts with barium nitrate, Ba(NO₃)₂, to form a white precipitate B, and the white precipitate B dissolves on reacting with dilute HCl.

A metallic carbonate is the carbonate compound of a metal such as sodium or calcium; here we take sodium carbonate, Na₂CO₃, as the metallic carbonate A. Reacting it with barium nitrate gives barium carbonate, the white precipitate B, along with sodium nitrate:

Na₂CO₃ + Ba(NO₃)₂ → BaCO₃↓ + 2NaNO₃ … (1)

Barium carbonate is white in colour, and we know from its physical properties that it dissolves in dilute HCl, giving barium chloride and carbonic acid:

BaCO₃ + 2HCl → BaCl₂ + H₂CO₃ … (2)

In this step no precipitate forms, since barium chloride is soluble in hydrochloric acid. (Barium carbonate dissolves in hydrochloric acid but not in sulphuric acid, H₂SO₄, so we can confirm that we have taken the right metallic carbonate.) Hence reactions (1) and (2) are the balanced equations for the conversions A and B. Thank you.
http://mathhelpforum.com/algebra/93410-demonstration-deduction.html
Math Help - Demonstration and deduction

1. Demonstration and deduction

$a, b, c$ are real numbers. Prove this equality:

Deduce all solutions of this equation in $\mathbb{R}$:

2. Originally Posted by dhiab: $a, b, c$ are real numbers. Prove this equality:

This is wrong. If you take $a=b=c$ you get $3a^3$ on the left side and $0$ on the right side. I think the correct equality is $a^3 + b^3 + c^3 - 3abc = \frac12 \, (a+b+c) \left[(b-c)^2+(c-a)^2+(a-b)^2\right]$

3. Originally Posted by dhiab: $a, b, c$ are real numbers. Prove this equality:

Looks to me like the best thing to do is just go ahead and multiply out the right side: $(b-c)^2 = b^2 - 2bc + c^2$, $(c-a)^2 = c^2 - 2ac + a^2$, $(a-b)^2 = a^2 - 2ab + b^2$, so $(b-c)^2 + (c-a)^2 + (a-b)^2 = 2a^2 + 2b^2 + 2c^2 - 2(ab + ac + bc)$. Now multiply that by $a + b + c$:

$a(2a^2 + 2b^2 + 2c^2 - 2(ab + ac + bc)) = 2a^3 + 2ab^2 + 2ac^2 - 2(a^2b + a^2c + abc)$

$b(2a^2 + 2b^2 + 2c^2 - 2(ab + ac + bc)) = 2a^2b + 2b^3 + 2bc^2 - 2(ab^2 + abc + b^2c)$

$c(2a^2 + 2b^2 + 2c^2 - 2(ab + ac + bc)) = 2a^2c + 2b^2c + 2c^3 - 2(abc + ac^2 + bc^2)$

Now note that the "$2ab^2$" term in the first equation is canceled by the "$-2ab^2$" term in the second equation, etc.

Deduce all solutions of this equation in $\mathbb{R}$:

4. Originally Posted by running-gag: This is wrong. If you take $a=b=c$ you get $3a^3$ on the left side and $0$ on the right side. I think the correct equality is $a^3 + b^3 + c^3 - 3abc = \frac12 \, (a+b+c) \left[(b-c)^2+(c-a)^2+(a-b)^2\right]$

HELLO: The equality is correct. LOOK AT THIS RESOLUTION. Deduce: Conclusion:

5. Originally Posted by dhiab: HELLO: The equality is correct. LOOK AT THIS RESOLUTION. Deduce: Conclusion:

What has become of the 3 terms equal to $-2abc$?
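As a quick numerical sanity check of the corrected identity (a throwaway script, not part of the original thread), one can test it on random integer triples; the deduction step then follows because the right side vanishes exactly when $a+b+c=0$ or $a=b=c$:

```ruby
# Spot-check a^3 + b^3 + c^3 - 3abc = (1/2)(a+b+c)[(b-c)^2 + (c-a)^2 + (a-b)^2]
# using Rational so the factor 1/2 stays exact on integer inputs.
def lhs(a, b, c)
  a**3 + b**3 + c**3 - 3 * a * b * c
end

def rhs(a, b, c)
  Rational(1, 2) * (a + b + c) * ((b - c)**2 + (c - a)**2 + (a - b)**2)
end

200.times do
  a, b, c = Array.new(3) { rand(-50..50) }
  raise "mismatch at #{[a, b, c].inspect}" unless lhs(a, b, c) == rhs(a, b, c)
end

# The factorization shows a^3 + b^3 + c^3 = 3abc exactly when
# a + b + c = 0 or a = b = c, for example:
lhs(4, -7, 3) # => 0  (since 4 + (-7) + 3 == 0)
lhs(5, 5, 5)  # => 0
```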
https://codereview.meta.stackexchange.com/questions/7359/the-help-center-contains-links-to-meta-codereview-instead-to-codereview-meta/7360
# The help center contains links to meta.codereview.* instead of codereview.meta.*

See What topics can I ask about here? for example:

> For more information on this policy, please see this post on our Meta site.

A user is greeted with an "insecure connection" warning, since the certificate isn't valid for meta.codereview.stackexchange.com, but for codereview.meta.stackexchange.com.

Further help pages with the old links:
https://orbit.dtu.dk/en/publications/on-the-atmosphere-of-a-moving-body(297d03df-bc66-4fd6-a04a-d23fabf63294)/export.html
On the atmosphere of a moving body

Research output: Chapter in Book/Report/Conference proceeding › Conference abstract in proceedings – Annual report year: 2010 › Research

Standard
Bulletin of the American Physical Society. Vol. 55/16, 2010, p. BAPS.2010.DFD.LF.1.

Harvard
Pedersen, JR & Aref, H 2010, On the atmosphere of a moving body. in Bulletin of the American Physical Society. vol. 55/16, pp. BAPS.2010.DFD.LF.1, 63rd Annual Meeting, American Physical Society, Division of Fluid Dynamics: November 21–23, Long Beach, California, USA, 01/01/2010.

APA
Pedersen, J. R., & Aref, H. (2010). On the atmosphere of a moving body. In Bulletin of the American Physical Society (Vol. 55/16, pp. BAPS.2010.DFD.LF.1).

CBE
Pedersen JR, Aref H. 2010. On the atmosphere of a moving body. In Bulletin of the American Physical Society. pp. BAPS.2010.DFD.LF.1.

MLA
Pedersen, Johan Rønby and Hassan Aref. "On the atmosphere of a moving body". Bulletin of the American Physical Society. 2010, BAPS.2010.DFD.LF.1.

Vancouver
Pedersen JR, Aref H. On the atmosphere of a moving body. In Bulletin of the American Physical Society. Vol. 55/16. 2010. p. BAPS.2010.DFD.LF.1.

Author
Pedersen, Johan Rønby; Aref, Hassan. / On the atmosphere of a moving body. Bulletin of the American Physical Society. Vol. 55/16. 2010. pp. BAPS.2010.DFD.LF.1.

Bibtex
@inbook{,
title = "On the atmosphere of a moving body",
abstract = "We have explored whether a rigid body moving freely with no circulation around it in a two-dimensional ideal fluid can carry a fluid atmosphere'' with it in its motion. Somewhat surprisingly, the answer appears to be yes''. When the body is elongated and the motion is dominated by rotation, we demonstrate numerically that, indeed, regions of fluid follow the body in its motion. 
Since there is a double-island structure for the case of pure rotation, as already found by Morton and Darwin many years ago, we see the existence of an atmosphere for the moving body as an example of the stability of Kolmogorov-Arnold-Moser tori. Our observations were reported in {\it Physics of Fluids} {\bf 22} (2010) 057103. The presentation will include animations not published with the paper and some indications of further work.",
author = "Pedersen, {Johan R{\o}nby} and Hassan Aref",
year = "2010",
language = "English",
volume = "55/16",
pages = "BAPS.2010.DFD.LF.1",
booktitle = "Bulletin of the American Physical Society",
}

RIS
TY - ABST
T1 - On the atmosphere of a moving body
AU - Pedersen, Johan Rønby
AU - Aref, Hassan
PY - 2010
Y1 - 2010
N2 - We have explored whether a rigid body moving freely with no circulation around it in a two-dimensional ideal fluid can carry a fluid atmosphere'' with it in its motion. Somewhat surprisingly, the answer appears to be yes''. When the body is elongated and the motion is dominated by rotation, we demonstrate numerically that, indeed, regions of fluid follow the body in its motion. Since there is a double-island structure for the case of pure rotation, as already found by Morton and Darwin many years ago, we see the existence of an atmosphere for the moving body as an example of the stability of Kolmogorov-Arnold-Moser tori. Our observations were reported in {\it Physics of Fluids} {\bf 22} (2010) 057103. The presentation will include animations not published with the paper and some indications of further work.
AB - We have explored whether a rigid body moving freely with no circulation around it in a two-dimensional ideal fluid can carry a fluid atmosphere'' with it in its motion. Somewhat surprisingly, the answer appears to be yes''. When the body is elongated and the motion is dominated by rotation, we demonstrate numerically that, indeed, regions of fluid follow the body in its motion. Since there is a double-island structure for the case of pure rotation, as already found by Morton and Darwin many years ago, we see the existence of an atmosphere for the moving body as an example of the stability of Kolmogorov-Arnold-Moser tori. Our observations were reported in {\it Physics of Fluids} {\bf 22} (2010) 057103. The presentation will include animations not published with the paper and some indications of further work.
M3 - Conference abstract in proceedings
VL - 55/16
SP - BAPS.2010.DFD.LF.1
BT - Bulletin of the American Physical Society
ER -
https://kundeveloper.com/blog/lineage-of-a-ruby-class/
# Lineage of a Ruby Class

The other day I was writing some code in which I needed to know if a specific class was a descendant of another class. I've done a similar thing with Ruby objects in the past, but never with classes, and I think I'll start from there.

If we want to know if a Ruby object's class is a descendant of the Vehicle class, we can simply ask it if it is a Vehicle:

```ruby
Vehicle = Class.new
Animal = Class.new
Car = Class.new(Vehicle)
Dog = Class.new(Animal)

Car.new.is_a?(Vehicle) # => true
Dog.new.is_a?(Vehicle) # => false
```

And we can even go one level down in the lineage and it'll still answer as expected:

```ruby
Sedan = Class.new(Car)

Sedan.new.is_a?(Car) # => true
```

But when asking the same thing of a class, the story is different. And for that, I want to show you my specific use case. I had a class called Field that represented a field in a form; a form could have many fields, and so on. This was part of a form objects library, and in that library I wanted a small DSL that allowed me to create a field and assign it a type (it does more than that, but I'll simplify):

```ruby
class MyForm < Form
  field :name, String
  #       ^      ^
  #       |      Field type
  #       Field name
end
```

This method created two new instance methods on the form: a getter and a setter. And that setter needed to know the type we wanted and set the appropriate value. If the field type was Field or any descendant, the value was left unchanged, but if it was something else, we needed to wrap it in a Field (getting a uniform level of abstraction):

```ruby
class Form
  def self.field(field_name, field_type)
    define_method field_name do
      instance_variable_get "@#{field_name}"
    end

    define_method "#{field_name}=" do |raw_value|
      value = if <type_is_field>
                raw_value
              else
                Field.for(raw_value)
              end
      instance_variable_set "@#{field_name}", value
    end
  end
end
```

Let's center our focus on that placeholder. My first approach was to use is_a? out of habit. But as it turns out, it always returned false:

```ruby
Field = Class.new

String.is_a? Field # => false
Field.is_a? Field # => false
```

And that's because I'm always passing a class in. Now, seeing it as in the last example seems dumb on my part. But in my defense, I was seeing something like this:

```ruby
field_type.is_a? Field # => false
```

Because I was calling it on a variable passed in (and also, this particular case came up after several days of developing the library). So I had to investigate how to do it.

The first thing I did was simply ask the class whether its ancestors included the Field class:

```ruby
field_type.ancestors.include?(Field)
```

And this got my specs passing. But after looking at it for some time, I started finding that approach more and more annoying. It was very verbose and un-Ruby-like to me, so I went where every developer goes in these cases… Stack Overflow.

I found out that the Class class in Ruby defines the < operator to ask exactly what I wanted to ask: are you a descendant of Field?

```ruby
Field = Class.new
MyField = Class.new(Field)

MyField < Field # => true
```

And it's not just limited to that; it defines all the comparison operators, so we can ask about ancestors or descendants with this idiom, which is absolutely awesome to me:

```ruby
MyField < Field # => true
Field < Field # => false
MyField <= Field # => true
Field <= Field # => true
Field > MyField # => true
Field > Field # => false
Field >= MyField # => true
Field >= Field # => true
```

So next time you need to know about the lineage of a class, you have an excellent shorthand to do so. Thanks, Ruby!
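Putting the pieces together, here is one way the placeholder could be filled in using `<=` (my own minimal reconstruction, with a stand-in `Field.for`; the actual library surely differs):

```ruby
class Field
  attr_reader :value

  def initialize(value)
    @value = value
  end

  # Stand-in factory: the real library's Field.for is more elaborate.
  def self.for(raw_value)
    new(raw_value)
  end
end

class Form
  def self.field(field_name, field_type)
    define_method field_name do
      instance_variable_get "@#{field_name}"
    end

    define_method "#{field_name}=" do |raw_value|
      # field_type <= Field asks: is field_type Field itself or a descendant?
      # (For unrelated classes <= returns nil, which is falsy, so they get wrapped.)
      value = if field_type <= Field
                raw_value
              else
                Field.for(raw_value)
              end
      instance_variable_set "@#{field_name}", value
    end
  end
end

NameField = Class.new(Field)

class MyForm < Form
  field :name, NameField  # Field descendant: values pass through untouched
  field :age, Integer     # anything else gets wrapped in a Field
end
```

So `form.age = 42` stores a `Field` wrapping 42, while anything assigned to `name` is stored as-is, because `NameField <= Field` is true.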
https://answers.opencv.org/questions/8531/revisions/
# Revision history [back]

### Trying to detect faces in linux, done it in windows

I am trying to detect faces in still photos. I am using the following code, which compiles, using OpenCV 2.2.0 installed from an rpm for CentOS. What I get is:

    Failed to initialize libdc1394
    Segmentation fault

I read that libdc1394 has to do with it not initializing the camera or something similar and that it can be ignored; this is running on a server, so it won't initialize the server's camera. But the segmentation fault does not allow me to execute the code like it should. Right now the code below doesn't print in a way that PHP can access it. I have that working with C++ on Windows (it's been running in a project on localhost for some time now) and I want to host it online, but for that I need to have OpenCV working on Linux. The segmentation fault could come from any "stupid" error the code below might have, but I'm just not very proficient with C or C++. Thanks.

    #include <stdio.h>
    #include "cv.h"
    #include "highgui.h"
    #include "cvaux.h"

    CvMemStorage *storage;

    void detectFacialFeatures( IplImage *img, IplImage *temp_img )
    {
        char image[100], msg[100], temp_image[100];
        float m[6];
        double factor = 1;
        CvMat M = cvMat( 2, 3, CV_32F, m );
        int w = (img)->width;
        int h = (img)->height;
        CvSeq* faces;
        CvRect *r;

        m[0] = (float)(factor*cos(0.0));
        m[1] = (float)(factor*sin(0.0));
        m[2] = w*0.5f;
        m[3] = -m[1];
        m[4] = m[0];
        m[5] = h*0.5f;

        CvMemStorage* storage = cvCreateMemStorage(0);
        cvClearMemStorage( storage );
        faces = cvHaarDetectObjects( img, cascade, storage, 1.2, 2, CV_HAAR_DO_CANNY_PRUNING, cvSize(20, 20) );
        else
            printf("\n no of faces detected are %d", faces->total);

        int i = 0;
        while( i < faces->total )
        {
            r = ( CvRect* )cvGetSeqElem( faces, i );
            cvRectangle( img, cvPoint( r->x, r->y ), cvPoint( r->x + r->width, r->y + r->height ), CV_RGB( 255, 0, 0 ), 1, 8, 0 );
            printf("\n face_x=%d face_y=%d wd=%d ht=%d", r->x, r->y, r->width, r->height);
            cvResetImageROI(img);
            i++;
        }

        if( faces->total > 0 )
        {
            sprintf(image, "face_output%d.jpg");
            cvSaveImage( image, img, 0 );
        }
    }

    int main( int argc, char *argv[] )
    {
        char *dea;
        char *image;
        char *temp_img;

        dea = argv[1];
        printf("%c", argv[1]);
        image = argv[1];
        IplImage *img, *temp_img;
        int key;

        storage = cvCreateMemStorage( 0 );
        {
            return -1;
        }
        sprintf("%c", argv[1]);
        if(!img)
        {
            printf("Could not load image file and trying once again: %s\n", image);
        }
        printf("\n curr_image = %s", image);
        detectFacialFeatures(img, temp_img);
        cvReleaseMemStorage( &storage );
        cvReleaseImage(&img);
        cvReleaseImage(&temp_img);
        return 0;
    }
https://crypto.stackexchange.com/questions/26817/clarify-ec-point-addition-and-multiplication
# Clarify EC point addition and multiplication

$P$ is the generator point; $a$ and $b$ are integers; $X$ and $Y$ are EC points, defined as follows:

1. $X = (a*P) + (b*P)$
2. $Y = (a+b)*P$

Questions:

1. Are the points $X$ and $Y$ equal?
2. Does computing $X$ take 1 scalar point multiplication (i.e. 0.5 + 0.5), whereas computing $Y$ takes 0.5 scalar point multiplications?
3. Does $(a*b)*P$ take 1 scalar point multiplication?
4. How can an EC-point computation be equal to 1 or 0.5 scalar point multiplications?

• 1) Yes, since "scalar multiplication" is exponentiation in a group, hence the rule $aP+bP=(a+b)P$ holds for all points $P$ and integers $a,b$. – yyyyyyy Jul 9 '15 at 15:59
• I guess you are talking about computational complexity? In that case, I suspect there may be something wrong with the information you give: "half" operations do not make too much sense. Where did you get those questions? – yyyyyyy Jul 9 '15 at 16:12
• I quickly edited your question to include mathematical symbols, to use SE's enumerated lists, and I slightly reformulated your text. If I changed the meaning of your questions you can either edit again (using the "edit" button) or you can roll back my edit (by clicking on the "edited ... ago" and selecting the previous version). – SEJPM Jul 9 '15 at 18:35
• 3) $(ab)P$ is equal to $cP$ (ignoring the costs for the computation of $c=ab$), meaning it takes (roughly) as much time as computing $Y$. – SEJPM Jul 9 '15 at 18:56
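The rule in question, $aP + bP = (a+b)P$, can be demonstrated numerically on a toy curve. The sketch below uses a hypothetical tiny curve over GF(97), purely for illustration: it is not constant-time and nothing like real cryptographic parameters.

```ruby
# Toy affine elliptic-curve arithmetic over GF(97) on y^2 = x^3 + 2x + 3.
# P = (3, 6) lies on the curve: 6^2 = 36 and 3^3 + 2*3 + 3 = 36 (mod 97).
MOD = 97
A = 2

def inv(x)
  x.pow(MOD - 2, MOD) # Fermat inverse; MOD is prime
end

# nil stands for the point at infinity (the group identity).
def ec_add(p, q)
  return q if p.nil?
  return p if q.nil?
  x1, y1 = p
  x2, y2 = q
  return nil if x1 == x2 && (y1 + y2) % MOD == 0 # P + (-P) = infinity

  slope = if p == q
            (3 * x1 * x1 + A) * inv((2 * y1) % MOD) % MOD
          else
            (y2 - y1) * inv((x2 - x1) % MOD) % MOD
          end
  x3 = (slope * slope - x1 - x2) % MOD
  y3 = (slope * (x1 - x3) - y1) % MOD
  [x3, y3]
end

# Double-and-add scalar multiplication: k*P.
def ec_mul(k, point)
  acc = nil
  while k > 0
    acc = ec_add(acc, point) if k.odd?
    point = ec_add(point, point)
    k >>= 1
  end
  acc
end

base = [3, 6]
a, b = 5, 11
x = ec_add(ec_mul(a, base), ec_mul(b, base)) # X = aP + bP
y = ec_mul(a + b, base)                      # Y = (a+b)P
x == y # => true
```

Computing $X$ this way costs two scalar multiplications plus one point addition, while $Y$ costs a single scalar multiplication, which is the asymmetry the question is driving at.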
https://ribaat.rabata.org/course/info.php?id=431
### Summer 2-2020 - Signs of the Day of Judgment

Course Category: Aqida | Level 1 Course
Course Code: AQD 102
Instructor: Anse Maha Hamoui
Duration: 5 weeks
Course Dates: July 25 - August 26, 2020
Days: Saturdays & Wednesdays
Time: 2:00 pm CST / 8:00 pm UK

Classes last approximately 1 hour each. Live attendance is not required but is recommended whenever possible. Recordings of the sessions will be made available for registered students who cannot attend live.

Prerequisites: Students should be women above the age of 17.

Summary: A survey of the major topics of the Day of Judgement. This course will begin with an in-depth discussion of the signs of the Day of Judgement, followed by the life in the grave (al-Barzakh), the Day of Judgement, the Siraat, and finally Hell and Heaven.

Course Requirements: Students are expected to attend the online lectures, complete one or more weekly assignments, read the assigned material, and write a term paper.

Registration Fee: $100.00, non-refundable and non-transferable (not including the cost of the course materials). Additional $5.00 auditing fee. Students will have access to course forums, recordings, and files until August 31st, 2020.

Text: NA

For inquiries, contact registration@rabata.org
https://www.deepdyve.com/lp/aps_physical/spectral-dimension-with-deformed-spacetime-signature-you0zuvmaj
# Spectral dimension with deformed spacetime signature

Studies of the effective regime of loop quantum gravity (LQG) revealed that, in the limit of Planckian curvature scales, spacetime may undergo a transition from the Lorentzian to Euclidean signature. This effect is a consequence of quantum modifications of the hypersurface deformation algebra, which in the linearized case is equivalent to a deformed version of the Poincaré algebra. In this paper the latter relation is explored for the LQG-inspired hypersurface deformation algebra that is characterized by the above mentioned signature change. While the exact form of the deformed Poincaré algebra is not uniquely determined, the algebra under consideration is representative enough to capture a number of qualitative features. In particular, the analysis reveals that the signature change can be associated with two symmetric invariant energy scales, which separate three physically disconnected momentum subspaces. Furthermore, the invariant measure on momentum space is derived, which allows to properly define the average return probability, characterizing a fictitious diffusion process on spacetime. The diffusion is subsequently studied in the momentum representation for all possible variants of the model. Finally, the spectral dimension of spacetime is calculated in each case as a function of the scale parameter. In the most interesting situation the deformation is of the asymptotically ultralocal type and the spectral dimension reduces to dS=1 in the UV limit.
Physical Review D, American Physical Society (APS), Volume 96 (2) – Jul 15, 2017
ISSN 1550-7998, eISSN 1550-2368
DOI: 10.1103/PhysRevD.96.024012
http://mathoverflow.net/questions/75061/plancherel-measure-on-homogeneous-spaces
# Plancherel measure on homogeneous spaces

Does anyone know what the correct formulation of the Plancherel theorem should be for homogeneous spaces? More specifically, I am looking for a statement like: there is a unique measure $\mu$ on $\hat G$ such that $L^2(G/H)=\int_{\hat G}^{\oplus}H(\xi)\,d\mu(\xi)$, and something like a functional $I(f)=\int_{\hat G}^{\oplus}\operatorname{Tr}(\xi(f))\,d\mu$. I will appreciate your help a lot. I am more familiar with the language of C*-algebras, so if you can state this in that setting it will be even better.

What restrictions are you typically imposing on G and H? It isn't clear to me what such a formula would mean if G were the free group on two generators and H were the identity; but that could just be my ignorance. – Yemon Choi Sep 10 '11 at 3:56

G is a reductive algebraic group over a local field, and H is a closed subgroup. From this you can conclude that G is a postliminal separable group, and there is hope for a Plancherel measure. In the case you stated, the free group on two generators is not separable, so I am not sure we can say something there. – Carlos De la Mora Sep 14 '11 at 5:42

Thanks for the clarification. "The free group on two generators is not separable" – in what sense do you mean "separable"? – Yemon Choi Sep 14 '11 at 20:27

It does not have a dense countable subset. – Carlos De la Mora Sep 15 '11 at 18:40

Carlos, have you seen the book "Lie Theory: Harmonic Analysis on Symmetric Spaces – General Plancherel Theorems"? A version of van den Ban's contribution is available on his website (under Lecture Notes). He also has a survey in PSPM 61 ("Harmonic analysis on semisimple symmetric spaces. A survey of some general results.") which is available from his website (under Publications). – B R Sep 21 '11 at 5:19

## 1 Answer

Maybe the paper "Penney, Richard: Abstract Plancherel theorems and a Frobenius reciprocity theorem. J. Functional Analysis 18 (1975), 177–190 (MR0444844)" is what you are looking for.
http://bhrnjica.net/category/codeproject/
# Category Archives: CodeProject

## How to run code daily at specific time in C# Part 2

A few months ago I wrote a blog post about how to run code daily at a specific time. I didn't know that it would become the most viewed post on my blog. There were also several questions about how to implement a complete example. So today I have decided to write another post and extend my previous one, in order to answer those questions as well as to generalize this subject into a cool demo called Scheduler DEMO. The post presents a simple Windows Forms application which calls a method every minute, day, week, month or year. The demo also shows how to cancel the scheduler at any time. The picture above shows a simple Windows Forms application with two numeric controls where you can set the starting hour and minute for the scheduler. Next there is a Start button to activate the timer for running the code, as well as a Cancel button to cancel the scheduler. When the time comes, the application writes a message to the Scheduler Log.
## Implementation of the scheduler

The scheduler is started by clicking the Start button, which is implemented with the following code:

```
/// <summary>
/// Setting up the time for running the code
/// </summary>
/// <param name="sender"></param>
/// <param name="e"></param>
private void startBtn_Click(object sender, EventArgs e)
{
    //retrieve hour and minute from the form
    int hour = (int)numHours.Value;
    int minutes = (int)numMins.Value;

    //create the next date at which we need to run the code
    var dateNow = DateTime.Now;
    var date = new DateTime(dateNow.Year, dateNow.Month, dateNow.Day, hour, minutes, 0);

    //get the next date the code needs to run
    var nextDateValue = getNextDate(date, getScheduler());
    runCodeAt(nextDateValue, getScheduler());
}
```

When the time is defined, the runCodeAt method is called, whose implementation looks like the following:

```
/// <summary>
/// Determine the TimeSpan the scheduler must wait before running the code
/// </summary>
/// <param name="date"></param>
/// <param name="scheduler"></param>
private void runCodeAt(DateTime date, Scheduler scheduler)
{
    m_ctSource = new CancellationTokenSource();

    var dateNow = DateTime.Now;
    TimeSpan ts;
    if (date > dateNow)
        ts = date - dateNow;
    else
    {
        date = getNextDate(date, scheduler);
        ts = date - dateNow;
    }

    //enable the progress bar
    prepareControlForStart();

    //wait the given time and then run the code; in the meantime the task can be canceled at any time
    Task.Delay(ts).ContinueWith(_ =>
    {
        //run the code at the scheduled time
        methodToCall(date);

        //set up the call for the next day
        runCodeAt(getNextDate(date, scheduler), scheduler);
    }, m_ctSource.Token);
}
```

The method above creates the cancellation token needed to cancel the scheduler, calculates the TimeSpan (the total waiting time), then calls methodToCall when the time comes and calculates the next time the scheduler should run. This demo also shows how to wait a certain amount of time without blocking the UI thread. The full demo code can be found on OneDrive.
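The same pattern (compute the delay until the target time, wait, run the job, then reschedule for the next day) can be sketched in Python for comparison. This is an illustrative sketch, not part of the C# demo; the function names are my own:

```python
import datetime as dt
import threading

def seconds_until(target_time):
    """Seconds from now until the next occurrence of target_time (a datetime.time)."""
    now = dt.datetime.now()
    target = dt.datetime.combine(now.date(), target_time)
    if target <= now:
        target += dt.timedelta(days=1)  # time already passed today: roll over to tomorrow
    return (target - now).total_seconds()

def run_daily_at(target_time, job):
    """Schedule job at target_time, then reschedule it after each run."""
    def wrapper():
        job()
        run_daily_at(target_time, job)  # set up the next day's call
    timer = threading.Timer(seconds_until(target_time), wrapper)
    timer.daemon = True
    timer.start()
    return timer  # keep the handle so the caller can cancel() the scheduler
```

Calling `cancel()` on the returned timer plays the same role as the Cancel button in the demo.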
## New Features in C# 6.0 – Auto-Property Initializers

Initializing a property is a repetitive task, and it cannot be done on the same line as the declaration the way it can for fields. For example, we can write:

```
public class Person
{
    private string m_Name = "Default Name";
    public string Name { get; set; }

    public Person()
    {
        Name = m_Name;
    }
}
```

As we can see, the property can be initialized only in the constructor, unlike the field, which can be initialized on the same line where it is declared. C# 6.0 introduces auto-property initializers, allowing a property to be initialized like a field. The following code snippet shows an auto-property initializer:

```
public class Person
{
    static string m_Name = "Default Name";
    static string Name { get; set; } = m_Name;
}
```

## New Features in C# 6.0 – Null-Conditional Operator

This is a blog post series about new features coming in the next version of C# 6.0. The first post is about the null-conditional operator. The NullReferenceException is a nightmare for any developer, especially for developers without much experience. Almost every object must be checked against null before we call one of its members. For example, assume we have the following code sample:

```
class Record
{
    public Person Person { get; set; }
    public Activity Activity { get; set; }
}

public static void PrintReport(Record rec)
{
    string str = "";
    if (rec != null && rec.Person != null && rec.Activity != null)
    {
        str = string.Format("Record for {0} {1} took {2} sec.",
            rec.Person.FirstName ?? "", rec.Person.SurName ?? "",
            rec.Activity.Duration);
        Console.WriteLine(str);
    }
    return;
}
```

We have to be sure that all of the objects are non-null; otherwise we get a NullReferenceException. The next version of C# provides the null-conditional operator, which reduces the code significantly. So, in the next version of C# we can write the Print method like the following without fear of a NullReferenceException.
```
public static void PrintReport(Record rec)
{
    var str = string.Format("Record for {0} {1} took {2} sec.",
        rec?.Person?.FirstName ?? "", rec?.Person?.SurName ?? "",
        rec?.Activity?.Duration);
    Console.WriteLine(str);
    return;
}
```

As we can see, the '?.' operator is a very handy way to reduce the number of if statements in the code. The null-conditional operator is even more interesting when used in combination with the ?? null-coalescing operator. For example:

```
string name = records?[0].Person?.Name ?? "n/a";
```

The code listing above checks whether the records array is non-null (note that an empty array would still throw on the index access), then checks whether the Person object is non-null. Finally, the null-coalescing operator (??) substitutes the default string "n/a" when the Name property of the Person object is null. Without these operators we would normally need to check several expressions against null. Happy programming.

## Function optimization with Genetic Algorithm by using GPdotNET

Content

1. Introduction
2. Analytic function optimization module in GPdotNET
3. Examples of function optimizations
4. C# implementation behind the GPdotNET optimization module

Introduction

GPdotNET is an artificial intelligence tool for applying Genetic Programming (GP) and Genetic Algorithms (GA) to the modeling and optimization of various engineering problems. It is a .NET (Mono) application written in the C# programming language which can run on both Windows and Linux based OS, or any OS which can run the Mono framework. GPdotNET is also very easy to use: even if you have no deep knowledge of GP and GA, you can apply these methods to find solutions. The project can be used for modeling any kind of engineering process which can be described with discrete data. It can also be used in education when teaching students about evolutionary methods, mainly GP and GA. GPdotNET is an open source project hosted at http://gpdotnet.codeplex.com. With the release of GPdotNET v2 it is also possible to find the optimum of any analytic function regardless of the number of independent variables.
For example, you can find the optimum value of an analytically defined function with 2, 5, 10 or 100 independent variables. With classic methods, optimization of a function of 3 or more independent variables is very difficult and sometimes impossible, and it is also very hard to find the optimum of relatively complex functions regardless of the number of independent variables. Because GPdotNET is based on a Genetic Algorithm, we can find an approximate optimum of any function, without limitations on the number of independent variables or on the complexity of the definition. This blog post gives a detailed description of how to use GPdotNET to optimize a function. It also covers the C# implementation behind the optimization process, showing the representation of a chromosome with real numbers, as well as the fitness calculation, which is based on a Genetic Programming tree expression. Several real-world optimization problems solved with GPdotNET will also be presented.

# Analytic Function Optimization Module in GPdotNET

When GPdotNET is opened you can choose several predefined and calculated models from various problem domains, as well as create a New model, among other options. By choosing New model, a new dialog box appears, like the picture below. By choosing Optimization of Analytic Function (see picture above) and pressing the OK button, GPdotNET prepares the model for optimization and opens 3 tab pages:

1. Analytic function,
2. Settings and
3. Optimize Model.

## Analytic function

Using the "Analytic function" tab you can define the expression of a function. More information about how to define a mathematical expression of an analytic function can be found in this blog post. Using the "Analytic definition tool" at the bottom of the page, it is possible to define an analytic expression. The expression tree builder generates the function as a Genetic Programming expression tree, because GPdotNET fully implements both methods.
Sharing features of Genetic Programming in a Genetic Algorithm based optimization is unique, and it is implemented only in GPdotNET. When the process of defining the function is finished, press the Finish button in order to proceed. The Finish button applies all changes to the Optimization Model tab, so if you have made some changes in the function definition, pressing Finish sends the changes to the optimization tab. Defining the expression of a function is relatively simple, but it is still not a natural way of defining a function, and it will be changed in the future. For example, in picture 2 you can see the expression tree which represents $f(x,y)=x\sin(4x)+1.1\,y\sin(2y)$.

## Setting GA parameters

The second step in the optimization is setting the Genetic Algorithm parameters which will be used in the optimization process. Open the Settings tab and set the main GA parameters, see pic. 3. To successfully apply GA to the optimization, it is necessary to define:

1. the population size,
2. the probabilities of the genetic operators and
3. the selection method.

These parameters are general for all GA and GP models. More information about the parameters can be found at http://bhrnjica.net/gpdotnet.

## Optimize model (running optimization)

When the GA parameters are defined, we can start the optimization by selecting the Optimization model tab. Before the run, we have to define constraints for each independent variable. This is the only limitation we have to define in order to start the optimization. The picture below shows how to define constraints in 3 steps:

1. select a row by left mouse click,
2. enter the min and max values in the text boxes,
3. press the Update button.

Perform these 3 actions for each independent variable defined in the function. When the process of defining constraints is finished, it is time to run the calculation by pressing the Optimize button on the main toolbar (the green button). During the optimization process GPdotNET presents a nice animation of the fitness values, as well as showing the current best optimal value.
The picture above shows the result of the optimization process with GPdotNET. It can be seen that the optimal value for this sample is $f_{opt}(9.62)=-100.22$.

# Examples of function optimization

In this topic we are going to calculate the optimal value of several functions using GPdotNET. To verify that the optimal value is correct, or very close to the correct value, we will use Wolfram Alpha or another method.

### Function: x sin(4x) + 1.1 y sin(2y)

The GP expression tree looks like the following picture (left side). The optimal value was found (right picture above) in 0.054 min, at generation 363 of a total of 500 generations. The optimal value is f(8.66, 9.03) = -18.59. Here is the Wolfram Alpha calculation of the same function: http://www.wolframalpha.com/input/?i=min+x*sin%284*x%29%2B+1.1+*y*+sin%282+*y%29%2C+0%3Cx%3C10%2C0%3Cy%3C10

### Function: (x^2 + x) cos(x), -10 ≤ x ≤ 10

The GP expression tree looks like the following picture (left side). The optimal value was found in 0.125 min, at generation 10 of a total of 500 generations. The optimal value is f(9.62) = -100.22. Here is the Wolfram Alpha calculation of the same function: http://www.wolframalpha.com/input/?i=minimum+%28x%5E2%2Bx%29*cos%28x%29+over+%5B-10%2C10%5D

### Easom's function

fEaso(x1, x2) = -cos(x1)·cos(x2)·exp(-((x1-pi)^2 + (x2-pi)^2)), -100 ≤ x(i) ≤ 100, i = 1:2.

The GP expression tree looks like the following picture (left side). The optimal value was found in 0.061 min, at generation 477 of a total of 500 generations. The optimal value is f(3.14, 3.14) = -1, for x = y = 3.14. The function can be seen at this MatLab link.

# C# Implementation behind GPdotNET Optimization module

The GPdotNET optimization module is just a part incorporated into the GPdotNET engine. Specific to this module are the chromosome implementation and the fitness function. The chromosome implementation is based on floating point values instead of the classic binary representation. Such a chromosome contains an array of floating point values, and each array element represents an independent variable of the function.
If the function contains two independent variables (x, y), the chromosome implementation will contain an array with two floating point values. The constraints on the chromosome values are the constraints we defined during the settings of the optimization process. The following source code listing shows the implementation of the GANumChromosome class in GPdotNET:

```
public class GANumChromosome : IChromosome
{
    private double[] val = null;
    private float fitness = float.MinValue;
    //... rest of implementation
}
```

When the chromosome is generated, the array elements get values randomly generated between the min and max values defined for each variable. Here is the source code of the Generate method:

```
/// <summary>
/// Generate values for each represented variable
/// </summary>
public void Generate(int param = 0)
{
    if (val == null)
        val = new double[functionSet.GetNumVariables()];
    else if (val.Length != functionSet.GetNumVariables())
        val = new double[functionSet.GetNumVariables()];

    for (int i = 0; i < functionSet.GetNumVariables(); i++)
    {
        //randomly generate a value within the min/max constraints of variable i
    }
}
```

Mutation happens when a randomly chosen array element randomly changes its value. Here is the listing:

```
/// <summary>
/// Select an array element randomly and randomly change its value
/// </summary>
public void Mutate()
{
    //randomly select an array element
    //randomly generate a new value for the selected element within its constraints
}
```

Crossover is a little more complicated. It is implemented based on the book Practical Genetic Algorithms (see pages 56-59). Here is an implementation (beta must be drawn anew for each gene; the random source name shown is illustrative):

```
/// <summary>
/// For each array element of the two chromosomes past the crossover point,
/// blend the values using a randomly generated factor between 0 and 1.
/// </summary>
public void Crossover(IChromosome ch2)
{
    GANumChromosome p = (GANumChromosome)ch2;
    double beta;
    for (int i = crossoverPoint; i < functionSet.GetNumVariables(); i++)
    {
        beta = rnd.NextDouble(); //random blending factor in [0, 1] (rnd: the class's random source)
        double diff = val[i] - p.val[i];
        val[i] = val[i] - beta * diff;
        p.val[i] = p.val[i] + beta * diff;
    }
}
```

The fitness function for optimization is straightforward: it evaluates each chromosome against the tree expression. For minimization, the better chromosome is the one with the lower value.
For maximization, the better chromosome is the one with the higher fitness value. Here is an implementation of the optimization fitness function:

```
/// <summary>
/// Evaluates the function against the terminals
/// </summary>
public float Evaluate(IChromosome chromosome, IFunctionSet functionSet)
{
    GANumChromosome ch = chromosome as GANumChromosome;
    if (ch == null)
        return 0;
    else
    {
        //prepare terminals
        var term = Globals.gpterminals.SingleTrainingData;
        for (int i = 0; i < ch.val.Length; i++)
            term[i] = ch.val[i];

        var y = functionSet.Evaluate(_funToOptimize, -1);
        if (double.IsNaN(y) || double.IsInfinity(y))
            y = float.NaN;

        //save the output into the output variable
        term[term.Length - 1] = y;

        if (IsMinimize)
            y *= -1;

        return (float)y;
    }
}
```

# Summary

We have seen that the function optimization module within GPdotNET is a powerful optimization tool. It can find solutions very close to the optimum for very complex functions, regardless of the number of independent variables. The optimization module uses the Genetic Algorithm method with a floating point chromosome representation, as described in several books about GA. It is fast, simple, and can be used in education as well as for solving real problems. More info about GPdotNET can be found at http://bhrnjica.net/gpdotnet.
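The blend crossover described above can be illustrated independently of GPdotNET. The following is a minimal Python sketch of the same per-gene update rule from Practical Genetic Algorithms; the function name and structure are my own, not GPdotNET code:

```python
import random

def blend_crossover(p1, p2, crossover_point=0, rnd=random.random):
    """Blend crossover on real-valued chromosomes:
    for each gene past the crossover point,
        o1[i] = p1[i] - beta * (p1[i] - p2[i])
        o2[i] = p2[i] + beta * (p1[i] - p2[i])
    with beta drawn uniformly from [0, 1] anew for each gene."""
    o1, o2 = list(p1), list(p2)
    for i in range(crossover_point, len(p1)):
        beta = rnd()
        diff = p1[i] - p2[i]
        o1[i] = p1[i] - beta * diff
        o2[i] = p2[i] + beta * diff
    return o1, o2
```

Because each offspring gene is a convex combination of the two parent genes, the offspring always stay inside the variable constraints the parents satisfy, and the per-gene sum o1[i] + o2[i] equals p1[i] + p2[i].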
http://scholar.cnki.net/result.aspx?q=%27porous+media%27
Authors: Khaled Moaddy
Source: [J]. Advances in Difference Equations (IF 0.76), 2017, Vol. 2017 (1), Springer
Abstract: In this paper, we study the effect of time delay and the scaled Rayleigh number on chaotic convection in porous media with fractional order. The stability analysis for different fractional-order cases is investigated and the effective chaotic range of the fractional order is dete...

Authors: L. Henríquez-Vargas, E. Villaroel, J. Gutierrez ...
Source: [J]. Journal of the Brazilian Society of Mechanical Sciences and Engineering (IF 0.234), 2017, Vol. 39 (10), pp. 3965-3979, Springer
Abstract: In this work we present flow simulations in laminar and turbulent regime within a representative elementary volume of a simplified porous media by solving the Navier–Stokes equations and a low-Re turbulence $k$-$\epsilon$ model. Numerical solution was achieved with an i...

Authors: Zhi Dou, Xueyi Zhang, Zhou Chen ...
Source: [J]. Water (IF 0.973), 2019, Vol. 11 (6), DOAJ
Abstract: The cementation of porous media leads to the variation of the pore space and heterogeneity of the porous media. In this study, four porous media (PM1, PM2, PM3, and PM4) with the different radii of solid grains were generated to represent the different cementation degrees of the ...

Authors: Trishikhi Raychoudhury, Vikranth Kumar Surasani
Source: [J]. Journal of Earth System Science (IF 0.695), 2017, Vol. 126 (4), Springer
Abstract: Retention of surface-modified nanoscale zero-valent iron (NZVI) particles in the porous media near the point of injection has been reported in the recent studies. Retention of excess particles in porous media can alter the media properties. The main objectives of this study are...

Authors: Xing Wu, Yanghai Yu, Yanbin Tang
Source: [J]. Boundary Value Problems (IF 0.922), 2016, Vol. 2016 (1), pp. 1-14, Springer
Abstract: In this paper, we study the global well-posedness of the 3D incompressible critical dissipative porous media equation with small initial data in the Triebel-Lizorkin space ...

Authors: Wenxin Yu, Yigang He
Source: [J]. Boundary Value Problems (IF 0.922), 2014, Vol. 2014 (1), pp. 1-11, Springer
Abstract: In this paper, we prove the local well-posedness for the incompressible porous media equation in Triebel-Lizorkin spaces and obtain a blow-up criterion of smooth solutions. The main tools we use are the Fourier localization technique and Bony's paraproduct decomposi...

Authors: Lina Zhang, Shifeng Geng, Yuling Gao
Source: [J]. Boundary Value Problems (IF 0.922), 2019, Vol. 2019 (1), pp. 519-540, Springer
Abstract: In this paper, we consider convergence rates to solutions for the damped system of compressible adiabatic flow through porous media with boundary effect. Compared with the results obtained by Pan, the better convergence rates are obtained in this paper. Our approach ...

Authors: HongWei Zhou, YaHeng Zhang, AiMin Li ...
Source: [J]. Chinese Science Bulletin (IF 1.319), 2008, Vol. 53 (16), pp. 2438-2445, Springer
Abstract: Researches on the boundary shape of fluid flow in porous media play an important role in engineering practices, such as petroleum exploitation, nuclear waste disposal and groundwater contamination. In this paper, six types of artificial porous samples (emery jade) wi...

Authors: J. K. Arthur
Source: [J]. Journal of Applied Fluid Mechanics (IF 0.263), 2018, Vol. 11 (2), pp. 297-307, DOAJ
Abstract: One of the essential areas of the study of transport in porous medium is the flow phenomena at the onset of inertia. While this area has attracted considerable research interest, many fundamental questions remain. Such questions relate to things such as the nature of the multi...

Authors: Peyman Maghsoudi, Majid Siavashi
Source: [J]. Journal of Thermal Analysis and Calorimetry (IF 1.982), 2019, Vol. 135 (2), pp. 947-961, Springer
Abstract: Mixed convection of Cu–water nanofluid inside a two-sided lid-driven cavity filled with heterogeneous porous media is optimized. The horizontal walls are adiabatic and movable, and the vertical walls are exposed to constant hot and cold temperatures. Two-phase mixture model...
https://derwen.ai/docs/ptr/start/
# Getting Started

## Installation

To install from PyPi:

```
python3 -m pip install pytextrank
```

If you work directly from this Git repo, be sure to install the dependencies:

```
python3 -m pip install -r requirements.txt
```

## Sample Usage

To use pytextrank in its simplest form:

```
import spacy
import pytextrank

# example text
text = "Compatibility of systems of linear constraints over the set of natural numbers. Criteria of compatibility of a system of linear Diophantine equations, strict inequations, and nonstrict inequations are considered. Upper bounds for components of a minimal set of solutions and algorithms of construction of minimal generating sets of solutions for all types of systems are given. These criteria and the corresponding algorithms for constructing a minimal supporting set of solutions can be used in solving all the considered types systems and systems of mixed types."

# load a spaCy model, depending on language, scale, etc.
nlp = spacy.load("en_core_web_sm")

# add PyTextRank to the spaCy pipeline
nlp.add_pipe("textrank")

doc = nlp(text)

# examine the top-ranked phrases in the document
for phrase in doc._.phrases:
    print(phrase.text)
    print(phrase.rank, phrase.count)
    print(phrase.chunks)
```

## Hands-on Coding Tutorial

See the Tutorial notebooks for sample code and patterns to use when integrating pytextrank with other related libraries in Python.

Last update: 2021-06-29
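Under the hood, TextRank builds a graph over candidate lemmas and ranks the nodes with a PageRank-style power iteration. A minimal, self-contained sketch of that ranking step (illustrative only, not pytextrank's actual implementation; the toy graph is made up):

```python
def pagerank(graph, damping=0.85, iterations=50):
    """Rank nodes of a graph given as {node: [neighbors]}.
    Each node starts with uniform rank; at every step a node receives a
    damped share of each neighbor's rank, split across that neighbor's edges."""
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {}
        for n in nodes:
            incoming = sum(rank[m] / len(graph[m]) for m in graph if n in graph[m])
            new_rank[n] = (1 - damping) / len(nodes) + damping * incoming
        rank = new_rank
    return rank

# toy "lemma graph": words linked when they co-occur within a window
graph = {
    "linear": ["constraints", "diophantine", "equations"],
    "constraints": ["linear", "natural"],
    "diophantine": ["linear", "equations"],
    "equations": ["linear", "diophantine"],
    "natural": ["constraints"],
}
ranks = pagerank(graph)
```

Well-connected lemmas accumulate the highest rank, which is why pytextrank surfaces phrases built around them first.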
https://ccssmathanswers.com/bridges-in-mathematics-grade-4-home-connections-unit-4-module-4-answer-key/
# Bridges in Mathematics Grade 4 Home Connections Unit 4 Module 4 Answer Key

Practicing the Bridges in Mathematics Grade 4 Home Connections Answer Key Unit 4 Module 4 will help students analyze their level of preparation.

## Bridges in Mathematics Grade 4 Home Connections Answer Key Unit 4 Module 4

Bridges in Mathematics Grade 4 Home Connections Unit 4 Module 4 Session 1 Answer Key

Unit 4 Review 1

Question 1. Solve the addition problems below. Use the standard algorithm. The first one is done for you.
Adding 459 and 144 gives 603.
Adding 387 and 414 gives 801.
Adding 609 and 734 gives 1,343.
Adding 1,589 and 3,437 gives 5,026.

Question 2. Solve the subtraction problems below. Use the standard algorithm. The first one is done for you.
Subtracting 547 from 833 gives 286.
Subtracting 548 from 745 gives 197.
Subtracting 237 from 905 gives 668.
Subtracting 1,346 from 3,581 gives 2,235.

Question 3. Complete each equation by writing a number in base ten numerals.
ex: 17,508 = 10,000 + 7,000 + 500 + 8
a. 20,456 = 20,000 + 400 + 50 + 6
b. 32,112 = 30,000 + 2,000 + 100 + 10 + 2
c. 7,046 = 7,000 + 40 + 6
d. 96,035 = 90,000 + 6,000 + 30 + 5
e. 63,007 = 60,000 + 3,000 + 7
f. 13,855 = 10,000 + 3,000 + 800 + 50 + 5
g. 50,305 = 50,000 + 300 + 5

Question 4. Fill in the missing number in each equation.
ex: 40,000 + 6,000 + 50 + 8 = 46,058
a. 41,092 = 40,000 + 1,000 + 90 + 2
b. 50,000 + 1,000 + 300 + 50 + 4 = 51,354
c. 17,035 = 10,000 + 7,000 + 30 + 5
d. 96,035 = 90,000 + 6,000 + 30 + 5
e. 20,000 + 400 + 50 + 6 = 20,456
f. 2,000 + 500 + 60 + 7 = 2,567
g. 20,408 = 20,000 + 400 + 8

Solve the problems below. Use the standard algorithms for addition and subtraction. Show all your work.

Question 5. In December, the cafeteria served 972 breakfast sandwiches. During the first week in January, they served 486 breakfast sandwiches. During the second week of January they served 538 breakfast sandwiches. How many more breakfast sandwiches did they serve in the first two weeks of January than during the whole month of December?
First two weeks of January: 486 + 538 = 1,024 sandwiches.
1,024 − 972 = 52, so they served 52 more breakfast sandwiches.

Question 6. There were 6,742 bags of potato chips stored in the cafeteria. They served 781 of them at lunch and 89 more of them as snacks for the students in after-care. How many bags of potato chips are left?
Bags used: 781 + 89 = 870.
6,742 − 870 = 5,872, so 5,872 bags of potato chips are left.

Question 7. At the basketball game last night, the home team was losing by 48 points at halftime, so fans started to leave. There were 45,862 people at the game when it started and 17,946 left at halftime. Then another 13,892 people left before the last quarter. How many people were left by the end of the game?
45,862 − 17,946 = 27,916.
27,916 − 13,892 = 14,024, so 14,024 people were left by the end of the game.

Bridges in Mathematics Grade 4 Home Connections Unit 4 Module 4 Session 2 Answer Key

Unit 4 Review 2

The table below shows the populations of Austin, Chicago, New York City, Philadelphia, and San Francisco in the year 2010.

Question 1. Use the symbol >, =, or < to compare the populations of New York City and Philadelphia.
New York City: 8,175,133. Philadelphia: 1,526,006.
8,175,133 > 1,526,006, so the population of New York City > the population of Philadelphia.

Question 2. Write the population of Chicago in words.
The population of Chicago is 2,695,598: two million, six hundred ninety-five thousand, five hundred ninety-eight.

Question 3. The city of Denver, Colorado, had a population of six hundred thousand, one hundred fifty-eight in the year 2010. Write the population of Denver in numbers.
The population of Denver in numbers is 600,158.

Question 4. Seattle had a population of 608,660 in the year 2010. Round Seattle's population to the nearest:
a. ten: 608,660
b. hundred: 608,700
c. thousand: 609,000
d. Fill in the bubble to show what 608,660 would be rounded to the nearest ten thousand: 610,000.

Question 5. How many hundreds are in 1,000? There are 10 hundreds in 1,000.
Question 6. How many hundreds are in 7,000? There are 70 hundreds in 7,000.
Question 7. How many hundreds are in 10,000? There are 100 hundreds in 10,000.
Question 8. How many thousands are in 38,000? There are 38 thousands in 38,000.
Question 9. How many ten thousands are in 200,000? There are 20 ten thousands in 200,000.
Question 10. How many hundred thousands are in 5,000,000? There are 50 hundred thousands in 5,000,000.

Question 11. Fill in the blank with the correct relational symbol: <, > or =.
a. 18 km ___ 20,000 meters. Since 1 km = 1,000 meters, 18 km = 18,000 meters, and 18,000 < 20,000, so 18 km < 20,000 meters.
b. 1,700 grams ___ 17 kg. Since 1 kg = 1,000 grams, 1,700 grams = 1.7 kg, so 1,700 grams < 17 kg.
c. 13½ liters ___ 13,500 milliliters. Since 1 liter = 1,000 milliliters, 13½ liters = 13,500 milliliters, so 13½ liters = 13,500 milliliters.

Question 12. During his practice this month, Jeff ran one 10K in 1:01:49 and another in 57:53. How much faster was his second 10K practice? Show all your work. (Hint: Use an open number line to model and solve this problem.)
Jeff's first time was 1:01:49, which is 61 minutes 49 seconds; his second was 57 minutes 53 seconds.
61:49 − 57:53 = 3:56, so his second 10K practice was 3 minutes 56 seconds faster.

Question 13. Alex bought a 6-pack of sports drink bottles that each had a volume of 350 ml.
a. If Alex drank 3 of them, how many milliliters did she drink? Show your work.
3 × 350 = 1,050 milliliters.
b. How many more milliliters would Alex need to drink to have 2 liters? Show your work.
2 liters = 2,000 ml, and 2,000 − 1,050 = 950, so Alex would need to drink 950 more milliliters.
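The unit conversions in Question 11 can be checked mechanically. The sketch below (plain Python, not part of the answer key) verifies the three comparisons; note in particular that 13½ liters is exactly 13,500 milliliters.

```python
# Quick arithmetic checks for the metric comparisons in Question 11
M_PER_KM, G_PER_KG, ML_PER_L = 1000, 1000, 1000

print(18 * M_PER_KM < 20_000)      # 18 km vs 20,000 m
print(1_700 < 17 * G_PER_KG)       # 1,700 g vs 17 kg
print(13.5 * ML_PER_L == 13_500)   # 13 1/2 L vs 13,500 mL
```

All three print `True`: the first two quantities are strictly smaller, and the third pair is exactly equal.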
https://math.stackexchange.com/questions/3904456/cokernel-of-group-homomorphism
# Cokernel of group homomorphism

We know $H=\{ (1),(12) \}$ is a subgroup of $S_3$. Consider the inclusion $\varphi: H \hookrightarrow S_3$; this is clearly a group homomorphism. Prove that $\operatorname{coker}\varphi$ is trivial. I can't understand how to apply the universal property of the cokernel to this homomorphism $\varphi$.

• One definition of the cokernel of a group homomorphism is the quotient of the codomain of the map by the normal closure of the image. The normal closure of the subgroup $H$ of $S_3$ is the whole of $S_3$, so the cokernel is trivial. Nov 12, 2020 at 13:18

The cokernel of $\varphi$ is, by definition, initial among all morphisms $\psi: S_3\to G$ such that $\psi\circ \varphi$ is the trivial morphism. So you need to prove that if $\psi\circ \varphi$ is trivial then $\psi$ is trivial. Note that $\psi\circ \varphi$ is trivial if and only if $H\leq \ker(\psi)$, so it suffices to prove that any normal subgroup of $S_3$ containing $H$ must be $S_3$ itself.

• I don't understand your question, can you clarify? Nov 12, 2020 at 16:54
• We want to prove that $\psi$ is trivial, so it suffices to prove that its kernel is equal to $S_3$. Nov 12, 2020 at 17:00
• We don't conclude that $im(\varphi)=S_3$. We want to prove that $\ker(\psi)=S_3$ if $H\leq \ker(\psi)$, but of course $H\neq \ker(\psi)$ since $H$ is not normal. Nov 12, 2020 at 17:30

Given a group homomorphism $f : A \to B$, it is easy to see that $\operatorname{coker}(f) = B / [f(A)]$, where $[ - ]$ denotes the normal closure of a subgroup. In the language of category theory, more formally, $\operatorname{coker}(f)$ is the quotient homomorphism $p : B \to B / [f(A)]$. To see this, let $q : B \to C$ be any homomorphism such that $q \circ f = 0$. This means $f(A) \subset \ker(q)$, thus $[f(A)] \subset \ker(q)$ because kernels are always normal subgroups. Therefore $q$ can be written as $q = \bar q \circ p$ with a unique homomorphism $\bar q : B / [f(A)] \to C$.
Now let us look at $[H]$ in $S_3$. The only non-trivial proper normal subgroup of $S_3$ is the alternating group $A_3$, but certainly $H \not\subset A_3$. Thus $[H] = S_3$, which proves your claim.

• @batuhan, here the normal closure of a subgroup $H$ means the smallest normal subgroup $N$ containing $H$. – MAS Nov 12, 2020 at 14:17
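The key fact, that the normal closure of $H$ in $S_3$ is all of $S_3$, can also be confirmed by brute force. The sketch below (plain Python, not part of the original answer) represents permutations of $\{0,1,2\}$ as tuples and closes $H$ under conjugation and composition:

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

S3 = set(permutations(range(3)))
H = {(0, 1, 2), (1, 0, 2)}  # the identity and the transposition (12)

# Normal closure of H: smallest set containing H that is closed
# under conjugation by elements of S3 and under composition.
closure = set(H)
changed = True
while changed:
    changed = False
    for g in S3:
        for h in list(closure):
            c = compose(compose(g, h), inverse(g))  # g h g^-1
            if c not in closure:
                closure.add(c)
                changed = True
    for a in list(closure):
        for b in list(closure):
            if compose(a, b) not in closure:
                closure.add(compose(a, b))
                changed = True

print(len(closure))  # 6
```

It prints 6: the closure is all of $S_3$, so $S_3 / [H]$ (and hence the cokernel) is trivial.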
https://www.vedantu.com/question-answer/weight-of-12-mangoes-is-24kg-what-is-the-weight-class-7-maths-cbse-5fd741fc147a833c29ece1d7
# Weight of 12 mangoes is 2.4 kg. What is the weight of 8 mangoes?

Verified 175.5k+ views

Hint: For this type of question, first find the weight of one of the given items. Here the item is a mango: find the weight of one mango, then multiply that weight by the required number of mangoes.

The objective of the problem is to find the weight of 8 mangoes. We are given that the weight of twelve mangoes is 2.4 kg. First, we find the weight of one mango, because multiplying that weight by eight then gives the weight of eight mangoes.

Let the weight of one mango be x.

Number of mangoes | Weight of mangoes (kg)
12 | 2.4
1 | x

To find x we use cross multiplication. We get
$12 \times x = 1 \times 2.4$
$\Rightarrow 12x = 2.4 \Rightarrow x = \dfrac{2.4}{12}$

Thus the weight of one mango is $\dfrac{2.4}{12}$ kg. To find the weight of 8 mangoes, we multiply the weight of one mango by eight.

Weight of eight mangoes $= 8 \times \dfrac{2.4}{12}$

Multiplying numerator and denominator by 10 to remove the decimal point, we get
$8 \times \dfrac{24}{120} = \dfrac{24}{15} = 1.6$

Thus, the weight of eight mangoes is 1.6 kg.

Note: We use cross multiplication to compare fractions. This method is useful when working with larger fractions, for reducing them to smaller ones, and for determining the value of a variable.
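The unit-rate reasoning above is easy to verify exactly. The sketch below (plain Python, not part of the original solution) uses `fractions.Fraction` to avoid floating-point rounding:

```python
from fractions import Fraction

# Unit-rate approach: weight of one mango, then scale to eight.
weight_of_12 = Fraction(24, 10)   # 2.4 kg, kept exact as 12/5
one_mango = weight_of_12 / 12     # 1/5 kg per mango
eight_mangoes = 8 * one_mango     # 8/5 kg
print(float(eight_mangoes))       # 1.6
```

Keeping the arithmetic in exact fractions mirrors the hand calculation $8 \times \dfrac{24}{120} = \dfrac{24}{15}$.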
https://www.semanticscholar.org/paper/Unified-description-of-quark-and-lepton-mixing-on-a-Nishiura-Koide/ba556b1f5db1f1563b635f56c16b9dcb31ae5bea
# Unified description of quark and lepton mixing matrices based on a yukawaon model

@article{Nishiura2011UnifiedDO,
  title={Unified description of quark and lepton mixing matrices based on a yukawaon model},
  author={Hiroyuki Nishiura and Yoshio Koide},
  journal={Physical Review D},
  year={2011},
  volume={83},
  pages={035010}
}

• Published 5 November 2010 • Physics • Physical Review D

Based on a supersymmetric Yukawaon model with O(3) family symmetry, possible forms of quark and lepton mixing matrices are systematically investigated under a condition that the up-quark mass matrix form leads to the observed nearly tribimaximal mixing in the lepton sector. Although the previous model could not provide a good fitting of the observed quark mixing, the present model can give a reasonably good fitting of not only the lepton mixing but also the quark mixing by using a different origin of…

9 Citations

## Figures and Tables from this paper

Quark and lepton mass matrices described by charged lepton masses • Physics • 2015
Recently, we proposed a unified mass matrix model for quarks and leptons, in which mass ratios and mixings of the quarks and neutrinos are described by using only the observed charged lepton mass

A modified version of the Koide formula from flavor nonets in a scalar potential model and in a Yukawaon model • Physics • Nuclear Physics B • 2021
We present a modified version of the Koide formula from a scalar potential model or from a Yukawaon model, based on scalar fields set up in a nonet representation of the SU(3) flavor symmetry in the

Large $\theta_{13}^{\nu}$ and unified description of quark and lepton mixing matrices • Physics • 2012
We present a revised version of the so-called "yukawaon model", which was proposed for the purpose of a unified description of the lepton mixing matrix UPMNS and the quark mixing matrix VCKM. It is

Yukawaon model with U(3)×O(3) family symmetries • Physics
A quark and lepton mass matrix model with family symmetries U(3)×O(3) is investigated on the basis of the so-called Yukawaon model. In the present model, quarks and leptons are assigned to (l, ec,

Flavon VEV Scales in U(3)×U(3)′ Model • Physics • 2017
We have recently proposed a quark and lepton mass matrix model based on U(3)×U(3)′ family symmetry as the so-called Yukawaon model, in which the U(3) symmetry is broken by VEVs of flavons (Φf)iα

Origin of Hierarchical Structures of Quark and Lepton Mass Matrices • Physics • 2015
It is shown that the so-called "Yukawaon" model can give a unified description of masses, mixing and $CP$ violation parameters of quarks and leptons without using any hierarchical (family

Yukawaon Model with U(3)×S_3 Family Symmetries • Physics • 2012
A new yukawaon model is investigated under a family symmetry U(3)×S_3. In this model, all vacuum expectation values (VEVs) of the yukawaons, $\langle Y_f \rangle$, are described in terms of a

Yukawaon Model with Anomaly Free Set of Quarks and Leptons in a U(3) Family Symmetry • Physics • 2013
In the so-called "yukawaon" model, the (effective) Yukawa coupling constants $Y_f^{eff}$ are given by vacuum expectation values (VEVs) of scalars $Y_f$ (yukawaons) with $3\times 3$ components. So

Neutrino mass matrix model with a bilinear form • Physics • 2013
A neutrino mass matrix model with a bilinear form $M_{\nu}=k_{\nu}\left( M_D M_R^{-1} M_D^T \right)^2$ is proposed within the framework of the so-called yukawaon model, which
https://analytixon.com/2022/10/06/if-you-did-not-already-know-1851/
Single Index Latent Variable Models (SILVar)
A semi-parametric, non-linear regression model in the presence of latent variables is introduced. These latent variables can correspond to unmodeled phenomena or unmeasured agents in a complex networked system. This new formulation allows joint estimation of certain non-linearities in the system, the direct interactions between measured variables, and the effects of unmodeled elements on the observed system. The particular form of the model is justified, and learning is posed as a regularized maximum likelihood estimation. This leads to classes of structured convex optimization problems with a 'sparse plus low-rank' flavor. Relations between the proposed model and several common model paradigms, such as those of Robust Principal Component Analysis (PCA) and Vector Autoregression (VAR), are established. Particularly in the VAR setting, the low-rank contributions can come from broad trends exhibited in the time series. Details of the algorithm for learning the model are presented. Experiments demonstrate the performance of the model and the estimation algorithm on simulated and real data. …

Django
Django is a high-level Python Web framework that encourages rapid development and clean, pragmatic design. Built by experienced developers, it takes care of much of the hassle of Web development, so you can focus on writing your app without needing to reinvent the wheel. It's free and open source. …

Adaptive Massively Parallel Computation (AMPC)
We introduce the Adaptive Massively Parallel Computation (AMPC) model, which is an extension of the Massively Parallel Computation (MPC) model. At a high level, the AMPC model strengthens the MPC model by storing all messages sent within a round in a distributed data store. In the following round, all machines are provided with random read access to the data store, subject to the same constraints on the total amount of communication as in the MPC model. Our model is inspired by the previous empirical studies of distributed graph algorithms using MapReduce and a distributed hash table service. This extension allows us to give new graph algorithms with much lower round complexities compared to the best known solutions in the MPC model. In particular, in the AMPC model we show how to solve maximal independent set in $O(1)$ rounds and connectivity/minimum spanning tree in $O(\log\log_{m/n} n)$ rounds, both using $O(n^\delta)$ space per machine for constant $\delta < 1$. In the same memory regime for MPC, the best known algorithms for these problems require polylog $n$ rounds. Our results imply that the 2-Cycle conjecture, which is widely believed to hold in the MPC model, does not hold in the AMPC model. …
https://maslinandco.com/10266388
# Calculate: -3t > -6

## Expression: $-3t > -6$

Divide both sides of the inequality by $-3$ and flip the inequality sign:

$-3t\div\left( -3 \right) < -6\div\left( -3 \right)$

Since $-3\div\left( -3 \right)=1$, the left-hand side simplifies to $t$:

$t < -6\div\left( -3 \right)$

Dividing two negatives gives a positive, $\left( - \right)\div\left( - \right)=\left( + \right)$:

$t < 6\div3$

Calculate the quotient:

$t < 2$, that is, $t \in \langle-\infty, 2\rangle$
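The sign flip in the first step is the part most often gotten wrong, and it can be spot-checked numerically. The sketch below (plain Python, not part of the worked solution) confirms that $-3t > -6$ holds for exactly the same values as $t < 2$:

```python
# For several sample values of t, "-3t > -6" holds exactly
# when "t < 2" holds, confirming the flipped inequality sign.
for t in (-2, 0, 1.9, 2, 2.1, 5):
    assert (-3 * t > -6) == (t < 2)
print("all checks passed")
```

Note the boundary value t = 2 fails both sides, as expected for a strict inequality.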
https://en.wikipedia.org/wiki/Cryptanalysis_of_the_Lorenz_cipher
# Cryptanalysis of the Lorenz cipher

Timeline of key events

September 1939: War breaks out in Europe.
Second half of 1940: First non-Morse transmissions intercepted.
June 1941: First experimental SZ40 Tunny link started with alphabetic indicator.
August 1941: Two long messages in depth yielded 3700 characters of key.
January 1942: Tunny diagnosed from key.
July 1942: Turingery method of wheel breaking; Testery established; first reading of up-to-date traffic.
October 1942: First two of eventual 26 links started with QEP indicator system.[1]
November 1942: "1 + 2 break in" invented by Bill Tutte.
February 1943: More complex SZ42A introduced.
May 1943: Heath Robinson delivered.
June 1943: Newmanry founded.
December 1943: Colossus I working at Dollis Hill prior to delivery to Bletchley Park.[2]
February 1944: First use of Colossus I for a real job.
March 1944: Four Colossi (Mark 2) ordered.
April 1944: Order for further Colossi increased to 12.
June 1944
August 1944: Cam settings on all Lorenz wheels changed daily.
May 1945

Cryptanalysis of the Lorenz cipher was the process that enabled the British to read high-level German army messages during World War II. The British Government Code and Cypher School (GC&CS) at Bletchley Park decrypted many communications between the Oberkommando der Wehrmacht (OKW, German High Command) in Berlin and their army commands throughout occupied Europe, some of which were signed "Adolf Hitler, Führer".[3] These were intercepted non-Morse radio transmissions that had been enciphered by the Lorenz SZ teleprinter rotor stream cipher attachments. Decrypts of this traffic became an important source of "Ultra" intelligence, which contributed significantly to Allied victory.[4]

For its high-level secret messages, the German armed services enciphered each character using various online Geheimschreiber (secret writer) stream cipher machines at both ends of a telegraph link using the 5-bit International Telegraphy Alphabet No. 2 (ITA2).
These machines were subsequently discovered to be the Lorenz SZ (SZ for Schlüssel-Zusatz, meaning "cipher attachment") for the army,[5] the Siemens and Halske T52 for the air force and the Siemens T43, which was little used and never broken by the Allies.[6] Bletchley Park decrypts of messages enciphered with the Enigma machines revealed that the Germans called one of their wireless teleprinter transmission systems "Sägefisch" (sawfish),[7] which led British cryptographers to refer to encrypted German radiotelegraphic traffic as "Fish".[5] "Tunny" (tunafish) was the name given to the first non-Morse link, and it was subsequently used for the cipher machines and their traffic.[8] As with the entirely separate cryptanalysis of the Enigma, it was German operational shortcomings that allowed the initial diagnosis of the system, and a way into decryption.[9] Unlike Enigma, no physical machine reached allied hands until the very end of the war in Europe, long after wholesale decryption had been established.[10][11] The problems of decrypting Tunny messages led to the development of "Colossus", the world's first electronic, programmable digital computer, ten of which were in use by the end of the war,[12][13] by which time some 90% of selected Tunny messages were being decrypted at Bletchley Park.[14] Albert W. Small, a cryptanalyst from the US Army Signal Corps who was seconded to Bletchley Park and worked on Tunny, said in his December 1944 report back to Arlington Hall that: Daily solutions of Fish messages at GC&CS reflect a background of British mathematical genius, superb engineering ability, and solid common sense. Each of these has been a necessary factor. Each could have been overemphasised or underemphasised to the detriment of the solutions; a remarkable fact is that the fusion of the elements has been apparently in perfect proportion. 
The result is an outstanding contribution to cryptanalytic science.[15]

## The German Tunny machines

The Lorenz SZ machines had 12 wheels, each with a different number of cams (or "pins"):

OKW/Chi wheel name:    A   B   C   D   E   F    G    H   I   J   K   L
BP wheel name[16]:     ψ1  ψ2  ψ3  ψ4  ψ5  μ37  μ61  χ1  χ2  χ3  χ4  χ5
Number of cams (pins): 43  47  51  53  59  37   61   41  31  29  26  23

The Lorenz SZ cipher attachments implemented a Vernam stream cipher, using a complex array of twelve wheels that delivered what should have been a cryptographically secure pseudorandom number as a key stream. The key stream was combined with the plaintext to produce the ciphertext at the transmitting end using the exclusive or (XOR) function. At the receiving end, an identically configured machine produced the same key stream, which was combined with the ciphertext to produce the plaintext, i.e. the system implemented a symmetric-key algorithm.

The key stream was generated by ten of the twelve wheels. This was a product of XOR-ing the 5-bit character generated by the right hand five wheels, the chi (χ) wheels, and the left hand five, the psi (ψ) wheels. The chi wheels always moved on one position for every incoming ciphertext character, but the psi wheels did not.

Cams on wheels 9 and 10 showing their raised (active) and lowered (inactive) positions. An active cam reversed the value of a bit (x and •).

The central two mu (μ) or "motor" wheels determined whether or not the psi wheels rotated with a new character.[17][18] After each letter was enciphered either all five psi wheels moved on, or they remained still and the same letter of psi-key was used again.
Like the chi wheels, the μ61 wheel moved on after each character. When μ61 had the cam in the active position and so generated x (before moving), μ37 moved on once; when the cam was in the inactive position (before moving), μ37 and the psi wheels stayed still.[19] On all but the earliest machines, there was an additional factor that played into the moving on or not of the psi wheels. These were of four different types and were called "Limitations" at Bletchley Park. All involved some aspect of the previous positions of the machine's wheels.[20]

The numbers of cams on the set of twelve wheels of the SZ42 machines totalled 501 and were co-prime with each other, giving an extremely long period before the key sequence repeated. Each cam could either be in a raised position, in which case it contributed x to the logic of the system, reversing the value of a bit, or in the lowered position, in which case it generated •.[10] The total possible number of patterns of raised cams was 2^501, which is an astronomically large number.[21] In practice, however, about half of the cams on each wheel were in the raised position. Later, the Germans realized that if the number of raised cams was not very close to 50% there would be runs of xs and •s, a cryptographic weakness.[22][23]

The process of working out which of the 501 cams were in the raised position was called "wheel breaking" at Bletchley Park.[24] Deriving the start positions of the wheels for a particular transmission was termed "wheel setting" or simply "setting". The fact that the psi wheels all moved together, but not with every input character, was a major weakness of the machines that contributed to British cryptanalytical success.
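As a quick check of the figures above, the twelve cam counts can be verified to total 501 and to be pairwise co-prime (a minimal sketch in Python, not an emulation of the machine):

```python
from math import gcd
from itertools import combinations

# Cam counts of the twelve SZ42 wheels: psi1-5, mu37, mu61, chi1-5
cams = [43, 47, 51, 53, 59, 37, 61, 41, 31, 29, 26, 23]

# The twelve counts total 501...
assert sum(cams) == 501
# ...and are pairwise co-prime, so the combined cam-pattern period
# is the product of all twelve wheel sizes
assert all(gcd(a, b) == 1 for a, b in combinations(cams, 2))
```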
A Lorenz SZ42 cipher machine with its covers removed, at The National Museum of Computing on Bletchley Park

## Secure telegraphy

Electro-mechanical telegraphy was developed in the 1830s and 1840s, well before telephony, and operated worldwide by the time of the Second World War. An extensive system of cables linked sites within and between countries, with a standard voltage of −80 V indicating a "mark" and +80 V indicating a "space".[25] Where cable transmission became impracticable or inconvenient, such as for mobile German Army units, radio transmission was used.

Teleprinters at each end of the circuit consisted of a keyboard and a printing mechanism, and very often a five-hole perforated paper-tape reading and punching mechanism. When used online, pressing an alphabet key at the transmitting end caused the relevant character to print at the receiving end. Commonly, however, the communication system involved the transmitting operator preparing a set of messages offline by punching them onto paper tape, and then going online only for the transmission of the messages recorded on the tape. The system would typically send some ten characters per second, and so occupy the line or the radio channel for a shorter period of time than for online typing.

The characters of the message were represented by the codes of the International Telegraphy Alphabet No. 2 (ITA2). The transmission medium, either wire or radio, used asynchronous serial communication with each character signalled by a start (space) impulse, 5 data impulses and 1½ stop (mark) impulses. At Bletchley Park mark impulses were signified by x ("cross") and space impulses by • ("dot").[26] For example, the letter "H" would be coded as ••x•x.

Binary teleprinter code (ITA2) as used at Bletchley Park,[27] arranged in reflection order whereby each row differs from its neighbours by only one bit.
| Pattern of impulses (mark = x, space = •) | Binary | Letter shift | Figure shift | BP 'shiftless' interpretation |
|---|---|---|---|---|
| ••.••• | 00000 | null | null | / |
| ••.x•• | 00100 | space | space | 9 |
| ••.x•x | 00101 | H | # | H |
| ••.••x | 00001 | T | 5 | T |
| ••.•xx | 00011 | O | 9 | O |
| ••.xxx | 00111 | M | . | M |
| ••.xx• | 00110 | N | , | N |
| ••.•x• | 00010 | CR | CR | 3 |
| •x.•x• | 01010 | R | 4 | R |
| •x.xx• | 01110 | C | : | C |
| •x.xxx | 01111 | V | ; | V |
| •x.•xx | 01011 | G | & | G |
| •x.••x | 01001 | L | ) | L |
| •x.x•x | 01101 | P | 0 | P |
| •x.x•• | 01100 | I | 8 | I |
| •x.••• | 01000 | LF | LF | 4 |
| xx.••• | 11000 | A | - | A |
| xx.x•• | 11100 | U | 7 | U |
| xx.x•x | 11101 | Q | 1 | Q |
| xx.••x | 11001 | W | 2 | W |
| xx.•xx | 11011 | FIGS | | + or 5 |
| xx.xxx | 11111 | LTRS | | - or 8 |
| xx.xx• | 11110 | K | ( | K |
| xx.•x• | 11010 | J | Bell | J |
| x•.•x• | 10010 | D | WRU | D |
| x•.xx• | 10110 | F | ! | F |
| x•.xxx | 10111 | X | / | X |
| x•.•xx | 10011 | B | ? | B |
| x•.••x | 10001 | Z | " | Z |
| x•.x•x | 10101 | Y | 6 | Y |
| x•.x•• | 10100 | S | ' | S |
| x•.••• | 10000 | E | 3 | E |

The figure shift (FIGS) and letter shift (LTRS) characters determined how the receiving end interpreted the string of characters up to the next shift character. Because of the danger of a shift character being corrupted, some operators would type a pair of shift characters when changing from letters to numbers or vice versa. So they would type 55M88 to represent a full stop.[28] Such doubling of characters was very helpful for the statistical cryptanalysis used at Bletchley Park. After encipherment, shift characters had no special meaning.

The speed of transmission of a radio-telegraph message was three or four times that of Morse code, and a human listener could not interpret it. A standard teleprinter, however, would produce the text of the message. The Lorenz cipher attachment changed the plaintext of the message into ciphertext that was uninterpretable to those without an identical machine identically set up. This was the challenge faced by the Bletchley Park codebreakers.

## Interception

Intercepting Tunny transmissions presented substantial problems. As the transmitters were directional, most of the signals were quite weak at receivers in Britain.
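The ITA2 coding in the table above can be sketched in a few lines of Python; the partial lookup table below transcribes a handful of rows from that table, with x as mark (1) and • as space (0):

```python
# Partial ITA2 letter-shift table (binary column from the table above)
ITA2_LETTERS = {
    "H": "00101", "T": "00001", "O": "00011",
    "E": "10000", "A": "11000", "N": "00110",
}

def to_bp_notation(bits):
    """Render a 5-bit code in Bletchley Park notation: x = mark, • = space."""
    return "".join("x" if b == "1" else "•" for b in bits)

# The letter "H" is coded as ••x•x, as in the example in the text
assert to_bp_notation(ITA2_LETTERS["H"]) == "••x•x"
```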
Furthermore, there were some 25 different frequencies used for these transmissions, and the frequency would sometimes be changed part way through. After the initial discovery of the non-Morse signals in 1940, a radio intercept station called the Foreign Office Research and Development Establishment was set up on a hill at Ivy Farm at Knockholt in Kent, specifically to intercept this traffic.[29][30] The centre was headed by Harold Kenworthy, had 30 receiving sets and employed some 600 staff. It became fully operational early in 1943.

A length of tape, 12 millimetres (0.47 in) wide, produced by an undulator similar to those used during the Second World War for intercepted 'Tunny' wireless telegraphic traffic at Knockholt, for translation into ITA2 characters to be sent to Bletchley Park

Because a single missed or corrupted character could make decryption impossible, the greatest accuracy was required.[31] The undulator technology used to record the impulses had originally been developed for high-speed Morse. It produced a visible record of the impulses on narrow paper tape. This was then read by people employed as "slip readers" who interpreted the peaks and troughs as the marks and spaces of ITA2 characters.[32] Perforated paper tape was then produced for telegraphic transmission to Bletchley Park, where it was punched out.[33]

## The Vernam cipher

The Vernam cipher implemented by the Lorenz SZ machines utilizes the Boolean "exclusive or" (XOR) function, symbolised by ⊕ and verbalised as "A or B but not both". This is represented by the following truth table, where x represents "true" and • represents "false".

| A | B | A ⊕ B |
|---|---|---|
| • | • | • |
| • | x | x |
| x | • | x |
| x | x | • |

Other names for this function are: exclusive disjunction, not equal (NEQ), and modulo 2 addition (without "carry") and subtraction (without "borrow"). Modulo 2 addition and subtraction are identical. Some descriptions of Tunny decryption refer to addition and some to differencing, i.e.
subtraction, but they mean the same thing. The XOR operator is both associative and commutative.

Reciprocity is a desirable feature of a machine cipher, so that the same machine with the same settings can be used either for enciphering or for deciphering. The Vernam cipher achieves this, as combining the stream of plaintext characters with the key stream produces the ciphertext, and combining the same key with the ciphertext regenerates the plaintext.[34] Symbolically:

Plaintext ⊕ Key = Ciphertext

and

Ciphertext ⊕ Key = Plaintext

Vernam's original idea was to use conventional telegraphy practice, with a paper tape of the plaintext combined with a paper tape of the key at the transmitting end, and an identical key tape combined with the ciphertext signal at the receiving end. Each pair of key tapes would have been unique (a one-time tape), but generating and distributing such tapes presented considerable practical difficulties. In the 1920s four men in different countries invented rotor Vernam cipher machines to produce a key stream to act instead of a key tape. The Lorenz SZ40/42 was one of these.[35]

## Security features

A typical distribution of letters in English language text. Inadequate encipherment may not sufficiently mask the non-uniform nature of the distribution. This property was exploited in cryptanalysis of the Lorenz cipher by weakening part of the key.

A monoalphabetic substitution cipher such as the Caesar cipher can easily be broken, given a reasonable amount of ciphertext. This is achieved by frequency analysis of the different letters of the ciphertext, and comparing the result with the known letter frequency distribution of the plaintext.[36] With a polyalphabetic cipher, there is a different substitution alphabet for each successive character. So a frequency analysis shows an approximately uniform distribution, such as that obtained from a (pseudo) random number generator.
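The reciprocity of the Vernam cipher can be demonstrated in a few lines of Python, treating characters as 5-bit integers (the values here are illustrative, not real traffic):

```python
import random

def vernam(stream, key):
    """Combine two sequences of 5-bit values with XOR (the Vernam operation)."""
    return [s ^ k for s, k in zip(stream, key)]

random.seed(1)
plaintext = [random.randrange(32) for _ in range(20)]   # 5-bit characters
key       = [random.randrange(32) for _ in range(20)]

ciphertext = vernam(plaintext, key)     # Plaintext XOR Key = Ciphertext
recovered  = vernam(ciphertext, key)    # Ciphertext XOR Key = Plaintext

# Reciprocity: the same operation with the same key deciphers
assert recovered == plaintext
```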
However, because one set of Lorenz wheels turned with every character while the other did not, the machine did not disguise the pattern in the use of adjacent characters in the German plaintext. Alan Turing discovered this weakness and invented the differencing technique described below to exploit it.[37]

The pattern of which of the cams were in the raised position, and which in the lowered position, was changed daily on the motor wheels (μ37 and μ61). The chi wheel cam patterns were initially changed monthly. The psi wheel patterns were changed quarterly until October 1942, when the frequency was increased to monthly, and then to daily on 1 August 1944, when the frequency of changing the chi wheel patterns was also changed to daily.[38]

The number of start positions of the wheels was 43 × 47 × 51 × 53 × 59 × 37 × 61 × 41 × 31 × 29 × 26 × 23, which is approximately 1.6×10^19 (16 billion billion), far too large a number for cryptanalysts to try an exhaustive "brute-force attack". Sometimes the Lorenz operators disobeyed instructions and two messages were transmitted with the same start positions, a phenomenon termed a "depth".

The method by which the transmitting operator told the receiving operator the wheel settings that he had chosen for the message which he was about to transmit was termed the "indicator" at Bletchley Park. In August 1942, the formulaic starts to the messages, which were useful to cryptanalysts, were replaced by some irrelevant text, which made identifying the true message somewhat harder. This new material was dubbed quatsch (German for "nonsense") at Bletchley Park.[39]

During the phase of the experimental transmissions, the indicator consisted of twelve German forenames, the initial letters of which indicated the position to which the operators turned the twelve wheels.
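The count of start positions quoted above is simply the product of the twelve wheel sizes, which a short Python check confirms is about 1.6×10^19:

```python
from math import prod

# Sizes (cam counts) of the twelve SZ42 wheels
wheel_sizes = [43, 47, 51, 53, 59, 37, 61, 41, 31, 29, 26, 23]
start_positions = prod(wheel_sizes)

# Approximately 1.6 x 10^19 possible settings: far beyond exhaustive search
assert 1.6e19 < start_positions < 1.61e19
```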
As well as showing when two transmissions were fully in depth, it also allowed the identification of partial depths, where two indicators differed only in one or two wheel positions. From October 1942 the indicator system changed to the sending operator transmitting the unenciphered letters QEP[40] followed by a two-digit number. This number was taken serially from a code book that had been issued to both operators and gave, for each QEP number, the settings of the twelve wheels. The books were replaced when they had been used up, but between replacements, complete depths could be identified by the re-use of a QEP number on a particular Tunny link.[41]

## Diagnosis

- **P**: plaintext
- **K**: key, the sequence of characters XOR'ed (added) to the plaintext to give the ciphertext
- **χ**: chi component of key
- **ψ**: psi component of key
- **ψ′**: extended psi, the actual sequence of characters added by the psi wheels, including those when they do not advance[43]
- **Z**: ciphertext
- **D**: de-chi, the ciphertext with the chi component of the key removed
- **Δ**: any of the above XOR'ed with its successor character or bit[44]
- **⊕**: the XOR operation

The first step in breaking a new cipher is to diagnose the logic of the processes of encryption and decryption. In the case of a machine cipher such as Tunny, this entailed establishing the logical structure and hence functioning of the machine. This was achieved without the benefit of seeing a machine, which only happened in 1945, shortly before the allied victory in Europe.[45] The enciphering system was very good at ensuring that the ciphertext Z contained no statistical, periodic or linguistic characteristics to distinguish it from random.
However this did not apply to K, χ, ψ' and D, which was the weakness that meant that Tunny keys could be solved.[46]

During the experimental period of Tunny transmissions when the twelve-letter indicator system was in use, John Tiltman, Bletchley Park's veteran and remarkably gifted cryptanalyst, studied the Tunny ciphertexts and identified that they used a Vernam cipher. When two transmissions (a and b) use the same key, i.e. they are in depth, combining them eliminates the effect of the key.[47] Let us call the two ciphertexts Za and Zb, the key K and the two plaintexts Pa and Pb. We then have:

Za ⊕ Zb = Pa ⊕ Pb

If the two plaintexts can be worked out, the key can be recovered from either ciphertext-plaintext pair, e.g.:

Za ⊕ Pa = K or Zb ⊕ Pb = K

On 31 August 1941, two long messages were received that had the same indicator HQIBPEXEZMUG. The first seven characters of these two ciphertexts were the same, but the second message was shorter. The first 15 characters of the two messages were as follows (in Bletchley Park interpretation):

| Stream | First 15 characters |
|---|---|
| Za | JSH4N ZYZY4 GLFRG |
| Zb | JSH4N ZYMFS /884I |
| Za ⊕ Zb | ///// //FOU GFL3M |

John Tiltman tried various likely pieces of plaintext, i.e. "cribs", against the Za ⊕ Zb string and found that the first plaintext message started with the German word SPRUCHNUMMER (message number). In the second plaintext, the operator had used the common abbreviation NR for NUMMER. There were more abbreviations in the second message, and the punctuation sometimes differed. This allowed Tiltman to work out, over ten days, the plaintext of both messages, as a sequence of plaintext characters discovered in Pa could then be tried against Pb and vice versa.[48] In turn, this yielded almost 4000 characters of key.[49]

Members of the Research Section worked on this key to try to derive a mathematical description of the key generating process, but without success. Bill Tutte joined the section in October 1941 and was given the task.
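The depth relationship above (two in-depth ciphertexts combine to cancel the key) is easy to illustrate in Python with made-up 5-bit values:

```python
def xor_streams(a, b):
    """Character-by-character XOR of two equal-length streams."""
    return [x ^ y for x, y in zip(a, b)]

key = [25, 3, 17, 9, 30, 12]    # hypothetical key stream (5-bit values)
pa  = [4, 8, 15, 16, 23, 11]    # hypothetical plaintext a
pb  = [4, 8, 15, 7, 2, 29]      # hypothetical plaintext b, same wheel start

za = xor_streams(pa, key)
zb = xor_streams(pb, key)

# Combining the two in-depth ciphertexts eliminates the key entirely:
assert xor_streams(za, zb) == xor_streams(pa, pb)
# Recovering either plaintext then yields the key:
assert xor_streams(za, pa) == key
```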
He had read chemistry and mathematics at Trinity College, Cambridge before being recruited to Bletchley Park. At his training course, he had been taught the Kasiski examination technique of writing out a key on squared paper, with a new row after a defined number of characters that was suspected of being the frequency of repetition of the key. If this number was correct, the columns of the matrix would show more repetitions of sequences of characters than chance alone.

Tutte thought that it was possible that, rather than using this technique on the whole letters of the key, which were likely to have a long frequency of repetition, it might be worth trying it on the sequence formed by taking only one impulse (bit) from each letter, on the grounds that "the part might be cryptographically simpler than the whole".[50] Given that the Tunny indicators used 25 letters (excluding J) for 11 of the positions, but only 23 letters for the twelfth, he tried Kasiski's technique on the first impulse of the key characters using a repetition of 25 × 23 = 575. This did not produce a large number of repetitions in the columns, but Tutte did observe the phenomenon on a diagonal. He therefore tried again with 574, which showed up repeats in the columns. Recognising that the prime factors of this number are 2, 7 and 41, he tried again with a period of 41 and "got a rectangle of dots and crosses that was replete with repetitions".[51]

It was clear, however, that the sequence of first impulses was more complicated than that produced by a single wheel of 41 positions. Tutte called this component of the key χ1 (chi). He figured that there was another component, which was XOR-ed with this, that did not always change with each new character, and that this was the product of a wheel that he called ψ1 (psi). The same applied for each of the five impulses, indicated here by subscripts. So for a single character, the key K consisted of two components:

K = χ ⊕ ψ
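Tutte's application of Kasiski's technique to a single impulse can be sketched as follows. The cam pattern and key length are invented for illustration, and the psi contribution is omitted entirely, so that the period-41 structure stands out cleanly (in the real key it was obscured, as the text goes on to explain):

```python
import random

random.seed(2)
chi1 = [random.randrange(2) for _ in range(41)]        # invented chi1 cam pattern
key1 = [chi1[i % 41] for i in range(41 * 30)]          # first-impulse key, period 41

def column_agreement(stream, width):
    """Fold the stream into rows of `width` characters and score how often
    each column repeats a single value (a Kasiski-style test)."""
    rows = [stream[i:i + width] for i in range(0, len(stream) - width + 1, width)]
    score = sum(max(col.count(0), col.count(1)) for col in zip(*rows))
    return score / (len(rows) * width)

# Folding at the true period shows far more column repetition than a wrong width
assert column_agreement(key1, 41) == 1.0
assert column_agreement(key1, 40) < 0.9
```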
The actual sequence of characters added by the psi wheels, including those when they do not advance, was referred to as the extended psi,[43] and symbolised by ψ′:

K = χ ⊕ ψ′

Tutte's derivation of the ψ component was made possible by the fact that dots were more likely than not to be followed by dots, and crosses more likely than not to be followed by crosses. This was a product of a weakness in the German key setting, which they later stopped. Once Tutte had made this breakthrough, the rest of the Research Section joined in to study the other impulses, and it was established that the five ψ wheels all moved together under the control of two μ (mu or "motor") wheels.

Diagnosing the functioning of the Tunny machine in this way was a truly remarkable cryptanalytical achievement, and was described, when Tutte was inducted as Officer of the Order of Canada in October 2001, as "one of the greatest intellectual feats of World War II".[52]

## Turingery

In July 1942 Alan Turing spent a few weeks in the Research Section.[53] He had become interested in the problem of breaking Tunny from the keys that had been obtained from depths.[54] In July, he developed a method of deriving the cam settings ("wheel breaking") from a length of key. It became known as "Turingery"[55] (playfully dubbed "Turingismus" by Peter Ericsson, Peter Hilton and Donald Michie[54]) and introduced the important method of "differencing", on which much of the rest of solving Tunny keys in the absence of depths was based.[55]

### Differencing

The search was on for a process that would manipulate the ciphertext or key to produce a frequency distribution of characters that departed from the uniformity that the enciphering process aimed to achieve.
Turing worked out that the XOR combination of the values of successive (adjacent) characters in a stream of ciphertext or key emphasised any departures from a uniform distribution.[55][56] The resultant stream was called the difference (symbolised by the Greek letter "delta" Δ)[57] because XOR is the same as modulo 2 subtraction. So, for a stream of characters S, the difference ΔS was obtained as follows, where S′ indicates the succeeding character:

ΔS = S ⊕ S′

The stream S may be ciphertext Z, plaintext P, key K or either of its two components χ and ψ. The relationship amongst these elements still applies when they are differenced. For example, as well as:

K = χ ⊕ ψ

it is the case that:

ΔK = Δχ ⊕ Δψ

Similarly for the ciphertext, plaintext and key components:

ΔZ = ΔP ⊕ Δχ ⊕ Δψ

So:

ΔP = ΔZ ⊕ Δχ ⊕ Δψ

The reason that differencing provided a way into Tunny was that, although the frequency distribution of characters in the ciphertext could not be distinguished from a random stream, the same was not true for a version of the ciphertext from which the chi element of the key had been removed. This is because, where the plaintext contained a repeated character and the psi wheels did not move on, the differenced psi character (Δψ) would be the null character ('/' at Bletchley Park). When XOR-ed with any character, this character has no effect, so in these circumstances, ΔK = Δχ. The ciphertext modified by the removal of the chi component of the key was called the de-chi D at Bletchley Park,[58] and the process of removing it as "de-chi-ing".
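Differencing can be sketched in a few lines of Python (5-bit values; the all-zero null character is '/'):

```python
def delta(stream):
    """XOR each character with its successor: Bletchley Park 'differencing'."""
    return [a ^ b for a, b in zip(stream, stream[1:])]

# Repeated characters difference to the null character '/' (zero):
assert delta([9, 9, 9, 4]) == [0, 0, 13]

# Differencing distributes over XOR, so delta(Z) = delta(P) XOR delta(K):
p = [3, 3, 17, 17, 5]                       # hypothetical plaintext values
k = [12, 8, 30, 1, 22]                      # hypothetical key values
z = [a ^ b for a, b in zip(p, k)]           # ciphertext
assert delta(z) == [a ^ b for a, b in zip(delta(p), delta(k))]
```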
Similarly for the removal of the psi component, which was known as "de-psi-ing" (or "deep sighing" when it was particularly difficult).[59] So the delta de-chi ΔD was:

ΔD = ΔZ ⊕ Δχ

Repeated characters in the plaintext were more frequent both because of the characteristics of German (EE, TT, LL and SS are relatively common),[60] and because telegraphists frequently repeated the figures-shift and letters-shift characters[61] as their loss in an ordinary telegraph transmission could lead to gibberish.[62]

To quote the General Report on Tunny:

> Turingery introduced the principle that the key differenced at one, now called ΔΚ, could yield information unobtainable from ordinary key. This Δ principle was to be the fundamental basis of nearly all statistical methods of wheel-breaking and setting.[55]

Differencing was applied to each of the impulses of the ITA2 coded characters.[63] So, for the first impulse, that was enciphered by wheels χ1 and ψ1, differenced at one:

ΔK1 = K1 ⊕ K1′

And for the second impulse:

ΔK2 = K2 ⊕ K2′

(where K1′ and K2′ indicate the succeeding characters of those impulses), and so on. The periodicity of the chi and psi wheels for each impulse (41 and 43 respectively for the first impulse) is also reflected in the pattern of ΔK. However, given that the psi wheels did not advance for every input character, as did the chi wheels, it was not simply a repetition of the pattern every 41 × 43 = 1763 characters for ΔK1, but a more complex sequence.

### Turing's method

Turing's method of deriving the cam settings of the wheels from a length of key obtained from a depth involved an iterative process. Given that the delta psi character was the null character '/' half of the time on average, an assumption that ΔK = Δχ had a 50% chance of being correct. The process started by treating a particular ΔK character as being the Δχ for that position.
The resulting putative bit pattern of x and • for each chi wheel was recorded on a sheet of paper that contained as many columns as there were characters in the key, and five rows representing the five impulses of the Δχ. Given the knowledge, from Tutte's work, of the periodicity of each of the wheels, this allowed the propagation of these values at the appropriate positions in the rest of the key.

A set of five sheets, one for each of the chi wheels, was also prepared. These contained a set of columns corresponding in number to the cams for the appropriate chi wheel, and were referred to as a 'cage'. So the χ3 cage had 29 such columns.[64] Successive 'guesses' of Δχ values then produced further putative cam state values. These might either agree or disagree with previous assumptions, and a count of agreements and disagreements was made on these sheets. Where disagreements substantially outweighed agreements, the assumption was made that the Δψ character was not the null character '/', so the relevant assumption was discounted. Progressively, all the cam settings of the chi wheels were deduced, and from them, the psi and motor wheel cam settings.

As experience of the method developed, improvements were made that allowed it to be used with much shorter lengths of key than the original 500 or so characters.[55]

## Testery

The Testery was the section at Bletchley Park that performed the bulk of the work involved in decrypting Tunny messages.[65] By July 1942, the volume of traffic was building up considerably. A new section was therefore set up, led by Ralph Tester, hence the name.
The staff consisted mainly of ex-members of the Research Section,[1] and included Peter Ericsson, Peter Hilton, Denis Oswald and Jerry Roberts.[66] The Testery's methods were almost entirely manual, both before and after the introduction of automated methods in the Newmanry to supplement and speed up their work.[14][1]

The first phase of the work of the Testery ran from July to October, with the predominant method of decryption being based on depths and partial depths.[67] After ten days, however, the formulaic start of the messages was replaced by nonsensical quatsch, making decryption more difficult. This period was productive nonetheless, even though each decryption took considerable time. Finally, in September, a depth was received that allowed Turing's method of wheel breaking, "Turingery", to be used, leading to the ability to start reading current traffic. Extensive data about the statistical characteristics of the language of the messages was compiled, and the collection of cribs extended.[55]

In late October 1942 the original, experimental Tunny link was closed and two new links (Codfish and Octopus) were opened. With these and subsequent links, the 12-letter indicator system of specifying the message key was replaced by the QEP system. This meant that only full depths could be recognised, from identical QEP numbers, which led to a considerable reduction in traffic decrypted. Once the Newmanry became operational in June 1943, the nature of the work performed in the Testery changed, with decrypts and wheel breaking no longer relying on depths.

### British Tunny

A rebuilt British Tunny at the National Museum of Computing, Bletchley Park. It emulated the functions of the Lorenz SZ40/42, producing printed cleartext from ciphertext input.

The so-called "British Tunny Machine" was a device that exactly replicated the functions of the SZ40/42 machines.
It was used to produce the German cleartext from a ciphertext tape, after the cam settings had been determined.[68] The functional design was produced at Bletchley Park, where ten Testery Tunnies were in use by the end of the war. It was designed and built in Tommy Flowers' laboratory at the General Post Office Research Station at Dollis Hill by Gil Hayward, "Doc" Coombs, Bill Chandler and Sid Broadhurst.[69] It was mainly built from standard British telephone exchange electro-mechanical equipment such as relays and uniselectors. Input and output was by means of a teleprinter with paper tape reading and punching.[70] These machines were used in both the Testery and later the Newmanry.

Dorothy Du Boisson, who was a machine operator and a member of the Women's Royal Naval Service (Wren), described plugging up the settings as being like operating an old-fashioned telephone exchange, and said that she received electric shocks in the process.[71] When Flowers was invited by Hayward to try the first British Tunny machine at Dollis Hill by typing in the standard test phrase "Now is the time for all good men to come to the aid of the party", he much appreciated that the rotor functions had been set up to provide the following Wordsworthian output:[72]

| Stream | Text |
|---|---|
| Input | NOW IS THE TIME FOR ALL GOOD MEN TO COME TO THE AID OF THE PARTY |
| Output | I WANDERED LONELY AS A CLOUD THAT FLOATS ON HIGH OER VALES AND H |

Additional features were added to the British Tunnies to simplify their operation. Further refinements were made for the versions used in the Newmanry, the third Tunny being equipped to produce de-chi tapes.[73][74]

## Newmanry

The Newmanry was a section set up under Max Newman in December 1942 to look into the possibility of assisting the work of the Testery by automating parts of the processes of decrypting Tunny messages.
Newman had been working with Gerry Morgan, head of the Research Section, on ways of breaking Tunny when Bill Tutte approached them in November 1942 with the idea of what became known as the "1+2 break in".[75] This was recognised as being feasible, but only if automated. Newman produced a functional specification of what was to become the "Heath Robinson" machine.[75] He recruited the Post Office Research Station at Dollis Hill, and Dr C. E. Wynn-Williams at the Telecommunications Research Establishment (TRE) at Malvern, to implement his idea. Work on the engineering design started in January 1943 and the first machine was delivered in June. The staff at that time consisted of Newman, Donald Michie, Jack Good, two engineers and 16 Wrens. By the end of the war the Newmanry contained three Robinson machines, ten Colossus computers and a number of British Tunnies. The staff were 26 cryptographers, 28 engineers and 275 Wrens.[76]

The automation of these processes required the processing of large quantities of punched paper tape such as those on which the enciphered messages were received. Absolute accuracy of these tapes and their transcription was essential, as a single character in error could invalidate or corrupt a huge amount of work. Jack Good introduced the maxim "If it's not checked it's wrong".[77]

### The "1+2 break in"

W. T. Tutte developed a way of exploiting the non-uniformity of bigrams (adjacent letters) in the German plaintext, using the differenced ciphertext and key components.
His method was called the "1+2 break in", or "double-delta attack".[78] The essence of this method was to find the initial settings of the chi component of the key by exhaustively trying all positions of its combination with the ciphertext, and looking for evidence of the non-uniformity that reflected the characteristics of the original plaintext.[79][80] The wheel breaking process had to have successfully produced the current cam settings to allow the relevant sequence of characters of the chi wheels to be generated. It was totally impracticable to generate the 22 million characters from all five of the chi wheels, so it was initially limited to 41 × 31 = 1271 from the first two.

Given that for each of the five impulses i:

Zi = χi ⊕ ψi ⊕ Pi

and hence

Pi = Zi ⊕ χi ⊕ ψi

for the first two impulses:

(P1 ⊕ P2) = (Z1 ⊕ Z2) ⊕ (χ1 ⊕ χ2) ⊕ (ψ1 ⊕ ψ2)

Calculating a putative P1 ⊕ P2 in this way for each starting point of the χ1 ⊕ χ2 sequence would yield xs and •s with, in the long run, a greater proportion of •s when the correct starting point had been used. Tutte knew, however, that using the differenced (∆) values amplified this effect,[81] because any repeated characters in the plaintext would always generate •, and similarly ∆ψ1 ⊕ ∆ψ2 would generate • whenever the psi wheels did not move on, and about half of the time when they did: some 70% overall.

Tutte analyzed a decrypted ciphertext with the differenced version of the above function:

(∆Z1 ⊕ ∆Z2) ⊕ (∆χ1 ⊕ ∆χ2) ⊕ (∆ψ1 ⊕ ∆ψ2)

and found that it generated • some 55% of the time.[82] Given the nature of the contribution of the psi wheels, the alignment of the chi-stream with the ciphertext that gave the highest count of •s from (∆Z1 ⊕ ∆Z2 ⊕ ∆χ1 ⊕ ∆χ2) was the one that was most likely to be correct.[83] This technique could be applied to any pair of impulses, and so provided the basis of an automated approach to obtaining the de-chi (D) of a ciphertext, from which the psi component could be removed by manual methods.
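A toy simulation of the 1+2 break in illustrates the idea. Everything here is invented for illustration: single-bit impulses, random wheel patterns, the psi wheels omitted entirely, and doubled plaintext characters standing in for the repeated characters of German plaintext. Exhaustively trying all 41 × 31 = 1271 chi start positions, the true alignment gives the highest count of •s (zeros):

```python
import random

def delta(s):
    """XOR each bit with its successor (differencing at one)."""
    return [a ^ b for a, b in zip(s, s[1:])]

random.seed(4)
L = 2000
chi1 = [random.randrange(2) for _ in range(41)]   # invented chi1 cam pattern
chi2 = [random.randrange(2) for _ in range(31)]   # invented chi2 cam pattern

# Plaintext impulses with every character doubled: a crude stand-in for
# repeated characters in German plaintext (psi wheels omitted in this sketch)
p1, p2 = [], []
for _ in range(L // 2):
    b1, b2 = random.randrange(2), random.randrange(2)
    p1 += [b1, b1]
    p2 += [b2, b2]

# Ciphertext impulses Z = P XOR chi; the true start positions are (0, 0)
z1 = [p ^ chi1[i % 41] for i, p in enumerate(p1)]
z2 = [p ^ chi2[i % 31] for i, p in enumerate(p2)]

dz = delta([a ^ b for a, b in zip(z1, z2)])       # delta(Z1 XOR Z2)

def dot_count(s1, s2):
    """Count of dots (zeros) in dZ1+dZ2+dchi1+dchi2 for trial starts s1, s2."""
    dchi = delta([chi1[(s1 + i) % 41] ^ chi2[(s2 + i) % 31] for i in range(L)])
    return sum(a == b for a, b in zip(dz, dchi))

# Try every start position pair; the highest dot count identifies the truth
best = max((dot_count(s1, s2), s1, s2) for s1 in range(41) for s2 in range(31))
assert best[1:] == (0, 0)
```

In this simplified setting the doubled characters guarantee a dot at every other differenced position for the correct alignment, while wrong alignments score near 50%, which is why the maximum stands out so clearly.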
### Robinsons

Heath Robinson was the first machine produced to automate Tutte's 1+2 method. It was given the name by the Wrens who operated it, after cartoonist William Heath Robinson, who drew immensely complicated mechanical devices for simple tasks, similar to the American cartoonist Rube Goldberg.

The functional specification of the machine was produced by Max Newman. The main engineering design was the work of Frank Morrell[84] at the Post Office Research Station at Dollis Hill in North London, with his colleague Tommy Flowers designing the "Combining Unit". Dr C. E. Wynn-Williams from the Telecommunications Research Establishment at Malvern produced the high-speed electronic valve and relay counters.[85] Construction started in January 1943,[86] and the prototype machine was in use at Bletchley Park in June.[87]

The main parts of the machine were:

- a tape transport and reading mechanism (dubbed the "bedstead" because of its resemblance to an upended metal bed frame) that ran the looped key and message tapes at between 1000 and 2000 characters per second;
- a combining unit that implemented the logic of Tutte's method;
- a counting unit that counted the number of •s, and if it exceeded a pre-set total, displayed or printed it.

The prototype machine was effective despite a number of serious shortcomings. Most of these were progressively overcome in the development of what became known as "Old Robinson".[88]

### Colossus

A Mark 2 Colossus computer. The Wren operators are (left to right) Dorothy Du Boisson and Elsie Booker. The slanted control panel on the left was used to set the pin patterns on the Lorenz. The "bedstead" paper tape transport is on the right.

In 1994, a team led by Tony Sale (right) began a reconstruction of a Mark 2 Colossus at Bletchley Park. Here, in 2006, Sale and Phil Hayes supervise the solving of an enciphered message with the completed machine.
Tommy Flowers' experience with Heath Robinson, and his previous, unique experience of thermionic valves (vacuum tubes) led him to realize that a better machine could be produced using electronics. Instead of the key stream being read from a punched paper tape, an electronically generated key stream could allow much faster and more flexible processing. Flowers' suggestion that this could be achieved with a machine that was entirely electronic and would contain between one and two thousand valves, was treated with incredulity at both the Telecommunications Research Establishment and at Bletchley Park, as it was thought that it would be "too unreliable to do useful work". He did, however, have the support of the Controller of Research at Dollis Hill, W Gordon Radley,[89] and he implemented these ideas producing Colossus, the world's first electronic, digital, computing machine that was at all programmable, in the remarkably short time of ten months.[90] In this he was assisted by his colleagues at the Post Office Research Station Dollis Hill: Sidney Broadhurst, William Chandler, Allen Coombs and Harry Fensom. The prototype Mark 1 Colossus (Colossus I), with its 1500 valves, became operational at Dollis Hill in December 1943[2] and was in use at Bletchley Park by February 1944. This processed the message at 5000 characters per second using the impulse from reading the tape's sprocket holes to act as the clock signal. It quickly became evident that this was a huge leap forward in cryptanalysis of Tunny. Further Colossus machines were ordered and the orders for more Robinsons cancelled. An improved Mark 2 Colossus (Colossus II) contained 2400 valves and first worked at Bletchley Park on 1 June 1944, just in time for the D-day Normandy landings. 
The main parts of this machine were:[91]

• a tape transport and reading mechanism (the "bedstead") that ran the message tape in a loop at 5000 characters per second;
• a unit that generated the key stream electronically;
• five parallel processing units that could be programmed to perform a large range of Boolean operations;
• five counting units that each counted the number of •s or ×s and, if it exceeded a pre-set total, printed it out.

The five parallel processing units allowed Tutte's "1+2 break in" and other functions to be run at an effective speed of 25,000 characters per second by the use of circuitry invented by Flowers that would now be called a shift register. Donald Michie worked out a method of using Colossus to assist in wheel breaking as well as for wheel setting.[92] This was then implemented in special hardware on later Colossi. A total of ten Colossus computers were in use and an eleventh was being commissioned at the end of the war in Europe (VE-Day).[93]

## Special machines

As well as the commercially produced teleprinters and re-perforators, a number of other machines were built to assist in the preparation and checking of tapes in the Newmanry and Testery.[94][95] The approximate complement as of May 1945 was as follows (counts in parentheses).

Machines used in deciphering Tunny as of May 1945:

• Super Robinson (2): Used for crib runs in which two tapes were compared in all positions. Contained some valves.
• Colossus Mk. 2 (10): Counted a condition involving a message tape and an electronically generated key character stream imitating the various Tunny wheels in different relative positions ("stepping").[96] Contained some 2,400 valves.
• Dragons (2): Used for setting short cribs by "crib-dragging" (hence the name).[97][98]
• Aquarius (1): A machine under development at the war's end for the "go-backs" of the SZ42B, which stored the contents of the message tape in a large bank of capacitors that acted as an electronic memory.[99]
• Proteus: A machine for utilising depths that was under construction at the war's end but was not completed.
• Decoding machines (13): Translated from ciphertext typed in, to plaintext printed out. Some of the later ones were speeded up with the use of a few valves.[100]

A number of modified machines were produced for the Newmanry:

• Tunnies (3): See British Tunny above.
• Miles (3): A set of increasingly complex machines (A, B, C, D) that read two or more tapes and combined them in a variety of ways to produce an output tape.[101]
• Garbo (3): Similar to Junior, but with a Delta'ing facility; used for rectangling.[102]
• Juniors (4): For printing tapes via a plug panel to change characters as necessary; used to print de-chis.[74]
• Insert machines (2): Similar to Angel, but with a device for making corrections by hand.
• Angels (4): Copied tapes.
• Hand perforators (2): Generated tape from a keyboard.
• Hand counters (6): Measured text length.
• Stickers, hot (3): Bostik and benzene were used for sticking tapes to make a loop. The tape to be stuck was inserted between two electrically heated plates and the benzene evaporated.
• Stickers, cold (6): Stuck tapes without heating.

## Steps in wheel setting

Working out the start position of the chi (χ) wheels required first that their cam settings had been determined by "wheel breaking". Initially, this was achieved by two messages having been sent in depth. The number of start positions for the first two wheels, χ1 and χ2, was 41 × 31 = 1271. The first step was to try all of these start positions against the message tape. This was Tutte's "1+2 break in", which involved computing (∆Z1 ⊕ ∆Z2 ⊕ ∆χ1 ⊕ ∆χ2), which gives a putative (∆D1 ⊕ ∆D2), and counting the number of times this gave • (dot).
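The exhaustive trial of start positions can be sketched in miniature. The toy below uses hypothetical wheels of period 5 and 3 in place of the real 41 and 31 (41 × 31 = 1271 trials), with dot = 0, cross = 1, and the psi contribution omitted for simplicity:

```python
# Hypothetical miniature of the chi-wheel setting search.  Wheels of
# period 5 and 3 stand in for the real 41 and 31; the psi stream and the
# real cam patterns are omitted.  Names are illustrative.
from itertools import product

def delta(bits):
    return [a ^ b for a, b in zip(bits, bits[1:])]

def stream(wheel, start, n):
    """Read n characters off a cyclic wheel from a given start position."""
    return [wheel[(start + i) % len(wheel)] for i in range(n)]

def dot_count(z1, z2, chi1, chi2):
    return [a ^ b ^ c ^ d for a, b, c, d in
            zip(delta(z1), delta(z2), delta(chi1), delta(chi2))].count(0)

def best_setting(z1, z2, wheel1, wheel2):
    """Try every pair of start positions; keep the highest dot count."""
    n = len(z1)
    return max(product(range(len(wheel1)), range(len(wheel2))),
               key=lambda s: dot_count(z1, z2,
                                       stream(wheel1, s[0], n),
                                       stream(wheel2, s[1], n)))
```

With a highly repetitive plaintext the correct pair of start positions yields the maximum dot count, which is the principle the 1271-trial run exploited.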
Incorrect starting positions would, on average, give a dot count of 50% of the message length. On average, the dot count for a correct starting point would be 54%, but there was inevitably a considerable spread of values around these averages.[83] Both Heath Robinson, which was developed into what became known as "Old Robinson", and Colossus were designed to automate this process. Statistical theory allowed the derivation of measures of how far any count was from the 50% expected with an incorrect starting point for the chi wheels. This measure of deviation from randomness was called sigma. Starting points that gave a count below the "set total", a threshold set at 2.5 × sigma above the 50% level, were not printed out.[103] The ideal for a run to set χ1 and χ2 was that a single pair of trial values produced one outstanding value for sigma, thus identifying the start positions of the first two chi wheels. An example of the output from such a run on a Mark 2 Colossus with its five counters (a, b, c, d and e) is given below.

Output table abridged from Small's "The Special Fish Report".[104] The set total threshold was 4912.

| χ1 | χ2 | Counter | Count | Operator's notes on the output |
|----|----|---------|-------|--------------------------------|
| 06 | 11 | a | 4921 | |
| 06 | 13 | a | 4948 | |
| 02 | 16 | e | 4977 | |
| 05 | 18 | b | 4926 | |
| 02 | 20 | e | 4954 | |
| 05 | 22 | b | 4914 | |
| 03 | 25 | d | 4925 | |
| 02 | 26 | e | 5015 | ← 4.6 σ |
| 19 | 26 | c | 4928 | |
| 25 | 19 | b | 4930 | |
| 25 | 21 | b | 5038 | ← 5.1 σ |
| 29 | 18 | c | 4946 | |
| 36 | 13 | a | 4955 | |
| 35 | 18 | b | 4926 | |
| 36 | 21 | a | 5384 | ← 12.2 σ ch χ1 χ2 ! ! |
| 36 | 25 | a | 4965 | |
| 36 | 29 | a | 5013 | |
| 38 | 08 | d | 4933 | |

With an average-sized message, this would take about eight minutes. However, by utilising the parallelism of the Mark 2 Colossus, the number of times the message had to be read could be reduced by a factor of five, from 1271 to 255.[105] Having identified possible χ1, χ2 start positions, the next step was to try to find the start positions for the other chi wheels. In the example given above, there is a single setting, χ1 = 36 and χ2 = 21, whose sigma value makes it stand out from the rest.
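The sigma arithmetic can be reproduced in a few lines. Note that the message length N = 9578 used below is an assumption inferred from the published set total (4912) and the 12.2 σ count (5384); it is not a figure stated in the text:

```python
# Sketch of the "set total" arithmetic.  For a random alignment the dot
# count over a message of N characters behaves like a fair-coin count:
# mean N/2, standard deviation sqrt(N)/2 ("sigma").
from math import sqrt

def sigma_score(count, n):
    """How many sigmas a dot count sits above the 50% expectation."""
    return (count - n / 2) / (sqrt(n) / 2)

def set_total(n, k=2.5):
    """Threshold count: alignments scoring below mean + k*sigma were not printed."""
    return n / 2 + k * sqrt(n) / 2

N = 9578  # assumed message length, back-fitted from Small's figures
```

With this N, `set_total(N)` comes out at about 4911 and `sigma_score(5384, N)` at about 12.2, matching the table above.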
This was not always the case, and Small enumerates 36 different further runs that might be tried according to the result of the χ1, χ2 run.[106] At first the choices in this iterative process were made by the cryptanalyst sitting at the typewriter output and calling out instructions to the Wren operators. Max Newman devised a decision tree and then set Jack Good and Donald Michie the task of devising others.[107] These were used by the Wrens without recourse to the cryptanalysts if certain criteria were met.[108]

In the above example of Small's, the next run was with the first two chi wheels set to the start positions found, and three separate parallel explorations of the remaining three chi wheels. Such a run was called a "short run" and took about two minutes.[105]

Output table adapted from Small's "The Special Fish Report".[109] The set total threshold was 2728. The third column shows the trial start position; the operator's notes identify which wheel it set.

| χ1 | χ2 | Position tried | Counter | Count | Operator's notes on the output |
|----|----|----------------|---------|-------|--------------------------------|
| 36 | 21 | 01 | a | 2938 | ← 6.8 σ ! χ3 ! |
| 36 | 21 | 01 | b | 2763 | |
| 36 | 21 | 01 | c | 2803 | |
| 36 | 21 | 02 | b | 2733 | |
| 36 | 21 | 04 | c | 3003 | ← 8.6 σ ! χ5 ! |
| 36 | 21 | 06 | a | 2740 | |
| 36 | 21 | 07 | c | 2750 | |
| 36 | 21 | 09 | b | 2811 | |
| 36 | 21 | 11 | a | 2751 | |
| 36 | 21 | 12 | c | 2759 | |
| 36 | 21 | 14 | c | 2733 | |
| 36 | 21 | 16 | a | 2743 | |
| 36 | 21 | 19 | b | 3093 | ← 11.1 σ ! χ4 ! |
| 36 | 21 | 20 | a | 2785 | |
| 36 | 21 | 22 | b | 2823 | |
| 36 | 21 | 24 | a | 2740 | |
| 36 | 21 | 25 | b | 2796 | |

So the probable start positions for the chi wheels are: χ1 = 36, χ2 = 21, χ3 = 01, χ4 = 19, χ5 = 04. These had to be verified before the de-chi (D) message was passed to the Testery. This involved Colossus performing a count of the frequency of the 32 characters in ΔD. Small describes the check of the frequency count of the ΔD characters as being the "acid test",[110] and that practically every cryptanalyst and Wren in the Newmanry and Testery knew the contents of the following table by heart.

Relative frequency count of characters in ΔD:[111]
| Char. | Count | Char. | Count | Char. | Count | Char. | Count |
|-------|-------|-------|-------|-------|-------|-------|-------|
| / | 1.28 | R | 0.92 | A | 0.96 | D | 0.89 |
| 9 | 1.10 | C | 0.90 | U | 1.24 | F | 1.00 |
| H | 1.02 | V | 0.94 | Q | 1.01 | X | 0.87 |
| T | 0.99 | G | 1.00 | W | 0.89 | B | 0.82 |
| O | 1.04 | L | 0.92 | 5 | 1.43 | Z | 0.89 |
| M | 1.00 | P | 0.96 | 8 | 1.12 | Y | 0.97 |
| N | 1.00 | I | 0.96 | K | 0.89 | S | 1.04 |
| 3 | 1.13 | 4 | 0.90 | J | 1.03 | E | 0.89 |

If the derived start points of the chi wheels passed this test, the de-chi-ed message was passed to the Testery, where manual methods were used to derive the psi and motor settings. As Small remarked, the work in the Newmanry took a great amount of statistical science, whereas that in the Testery took much knowledge of language and was of great interest as an art. Cryptanalyst Jerry Roberts made the point that this Testery work was a greater load on staff than the automated processes in the Newmanry.[14]

## Notes and references

1. ^ a b c Good, Michie & Timms 1945, 1 Introduction: 14 Organisation, 14A Expansion and Growth, (b) Three periods, p. 28. 2. ^ a b Flowers 1983, p. 245. 3. ^ McKay 2010, p. 263 quoting Jerry Roberts. 4. ^ Hinsley 1993, p. 8. 5. ^ a b Good, Michie & Timms 1945, 1 Introduction: 11 German Tunny, 11A Fish Machines, (c) The German Ciphered Teleprinter, p. 4. 6. ^ Weierud 2006, p. 307. 7. ^ Gannon 2007, p. 103. 8. ^ Good, Michie & Timms 1945, 1 Introduction: 11 German Tunny, 11A Fish Machines, (c) The German Ciphered Teleprinter, p. 5. 9. ^ Copeland 2006, p. 45. 10. ^ a b Good, Michie & Timms 1945, 1 Introduction: 11 German Tunny, 11B The Tunny Cipher Machine, (j) Mechanical Aspects, p. 10. 11. ^ Good 1993, pp. 162, 163. 12. ^ Flowers 2006, p. 81. 13. ^ All but two of the Colossus computers, which were taken to GCHQ, were dismantled in 1945, and the whole project was kept strictly secret until the 1970s. Thus Colossus did not feature in many early descriptions of the development of electronic computers. Gannon 2007, p. 431 14. ^ a b c 15. ^ Small 1944, p. 1. 16. ^ Good, Michie & Timms 1945, 1 Introduction: 11 German Tunny, 11B The Tunny Cipher Machine, p. 6. 17. ^ Gannon 2007, pp. 150, 151. 18. ^ Good 1993, p. 153. 19.
^ Good, Michie & Timms 1945, 1 Introduction: 11 German Tunny, 11B The Tunny Cipher Machine, (f) Motors p. 7. 20. ^ Good, Michie & Timms 1945, 1 Introduction: 11 German Tunny, 11B The Tunny Cipher Machine, (g) Limitations p. 8. 21. ^ Churchhouse 2002, pp. 158, 159. 22. ^ Good, Michie & Timms 1945, 1 Introduction: 11 German Tunny, 11C Wheel patterns, pp. 11, 12. 23. ^ This statement is an over-simplification. The real constraint is more complex, that ab=½. For further details see: Good, Michie & Timms 1945, p. 17 in 1 Introduction: 12 Cryptographic Aspects, 12A The Problem, (d) Early methods and Good, Michie & Timms 1945, p. 306 in 42 Early Hand Methods: 42B Machine breaking for March 1942, (e) Value of a and b. Indeed, this weakness was one of the two factors that led to the system being diagnosed. 24. ^ Good, Michie & Timms 1945, 1 Introduction: 11 German Tunny, 11E The Tunny Network, (b) Wheel-breaking and Setting, p. 15. 25. ^ Hayward 1993, p. 176. 26. ^ In more recent terminology, each impulse would be termed a "bit" with a mark being binary 1 and a space being binary 0. Punched paper tape had a hole for a mark and no hole for a space. 27. ^ Good, Michie & Timms 1945, 1 Introduction: 11 German Tunny, 11A Fish Machines, (a) The Teleprinter Alphabet, p. 3. 28. ^ Roberts 2006, p. 256. 29. ^ Good, Michie & Timms 1945, 14 Organisation, 14A Expansion and growth, (a) General position p. 28. 30. ^ Intelligence work at secret sites revealed for the first time, GCHQ, 1 November 2019, retrieved 9 July 2020CS1 maint: date and year (link) 31. ^ Good, Michie & Timms 1945, 3. Organisation: 33 Knockholt, 33A Ordering tapes, p. 281. 32. ^ Bowler, Eileen Eveline (9 November 2005), WW2 People's War, An Archive of World War Two Memories: Listening to the Enemy Radio, BBC London CSV Action Desk 33. ^ Gannon 2007, p. 333. 34. ^ Good, Michie & Timms 1945, 1 Introduction: 11 German Tunny, 11B The Tunny Cipher Machine, (i) Functional Summary, p. 10. 35. 
^ Good, Michie & Timms 1945, 1 Introduction: 11 German Tunny, 11A Fish Machines, (c) The German Ciphered Teleprinter, p. 6. 36. ^ Churchhouse 2002, p. 24. 37. ^ Copeland 2006, p. 68. 38. ^ Copeland 2006, p. 48. 39. ^ Edgerley 2006, pp. 273, 274. 40. ^ Initially QSN (see Good, Michie & Timms 1945, p. 320 in 44 Early Hand statistical Methods: 44A Introduction of the QEP (QSN) System). 41. ^ Copeland 2006, pp. 44–47. 42. ^ Good, Michie & Timms 1945, 1 Introduction: 12 Cryptographic Aspects, 12A The Problem, (a) Formulae and Notation, p. 16. 43. ^ a b Good, Michie & Timms 1945, 1 Introduction: 11 German Tunny, 11B The Tunny Cipher Machine, (e) Psi-key, p. 7. 44. ^ Good, Michie & Timms 1945, 1 Introduction: 11 German Tunny, 11C Wheel Patterns, (b) Differenced and Undifferenced Wheels, p. 11. 45. ^ Sale, Tony, The Lorenz Cipher and how Bletchley Park broke it, retrieved 21 October 2010 46. ^ Good, Michie & Timms 1945, 12 Cryptographic Aspects: 12A The Problem, (c) Weaknesses of Tunny, p. 16. 47. ^ Tutte 2006, p. 353. 48. ^ 49. ^ Tutte 1998, p. 4. 50. ^ Tutte 2006, p. 356. 51. ^ Tutte 2006, p. 357. 52. ^ O'Connor, J J; Robertson, E F (2003), MacTutor Biography: William Thomas Tutte, University of St Andrews, retrieved 28 April 2013 53. ^ Tutte 2006, pp. 359, 360. 54. ^ a b Copeland 2006, p. 380. 55. Good, Michie & Timms 1945, 4 Early Methods and History: 43 Testery Methods 1942-1944, 43B Turingery, p. 313. 56. ^ Copeland 2012, p. 96. 57. ^ Good, Michie & Timms 1945, 1 Introduction: 11 German Tunny, 11C Wheel patterns, (b) Differenced and Undifferenced Wheels p. 11. 58. ^ Small 1944, p. 2 refers to the de-chi as being "pseudo plain" 59. ^ Tutte 2006, p. 365. 60. ^ Singh, Simon, The Black Chamber, retrieved 28 April 2012 61. ^ Newman c. 1944 p. 387 62. ^ Carter 2008, p. 14. 63. ^ The five impulses or bits of the coded characters are sometimes referred to as five levels. 64. ^ Copeland 2006, p. 385 which reproduces a χ3 cage from the General Report on Tunny 65. 
^ Roberts 2009, 34 minutes in. 66. ^ Roberts 2006, p. 250. 67. ^ Unlike a full depth, when all twelve letters of the indicator were the same, a partial depth occurred when one or two of the indicator letters differed. 68. ^ Hayward 1993, pp. 175–192. 69. ^ Hayward 2006, p. 291. 70. ^ Currie 2006, pp. 265–266. 71. ^ Copeland 2006, p. 162 quoting Dorothy Du Boisson. 72. ^ Hayward 2006, p. 292. 73. ^ Good, Michie & Timms 1945, 51 Introductory: 56. Copying Machines, 56K The (Newmanry) Tunny Machine, pp. 376-378. 74. ^ a b Small 1944, p. 105. 75. ^ a b Good, Michie & Timms 1945, 15 SomeHistorical Notes: 15A. First Stages in Machine Development, p. 33. 76. ^ Good, Michie & Timms 1945, 31 Mr Newnam's Section: 31A, Growth, p. 276. 77. ^ Good 2006, p. 215. 78. ^ The Double-Delta Attack, retrieved 27 June 2018 79. ^ Good, Michie & Timms 1945, 44 Hand Statistical Methods: Setting - Statistical pp. 321–322. 80. ^ Budiansky 2006, pp. 58–59. 81. ^ For this reason Tutte's 1 + 2 method is sometimes called the "double delta" method. 82. ^ Tutte 2006, p. 364. 83. ^ a b Carter 2008, pp. 16–17. 84. ^ Bletchley Park National Code Centre: November 1943, retrieved 21 November 2012 85. ^ Good, Michie & Timms 1945, 15 Some Historical Notes: 15A. First Stages in Machine Development, (c) Heath Robinson p. 33. 86. ^ Copeland 2006, p. 65. 87. ^ Good, Michie & Timms 1945, 37 Machine Setting Organisation: (b) Robinsons and Colossi p. 290. 88. ^ Good, Michie & Timms 1945, 52 Development of Robinson and Colossus: (b) Heath Robinson p. 328. 89. ^ Fensom 2006, pp. 300–301. 90. ^ Flowers 2006, p. 80. 91. ^ Flowers 1983, pp. 245–252. 92. ^ 93. ^ Flowers 1983, p. 247. 94. ^ Good, Michie & Timms 1945, 13 Machines: 13A Explanation of the Categories, (b) Copying Machines p. 25 and 13C Copying Machines p. 27. 95. ^ Good, Michie & Timms 1945, 56 Copying Machines pp. 367–379. 96. ^ Good, Michie & Timms 1945, 53 Colossus: 53A Introduction, p. 333. 97. ^ Hayward 2006, pp. 291–292. 98. ^ Michie 2006, p. 236. 
99. ^ Fensom 2006, pp. 301–302. 100. ^ Good, Michie & Timms 1945, pp. 326 in 51 Introductory: (e) Electronic counters etc. 101. ^ Small 1944, p. 107. 102. ^ Small 1944, pp. 23, 105. 103. ^ Small 1944, p. 9. 104. ^ Small 1944, p. 19. 105. ^ a b Small 1944, p. 8. 106. ^ Small 1944, p. 7. 107. ^ Good, Michie & Timms 1945, 23 Machine Setting: 23B The Choice of Runs, pp. 79,80. 108. ^ Good 2006, p. 218. 109. ^ Small 1944, p. 20. 110. ^ Small 1944, p. 15. 111. ^ Adapted from Small 1944, p. 5
https://www.biostars.org/p/131169/
# Using vcfutils.pl command

Asked 6.2 years ago by lcc1844:

I am trying to do some variant filtering via samtools and have made a var.bcf file. When I try the following I get the error 'No such file or directory':

```
bcftools view my.var.bcf | vcfutils.pl varFilter - > my.var-final.vcf
```

In my samtools folder I can't actually see bcftools or vcfutils.pl anywhere, but I can do this:

```
$ export PATH=$PATH:/bcftools
$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/bcftools
```

If I try to repeat the filtering command using this, I get the same error:

```
bcftools view my.var.bcf | /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/bcftools:/vcfutils.pl varFilter - > my.var-final.vcf
```

I can understand that the vcfutils.pl file is not being found, but I am stuck trying to work out how to fix this. I'm sure it must be something very basic but I'm struggling anyway! Any advice would be much appreciated. Thanks

Comments:

- Have you tried typing `locate vcfutils.pl` to see where the script is?
- I'm sure `:/bcftools` is wrong. You didn't install bcftools in the 'root' directory.
- (OP) Hello and thank you. To install bcftools I did `git clone --branch=develop git://github.com/samtools/bcftools.git`, which required me to do `sudo apt-get install` to install it. But I don't know where it has installed!
- Have you followed these installation steps: http://vcftools.sourceforge.net/htslib.html?

Answer (from lcc1844): Thanks, I have done that and got `/usr/share/samtools/vcfutils.pl`. I don't know what "share" is though.

- The "share" word is used because what is under /usr/share is platform independent, and can be shared among several machines across a network filesystem.
Therefore, this is the place for manuals, documentation, examples, etc.

- (OP) Hi, thanks again. How can I move bcftools to an appropriate place?
- You should move it to the /usr/bin directory. This folder is the directory for the executables that are accessed by all users (everybody has this directory in their PATH). So you may want to copy/move the bcftools executables to this folder.
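A safer pattern than pasting the expanded value of `$PATH` into the pipeline is to append the directory that actually contains the script to `PATH` once. The directory below is assumed from the answer in this thread (`/usr/share/samtools`); substitute whatever `locate vcfutils.pl` reports on your system:

```shell
# Append a directory to PATH only if it is not already present.
add_to_path() {
    dir=$1
    case ":$PATH:" in
        *":$dir:"*) ;;                      # already on PATH, do nothing
        *) PATH="$PATH:$dir"; export PATH ;;
    esac
}

# Assumed location, taken from the answer above:
add_to_path /usr/share/samtools

# Afterwards the script can be invoked by name, e.g.:
# bcftools view my.var.bcf | vcfutils.pl varFilter - > my.var-final.vcf
```

Putting the `add_to_path` line in `~/.bashrc` makes the change persist across shells.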
https://physics.stackexchange.com/questions/452341/torricellis-law-doesnt-seem-to-apply-in-an-experiment-why-could-this-be-what
# Torricelli's law doesn't seem to apply in an experiment. Why could this be? What is going on?

For physics class we've done this experiment where we would measure the speed of the water escaping from a water bottle through a hole in a side of it. We would calculate the speed through kinematics by measuring the distance from the bottle the stream would reach. The data obtained are:

| h (cm) | v (cm/s) |
|--------|----------|
| 9.8 | 135.7 |
| 5 | 82.4 |
| 3.9 | 70.8 |
| 2.9 | 56.2 |
| 1.4 | 33.2 |

But when analyzing the data, the speeds of the streams don't follow Torricelli's law, $$v=\sqrt{2gh}$$, as they should. Instead they roughly follow another power-law formula, $$v=26\,h^{0.8}$$, where v is the escape velocity and h is the height of the fluid surface above the hole. I've been staring at the data for days now and I don't know whether we measured the experimental data wrongly or whether there is something wrong with what I've been doing.

• Would you mind posting your data and showing what you have tried? It will help me and others answer your question – InertialObserver Jan 5 '19 at 21:48
• Thanks for adding part of your data. Since your velocities are inferred from measuring the distance traveled by the streams before they hit the ground, shouldn't your raw data contain only distances? I would expect a height above the ground and a horizontal distance for each data point. – rob Jan 5 '19 at 21:57
• Unfortunately, in a lot of these school experiments there are lots of other factors that get in the way. For example, there might have been sizable air resistance or viscosity effects. When I was in high school, I once measured the concentration of a solution to be $937\%$, so this is actually pretty good. – knzhou Jan 5 '19 at 21:58
• Also, did you make all these measurements simultaneously (e.g. from a photograph)? If not, you should estimate how much the water level dropped while you measured.
– rob Jan 5 '19 at 22:00 • We decided the heights first (based on some marks on the plastic bottle) and then lowered the water level to each mark, blocked the hole and try to measure the distance of the stream when we unblocked it (probably a cause of more errors as the stream didn't move perfectly right after putting your finger away) – Diego Jan 5 '19 at 22:04 It might have been better to start with the water at a height above that which you wished to take the measurement and then waited until the water level reached the required height and then taking the measurement? Doing this would reduce the "cause of more errors as the stream didn't move perfectly right after putting your finger away". Perhaps a better way of estimating the speed at which the water emerges from the hole is to measure the flow rate (volume emerging per second) which is equal to the speed of the water $$\times$$ the cross-sectional area of the hole? To try and get reasonable accuracy the cross-sectional area of the reservoir should be much larger than the cross-sectional area of the hole so that the level of water in the reservoir does not fall by very much whilst the flow rate is being measured and the speed of the water at the top of the reservoir is much less than the speed of the water as it emerges from the hole. The experiment is one which is done quite often and here is one example of the experimental results on the Internet. • unluckily I don't have access anymore to the experiment and I can only work with the data we measured at that time, so I can't work with the cross-sectional area of the hole – Diego Jan 6 '19 at 18:02 • @Diego there does seem to be a lot of errors that you would have to justify then in your calculations, to find a really close number to the expected model. – Karthik Jan 6 '19 at 21:19 We would calculate the speed through kinematics by measuring the distance from the bottle the stream would reach. How exactly did you calculate velocity in this measurement? 
Fluid particles trace a parabolic path in this situation, and this path follows the equation $$y = \frac{x^2}{2R} = g\frac{x^2}{2V^2}$$ where the coordinate system is shown in the figure (taken from my answer here; figure not reproduced). The velocity is therefore given by $$V = \sqrt{\frac{g}{2y}} x$$ where $$y$$ is the height that the stream has fallen (i.e., the height of the hole), and $$x$$ is a horizontal distance (i.e., the distance on the table that the stream lands). From Torricelli's law, this suggests that $$\frac{x}{\sqrt{y}} = 2\sqrt{h}$$. Maybe you could measure $$\frac{x}{\sqrt{y}}$$ to see where your disagreement is? • If the experimenter kept the distance through which the water fell after leaving the hole $y$ constant then $V\propto x$ which is possibly what was done? – Farcher Jan 5 '19 at 23:23 • @Farcher Oh yeah, this could be a way to get $V$ by measuring $x$. I'm not sure if OP did that. – Drew Jan 6 '19 at 2:18 • That is exactly how I calculated the velocity – Diego Jan 6 '19 at 17:59
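These formulas can be cross-checked numerically against the posted data (cgs units; names are illustrative). A least-squares fit of log v against log h on these five points gives roughly v ≈ 26 h^0.72, a slightly lower exponent than the 0.8 quoted, while Torricelli predicts v = sqrt(2gh):

```python
# Cross-check of the posted data against Torricelli's law, in cm units.
from math import sqrt, log

g = 980.0  # cm/s^2, matching the centimeter units of the data

def torricelli(h):
    """Efflux speed predicted by Torricelli's law, v = sqrt(2 g h)."""
    return sqrt(2 * g * h)

def speed_from_trajectory(x, y):
    """Speed recovered from landing distance x and fall height y."""
    return sqrt(g / (2 * y)) * x

heights = [9.8, 5.0, 3.9, 2.9, 1.4]
speeds = [135.7, 82.4, 70.8, 56.2, 33.2]

# Least-squares slope of log v against log h: the empirical exponent.
lx = [log(h) for h in heights]
ly = [log(v) for v in speeds]
mx = sum(lx) / len(lx)
my = sum(ly) / len(ly)
slope = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
         / sum((a - mx) ** 2 for a in lx))
```

For the tallest head, h = 9.8 cm, Torricelli gives about 138.6 cm/s against the measured 135.7 cm/s, so the largest discrepancies are at the small heads, consistent with viscosity or vena-contracta effects mattering more at low speeds.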
https://www.degruyter.com/view/j/math.2016.14.issue-1/math-2016-0049/math-2016-0049.xml
# Commentary to: Generalized derivations of Lie triple systems

Ivan Kaygorodov (Universidade Federal do ABC, CMCC, Santo André, Brazil, and Siberian State Aerospace University, Krasnoyarsk, Russian Federation) and Yury Popov (Novosibirsk State University, Novosibirsk, Russian Federation)

Open Mathematics (formerly Central European Journal of Mathematics), Open Access, Online ISSN 2391-5455. Published Online: 2016-07-26 | DOI: https://doi.org/10.1515/math-2016-0049

We remark that it would be interesting to describe all finite-dimensional n-ary algebras with the property QDer = End. The attempt of doing that in the case of Lie triple systems (n = 3) was made in the paper [4]. Unfortunately, as was noted in [2] (date of publication: 2 June 2015), the authors made a mistake in the classification of irreducible End(T)-submodules of T⊗T⊗T with respect to the action $f\cdot x\otimes y\otimes z={f}^{\ast }\left(x\otimes y\otimes z\right)$, where T is a Lie triple system (Lemma 4.2). Particularly, it was proposed that the sets

$(T⊗T⊗T)_{+} = span(x⊗y⊗z + y⊗x⊗z : x, y, z ∈ T)$ (1)

and

$(T⊗T⊗T)_{-} = span(x⊗y⊗z − y⊗x⊗z : x, y, z ∈ T)$ (2)

are the only two nontrivial End(T)-submodules with respect to the action defined in [4]. However, it is easy to see that the space

$span(x⊗y⊗z + x⊗z⊗y : x, y, z ∈ T)$

is an End(T)-submodule with respect to the action above and that it does not coincide with either of the two spaces above.
Moreover, as was noted in [3] (date of publication: 12 August 2015), the submodule (1) has a nontrivial irreducible End(T)-submodule

$(T⊗T⊗T)_{*} = span(∑_{σ∈S_3} x_{σ(1)}⊗x_{σ(2)}⊗x_{σ(3)} : x_1, x_2, x_3 ∈ T);$

and the submodule (2) has a nontrivial irreducible End(T)-submodule

$(T⊗T⊗T)_{**} = span(∑_{σ∈S_3} (−1)^{σ} x_{σ(1)}⊗x_{σ(2)}⊗x_{σ(3)} : x_1, x_2, x_3 ∈ T).$

Also we note that in the proof of Theorem 4.3 [4] the authors have not used the second defining identity of Lie triple systems:

$[x,y,z] + [y,z,x] + [z,x,y] = 0.$

Therefore, they actually claim to have classified 3-Lie algebras with the property QDer = End. But from the classification of 3-Lie algebras with End = QDer [1, 3], it follows that some 3-Lie algebras with the above property are missing from their list.

## Acknowledgement

I. Kaygorodov was supported by RFBR 16-31-00096.

## References

- [1] Kaygorodov I., (n + 1)-ary derivations of semisimple Filippov algebras, Math. Notes, 96 (2014), 1-2, 206-216.
- [2] Kaygorodov I., Popov Yu., Generalized derivations of (color) n-ary algebras, arXiv:1506.00734
- [3] Kaygorodov I., Popov Yu., Generalized derivations of (color) n-ary algebras, Linear Multilinear Algebra, 64 (2016), 6, 1086-1106.
- [4] Zhou J., Chen L., Ma Y., Generalized derivations of Lie triple systems, Open Math., 14 (2016), 260-271.

Accepted: 2016-07-04 | Published Online: 2016-07-26 | Published in Print: 2016-01-01
Citation Information: Open Mathematics, ISSN (Online) 2391-5455
https://www.physicsforums.com/threads/laplace-and-fourier-transforms.316930/
# Laplace and Fourier Transforms

Well, not sure where to begin. I have a couple of questions that I have been unable to find answers to from professors/books/etc... I feel like I'm making a little progress in filling in the gaps, but not completely.

First of all, I just don't understand what makes the Laplace transform work so well. All of my books on differential equations just give the definition and then results and such. But how is the integral derived? I shudder to think that someone just "noticed" it works.

On to Fourier. I have not formally studied the Fourier transform yet, but I have studied the Fourier series. My first reaction to seeing the transform is that it looks so similar to the Laplace transform that maybe the Laplace is a specific case of the Fourier transform (kind of doubt that). I am still not sure exactly how the Fourier transform comes out of the Fourier series. How does this work? There seems to be a resemblance between the series in exponential form and the integral definition, but I am still not sure how the series goes into the integral definition.

Does the Fourier transform work pretty much the same way as the Laplace transform, only it takes care of complex functions? Also, are these two things linked somehow? Or should I stop with the bad habit of assumption and start to think of these as two completely independent things?

Sorry if this type of thing has been posted before. I used the search option but didn't find much.

**chroot:**

The variable $s$, used in the Laplace transform, is a complex number usually written in the form $s = \sigma + i \omega$. The Fourier transform is obtained from the Laplace transform by restricting $s$ to only the imaginary axis. In other words, if you fix $s = i \omega$, you get the Fourier transform. The Fourier transform is a special case of the Laplace transform.

The imaginary parameter, $\omega$, indicates frequency. The real parameter indicates damping.
Since the Fourier transform considers only pure tones (undamped (co)sine waves which oscillate forever), it makes sense that the Fourier transform considers only the imaginary parameter.

The history of these transforms involved a number of people and a great many years, and I'm not well-versed in it, so take what follows with a grain of salt. I believe Fourier's work came first, in describing functions as sums of sines and cosines. It was later discovered that Fourier's transform is a type of integral transform, and people like Laplace began studying other kinds of integral transforms, including those which can represent functions in terms of both damped and undamped (co)sine waves.

- Warren

Ok, thank you! That helps a lot. I reviewed the course description for the PDE course I'll be taking soon; I think I will wait until that class before continuing to study the subject, since I'll be learning about Fourier series in that class. So hopefully that will help bridge some gaps. The problem I have with studying topics on my own is I get sidetracked to other topics so easily.

**Mute:**

> The Fourier transform is obtained from the Laplace transform by restricting s to only the imaginary axis. In other words, if you fix $s = i \omega$, you get the Fourier transform. The Fourier transform is a special case of the Laplace transform.

That statement should probably be qualified by saying the Fourier transform is a special case of the *bilateral* Laplace transform, whose limits are the whole real line. The usual Laplace transform just covers the half real line, so the Fourier transform is generally not a special case of it.

There's a brief bit of history about the transform on Wikipedia's page: http://en.wikipedia.org/wiki/Laplace_transform

Also, note that there are some cases in which the Fourier transform exists but the bilateral Laplace transform doesn't, as the integral doesn't converge (or converges, but only for a limited range of the real part of $s$).
Typical examples are the characteristic function and the moment generating function of a probability distribution: the first is the Fourier transform, which always exists, and the second is the bilateral Laplace transform, which does not necessarily exist for all (or any) $s$.
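The relationship chroot and Mute describe (the Fourier transform equals the bilateral Laplace transform evaluated at $s = i\omega$, when the latter converges) can be sanity-checked numerically. The sketch below, assuming nothing beyond Python's standard library, compares a trapezoid-rule approximation of the Fourier integral of the damped one-sided signal f(t) = e^{-at} (t ≥ 0) against its Laplace transform 1/(s + a) evaluated at s = iω:

```python
import cmath

# Laplace transform of f(t) = exp(-a*t), supported on t >= 0, is 1/(s + a).
# Setting s = i*omega should reproduce the Fourier transform 1/(a + i*omega).
# We check this numerically with a simple trapezoid rule on [0, t_max].

def fourier_transform_numeric(f, omega, t_max=100.0, n=100_000):
    """Approximate integral_0^t_max f(t) * exp(-i*omega*t) dt (trapezoid rule)."""
    dt = t_max / n
    # endpoint terms get weight 1/2
    total = 0.5 * (f(0.0) + f(t_max) * cmath.exp(-1j * omega * t_max))
    for k in range(1, n):
        t = k * dt
        total += f(t) * cmath.exp(-1j * omega * t)
    return total * dt

a = 0.5
f = lambda t: cmath.exp(-a * t)      # damping makes the integral converge
omega = 2.0

numeric = fourier_transform_numeric(f, omega)
closed_form = 1 / (a + 1j * omega)   # Laplace transform 1/(s + a) at s = i*omega
print(abs(numeric - closed_form))    # small discretization error
```

The truncation at t_max is harmless here because the integrand decays like e^{-at}; for an undamped signal the Laplace integral would not converge on the imaginary axis, which is exactly Mute's caveat.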
https://math.stackexchange.com/questions/2525668/doubt-in-the-proof-of-poincaires-theorem-using-gauss-bonnet-theorem-local
# Doubt in the proof of Poincaré's theorem using the Gauss-Bonnet theorem (local).

I'm studying differential geometry through the book "Differential Geometry of Curves and Surfaces" by Manfredo P. do Carmo, and I have a doubt in the proof of Poincaré's theorem.

POINCARÉ'S THEOREM. The sum of the indices of a differentiable vector field $v$ with isolated singular points on a compact surface $S$ is equal to the Euler-Poincaré characteristic of the surface $S$.

The proof uses the Gauss-Bonnet theorem (local):

GAUSS-BONNET THEOREM (Local). Let $x$: $U\subset \mathbb{R}^2$ $\rightarrow$ $S$ be an orthogonal parametrization (that is, $\langle x_u,x_v\rangle = 0$) of an oriented surface $S$, where $U \subset \mathbb{R}^2$ is homeomorphic to an open disk and $x$ is compatible with the orientation of $S$. Let $R \subset x(U)$ be a simple region of $S$ and let $\alpha$: $I \rightarrow S$ be such that $\partial R = \alpha(I)$. Assume that $\alpha$ is positively oriented, parametrized by arc length $s$, and let $\alpha(s_0), \dots, \alpha(s_k)$ and $\theta_0, \dots, \theta_k$ be, respectively, the vertices and the external angles of $\alpha$. Then $$\sum_{i=0}^{k} \int_{s_{i}}^{s_{i+1}}k_g + \int \int_R K \, dS + \sum_{i=0}^{k} \theta_i = 2\pi,$$ where $k_g(s)$ is the geodesic curvature of the regular arcs of $\alpha$ and $K$ is the Gaussian curvature of $S$.

To prove this, Manfredo proposes the following solution:

Let $S \subset \mathbb{R}^3$ be an oriented, compact surface and $v$ a differentiable vector field with only isolated singular points. We remark that they are finite in number. Otherwise, by compactness, they would have a limit point, which would be a nonisolated singular point. Let $\{x_{\alpha}\}$ be a family of orthogonal parametrizations compatible with the orientation of $S$. Let $\mathcal{T}$ be a triangulation of $S$ such that

1. Every triangle $T \in \mathcal{T}$ is contained in some coordinate neighborhood of the family $\{x_\alpha\}$.
2.
Every $T \in \mathcal{T}$ contains at most one singular point.
3. The boundary of every $T \in \mathcal{T}$ contains no singular points and is positively oriented.

If we apply the Gauss-Bonnet theorem (local), **sum up the results, and take into account that each edge of each $T \in \mathcal{T}$ appears twice with opposite orientations**, we obtain $$\int \int_S K \, d\sigma - 2\pi\sum_{i=0}^{k} I_i = 0,$$ where $I_i$ is the index of the singular point $p_i$, $i = 1, \dots, k$. Joining this with the Gauss-Bonnet theorem (global), we finally arrive at $$\sum_{i=0}^{k} I_i = \frac{1}{2\pi} \int \int_S K \, d\sigma = \chi (S). \quad \mbox{Q.E.D.}$$

My problem is with the part marked in bold above. If we use Gauss-Bonnet on $T \in \mathcal{T}$, we get $$\int_{\partial T} k_g + \int \int_{T} K \, dS + \sum_{i=0}^{2} \theta_{iT} = 2\pi.$$ If we sum up the equation above for all $T \in \mathcal{T}$:
\begin{align*} \sum_{T \in \mathcal{T}}\left(\int_{\partial T} k_g + \int \int_{T} K \, dS + \sum_{i=0}^{2} \theta_{iT} \right) =& \sum_{T \in \mathcal{T}} 2\pi \\ \sum_{T \in \mathcal{T}} \int_{\partial T} k_g + \sum_{T \in \mathcal{T}} \int \int_{T} K \, dS + \sum_{T \in \mathcal{T}} \sum_{i=0}^{2} \theta_{iT} =& 2\pi \hspace{0.1cm}\# \mathcal{T} \\ 0 + \int \int_S K \, dS + \sum_{T \in \mathcal{T}}\sum_{i=0}^{2} \theta_{iT} - 2\pi \hspace{0.1cm}\# \mathcal{T} &= 0, \end{align*}
where $\# \mathcal{T}$ is the number of elements of the set $\mathcal{T}$.

My question: Why is $$\sum_{T \in \mathcal{T}}\sum_{i=0}^{2} \theta_{iT} - 2\pi \hspace{0.1cm}\# \mathcal{T} = 2 \pi \sum_{i=0}^{k} I_i \hspace{0.1 cm} ?$$

First of all, let me reformulate your problem. Let $X$ be a smooth tangent vector field on a compact oriented surface $S$, with isolated singularities $\{p_1, \dots, p_n\}$. Then the sum of the indices of the singularities, $$\sum_{k=1}^{n} I_k(X,p_k),$$ depends only on $S$ and not on the vector field.
The sum of the indices is equal to the Euler characteristic $\chi(S)$ of the surface $S$.

The basic idea of the proof starts by recalling the fact that the rotation number is $(2 \pi)^{-1}$ times the total curvature, i.e. $$rot(\gamma)=\frac{1}{2 \pi} \int_{\gamma} k_g(t) |\gamma'(t)| \, dt =\frac{1}{2 \pi} \int_{\gamma} k_g(s) \, ds,$$ where $\gamma$ is a smooth regular closed curve. Then we can compute the rotation index for the "boundary curves" of each of the faces in a subdivision and finally add the results up. It should be noted that the sign of the rotation index depends on whether you move along the curve in the positive or negative direction.

Consider now a triangulation, i.e. a collection of curved triangles such that any point on the surface either lies on one of the curves or in a region bounded by a curved triangle. Now, by joining the arcs forming each triangle, we obtain a continuous closed curve, so that we can define the rotation number of the regular piecewise smooth curve $\gamma$ to be the winding number of this loop.

However, we have to face a problem. The definition of the rotation number above applies to smooth regular closed curves, i.e. smooth Jordan curves, while in our case the boundaries of faces are only piecewise smooth. As a matter of fact, the tangent vector jumps at the vertices. Thinking of the meaning of the curvature, i.e. the magnitude of the rate of change of the unit tangent vector, and accounting for the angular changes in the tangent vector, we can generalize the rotation index to: $$rot(\gamma)=\frac{1}{2 \pi} \Big( \int_{\gamma} k_g(s) \, ds + \sum_i \theta_i \Big),$$ where $\gamma$ is now a regular piecewise smooth curve and $\theta_i$ is the $i^{th}$ external angle. For a regular piecewise smooth Jordan curve, i.e. a plane simple closed curve oriented in the positive direction, $rot(\gamma)$ is equal to 1.

When adding singularities the situation is different.
It should be noted that for each curve $\gamma_k$ used to build up $\gamma$ we have $I_k(X,p_k)=rot(\gamma_k)$. However, when considering $\gamma$, $rot(\gamma)=\sum_{k=1}^{n} wn(\gamma, p_k) I_k(X,p_k)$, where $wn$ is the winding number. Then $$rot(\gamma)=1-\sum_{k} I_k(X,p_k).$$

We assumed that all the singularities of $X$ are in the interiors of faces, so that for each $\gamma$ which bounds a face we can write: $$1-\sum_{k} I_k(X,p_k) =\frac{1}{2 \pi} \Big( \int_{\gamma} k_g(s) \, ds + \sum_i \theta_i \Big).$$ By applying the local Gauss-Bonnet theorem, $$1-\sum_{k} I_k(X,p_k) =\frac{1}{2 \pi} \Big( 2 \pi - \iint_S K \, dS \Big),$$ and then $$\sum_{k} I_k(X,p_k) =\frac{1}{2 \pi} \iint_S K \, dS.$$ You can now apply the global Gauss-Bonnet theorem to conclude that $$\sum_{k=1}^{n} I_k(X,p_k) = \chi(S).$$

To avoid any confusion, please note that the rotation number and the winding number are not necessarily equal. Informally, the winding number of a closed curve is the number of times the curve winds around a point. On the other hand, the rotation number is the number of rotations made by the tangent vector during one traversal of the curve. You can see the rotation number of a regular closed curve as the winding number of its derivative around the origin, or, more intuitively, the total number of times that a person walking once around the curve turns counterclockwise.
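As a concrete sanity check of the final identity (my own example, not part of the answer above): on the sphere $S^2$, the "north-south" field flowing from the north pole $N$ to the south pole $S$ has exactly two singularities, a source at $N$ and a sink at $S$, each of index $+1$:

```latex
% Source and sink each contribute +1 (a saddle would contribute -1),
% and the total matches the Euler characteristic of the sphere:
\sum_{k} I_k(X,p_k) \;=\; I(X,N) + I(X,S) \;=\; 1 + 1 \;=\; 2 \;=\; \chi(S^2).
```

Any other smooth field on $S^2$ with isolated singularities must give the same total, which is exactly what the theorem asserts.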
https://math.stackexchange.com/questions/942540/definition-of-functional-limits-and-isolated-points
# Definition of Functional Limits and Isolated Points

Let $(X,d)$ and $(Y,w)$ be two metric spaces, with $A \subseteq X$ and $f : A \to Y$. Also, let $p$ be a limit point of $A$. I have the following definition for functional limits:

$\lim_{x\to p}f(x) = L$ $\equiv$ $\forall \varepsilon>0, \exists \delta>0, \forall x \in A, (d(x,p)<\delta \to w(f(x),L) <\varepsilon)$.

Now my question. I'm trying to see what will happen if we re-define functional limits so as to allow $p$ to be any point in the closure of $A$ (not necessarily a limit point, but possibly an isolated point). I'm thinking that whenever we allow $p$ to be an isolated point of $A$, the symbolic definition will break down and cause weird results. But I'm trying to see, more precisely, what will happen. What goes wrong in the definition when we allow $p$ to be an isolated point?

I'm thinking that for those isolated points, the symbolic definition will become tautologically true, regardless of $L$, and hence $f$ will be able to have (by the old definition) infinitely many limits as it approaches an isolated point of its domain $A$. Is that accurate? How is that true?

P.S.: I have a definition of an isolated point of $A$ as a point such that there exists an open ball centered at it that contains no elements of $A$ other than itself.

I don't recognize your definition of a limit. The definition I know requires $x\ne p$, as below: $$\forall \varepsilon>0, \exists \delta>0, \forall x \in A,\ (0<d(x,p)<\delta \implies w(f(x),L) <\varepsilon).$$ If $p$ is an isolated point of $A$, then the set $\{x \in A : 0<d(x,p)<\delta\}$ is empty for small $\delta$. As such, the requirement $w(f(x),L)<\varepsilon$ holds vacuously, no matter what $L$ is. We lose uniqueness of the limit; worse yet, since every real number is a limit of $f$ at $p$, the concept carries no information about $f$. I think this is enough to say that extending the definition would be a bad idea.
The definition of continuity can be and is extended to isolated points; every function is continuous at every isolated point of its domain. This is consistent with the topological characterization of continuity (preimages of open sets are open), which applies to every kind of domain, including those with isolated points.

• In France the limit is usually defined with only $d(x,p)<\delta$, and not $0<d(x,p)<\delta$ (with the latter it is called a "limite épointée", a punctured limit). – xxx Sep 23 '14 at 11:49
• Exactly, I also had the limit defined with $d(x,p)<\delta$ and not $0<d(x,p)<\delta$. Before I posted this question I was already thinking that if $x$ could not be equal to $p$, then I could see how the definition would break, but since I had the limit defined allowing $x=p$, I don't see how isolated points break the definition. Should that mean that there's no problem allowing isolated points in my definition? – nerdy Sep 23 '14 at 15:42
• @nerdy If you are using the French definition, everything's fine. The French limit of $f$ at an isolated point is equal to the value of $f$ at that point. – user147263 Sep 23 '14 at 16:05
• @nerdy Since $x=p$ qualifies for $d(x,p)<\delta$, the implication requires $w(f(p),L)<\epsilon$. For all $\epsilon$. Hence $f(p)=L$. – user147263 Sep 29 '14 at 22:44
• @nerdy Notice that $x\ne p$ in $\forall x \in A \cap (p-\delta,p)$. Therefore, if $p$ is an isolated point of $A$, the implication becomes true regardless of $L$. – user147263 Sep 29 '14 at 22:59
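The vacuous-truth phenomenon the answer describes can be made concrete with a small finite example (my own sketch; the set A, the point p, and the function f below are invented for illustration). With the punctured definition, every candidate L passes at an isolated point; with the non-punctured "French" definition, only L = f(p) does:

```python
# A finite subset of the real line; p = 5.0 is an isolated point of A
# (every other point of A is at distance >= 4.8 from it).
A = [0.0, 0.1, 0.2, 5.0]
p = 5.0
f = {0.0: 1.0, 0.1: 1.1, 0.2: 1.2, 5.0: 42.0}

def is_limit(L, delta=0.5, eps_values=(1.0, 0.1, 0.01), punctured=True):
    """Check the epsilon-delta condition at p with one fixed delta.

    punctured=True  uses  0 < |x - p| < delta   (the usual definition);
    punctured=False uses      |x - p| < delta   (the "French" definition).
    """
    for eps in eps_values:
        for x in A:
            d = abs(x - p)
            near = (0 < d < delta) if punctured else (d < delta)
            if near and abs(f[x] - L) >= eps:
                return False
    return True

# Punctured: {x in A : 0 < |x - p| < 0.5} is empty, so the condition holds
# vacuously for EVERY candidate L -- the "limit" is not unique.
print(is_limit(3.0), is_limit(-7.0))                                    # True True
# Non-punctured: x = p itself qualifies, which forces L = f(p) = 42.
print(is_limit(42.0, punctured=False), is_limit(3.0, punctured=False))  # True False
```

The finite set stands in for the ball around p: once the punctured ball is empty, the inner loop never finds a counterexample, which is exactly the vacuous implication in the answer.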
https://www.hackmath.net/en/math-problem/13681
# The aspect ratio

The sides of a right triangle are in the ratio 13 : 12 : 5. Calculate the internal angles of the triangle.

A = 90°
B = 67.3801°
C = 22.6199°

### Step-by-step explanation:

Since $13^2 = 169 = 144 + 25 = 12^2 + 5^2$, the triangle is right-angled with hypotenuse 13, so $A = 90°$. From $\sin B = 12/13$ we get $B = \arcsin(12/13) \approx 67.3801°$, and then
$$C = 90° - B = 90° - 67.3801° = 22.6199° = 22°37'11''.$$

Try the calculation via our triangle calculator.

Did you find an error or inaccuracy? Feel free to write us. Thank you!

Tips for related online calculators: Check out our ratio calculator. The Pythagorean theorem is the basis for the right triangle calculator.

## Related math problems and questions:

• Right triangle: Calculate the missing side b and interior angles, perimeter, and area of a right triangle if a = 10 cm and hypotenuse c = 16 cm.
• Ratio iso triangle: The ratio of the sides of an isosceles triangle is 7:6:7. Find the base angle to the nearest answer correct to 3 significant figures.
• Largest angle of the triangle: Calculate the largest angle of the triangle whose sides have the sizes 2a, 3/2a, 3a.
• Area and two angles: Calculate the size of all sides and internal angles of a triangle ABC, if it is given by area S = 501.9 and two internal angles α = 15°28' and β = 45°.
• Angles by cosine law: Calculate the size of the angles of the triangle ABC, if it is given by a = 3 cm, b = 5 cm, c = 7 cm (use the sine and cosine theorems).
• Triangle in a square: In a square ABCD with side a = 6 cm, point E is the center of side AB and point F is the center of side BC. Calculate the size of all angles of the triangle DEF and the lengths of its sides.
• Right triangle eq2: Find the lengths of the sides and the angles in the right triangle, given area S = 210 and perimeter o = 70.
• Trapezoid MO: The rectangular trapezoid ABCD with the right angle at point B, |AC| = 12, |CD| = 8, diagonals are perpendicular to each other.
Calculate the perimeter and area of the trapezoid.
• Calculate triangle: In the triangle ABC, calculate the sizes of all heights, angles, perimeters, and its area, given a = 40 cm, b = 57 cm, c = 59 cm.
• Right triangle trigonometrics: Calculate the size of the remaining sides and angles of a right triangle ABC if it is given: b = 10 cm, c = 20 cm, angle alpha = 60°, and angle beta = 30° (use the Pythagorean theorem and the functions sine, cosine, tangent, cotangent).
• Right triangle: It is given a right triangle with angle alpha of 90 degrees, angle beta of 55 degrees, and c = 10 cm; use the Pythagorean theorem to calculate sides a and b.
• SSA and geometry: The distance between the points P and Q was 356 m measured in the terrain. The PQ line can be seen from the viewer at a viewing angle of 107°22'. The observer's distance from P is 271 m. Determine the viewing angle of P and the observer.
• IS triangle: Calculate the interior angles of the isosceles triangle with base 40 cm and legs 22 cm long.
• Roof angle: The roof of the house has the shape of an isosceles triangle with arms 4 m long and a base 6 m long. How big an angle alpha does its roof make?
• Calculate 2: Calculate the largest angle of the triangle whose sides are 5.2 cm, 3.6 cm, and 2.1 cm.
• Triangle's centroid: In the triangle ABC the lengths of two medians are given, tc = 9 and ta = 6. Let T be the intersection of the medians (the triangle's centroid) and let S be the center of the side BC. The magnitude of the angle CTS is 60°. Calculate the length of the side BC to 2 decimal places.
• Right angle: In a right triangle ABC with a right angle at the apex C, we know the side length AB = 24 cm and the angle at the vertex B = 71°. Calculate the length of the legs of the triangle.
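The answer above can be double-checked in a few lines of Python (a sketch using only the standard math module):

```python
import math

# Sides in ratio 13 : 12 : 5; since 13**2 == 12**2 + 5**2, the triangle is
# right-angled and 13 is the hypotenuse, so the angle A opposite it is 90°.
a, b, c = 13, 12, 5
assert a * a == b * b + c * c

A = 90.0
B = math.degrees(math.asin(b / a))  # angle opposite the side 12
C = math.degrees(math.asin(c / a))  # angle opposite the side 5

print(round(A, 4), round(B, 4), round(C, 4))  # 90.0 67.3801 22.6199
```

Because the two acute angles are complementary, B + C comes out to exactly 90°, matching the step C = 90° − B in the solution.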
http://www.numdam.org/item/SPS_1975__9__471_0/
Multiplicative excessive measures and duality between equations of Boltzmann and of branching processes

Nagasawa, Masao. Séminaire de probabilités de Strasbourg, Tome 9 (1975), pp. 471-485. Springer - Lecture Notes in Mathematics. | Zbl 0316.60050 | MR 438503. http://www.numdam.org/item/SPS_1975__9__471_0/

References:

[1] T.E. Harris, The theory of branching processes (1963), Springer. | MR 163361 | Zbl 0117.13002
[2] N. Ikeda, M. Nagasawa, S. Watanabe, Branching Markov processes I, II, III, J. Math. Kyoto Univ., vol. 8 (1968), 233-278, 365-410; vol. 9 (1969), 95-160. | MR 238401 | Zbl 0233.60070
[3] H.P. McKean, A class of Markov processes associated with nonlinear parabolic equations, Proc. Nat. Acad. Sci., vol. 56 (1966), 1907-1911. | MR 221595 | Zbl 0149.13501
[4] H.P. McKean, Speed of approach to equilibrium for Kac's caricature of a Maxwellian gas, Archive for Rational Mechanics and Analysis, vol. 21 (1966), 343-367. | MR 214112
[5] H.P. McKean, An exponential formula for solving Boltzmann's equation for a Maxwellian gas, J. of Combinatorial Theory, vol. 2 (1967), 358-382. | MR 224348 | Zbl 0152.46501
[6] M. Nagasawa, K. Sato, Some theorems on time change and killing of Markov processes, Kodai Math. Sem. Rep., vol. 15 (1963), 195-219. | MR 164372 | Zbl 0123.35202
[7] M. Nagasawa, Time reversions of Markov processes, Nagoya Math. Journal, vol. 24 (1964), 177-204. | MR 169290 | Zbl 0133.10702
[8] M. Nagasawa, Branching property of Markov processes, Lecture Notes in Mathematics, vol. 258, Séminaire de Probabilités de Strasbourg VI, Springer, 1972, 177-197. | MR 378132 | Zbl 0231.60073
[9] M. Nagasawa, Multiplicative excessive measures of branching processes, Proc. Japan Acad., vol. 49 (1973), 497-499. | MR 341645 | Zbl 0289.60047
[10] Y. Takahashi, Markov semi-groups with simple interaction I, II, Proc. Japan Acad., vol. 47 (1971), Suppl. II, 974-978, 1019-1024. | MR 324778 | Zbl 0276.60075
[11] H. Tanaka, Propagation of chaos for certain purely discontinuous Markov processes with interactions, J. Fac. Sci. Univ. Tokyo, vol. 17 (1970), 259-272. | MR 282410 | Zbl 0211.48403
[12] H. Tanaka, Purely discontinuous Markov processes with non-linear generators and their propagation of chaos, Teor. Ver. Prim., vol. 15 (1970), 599-621. | MR 279911 | Zbl 0327.60043
[13] T. Ueno, A class of Markov processes with interactions I, II, Proc. Japan Acad., vol. 45 (1969), 641-646, 995-1000. | MR 268962 | Zbl 0317.60042
[14] T. Ueno, A class of Markov processes with non-linear bounded generators, Japanese J. of Math., vol. 38 (1969), 19-38. | MR 260027 | Zbl 0185.45801
https://www.rdocumentation.org/packages/ape/versions/2.7-3/topics/read.nexus
From ape v2.7-3

##### Read Tree File in Nexus Format

This function reads one or several trees in a NEXUS file.

Keywords: manip, IO

##### Usage

read.nexus(file, tree.names = NULL)

##### Arguments

file: a file name specified by either a variable of mode character, or a double-quoted string.

tree.names: if there are several trees to be read, a vector of mode character that gives names to the individual trees.

##### Details

The present implementation tries to follow as much as possible the NEXUS standard (but see the restriction below on TRANSLATION tables). Only the block "TREES" is read; the other data can be read with other functions (e.g., read.dna, read.table, ...). A trace of the original data is kept with the attribute "origin" (see below).

If a TRANSLATION table is present, it is assumed that only the tip labels are translated and that they are all translated with integers without gap. Consequently, if nodes have labels in the tree(s), they are read as they are and are not looked for in the translation table. The logic behind this is that in the vast majority of cases, node labels will be support values rather than proper taxa names. This is consistent with write.nexus, which translates only the tip labels.

read.nexus tries to represent correctly trees with a badly represented root edge (i.e. with an extra pair of parentheses). For instance, the tree "((A:1,B:1):10);" will be read like "(A:1,B:1):10;", but a warning message will be issued in the former case as this is apparently not a valid Newick format. If there are two root edges (e.g., "(((A:1,B:1):10):10);"), then the tree is not read and an error message is issued.
##### Value

an object of class "phylo" with the following components:

- edge: a two-column matrix of mode character where each row represents an edge of the tree; the nodes and the tips are symbolized with numbers (these numbers are not treated as numeric, hence the mode character); the nodes are represented with negative numbers (the root being "-1"), and the tips are represented with positive numbers. For each row, the first column gives the ancestor. This representation allows an easy manipulation of the tree, particularly if it is rooted.
- edge.length: a numeric vector giving the lengths of the branches given by edge.
- tip.label: a vector of mode character giving the names of the tips; the order of the names in this vector corresponds to the (positive) number in edge.
- node.label: (optional) a vector of mode character giving the names of the nodes (set to NULL if not available in the file).
- root.edge: (optional) a numeric value giving the length of the branch at the root if it exists (NULL otherwise).

If several trees are read in the file, the returned object is of class "multiPhylo" and is a list of objects of class "phylo". An attribute "origin" is further given to the returned object, which gives the name of the source file (with its path). This is used to write a tree in a NEXUS file where all the original data must be written (not only the tree), in accordance with the specifications of Maddison et al. (1997).

##### References

Maddison, D. R., Swofford, D. L. and Maddison, W. P. (1997) NEXUS: an extensible file format for systematic information. Systematic Biology, 46, 590-621.

##### See Also

read.tree, write.nexus, write.tree, read.nexus.data, write.nexus.data
https://leanprover-community.github.io/archive/stream/116395-maths/topic/proving.20size.20at.20least.203.html
## Stream: maths

### Topic: proving size at least 3

#### Kevin Buzzard (Aug 13 2018 at 10:35):

I would be interested in a relatively slick proof of either of the below examples:

```lean
import data.fintype
import set_theory.cardinal

-- fintype
open fintype
example (α : Type) [fintype α] (a b c : α)
  (Hab : a ≠ b) (Hbc : b ≠ c) (Hac : a ≠ c) : card α ≥ 3 := sorry

-- general
example (α : Type) (a b c : α)
  (Hab : a ≠ b) (Hbc : b ≠ c) (Hac : a ≠ c) : cardinal.mk α ≥ 3 := sorry
```

This is for pedagogical purposes and I don't really mind if we stick to fintypes or not. As a side issue, is cardinal.mk really the way to talk about the cardinality of a type? Is there not some interface function?

#### Mario Carneiro (Aug 13 2018 at 10:38):

cardinal.mk is the interface function

#### Kenny Lau (Aug 13 2018 at 10:58):

```lean
import data.fintype
import set_theory.cardinal

@[derive decidable_eq] inductive three : Type
| A : three
| B : three
| C : three

open three

instance : fintype three :=
{ elems := {A, B, C}, complete := λ x, by cases x; simp }

theorem three.cardinal : cardinal.mk three = 3 :=
(cardinal.fintype_card three).trans $ show ((3 : nat) : cardinal) = 3, by simp

-- fintype
open fintype
example (α : Type) [fintype α] (a b c : α)
  (Hab : a ≠ b) (Hbc : b ≠ c) (Hac : a ≠ c) : card α ≥ 3 :=
show card three ≤ card α,
from card_le_of_injective (λ n, three.rec_on n a b c) $
  λ x y h, by cases x; cases y; dsimp at h; cc

-- general
example (α : Type) (a b c : α)
  (Hab : a ≠ b) (Hbc : b ≠ c) (Hac : a ≠ c) : cardinal.mk α ≥ 3 :=
three.cardinal ▸ nonempty.intro
  ⟨λ n, three.rec_on n a b c, λ x y h, by cases x; cases y; dsimp at h; cc⟩
```

#### Mario Carneiro (Aug 13 2018 at 10:58):

hm, I needed some additional library functions for this, attached.
The main proof is not so hard:

```lean
@[simp] lemma fintype.card_coe (s : finset α) :
  fintype.card (↑s : set α) = s.card := card_attach

theorem card_le_of_finset {α} (s : finset α) :
  (s.card : cardinal) ≤ cardinal.mk α :=
begin
  rw (_ : (s.card : cardinal) = cardinal.mk (↑s : set α)),
  { exact ⟨function.embedding.subtype _⟩ },
  rw [cardinal.fintype_card, fintype.card_coe]
end

-- fintype
open fintype
example (α : Type) [fintype α] (a b c : α)
  (Hab : a ≠ b) (Hbc : b ≠ c) (Hac : a ≠ c) : card α ≥ 3 :=
finset.card_le_of_subset (finset.subset_univ ⟨a::b::c::0, by simp *⟩)

-- general
example (α : Type) (a b c : α)
  (Hab : a ≠ b) (Hbc : b ≠ c) (Hac : a ≠ c) : cardinal.mk α ≥ 3 :=
begin
  suffices : ((3:ℕ) : cardinal) ≤ cardinal.mk α, {simpa},
  exact card_le_of_finset ⟨a::b::c::0, by simp *⟩
end
```

#### Kenny Lau (Aug 13 2018 at 10:58):

just 3 seconds apart!

#### Kevin Buzzard (Aug 13 2018 at 18:33):

Thanks to both of you! [I've only just seen these].

#### Kevin Buzzard (Aug 13 2018 at 22:15):

This is one of those "easy in maths, hard in Lean" moments :-/ I am going to need stuff like "card X = 3 iff there exists a,b,c all distinct and every element of X must be a, b or c" [but I've gotta scoot]. I think I can take it from here but this is all a bit ugly.
Mathematicians are so good at 3 :-/

#### Mario Carneiro (Aug 14 2018 at 02:04):

that latter fact is essentially exactly the definition of a fintype instance where the underlying multiset has three elements

#### Kevin Buzzard (Aug 14 2018 at 19:36):

Here's a proof for the cardinal case:

```lean
import set_theory.cardinal

lemma three (α : Type) (Hthree : cardinal.mk α = 3) :
  ∃ a b c : α, a ≠ b ∧ b ≠ c ∧ c ≠ a ∧ ∀ d : α, d = a ∨ d = b ∨ d = c :=
begin
  rw ←(show cardinal.mk (fin 3) = 3, by simp) at Hthree,
  cases (quotient.exact Hthree) with Hequiv,
  let a := Hequiv.symm 0, let b := Hequiv.symm 1, let c := Hequiv.symm 2,
  have H12 : a ≠ b := by simp [Hequiv]; exact dec_trivial,
  have H23 : b ≠ c := by simp [Hequiv]; exact dec_trivial,
  have H31 : c ≠ a := by simp [Hequiv]; exact dec_trivial,
  existsi a, existsi b, existsi c,
  refine ⟨H12, H23, H31, λ d, _⟩,
  cases H : (Hequiv d) with n Hn,
  cases n with e He,
  left, show d = Hequiv.symm ⟨0, Hn⟩, rw ←H, simp,
  cases e with e He,
  right, left, show d = Hequiv.symm ⟨1, Hn⟩, rw ←H, simp,
  cases e with e He,
  right, right, show d = Hequiv.symm ⟨2, Hn⟩, rw ←H, simp,
  exfalso, apply not_le_of_gt Hn, exact dec_trivial,
end
```

Now I need to do four :cry: (but that's the last one)

#### Mario Carneiro (Aug 14 2018 at 19:37):

noo... my heart, it hurts

#### Mario Carneiro (Aug 14 2018 at 19:38):

n is easier than 3

#### Kevin Buzzard (Aug 14 2018 at 19:39):

So 4 is easier than 3? :-)

#### Mario Carneiro (Aug 14 2018 at 19:39):

3 is easier than 3

#### Kevin Buzzard (Aug 14 2018 at 19:41):

I did think about doing the general case but at the end of the day I want to extract exactly those things in the conclusion, and I wasn't entirely sure how easy it would be if I had a list of size n or whatever, so I decided to bite the bullet now rather than later.
#### Mario Carneiro (Aug 14 2018 at 19:41):

trust me, it's way easier to conclude from the general statement, even if the final goal is exactly the statement you wrote

#### Mario Carneiro (Aug 14 2018 at 19:42):

hint: if you have a list of length 3, then you can match it against [a, b, c]

#### Kevin Buzzard (Aug 14 2018 at 19:42):

I was a bit surprised to see simp leave me with a goal not 0 = 1 in the H12 proof.

#### Mario Carneiro (Aug 14 2018 at 19:43):

and d \in [a, b, c] and list.nodup [a, b, c] will simplify to the disjunctions you wrote

#### Kevin Buzzard (Aug 14 2018 at 19:43):

This is the stupid cardinal version, because Richard Thomas complained that I was assuming unnecessary finiteness hypotheses.

#### Mario Carneiro (Aug 14 2018 at 19:44):

there are theorems showing equivalence to the finite versions in cardinal

#### Kevin Buzzard (Aug 14 2018 at 19:44):

Oh OK, maybe I'll take it from here. Thanks!

#### Kevin Buzzard (Aug 14 2018 at 19:45):

It's just my lack of experience which made me do the 3 case explicitly. I could see I could try for the n case, but I figured that doing the 3 case directly would be less painful. I guess your instincts immediately told you otherwise.

#### Mario Carneiro (Aug 14 2018 at 19:46):

even 2 is sometimes tricky, but certainly 2 < n < 3

Last updated: May 12 2021 at 07:17 UTC
https://www.bartleby.com/solution-answer/chapter-12-problem-98cwp-chemistry-9th-edition/9781133611097/a-certain-reaction-has-the-form-aaproducts-at-a-particular-temperature-concentration-versus-time/d9458524-a26d-11e8-9bb5-0ece094302b6
Chapter 12, Problem 98CWP

### Chemistry 9th Edition
Steven S. Zumdahl
ISBN: 9781133611097

Textbook Problem

# A certain reaction has the form aA → Products

At a particular temperature, concentration versus time data were collected. A plot of 1/[A] versus time (in seconds) gave a straight line with a slope of 6.90 × 10⁻². What is the differential rate law for this reaction? What is the integrated rate law for this reaction? What is the value of the rate constant for this reaction? If [A]₀ for this reaction is 0.100 M, what is the first half-life (in seconds)? If the original concentration (at t = 0) is 0.100 M, what is the second half-life (in seconds)?

Interpretation Introduction

Interpretation: For the given reaction, the differential and integrated rate laws are to be stated, the rate constant is to be calculated, and the first and second half-life values for the given initial concentration are to be calculated.

Concept introduction: The rate constant is a proportionality coefficient that relates the rate of a chemical reaction at a specific temperature to the concentration of the reactant or the product. The order of a reaction is the power to which a reactant's concentration is raised in the rate law. The half-life is the time in which the amount of the species reduces to half of its value.

To determine: The differential and integrated rate laws; the rate constant; the first and second half-lives for the given reaction.

Explanation

Given: the reaction aA → products; the plot of 1/[A] vs. time (s) is a straight line with a slope of 6.90 × 10⁻².

A linear plot of 1/[A] versus time occurs only for a second-order reaction, so the given reaction is second order. For the second-order reaction aA → Products, the differential rate law is

Rate = dx/dt = k[A]²

where,
• dx/dt is the differential rate representation.
• k is the rate constant.
• [A] is the concentration.
• 2 is the order of the reaction.

The integrated form of the rate law is obtained by following the change in concentration from the initial time to a time t: for aA → Products, the concentration is a at time = 0 and (a − x) at time = t, where x is the amount that has reacted. The differential rate law can therefore be written as

dx/dt = k[A]² = k(a − x)²

where (a − x) is the remaining concentration. Integrating within the limits 0 to t with respect to time and 0 to x for concentration,

∫₀ˣ dx/(a − x)² = k ∫₀ᵗ dt

Simplifying the above expression gives 1/(a − x) − 1/a = kt, i.e. the integrated rate law 1/[A] = 1/[A]₀ + kt.
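The requested numbers follow directly from the second-order relations above: the rate constant equals the slope of the 1/[A] plot, and the half-life is t½ = 1/(k[A]₀), which doubles each successive half-life because the starting concentration halves. A short calculation (Python here purely for illustration):

```python
# Second-order kinetics: 1/[A] = 1/[A]0 + k*t, so k is the slope
# of the 1/[A] vs t plot.
k = 6.90e-2      # L mol^-1 s^-1 (the given slope)
A0 = 0.100       # M, initial concentration

# First half-life: [A] falls from 0.100 M to 0.050 M.
t_half_1 = 1.0 / (k * A0)

# Second half-life: starts from [A] = 0.050 M, so it is twice as long.
t_half_2 = 1.0 / (k * (A0 / 2.0))

print(round(t_half_1, 1))   # ≈ 144.9 s
print(round(t_half_2, 1))   # ≈ 289.9 s
```

So the first half-life is about 145 s and the second about 290 s, illustrating that for a second-order reaction the half-life is not constant but grows as the reactant is consumed.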
http://wyvern.phys.s.u-tokyo.ac.jp/abs/jdlee.htm
Abstract: We study the dynamics of spins in canted antiferromagnetic hematite ($\alpha$-Fe$_2$O$_3$) nanoparticles. The system includes an antiferromagnetic exchange ($J$) between two sublattices as well as a potential dominated by a uniaxial anisotropy, and dissipates through the thermally triggered spin-phonon mechanism. The exchange $J$ is found to reduce the superparamagnetic relaxation and to drive the incoherent relaxation of transverse spins at low temperature. The dynamical properties are semi-quantitatively understood in terms of the role of the exchange interaction $J$ and agree well with recent inelastic neutron scattering experiments. As an outlook, time permitting, we will introduce further interesting topics in the same system.
https://www.aimsciences.org/article/doi/10.3934/dcdsb.2019242
# American Institute of Mathematical Sciences

## Investigating the effects of intervention strategies in a spatio-temporal anthrax model

1 Department of Science and Mathematics, Abraham Baldwin Agricultural College, Tifton, GA 31793, USA
2 Department of Mathematics, University of Tennessee, Knoxville, TN 37996, USA
* Corresponding author: bpantha@abac.edu

Received: February 2019. Revised: July 2019. Published: November 2019.

In this paper, we extend our previous work on optimal control applied to an anthrax outbreak in wild animals. We use a system consisting of an ordinary differential equation (ODE) and partial differential equations (PDEs) to track the changes in susceptible, infected and vaccinated animals as well as the infected carcasses. In addition to the assumption that the infected animals and the infected carcasses are the main sources of infection, we model animal movement by diffusion and examine its effect on disease transmission. Two controls, vaccinating susceptible animals and properly disposing of infected carcasses, are applied in the model; both controls depend on space and time. We formulate an optimal control problem to investigate the effect of intervention strategies in our spatio-temporal model in controlling the outbreak at minimum cost. Finally, some numerical results for the optimal control problem are presented.

Citation: Buddhi Pantha, Judy Day, Suzanne Lenhart. Investigating the effects of intervention strategies in a spatio-temporal anthrax model. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2019242

##### References:

[1] S. Altizer, R. Bartel and B. A. Han, Animal migration and infectious disease risk, Science, 331 (2011), 296-302.  doi: 10.1126/science.1194694.  Google Scholar [2] Animal Diversity Web, https://animaldiversity.org/, Accessed May 2018. Google Scholar [3] J. K. Blackburn, A. Curtis, T. L. Hadfield, B. O'Shea, M. A. Mitchell and M. E.
Hugh-Jones, Confirmation of Bacillus anthracis from flesh-eating flies collected during a West Texas Anthrax Season, Journal of Wildlife Diseases, 46 (2010), 918-922.  doi: 10.7589/0090-3558-46.3.918.  Google Scholar [4] L. Busch, Bison herd suffers worst anthrax outbreak on record, Northern News Services Online, (2012), http://www.nnsl.com/frames/newspapers/2012-08/aug13\_12bs.html. Google Scholar [5] J. R. Castello, Bovids of World: Antelopes, Gazelles, Cattle, Goats, Sheep and Relatives, Princeton University Press, 2016. doi: 10.1515/9781400880652.  Google Scholar [6] S. Chawla and S. M. Lenhart, Application of optimal control theory to bioremediation, Journal of Computational and Applied Mathematics, 114 (2000), 81-102.  doi: 10.1016/S0377-0427(99)00290-3.  Google Scholar [7] S. Clegg, P. Turnbull, C. Foggin and P. Lindeque, Massive outbreak of anthrax in wildlife in the Malilangwe Wildlife Reserve, Zimbabwe, The Veterinary Record, 160 (2007), 113-118.  Google Scholar [8] Department of Agriculture Forestry and Fisheries, Republic of South Africa, http://gadi.agric.za/articles/Furstenburg\_D, Accessed, 2018. Google Scholar [9] D. C. Dragon and B. T. Elkin, An overview of early Anthrax Outbreaks in Northern Canada: Field reports of the Health of Animals Branch, Agriculture Canada 1962-71, Arctic, 54 (2001), 1-104.  doi: 10.14430/arctic761.  Google Scholar [10] D. Dragon and R. Rennie, The ecology of anthrax spores: Tough but not invincible, Canadian Veterinary Journal, 36 (1995), 295-301.  Google Scholar [11] P. van den Driessche and J. Watmough, Reproduction number and sub-threshold endemic equilibria for compartmental models of disease transmission, Mathematical Bioscience, 180 (2002), 29-48.  doi: 10.1016/S0025-5564(02)00108-6.  Google Scholar [12] L. C. Evans, Partial Differential Equations, Graduate Studies in Mathematics, 19. American Mathematical Society, Providence, RI, 1998.
Google Scholar [13] Experience Zimbabwe, http://www.experiencezimbabwe.com/experience/attractions/malilangwe-wildlife-reserve, Accessed, 2018. Google Scholar [14] A. Fasanella, D. Galante, G. Garofolo and M. Hugh-Jones, Anthrax under valued zoonosis, Veterinary Microbiology, 140 (2010), 318-331.   Google Scholar [15] E. M. Fevre, B. M. de C. Bronsvoort, K. A. Hamilton and S. Cleaveland, Animal movements and the spread of infectious diseases, Trends in Microbiology, 14 (2006), 125-131.  doi: 10.1016/j.tim.2006.01.004.  Google Scholar [16] P. R. Furniss and B. D. Hahn, A mathematical model of an anthrax epizootic in the Kruger National Park, Applied Math Modeling, 5 (1981), 130-136.  doi: 10.1016/0307-904X(81)90034-2.  Google Scholar [17] K. R. Fister, S. Lenhart and J. McNally, Optimizing chemotherapy in an HIV model, Electronic Journal of Differential Equations, 1998 (1998), 12 pp.  Google Scholar [18] A. Friedman and A.-A. Yakubu, Anthrax epizootic and migration: Persistence or extinction, Mathematical Bioscience, 241 (2013), 137-144.  doi: 10.1016/j.mbs.2012.10.004.  Google Scholar [19] W. Hackbusch, A numerical method for solving parabolic equations with opposite orientations, Computing, 20 (1978), 229-240.  doi: 10.1007/BF02251947.  Google Scholar [20] B. D. Hahn and P. R. Furniss, A deterministic model of and anthrax epizootic: Threshold results, Ecological Modelling, 20 (1983), 233-241.  doi: 10.1016/0304-3800(83)90009-1.  Google Scholar [21] L. Hartfield, Bad year for anthrax outbreaks in US livestock, Center for Infectious Disease Research and Policy (CIDRAP), University of Minnesota, (2005), http://www.cidrap.umn.edu/news-perspective/2005/08/bad-year-anthrax-outbreaks-us-livestock. Google Scholar [22] M. E. Hugh-Jones and V. De Vos, Anthrax and wildlife, Scientific and Technical Review of the Office International des Epizooties, 21 (2003), 359-383.  doi: 10.20506/rst.21.2.1336.  Google Scholar [23] M. 
Kot, Elements of Mathematical Ecology, Cambridge University Press, Cambridge, 2001.  doi: 10.1017/CBO9780511608520.  Google Scholar [24] I. Kracalik, L. Malania, M. Broladze, A. Navdarashvili, P. mnadze, S. J. Rya and J. Blackburn, Changing livestock vaccination policy alters the epidemiology of human anthrax, Georgia, 2000-2013., Vaccine, 35 (2017), 6283-6289.  doi: 10.1016/j.vaccine.2017.09.081.  Google Scholar [25] S. Lenhart and J. T. Workman, Optimal Control Applied to Biological Models, Chapman & Hall/CRC Mathematical and Computational Biology Series, Chapman & Hall/CRC, Boca Raton, FL, 2007.  Google Scholar [26] C. Loehle, Social and behavioral barriers to pathogen transmission in wild animal populations, Clinical & Translational Immunology, 3 (1995), 1-6.  doi: 10.2172/666220.  Google Scholar [27] D. L. Lukes, Differential Equations: Classical to Controlled, Mathematics in Science and Engineering, 162. Academic Press, Inc., London-New York, 1982.   Google Scholar [28] The MathWorks Inc, Global optimization toolbox user's guide, Release 2015a, 2015. Google Scholar [29] R. Miller Neilan and S. Lenhart, Optimal vaccine distribution in a spatiotemporal epidemic model with an application to rabies and raccoons, Journal of Mathematical Analysis and Applications, 378 (2011), 603-619.  doi: 10.1016/j.jmaa.2010.12.035.  Google Scholar [30] J. S. Nishi, D. C. Dragon, B. T. Elkin, J. Mitchell, T. R. Ellsworth and M. E. Hugh-Jones, Emergency response planning for anthrax outbreaks in bison herds of northern canada, Annals of the New York Academy of Sciences, 969 (2002), 245-250.  doi: 10.1111/j.1749-6632.2002.tb04386.x.  Google Scholar [31] B. Pantha, J. Day and S. Lenhart, Optimal control applied in an anthrax epizootic model, Journal of Biological Systems, 24 (2016), 495-517.  doi: 10.1142/S021833901650025X.  Google Scholar [32] C. V. Pao, Nonlinear Parabolic and Elliptic Equations, Plenum Press, New York, 1992.  doi: 10.1007/978-1-4615-3034-3.  
Google Scholar [33] C. M. Saad-Roy, P. van den Driessche and A.-A. Yakubu, A mathematical model of anthrax transmission in animal populations, Bulletin of Mathematical Biology, 79 (2017), 303-324.  doi: 10.1007/s11538-016-0238-1.  Google Scholar [34] A. H. Seydack, C. C. Grant, I. P. Smit, W. J. Vermeulen, J. Baard and N. Zambatis, Large herbivore population performance and climate in a South African semi-arid Savanna, KOEDOE, 54 (2012), a1047. doi: 10.4102/koedoe.v54i1.1047.  Google Scholar [35] S. V. Shadomy and T. L. Smith, Anthrax, Journal of the American Veterinary Medical Association, 233 (2008), 63-72.  doi: 10.2460/javma.233.1.63.  Google Scholar [36] J. Simon, Compact sets in the space $L^p(0, T, B)$, Ann. Mat. Pura Appl., 146 (1987), 65-96.  doi: 10.1007/BF01762360.  Google Scholar [37] J. Skellam, The Formulation and Interpretation of Mathematical Models of Diffusionary Processes in Population Biology, The Mathematical Theory of the Dynamics of Biological Populations, Academic Press, 1973.  Google Scholar [38] J. Tello and G. Van, The natural history of nyala, Tragelaphus angasi (Mammalia, Bovidae) in Mozambique, Bulletin of the American Museum of Natural History, 155 (1975), 6283-6289.  Google Scholar [39] Texas Animal Health Commission, Anthrax confirmed in Edwards County deer, (2014), http://www.ttha.com/ttha/news/2014/09/08/anthrax-confirmed-in-edwards-county-deer. Google Scholar [40] P. Turnbull, Anthrax in Animals and Humans, WHO Press, Fourth edition, Geneva, 2008.  Google Scholar [41] V. Vos, The ecology of anthrax in the Kruger National Park, Salisbury Medical Bulletin, 68 (1990), 9-23.  Google Scholar [42] V. Vos, G. Rooyen and J. Kloppers, Anthrax immunizations of free ranging roan antelope hippotragus equinus in the Kruger National Park, KOEDOE, 16 (1973), 11-25.  Google Scholar

Simulation results for model (1)-(4) without control ($u_1 = u_2 = 0$). The initial population of susceptible and infected animals are considered to be uniformly distributed in $1\le x\le 34$ and $27\le x\le 31$ respectively, while only one initial carcass is considered near an end of the domain, $29\le x\le 30$. The figures in the first row show the plots for susceptible (left) and infected (right) animals; the figure in the second row represents the carcasses.

Simulation results for model (1)-(4) with optimal rates of vaccination and optimal carcass disposal rates, $0\le u_1(x,t)\le 0.027$ and $0\le u_2(x,t)\le 0.5$. The initial population of susceptible and infected animals are considered to be uniformly distributed in $1\le x\le 34$ and $27\le x\le 31$ respectively, while only one initial carcass is considered near an end of the domain, $29\le x\le 30$. The two plots in the first row represent the concentrations of susceptible (left) and infected (right) animals. The plots in the second row represent the concentrations of the infected carcasses (left) and the vaccinated animals (right). The last row represents the vaccination (left) and carcass disposal (right) rates.

The model parameters, their description, values and units

Parm.
Description Values Units $r$ Intrinsic growth rate of healthy animals $5.052\times 10^{-4}$ day$^{-1}$ $\gamma$ Disease induced death rate of infecteds $\frac{1}{7.5}$ day$^{-1}$ $\alpha$ Carcass feeding rate by scavengers $0$ animal$^{-1}$ day$^{-1}$ $K$ Carrying capacity of animals 2000 animal $p$ Carcass decay rate $0.02816$ day$^{-1}$ $d$ Diffusion rate of healthy animals $0.12$ $km^2$ day$^{-1}$ $d_1$ Diffusion rate of infected animals $0.024$ $km^2$ day$^{-1}$ $\theta_c$ Disease transmission rate from carcasses $1.65\times 10^{-3}$ carcass$^{-1}$ day$^{-1}$ $\theta_i$ Disease transmission rate from infected animals $2.05\times 10^{-2}$ animal$^{-1}$ day$^{-1}$ Parm. Description Values Units $r$ Intrinsic growth rate of healthy animals $5.052\times 10^{-4}$ day$^{-1}$ $\gamma$ Disease induced death rate of infecteds $\frac{1}{7.5}$ day$^{-1}$ $\alpha$ Carcass feeding rate by scavengers $0$ animal$^{-1}$ day$^{-1}$ $K$ Carrying capacity of animals 2000 animal $p$ Carcass decay rate $0.02816$ day$^{-1}$ $d$ Diffusion rate of healthy animals $0.12$ $km^2$ day$^{-1}$ $d_1$ Diffusion rate of infected animals $0.024$ $km^2$ day$^{-1}$ $\theta_c$ Disease transmission rate from carcasses $1.65\times 10^{-3}$ carcass$^{-1}$ day$^{-1}$ $\theta_i$ Disease transmission rate from infected animals $2.05\times 10^{-2}$ animal$^{-1}$ day$^{-1}$ [1] Ping Lin, Weihan Wang. Optimal control problems for some ordinary differential equations with behavior of blowup or quenching. Mathematical Control & Related Fields, 2018, 8 (3&4) : 809-828. doi: 10.3934/mcrf.2018036 [2] Robert J. Kipka, Yuri S. Ledyaev. Optimal control of differential inclusions on manifolds. Discrete & Continuous Dynamical Systems - A, 2015, 35 (9) : 4455-4475. doi: 10.3934/dcds.2015.35.4455 [3] Frank Pörner, Daniel Wachsmuth. Tikhonov regularization of optimal control problems governed by semi-linear partial differential equations. Mathematical Control & Related Fields, 2018, 8 (1) : 315-335. 
doi: 10.3934/mcrf.2018013 [4] Jianhui Huang, Xun Li, Jiongmin Yong. A linear-quadratic optimal control problem for mean-field stochastic differential equations in infinite horizon. Mathematical Control & Related Fields, 2015, 5 (1) : 97-139. doi: 10.3934/mcrf.2015.5.97 [5] Elimhan N. Mahmudov. Optimal control of evolution differential inclusions with polynomial linear differential operators. Evolution Equations & Control Theory, 2019, 8 (3) : 603-619. doi: 10.3934/eect.2019028 [6] Piernicola Bettiol. State constrained $L^\infty$ optimal control problems interpreted as differential games. Discrete & Continuous Dynamical Systems - A, 2015, 35 (9) : 3989-4017. doi: 10.3934/dcds.2015.35.3989 [7] Shihchung Chiang. Numerical optimal unbounded control with a singular integro-differential equation as a constraint. Conference Publications, 2013, 2013 (special) : 129-137. doi: 10.3934/proc.2013.2013.129 [8] Lukáš Adam, Jiří Outrata. On optimal control of a sweeping process coupled with an ordinary differential equation. Discrete & Continuous Dynamical Systems - B, 2014, 19 (9) : 2709-2738. doi: 10.3934/dcdsb.2014.19.2709 [9] Urszula Ledzewicz, Stanislaw Walczak. Optimal control of systems governed by some elliptic equations. Discrete & Continuous Dynamical Systems - A, 1999, 5 (2) : 279-290. doi: 10.3934/dcds.1999.5.279 [10] Eduardo Casas, Konstantinos Chrysafinos. Analysis and optimal control of some quasilinear parabolic equations. Mathematical Control & Related Fields, 2018, 8 (3&4) : 607-623. doi: 10.3934/mcrf.2018025 [11] Tatiana Filippova. Differential equations of ellipsoidal state estimates in nonlinear control problems under uncertainty. Conference Publications, 2011, 2011 (Special) : 410-419. doi: 10.3934/proc.2011.2011.410 [12] Yves Achdou, Mathieu Laurière. On the system of partial differential equations arising in mean field type control. Discrete & Continuous Dynamical Systems - A, 2015, 35 (9) : 3879-3900. 
doi: 10.3934/dcds.2015.35.3879 [13] Zhenyu Lu, Junhao Hu, Xuerong Mao. Stabilisation by delay feedback control for highly nonlinear hybrid stochastic differential equations. Discrete & Continuous Dynamical Systems - B, 2019, 24 (8) : 4099-4116. doi: 10.3934/dcdsb.2019052 [14] Samuel Bernard, Fabien Crauste. Optimal linear stability condition for scalar differential equations with distributed delay. Discrete & Continuous Dynamical Systems - B, 2015, 20 (7) : 1855-1876. doi: 10.3934/dcdsb.2015.20.1855 [15] Hongwei Lou, Weihan Wang. Optimal blowup/quenching time for controlled autonomous ordinary differential equations. Mathematical Control & Related Fields, 2015, 5 (3) : 517-527. doi: 10.3934/mcrf.2015.5.517 [16] Sebti Kerbal, Yang Jiang. General integro-differential equations and optimal controls on Banach spaces. Journal of Industrial & Management Optimization, 2007, 3 (1) : 119-128. doi: 10.3934/jimo.2007.3.119 [17] Lucas Bonifacius, Ira Neitzel. Second order optimality conditions for optimal control of quasilinear parabolic equations. Mathematical Control & Related Fields, 2018, 8 (1) : 1-34. doi: 10.3934/mcrf.2018001 [18] Peng Zhong, Suzanne Lenhart. Study on the order of events in optimal control of a harvesting problem modeled by integrodifference equations. Evolution Equations & Control Theory, 2013, 2 (4) : 749-769. doi: 10.3934/eect.2013.2.749 [19] Fulvia Confortola, Elisa Mastrogiacomo. Feedback optimal control for stochastic Volterra equations with completely monotone kernels. Mathematical Control & Related Fields, 2015, 5 (2) : 191-235. doi: 10.3934/mcrf.2015.5.191 [20] Shu Luan. On the existence of optimal control for semilinear elliptic equations with nonlinear neumann boundary conditions. Mathematical Control & Related Fields, 2017, 7 (3) : 493-506. doi: 10.3934/mcrf.2017018 2018 Impact Factor: 1.008
http://mathhelpforum.com/algebra/69960-algebra-problem.html
## algebra problem

Need your help, people, on a few questions:

**Q1.** Show that the roots of the equation $(x-a)(x-b) + (x-b)(x-c) + (x-c)(x-a) = 0$ are always real, and that they cannot be equal unless $a = b = c$.

**Q2.** If $w$ is a complex cube root of unity, show that $(1 + 5w^2 + w^4)(1 + 5w + w^2)(5 + w + w^2) = 64$.

**Q3.** A question paper contains 12 questions divided into two sections. Section I contains 7 questions and Section II contains 5 questions. In how many ways can a candidate choose the questions if he has to select 8 questions in all, with the restriction of at least 3 questions from each section?

**Q4.** Let
$A = \begin{pmatrix} 1 & -1 & 0 \\ 2 & 2 & 4 \\ 2 & 3 & 4 \end{pmatrix}$ and $B = \begin{pmatrix} 4 & 2 & 4 \\ 0 & 1 & 2 \\ 2 & 1 & 5 \end{pmatrix}$.
Find $AB$. Use this to solve the following system of equations: $x - y = 3$, $2x + 3y + 4z = 17$, $y + 2z = 7$.

Thank you for your help, friends.

**Reply (Q1):** The equation can be written as $3x^2-2(a+b+c)x+ab+ac+bc=0$. The discriminant is $\Delta=4(a^2+b^2+c^2-ab-ac-bc)$. But it is well known that $a^2+b^2+c^2\geq ab+ac+bc \ \forall a,b,c\in\mathbf{R}$, so $\Delta\geq 0$ and the roots are real. Equality holds if and only if $a=b=c$.

**Reply (Q2):** We have $w^3=1$ and $w^2+w+1=0$. Then $w^4=w^3\cdot w=w$, so

$1+5w^2+w^4=(1+w+w^2)+4w^2=4w^2$
$1+5w+w^2=(1+w+w^2)+4w=4w$
$5+w+w^2=(1+w+w^2)+4=4$

The product is therefore $4w^2\cdot 4w\cdot 4=64w^3=64$.
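The two worked answers above can be checked numerically. This is a quick verification script (my addition, not part of the original thread), using only the standard library:

```python
import cmath

# Q2: let w be a primitive complex cube root of unity, w = e^(2*pi*i/3).
w = cmath.exp(2j * cmath.pi / 3)
product = (1 + 5 * w**2 + w**4) * (1 + 5 * w + w**2) * (5 + w + w**2)
assert abs(product - 64) < 1e-9  # matches the algebraic answer of 64

# Q1: the discriminant 4(a^2 + b^2 + c^2 - ab - ac - bc) is non-negative
# for real a, b, c, and zero exactly when a = b = c.
for a, b, c in [(1, 2, 3), (-5, 0.5, 7), (4, 4, 4)]:
    disc = 4 * (a * a + b * b + c * c - a * b - a * c - b * c)
    assert disc >= 0
    assert (disc == 0) == (a == b == c)
```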
https://www.gradesaver.com/textbooks/math/algebra/algebra-a-combined-approach-4th-edition/chapter-7-vocabulary-check-page-552/9
Algebra: A Combined Approach (4th Edition)

$ratio$

A ratio is a relation between two amounts, which can also be expressed as a fraction. E.g., there are 8 boys to 1 girl in this class, so the ratio is $\frac{8}{1}$.
http://www.gamedev.net/page/resources/_/technical/game-programming/making-games-for-the-dreamcast-using-standard-g-r1369
# Making Games For The Dreamcast Using Standard GNU Tools
## Part II: Putting Graphics On The Screen

By Kevin Fowlks | Published May 15 2001 03:49 AM in Game Programming

Disclaimer: Please be aware that all information is provided as-is and may be used at your own risk. "Sega" and "Dreamcast" are trademarks of Sega Enterprises. All information here is derived from my own experience. Nothing in this document is under any type of NDA, nor does it include any information from the official Dreamcast development kit.

## Introduction

In Part I we talked about what you need in order to start developing Dreamcast programs. Assuming now that you have your Dreamcast development system up and running, it's time to get started doing some really cool stuff, like drawing graphics on the screen. So without further ado, let's get started.

## What you need to do before we start

This is a simple guide to get you started in DC programming; it is by no means complete. The goal of this series is to get experienced game programmers into Dreamcast coding, so it is missing all those nice hand-holding beginner's steps.

Note: You will need either libdream 0.71 or KallistiOS 0.6 installed first. Instructions are found in the archive.

## Dreamcast Hardware

Here's a quick overview of the Dreamcast hardware. The Dreamcast is a nice, cleanly designed beast that has a total of 8 megs of video memory. That may not seem like much, but it's more than enough to create great-looking programs. The Dreamcast comes with a TA, or Tile Accelerator, provided by NEC's PowerVR chip. The TA gives the Dreamcast all that 3D power that we're used to.
For now, we're not going to use the TA; we're just going to use the video memory as a large frame buffer. Since we're using libdream we don't really need to know everything about the Dreamcast in order to get things done. However, I believe that you'll learn more about the Dreamcast as time goes on and you continue to program for this amazing system. So here's a basic rundown of what we have to work with hardware-wise.

### Color Modes

| Mode     | Size    | Color depth |
|----------|---------|-------------|
| RGB555   | 2 bytes | 16-bit      |
| RGB565   | 2 bytes | 16-bit      |
| ARGB4444 | 2 bytes | 16-bit      |
| RGB888   | 4 bytes | 32-bit      |
| ARGB8888 | 4 bytes | 32-bit      |

### Resolutions List

Note: this list is not complete; these are the only ones supported right now.

- 320x240x16
- 640x480x16
- 640x480x32

The base address of the frame buffer is 0xA5000000, and the base address of the PowerVR chip is 0xA05F8000.

### What the Heck is RGBXXX?

The 565 in RGB565 represents the number of bits used to store each color component: the red and blue components are stored with 5 bits each, and the green component is stored with 6 bits.

### How Do I Convert RGB5XX into a 16-bit number?

```c
#define RGB565(r, g, b) (((r >> 3) << 11) | ((g >> 2) << 5) | ((b >> 3) << 0))
#define RGB555(r, g, b) (((r >> 3) << 10) | ((g >> 3) << 5) | ((b >> 3) << 0))
```

### Drawing Pixels on the Screen

OK, believe it or not, we now have enough info to draw graphics on the screen. Those of you who remember the old-school DOS mode 13 stuff should know what I'm talking about. First we set up a pointer to our frame buffer using the same element size that we intend to access with. So if we're going to use RGB565 or RGB555, you would use an `unsigned short`.

Example pseudo code:

```c
unsigned short *vram_s = (unsigned short *)0xA5000000;
/* vram_s[x + (640 * y)] = 16-bit color */
vram_s[100 + (640 * 50)] = RGB565(255, 0, 0);
```

The above example plots a red pixel at position (100, 50) on the screen. Here's a real program that does the above that you can compile and use.
```c
// Look, no stdio.h or malloc.h!
#include "dream.h"

#define RGB565(r, g, b) (((r >> 3) << 11) | ((g >> 2) << 5) | ((b >> 3) << 0))

int dc_main()
{
    int x = 100, y = 50;

    dc_setup_quiet(DM_640x480, PM_RGB565);
    vram_s[x + (640 * y)] = RGB565(255, 0, 0);

    return 0;
}
```

Sample Makefile:

```makefile
# Generic Makefile for DC Coding with libdream
# Edit this configuration to match your system. Currently every one of
# these but LD is used directly, but you might as well be complete.
TARGET=sh-elf
DCBASE=/usr/local/dc/$(TARGET)
CC=$(DCBASE)/bin/$(TARGET)-gcc -ml -O -m4-single-only -Wall
LD=$(DCBASE)/bin/$(TARGET)-ld
AS=$(DCBASE)/bin/$(TARGET)-as
AR=$(DCBASE)/bin/$(TARGET)-ar
OBJCOPY=$(DCBASE)/bin/$(TARGET)-objcopy

# !!!! Edit These Lines !!!! Change to program name
#BIN = test
#OBJS = test.o

INCS=-I../../include
LIBS=-L../../lib -ldream

all: $(BIN).srec

$(BIN).srec: $(BIN).elf
	$(OBJCOPY) -O srec $(BIN).elf $(BIN).srec

$(BIN).bin: $(BIN).elf
	$(OBJCOPY) -O binary $(BIN).elf $(BIN).bin

$(BIN).elf: $(OBJS)
	$(CC) -Wl,-Ttext,0x8c010000 -o $(BIN).elf $(OBJS) $(LIBS)

%.o: %.c
	$(CC) $(INCS) -c $< -o $@

clean:
	-rm -f *.o *.elf 1ST_READ.BIN *.bck $(EXTRA_CLEAN)

reallyclean: clean
	-rm -f *.bin *.srec
```

Remember that since we don't have a real C library to use, we have to either build our own or use libs like libdream. So don't start trying to do crazy things like including malloc.h or conio.h!

## Conclusion

I think everyone would agree that Dreamcast programming is really not that hard, thanks to all those super coders who discovered all of those neat things you can do with the Dreamcast. Now you can write your own line and circle functions, and you can copy image data to display pictures. Look out for Part III, where we're going to talk about using the TA to render 3D stuff.
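To sanity-check the color-packing math on any machine (no Dreamcast or SH toolchain required), here is the RGB565 formula re-expressed in Python; this is my host-side illustration, not part of the original article:

```python
def rgb565(r, g, b):
    # 5 bits of red (high), 6 bits of green, 5 bits of blue (low),
    # mirroring the C macro in the article.
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

assert rgb565(255, 0, 0) == 0xF800      # pure red: all 5 red bits set
assert rgb565(0, 255, 0) == 0x07E0      # pure green: all 6 green bits set
assert rgb565(0, 0, 255) == 0x001F      # pure blue: all 5 blue bits set
assert rgb565(255, 255, 255) == 0xFFFF  # white: every bit set
```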
https://concretenonsense.wordpress.com/2011/07/13/nuking-mosquitoes-commutative-algebra-and-the-frobenius-problem/
Posted by: Alan Guo | July 13, 2011

## Nuking mosquitoes: Commutative algebra and the Frobenius problem

This is one of those things that convinced me that commutative algebra is cool and actually useful for solving combinatorial problems.

The Frobenius problem (commonly known as the coin problem, of which a special case is the McNugget problem) asks: given positive integers $a_1, a_2, \ldots, a_n$ with $\gcd(a_1, a_2,\ldots,a_n) = 1$, what is the largest integer $m$ that cannot be written in the form $m = c_1a_1 + c_2a_2 + \cdots + c_na_n$ for nonnegative integers $c_i$? For demonstrative purposes, I'll show you how you can use commutative algebra to solve the case $n = 2$, but the principle generalizes to larger $n$. At the end, I'll speculate on how one might use this to solve higher-dimensional versions of the Frobenius problem, whatever that might mean.

Let $Q = \{c_1 a + c_2 b : c_1,c_2 \ge 0\}$. The problem statement implicitly assumes $\mathbb{N} \setminus Q$ is finite, but this is not difficult to show, especially when you're wielding the rocket launcher that is commutative algebra. Notice that $Q$ is an affine semigroup, which just means that it's isomorphic to a finitely generated subsemigroup of $\mathbb{Z}^d$ for some $d$. The saturation $Q_{\text{sat}}$ of an affine semigroup $Q$ is what you get when you take the group generated by $Q$ intersected with the nonnegative cone $Q$ generates. In symbols, $Q_{\text{sat}} = \mathbb{Z} Q \cap \mathbb{R}_+ Q$. The difference between an affine semigroup and its saturation is that the saturation "fills in the gaps" of the original affine semigroup so that it looks like it came from an actual group. In our case, $Q_{\text{sat}} = \mathbb{N}$. A basic result (Exercise 7.15 from Combinatorial Commutative Algebra by Miller-Sturmfels) says that any affine semigroup contains a translate of its saturation.
Well, that means $Q$ contains a translate of $\mathbb{N}$, so there's some number after which every number is in $Q$.

The generating function $p(t) = \sum_{i \notin Q} t^i$ for $\mathbb{N} \setminus Q$ is a polynomial, and furthermore it is $\frac{1}{1-t}$ minus the Hilbert series $h_M(t)$ for $M = \mathbb{C}[t^a, t^b]$, since the monomials in $\mathbb{C}[t^a, t^b]$ are precisely the ones whose exponents are in $Q$. So now if we could find a nice form for $h_M(t)$, then we'd be done.

We can view $M$ as a module over $R = \mathbb{C}[x,y]$ where $\deg(x) = a$ and $\deg(y) = b$. Then we have an exact sequence

$0 \to (x^b - y^a) \to \mathbb{C}[x,y] \to \mathbb{C}[t^a,t^b] \to 0$

where the second map from the left is inclusion and the third map is $x \mapsto t^a, y \mapsto t^b$, which is a surjection, so it suffices to show that we have exactness in the middle.

Let $I = (x^b - y^a)$ and let $K = \ker(R \to M)$. Now, it is straightforward to see that $I \subseteq K$. On the other hand, since $\mathbb{C}[x,y] / K \cong \mathbb{C}[t^a,t^b]$, which is not a field, we know that $K$ is not a maximal ideal, hence $\dim K \ge 1$, where $\dim$ here is the Krull dimension. We know that $\dim \mathbb{C}[x,y] = 2$ and the $0$ ideal is prime and $0 \subsetneq I$, so $\dim I \le 1$. But the containment $I \subseteq K$ implies that $\dim I \ge \dim K$, so we have $1 \ge \dim I \ge \dim K \ge 1$, hence $\dim I = \dim K$. To show that $I = K$, we just need to show that $I$ is prime, but this follows from the fact that $\gcd(a,b) = 1$.

Now, note that the Hilbert series $h_I(t)$ for $I$ is just $\frac{t^{ab}}{(1-t^a)(1-t^b)}$ since $\deg(x^b - y^a) = ab$, so by rank-nullity,

$\displaystyle h_M(t) = h_R(t) - h_I(t) = \frac{1 - t^{ab}}{(1-t^a)(1-t^b)}$.
We have

$\displaystyle h_M(t) + p(t) = \sum_{i \in Q} t^i + \sum_{i \notin Q} t^i = \sum_{i \in \mathbb{N}} t^i = \frac{1}{1-t},$

hence

$\displaystyle h_M(t) = \frac{1}{1-t} - p(t) = \frac{f(t)}{1-t}$

where $f(t) = 1 - p(t) + tp(t)$ is a polynomial since $p(t)$ is. Now, the final answer to our problem is $\deg p = \deg f - 1$. But

$\displaystyle \frac{1-t^{ab}}{(1-t^a)(1-t^b)} = \frac{f(t)}{1-t}$

so

$(1-t)(1-t^{ab}) = (1-t^a)(1-t^b)f(t).$

Since the degrees of both sides must be equal, we have $1 + ab = a + b + \deg f$, whence $\deg f = 1 + ab - a - b$. So our answer is $\deg p = ab - a - b$. Yay! And we solved it using purely algebra!

One can ask a similar question for higher dimensions. For example, the answer to the Frobenius problem also answers the question "Where does the translate of the saturation of $Q$ start?" (after adding 1). There's probably some similar question you can ask about higher dimensions, although now it's not as clear what the "minimal" place where the saturation starts should be. I think it would make the most sense if an answer to such a question would take the form of a set of "generators", where everything in the ideal generated by the "generators" is in the saturation. Anyway, if anyone knows the answer already, please let me know! I think the answer might lie in a paper of Maclagan and Smith titled Multigraded Castelnuovo-Mumford Regularity, but since I don't even understand Castelnuovo-Mumford regularity in one variable, I have little hope of comprehending that paper.

-Alan

## Responses

1. A good source for learning the basic properties of C-M regularity is Chapter 4 of Eisenbud's Geometry of Syzygies. It's an invariant of a graded module which can be defined using its minimal free resolution, and it tells you when the Hilbert function and Hilbert polynomial start to agree (Theorem 4.2), and is tight in the case the module is Cohen-Macaulay (which is true for all semigroup algebras).
   But that book is dealing with standardly graded algebras (i.e., each variable has degree 1). In your case, you have deg(t) = 1, so in order for C[x,y] -> C[t^a,t^b] to be homogeneous, you need deg(x) = a and deg(y) = b. Things can be adjusted if you look at the proof of Theorem 4.2 (it's nothing high tech). But all of this assumes that things are still Z-graded. I imagine that the Maclagan-Smith paper says something about Z^n-gradings. It's not obvious to me what these definitions should mean, because all of the definitions in the Z-grading case use that Z is totally ordered. However, if you're interested, and around MIT, we should talk (send me an email and we can meet). I like combinatorial commutative algebra.

2. Thanks for the reference! I will check that out ASAP. And I'm up for meeting to talk about combinatorial commutative algebra when I'm at MIT. Although I don't know that much about CCA, it's one of those things that I want to know a lot about, since it's relevant to one of my research projects.

3. I'm pretty sure that there's no known formula for the Frobenius number with $n \geq 3$. It's a long-standing open problem.

4. It's not like killing flies with sledgehammers is a *bad* thing. Speaking as somebody who is very far from an expert on commutative algebra, articles like this are exactly what inspires me to go learn more of it: unexpected applications outside.

5. […] Alan Guo: Cantor's diagonal argument and undecidability, Commutative algebra and the Frobenius problem […]
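The closed form $ab - a - b$ derived in the post is easy to confirm by brute force; this little script is my addition, not part of the original discussion:

```python
def frobenius(a, b, limit=10_000):
    # Collect every value c1*a + c2*b up to limit, then take the largest
    # number below limit that was never hit.
    representable = set()
    for c1 in range(limit // a + 1):
        for c2 in range((limit - c1 * a) // b + 1):
            representable.add(c1 * a + c2 * b)
    return max(m for m in range(limit) if m not in representable)

# Check the formula ab - a - b for a few coprime pairs.
for a, b in [(3, 5), (4, 9), (7, 11)]:
    assert frobenius(a, b) == a * b - a - b
```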
https://gmatclub.com/forum/in-the-figure-shown-if-the-area-of-the-shaded-region-is-3-t-104668.html
It is currently 18 Jan 2018, 03:42 GMAT Club Daily Prep Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email. Customized for You we will pick new questions that match your level based on your Timer History Track every week, we’ll send you an estimated GMAT score based on your performance Practice Pays we will pick new questions that match your level based on your Timer History Events & Promotions Events & Promotions in June Open Detailed Calendar In the figure shown, if the area of the shaded region is 3 t Author Message TAGS: Hide Tags Manager Joined: 06 Feb 2010 Posts: 166 Kudos [?]: 1490 [4], given: 182 Schools: University of Dhaka - Class of 2010 GPA: 3.63 In the figure shown, if the area of the shaded region is 3 t [#permalink] Show Tags 11 Nov 2010, 08:09 4 KUDOS 9 This post was BOOKMARKED 00:00 Difficulty: 45% (medium) Question Stats: 63% (01:07) correct 38% (01:05) wrong based on 376 sessions HideShow timer Statistics In the figure shown, if the area of the shaded region is 3 times the area of the smaller circular region, then the circumference of the larger circle is how many times the circumference of the smaller circle? A. $$4$$ B. $$3$$ C. $$2$$ D. $$\sqrt{3}$$ E. $$\sqrt{2}$$ [Reveal] Spoiler: Attachment: 1111220830.jpg [ 344.39 KiB | Viewed 11934 times ] Attachment: Untitled2.png [ 13.24 KiB | Viewed 15807 times ] [Reveal] Spoiler: OA _________________ Practice Makes a Man Perfect. Practice. Practice. 
Practice......Perfectly Critical Reasoning: http://gmatclub.com/forum/best-critical-reasoning-shortcuts-notes-tips-91280.html Collections of MGMAT CAT: http://gmatclub.com/forum/collections-of-mgmat-cat-math-152750.html MGMAT SC SUMMARY: http://gmatclub.com/forum/mgmat-sc-summary-of-fourth-edition-152753.html Sentence Correction: http://gmatclub.com/forum/sentence-correction-strategies-and-notes-91218.html Arithmatic & Algebra: http://gmatclub.com/forum/arithmatic-algebra-93678.html I hope these will help to understand the basic concepts & strategies. Please Click ON KUDOS Button. Last edited by Bunuel on 06 Dec 2017, 11:24, edited 2 times in total. Renamed the topic, edited the question and added the OA. Kudos [?]: 1490 [4], given: 182 Math Expert Joined: 02 Sep 2009 Posts: 43320 Kudos [?]: 139370 [1], given: 12787 Re: In the figure shown, if the area of the shaded region is 3 t [#permalink] Show Tags 11 Nov 2010, 08:27 1 KUDOS Expert's post 8 This post was BOOKMARKED In the figure shown, if the area of the shaded region is 3 times the area of the smaller circular region, then the circumference of the larger circle is how many times the circumference of the smaller circle? A. $$4$$ B. $$3$$ C. $$2$$ D. $$\sqrt{3}$$ E. $$\sqrt{2}$$ The area of the shaded region is $$area_{shaded}=\pi{R^2}-\pi{r^2}$$ and the area of the smaller circle is $$area_{small}=\pi{r^2}$$. Given: $$\pi{R^2}-\pi{r^2}=3\pi{r^2}$$ --> $$R^2=4r^2$$ --> $$R=2r$$; Now, the ratio of the circumference of the larger circle to the that of the smaller circle is $$\frac{C}{c}=\frac{2\pi{R}}{2\pi{r}}=\frac{{2r}}{{r}}=2$$. 
Re: In the figure shown, if the area of the shaded region is 3 times the area of the smaller circular region

Question (Bunuel): In the figure shown, if the area of the shaded region is 3 times the area of the smaller circular region, then the circumference of the larger circle is how many times the circumference of the smaller circle?

A. $$4$$
B. $$3$$
C. $$2$$
D. $$\sqrt{3}$$
E. $$\sqrt{2}$$

Reply (11 Nov 2010): The circular shaded region = area of the large circle - area of the small circle = 3 * area of the small circle, so the area of the large circle = 4 * area of the small circle. That means the radius of the large circle is 2 * the radius of the small circle, so the circumference of the large circle is 2 times that of the small circle --- Option C.

Reply (09 Mar 2014): Let the area of the small circle = x. The area of the shaded region = 3x, so the total area = x + 3x = 4x. This means the area of the larger circle is 4 times the area of the small circle, which also means the radius of the larger circle is 2 times the radius of the small circle, and so is the circumference. So the answer = C = 2.

Solution (Bunuel): The area of the shaded region is $$area_{shaded}=\pi{R^2}-\pi{r^2}$$ and the area of the smaller circle is $$area_{small}=\pi{r^2}$$.

Given: $$\pi{R^2}-\pi{r^2}=3\pi{r^2}$$ --> $$R^2=4r^2$$ --> $$R=2r$$.

Now, the ratio of the circumference of the larger circle to that of the smaller circle is $$\frac{C}{c}=\frac{2\pi{R}}{2\pi{r}}=\frac{2r}{r}=2$$.

Follow-up (13 Apr 2014): Hi Bunuel, I have one doubt. You found R = 2r. If we substitute this into $$\pi r^2$$, the area of the larger circle comes out to $$4\pi r^2$$, i.e., 4 times the area of the smaller circle instead of 3 times. Please clarify.

Reply (Bunuel): The stem says that the area of the shaded region is 3 times the area of the smaller circular region, not that the area of the larger circular region is 3 times the area of the smaller circular region. Does this make sense?

Reply (06 Dec 2017), algebraically: Let a = the small circle's area, s = the shaded region's area, and A = the large circle's area.

$$s = 3a$$
$$A = s + a = 3a + a = 4a$$

With $$a = \pi r^2$$ and $$A = \pi R^2$$, the relation $$A = 4a$$ gives

$$\pi R^2 = 4\pi r^2$$ --> $$R^2 = 4r^2$$ --> $$R = 2r$$

Circumferences: C = $$2\pi R$$ and c = $$2\pi r$$, and with $$R = 2r$$,

$$\frac{C}{c}=\frac{2\pi (2r)}{2\pi r}= \frac{2r}{r}=2$$

The large circle's circumference is two times the small circle's circumference.

With numbers: let the small circle's radius $$r = 1$$, so its area is $$\pi$$ and the large circle's area is $$4\pi = \pi R^2$$, giving $$R = 2$$. The circumferences are then $$2\pi$$ (small) and $$4\pi$$ (large), and $$\frac{4\pi}{2\pi} = 2$$.

Follow-up (25 Dec 2017): Bunuel, since the area of the shaded region is $$area_{shaded}=\pi R^2-\pi r^2$$ and the area of the smaller circle is $$area_{small}=\pi r^2$$, why can't I write this as $$\pi R^2 = 3\pi r^2$$ (as it says "3 times")?

Reply (Bunuel): The stem says that the area of the shaded region is 3 times the area of the smaller circular region, not that the area of the larger circular region is 3 times the area of the smaller circular region.
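For readers who prefer a numeric check, the algebra in the thread can be restated in a few lines of code. This is an illustrative sketch only, not part of the original discussion:

```python
import math

# Pick any radius for the smaller circle; the final ratio is scale-invariant.
r = 1.0
small_area = math.pi * r ** 2

# Shaded region = 3 * small area, so large area = small + shaded = 4 * small.
large_area = small_area + 3 * small_area

# Recover the large radius from A = pi * R^2.
R = math.sqrt(large_area / math.pi)

# Ratio of circumferences: 2*pi*R over 2*pi*r.
ratio = (2 * math.pi * R) / (2 * math.pi * r)
print(R, ratio)  # 2.0 2.0 -- the large circumference is twice the small one
```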
https://hsm.stackexchange.com/questions/17/what-was-the-connection-between-david-hilbert-and-stefan-banach
What was the connection between David Hilbert and Stefan Banach? The so-called "Hilbert space" is named after mathematician David Hilbert. Later, this was generalized into "Banach spaces" by Stefan Banach. My understanding is that Hilbert was German and Banach was Polish, and there did not appear to be any "major" connection between them (that is, no more than between two "random" European mathematicians, although this was a very small circle at the time). Yet there is a fairly strong connection between Hilbert's work and Banach's work. How did Banach manage to take off on Hilbert's work without knowing him well? (E.g. Banach seems to have been much closer to Hugo Steinhaus of the Banach-Steinhaus Theorem.) Or did the two work together/know each other better than I have given them credit for? It is worth noting that the abstract definition of a Hilbert space (as a complete inner-product space) is not due to Hilbert. Weyl recounts the history in his memorial essay, "David Hilbert and His Mathematical Work" (Bull. Amer. Math. Soc. v.50 p.612--654). In his work on integral equations, Hilbert investigated only one particular Hilbert space: the space of square-summable infinite sequences. He did not use Lebesgue integration; only later did Riesz and Fischer show the equivalence with Lebesgue square-integrable functions. Weyl adds: I mention these details because the historic order of events may have fallen into oblivion with many of our younger mathematicians, for whom Hilbert space has assumed that abstract connotation which no longer distinguishes between the two realizations... 
(There is also an apocryphal story that Hilbert attended a lecture, and came up at the end to ask the speaker, "What is a Hilbert space?") Banach by contrast gave the abstract formulation of Banach spaces in his dissertation, along with his motivation: This present work has the object of establishing certain theorems that hold in several different branches of mathematics, which will be specified later. However, in order to avoid proving these theorems for each branch individually, which would be very wearisome, I have chosen a different way, which is this: I consider in a general way sets of elements for which I postulate certain properties. From these I deduce theorems and then I prove for each separate branch of mathematics that the postulates adopted are true of it. In other words, Banach is seeking economy of proof via the axiomatic method. His motivation is thus entirely different from Hilbert's. To return to your original question: I have not been able to uncover any personal connection between Hilbert and Banach. The name "Banach" does not appear in the index of Constance Reid's biography Hilbert; the MacTutor entry for Hilbert does not contain "Banach", and the MacTutor entry for Banach contains only once occurrence of Hilbert, where it notes that Banach's work "generalised the contributions made by Volterra, Fredholm and Hilbert on integral equations". However, that one sentence is probably sufficient explanation. Hilbert did his work on integral equations in the early 1900s, and it was soon developed further by Riesz, Fischer, Schmidt, and others. Banach's dissertation was written in 1920. It is hardly surprising that entering this field, Banach would pay close attention to relevant published work by one of the foremost mathematicians of the day. • To stress that Hilbert did in fact not study "his" spaces in an abstract manner is really a good point to make. 
– quid Oct 31 '14 at 0:42

• ...although I think this is an interesting read, and it's worth preserving, it does not answer the question. Can you expand your answer to actually deal with the main question from the OP? – Danu Nov 2 '14 at 1:37

• OK, I've added two paragraphs. – Michael Weiss Nov 2 '14 at 5:28

As far as I know there is no particular connection between Hilbert and Banach. Of course, Hilbert being one of the most dominant mathematicians of the time, his influence was widespread. It would, however, also be wrong to consider first Hilbert, then Banach as a direct succession. There were various influences and contributors in the development of what are now Banach spaces. [Indeed, the notion was introduced almost in parallel by others, too, Wiener in particular. (Banach was the one who made the most out of it and rightly got the "name credit".)] Other names one could mention besides Hilbert include Fredholm, Riesz, Fischer, Fréchet, and Lebesgue. To wit, the chronology in Pietsch's History of Banach Spaces and Linear Operators has 12 entries (starting 1902) before Banach's thesis in 1920. In this context it might also be noteworthy that Banach visited Paris in 1924-25.

There is no known record of any personal encounter between Banach and Hilbert. But the not-so-random connection between the two was Hugo Steinhaus (Banach's discoverer, and later collaborator and colleague), who was a PhD student of Hilbert in Göttingen. Steinhaus's thesis, titled "Neue Anwendungen des Dirichlet'schen Prinzips" and defended in 1911, was still rather traditional in its approach to variational problems for second-order partial differential equations.
On the other hand, Banach's PhD thesis "O operacjach na zbiorach abstrakcyjnych z zastosowaniami do równań całkowych" [On operations on abstract sets with applications to integral equations], defended in Lwów in 1920, introduced fundamental notions and properties of linear normed complete spaces (in an axiomatic way) and applied them to integral operators defined by kernels. The actual thesis of Banach and its defense became the stuff of legends, but at least there is a publication based on the thesis: S. Banach, "Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales", Fundamenta Mathematicae 3 (1922), pp. 133-181 (http://kielich.amu.edu.pl/Stefan_Banach/pdf/oeuvres2/305.pdf). In the introduction Banach mentions preceding work on "functional operations" by Volterra, Fréchet, Hadamard, F. Riesz, Pincherle, Steinhaus, Weyl, Lebesgue, and others. He particularly credits the works of Hilbert, which according to him enabled treatment of (the spaces of) square-integrable functions, not only smooth functions. This is also evidence that Banach studied works of Hilbert before visiting Paris. Moreover, in 1917 Banach and Steinhaus both lived in Kraków and took part in meetings of an informal mathematical society. Other members of this group were the mathematicians Włodzimierz Stożek, Władysław Ślebodziński, and Leon Chwistek (also a philosopher and a painter), and a physicist, Jan Norbert Kroo, all of whom spent some time studying in Göttingen.
https://physics.openmetric.org/relativity/gr/index.html
# General Relativity

General relativity is a theory of gravity. The idea is to find a set of “proper” coordinate systems to describe physics on a curved space and to make connections between these “proper” coordinate systems.

## Fields and Particles

### Energy-Momentum Tensor for Particles

$S_p \equiv -m c \int \int \mathrm d s\,\mathrm d\tau \sqrt{-\dot x ^\mu g_{\mu\nu} \dot x^\nu}\, \delta^4(x^\mu - x^\mu (s)) ,$

in which $$x^\mu(s)$$ is the trajectory of the particle. The energy density $$\rho$$ then corresponds to $$m\,\delta^4(x^\mu- x^\mu(s))$$. The Lagrange density is

$\mathcal L = -\int\mathrm ds\, mc \sqrt{-\dot x^\mu g_{\mu\nu}\dot x^\nu}\,\delta^4(x^\mu - x^\mu(s))$

The energy-momentum density $$\mathcal T^{\mu\nu} = \sqrt{-g}\,T^{\mu\nu}$$ is

$\mathcal T^{\mu\nu} = -2 \frac{\partial \mathcal L}{\partial g_{\mu\nu}}$

Finally,

$\begin{split}\mathcal T^{\mu\nu} &= \int \mathrm ds \frac{mc\,\dot x^\mu \dot x^\nu}{\sqrt{-\dot x^\mu g_{\mu\nu} \dot x^\nu}} \delta(t-t(s))\,\delta^3(\vec x - \vec x(t)) \\ &= m\dot x^\mu \dot x^\nu \frac{\mathrm d s}{\mathrm d t} \delta^3(\vec x - \vec x(s(t)))\end{split}$

## Specific Topics

### Redshift

In the geometrical-optics limit, the angular frequency $$\omega$$ of a photon with 4-vector $$K^a$$, measured by an observer with 4-velocity $$Z^a$$, is $$\omega=-K_aZ^a$$.

### Stationary vs. Static

#### Stationary

“A stationary spacetime admits a timelike Killing vector field. A stationary spacetime is one in which you can find a family of observers who observe no changes in the gravitational field (or sources such as matter or electromagnetic fields) over time.”

When we say a field is stationary, we only mean the field is time-independent.

#### Static

“A static spacetime is a stationary spacetime in which the timelike Killing vector field has vanishing vorticity, or equivalently (by the Frobenius theorem) is hypersurface orthogonal.
A static spacetime is one which admits a slicing into spacelike hypersurfaces which are everywhere orthogonal to the world lines of our ‘bored observers’.”

When we say a field is static, we mean the field is both time-independent and symmetric under time reversal.
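The redshift formula $$\omega=-K_aZ^a$$ from the Redshift section can be checked with a small numeric sketch. This is an illustrative example in flat Minkowski spacetime with signature (-, +, +, +) and a static observer; it is not part of the original notes:

```python
# Minkowski metric g_ab with signature (-, +, +, +), as nested lists.
g = [[-1.0, 0.0, 0.0, 0.0],
     [0.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.0],
     [0.0, 0.0, 0.0, 1.0]]

E = 2.5                            # photon coordinate energy (assumed value)
K_up = [E, E, 0.0, 0.0]            # null 4-vector of a photon moving along x
Z_up = [1.0, 0.0, 0.0, 0.0]        # static observer, normalized so Z_a Z^a = -1

# Lower the photon's index: K_a = g_ab K^b.
K_down = [sum(g[a][b] * K_up[b] for b in range(4)) for a in range(4)]

# Measured angular frequency omega = -K_a Z^a, and the null check K_a K^a.
omega = -sum(K_down[a] * Z_up[a] for a in range(4))
norm = sum(K_down[a] * K_up[a] for a in range(4))

print(omega)  # 2.5 -- the static observer measures the photon's coordinate energy
print(norm)   # 0.0 -- K^a is null, as required for a photon
```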
https://www.physicsforums.com/threads/functions-4.158725/
# Functions 4

## Homework Statement

f(x) is a piecewise function defined as:

$$f(x) = |x-3|, \quad x \geq 1$$
$$f(x) = \frac{x^2}{4}-\frac{3x}{2}+\frac{13}{4}, \quad x<1$$

Discuss the continuity and differentiability of this function at x=1 and x=3.

## The Attempt at a Solution

At x=3, this function is continuous but not differentiable, being a modulus function. But how is it continuous and differentiable at x=1? Putting the limit x=1 into the function doesn't give the same value for both parts!

Dick (Homework Helper): Look at the limit as x->1 from both sides of the function and the derivative of the function. What do you get as answers to these four questions?
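A quick numeric check (illustrative, not part of the thread) confirms what the hint is pointing at: the two pieces agree at x = 1 in both value and one-sided slope, while at x = 3 the one-sided slopes of |x - 3| disagree:

```python
def f(x):
    """The piecewise function from the problem statement."""
    if x >= 1:
        return abs(x - 3)
    return x**2 / 4 - 3 * x / 2 + 13 / 4

h = 1e-6  # small step for one-sided difference quotients

# Continuity at x = 1: both one-sided values approach 2.
print(f(1 - h), f(1 + h))            # both ~2

# One-sided derivatives at x = 1: both ~ -1, so f is differentiable there.
print((f(1) - f(1 - h)) / h)         # ~ -1  (quadratic's slope x/2 - 3/2 at x = 1)
print((f(1 + h) - f(1)) / h)         # ~ -1  (slope of |x - 3| for x < 3)

# At x = 3: continuous, but one-sided slopes disagree (-1 vs +1).
print((f(3) - f(3 - h)) / h, (f(3 + h) - f(3)) / h)
```

So the original poster's worry is unfounded: (1)^2/4 - 3(1)/2 + 13/4 = 2 = |1 - 3|, and both one-sided derivatives at x = 1 equal -1.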
https://meridian.allenpress.com/idd/article/41/3/161/8403/Eyewitness-Identification-Accuracy-A-Comparison-of
## Abstract The effect of variation in the clarity of a witnessed event on the accuracy of eyewitness identification for adults with intellectual disabilities and those without disabilities was examined. Following observation of one of three films (clear, less distinct, or ambiguous) depicting a nonviolent theft, participants were asked to identify the thief from a photo lineup. Across all film conditions, participants with intellectual disabilities made as many correct identifications as did participants without disabilities, but they also made more false identifications and were more prone to guessing. Differences between groups seemed to be attributable to the demand factors inherent in the eyewitness identification task and understanding of the nature of the task itself. Eyewitness identification is often the most important or sole evidence linking a particular defendant to a crime (Grano, 1984; Wells, 1993; Wells, Small, Penrod, Malpass, Fulero, & Brimacombe, 1998). The importance of accuracy in identification is, therefore, a topic of much interest to members of the criminal justice system. To date, although there are some studies and case reports on the accuracy of children and adults with intellectual disabilities in terms of reporting on observed events (e.g., Brown & Geiselman, 1990; Dent, 1986; Gudjonsson & Gunn, 1982; Isaacs, 1997; Isaacs & Ericson, 2000; Jens, Gordon, & Shaddock, 1990; Milne, Clare, & Bull, 1999; Perlman, Ericson, Esses, & Isaacs, 1994), no investigators have examined eyewitness identification accuracy of individuals with intellectual disabilities. Although there is a dearth of research on the eyewitness capacity of individuals with intellectual disabilities, there is an extensive body of literature on the eyewitness identification capacity of adults and children without intellectual disabilities. 
Although not directly relevant, such research may be pertinent for individuals with intellectual disabilities and other special populations in terms of suggesting factors that may affect identification performance, by providing (a) theoretical explanations for performance differences and (b) guidelines for appropriate research design. Rates of correct identification for both adults and children without intellectual disabilities vary widely between studies, ranging from chance levels (e.g., Dent & Stephenson, 1979) to close to 100% (e.g., Leippe, Romanczyk, & Manion, 1991). Children 6 years of age and older generally make as many correct identifications as do adults (Beal, Schmitt, & Dekle, 1995; Goodman, Hirschman, Hepps, & Rudy, 1991; Goodman & Reed, 1986; Parker & Carranza, 1989; Parker & Ryan, 1993; Peters, 1987; for an exception see King & Yuille, 1987). The performance of children relative to adults on the identification task does emerge as considerably inferior, however, when rates of false identification and “no selection” are examined. Children make more false identifications and are generally more likely to choose a candidate from a lineup rather than making no choice compared to adults (Beal et al., 1995; King & Yuille, 1987; Leippe et al., 1991; Parker & Carranza, 1989). Differences in performance between studies with respect to correct identifications and differences in rates of false identifications between adults and children, can be understood in the context of a theoretical analysis of the factors affecting the decision-making process when selecting a suspect from a lineup. Malpass and Devine (1984) identified two major factors that affect this process in all witnesses. The first factor is the amount or quality of information available to the witness about the appearance of the offender. 
Such information is a product of situational variables affecting the witness situation, such as opportunity to observe the offender at the time of the crime and his or her physical characteristics, focus of attention at the time of the crime, lighting at the scene of the crime, etc. Variations in these factors, along with other discrepancies in experimental design, likely account for many of the differences in correct identifications obtained between studies with both adults and children. The second factor affecting decision-making is the witness's estimation of the risks and values associated with making a choice, referred to as the social utility of choosing. When the witness's information about the crime is less than optimal, the witness engages in a process of weighing the consequences of his or her identification decision, which affects the attractiveness or likelihood of choosing a suspect from a lineup. For the purposes of the present discussion, an important factor concerning the attractiveness of making a decision is the desire for social approval as a cooperative and helpful witness. Adults without intellectual disabilities who have a strong need for approval, who exhibit more than usual deference to police authority, and who are inclined to be overly cooperative are more inclined to make a selection than are adults without such attitudes or belief systems (Wells, 1988), as are children, whose dependency on adults, and history of reinforcement for cooperative behavior, renders them more generally susceptible to the demands of perceived authority figures (King & Yuille, 1987). As noted by King and Yuille, the presentation of a photo lineup may in effect be equivalent to a leading question, which may be more likely to elicit a choice response in individuals with an increased proclivity for suggestibility. 
Witnesses who are more inclined to be suggestible and/or who desire to appear compliant and cooperative may be concerned that the interviewer will be disappointed if they do not select a candidate because, from their perspective, that is what the task demands. For example, because in most studies children do not tend to differ from adults in the number of correct identifications made from photo lineups, it would seem that memory factors do not account for differences in accuracy. Rather, the higher rates of choosing and false identifications reported for children indicate that they generally differ from most adults with respect to factors affecting the social utility of decision-making. Witness and interview research indicates that adults with intellectual disabilities are generally more susceptible to the perceived demands of authority figures than are adults without intellectual disabilities (e.g., Brown & Geiselman, 1990; Ericson & Perlman, 2001; Gudjonsson & Gunn, 1982; Isaacs, 1997; Milne et al., 1999; Perlman et al., 1994; Sigelman, Budd, Spanhel, & Schoenrock, 1981; Sigelman, Budd, Winer, Schoenrock, & Martin, 1982; Sigelman et al., 1980). Due to cognitive limitations and the fact that in contrast to individuals without intellectual disabilities, those with intellectual disabilities often do not obtain education or knowledge about the legal system or their rights (Ericson & Perlman, 2001; Everington & Fulero, 1999; Fulero & Everington, 1995), they may also differ from individuals without intellectual disabilities in their interpretation of the lineup task. Therefore, it is reasonable to hypothesize that adults with intellectual disabilities will be susceptible to higher rates of choosing and false identifications compared to adults without intellectual disabilities.
In real-life situations, information factors are variables over which the police have no control, and to some extent, social utility variables are intrinsically part of the witness's personality profile and value system. To a considerable extent, however, social utility variables can be manipulated by the behavior of the police. For example, if the police either directly or indirectly imply confidence that the offender is in a lineup, this will increase the witness's subjective belief that the offender is present (Garrioch & Brimacombe, 2001; Wells et al., 1998). Moreover, there may also be greater concern that the police will be disappointed if an identification is not made, thus increasing the attractiveness of making a selection under conditions of uncertainty (Malpass & Devine, 1984). Factors affecting the social utility of decision-making, as well as other factors affecting the accuracy of eyewitness identifications, have been studied extensively in the general population. In witness identification research with individuals who do not have intellectual disabilities, investigators have been concerned mainly with factors that increase identification accuracy and decrease false identifications. The construction and administration of police lineups constitutes one of the most studied variables in this area. In the classic lineup, several foils (known innocents) and a suspect stand in single file, facing the witness, and then turn for a profile view. In practice, however, most identifications are made from photos (often referred to as photo spreads or arrays) rather than live lineups because there are several pragmatic advantages that accrue to the use of photos, and they may be less traumatic for victims than live viewing of the perpetrator of a crime (Peters, 1991; Shapiro & Penrod, 1986; Wells, 1988; Wells et al., 1998). 
The identification of innocent suspects is probably the topic of most substantive interest to researchers and the criminal justice system (Deffenbacher, 1991; Doob & Kirshenbaum, 1973; Egeth, 1993; Luus & Wells, 1991; Malpass & Devine, 1983, 1984; Wells, 1993; Wells et al., 1998). Lineups in which it is easy to identify the person suspected by the police are generally regarded as suggestive or “unfair.” It is important that a suspect is selected because the witness recognizes the individual as the perpetrator. The foils in a lineup should have similar broad characteristics to the suspect, as described in the witness's original description of the suspect. If the foils in a lineup do not resemble the witness's prior description of the suspect, the witness may be able to infer the identity of the person whom the police suspect, or they may select a candidate merely because he or she looks most like the culprit relative to other members of the group. Selection then is based on rational deduction rather than recognition (Gonzalez, Ellsworth, & Pembroke, 1994; Wells, 1988; Wells et al., 1998). It is also important that all members of the lineup are dressed similarly so that identification is based on recognition of the suspect rather than recognition or distraction by certain pieces of clothing (Lindsay et al., 1991). The mock witness paradigm, first used by Doob and Kirshenbaum (1973), can be used to determine whether selected foils are reasonably similar to the suspect. In this procedure, subjects or “mock witnesses,” who have never seen the suspect, are given a verbal description of him or her and the crime. They are then asked to select the individual they believe committed the crime from a lineup. Because the mock witnesses have not seen the actual suspect, based on a verbal description alone, the suspect should not be more likely to be selected than one of the foils, if the foils are reasonably similar to the suspect. 
Thus, the mock witness procedure can be used to rule out the possibility that foils selected for a lineup can be dismissed merely on the basis of a verbal description of the perpetrator. Once reasonable foils have been selected using the mock witness paradigm, there are a number of ways to assess the resulting “fairness” of the selected lineup. One method is to consider the distribution of mock witness choices across members of the lineup, which should not depart significantly from chance expectation, as measured by a chi-square test. Bias towards selection of the actual suspect can be measured statistically by assessing the difference between the proportion of mock witness identifications of the suspect expected by chance and the proportion observed, using the standard z test for proportions. The suspect or perpetrator should not have a probability greater than chance of being selected on the basis of a description alone. As a basic standard for constructing a fair lineup, it is desirable that these two criteria are met. The structure or mode of presentation of the lineup, and the instructions provided to witnesses, can also affect rates of correct and incorrect identification. Suspect-absent lineups (also known as blank lineups) have been used extensively in research studies to examine response bias tendencies (e.g., Lindsay et al., 1991; Malpass & Devine, 1984, 1981; Parker & Carranza, 1989; Wells & Luus, 1990). Typically in such procedures, following a contrived witness situation, research participants are shown a lineup in which the “suspect” is deliberately omitted, followed by a second lineup in which the suspect is present. This procedure provides a means of assessing the degree to which a witness is prone to guessing and his/her ability to differentiate the offender from other members of the lineup. 
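The two mock-witness fairness criteria described earlier (choices not departing from a uniform distribution, and a suspect-selection rate at chance) can be sketched in code. The counts below are invented for illustration only (they are not data from this study); the critical values 11.07 (chi-square, df = 5, alpha = .05) and 1.96 (two-tailed z, alpha = .05) are standard table values:

```python
import math

# Hypothetical mock-witness data: 60 mock witnesses, a 6-person lineup,
# with position 0 holding the actual suspect.
choices = [12, 9, 11, 10, 8, 10]
n = sum(choices)                 # 60 mock witnesses
k = len(choices)                 # 6 lineup members
expected = n / k                 # 10 choices per member under chance

# Criterion 1: chi-square goodness-of-fit against a uniform distribution.
chi_sq = sum((obs - expected) ** 2 / expected for obs in choices)
critical = 11.07                 # chi-square critical value, df = 5, alpha = .05
print(chi_sq, chi_sq < critical)         # 1.0 True -> no departure from chance

# Criterion 2: z test that the suspect's selection rate exceeds chance (1/k).
p_hat = choices[0] / n
p0 = 1 / k
z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)
print(round(z, 2), abs(z) < 1.96)        # 0.69 True -> no bias toward the suspect
```

Under these made-up counts, both criteria are met, so the lineup would pass the basic fairness standard described above.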
With respect to the latter point, such studies demonstrate that when a witness has a poor memory for the perpetrator, he or she is inclined to make a lineup selection because a candidate looks most like the perpetrator relative to other members in the lineup rather than making a selection because the candidate looks exactly like the perpetrator (Malpass & Devine, 1984; Wells & Luus, 1990). Research suggests that the likelihood of an innocent suspect being falsely identified is greatly reduced when a suspect-absent lineup is used (Wells, 1988; Wells et al., 1998). The procedure is intended to protect an innocent suspect from a witness who is guessing and to bolster the credibility of a witness who is able to correctly reject an entire set of photographs prior to identifying the suspect. Suspect-absent lineups have also been used extensively to examine the effect of instructions to witnesses on rates of correct and false identifications. Instructions may be considered biased if they directly or indirectly imply or suggest that the perpetrator is present in the lineup and that the witness should make a selection. Unbiased instructions are those in which it is indicated that the perpetrator may or may not be present and that “no-choice” is an option. Although there are exceptions (e.g., Shepherd, 1983), research with adults generally suggests that unbiased instructions reduce rates of “choosing” and false identifications without decreasing rates of correct identifications (e.g., Lindsay et al., 1991; Malpass & Devine, 1981; Steblay, 1997). The design of the present study allowed for examination of the interaction of the social utility of decision-making and situational variables in the process of lineup selection for adults with intellectual disabilities and those without intellectual disabilities. 
The inclusion of a suspect-absent lineup as well as a suspect-present lineup and assessment of rates of choosing permitted us to examine differences between adults with and those without intellectual disabilities, with respect to social utility variables, through analysis of guessing behavior in both groups. Prior to each lineup, participants received unbiased verbal instructions indicating that the suspect might or might not be present in the lineup, and they were advised not to guess. The inclusion of different observational circumstances in this study (i.e., clear, less distinct, and ambiguous observational circumstances) provided an opportunity to assess identification behavior in adults with and those without intellectual disabilities when the situational variables of the observed event differed. Rates of correct identifications were not expected to differ between groups. However, because individuals with intellectual disabilities may be more susceptible to social demands than those without intellectual disabilities, we anticipated that individuals with intellectual disabilities would be more inclined to guess (make choices) and make false identifications, particularly under conditions of greater ambiguity in the observed event. For the purposes of the study, we applied the diagnostic term intellectual disability to individuals who met the following criteria, established by the most recent Diagnostic and Statistical Manual of Mental Disorders, 4th edition (American Psychiatric Association, 1994): (a) an IQ of 70 or below as measured by an individually administered intelligence test, (b) concurrent deficits in adaptive behavior in two or more areas of functioning (e.g., social skills, life skills, domestic skills), and (c) onset of the disability prior to age 18. There are varying degrees of developmental disability (i.e., mild, moderate, severe, profound).
This research was concerned with individuals assessed or deemed to be functioning within the high moderate to high mild range of developmental disability, or an IQ of approximately 50 to 75. Extending the upper limit to 75 from 70 allows for a 5-point measurement error, which is assumed in most standardized intelligence tests. Individuals with this degree of intellectual disability were selected for participation in the study because they account for about 90% of people with intellectual disabilities (Ramey & Finkelstein, 1981).

## Method

### Participants

Participants were 60 adult clients of Surrey Place Centre, a research and outpatient treatment facility for people with intellectual disabilities located in Toronto, Canada. They functioned in the high moderate to mild range of developmental disability (IQ 50 to 75). There were two comparison groups, each with 60 participants: a university/college sample and a sample who had completed high school with no further education. Participants ranged in age from 18 to 50 years. There were an equal number of males and females in each of the individual groups. The 60 participants in each of the three groups were divided equally into three conditions in which film clarity was manipulated (3 × 3 ANOVA design).

### Film Development

Participants were blocked for group and gender to ensure balance and were then randomly assigned to film condition. Participants were asked to view one of three film clips. Each film depicted a nonviolent purse theft at a garden shop. In the clear film condition, close-up shots of the perpetrator were incorporated to depict very nonambiguously the appearance and intentions of the thief, a situation analogous to a serious crime against a person. In the less distinct film condition, longer camera angles were used, a situation analogous to one in which a crime is observed but does not directly involve the witness.
In the ambiguous film condition, some of the details and events were not explicitly shown, and the perpetrator of the crime was less clearly evident. This film condition was analogous to crime situations in which the suspicious behavior of the perpetrator may not be clear prior to the crime, and witnesses may have to piece together their perceptions of the scene.

### Photo Lineup Development

#### Research proceedings

Prior to commencement of the study procedures, an information letter was provided to all participants and read aloud to those with intellectual disabilities. An informed consent for the study procedures was obtained. All procedures and forms for this study received ethical clearance by a review board at Surrey Place Centre composed of internal and external referees. All participants observed one of the three films in an interview room at Surrey Place Centre. Each participant watched the film alone. As much as possible, we attempted to maintain consistency across individual interviews with respect to environmental factors, such as lighting in the room during observation of the film; color, contrast, and brightness of the film; and viewing distance from the television screen. For the photo identification task, the participant was asked to examine a suspect-absent photo spread. These pictures were attached to a large white cardboard backing. Five pictures were presented simultaneously. Each individual in the spread was shown from five different angles, using 3″ × 5″ photographs. A full frontal view, a closeup frontal shot of the face and upper body, one view of each three quarter profile, and a (left) side view were shown for each suspect. Prior to viewing the photo lineup, each participant was provided with these instructions: Now I am going to ask you to look at some pictures. I would like you to look carefully at the pictures and tell me if you see the man who played the thief (first man who ran out of the store).
He may be in the pictures you see or he may not be in the pictures. If you don't remember/aren't sure of what he looks like, don't guess, just say ‘I don't know.’ If you are sure that you see the thief, tell me which person that you think it is. You can have as much time as you want to look at the pictures. Participants indicating that they did not think the thief was in the lineup were then shown the suspect-present lineup. If participants expressed uncertainty about their choice, or asked if they should make a choice when uncertain, they were reminded not to guess and to make a selection only if they were sure of which person was the thief. Following the suspect-absent photo spread, all participants were presented with an 8-photo spread of five different candidates in which the perpetrator was present. The same instructions were given, including the warning that the suspect may or may not be present. Thus, in total there were 10 individuals from which to choose, 5 in each of the suspect-present and suspect-absent lineups. The 9 foils included were those derived from the mock witness paradigm previously described and identified as the best candidates for foils for the actual perpetrator. The actual perpetrator never appeared in the first or last position in the suspect-present lineup but was systematically assigned to one of the middle positions (Positions 2, 3, or 4) in accordance with recommended practice (Wells & Luus, 1990). When participants did not make a selection from either lineup, the reason for lack of selection was noted. This procedure allowed for more in-depth analysis of the reasons why participants chose or did not choose suspects from a lineup. All participants were fully debriefed about the study upon completion of the procedures.
### Statistical Analyses

A logistic regression (which is equivalent to a logit analysis for a dichotomous dependent variable), with group and film condition serving as the independent variables and selection or no selection as the dependent variable, was used to examine rates of correct identification in the suspect-present lineup, overall rates of false identification, and “no choice” selection in both the suspect-absent and suspect-present lineups combined.

## Results

As predicted, there were no differences between groups with respect to the number of correct identifications made across film conditions. Contrary to predictions, however, there was no film effect in the number of correct identifications nor was there an interaction effect. In general, the rate of correct identification was low for all groups, regardless of film condition, as indicated in Table 1.

Table 1. Number of Correct Identifications, False Identifications, and ‘No Choice’ Selections by Group

As expected, there were differences between groups with respect to overall rates of false identification in the suspect-absent and suspect-present lineups, Wald's χ2(2, N = 180) = 9.05, p = .011. Overall, in the suspect-present and suspect-absent lineups combined, individuals with intellectual disabilities were more likely than those without intellectual disabilities to make false identifications. The overall rate included participants who had made one or two false identifications. For all groups, rates of false identification were higher in the suspect-absent compared to the suspect-present lineup. Regardless of film condition, false identifications were very rare for participants without intellectual disabilities in the suspect-present lineup. In contrast to expectations, there was no film effect for rates of false identification nor was there an interaction effect.
Results for overall rates of false identification for all groups and film conditions, and in both the suspect-absent and suspect-present lineups, are also presented in Table 1. Given the high rates of false identification for individuals with intellectual disabilities in both the suspect-absent and suspect-present lineups, the frequency with which participants made false identifications in both lineups (i.e., two false identifications) was examined. Although only 1 university and 2 high school participants made two false identifications, 13 participants with intellectual disabilities made two false identifications. Overall, in the suspect-absent and suspect-present lineups, as predicted there were differences between groups with respect to the number of choices or selections made, Wald's χ2(2, N = 180) = 9.40, p = .009. Participants with intellectual disabilities were significantly more likely to make choices in the suspect-absent and suspect-present lineups compared to participants without intellectual disabilities. In contrast to predictions, there was no film effect or interaction effect. There was a trend for university participants to be less likely to make a choice in the suspect-present condition as the film became increasingly ambiguous. High school participants, however, were consistent across films in rates of choosing, and those with intellectual disabilities were also relatively consistent for the suspect-present lineup. As can be seen in the table, university participants were conservative about choosing in all film conditions in both lineups. High school educated participants were fairly conservative in all film conditions, with only 50% of participants making a selection overall. In contrast, the majority of participants with intellectual disabilities in all film conditions selected a candidate from at least one of the lineups, and film ambiguity did not lower rates of choosing. 
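The analysis reported above, a logistic regression with group and film condition as categorical predictors of whether a selection was made, can be sketched with a self-contained NumPy implementation. Everything here is illustrative: the data are randomly generated stand-ins for the study's 180 participants, and the Newton-Raphson fit is a generic sketch of logistic regression, not the authors' actual software or results.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 180
group = rng.integers(0, 3, n)  # 0 = intellectual disability, 1 = university, 2 = high school
film = rng.integers(0, 3, n)   # 0 = clear, 1 = less distinct, 2 = ambiguous
# Hypothetical data-generating assumption: the ID group selects more often.
p_true = np.where(group == 0, 0.8, 0.5)
y = (rng.random(n) < p_true).astype(float)  # 1 = made a selection, 0 = no choice

# Dummy-coded design matrix: intercept + 2 group dummies + 2 film dummies.
X = np.column_stack([
    np.ones(n),
    group == 1, group == 2,
    film == 1, film == 2,
]).astype(float)

# Newton-Raphson iterations for the logistic-regression MLE.
beta = np.zeros(X.shape[1])
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (y - p)                      # score vector
    hess = X.T @ (X * (p * (1 - p))[:, None]) # observed information
    beta += np.linalg.solve(hess, grad)

# Wald z for each coefficient; z**2 is a 1-df Wald chi-square statistic.
p = 1 / (1 + np.exp(-X @ beta))
hess = X.T @ (X * (p * (1 - p))[:, None])
se = np.sqrt(np.diag(np.linalg.inv(hess)))
names = ["intercept", "university", "high_school",
         "film_less_distinct", "film_ambiguous"]
for name, b, s in zip(names, beta, se):
    print(f"{name:>20}: beta = {b:+.3f}, Wald z = {b / s:+.2f}")
```

With the reference category set to the intellectual-disability group, negative group coefficients would correspond to the paper's finding that the comparison groups were less likely to make a selection.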
Participants who did not make a selection from the lineup were asked for the reason why they had not made a selection. Possible responses were: (a) the participant did not think that the suspect was in either lineup, (b) the participant could not remember what the perpetrator looked like, and (c) the participant was not sure which candidate was the perpetrator (i.e., could not distinguish between perpetrator and foils or did not feel sure enough to make a selection). The number of participants in each group and film condition who did not make a choice and their reason for not making a choice are presented in Table 2.

Table 2. Reason for Not Making a Selection From Lineup Within Groups

As can be seen in the table, the main reason why university and high school participants did not make a selection was that they did not believe the suspect was present in either lineup. The main reason why participants with intellectual disabilities did not make a selection was that they were not sure which candidate was the perpetrator. Approximately equal numbers of participants with intellectual disabilities and university participants did not make a selection because they were unsure of which candidate was the perpetrator. The reasons why both participants with and those without intellectual disabilities did not make selections have important implications with respect to understanding and explaining discrepancies in rates of choosing or guessing behavior between individuals with and those without intellectual disabilities, as shall be described in more detail in the Discussion section. Chi-square analyses revealed that the position of the perpetrator in the suspect-present lineup had no effect on rates of correct identification or false identification.
## Discussion

The results of this study indicate that individuals with intellectual disabilities make as many correct identifications as do individuals without intellectual disabilities, but they are more prone to guessing, as indicated by their higher rates of choosing and higher rates of false identifications. Casual observations also suggest that participants with intellectual disabilities may have been making selections based on an assumption that they should pick the candidate who most closely resembled the suspect or believed that the suspect must be present because they were being shown a lineup. We note that some participants with intellectual disabilities who made false identifications in the suspect-absent lineup and who subsequently made correct identifications in the suspect-present lineup commented that the actual suspect “looked more like the guy” or thought they had selected the same person from both lineups. High rates of false identifications and guessing behavior were observed in participants with intellectual disabilities, despite instructions advising that the perpetrator may or may not be present in the lineup and that they should not guess. It is difficult to be certain about the reasons why these adults disregarded the admonishment against guessing and advice that the suspect may or may not be present. However, the fact that these instructions have little effect on rates of false identification and choosing suggests a fundamental misunderstanding about the nature of the witness identification task. Unexpectedly, there were no differences across film conditions with respect to rates of correct identifications, false identifications, and rates of choosing. There was only a nonsignificant trend for lower rates of choosing among university participants as the film condition became increasingly ambiguous.
Overall, these results are very important for both adults with intellectual disabilities and those without intellectual disabilities because they suggest that situational variables at the time of the crime (e.g., opportunity to clearly observe the perpetrator) may be significantly less important in the equation of witness identification accuracy than the individual personality characteristics and cognitive and social factors that affect the social utility of decision-making. In other words, witnesses with a propensity to “guess” will guess, and witnesses with a cautious attitude towards guessing will tend to be cautious, regardless of the situational factors affecting their ability to identify the perpetrator. The issue of false identification is unlikely to be problematic for individuals with intellectual disabilities in most cases of sexual and physical abuse because in the majority of cases, the perpetrator is someone well-known to the individual, often a relative or caretaker (Furey, 1994; Sobsey & Doe, 1991). In criminal cases where the individual does not know the perpetrator, however, the results of this study indicate that there may be considerable risk of false identifications by individuals with intellectual disabilities when there is some uncertainty about the identity of the perpetrator. Because individuals with intellectual disabilities do not differ from those without intellectual disabilities with respect to the number of correct identifications, the differences in performances between these groups are not attributable to memory differences but, rather, to factors affecting the social utility of decision-making, such as understanding of the nature of the task and/or a desire to be cooperative with authority figures. In view of this observation, there are a number of steps that may be taken to improve accuracy and confidence in the witness identifications by individuals with intellectual disabilities.
Perhaps of greatest priority, there is a need to educate people with intellectual disabilities about their rights in the legal system. This education should occur in the public school systems, family, group home, and community settings. In a recent study on knowledge of legal terms and court proceedings, only 1 of 40 adults with intellectual disabilities in the study reported formal education in the school system with respect to the legal system. In contrast, the majority of adult participants without intellectual disabilities who were in the same study reported visiting courts or taking courses in the law as part of their public school experience (Ericson & Perlman, 2001). It is unreasonable to expect that anyone who has not been provided with exposure to the basic functioning of various aspects of the legal system would have an appreciation of important issues pertinent to identification tasks, such as the need for certainty in identifying a person who has been accused of a crime. It is worthy of note that there are several reports (e.g., Ericson & Perlman, 2001; Sobsey & Doe, 1991; Tharinger, Horton, & Millea, 1990; Wilson & Brewer, 1992) in which researchers have indicated a high incidence of victimization and bystander witnessing of offenses by individuals with intellectual disabilities for serious crimes, ranging from sexual and physical assault to robbery and theft. Wilson and Brewer found that individuals with intellectual disabilities were unlikely to report a crime themselves, which in part was attributed to lack of cognizance of how to access criminal justice services when a crime has occurred. When police contact was made, it was usually instigated by a third party. 
In view of the fact that people with intellectual disabilities are known to be victims and/or witnesses in serious offenses, specific education and instruction about personal rights and the legal system, including police lineups and identification tasks, should be considered essential and a point of advocacy in school or educational training. Education for all professionals in the legal system, including police, prosecutors, and defense attorneys, may also be important in terms of minimizing risk of false identifications. Individuals with intellectual disabilities may need informal practice with identification tasks, and additional detailed instruction, to understand that the suspect may not be present in the lineup and that guessing under conditions of uncertainty is not a desired response. Many recommendations in terms of police practice for eyewitness identification tasks for adults without intellectual disabilities (e.g., Kassin, 1998; Wells et al., 1998) should also be implemented with adults who have intellectual disabilities to ensure fairness for the accused. Police officers should be advised to be cautious about their behavior and to avoid making any statements that would directly or indirectly place pressure on a witness to make a lineup selection. Ideally, the person administering the lineup should be naive or “blind” with respect to knowledge of the actual suspect to ensure that no cues or indirect suggestions are provided to the witness as to the identity of the suspect. The foils selected for a photo or live lineup should bear reasonable similarity to the suspect, based on the prior description of the witness, to ensure that the person who is selected is chosen because he or she is recognized as the perpetrator as opposed to being selected through a process of elimination as being the person who looks “most like” the perpetrator.
Videotaping the lineup-selection process may provide a judge, jury, and legal representatives with an objective record of the witness's decision regarding the selection of a suspect and the context in which that decision was made (Kassin, 1998). Because the present study is the first eyewitness investigation involving individuals with intellectual disabilities that the authors are aware of in the literature, caution must be exercised in terms of generalization of the results. As an experimental study, the observed “crime” did not involve an event that was highly personally meaningful or stressful to participants, factors that may affect memory in either a positive or negative manner (e.g., Ceci & Bruck, 1993; Wells, 1988). Because participants in the study were being paid, there may have been an increased sense of “pressure” on their part, especially those with intellectual disabilities, who are likely less familiar with research proceedings, to “select” a candidate from the photo lineup. Participants may also have been more inclined to think that in an experimental situation, the perpetrator of the crime must be present in the lineup. Because only participants in the study who did not make a selection from a lineup were asked about the reasons for their lack of selection, further questioning and information is needed to assess the decision-making processes that operate when people make lineup selections. Corroboration of the results obtained in this research by other investigators and additional research in which investigators address such issues as the role of education and preparation of witnesses for the lineup task, instructions to witnesses, assessment of witness “certainty” when making responses, and reported reasons for selection and nonselection of candidates from lineups will provide additional insight into the best means to support individuals with intellectual disabilities in witness situations. 
Because there is considerable evidence that many individuals with intellectual disabilities can provide reliable testimony when interviewed properly and provided with the right kind of supports (Gudjonsson & Gunn, 1982; Isaacs & Ericson, 2000; Perlman et al., 1994) and because this research suggests that memory factors do not account for differences in performance between individuals with and those without intellectual disabilities, it is important that further steps be taken to understand the best means of supporting individuals with intellectual disabilities in eyewitness identification contexts. Such work will provide a means of facilitating increased participation of these individuals in the legal system, while reducing risks of false identification in situations where the perpetrator of a crime may be someone not well known by the victim or witness.

NOTE: The first author was affiliated with Surrey Centre when this study was conducted. This research was funded in part by a grant from the Department of Justice, Canada; the authors gratefully acknowledge their financial support for this project. Special thanks also to Debra Pepler for her suggestions and support throughout this project.

## References

- American Psychiatric Association. (1994). Diagnostic and statistical manual of mental disorders (4th ed.). Washington, DC: Author.
- Beal, C., Schmitt, K., & Dekle, D. (1995). Eyewitness identifications of children: Effects of absolute judgements, nonverbal response options, and event encoding. Law and Human Behavior, 19, 197–216.
- Brown, C., & Geiselman, R. (1990). Eyewitness testimony of mentally retarded: Effect of the cognitive interview. Journal of Police and Criminal Psychology, 6, 14–22.
- Ceci, J., & Bruck, M. (1993). The suggestibility of the child witness: A historical review and synthesis. Psychological Bulletin, 113, 403–439.
- Deffenbacher, K. (1991). A maturing of research on the behavior of eyewitnesses. Applied Cognitive Psychology, 5, 377–402.
- Dent, H. (1986). An experimental study of the effectiveness of different techniques of questioning mentally handicapped child witnesses. British Journal of Clinical Psychology, 25, 13–17.
- Dent, H., & Stephenson, S. (1979). Identification evidence: Investigations of factors affecting the reliability of juvenile and adult witnesses. In D. Farrington, K. Hawkins, & S. Lloyd-Bostock (Eds.), Psychology, law and legal processes (pp. 195–206). Atlantic Highlands, NJ: Humanities Press.
- Doob, A., & Kirshenbaum, H. (1973). Bias in police lineups—partial remembering. Journal of Police Science and Administration, 1, 287–293.
- Egeth, H. (1993). What do we not know about eyewitness identification. American Psychologist, 48, 577–580.
- Ericson, K., & Perlman, N. (2001). Understanding of court proceedings and legal terminology in adults with developmental disabilities. Law and Human Behavior, 25, 529–545.
- Everington, C., & Fulero, S. (1999). Competence to confess: Measuring understanding and suggestibility of defendants with mental retardation. Mental Retardation, 37, 212–220.
- Fulero, S., & Everington, C. (1995). Assessing competence to waive Miranda rights in defendants with mental retardation. Law and Human Behavior, 19, 533–543.
- Furey, E. (1994). Sexual abuse of adults with mental retardation: Who and where. Mental Retardation, 32, 173–180.
- Garrioch, L., & Brimacombe, C. (2001). Lineup administrators' expectations: Their impact on eyewitness confidence [Special issue]. Law and Human Behavior, 25, 299–314.
- Gonzalez, R., Ellsworth, P., & Pembroke, M. (1994). Response biases in lineups and showups. Journal of Personality and Social Psychology, 64, 525–557.
- Goodman, G., Hirschman, J., Hepps, D., & Rudy, L. (1991). Children's memory for stressful events. Merrill-Palmer Quarterly, 37, 109–158.
- Goodman, G., & Reed, R. (1986). Age differences in eyewitness testimony. Law and Human Behavior, 10, 317–332.
- Grano, J. (1984). A legal response to the inherent dangers of eyewitness identification testimony. In G. Wells & E. Loftus (Eds.), Eyewitness testimony (pp. 315–335). Cambridge: Cambridge University Press.
- Gudjonsson, G., & Gunn, J. (1982). The competence and reliability of a witness in a criminal court: A case report. British Journal of Psychiatry, 141, 624–627.
- Isaacs, B. (1997, July). Witnesses with developmental disabilities: The cognitive interview and time delay. Paper presented at the meeting of the Society for Applied Research in Memory and Cognition, Toronto, Ontario.
- Isaacs, B., & Ericson, K. (2000, April). Improving accuracy in witnesses with developmental disabilities. Paper presented at the meeting of the Ontario Association of Developmental Disabilities, Toronto, Ontario.
- Jens, K., Gordon, B., & A. (1990). Remembering activities performed versus imagined: A comparison of children with mental retardation and children with normal intelligence. International Journal of Disability, Development, and Education, 37, 201–213.
- Kassin, S. (1998). Eyewitness identification procedures: The fifth rule. Law and Human Behavior, 22, 649–653.
- King, M., & Yuille, J. (1987). Suggestibility and the child witness. In S. Ceci, M. Toglia, & D. Ross (Eds.), Children's eyewitness memory (pp. 24–35). New York: Springer-Verlag.
- Leippe, M., Romanczyk, A., & Manion, A. (1991). Eyewitness memory for a touching experience: Accuracy differences between child and adult witnesses. Journal of Applied Psychology, 76, 367–379.
- Lindsay, R., Lea, J., Nosworthy, G., Fulford, J., Hector, J., Levan, V., & Seabrook, C. (1991). Biased lineups: Sequential presentation reduces the problem. Journal of Applied Psychology, 76, 796–802.
- Luus, C., & Wells, G. (1991). Eyewitness identification and the selection of distractors for lineups. Law and Human Behavior, 15, 43–57.
- Malpass, R., & Devine, P. (1981). Eyewitness identification: Lineup instructions and the absence of the offender. Journal of Applied Psychology, 66, 482–489.
- Malpass, R., & Devine, P. (1983). Measuring the fairness of eyewitness identification lineups. In S. Lloyd-Bostock & B. Clifford (Eds.), Evaluating eyewitness evidence (pp. 81–102). Toronto: Wiley.
- Malpass, R., & Devine, P. (1984). Research on suggestion in lineups and photospreads. In G. Wells & E. Loftus (Eds.), Eyewitness testimony (pp. 64–91). Cambridge: Cambridge University Press.
- Milne, R., Clare, I., & Bull, R. (1999). Using the cognitive interview with adults with mild learning disabilities. Psychology, Crime and Law, 5, 81–99.
- Parker, J., & Carranza, L. (1989). Eyewitness testimony of children in target-present and target-absent lineups. Law and Human Behavior, 13, 133–149.
- Parker, J., & Ryan, V. (1993). An attempt to reduce guessing behavior in children's and adults' eyewitness identifications. Law and Human Behavior, 17, 11–26.
- Perlman, N., Ericson, K., Esses, V., & Isaacs, B. (1994). The developmentally handicapped witness: Competency as a function of interview strategy. Law and Human Behavior, 18, 171–187.
- Peters, D. (1987). The impact of naturally occurring stress on children's memory. In S. Ceci, M. Toglia, & D. Ross (Eds.), Children's eyewitness memory (pp. 122–141). New York: Springer-Verlag.
- Peters, D. (1991). The influence of stress and arousal on the child witness. In J. Doris (Ed.), The suggestibility of children's recollections (pp. 60–76). Washington, DC: American Psychological Association.
- Shapiro, P., & Penrod, S. (1986). Meta-analysis of facial identification studies. Psychological Bulletin, 100, 139–156.
- Shepherd, J. (1983). Identification after long delays. In S. Lloyd-Bostock & B. Clifford (Eds.), Evaluating witness evidence (pp. 173–187). Toronto: Wiley.
- Sigelman, C., Budd, E., Spanhel, C., & Schoenrock, C. (1981). When in doubt, say yes: Acquiescence in interviews with mentally retarded persons. Mental Retardation, 19, 53–58.
- Sigelman, C., Budd, E., Winer, J., Schoenrock, C., & Martin, P. (1982). Evaluating alternative techniques of questioning mentally retarded persons. American Journal of Mental Deficiency, 86, 511–518.
- Sigelman, C., Schoenrock, C., Spanhel, C., Sherrilyn, G., Hromas, J., Winer, E., Budd, C., & Martin, P. (1980). Surveying mentally retarded persons: Responsiveness and response validity in three samples. American Journal of Mental Deficiency, 84, 479–486.
- Sobsey, D., & Doe, T. (1991). Patterns of sexual abuse and assault. Sexuality and Disability, 9, 243–259.
- Steblay, N. (1997). Social influence in eyewitness recall: A meta-analytic review of lineup instruction effects. Law and Human Behavior, 21, 283–298.
- Tharinger, D., Horton, C., & Millea, S. (1990). Sexual abuse and exploitation of children and adults with mental retardation and other handicaps. Child Abuse and Neglect, 14, 301–312.
- Wells, G. (1988). Eyewitness identification: A system handbook. Toronto: Carswell.
- Wells, G. (1993). What do we know about eyewitness identifications? American Psychologist, 48, 553–571.
- Wells, G., & Luus, C. (1990). Police lineups as experiments: Social methodology as a framework for properly conducted lineups. Personality and Social Psychology Bulletin, 16, 106–117.
- Wells, G., Small, M., Penrod, S., Malpass, S., Fulero, M., & Brimacombe, C. (1998). Eyewitness identification procedures: Recommendations for lineups and photospreads. Law and Human Behavior, 22, 603–647.
- Wilson, C., & Brewer, N. (1992). The incidence of criminal victimisation of individuals with an intellectual disability. Australian Psychologist, 27, 114–117.

## Author notes

Authors: Kristine Ericson, PhD, Psychoeducational Consultant, Toronto District School Board, West Education Office, 155 College St., Toronto, Ontario, Canada, M5T 1P6. kristine_ericson@hotmail.com.
Barry Isaacs, Doctoral Candidate, Analyst, Surrey Place Centre, 2 Surrey Place, Toronto, Ontario, Canada M5S 2C2
http://physics.stackexchange.com/tags/quantum-information/hot
# Tag Info

6 The answer is structured as follows: I will first give the quantum circuit corresponding to a normal double slit (or interferometer), then the circuit where the which-way information has been recorded, a circuit where the which-way information is first recorded and then erased in a unitary way, and finally a circuit where the which-way information is ...

4 The best way to understand this is to work through an example. Here is how you factor the number 15 using Shor's algorithm. (If you prefer, view the problem not as factoring 15 but as solving $2^r=1 \pmod{15}$, and skip Step Seven). As you work through this, imagine replacing 15 with a much larger composite number, and you'll see the advantage of ...

3 R.F. Werner, Optimal Cloning of Pure States describes the optimal procedure for cloning multiple copies of the same pure state (which, remarkably, for pure states is independent of the figure of merit). Abstract: We construct the unique optimal quantum device for turning a finite number of $d$-level quantum systems in the same unknown pure state ...

3 The correct formula is $$\mathrm{tr}[A^TB]=\langle m \vert A\otimes B\vert m\rangle\ ,$$ so your proof is correct; you're just trying to prove an erroneous formula. (You can easily verify this because with a $\dagger$ the l.h.s. is sesquilinear while the r.h.s. is bilinear.) But I have the feeling this has been asked before. If you have this from ...

3 I would highly recommend the two seminal papers by E. T. Jaynes, http://journals.aps.org/pr/abstract/10.1103/PhysRev.106.620 and http://journals.aps.org/pr/abstract/10.1103/PhysRev.108.171 Also check out the book by E. T. Jaynes, which has a focus on the foundations in probability but is rather light on applications in physics: ...

2 The problem is that you are treating quantum objects as both classical waves and classical particles simultaneously. More specifically, you talk about them passing through one slit or the other and sensing which slit an electron goes through. But in order for the interference pattern to emerge, the electrons have to pass through both slits at a time. We can ...

2 The OP's confusion seems to stem from the incorrect assumption that "if my detector isn't triggered I cannot see how one could argue it interacted [with the electron]". Just because the detector sometimes does not click does not mean that there is no interaction at all. A good way to think about this is in terms of continuous measurement. This and this ...

2 If B released energy when A was measured, you could use it for communication. And you know the answer to "Can entanglement be used for communication?" is NO. You mentioned it in your question! Therefore... No, entangled particles don't release energy when their partner is measured. They don't change in any locally determinable way when their partner is ...

2 The Hadamard gate is a 180 degree rotation around the diagonal X+Z axis of the Bloch sphere. Terrible picture (from a blog post): In the diagram the |0> state is at the top (+Z) of the sphere and the |0>+|1> state is at the front (+X). Rotating around the back-bottom-to-front-top axis (X=Z, Y=0) moves the top point (|0>) to the front point ...

2 NOTE: This answer has now been merged into Understanding the quantum eraser from a quantum information stand point (part IV). Let me start by copying the first part of my previous answer which describes the circuit model of a double-slit or other interference experiment; then, I will try to describe the delayed choice setting (the way I understand it). ...

1 If the particles are nontrivially entangled, then particle $A$ cannot be in an eigenstate of the energy operator (or any other operator that acts just on $A$'s state space) in the first place. If the initial entangled state is, say, $X\otimes Y+Z\otimes W$, where (for example) $X$ and $Z$ are energy eigenstates, then an observation of particle $B$ will ...

1 I have not looked at the book; however, the sense in which it is an approximation is that it is neglecting the constant term. The Hamiltonian of a SHO is $H= (a^{\dagger}a + 1/2)\hbar\omega$. This means that the ground state energy of the SHO is $1/2\hbar\omega$. This is what is being neglected since it is only a constant.

1 No, you can't tell if someone else has observed an entangled copy of your quantum information. And the YouTube videos about the delayed choice experiment are particularly bad, even for quantum things. You can get evidence that an entangled copy was made, because your qubit's density matrix will correspond to a mixed state instead of a pure state (center of ...

1 Presumably your vector stands for a three-dimensional quantum state. The Bloch sphere is really only useful for two-dimensional quantum states, as commented by Emilio Pisanty. Theoretically, there is an equivalent description to the Bloch sphere for three dimensional quantum states, but it is not useful for visualization as it is an eight dimensional ...

Only top voted, non community-wiki answers of a minimum length are eligible
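The factoring walkthrough above hinges on finding the order $r$ of 2 modulo 15. As a rough illustration only (not the quantum algorithm itself), the sketch below brute-forces the order classically; replacing that brute-force loop is exactly what Shor's quantum period-finding subroutine speeds up.

```python
from math import gcd

def order(a, n):
    """Multiplicative order of a modulo n: smallest r > 0 with a**r = 1 (mod n)."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

# Factor N = 15 with base a = 2, following the steps sketched in the answer.
N, a = 15, 2
r = order(a, N)      # classically brute-forced; this is the quantum step
assert r % 2 == 0    # r = 4 here, so the method applies
f1 = gcd(a ** (r // 2) - 1, N)
f2 = gcd(a ** (r // 2) + 1, N)
print(r, f1, f2)     # -> 4 3 5
```

For a much larger composite, the `while` loop is infeasible, which is where the quantum speedup matters.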
http://libros.duhnnae.com/material/2017may2/149408831425-Anomalous-Couplings-and-Radiation-Zeros-in-the-e-e-to-bar-process-I-Rodrigue.php
# $\tau$ Anomalous Couplings and Radiation Zeros in the $e^+e^- \to \tau\bar\tau\gamma$ process

Download this document as a PDF, free of charge, or read it online.

The process $e^+e^- \to \tau \bar\tau \gamma$ contains configurations of the four-momenta for which the scattering amplitude vanishes (Radiation Zeros or Null Zone). These Radiation Zeros only occur for couplings given by a gauge theory, in particular the standard model. Therefore they are sensitive to physics beyond the standard model as the anoma

Author: I. Rodriguez; O. A. Sampayo

Source: https://archive.org/
https://math.stackexchange.com/questions/282879/how-many-ways-can-one-paint-the-edges-of-a-petersen-graph-black-or-white
# How many ways can one paint the edges of a Petersen graph black or white?

How many ways can one paint the edges of a Petersen graph black or white? I know that the symmetry group of the Petersen graph is $S_5$. Furthermore this seems like a case where I should use Burnside's lemma. I'm sorry if the following is too verbose or uses non-standard notation; I haven't been acquainted with graph theory.

$S_5$ has 7 conjugacy classes, namely those with cycle types: (1,1,1,1,1), (1,1,1,2), (2,2,1), (2,3), (1,1,1,3), (4,1), (5). The Petersen graph has 15 edges, so the identity (1,1,1,1,1) would leave $2^{15}$ different colorings fixed. The 5-cycle (5) is a rotation of the whole graph and as such would leave $2^3$ colorings fixed: the "outside" could be white or black, and the connecting edges and the "inside" edges could each be either white or black. Rotation around one "connecting" edge involves the (2,2,1) cycles. I won't tire you with the details, but I found $2^9$ colorings. From here I'm stuck, however; I can't find any more symmetries than these. How do I find the colorings left fixed by the other conjugacy classes?

Addendum Mar 24 2016. A much improved solution to this problem is at the following MSE link which renders this thread obsolete.

• Wow! Not exactly what I was looking for but very nicely done! – Lee Wang Jan 21 '13 at 20:55
• Here is another interesting computation using cycle indices. – Marko Riedel Nov 10 '13 at 3:01

Using Burnside's lemma is the right idea. Each element of $S_5$ determines a permutation of the 15 edges of the Petersen graph. If this permutation has exactly $r$ cycles on edges, then it fixes exactly $2^r$ 2-colorings of the edge set. If $a$ and $b$ are conjugate elements of $S_5$, then the permutations of the edges they determine have the same cycle structure. (To prove this, observe that the induced permutations are conjugate in $S_{15}$, and hence have the same cycle structure.)
So you just have to compute $r$ for one element in each conjugacy class, which is mildly tedious at worst, and then apply Burnside.

• I'm not entirely sure if I follow your argument. How would one compute $r$ without a picture? My group theory book only deals with very simple visualizable cases. – Lee Wang Jan 20 '13 at 20:39
• An automorphism of Petersen is a permutation of its 10 vertices, and hence is an element of $S_{10}$. The set of all automorphisms forms a permutation group isomorphic to $S_5$, but not equal to it. You are dealing with a case where you cannot readily associate each automorphism with a picture; it is necessary to view each automorphism as a permutation. – Chris Godsil Jan 20 '13 at 21:52
• Ah that makes sense. But like I said my book only mentions cases where you can visualize the problem. I'm lost on how you would construct a nongeometric way to compute the $r$ for each conjugacy class. – Lee Wang Jan 21 '13 at 20:55
• If $V(P)$ consists of pairs from $S=\{0,1,2,3,4\}$, then the edges correspond to partitions of $S$ with one cell of size 1 and two of size 2, e.g., $\{\{0\},\{1,2\},\{3,4\}\}$. Now the permutation $(01)(234)$ from $S_5$ maps this edge to $\{\{1\},\{0,3\},\{4,2\}\}$ and then to $\{\{0\},\{1,4\},\{2,3\}\}$ and so on. If we continue, we get a cycle of 6 edges. Choose an edge not in this cycle and repeat; eventually we arrange the 15 edges into cycles, and the number of these cycles is the value of $r$. Repeat for one permutation in each conjugacy class. – Chris Godsil Jan 21 '13 at 22:15
• Thank you very very much! This was truly helpful. – Lee Wang Jan 22 '13 at 9:43
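The procedure in Chris Godsil's last comment can be mechanized. The sketch below encodes each edge as a partition of $\{0,\dots,4\}$ into one singleton and two pairs, counts edge cycles for every permutation (brute-forcing all 120 elements of $S_5$ rather than one per conjugacy class), and applies Burnside:

```python
from itertools import permutations
from math import factorial

# Each edge of the Petersen graph <-> a partition of {0,...,4} into one
# singleton cell and two pair cells, as described in the comments.
verts = range(5)
edges = set()
for s in verts:
    a, b, c, d = (v for v in verts if v != s)
    for pair in ([{a, b}, {c, d}], [{a, c}, {b, d}], [{a, d}, {b, c}]):
        edges.add(frozenset([frozenset([s])] + [frozenset(p) for p in pair]))
assert len(edges) == 15  # the Petersen graph has 15 edges

def act(perm, edge):
    # Apply a vertex permutation cell-wise to an edge-partition.
    return frozenset(frozenset(perm[v] for v in cell) for cell in edge)

def cycle_count(perm):
    # Number r of cycles of the induced permutation of the 15 edges.
    seen, r = set(), 0
    for e in edges:
        if e not in seen:
            r += 1
            while e not in seen:
                seen.add(e)
                e = act(perm, e)
    return r

# Burnside: average 2**r over all 120 automorphisms.
total = sum(2 ** cycle_count(p) for p in permutations(verts))
print(total // factorial(5))  # -> 396
```

This reproduces the cycle counts claimed in the question ($2^{15}$ for the identity, $2^9$ for type (2,2,1), $2^3$ for the 5-cycle) and yields 396 edge 2-colorings up to symmetry.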
https://www.projecteuclid.org/euclid.jmsj/1327586977
## Journal of the Mathematical Society of Japan

### Pseudohermitian invariants and classification of CR mappings in generalized ellipsoids

#### Abstract

Given a strictly pseudoconvex hypersurface $M \subset \mathbb{C}^{n+1}$, we discuss the problem of classifying all local CR diffeomorphisms between open subsets $N, N' \subset M$. Our method exploits the Tanaka-Webster pseudohermitian invariants of a contact form $\vartheta$ on $M$, their transformation formulae, and the Chern-Moser invariants. Our main application concerns a class of generalized ellipsoids where we classify all local CR mappings.

#### Article information

Source: J. Math. Soc. Japan, Volume 64, Number 1 (2012), 153-179.

Dates: First available in Project Euclid: 26 January 2012

https://projecteuclid.org/euclid.jmsj/1327586977

Digital Object Identifier: doi:10.2969/jmsj/06410153

Mathematical Reviews number (MathSciNet): MR2879739

Zentralblatt MATH identifier: 1250.32034

Subjects: Primary: 32V40: Real submanifolds in complex manifolds

#### Citation

MONTI, Roberto; MORBIDELLI, Daniele. Pseudohermitian invariants and classification of CR mappings in generalized ellipsoids. J. Math. Soc. Japan 64 (2012), no. 1, 153--179. doi:10.2969/jmsj/06410153. https://projecteuclid.org/euclid.jmsj/1327586977

#### References

• H. Alexander, Holomorphic mappings from the ball and polydisc, Math. Ann., 209 (1974), 249–256.
• M. S. Baouendi, P. Ebenfelt and L. P. Rothschild, Real submanifolds in complex space and their mappings, Princeton Math. Ser., 47, Princeton University Press, Princeton, NJ, 1999.
• S. S. Chern and J. K. Moser, Real hypersurfaces in complex manifolds, Acta Math., 133 (1974), 219–271.
• G. Dini and A. Selvaggi Primicerio, Localization principle of automorphisms on generalized pseudoellipsoids, J. Geom. Anal., 7 (1997), 575–584.
• S. Dragomir and G. Tomassini, Differential geometry and analysis on CR manifolds, Progr. Math., 246, Birkhäuser Boston Inc., Boston, MA, 2006.
• T. Iwaniec and G. Martin, Geometric function theory and non-linear analysis, Oxford Math. Monogr., The Clarendon Press, Oxford University Press, New York, 2001.
• D. Jerison and J. M. Lee, Extremals for the Sobolev inequality on the Heisenberg group and the CR Yamabe problem, J. Amer. Math. Soc., 1 (1988), 1–13.
• A. Kodama, S. Krantz and D. Ma, A characterization of generalized complex ellipsoids in ${\bm C}^n$ and related results, Indiana Univ. Math. J., 41 (1992), 173–195.
• W. Kühnel and H. B. Rademacher, Conformal diffeomorphisms preserving the Ricci tensor, Proc. Amer. Math. Soc., 123 (1995), 2841–2848.
• S. G. Krantz, Function theory of several complex variables, AMS Chelsea Publishing, Providence, RI, 2001, reprint of the 1992 edition.
• J. Lee, Pseudo-Einstein structures on CR manifolds, Amer. J. Math., 110 (1988), 157–178.
• J. Lelong-Ferrand, Geometrical interpretations of scalar curvature and regularity of conformal homeomorphisms, in Differential geometry and relativity, Mathematical Phys. and Appl. Math., 3, Reidel, Dordrecht, 1976, pp. 91–105.
• M. Landucci and A. Spiro, On the localization principle for the automorphisms of pseudoellipsoids, Proc. Amer. Math. Soc., 137 (2009), 1339–1345.
• D. Morbidelli, Liouville theorem, conformally invariant cones and umbilical surfaces for Grushin-type metrics, Israel J. Math., 173 (2009), 379–402.
• B. Osgood and D. Stowe, The Schwarzian derivative and conformal mapping of Riemannian manifolds, Duke Math. J., 67 (1992), 57–99.
• W. Rudin, Holomorphic maps that extend to automorphisms of a ball, Proc. Amer. Math. Soc., 81 (1981), 429–432.
• T. Sunada, Holomorphic equivalence problem for bounded Reinhardt domains, Math. Ann., 235 (1978), 111–128.
• N. Tanaka, On the pseudo-conformal geometry of hypersurfaces of the space of $n$ complex variables, J. Math. Soc. Japan, 14 (1962), 397–429.
• N. Tanaka, A differential geometric study on strongly pseudo-convex manifolds, Lectures in Mathematics, Department of Mathematics, Kyoto University, No. 9, Kinokuniya Book-Store Co. Ltd., Tokyo, 1975.
• S. M. Webster, Pseudo-Hermitian structures on a real hypersurface, J. Differential Geom., 13 (1978), 25–41.
• S. M. Webster, Holomorphic differential invariants for an ellipsoidal real hypersurface, Duke Math. J., 104 (2000), 463–475.
http://tex.stackexchange.com/questions/8898/how-to-add-indentation-to-all-verbatim-environment
# How to add indentation to all verbatim environments?

I have a style where all paragraphs have an indentation of about 5cm. The verbatim environment does not have any indentation, so the combination of paragraphs and verbatim environments does not look good. Something like this:

This is paragraph ... Next line of the same paragraph.

and this is verbatim

Is it possible to define a global indent for the verbatim environment?

• Why don't you use listings.sty? – xport Jan 13 '11 at 17:46
• Can you provide a minimal example? I don't think I understand what is going on exactly. Is the verbatim in the margin? – TH. Jan 14 '11 at 8:32

Some extended packages provide this feature. For example, fancyvrb:

% in preamble \usepackage{fancyvrb} \begin{Verbatim}[xleftmargin=2em] hello \end{Verbatim}

A global setting can be done by \fvset: \fvset{xleftmargin=2em}

What about this idea: you can put all your verbatim environments inside quote environments. If that fits your needs, you can of course define \newenvironment{qverb}{\begin{quote}\begin{verbatim}}{\end{verbatim}\end{quote}} as your own "qverb" environment for that case. Greets

• It cannot work (in fact, it is impossible). You can't use a verbatim env in another environment's argument. – Leo Liu Jan 28 '11 at 7:20
• @Leo-Liu - I'm sorry to tell you that your comment is wrong. I just tried \documentclass[english]{report} \usepackage{babel} \usepackage{blindtext} \begin{document} \blindtext \begin{verbatim} Hallo \end{verbatim} \blindtext \begin{quote}\begin{verbatim} Hallo \end{verbatim}\end{quote} \blindtext \end{document} and it worked well. I think you meant one cannot use other environments INSIDE of verbatim, did you? – Bastian Ebeling Jan 28 '11 at 7:39
• I don't want to argue. You can use a verbatim env in a quote env, but you cannot use verbatim in arguments of \newenvironment and use it without errors. Just put your definition of qverb in your document and have a try. – Leo Liu Jan 28 '11 at 7:54
• And you can refer to this FAQ: tex.ac.uk/cgi-bin/texfaq2html?label=verbwithin – Leo Liu Jan 28 '11 at 7:55
• @Leo-Liu: Okay, I got it - you are right. Thanks for teaching me! – Bastian Ebeling Feb 24 '11 at 19:45
http://math.stackexchange.com/questions/88704/finding-points-which-divides-a-right-trapezoids-area-into-equal-pieces
# Finding points which divide a right trapezoid's area into equal pieces

I have a right trapezoid as follows; We have $h$, $b$ and $a$. For any $n$, I need to divide the total area of the trapezoid into equal parts. I have to find a general formulation for the positions of the $p$ points for my study. Although I tried to figure it out starting from $n=2$, $n=3$, ... I wasn't able to generalize the formulation. Is there a general formulation for this?

• thanks for editing, I wasn't able to put images with my current reputation :) – knightoi Dec 5 '11 at 23:51
• Do you mean that you have a solution for $n=2$ and $n=3$ and just need to generalize it to arbitrary $n$? If so, please show your existing work. – Henning Makholm Dec 6 '11 at 0:06
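The figure referenced in the question did not survive extraction, so the geometry below is an assumption: the parallel sides have lengths $a$ and $b$, the height is $h$, and the cuts are lines parallel to those sides. Under that assumption the cumulative area $A(x) = ax + (b-a)x^2/(2h)$ is a quadratic in the cut position $x$, and each cut point solves $A(x_i) = \tfrac{i}{n} \cdot \tfrac{(a+b)h}{2}$:

```python
from math import sqrt

def cut_points(a, b, h, n):
    """Distances x_1 < ... < x_{n-1} from the side of length a at which
    cuts parallel to the parallel sides (lengths a and b, height h)
    split the trapezoid into n equal areas.  Solves the quadratic
    a*x + (b - a)*x**2 / (2*h) = i * total / n for each i."""
    total = (a + b) * h / 2
    pts = []
    for i in range(1, n):
        target = i * total / n
        if a == b:  # rectangle: the quadratic degenerates to a linear equation
            pts.append(target / a)
        else:
            pts.append(h * (-a + sqrt(a * a + 2 * (b - a) * target / h)) / (b - a))
    return pts

pts = cut_points(2, 4, 3, 2)
print(pts)  # one cut; the area below it is half of (2 + 4) * 3 / 2 = 4.5
```

Sanity check: for $i=n$ the discriminant becomes $a^2 + (b^2 - a^2) = b^2$, giving $x = h$ as expected.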
https://math.stackexchange.com/questions/1578116/can-you-simplify-this-term
# Can you simplify this term?

$$X=\frac{\frac{c}{r^2}+\frac{1-c}{(1+r)^{T+1}}}{\frac{c}{r}+\frac{1-c}{(1+r)^T}-1}$$

• @EricS. Yes all are scalars and you may assume $r>0$. – emcor Dec 16 '15 at 9:56
• If you rewrite both numerator and denominator into a single fraction, i.e. of the form $X=\frac{\frac{\alpha}{\beta}}{\frac{\gamma}{\delta}}$, you can rewrite that into $X=\frac{\alpha}{\beta}\frac{\delta}{\gamma}$ and a lot of terms will drop – Eric S. Dec 16 '15 at 9:58
• @EricS. It would be very nice if you could do that... ; ) – emcor Dec 16 '15 at 10:00
• Just be patient. Take lots of paper and multiply top and bottom by common terms, first by $(1+r)^T$. That will cause $\frac{1-c}{(1+r)^{T+1}}$ to reduce to $\frac{1-c}{1+r}$. Then by $r$. Eventually the whole thing should become manageable. Takes patience though. – fleablood Jan 7 '16 at 17:44

$$\frac{\frac{c}{r^2}+\frac{1-c}{(1+r)^{T+1}}}{\frac{c}{r}+\frac{1-c}{(1+r)^T}-1}= \frac{r^2}{r^2}\frac{\frac{c}{r^2}+\frac{1-c}{(1+r)^{T+1}}}{\frac{c}{r}+\frac{1-c}{(1+r)^T}-1}= \frac{{c}+r^2\frac{1-c}{(1+r)^{T+1}}}{r{c}+r^2\frac{1-c}{(1+r)^T}-r^2}=\\ \frac{(1+r)^{T+1}}{(1+r)^{T+1}}\frac{{c}+r^2\frac{1-c}{(1+r)^{T+1}}}{r{c}+r^2\frac{1-c}{(1+r)^T}-r^2}= \frac{{c}(1+r)^{T+1}+r^2({1-c})}{r{c}(1+r)^{T+1}+r^2({1-c}){(1+r)}-r^2(1+r)^{T+1}}= \frac{{c}(1+r)^{T+1}+r^2({1-c})}{r\left({c}-r\right)(1+r)^{T+1}+r^2({1-c}){(1+r)}}= \frac{{c}(1+r)^{T+1}+r^2({1-c})}{\left(({c}-r)(1+r)^{T}+r({1-c})\right){(1+r)r}}$$

Would this help?

• Thank you. Can you please also show the steps how to get there?
– emcor Dec 16 '15 at 10:03 $X=\frac{\frac{c}{r^2}+\frac{1-c}{(1+r)^{T+1}}}{\frac{c}{r}+\frac{1-c}{(1+r)^{T}}-1}$ $====================$ $X=\frac{\frac{c}{r^2}+\frac{1-c}{(1+r)^{T+1}}}{\frac{c-r}{r}+\frac{1-c}{(1+r)^{T}}}$ $====================$ $X=\frac{\frac{c(1+r)^{T+1}+(1-c)r^2}{r^2(1+r)^{T+1}}}{\frac{(c-r)(1+r)^{T}+(1-c)r}{r(1+r)^{T}}}$ $====================$ $X=\frac{\frac{c(1+r)^{T+1}+(1-c)r^2}{r(1+r)}}{\frac{(c-r)(1+r)^{T}+(1-c)r}1}$ $====================$ $X=\frac{(c(1+r)^{T+1}+(1-c)r^2)1}{r(1+r)((c-r)(1+r)^{T}+(1-c)r)}$ $====================$ $X=\frac{c(1+r)^{T+1}+(1-c)r^2}{r(1+r)((c-r)(1+r)^{T}+(1-c)r)}$
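The closed form at the end of the derivation can be spot-checked numerically against the original expression; the sample values of $c$, $r$, $T$ below are arbitrary (chosen only so no denominator vanishes):

```python
# Check that the original expression for X equals the simplified form
#   X = (c*(1+r)**(T+1) + (1-c)*r**2)
#       / (r*(1+r) * ((c-r)*(1+r)**T + (1-c)*r))
def lhs(c, r, T):
    # Original expression
    return (c / r**2 + (1 - c) / (1 + r) ** (T + 1)) / (
        c / r + (1 - c) / (1 + r) ** T - 1
    )

def rhs(c, r, T):
    # Final simplified form from the derivation above
    return (c * (1 + r) ** (T + 1) + (1 - c) * r**2) / (
        r * (1 + r) * ((c - r) * (1 + r) ** T + (1 - c) * r)
    )

for c, r, T in [(0.3, 0.05, 10), (0.9, 0.2, 5), (0.1, 1.5, 3)]:
    assert abs(lhs(c, r, T) - rhs(c, r, T)) < 1e-9 * abs(lhs(c, r, T))
print("simplification checked")
```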
https://zenodo.org/record/2600837/export/csl
Journal article, Open Access

1 × 4 MMI Visible Light Wavelength Demultiplexer Based on GaN Slot Waveguide Structures

Tamir Shoresh; Nadav Katanov; Dror Malka

Citation Style Language JSON Export

{
  "publisher": "Zenodo",
  "DOI": "10.5281/zenodo.2600837",
  "title": "1 \u00d7 4 MMI Visible Light Wavelength Demultiplexer Based on GaN Slot Waveguide Structures",
  "issued": {"date-parts": [[2018, 7, 21]]},
  "abstract": "High transmission losses are the key problem that limits the performances of visible light communication (VLC) system that work on wavelength division multiplexing (WDM) technology. In order to overcome this problem, we propose a novel design for a 1\u00d74 optical demultiplexer based on the multimode interference (MMI) in slot-waveguide structures that operates at 547 nm, 559 nm, 566 nm and 584 nm. Gallium-nitride (GaN) and silicon-oxide (SiO2) were found to be excellent materials for the slot waveguide structures. Simulation results show that the proposed device can transmit 4-channels that work in the visible light range with a low transmission loss of 0.983-1.423 dB, crosstalk of 13.8-18.3 dB and bandwidth of 1.8-3.2 nm. Thus, this device can be very useful in visible light networking system that works on WDM technology.",
  "author": [
    {"family": "Tamir Shoresh"},
    {"family": "Nadav Katanov"},
    {"family": "Dror Malka"}
  ],
  "version": "pre-print",
  "type": "article-journal",
  "id": "2600837"
}
https://codegolf.stackexchange.com/questions/132877/randomly-select-a-character-plinko-style/132964
# Randomly select a character, plinko-style

Let's see how good your language of choice is at selective randomness. Given 4 characters, A, B, C, and D, or a string of 4 characters ABCD as input, output one of the characters with the following probabilities:

• A should have a 1/8 (12.5%) chance to be chosen
• B should have a 3/8 (37.5%) chance to be chosen
• C should have a 2/8 (25%) chance to be chosen
• D should have a 2/8 (25%) chance to be chosen

This is in-line with the following Plinko machine layout, where a ball dropped at the top goes left or right with equal probability at each ^ peg, and the \ / funnel feeds the final peg above C and D:

   ^
  ^ ^
 ^ ^ ^
A B \ /
   ^
  C D

Your answer must make a genuine attempt at respecting the probabilities described. A proper explanation of how probabilities are computed in your answer (and why they respect the specs, disregarding pseudo-randomness and big-numbers problems) is sufficient.

## Scoring

This is code-golf, so fewest bytes in each language wins!

• Can we assume the built-in random function in our language of choice is random? – Mr. Xcoder Jul 17 '17 at 15:14
• @Mr.Xcoder within reason, yes. – Skidsdev Jul 17 '17 at 15:21
• So, for clarity, the input is always exactly 4 characters, and it should assign probabilities to each in accordance with exactly the provided Plinko layout? Generating Plinko layouts or simulating them is entirely unnecessary as long as the probabilities are correct within the accuracy provided by your random source? – Kamil Drakari Jul 17 '17 at 16:15
• @KamilDrakari correct. – Skidsdev Jul 17 '17 at 16:27
• Not very useful due to its length, but I found out that the expression ceil(abs(i - 6)/2.0) will map an index from 0-7 to an index from 0-3 with the appropriate distribution (0 111 22 33) for this challenge... – Socratic Phoenix Jul 17 '17 at 17:18

# Lean Mean Bean Machine, 55 43 42 bytes

-13 bytes thanks to Alex Varga

O i ^ ^ ^ \ ^ ^ i / U ii ^ i U U

Hope you guys don't mind me answering my own question after only 2 hours, but I highly doubt anybody else was planning on posting an answer in LMBM.
This literally just reflects the Plinko layout shown in the OP, flipped horizontally to cut down on unnecessary whitespace.

# Jelly, 6 bytes

Ḋṁ7;ḢX

A monadic link taking a list of four characters and returning one with the probability distribution described.

Try it online!

### How?

Ḋṁ7;ḢX - Link: list of characters, s   e.g. ABCD
Ḋ      - dequeue s                          BCD
 ṁ7    - mould like 7 (implicit range)      BCDBCDB
   ;Ḣ  - concatenate the head of s          BCDBCDBA
     X - random choice

Note that the above has 1*A, 3*B, 2*C, and 2*D

• Clever trick with the ṁ! – Erik the Outgolfer Jul 17 '17 at 15:30

# Python, 50 bytes

lambda x:choice(x[:2]+x[1:]*2)
from random import*

An unnamed function taking and returning strings (or lists of characters).

Try it online!

### How?

random.choice chooses a random element from a list, so the function forms a string with the correct distribution; that is, given "ABCD", "ABCD"[:2] = "AB" plus "ABCD"[1:]*2 = "BCD"*2 = "BCDBCD", which is "ABBCDBCD".

• I found a way to golf my solution and then realised it's identical to yours, just in reverse order :/ – Mr. Xcoder Jul 17 '17 at 15:44

# Cubix, 39 24 22 21 19 bytes

.<.^iD>D|@oioi.\i;U

View in the online interpreter! This maps out to the following cube net:

    . <
    . ^
i D > D | @ o i
o i . \ i ; U .
    . .
    . .

# Random Distribution Implementation Explanation

Cubix is a language in which an instruction pointer travels around the faces of a cube, executing the commands it encounters. The only form of randomness is the command D, which sends the IP in a random direction: an equal chance of 1/4 each way. However, we can use this to generate the correct weighted probabilities: by using D twice. The first D has a 1/4 chance of heading to a second D. This second D, however, has two directions blocked off with arrows (> D <) which send the instruction pointer back to the D to choose another direction. This means there are only two possible directions from there, each with a 1/8 overall chance of happening.
This can be used to generate the correct character, as shown in the diagram below. (Note that, in the actual code, the arrow on the right is replaced with a mirror, |)

# Code Explanation

        . <
        . ^
IP> i D > D | @ o i
    o i . \ i ; U .
        . .
        . .

The instruction pointer starts on the right, at the character i, facing right. It executes this i, taking the first character as input, and then moves onto the D, beginning the random process shown above.

• Char A: In the case that the first D sends us east, and the second south, we need to print character A. This is already on the stack from the first i. The following is executed:
  • \ - Reflect the IP so it heads east
  • i; - Take an input, then pop it again (no-op)
  • U - Perform a U-turn, turning the IP left twice
  • o - Output the TOS, character A
  • @ - Terminate the program
• Char B: If either the first or second D heads north, we need to generate character B, which will be the next input. Both paths execute the following commands:
  • ^ - Head north
  • < - Head west, wrapping round to...
  • i - Take another input, character B
  • o - Output the TOS, character B
  • ; - Pop the TOS
  • @ - Terminate the program
• Char C: If the first D sends us west, the following is executed:
  • i - Take another input, character B
  • i - Take another input, character C
  • o - Output TOS, character C
  • @ - Terminate the program
• Char D: If the first D sends us south, the following is executed:
  • i - Take another input, character B
  • .. - Two no-ops
  • i - Take another input, character C
  • | - This mirror reflects east-west, but the IP is heading north, so we pass through it.
  • ^ - This joins up with the path taken for character B. However, because we have taken two inputs already, the fourth character (character D) will end up being printed.

• This is spectacular! I can't believe you managed to fit the proper probabilities and all four paths on a size-2 cube. I wonder if I can subscribe to a feed of Cubix answers so I don't miss them...
– ETHproductions Mar 13 '18 at 1:33
• @ETHproductions Thank you, I'm sure there's a way to cut off a byte or two, but I'm also quite proud of this answer :) – FlipTack Mar 13 '18 at 18:41

# R, 31 bytes

sample(scan(,''),1,,c(1,3,2,2))

Reads the characters from stdin separated by spaces. sample draws random samples from its first input in the quantity of the second input (so 1), (optional replacement argument), with weights given by the last argument.

Try it online! Try it n times!

For the latter code, I sample n times (set n in the header) with replacement set to True (it's false by default), tabulate the results, and divide by n to see the relative probabilities of the inputs.

# PHP, 28 bytes

<?=$argn[5551>>2*rand(0,7)];

Run as pipe with -nR. 01112233 in base-4 is 5551 in decimal ...

• 108 possible values with the same length ... 7030 is among my personal favorites. – Titus Jul 17 '17 at 17:15

# Java 8, 53 44 bytes

s->s[-~Math.abs((int)(Math.random()*8)-6)/2]

This is a Function<char[], Character>. Try it online! (this test program runs the above function 1,000,000 times and outputs the experimental probabilities of choosing A, B, C, and D).

The general idea here is to find some way to map 0-7 to 0-3, such that 0 appears 1/8 times, 1 appears 3/8 times, 2 appears 2/8 times, and 3 appears 2/8 times. round(abs(k - 6) / 2.0) works for this, where k is a random integer in the range [0,8). This results in the following mapping:

k -> k - 6 -> abs(k-6) -> abs(k-6)/2 -> round(abs(k-6)/2)
0 ->   -6  ->    6     ->    3       ->   3
1 ->   -5  ->    5     ->    2.5     ->   3
2 ->   -4  ->    4     ->    2       ->   2
3 ->   -3  ->    3     ->    1.5     ->   2
4 ->   -2  ->    2     ->    1       ->   1
5 ->   -1  ->    1     ->    0.5     ->   1
6 ->    0  ->    0     ->    0       ->   0
7 ->    1  ->    1     ->    0.5     ->   1

Which, as you can see, results in the indices 0 111 22 33, which produces the desired probabilities of 1/8, 3/8, 2/8 and 2/8.

But wait! How in the world does -~Math.abs(k-6)/2 achieve the same result (again, where k is a random integer in the range [0,8))? It's pretty simple actually...
(x+1)/2 (integer division) is the same thing as round(x/2), and x + 1 is the same thing as -~x. Although x+1 and -~x are the same length, in the above function it is better to use -~x, since -~ takes precedence and thus does not require parentheses.

• I know it's been a while, but you can golf two bytes by changing the placement of the integer-cast (since Math.abs also accepts doubles as parameter): s->s[-~(int)Math.abs(Math.random()*8-6)/2] (42 bytes). – Kevin Cruijssen Mar 14 '18 at 14:39

# APL, 14 bytes

(?8)⊃1 3 2 2\⊢

Input as a string.

How?

1 3 2 2\⊢ - repeat each letter x times ('ABCD' → 'ABBBCCDD')
⊃         - take the element at index ..
(?8)      - random 1-8

• Would you mind reviewing my J answer and letting me know if it can be improved? – Jonah Jul 18 '17 at 21:51
• 17 bytes – Adám Jul 18 '17 at 21:56
• @Uriel Try it online! – Adám Jul 18 '17 at 21:59
• @Uriel There is no such encoding. Either you go full UTF-8, or you count every character as two bytes (UTF-16), or you add 5 bytes for ⎕U2378. – Adám Jul 18 '17 at 22:02
• @Adám oh, I see. then get Dyalog to replace some of these unneeded European accented letters for the new symbols, to save bytes! ;) – Uriel Jul 18 '17 at 22:04

# Charcoal, 11 bytes

‽⟦εεζζηηηθ⟧

Try it online! Link is to verbose version of code, although you hardly need it; ‽ picks a random element, ⟦⟧ creates a list, and the variables are those that get the appropriate input letters (in reverse order because I felt like it).

# Javascript 35 bytes

Takes a string ABCD as input, outputs A 1/8th of the time, B 3/8ths of the time, C 1/4th of the time, and D 1/4th of the time.
x=>x[5551>>2*~~(Math.random()*8)&3]

## Explanation

x=>x[                 // return character at index
  5551                // 5551 is 0001010110101111 in binary
                      // each pair of digits is a binary number 0-3
                      // represented x times
                      // where x/8 is the probability of selecting
                      // the character at the index
  >>                  // bitshift right by
  2 *                 // two times
  ~~(                 // double-bitwise negate (convert to int, then
                      // bitwise negate twice to get the floor for
                      // positive numbers)
    Math.random() * 8 // select a random number from [0, 8)
  )                   // total bitshift is a multiple of 2 from [0, 14]
  &3                  // bitwise and with 3 (11 in binary)
                      // to select a number from [0, 3]
]

# 05AB1E, 5 bytes

¦Ćì.R

Try it online!

### Explanation

¦Ćì.R
      Argument s                      "ABCD"
¦     Push s[1:]                      "BCD"
Ć     Enclose: Pop a, Push a + a[0]   "BCDB"
ì     Pop a, Concatenate a and s      "ABCDBCDB"
.R    Random pick

# ><>, 25 22 19 bytes

i_ixio;o ox</; ;$o

Try it online!, or watch it at the fish playground!

A brief overview of ><>: it's a 2D language with a fish that swims through the code, executing instructions as it goes. If it reaches the edge of the code, it wraps to the other side. The fish starts in the top left corner, moving right. Randomness is tricky in ><>: the only random instruction is x, which sets the fish's direction randomly out of up, down, left and right (with equal probability).

At the start of the program, the fish reads in two characters of input with i_i (each i reads a character from STDIN to the stack, and _ is a horizontal mirror, which the fish ignores now). It then reaches an x. If the x sends the fish rightwards, it reads in one more character (the third), prints it with o and halts with ;. The left direction is similar: the fish reads two more characters (so we're up to the fourth), wraps around to the right, prints the fourth character and halts. If the fish swims up, it wraps and prints the second character, before being reflected right by / and halting. If it swims down, it gets reflected left by the / and hits another x.
This time, two directions just send the fish back to the x (right with an arrow, <, and up with a mirror, _). The fish therefore has 1/2 chance of escaping this x in each of the other two directions. Leftwards prints the top character on the stack, which is the second one, but downwards first swaps the two elements on the stack with $, so this direction prints the first character.

In summary, the third and fourth characters are printed with probability 1/4 each; the first character has probability 1/2 x 1/4 = 1/8; and the second character has probability 1/4 + 1/2 x 1/4 = 3/8.

# Pyth, 8 7 bytes

O+@Q1t+

Uses the exact same algorithm as in my Python answer.

Try it here!

# Pyth, 10 8 bytes

O+<Q2*2t

Uses the exact same algorithm as in Jonathan Allan's Python answer.

### Explanation

• O - Takes a random element of the String made by appending (with +):
  • <Q2 - The first two characters of the String.
  • *2t - Double the full String (*2) except for the first character (t).

Applying this algorithm for ABCD:
• <Q2 takes AB.
• *2t takes BCD and doubles it: BCDBCD.
• + joins the two Strings: ABBCDBCD.
• O takes a random character.

-2 thanks to Leaky Nun (second solution)
-1 thanks to mnemonic (first solution)

• >Q1 becomes tQ, which becomes t. – Leaky Nun Jul 17 '17 at 16:00
• You can save a byte on the second solution by replacing *2 with + and using the implicit input twice. – Mnemonic Mar 13 '18 at 18:51
• @Mnemonic Thanks, I think I haven't used it because I thought of y instead, which doesn't work for strings... – Mr. Xcoder Mar 13 '18 at 19:11

# Jelly, 8 bytes

122b4⁸xX

Try it online! Remove the X to see "ABBBCCDD". The X chooses a random element.

# MATL, 12 10 bytes

l3HHvY"1Zr

Try it online! Or run it 1000 times (slightly modified code) and check the number of times each char appears.

### Explanation

l3HH  % Push 1, 3, 2, 2
v     % Concatenate all stack contents into a column vector: [1; 3; 2; 2]
Y"    % Implicit input.
      % Run-length decode (repeat chars specified number of times)
1Zr   % Pick an entry with uniform probability. Implicit display

Changes in modified code: 1000:"Gl3HH4$vY"1Zr]vSY'

• 1000:"...] is a loop to repeat 1000 times.
• G makes sure the input is pushed at the beginning of each iteration.
• Results are accumulated on the stack across iterations. So v needs to be replaced by 4$v to concatenate only the top 4 numbers.
• At the end of the loop, v concatenates the 1000 results into a vector, S sorts it, and Y' run-length encodes it. This gives the four letters and the number of times they have appeared.

• Yep, looks to be fixed now – Skidsdev Jul 17 '17 at 15:40
• @Mayube Thanks for noticing! – Luis Mendo Jul 17 '17 at 15:40

# C (gcc), 50 49 bytes

i[8]={1,1,1,2,2,3,3};f(char*m){m=m[i[rand()%8]];}

Try it online!

• ABCD is example input, your code should take 4 characters (or a string of length 4) as input – Skidsdev Jul 17 '17 at 15:54

# Ruby, 34 33 29 27 bytes

Saved 2 bytes thanks to @Value Ink

Input as four characters

a=$**2
a[0]=a[1]
p a.sample

construct an array [B,B,C,D,A,B,C,D] and sample it.

try it online! try it n times! (I converted it to a function to repeat it more easily, but the algorithm is the same)

• $* is an alias for ARGV. – Value Ink Jul 17 '17 at 20:40

# C# (.NET Core), 76 55 bytes

s=>(s+s[1]+s[1]+s[2]+s[3])[new System.Random().Next(8)]

Try it online!

My first answer written directly on TIO using my mobile phone. Level up!

Explanation: if the original string is "ABCD", the function creates the string "ABCDBBCD" and takes a random element from it.

• Your program should take the characters as input from STDIN – Skidsdev Jul 17 '17 at 19:00
• @Mayube fixed, though it may still be golfed... – Charlie Jul 17 '17 at 20:15

# Pyth, 7 bytes

@z|O8 1

Test suite

O8 generates a random number from 0 to 7. | ... 1 applies a logical or with 1, converting the 0 to a 1 and leaving everything else the same.
The number at this stage is 1 2/8ths of the time, and 2, 3, 4, 5, 6 or 7 1/8th of the time each. @z indexes into the input string at that position. The indexing is performed modulo the length of the string, so 4 indexes at position 0, 5 at position 1, and so on. The probabilities are:

• Position 0: Random number 4. 1/8 of the time.
• Position 1: Random number 0, 1 or 5. 3/8 of the time.
• Position 2: Random number 2 or 6. 2/8 of the time.
• Position 3: Random number 3 or 7. 2/8 of the time.

# Javascript, 31 30 bytes / 23 bytes

Seeing asgallant's earlier Javascript answer got me to thinking about JS. As he said:

Takes a string ABCD as input, outputs A 1/8th of the time, B 3/8ths of the time, C 1/4th of the time, and D 1/4th of the time.

Mine is:

x=>(x+x)[Math.random()*8&7||1]

Explanation:

x=>(x+x)[         // return character at index of doubled string ('ABCDABCD')
  Math.random()*8 // select a random number from [0, 8)
  &7              // bitwise-and to force to integer (0 to 7)
  ||1             // use it except if 0, then use 1 instead
]

From Math.random()*8&7 it breaks down as follows:

A from 4     = 12.5% (1/8)
B from 0,1,5 = 37.5% (3/8)
C from 2,6   = 25% (1/4)
D from 3,7   = 25% (1/4)

## Version 2, 23 bytes

But then thanks to Arnauld, who posted after me, when he said:

If a time-dependent formula is allowed, we can just do:

which, if it is indeed allowed, led me to:

x=>(x+x)[new Date%8||1]

in which new Date%8 uses the same break-down table as above. And %8 could also be &7; take your pick. Thanks again, Arnauld.
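As a sanity check, the ||1 index trick used in both versions above can be verified by exact enumeration in Python (a quick illustrative sketch, separate from the answers themselves):

```python
from collections import Counter

s = "ABCD"
# Enumerate all 8 equally likely raw indices 0..7, apply the
# "replace 0 with 1" step (k or 1), and index the doubled string.
counts = Counter((s + s)[k or 1] for k in range(8))
print(counts)  # Counter({'B': 3, 'C': 2, 'D': 2, 'A': 1})
```

Each of the 8 raw indices is equally likely, so the counts map directly to the required 1/8, 3/8, 2/8, 2/8 weights.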
# ngn/apl, 10 bytes

⎕a[⌈/?2 4]

?2 4 chooses randomly a pair of numbers - the first among 0 1 and the second among 0 1 2 3
⌈/ is "max reduce" - find the larger number
⎕a is the uppercase alphabet
[ ] indexing

note the chart for max(a,b) when a∊{0,1} and b∊{0,1,2,3}:

    ┏━━━┯━━━┯━━━┯━━━┓
    ┃b=0│b=1│b=2│b=3┃
┏━━━╋━━━┿━━━┿━━━┿━━━┫
┃a=0┃ 0 │ 1 │ 2 │ 3 ┃
┠───╂───┼───┼───┼───┨
┃a=1┃ 1 │ 1 │ 2 │ 3 ┃
┗━━━┻━━━┷━━━┷━━━┷━━━┛

if a and b are chosen randomly and independently, we can substitute 0123=ABCD to get the desired probability distribution

# 05AB1E, 8 bytes

ìD1è0ǝ.R

Try it online!

      # Implicit input                            | [A,B,C,D]
ì     # Prepend the input to itself               | [A,B,C,D,A,B,C,D]
D1è   # Get the second character                  | [A,B,C,D,A,B,C,D], B
0ǝ    # Replace the first character with this one | [B,B,C,D,A,B,C,D]
.R    # Pick a random character from this array   | D

# Python 3, 64 55 51 bytes

-9 bytes thanks to @ovs

lambda s:choice((s*2)[1:]+s[1])
from random import*

Try it online!

## Explanation

random.choice() gets a random character of the String, while (s*2)[1:]+s[1] creates BCDABCDB for an input of ABCD, which has 1/8 As, 2/8 Cs, 2/8 Ds and 3/8 Bs.

• Use random.choice for 55 bytes: lambda s:choice((s[0]+s[1:]*3)[:8]) – ovs Jul 17 '17 at 15:30
• @ovs Found a shorter way ^. Thanks for the choice() though. – Mr. Xcoder Jul 17 '17 at 15:44

# QBIC, 27 bytes

?_s;+;+B+B+;+C+;+D,_r1,8|,1

## Explanation

?        PRINT
_s       A substring of
;+       A plus
;+B+B+   3 instances of B plus
;+C+     2 instances of C plus
;+D      2 instances of D plus
,_r1,8|  from position x randomly chosen between 1 and 8
,1       running for 1 character

# ><>, 56 bytes

v ! " D C B A ; " o ! @ ! ^< x!^xv! @ v< @ o o ; ; o

Try it online!

# 05AB1E, 6 bytes

«À¨Ć.R

Try it online!

Explanation

Works for both lists and strings.

«   # concatenate input with itself
À   # rotate left
¨   # remove the last character/element
Ć   # enclose, append the head
.R  # pick a character/element at random

# Chip, 60 bytes

)//Z
)/\Z
)\/^.
)\x/Z
)\\\+t
|???~S
|z*
{'AabBCcdDEefFGghH

Try it online!
The three ?'s each produce a random bit. On the first cycle, these bits are run through the switches above (/'s and \'s) to determine which value we are going to output from this table:

000 a
01_ b
0_1 b
10_ c
11_ d

(where _ can be either 0 or 1). We then walk along the input as necessary, printing and terminating when the correct value is reached.

The big alphabetic blob at the end is copied wholesale from the cat program; this solution simply suppresses output and terminates to get the intended effect.

# Ruby, 32 bytes

Pretty straightforward..?

->s{s[[0,1,1,1,2,2,3,3].sample]}

Try it online!

# Applesoft, 29 oops, 32 bytes

A little "retrocomputing" example. Bear with me, I'm brand new at this. I gather that what is designated as the "input" need not be byte-counted itself. As stated in the OP, the input would be given as "ABCD". (I didn't initially realize that I needed to specify input being obtained, which added 4 bytes, while I golfed the rest down a byte.)

INPUTI$:X=RND(1)*4:PRINTMID$(I$,(X<.5)+X+1,1)

The terms INPUT, RND, PRINT and MID$ are each encoded internally as single-byte tokens.

First, X is assigned a random value in the range 0 ≤ X < 4. This is used to choose one of the characters from I$, according to (X<.5)+X+1. The character-position value is taken as the truncated evaluation of the expression. X<.5 adds 1 if X was less than .5, otherwise adds 0. Results from X break down as follows:

A from .5 ≤ X < 1          = 12.5%
B from X < .5 or 1 ≤ X < 2 = 37.5%
C from 2 ≤ X < 3           = 25%
D from 3 ≤ X < 4           = 25%

• Welcome to Programming Puzzles and Code Golf! We require submissions here to be golfed as much as possible at least trivially, so that includes removing unnecessary whitespace (I apologize if the whitespace here is necessary). Additionally, I'm not sure of the standards about Applesoft, but I don't believe you are allowed to assume that those operators are single-byte tokens unless the internal representation is a single byte.
Also, you may not assume that the input is stored in a variable; rather, you must actually take it as input, a command line argument, or a function parameter. Thanks! – HyperNeutrino Jul 17 '17 at 23:07
• @HyperNeutrino None of the whitespace was necessary, although space after "INPUT" and "PRINT" would've improved readability. It did happen that in this antique cybertongue spaces were traditionally displayed in the places I had them. For the tokens I mentioned it is indeed true that "internal representation is a single byte". Meanwhile, I golfed the code I had down a byte. – Alan Rat Jul 17 '17 at 23:55

# Common Lisp, 198 bytes

(setf *random-state*(make-random-state t))(defun f(L)(setf n(random 8))(cond((< n 1)(char L 0))((and(>= n 1)(< n 4))(char L 1))((and(>= n 4)(< n 6))(char L 2))((>= n 6)(char L 3))))(princ(f "ABCD"))

Try it online!

(setf *random-state* (make-random-state t))
(defun f (L)
  (setf n (random 8))
  (cond ((< n 1) (char L 0))
        ((and (>= n 1) (< n 4)) (char L 1))
        ((and (>= n 4) (< n 6)) (char L 2))
        ((>= n 6) (char L 3))))
(princ (f "abcd"))
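A recurring idea across the answers above is to build a fixed 8-element pool in which A, B, C, D occur 1, 3, 2, 2 times and pick uniformly from it. A short Python sketch making that explicit (the helper name plinko_pick is mine, purely for illustration):

```python
import random
from collections import Counter

def plinko_pick(s, rng=random):
    # Pool with A once, B three times, C and D twice each:
    # "AB" + "BCD"*2 -> "ABBCDBCD"
    pool = s[:2] + s[1:] * 2
    return rng.choice(pool)

# Exact check of the weights: every pool element is equally likely.
print(Counter("ABCD"[:2] + "ABCD"[1:] * 2))
# Counter({'B': 3, 'C': 2, 'D': 2, 'A': 1})
```

Since each of the 8 pool slots has probability 1/8, the letter counts 1, 3, 2, 2 give exactly the 1/8, 3/8, 2/8, 2/8 distribution from the spec.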
https://yetanothermathprogrammingconsultant.blogspot.com/2019/12/elementwise-vs-matrix-multiplication.html
## Saturday, December 7, 2019

### Elementwise vs matrix multiplication

#### Introduction

In this post I want to demonstrate some interesting issues in how CVXR and CVXPY deal with elementwise vs matrix multiplication and compare that to the host languages (R and Python, respectively). CVXR and CVXPY deviate a bit from what their host languages do. When implementing optimization models with these tools, it is important to be aware of these details.

#### Elementwise and matrix multiplication

There are two often-used methods to perform the multiplication of matrices (and vectors). The first is simply elementwise multiplication: $c_{i,j} = a_{i,j} \cdot b_{i,j}$ In mathematics, this is sometimes referred to as the Hadamard product. The notation is typically: $C = A \circ B$ but sometimes we see: $C = A \odot B$ This product only works when $$A$$ and $$B$$ have the same shape. I.e. $\begin{matrix} C&=&A&\circ&B\\ (m \times n) &&(m \times n)&&(m \times n)\end{matrix}$

The standard matrix multiplication $$C=A\cdot B$$ or $c_{i,j} = \sum_k a_{i,k} b_{k,j}$ has a different rule for conformance: $\begin{matrix} C&=&A&\cdot&B\\ (m \times n) &&(m \times k)&&(k \times n)\end{matrix}$ Most of the time, the dot operator is dropped and we write $$C = A B$$. In optimization modeling both forms are used a lot.

#### R, CVXR

R has two operators for multiplication:

• * for elementwise multiplication
• %*% for matrix multiplication

A vector is considered a column vector (i.e. an $$n$$-vector is like an $$n \times 1$$ matrix).
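As a quick illustration of the two products and their conformance rules, here is a small plain-Python sketch (illustrative only; it mirrors the R examples that follow):

```python
def hadamard(A, B):
    # Elementwise (Hadamard) product: both operands must be m x n.
    assert len(A) == len(B) and all(len(a) == len(b) for a, b in zip(A, B))
    return [[x * y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def matmul(A, B):
    # Standard product: (m x k) . (k x n) -> (m x n).
    k = len(B)
    assert all(len(row) == k for row in A), "inner dimensions must agree"
    return [[sum(A[i][p] * B[p][j] for p in range(k))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 3, 5], [2, 4, 6]]          # 2 x 3
print(hadamard(A, A))               # [[1, 9, 25], [4, 16, 36]]
print(matmul(A, [[1], [2], [3]]))   # [[22], [28]], i.e. A times a 3 x 1 vector
```

The two printed results correspond to A * A and A %*% x in the R session below.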
There is one special thing in R, as shown here:

> m <- 2
> n <- 3
> A <- matrix(c(1,2,3,4,5,6),m,n)
> A
     [,1] [,2] [,3]
[1,]    1    3    5
[2,]    2    4    6
> x <- c(1,2,3)
> #
> # matrix multiplication
> #
> A %*% x
     [,1]
[1,]   22
[2,]   28
> #
> # elementwise multiplication
> #
> A * A
     [,1] [,2] [,3]
[1,]    1    9   25
[2,]    4   16   36
> #
> # but this also works
> #
> A * x
     [,1] [,2] [,3]
[1,]    1    9   10
[2,]    4    4   18

The last multiplication is surprising, as $$A$$ is a $$2 \times 3$$ matrix and $$x$$ is different in shape. Well, R may extend and recycle vectors to make them as large as needed. In this case $$x$$ is duplicated and then considered as a $$2 \times 3$$ matrix. More or less like:

> x
[1] 1 2 3
> matrix(x,2,3)
     [,1] [,2] [,3]
[1,]    1    3    2
[2,]    2    1    3

The modeling tool CVXR follows R notation and implements both * and %*% (for elementwise and matrix multiplication). However, CVXR does not implement the extending and recycling of vectors that are too small.

#### Concept: recycling

Just to emphasize the concept of recycling: if an operation requires two vectors of the same length, R may make the shorter vector longer by recycling (duplicating) it. Here is an example:

> a <- 1:10
> b <- 1:2
> c <- a + b
> c
 [1]  2  4  4  6  6  8  8 10 10 12

In this example the vector a has elements 1 through 10. The vector b is too short, so it is recycled. When added to a, b is functionally equal to rep(c(1,2),5). When multiples of b do not exactly fit a, we effectively have a fractional duplication number. E.g. when we use b <- 1:3, we get a message:

Warning message:
In a + b : longer object length is not a multiple of shorter object length

This recycling trick is somewhat unique to R (I don't know about other languages doing this).
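For illustration, R's recycling behavior can be emulated in plain Python with itertools.cycle — a sketch only, not something R, CVXR, or CVXPY provide out of the box:

```python
from itertools import cycle, islice

def recycle_add(a, b):
    # Emulate R's vector recycling: repeat the shorter vector
    # until it matches the longer one, then add elementwise.
    if len(b) > len(a):
        a, b = b, a
    return [x + y for x, y in zip(a, islice(cycle(b), len(a)))]

print(recycle_add(list(range(1, 11)), [1, 2]))
# [2, 4, 4, 6, 6, 8, 8, 10, 10, 12]  -- same as c <- a + b in the R session above
```

Unlike R, this sketch does not warn when the lengths are not exact multiples; it silently recycles a fractional number of times.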
#### Example

In [2] the product: $b_{i,j} = a_{i,j} \cdot x_i$ is implemented in R with the elementwise product:

> m <- 2
> n <- 3
> A <- matrix(c(1,2,3,4,5,6),m,n)
> A
     [,1] [,2] [,3]
[1,]    1    3    5
[2,]    2    4    6
> x <- c(1,2)
> B <- A*x
> B
     [,1] [,2] [,3]
[1,]    1    3    5
[2,]    4    8   12

This again uses recycling of $$x$$. CVXR does not do this automatically. We can see this here:

> library(CVXR)
> x <- Variable(m)
> B <- Variable(m,n)
> e <- rep(1,n)
> problem <- Problem(Minimize(0),
+                    list(x == c(1,2),
+                         B == A * x))
Error in sum_shapes(lapply(object@args, function(arg) { :
  Incompatible dimensions

Here $$x$$ is now a CVXR variable. As the vector $$x$$ is not recycled, we end up with two different shapes, and elementwise multiplication is refused. So how do we do something like this in CVXR? The recycling operation:

> x
[1] 1 2
> matrix(x,m,n)
     [,1] [,2] [,3]
[1,]    1    1    1
[2,]    2    2    2

can be expressed in matrix notation as: $x \cdot e^T$ where $$e$$ is a (column) vector of ones. This is sometimes called an outer product. I.e. we can write our assignment as $B = A \circ (x \cdot e^T)$

In a CVXR model this can look like:

> library(CVXR)
> x <- Variable(m)
> B <- Variable(m,n)
> e <- rep(1,n)
> problem <- Problem(Minimize(0),
+                    list(x == c(1,2),
+                         B == A * (x %*% t(e))))
> sol <- solve(problem)
> sol$status
[1] "optimal"
> sol$getValue(x)
     [,1]
[1,]    1
[2,]    2
> sol$getValue(B)
     [,1] [,2] [,3]
[1,]    1    3    5
[2,]    4    8   12

Note: the constraint B == A * (x %*% t(e)) has both a matrix multiplication and an elementwise multiplication. This is rather funky.

Conclusion: if your matrix operations rely on recycling, you will need to rework things a bit to have this work correctly in CVXR. CVXR does not do recycling.

#### Python, CVXPY

In the previous section we saw that there are subtle differences between R's and CVXR's elementwise multiplication semantics. Let's now look at Python and CVXPY.
Since Python 3.5 we have two multiplication operators:

• * for elementwise multiplication
• @ for matrix multiplication

CVXPY has different rules:

• *, @ and matmul for matrix multiplication
• multiply for elementwise multiplication

#### Example

import numpy as np
import cvxpy as cp

#
# In Python/numpy * indicates elementwise multiplication
#
A = np.array([[1,2],[3,4]])
B = np.array([[1,1],[2,2]])
C = A*B
print(A)
# output:
# [[ 1  2]
#  [ 3  4]]
print(C)
# output:
# [[ 1  2]
#  [ 6  8]]

#
# In CVXPY * indicates matrix multiplication
#
A = cp.Variable((2,2))
C = cp.Variable((2,2))
prob = cp.Problem(cp.Minimize(0),
                  [A == [[1,2],[3,4]],
                   C == A*B])
prob.solve(verbose=True)
print(A.value)
# output:
# [[1. 3.]
#  [2. 4.]]
print(C.value)
# [[ 7.  7.]
#  [10. 10.]]

Here we see some differences between Python/Numpy and CVXPY. First, the interpretation of Python lists (the values for A) is different. And secondly, the semantics of * are different. This may cause some confusion.

#### References

1. CVXR, Convex Optimization in R, https://cvxr.rbind.io/
2. CVXR Elementwise multiplication of matrix with vector, https://stackoverflow.com/questions/59224555/cvxr-elementwise-multiplication-of-matrix-with-vector
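As a cross-check of the outer-product workaround, here is a small pure-Python sketch (no NumPy or CVXPY needed) showing that $$x \cdot e^T$$ reproduces the recycled matrix, so the elementwise product gives the same $$B$$ as the CVXR model above:

```python
x = [1, 2]
e = [1, 1, 1]                               # vector of ones, length n = 3

# Outer product x . e^T as a list-of-rows matrix: row i is x[i] everywhere.
M = [[xi * ej for ej in e] for xi in x]     # [[1, 1, 1], [2, 2, 2]]

A = [[1, 3, 5], [2, 4, 6]]                  # the 2 x 3 matrix from the post

# Elementwise product A o (x . e^T): scales row i of A by x[i].
B = [[a * m for a, m in zip(ra, rm)] for ra, rm in zip(A, M)]
print(B)  # [[1, 3, 5], [4, 8, 12]]  -- matches sol$getValue(B) above
```

This is exactly why the outer product works as a recycling substitute: multiplying by $$x e^T$$ elementwise is the same as scaling each row $$i$$ by $$x_i$$.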
https://puzzling.stackexchange.com/questions/60887/you-cant-see-me-yet-im-there/61041
# You can't see me, yet I'm there

You can't see me, yet I'm there
Under blue or red, white or black

I decide if you will live
I can put you to sleep

I'm with you, as you can see
Whether home, on space or sea

There's no use running free
I will drag you back to me

Don't try poisoning me, it's pointless
You'll be rendering your life useless

I can harm you, I can kill thee
Yet you crave me and you seek me

# Who/what am I?

Hint 1: We've been through with each other
Hint 2: It was you who started the fire, I just kept it alive

• This is starting to seem really broad with all the potential answers.... – North Feb 26 '18 at 2:29
• I think I'll start adding hints to avoid the vagueness – HugoBDesigner Feb 26 '18 at 11:20
• Hint 2 implies the answer is Billy Joel :) – Josh Feb 26 '18 at 13:04
• I did write that hint with the song in mind, so you're not completely wrong ;) – HugoBDesigner Feb 26 '18 at 13:06
• I still have no idea (everything I've come up with has been suggested as answers already). Following until an answer is accepted which is hopefully satisfying. – Josh Feb 26 '18 at 13:38

Are you Air?

You can't see me, yet I'm there
Under blue or red, white or black

Under the sky: clear skies are blue, sunsets are red, cloudy skies are white, and nighttime is black.
(credit goes to @TheSpartan)

I decide if you will live
I can put you to sleep

Air is necessary for life; low air pressure or excessive carbon dioxide can make you drowsy.

I'm with you, as you can see
Whether home, on space or sea

It's used in tanks for sea and space exploration.

There's no use running free
I will drag you back to me

Running through it creates drag (air friction/resistance).

Don't try poisoning me, it's pointless
You'll be rendering your life useless

If you poison the air around you, you are putting yourself at risk too.

I can harm you, I can kill thee
Yet you crave me and you seek me

Gasses in air can be poisonous, even deadly, but some are necessary for life.

HINTS:

Hint 1: We've been through with each other - You go through air, and air goes through you (your lungs)
Hint 2: It was you who started the fire, I just kept it alive - The oxygen in air is necessary to keep it alive

• This is a good answer, but try to address the other parts of the riddle okay? – NL628 Feb 23 '18 at 5:19
• You seem to mix up gasses and air a fair bit. Air is a mixture of gasses, and a gas is not necessarily considered air – HugoBDesigner Feb 26 '18 at 19:28
• i think it works because air is always comprised of gasses, but i can be clearer – J Chase Feb 26 '18 at 19:42
• "Under blue or red, white or black" -> how about a cloudless sky (blue), sunset (red), cloudy sky (white), and the night sky (black)? – The Spartan Feb 26 '18 at 21:22
• I've edited the answer in order to accept it, as well as some formatting :) – HugoBDesigner Feb 27 '18 at 12:21

Conspiracy theory: The US Government

You can't see me, yet I'm there // Under blue or red, white or black

The voting states are either blue or red, the voters are mostly white or black.

I decide if you will live // I can put you to sleep

The government can decide if you live; it can put you to sleep with the death penalty.
I'm with you, as you can see // Whether home, on space or sea

It has a presence everywhere; federal regulations apply to things in your home, satellites in space, and its navy travels the seven seas.

Don't try poisoning me, it's pointless // You'll be rendering your life useless

Treason, maybe? Maybe, possibly, this refers to the idea that without government Anarchy and Chaos would reign.

I can harm you, I can kill thee // Yet you crave me and you seek me

Harm? Check. Kill? Check. Yet we need roads and military and schools and the FDA, etc. etc.

• Hah, very creative! Unfortunately, not the answer I'm looking for. – HugoBDesigner Feb 23 '18 at 12:19

Are you Gravity?

Gravity cannot be seen under normal light, a black light, and obviously not red or blue. Gravity can also put you to sleep, with G-force knocking pilots out when making tight turns. Gravity is everywhere, including home, in space or at sea. Gravity can almost always pull you back down, hence "what comes up must come down". Gravity isn't a living thing, meaning you can't kill it with poison. Gravity can harm you or even kill you if you fall from high enough, and the new space race means that lots of rich people are clamoring to get to other planets with their own gravity.

• Interesting and original answer! Welcome to Puzzling SE! – NL628 Feb 23 '18 at 5:18
• Interesting answer, but not the one I'm looking for :) – HugoBDesigner Feb 23 '18 at 11:33

Is it Oxygen?

Oxygen can't be seen, but you can make it appear in different colors by releasing colored fog from a fog machine (although that's not changing oxygen's color). You can't live without oxygen, and nitrous oxide (aka laughing gas) is a sedative used in dentistry. Oxygen is everywhere: breathable oxygen (O2), water (H2O), and oxygen compounds/isolated oxygen in space (though not very much).
You can't run from oxygen, you'll always try to breathe it back in, Try to Poison oxygen (say with carbon dioxide) you hurt yourself, and oxygen in different forms are harmful (also oxygen is fuel for fire) but under asphyxiation of any form, you try to breathe oxygen in. • i understand the red and blue part as it is the commonly used color to represent oxygen and nitrogen, but the black and white part is for hydrogen and carbon in this case, so maybe just air ? quite puzzling – Neil Feb 23 '18 at 1:29 • Techinically though there is no "air" in space or the sea.... – North Feb 23 '18 at 1:32 • I've updated the riddle to clear out the misconception about the colors – HugoBDesigner Feb 23 '18 at 2:29 • @North well you technically need air in space and in the sea... – NL628 Feb 23 '18 at 5:14 Are you a.. A Dream? You can't see me, yet I'm there Under blue or red, white or black [You don't see a dream, you dream it. LOL. But it will always be there. Under black or white skin, under red or blue veins, tucked neatly in your brain.] I decide if you will live I can put you to sleep [Dreams are what helps us sleep, keep us dreaming for it. They can also end lives, motivations once broken] I'm with you, as you can see Whether home, on space or sea [Dreams are always with you, wherever you go. They are a part of what is human.] There's no use running free I will drag you back to me [There will always be times one will turn back on a dream but everytime, there will be an opportunity that could present itself to go back to it and pursue it.] Don't try poisoning me, it's pointless You'll be rendering your life useless [Like in the 2nd stanza, if you kill a dream, it could very well end you (spiritually, mentally)] I can harm you, I can kill thee Yet you crave me and you seek me [Similar to 2nd and 5th stanze, broken dreams can end you. But we will always seek them.] 
• I'm pretty sure this is the correct answer :D – NL628 Feb 23 '18 at 6:02 • Very interesting approach, but not the answer I'm looking for :) – HugoBDesigner Feb 23 '18 at 11:35 • @HugoBDesigner Interesting. I am much looking forward to seeing the correct answer. But I think the colors explanation was very nice for this answer :D – NL628 Feb 23 '18 at 17:46 Are you You can't see me, yet I'm there Under blue or red, white or black It's always there, no matter what I decide if you will live I can put you to sleep Your subconscious makes a lot of your decisions, for example when you're tired. I'm with you, as you can see Whether home, on space or sea It doesn't matter where you are, you always have it There's no use running free I will drag you back to me Trying to ignore it won't work Don't try poisoning me, it's pointless You'll be rendering your life useless ???? I can harm you, I can kill thee Yet you crave me and you seek me ????? • Clever answer, but not the one I'm looking for :) – HugoBDesigner Feb 23 '18 at 11:37 This is a long shot, but I think the answer is Vision You can't see me, yet I'm there Under blue or red, white or black You can't see a vision, but you have it regarding your eyes color. I decide if you will live I can put you to sleep You can see the danger and run away from it. Thus you live. Close your eyes and you'll sleep. I'm with you, as you can see Whether home, on space or sea You always have your vision with you. As you can see ;). There's no use running free I will drag you back to me You can't run free without seeing. You'll have to open your eyes to get the vision of where you're going. Don't try poisoning me, it's pointless You'll be rendering your life useless Poison your eyes and you'll be blind. I am not so sure about useless though. I can harm you, I can kill thee Yet you crave me and you seek me Don't have much for this currently. 
• Interesting perspective, but not the answer I'm looking for :) – HugoBDesigner Feb 23 '18 at 19:29 • Oh well, it doesn't harm to try :p – Paul Karam Feb 23 '18 at 19:30 You can't see me, yet I'm there Under blue or red, white or black Mood is an intangible thing; the colors are reference to a mood ring I decide if you will live I can put you to sleep I'm with you, as you can see Whether home, on space or sea Your mood is an innate part of you There's no use running free I will drag you back to me Reference to being 'stuck in a mood' Don't try poisoning me, it's pointless You'll be rendering your life useless When you 'poison the mood' you make things miserable for yourself and everyone around you I can harm you, I can kill thee Yet you crave me and you seek me A persistent negative mood can be painful and even lead to suicide, but we are always striving to be in a good mood God You can't see me, yet I'm there Under blue or red, white or black God is always here, but we can see Him through His works. America (red and blue flag) was founded as one nation under God over all people. I decide if you will live I can put you to sleep Yep! I'm with you, as you can see Whether home, on space or sea God is always here. There's no use running free I will drag you back to me You cannot run from God. Don't try poisoning me, it's pointless You'll be rendering your life useless God is all-powerful. I can harm you, I can kill thee Yet you crave me and you seek me Yep, and we crave and seek to know the Lord. • Many atheist don't seek the Lord, nor do people of different beliefs besides Christianity. – North Feb 26 '18 at 1:05 • Also I believe God does not "drag" anybody to him... – North Feb 26 '18 at 1:07 • I may have the incorrect answer to the riddle, but I do believe people outside of Christianity seek God (i.e. Judaism, Islam, etc.). I also believe God can draw us to Him (John 6:44), drag might not be the right word, but I was just Trying to make it fit! 
:P – Crozier Feb 26 '18 at 2:08 • I don't know if trying to put Biblical stuff into something like this is appropriate. "Trying to make it fit" isn't a criteria in puzzling, and stretching riddles so it fits your solution is counter-intuitive when solving riddles (the solution should fit nicely. – North Feb 26 '18 at 2:28 • First thing that came to my mind was John Cena. – Manoj Kumar Jul 9 '18 at 1:31 Are you Drugs (Pills) You can't see me, yet I'm there You cant actually see drugs as they are inside the pills Under blue or red, white or black Pills can be many colours I decide if you will live They can kill you if you have too many I can put you to sleep They can also put you to sleep I'm with you, as you can see They are inside your body once ingested Whether home, on space or sea Home space or sea There's no use running free I will drag you back to me Don't try poisoning me, it's pointless You'll be rendering your life useless If you poison drugs they will just kill you I can harm you, I can kill thee Yet you crave me and you seek me They can kill you but they are addictive so you cannot stop taking them Are you Nitrogen? You can't see me, yet I'm there Under blue or red, white or black Nitrogen makes up 78% of the air we breathe. It is one of the primary gasses in reflecting light in the atmosphere - showing us a blue sky, or a red sky depening on angle of viewing. The Nitrogen is still there even when we can't see the sky, for example under white clouds, or a black night sky. I decide if you will live I can put you to sleep We need nitrogen to live. If we don't have nitrogen in our bodies, amino acids cant function properly. Muscles deteriorate. We get sick, then we will cease to be. Nitrous oxide is used as a general anaesthetic which is used to "put you to sleep" when undergoing surgery. I'm with you, as you can see Whether home, on space or sea Nitrogen is everywhere. Nitrogen is part of the cholorophyll in plants on our planet. 
Scientists have discovered clouds of nitrogen in space. Nitrogen is also in the sea due to the nitrification process. There's no use running free I will drag you back to me There is no escape from nitrogen. Hah. Decompression sickness can occur if you try to rise from a deep dive too quick. Thus if you ascend too fast, you need to go back down to decompress at the correct rate and let the gasses release from your body at the correct rate. This is pretty weak though. Don't try poisoning me, it's pointless You'll be rendering your life useless Grasping at straws now... Nitrogen Dioxide is harmful (NO2, Nitrogen "poisoned" with Oxygen). It can cause respiratory problems, which you know... can lead to death. It can also cause reproductive problems I think, which to some might render their life useless. I can harm you, I can kill thee Yet you crave me and you seek me Not enough Nitrogen can harm you, too much nitrogen can kill you. We need nitrogen, and seek it through the air we breathe. Alternative: Nitrous Oxide is a popular recreational drug, and incorrect use can result in death. But party goers want to get the short lived high and still seek N2O even when given the associated health risks. You could be Heart (Love) You can't see me, yet I'm there Under blue or red, white or black You cannot see your own heart, yet we all have one, despite color or ethnicity I decide if you will live I can put you to sleep The heart is responsible for sustaining your life. It can also put you to "sleep" in cases of heart attacks for example. Also, as the heart is the universal symbol of love, "sleep" could easily be translated as the hypnotic feeling of being in love, falling under the spell of a significant other. I'm with you, as you can see Whether home, on space or sea Your heart is always with you, and you always keep thinking of your loved ones, whether you are by them, or traveling without. 
There's no use running free I will drag you back to me Even if you get heartbroken, most of the times you still try to get back with that person. You can't help it, it just happens, defying logic. Don't try poisoning me, it's pointless You'll be rendering your life useless Literally poisoning your heart would end your life. Metaphorically speaking, poisoning your relationship or your own feelings would get you depressed and behaving in a bad mood, resulting in having no joy in life (and what is life without joy other than useless?). I can harm you, I can kill thee Yet you crave me and you seek me As it is possibly the most vital organ for keeping you alive aside the brain, any mishap of the heart could harm you or kill you. Yet, we all crave happiness from love and seek to be loved. I'm not sure whether this riddle requires certain knowledge to solve, still I will give it a try. I guess you are Copper conductor (copper wire) You can't see me, yet I'm there We usually can't see copper wires, since it is either covered with plastic jacket or built inside the concrete wall with jackets. Under blue or red, white or black Here comes the vital part: electrical wiring (wiring color code). We sometimes may found more thinner wires with different colors inside a normal wire. These colors refers to different kinds of cable, such as neutral, ground and different phases from 2-phase/3-phase power. From the link above, blue, red, white and black are applied to wiring by different countries. For the most obvious example, check Brazil's fixed cable color code. (OP is from Brazil :D) I decide if you will live I can put you to sleep Copper wire bring electricity to machines, which may capable to decide if someone will live (life support system such as ECMO) or fall asleep (some kind of smart appliances like Nest thermostat?). 
I'm with you, as you can see Whether home, on space or sea Though copper wire is rarely seen under plastic jacket, we always knew the existence of those electrical wire, whether at home, inside the space station, or under the sea. There's no use running free I will drag you back to me Many people can't live without electricity nowdays, especially a living place without any electrical infrastructure. Don't try poisoning me, it's pointless You'll be rendering your life useless Not sure about this part, poisoning cable sounds unreasonable. Yet, I am sure that those who throw any liquid on it may get a electric shock. I can harm you, I can kill thee Yet you crave me and you seek me I think it is refer to electricity than copper wire itself. Electricity can harm, even kill me, but I need them to post on Puzzling! Hint 1 "We've been through with each other" Well, we surely been through about 200+ years. Hint 2 "It was you who started the fire, I just kept it alive" Wires doesn't produce electricity itself, they transfer electricity to make sure it won't vanish on the half way to the destination. Hope?That is what i concluded from the hint. Our hopes were there with us throught. Say we sought something in our mind (It was you who started the fire) but it is kept alive by our hopes (I just kept it alive) because we hope for a thing. We can't see our hopes yet they are there. Whether under broad day light or night or under any other mood we hope for things to happen(blue and red can be considered as different moods of a person where blue represent calmness and red as alertness/danger/angry etc a person hopes under different moods). Our hopes decide if we live as poisonous hopes can lead a person to end up life i.e. when they become hopeless. Regarding the line which says that I can put you to sleep, it can be considered as at night a person can fall asleep all the way while hoping for something. 
Our hopes are with us all the way through, whether at home or whether we are in sea or space. No matter how hard we try not to hope , we end up hoping for something or the other in any situation in life i.e. we are pulled towards our hopes or vice versa. If we poison our hopes or in other words having false hopes can make us have a tough time in life. As said earlier our own hopes can harm us and even kill us (if a person goes hopeless in life (!SUICIDE!) which we usually see happening in case when a person ends up his/her life because they become hopeless due to any kind of situation whether money problems,family problem or any other crises). Yet a person does not stop "hoping". We always end up hoping. :) • @Shane Hsu what did you meant be completing the incomplete spoilers ? – Maniraj Feb 27 '18 at 10:20 • The markup for a spoiler tag is >!, you wrote >. Welcome to Puzzling.SE! This is very well-detailed for a first answer, I hope you stick around. – F1Krazy Feb 27 '18 at 10:38 • @F1Krazy Thanks for the tip. Will remember that while answering any further questions. And yes will stick around HOPEFULLY, till my brain haggles around all those puzzles out there :).. – Maniraj Feb 27 '18 at 10:47 • any thoughts regarding my answer? – Maniraj Feb 27 '18 at 11:42 I believe you are The sun You can't see me, yet I'm there Looking directly into the sun is not something someone does Under blue or red, white or black The air, in daylight, can appear to be blue, when the sub sets it might be red. Clouds might let the sky appear white and if the sun sets completely there is blackness. I decide if you will live We would not survive without the sun. 
And some places on earth might be to cold or dry to live, this all depends on the earth's axis I can put you to sleep We sleep at night, when the sun has set I'm with you, as you can see The sun is always there Whether home, on space or sea The sun can be seem from everywhere There's no use running free I will drag you back to me I believe this revers to the gravitational pull the sun has on the earth Don't try poisoning me, it's pointless You'll be rendering your life useless We need the sun... I can harm you, I can kill thee Yet you crave me and you seek me This might refer to nasty things like sunburns and skin cancer. and now matter how bad those things might be, we will always seek out the sun for our own pleasure. • Very interesting answer and surprisingly fitting! Unfortunately, not the answer I was looking for, but you did get one of the hints right :P – HugoBDesigner Feb 27 '18 at 12:01 Time. You can't see me, yet I'm there // Under blue or red, white or black We can't actually see the time (as a physical entity). I decide if you will live // I can put you to sleep As we say - Time decides when you will die. We see time before going to sleep. I'm with you, as you can see // Whether home, on space or sea Time is everywhere, the 4th dimension. Don't try poisoning me, it's pointless // You'll be rendering your life useless You cannot change the time (your past etc.), its useless. I can harm you, I can kill thee // Yet you crave me and you seek me We all have bad times but still we keep on living. PS: This is my first ever answer on Puzzling. • Nice work, but not the answer I'm looking for. The colors are also part of the puzzle ;) – HugoBDesigner Feb 23 '18 at 12:42 ## HERE GOES NOTHING A sadistic girlfriend who just happens to be invisible. You can't see me, yet I'm there // Under blue or red, white or black She's invisible, but she's still there. Under a blue sky, white clouds, black hair or red hair? 
lol I decide if you will live // I can put you to sleep Well like, if she wants to kill you, you're pretty screwed. And obviously she can put you to sleep like any other girlfriend... I'm with you, as you can see // Whether home, on space or sea If you really like her and vice versa, I suppose she'll always be with you...even if you go to space? Don't try poisoning me, it's pointless // You'll be rendering your life useless If you poison her, you're going to go to jail for the rest of your life, which will probably make it useless. I can harm you, I can kill thee // Yet you crave me and you seek me Yeah she's sadistic so she always thinks about harming and killing you...too bad so sad. And you crave her and seek her? Sounds highly suggestive... Probably not the best answer here, but like YOLO! • That raises the question, how does one go to jail for killing an invisible person? Where's the evidence? :P – HugoBDesigner Feb 23 '18 at 11:39 • LOL you got me there. It's okay; I'll try and think of something else. – NL628 Feb 24 '18 at 6:25 Nothing. The reasoning? This riddle, similar to its original variation (unless i'm dreadfully wrong), is simply a wordplay. Replace the 'thing' in question with the word 'Nothing' and each statement playfully comes to life. To recap, the version i remembered hearing was: "I am Greater than heaven, I am lower than Hell, Rich People Want me, Poor People Have me, If you eat me, You will die. What am i?" • Welcome to Puzzling! Could you add some details to your answer about how you came to this conclusion? Answers without explanation are sometimes marked for deletion. – puzzledPig Feb 27 '18 at 4:28 • Hi puzzledpig, ive heard a similar riddle before and im sure this riddle is the same in that only one answer suffices. In this case the word i have stated. Hope this prevents deletion – Naz Shah Feb 27 '18 at 4:31 • Could you explain though? In more detail? – NL628 Feb 27 '18 at 5:50
# Tag Info ### Can Glarnak save us from the Stack Exchange app? Glarnak is not almighty - banning the StackExchange App won't stop crap from being posted There will still be user##### that don't give a damn and come in from the ... • 17.6k ### Can Glarnak save us from the Stack Exchange app? I kinda think you're putting too much thought and effort into it. Those answers are annoying, but also really simple to deal with. I see there being two options: Read the questions one line, roll ... • 34.4k Accepted ### Can Glarnak save us from the Stack Exchange app? Nope Glarnak will immediately be subsumed into the movement to make Stack Exchange more welcoming. You see, the problem isn't low quality posts. The problem is that we aren't sufficiently ... • 25k Accepted ### Is there anything wrong with low reputation users doing reviews? I have to some extend the same feeling as you. If not here, then on other site. IMO, everyone should enjoy the rights they are given. So do vote and flag, and edit. The main idea is to dedicate some ... • 6,777 ### Where are guidelines about text formatting and its improvement via (suggested) edits? No such specific guidelines exist on this site, though the community here tends to reject 'cosmetic' edits unless they are substantive and/or drastically increase the quality of the question/answer. ... • 2,866 Accepted ### I would like to see this action as valid but have issues with it I checked the moderator timeline for your answer, and pieced together a sequence of events, including what happened with the reviews: A user flagged your answer as Low Quality. Due to the flag, the ... • 97.3k ### Why is the edit count always wrong now that I crossed 10k rep? This is because once you cross 10k rep -- the review counter in the statusbar changes to show the amount of reviews in-flight on the site as a whole. (I know, this seems to be one of the most ... • 11k ### what is the number next to the 'review' link? 
You are far from the first person to ask this. There are lots of questions about this on Meta.SE, some of which lead to this question. The bottom line is that the counter in the header is the total ... • 19.2k Accepted ### What should I do with drastic edits to low-quality answers? This is just personal opinion, but . . . I would have rejected that edit. The edit absolutely changed the author's meaning. Heck, the anonymous user who suggested the edit began the third paragraph ... • 97.3k ### Are incorrect answers deleted too fast? Incorrect answers should not be deleted. They should be downvoted and/or commented on. Note that downvotes are reversible if the post is edited. New users should already have gotten feedback from ... • 25k Accepted ### Closing Low Quality Posts: More Options While I agree that the choices for standard comments to leave on posts in the review queues are... shall I say not always the most helpful, especially for someone who doesn't already know how Stack ... • 28.6k Accepted You might want to read Experiment: Review-needed indicator logic for sites that sometimes have empty queues, especially this answer: Several people have noticed that sometimes the indicator doesn't ... • 17.6k ### How is "low quality" flagged? Yes, this was flagged manually as Not an Answer, and entered the moderator flag queue and the Low Quality Posts queue. The community dealt with it before a diamond moderator did, and, for the record, ... • 97.3k ### Are incorrect answers deleted too fast? What Monica mentioned is my method as well. I will generally add a note explaining what is wrong with the answer and then say it looks ok...even if it doesn't necessarily. I don't want a user's ... • 32.5k ### What should I do with drastic edits to low-quality answers? Generally speaking, keep in mind this phrasing from the proposed edit rejection dialog: This edit deviates from the original intent of the post. Even edits that must make drastic changes should ... 
• 28.6k ### Are incorrect answers deleted too fast? The users might tend to vote without thinking about the consequences too much, because they are not the only one sharing the decision of deletion (I know, I did in the past). Because there are other ... • 16.5k ### Rules of Peer Moderation Overall I'd have to say I agree with your rules. They make sense to me, and I think pretty well cover how to judge questions. Below is what I think of all of the rules. 0) There is a reason this is ... • 6,916 ### Rules of Peer Moderation On rules 3, 4, 5: Since scope is a moving target, I think we shouldn't be too afraid of putting a question on hold. Especially if we follow the other seven rules, this doesn't have to be a painful ... • 1,551 Accepted ### Suggested edit completely changed during review I think I found the reason: The five minute grace period seems to be applied to suggested edits as well, meaning that a second edit suggestion within five minutes after the first would be merged into ... • 1,418 ### Is there anything wrong with low reputation users doing reviews? It's great that you're so involved with Worldbuilding! I go through the same thing on the sites which I have just joined. It's tough going from 10K+ rep to 100 (because we trust you on other sites). ... • 34.5k ### Rules of Peer Moderation I used to violate Rule 8, answering a question while voting to close. I thought I was being helpful... trying to guess what the question was about while signaling that I wasn't sure. That got ... • 24.9k ### Review history and my reviews are identical only on worldbuilding After some research, and checking what other people get on other stack exchange sites, I think I have managed to track this one down myself. Yes, it turns out this one is not a bug, just a feature, ... ### Where are guidelines about text formatting and its improvement via (suggested) edits? There are no specific guidelines, but I'll give my opinion on cosmetic edits. 
People use the close reason unclear what you're asking for certain reasons but to me a question can be unclear because of ... • 5,066 Accepted ### Review indicator miscounting? There are many answers to this already, on every meta on S.E. In a nutshell, the indicator doesn't count how many are available for you specifically; just how many are in the queue, period. They say ... • 68.5k Accepted ### Is it possible to review the same answer twice? The post (10k link) first went through the First Posts queue and then, after flags, went through the Low Quality queue from which it was deleted. These review queues serve different purposes, and ... • 19.2k ### What's the policy on accepting minor edits (spelling/grammar) in questions that are on hold? It still improves the question so they should be accepted. Reviewers in the Reopen queue should keep an eye on these edits and skip the review for the moment if it's only such a minor fix. • 17.6k I would propose the opposite. A question should stand by its own merits. Lots of upvotes don't mean a question is fit for the site, it just means a lot of people like it even if it is not fit. For ... Accepted ### "Stuff" not appearing in review queues. Invisible red blobs Works as designed. First for your example: you can have a look at the timeline here to see that the review is already over. The review says "Leave Closed", which is why it's still closed. But the ... • 17.6k 1 vote
## Question

If the voltage V in an electric circuit is held constant, the current I is inversely proportional to the resistance R. If the current is 60 amperes when the resistance is 270 ohms, find the current when the resistance is 150 ohms.
The capacities (in ampere-hours) were measured for a sample of 120 batteries. The average was 178,and the standard deviation was 14 Find 99% confidence interval for the mean capacity of batteries produced by this method: 2) An engineer claims- that the mean capacity is between 176 and 180 ampere hours. With what level of confidence can this statement be made? Approximately how many batteries must be sampled so that 99% confidence interval will specify the mean to within + 2 ampere-hours? Find 2. The capacities (in ampere-hours) were measured for a sample of 120 batteries. The average was 178,and the standard deviation was 14 Find 99% confidence interval for the mean capacity of batteries produced by this method: 2) An engineer claims- that the mean capacity is between 176 and 180 ampere ... ##### Show all work and indicate all solutions on separate paper and attach- this sheet to your work Solutions must be complete and supported by the work order t0 receive full credit:Evaluate [3J F (6x 12y) dy dx (12 pts)Find the volume of the solid bounded by z = Y ' Y =xx = 0, 2 = 0, Y = 1 (12 pts)Rewrite tho integral by reversing Ihe order of integration_II {(*,y) dy dxpts) Show all work and indicate all solutions on separate paper and attach- this sheet to your work Solutions must be complete and supported by the work order t0 receive full credit: Evaluate [3J F (6x 12y) dy dx (12 pts) Find the volume of the solid bounded by z = Y ' Y =xx = 0, 2 = 0, Y = 1 (12 pt... ##### Colour of hutNumberPurpleA multiple of both 3 and 5Redmultiple of 3, but not a multiple of 5BlueA multiple of 5, but not a multiple of 3 One more or one less than multiple of both 3 and 5Yellow Colour of hut Number Purple A multiple of both 3 and 5 Red multiple of 3, but not a multiple of 5 Blue A multiple of 5, but not a multiple of 3 One more or one less than multiple of both 3 and 5 Yellow... ##### Compute P(X) using the binomial probability formula. 
Then determine whether the normal distribution can be used to estimate this probability. If so, approximate P(X) using the normal distribution and compare the result with the exact probability. n = 51, p = 0.7, and X = 38. For n = 51, p = 0.7, and X = 38, find P(X).

P(X) = ___ (Round to four decimal places as needed.)

Can the normal distribution be used to approximate this probability?

- No; the normal distribution cannot be used because np(1 − p) …
- Yes; the normal distri…
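Three of the exercises above (the inverse-proportion question, the battery confidence interval, and the binomial probability) can be checked numerically. The following is an illustrative sketch, not part of the original solutions; the 99% critical value z = 2.576 and the np(1 − p) ≥ 10 rule of thumb are standard textbook conventions:

```python
import math

# --- Inverse proportionality (I = k/R): 60 A at 270 ohms -> I at 150 ohms ---
k = 60 * 270                               # I * R is constant: 16200
I_150 = k / 150                            # current at 150 ohms: 108.0 A

# --- Battery capacities: n = 120, mean 178 Ah, sd 14 Ah ---
n, xbar, s = 120, 178.0, 14.0
se = s / math.sqrt(n)                      # standard error, about 1.278
z99 = 2.576                                # two-sided 99% normal critical value
ci = (xbar - z99 * se, xbar + z99 * se)    # roughly (174.71, 181.29)

Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF
level = 2 * Phi(2 / se) - 1                # claim "176 to 180" holds at ~88%

n_needed = math.ceil((z99 * s / 2) ** 2)   # 99% CI within +-2 Ah: 326 batteries

# --- Binomial probability: n = 51, p = 0.7, X = 38 ---
nb, p, x = 51, 0.7, 38
exact = math.comb(nb, x) * p**x * (1 - p)**(nb - x)     # exact P(X = 38)

npq = nb * p * (1 - p)                     # 10.71 >= 10, so the normal
mu, sigma = nb * p, math.sqrt(npq)         # approximation is acceptable
approx = Phi((x + 0.5 - mu) / sigma) - Phi((x - 0.5 - mu) / sigma)
```

With these inputs the exact binomial value and the continuity-corrected normal value agree to within about 0.01.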
http://scitation.aip.org/content/aip/journal/jap/114/5/10.1063/1.4817527
Electronic structure and transport properties of Si nanotubes
DOI: 10.1063/1.4817527 (Journal of Applied Physics 114, 5). Published 2013-08-02.

## Figures

FIG. 1. Electron (a) and hole (b) mobility, electron (c) and hole (d) average inverse effective mass in Si NTs, as function of … with … nm, for …, …, and (◇) orientations. The equivalent data for NWs are shown as function of … for comparison. … is the free-electron mass.

FIG. 2. Electron (a) and hole (b) average inverse effective mass as function of the inner diameter in NTs with … nm, for …, …, and (◇) orientations. … is the free-electron mass.

FIG. 3. Electron (a–c) and hole (d–e) band structures for … (a,d), 4 (b,e), 16 nm (c,f), for NTs with fixed thickness … nm. The NTs have … (a–c) or … (d–e) orientation. … is the length of the unit cell; … and … are the conduction and valence band edges, respectively. The insets show density plots of the wave functions at the band edges. The localization of the wave functions at one side of the NTs (b,c) is induced by very small asymmetry in the structures.

FIG. 4. Conduction and valence band edges as function of the inner diameter of NTs with fixed outer diameter … nm (a,c) or with fixed thickness … nm (b,d), for …, …, and (◇) orientations.

## Tables

Table I. Parameters of Eq. … fitting band edge energies of Si NTs. The values are given for energies defined in eV and diameters in nm.
https://www.jobilize.com/physics-ap/section/section-summary-development-of-force-concept-by-openstax?qcr=www.quizover.com
# 4.1 Development of force concept  (Page 2/6) Page 2 / 6 A more quantitative definition of force can be based on some standard force, just as distance is measured in units relative to a standard distance. One possibility is to stretch a spring a certain fixed distance, as illustrated in [link] , and use the force it exerts to pull itself back to its relaxed shape—called a restoring force —as a standard. The magnitude of all other forces can be stated as multiples of this standard unit of force. Many other possibilities exist for standard forces. (One that we will encounter in Magnetism is the magnetic force between two wires carrying electric current.) Some alternative definitions of force will be given later in this chapter. ## Take-home experiment: force standards To investigate force standards and cause and effect, get two identical rubber bands. Hang one rubber band vertically on a hook. Find a small household item that could be attached to the rubber band using a paper clip, and use this item as a weight to investigate the stretch of the rubber band. Measure the amount of stretch produced in the rubber band with one, two, and four of these (identical) items suspended from the rubber band. What is the relationship between the number of items and the amount of stretch? How large a stretch would you expect for the same number of items suspended from two rubber bands? What happens to the amount of stretch of the rubber band (with the weights attached) if the weights are also pushed to the side with a pencil? ## Test prep for ap courses The figure above represents a racetrack with semicircular sections connected by straight sections. Each section has length d , and markers along the track are spaced d /4 apart. Two people drive cars counterclockwise around the track, as shown. 
Car X goes around the curves at constant speed $v_c$, increases speed at constant acceleration for half of each straight section to reach a maximum speed of $2v_c$, then brakes at constant acceleration for the other half of each straight section to return to speed $v_c$. Car Y also goes around the curves at constant speed $v_c$, increases its speed at constant acceleration for one-fourth of each straight section to reach the same maximum speed $2v_c$, stays at that speed for half of each straight section, then brakes at constant acceleration for the remaining fourth of each straight section to return to speed $v_c$.

(a) On the figures below, draw an arrow showing the direction of the net force on each of the cars at the positions noted by the dots. If the net force is zero at any position, label the dot with 0. The positions of the six dots on the Car Y track on the right are as follows:

• The first dot on the left center of the track is at the same position as it is on the Car X track.
• The second dot is just slightly to the right of the Car X dot (less than a dash), past three perpendicular hash marks moving to the right.
• The third dot is about one and two-thirds perpendicular hash marks to the right of the center top perpendicular hash mark.
• The fourth dot is in the same position as in the Car X figure (one perpendicular hash mark above the center right perpendicular hash mark).
• The fifth dot is about one and two-thirds perpendicular hash marks to the right of the center bottom perpendicular hash mark.
• The sixth dot is in the same position as the Car Y dot (one and two-thirds perpendicular hash marks to the left of the center bottom hash mark).

(b) i. Indicate which car, if either, completes one trip around the track in less time, and justify your answer qualitatively without using equations. ii. Justify your answer about which car, if either, completes one trip around the track in less time quantitatively with appropriate equations.

i.
Car X takes longer to accelerate and does not spend any time traveling at top speed. Car Y accelerates over a shorter time and spends time going at top speed. So Car Y must cover the straightaways in a shorter time. Curves take the same time, so Car Y must overall take a shorter time. ii. The only difference in the calculations for the time of one segment of linear acceleration is the difference in distances. That shows that Car X takes longer to accelerate. The equation $\frac{d}{4{v}_{c}}={t}_{c}$ corresponds to Car Y traveling for a time at top speed. Substituting $a=\frac{{v}_{c}}{{t}_{1}}$ into the displacement equation in part (b) ii gives $D=\frac{3}{2}{v}_{c}{t}_{1}$ . This shows that a car takes less time to reach its maximum speed when it accelerates over a shorter distance. Therefore, Car Y reaches its maximum speed more quickly, and spends more time at its maximum speed than Car X does, as argued in part (b) i. Which of the following is an example of a body exerting a force on itself? 1. a person standing up from a seated position 2. a car accelerating while driving 3. both of the above 4. none of the above A hawk accelerates as it glides in the air. Does the force causing the acceleration come from the hawk itself? Explain. A body cannot exert a force on itself. The hawk may accelerate as a result of several forces. The hawk may accelerate toward Earth as a result of the force due to gravity. The hawk may accelerate as a result of the additional force exerted on it by wind. The hawk may accelerate as a result of orienting its body to create less air resistance, thus increasing the net force forward. What causes the force that moves a boat forward when someone rows it? 1. The force is caused by the rower’s arms. 2. The force is caused by an interaction between the oars and gravity. 3. The force is caused by an interaction between the oars and the water the boat is traveling in. 4. The force is caused by friction. 
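The quantitative comparison of the two cars in part (b) ii above can be checked numerically. This is a sketch of mine (not from the original solution), using the constant-acceleration fact that a stretch of length s covered while speeding up linearly from $v_c$ to $2v_c$ takes t = s / (1.5 v_c), in units where d = v_c = 1:

```python
# Time spent on one straight section of length d, in units where d = v_c = 1.
d, vc = 1.0, 1.0

# Car X: accelerates for d/2, then brakes for d/2 (average speed 1.5 v_c on both)
t_X = 2 * (d / 2) / (1.5 * vc)                        # = 2d / (3 v_c)

# Car Y: accelerates for d/4, cruises d/2 at 2 v_c, then brakes for d/4
t_Y = 2 * (d / 4) / (1.5 * vc) + (d / 2) / (2 * vc)   # = 7d / (12 v_c)
```

Since t_X = 8d/(12 v_c) exceeds t_Y = 7d/(12 v_c) and the curve times are identical, Car Y completes a lap sooner, consistent with the qualitative argument in (b) i.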
## Section summary

• Dynamics is the study of how forces affect the motion of objects.
• Force is a push or pull that can be defined in terms of various standards, and it is a vector having both magnitude and direction.
• External forces are any outside forces that act on a body. A free-body diagram is a drawing of all external forces acting on a body.

## Conceptual questions

Propose a force standard different from the example of a stretched spring discussed in the text. Your standard must be capable of producing the same force repeatedly.

What properties do forces have that allow us to classify them as vectors?
http://cmscience.blogspot.com/2008/12/4-finite-size-pdf-of-independent-points.html
## Wednesday, December 17, 2008

### [4] Finite Size PDF of independent points in 1D

The following relates to posts [2] and [3]:

--------------------------------------------------------------------------------

Simplifications: In order to identify the minimum conditions of a double-peaked universal PDF, the situation can be simplified in several aspects:

A) 1D space: By restricting the trajectories to one spatial dimension x, rotations are not required any longer.

B) Sets of N independent points: Trajectories are normally generated by subsequently adding random increments (steps) to the respective last position of a walker. Here, in order to avoid any correlations between successive points, the N positions in each point set are drawn independently from a fixed probability density.

C) Random points equally distributed in [0,1]: Let this probability density be constant in the interval [0,1] and zero outside.

--------------------------------------------------------------------------------

Statistical quantities of point sets: For each point set {x_i}, the center of mass is computed,

$$\overline{x} = \left\langle x_i \right\rangle_i = \frac{1}{N}\sum_{i=1}^N x_i \;,$$

as well as the standard deviation

$$\sigma = \sqrt{\left\langle (x_i-\overline{x})^2\right\rangle_i} \;.$$

--------------------------------------------------------------------------------

Transformations on point sets:

C-Operation (Center): $x_i \rightarrow (x_i-\overline{x}) \;\;\forall i$

S-Operation (Scale): $x_i \rightarrow \overline{x}+(x_i-\overline{x})/\sigma \;\;\forall i$

Note that C and S commute with each other.

--------------------------------------------------------------------------------

Effects of transformations: The effects of C and S on the resulting averaged distributions P(x) are discussed in the following:

* No C, no S: Direct averaging, without any C or S, yields the expected box-shaped distribution in [0,1], centered around 1/2.
* Only C: Applying only the C-operation yields PDFs centered around zero. For N=1 one obtains a delta-function, for N=2 a triangular function, etc. For very large N, the box is recovered.

* Only S: Scaling alone already produces a double-peak structure, centered around 1/2. However, it is a finite size effect that quickly disappears for long trajectories.

* C and S: The combined effect of C and S yields a double-peak centered around zero.

If instead of the box-shaped distribution, a Gaussian distribution is used for the random points, the results are qualitatively the same. However, the double peak is much less pronounced and visible only for very small trajectory lengths.

--------------------------------------------------------------------------------

Summary: Summing up, the double-peak is not a very remarkable feature of 1D trajectories.

--------------------------------------------------------------------------------

Origin of double-peak: Take the extreme case of N=2: After a C-operation the two points lie symmetrically left and right of x=0. An additional S-operation scales the distance between the two points to the norm value, and the average distribution P(x) consequently consists of two delta-functions. On the other hand, when N becomes very large, the histogram of each individual trajectory will already reflect the ensemble average very closely. The CS-operations therefore do not change the shape of this distribution qualitatively, and one expects P(x) to recover the fundamental distribution (in our case: box-shape). Then, for intermediate lengths N, one expects a gradual interpolation between the double-delta-peak and a box-distribution.

--------------------------------------------------------------------------------
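The C and S operations above can be sketched in a few lines of pure Python (the code and names are mine, not from the original post), using the population standard deviation as in the definitions:

```python
import random

def center(xs):
    """C-operation: shift the set so its center of mass is zero."""
    m = sum(xs) / len(xs)
    return [x - m for x in xs]

def scale(xs):
    """S-operation: rescale deviations from the mean to unit std deviation."""
    m = sum(xs) / len(xs)
    s = (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5
    return [m + (x - m) / s for x in xs]

# Extreme case N = 2: C followed by S maps any pair (with distinct values)
# to {-1, +1}, so the averaged PDF is two delta peaks, as described above.
pair = scale(center([random.random(), random.random()]))
```

Repeating this for many point sets and histogramming the results reproduces the interpolation between the double-delta-peak (small N) and the box shape (large N).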
https://tex.stackexchange.com/questions/463979/how-to-closed-lines-oval
# How to close lines? \oval

How to close the lines in this? I tried and failed!

\documentclass{article}
\begin{document}
\begin{picture}(50,50)
\put(22.5,7){\oval(3,2)[lt]}
\put(22.5,7){\oval(3,2)[bl]}
\put(22.5,7){\oval(3,2)[tr]}
\put(25,7){\oval(2,2)[b]}
\put(27.5,7){\oval(3,2)[rt]}
\put(27.5,7){\oval(3,2)[br]}
\put(27.5,7){\oval(3,2)[t]}
\put(25,4.5){\oval(2,3)[b]}
\put(25,4.5){\oval(2,3)[br]}
\put(25,4.5){\oval(2,3)[bl]}
\put(23,4){\oval(2,4)[tr]}
\put(27,4){\oval(2,4)[tl]}
\end{picture}
\end{document}

LaTeX warns

LaTeX Warning: \oval, \circle, or \line size unavailable on input line 10.

By default, picture mode works by setting characters from special fonts next to each other, so it cannot make shapes smaller than the available characters. Load

\usepackage{pict2e}

and the definitions are changed to use PDF drawing primitives, so these restrictions are lifted.

Besides loading pict2e, you can add a couple of small line segments to fill in the gaps:

\documentclass{article}
\usepackage{pict2e}
\begin{document}
A
\begin{picture}(9,5)
\put(2,4){\oval(3,2)[lt]}
\put(2,4){\oval(3,2)[bl]}
\put(2,4){\oval(3,2)[tr]}
\put(4.5,4){\oval(2,2)[b]}
\put(7,4){\oval(3,2)[rt]}
\put(7,4){\oval(3,2)[br]}
\put(7,4){\oval(3,2)[t]}
\put(4.5,1.5){\oval(2,3)[b]}
\put(4.5,1.5){\oval(2,3)[br]}
\put(4.5,1.5){\oval(2,3)[bl]}
\put(2.5,1){\oval(2,4)[tr]}
\put(6.5,1){\oval(2,4)[tl]}
\put(2,3){\line(1,0){0.5}} % <--- added segment
\put(7,3){\line(-1,0){0.5}} % <--- added segment
\end{picture}
\end{document}

I reduced the size.
http://www.journaltocs.ac.uk/index.php?action=browse&subAction=subjects&publisherID=32&journalID=25893&pageb=1&userQueryID=&sort=&local_page=&sorType=&sorCol=
Biomass Conversion and Biorefinery — ISSN (Print) 2190-6815, ISSN (Online) 2190-6823. Published by Springer-Verlag.

• Optimizing GHG emission and energy-saving performance of miscanthus-based value chains

• Authors: Florian Meyer; Moritz Wagner; Iris Lewandowski Pages: 139 - 152 Abstract: Miscanthus is a high-yielding lignocellulosic crop providing up to 40 t of dry matter per hectare and year. Its biomass can be used in energetic or material utilization pathways. The goal of this study was the comparison of three different conversion techniques (combustion, second-generation bioethanol, and insulation material) for miscanthus biomass produced at five locations throughout Europe using a life cycle assessment approach. In particular, the interdependencies between the cropping location, the miscanthus genotypes, and the utilization pathways were investigated. The potential savings of greenhouse gas (GHG) emissions and fossil energy were analyzed through comparison with a corresponding substituted product system. The highest GHG savings of all scenarios investigated were achieved by heat and power production in Portugal (42.7 t CO2-eq ha−1 a−1). However, at the other four locations (Sweden, Denmark, Germany, England), bioethanol production gave the highest GHG savings. In contrast, the highest energy savings were achieved by combined heat and power generation via combustion at all five locations (up to 642 GJ ha−1 a−1).
A high correlation was found between yield and both GHG-emission savings and energy savings. Biomass composition and quality showed a comparatively low impact on the results. However, the composition is assumed to have a high relevance for other impact categories not assessed within this study, such as acidification and eutrophication. PubDate: 2017-06-01 DOI: 10.1007/s13399-016-0219-5 Issue No: Vol. 7, No. 2 (2017)

• Development and experimental validation of a water gas shift kinetic model for Fe-/Cr-based catalysts processing product gas from biomass steam gasification

• Authors: Michael Kraussler; Hermann Hofbauer Pages: 153 - 165 Abstract: This paper introduces an improved kinetic model for the water gas shift reaction catalyzed by an Fe-/Cr-based catalyst. The improved model is based on a former model which was developed previously in order to consider the composition and the catalyst poisons (H2S) of product gas derived from dual fluidized bed biomass steam gasification.

$$r\left(\varphi_i,T\right)=117.8\ \tfrac{\mathrm{mol}}{\mathrm{g}\,\mathrm{Pa}^{1.71}\,\mathrm{s}}\cdot \exp\left(\frac{-126.6\ \tfrac{\mathrm{kJ}}{\mathrm{mol}}}{R\,T}\right)\cdot p_{\mathrm{CO}}^{1.77}\cdot p_{\mathrm{H_2O}}^{0.23}\cdot p_{\mathrm{CO_2}}^{-0.17}\cdot p_{\mathrm{H_2}}^{-0.12}\cdot\left(1-\frac{K_{MAL}}{K_g}\right)$$

Furthermore, this improved model has been validated with experimental data. The data was generated by a WGS reactor which employed a commercial Fe-/Cr-based catalyst and which processed real product gas from the dual fluidized bed biomass steam gasification plant in Oberwart, Austria. Basically, the validation showed good agreement of the measured and the calculated values for the gas composition (absolute errors of the volumetric fractions of up to 1.5 %) and the temperature profile (absolute errors of up to 21 °C) of the WGS reactor. Of all considered gas components, the CO concentration showed the highest error.
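The power-law rate expression quoted in the abstract can be sanity-checked numerically. The sketch below is illustrative only: the function name, the operating values, and the equilibrium parameter beta = K_MAL/K_g (left free, since its value depends on conditions not given here) are all my assumptions, not from the paper:

```python
import math

R = 8.314  # J/(mol K)

def wgs_rate(p_CO, p_H2O, p_CO2, p_H2, T, beta=0.0):
    """Power-law WGS rate (pressures in Pa, T in K); beta = K_MAL/K_g
    is the approach to equilibrium (0 means far from equilibrium)."""
    A, Ea = 117.8, 126.6e3          # mol/(g Pa^1.71 s), J/mol
    return (A * math.exp(-Ea / (R * T))
            * p_CO**1.77 * p_H2O**0.23 * p_CO2**-0.17 * p_H2**-0.12
            * (1.0 - beta))

# The pressure exponents sum to 1.77 + 0.23 - 0.17 - 0.12 = 1.71,
# consistent with the Pa^-1.71 carried by the pre-exponential factor:
# doubling all partial pressures scales the rate by exactly 2**1.71.
r1 = wgs_rate(2e4, 4e4, 2e4, 4e4, 623.0)   # illustrative operating point
r2 = wgs_rate(4e4, 8e4, 4e4, 8e4, 623.0)   # all partial pressures doubled
```

The exponent-sum check is a useful consistency test when transcribing fitted power-law kinetics: the units of the pre-exponential factor must absorb the total pressure order.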
The results qualify the improved kinetic model for basic design and engineering of a WGS reactor employing a commercial Fe-/Cr-based catalyst which processes product gas from an industrial scale biomass steam gasification plant. PubDate: 2017-06-01 DOI: 10.1007/s13399-016-0215-9 Issue No: Vol. 7, No. 2 (2017) • Comparative biochemical methane potential of some varieties of residual banana biomass and renewable energy potential • Authors: Florent Awedem Wobiwo; Thomas Happi Emaga; Elie Fokou; Maurice Boda; Sebastien Gillet; Magali Deleu; Aurore Richel; Patrick A. Gerin Pages: 167 - 177 Abstract: The biochemical methane potential (BMP) of peduncles, bulbs, and peels of three banana varieties (Grande Naine (GN; export dessert banana), Pelipita (PPTA; locally used plantain), and CRBP969 (phytopathogen resistant hybrid-plantain)) was investigated as an assessment of the bioconversion potential of these residues to renewable energy or biorefined chemicals. Biogas production was monitored manometrically for 132 days and its composition was analyzed using gas chromatography. The BMP ranged from 162 to 257 ml_CH4/g_DM for peduncles, from 228 to 304 ml_CH4/g_DM for bulbs, and from 208 to 303 ml_CH4/g_DM for green peels, with methane content of the biogas in the range 56 to 60 %. Bulbs and green peels showed bioconversion yields of 95 % of the chemical oxygen demand (COD). The GN variety was generally more biodigestible than PPTA, which appeared richer in lignocellulosic fibres. The peels biodigestibility reduced with maturation and was already limited to 56 % of the COD at the yellow stage. The energy resource available in the residues of banana production is very significant, increasing by 91 % the energy resource offered by banana crop, which is generally limited to the nutritional value of the fruit pulp. 
In the study case of the African leading producer of bananas and plantains (Cameroon), the amount of available residues from the sole export variety GN could feed about 4 % of the annual electricity consumed by the country, i.e., a supply of electricity to an additional 9 × 105 people. Such valorization of the residual banana biomass could help banana-producing countries to become less dependent on fossil fuels and less prone to energy shortages. PubDate: 2017-06-01 DOI: 10.1007/s13399-016-0222-x Issue No: Vol. 7, No. 2 (2017) • Bioethanol production from Eucalyptus grandis hemicellulose recovered before kraft pulping using an integrated biorefinery concept • Authors: Mairan D. Guigou; Florencia Cebreiros; María N. Cabrera; Mario D. Ferrari; Claudia Lareo Pages: 191 - 197 Abstract: Pre-extraction of hemicelluloses prior to pulping and its conversion to other by-products can provide additional profits to traditional pulp and paper industry. In this study, hemicelluloses removed from Eucalyptus grandis with green liquor (2 %) at 155–160 °C for 150 min prior to kraft pulping were fermented by Scheffersomyces stipitis NBRC 10063 to produce bioethanol. These conditions were selected to obtain an extract rich in xylose without changing the quality of pulp produced: best xylose extraction yield and minor pulp viscosity degradation. Fermentation of hemicellulose hydrolysate containing 7.5 g/L xylose and 5.0 g/L acetic acid presented an ethanol yield of 0.19 g/g and sugar conversion of 89 %. However, the fermentation of hydrolyzates after concentration proved to be difficult or even impossible. Ethyl acetate extraction, used for removal of inhibitory compounds in concentrated hydrolyzates containing 19 g/L xylose, improved fermentability (final ethanol concentration of 5.0 g/L, ethanol yield of 0.21 g/g and 94 % sugar conversion, ethanol production of 4.4 Lethanol/t of dry wood) and made possible the recovery of a valuable product as acetic acid. 
PubDate: 2017-06-01 DOI: 10.1007/s13399-016-0218-6 Issue No: Vol. 7, No. 2 (2017) • Bioconversion of soybean and rice hull hydrolysates into ethanol and xylitol by furaldehyde-tolerant strains of Saccharomyces cerevisiae , Wickerhamomyces anomalus , and their cofermentations • Authors: Nicole Teixeira Sehnem; Lilian Raquel Hickert; Fernanda da Cunha-Pereira; Marcos Antonio de Morais; Marco Antônio Záchia Ayub Pages: 199 - 206 Abstract: The aims of this work were to evaluate the ability of furaldehyde-tolerant yeast strains Saccharomyces cerevisiae P6H9 and Wickerhamomyces anomalus WA-HF5.5 and their cofermentations and to convert soybean and rice hull hydrolysates into ethanol and xylitol. In batch shaker cultures, the strains showed the ability to tolerate high osmotic pressure (1918 mOsmkg−1), completely depleting furaldehyde in the first 12 h of cultivations, while converting the hydrolysate sugars into ethanol. Highest ethanol yields of 0.37 g g−1 and productivity of 0.31 g L−1 h−1 were obtained in the cofermentation using rice hull hydrolysate as substrate. The concentration of sugars in soybean hull hydrolysate proved to be inadequate as substrate for the cultivation of these strains, showing a low ethanol productivity of 0.08 g L−1 h−1. Bioreactor cultivations of S. cerevisiae on rice hull hydrolysate under anaerobiosis showed a relatively high ethanol productivity of 6.7 g L−1 h−1, whereas the bioreactor cofermentation produced xylitol to yields of 0.86 g g−1 under conditions of oxygen limitation. PubDate: 2017-06-01 DOI: 10.1007/s13399-016-0224-8 Issue No: Vol. 7, No. 2 (2017) • Moisture effect on fluidization behavior of loblolly pine Wood grinds • Authors: G Olatunde.; O Fasina.; T McDonald.; S Adhikari.; S Duke. 
Pages: 207 - 220 Abstract: The impact of moisture content (MC of 8 to 27 % wet basis) on physical properties (particle size distribution, average size using Feret, chord, Martins, surface-volume, and area diameter measurement schemes, bulk density, and particle density), fluidization behavior, and minimum fluidization velocities (U mf) of loblolly pine wood grinds were studied. A new correlation for predicting the U mf of loblolly pine wood grinds at different moisture contents was also developed. Results showed that bulk density, particle density, and porosity of grinds were significantly affected by increase in MC (p < 0.05). Diameter of the grinds measured using Feret measurement scheme was consistently the highest while those measured by surface-volume scheme were consistently the lowest with the measured Feret-based diameter about three times the surface-volume based diameters. Particle size data showed that variations in sizes of particle within a sample reduced with increase in MC (coefficient of variation value was 90 at 8.45 % MC and 40 at 27.02 % MC). Generally, as MC increased, the minimum fluidization velocity values increased. The minimum fluidization velocity (Umf) was found to be 0.2 m/s for 8 % MC, 0.24 m/s at 14.86 % MC, 0.28 m/s at 19.86 % MC, and 0.32 m/s for 27.02 % MC. The correlation developed predicted the experimental data with mean relative deviation that was less than 10 %. PubDate: 2017-06-01 DOI: 10.1007/s13399-016-0223-9 Issue No: Vol. 7, No. 
2 (2017) • The nutritional aspects of biorefined Saccharina latissima , Ascophyllum nodosum and Palmaria palmata • Authors: Peter Schiener; Sufen Zhao; Katerina Theodoridou; Manus Carey; Karen Mooney-McAuley; Chris Greenwell Pages: 221 - 235 Abstract: The chemical profile of biorefined Saccharina latissima, Ascophyllum nodosum and Palmaria palmata after carbohydrate and polyphenol extraction was analysed with the aim to evaluate the nutritional aspects of biorefined seaweeds as a novel animal feed supplement. Optimised enzymatic saccharification has been used to show that the protein concentration in the residue of P. palmata and A. nodosum can be increased by more than 2-fold. Nutritional value of the residue was further enhanced through an increase in total amino acids and fatty acids. As a consequence of removal of inorganic elements such as sodium, potassium and chloride, the total solid and ash content of all three seaweeds was reduced by around 40%. In contrast, divalent metals such as iron and zinc, as well as silicon, accumulated in all three residues. Potentially harmful components such as arsenic and iodine were reduced only in brown biorefined seaweeds, whilst in biorefined P. palmata, iodine increased by 39% compared to a 24% decline of arsenic. Nutritional values such as total fatty acid and total amino acid content increased in all three seaweeds after enzymatic saccharification. Polyphenol removal in all three seaweeds was >80% using aqueous acetonitrile and, in combination with enzymatic saccharification, did not impact on protein recovery in A. nodosum. This highlights the potential of biorefinery concepts to generate multiple products from seaweed such as extracts enriched in polyphenols and carbohydrates and residue with higher protein and lipid content. PubDate: 2017-06-01 DOI: 10.1007/s13399-016-0227-5 Issue No: Vol. 7, No. 2 (2017) • Pyrolysis kinetics of Sal ( Shorea robusta ) seeds • Authors: Ranjan R. Pradhan; Pragyan P. 
Garnaik; Bharat Regmi; Bandita Dash; Animesh Dutta Pages: 237 - 246 Abstract: Thermal kinetics of Sal seeds during pyrolysis process was investigated as feedstocks for chemical, material, and bioenergy industries. The physicochemical properties of the seeds were examined. Results showed that Sal seed can be characterized as high calorific values, low ash, and high volatile content biomass to suit pyrolysis applications. Kinetic analysis for thermal degradation of this biomass was given particular attention. Two major degradation zones were identified with T max at about 321 and 405 °C, and activation energy was evaluated using various methods. Model-free pyrolysis kinetic approach was verified to be appropriate and indicated that unprocessed Sal seed biomass can directly become potential renewable feedstock of energy, chemicals, and biochar. PubDate: 2017-06-01 DOI: 10.1007/s13399-017-0240-3 Issue No: Vol. 7, No. 2 (2017) • Characteristics of hydrochar and hydrothermal liquid products from hydrothermal carbonization of corncob • Authors: Kamonwat Nakason; Bunyarit Panyapinyopol; Vorapot Kanokkantapong; Nawin Viriya-empikul; Wasawat Kraithong; Prasert Pavasant Abstract: Corncob (CC) was converted to renewable fuel resource by hydrothermal carbonization (HTC). HTC was performed by varying process temperature (160–200 °C), residence time (1–3 h), and biomass to water ratio (BTW) (1:5 to 1:15). The properties of hydrochar were significantly enhanced where the fixed carbon and carbon content of hydrochar increased at about 24.9 and 83.7% from original contents in CC, respectively. The calorific values and yield of hydrochar were between 19.3–23.5 MJ/kg and 50.1–58.6%. The optimal condition for the production of hydrochar as solid fuel was determined at 200 °C, 3 h residence time, and BTW of 1:5 with maximum energy yield of 68.74%. 
In addition, hydrothermal liquid was characterized where volatile fatty acid, furfural, furfuryl alcohol, and hydroxymethylfurfural were the most abundant compositions with their highest yields of 17.3, 11.5, 7.9, and 5.1%, respectively. Process temperature was the most influencing variable on product properties and characteristics. The results suggested that corncob has high potential as a source for solid fuel and valuable platform chemicals. PubDate: 2017-07-22 DOI: 10.1007/s13399-017-0279-1 • Exploring the stability and reactivity of Ni 2 P and Mo 2 C catalysts using ab initio atomistic thermodynamics and conceptual DFT approaches • Authors: Ángel Morales-García; Junjie He; Pengbo Lyu; Petr Nachtigall Abstract: The stability and reactivity of Mo2C and Ni2P surfaces with different terminations are systematically investigated by means of ab initio atomistic thermodynamics and conceptual DFT approaches as a function of the chemical potential (μ). Five surfaces labeled as (001)-Mo-1, (110)-Mo/C, (001)-Ni3P2, (001)-Ni3P2-P, and (001)-Ni3P1 emerge as the most stable ones for Mo2C and Ni2P catalysts depending on μ C and μ P, respectively. The Fukui function, a reactivity descriptor, reveals that the metal atoms interact preferentially with nucleophilic adsorbates such as H2S. Here, our study predicts that a high concentration of C and P atoms on the surface reduces the catalytic activity where nucleophilic species are involved. The qualitative agreement between the nucleophilic Fukui function (f +) and the adsorption energies indicates that the Ni2P catalyst is, in general, more reactive than Mo2C catalyst. This study may help to improve and optimize the catalytic processes, such as the hydrogenations HDO and HDS, where Mo2C and Ni2P catalysts are involved. 
PubDate: 2017-07-12 DOI: 10.1007/s13399-017-0278-2 • Alkaline hydrogen peroxide pretreatment of lignocellulosic biomass: status and perspectives • Authors: Emmanuel Damilano Dutra; Fernando Almeida Santos; Bárbara Ribeiro Alves Alencar; Alexandre Libanio Silva Reis; Raquel de Fatima Rodrigues de Souza; Katia Aparecida da Silva Aquino; Marcos Antônio Morais Jr; Rômulo Simões Cezar Menezes Abstract: Lignocellulosic biomass is a renewable and abundant resource that is suitable for the production of bio-based materials such as biofuels and chemical products. However, owing to its complex chemical composition, it requires a process that enhances the release of sugars. Pretreatment is an essential stage in increasing the efficiency of enzymatic hydrolysis of lignocellulosic biomass. The most widely used pretreatment methods operate at high temperatures (160–290 °C) and pressures (0.69 to 4.9 MPa) and generate biological growth inhibitors such as furfural and hydroxymethylfurfural (HMF). Thus, there has been a growing need to adopt new approaches for an effective pretreatment that operates at ambient temperature and pressure and reduces the generation of inhibitors. Among these methods, alkaline hydrogen peroxide (AHP) is notable because it is effective for a wide range of lignocellulosic biomass concentrations, and can provide a high degree of enzymatic hydrolysis efficiency. However, few results have been discussed in the literature. Given this, the aim of this study was to investigate the use of alkaline hydrogen peroxide (AHP) as an oxidative pretreatment agent to improve the efficiency of enzymatic hydrolysis for different types of biomass and examine the key areas of the pretreatment. Finally, there is a discussion of the challenges facing a large-scale application of this method. 
PubDate: 2017-07-06 DOI: 10.1007/s13399-017-0277-3 • One-vessel saccharification and fermentation of pretreated sugarcane bagasse using a helical impeller bioreactor • Authors: Raul Alves de Oliveira; Leda Maria Fortes Gottschalk; Suely Pereira Freitas; Elba Pinto da Silva Bon Abstract: The effect of Tween® 80 and the cellulase load, on the enzymatic hydrolysis of hydrothermally pretreated sugarcane bagasse (HPSB), was evaluated in shake flask experiments, using experimental design. The optimized conditions were further applied in a second set of shake flask experiments to study the effect of the biomass load. The overall optimum parameters, e.g., 6.9% Tween® 80, 15 FPU/g glucan, and 150 g/L (dry HPSB), were used in hydrolysis experiments carried out in a laboratory-scale bioreactor equipped with a helical impeller. After a 48 h reaction time, 60% of the HPSB glucan content was hydrolyzed into glucose. The same bioreactor and hydrolysis conditions were used for one-vessel saccharification and fermentation experiments as follows: 150 g/L (dry HPSB) was hydrolyzed at 50 °C and 150 rpm for either 24 or 48 h, followed by the bioreactor’s temperature and mixing decrease to 30 °C and 90 rpm for ethanol fermentation by Saccharomyces cerevisiae. Experiments resulted in ethanol yields of 48 or 52%, for hydrolysis time of 24 or 48 h, respectively, taking into account the HPSB glucan content. The best ethanol productivity, for the overall process of 0.51 g/L.h, was achieved for the 24 h hydrolysis time. PubDate: 2017-06-29 DOI: 10.1007/s13399-017-0272-8 • Hydrothermal carbonization of food waste: simplified process simulation model based on experimental results • Authors: Kyle McGaughy; M. Toufiq Reza Abstract: Hydrothermal carbonization (HTC) was performed on homogenized food waste (FW) in a batch reactor at 200, 230, and 260 °C for 30 min. 
Solid product, called hydrochar, was characterized by means of ultimate analysis, proximate analysis, higher heating value (HHV), and ash content. On the other hand, liquid products were analyzed by inductively coupled plasma (ICP), total carbon, and pH. HHV of FW was increased from 25.1 to 33.1 MJ kg−1 by HTC. Ash content is less than 3% for hydrochars as well as the raw FW. Fixed carbon increased from 18.8 to 22.4% with the increase of HTC temperature. Fuel characteristics indicate hydrochar as a potential solid fuel and carbon storage. Therefore, a simplified simulation model was created for a continuous process that performs HTC of 1 t of FW per day. It was determined that HTC of food waste has potential to be a viable process for the production of solid fuel, primarily due to ease of drying product char. PubDate: 2017-06-28 DOI: 10.1007/s13399-017-0276-4 • Catalytic hydroprocessing of lignin β-O-4 ether bond model compound phenethyl phenyl ether over ruthenium catalysts • Authors: B. Gomez-Monedero; J. Faria; F. Bimbela; M. P. Ruiz Abstract: The catalytic hydroprocessing of phenethyl phenyl ether (PPE), a model compound of one of the most significant ether linkages within lignin structure, β-O-4, has been studied. Reactions were carried out using two ruthenium-based catalysts, supported on different materials: 3.8 wt.% Ru/C and 3.9 wt.% Ru/Al2O3. Aiming at studying the reaction mechanism, experiments were carried out at 150 °C and 25 bar in H2 atmosphere, with varying feed to catalyst mass ratios and reaction time. Differences between the relative importance of the steps of the mechanism were observed when using those two catalysts. The most significant finding was the predominance of the cleavage of Cβ-O bonds compared to the cleavage of the Caryl-O when using Ru/Al2O3 as catalyst; whereas with Ru/C, the two routes were nearly equivalent. 
It has been observed that the kinetic model describes the general tendencies of consumption and formation of the different products, but some over/under estimation of concentrations occurs. Finally, the effect of temperature was also explored by carrying out reactions at 100 and 125 °C, observing that decreasing temperature from 150 to 125 or 100 °C favored the dimer hydrogenation route versus the hydrogenolysis of the ether bonds. PubDate: 2017-06-24 DOI: 10.1007/s13399-017-0275-5 • Effect of additives on thermochemical conversion of solid biofuel blends from wheat straw, corn stover, and corn cob • Authors: Natasa Dragutinovic; Isabel Höfer; Martin Kaltschmitt Abstract: To investigate the effect of fuel blending and additives on ash melting behavior and the formation behavior of particulate matter (PM) emissions from combustion of crop residues, corn stover, corn cobs, and wheat straw as well as selected blends without and with 2 wt% additive have been examined by determining ash melting behavior in laboratory muffle furnace, ash elemental composition using ion chromatography (IC) and atomic absorption spectrometer (AAS), thermogravimetric properties of ashes using thermogravimetric analysis (TGA), and crystalline phases using powder X-ray diffraction (XRD). The results show that wheat straw starts sintering above 800 °C, corn cobs at 900 °C, whereas corn stover above 1000 °C. Fuel blending can influence the ash characteristics, but the influence is not sufficient to prevent ash sintering during typical combustion temperatures. All three additives (kaolinite (Al2Si2O5(OH)4), magnesium oxide (MgO), and calcite (CaCO3)) are successful in preventing ash sintering up to 1100 °C. At 1000 °C, K, Ca, Mg, and SO4 2− remain in decreased concentrations only partly in the ashes (i.e., a certain share of these components is transferred into the gas phase forming particulate matter emissions). However, Cl− is completely released into the gas phase. 
After heating 550 °C ashes to 1000 °C using TGA, mass losses of ~15 wt% were observed in most fuels and fuel blends with and without additives. An exception in the TGA was the blends with CaCO3; the samples show a mass loss higher than 25 wt%, which at the same time leads to an increased release of components into gas phase. Kaolinite and MgO are good K sorbents, forming new silicates in the ash such as K-Al silicates, K-Mg silicates, Ca-Mg silicates, and K-Al silicates, whereas CaCO3 facilitated K release and formation of Ca silicates, Ca-Na silicates, and Ca-Mg-Al silicates. Furthermore, MgO and CaCO3 can bind SO4 2− in the ashes. PubDate: 2017-06-23 DOI: 10.1007/s13399-017-0273-7 • Study of time reaction on alkaline pretreatment applied to rice husk on biomass component extraction • Authors: Lara Soares Monte; Viviane Alves Escócio; Ana Maria Furtado de Sousa; Cristina Russi Guimarães Furtado; Marcia Christina Amorim Moreira Leite; Leila Lea Yuan Visconte; Elen Beatriz Acordi Vasquez Pacheco Abstract: Rice husk (RH) residue was submitted to a sequence of experimental procedures, specifically to investigate the reaction time influence of NaOH pretreatment on the extraction of silica, hemicellulose, and lignin components. In order to follow the extraction of each non-cellulosic components of rice husk, techniques such as Fourier transform infrared spectroscopy, thermogravimetric analysis, X-ray diffraction analysis, and scanning electron microscopy were performed on untreated RH and samples collected from the NaOH reaction media at several different reaction times, as well as the sample after alkaline-peroxide treatment. Under the process parameters used in the present study, the results showed that a great part of hemicellulose and silica contents was removed during the first 30 min of reaction time in NaOH pretreatment. 
Although there is evidence that NaOH pretreatment also removed some lignin content, the complete delignification process was more effective just after the alkaline-peroxide reaction, which produced material rich in type I cellulose. PubDate: 2017-06-20 DOI: 10.1007/s13399-017-0271-9 • Miscanthus as biogas feedstock: influence of harvest time and stand age on the biochemical methane potential (BMP) of two different growing seasons • Authors: Axel Schmidt; Sébastien Lemaigre; Thorsten Ruf; Philippe Delfosse; Christoph Emmerling Abstract: The use of perennial crops instead of maize as feedstock in biogas plants can be associated with multiple environmental and economic benefits. One promising species in this domain is the C4 grass Miscanthus × giganteus. The use of its biomass can mitigate carbon dioxide emissions through the substitution of fossil fuels, sequestration of carbon in soils, and reduced fertilizing. We compared Miscanthus from fields of two different ages (established in 1995 and 2008) at three different harvest dates over 2 years. While the harvest in spring, as is usual for combustion purposes, led to relatively low methane yields per hectare, the harvest in autumn, when the biomass is still green, exceeded the average methane yields per hectare of maize. The comparison of the Miscanthus fields of different ages showed that there is no significant difference in terms of biomass yield, specific BMP, and BMP per hectare. Only the influence of repeated autumn harvest showed differences in methane production per hectare between the two stand ages. The methane yield of the younger stand did not change considerably, while in the older stand the productivity decreased by about 15% after 1 year.
PubDate: 2017-06-20 DOI: 10.1007/s13399-017-0274-6 • Editorial thematic issue BCAB • Authors: Frédéric Vogel PubDate: 2017-06-16 DOI: 10.1007/s13399-017-0270-x • Updates on the pretreatment of lignocellulosic feedstocks for bioenergy production–a review • Authors: Karthik Rajendran; Edward Drielak; V. Sudarshan Varma; Shanmugaprakash Muthusamy; Gopalakrishnan Kumar Abstract: Lignocellulosic biomass is the most abundant renewable energy bioresources available today. Due to its recalcitrant structure, lignocellulosic feedstocks cannot be directly converted into fermentable sugars. Thus, an additional step known as the pretreatment is needed for efficient enzyme hydrolysis for the release of sugars. Various pretreatment technologies have been developed and examined for different biomass feedstocks. One of the major concerns of pretreatments is the degradation of sugars and formation of inhibitors during pretreatment. The inhibitor formation affects in the following steps after pretreatments such as enzymatic hydrolysis and fermentation for the release of different bioenergy products. The sugar degradation and formation of inhibitors depend on the types and conditions of pretreatment and types of biomass. This review covers the structure of lignocellulose, followed by the factors affecting pretreatment and challenges of pretreatment. This review further discusses diverse types of pretreatment technologies and different applications of pretreatment for producing biogas, biohydrogen, ethanol, and butanol. PubDate: 2017-06-06 DOI: 10.1007/s13399-017-0269-3 • Mono-, bi-, and tri-metallic Ni-based catalysts for the catalytic hydrotreatment of pyrolysis liquids • Authors: Wang Yin; Robbie H. Venderbosch; Songbo He; Maria V. Bykova; Sofia A. Khromova; Vadim A. Yakovlev; Hero J. Heeres Abstract: Catalytic hydrotreatment is a promising technology to convert pyrolysis liquids into intermediates with improved properties. 
Here, we report a catalyst screening study on the catalytic hydrotreatment of pyrolysis liquids using bi- and tri-metallic nickel-based catalysts in a batch autoclave (initial hydrogen pressure of 140 bar, 350 °C, 4 h). The catalysts are characterized by a high nickel metal loading (41 to 57 wt%), promoted by Cu, Pd, Mo, and/or combination thereof, in a SiO2, SiO2-ZrO2, or SiO2-Al2O3 matrix. The hydrotreatment results were compared with a benchmark Ru/C catalyst. The results revealed that the monometallic Ni catalyst is the least active and that particularly the use of Mo as the promoter is favored when considering activity and product properties. For Mo promotion, a product oil with improved properties viz. the highest H/C molar ratio and the lowest coking tendency was obtained. A drawback when using Mo as the promoter is the relatively high methane yield, which is close to that for Ru/C. 1H, 13C-NMR, heteronuclear single quantum coherence (HSQC), and two-dimensional gas chromatography (GC × GC) of the product oils reveal that representative component classes of the sugar fraction of pyrolysis liquids like carbonyl compounds (aldehydes and ketones and carbohydrates) are converted to a large extent. The pyrolytic lignin fraction is less reactive, though some degree of hydrocracking is observed. PubDate: 2017-06-03 DOI: 10.1007/s13399-017-0267-5
https://physics.stackexchange.com/questions/274131/component-of-electric-field-tangential-to-spherical-shell-is-zero
# Component of electric field tangential to spherical shell is zero

For a spherical shell of radius R with a static uniform surface charge density, the $\theta$ and $\phi$ components of the electric field are zero. The reason supplied by my notes is this: "Due to the uniform nature of the charge distribution on the surface of the sphere, the part of the electric field tangential to the surface must be zero". How can I 'see' that the $\phi$ and $\theta$ components are tangential to the surface of any given spherical shell?

It pretty much spells it out for you in the quote; you could think of it in the same way as gravity works on Earth. If you stand up straight on level ground, you will not be pulled any way except down by gravity: you won't fall left or right, or backwards or forwards. Look up central force on Wikipedia, with regard to spherically symmetric potentials. Just to finish off: in a Cartesian coordinate system, x, y and z are orthogonal (at right angles to each other); in spherical coordinates, $r$, $\theta$ and $\phi$ are orthogonal.

This only holds if the spherical coordinate system is centred at the centre of the spherical shell. If you take e.g. two spherical shells, this will not be true for both of them in the same coordinate system. For one spherical shell with the coordinate system centred at its centre, spherical coordinates give a natural parametrisation with $r=R_{shell}$ and $\vartheta, \varphi$ being variable. Thus it is clear that the vector components along the angles constitute tangential components to the shell.

If you wish to calculate the electric field at some point $\vec{R}$ outside the sphere, you have to integrate over the sphere the field due to each surface element.
A surface element is given by $$dS=r^{2}\sin(\theta)\,d\theta\,d\phi.$$ The element of charge on it is $$dq=\sigma\, dS.$$ So an element of the electric field is $$d\vec{E}=k\frac{dq}{|\vec{R}|^{2}}\hat{R},$$ where $\hat{R}=\frac{\vec{R}}{|\vec{R}|}$ and $\vec{R}$ here denotes the vector from the surface element to the observation point. Now we have to integrate: $$\vec{E}=\intop_{S}d\vec{E}=k\intop_{S}\frac{\sigma}{|\vec{R}|^{3}}\vec{R}\,dS=k\intop_{0}^{\pi}\intop_{0}^{2\pi}\frac{\sigma}{|\vec{R}|^{3}}\vec{R}\,r^{2}\sin(\theta)\,d\phi\,d\theta.$$ And you can see that, due to the symmetrical nature of the problem, the field you get lies on the line that connects the observation point and the center of the sphere. Now you may ask what will happen if the center of the sphere is at an arbitrary location in space. In the new system of coordinates the field will not be radial with respect to the origin of the new coordinates, but this is not important as long as the sphere is the only source of electric field. You can simply change coordinates and recover the above result. The physics of the problem does not depend on the location of the sphere, so the field is always radial with respect to the sphere.
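To make the symmetry argument concrete, here is a small numerical sanity check (my own illustration, not part of the original answers): it discretizes a unit shell with $k=\sigma=1$ and sums the Coulomb contributions at a point on the z-axis. The tangential (x, y) components cancel, and the radial component approaches $kQ/d^{2}$ with $Q=4\pi R^{2}\sigma$.

```python
import math

def shell_field(d, R=1.0, n=200):
    """Field at (0, 0, d) from a uniform shell of radius R centred at the
    origin, with k = sigma = 1, by midpoint summation over (theta, phi)."""
    Ex = Ey = Ez = 0.0
    dt, dp = math.pi / n, 2.0 * math.pi / n
    for i in range(n):
        th = (i + 0.5) * dt
        for j in range(n):
            ph = (j + 0.5) * dp
            # source point on the shell and its charge element dq = sigma * dS
            sx = R * math.sin(th) * math.cos(ph)
            sy = R * math.sin(th) * math.sin(ph)
            sz = R * math.cos(th)
            dq = R * R * math.sin(th) * dt * dp
            # separation vector from the surface element to the field point
            rx, ry, rz = -sx, -sy, d - sz
            r3 = (rx * rx + ry * ry + rz * rz) ** 1.5
            Ex += dq * rx / r3
            Ey += dq * ry / r3
            Ez += dq * rz / r3
    return Ex, Ey, Ez

Ex, Ey, Ez = shell_field(2.0)  # tangential parts cancel; Ez ~ Q/d^2 = pi
```

For a unit shell observed at d = 2, the total charge is Q = 4π, so the radial component tends to 4π/4 = π while the tangential components vanish by symmetry.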
https://derrickmedia.com/findanswers-277
# How do you find the slant asymptote

## Finding Slant Asymptotes of Rational Functions

The slant asymptote of a rational function has the form Q = mx + c; it is the quotient function produced by long dividing the numerator by the denominator. If the degree of the numerator is exactly one more than the degree of the denominator, a slant asymptote exists and can be found. As an example, look at the rational function (x^2 + 5x + 2)/(x + 3): the degree of its numerator (2) exceeds the degree of its denominator (1) by one, so it has a slant asymptote.

## How to Find Slant Asymptotes

First, since a slant asymptote is a linear function, we know it must be equal to $ax+b$. The formal definition of a slant asymptote is that $\lim_{x\to\pm\infty} \left(f(x) - (ax+b)\right) = 0$.
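The long-division recipe above can be sketched in a few lines of Python (a stdlib-only illustration; coefficient lists are given in descending powers, and the helper name is my own):

```python
def slant_asymptote(num, den):
    """Return (m, c) for the slant asymptote y = m*x + c of num/den,
    where num and den are coefficient lists in descending powers.
    Requires deg(num) == deg(den) + 1, so the quotient is linear."""
    if len(num) != len(den) + 1:
        raise ValueError("slant asymptote exists only when deg(num) = deg(den) + 1")
    rem = list(num)
    quot = []
    for _ in range(2):              # the quotient has exactly two terms
        factor = rem[0] / den[0]
        quot.append(factor)
        # subtract factor * den, aligned with the current leading term
        for i, d in enumerate(den):
            rem[i] -= factor * d
        rem.pop(0)                  # leading term is now zero; drop it
    return quot[0], quot[1]

m, c = slant_asymptote([1, 5, 2], [1, 3])  # (x^2 + 5x + 2) / (x + 3)
```

For the worked example this gives m = 1, c = 2, i.e. the slant asymptote y = x + 2; the remainder −4/(x + 3) vanishes as x → ±∞, matching the limit definition above.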
https://scr.readthedocs.io/en/latest/users/run.html
# Run a job

In addition to the SCR library, one may optionally include SCR commands in their job script. These commands are most useful on systems where failures are common. The SCR commands prepare the cache, scavenge files from cache to the parallel file system, and check that the scavenged dataset is complete, among other things. The commands also automate the process of relaunching a job after failure. These commands are located in the /bin directory where SCR is installed. There are numerous SCR commands; any command not mentioned in this document is not intended to be executed by users. For best performance, one should: 1) inform the batch system that the allocation should remain available even after a failure, and 2) replace the command that executes the application with an SCR wrapper script. The precise set of options and commands to use depends on the system resource manager.

## Supported platforms

At the time of this writing, SCR supports specific combinations of resource managers and job launchers. The descriptions for using SCR in this section apply to these specific configurations; however, the following description is helpful for understanding how to run SCR on any system. Please contact us for help in porting SCR to other platforms. (See Section Support and Additional Information for contact information.)

## Jobs and job steps

First, we differentiate between a job allocation and a job step. Our terminology originates from the SLURM resource manager, but the principles apply generally across SCR-supported resource managers. When a job is allocated resources on a system, the batch script executes inside of a job allocation. The job allocation consists of a set of nodes, a time limit, and a job id. The job id can be obtained by executing the squeue command on SLURM, the apstat command on ALPS, or the bjobs command on LSF.
Within a job allocation, a user may run one or more job steps, each of which is invoked by a call to srun on SLURM, aprun on ALPS, or mpirun on LSF. Each job step is assigned its own step id. On SLURM, within each job allocation, job step ids start at 0 and increment with each issued job step. Job step ids can be obtained by passing the -s option to squeue. A fully qualified name of a SLURM job step consists of: jobid.stepid. For instance, the name 1234.5 refers to step id 5 of job id 1234. On ALPS, each job step within an allocation has a unique id that can be obtained through apstat.

## Ignoring node failures

Before running an SCR job, it is recommended to configure the job allocation to withstand node failures. By default, most resource managers terminate the job allocation if a node fails; however, SCR requires the job allocation to remain active in order to restart the job or to scavenge files. To enable the job allocation to continue past node failures, one must specify the appropriate flags from the table below.

SCR job allocation flags:

| Resource manager | Mode | Flag |
| --- | --- | --- |
| MOAB | batch script | `#MSUB -l resfailpolicy=ignore` |
| MOAB | interactive | `qsub -I ... -l resfailpolicy=ignore` |
| SLURM | batch script | `#SBATCH --no-kill` |
| SLURM | interactive | `salloc --no-kill ...` |
| LSF | batch script | `#BSUB -env "all, LSB_DJOB_COMMFAIL_ACTION=KILL_TASKS"` |
| LSF | interactive | `bsub -env "all, LSB_DJOB_COMMFAIL_ACTION=KILL_TASKS" ...` |

## The SCR wrapper script

The easiest way to integrate SCR into a batch script is to set some environment variables and to replace the job run command with an SCR wrapper script. The SCR wrapper script includes logic to restart an application within a job allocation, and it scavenges files from cache to the parallel file system at the end of an allocation:

```
SLURM: scr_srun [srun_options] <prog> [prog_args ...]
ALPS:  scr_aprun [aprun_options] <prog> [prog_args ...]
LSF:   scr_mpirun [mpirun_options] <prog> [prog_args ...]
```

The SCR wrapper script must run from within a job allocation.
Internally, the command must know the prefix directory. By default, it uses the current working directory. One may specify a different prefix directory by setting the SCR_PREFIX parameter. It is recommended to set the SCR_HALT_SECONDS parameter so that the job allocation does not expire before datasets can be flushed (Section Halt a job). By default, the SCR wrapper script does not restart an application after the first job step exits. To automatically restart a job step within the current allocation, set the SCR_RUNS environment variable to the maximum number of runs to attempt. For an unlimited number of attempts, set this variable to -1. After a job step exits, the wrapper script checks whether it should restart the job. If so, the script sleeps for some time to give nodes in the allocation a chance to clean up. Then it checks that there are sufficient healthy nodes remaining in the allocation. By default, the wrapper script assumes the next run requires the same number of nodes as the previous run, which is recorded in a file written by the SCR library. If this file cannot be read, the command assumes the application requires all nodes in the allocation. Alternatively, one may override these heuristics and precisely specify the number of nodes needed by setting the SCR_MIN_NODES environment variable to the number of required nodes. For applications that cannot invoke the SCR wrapper script as described here, one should examine the logic contained within the script and duplicate the necessary parts in the job batch script. In particular, one should invoke scr_postrun for scavenge support. 
## Example batch script for using SCR restart capability

An example SLURM batch script with scr_srun is shown below:

```bash
#!/bin/bash
#SBATCH --partition pbatch
#SBATCH --nodes 66
#SBATCH --no-kill

# above, tell SLURM to not kill the job allocation upon a node failure
# also note that the job requested 2 spares -- it uses 64 nodes but allocated 66

# specify where datasets should be written
```
Mathematical and Physical Journal for High Schools Issued by the MATFUND Foundation

# Problem S. 43. (March 2009)

S. 43. We are given a rectangular labyrinth. Its walls are unit squares and it has one entrance from which each square of the labyrinth can be reached. Corridors and walls are parallel with the outer walls, and it may be the case that there is more than one path between two given points. We would like to illuminate each square of the labyrinth by placing lamps on the ceiling of the corridors. A lamp illuminates the complete horizontal or vertical corridor segments in which the lamp is located. Your task is to illuminate the complete labyrinth by using the minimal number of lamps. In the example, "Bemenet" is "input", while "Kimenet" is "output".

The first command line argument of your program solving this problem is the name of the input text file containing the plan of the labyrinth, while the second command line argument is the name of the output text file that should contain the solution. The input file has at most 100 lines. Each line has the same number of consecutive 0s or 1s: 0 means corridor and 1 is wall. The structure of the output file should be similar, and the position of the lamps should be marked with X. The total number of lamps should also be displayed on the screen. When evaluating your solution, running time for various labyrinth sizes will be taken into account, and some points will be awarded for solutions with slightly more than the optimal number of lamps. The source code of your program (s43.pas, s43.cpp, ...) and a short documentation (s43.txt, s43.pdf, ...) should be submitted, also describing your solution and specifying the name of the developer environment to use for compiling. (10 points)

Deadline expired on April 15, 2009.

As sample solutions we publish the C# program (s43.cs) of Weisz Ágoston, a 10th-grade student from Budapest, and the Pascal program (S43.pas) of Englert Péter, a 10th-grade contestant from Zalaegerszeg. Both are well-commented, readable sources. The submitted solutions were tested with the following test files: s43forras.zip. Some of the labyrinths were "strange": they contained parts unreachable from the entrance, and even isolated cells. Naturally, it was not counted as an error if a solution "did not reach" these. Most contestants solved the problem by placing more lamps than necessary; they received at most 6 points.

### Statistics:

16 students sent a solution.
10 points: Englert Péter, Lájer Márton, Weisz Ágoston.
9 points: Nagy Róbert.
8 points: 1 student.
7 points: 1 student.
6 points: 8 students.
4 points: 2 students.

Problems in Information Technology of KöMaL, March 2009
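As the scoring note says, near-optimal solutions earned partial credit. A greedy heuristic in that spirit is easy to sketch in Python (illustrative only; this is not one of the published contest programs, and it is not guaranteed to be optimal):

```python
def corridor_cells(grid):
    """All corridor positions ('0') in a list-of-strings labyrinth."""
    return {(r, c) for r, row in enumerate(grid)
                   for c, ch in enumerate(row) if ch == '0'}

def lit_by(cell, corridors):
    """Cells illuminated by a lamp at `cell`: its maximal horizontal
    and vertical corridor segments."""
    lit = {cell}
    for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0)):
        r, c = cell[0] + dr, cell[1] + dc
        while (r, c) in corridors:
            lit.add((r, c))
            r, c = r + dr, c + dc
    return lit

def place_lamps(grid):
    """Greedy heuristic: repeatedly place a lamp at the corridor cell
    that lights the most currently unlit cells."""
    corridors = corridor_cells(grid)
    unlit, lamps = set(corridors), []
    while unlit:
        best = max(corridors, key=lambda p: len(lit_by(p, corridors) & unlit))
        lamps.append(best)
        unlit -= lit_by(best, corridors)
    return lamps

# e.g. a 3x3 labyrinth with walls above and below the centre column
lamps = place_lamps(["010", "000", "010"])
```

An exact solver needs more care, since a single lamp covers both its horizontal and its vertical segment at once.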
#### 6th Standard Maths English Medium Free Online Test Book Back 1 Mark Questions Part - One

6th Standard Maths
Time: 00:10:00 Hrs
Total Marks: 10

Part A (10 x 1 = 10)

1. The value of 24 $\div$ {8 - (3 $\times$ 2)} is
(a) 0 (b) 12 (c) 3 (d) 4

2. 6 less to 'n' gives 8 is represented as
(a) n − 6 = 8 (b) 6 − n = 8 (c) 8 − n = 6 (d) n − 8 = 6

3. If the ratios formed using the numbers 2, 5, x, 20 in the same order are in proportion, then 'x' is
(a) 50 (b) 4 (c) 10 (d) 8

4. The representation of 'one picture to many objects' in a Pictograph is called __________.
(a) Tally mark (b) Pictoword (c) Scaling (d) Frequency

5. The number 87846 is divisible by
(a) 2 only (b) 3 only (c) 11 only (d) all of these

6. What time will it be 5 hours after 22:35 hours?
(a) 2:30 hours (b) 3:35 hours (c) 4:35 hours (d) 5:35 hours

7. If $\frac{6}{7}=\frac{A}{49}$, then the value of A is
(a) 42 (b) 36 (c) 25 (d) 48

8. 3 units to the left of 1 is
(a) -4 (b) -3 (c) -2 (d) 3

9. The length and breadth of a rectangular sheet of paper are 15 cm and 12 cm respectively. A rectangular piece is cut from one of its corners. Which of the following statements is correct for the remaining sheet?
(a) Perimeter remains the same but the area changes
(b) Area remains the same but the perimeter changes
(c) There will be a change in both area and perimeter
(d) Both the area and perimeter remain the same

10. What will be the 25th letter in the pattern ABCAABBCCAAABBBCCC, ...?
(a) B (b) C (c) D (d) A
# Numeric equality testing?

Suppose we have two closed-form expressions with $k$ unknowns which are hard to test for equality but easy to evaluate numerically over $\mathbb{R}^k$. One could then approach the problem of equality testing by checking equality numerically at several points. The interesting questions are then: for which kinds of expressions can you do it, how to pick sampling points, and how many points are needed. Google Scholar gives 0 hits for "numeric equality testing". Has this kind of problem been studied before? What are the right keywords to search for?

- Although this does not give you direct help, try some modifiers with "interpolation" or "approximation". They may help you to find the right search terms. For certain situations, "unification" is used, but I suspect this used as a search term will not help in your situation. Good luck. Gerhard "Ask Me About System Design" Paseman, 2010.11.16 – Gerhard Paseman Nov 17 '10 at 3:50
- Search for "identity testing". – Felipe Voloch Nov 17 '10 at 4:33
- thanks, "polynomial identity testing" gives lots of hits – Yaroslav Bulatov Nov 17 '10 at 4:41
- You may find something of interest at mathoverflow.net/questions/39733/39738#39738 – Gerry Myerson Nov 17 '10 at 4:48
- The Motwani/Raghavan randomized algorithm book (Chapter 7) is a good reference: I JUST taught this today in my algorithms class. – Suresh Venkat Nov 17 '10 at 5:47
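The keyword the comments converge on, polynomial identity testing, has a classical randomized answer: by the Schwartz–Zippel lemma, a nonzero polynomial of total degree at most $d$ vanishes at a uniformly random point of $S^k$ with probability at most $d/|S|$. A small Python sketch of the resulting test (the two expressions below are illustrative examples, not from the question):

```python
import random

def probably_equal(f, g, k, trials=20, modulus=2**61 - 1):
    """Randomized identity test in the spirit of Schwartz-Zippel:
    evaluate both expressions at random points modulo a large prime.
    A False answer is certain; a True answer is correct with high
    probability when f - g is a polynomial of low degree."""
    for _ in range(trials):
        point = [random.randrange(modulus) for _ in range(k)]
        if f(*point) % modulus != g(*point) % modulus:
            return False
    return True

# (x + y)^2 and x^2 + 2xy + y^2 are the same polynomial ...
same = probably_equal(lambda x, y: (x + y) ** 2,
                      lambda x, y: x * x + 2 * x * y + y * y, k=2)

# ... while (x + y)^2 and x^2 + y^2 are not
diff = probably_equal(lambda x, y: (x + y) ** 2,
                      lambda x, y: x * x + y * y, k=2)
```

For transcendental expressions over the reals, the picture is subtler than the polynomial case, which is part of what the question is asking.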
# Cubic and Linear Thermal Expansion.

1. Apr 29, 2013

### SherlockOhms

1. The problem statement, all variables and given/known data

1. A 100 m long copper wire of diameter 4 mm is heated from 20 C to 80 C. What is the change in the length of the wire?
2. A 0.2 m diameter aluminium sphere is cooled from 250 C to 0 C. What is the change in the volume of the sphere?

The coefficients of linear and cubic expansion were given for both materials.

2. Relevant equations

α(L) = dL/dT(1/L)
α(V) = dV/dT(1/V)

3. The attempt at a solution

For the first question, is it not just a simple substitution exercise using the formula α(L) = dL/dT(1/L)? The fact that the diameter was provided is confusing me though. For the second question, isn't it just using the second equation α(V) = dV/dT(1/V), where V will be 4/3 * (pi) * r^3 and dT is -250? Thanks, point out any incorrect observations!

Last edited: Apr 29, 2013

2. Apr 29, 2013

### Staff: Mentor

Your equations are the same as $$\frac{d\ln{L}}{dT}=α_L$$ $$\frac{d\ln{V}}{dT}=α_V$$ You need to integrate these equations with respect to T to get the change in length or volume. Usually, when you do this, there will be a roundoff issue. You need to make use of the relation ln(1+x)≈x, or the relation exp(x)-1≈x, to deal with this roundoff issue. The diameter was deliberately included in the problem statement to confuse you and to test your understanding. Your method for solving the second question is on target, and is appropriate for the first question also.

3. Apr 30, 2013

### SherlockOhms

Brilliant. Thanks for the help!

4. Apr 30, 2013

### SherlockOhms

Actually, would you mind explaining how that equation is integrated to give the equations which I gave above?

5. Apr 30, 2013

### Staff: Mentor

These are the equations you gave above, just re-expressed mathematically.
If you integrate these equations, you get: $$\ln(L/L_{init})=α_L(T-T_{init})$$ $$\ln(V/V_{init})=α_V(T-T_{init})$$ If the change in L from its initial length is small, then you can write: $$\ln(L/L_{init})=\ln(1+(L-L_{init})/L_{init})≈(L-L_{init})/L_{init}$$ The same goes for the change in V.

6. Apr 30, 2013

### SherlockOhms

I see. Thanks for this!
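For concreteness, a short Python check of both parts. The thread does not quote the coefficients given in the assignment, so typical handbook values are assumed here (α_L ≈ 17e-6 per degC for copper, α_V ≈ 69e-6 per degC for aluminium):

```python
import math

# Assumed handbook values; the original assignment supplied its own.
ALPHA_L_CU = 17e-6   # linear expansion coefficient of copper, 1/degC
ALPHA_V_AL = 69e-6   # cubic expansion coefficient of aluminium, 1/degC

# Part 1: dL = alpha_L * L * dT (the small-change approximation above).
L0 = 100.0                          # m; the 4 mm diameter is a distractor
dL = ALPHA_L_CU * L0 * (80 - 20)

# Part 2: dV = alpha_V * V * dT, with V = (4/3) pi r^3.
V0 = (4 / 3) * math.pi * 0.1 ** 3   # m^3, radius 0.1 m
dV = ALPHA_V_AL * V0 * (0 - 250)

print(f"dL = {dL * 100:.1f} cm, dV = {dV * 1e6:.1f} cm^3")
```

The wire lengthens by about 10 cm while the sphere loses roughly 72 cm^3, with the exact numbers depending on the coefficients the assignment actually gave.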
# For all Borel $E\subset [0,1]$ s.t $m(E)=1/2$, we have $\mu(E)=1/2$ where $\mu$ is a prob meas, $m$ is Lebesgue meas. Is $\mu=m$ on Borel sets?

For all Borel $E\subset [0,1]$ such that $m(E)=1/2$, we have $\mu(E)=1/2$, where $\mu$ is a probability measure on $[0,1]$ and $m$ is the Lebesgue measure. Does this imply that the measures $\mu$ and $m$ agree on all Borel subsets of $[0,1]$? What if we replace the number 1/2 with any other number less than 1? Any hints or suggestions are welcome.

A simple solution is to notice that for all $n$, the assumption implies that $\mu$ and $m$ coincide on all intervals of length $1/2^n$, and therefore on all intervals, but the intervals generate the Borel sets. Similarly if $1/2$ is replaced by some other number in $(0,1)$. You easily get a sequence of lengths $\ell_n$ approaching $0$ such that $\mu$ and $m$ agree on all intervals of length $\ell_n$ for all $n$, and therefore on all intervals.

• Thank you very much for your hint. I have found inductively that $\mu$ and $m$ coincide on all intervals of length $1/2^n$; yes, the Borel sets are generated by intervals, but could you rigorously explain how you can pass from intervals to Borel sets? Are you using regularity? – seriously divergent Jan 6 '14 at 9:23
• The Borel sets form the smallest $\sigma$-algebra containing the open sets. Check that the collection of Borel sets where $\mu$ and $m$ coincide is a $\sigma$-algebra. This simply uses that the measure of an increasing union is its supremum, and (for complements) that both are probability measures. But sure, using regularity of Lebesgue measure is perhaps the easiest approach, as you only have to deal with $F_\sigma$ and $G_\delta$ sets, for which checking that their $\mu$ and $m$ measures coincide is easier than for general Borel sets. – Andrés E. Caicedo Jan 6 '14 at 14:52

Suppose that for some $\theta \in (0,1)$, $mE = \theta$ implies $\mu E = \theta$. The hypothesis implies that $\mu \ll m$, and so $\mu A = \int_A f \, dm$ for some $f$.
We want to show $f=1$ a.e. [$m$]. Let $C = \{ x \mid f(x) < 1 \}$, and suppose $mC > 0$. If $mC=1$, then we have an immediate contradiction, so we can suppose $mC<1$. If $mC \le \theta$, we can choose some $t$ such that $m([0,t] \setminus C) = \theta - mC$, and then if we let $C' = C \cup ([0,t] \setminus C)$, we have $mC' = \theta$, but $\mu C' < \theta$. If $\theta < mC$, we can choose some $t$ such that $m([0,t] \cap C) = \theta$. Letting $C'=[0,t] \cap C$ shows that $mC' = \theta$, but $\mu C' < \theta$. Consequently $mC = 0$, and so $f(x) \ge 1$ a.e. [$m$]. Since $\int (f-1)\,dm = 0$, we conclude that $f = 1$ a.e. [$m$], and so $\mu = m$.

• Thank you, good proof; $\mu \ll m$ is a clever observation! – seriously divergent Jan 6 '14 at 9:12
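To make the interval argument in the first answer concrete, here is one way to write out the step from length-$1/2$ to length-$1/4$ intervals (a sketch; the step for general $1/2^n$ is analogous). Pick pairwise disjoint intervals $I, J, K \subseteq [0,1]$, each of length $1/4$. Each pairwise union has Lebesgue measure $1/2$, so the hypothesis gives
$$\mu(I \cup J) = \mu(I \cup K) = \mu(J \cup K) = \tfrac{1}{2},$$
and by additivity
$$\mu(I) = \frac{\mu(I \cup J) + \mu(I \cup K) - \mu(J \cup K)}{2} = \frac{1}{4} = m(I).$$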
Theorem 96.16.1. Let $S$ be a scheme. Let $F : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism of stacks in groupoids over $(\mathit{Sch}/S)_{fppf}$. If 1. $\mathcal{X}$ is representable by an algebraic space, and 2. $F$ is representable by algebraic spaces, surjective, flat and locally of finite presentation, then $\mathcal{Y}$ is an algebraic stack. Proof. By Lemma 96.4.3 we see that the diagonal of $\mathcal{Y}$ is representable by algebraic spaces. Hence we only need to verify the existence of a $1$-morphism $f : \mathcal{V} \to \mathcal{Y}$ of stacks in groupoids over $(\mathit{Sch}/S)_{fppf}$ with $\mathcal{V}$ representable and $f$ surjective and smooth. By Lemma 96.14.2 we know that $\coprod \nolimits _{d \geq 1} \mathcal{H}_ d(\mathcal{X}/\mathcal{Y})$ is an algebraic stack. It follows from Lemma 96.15.1 and Algebraic Stacks, Lemma 93.15.5 that $\coprod \nolimits _{d \geq 1} \mathcal{H}_{d, lci}(\mathcal{X}/\mathcal{Y})$ is an algebraic stack as well. Choose a representable stack in groupoids $\mathcal{V}$ over $(\mathit{Sch}/S)_{fppf}$ and a surjective and smooth $1$-morphism $\mathcal{V} \longrightarrow \coprod \nolimits _{d \geq 1} \mathcal{H}_{d, lci}(\mathcal{X}/\mathcal{Y}).$ We claim that the composition $\mathcal{V} \longrightarrow \coprod \nolimits _{d \geq 1} \mathcal{H}_{d, lci}(\mathcal{X}/\mathcal{Y}) \longrightarrow \mathcal{Y}$ is smooth and surjective which finishes the proof of the theorem. In fact, the smoothness will be a consequence of Lemmas 96.12.7 and 96.15.3 and the surjectivity a consequence of Lemma 96.15.4. We spell out the details in the following paragraph. By construction $\mathcal{V} \to \coprod \nolimits _{d \geq 1} \mathcal{H}_{d, lci}(\mathcal{X}/\mathcal{Y})$ is representable by algebraic spaces, surjective, and smooth (and hence also locally of finite presentation and formally smooth by the general principle Algebraic Stacks, Lemma 93.10.9 and More on Morphisms of Spaces, Lemma 75.19.6). 
Applying Lemmas 96.5.3, 96.6.3, and 96.7.3 we see that $\mathcal{V} \to \coprod \nolimits _{d \geq 1} \mathcal{H}_{d, lci}(\mathcal{X}/\mathcal{Y})$ is limit preserving on objects, formally smooth on objects, and surjective on objects. The $1$-morphism $\coprod \nolimits _{d \geq 1} \mathcal{H}_{d, lci}(\mathcal{X}/\mathcal{Y}) \to \mathcal{Y}$ is 1. limit preserving on objects: this is Lemma 96.12.7 for $\mathcal{H}_ d(\mathcal{X}/\mathcal{Y}) \to \mathcal{Y}$ and we combine it with Lemmas 96.15.1, 96.5.4, and 96.5.2 to get it for $\mathcal{H}_{d, lci}(\mathcal{X}/\mathcal{Y}) \to \mathcal{Y}$, 2. formally smooth on objects by Lemma 96.15.3, and 3. surjective on objects by Lemma 96.15.4. Using Lemmas 96.5.2, 96.6.2, and 96.7.2 we conclude that the composition $\mathcal{V} \to \mathcal{Y}$ is limit preserving on objects, formally smooth on objects, and surjective on objects. Using Lemmas 96.5.3, 96.6.3, and 96.7.3 we see that $\mathcal{V} \to \mathcal{Y}$ is locally of finite presentation, formally smooth, and surjective. Finally, using (via the general principle Algebraic Stacks, Lemma 93.10.9) the infinitesimal lifting criterion (More on Morphisms of Spaces, Lemma 75.19.6) we see that $\mathcal{V} \to \mathcal{Y}$ is smooth and we win. $\square$
Sciencemadness Discussion Board » Fundamentals » Beginnings » Lithium extraction from used button batteries?

Author: Subject: Lithium extraction from used button batteries?

RogueRose
International Hazard
Posts: 1284
Registered: 16-6-2014
Member Is Offline

Lithium extraction from used button batteries?

I've looked into the components of lithium button cells, mainly the CR3032 3 V, and they are lithium and MnO2. Now that is what is in a new battery, but I'm guessing it is lithium oxide in the used batteries. I'm wondering if it would be possible to extract the lithium from the batteries by breaking them apart or cutting them in half and soaking in water. The Li2O should react with water to form LiOH, which should dissolve, and everything else is either MnO2, stainless steel or a membrane (cellulose, plastic or something). I know this wouldn't be very worthwhile for a few batteries, but if 5-10+ lbs of them are available then I'm thinking there will be a fair amount of recoverable LiOH, which is fairly soluble in methanol.

I am kind of curious how much lithium there is in a standard li-ion 18650 battery as those are extremely plentiful as "dead" batteries. The problem would be cutting them open and removing the rolled electrodes. I've seen that they can be soaked in water to extract the lithium.
Sulaiman
International Hazard
Posts: 2461
Registered: 8-2-2015
Location: Shah Alam, Malaysia
Member Is Offline

I don't know if it helps, but lithium 123 cells have a nice piece of lithium foil;

CAUTION : Hobby Chemist, not Professional or even Amateur

RogueRose
International Hazard
Posts: 1284
Registered: 16-6-2014
Member Is Offline

Quote: Originally posted by Sulaiman
I don't know if it helps, but lithium 123 cells have a nice piece of lithium foil;
http://www.sciencemadness.org/talk/viewthread.php?tid=86157&...

That might. The thing is that here recyclers charge anywhere from $1.50-$2.50/lb to take your lithium batteries, so it costs $$ to dispose of them. Often times one can pick them up for free, if not even taking a few $$ with them while doing so. This is strange because they used to pay $0.50-1.75/lb for lithium based batteries just 1-2 years ago, and NiCds were selling for $0.30-0.80/lb, and now they can cost over $10/lb to dispose of them! I know some recyclers with gaylords weighing 2-4 tons with lithium or NiCd batteries sitting in them; talk about a liability. These liabilities are kept off the books, so it looks like they are doing better than they are financially. It seems big here in "charity" recyclers (501.C.3's - non-for-profits) where they siphon off cash and leave a huge liability at the end when they declare bankruptcy.

So I'd like to take some of these and maybe get paid at the same time and extract the Li from them. I'm thinking a bandsaw, chop saw (with a thin metal cutting disc) or even a shear press that will cut 10's of them in half lengthwise in one cut. Then pull out the rolled electrodes, soak in H2O, then extract Li!

j_sum1
Posts: 4632
Registered: 4-10-2014
Location: Oz
Member Is Online
Mood: Metastable, and that's good enough.

One thing to consider when extracting Li is that the carbonate is surprisingly insoluble. This therefore is a good method for separating from other (especially alkali) metals that might be present.
I see a possibility for crush/cut, soak and stir, filter, then precipitate with Na2CO3. Filter again and you have probably 95%+ recovery of Li with little else present.

If you are interested, take a look at the latest offering from sum_lab: A primer on metals and non-metals with at least one novel experiment.

RogueRose
International Hazard
Posts: 1284
Registered: 16-6-2014
Member Is Offline

Quote: Originally posted by j_sum1
One thing to consider when extracting Li is that the carbonate is surprisingly insoluble. This therefore is a good method for separating from other (especially alkali) metals that might be present. I see a possibility for crush/cut, soak and stir, filter, then precipitate with Na2CO3. Filter again and you have probably 95%+ recovery of Li with little else present.

Thanks for the suggestion! I was thinking of bubbling air through it with the CO2 in it, but Na2CO3 is much faster.

On top of all the lithium there is also graphite powder in the 18650 cells as well as copper and aluminum sheets. Separating the Cu and Al might be difficult by chemical means, but I'm thinking of using a forge which might melt the Al and possibly alloy the Cu with it, and it definitely would if I added Zn into it to make a Zinc Aluminum alloy (which is an awesome alloy) that also has copper. I'm still trying to figure out the best method to separate the Cu and Al, as the only way to do it is to cut the ends off and cut down the middle, then unroll the electrodes.

These are the electrodes in a 18650; I removed the 2 plastic membranes that are just as long and covered in graphite on both pieces (maybe both sides of each as well). The Cu is 36" long and Al is 32" long and each is 2 3/8" wide. IDK how thick the "foil" is or what gauge it is; I'll have to do some tests and stack like 50-100 pieces to get a measurement and then divide to get the thickness. I'm guessing it's 2-3 mil as it feels about the same thickness as plastic of the same thickness.
I can cut the batteries down the length completely in half with a bandsaw (about 10-20 seconds a battery), but that would destroy the lengths of the foil and make them vary in length from 1.1" long down to ~0.1" - all at 2.375" wide though. I guess I could make AlCl3 by dissolving it in HCl, giving off H2 gas and leaving the Cu basically untouched. Of course this would all be after soaking in water to extract any lithium compounds, rinsing, then adding the ~10% HCl. Can anyone think of a use for foils like this?

draculic acid69
National Hazard
Posts: 272
Registered: 2-8-2018
Member Is Offline

The button batteries would have a strip of lithium metal in them, wouldn't they? And I think thionyl chloride as the electrolyte. I'd recommend taking them apart under toluene or xylene and keeping the metal as it is, in its most valuable form. Apparently this is what speed cooks used to do to peel the batteries. Also the Russians do it to make krokodil, which is a disturbingly worse thing than hillbilly meth, except they keep the thionyl chloride. If you're pulling apart batteries at a large scale, trying to keep these two chemicals as-is would be most profitable. The price of the carbonate or hydroxide isn't much compared to the metal.

wg48temp9
Hazard to Self
Posts: 88
Registered: 30-12-2018
Member Is Offline

Quote: Originally posted by RogueRose
I am kind of curious how much lithium there is in a standard li-ion 18650 battery as those are extremely plentiful as "dead" batteries. The problem would be cutting them open and removing the rolled electrodes. I've seen that they can be soaked in water to extract the lithium.

One source gave the electrochemical equivalent of lithium as 3.86 Ah/g, and a spec on the 18650 gave its capacity as 2.6 Ah and a mass of 46.6 g. So the battery must contain at least 2.6/3.86 ≈ 0.67 g of lithium, i.e. about 1.4% of the battery mass is lithium. That's not very much.

i am wg48 but not on my usual pc hence the temp handle.
draculic acid69
National Hazard
Posts: 272
Registered: 2-8-2018
Member Is Offline

If you can find a non-profit ewaste recycling center you could get them cheap from them, maybe even free. They are a good source for electronic scrap for recovering precious metals, and lithium.
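The back-of-envelope lithium estimate from the thread can be recomputed directly (the 3.86 Ah/g equivalent and the 18650 spec are the figures quoted in the post itself):

```python
# Figures quoted in the thread: lithium's electrochemical equivalent
# and a typical 18650 cell spec.
AH_PER_GRAM_LI = 3.86   # Ah per gram of lithium
CELL_CAPACITY = 2.6     # Ah
CELL_MASS = 46.6        # g

li_mass = CELL_CAPACITY / AH_PER_GRAM_LI   # minimum Li content, g
li_fraction = li_mass / CELL_MASS          # mass fraction of the cell

print(f"~{li_mass:.2f} g Li per cell ({li_fraction:.1%} of cell mass)")
```

This gives roughly 0.67 g of lithium per cell, about 1.4% of the cell mass, i.e. a lower bound that ignores any lithium not electrochemically active at the rated capacity.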
# An Explication of the Antilogism in Christine Ladd-Franklin's "Algebra of Logic" Author(s): Julia M. Parker (University of Missouri – Kansas City) Identifying "famous firsts" in history is often a messy endeavor, in part requiring that achievements be stated in precise terms. When I came across the case of Christine Ladd-Franklin (1847–1930), the first American woman to write a doctoral dissertation in mathematics at an American university but not the first American woman to be formally awarded the PhD degree for mathematics,1 I could not help being hooked by her life story of creating opportunities and addressing challenges. While other scholars have provided more detailed accounts of Ladd-Franklin's biography [Green and LaDuke 2009; Riddle 2016; Johnson 2008], I used the technique of explication (see Delaware 2019) to analyze the content of her 1883 dissertation, "On the Algebra of Logic." In this contribution to symbolic logic, she created a test for the validity of syllogisms that she later named "antilogism". In the explication, Ladd shows that all valid syllogisms can be reduced to a single form. It had been long believed that this was the case, but mathematicians and logicians conjectured that the perfect form to which all syllogisms could be reduced was one of affirmative, or positive, statements. Ladd showed, however, that the single form is actually formed out of a contradiction, which is why she coined the term “antilogism” to represent the final product of her work. Opening paragraph of Ladd's dissertation "On the Algebra of Logic" ##### Notes: [1] Winifred Edgerton Merrill (1862–1951) earned a PhD in mathematics at Columbia University in 1886 [Kelly and Rozner 2012; Riddle 2016], an event that was extraordinary enough in its day to be considered newsworthy by the New York Times. When Clara L. 
Bacon (1866–1948) became the first woman to earn a PhD in mathematics from Johns Hopkins in 1911, she was one of only a dozen women in the US to hold a doctorate in the field.

# An Explication of the Antilogism in Christine Ladd-Franklin's "Algebra of Logic" – Early Life and Education

Author(s): Julia M. Parker (University of Missouri – Kansas City)

1885 photograph of Hopkins Hall, on the original downtown Baltimore campus of The Johns Hopkins University, Public Domain

# An Explication of the Antilogism in Christine Ladd-Franklin's "Algebra of Logic" – Syllogisms Before "Algebra of Logic"

Author(s): Julia M. Parker (University of Missouri – Kansas City)

Before discussing the specific work in Ladd's dissertation, this article provides a brief introduction to the study of syllogisms, an area of interest to mathematicians and scholars since ancient times. Several terms used when discussing syllogisms are defined:

Statement: A sentence, either affirmative or negative, that has a truth value. Example: "All clouds produce rain" is an affirmative statement.

Term: A portion (subject or predicate) of a statement describing sets or classes of objects or properties. Example: The subject term from above is "clouds" (a class of objects) and the predicate term is "produce rain" (a property that this class of objects is claimed to have).

Quantifier: An indication of how many objects in the class possess a property. Quantifiers may be universal ("all" or "none") or particular ("some").

Premise: A statement from which another statement is inferred.

Conclusion: The inference statement made from the premise(s). A syllogism has one conclusion.

One specific form of logical argument is the syllogism. A syllogism is made up of three statements, each of which consists of a quantifier, a subject term, and a predicate term. For each statement, the quantifier can be universal or particular, and the statement can be affirmative or negative.
Because statements can be affirmative or negative, and quantifiers can be universal or particular, the individual statements in a syllogism can take four forms, also referred to as moods. These are shown in the following table using variables a and b to represent the classes described by the terminology above:

| Statement | Mood |
| --- | --- |
| All a is b. | universal affirmative |
| No a is b. | universal negative |
| Some a is b. | particular affirmative |
| Some a is not b. | particular negative |

The syllogism, then, is a logical argument with the structure of two premises and a conclusion, each of which takes on one of these four moods. While any argument consisting of three such statements may be a syllogism, not all syllogisms are valid. A valid syllogism is an argument form that is truth-preserving, in that the truth of the conclusion follows necessarily from the truth of the premises. For instance, a classic example (obviously written in a male-dominant culture) of a valid syllogism is:

All men are mortal.
All Greeks are men.
Therefore, All Greeks are mortal.

The following syllogism has the same form, and is therefore also valid, even though one of its premises and its conclusion are false:

All flowers are blue.
All roses are flowers.
Therefore, All roses are blue.

Syllogisms can also be valid when the premises are false and the conclusion is true, or when the premises and conclusion are all false. The only way a syllogism is invalid is when it follows a general form that allows the premises to be true while the conclusion is false.1 An important feature of syllogisms is that, though composed of ordinary sentences, they cannot simply be rearranged or reworded. The statements in a syllogism are taken as a whole, with subject and predicate inseparable and non-transposable, meaning the statements are not generally symmetric.
As an example, the statement “all men are mortal” has a different meaning when changed to read “all mortals are men.” Additionally, syllogistic logic does not allow for negation of statements in a straightforward way. Again, in the statement “all men are mortal,” we might propose either “not all men are mortal” or “all men are not mortal” as possible negations, but these two statements do not have the same meaning. This shows that the negation or rearrangement of terms is not a simple process and leaves people at risk of considering an incorrectly converted syllogism to be logically sound [Shen 1927, p. 55]. The study of syllogisms began in ancient times, when the ancient Greek philosopher Aristotle (384–322 BCE) began to write on the subject of logic. Aristotle recognized that in addition to each statement having one of the four possible moods, the syllogism as a whole can be constructed in different ways, which he called figures. To give a more precise definition, a syllogism is an argument consisting of three statements concerning three terms: • P, the major term found in the predicate of the conclusion, • S, the minor term found in the subject of the conclusion, and • M, the middle term linking the two In the example of the classic syllogism above, “mortal” is the major term (P), “Greeks” is the minor term (S), and “men” is the middle term (M). Any single statement within a syllogism contains two of these three terms. Aristotle discovered that there are fourteen valid forms of syllogisms, depending on the order in which the terms are combined to make up the statements of the premises and conclusion, and the mood of these three statements. 
To show all fourteen is beyond the scope of this paper; however, the most important of these involved what Aristotle called the first figure,2 in which the terms are arranged in the following form:

MP
SM
SP

This figure is read as a series of statements: the first (MP) indicates a relationship between the middle term and the major term, the second (SM) indicates that the minor term also has a relationship to the middle term, then the third (SP) draws a conclusion that the minor term and major term must have a relationship because these have the middle term in common. As an example, we have:

MP: All men (M) are mortal (P).
SM: All Greeks (S) are men (M).
SP: Therefore, all Greeks (S) are mortal (P).

Aristotle believed, but was unable to conclusively show, that all valid syllogisms could be reduced to the first figure, which he considered to be the "perfect" figure, with all universal affirmative statements. He therefore attempted to formulate rules that would allow for the conversion of any (valid) syllogism to the first arrangement in order to demonstrate its validity. However, due to the complexities of rearrangement and negation discussed above, he was unable to provide a complete treatment of syllogistic argument that accomplished his goal [Russinoff 1999, pp. 453–454]. Although the study of syllogism remained a focus of logicians from the time of Aristotle through the middle nineteenth century, no one was able to show how all figures of syllogism could be converted to a single perfect figure [Russinoff 1999, p. 454]. In other words, there was no simple test or rule to apply that would identify a syllogism as either valid or not. It was precisely this question to which Ladd proposed a solution two thousand years after Aristotle, in what should have been her PhD dissertation, "On the Algebra of Logic" [Ladd 1883].
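The definition of validity as truth-preservation lends itself to a mechanical check. The sketch below (my own illustration, not part of the article) interprets terms as subsets of a small universe and brute-forces every assignment: the first-figure syllogism Barbara ("All M is P; all S is M; therefore all S is P") never has true premises and a false conclusion, while the form given in note 1 below does admit a counterexample.

```python
from itertools import product

def subsets(universe):
    """Every subset of a small universe, as frozensets."""
    items = sorted(universe)
    for bits in product([0, 1], repeat=len(items)):
        yield frozenset(x for x, keep in zip(items, bits) if keep)

def all_are(a, b):   # "All a is b" (universal affirmative)
    return a <= b

def some_are(a, b):  # "Some a is b" (particular affirmative)
    return bool(a & b)

U = {0, 1, 2}

# Barbara (first figure): All M is P; All S is M; therefore All S is P.
# Valid: no interpretation makes both premises true and the conclusion false.
barbara_valid = all(
    not (all_are(M, P) and all_are(S, M)) or all_are(S, P)
    for S, M, P in product(subsets(U), repeat=3)
)
assert barbara_valid

# The form from note 1: All P is M; Some M is S; therefore Some P is S.
# Invalid: some interpretation satisfies the premises but not the conclusion.
invalid_form_holds = all(
    not (all_are(P, M) and some_are(M, S)) or some_are(P, S)
    for S, M, P in product(subsets(U), repeat=3)
)
assert not invalid_form_holds
```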
##### Notes:

[1] For instance, the following general form leads to invalid syllogisms, as the reader can check by taking, for instance, P to be "roses," M to be "flowers" and S to be "marigolds."

All P are M.
Some M are S.
Therefore, Some P are S.

Other values of M, P and S could result in true statements for the two premises and the conclusion, but that would be purely accidental, rather than necessitated by the syllogistic form itself.

[2] Aristotle's treatment of syllogistic logic included three distinct figures:

| 1st | 2nd | 3rd |
| --- | --- | --- |
| MP | MP | PM |
| SM | MS | SM |
| SP | SP | SP |

Medieval logicians added a fourth figure to the three recognized by Aristotle:

4th
PM
MS
SP

# An Explication of the Antilogism in Christine Ladd-Franklin's "Algebra of Logic" – Symbolic Notation in "Algebra of Logic"

Author(s): Julia M. Parker (University of Missouri – Kansas City)

While the syllogism had been studied by philosophers for centuries, George Boole (1815–1864) and others combined the disciplines of logic and mathematics in the latter half of the nineteenth century to undertake a renewed examination of logical deduction. Ladd's dissertation advisor C. S. Peirce was especially important in the development of Boole's notion of an "algebra of logic," a system in which the statements of a logical argument are reduced to mathematical notation such as variables and symbols. In her dissertation, Ladd developed her own algebra of logic and described how it differed from those already in existence at the time of her writing. Before doing so, she first provided definitions of the symbols employed in her algebra of logic and other useful preliminaries, which will be presented here in brief. As is the case in other works on symbolic logic, the terms of statements are represented by variables throughout Ladd's work. Further, in much of her work, an algebraic argument was paired with examples in which the terms of the symbolic statements were replaced by an object, a set of objects, a quality, or a set of qualities.
Additionally, Ladd often used the term "proposition" to mean "statement". The following table lays out Ladd's notation, with a third column added to provide examples for clarity.

| Notation | Meaning | Examples |
| --- | --- | --- |
| $$a = b$$ | $$a$$ and $$b$$ are equivalent: "there is no $$a$$ which is not $$b$$ and no $$b$$ which is not $$a$$" | Define $$a$$ to be the names of months that end in "r." Define $$b$$ to be September, October, November, December. $$a$$ and $$b$$ are equivalent classes. |
| $$\overline{a}$$ | The negation of a proposition [statement] or term: "what is not $$a$$" | Using the same $$a$$ as above, $$\overline{a}$$ consists of the names of all the months that do not end in "r." Alternately, for an example of a term as a quality, let $$a$$ represent the color blue (say paired with any set of colored objects, $$b$$). Then $$\overline{a}$$ would refer to any of those objects in question that are not blue. |
| $$a \times b$$ or $$ab$$ | What is common to the classes $$a$$ and $$b$$: "what is both $$a$$ and $$b$$" | Now let $$a$$ represent the names of all the months and $$b$$ be months that end with "r"; then $$ab$$ is the list: September, October, November, December. Or if $$a$$ is the set of qualities blue or yellow and $$b$$ is flowers, then $$ab$$ is the set of all objects that are flowers and are either blue or yellow. |
| $$a+b$$ | The whole of $$a$$ together with the whole of $$b$$: "what is either $$a$$ or $$b$$" | Again $$a$$ represents months and $$b$$ represents months ending with "r"; then $$a + b$$ is the set of all the names of the months. |
| $$\infty$$ | The universe of discourse, or what is logically possible. | If the universe of discourse is the names of months, then months that end with the letters y, h, l, e, t, and r make up all of the possibilities, or $$\infty$$. |
| $$0$$ | The negation of $$\infty$$, or what is impossible or nonexistent. | There are no months that end with the letter "p," so this class would equal $$0$$. |
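In modern terms, Ladd's operations are just operations on sets. A minimal sketch of mine replaying the months examples from the table:

```python
MONTHS = {"January", "February", "March", "April", "May", "June",
          "July", "August", "September", "October", "November", "December"}
INF = MONTHS           # Ladd's universe of discourse, written as infinity
ZERO = set()           # Ladd's 0: what is impossible or nonexistent

a = {m for m in MONTHS if m.endswith("r")}   # month names ending in "r"
b = {"September", "October", "November", "December"}

assert a == b                     # a = b: the classes are equivalent
not_a = INF - a                   # Ladd's bar: what is not a
assert a & b == b                 # ab: what is both a and b
assert a | not_a == INF           # a + (not a) together exhaust the universe
assert {m for m in MONTHS if m.endswith("p")} == ZERO  # no month ends in "p"
```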
A foundational element in Ladd’s notation is a specific symbol used in her algebra of logic that is not found in those algebras that were developed before hers. Ladd called the symbol the “wedge” or “sign of exclusion,” and denoted it as $$\vee$$, to indicate an affirmative statement, and $$\overline{\vee}$$ to indicate a negative statement. The symbols $$\vee$$ and $$\overline{\vee}$$ were placed as connectors, or copulas,  between two variables to create a statement. The various specific usages of these copulas are given in the following table (adapted from page 26 of Ladd’s dissertation). In the dissertation, Ladd used capital letters, $$A$$ and $$B$$, for the variables in this table without explanation; for continuity with the remainder of her work, variables representing terms of statements have been changed to $$a$$ and $$b$$. Also, notes in the third column have been added for clarity. The new copula $$\vee$$ and its negation were a distinguishing element of Ladd’s algebra of logic. One advantage of this notation, as described by Ladd, is that $$\vee$$ and $$\overline{\vee}$$  are symmetrical, so that statements may be read either forward or backward without a change in meaning. The argument $$a \overline{\vee} b$$, then, can be considered an inconsistency, stating that the two classes $$a$$ and $$b$$ cannot coexist, or that $$a$$ and $$b$$ have no elements in common. Hence, the statement  $$a \overline{\vee} \infty$$ indicates that $$a$$ cannot, under any circumstances, exist. Ladd then introduced one more convention involving her new copula: when indicating a relationship between a class and $$\infty$$, the $$\infty$$ may be left off, leaving the copula as the end of the statement. This gave rise to the notation $$a \overline{\vee}$$ meaning “there is no $$a$$” [Ladd 1883, p. 29]; similarly, $$a \vee \infty$$ was denoted simply as $$a \vee$$, meaning “$$a$$ exists.” (1)     $$a \overline{\vee} b$$ $$a$$ is not $$b$$. No $$a$$ is $$b$$. 
$$\forall t \in a$$ we have $$t\not\in b$$ $$\Leftrightarrow$$ $$\forall t \in b$$ we have $$t \not\in a$$. Self-symmetric in $$a$$ and $$b$$. Negation of (2).

(2)     $$a \vee b$$ $$a$$ is in part $$b$$. Some $$a$$ is $$b$$. $$\exists t \in a$$ such that $$t\in b$$ $$\Leftrightarrow$$ $$\exists t \in b$$ such that $$t \in a$$. Self-symmetric in $$a$$ and $$b$$. Negation of (1).

(3)     $$a \overline{\vee} \overline{b}$$ $$a$$ is not not-$$b$$. All $$a$$ is $$b$$. $$\forall t \in a$$ we have $$t\in b$$. Symmetric in $$a$$ and $$b$$ with (5). Negation of (4).

(4)     $$a \vee \overline{b}$$ $$a$$ is partly not-$$b$$. Some $$a$$ is not $$b$$. $$\exists t \in a$$ such that $$t\not\in b$$. Symmetric in $$a$$ and $$b$$ with (6). Negation of (3).

(5)     $$\overline{a} \overline{\vee} b$$ What is not $$a$$ is not $$b$$. $$a$$ includes all $$b$$. $$\forall t \in b$$ we have $$t\in a$$. Symmetric in $$a$$ and $$b$$ with (3). Negation of (6).

(6)     $$\overline{a} \vee b$$ What is not $$a$$ is in part $$b$$. $$a$$ does not include all $$b$$. $$\exists t \in b$$ such that $$t\not\in a$$. Symmetric in $$a$$ and $$b$$ with (4). Negation of (5).

(7)     $$\overline{a} \overline{\vee} \overline{b}$$ What is not $$a$$ is not not-$$b$$. There is nothing besides $$a$$ and $$b$$. $$\forall t \in \infty$$ we have $$t \in a$$ or $$t \in b$$. Self-symmetric in $$a$$ and $$b$$. Negation of (8).

(8)      $$\overline{a} \vee\overline{b}$$ What is not $$a$$ is in part not-$$b$$. There is something besides $$a$$ and $$b$$. $$\exists t \in \infty$$ such that $$t \not \in a$$ and $$t \not \in b$$. Self-symmetric in $$a$$ and $$b$$. Negation of (7).

After the introduction of her notation, Ladd went on to develop an important connection between the study of symbolic logic and syllogistic argument. She first observed that the important subjects in a symbolic logic are uniting and separating propositions; inserting or omitting terms; and eliminating the least possible amount of content.
According to Ladd, this third subject, eliminating the least possible amount of content, was most closely related to the study of syllogisms since "the essential character of the syllogism is that it effects the elimination of the middle term" [Ladd 1883, p. 35]. She then began a thorough examination of how her algebra of logic could be applied to all three subjects. As "elimination" is the subject most closely related to syllogism, this is the section chosen for explication here. The explication was done according to the guidelines provided by [Delaware 2019]. In brief, notes, remarks, and explanations are added to an excerpt from a primary source, such as "Algebra of Logic," to foster readers' understanding of the technical content and intellectual context embedded in a writer's text by answering readers' questions before they have thought to ask them. Thus, in what follows, all comments in square brackets were added to Ladd's text by the author.

# An Explication of the Antilogism in Christine Ladd-Franklin's "Algebra of Logic" – Explication of "On Elimination"

Author(s): Julia M. Parker (University of Missouri – Kansas City)

From "On the Algebra of Logic" by Christine Ladd [1883, pp. 37–39]:

[Note: Throughout, Ladd used "proposition" to refer to a relationship between any number of terms, what has been defined as a "statement" above. Also, throughout, parentheses have been added for clarity, particularly to make clear groupings on either side of the copula.]
On Elimination. – In (24ʹ) [derived in the section above and accompanied by an example quoted here without proof. The symbol $$\therefore$$ is read “therefore.” “… if the premises are both universal, $$\begin{array}{ll} (24ʹ) \,\,\,\,& a \overline{\vee} b \\ & c \overline{\vee} d \\ & \therefore ac \overline{\vee} (b + d) \end{array}$$ If no bankers [$$a$$] have souls [$$b$$] and no poets [$$c$$] have bodies [$$d$$], then no banker-poets [$$ac$$] have either souls or bodies [$$b + d$$]."] there is no elimination [all terms in both of the premises, namely $$a$$, $$b$$, $$c$$, $$d$$, appear in the conclusion], and in (24°) [derived earlier in the thesis and quoted here with example: “…if the premises are one universal and one particular, $\begin{array}{ll} (24^{\circ}) \,\,\,\,& a \overline{\vee} b \\ & ac \vee (b + d) \\ & \therefore c \vee d \end{array}$ If no Africans [$$a$$] are brave [$$b$$] and some African chiefs [$$ac$$] are either brave or deceitful [$$b + d$$], then some chiefs [$$c$$] are deceitful [$$d$$].”] there is elimination of the whole of the first premise [no Africans are brave, $$a \overline{\vee} b$$] and part of the second [some African chiefs are brave, $$ac\vee b$$ ]. The most common object [goal] in reasoning is to eliminate a single term at a time—namely, one which occurs in both premises [as seen in the classic "All Greeks are mortal" example of a valid syllogism, with the elimination of the middle term “men”]. Each of these inferences gives rise to a form of argument, as a special case, by which that object [elimination of a single term at a time] is accomplished,—the premises being on the one hand both universal [as in (24ʹ)], and on the other hand one universal and the other particular [as in (24°)]. The inconsistency $$I$$ [derived by Ladd-Franklin earlier in the thesis (p. 34), stated with large parentheses added: $I. 
\,\,\, (a \overline{\vee} b)(c\overline{\vee}d)\overline{\vee}\left (ac \vee (b+d)\right )$ Meaning, the conjunction of statements “no $$a$$ is $$b$$” and “no $$c$$ is $$d$$” is inconsistent with the conclusion that “some $$ac$$ is $$b$$ or $$d$$.” Or, as Ladd-Franklin explains, “it is not possible that a combination of several qualities should be found in any classes from each of which some one of those qualities is absent. If, for example, culture [$$a$$] is never found in business men [$$b$$] nor respectability [$$c$$] among artists [$$d$$], then it is impossible that cultured respectability [$$ac$$] should be found among either business men or artists [$$b + d$$]” [Ladd 1883, p. 35]] becomes when $$d$$ is equal to $$\overline{b}$$ [$$d$$ is not-$$b$$] and hence $$b + d$$ [what is either $$b$$ or $$d$$, means, with the substitution, what is $$b$$ or not-$$b$$] equal to [$$b +\overline{b} =$$] $$\infty$$. $\left ( (a \overline{\vee} b)(c\overline{\vee}\overline{b})(ac\vee \infty)\right )\overline{\vee}$ [large parentheses added. This formula indicates that the combination of statements, “$$a$$ is not $$b$$” ($$a \overline{\vee} b)$$ and “$$c$$ is not not-$$b$$” ($$c\overline{\vee}\overline{b})$$ and “the conjunction of $$a$$ and $$c$$ exists” ($$ac \vee\infty$$) is inconsistent ($$\overline{\vee}$$ ), meaning that given the first two statements as premises, the third statement does not follow as a valid conclusion.] or $II. \left ( (a \overline{\vee} b)(\overline{b}\overline{\vee}c)(c \vee a )\right ) \overline{\vee}$ [large parentheses added. 
The argument consisting of the first premise statement "$$a$$ is not $$b$$” and the second premise statement “not-$$b$$ is not $$c$$” ($$\overline{b}\overline{\vee}c$$) equivalent to ($$c\overline{\vee}\overline{b}$$) above, and the conclusion statement “some $$c$$ is $$a$$” ($$c \vee a$$) is inconsistent ($$\overline{\vee}$$) because, since by the first premise no $$a$$ is $$b$$, then not-$$b$$ would include $$a$$, and by the second premise not-$$b$$ is inconsistent with $$c$$, so the conclusion statement ($$c \vee a$$) does not follow ($$\overline{\vee}$$), meaning $$a$$ cannot coexist with $$c$$, previously written ($$ac\overline{\vee}\infty$$).] Given any two of these propositions, the third proposition, with which it is inconsistent, is free from the term common to the two given propositions; $$a$$, $$b$$, and $$c$$, are, of course, expressions [statements] of any degree of complexity. The propositions $$m a\overline{\vee}(x+y)$$  [what is both $$m$$ and $$a$$ is inconsistent with $$x$$ or $$y$$], $$\overline{x}\,\overline{y}\overline{\vee}(c+n)$$ [what is both not-$$x$$ and not-$$y$$ is inconsistent with $$c$$ or $$n$$], for instance, are inconsistent with $$ma\vee(c+n)$$ [what is both $$m$$ and $$a$$ is consistent with $$c$$ or $$n$$; this cannot be a valid conclusion because $$ma$$ is not consistent with $$x$$ or $$y$$, meaning what is not-$$x$$ and not-$$y$$, including $$ma$$, must be inconsistent with $$c$$ or $$n$$.]; any number of terms may be eliminated at once by combining them in such a way that they shall make up a complete universe [$$\infty$$, the universe of discourse]. When any two of the [three mutually] inconsistent propositions in II. are taken as the premises, the negative of the remaining one is the [valid] conclusion. There are, therefore, two distinct forms of inference with elimination of a middle term, special cases of (24ʹ) [when the premises are both universal] and (24°) [when one premise is universal and the other particular]. 
If we write $$x$$ for the middle term, we have

(25ʹ)

$\mbox{[i]}\,\, a\overline{\vee}x$  [$$a$$ is inconsistent with $$x$$]

$\mbox{[ii]}\,\, b \overline{\vee} \overline{x}$ [$$b$$ is inconsistent with not-$$x$$; that is, all $$b$$ is $$x$$]

$\mbox{[iii]}\,\, \therefore ab \overline{\vee}$ [therefore, there is no $$a$$ and $$b$$, or $$a$$ is inconsistent with $$b$$]

The premises are [can be rewritten as]

$\mbox{[i*]}\,\, a(b+\overline{b})x\overline{\vee}$ [$$a=a(b+\overline{b})=a\infty$$]

$\mbox{[ii*]}\,\, (a+\overline{a})b\overline{x}\overline{\vee}$  [$$b=(a+\overline{a})b=\infty b$$]

and together they affirm that

$\mbox{[iv]}\,\, (ab(x+\overline{x})+a\overline{b}x+\overline{a}b\overline{x})\overline{\vee}$

or [equivalently]

$\mbox{[iv*]}\,\, (ab+a\overline{b}x+\overline{a}b\overline{x})\overline{\vee}$

[Applying the distributive laws, known to Ladd, to [i*] and [ii*], we obtain [i*] $$a(b+\overline{b})x\overline{\vee}=(abx+a\overline{b}x)\overline{\vee}$$, and [ii*] $$(a+\overline{a})b\overline{x}\overline{\vee}= (ab\overline{x}+\overline{a}b\overline{x})\overline{\vee}$$. Then, adding, we obtain $$(abx + ab\overline{x} +a\overline{b}x+\overline{a}b\overline{x})\overline{\vee}$$, which simplifies to [iv]. Since $$x+\overline{x} = \infty$$, it can be removed, so we finally reach [iv*].]

Dropping the information concerning $$x$$ [in [iv*]], there remains

$ab\overline{\vee}$ [the conclusion, [iii]].

The information given by the conclusion is thus exactly one half of the information given by the premises (Jevons) [Ladd is referencing a condition for elimination given by a developer of an existing algebra of logic, William Stanley Jevons (1835–1882). The condition that has been satisfied is the removal of one of the two terms (one half of the information), namely $$x$$, from each premise, leaving only $$a$$ and $$b$$ in the conclusion].
(25°)

$\mbox{[i]}\,\, a\overline{\vee}x$  [$$a$$ is inconsistent with $$x$$]

$\mbox{[ii]}\,\, b \vee x$ [$$b$$ is consistent with $$x$$; some $$b$$ is $$x$$]

$\mbox{[iii]}\,\,\therefore b \overline{a}\vee$ [therefore, some $$b$$ is not-$$a$$; the classes $$b$$ and not-$$a$$ are consistent]

The second premise is [can be rewritten as]

$\mbox{[ii*]}\,\, bx(ax+\overline{ax})\vee$ [because $$ax+\overline{ax} =\infty$$]

which becomes, since there is no $$ax$$ [as known from [i]],

$\mbox{[ii*]}\,\, bx(ax +\overline{ax})\vee = bx (\overline{ax})\vee = bx(\overline{a}+\overline{x})\vee$ [by De Morgan's law, which states that the negation of ($$a$$ and $$x$$) is (not-$$a$$ or not-$$x$$)]

or [equivalently]

$bx\overline{a}\vee.$  [In [ii*], $$bx(\overline{a}+\overline{x})\vee = (bx\overline{a}+bx\overline{x})\vee$$. Since $$x\overline{x}=0$$, that term can be removed, leaving only $$bx\overline{a}\vee$$.]

Dropping the information concerning $$x$$ there remains

$b\overline{a}\vee$ [the conclusion, [iii]].

This conclusion is equivalent to

$b\overline{a}\vee(x+\overline{x})$ [because $$x+\overline{x}=\infty$$]

but the [rewritten and simplified] premises permit the [stronger] conclusion

$b\overline{a} \vee x;$ [the premises establish $$bx\overline{a}\vee$$, of which the conclusion $$b\overline{a}\vee$$ retains only part]

hence the amount of information retained is exactly one half of the (particular) information given by the premises. Elimination is therefore merely a particular case of dropping irrelevant information.

# An Explication of the Antilogism in Christine Ladd-Franklin's "Algebra of Logic" – From Elimination to Antilogism

Author(s): Julia M. Parker (University of Missouri – Kansas City)

While impressive, this treatment of syllogism in Ladd's algebraic notation was not the end, or even the most important part, of Ladd's work.
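Before following Ladd from elimination to the antilogism, both inference forms quoted above, (25ʹ) and (25°), can be verified exhaustively by interpreting $$a$$, $$b$$, $$x$$ as subsets of a small universe, reading $$\overline{\vee}$$ as "the classes share no element" and $$\vee$$ as "the classes share an element." This is a modern brute-force sketch of mine, not Ladd's own argument:

```python
from itertools import product

def subsets(universe):
    """Every subset of a small universe, as frozensets."""
    items = sorted(universe)
    for bits in product([0, 1], repeat=len(items)):
        yield frozenset(x for x, keep in zip(items, bits) if keep)

U = frozenset({0, 1, 2})

for a, b, x in product(subsets(U), repeat=3):
    not_x = U - x
    # (25'): from  a excl. x  and  b excl. not-x,  conclude  ab excl. (no a is b).
    if not (a & x) and not (b & not_x):
        assert not (a & b)
    # (25deg): from  a excl. x  and  b overlaps x,  conclude  some b is not-a.
    if not (a & x) and (b & x):
        assert b & (U - a)
```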
After some further discussion of how figure and mood affect the structure of her inconsistency, Ladd arrived at the following conclusion, for which she provided no rigorous proofs or examples:

Those syllogisms in which a particular conclusion is drawn from two universal premises [as in (25ʹ)] become illogical [inconsistent] when the universal proposition is taken as not implying the existence of its terms. The argument of inconsistency [defined as II. above], $II. \left ((a\overline{\vee} b)(\overline{b}\overline{\vee}c) (c \vee a) \right ) \overline{\vee}$ is therefore the single form [what Aristotle called "figure"] to which all the ninety-six valid syllogisms [referring to the possible combinations of the figures and moods which give rise to syllogisms that are logically sound. How the number ninety-six was calculated is not described] (both universal and particular) may be reduced. [Ladd 1883, pp. 39–40]

This claim highlights the importance of Ladd's work. With it, she provided an answer to the question first posed by Aristotle, but in a surprising way. Contrary to Aristotle's unproven belief that all valid syllogisms could be reduced to the single perfect figure of all universal affirmative statements, Ladd showed that by using her algebra of logic based on the idea of exclusion (or logical inconsistency), all valid syllogisms can be reduced to a single symbolic argument, but not one of all universal affirmative statements. She later coined the term "antilogism" [Ladd-Franklin 1928] for this argument, which has the following symbolic representation: $II. \left ((a\overline{\vee} b)(\overline{b}\overline{\vee}c) (c \vee a) \right ) \overline{\vee}$ In other words, when given a syllogism, if the statements can be rewritten so that the premises and the contradictory of the conclusion are in the form of II., the antilogism, then the syllogism is valid. This test was presented by Ladd as a rule in her dissertation: Rule of Syllogism.
– Take the contradictory [negation] of the conclusion, and see that the universal propositions [statements] are expressed [rewritten] with a negative copula [ $$\overline{\vee}$$ ] and particular propositions [statements] with an affirmative copula [$$\vee$$]. If two of the propositions [statements] are universal and the other particular, and if that term only which is common to the two universal propositions [meaning the middle term, or the one being eliminated from both of those universal statements] has unlike signs [the middle term must be $$x$$ in one premise and $$\overline{x}$$ in the other], then, and only then, the syllogism is valid [Ladd-Franklin 1883, p. 41]. Ladd's first example of the Rule of Syllogism, "On the Algebra of Logic," p. 41. As explained by Russinoff, by developing a rule or test that could be used to determine the validity of syllogisms, Ladd made a significant contribution to the study of syllogistic logic, but one which was incomplete by today's standards [Russinoff 1999, p. 463]. After providing this rule, Ladd did not give a formal proof of its correctness, though in the remainder of her paper she did provide a number of examples that illustrate its correctness. If Ladd’s rule is taken as a theorem, what she did was, in a sense, to prove only one direction of an “if and only if” statement. As Russinoff writes, “although it is obvious that all triads [arguments made up of three statements] with the form she describes [triads in the form of Ladd’s II.] are inconsistent, it is not at all obvious that every inconsistent triad has that form” [Russinoff 1999, p. 463]. In other words, Ladd did not show that all inconsistent arguments will take the form of the antilogism. Although it may be that Ladd simply left the proof of the reverse direction out of her dissertation, it is instead more likely that, at the time of her writing, what she showed in her dissertation was considered a sufficiently rigorous proof in symbolic logic. 
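The direction Ladd did establish, that every triad of the form II. is inconsistent, can be checked mechanically, as can her Rule of Syllogism applied to the classic Barbara syllogism. The sketch below is mine, in modern set terms, with $$\overline{\vee}$$ read as disjointness and $$\vee$$ as overlap:

```python
from itertools import product

def subsets(universe):
    """Every subset of a small universe, as frozensets."""
    items = sorted(universe)
    for bits in product([0, 1], repeat=len(items)):
        yield frozenset(x for x, keep in zip(items, bits) if keep)

U = frozenset({0, 1, 2})
comp = lambda s: U - s   # Ladd's bar: the complement within the universe

# The antilogism II: the triad (a excl. b), (not-b excl. c), (c overlaps a)
# is inconsistent -- no choice of classes makes all three true at once.
for a, b, c in product(subsets(U), repeat=3):
    assert not (not (a & b) and not (comp(b) & c) and (c & a))

# Rule of Syllogism applied to Barbara ("All M is P; all S is M; all S is P"):
# take the contradictory of the conclusion ("Some S is not P"), write the
# universal premises with the negative copula, and the resulting triad
# (M excl. not-P), (S excl. not-M), (S overlaps not-P) is II. with
# a = not-P, b = M, c = S -- hence inconsistent, so Barbara is valid.
for S, M, P in product(subsets(U), repeat=3):
    assert not (not (M & comp(P)) and not (S & comp(M)) and (S & comp(P)))
```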
Russinoff [1999, pp. 463-467] has further argued that a complete proof of Ladd’s theorem is not possible without certain results and tools of modern logic that were not part of symbolic logic in the late nineteenth century. For instance, Ladd used the ideas of consistency and inconsistency in her work in the following way: $$a \overline{\vee} b$$ means “$$a$$ is inconsistent with $$b$$” or “if $$a$$ is true, $$b$$ is false and if $$b$$ is true, $$a$$ is false” whereas to say that $$a \vee b$$, or “$$a$$ is consistent with $$b$$” means that the truth of one does not imply the falseness of the other. As Russinoff explains, however, the definitions of inconsistency and consistency have changed to include possible interpretations, so the modern understanding is that a set of statements is said to be inconsistent only when there is no possible interpretation that would allow for all members of the set to be simultaneously true. Applying this current idea of interpretation and using modern notation, Russinoff was able to provide a contemporary proof for the reverse direction of the antilogism rule that Ladd’s insights allowed her to formulate in her 1883 dissertation. Though Ladd’s work may appear incomplete by today’s standards, this does not diminish the importance of her contribution to the study of logic. Her work showed that every valid syllogism can be reduced to a single argument of the antilogism. Thus, Ladd solved a problem that logicians from the time of Aristotle had failed to answer satisfactorily. Her antilogism, then, offered a very powerful tool to logicians, allowing the study of syllogisms to be greatly simplified. As she stated in a later paper, another benefit of using antilogism instead of syllogism is that it is more natural than formal syllogistic arguments, being a form of reasoning commonly used in rebuttal or discussion when speaking [Ladd-Franklin 1928, p. 532]. 
Ladd supported her claim that antilogism is a more natural form of reasoning by providing an example that she claimed was a real occurrence: A little girl of four years of age was making, at her dinner, the interesting experiment of eating her soup with a fork. Her nurse said to her, “Nobody eats soup with a fork, Emily,” and Emily immediately replied, “But I do, and I am somebody” [Ladd-Franklin 1928, p. 532]. This can be seen to be an antilogism by letting \(f\) represent the class of people who eat soup with a fork, and \(e\) represent the class of people like Emily. Then we have that Emily, as a member of \(e\), is not consistent with the idea that people do not eat soup with a fork (\(e \overline{\vee}\overline{f}\)), and also that Emily (or people like her) do actually exist (\(e \vee\infty\)), which is inconsistent with the conclusion that no people who eat soup with a fork exist (\(f \overline{\vee} \infty\)). Meaning, when we combine these three statements, we arrive at an antilogism: $\left ((e\overline{\vee} \overline{f})(f\overline{\vee}\infty) (e \vee \infty) \right ) \overline{\vee} .$

# An Explication of the Antilogism in Christine Ladd-Franklin's "Algebra of Logic" – Later Career and Historical Significance

Author(s): Julia M. Parker (University of Missouri – Kansas City)

Despite not having received the degree for her work in 1882, Ladd-Franklin’s accomplishments did not go unrecognized. Her work in optics was so well-regarded that in 1887 she was awarded an honorary doctorate by Vassar, the only person to ever have been given this honor [Green and LaDuke 2016, p. 341]. In 1926, during celebrations of the institution's 50th anniversary, Johns Hopkins offered to bestow upon her an honorary doctorate in a special ceremony. Her response, as described by historian Judy Green, was to remind those at Johns Hopkins that she had already been awarded an honorary doctorate by Vassar.
She went on to explain that, instead of another honorary degree, she believed she should be given the PhD she earned for the work done there [Green 1987, p. 124]. Thus it was in 1926, forty-four years after her completion of the degree requirements, that Christine Ladd-Franklin was finally awarded her PhD by Johns Hopkins University for her work in symbolic logic [Green 1987, p. 124]. As the first female to earn, but not be awarded, a PhD in mathematics in the United States, Christine Ladd-Franklin deserves to be recognized for what she was: a pioneer in mathematics, physiology, and women’s rights, contributing significant and impressive work across all of these areas.

Christine Ladd’s contributions to the fields of mathematics and science did not end with the completion of her doctoral-level studies at Johns Hopkins. Shortly after completing her studies in 1882, she married Fabian Franklin (1853–1939), a fellow student at Johns Hopkins who received his PhD in mathematics in 1880. She continued her work in symbolic logic and began to work in the field of physiological optics, where she contributed to a theory of color vision, eventually publishing a book of her work in optics [Ladd-Franklin 1929]. She also became an advocate for women’s education, spending both her time and money to help women obtain graduate education [Green 1987, pp. 122–124].

Acc. 90-105 - Science Service, Records, 1920s–1970s, Smithsonian Institution Archives

# An Explication of the Antilogism in Christine Ladd-Franklin's "Algebra of Logic" – References

Author(s): Julia M. Parker (University of Missouri – Kansas City)

Delaware, R. (2019). More than just a grade: The HOM SIGMAA student contest fosters writing excellence at UMKC. MAA Convergence 16.

Ending Life at College; Commencement Exercises at Columbia College. Awarding of Honors at the Academy of Music—Miss Winifred Edgerton Receives a Degree. (1886, June 10). New York Times.

Goldin, C. (1980). The work and wages of single women, 1870–1920. Journal of Economic History 40(1): 81–88.

Green, J. (1987). Christine Ladd-Franklin (1847–1930). In Women of Mathematics, edited by L. S. Grinstein and P. J. Campbell, 121–128. New York: Greenwood Press.

Green, J., LaDuke, J. (2009). Pioneering Women in American Mathematics: The Pre-1940 Ph.D.’s. Providence, RI: American Mathematical Society.

Green, J., LaDuke, J. (last revised in 2016). Supplementary Material for Pioneering Women in American Mathematics: The Pre-1940 PhD’s. American Mathematical Society.

Johnson, C. (2008). Christine Ladd-Franklin. Vassar Encyclopedia.

Kelly, S. E., Rozner, S. A. (2012). Winifred Edgerton Merrill: "She Opened the Door". Notices of the American Mathematical Society 59(4): 504–512.

Ladd, C. (1883). On the algebra of logic. In Studies in Logic by Members of the Johns Hopkins University, edited by Eschbach, A., Fisch, M. H., Peirce, C. S., 17–71. Baltimore, MD: Johns Hopkins University.

Ladd-Franklin, C. (1928). The antilogism. Mind 37(148): 532–534.

Ladd-Franklin, C. (1929). Colour and Colour Theories. New York, NY: Harcourt, Brace, and Co.

Rossiter, M. W. (1980). “Women’s Work” in science, 1880–1910. Isis 71(3): 381–398.

Russinoff, I. S. (1999). The syllogism’s final solution. The Bulletin of Symbolic Logic 5(4): 451–469.

Shen, E. (1927). The Ladd-Franklin formula in logic: The antilogism. Mind 36(141): 54–60.

# An Explication of the Antilogism in Christine Ladd-Franklin's "Algebra of Logic" – About the Author

Author(s): Julia M. Parker (University of Missouri – Kansas City)

Julia M. Parker initially wrote this paper as a part of a History of Mathematics class at the University of Missouri – Kansas City, where she was pursuing teacher certification in the field of mathematics. (Her BA degree is in Biology from William Jewell College.)
By the time the paper appeared in Convergence, she was enrolled at Avila University as a Graduate Student in Education, completing the requirements for both a middle school math teaching certification and a Master’s in Education. She works at Longview Community College as a Learning Specialist, working with tutors and Supplemental Instruction (SI) programs to provide in-classroom support for students in typically difficult college courses. With her husband of almost 19 years, she has five wonderful children: four boys ages 16, 14, 8, and 8, and a 12-year-old daughter. After supporting her husband through his BS, MDiv, and PhD degrees, it was Julia's turn to go back to school. She is also busy with the kids’ band and school activities and helping to care for the family's various pets: a dog, a guinea pig, a rabbit, a bearded dragon, a Russian tortoise, and an assortment of fresh- and salt-water fish in several aquariums. She is a voracious reader who also crochets and cross-stitches when life gives her anything resembling free time.
http://www.hg.schaathun.net/maskinsyn/_diff/Neural%20Networks?to=444e5a3daf141740f8beedb68560a6eaf9c0fd3f
# Neural Networks

---
title: Neural Networks
categories: session
---

+ [PyTorch Quickstart](https://pytorch.org/tutorials/beginner/basics/quickstart_tutorial.html)
  + the quickstart tutorial is part of [Learn the Basics](https://pytorch.org/tutorials/beginner/basics/intro.html)
+ Szeliski 2022 Chapter 5
+ [Briefing](ANN)

# Exercise 1. Basic tutorial.

I have added a couple of exercises to the official [PyTorch Quickstart](https://pytorch.org/tutorials/beginner/basics/quickstart_tutorial.html).

+ Please reflect upon and discuss the questions.

# Exercise 3. Regression.

One in-house project is to classify images of remote galaxies, which are often distorted due to the gravity of dark matter. This effect is known as gravitational lensing. A sample dataset can be found at [github](https://github.com/CosmoAI-AES/datasets2022/Exercise2022). This directory contains

+ A data file, sphere-pm.csv
+ 10000 images of distorted galaxies
+ A python file Dataset.py defining a subclass of Dataset to manage these data

The CSV file has the form

    index,filename,source,lens,chi,x,y,einsteinR,sigma,sigma2,theta,nterms
    "00001",image-00001.png,s,p,50,30,40,19,31,0,0,16

The interesting columns are the filename, which points to the input image, and the four output variables $x$, $y$, einsteinR, and $\sigma$. The other columns are associated with more advanced problem instances and should be ignored.

1. Study the Dataset class Dataset.py. How is the dataset managed?
2. Use this class to test whether you can train a network to determine the four outputs for an image.

# Exercise 4. Managing Data.

Doing a tutorial is good, but it is of little use if you can only use the sample data. The goal of this exercise is to learn to manage other datasets.
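To see the Dataset idea from Exercise 3 in isolation, here is a hypothetical, torch-free sketch. The real Dataset.py subclasses `torch.utils.data.Dataset` and would load the PNG named in each row inside `__getitem__`; this sketch only parses the sample CSV row quoted above and serves (filename, targets) pairs by index.

```python
# A hypothetical, torch-free sketch of the Dataset pattern from Exercise 3:
# parse the CSV, keep the filename and the four output variables
# (x, y, einsteinR, sigma), and serve items by index.

import csv
import io

CSV_TEXT = """index,filename,source,lens,chi,x,y,einsteinR,sigma,sigma2,theta,nterms
"00001",image-00001.png,s,p,50,30,40,19,31,0,0,16
"""

class LensDataset:
    # the four output variables named in the exercise
    TARGETS = ["x", "y", "einsteinR", "sigma"]

    def __init__(self, csv_file):
        self.rows = list(csv.DictReader(csv_file))

    def __len__(self):
        return len(self.rows)

    def __getitem__(self, i):
        row = self.rows[i]
        # a real Dataset would open and transform the image file here
        targets = [float(row[k]) for k in self.TARGETS]
        return row["filename"], targets

ds = LensDataset(io.StringIO(CSV_TEXT))
print(len(ds), ds[0])
```

Swapping `LensDataset` for a `torch.utils.data.Dataset` subclass lets a `DataLoader` batch and shuffle the pairs during training.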
Please be aware that deep learning is usually extremely compute intensive, and you can easily run into a problem which takes days to compute. The immediate solution to this is to
http://openstudy.com/updates/559f80fae4b05670bbb4c708
## ganeshie8 (one year ago)

show that $\large \sum\limits_{i=0}^{\infty} \left\lfloor \frac{n+2^i}{2^{i+1}}\right\rfloor = n$ for all positive integers \(n\)

1. Empty

2. ganeshie8: Okay :) I won't say a word because I don't really have a proof for this. I have been working on this for a while though, hopelessly..

3. Empty: Oh in that case tell me everything you know, I thought this was like something you had some clever answer to, but now it sounds like we're gonna need all the help we can get to figure this out.

4. ganeshie8: I tried to see if this has some connection to the Legendre formula for the highest exponent of a prime in the prime factorization of \(n!\) http://www.artofproblemsolving.com/wiki/index.php/Legendre's_Formula couldn't use it to my advantage, so I paused on this and am trying to use greatest integer function properties and induction.

5. Empty: It seems like it's related to counting in base 2.

6. Empty: Yeah Legendre's actually sounds like a great idea! I just don't know how to make that work either hmm...

7. ganeshie8: yeah the successive divisions by 2 clearly have something to do with base 2, but again the numerator expression is not constant.. so it's a bit complicated

8. Empty: I'm trying to see if there's a pattern in here, also where did you find this problem just out of curiosity? https://www.desmos.com/calculator/sydso2ebqk

9. Empty: We can lower the bound to a finite number. By separating the two terms on top, we can see that the terms are 0 once $\frac{n}{2^{i+1}} < \frac{1}{2}$, i.e. once $\log_2(n)<i$. So maybe this helps us out to write it this way: $n=\sum_{i=0}^{\lfloor \log_2(n) \rfloor} \left\lfloor \frac{n+2^i}{2^{i+1}} \right\rfloor$

10. Empty: I am also looking at this, but it seems to take us nowhere interesting: $a+b=\sum_{i=0}^{\infty} \left\lfloor \frac{a+b+2^i}{2^{i+1}} \right\rfloor=\sum_{i=0}^{\infty} \left\lfloor \frac{a+2^i}{2^{i+1}} \right\rfloor+\left\lfloor \frac{b+2^i}{2^{i+1}} \right\rfloor$

11.
ganeshie8: that animation looks interesting, this is from B. S. Grewal's advanced mathematics practice problems

12. ganeshie8: Adding to that, can we use the consecutive-integers property below to our advantage? Let $S_n =\sum\limits_{i=0}^{\infty} \left\lfloor \frac{n+2^i}{2^{i+1}}\right\rfloor$ Clearly \(S_1=1\), so it should be sufficient to show that $S_{n+1}-S_n = 1$
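The identity is at least easy to check numerically. The sketch below is an editor's addition, not from the thread; the telescoping remark in the comments is one possible route to a proof, offered here as a suggestion only.

```python
# Numerically check  sum_{i>=0} floor((n + 2^i) / 2^(i+1)) = n.
# Terms vanish once 2^i > n (comment 9's log2(n) bound), so the sum is finite.
# One possible route to a proof (not from the thread): for integer x,
# floor((x+1)/2) = x - floor(x/2), which gives
# floor((n+2^i)/2^(i+1)) = floor(n/2^i) - floor(n/2^(i+1)),
# and the sum then telescopes to n.

def floor_sum(n):
    total, i = 0, 0
    while (1 << i) <= n:                  # all later terms are 0
        total += (n + (1 << i)) >> (i + 1)
        i += 1
    return total

for n in range(1, 2000):
    assert floor_sum(n) == n
print("identity verified for n = 1..1999")
```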
https://dmurdoch.github.io/rgl/dev/reference/mesh3d.html
Creates meshes containing points, segments, triangles and quads.

    mesh3d(x, y = NULL, z = NULL, vertices,
           material = NULL, normals = NULL, texcoords = NULL,
           points = NULL, segments = NULL, triangles = NULL, quads = NULL,
           meshColor = c("vertices", "edges", "faces", "legacy"))
    qmesh3d(vertices, indices, homogeneous = TRUE, material = NULL,
            normals = NULL, texcoords = NULL,
            meshColor = c("vertices", "edges", "faces", "legacy"))
    tmesh3d(vertices, indices, homogeneous = TRUE, material = NULL,
            normals = NULL, texcoords = NULL,
            meshColor = c("vertices", "edges", "faces", "legacy"))

## Arguments

x, y, z: coordinates. Any reasonable way of defining the coordinates is acceptable. See the function xyz.coords for details.

vertices: A 4 row matrix of homogeneous coordinates; takes precedence over x, y, z.

material: material properties for later rendering.

normals: normals at each vertex.

texcoords: texture coordinates at each vertex.

points: vector of indices of vertices to draw as points.

segments: 2 x n matrix of indices of vertices to draw as segments.

triangles: 3 x n matrix of indices of vertices to draw as triangles.

quads: 4 x n matrix of indices of vertices to draw as quads.

indices: (obsolete) 3 or 4 x n matrix of vertex indices.

homogeneous: (obsolete) should tmesh3d and qmesh3d vertices be assumed to be homogeneous?

meshColor: how should colours be interpreted? See details in shade3d.

## Details

These functions create mesh3d objects, which consist of a matrix of vertex coordinates together with matrices of indices indicating how the vertices should be displayed, and material properties.

The "shape3d" class is a general class for shapes that can be plotted by dot3d, wire3d or shade3d. The "mesh3d" class is a class of objects that form meshes: the vertices are in member vb, as a 4 by n matrix using homogeneous coordinates. Indices of these vertices are contained in optional components ip for points, is for line segments, it for triangles, and ib for quads. Individual meshes may have any combination of these.
The functions tmesh3d and qmesh3d are included for back-compatibility; they produce meshes of triangles and quads respectively.

## Value

Objects of class c("mesh3d", "shape3d").

See rgl.primitive for a discussion of texture coordinates.

## See Also

shade3d, shapelist3d for multiple shapes

## Examples

    # generate a quad mesh object
    vertices <- c(
      -1.0, -1.0, 0,
       1.0, -1.0, 0,
       1.0,  1.0, 0,
      -1.0,  1.0, 0
    )
    indices <- c(1, 2, 3, 4)

    open3d()
    wire3d(mesh3d(vertices = vertices, quads = indices))
https://forum.azimuthproject.org/discussion/1215/notes-on-networks-an-introduction-by-m-e-j-newman
# Notes on "Networks: An Introduction", by M.E.J. Newman

This is a thread for any notes from this textbook. I would like to create a Wiki page with reading notes, but not sure about a good way to name it. Calling it "Networks: An Introduction" would make it sound like an introduction. One way to go would be to add a prefix like "Notes - Networks: An Introduction," like the way we have Blog as a prefix. Any suggestions welcome here.

1. p. 121 Let G be a directed graph, and let A be its adjacency matrix. Then the following are equivalent:

    * G is acyclic
    * The vertices of G can be ordered so that A is strictly upper triangular
    * A is nilpotent
    * All eigenvalues of A are zero

2. I'd suggest 'Reading notes...' instead of just 'Notes...', and a new category 'reading notes'. It is similar to blogs, and also to experiments.

3. That result on p. 121 is a very nice chunk of math, David Tanzer. There's a famous theorem that a linear transformation of a finite-dimensional vector space is nilpotent iff all its eigenvalues are zero iff there's some basis in which it's strictly upper triangular. This can most easily be seen as a spinoff of the [Jordan canonical form](http://en.wikipedia.org/wiki/Jordan_canonical_form) of a matrix.
But in fact the result you're talking about is easier, since we don't need to think about vector spaces and bases and linear transformations; we can just think about the adjacency matrix $A$, and to bring it into upper triangular form we just need to order the vertices correctly. To do this first we list all the vertices $v_1, \dots, v_n$ that have no edges coming out of them, in any order. (Such vertices must exist since $G$ is acyclic.) Then we list the vertices $v_{n+1},\dots, v_m$ whose only outgoing edges go to the vertices $v_1, \dots, v_n$. Then we must list the vertices whose only outgoing edges go to the vertices $v_1, \dots, v_m$. And so on. (Prove that this eventually exhausts all the vertices, again using the fact that $G$ is acyclic.) When we list the vertices this way, our matrix $A$ is strictly upper triangular. (Or maybe strictly lower triangular, but that just means I listed them backwards.) I got a free textbook on network theory from Oxford Press, but it was Ernesto Estrada's, not Newman's. I'd like to compare them someday. Estrada's book is full of typos and omissions that make the math harder to follow than necessary.

4. That shows that G is acyclic iff its vertices can be ordered so that A is upper triangular. And these are equivalent to nilpotence, because the successive powers of A are upper triangular with more and more diagonal bands of zeros. Next, the fact that A is strictly upper triangular implies that all eigenvalues are zero, since the eigenvalues of a triangular matrix are the values on the diagonal. But to complete the proof of these equivalences, how do you propose to do it without invoking the general linear-algebraic theorem that you mentioned? I.e., is there an easier way to prove that, for an adjacency matrix, if all eigenvalues are zero, then the graph is acyclic? To put it more tangibly, we want to show that if the graph is cyclic, then it must have an eigenvector with non-zero eigenvalue. We could get the job done by showing that it is guaranteed to have a fixed point. That's clearly the case if the graph contains any isolated cycle, because the characteristic function of the set of nodes in the cycle is a fixed point. Or, if it contains any isolated cliques of size k, then the characteristic function of the clique is an eigenvector, with eigenvalue k.
But what about the general case of any old cyclic graph? I was imagining starting with the entire set of nodes, i.e., the vector (1,1,...,1), and iterating to see if it converges to a fixpoint. But why would it, since we are operating over the ground ring R, not the ground rig {0,1}. But does it converge to something special, that could be analogous to the "cyclic kernel" of a directed graph, obtained by successively removing nodes that have zero indegree? Perhaps it eventually reaches some eigenvector, not necessarily a fixpoint? You see, I am grasping at hypotheses here :) The Newman book does give a graph-theoretic proof of these equivalences, which doesn't cite a theorem that is based on the Jordan canonical form. It's based on another spectral theorem about graphs. I don't have it at hand, but can post it later. But now I'll leave some space here for this discussion.

5. In addition to the main point that I just indicated, which still needs proof if we aren't going to rely on the general linear algebra theorem, there is one other missing piece in the proofs I just gave: showing that if A is nilpotent, then G is acyclic. But this is easy; let's show the contrapositive. Suppose G is cyclic. Then $A^n$ is the matrix whose (i,j)th entry counts the number of paths in G of length n from node i to node j. In a cyclic graph, for any n, there exist paths of length n. Hence $A^n$ cannot be zero. So A is not nilpotent.

6. Based on the general theorem in linear algebra (that T is nilpotent iff all its eigenvalues are zero) it follows that a cyclic graph will have non-zero eigenvalues, but I can't at all see how to give a straight-ahead graph-theoretic proof of this. Taking a random cyclic graph, the eigenvectors of its adjacency matrix will generally have complex components. Again, I am at a loss for how to give a graph-theoretic interpretation of such an eigenvector. For me, this is a stumbling block in the study of spectral graph theory.

7. Hi David! We have copies of that book here in the office. It's a fairly decent one. I think we have to separate the idea of complex networks and networks. We all know what a network is, but complex networks, what people typically mean by that, is that there is an emergent feature that is present globally. Like the forest through the trees. Of course, this is based on my own current understanding and clearly the term is not so well defined. Hopefully this can change...

8. Hi Jacob, that's an interesting topic, complex networks. Just to be clear, though, I was referring to the complex numbers that arise when you take the eigenvectors and eigenvalues of even simple networks. For instance, the cyclic graph {(1,2), (2,1), (2,3)} has an adjacency matrix with complex eigenvectors and eigenvalues. Intuitively it seems that these eigenvectors must have a graph-theoretic interpretation (they are after all a function of the graph), but it is eluding me.

9. Here I will paraphrase how Newman completes the proof.

    Theorem. Let G be a directed graph, with adjacency matrix A. Let L(r) be the number of cycles of length r in G. Then L(r) = $\sum_{i=1}^{n} k_i^r$, where $k_1, ..., k_n$ are the eigenvalues of A.

    Corollary. If G is cyclic, then it must have a non-zero eigenvalue. Indeed, if G is cyclic, then it has a cycle of length r, so L(r) > 0, and hence some $k_i$ must be non-zero.

    Note: cycles are defined as closed paths, and paths are sequences of edges. Hence each loop that passes through k nodes will be treated as k distinct cycles, one for each starting point in the loop.
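The theorem in comment 9 is easy to sanity-check on the directed 3-cycle {(1,2), (2,3), (3,1)}, whose eigenvalues are the cube roots of unity. This is only a numerical illustration (an editor's addition, not from Newman's book): L(r) is computed directly as the trace of $A^r$, i.e. by counting closed walks of length r.

```python
# Check L(r) = sum_i k_i^r on the directed 3-cycle {(1,2), (2,3), (3,1)}.

import cmath

A = [[0, 1, 0],     # edge 1 -> 2
     [0, 0, 1],     # edge 2 -> 3
     [1, 0, 0]]     # edge 3 -> 1

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

w = cmath.exp(2j * cmath.pi / 3)        # primitive cube root of unity
eigs = [1, w, w ** 2]

P = [[int(i == j) for j in range(3)] for i in range(3)]   # identity
for r in range(1, 10):
    P = matmul(P, A)                                      # P = A^r
    L = sum(P[i][i] for i in range(3))                    # closed r-walks
    assert abs(L - sum(k ** r for k in eigs)) < 1e-9      # L(r) = sum k_i^r
print("L(r) = sum of eigenvalue powers for r = 1..9")
```

Here L(r) is 3 whenever 3 divides r and 0 otherwise, matching $1 + \omega^r + \omega^{2r}$.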
10. Proof of the theorem. (p. 136) $(A^r)_{i j}$ = the number of paths in G of length r that go from node i to node j. Hence $(A^r)_{i i}$ = the number of cycles in G of length r that start at node i. Hence $\sum_{i=1}^{n} (A^r)_{i i}$ = L(r) = the number of r-cycles in G. But this is just the trace of $A^r$, which equals the sum of the eigenvalues of $A^r$. This in turn equals the sum of the rth powers of the eigenvalues of A. Beautiful stuff.

11. > For instance, the cyclic graph {(1,2), (2,1), (2,3)} has an adjacency matrix with complex eigenvectors and eigenvalues. Intuitively it seems that these eigenvectors must have a graph theoretic interpretation (they are after all a function of the graph), but it is eluding me.

I guess you meant {(1,2), (2,3), (3,1)}. If $\omega \neq 1$ is a cube root of 1, the eigenvalues are $1, \omega, \omega^2$.
The eigenvectors can be seen as weighted sums of the vertices, and the weights are complex. I don't know how to make it more graph-theoretic than that.

12. Hi David! I was just making a general remark that I figured out only recently about complex networks vs networks. Thanks for this post. I learned something and have a copy of the book on my desk.

13. Jacob, sure thing. I'm also interested to hear what you find out about emergent properties of complex networks!

14. Graham, you're right, my example was erroneous; all the eigenvalues and eigenvectors are real.

> The eigenvectors can be seen as weighted sums of the vertices, and the weights are complex. I don't know how to make it more graph-theoretic than that.

It is true that the eigenvectors are weighted sums of the vertices, but they are not just any old weighted sum. They have a very specific character which I believe must be related to the structural properties of the graph. One aspect of this question is the following.
We know from the algebra that any cyclic graph must have a non-null eigenvector. Can we give a constructive procedure for finding the eigenvector, based on the connectivity structure of the graph? Of course we can just algebraically compute them, but that is using linear algebra rather than graph theory.

Here are a few cases where this can be done. Suppose the graph contains an isolated cycle that goes through the nodes $v_1,...,v_k$. Then let w be the weighted combination of the nodes which assigns 1 to each of the nodes in the cycle, and 0 to all the other nodes (i.e. the characteristic function of the nodes in the cycle). Then w is an eigenvector with eigenvalue 1. This generalizes to disjoint unions of isolated cycles, and also to any set of nodes which equals its predecessor-set in the graph. This is a construction of the eigenvectors in direct, graph-theoretic terms.

For other examples, the characteristic function of an isolated clique of size k is an eigenvector with eigenvalue k, and the characteristic function of a set of nodes with zero outdegree is an eigenvector with eigenvalue zero.

So, can we generalize and unify these constructive procedures, to handle the case of a general cyclic graph?

15. Since the adjacency matrix is non-negative, one form of the Perron-Frobenius theorem tells us that there will always be a real eigenvalue, with an associated real and non-negative eigenvector -- a Perron eigenvector. Its eigenvalue will have maximal absolute value, but, because the adjacency matrix is not strictly positive, there may be multiple Perron eigenvectors for a given graph.

This puts a sharper point on the spectrum of a cyclic graph. We know that the graph must have a non-zero eigenvalue, and also that it has a Perron eigenvector v. So the eigenvalue of v must be positive, and its components real and non-negative. So the constructive procedure could limit itself to the case of non-negative eigenvectors and eigenvalues.
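The isolated-cycle construction above is easy to verify numerically. A small sketch (the example graph and the edge-direction convention are my own assumptions, since the thread does not fix one): take a 3-cycle with an extra edge out to a sink node, so that no edges point into the cycle, and check that the characteristic vector of the cycle is an eigenvector with eigenvalue 1.

```python
# Check: the characteristic vector of an isolated cycle is an
# eigenvector of the adjacency matrix with eigenvalue 1.
# Convention assumed here: A[i][j] = 1 iff there is an edge i -> j,
# and eigenvectors satisfy A @ w = k * w.
import numpy as np

# Nodes 0,1,2 form a cycle; node 3 is a sink fed by node 2.
# No edges point into the cycle from outside, so the cycle is isolated.
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)

# Characteristic function of the cycle {0, 1, 2}.
w = np.array([1.0, 1.0, 1.0, 0.0])

assert np.allclose(A @ w, 1.0 * w)  # eigenvector with eigenvalue 1
```

Note that with the opposite convention (columns indexed by target node) the same check applies to the transpose, which is one reason the "predecessor-set" phrasing above depends on which orientation one picks.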
16. Graham wrote:

> I guess you meant {(1,2), (2,3), (3,1)}. If $\omega \neq 1$ is a cube root of 1, the eigenvalues are $1, \omega, \omega^2$

That's an interesting one to look at. The Perron vector is the function which assigns 1 to all three of the nodes, which is the obvious fixpoint of the adjacency matrix. The other eigenvalues are interesting. Here, the group $\{1, \omega, \omega^2\}$ is isomorphic to the symmetry group of the graph. Is this just a special coincidence, or is it a sign of something deeper? But note that there is no obvious generalization of this fact, because the spectrum does not generally form a group. It may be a coincidence, stemming from the fact that in this case the adjacency matrix is a permutation, which is a symmetry of the graph.

17. A while ago David wrote:

> I.e., is there an easier way to prove that, for an adjacency matrix, if all eigenvalues are zero, then the graph is acyclic?

Sorry to punk out on you---my wife Lisa went to Singapore on June 5, and I gave the final for my class June 12, so I've been sort of distracted. It looks like you found a solution. But I think the Jordan canonical form may be hiding inside your solution. At some point in your argument you said

> But this is just the trace of $A^r$, which equals the sum of the eigenvalues of $A^r$.

I'm wondering how you're concluding this. The trace of a matrix isn't always the sum of its eigenvalues. It's true when the eigenvectors of the matrix form a basis, since the trace of a linear transformation is the sum of its diagonal entries when written as a matrix in any basis, and writing the transformation in the basis of its eigenvectors, the diagonal entries are its eigenvalues. But the eigenvectors of a matrix don't always form a basis. For example, take

$$\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$$

This matrix doesn't have a basis of eigenvectors: its only eigenvectors are multiples of

$$\begin{pmatrix} 1 \\ 0 \end{pmatrix}$$

So, it just has one eigenvalue, namely 1, not two. But the trace is 2.

There's a way around this problem. For one thing, this matrix can't be the adjacency matrix of a loop-free graph. (I think it's silly to avoid loop-free graphs, but lots of people do.) However, right now the only way I see to always avoid this problem involves Jordan canonical form.
(I don't personally think it's worthwhile avoiding the use of Jordan canonical form, since this is such a fundamental theorem in linear algebra: as the name says, it gives a canonical form for any linear transformation, making them all easy to understand. So I don't think much about how to avoid it, and there could be an easy way to avoid it here which I'm not seeing!)

18. I agree it looks like the Jordan form is really needed. The proof I gave, by the way, is from the Newman textbook.

I stated the trace theorem in a hasty way; more accurately put, the trace is the sum of the eigenvalues, viewed as a multi-set. And this follows from the Jordan form theorem, since every matrix is similar to its Jordan canonical form, which is triangular with the eigenvalues on the diagonal, and the multi-set of eigenvalues is shared by all of the matrices in a similarity class.

19. I just want to note that while there's a standard notion of a multi-set of eigenvalues, this isn't quite the same as the multi-set of numbers that show up on the diagonal of the Jordan form. We say an eigenvalue $\lambda$ of a matrix $T$ has 'multiplicity $n$' if there are $n$ linearly independent vectors $v_i$ such that

$$T v_i = \lambda v_i$$

Counting eigenvalues with their multiplicity this way, we get a multi-set of eigenvalues.
So, for example, this matrix has eigenvalue 3 with multiplicity 2:

$$\begin{pmatrix} 3 & 0 \\ 0 & 3 \end{pmatrix}$$

so it gives the multiset $\{3, 3\}$. But this matrix has eigenvalue 3 with multiplicity 1:

$$\begin{pmatrix} 3 & 1 \\ 0 & 3 \end{pmatrix}$$

since it doesn't have two linearly independent eigenvectors. So it gives the multiset $\{3\}$, using the procedure I'm describing.

I'm not trying to be a nuisance - honest! Of course you can make a multiset of eigenvalues in a different way where you just look at the list of numbers in the Jordan form... _this_ way gives the trace when we add them up. I'm just trying to point out something that's a bit subtle.

It took me too long to understand all this stuff, myself, because I learned a lot of linear algebra in physics classes, which focused on self-adjoint operators, whose Jordan form is diagonal. Then these subtleties don't arise.

20. ## p. 118 sec 6.4.2 Acyclic directed networks

Goes on to define cycles in directed networks and acyclic directed networks.

* (**Def.**) A _cycle_ in a directed network is a closed loop of edges with the arrows of each of the edges pointing the same way around the loop. Directed networks without cycles are called _acyclic_.

Here all the arrows can be pointed, for example, down the page or in the same overall direction. Examples that are easy to think about would be citation networks (since papers appearing later typically cite papers appearing sooner).

* (**Thm.**) Suppose we have an acyclic directed network of $n$ vertices. Then $\exists$ some node with indegree >$0$ and outdegree $0$.

* (**Proof.**) Consider $n$ walks on the network, one starting from each node. If any of these walks starts at some node $q$, or passes it along the way, and is able to traverse back to $q$ by steps along directed edges, the network is not acyclic. The walk can visit at most $n$ nodes before terminating at a node with outdegree $0$.

If anyone who does not like the way this proof is written wants to reword it in a better way, that'd be nice!

21. I don't like the proof, and the theorem is not quite true, because the network might have no edges.

**New proof.** Use induction on n, starting with n=2. Suppose first that all nodes have outdegree $\gt 0$. Start at any node, choose any edge pointing to another node, and repeat until we visit some node already visited. This makes a cycle, which is a contradiction. So $\exists$ a node $q$ with outdegree $0$. If it has indegree $\gt 0$, we're done; otherwise use induction on the network obtained by removing $q$.

22. Cool! We have to get the statements solid, since I assume that David Tanzer will post this onto the Azimuth wiki at one point. So how's this for a new version of the theorem?

* (**Thm.**) Suppose we have an acyclic network of $n$ nodes, each of outdegree $+$ indegree > 0. Then $\exists$ at least one network node with outdegree $0$.
That would force each node to have either an out-edge or an in-edge. What do you think?

23. ...for completeness of the record, one more point regarding the algebraic multiset of eigenvalues: in addition to being displayed as the collection of diagonal entries on the Jordan canonical form of a matrix, they can also be obtained directly from the given matrix, since they are the multiset of roots of the characteristic polynomial.

John, thanks for bringing the Jordan form into the discussion. I'd seen it before, but this time around I learned about it, because in the Azimuth context it has a living application.

24. Just a simple and quick idea that we might want to consider, to connect these results to stochastic mechanics in the network theory series. The following I think is really obvious, but still, I want to be careful about how we show it.

In this thread, Graham Jones gave a proof of the theorem from the Newman book.

* (**Theorem 1**).
Suppose we have an acyclic network of $n$ nodes, each of outdegree $+$ indegree > 0. Then $\exists$ at least one network node with outdegree $0$.

Let's now consider a continuous-time stochastic process on networks of this type with adjacency matrix $A$. (Note that in doing so, we're outside the class of networks for which the Perron-Frobenius theorem gives a stationary solution, since these networks are NOT strongly connected---see the network series part 20.) The adjacency matrix of such a network can be elevated to a valid stochastic generator. See for example the network theory series, in particular parts 12 and 16. Consider the projection $D$ which sums the columns of a matrix and returns a matrix with the sums along the diagonal. Then $H = A - D(A)$ is a valid stochastic generator. This just means that we can exponentiate this matrix times time and get a valid stochastic evolution operator.

Now I'd like to think of a simple way to show something that should come as no surprise. When we think about processes on graphs, we already know that the way this is talked about is using the term "walkers". We can make a _walker basis_ by thinking of a walker being on only a single node at a time. If there are $N$ nodes in the network, then we have a complete orthogonal walker basis formed in the evident way. These would just be given by the vectors $(1,0,0,...), (0,1,0,...), ..., (0,0,...,1,0), (0,0,...,0,1)$, where the single non-zero 1 marks the location of a walker on some node.

* (**Theorem 2**). For each of the $k$ nodes of $A$ with outdegree $0$, $\exists$ $k$ stationary states given by the basis elements corresponding to walkers being present on nodes with outdegree $0$.

Like I said, this theorem is no surprise, but maybe it can teach us something else. I guess what we're really saying (let me think about this more) is that

* (**Remark 1**).
$\dim \mathrm{Ker}\{A - D(A)\} = k$, and in particular there is a $k$-element span of $\mathrm{Ker}\{A - D(A)\}$ given by $k$ walkers at the nodes with outdegree $0$.

I need to formalize this a lot better, think about how to go about proving it, and think about whether we can do anything else with it. This is just a random idea for now, but hopefully more later. What do you think?

25. Hi Jacob, I'll give this one a try.

Suppose that the ith node has outdegree 0. Then in the adjacency matrix A, the ith column will consist of all zeros. Hence the ith column of D(A) will consist of all zeros. Hence the ith column of A - D(A) will consist of all zeros. Hence the vector (0,0,...,1,0,...,0), which has the one in the ith position, will be a null vector of A - D(A). This is the pure state corresponding to the walker being at the node with outdegree 0. Since it is a null vector of the stochastic generator, it is a stationary state.

Note that the result doesn't depend on acyclicity.

26. It sounds like you are searching for stochastic implications of acyclicity. Here I'll fish around a bit.

If the graph is acyclic, then (with the nodes suitably ordered) A will be a strictly lower triangular matrix. And it will be nilpotent. The stochastic generator G = A - D(A) will be a lower triangular matrix, with diagonal entries equal to minus the outdegrees of the nodes. Hence all successive powers of G will be lower triangular, though G will not be nilpotent. The ith diagonal entry of $G^k$ will equal $(-o)^k$, where o is the outdegree of the ith node.

Are there any special consequences of the fact that a stochastic generator has a triangular matrix?

Here is a clear consequence of acyclicity: if the graph has N nodes, then after N steps, all walkers must be at an outdegree-0 node. Each step of the walk corresponds to another successive power of the adjacency matrix, which, being nilpotent, marches along towards the zero matrix, one diagonal band at a time. All walkers are inevitably swept towards the outdegree-zero nodes. Can this be said with a theorem about the stochastic generator?

27. Just polishing the theorem statement, and noting that if you reverse the arrows, you can find a node with indegree $0$:

* (**Thm.**) Let $G$ be an acyclic network with a finite number of nodes, and suppose that no node is isolated (that is, each node has outdegree $+$ indegree > 0). Then there is at least one node with outdegree $0$ and at least one node with indegree $0$.

**Puzzle.** Does it remain true if there are infinitely many nodes?

28. Nice ideas! I like your theorem, Graham. I'm still parsing what you said, David, but got excited about the chance to prove something not about the end nodes, but about the nodes which always have a non-decreasing escape velocity! So we know that the nodes with outdegree 0 give rise to stationary points of a stochastic process defined on the graph.
I want to think about how to prove that a walker starting at a node with indegree 0 has an always non-decreasing chance of leaving! Let's start with some definitions ($A$ is the same as before).

• (Definition). The probability of a walker starting at node $n$ and being found there at time $t$ is $P_t(n) := \langle n, e^{-t H} n \rangle$, where $H := A - D(A)$ is the stochastic Hamiltonian. The basic property of $P_t$ which we will use is that $0 \leq P_t(n) \leq 1$ for all $t \in \mathbb{R}_+$.

• (Lemma). The result of a silly calculation used to be right here. I don't think we ended up needing this, but what we really get is $\frac{d}{dt}P_t(n) = -\,\mathrm{outdegree}(n)\, P_t(n)$.

• (Lemma). Let $n$ be a node with indegree 0; then $\langle n, H^k n \rangle = (-\,\mathrm{outdegree}(n))^k$. The proof follows from the fact that if $n$ has indegree 0, the $n$th row of $A$ will be all zeros. $D(A)$ simply subtracts the outdegree of $n$ and places it in the $(n,n)$ position. Successive powers are as described.

Hopefully this is all correct, and I think it's all we need to show that $P_t$ is never increasing. More soon.
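The second lemma can be checked directly. A sketch (my own, not from the thread) on the same assumed toy DAG, where node 0 has indegree 0 and outdegree 2:

```python
import numpy as np

# Toy DAG (assumed): 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3, with
# A[i, j] = 1 for an edge j -> i.  Node 0 has indegree 0.
A = np.zeros((4, 4))
for src, dst in [(0, 1), (0, 2), (1, 3), (2, 3)]:
    A[dst, src] = 1.0
H = A - np.diag(A.sum(axis=0))

# Check <n, H^k n> = (-outdegree(n))^k for the indegree-0 node n = 0.
outdeg0 = A[:, 0].sum()
for k in range(6):
    Hk = np.linalg.matrix_power(H, k)
    assert np.isclose(Hk[0, 0], (-outdeg0) ** k)
```

Since H is lower triangular here, the diagonal entries of $H^k$ are just the kth powers of the diagonal of H, which is what makes the lemma work for a root node.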
29. edited June 2013

For the theorem about finite graphs, we don't need to add an extra condition that there are no isolated nodes, because the conclusion is true by definition when there are isolated nodes.

• (Thm.) Let $G$ be an acyclic network with a positive number of nodes. Then there is at least one node with outdegree $0$ and at least one node with indegree $0$.

Graham's original proof is good. Here is an alternative proof, which uses the notion of the height of a node in the graph to characterize the root and leaf nodes.

Proof. Define the height of a node n in the graph, in the standard way, as the length of a longest path starting at n. The height can only be infinite if either (1) the graph is infinite, or (2) it is finite and has a cycle. Since our graph is assumed to be finite and acyclic, it follows that every node has a finite height.

Claim 1. If n is a node with minimal height, then n has outdegree 0. For a contradiction, suppose that n has a successor node n'. Let P' be a maximal path in the graph that starts at n', so height(n') = length(P'). Then consider the path P = (n, P'), with length(P) = 1 + length(P') = 1 + height(n'). Hence height(n) > height(n'). But that contradicts the assumption that n has minimal height.

Claim 2.
If n is a node with maximal height, then n has indegree 0. This can be proven by reversing the edges in the graph and applying Claim 1. Or we can spell out the details, as follows. For a contradiction, suppose that n has a predecessor node n'. Let P be a maximal path in the graph that starts at n, so height(n) = length(P). Then consider the path P' = (n', P), with length(P') = 1 + length(P) = 1 + height(n). Hence height(n') > height(n). But that contradicts the assumption that n has maximal height.

Since the graph contains a positive number of nodes, there are nodes with minimal and maximal heights, and hence nodes with indegree 0 and outdegree 0.

If the graph is infinite, then the conclusion does not hold. For instance, the successor relation among the integers is acyclic, but has no roots or leaves.
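The height argument translates directly into a small algorithm. A sketch (my own, on the assumed toy DAG): compute each node's height by a longest-path recursion, then check that minimal-height nodes are leaves (outdegree 0) and maximal-height nodes are roots (indegree 0):

```python
from functools import lru_cache

# Toy DAG (assumed): 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3.
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
nodes = range(4)
succ = {n: [d for s, d in edges if s == n] for n in nodes}
preds = {n: [s for s, d in edges if d == n] for n in nodes}

@lru_cache(maxsize=None)
def height(n):
    # length of a longest path starting at n (0 if n has no successors);
    # terminates because the graph is finite and acyclic
    return max((1 + height(m) for m in succ[n]), default=0)

hs = {n: height(n) for n in nodes}
leaves = [n for n in nodes if hs[n] == min(hs.values())]
roots = [n for n in nodes if hs[n] == max(hs.values())]
assert all(not succ[n] for n in leaves)    # Claim 1: outdegree 0
assert all(not preds[n] for n in roots)    # Claim 2: indegree 0
```

On a cyclic or infinite graph the recursion would not terminate, which mirrors the remark that the height is only guaranteed finite for finite acyclic graphs.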
30. edited June 2013

Jacob wrote:

> • (Lemma). $\frac{d}{dt} P_t(n) = 1 - P_t(n)$
> The proof of this follows from the Taylor expansion of $e^{-t H}$ and the fact that $\frac{d}{dt} e^{-t H} = -H e^{-t H}$.

Jacob, is this the statement you intended to make? This lemma, along with the fact that $0 \leq P_t(n) \leq 1$, would imply that $P_t(n)$ has a non-negative derivative. So there's no way for it to decrease. In fact, since $P_0(n) = 1$, it would remain at 1 for all time.

31. edited June 2013

No, it's not! I was about to fix it; glad you also noticed!
I'll leave it there for now and fix it in a few days, for future readers. What I wanted to say was that

$\frac{d}{dt} \langle n, e^{-t H} n \rangle = - \langle n, \sum_{k=0}^{\infty} \frac{(-1)^k t^k}{k!} H^{k+1} n \rangle$

and now I'm messing around to try to get the condition that $P_t$ is always decreasing. I think I have it.

32. edited June 2013

Arg! I think part of what is messing me up is that, to make things look more like quantum physics, we've let time run backwards in the network theory series. Let's try this:

$P_t(n) = \langle n, e^{t H} n \rangle = \langle n, \sum_{k=0}^{\infty} \frac{t^k}{k!} H^{k} n \rangle = \sum_{k=0}^{\infty} \frac{t^k}{k!} \langle n, H^{k} n \rangle$

Now if I can make the argument that $\langle n, H^{k} n \rangle = \langle n, H n \rangle^k$, we would recover $P_t(n) = e^{t \langle n, H n \rangle}$, and then if $\langle n, H^k n \rangle = (-\,\mathrm{outdegree}(n))^k$, we would get

$P_t(n) = e^{-\mathrm{outdegree}(n)\, t}$

which is always decreasing and reaches zero in the long-time limit. I have to think more to be certain this is OK. It's all simple, so I hope it's OK. Either way it's fun! And I'm learning something. Note to self: work things out more before posting here... but hey, this started to get exciting!
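The conclusion $P_t(n) = e^{-\mathrm{outdegree}(n)\,t}$ for a root node can be checked against the matrix exponential. A sketch (my own, not from the thread) using scipy on the assumed toy DAG:

```python
import numpy as np
from scipy.linalg import expm

# Toy DAG (assumed): 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3, with
# A[i, j] = 1 for an edge j -> i.  Node 0 is a root (indegree 0).
A = np.zeros((4, 4))
for src, dst in [(0, 1), (0, 2), (1, 3), (2, 3)]:
    A[dst, src] = 1.0
H = A - np.diag(A.sum(axis=0))

# Check P_t(0) = <e_0, e^{tH} e_0> = exp(-outdegree(0) * t).
outdeg0 = A[:, 0].sum()
for t in [0.0, 0.5, 1.0, 2.0]:
    Pt = expm(t * H)[0, 0]
    assert np.isclose(Pt, np.exp(-outdeg0 * t))
```

This matches the derivation above: since H is triangular in a topological ordering, the diagonal of $e^{tH}$ is the exponential of the diagonal of $tH$.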
33. edited June 2013

Gents, I have to head to a dinner tonight, so I won't get back to this until tomorrow (unfortunately). I'm thinking about this stuff, though, and actually just making sure I recall all the definitions when it comes to the stochastic mechanics listed here: http://math.ucr.edu/home/baez/networks/networks_20.html. One thing's for sure: this is getting more interesting with time, so the fun derivative is increasing! I think the story here, related to stochastic mechanics so far, is that we have 2 things:

• end points: stationary states
• starting points: nodes where, if you start there, you're going to leave!

34. edited June 2013

Jacob, Great.
Also, can we generalize these results along the following lines? Let S be a set of nodes. Say that S is predecessor-closed if for every node n in S and every predecessor n' of n, n' is also in S. Define similarly the notion of a successor-closed set of nodes. Any set of roots is predecessor-closed, and any set of leaves is successor-closed.

Statement 1. Let S be a predecessor-closed set of nodes. Suppose that at the start of the experiment, the walker is known to be somewhere within S. Then, as time progresses, the probability that the walker remains within S must be a non-increasing function. In fact, only if S is also successor-closed will the probability remain constant over time.

Here is a variant on that statement:

Statement 2a. Let S be a predecessor-closed set of nodes. For any given starting condition of the experiment, as time progresses, the conditional probability that the walker is found within S must be a non-increasing function. In fact, only if S is also successor-closed will the probability remain constant over time.

Statement 2b. Dually, let S be a successor-closed set of nodes. For any given starting condition of the experiment, as time progresses, the conditional probability that the walker is found within S must be a non-decreasing function. In fact, only if S is also predecessor-closed will the probability remain constant over time.

These statements, if proven, would "put fibre" into the intuitive notion that the walkers are swept from roots towards leaves in a directed acyclic graph.
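Statement 1 is easy to test numerically, though of course that is evidence rather than proof. A sketch (my own) on the assumed toy DAG, where S = {0, 1} is predecessor-closed:

```python
import numpy as np
from scipy.linalg import expm

# Toy DAG (assumed): 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3, with
# A[i, j] = 1 for an edge j -> i.
A = np.zeros((4, 4))
for src, dst in [(0, 1), (0, 2), (1, 3), (2, 3)]:
    A[dst, src] = 1.0
H = A - np.diag(A.sum(axis=0))

# S = {0, 1} is predecessor-closed: the only edge into S (0 -> 1)
# comes from within S.  Start the walker inside S and check that the
# probability mass in S never increases.
p0 = np.array([1.0, 0.0, 0.0, 0.0])
ts = np.linspace(0.0, 5.0, 50)
mass_in_S = [expm(t * H).dot(p0)[[0, 1]].sum() for t in ts]
assert all(a >= b - 1e-12 for a, b in zip(mass_in_S, mass_in_S[1:]))
```

Intuitively, predecessor-closure means there is no inflow into S, only outflow (here along 0 -> 2 and 1 -> 3), so the mass in S can only leak away.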
35.

David wrote:

> • (Thm.) Let $G$ be an acyclic network with a positive number of nodes. Then there is at least one node with outdegree $0$ and at least one node with indegree $0$.

Yes, that's better! But 'positive' should be 'finite'.

> If the graph is infinite, then the conclusion does not hold. For instance, the successor relation among the integers is acyclic, but has no roots or leaves.
That was the example I had in mind.

36. edited June 2013

Jacob, We can order the nodes so that all the starting-point nodes (let's call them roots) come first, and the others are ordered so as to respect the edge directions. Then $H$ has the block form

D 0
A B

where $D$ and $B$ are square, $B$ is lower triangular, and $D$ is diagonal. Then $H^k$ looks like

D^k 0
A' B'

and $\langle n, H^k n \rangle = (-\,\mathrm{outdegree}(n))^k$, where $n$ is any root.

37.

In my work in phylogenetic analysis, acyclic networks are used for modelling hybridization. They are quite restricted. There are 4 types of node:

1. a single root node, with indegree 0 and outdegree 2.
2. some leaf nodes, with indegree 1 and outdegree 0.
3. 'speciation' nodes, with indegree 1 and outdegree 2.
4. 'hybridization' nodes, with indegree 2 and outdegree 1.

One technique for these is multi-labeled trees. (Search for 'phylogenetic networks multi-labeled trees'.) The basic idea is to split any node with indegree greater than 1 into multiple nodes, one for each incoming edge. This has to be done recursively, and you have to keep track of the labels. This turns the network into a set of trees, one for each root. Trees are a lot easier to work with than networks, but they might be very big.
38.

None of the following has been carefully checked! Suppose we have a single walk: nodes $1, 2, \dots, m$ and edges $(1,2), (2,3), \dots, (m-1,m)$, and a walker starts at the root $r$.
Then $$e^{tH} r = e^{-t}\left(1, \quad t, \quad t^2/2!, \quad t^3/3!, \quad \dots, \quad t^{m-2}/(m-2)!, \quad \sum_{i=m-1}^\infty t^i/i!\right)^T$$ In words, it is a 'curtailed' Poisson with mean $t$. By curtailed I mean that everything in the Poisson from $m-1$ on is bundled together at the leaf. This can be proved by exponentiating $H$. Or you can imagine sitting just after the $i$th node and waiting for a walker to arrive. You will wait a time that is a sum of $i$ independent exponentials, so the distribution of the waiting time is a gamma with shape parameter $i$. And since the rate at which walkers pass you is proportional to the number at $i$, the distribution at $i$ is the same. Now I think it should be possible to write an acyclic network as a set of walks, and to find an expression for $P_t(n)$ for any node $n$. You need all the walks from a root to a leaf, with the right weights... This might work well for sparse networks, but for dense ones it might be quicker to find $e^{tH}$ numerically.
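The curtailed-Poisson formula can be verified against a direct matrix exponential. A sketch (my own, not carefully checked either!) for a path graph with the walker started at the root, assuming numpy and scipy:

```python
import numpy as np
from scipy.linalg import expm
from math import factorial

# Path graph 0 -> 1 -> ... -> m-1, walker starting at node 0,
# with A[i, j] = 1 for an edge j -> i.
m, t = 6, 1.7
A = np.zeros((m, m))
for i in range(m - 1):
    A[i + 1, i] = 1.0
H = A - np.diag(A.sum(axis=0))

p = expm(t * H)[:, 0]   # distribution of a walker started at node 0

# Interior nodes carry the Poisson weights e^{-t} t^i / i! ...
for i in range(m - 1):
    assert np.isclose(p[i], np.exp(-t) * t**i / factorial(i))
# ... and the leaf bundles together everything from m-1 onwards.
assert np.isclose(p[m - 1], 1 - p[:m - 1].sum())
```

Equivalently, the walker's position at time t is min(N_t, m-1) for a rate-1 Poisson process N_t, which is exactly the curtailment described above.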
https://intelligencemission.com/free-hand-electricity-free-electricity-for-farmers.html
VHS videos also have some cool mini permanent magnet motors that could quite easily be turned into PMA (permanent magnet alternators). I pulled one apart about Free Power month ago. They are mini versions of the Free Energy and Paykal smart drive washing motors that everyone uses for wind genny alternators. I have used the smart drive motors on hydro electric set ups but not wind. You can wire them to produce AC or DC. Really handy conversion. You can acess the info on how to do it on “the back shed” (google it). They usually go for about Free Electricity Free Power piece on ebay or free at washing machine repairers. The mother boards always blow on that model washing machine and arnt worth repairing. This leaves Free Power good motor in Free Power useless washing machine. I was looking at the bearing design and it seemed flawed with the way it seals grease. Ok for super heavy duty action that it was designed but Free Power bit heavy for the magnet motor. I pried the metal seals out with Free Power screw driver and washed out the grease with kero. And solar panels are extremely inefficient. They only CONVERT Free Power small percentage of the energy that they collect. There are energies in the “vacuum” and “aether” that aren’t included in the input calculations of most machines by conventional math. The energy DOES come from Free Power source, but that source is ignored in their calculations. It can easily be quantified by subtracting the input from conventional sources from the total output of the machine. The difference is the ZPE taken in. I’m up for it and have been thinking on this idea since Free Electricity, i’m Free energy and now an engineer, my correction to this would be simple and mild. 
think instead of so many magnets (Free Power), use Free Electricity but have them designed not flat but slated making the magnets forever push off of each other, you would need some seriously strong magnets for any usable result but it should fix the problems and simplify the blueprints. Free Power. S. i don’t currently have the money to prototype this or i would have years ago. Never before has pedophilia and ritualistic child abuse been on the radar of so many people. Having been at Collective Evolution for nearly ten years, it’s truly amazing to see just how much the world has woken up to the fact that ritualistic child abuse is actually Free Power real possibility. The people who have been implicated in this type of activity over the years are powerful, from high ranking military people, all the way down to the several politicians around the world, and more. This is not Free Power grand revelation. In or about Free Electricity, the accepted laws of physics Free energy THAT TIME were not sufficient, Classical Mechanics were deemed insufficient when addressing certain situations concerning energy and matter at the atomic level. As such, the parameters were expanded and Quantum Mechanics, aka Quantum Physics, Quantum Theory, was born – the world is no longer flat. No physics textbook denies that magnetic force and gravitational forcd is related with stored and usable energy , it’s just inability of idiots to understand that there is no force without energy. Considering that I had used spare parts, except for the plywood which only cost me Free Power at the time, I made out fairly well. Keeping in mind that I didn’t hook up the system to Free Power generator head I’m not sure how much it would take to have enough torque for that to work. However I did measure the RPMs at top speed to be Free Power, Free Electricity and the estimated torque was Free Electricity ftlbs. 
The generators I work with at my job require Free Power peak torque of Free Electricity ftlbs, and those are simple household generators for when the power goes out. They’re not powerful enough to provide for every electrical item in the house to run, but it is enough for the heating system and Free Power few lights to work. Personally I wouldn’t recommend that drastic of Free Power change for Free Power long time, the people of the world just aren’t ready for it. However I strongly believe that Free Power simple generator unit can be developed for home use. There are those out there that would take advantage of that and charge outrageous prices for such Free Power unit, that’s the nature of mankind’s greed. To Nittolo and Free Electricity ; You guys are absolutely hilarious. I have never laughed so hard reading Free Power serious set of postings. You should seriously write some of this down and send it to Hollywood. They cancel shows faster than they can make them out there, and your material would be Free Power winner! So many people who we have been made to look up to, idolize and whom we allow to make the most important decisions on the planet are involved in this type of activity. Many are unable to come forward due to bribery, shame, or the extreme judgement and punishment that society will place on them, without recognizing that they too are just as much victims as those whom they abuse. Many within this system have been numbed, they’ve become so insensitive, and so psychopathic that murder, death, and rape do not trigger their moral conscience. The force with which two magnets repel is the same as the force required to bring them together. Ditto, no net gain in force. No rotation. I won’t even bother with the Free Power of thermodynamics. 
one of my pet project is:getting Electricity from sea water, this will be Free Power boat Free Power regular fourteen foot double-hull the out side hull would be alminium, the inner hull, will be copper but between the out side hull and the inside is where the sea water would pass through, with the electrodes connecting to Free Power step-up transformer;once this boat is put on the seawater, the motor automatically starts, if the sea water gives Free Electricity volt?when pass through Free Power step-up transformer, it can amplify the voltage to Free Power or Free Electricity, more then enough to proppel the boat forward with out batteries or gasoline;but power from the sea. Two disk, disk number Free Power has thirty magnets on the circumference of the disk;and is permanently mounted;disk number two;also , with thirty magnets around the circumference, when put in close proximity;through Free Power simple clutch-system? the second disk would spin;connect Free Power dynamo or generator? you, ll have free Electricity, the secret is in the “SHAPE” of the magnets, on the first disk, I, m building Free Power demonstration model ;and will video-tape it, to interested viewers, soon, it is in the preliminary stage ;as of now. the configuration of this motor I invented? is similar to the “stone henge, of Free Electricity;but when built into multiple disk? Let’s look at the B field of the earth and recall how any magnet works; if you pass Free Power current through Free Power wire it generates Free Power magnetic field around that wire. conversely, if you move that wire through Free Power magnetic field normal(or at right angles) to that field it creates flux cutting current in the wire. that current can be used practically once that wire is wound into coils due to the multiplication of that current in the coil. 
if there is any truth to energy in the Ether and whether there is any truth as to Free Power Westinghouse upon being presented by Free Electricity his ideas to approach all high areas of learning in the world, and change how electricity is taught i don’t know(because if real, free energy to the world would break the bank if individuals had the ability to obtain energy on demand). i have not studied this area. i welcome others who have to contribute to the discussion. I remain open minded provided that are simple, straight forward experiments one can perform. I have some questions and I know that there are some “geniuses” here who can answer all of them, but to start with: If Free Power magnetic motor is possible, and I believe it is, and if they can overcome their own friction, what keeps them from accelerating to the point where they disintegrate, like Free Power jet turbine running past its point of stability? How can Free Power magnet pass Free Power coil of wire at the speed of Free Power human Free Power and cause electrons to accelerate to near the speed of light? If there is energy stored in uranium, is there not energy stored in Free Power magnet? Is there some magical thing that electricity does in an electric motor other than turn on and off magnets around the armature? (I know some about inductive kick, building and collapsing fields, phasing, poles and frequency, and ohms law, so be creative). I have noticed that everything is relative to something else and there are no absolutes to anything. Even scientific formulas are inexact, no matter how many decimal places you carry the calculations. ## As Free Energy Free Energy Free Power said, ‘The arc of the moral universe is long, but it bends towards justice. ’ It seems like those of us who have been researching and learning about the fraud and corruption in politics have been waiting so long for the truth to emerge and justice to be served as to have difficulty believing that it may ever arrive. 
Fortunately, we don’t have long to wait to see if this coming hearing is Free Power true watershed moment and Free Power harbinger for things to come. If there are no buyers in LA, then you could take your show on the road. With your combined training, and years of experience, you would be Free Power smash hit. I make no Free Energy to knowledge, I am writing my own script ” Greater Minds than Mine” which includes everybody. My greatest feat in life is find Free Power warm commode, on time….. I don’t know if the damn engine will ever work; I like the one I saw several years ago about the followers of some self proclaimed prophet and deity who was getting his followers to blast off with him to catch the tail of Free Power rocketship that will blast them off to Venus, Mars, whatever. I think you’re being argumentative. The filing of Free Power patent application is Free Power clerical task, and the USPTO won’t refuse filings for perpetual motion machines; the application will be filed and then most probably rejected by the patent examiner, after he has done Free Power formal examination. Model or no model the outcome is the same. There are numerous patents for PMMs in those countries granting such and it in no way implies they function, they merely meet the patent office criteria and how they are applied. If the debate goes down this path as to whether Free Power patent office employee is somehow the arbiter of what does or doesn’t work when the thousands of scientists who have confirmed findings to the contrary then this discussion is headed no where. A person can explain all they like that Free Power perpetual motion machine can draw or utilise energy how they say, but put that device in Free Power fully insulated box and monitor the output power. Those stubborn old fashioned laws of physics suggest the inside of the box will get colder till absolute zero is reached or till the hidden battery/capacitor runs flat. 
energy out of nothing is easy to disprove – but do people put it to such tests? Free Energy Running a device for minutes in front of people who want to believe is taken as some form of proof. It's no wonder people believe in miracles. Models or exhibits that are required by the Office or filed with a petition under Free Power CFR Free Power. If it worked, you would be able to buy a guaranteed working model. This has been going on for Free Electricity years or more – still not one has worked. Ignorance of the laws of physics does not allow you to break those laws. I'm not supposed to write here, but what you people here believe is possible is true. The only problem is that if one wants to create what we call "Magnetic Rotation", one cannot use the fields. There is a small area in any magnet called the "Magnetic Centers", which is around Free Electricity times stronger than the fields. The sequence is before pole center and after face center, and therefore, unlike other motors, one must mesh the stationary centers and work the rotation from the inner of the center to the outer. The fields are the reason a PM drive is very slow, because the fields don't allow kinetic creation by limiting the magnetic center distance. This is why it is possible to create magnetic rotation as you all believe and know, BUT one can never do it with a rotor. ###### We can make the following conclusions about when processes will have a negative \Delta \text G_\text{system}: \begin{aligned} \Delta \text G &= \Delta \text H - \text{T}\Delta \text S \\ &= 6.01\, \dfrac{\text{kJ}}{\text{mol-rxn}} - (293\, \cancel{\text K})\left(0.022\, \dfrac{\text{kJ}}{\text{mol-rxn}\cdot \cancel{\text K}}\right) \\ &= 6.01\, \dfrac{\text{kJ}}{\text{mol-rxn}} - 6.45\, \dfrac{\text{kJ}}{\text{mol-rxn}} \\ &= -0.44\, \dfrac{\text{kJ}}{\text{mol-rxn}} \end{aligned} Being able to calculate \Delta \text G can be enormously useful when we are trying to design experiments in lab! We will often want to know which direction a reaction will proceed at a particular temperature, especially if we are trying to make a particular product. Chances are we would strongly prefer the reaction to proceed in a particular direction (the direction that makes our product!), but it's hard to argue with a positive \Delta \text G! Our bodies are constantly active. Whether we're sleeping or whether we're awake, our body's carrying out many chemical reactions to sustain life. Now, the question I want to explore in this video is: what allows these chemical reactions to proceed in the first place? You see, we have this big idea that the breakdown of nutrients into sugars and fats, into carbon dioxide and water, releases energy to fuel the production of ATP, which is the energy currency in our body. Many textbooks go one step further with this process and other energy-releasing processes – that is to say, chemical reactions that release energy. Textbooks say that these types of reactions have something called a negative delta G value, or a negative Gibbs free energy. In this video, we're going to talk about what the change in Gibbs free energy, or delta G as it's most commonly known, is, and what the sign of this numerical value tells us about the reaction. Now, in order to understand delta G, we need to be talking about a specific chemical reaction, because delta G is a quantity that's defined for a given reaction or a sum of reactions. So for the purposes of simplicity, let's say that we have some hypothetical reaction where A is turning into a product B.
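The sign check in the \Delta G example above can be reproduced in a couple of lines; a minimal sketch using the values recoverable from the worked calculation (\Delta H = 6.01 kJ/mol-rxn, T = 293 K, \Delta S = 0.022 kJ/(mol-rxn·K)):

```python
# Recompute delta G = delta H - T * delta S for the example above.
delta_H = 6.01    # kJ/mol-rxn
T = 293.0         # K
delta_S = 0.022   # kJ/(mol-rxn * K)

delta_G = delta_H - T * delta_S
print(round(delta_G, 2))  # → -0.44
```

The negative value reproduces the conclusion drawn in the text: at this temperature the T·ΔS term outweighs ΔH.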
Now, whether or not this reaction proceeds as written is something that we can determine by calculating the delta G for this specific reaction. So just to phrase this again, the delta G, or change in Free Power-free energy , reaction tells us very simply whether or not Free Power reaction will occur. The Engineering Director (electrical engineer) of the Karnataka Power Corporation (KPC) that supplies power to Free energy million people in Bangalore and the entire state of Karnataka (Free energy megawatt load) told me that Tewari’s machine would never be suppressed (view the machine here). Tewari’s work is known from the highest levels of government on down. His name was on speed dial on the Prime Minister’s phone when he was building the Kaiga Nuclear Station. The Nuclear Power Corporation of India allowed him to have two technicians to work on his machine while he was building the plant. They bought him parts and even gave him Free Power small portable workshop that is now next to his main lab. ” The idea of Free Power magnetic motor has been around for many years. Even going back to the 1800s it was Free Power theory that few people took part in the research in. Those that did were scoffed and made to look like fools. (Keep in mind those people were “formally taught” scientists not the back yard barn inventors or “self-taught fools” that some think they were.) Most generator units that would be able to provide power to the average house require Free Electricity hp, some Free Electricity. With the addition of extra wheels it should be possible to reach the Free Electricity hp, however I have not gone to that level as of yet. Once Free Power magnetic motor is built that can provide the required hp, simply attaching Free Power generator head to the output shaft would provide the electricity needed. Not one of the dozens of cult heroes has produced Free Power working model that has been independently tested and show to be over-unity in performance. 
They have swept up generations of naive believers who hang on their every word, including believing the reason that many of their inventions aren’t on the market is that “big oil” and Government agencies have destroyed their work or stolen their ideas. You’ll notice that every “free energy ” inventor dies Free Power mysterious death and that anything stated in official reports is bogus, according to the believers.
https://forum.dynare.org/t/measure-of-expectations-heterogeneity/13286
Measure of Expectations Heterogeneity

Dear Dynare Community, I am trying to develop a measure of heterogeneity of beliefs for the Branch & McGough (2009) model: A New Keynesian model with heterogeneous expectations. Journal of Economic Dynamics & Control 33 (2009), pp. 1036-1051, William A. Branch, Bruce McGough. Motivated by the Di Bartolomeo et al. (2016) paper, more precisely their “consumption inequality” dimension, which is driven by diverging expectations, my idea is to augment an ad hoc period loss function by some measure of heterogeneity of beliefs. The latter is given by \begin{align} \mathrm{var}_i(E_t^i(x_{t+1}))=\alpha(1-\alpha)[E_t(x_{t+1})-\theta^2x_{t-1}]^2\end{align} which is the cross-sectional variance of one-step-ahead forecasts, where x is a generic variable. My question: can I implement the measure above in a period loss like \begin{align} L_t &= \pi_t^2 + \omega_y y_t^2 + \omega_{E\pi} \mathrm{var}_i(E_t^i(\pi_{t+1})) \\&=\pi_t^2 + \omega_y y_t^2 + \omega_{E\pi}\left[ \alpha(1-\alpha)E_t(\pi_{t+1})^2 - 2\alpha(1-\alpha) \theta^2\pi_{t-1}E_t(\pi_{t+1})+ \alpha(1-\alpha)\theta^4\pi_{t-1}^2\right] \end{align} My problem is the square of the rational expectation. One may argue in favor of an auxiliary variable like E_ppi = ppi(+1); (as an additional state variable). But I am worried about the intertemporal loss J_0=E_0\sum_{k=0}^\infty\beta^k L_{t+k} since terms like E_0(E_{t+k}(\pi_{t+k+1})^2) show up. Is the loss still appropriate? Best, Max

You can easily deal with nonlinear transformations of expected values using auxiliary variables. See Expected value of a power

Dear Prof. Pfeifer, if I understand correctly, one can implement the loss L_t = \omega_{E\pi}\alpha(1-\alpha)[E_t(\pi_{t+1})-\theta^2\pi_{t-1}]^2 + \cdots as follows:

model(linear);
...
ppi_lag = ppi(-1);
end;

planner_objective w_Eppi*alp*(1-alp)*(ppi_lead - thet^2*ppi_lag)^2 + .... ;

Yes, indeed. But as you are probably the first person trying this, please report any unusual behavior you encounter.
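As an aside on the variance formula above: it is the cross-sectional variance of a two-point forecast distribution (a share α of agents holds one forecast, a share 1−α the other), and the identity is easy to sanity-check numerically. The numbers below are made-up illustration values, not calibrated parameters:

```python
# Check that the weighted variance of a two-type forecast distribution
# equals alpha*(1-alpha)*(f_R - f_A)**2, as in the expression above.
# All values here are illustrative, not calibrated.
alpha = 0.6   # share of one forecaster type
f_R = 1.2     # stands in for E_t(x_{t+1})
f_A = 0.5     # stands in for theta^2 * x_{t-1}

mean = alpha * f_R + (1 - alpha) * f_A
var_direct = alpha * (f_R - mean) ** 2 + (1 - alpha) * (f_A - mean) ** 2
var_formula = alpha * (1 - alpha) * (f_R - f_A) ** 2
print(abs(var_direct - var_formula) < 1e-12)  # → True
```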
Can someone tell me how to compute a “targeting rule” under the loss above by hand? For the sake of simplicity let L_t = \omega_yy_t + \omega_{E\pi} [E_t(\pi_{t+1})-\theta^2\pi_{t-1}]^2 and assume optimal monetary policy under commitment. The Lagrangian obtains as L= E_t\sum_{k=0}^\infty \beta^k \{ L_{t+k} + \psi_{t+k}[\pi_{t+k}-\beta\alpha\pi_{t+k+1}-\beta(1-\alpha)\theta^2\pi_{t+k-1} -\kappa y_{t+k}]\} + t.i.p. How would the FOC w.r.t \pi_{t+k} look like? What do you mean with “targeting rule”? And where is your problem with the FOC? You would plug in, write out the sum and then take the derivative. I am worried about the expectation inside the period loss. If I simply do the job (following your three steps) I obtain the FOC w.r.t \pi_{t+k} \begin{align}\frac{\partial L}{\partial \pi_{t+k}} =E_t\Big( \beta^k\psi_{t+k} + 2\beta^{k-1}\omega_{E\pi}[E_{t+k-1}(\pi_{t+k})-\theta^2\pi_{t+k-2}] - \beta^{k-1}\beta\alpha\psi_{t+k-1} \\ + 2\beta^{k+1}\omega_{E\pi}[E_{t+k+1}(\pi_{t+k+2})-\theta^2\pi_{t+k}] (-\theta^2) -\beta^{k+1}\beta (1-\alpha)\theta^2 \psi_{t+k+1} \Big) = 0 \end{align} is this correct? For k=0 I obtain further: \begin{align}E_t\Big( \psi_{t} + 2\beta^{-1}\omega_{E\pi}[E_{t-1}(\pi_{t})-\theta^2\pi_{t-2}] - \alpha\psi_{t-1} \\ + 2\beta\omega_{E\pi}[E_{t+1}(\pi_{t+2})-\theta^2\pi_{t}] (-\theta^2) -\beta^{2} (1-\alpha)\theta^2 \psi_{t+1} \Big) = 0 \\ \\ \psi_{t} + 2\beta^{-1}\omega_{E\pi}[E_{t-1}(\pi_{t})-\theta^2\pi_{t-2}] - \alpha\psi_{t-1} \\ - 2\theta^2\beta\omega_{E\pi}[E_{t}(\pi_{t+2})-\theta^2\pi_{t}] -\beta^{2} (1-\alpha)\theta^2 E_t\psi_{t+1} =0 \end{align} since E_tE_{t-1}=E_{t-1}. So the targeting rule will involve E_{t-1}. Targeting rule = reduced form of the FOC / optimal relationship between the target variables (eliminate the lagrange multipliers) (à la Svensson)
https://www.physicsforums.com/threads/what-are-some-easy-ways-to-show-quantum-mechanics.897990/
# B What are some easy ways to show quantum mechanics Tags: 1. Dec 21, 2016 ### malcome123 hi I'm in grade 9 and I have to make a 20 slide slideshow and a 6 page essay or a module I need ideas on what to do 2. Dec 21, 2016 ### Staff: Mentor Last edited: Dec 21, 2016 3. Dec 21, 2016 ### ZapperZ Staff Emeritus Take your LED Christmas lights, plug it in, and you're done. Zz. 4. Dec 21, 2016 ### ChrisisC What country do you live in? You are learning of Quantum Mechanics in the 9th grade? I live in the United States and i learned about extremely easy earth science... im interested in what you are doing learning such advanced material at such a young age. 5. Dec 21, 2016 ### vanhees71 The explanation of the photo effect in the above cited Wikipedia article is utterly wrong! 6. Dec 21, 2016 ### mikeyork One of the most profound triumphs of QM is that it enables an explanation of chemistry and life as a purely physical consequence of atomic structure. 7. Dec 21, 2016 ### Staff: Mentor Okay I'll remove the link. perhaps you can suggest a better reference for the OP. 8. Dec 21, 2016 ### Staff: Mentor Here in Australia we learn very basic QM in grade 9 and 10 science - and at that time we started school a year earlier than the US ie first grade at 5 years of age. We learnt more in grade 11 and 12 - at about the level of AP physics in the US. That's why we have 3 year Bachelors in Australia and the UK - our 11 and 12 is roughly equivalent to AP level or IB SL level which is first year university in the US. In Britten they have A levels in their equivalent of 11 and 12 which is above AP or IB. In Australia ours is roughly equivalent to British AS levels. Either way we start at about US second year level in our universities which is why we have 3 year degrees. Another example is calculus - we start immediately on Multivariable calculus and advanced single variable calculus because we do calculus in HS. Thanks Bill 9. 
Dec 21, 2016 ### Staff: Mentor It will also cover quite a bit of what you will learn in 11 and 12, but that's OK. Of those I personally would recommend Feynman, who, if I remember correctly, was delivering it to an audience of HS students in NZ - but don't hold me to it. https://en.wikipedia.org/wiki/QED:_The_Strange_Theory_of_Light_and_Matter But there are a few to choose from. Thanks Bill 10. Dec 21, 2016 ### ChrisisC I would have to say I am quite jealous! I would do anything to learn even the slightest bit of quantum mechanics in high school. You Australians are doing it right, let me tell you that. 11. Dec 21, 2016 ### Staff: Mentor It's specified in a general way in the Australian science curriculum. Exactly what is taught is left rather open. We have specialist science schools like the Queensland Academy of Science where it would be taught at about the level of the books I gave. They do an IB program in 11 and 12 where it would be taught in greater depth - they even have access to university subjects. So it varies a lot depending on the school - but the best would teach it at about the level of the books I mentioned to 14 or 15 year olds. There are schools in the US like the Basis schools that teach at an even higher level - but they are the exception rather than the rule. It requires dedication on the part of the student that only some have. Thanks Bill 12. Dec 22, 2016 ### vanhees71 Hm, that's a very difficult task. I've not yet seen a correct reference about the photo effect understandable at the 9th-grade high school level. I think at this level you can only state the phenomenological facts and then just mention that in theoretical physics it is described by the quantum theory of the bound electron; no photons needed, i.e., it's at the level of undergraduate non-relativistic quantum theory; it belongs in the lecture QM 1 when you treat time-dependent perturbation theory; see my Insights article https://www.physicsforums.com/insights/sins-physics-didactics/ 13.
Dec 22, 2016 ### ZapperZ Staff Emeritus I'm the LAST one to defend a Wikipedia article, but nothing here makes what was linked to "utterly wrong". Your insight description decided to pursue this using "non-photon" picture. Fine, we know that can arrive at the naive photoelectric effect. But this also does NOT make the standard picture that we use to describe the photoelectric effect to be "utterly wrong". This was not proven to be so in your article. You offered an alternative description, not a falsifying description. The standard photon model is used in practically all photoemission texts. See "Photoemission Spectroscopy" by Hufner (Springer), which is a well-known text for those of us who WORK in this field. The Spicer's 3-step model makes use of this picture, and it has been extremely successful in describing the microscopic process of photoemission. Heck, such a picture has been extensively used in practically ALL photon-electron emission model (see http://server2.phys.uniroma1.it/gr/...ON_SPECTROSCOPY_Mariani-Stefani_revised10.pdf). It is not "utterly wrong". It is simply a matter of tastes. Zz. 14. Dec 22, 2016 ### Staff: Mentor Also there's the lightandmatter.com online books by Benjamin Crowell which are used to teach at community colleges and highschools in particular this one called Simple Nature has a good writeup on Quantum Mechanics: http://lightandmatter.com/html_books/0sn/ch13/ch13.html [Broken] Last edited by a moderator: May 8, 2017 15. Dec 22, 2016 ### vanhees71 Any source that claims a photon is like a little "billiard ball" is utterly wrong. A photon is a massless quantum of spin 1 and thus has not even a position observable! The standard model of the photon is QED and nothing else! As my Insights article shows, the "Einstein formula" is indeed not wrong, but it can be derived from the modern theory, and one should not impose wrong pictures on students. I had a hard time to unlearn these wrong pictures during my studies of physics later. 16. 
Dec 22, 2016 ### ZapperZ Staff Emeritus Did the Wikipedia article explicitly state that photons are "billiard balls"? (I wouldn't put it past a Wikipedia article to say that.) Did the sources I cited explicitly state that? Zz. 17. Dec 22, 2016 ### Staff: Mentor We are getting a little off topic here and far beyond the 9th grade level. So instead, can we find a credible source of information on the photo-electric effect suitable for a 9th grader? I checked the Wikipedia article briefly and could find no mention of billiard balls. So if you guys could vet the article for accuracy and post here, that would be great. Jedi 18. Dec 22, 2016 ### vanhees71 I've not looked at your sources in much detail, and of course I'm pretty sure neither states it in this way, but the Wikipedia article is the usual wrong introductory treatment of the photoeffect, which you find even in many textbooks on introductory quantum mechanics at the university level, and that's what I try to fight against. It's bad, because particularly when starting a new subject (and even such an attractive one as QM!) it leads to a strong foundation of these wrong pictures in a student's mind (at least it was the case for me); then you study physics for some semesters, and then you learn that you have to unlearn these wrong pictures (among them the photoelectric-effect treatment discussed here and the Bohr-Sommerfeld model with its electron trajectories). But @jedishrfu is absolutely right in saying that we are getting more and more off-topic. 19. Dec 22, 2016 ### ZapperZ Staff Emeritus I hate to say this, but it was a mistake to remove the Wikipedia link. I find nothing "utterly wrong" with it (and I'm someone who works in the field of photoemission and photocathodes). If you find the way it is written now "utterly wrong", then many of our standard textbooks are also "utterly wrong", and you're telling people NOT to pay attention to them, contrary to PF policy. This isn't an issue with the photoelectric effect.
It is an issue of how to represent light. I've seen similar stuff being done carelessly in QFT and Feynman diagram representations involving light interaction. Zz. 20. Dec 22, 2016 ### ZapperZ Staff Emeritus If you care so much about giving the "wrong pictures in a student's mind", then you need to go back and re-read the IMPRESSION you left in your Post #5: You gave no indication on where it went "utterly wrong". Thus, you are dismissing the ENTIRE article, which in fact, contains many accurate and standard description of the photoelectric effect, including the Einstein model! Now sit back, and figure out the 'wrong picture' that you've given off to students with that kind of a post. And all because of what? That you thought the article was using "billiard balls" model, which it didn't? Zz. 21. Dec 22, 2016 ### vanhees71 Well, let's look at the quoted Wikipedia article. The first two sentences read The first sentence is correct, of course, but the second one states that electromagnetic radiation "is made of a series of particles called photons", and already harm is done. It's utterly wrong, and I stand to this claim. Electromagnetic radiation are, from the point of view of quantum physics, coherent states, and this is a state, where the "particle properties" (to be understood in a very loose sense either!) are not very apparent: Not even the photon number is determined, i.e., not even in a very loose sense the particle picture is appropriate. is also wrong since the phenomenology of the effect discussed on the level of the article (and in Einstein's famous article of 1905) has turned out NOT to have anything to do with the quantum nature of light. It's true that it has to do with the quantum nature of electrons (see the standard derivation in my Insights article, which can be found in many serious textbooks on QM of course). It's fully explained as the transition from a bound state of the electron hit by classical electromagnetic waves. 
The first paragraph of the section "Mechanism" is fine. The rest is bad again for the reasons discussed above. The History section sounds accurate. 22. Dec 22, 2016 ### vanhees71 I've written, what's "utterly wrong" with this picture (instead of billiard balls I should have written particles), and of course Einstein's photoelectric article is wrong from the point of view of modern physics. That's nothing against Einstein, who opened the door to the modern picture with his work on quantum theory, and he participated also strongly in formulating the modern theory, pushing its development and physical understanding forward in a great way. Still, his "old quantum theory" is now substituted by the more consistent and comprehensive modern quantum mechanics (and QED when it comes to photons, where they are really needed!). Of course, you can't teach QED in highschool, and at the same time you should tell the students some modern physics. You can do this only in a qualitative way anyway (with some very simple examples concerning the Schrödinger equation like a particle in a box and the harmonic oscillator, which we learnt in highschool, but only in the last year (in Germany 13th grade)), and it's (in my opinion) as well possible on this level to explain the photoelectric effect without using the classical-particle picture for photons. I'd rather start with Planck's black-body radiation, because here indeed you need the quantum nature of the electromagnetic field, and it establishes a much more appropriate picture about photons. Again, you cannot give the full mathematical analysis, but you can say that Planck's work showed that if you have an electromagnetic wave of frequency $f$ it exchanges energy with matter (Planck used a harmonic oscillator as a model) in portions $E_f=h f$, which we call photons. 
At the same time the electromagnetic wave has a wave vector $\vec{k}$ with $k = |\vec{k}|=\omega/c=2 \pi f/c$, and the photon accordingly carries a momentum $p=\hbar k=h f/c$. What happens in the photoelectric effect is indeed this: you shine classical (!) light on the material, and the bound electron absorbs the energy of a photon out of the classical (!) electromagnetic wave, and the energy balance indeed reads $$E_{\text{kin}}=h f-W,$$ where $W$ is the binding energy of the electron. Although this is of course still not the full story, it's in my opinion better than a naive classical-particle picture of a photon. At least nothing in it is qualitatively wrong. Last edited: Dec 22, 2016 23. Dec 22, 2016 ### malcome123 24. Dec 22, 2016 ### Lautaro Hi. Why not try an EXPERIMENT! Photoelectric effect. Enjoy. Best. Lautaro Vergara 25. Dec 22, 2016
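The energy balance $E_{\text{kin}} = hf - W$ quoted above is easy to evaluate numerically. A small sketch; the 400 nm wavelength and 2.1 eV work function are illustrative values chosen for the example, not taken from the thread:

```python
# Photoelectric energy balance: E_kin = h*f - W.
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # J per electron volt

wavelength = 400e-9   # m (violet light, illustrative)
W = 2.1               # work function in eV (roughly cesium, illustrative)

f = c / wavelength
photon_energy = h * f / eV   # photon energy in eV
E_kin = photon_energy - W    # kinetic energy of the photoelectron, eV
print(round(photon_energy, 2), round(E_kin, 2))  # → 3.1 1.0
```

Light below the threshold frequency (here, any photon energy under 2.1 eV, i.e. wavelengths longer than about 590 nm) would eject no electrons at all, which is the phenomenological core of the effect.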
https://stats.stackexchange.com/questions/512580/does-gridsearchcv-actually-fit-the-best-model-to-the-training-data-or-do-you-ha
# Does GridSearchCV actually fit the best model to the training data, or do you have to refit after hyperparameter optimisation?

I have this code, with the aim of developing a neural network with cross validation and hyperparameter optimisation for a regression problem (continuous features, continuous label).

```python
from keras.layers import Dense
from keras.models import Sequential
from keras.optimizers import Adam
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.model_selection import GridSearchCV, KFold

def create_model(activation='relu', learning_rate=0.01):
    model = Sequential()
    model.add(Dense(32, activation=activation, input_dim=101))
    model.add(Dense(units=64, activation=activation))
    model.add(Dense(units=64, activation=activation))
    model.add(Dense(units=1))
    # Pass the tuned learning rate to the optimizer; with a bare 'adam'
    # string the learning_rate parameter would be silently ignored.
    model.compile(optimizer=Adam(lr=learning_rate), loss='mean_squared_error')
    return model

model = KerasRegressor(build_fn=create_model, verbose=1)

params = {'activation': ['relu', 'tanh'],
          'batch_size': [16, 32, 64],
          'epochs': [50, 100],
          'learning_rate': [0.01, 0.001, 0.0001]}

random_search = GridSearchCV(model, param_grid=params, cv=KFold(5))
random_search_results = random_search.fit(X_train, y_train)
print("Best Score: ", random_search_results.best_score_,
      "and Best Params: ", random_search_results.best_params_)
```

Can someone confirm that if my next line is:

```python
y_pred = random_search_results.predict(X_test)
```

that this predicts on X_test with the most optimal model? I thought it was, but then I saw this post, where they say 'The next task is to refit the model with the best parameters', after the code above is run. Did I not already fit the best-parameter model to the training data using this method? Can someone explain to me what extra code is needed to fit the optimal model according to GridSearchCV to the training data?

## 1 Answer

Hello and welcome to the community :-) GridSearch searches for the best estimator. Period. That's the fundamental difference between RandomizedSearchCV and GridSearchCV ... and why GridSearch takes so awkwardly long.
It may be that you will get slightly different params when using different random states, but all in all a pipeline and the hyperparameter tuning are just for finding your optimal combination of parameters. After that you take these combinations and fit, either 1) on the whole data for deployment, or 2) on the training data for deployment. The latter makes sense if the data is massive and the neural network is so complex that training takes a considerable amount of time (e.g., imagine you get new data points for a complex NN, say 1,000,000 — you won't refit your model every week; that's too exhaustive) and you don't want to pull the other 10-20% into the set as well; that is, if you have e.g. 1,000,000 data points, the remaining few percent won't budge the model much at the end, so the training data may be enough — but it depends on the case/model. So all the answerer in that post does is take the optimal combination and fit on the whole data, which is a sort of "get my final results for deployment" step. If you predict with this line, you will get the best prediction from the grid search, +/- a slight deviation depending on the random_state in the previous pipeline.
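To complement the answer: in scikit-learn, GridSearchCV with the default refit=True refits the best parameter combination on the full data passed to fit, and predict then uses that refitted best_estimator_, so no manual refit is needed just to predict. A small self-contained sketch (toy data and a Ridge model stand in for the Keras regressor, purely for speed):

```python
# Demonstrate that GridSearchCV(refit=True, the default) already refits
# the winning parameter combination on the full data given to fit().
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, KFold

X, y = make_regression(n_samples=200, n_features=5, random_state=0)
search = GridSearchCV(Ridge(), {'alpha': [0.1, 1.0, 10.0]}, cv=KFold(5))
search.fit(X, y)

print(search.best_params_)          # the winning combination
print(search.best_estimator_)       # already refitted on all of (X, y)
print(search.predict(X[:3]).shape)  # predictions come from best_estimator_
```

Refitting on training data plus held-out test data before deployment, as the answer describes, is a separate (optional) step after evaluation.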
https://www.cfm.brown.edu/people/dobrush/am33/Mathematica/heaviside.html
Preface

This is a tutorial made solely for the purpose of education, and it was designed for students taking Applied Math 0330. It is primarily for students who have very little experience or have never used Mathematica before and would like to learn more of the basics of this computer algebra system. As a friendly reminder, don't forget to clear variables in use and/or the kernel. Finally, the commands in this tutorial are all written in bold black font, while Mathematica output is in normal font. This means that you can copy and paste all commands into Mathematica, change the parameters, and run them. You, as the user, are free to use the scripts for your needs to learn the Mathematica program, and you have the right to distribute this tutorial and refer to this tutorial as long as it is accredited appropriately.

Heaviside function

The Heaviside function was defined previously: $H(t) = \begin{cases} 1, & \quad t > 0 , \\ 1/2, & \quad t=0, \\ 0, & \quad t < 0. \end{cases}$ The objective of this section is to show how the Heaviside function can be used to determine the Laplace transforms of piecewise continuous functions. The main tool to achieve this is the shifted Heaviside function H(t-a), where a is an arbitrary positive number. So first we plot this function:

a = Plot[HeavisideTheta[t - 3], {t, 0, 7}, PlotStyle -> Thick]
b = Graphics[{Blue, Arrowheads[0.07], Arrow[{{3.7, 1}, {3, 1}}]}]
c = Graphics[{Blue, Arrowheads[0.07], Arrow[{{2, 0}, {3, 0}}]}]
d = Graphics[{PointSize[Large], Point[{3, 1/2}]}]
a1 = Graphics[{Blue, Thick, Line[{{-1.99, 0}, {2.9, 0}}]}]
Show[a, a1, b, c, d, PlotRange -> {{-2.1, 7}, {-0.2, 1.2}}]

We present a property of the Heaviside function that is not immediately obvious: $H\left( t^2 - a^2 \right) = 1- H(t+a) + H(t-a) = \begin{cases} 1, & \quad t < -a , \\ 0, & \quad -a < t < a, \\ 1, & \quad t > a > 0.
\end{cases}$ The most important property of shifted Heaviside functions is that their difference, W(a,b) = H(t-a) - H(t-b) with 0 < a < b, is a window over the interval (a,b); this means that their difference is 1 over this interval and zero outside the closed interval [a,b]:

a = Plot[HeavisideTheta[t - 2] - HeavisideTheta[t - 5], {t, 0, 8}, PlotStyle -> Thick]
b = Graphics[{Blue, Arrowheads[0.07], Arrow[{{3.7, 1}, {2, 1}}], Arrow[{{3.7, 1}, {5, 1}}]}]
c = Graphics[{Blue, Arrowheads[0.07], Arrow[{{-1, 0}, {2, 0}}]}]
d = Graphics[{PointSize[Large], Point[{2, 1/2}], Point[{5, 1/2}]}]
a1 = Graphics[{Blue, Thick, Line[{{-1.99, 0}, {1.9, 0}}], Line[{{5.01, 0}, {8, 0}}]}]
c2 = Graphics[{Blue, Arrowheads[0.07], Arrow[{{7, 0}, {5, 0}}]}]
Show[a, a1, b, c, c2, d, PlotRange -> {{-2.1, 8}, {-0.2, 1.2}}]

The Laplace transform of the shifted Heaviside function is $\left( {\cal L} \,H(t-a) \right) (\lambda ) = \int_a^{\infty}\,e^{-\lambda t} \,{\text d} t = \frac{e^{-a\lambda}}{\lambda} \qquad\Longrightarrow \qquad \left( {\cal L} \,W(a,b) \right) (\lambda ) = \frac{e^{-a\lambda} - e^{-b\lambda}}{\lambda} .$

Example: Consider the piecewise continuous function $f(t) = \begin{cases} 1, & \quad 0 < t < 1 , \\ t, & \quad 1 < t < 2 , \\ t^2 , & \quad 2 < t < 3 , \\ 0, & \quad 3 < t . \end{cases}$ Of course, we can find its Laplace transform directly: $f^L (\lambda ) = \int_0^{\infty} f(t)\,e^{-\lambda \,t} \,{\text d} t = \int_0^{1} \,e^{-\lambda \,t} \,{\text d} t + \int_1^{2} t \,e^{-\lambda \,t} \,{\text d} t + \int_2^{3} t^2 \,e^{-\lambda \,t} \,{\text d} t .$ However, we can also find its Laplace transform using the shift rule. First, we represent the given function f(t) as the sum $f(t) = H(t) - H(t-1) + t \left[ H(t-1) - H(t-2) \right] + t^2 \left[ H(t-2) - H(t-3) \right] .$ We consider each term on the right-hand side separately.
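For readers without Mathematica at hand, the defining integral can also be checked numerically. The following sketch (in Python, with a crude midpoint rule; the function and helper names are ours) evaluates the transform at λ = 1 and compares it with the closed-form answer that the shift rule yields below:

```python
import math

def f(t):
    # the piecewise function from the example
    if 0 < t < 1:
        return 1.0
    if 1 < t < 2:
        return t
    if 2 < t < 3:
        return t * t
    return 0.0

def laplace_numeric(lam, n=30000, T=3.0):
    # midpoint rule on [0, 3]; f vanishes beyond t = 3
    h = T / n
    s = 0.0
    for k in range(n):
        t = (k + 0.5) * h
        s += f(t) * math.exp(-lam * t)
    return s * h

def laplace_closed(lam):
    # the shift-rule result derived in the text
    e1, e2, e3 = math.exp(-lam), math.exp(-2 * lam), math.exp(-3 * lam)
    return (2 / lam**3 * (e2 - e3)
            + 1 / lam**2 * (e1 + 3 * e2 - 6 * e3)
            + 1 / lam * (1 + 2 * e2 - 9 * e3))

print(laplace_numeric(1.0), laplace_closed(1.0))  # both ≈ 1.46885
```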
\begin{align*} t \, H(t-1) &= (t-1+1) \, H(t-1) = \left( t-1 \right) H(t-1) + H(t-1) \qquad\Longrightarrow \qquad {\cal L} \left[ t \, H(t-1) \right] = \frac{1}{\lambda^2} \, e^{-\lambda} + \frac{1}{\lambda}\, e^{-\lambda} , \\ t \, H(t-2) &= \left( t-2 +2 \right) H(t-2) = \left( t-2 \right) H(t-2) + 2\, H(t-2) \qquad\Longrightarrow \qquad {\cal L} \left[ t \, H(t-2) \right] = \frac{1}{\lambda^2} \, e^{-2\lambda} + \frac{2}{\lambda}\, e^{-2\lambda} , \\ t^2 \, H(t-2) &= \left( t-2 +2 \right)^2 H(t-2) = \left( t-2 \right)^2 H(t-2) + 4 \left( t-2 \right) H(t-2) + 4\, H(t-2) \qquad\Longrightarrow \qquad {\cal L} \left[ t^2 \, H(t-2) \right] = \frac{2}{\lambda^3} \, e^{-2\lambda} + \frac{4}{\lambda^2}\, e^{-2\lambda} + \frac{4}{\lambda}\, e^{-2\lambda} , \\ t^2 \, H(t-3) &= \left( t-3 +3 \right)^2 H(t-3) = \left( t-3 \right)^2 H(t-3) + 6 \left( t-3 \right) H(t-3) + 9\, H(t-3) \qquad\Longrightarrow \qquad {\cal L} \left[ t^2 \, H(t-3) \right] = \frac{2}{\lambda^3} \, e^{-3\lambda} + \frac{6}{\lambda^2}\, e^{-3\lambda} + \frac{9}{\lambda}\, e^{-3\lambda} . \end{align*} Collecting all terms, we obtain $f^L (\lambda ) = \frac{2}{\lambda^3} \left( e^{-2\lambda} - e^{-3\lambda} \right) + \frac{1}{\lambda^2} \left( e^{-\lambda} + 3\, e^{-2\lambda} -6\,e^{-3\lambda} \right) + \frac{1}{\lambda} \left( 1 + 2\,e^{-2\lambda} - 9\,e^{-3\lambda} \right) .$

Dirac delta function

Paul Adrien Maurice Dirac (1902--1984) was an English theoretical physicist who made fundamental contributions to the early development of both quantum mechanics and quantum electrodynamics. Paul Dirac was born in Bristol, England, to a Swiss father and an English mother. Dirac admitted that he had an unhappy childhood, but did not mention it for 50 years; he learned to speak French, German, and Russian. He received his Ph.D. degree in 1926. Dirac's work concerned mathematical and theoretical aspects of quantum mechanics.
He began work on the new quantum mechanics as soon as it was introduced by Heisenberg in 1925 -- independently producing a mathematical equivalent, which consisted essentially of a noncommutative algebra for calculating atomic properties -- and wrote a series of papers on the subject. Among other discoveries, he formulated the Dirac equation, which describes the behavior of fermions and predicted the existence of antimatter. Dirac shared the 1933 Nobel Prize in Physics with Erwin Schrödinger "for the discovery of new productive forms of atomic theory." Dirac had traveled extensively and studied at various foreign universities, including Copenhagen, Göttingen, Leyden, Wisconsin, Michigan, and Princeton. In 1937 he married Margit Wigner, of Budapest. Dirac was regarded by his friends and colleagues as unusual in character for his precise and taciturn nature. In a 1926 letter to Paul Ehrenfest, Albert Einstein wrote of Dirac, "This balancing on the dizzying path between genius and madness is awful." Dirac openly criticized the political purpose of religion. He said: "I cannot understand why we idle discussing religion. If we are honest---and scientists have to be---we must admit that religion is a jumble of false assertions, with no basis in reality." He spent the last decade of his life at Florida State University.

The Dirac delta function was introduced as a "convenient notation" by Paul Dirac in his influential 1930 book, "The Principles of Quantum Mechanics," which was based on his most celebrated result, the relativistic equation for the electron, published in 1928. He called it the "delta function" since he used it as a continuous analogue of the discrete Kronecker delta $$\delta_{n,k} .$$ Dirac predicted the existence of the positron, which was first observed in 1932.
Historically, Paul Dirac used the δ-function for modeling the density of an idealized point mass or point charge: a function that is equal to zero everywhere except at zero, and whose integral over the entire real line is equal to one. Dirac's cautionary remarks (and the efficient simplicity of his idea) notwithstanding, some mathematically well-bred people did from the outset take strong exception to the δ-function. In the vanguard of this group was the American-Hungarian mathematician John von Neumann (born János Neumann into a Jewish family, 1903--1957), who dismissed the δ-function as a "fiction." As there is no function that has these properties, the computations that were done by the theoretical physicists appeared to mathematicians as nonsense. It took a while for mathematicians to give a strict definition of this phenomenon. In 1938, the Russian mathematician Sergey Sobolev (1908--1989) showed that the Dirac function is a derivative (in a generalized sense) of the Heaviside function. To define derivatives of discontinuous functions, Sobolev introduced a new definition of differentiation and the corresponding set of generalized functions that were later called distributions. The French mathematician Laurent-Moïse Schwartz (1915--2002) further extended Sobolev's theory by pioneering the theory of distributions, and he was awarded the Fields Medal in 1950 for his work. Because of his sympathy for Trotskyism, Schwartz encountered serious problems trying to enter the United States to receive the medal; however, he was ultimately successful. But it was news without major consequence, for Schwartz's work remained inaccessible to all but the most determined of mathematical physicists. In 1955, the British applied mathematician George Frederick James Temple (1901--1992) published what he called a "less cumbersome vulgarization" of Schwartz's theory, based on Jan Geniusz Mikusiński's (1913--1987) sequential approach.
However, the definition of the δ-function can be traced back to the early 1820s and the work of Joseph Fourier on what we now know as Fourier integrals. In 1828, the δ-function intruded for a second time into a physical theory, when George Green noticed that the solution to the nonhomogeneous Poisson equation can be expressed through the solution of a special equation containing the delta function. The history of the theory of distributions can be found in "The Prehistory of the Theory of Distributions" by Jesper Lützen (University of Copenhagen, Denmark), Springer-Verlag, 1982.

Outside of quantum mechanics, the delta function is also known in engineering and signal processing as the unit impulse symbol. Mechanical systems and electrical circuits are often acted upon by an external force of large magnitude that acts only for a very short period of time. For example, all strike phenomena (caused by either a piano hammer or a tennis racket) involve impulse functions. It is also useful to consider discontinuous idealizations, such as the mass density of a point mass, which has a finite amount of mass stuffed inside a single point of space. Therefore, the density must be infinite at that point and zero everywhere else. The delta function can be defined as the derivative of the Heaviside function, which (when formally evaluated) is zero for all $$t \ne 0 ,$$ and is undefined at the origin.

Now the time comes to explain what a generalized function or distribution means. In our everyday life, we all use functions, which we learn at school are maps or transformations of one set (usually called the input) into another set (the output, which is usually a set of numbers). For example, when we take our annual physical examinations, the medical staff measure our blood pressure, height, and weight; all of these are functions that can be described as nondestructive testing. However, not all functions are as nice as those previously mentioned.
For instance, a biopsy is a much less pleasant option, and it is hard to call it a function unless we label it a destructive testing function. Before the procedure, we consider a patient as a probe function, but after the biopsy, when some tissue has been taken from the patient's body, we have a completely different person. Therefore, while we get the biopsy laboratory results (usually represented in numeric digits), the biopsy represents destructive testing.

Now let us turn to another example. Suppose you visit a store and want to purchase a soft drink, i.e., a bottle of soda. You observe that the liquid levels in the bottles are different, and you wonder whether the bottles were filled with different volumes of soda or the dimensions of the bottles differ from one another. So you decide to measure the volume of soda in a particular bottle. Of course, one can find the outside dimensions of a bottle, but to measure the volume of soda inside, there is no other option but to open the bottle. In other words, you have to destroy (modify) the product by opening the bottle. Measuring the soda by opening the bottle would represent destructive testing.

Now consider an electron. Nobody has ever seen one, and we do not know exactly what it looks like. However, we can make some measurements regarding the electron. For example, we can determine its position by observing the point where the electron strikes a screen. By doing this we destroy the electron as a particle and convert its energy into visible light to determine its position in space. Such an operation would be another example of a destructive testing function, because we actually transfer the electron into another form of matter, and we lose it as a particle. Therefore, in the real world we have and use nondestructive testing functions that measure items without their termination or modification (as we can measure velocity or voltage).
On the other hand, we can measure some items only by completely destroying them or transforming them into something else, as with destructive testing functions. Mathematically, such a measurement can be done by integration (hopefully you remember the definition from calculus): $\int_{-\infty}^{\infty} f(x)\,g(x)\,{\text d}x ,$ where f(x) is a nice (probe) function and g(x) can represent a (bad or unpleasant) operation on our probe function. As a set of probe functions, it is convenient to choose smooth functions on the line with compact support (which means that they are zero outside some finite interval). As for the electron, we don't know what the multiplier g(x) looks like; all we know is the value of the integral, which represents a measurement. In this case, we say that g(x) acts on the probe function, and we call this operation a functional. Physicists denote it as $\langle g \,\vert\, f \rangle = \int_{-\infty}^{\infty} f(x)\,g(x)\,{\text d}x \qquad\mbox{or simply} \qquad \langle g, f \rangle$ (for simplicity, we consider only real-valued functions). Mathematicians also follow these notations; however, the integral on the right-hand side is mostly a show of respect to people who studied functions at school, and it makes no literal sense because we don't know the exact expression of g(x)---all we know or measure is the result of the integration. Such objects as g(x) are now called distributions, or generalized functions, but they are really functionals: g acts on any probe function by mapping it into a number (real or complex). So, strictly speaking, instead of the integral $$\int_{-\infty}^{\infty} f(x)\,g(x)\,{\text d}x$$ we have to write $g\,:\, \mbox{set of probe functions } \mapsto \, \mbox{numbers}; \qquad g\,: \, f \, \mapsto \, \mathbb{R} \quad \mbox{or}\quad \mathbb{C} .$ Therefore, the notation g(x) makes no sense because the value of g at any point x is undefined. So x is a dummy variable, or an invitation to consider functions depending on x.
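This "functionals" viewpoint is easy to mimic in code: a distribution is nothing but a map from probe functions to numbers. A toy sketch in Python (the helper names `regular`, `delta`, and `spike` are ours, purely illustrative); it also previews the fact, made precise later in this section, that a narrow spike acts on probes almost like δ:

```python
import math

def integrate(g, a=-20.0, b=20.0, n=40000):
    # simple midpoint rule; adequate for smooth, rapidly decaying integrands
    h = (b - a) / n
    return sum(g(a + (k + 0.5) * h) for k in range(n)) * h

def regular(g):
    # an ordinary function g acts on a probe f through the integral of f*g
    return lambda f: integrate(lambda x: f(x) * g(x))

# the delta distribution is *defined* by its action: evaluate the probe at 0
delta = lambda f: f(0.0)

probe = lambda x: math.exp(-x * x)   # a smooth, rapidly decaying probe

print(delta(probe))                  # 1.0

# a very narrow Gaussian spike acts on probes almost like delta:
eps = 1e-4
spike = regular(lambda x: math.exp(-x * x / (2 * eps)) / math.sqrt(2 * math.pi * eps))
print(spike(probe))                  # close to probe(0) = 1
```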
It is more appropriate to write $$g(f)$$ because it is a number that is assigned to a probe function f by the distribution g. Nevertheless, it is a custom to say that a generalized function g(x) is zero for x from some interval [a,b] if, for every probe function f that is zero outside the given interval, $\langle g , f \rangle = \int_a^b f(x)\, g(x)\, {\text d} x =0.$ However, it is completely inappropriate to say that a generalized function has a particular value at some point (recall that the integral does not care about a particular value of an integrable function). Following Sobolev, we define the derivative g' of a distribution g by the equation $\langle g' , f \rangle = -\int_a^b f'(x)\, g(x)\, {\text d} x ,$ which is valid for every smooth probe function f that is identically zero outside some finite interval. Now we define the derivative of the Heaviside function using this new definition (because the classical calculus definition of the derivative is useless here). Let f(x) be a continuous function that vanishes at infinity. We use integration by parts to evaluate the integral \begin{align*} \int_{-\infty}^{\infty} f(x)\,\delta (x)\, {\text d} x &= \left[ f(x) \, H(x) \right]_{x=-\infty}^{x=\infty} - \int_{-\infty}^{\infty} f' (x) \, H(x) \, {\text d} x \\ &= - \int_0^{\infty} f' (x) \, {\text d} x = \left[ - f(x) \right]_{x=0}^{x=\infty} \\ &= f(0) . \end{align*} The definition of the delta function can be extended to piecewise continuous functions: $\int_a^b \delta (x-x_0 ) \, f (x)\,{\text d} x = \begin{cases} \frac{1}{2} \left[ f(x_0 +0) + f(x_0 -0) \right] , & \ \mbox{ if } x_0 \in (a,b) , \\ \frac{1}{2}\, f(x_0 +0) , & \ \mbox{ if } x_0 =a, \\ \frac{1}{2}\, f(x_0 -0) , & \ \mbox{ if } x_0 = b, \\ 0 , & \ \mbox{ if } x_0 \notin [a,b] . \end{cases}$ To understand the behavior of the Dirac delta function, we introduce the rectangular pulse function $\delta_h (x,a) = \begin{cases} h, & \ \mbox{ if } \ a- \frac{1}{2h} < x < a+ \frac{1}{2h} , \\ 0, & \ \mbox{ otherwise.
} \end{cases}$ We plot the pulse function with the following Mathematica commands:

f[x_] = Piecewise[{{1, 2 < x < 3}}]
Labeled[Plot[f[x], {x, 0, 7}, Exclusions -> {False}, PlotStyle -> Thick, Ticks -> {{{2, "a-1/2h"}, {3, "a+1/2h"}}, {Automatic, {1, "h"}}}], "The pulse function"]

As can be seen from the figure, the amplitude of the pulse becomes very large and its width becomes very small as $$h \to \infty .$$ Moreover, for any value of h, the integral of the rectangular pulse is $\int_{\alpha}^{\beta} \delta_h (x,a)\, {\text d} x = 1$ if the interval of definition $$\left( a- \frac{1}{2h} , a+ \frac{1}{2h} \right)$$ lies in the interval (α , β), and zero if the range of integration does not contain the pulse. Now we can define the delta function located at the point x=a as the limit (in the generalized sense): $\delta (x-a) = \lim_{h\to \infty} \delta_h (x,a) .$ Instead of a large parameter h, one can choose a small one: $\delta (x) = \lim_{\epsilon \to 0} \delta (x, \epsilon ) , \qquad\mbox{where} \quad \delta (x, \epsilon ) = \begin{cases} 0 , & \ \mbox{ for } \ |x| > \epsilon /2 , \\ \epsilon^{-1} , & \ \mbox{ for } \ |x| < \epsilon /2 . \end{cases}$ This means that for every probe function f (smooth and zero outside some finite interval), we have $\langle \delta \,\vert\, f \rangle = \int_{-\infty}^{\infty} \delta (x)\,f(x) \,{\text d}x = \lim_{\epsilon \to 0} \langle \delta (x, \epsilon ) \,\vert\, f \rangle = \lim_{\epsilon \to 0} \int_{-\infty}^{\infty} \delta (x, \epsilon )\,f(x) \,{\text d}x .$ Let f(x) be a continuous function and let F'(x) = f(x).
We compute the integral \begin{align*} \int_{-\infty}^{\infty} f(x)\,\delta (x)\, {\text d} x &= \lim_{\epsilon \to 0} \,\frac{1}{\epsilon} \, \int_{-\epsilon /2}^{\epsilon /2} f(x)\, {\text d} x \\ &= \lim_{\epsilon \to 0} \,\frac{1}{\epsilon} \left[ F(x) \right]_{x= -\epsilon /2}^{x=\epsilon /2} \\ &= \lim_{\epsilon \to 0} \,\frac{F(\epsilon /2) - F(-\epsilon /2)}{\epsilon} \\ &= F' (0) = f(0) . \end{align*} The delta function has many representations as limits (of course, in the generalized sense) of regular functions; one may want to use another approximation: $\delta (x, \epsilon ) = \frac{1}{\sqrt{2\pi\epsilon}} \, e^{-x^2 /(2\epsilon )} \qquad \mbox{or} \qquad \delta (x, \epsilon ) = \frac{1}{\pi x} \,\sin \left( \frac{x}{\epsilon} \right) .$ For all such choices of $$\delta (x, \epsilon ),$$ we have \begin{align*} \int_{-\infty}^{\infty} \delta (x, \epsilon ) \,{\text d}x &= 1, \\ \lim_{\epsilon \to 0} \,\int_{-\infty}^{\infty} \delta (x-a, \epsilon ) \,f(x) \,{\text d}x &= f(a) , \end{align*} for any smooth integrable function f(x). The latter limit can be written more precisely as $\lim_{n \to \infty} \, \sqrt{\frac{n}{\pi}} \, \int_{-\infty}^{\infty} e^{-n(x-a)^2} \, f(x) \, {\text d} x = \frac{1}{2} \,f(a+0) + \frac{1}{2} \, f(a-0) .$ Although the delta function is a distribution (a functional on a set of probe functions) and the notation $$\delta (x)$$ makes no sense from a mathematician's point of view, it is a custom to manipulate the delta function $$\delta (x)$$ as if it were a regular function, keeping in mind that it should be applied to a probe function. Dirac remarks that "There are a number of elementary equations which one can write down about δ-functions. These equations are essentially rules of manipulation for algebraic work involving δ-functions.
The meaning of any of these equations is that its two sides give equivalent results [when used] as factors in an integrand." Examples of such equations are \begin{align*} \delta (-x) &= \delta (x) , \\ x^n \delta (x) &= 0 \qquad\mbox{for any positive integer } n, \\ \delta (ax) &= a^{-1} \delta (x) , \qquad a > 0, \\ \delta \left( x^2 - a^2 \right) &= \frac{1}{2a} \left[ \delta (x-a) + \delta (x+a) \right] , \qquad a > 0, \\ \int \delta (a-x)\, {\text d} x \, \delta (x-b) &= \delta (a-b) , \\ f(x)\,\delta (x-a) &= f(a)\, \delta (x-a) , \\ \delta \left( g(x) \right) &= \sum_n \frac{\delta (x - x_n )}{| g' (x_n )|} , \end{align*} where the summation extends over all simple roots of the equation $$g(x_n ) =0 .$$ Note that the last formula is valid provided that $$g' (x_n ) \ne 0 .$$ Of course, the Heaviside function and the δ-function stand in a close relationship supplied by the calculus: $H (t-a) = \int_{-\infty}^t \delta (x-a)\,{\text d} x \qquad \Longleftrightarrow \qquad \frac{{\text d}}{{\text d} t}\,H(t-a) = \delta (t-a) .$ Theorem: The convolution of a delta function with a continuous function recovers the function: $f(t) * \delta (t) = \int_{-\infty}^{\infty} f(\tau )\, \delta (t-\tau ) \, {\text d}\tau = \delta (t) * f(t) = \int_{-\infty}^{\infty} f(t-\tau )\, \delta (\tau ) \, {\text d}\tau = f(t) .$ Theorem: The Laplace transform of the Dirac delta function: ${\cal L} \left[ \delta (t-a)\right] = \int_0^{\infty} e^{-\lambda\,t} \delta (t-a) \, {\text d}t = e^{-\lambda\,a} , \qquad a \ge 0.
\qquad ■$ Example: Find the Laplace transform of the convolution of the function $$f(t) = t^2 -1$$ with the shifted delta function $$\delta (t-3) .$$ According to the definition of convolution, $f(t) * \delta (t-3) = \int_{0}^{\infty} f(\tau )\, \delta (t-3 -\tau ) \, {\text d}\tau = \int_{-\infty}^{\infty} (\tau^2 -1 )\, \delta (t-3 -\tau ) \, {\text d}\tau = f(t-3) = (t-3)^2 -1 .$ Actually, we have to multiply f(t-3) by a shifted Heaviside function, so the correct answer would be $$f(t-3)\, H(t-3)$$ because the original function was $$\left[ t^2 -1 \right] H(t) .$$ Now we apply the Laplace transform: \begin{align*} {\cal L} \left[ f(t) * \delta (t-3) \right] &= {\cal L} \left[ f(t) \right] \cdot {\cal L} \left[ \delta (t-3) \right] = \left( \frac{2}{\lambda^3} -\frac{1}{\lambda} \right) e^{-3\lambda} \\ &= {\cal L} \left[ f(t-3)\, H(t-3) \right] = {\cal L} \left[ f(t) \right] e^{-3\lambda} = \frac{2 - \lambda^2}{\lambda^3} \, e^{-3\lambda} . \end{align*} We check the answer with Mathematica:

LaplaceTransform[ Integrate[(tau^2 - 1)*DiracDelta[t - 3 - tau], {tau, 0, t}], t, s]
-((E^(-3 s) (-2 + s^2))/s^3)

Example: A spring-mass system with mass 1, damping 2, and spring constant 10 is subject to a hammer blow at time t = 0. The blow imparts a total impulse of 1 to the system, which is initially at rest. Find the response of the system. The situation is modeled by $y'' +2\, y' +10\,y = \delta (t), \qquad y(0) =0, \quad y' (0) =0 .$ Applying the Laplace transform to both sides and using the initial conditions yields $\lambda^2 y^L +2\,\lambda \, y^L +10\,y^L = 1 ,$ where $$y^L = {\cal L} \left[ y(t) \right] = \int_0^{\infty} e^{-\lambda\, t} y(t) \,{\text d}t$$ is the Laplace transform of the unknown function.
Solving for $$y^L$$, we obtain $y^L (\lambda ) = \frac{1}{\lambda^2 + 2\lambda + 10} .$ We can use the formula from the table to determine the system response: $y (t ) = {\cal L}^{-1} \left[ \frac{1}{\lambda^2 + 2\lambda + 10} \right] = \frac{1}{3}\, e^{-t}\, \sin (3t) \, H(t) ,$ where H(t) is the Heaviside function.
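This impulse response can be cross-checked numerically, using the standard equivalence for impulse forcing: the unit impulse at t = 0 is equivalent to solving the homogeneous equation y'' + 2y' + 10y = 0 with y(0) = 0 and y'(0) = 1, since the blow instantaneously raises the velocity by 1. A sketch in Python with a hand-rolled RK4 integrator (to stay dependency-free; the function names are ours):

```python
import math

# y'' + 2 y' + 10 y = 0 with y(0) = 0, y'(0) = 1:
# the unit impulse delta(t) is absorbed into a jump of the initial velocity.
def rhs(y, v):
    return v, -2.0 * v - 10.0 * y

def rk4(t_end, h=1e-4):
    # classical fourth-order Runge-Kutta on the first-order system (y, y')
    y, v = 0.0, 1.0
    for _ in range(round(t_end / h)):
        k1y, k1v = rhs(y, v)
        k2y, k2v = rhs(y + 0.5 * h * k1y, v + 0.5 * h * k1v)
        k3y, k3v = rhs(y + 0.5 * h * k2y, v + 0.5 * h * k2v)
        k4y, k4v = rhs(y + h * k3y, v + h * k3v)
        y += h * (k1y + 2 * k2y + 2 * k3y + k4y) / 6.0
        v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
    return y

t = 1.0
print(rk4(t))                                  # numerical solution at t = 1
print(math.exp(-t) * math.sin(3.0 * t) / 3.0)  # (1/3) e^{-t} sin 3t ≈ 0.017305
```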
https://socratic.org/questions/if-sample-1-contains-2-98-moles-of-hydrogen-at-35-1-degrees-c-and-2-3-atm-in-a-3
# If Sample 1 contains 2.98 moles of hydrogen at 35.1 degrees C and 2.3 atm in a 32.8 L container. How many moles of hydrogen are in a 45.3 liter container under the same conditions? Thank you for helping.

May 14, 2017

$\text{4.12 moles}$

#### Explanation:

You know that because the temperature and the pressure of the gas remain constant, you can use the fact that the volume of the container is directly proportional to the number of moles of gas, as given by Avogadro's Law. Mathematically, this can be written as

$V_1/n_1 = V_2/n_2$

Here

• $V_1$ and $n_1$ represent the volume and number of moles of gas at an initial state
• $V_2$ and $n_2$ represent the volume and number of moles of gas at a final state

This means that when temperature and pressure are kept constant, increasing the number of moles of gas present in the container will cause its volume to increase. Similarly, decreasing the number of moles of gas present in the container will cause its volume to decrease.

In your case, since the volume of the container increased from $\text{32.8 L}$ to $\text{45.3 L}$, you can say that the number of moles of gas present in the container must have increased.

Rearrange the equation to solve for $n_2$:

$V_1/n_1 = V_2/n_2 \implies n_2 = \frac{V_2}{V_1} \cdot n_1$

Plug in your values to find

$n_2 = \frac{45.3\ \text{L}}{32.8\ \text{L}} \cdot 2.98\ \text{moles} = 4.12\ \text{moles}$

The answer is rounded to three sig figs.
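The same arithmetic in a few lines of Python:

```python
V1, n1 = 32.8, 2.98   # initial volume (L) and moles
V2 = 45.3             # final volume (L), same temperature and pressure

# Avogadro's law at constant T and P:  V1/n1 = V2/n2  =>  n2 = (V2/V1) * n1
n2 = V2 / V1 * n1
print(round(n2, 2))   # 4.12
```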
https://physics.stackexchange.com/questions/287027/help-with-the-solution-to-the-linearized-wave-equation-known-as-retarded-integra
Help with the solution to the linearized wave equation known as the retarded integral

I need a little help here. I'm reading about gravitational waves, and particularly about gravitational waves described by the linearized non-homogeneous Einstein equations $$\left( -\frac{\partial^2}{\partial t^2} + \nabla^2 \right)\bar h_{\mu\nu}=-16\pi T_{\mu\nu}$$ Three classic books say the solution to this equation is known as the "retarded solution", given by

From Schutz $$h_{\mu\nu}(t,x^{i})=4\int\frac{T_{\mu\nu}\left( t-R, y^{i} \right)}{R}d^{3}y$$

From Hartle $$h^{\alpha\beta}(t,\vec x)=4\int d^{3}x' \frac{\left[T^{\alpha\beta}\left( t', \vec x' \right)\right]_{ret}}{\left| \vec x - \vec x' \right|}$$

From Schneider $$h^{\alpha\beta}(t,\vec x)=-\frac{4G}{c^4}\int \frac{ T^{\alpha\beta}\left( t-\frac{\left|y \right |}{c},\vec x + \vec y \right)}{\left | \vec y \right |} d^{3}y$$

At this point I guess the last three equations represent the same solution to the wave equation, so my questions are:

1. Where does this retarded solution come from? None of these books says where it comes from.
2. What is the physical situation it describes? As far as I know, $T$ is the stress-energy tensor related to the mass that deforms the spacetime, so I guess that mass is the source of the gravitational waves. But if that is the case, the gravitational waves will eventually be so far from the source that the mass (and hence $T$) is zero. Or is $T$ related to another mass?

• Any book on EM should explain how to solve the wave equation with sources. See Jackson, for example. – Javier Oct 17 '16 at 18:49